https://arxiv.org/abs/1701.00362
Facial structures of lattice path matroid polytopes
A lattice path matroid is a transversal matroid corresponding to a pair of lattice paths in the plane. A matroid base polytope is the polytope whose vertices are the incidence vectors of the bases of the given matroid. In this paper, we study the facial structures of matroid base polytopes corresponding to lattice path matroids.
\section{Introduction} \label{sec-introduction} For a matroid $M$ on a ground set $[n]:= \{1, 2, \dots, n \}$ with a set of bases $\mathcal{B}(M)$, a \emph{matroid base polytope} $\mathcal{P}(M) := \mathcal{P}(\mathcal{B}(M))$ of $M$ is the polytope in ${\mathbb R}^n$ whose vertices are the incidence vectors of the bases of $M$. The polytope $\mathcal{P}(M)$ is a face of a \emph{matroid independence polytope}, first studied by Edmonds~\cite{Edmonds}, whose vertices are the incidence vectors of all the independent sets in $M$. Matroid base polytopes have attracted considerable research activity in recent years because of their applications in algebraic geometry, combinatorial optimization, Coxeter group theory, and tropical geometry. In general, matroid base polytopes are not well understood. A \emph{lattice path matroid} is a transversal matroid corresponding to a pair of lattice paths with common endpoints. Many interesting and striking properties of these matroids have been studied. The combinatorial and structural properties of lattice path matroids are given by Bonin et al. in \cite{BoninMierNoy} and \cite{BoninMier}. The $h$-vectors, Bergman complexes, and Tutte polynomials of lattice path matroids have been studied by several authors~\cite{DelucchiDlugosch, MortonTurner, Schweig2010}. In this paper, we study the facial structure of a \emph{lattice path matroid polytope}, which is a matroid base polytope corresponding to a lattice path matroid. This class of matroid base polytopes belongs to important classes of polytopes such as positroid polytopes and generalized permutohedra. Positroid polytopes are studied by Ardila et al.~\cite{ArdilaRinconWilliams}, and generalized permutohedra are studied by Postnikov and other authors~\cite{Oh, PostnikovReinerWilliams, Postnikov}. Bidkhori~\cite{Bidkhori} provides a description of the facets of a lattice path matroid polytope, and we extend it to all the faces. This paper is organized as follows. In Section~\ref{sec-lattice-path-matroids}, definitions and properties of lattice path matroids are given. In Section~\ref{sec-matroid-base-polytopes}, we define lattice path matroid polytopes and review known results about them. In Section~\ref{sec-border strips}, lattice path matroid polytopes for the case of border strips are studied. In particular, we show that all the faces of a lattice path matroid polytope in this case can be described by certain subsets of deletions, contractions, and direct sums, and we express them in terms of a lattice path obtained from the border strip. Section~\ref{sec-skew-shape} explains the facial structure of a lattice path matroid polytope in the general case in terms of certain tilings of skew shapes inside the given region. \section{Lattice path matroids} \label{sec-lattice-path-matroids} In this section, we provide basic definitions and properties of lattice path matroids. A \emph{matroid $M$} is a pair $(E(M), \mathcal{B}(M))$ consisting of a finite set $E(M)$ and a collection $\mathcal{B}(M)$ of subsets of $E(M)$ that satisfy the following conditions: \begin{enumerate} \item $\mathcal{B}(M) \ne \emptyset$, and \item for each pair of distinct sets $B, B'$ in $\mathcal{B}(M)$ and for each element $x \in B - B'$, there is an element $y \in B' - B$ such that $(B - \{ x \}) \cup \{ y \} \in \mathcal{B}(M)$. \end{enumerate} The set $E(M)$ is called the \emph{ground set} of $M$ and the sets in $\mathcal{B}(M)$ are called the \emph{bases} of $M$. Subsets of bases are called the \emph{independent sets} of $M$.
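The exchange condition (2) is easy to test by machine for a matroid given by an explicit list of its bases. The following Python sketch is an illustration only (the function name is ad hoc, and the uniform matroid $U_{2,4}$ is our choice of test case); it checks both conditions directly.

\begin{verbatim}
from itertools import combinations

def is_basis_system(bases):
    # Check the two basis axioms for a family of sets given
    # as an iterable of iterables over the ground set.
    bases = {frozenset(B) for B in bases}
    if not bases:                       # condition (1)
        return False
    for B in bases:
        for Bp in bases:
            for x in B - Bp:
                # condition (2): some y in Bp - B repairs the exchange
                if not any(((B - {x}) | {y}) in bases for y in Bp - B):
                    return False
    return True

# The bases of the uniform matroid U_{2,4}: all 2-subsets of [4]
print(is_basis_system(combinations(range(1, 5), 2)))   # True
\end{verbatim}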
It is easy to see that all the bases of $M$ have the same cardinality, called the \emph{rank} of $M$. A set system $\mathcal{A} = \{ A_j : j \in J \}$ is a multiset of subsets of a finite set~$S$. A \emph{transversal} of $\mathcal{A}$ is a set $\{ x_j : x_j \in A_j \text{ for all } j \in J \}$ of $\abs{J}$ distinct elements. A \emph{partial transversal} of $\mathcal{A}$ is a transversal of a set system of the form $\{A_k : k \in K\}$ with $K \subseteq J$. Edmonds and Fulkerson \cite{EdmondsFulkerson} prove the following fundamental result: \begin{theorem} The transversals of a set system $\mathcal{A} = \{ A_j : j \in J \}$ form the bases of a matroid on $S$. \end{theorem} A \emph{transversal matroid} is a matroid whose bases are the transversals of some set system $\mathcal{A} = \{ A_j : j \in J \}$. The set system $\mathcal{A}$ is called a \emph{presentation} of the transversal matroid. The independent sets of a transversal matroid are the partial transversals of $\mathcal{A}$. We consider lattice paths in the plane using steps $E = (1,0)$ and $N = (0,1)$; the letters are abbreviations of East and North. We will often treat lattice paths as words in the alphabet $\{E, N\}$, and we will use the notation~$\alpha^n$ to denote the concatenation of $n$ letters~$\alpha$. The \emph{length} of a lattice path $P = p_1 p_2 \cdots p_n$ is $n$, the number of steps in $P$. \begin{definition} Let $P = p_1 p_2 \cdots p_{m+r}$ and $Q = q_1 q_2 \cdots q_{m+r}$ be two lattice paths from $(0,0)$ to $(m, r)$ with $P$ never going above $Q$. Let $\{ p_{u_1}, p_{u_2}, \dots, p_{u_r} \}$ be the set of North steps of $P$, with $u_1 < u_2 < \cdots < u_r$; similarly, let $\{ q_{l_1}, q_{l_2}, \dots, q_{l_r} \}$ be the set of North steps of $Q$, with $l_1 < l_2 < \cdots < l_r$. Let $N_i$ be the interval $[l_i, u_i]$ of integers. Let $M(P,Q)$ be the transversal matroid that has ground set $[m+r]$ and presentation $\{ N_i: i \in [r] \}$. The pair $(P,Q)$ is a \emph{lattice path presentation} of $M(P,Q)$. A \emph{lattice path matroid} is a matroid~$M$ that is isomorphic to $M(P,Q)$ for some such pair of lattice paths $P$ and $Q$. We will sometimes call a lattice path presentation of $M$ simply a presentation of $M$ when there is no danger of confusion and when doing so avoids awkward repetition. \end{definition} The fundamental connection between the transversal matroid $M(P,Q)$ and the lattice paths that stay in the region bounded by $P$ and $Q$ is the following theorem of Bonin et al.~\cite{BoninMierNoy}. \begin{theorem} \label{thm-bases-of-lattice-path-matroids} A subset $B$ of $[m+r]$ with $\abs{B} = r$ is a basis of $M(P,Q)$ if and only if the associated lattice path $P(B)$ stays in the region bounded by $P$ and $Q$, where $P(B)$ is the path whose North steps occur at the positions in $B$ and whose East steps occur elsewhere. \end{theorem} \section{Lattice path matroid polytopes} \label{sec-matroid-base-polytopes} Let $\mathcal{B}$ be a collection of $r$-element subsets of $[n]$. For each subset $B = \{ b_1, \dots, b_r \}$ of $[n]$, let $$ e_B = e_{b_1} + \cdots + e_{b_r} \in {\mathbb R}^n , $$ where $e_i$ is the $i$th standard basis vector of ${\mathbb R}^n$. The collection $\mathcal{B}$ is represented by the convex hull of these points $$ \mathcal{P}(\mathcal{B}) = \mathrm{conv} \{ e_B : B \in \mathcal{B} \}. $$ This is a convex polytope of dimension $\le n-1$ and is a subset of the $(n-1)$-simplex $$ \Delta_n = \{ (x_1, \dots, x_n) \in {\mathbb R}^n : x_1 \ge 0, \dots, x_n \ge 0, x_1 + \cdots + x_n = r \}.
$$ Gel$'$fand, Goresky, MacPherson, and Serganova~\cite[Thm. 4.1]{GelfandGoreskyMacPhersonSerganova} show the following characterization of matroid base polytopes. \begin{theorem} \label{thm-matroid-polytopes} The subset $\mathcal{B}$ is the collection of bases of a matroid if and only if every edge of the polytope $\mathcal{P}(\mathcal{B})$ is parallel to a difference $e_\alpha - e_\beta$ of two distinct standard basis vectors. \end{theorem} By definition, the vertices of $\mathcal{P}(M)$ represent the bases of $M$. For two bases $B$ and $B'$ in $\mathcal{B}(M)$, $e_B$ and $e_{B'}$ are connected by an edge if and only if $e_B - e_{B'} = e_\alpha - e_\beta$ for some $\alpha, \beta \in [n]$. Since the latter condition is equivalent to $B - B' = \{ \alpha \}$ and $B' - B = \{ \beta \}$, the edges of $\mathcal{P}(M)$ represent the basis exchange axiom. The basis exchange axiom gives the following equivalence relation on the ground set $[n]$ of the matroid $M$: $\alpha$ and $\beta$ are \emph{equivalent} if there exist bases $B$ and $B'$ in $\mathcal{B}(M)$ with $\alpha \in B$ and $B' = (B - \{ \alpha \} ) \cup \{ \beta \}$. The equivalence classes are called the \emph{connected components} of $M$. The matroid $M$ is called \emph{connected} if it has only one connected component. Feichtner and Sturmfels~\cite[Prop. 2.4]{FeichtnerSturmfels} express the dimension of the matroid base polytope $\mathcal{P}(M)$ in terms of the number of connected components of $M$. \begin{proposition} \label{dim-of-matroid-polytope} Let $M$ be a matroid on $[n]$. The dimension of the matroid base polytope $\mathcal{P}(M)$ equals $n - c(M)$, where $c(M)$ is the number of connected components of $M$. \end{proposition} Bonin et al.~\cite{BoninMierNoy} give the following result on the connectedness of a lattice path matroid. \begin{proposition} The lattice path matroid $M(P,Q)$ is connected if and only if the bounding lattice paths $P$ and $Q$ meet only at $(0,0)$ and $(m,r)$. \end{proposition} Recall that, for a skew shape bounded by lattice paths $P$ and $Q$ (denoted by $[P,Q]$) and the associated rank~$r$ lattice path matroid $M[P,Q]$ with a set of bases $\mathcal{B}(M[P,Q])$, the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ is the convex hull $\mathcal{P}(\mathcal{B}(M[P,Q])) = \mathrm{conv} \{ e_B = e_{b_1} + \cdots + e_{b_r} : B = \{ b_1, \dots, b_r \} \in \mathcal{B}(M[P,Q]) \}$, where $e_i$ is the $i$th standard basis vector of ${\mathbb R}^{m+r}$. The following result about the dimension of the lattice path matroid polytope immediately follows from these results. \begin{corollary} \label{dimension-of-matroid-polytope} For lattice paths $P$ and $Q$ from $(0,0)$ to $(m, r)$ with $P$ never going above~$Q$, the dimension of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ is $m+r-k+1$, where $k$ is the number of intersection points of $P$ and $Q$. \end{corollary} \section{Lattice path matroid polytopes for border strips} \label{sec-border strips} For a matroid $(E(M), \mathcal{B}(M))$ and a subset $S$ of $E(M)$, let $r(S)$ denote the \emph{rank} of $S$, the size of the largest independent subset of $S$. Then, the \emph{restriction} $M|_S$ is the matroid on $S$ having the bases $\mathcal{B} (M|_S) = \{ B \cap S : B \in \mathcal{B}(M) \text{ and } |B \cap S| = r(S)\}$, and the \emph{contraction} $M/S$ is the matroid on $E(M) - S$ having the bases $\mathcal{B} (M/S) = \{ B - S : B \in \mathcal{B}(M) \text{ and } |B \cap S| = r(S) \}$.
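Both constructions can be carried out mechanically from a list of bases. The following Python sketch (ours; the helper names are ad hoc, and bases are assumed to be given as frozensets) implements the two displayed formulas.

\begin{verbatim}
def rank_of(bases, S):
    # r(S): the largest size of an intersection of S with a basis
    S = frozenset(S)
    return max(len(B & S) for B in bases)

def restriction(bases, S):
    # bases of M|_S: intersections B & S of maximal size r(S)
    S = frozenset(S)
    r = rank_of(bases, S)
    return {B & S for B in bases if len(B & S) == r}

def contraction(bases, S):
    # bases of M/S: the sets B - S over the same bases B
    S = frozenset(S)
    r = rank_of(bases, S)
    return {B - S for B in bases if len(B & S) == r}
\end{verbatim}

For instance, applied to the nine bases of the matroid $M[P,Q]$ of Figure~\ref{i-R-C-D-LPM} below with $S = \{1,3,4,5\}$, \texttt{restriction} returns the four bases $134$, $135$, $145$, $345$ listed there for the $2$-deletion.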
For two matroids $(E(M_1), \mathcal{B}(M_1))$ and $(E(M_2), \mathcal{B}(M_2))$, their \emph{direct sum} $M_1 \oplus M_2$ is the matroid on $E(M_1) \charfusion[\mathbin]{\cup}{\cdot} E(M_2)$ having the bases $\mathcal{B}(M_1 \oplus M_2) = \{B_1 \cup B_2 : B_1 \in \mathcal{B}(M_1), B_2 \in \mathcal{B}(M_2)\}$. \begin{theorem}[Bonin and de Mier~\cite{BoninMier}, 2006] \label{class-of-lattice-path-matroid} The class of lattice path matroids is closed under restrictions, contractions, and direct sums. \end{theorem} The connection between these matroid constructions and skew shapes bounded by lattice paths $P$ and $Q$ is as follows. First, we label each step of the skew shape $[P,Q]$ by $i+j+1$ if the step begins at $(i, j)$. Then a restriction and a contraction of the lattice path matroid $M[P,Q]$ correspond to deleting the corresponding regions of the skew shape $[P,Q]$. If the starting point of one skew shape is attached to the ending point of the other, and all the steps are relabeled, we obtain a direct sum of the two matroids. Figure~\ref{R-C-D-LPM} shows how a restriction, a contraction, and a direct sum act on $[P,Q]$. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.45] \draw (0,0)--(1,0)--(1,1)--(0,1)--cycle; \draw (1,0)--(2,0)--(2,1)--(1,1)--cycle; \draw (1,1)--(2,1)--(2,2)--(1,2)--cycle; \draw (0.3,0.5) node{\scriptsize{$1$}}; \draw (1.3,0.5) node{\scriptsize{$2$}}; \draw (2.3,0.5) node{\scriptsize{$3$}}; \draw (1.3,1.5) node{\scriptsize{$3$}}; \draw (2.3,1.5) node{\scriptsize{$4$}}; \draw (1,-1.3) node{\footnotesize{[P,Q]}}; \filldraw[orange, fill=orange] (5,1)--(5,1.3)--(6,1.3)--(6,1)--(7,0)--(7,-0.3)--(6,-0.3)--(6,0)--cycle; \draw (5,0)--(6,0)--(6,1)--(5,1)--cycle; \draw (6,0)--(7,0)--(7,1)--(6,1)--cycle; \draw (6,1)--(7,1)--(7,2)--(6,2)--cycle; \draw[ultra thick,orange] (9,1)--(10,0); \draw (9,0)--(10,0)--(10,1)--(9,1)--cycle; \draw (9,1)--(10,1)--(10,2)--(9,2)--cycle; \draw (9.3,0.5) node{\scriptsize{$1$}}; \draw (9.3,1.5) node{\scriptsize{$3$}}; \draw (10.3,0.5) node{\scriptsize{$3$}}; \draw (10.3,1.5) node{\scriptsize{$4$}}; \draw (8,1) node{$\rightarrow$}; \draw (8,-1.3) node{\footnotesize{Restriction}}; \filldraw[orange, fill=orange] (14,1)--(13.7,1)--(13.7,2)--(14,2)--(15,1)--(15.3,1)--(15.3,0)--(15,0)--cycle; \draw (13,0)--(14,0)--(14,1)--(13,1)--cycle; \draw (14,0)--(15,0)--(15,1)--(14,1)--cycle; \draw (14,1)--(15,1)--(15,2)--(14,2)--cycle; \draw[ultra thick,orange] (18,1)--(19,0); \draw (17,0)--(18,0)--(18,1)--(17,1)--cycle; \draw (18,0)--(19,0)--(19,1)--(18,1)--cycle; \draw (17.3,0.5) node{\scriptsize{$1$}}; \draw (18.3,0.5) node{\scriptsize{$2$}}; \draw (19.3,0.5) node{\scriptsize{$4$}}; \draw (16,1) node{$\rightarrow$}; \draw (16,-1.3) node{\footnotesize{Contraction}}; \draw[orange, fill=orange] (24,2) circle (1ex); \draw (22,0)--(23,0)--(23,1)--(22,1)--cycle; \draw (23,0)--(24,0)--(24,1)--(23,1)--cycle; \draw (23,1)--(24,1)--(24,2)--(23,2)--cycle; \draw[orange, fill=orange] (24,2.5) circle (1ex); \draw (24,2.5)--(25,2.5)--(25,3.5)--(24,3.5)--cycle; \draw (24.3,3) node{\scriptsize{$1$}}; \draw (25.3,3) node{\scriptsize{$2$}}; \draw[orange, fill=orange] (28,2) circle (1ex); \draw (26,0)--(27,0)--(27,1)--(26,1)--cycle; \draw (27,0)--(28,0)--(28,1)--(27,1)--cycle; \draw (27,1)--(28,1)--(28,2)--(27,2)--cycle; \draw (26.3,0.5) node{\scriptsize{$1$}}; \draw (27.3,0.5) node{\scriptsize{$2$}}; \draw (28.3,0.5) node{\scriptsize{$3$}}; \draw (27.3,1.5) node{\scriptsize{$3$}}; \draw (28.3,1.5) node{\scriptsize{$4$}}; \draw (28,2)--(29,2)--(29,3)--(28,3)--cycle; \draw
(28.3,2.5) node{\scriptsize{$5$}}; \draw (29.3,2.5) node{\scriptsize{$6$}}; \draw (25,1) node{$\rightarrow$}; \draw (25,-1.3) node{\footnotesize{Direct Sum}}; \end{tikzpicture} \end{center} \vspace{-3mm} \caption{Restriction, contraction, and direct sum of matroids on $[P,Q]$.} \label{R-C-D-LPM} \end{figure} Our main question concerns the behavior of lattice path matroid polytopes under restrictions, contractions, and direct sums of matroids: can these matroid constructions be used to determine properties of lattice path matroid polytopes? To study facets and describe facial structures of lattice path matroid polytopes, we introduce more specific definitions and notation. The \emph{$i$-deletion} of $M[P,Q]$ is the matroid $M|_{E(M)-\{i\}}$, the restriction of $M[P,Q]$ to $E(M)-\{i\}$. The \emph{$i$-contraction} of $M[P,Q]$ is the matroid $M/\{i\} \oplus \{i\}$, which is isomorphic to $M/\{i\}$, the contraction of $M[P,Q]$ on $\{i\}$. An \emph{outside corner} $(p,q)$ of the region $[P,Q]$ is a point at an $NE$ corner of~$P$ or at an $EN$ corner of $Q$. At an outside corner $(p,q)$, let the \emph{$(p,q)$-direct sum} of $M[P,Q]$ be the matroid $M_1 \oplus M_2$, where $M_1$ and $M_2$ correspond to the lower left quadrant and the upper right quadrant of $[P,Q]$ with center $(p,q)$, respectively. If $(p,q)$ is the unique outside corner of $[P,Q]$ such that $p+q = i$ for some integer $i$, then the $(p,q)$-direct sum is abbreviated to the \emph{$i$-direct sum}. Figure~\ref{i-R-C-D-LPM} shows how a deletion, a contraction, and a direct sum act on $[P,Q]$. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.45] \draw (0,18)--(1,18)--(1,19)--(0,19)--cycle; \draw (1,18)--(2,18)--(2,19)--(1,19)--cycle; \draw (1,19)--(2,19)--(2,20)--(1,20)--cycle; \draw (0,19)--(1,19)--(1,20)--(0,20)--cycle; \draw (1,20)--(2,20)--(2,21)--(1,21)--cycle; \draw (0.3,18.5) node{\scriptsize{$1$}}; \draw (1.3,18.5) node{\scriptsize{$2$}}; \draw (2.3,18.5) node{\scriptsize{$3$}}; \draw (0.3,19.5) node{\scriptsize{$2$}}; \draw (1.3,19.5) node{\scriptsize{$3$}}; \draw (2.3,19.5) node{\scriptsize{$4$}}; \draw (1.3,20.5) node{\scriptsize{$4$}}; \draw (2.3,20.5) node{\scriptsize{$5$}}; \filldraw[orange, fill=orange] (-5,13)--(-5,14.3)--(-4,14.3)--(-4,14)--(-4,13)--(-3,12)--(-3,11.7)--(-4,11.7)--(-4,12)--cycle; \draw (-5,12)--(-4,12)--(-4,13)--(-5,13)--cycle; \draw (-4,12)--(-3,12)--(-3,13)--(-4,13)--cycle; \draw (-4,13)--(-3,13)--(-3,14)--(-4,14)--cycle; \draw (-5,13)--(-4,13)--(-4,14)--(-5,14)--cycle; \draw (-4,14)--(-3,14)--(-3,15)--(-4,15)--cycle; \draw (-1.3,13.5) node{$\rightarrow$}; \draw[ultra thick,orange] (0.5,13)--(1.5,12); \draw (0.5,12)--(1.5,12)--(1.5,13)--(0.5,13)--cycle; \draw (0.5,13)--(1.5,13)--(1.5,14)--(0.5,14)--cycle; \draw (0.5,14)--(1.5,14)--(1.5,15)--(0.5,15)--cycle; \draw (0.8,12.5) node{\scriptsize{$1$}}; \draw (1.8,12.5) node{\scriptsize{$3$}}; \draw (0.8,13.5) node{\scriptsize{$3$}}; \draw (1.8,13.5) node{\scriptsize{$4$}}; \draw (0.8,14.5) node{\scriptsize{$4$}}; \draw (1.8,14.5) node{\scriptsize{$5$}}; \filldraw[orange, fill=orange] (-4,8)--(-4.3,8)--(-4.3,9)--(-4,9)--(-3,8)--(-2.7,8)--(-2.7,7)--(-3,7)--cycle; \draw (-5,6)--(-4,6)--(-4,7)--(-5,7)--cycle; \draw (-4,6)--(-3,6)--(-3,7)--(-4,7)--cycle; \draw (-4,7)--(-3,7)--(-3,8)--(-4,8)--cycle; \draw (-5,7)--(-4,7)--(-4,8)--(-5,8)--cycle; \draw (-4,8)--(-3,8)--(-3,9)--(-4,9)--cycle; \draw (-1.5,7.5) node{$\rightarrow$}; \draw[ultra thick,orange] (1,8)--(2,7); \draw (0,6)--(1,6)--(1,7)--(0,7)--cycle; \draw (1,6)--(2,6)--(2,7)--(1,7)--cycle; \draw
(1,7)--(2,7)--(2,8)--(1,8)--cycle; \draw (0,7)--(1,7)--(1,8)--(0,8)--cycle; \draw (0.3,6.5) node{\scriptsize{$1$}}; \draw (1.3,6.5) node{\scriptsize{$2$}}; \draw (2.3,6.5) node{\scriptsize{$3$}}; \draw (0.3,7.5) node{\scriptsize{$2$}}; \draw (1.3,7.5) node{\scriptsize{$3$}}; \draw (2.3,7.5) node{\scriptsize{$5$}}; \filldraw[orange, fill=orange] (-4,0) rectangle (-3,2); \draw[orange, fill=orange] (-4,2) circle (-1ex); \draw (-5,0)--(-4,0)--(-4,1)--(-5,1)--cycle; \draw (-4,0)--(-3,0)--(-3,1)--(-4,1)--cycle; \draw (-4,1)--(-3,1)--(-3,2)--(-4,2)--cycle; \draw (-5,1)--(-4,1)--(-4,2)--(-5,2)--cycle; \draw (-4,2)--(-3,2)--(-3,3)--(-4,3)--cycle; \draw (-1.5,1.5) node{$\rightarrow$}; \draw[orange, fill=orange] (1,2) circle (1ex); \draw (0,0)--(1,0)--(1,1)--(0,1)--cycle; \draw (0,1)--(1,1)--(1,2)--(0,2)--cycle; \draw (1,2)--(2,2)--(2,3)--(1,3)--cycle; \draw (0.3,0.5) node{\scriptsize{$1$}}; \draw (0.3,1.5) node{\scriptsize{$2$}}; \draw (1.3,0.5) node{\scriptsize{$2$}}; \draw (1.3,1.5) node{\scriptsize{$3$}}; \draw (1.3,2.5) node{\scriptsize{$4$}}; \draw (2.3,2.5) node{\scriptsize{$5$}}; \draw (14,18.3) node{\footnotesize{$124~125~134~135~145~234~235~245~345$}}; \draw (14,20.3) node{\footnotesize{Bases for $M[P,Q]$}}; \draw (14,12.3) node{\footnotesize{$134~135~145~345$}}; \draw (14,14.3) node{\footnotesize{Bases for $M|_{\{1,3,4,5\}}$ after $2$-deletion}}; \draw (14,6.3) node{\footnotesize{$124~134~145~234~245~345$}}; \draw (14,8.3) node{\footnotesize{Bases for $M/\{4\} \oplus \{4\}$ after $4$-contraction}}; \draw (14,0.3) node{\footnotesize{$124~125~134~135~234~235$}}; \draw (10.55,2.4)--(10.85,2.4)--(10.85,2.7)--(10.55,2.7)--cycle; \draw (10.55,2.1)--(10.85,2.1)--(10.85,2.4)--(10.55,2.4)--cycle; \draw (13.8,2.2)--(14.1,2.2)--(14.1,2.5)--(13.8,2.5)--cycle; \draw (14,2.3) node{\footnotesize{Bases for $M_1[\hspace{3mm}] \oplus M_2[\hspace{3mm}]$ after $(1,2)$-direct sum}}; \end{tikzpicture} \end{center} \caption{$2$-deletion, $4$-contraction, and $(1,2)$-direct sum of matroids on $[P,Q]$ and their bases.} \label{i-R-C-D-LPM} \end{figure} \begin{example} \label{facets-border-strip} The polytopes in Figure~\ref{i-R-C-D-facets} are facets of the polytope $\mathcal{P}(M[P,Q])$ corresponding to $2$-deletion, $4$-contraction, and $3$-direct sum of the matroid $M[P,Q]$ in Figure~\ref{i-R-C-D-LPM}. 
\end{example} \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.45] \draw[ultra thick,orange] (0,1)--(1,0); \draw (0,0)--(1,0)--(1,1)--(0,1)--cycle; \draw (0,1)--(1,1)--(1,2)--(0,2)--cycle; \draw (0,2)--(1,2)--(1,3)--(0,3)--cycle; \draw (0.3,0.5) node{\scriptsize{$1$}}; \draw (1.3,0.5) node{\scriptsize{$3$}}; \draw (0.3,1.5) node{\scriptsize{$3$}}; \draw (1.3,1.5) node{\scriptsize{$4$}}; \draw (0.3,2.5) node{\scriptsize{$4$}}; \draw (1.3,2.5) node{\scriptsize{$5$}}; \draw (2.5,1.5) node{$\rightarrow$}; \draw (5.5,3)--(4,0.7)--(5.5,0)--(7,0.7)--cycle; \draw[dashed] (4,0.7)--(7,0.7); \draw (5.5,3)--(5.5,0); \draw (5.5,3.3) node{\scriptsize{$134$}}; \draw (7.5,0.7) node{\scriptsize{$135$}}; \draw (3.4,0.7) node{\scriptsize{$145$}}; \draw (5.5,-0.5) node{\scriptsize{$345$}}; \draw[ultra thick,orange] (11,2.5)--(12,1.5); \draw (10,0.5)--(11,0.5)--(11,1.5)--(10,1.5)--cycle; \draw (11,0.5)--(12,0.5)--(12,1.5)--(11,1.5)--cycle; \draw (11,1.5)--(12,1.5)--(12,2.5)--(11,2.5)--cycle; \draw (10,1.5)--(11,1.5)--(11,2.5)--(10,2.5)--cycle; \draw (10.3,1) node{\scriptsize{$1$}}; \draw (11.3,1) node{\scriptsize{$2$}}; \draw (12.3,1) node{\scriptsize{$3$}}; \draw (10.3,2) node{\scriptsize{$2$}}; \draw (11.3,2) node{\scriptsize{$3$}}; \draw (12.3,2) node{\scriptsize{$5$}}; \draw (13.5,1.5) node{$\rightarrow$}; \draw (16.5,3.3)--(15,1.5)--(16.5,0)--(18,1.8)--(16.5,3.3); \draw (15,1.5)--(17,1.5)--(18,1.8); \draw[dashed] (18,1.8)--(16,1.8)--(15,1.5); \draw (16.5,3.3)--(17,1.5)--(16.5,0); \draw[dashed] (16.5,3.3)--(16,1.8)--(16.5,0); \draw (16.5,3.6) node{\scriptsize{$124$}}; \draw (15.5,2.2) node{\scriptsize{$245$}}; \draw (18.4,2.2) node{\scriptsize{$145$}}; \draw (14.6,1) node{\scriptsize{$234$}}; \draw (17.5,1) node{\scriptsize{$134$}}; \draw (16.5,-0.5) node{\scriptsize{$345$}}; \hspace{3mm} \draw[orange, fill=orange] (21,2) circle (1ex); \draw (20,0)--(21,0)--(21,1)--(20,1)--cycle; \draw (20,1)--(21,1)--(21,2)--(20,2)--cycle; \draw (21,2)--(22,2)--(22,3)--(21,3)--cycle; \draw (20.3,0.5) node{\scriptsize{$1$}}; \draw (20.3,1.5) node{\scriptsize{$2$}}; \draw (21.3,0.5) node{\scriptsize{$2$}}; \draw (21.3,1.5) node{\scriptsize{$3$}}; \draw (21.3,2.5) node{\scriptsize{$4$}}; \draw (22.3,2.5) node{\scriptsize{$5$}}; \draw (23,1.5) node{$\rightarrow$}; \draw (24.5,3)--(27,3)--(25.75,2.6)--(24.5,3); \draw (24.5,3)--(24.5,0.3)--(25.75,-0.1)--(27,0.3)--(27,3); \draw (25.75,2.6)--(25.75,-0.1); \draw[dashed] (24.5,0.3)--(27,0.3); \draw (24,3.4) node{\scriptsize{$124$}}; \draw (24,-0.1) node{\scriptsize{$125$}}; \draw (27.5,3.4) node{\scriptsize{$134$}}; \draw (27.5,-0.1) node{\scriptsize{$135$}}; \draw (25.7,3) node{\scriptsize{$234$}}; \draw (25.7,-0.5) node{\scriptsize{$235$}}; \end{tikzpicture} \end{center} \caption{$2$-deletion, $4$-contraction, and $3$-direct sum of matroids on $[P,Q]$ and corresponding facets.} \label{i-R-C-D-facets} \end{figure} In this section we focus on the properties of lattice path matroid polytopes corresponding to \emph{border strips}, connected (non-empty) skew shapes with no $2 \times2 $ square. Let $P = p_1 p_2 \cdots p_{m+r}$ and $Q = q_1 q_2 \cdots q_{m+r}$ be two lattice paths from $(0,0)$ to $(m, r)$ with $P$ never going above $Q$. Note that the region bounded by $P$ and $Q$ is a border strip if and only if $p_i=q_i$ for $1 < i < m+r$, $p_1=q_{m+r}=\text{East step}$, and $q_1=p_{m+r}=\text{North step}$. 
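This characterization is immediate to test on path words. A small Python sketch (ours; strings are $0$-based, so $p_1$ is \texttt{P[0]}):

\begin{verbatim}
def is_border_strip(P, Q):
    # P, Q: words over {'E','N'} of the same length m+r,
    # P never going above Q; test the characterization above
    n = len(P)
    if len(Q) != n or n < 2:
        return False
    return (P[0] == 'E' and Q[n-1] == 'E' and
            Q[0] == 'N' and P[n-1] == 'N' and
            P[1:n-1] == Q[1:n-1])

print(is_border_strip('EN', 'NE'))      # True: the single box
print(is_border_strip('EENN', 'NNEE'))  # False: [P,Q] is the 2x2 square
\end{verbatim}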
For such $P$ and $Q$ with $m+r > 2$, we define a lattice path $R = R(P,Q) = r_1 r_2 \cdots r_{m+r}$ by $r_i = p_i (= q_i)$ for $1 < i < m+r$, $r_1 = r_2$, and $r_{m+r} = r_{m+r-1}$. If $m+r = 2$, that is, if $P$ and $Q$ are lattice paths from $(0,0)$ to $(1,1)$, the lattice path $R$ is defined as two consecutive North steps from $(0,0)$ to $(0,2)$. Not only in the case $m+r = 2$ but also in general, $R$ need not be a path from $(0,0)$ to $(m, r)$. See Figure~\ref{P-Q-R}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.45] \draw[orange, pattern=north west lines, pattern color=orange] (0,0)--(1,0)--(0,1)--cycle; \draw[orange, pattern=north west lines, pattern color=orange] (5,7)--(4,7)--(5,6)--cycle; \filldraw[orange, fill=orange] (0,3)--(1,3)--(2,2)--(1,2)--cycle; \filldraw[orange, fill=orange] (4,6)--(4,5)--(5,4)--(5,5)--cycle; \draw (0,0)--(1,0)--(1,1)--(0,1)--cycle; \draw (0,1)--(1,1)--(1,2)--(0,2)--cycle; \draw (0,2)--(1,2)--(1,3)--(0,3)--cycle; \draw (1,2)--(2,2)--(2,3)--(1,3)--cycle; \draw (2,2)--(3,2)--(3,3)--(2,3)--cycle; \draw (3,2)--(4,2)--(4,3)--(3,3)--cycle; \draw (3,3)--(4,3)--(4,4)--(3,4)--cycle; \draw (4,3)--(5,3)--(5,4)--(4,4)--cycle; \draw (4,4)--(5,4)--(5,5)--(4,5)--cycle; \draw (4,5)--(5,5)--(5,6)--(4,6)--cycle; \draw (4,6)--(5,6)--(5,7)--(4,7)--cycle; \draw (1.45,-0.7) node{\footnotesize{$E = p_1$}}; \draw (-1.5,0.33) node{\footnotesize{$q_1 = N$}}; \draw (-1.3,3.4) node{\footnotesize{$p_4 = q_4 = E$}}; \draw (7.6,4.5) node{\footnotesize{$N = p_{10} = q_{10}$}}; \draw (3.5,7.4) node{\footnotesize{$q_{12} = E$}}; \draw (6.6,6.5) node{\footnotesize{$N = p_{12}$}}; \draw (2.5,-2.3) node{\footnotesize{$[P,Q]$}}; \draw (13,0)--(13,3)--(16,3)--(16,4)--(17,4)--(17,8); \draw[ultra thick,orange] (13,3)--(14,3); \draw[ultra thick,orange] (17,5)--(17,6); \fill (13,0) circle(4pt); \fill (13,1) circle(4pt); \fill (13,2) circle(4pt); \fill (13,3) circle(4pt); \fill (14,3) circle(4pt); \fill (15,3) circle(4pt); \fill (16,3) circle(4pt); \fill (16,4) circle(4pt); \fill (17,4) circle(4pt); \fill (17,5) circle(4pt); \fill (17,6) circle(4pt); \fill (17,7) circle(4pt); \fill (17,8) circle(4pt); \draw (12.5,3.5) node{\footnotesize{$r_4 = E$}}; \draw (19.9,7.4) node{\footnotesize{$N = r_{12} (= r_{11})$}}; \draw (15.6,0.4) node{\footnotesize{$N = r_1 (= r_2)$}}; \draw (18.7,5.4) node{\footnotesize{$N = r_{10}$}}; \draw (15.5,-2.3) node{\footnotesize{$R(P,Q)$}}; \end{tikzpicture} \end{center} \caption{Border strip $[P,Q]$ from $(0,0)$ to $(5,7)$ and lattice path $R(P,Q)$ from $(0,0)$ to $(4,8)$.} \label{P-Q-R} \end{figure} For a lattice path $R = R(P,Q)$ we define three sets $\mathcal{D}(R)$, $\mathcal{C}(R)$, $\mathcal{S}(R)$ as follows: \begin{align*} \mathcal{D}(R) &= \{i \text{-deletion of $M[P,Q]$} : r_i = \text{East step}\},\\ \mathcal{C}(R) &= \{i \text{-contraction of $M[P,Q]$} : r_i = \text{North step}\}, \text{and}\\ \mathcal{S}(R) &= \{i \text{-direct sum of $M[P,Q]$} : r_i \neq r_{i+1}\}. \end{align*} \noindent For each element of $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ we have a corresponding border strip. See Figure~\ref{R-C-D-LPMP}.
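In computational terms, the word $R(P,Q)$ and the three sets above are determined by the word of $P$ alone, since $P$ and $Q$ agree in the interior. The following Python sketch (ours, for illustration; positions are returned $1$-based to match the notation above) computes them; the word \texttt{'ENEEENNEN'} used below encodes a border strip whose $R$-word is the one displayed in Figure~\ref{R-C-D-LPMP}.

\begin{verbatim}
def R_word(P):
    # R(P,Q) for a border strip: keep the interior of P (= Q) and
    # replace the first and last letters by their inner neighbours
    n = len(P)
    if n == 2:
        return 'NN'
    return P[1] + P[1:n-1] + P[n-2]

def DCS(R):
    D = [i for i in range(1, len(R) + 1) if R[i-1] == 'E']  # deletions
    C = [i for i in range(1, len(R) + 1) if R[i-1] == 'N']  # contractions
    S = [i for i in range(1, len(R)) if R[i-1] != R[i]]     # direct sums
    return D, C, S

print(R_word('ENEEENNEN'))   # 'NNEEENNEE'
\end{verbatim}

For this example, $|\mathcal{D}(R)| + |\mathcal{C}(R)| + |\mathcal{S}(R)| = 9 + 3 = (m+r) + d$, anticipating the facet count of Corollary~\ref{number-of-facets} below.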
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.30] \filldraw[orange, fill=orange] (3,2)--(4,1)--(4,2)--(3,3)--cycle; \draw (0,0)--(1,0)--(1,1)--(0,1)--cycle; \draw (0,1)--(1,1)--(1,2)--(0,2)--cycle; \draw (1,1)--(2,1)--(2,2)--(1,2)--cycle; \draw (2,1)--(3,1)--(3,2)--(2,2)--cycle; \draw (3,1)--(4,1)--(4,2)--(3,2)--cycle; \draw (3,2)--(4,2)--(4,3)--(3,3)--cycle; \draw (3,3)--(4,3)--(4,4)--(3,4)--cycle; \draw (4,3)--(5,3)--(5,4)--(4,4)--cycle; \draw (6.5,2) node{$\rightarrow$}; \draw (6.5,-2) node{\footnotesize{$6$-contraction ($r_6 = N$)}}; \draw[ultra thick,orange] (11,2)--(12,1); \draw (8,0)--(9,0)--(9,1)--(8,1)--cycle; \draw (8,1)--(9,1)--(9,2)--(8,2)--cycle; \draw (9,1)--(10,1)--(10,2)--(9,2)--cycle; \draw (10,1)--(11,1)--(11,2)--(10,2)--cycle; \draw (11,1)--(12,1)--(12,2)--(11,2)--cycle; \draw (11,2)--(12,2)--(12,3)--(11,3)--cycle; \draw (12, 2)--(13,2)--(13,3)--(12,3)--cycle; \filldraw[orange, fill=orange] (17,2)--(18,2)--(19,1)--(18,1)--cycle; \draw (16,0)--(17,0)--(17,1)--(16,1)--cycle; \draw (16,1)--(17,1)--(17,2)--(16,2)--cycle; \draw (17,1)--(18,1)--(18,2)--(17,2)--cycle; \draw (18,1)--(19,1)--(19,2)--(18,2)--cycle; \draw (19,1)--(20,1)--(20,2)--(19,2)--cycle; \draw (19,2)--(20,2)--(20,3)--(19,3)--cycle; \draw (19,3)--(20,3)--(20,4)--(19,4)--cycle; \draw (20,3)--(21,3)--(21,4)--(20,4)--cycle; \draw (22.5,2) node{$\rightarrow$}; \draw (22.5,-2) node{\footnotesize{$4$-deletion ($r_4 = E$)}}; \draw[ultra thick,orange] (25,2)--(26,1); \draw (24,0)--(25,0)--(25,1)--(24,1)--cycle; \draw (24,1)--(25,1)--(25,2)--(24,2)--cycle; \draw (25,1)--(26,1)--(26,2)--(25,2)--cycle; \draw (26,1)--(27,1)--(27,2)--(26,2)--cycle; \draw (26,2)--(27,2)--(27,3)--(26,3)--cycle; \draw (26,3)--(27,3)--(27,4)--(26,4)--cycle; \draw (27,3)--(28,3)--(28,4)--(27,4)--cycle; \filldraw[orange, fill=orange] (31,1)--(31,2)--(32,2)--(32,1)--cycle; \draw (31,0)--(32,0)--(32,1)--(31,1)--cycle; \draw (31,1)--(32,1)--(32,2)--(31,2)--cycle; \draw (32,1)--(33,1)--(33,2)--(32,2)--cycle; \draw (33,1)--(34,1)--(34,2)--(33,2)--cycle; \draw (34,1)--(35,1)--(35,2)--(34,2)--cycle; \draw (34,2)--(35,2)--(35,3)--(34,3)--cycle; \draw (34,3)--(35,3)--(35,4)--(34,4)--cycle; \draw (35,3)--(36,3)--(36,4)--(35,4)--cycle; \draw (37.5,2) node{$\rightarrow$}; \draw (37.5,-2) node{\footnotesize{$2$-direct sum ($r_2 \neq r_3$)}}; \draw[orange, fill=orange] (40,1) circle (1ex); \draw (39,0)--(40,0)--(40,1)--(39,1)--cycle; \draw (40,1)--(41,1)--(41,2)--(40,2)--cycle; \draw (41,1)--(42,1)--(42,2)--(41,2)--cycle; \draw (42,1)--(43,1)--(43,2)--(42,2)--cycle; \draw (42,2)--(43,2)--(43,3)--(42,3)--cycle; \draw (42,3)--(43,3)--(43,4)--(42,4)--cycle; \draw (43,3)--(44,3)--(44,4)--(43,4)--cycle; \hspace{10mm} \draw (0,8)--(1,8)--(1,9)--(0,9)--cycle; \draw (0,9)--(1,9)--(1,10)--(0,10)--cycle; \draw (1,9)--(2,9)--(2,10)--(1,10)--cycle; \draw (2,9)--(3,9)--(3,10)--(2,10)--cycle; \draw (3,9)--(4,9)--(4,10)--(3,10)--cycle; \draw (3,10)--(4,10)--(4,11)--(3,11)--cycle; \draw (3,11)--(4,11)--(4,12)--(3,12)--cycle; \draw (4,11)--(5,11)--(5,12)--(4,12)--cycle; \draw (2.5,6.3) node{\footnotesize{$[P,Q]$}}; \draw (12,8)--(12,10)--(15,10)--(15,12)--(17,12); \fill (12,8) circle(4pt); \fill (12,9) circle(4pt); \fill (12,10) circle(4pt); \fill (13,10) circle(4pt); \fill (14,10) circle(4pt); \fill (15,10) circle(4pt); \fill (15,11) circle(4pt); \fill (15,12) circle(4pt); \fill (16,12) circle(4pt); \fill (17,12) circle(4pt); \draw (15,6.3) node{\footnotesize{$R(P,Q)$}}; \draw (31,10) node{\footnotesize{$R(P,Q) = r_1 r_2 r_3 r_4 r_5 r_6 r_7 r_8 r_9$}}; \draw 
(33,8.5) node{\footnotesize{$= N N E E E N N E E$}}; \end{tikzpicture} \end{center} \caption{Border strips corresponding to deletion, contraction, and direct sum of $M[P,Q]$.} \label{R-C-D-LPMP} \end{figure} Note that $\dim(\mathcal{P}(M[P,Q])) = m+r-1$ for lattice paths $P$ and $Q$ from $(0,0)$ to $(m, r)$ with $P$ never going above $Q$ such that the region bounded by $P$ and $Q$ is a border strip, since such $P$ and $Q$ satisfy $k=2$ in Corollary~\ref{dimension-of-matroid-polytope}. \begin{lemma} \label{facets-of-matroid-polytope} The set of facets of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$, where the region bounded by $P$ and $Q$ is a border strip, has a one-to-one correspondence with $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$. \end{lemma} \begin{proof} For the polytope $\mathcal{P}(M[P,Q]) = \mathcal{P}(\mathcal{B}(M)) = \mathrm{conv} \{ e_B = e_{b_1} + \cdots + e_{b_r} : B \in \mathcal{B} \}$, take the subset $S^{-i} = \mathrm{conv}\{e_{B} : i \notin B \in \mathcal{B}\}$ on the hyperplane $x_i = 0$ in ${\mathbb R}^{m+r-k+2}$, which corresponds to the $i$-deletion in $\mathcal{D}(R)$. All the vertices $e_B \in \mathcal{P}(M[P,Q]) - S^{-i}$ lie on the half-space $x_i > 0$ since their $i$th coordinates are $1 (> 0)$. The dimension of $S^{-i}$ is $(m-1)+r-k+1 = m+r-k$ since $k$ and $r$ are fixed and only the width $m$ drops by $1$ under the $i$-deletion. Hence, $S^{-i}$ is a facet of $\mathcal{P}(M[P,Q]).$ Similarly, for an $i$-contraction in $\mathcal{C}(R)$, if we take the subset $S^{+i} = \mathrm{conv}\{e_{B} : i \in B \in \mathcal{B}\}$ on the hyperplane $x_i = 1$ in ${\mathbb R}^{m+r-k+2}$, then all the vertices of $\mathcal{P}(M[P,Q]) - S^{+i}$ have $i$th coordinate $0 (< 1)$ and lie on the half-space $x_i < 1$. After the $i$-contraction the height $r$ drops by $1$, while $m$ and $k$ are fixed, so the dimension of $S^{+i}$ is $m+(r-1)-k+1 = m+r-k$. Hence, $S^{+i}$ is also a facet of $\mathcal{P}(M[P,Q])$. For an $i$-direct sum in $\mathcal{S}(R)$, without loss of generality, we may assume that $r_i$ is an East step and $r_{i+1}$ is a North step. That means the direct sum occurs at the point $(p,q)$ of the path $Q$ where $p+q = i$. Take the subset $S^{i} = \mathrm{conv}\{e_{B} : |[i] \cap B| = q, B \in \mathcal{B}\}$ of $\mathcal{P}(M[P,Q])$. Then, $S^i$ lies on the hyperplane $x_1+x_2+ \cdots + x_i = q$ and the other vertices lie on the half-space $x_1+x_2+ \cdots + x_i < q$. The dimension of $S^{i}$ is $m+r-(k+1)+1 = m+r-k$ since we have the same end points and one more intersection point after the $i$-direct sum. Hence, $S^{i}$ is a facet of $\mathcal{P}(M[P,Q])$. For the other direction, we show that all the facets of $\mathcal{P}(M[P,Q])$ lie on the hyperplanes corresponding to $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$. Suppose the polytope $\mathcal{P}(M[P,Q])$ has a facet not lying on any hyperplane from $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$. That means the hyperplanes corresponding to $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ do not cut out $\mathcal{P}(M[P,Q])$ in its affine hull $x_1 + x_2 + \cdots + x_{m+r-k+2} = r$.
Then, there exists a point $x = (x_1, x_2, \ldots, x_{m+r-k+2}) \in {\mathbb R}^{m+r-k+2} - \mathcal{P}(M[P,Q])$ located in the intersection of all the half-spaces mentioned in the previous paragraphs and the hyperplane $x_1 + x_2 + \cdots + x_{m+r-k+2} = r$. Since $x$ satisfies the above conditions, for any maximal sequence $r_u r_{u+1} \cdots r_{u+v}$ of consecutive $E$'s of $R$, we have $x_i \geq 0$ for $i \in [u, u+v]$, $x_1 + x_2 + \cdots + x_u \geq N(R,u)$, and $x_1 + x_2 + \cdots + x_{u+v} \leq N(R, u) + 1$, where $N(R, i)$ is the number of North steps among the first $i$ steps of $R$. Then, $x_i \leq 1$ for $i \in [u, u+v]$, and this implies $0 \leq x_i \leq 1$ if $r_i = E$. Similarly, for any maximal sequence $r_u r_{u+1} \cdots r_{u+v}$ of consecutive $N$'s of $R$, we obtain $0 \leq x_i \leq 1$ for $i \in [u, u+v]$, and this means $0 \leq x_i \leq 1$ if $r_i = N$. Hence, we get $0 \leq x_i \leq 1$ for all $i$ in $[m+r]$. For each sequence $r_j r_{j+1}$ of $R$ such that $r_j r_{j+1} = NE$ or $j = m+r$, two conditions, $x_j \leq 1$ and $x_1 + x_2 + \cdots + x_j \geq N(R, j) = N(P, j)$, are given. Hence, it follows that $x_1 + x_2 + \cdots + x_{j-1} \geq N(R, j) - 1 = N(R, j-1) = N(P, j-1)$. If $r_{j-1}$ is a North step, then $x_{j-1} \leq 1$ and $x_1 + x_2 + \cdots + x_{j-2} \geq N(R, j-1) - 1 = N(R, j-2) = N(P, j-2)$. If $r_{j-1}$ is an East step, then $x_1 + x_2 + \cdots + x_{j-2} \geq N(R, h) = N(R, j-2) = N(P, j-2)$, where $r_h r_{h+1}$ is the previous $NE$ sequence of $R$, or $h=1$. After checking each East step $r_{j-k}$ for $1 \leq k \leq j-h-1$, we have $x_1 + x_2 + \cdots + x_i \geq N(P, i)$ for all $i$ in $[m+r]$. Applying a similar argument to each subsequence $r_j r_{j+1}$ of $R$ such that $r_j r_{j+1} = EN$ or $j=1$, where $x_{j} \geq 0$ and $x_1 + x_2 + \cdots + x_j \leq N(R, j) + 1 = N(Q, j)$, we also get $x_1 + x_2 + \cdots + x_i \leq N(Q,i)$ for all $i$ in $[m+r]$. Hence, we conclude $0 \leq x_i \leq 1$ and $N(P,i) \leq x_1 + x_2 + \cdots + x_i \leq N(Q,i)$ for all $i \in [m+r]$. These inequalities describe $\mathcal{P}(M[P,Q])$, so they contradict the fact that $x$ is not a point of $\mathcal{P}(M[P,Q])$. Therefore, the hyperplanes corresponding to $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ cut out $\mathcal{P}(M[P,Q])$ in its affine hull $x_1 + x_2 + \cdots + x_{m+r-k+2} = r$, and all the facets of $\mathcal{P}(M[P,Q])$ lie on the hyperplanes corresponding to $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$. \end{proof} \begin{corollary} \label{number-of-facets} For lattice paths $P$ and $Q$ from $(0,0)$ to $(m, r)$ such that the region bounded by $P$ and $Q$ is a border strip, the number of facets of $\mathcal{P}(M[P,Q])$ is $m+r+d$, where $d$ is the number of outside corners of the region $[P,Q]$. \end{corollary} Beyond the facets, we will find a set in one-to-one correspondence with all the faces of the lattice path matroid polytope. If we consider the lattice path $R(P,Q)$ as a sequence in $\{E, N\}^{m+r}$ and cut the sequence $R$ at every direct sum position, we get $d+1$ subsequences, where $d = |\mathcal{S}(R(P,Q))|$ is the number of corners of $R(P,Q)$. Let $(S_1, S_2, \ldots, S_{d+1})$ be the set partition of $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q))$ with $S_i$ corresponding to the $i$th subsequence of $R$. Define a set $S_i^{L} = S_i \cup \{(i-1) \text{th direct sum}\}$ for $1 < i \leq d+1$ and $S_1^L = S_1$.
Similarly, define a set $S_i^R = S_i \cup \{i \text{th direct sum}\}$ for $1 \leq i < d+1$ and $S_{d+1}^R = S_{d+1}$. For a subset $T$ of $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ we introduce the following three conditions: \begin{enumerate} \item[(C1)] $S_i^R \nsubseteq T$ for $2 \leq i \leq d+1$. \item[(C2)] For each sequence $S_i^L, S_{i+1}, \ldots, S_{d}$ where $1 \leq i \leq d$, a subsequence $S_i^L, S_{i+1}, \ldots, S_{j}$ can be included in $T$ if and only if the subsequence has even length and $S_{j+1} \nsubseteq T$. \item[(C3)] A sequence $S_i^L, S_{i+1}, \ldots, S_{d+1}-\{(m+r)\text{-deletion}, (m+r)\text{-contraction}\}$ can be included in $T$ for $1 \leq i \leq d$ if and only if the sequence has even length. \end{enumerate} \begin{example} \label{3-rules} For a lattice path $R(P,Q) = E^2N^2ENE^3NEN^4$, we have $d = 7$ and $S_1 = \{1 \text{-deletion}, 2 \text{-deletion}\}$, $S_2 = \{3 \text{-contraction}, 4 \text{-contraction}\}$, $S_3 = \{5 \text{-deletion}\},$ $\ldots ,$ $S_8 = \{12 \text{-contraction}, \ldots, 15 \text{-contraction}\}$. Figure~\ref{3-conditions} illustrates conditions (C1), (C2), and (C3). (a) The orange arrows represent the $7$ sets $S_2^R, S_3^R, \ldots, S_7^R, \text{and}~ S_8^R (=S_8)$, none of which may be contained in $T$ by (C1). For example, $T$ cannot be a $4$-subset such as $\{3 \text{-contraction}, 4 \text{-contraction}, 4 \text{-direct sum}, 5 \text{-deletion}\}$ since it contains $S_2^R$. (b) The orange arrow is the sequence $S_2^L, S_{3}, \ldots, S_{7}$. By condition (C2), $\{2 \text{-direct sum}, 3 \text{-contraction}, 4 \text{-contraction}\}$ cannot be $T$ since it contains $S_2^L$ but not $S_3$. However, $\{2 \text{-direct sum}, 3 \text{-contraction}, 4 \text{-contraction}, 5 \text{-deletion}\}$ can be $T$ since it includes $S_2^L$ and $S_3$, but not $S_4$. (c) As the green and blue arrows in Figure~\ref{3-conditions}(b) show, we need the third condition (C3) when $j = d+1$. The green arrow represents the case of (C3) in which $T$ may include the sequence $S_3^L, S_4, S_5, S_6, S_7, S_8-\{15 \text{-contraction}\}$, and the blue arrow represents the case in which the sequence $S_1^L, S_2, S_3, S_4, S_5, S_6, S_7, S_8-\{15 \text{-contraction}\}$ may be contained in $T$.
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.35] \draw (0,0)--(2,0)--(2,2)--(3,2)--(3,3)--(6,3)--(6,4)--(7,4)--(7,8); \fill (0,0) circle(6pt); \fill (1,0) circle(6pt); \fill (2,0) circle(6pt); \fill (2,1) circle(6pt); \fill (2,2) circle(6pt); \fill (3,2) circle(6pt); \fill (3,3) circle(6pt); \fill (4,3) circle(6pt); \fill (5,3) circle(6pt); \fill (6,3) circle(6pt); \fill (6,4) circle(6pt); \fill (7,4) circle(6pt); \fill (7,5) circle(6pt); \fill (7,6) circle(6pt); \fill (7,7) circle(6pt); \fill (7,8) circle(6pt); \draw (4,-1.6) node{\footnotesize{(a) $S_i^R$ for $2 \leq i \leq 8$}}; \draw[orange,thick,->] (7,8) -- (7.3,4); \fill (7,8) circle(4pt); \draw[orange,thick,->] (7,4) -- (6,4.3); \fill[orange] (7,4) circle(4pt); \draw[orange,thick,->] (6,4) -- (6.3,3); \fill[orange] (6,4) circle(4pt); \draw[orange,thick,->] (6,3) -- (3,3.3); \fill[orange] (6,3) circle(4pt); \draw[orange,thick,->] (3,3) -- (3.3,2); \fill[orange] (3,3) circle(4pt); \draw[orange,thick,->] (3,2) -- (2,2.3); \fill[orange] (3,2) circle(4pt); \draw[orange,thick,->] (2,2) -- (2.3,0); \fill[orange] (2,2) circle(4pt); \hspace{-5mm} \draw (15,0)--(17,0)--(17,2)--(18,2)--(18,3)--(21,3)--(21,4)--(22,4)--(22,8); \fill (15,0) circle(6pt); \fill (16,0) circle(6pt); \fill (17,0) circle(6pt); \fill (17,1) circle(6pt); \fill (17,2) circle(6pt); \fill (18,2) circle(6pt); \fill (18,3) circle(6pt); \fill (19,3) circle(6pt); \fill (20,3) circle(6pt); \fill (21,3) circle(6pt); \fill (21,4) circle(6pt); \fill (22,4) circle(6pt); \fill (22,5) circle(6pt); \fill (22,6) circle(6pt); \fill (22,7) circle(6pt); \fill (22,8) circle(6pt); \draw (20,-1.6) node{\footnotesize{(b) $S_i^L, S_{i+1}, \ldots, S_{j}$ for $1 \leq i \leq 7$}}; \draw[blue,thick,->] (15,0)--(17.3,-0.3)--(17.3,1.7)--(18.3,1.7)--(18.3,2.7)--(21.3,2.7)--(21.3,3.7)--(22.3,3.7)--(22.3,7); \fill (15,0) circle(4pt); \draw[orange,thick,->] (17,0)--(16.4,2.6)--(17.4,2.6)--(17.4,3.6)--(20.4,3.6)--(20.4,4.6)--(22,4.6); \fill[orange] (17,0) circle(4pt); \draw[olive,thick,->] (17,2)--(17.7,2.3)--(17.7,3.3)--(20.7,3.3)--(20.7,4.3)--(21.7,4.3)--(21.7,7); \fill[olive] (17,2) circle(4pt); \draw[gray,thick,dashed] (22,6) circle (2); \draw[gray,->,snake=snake,segment amplitude=.4mm,segment length=2mm,line after snake=1mm] (24,6)--(31,6); \draw (30,0.5)--(34.5,0.5)--(34.5,2)--(36,2)--(36,8); \fill (30,0.5) circle(6pt); \fill (31.5,0.5) circle(6pt); \fill (33,0.5) circle(6pt); \fill (34.5,0.5) circle(6pt); \fill (34.5,2) circle(6pt); \fill (36,2) circle(6pt); \fill (36,3.5) circle(6pt); \fill (36,5) circle(6pt); \fill (36,6.5) circle(6pt); \fill (36,8) circle(6pt); \draw (34,-1.6) node{\footnotesize{(c) $S_{j} = S_8$}}; \draw[gray,thick,dashed] (34,6.5)--(38,6.5); \draw[gray,thick,dashed] (35,5) circle (4); \draw[orange,thick,->] (30,1.1)--(33.9,1.1)--(33.9,2.6)--(36,2.6); \draw[blue,thick,->] (30,0.2)--(34.8,0.2)--(34.8,1.7)--(36.3,1.7)--(36.3,6.5); \draw[olive,thick,->] (30,0.8)--(34.2,0.8)--(34.2,2.3)--(35.7,2.3)--(35.7,6.5); \end{tikzpicture} \end{center} \caption{Corners on $R(P,Q)$.} \label{3-conditions} \end{figure} \end{example} \begin{theorem} \label{faces(m+r-1-l)-of-matroid-polytope} For lattice paths $P$ and $Q$ from $(0,0)$ to $(m, r)$ with $P$ never going above $Q$ and the region bounded by $P$ and $Q$ being a border strip, the set of $(m+r-1-t)$-dimensional faces of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ has a one-to-one correspondence with $t$-subsets of $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) 
\charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ satisfying conditions (C1), (C2), and (C3). \end{theorem} \begin{proof} Note that the $(m+r-1-t)$-dimensional faces of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ are facets of $(m+r-t)$-dimensional faces of $\mathcal{P}(M[P,Q])$. First, we show that each construction in a $t$-subset of $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ satisfying conditions (C1), (C2), and (C3) remains possible after the other constructions have been performed, while the dimension drops from $m+r-1$ to $m+r-1-t$. That is, any $s$ constructions from the $t$-subset drop the dimension of $\mathcal{P}(M[P,Q])$ by exactly $s$, where $1 \leq s \leq t$. This is not hard to check using steps similar to those in the proof of Lemma~\ref{facets-of-matroid-polytope}. For the other direction, suppose that $S_i^R \subseteq T$ for some $i~(2 \leq i \leq d+1)$ in (C1). Since the $i$th direct sum drops the dimension of the polytope obtained by the $|S_i|$ contractions (or deletions) in $S_i^R$ by more than $1$, the dimension of the face obtained after applying all the constructions in $S_i^R$ is less than $m+r-1-|S_i^R|$. Similarly, suppose that an odd-length sequence $S_i^L, S_{i+1}, \ldots, S_{j}$ is contained in $T$ and $S_{j+1} \nsubseteq T$ for some $i$ and $j$ such that $1 \leq i < j \leq d$ in (C2). Then, the dimension of the face obtained after all the constructions in $S_i^L, S_{i+1}, \ldots, S_{j}$ are applied is less than $m+r-1-(|S_i^L| + |S_{i+1}| + \cdots + |S_{j}|)$ since the $(i-1)$th direct sum drops the dimension of the polytope obtained after the $|S_i| + |S_{i+1}| + \cdots + |S_{j}|$ contractions and deletions by more than $1$. In the same way, the $(i-1)$th direct sum also drops the dimension of the polytope by more than $1$ in (C3). Therefore, the set of $(m+r-1-t)$-dimensional faces of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ has a one-to-one correspondence with the $t$-subsets of $\mathcal{D}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(R(P,Q)) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(R(P,Q))$ satisfying conditions (C1), (C2), and (C3). \end{proof} \section{General case} \label{sec-skew-shape} In this section, we consider the region $[P,Q]$ as a connected (non-empty) skew shape, and the faces of a lattice path matroid polytope $\mathcal{P}(M[P,Q])$ are described in terms of certain tiled subregions of $[P,Q]$ containing no $2 \times 2$ squares. Note that, although such a subregion is not allowed to contain a $2 \times 2$ square, the region $[P,Q]$ itself may contain $2 \times 2$ squares, unlike in the previous sections. We begin with the following proposition, which is a generalization of Lemma~\ref{facets-of-matroid-polytope}. Since $[P,Q]$ is now a connected skew shape, $P$ and $Q$ have only two intersection points, $(0,0)$ and $(m,r)$, and an $R$ from $[P,Q]$ is a sequence corresponding to a border strip from $(0,0)$ to $(m,r)$ contained in $[P,Q]$. The proof is omitted since it is similar to the proof of Lemma~\ref{facets-of-matroid-polytope}. \begin{proposition} \label{facets-of-matroid-polytope-general-case} The set of facets of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ has a one-to-one correspondence with the disjoint union of the following three sets: \begin{align*} \mathcal{D}(P,Q) &= \{i \text{-deletion} : r_i = \text{East step for some $R$ from } [P,Q]\},\\ \mathcal{C}(P,Q) &= \{i \text{-contraction} : r_i = \text{North step for some $R$ from } [P,Q]\}, \text{and}\\ \mathcal{S}(P,Q) &= \{(p,q) \text{-direct sum} : (p, q) \text{ is an outside corner of } [P,Q]\}.
\end{align*} \end{proposition} \noindent Note that if the region $[P,Q]$ is a border strip, the above three sets coincide with $\mathcal{D}(R), \mathcal{C}(R)$, and $\mathcal{S}(R)$ in the previous section, respectively. Since a face of $\mathcal{P}(M[P,Q])$ is a facet of a one-higher-dimensional face of $\mathcal{P}(M[P,Q])$, Proposition~\ref{facets-of-matroid-polytope-general-case} implies that all the faces of $\mathcal{P}(M[P,Q])$ are obtained by applying deletions, contractions, and direct sums. Note that a set of matroid constructions used to generate a face of $\mathcal{P}(M[P,Q])$ may not be uniquely determined. Before we give a description of the faces of $\mathcal{P}(M[P,Q])$, we need to define several notions. We label each unit box having corner points $(i,j)$ and $(i+1,j+1)$ inside the region $[P,Q]$ with $i+j+1$, and let $(i,j)$ be the \emph{starting point} and $(i+1,j+1)$ be the \emph{ending point} of the box. A \emph{block} is a border strip located inside $[P,Q]$, and we may consider a block as a tableau given by the labels of the boxes in the block. The starting point and ending point of a block are the starting point of the box with the smallest label and the ending point of the box with the largest label contained in the block, respectively. The \emph{clones} inside the region $[P,Q]$ are blocks that are identical as tableaux and are distinguishable only by their positions. A block is a clone of itself. In Figure~\ref{clone}, the block with the starting point $(1,0)$ and the ending point $(3,2)$ is a clone of the block with the starting point $(0,1)$ and the ending point $(2,3)$, and vice versa. \begin{figure}[h] \begin{center} \includegraphics[width = 0.15\textwidth]{tiling234.pdf} \end{center} \caption{Clones labeled by $2$-$3$-$4$ in $[P,Q]$} \label{clone} \end{figure} For some subregion $[\lambda, \mu]$ of the region $[P,Q]$ and some tiling $\tau$ of $[\lambda, \mu]$, where $\lambda$ and $\mu$ are lattice paths from $(0,0)$ to $(m,r)$ in $[P,Q]$ with $\lambda$ never going above $\mu$, we define a \emph{block-tiled region} (abbreviated BTR) $[\lambda, \mu]_{\tau}$ as follows: \begin{enumerate} \item Each maximal continuous intersection of $\lambda$ and $\mu$ passes through an outside corner or an end point of $[P,Q]$. \item $\tau$ uses blocks as tiles. That means the set of all the unit boxes in $[\lambda, \mu]$ is covered by blocks without gaps or overlaps. \item If two blocks in $[\lambda, \mu]_{\tau}$ contain boxes with the same label, they are clones of each other. \end{enumerate} \noindent Note that a block-tiled region is not defined for every subregion or every tiling. For two block-tiled regions $[\lambda, \mu]_{\tau}$ and $[\lambda', \mu']_{\tau'}$, we say $[\lambda, \mu]_{\tau}$ \emph{covers} $[\lambda', \mu']_{\tau'}$ if $[\lambda, \mu]_{\tau}$ can be obtained by attaching a clone of a block in $[\lambda', \mu']_{\tau'}$ below $\lambda'$ or above $\mu'$. If a block-tiled region is not covered by any other, it is called a \emph{maximal block-tiled region}. \begin{lemma} \label{faces-regions} There is a one-to-one correspondence between the set of all the faces of $\mathcal{P}(M[P,Q])$ and the set of all the maximal block-tiled regions inside $[P,Q]$. \end{lemma} \begin{proof} For a face $\sigma$ of $\mathcal{P}(M[P,Q])$, take a set of matroid constructions by which $\sigma$ is obtained from $\mathcal{P}(M[P,Q])$.
If a $(p,q)$-direct sum is in the set, where $(p,q)$ is an outside corner of $P$ or $Q$, we remove all the steps inside $[P,Q]$ that lie strictly northwest or southeast of the point $(p,q)$, respectively. Also, if an $i$-contraction or an $i$-deletion is in the set, all the East steps or North steps with the label $i$ inside $[P,Q]$ are removed, respectively. After removing all the steps corresponding to the constructions in the set, and finally deleting the remaining steps not lying on connected lattice paths from $(0,0)$ to $(m,r)$, we end up with a maximal block-tiled region $[\lambda, \mu]_{\tau}$ inside $[P,Q]$. Conversely, if the operations in $\mathcal{D}(P,Q) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{C}(P,Q) \charfusion[\mathbin]{\cup}{\cdot} \mathcal{S}(P,Q)$ that should be used to obtain the maximal block-tiled region $[\lambda, \mu]_{\tau}$ from $[P,Q]$ are applied to the polytope $\mathcal{P}(M[P,Q])$, one can easily check that the resulting face is the face $\sigma$. \end{proof} A \emph{block-tiled band} is a block-tiled region containing no $2 \times 2$ squares. We say that block-tiled bands inside $[P,Q]$ are in the same \emph{family} if the sets of maximal continuous intersections and of clones constituting them are the same. A \emph{block-tiled bottom} is the lowest block-tiled band among the block-tiled bands in its family. Note that the block-tiled band bordered by $\lambda$ inside a maximal block-tiled region $[\lambda, \mu]_{\tau}$ is a block-tiled bottom. We say that two blocks are \emph{adjacent} if they share a step on their boundaries. Note that, if two adjacent blocks in a block-tiled region $[\lambda, \mu]_{\tau}$ share two or more steps, they are clones of each other. A block in a block-tiled band can be adjacent to at most two other blocks. There is a one-to-one correspondence between the set of all the maximal block-tiled regions inside $[P, Q]$ and the set of all the block-tiled bottoms inside $[P, Q]$. For a given maximal block-tiled region $[\lambda, \mu]_{\tau}$ inside $[P, Q]$, one can remove all the clones except the lowest ones to get a block-tiled bottom $[\lambda, \nu]_{\tau}$ inside $[P, Q]$, where we identify $\lambda$ with the lower bounding path of the Young diagram of $[\lambda, \mu] (= \lambda \setminus \mu)$, and $\mu$ and $\nu$ with the upper bounding paths of the Young diagrams of $[\lambda, \mu]$ and $[\lambda, \nu]$, respectively. The inverse is obtained by inserting all the clones of each block in the block-tiled bottom $[\lambda, \nu]_{\tau}$ into the region $[\nu, \mu]$, where $\mu$ is the upper bounding path of the highest block-tiled band among the family members of $[\lambda, \nu]_{\tau}$. See Example~\ref{eg-regions-bottoms}. Hence, there is a one-to-one correspondence between the set of all the faces of $\mathcal{P}(M[P,Q])$ and the set of all the block-tiled bottoms inside $[P,Q]$ by Lemma~\ref{faces-regions}. \begin{example} Let $P = E^3N^3EN^2$ and $Q = N^3ENE^2NE$. The shaded block-tiled region shown in Figure~\ref{fig-regions-bottoms}(a) is a block-tiled bottom inside $[P,Q]$. Since the block-tiled region in Figure~\ref{fig-regions-bottoms}(b) is obtained by inserting a clone of the block labeled by $2$-$3$-$4$ into the shaded block-tiled bottom, it covers the block-tiled region in Figure~\ref{fig-regions-bottoms}(a). Hence, the block-tiled region in Figure~\ref{fig-regions-bottoms}(a) is not maximal.
If we also insert a clone of the single block labeled by $5$ as in Figure~\ref{fig-regions-bottoms}(c), the upper boundary of the region in Figure~\ref{fig-regions-bottoms}(c) is not a lattice path. Hence, the region in Figure~\ref{fig-regions-bottoms}(c) is not a skew shape, and not a block-tiled region. Therefore, the striped block-tiled band in Figure~\ref{fig-regions-bottoms}(b) is the highest family member of the shaded block-tiled bottom, and the block-tiled region in Figure~\ref{fig-regions-bottoms}(b) is the maximal block-tiled region corresponding to the shaded block-tiled bottom. \label{eg-regions-bottoms} \end{example} \begin{figure}[h] \begin{center} \begin{tabular}{ccccc} \includegraphics[width = 0.15\textwidth]{tiling6_1} & & \includegraphics[width = 0.15\textwidth]{tiling6_22} & & \includegraphics[width = 0.15\textwidth]{tiling6_3} \\ \footnotesize{(a) Not maximal} & & \footnotesize{(b) Maximal block-tiled region} & & \footnotesize{(c) Not a skew shape} \end{tabular} \end{center} \caption{Correspondence between block-tiled bottoms and maximal block-tiled regions} \label{fig-regions-bottoms} \end{figure} The next proposition describes the covering relation in the face poset of $\mathcal{P}(M[P,Q])$ in terms of block-tiled bottoms. The proof is straightforward and is omitted. \begin{proposition} \label{covering-relation} Codimension $1$ subfaces of an $n$-dimensional face corresponding to a block-tiled bottom $[\lambda, \nu]_\tau$ inside $[P,Q]$ are obtained as follows: \begin{enumerate} \item[(1)] (Direct sum at an outside corner) For an outside corner $(p,q)$ of $[P,Q]$, let $[P,Q]_{(p,q)}$ be the subregion of $[P,Q]$ obtained after the $(p,q)$-direct sum acts on $[P,Q]$. If there exists a family member $f$ of $[\lambda, \nu]_\tau$ such that $f \setminus [P,Q]_{(p,q)}$ consists of a single block containing a starting point $(p,*)$, an ending point $(*,q)$, and an outside corner $(p,q)$, then one can take the lowest one among such $f$'s and remove the single block while keeping the lattice path through $(p,*)$, $(p,q)$, and $(*,q)$. See Example~\ref{eg-direct-sum-upper}. \item[(2)] (Deletion of a block) Let $i$ be the smallest box label of a block $B_1$ in $[\lambda, \nu]_\tau$. If $B_1$ is adjacent to only one block $B_2$, whose labels are bigger than those of $B_1$, delete $B_1$ from the family members of $[\lambda, \nu]_\tau$ while keeping the perimeter of $B_1$ from $(0,0)$ to the starting point of the clone of $B_2$, so that new block-tiled bands with one fewer block than $[\lambda, \nu]_\tau$ are obtained. Note that the new bands fall into at most two families. One can get the block-tiled bottoms corresponding to $(n-1)$-dimensional subfaces by taking the lowest band in each family. If the perimeter begins with an East step (a North step), the obtained bottom corresponds to the $i$-deletion (respectively, $i$-contraction). See Figure~\ref{fig-deletion-contraction}(c) and~\ref{fig-deletion-contraction}(d) in Example~\ref{eg-deletion-contraction}. If $B_1$ is adjacent to only one block whose labels are smaller than those of $B_1$, similar operations can be done. If $B_1$ is not adjacent to any other block, one can either replace $\lambda$ by removing all the boxes of $B_1$, or replace $\nu$ by adding all the boxes of $B_1$. This corresponds to the $i$-contraction or the $i$-deletion, respectively. See Figure~\ref{fig-deletion-contraction}(e) and~\ref{fig-deletion-contraction}(f) in Example~\ref{eg-deletion-contraction}.
\item[(3)] (Merge of adjacent blocks) If two blocks $B_1$ and $B_2$ in $[\lambda, \nu]_\tau$ are adjacent along a step in this order, one can merge $B_1$ and the clone of~$B_2$ in the family members of $[\lambda, \nu]_\tau$ by deleting the step between $B_1$ and the clone of $B_2$, so that new block-tiled bands with one block fewer than $[\lambda, \nu]_\tau$ are obtained. Note that the new bands also fall into at most two types of families. One can obtain the block-tiled bottoms corresponding to $(n-1)$-dimensional subfaces by taking the lowest band in each family. If the deleted step is a North step (an East step), this corresponds to the $i$-deletion (respectively, the $i$-contraction). See Figure~\ref{fig-deletion-contraction}(g) and~\ref{fig-deletion-contraction}(h) in Example~\ref{eg-deletion-contraction}. \end{enumerate} \label{prop-cover-relation} \end{proposition} \begin{example} For the region $[P,Q]$ where $P = E^5N^2E^2NE^2N^4$ and $Q = N^4E^4N^3E^5$, consider the outside corner $(4,4)$ of $Q$ and the block-tiled bottom shown in Figure~\ref{fig-direct-sum-upper}(a). Note that the maximal block-tiled region corresponding to this block-tiled bottom, shown in Figure~\ref{fig-direct-sum-upper}(b), consists of all the family members of the given bottom. The shaded block-tiled band shown in Figure~\ref{fig-direct-sum-upper}(c) is the lowest family member satisfying the conditions in case (1) of Proposition~\ref{prop-cover-relation}. One can obtain the new block-tiled bottom from this lowest family member by removing the block labeled $8$, as shown in Figure~\ref{fig-direct-sum-upper}(d). \label{eg-direct-sum-upper} \end{example} \begin{figure}[h] \begin{center} \begin{tabular}{ccc} \includegraphics[width = 0.3\textwidth]{tiling_ds10}& \hspace{1cm} & \includegraphics[width = 0.3\textwidth]{tiling_ds20}\\ \footnotesize{(a) Block-tiled bottom} & & \footnotesize{(b) Maximal block-tiled region}\\\\ \includegraphics[width = 0.3\textwidth]{tiling_ds30}& \hspace{1cm} & \includegraphics[width = 0.3\textwidth]{tiling_ds40}\\ \footnotesize{(c) Lowest family member}& & \hspace{1cm}\footnotesize{(d) Block-tiled bottom} \qquad \quad \quad\\ \hspace{0.7cm}\footnotesize{as in Proposition~\ref{prop-cover-relation}(1)}& & \hspace{1cm}\footnotesize{after $(4,4)$-direct sum} \end{tabular} \end{center} \caption{Block-tiled bottoms corresponding to faces of $\mathcal{P}(M[P,Q])$} \label{fig-direct-sum-upper} \end{figure} \begin{example} Let $P = E^2NE^3N^4EN$ and $Q = N^3EN^2E^2NE^3$. Figure~\ref{fig-deletion-contraction}(a) shows the maximal block-tiled region for a $7$-dimensional face of $\mathcal{P}(M[P,Q])$. The maximal block-tiled region for the face obtained from this face by the $(5,5)$-direct sum is shown in Figure~\ref{fig-deletion-contraction}(b). Figures~\ref{fig-deletion-contraction}(c)-(h) show codimension $1$ faces of the $6$-dimensional face corresponding to the maximal block-tiled region shown in Figure~\ref{fig-deletion-contraction}(b). They illustrate cases (2) and (3) of Proposition~\ref{covering-relation}.
\label{eg-deletion-contraction} \end{example} \begin{figure}[h] \begin{center} \begin{tabular}{ccccccc} \includegraphics[width = 0.13\textwidth]{new_tiling1}& \hspace{0.05cm} & \includegraphics[width = 0.13\textwidth]{new_tiling2}& \hspace{0.05cm} & \includegraphics[width = 0.13\textwidth]{new_tiling6-1}& \hspace{0.05cm} & \includegraphics[width = 0.13\textwidth]{new_tiling6-2}\\ \footnotesize{(a) Maximal block-tiled region} & & \footnotesize{(b) $(5,5)$-direct sum} & & \footnotesize{(c) $1$-deletion} & & \footnotesize{(d) $1$-contraction}\\ \includegraphics[width = 0.13\textwidth]{new_tiling5-2}& & \includegraphics[width = 0.13\textwidth]{new_tiling5-1}& & \includegraphics[width = 0.13\textwidth]{new_tiling3-1}& & \includegraphics[width = 0.13\textwidth]{new_tiling3-2}\\ \footnotesize{(e) $11$-contraction} & & \footnotesize{(f) $11$-deletion} & & \footnotesize{(g) $2$-deletion} & & \footnotesize{(h) $2$-contraction}\\ \end{tabular} \end{center} \caption{Direct sum/Deletion/Contraction} \label{fig-deletion-contraction} \end{figure} The following corollary follows from Proposition~\ref{covering-relation}. \begin{corollary} \label{faces-using-border-strips} All the $n$-dimensional faces of the polytope $\mathcal{P}(M[P,Q])$ correspond to the block-tiled bottoms with $n$ blocks inside the region $[P,Q]$. \end{corollary} \begin{proof} By Proposition~\ref{dimension-of-matroid-polytope}, the dimension of a lattice path matroid polytope $\mathcal{P}(M[P,Q])$ with $m+r-1$ single-box blocks in the block-tiled bottom of $[P,Q]$ is $m+r-1$. The result follows since each covering relation in the face poset of $\mathcal{P}(M[P,Q])$ reduces the number of blocks in the block-tiled bottom by one. \end{proof} In the following proposition, we give a more explicit proof for a special case of Corollary~\ref{faces-using-border-strips}. \begin{proposition} All the edges of the polytope $\mathcal{P}(M[P,Q])$ correspond to the block-tiled bottoms with $1$ block inside the region $[P,Q]$. \label{1-block} \end{proposition} \begin{proof} Take an edge $e_{B} e_{B'}$ of the polytope $\mathcal{P}(M[P,Q])$ where \begin{align*} e_{B} = e_{b_1} + \cdots + e_{b_r} &= (x_1, x_2, \ldots, x_n) \in \Delta_n ~\text{and}\\ e_{B'} = e_{b'_1} + \cdots + e_{b'_r} &= (y_1, y_2, \ldots, y_n) \in \Delta_n \end{align*} for the $(n-1)$-simplex $\Delta_n = \{ (x_1, \dots, x_n) \in {\mathbb R}^n : x_1 \ge 0, \dots, x_n \ge 0, x_1 + \cdots + x_n = r \}$. By Theorem~\ref{thm-matroid-polytopes}, without loss of generality, there exist $j$ and $k$ ($1 \leq j < k \leq n$) such that $$\left( \begin{matrix} x_j&x_k\\ y_j&y_k \end{matrix} \right) = \left( \begin{matrix} 1&0\\ 0&1 \end{matrix} \right),$$ and $x_i = y_i$ for every other coordinate $i$. Because the associated lattice path $P(B)$ is identical to $P(B')$ before the $j$th step and $\binom{x_j}{y_j} = \binom{1}{0}$, we have the starting point of a region $R_1$ at the $j$th step. Similarly, we have the ending point of a region $R_2$ at the $k$th step. Since $P(B)$ and $P(B')$ also have identical step sequences between the $j$th step and the $k$th step after being separated by exactly one step at the $j$th step, $P(B)$ and $P(B')$ do not intersect between the $j$th step and the $k$th step, and no $2 \times 2$ square can be contained between $P(B)$ and $P(B')$. Hence, $R_1$ and $R_2$ are the same skew-shaped region without $2 \times 2$ squares. Therefore, there is a unique block generated by $P(B)$ and $P(B')$ inside $[P,Q]$.
\end{proof} Consider the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ and a lattice path~$L$ inside the region $[P,Q]$. We define $L'= L'(P,L)$ as the lattice path such that $L'$ passes through all the intersection points of $P$ and $L$ and, for each maximal connected region between $P$ and $L$ with the starting point $(x,y)$ and the ending point $(x+a,y+b)$, $L'$ has the sequence $E^aN^b$ from $(x,y)$ to $(x+a,y+b)$. \begin{corollary} For the lattice path matroid polytope $\mathcal{P}(M[P,Q])$, the number of edges of this polytope is equal to the sum of the areas between $L$ and $L'$, where the sum is over all lattice paths $L$ from $(0,0)$ to $(m,r)$ inside the region $[P,Q]$. \label{cor-edge-area} \end{corollary} \begin{proof} For a lattice path $L$ from $(0,0)$ to $(m,r)$ inside the region $[P,Q]$ and a unit box $U$ with the starting point $(u_1,u_2)$ inside the region $[L',L]$, one can construct the block-tiled bottom $[\lambda, L]_\tau$ inside $[P,Q]$ such that $[\lambda, L]_\tau$ has only one block with the starting point $(*,u_2)$ and the ending point $(u_1+1,*)$ on $L$, and $\lambda$ is identical to $L$ before $(*,u_2)$ and after $(u_1+1,*)$. If a block-tiled bottom $[\lambda, \nu]_\tau$ with $1$ block inside $[P,Q]$ is given, one can take the lattice path $\nu$ and the unit box with the starting point $(u_1-1, u_2)$, where $u_1$ is the $x$-coordinate of the ending point and $u_2$ is the $y$-coordinate of the starting point of the block. Hence, there is a one-to-one correspondence between the block-tiled bottoms with $1$ block inside $[P,Q]$ and the pairs consisting of a lattice path $L$ from $(0,0)$ to $(m,r)$ inside $[P,Q]$ and a unit box inside $[L',L]$. By Proposition~\ref{1-block}, the number of edges of the polytope $\mathcal{P}(M[P,Q])$ is equal to the sum of the areas between $L$ and $L'$, with the summation over all lattice paths $L$ from $(0,0)$ to $(m,r)$ inside the region $[P,Q]$. \end{proof} Corollary~\ref{cor-edge-area} is a nice generalization of the following result~\cite[Lemma 3.6]{Bidkhori}. Note that $L'=P$ in the case where $P$ is the lattice path $E^mN^r$. \begin{corollary} \label{1-box} Consider the lattice path matroid polytope $\mathcal{P}(M[E^mN^r,Q])$ where $Q$ is a lattice path from $(0,0)$ to $(m,r)$. The number of edges of this polytope is equal to the sum of the areas below the path $L$, with the summation over all lattice paths $L$ from $(0,0)$ to $(m,r)$ inside the region $[E^mN^r,Q]$. \end{corollary}
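Corollary~\ref{cor-edge-area} is easy to test by brute force on small regions. The following is a minimal Python sketch (an added illustration, not part of the original argument); it assumes two standard facts: that the bases of $M[P,Q]$ are exactly the sets of North-step positions of the lattice paths lying between $P$ and $Q$, and that two vertices of a matroid base polytope are adjacent if and only if the corresponding bases differ by a single exchange. The region $[P,Q]$ used at the end is an arbitrary small example.
\begin{verbatim}
from itertools import combinations

def heights(path):                    # y-coordinate after each step
    ys, y = [], 0
    for s in path:
        y += (s == 'N')
        ys.append(y)
    return ys

def points(path):                     # lattice points visited by the path
    pts, x, y = [(0, 0)], 0, 0
    for s in path:
        x, y = (x + 1, y) if s == 'E' else (x, y + 1)
        pts.append((x, y))
    return pts

def between(P, L, Q):                 # is L weakly between lower P and upper Q?
    return all(p <= l <= q
               for p, l, q in zip(heights(P), heights(L), heights(Q)))

def area_under(path):                 # unit squares below the path
    return sum(y for s, y in zip(path, heights(path)) if s == 'E')

def L_prime(P, L):                    # E^a N^b across each region between P, L
    pp, pl = points(P), points(L)
    common = [i for i in range(len(pl)) if pp[i] == pl[i]]
    return ''.join('E' * (pl[j][0] - pl[i][0]) + 'N' * (pl[j][1] - pl[i][1])
                   for i, j in zip(common, common[1:]))

def paths(m, r):                      # all lattice paths from (0,0) to (m,r)
    for Ns in combinations(range(m + r), r):
        yield ''.join('N' if i in Ns else 'E' for i in range(m + r))

P, Q, m, r = 'EENNEN', 'NENNEE', 3, 3   # an arbitrary small example region
L_list = [L for L in paths(m, r) if between(P, L, Q)]
bases = [frozenset(i for i, s in enumerate(L) if s == 'N') for L in L_list]
edges = sum(1 for B, C in combinations(bases, 2) if len(B ^ C) == 2)
areas = sum(area_under(L) - area_under(L_prime(P, L)) for L in L_list)
print(edges, areas)                   # the two counts should agree
\end{verbatim}
For the tiny region $[EN,NE]$, for example, both counts are $1$, matching the fact that $\mathcal{P}(M[EN,NE])$ is a segment.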
\section{Future works} \label{sec-future-works} The $\mathbf{c}\mathbf{d}$-index $\Psi (\mathcal{P})$ of a polytope $\mathcal{P}$, a polynomial in the noncommutative variables $\mathbf{c}$ and $\mathbf{d}$, is a very compact encoding of the flag numbers of $\mathcal{P}$~\cite{BayerKlapper}. The third author shows how the $\mathbf{c}\mathbf{d}$-index of a polytope can be expressed when the polytope is cut by a hyperplane~\cite{SangwookFlag}. For lattice path matroids of rank $2$, the following is obtained from~\cite{SangwookFlag}. \begin{proposition} For lattice paths $P=E^{\alpha+\beta} N E^{\gamma} N$ and $Q=NE^{\alpha} NE^{\beta+\gamma}$ with $\alpha+\beta+\gamma = m$, the $\mathbf{c}\mathbf{d}$-index of the lattice path matroid polytope $\mathcal{P}(M[P,Q])$ is {\footnotesize \begin{multline*} \Psi(\mathcal{P}(M[P,Q])) = \sum_{i = \alpha+1}^{\alpha+\beta} \Psi(\mathcal{P}(M_i)) - \left( \sum_{i=\alpha+2}^{\alpha+\beta} \Psi(\Delta_i \times \Delta_{m-i+2}) \right) \cdot \mathbf{c} \\ - \hspace{-3mm} \sum_{(0,0) < (i,j) \le (\alpha, \gamma)} \hspace{-2mm} \binom{\alpha+1}{i} \binom{\gamma+1}{j} \hspace{-1mm} \left( \sum_{k=2}^{\beta-2} \Psi(\Delta_{\alpha-i+k} \times \Delta_{\beta-j+\gamma-k+2}) \right) \cdot \mathbf{d} \cdot \Psi(\Delta_{i+j}), \end{multline*} } where $M_i = M[E^i N E^{m-i} N, N E^{i-1} N E^{m-i+1}]$ and $\Delta_j$ is the $(j-1)$-dimensional simplex. \end{proposition} It is known that a hyperplane split of a lattice path matroid polytope $\mathcal{P}(M[P,Q])$ can occur when $[P,Q]$ contains a $2 \times 2$ square~\cite{Bidkhori}. Using the descriptions of the faces of a lattice path matroid polytope given in Sections~\ref{sec-border strips} and \ref{sec-skew-shape}, we would like to understand the $\mathbf{c}\mathbf{d}$-index of a lattice path matroid polytope of rank greater than $2$. \bibliographystyle{plain}
{ "timestamp": "2017-01-03T02:06:37", "yymm": "1701", "arxiv_id": "1701.00362", "language": "en", "url": "https://arxiv.org/abs/1701.00362", "abstract": "A lattice path matroid is a transversal matroid corresponding to a pair of lattice paths on the plane. A matroid base polytope is the polytope whose vertices are the incidence vectors of the bases of the given matroid. In this paper, we study facial structures of matroid base polytopes corresponding to lattice path matroids.", "subjects": "Combinatorics (math.CO)", "title": "Facial structures of lattice path matroid polytopes", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713861502773, "lm_q2_score": 0.7185943925708562, "lm_q1q2_score": 0.7080104932481041 }
https://arxiv.org/abs/0705.1384
Matroid Pathwidth and Code Trellis Complexity
We relate the notion of matroid pathwidth to the minimum trellis state-complexity (which we term trellis-width) of a linear code, and to the pathwidth of a graph. By reducing from the problem of computing the pathwidth of a graph, we show that the problem of determining the pathwidth of a representable matroid is NP-hard. Consequently, the problem of computing the trellis-width of a linear code is also NP-hard. For a finite field ${\mathbb F}$, we also consider the class of ${\mathbb F}$-representable matroids of pathwidth at most $w$, and correspondingly, the family of linear codes over ${\mathbb F}$ with trellis-width at most $w$. These are easily seen to be minor-closed. Since these matroids (and codes) have branchwidth at most $w$, a result of Geelen and Whittle shows that such matroids (and the corresponding codes) are characterized by finitely many excluded minors. We provide the complete list of excluded minors for $w=1$, and give a partial list for $w=2$.
\section{Introduction\label{intro_section}} The notion of pathwidth of a matroid has received some recent attention in the matroid theory literature \cite{GGW06}, \cite{hall07}. This notion has long been studied in the coding theory literature, where it is used as a measure of trellis complexity of a linear code \cite{muder}, \cite{For94}, \cite{vardy}. However, there appears to be no standard coding-theoretic nomenclature for this notion. It has been called the state complexity of a code in \cite{horn}, but the use of this term there conflicts slightly with its use in \cite{vardy}. So, to avoid ambiguity, we will give it a new name here --- \emph{trellis-width} --- which acknowledges its roots in trellis complexity. The relationship between matroid pathwidth and code trellis-width can be made precise as follows. To an arbitrary linear code ${\mathcal C}$ over a finite field ${\mathbb F}$, we associate a matroid, $M({\mathcal C})$, which is simply the vector matroid, over ${\mathbb F}$, of any generator matrix of the code. Recall that in coding theory, a matrix $G$ is called a generator matrix of a code ${\mathcal C}$ if ${\mathcal C}$ is the rowspace of $G$. Consequently, the matroid $M({\mathcal C})$ does not depend on the actual choice of the generator matrix, and so is a characteristic of the code ${\mathcal C}$. The code ${\mathcal C}$ may in fact be viewed as a representation over ${\mathbb F}$ of the matroid $M({\mathcal C})$. The trellis-width of ${\mathcal C}$ is simply the pathwidth of $M({\mathcal C})$; we will give the precise definition of matroid pathwidth in Section~\ref{pw_section}. It has repeatedly been conjectured in the coding theory literature that computing the trellis-width of a linear code over a fixed finite field ${\mathbb F}$ is NP-hard \cite{horn}, \cite{jain}, \cite[Section~5]{vardy}. This would imply that the corresponding decision problem (over a fixed finite field ${\mathbb F}$) --- given a generator matrix for a code ${\mathcal C}$ over ${\mathbb F}$, and a positive integer $w$, deciding whether or not the trellis-width of ${\mathcal C}$ is at most $w$ --- is NP-complete. This decision problem has been given various names --- ``Maximum Partition Rank Permutation'' \cite{horn}, ``Maximum Width'' \cite{jain} and ``Trellis State-Complexity'' \cite{vardy}. An equivalent statement of the trellis-width conjecture above is the following: given a matrix $A$ over ${\mathbb F}$, the problem of computing the pathwidth of the vector matroid $M[A]$ is NP-hard. In this paper, we prove the above statement for any fixed field ${\mathbb F}$, not necessarily finite. Our proof is by reduction from the problem of computing the pathwidth of a graph, which is known to be NP-hard \cite{arnborg}, \cite{bod93}. Thus, in particular, computing the trellis-width of a linear code over ${\mathbb F}$ is NP-hard, which settles the aforementioned coding-theoretic conjecture. The situation is rather different if we weaken the trellis-width decision problem above by \emph{not} considering the integer $w$ to be a part of the input to the problem. In other words, for a fixed finite field ${\mathbb F}$, and a \emph{fixed} integer $w > 0$, consider the following problem: \\[-6pt] \begin{quote} given a length-$n$ linear code ${\mathcal C}$ over ${\mathbb F}$, decide whether or not ${\mathcal C}$ has trellis-width at most $w$.
\\[-6pt] \end{quote} The equivalent decision problem for matroid pathwidth would be to decide (for a fixed finite field ${\mathbb F}$ and integer $w > 0$) whether or not a given ${\mathbb F}$-representable matroid has pathwidth at most $w$. Based on results from the structure theory of matroids \cite{GGW}, we strongly believe that these problems are solvable in polynomial time. In the process of studying matroids of bounded pathwidth, we observe that for any finite field ${\mathbb F}_q = GF(q)$ and integer $w > 0$, the class, ${\mathcal P}_{w,q}$, of ${\mathbb F}_q$-representable matroids having pathwidth at most $w$, is minor-closed and has finitely many excluded minors. As a relatively easy exercise, we show that the list of excluded minors for ${\mathcal P}_{1,q}$ consists of\footnote{In this paper, we take the connectivity function of a matroid $M$ with ground set $E$ and rank function $r$ to be $\lambda_M(X) = r(X) + r(E - X) - r(E)$ for $X \subset E$. Therefore, what we consider to be matroids of pathwidth one would be matroids of pathwidth two in \cite{GGW06}, \cite{hall07}.} $U_{2,4}$, $M(K_4)$, $M(K_{2,3})$ and $M^*(K_{2,3})$. Unfortunately, the problem of finding excluded-minor characterizations of ${\mathcal P}_{w,q}$ for $w > 1$ becomes difficult very quickly. We give a list of excluded minors for ${\mathcal P}_{2,q}$, which is probably not complete. The rest of the paper is organized as follows. In Section~\ref{prelim_section}, we lay down the definitions and notation used in the paper. In Section~\ref{NP_section}, we prove that, for any fixed field ${\mathbb F}$, the problem of computing the pathwidth of an ${\mathbb F}$-representable matroid is NP-hard, and therefore, so is the problem of computing the trellis-width of a linear code over ${\mathbb F}$. Finally, in Section~\ref{bounded_pw_section}, we consider the class of matroids ${\mathcal P}_{w,q}$. We give the complete lists of excluded minors for ${\mathcal P}_{1,q}$ and the corresponding family of linear codes over ${\mathbb F}_q$ having trellis-width at most one. We also give a partial list of excluded minors for ${\mathcal P}_{2,q}$. \section{Preliminaries\label{prelim_section}} We assume familiarity with the basic definitions and notation of matroid theory, as expounded by Oxley \cite{oxley}. The main results and proofs in this paper will be given in the language of matroid theory, rather than that of coding theory, as it is easier to do so. However, as our results may be of some interest to coding theorists, we make an effort in this section to provide the vocabulary necessary to translate the language of matroid theory into that of coding theory. Definitions of coding-theoretic terms not explicitly defined here can be found in any text on coding theory (\emph{e.g.}, \cite{sloane}). \subsection{Codes and their Associated Matroids\label{codes_matroids_section}} Let ${\mathcal C}$ be a linear code of length $n$ over the finite field ${\mathbb F}_q = GF(q)$. The dimension of ${\mathcal C}$ is denoted by $\dim({\mathcal C})$, and the coordinates of ${\mathcal C}$ are indexed by the integers from the set $[n] = \{1,2,\ldots,n\}$ as usual. We will also associate with the coordinates of ${\mathcal C}$ a set, $E({\mathcal C})$, of \emph{coordinate labels}, so that there is a bijection $\alpha_{\mathcal C}: [n] \rightarrow E({\mathcal C})$. The \emph{label sequence} of ${\mathcal C}$ is defined to be the $n$-tuple $(\alpha_1, \alpha_2, \ldots, \alpha_n)$, where $\alpha_i = \alpha_{\mathcal C}(i)$. 
For notational convenience, we will simply let $\alpha_{\mathcal C}$ denote the label sequence of ${\mathcal C}$. Unless specified otherwise (as in the case of code minors and duals below), we will, by default, set $E({\mathcal C})$ to be $[n]$, and $\alpha_{\mathcal C}$ to be the $n$-tuple $(1,2,3,\ldots,n)$. In such a case, the label of each coordinate is the same as its index. Given a code ${\mathcal C}$ over ${\mathbb F}_q$, specified by a generator matrix $G$, we define its \emph{associated matroid} $M({\mathcal C})$ to be the vector matroid, $M[G]$, of $G$. We identify the ground set of $M({\mathcal C})$ with $E({\mathcal C})$. Note that if $G$ and $G'$ are distinct generator matrices of the code ${\mathcal C}$, then $M[G] = M[G']$, and hence, $M({\mathcal C})$ is independent of the choice of generator matrix. Thus, any generator matrix of ${\mathcal C}$ is an ${\mathbb F}_q$-representation of $M({\mathcal C})$. Conversely, if $M$ is an ${\mathbb F}_q$-representable matroid, and $G$ is an ${\mathbb F}_q$-representation of $M$, then $M = M({\mathcal C})$ for the code ${\mathcal C}$ generated by $G$. Thus, each ${\mathbb F}_q$-representable matroid is associated with some code ${\mathcal C}$ over ${\mathbb F}_q$. For any code ${\mathcal C}$, the dual code, ${\mathcal C}^\perp$, is specified to have the same label sequence as ${\mathcal C}$, \emph{i.e.}, $\alpha_{{\mathcal C}^\perp} = \alpha_{\mathcal C}$. It is a particularly nice fact \cite[Theorem~2.2.8]{oxley} that the matroids associated with ${\mathcal C}$ and ${\mathcal C}^\perp$ are dual to each other, \emph{i.e.}, $M({\mathcal C}^\perp) = (M({\mathcal C}))^* \stackrel{\text{\footnotesize def}}{=} M^*({\mathcal C})$. Given $J \subset E({\mathcal C})$, we will denote by ${\mathcal C} \setminus\! J$ (resp.\ ${\mathcal C} \shorten J$) the code obtained from ${\mathcal C}$ by puncturing (resp.\ shortening at) those coordinates having labels in $J$. Thus, ${\mathcal C} \shorten J = ({\mathcal C}^\perp \setminus\! J)^\perp$. A \emph{minor} of ${\mathcal C}$ is a code of the form ${\mathcal C} / X \setminus\! Y$ for disjoint subsets $X,Y \subset E({\mathcal C})$. A minor of ${\mathcal C}$ that is not ${\mathcal C}$ itself is called a \emph{proper minor} of ${\mathcal C}$. The coordinates of a minor of ${\mathcal C}$ retain their labels from $E({\mathcal C})$. More precisely, we set $E({\mathcal C} / X \setminus\! Y) = E({\mathcal C}) - (X \cup Y)$, and take the label sequence of ${\mathcal C} / X \setminus\! Y$ to be the $(n-|X \cup Y|)$-tuple obtained from $\alpha_{\mathcal C} = (\alpha_1,\alpha_2,\ldots,\alpha_n)$ by simply removing those entries that are in $X \cup Y$. The operations of puncturing and shortening correspond to the matroid-theoretic operations of deletion and contraction, respectively: for $J \subset E({\mathcal C})$, $$ M({\mathcal C} \setminus\! J) = M({\mathcal C}) \punc J \ \ \ \text{and} \ \ \ M({\mathcal C} \shorten J) = M({\mathcal C}) \shorten J. $$ We will find it convenient to use ${\mathcal C}|_J$ to denote the restriction of ${\mathcal C}$ to the coordinates with labels in $J$, \emph{i.e.}, ${\mathcal C}|_J = {\mathcal C} \setminus\! J^c$, where $J^c$ denotes the set difference $E({\mathcal C}) - J$. This allows us to express the rank function, $r:\ 2^{E({\mathcal C})} \rightarrow {\mathbb Z}$, of the matroid $M({\mathcal C})$ as follows: for $J \subset E({\mathcal C})$, $r(J) = \dim({\mathcal C}|_J)$.
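As a small worked illustration (added here, not from the original): let ${\mathcal C}$ be the binary code with generator matrix $$ G = \left( \begin{matrix} 1&0&1\\ 0&1&1 \end{matrix} \right), $$ so that ${\mathcal C} = \{000, 101, 011, 110\}$. For $J = \{3\}$ we get ${\mathcal C}|_J = \{0,1\}$, and hence $r(\{3\}) = \dim({\mathcal C}|_{\{3\}}) = 1$, which is the rank over ${\mathbb F}_2$ of the third column of $G$; for $J = \{1,3\}$ we get ${\mathcal C}|_J = {\mathbb F}_2^2$, and hence $r(\{1,3\}) = 2$, the rank of the first and third columns. In general, $r(J)$ is simply the rank of the corresponding set of columns of any generator matrix of ${\mathcal C}$.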
Two length-$n$ linear codes ${\mathcal C}$ and ${\mathcal C}'$ over ${\mathbb F}_q$ are defined to be \emph{equivalent} if there is an $n \times n$ permutation matrix $\Pi$ and an invertible $n \times n$ diagonal matrix $\Delta$ such that ${\mathcal C}'$ is the image of ${\mathcal C}$ under the vector space isomorphism $\phi: {\mathbb F}_q^n \rightarrow {\mathbb F}_q^n$ defined by $\phi({\mathbf x}) = (\Pi \Delta) {\mathbf x}$. Informally, ${\mathcal C}'$ is equivalent to ${\mathcal C}$ if ${\mathcal C}'$ can be obtained by first multiplying the coordinates of ${\mathcal C}$ by some nonzero elements of ${\mathbb F}_q$, and then applying a coordinate permutation. In such a case, we write ${\mathcal C} \equiv {\mathcal C}'$. The equivalence class of codes equivalent to ${\mathcal C}$ will be denoted by $[{\mathcal C}]$. It is clear that if codes ${\mathcal C}$ and ${\mathcal C}'$ are equivalent, then their associated matroids are isomorphic. We remark that code equivalence has been defined above according to the coding-theoretic convention. Note that, under this definition, if ${\mathcal C}'$ is obtained by applying an automorphism of the field ${\mathbb F}_q$ to ${\mathcal C}$, then ${\mathcal C}$ and ${\mathcal C}'$ would in general be considered to be inequivalent. A family, ${\mathfrak C}$, of codes over ${\mathbb F}_q$ is said to be \emph{minor-closed} if, for each ${\mathcal C} \in {\mathfrak C}$, any code equivalent to a minor of ${\mathcal C}$ is also in ${\mathfrak C}$. A code, ${\mathcal D}$, over ${\mathbb F}_q$ is said to be an \emph{excluded minor} for a minor-closed family ${\mathfrak C}$ if ${\mathcal D} \notin {\mathfrak C}$, but every proper minor of ${\mathcal D}$ is in ${\mathfrak C}$. It is easily verified that if ${\mathfrak C}$ is a minor-closed family, then a code ${\mathcal C}$ is in ${\mathfrak C}$ iff no minor of ${\mathcal C}$ is an excluded minor for ${\mathfrak C}$. Given a collection, ${\mathcal M}$, of ${\mathbb F}_q$-representable matroids, define the code family \begin{equation} {\mathfrak C}({\mathcal M}) = \{{\mathcal C}:\ {\mathcal C} \text{ is a linear code over ${\mathbb F}_q$ such that } M({\mathcal C}) \in {\mathcal M} \}. \label{CM_def} \end{equation} Evidently, if ${\mathcal M}$ is a minor-closed class of ${\mathbb F}_q$-representable matroids, then ${\mathfrak C}({\mathcal M})$ is also minor-closed. In this case, if ${\mathcal F}$ is the set of all excluded minors for ${\mathcal M}$, then ${\mathfrak C}({\mathcal F})$ is the set of all excluded minors for ${\mathfrak C}({\mathcal M})$. \subsection{Pathwidth, Trellis-width and Branchwidth\label{pw_section}} The definitions in this section rely on the notion of the connectivity function of a matroid. Let $M$ be a matroid with ground set $E(M)$ and rank function $r_M$. Its \emph{connectivity function}, $\lambda_M$, is defined by $\lambda_M(X) = r_M(X) + r_M(E(M)-X) - r_M(E(M))$ for $X \subset E(M)$. Note that $\lambda_M(X) = \lambda_M(E(M) - X)$, and $\lambda_M(E(M)) = \lambda_M(\emptyset) = 0$. It should be pointed out that in the matroid theory literature, the prevalent definition of the connectivity function adds a `+1' to the expression we have given. We have chosen not to follow suit in order that we can give a minimum-fuss definition of trellis-width below. The connectivity function is non-negative, \emph{i.e.}, $\lambda_M(X) \geq 0$ for all $X \subset E(M)$, and submodular, \emph{i.e.}, $\lambda_M(X \cup Y) + \lambda_M(X \cap Y) \leq \lambda_M(X) + \lambda_M(Y)$ for all $X,Y \subset E(M)$.
It is monotone under the action of taking minors --- if $N$ is a minor of $M$, then for all $X \subset E(N)$, $\lambda_N(X) \leq \lambda_M(X)$. Finally, the connectivity function of a matroid is identical to that of its dual, \emph{i.e.}, $\lambda_M(X) = \lambda_{M^*}(X)$ for all $X \subset E(M)$. Given an ordering $(e_1,e_2,\ldots,e_n)$ of the elements of $M$, define the \emph{width} of the ordering to be $w_M(e_1,e_2,\ldots,e_n) = \max_{i \in [n]} \lambda_M(e_1,e_2,\ldots,e_i)$. (For simplicity of notation, we use $\lambda_M(e_1,e_2,\ldots,e_i)$ instead of $\lambda_M(\{e_1,e_2,\ldots,e_i\})$.) The \emph{pathwidth} of $M$ is defined as $\text{pw}(M) = \min w_M(e_1,e_2,\ldots,e_n)$, the minimum being taken over all orderings $(e_1,e_2,\ldots,e_n)$ of $E(M)$. An ordering $(e_1,e_2,\ldots,e_n)$ of $E(M)$ such that $w_M(e_1,e_2,\ldots,e_n) = \text{pw}(M)$ is called an \emph{optimal} ordering. Since $\lambda_M \equiv \lambda_{M^*}$, it is clear that $\text{pw}(M) = \text{pw}(M^*)$. Another useful and easily verifiable property of pathwidth is that, for matroids $M_1$ and $M_2$, the pathwidth of their direct sum, $\text{pw}(M_1 \oplus M_2)$, equals $\max\{\text{pw}(M_1),\text{pw}(M_2)\}$. The property of pathwidth most important for our purposes is stated in the following lemma. \\[-6pt] \begin{lemma} If $N$ is a minor of $M$, then $\text{pw}(N) \leq \text{pw}(M)$. \label{minor_pathwidth_lemma} \end{lemma} \begin{proof} Let $(e_1,\ldots,e_n)$ be an optimal ordering of $E(M)$. It is enough to show the result in the case when $N = M \punc e_i$ or $N = M \shorten e_i$ for some $i \in [n]$. In such a case, consider the ordering $(e_1,\ldots,e_{i-1},e_{i+1},\ldots,e_n)$ of $E(N)$. For $j \in \{1,\ldots,i-1\}$, we have $\lambda_N(e_1,\ldots,e_j) \leq \lambda_M(e_1,\ldots,e_j)$. For $j \in \{i+1,\ldots,n\}$, we have \begin{eqnarray*} \lambda_N(e_1,\ldots,e_{i-1},e_{i+1},\ldots,e_j) &=& \lambda_N(e_{j+1},\ldots,e_n) \\ &\leq& \lambda_M(e_{j+1},\ldots,e_n) \ = \ \lambda_M(e_1,\ldots,e_{j}). \end{eqnarray*} It follows that $w_N(e_1,\ldots,e_{i-1},e_{i+1},\ldots,e_n) \leq w_M(e_1,\ldots,e_n) = \text{pw}(M)$, and hence, $\text{pw}(N) \leq \text{pw}(M)$. \end{proof} \mbox{}\\[-6pt] The \emph{trellis-width} of a linear code ${\mathcal C}$ over ${\mathbb F}_q$ is defined to be $\text{tw}({\mathcal C}) = \text{pw}(M({\mathcal C}))$. For a discussion of the motivation and practical implications of this definition, we refer the reader to \cite[Section~5]{vardy}. The pathwidth of a matroid is an upper bound on its branchwidth, a better-known measure of matroid complexity. The branchwidth of a matroid is defined via cubic trees. A \emph{cubic tree} is a tree in which the degree of any vertex is either one or three. The vertices of degree one are called \emph{leaves}. A \emph{branch-decomposition} of a matroid $M$ is a cubic tree, $T$, with $|E(M)|$ leaves, labelled in a one-to-one fashion by the elements of $M$. Each edge $e$ of such a branch-decomposition $T$ connects two subtrees of $T$, so $T \punc e$ has two components. We say that edge $e$ \emph{displays} a subset $X \subset E(M)$ if $X$ is the set of labels of leaves of one of the components of $T \punc e$. The \emph{width} of an edge $e$ of $T$ is defined to be $\lambda_M(X)$, where $X$ is one of the label sets displayed by $e$. The \emph{width} of $T$ is the maximum among the widths of its edges.
\begin{figure}[t] \centering{\epsfig{file=tree3.eps, width=6.5cm}} \caption{A branch-decomposition of $M$ having width equal to $w_M(e_1,e_2,\ldots,e_n)$.} \label{branch_decomp_fig} \end{figure} The \emph{branchwidth} of $M$ is the minimum among the widths of all its branch-decompositions. Note that if $T$ is the branch-decomposition of $M$ shown in Figure~\ref{branch_decomp_fig}, then the width of $T$ is precisely $w_M(e_1,e_2,\ldots,e_n)$. Indeed, the width of any edge of $T$ is either $\lambda_M(e_i)$ or $\lambda_M(e_1,\ldots,e_i)$ for some $i \in [n]$. Now, for any $i \in [n]$, \begin{eqnarray*} \lambda_M(e_1,\ldots,e_{i-1}) + \lambda_M(e_1,\ldots,e_i) &=& \lambda_M(e_i,e_{i+1},\ldots,e_n) + \lambda_M(e_1,\ldots,e_i) \\ &\geq& \lambda_M(E(M)) + \lambda_M(e_i) \ = \ \lambda_M(e_i), \end{eqnarray*} the inequality above arising from the submodularity of $\lambda_M$. Since $\lambda_M(e_i) \in \{0,1\}$, either $\lambda_M(e_1,\ldots,e_{i-1})$ or $\lambda_M(e_1,\ldots,e_i)$ is at least as large as $\lambda_M(e_i)$. Therefore, the width of $T$ is given by $\max_{i \in [n]} \lambda_M(e_1,\ldots,e_i) = w_M(e_1,\ldots,e_n)$. It follows that the branchwidth of $M$ is upper-bounded by $\text{pw}(M)$. \section{NP-Hardness of Matroid Pathwidth and Code Trellis-Width\label{NP_section}} In this section, we prove that for any fixed field ${\mathbb F}$, the problem of computing the pathwidth of an ${\mathbb F}$-representable matroid $M$, given a representation of $M$ over ${\mathbb F}$, is NP-hard. We accomplish this by reduction from the known NP-hard problem of computing the pathwidth of a graph \cite{arnborg}, \cite{bod93}. The notion of graph pathwidth was introduced by Robertson and Seymour in \cite{RS-I}. Let ${\mathcal G}$ be a graph with vertex set $V$. An ordered collection ${\mathcal V} = (V_1,\ldots,V_t)$, $t \geq 1$, of subsets of $V$ is called a \emph{path-decomposition} of ${\mathcal G}$, if \begin{itemize} \item[(i)] $\bigcup_{i=1}^t V_i = V$; \item[(ii)] for each pair of adjacent vertices $u,v \in V$, we have $\{u,v\} \subset V_i$ for some $i \in [t]$; and \item[(iii)] for $1 \leq i < j < k \leq t$, $V_i \cap V_k \subset V_j$. \end{itemize} The \emph{width} of such a path-decomposition ${\mathcal V}$ is defined to be $w_{{\mathcal G}}({\mathcal V}) = \max_{i \in [t]} |V_i| - 1$. The \emph{pathwidth} of ${\mathcal G}$, denoted by $\text{pw}({\mathcal G})$, is the minimum among the widths of all its path-decompositions. A path-decomposition ${\mathcal V}$ such that $w_{{\mathcal G}}({\mathcal V}) = \text{pw}({\mathcal G})$ is called an \emph{optimal} path-decomposition of ${\mathcal G}$. Let ${\mathbb F}$ be an arbitrary field. Given a graph ${\mathcal G}$ with vertex set $V$, our aim is to produce, in time polynomial in $|V|$, a matrix $A$ over ${\mathbb F}$ such that $\text{pw}({\mathcal G})$ can be directly computed from $\text{pw}(M[A])$. The NP-hardness of computing graph pathwidth then implies the NP-hardness of computing the pathwidth of an ${\mathbb F}$-representable matroid. The obvious idea of taking $A$ to be a representation of the cycle matroid of ${\mathcal G}$ does not work. As observed by Robertson and Seymour \cite{RS-I}, trees can have arbitrarily large pathwidth; however, the cycle matroid of any tree is $U_{n,n}$ for some $n$, and $\text{pw}(U_{n,n})= 0$. What actually turns out to work is to take $A$ to be a representation of the cycle matroid of a certain graph constructible from ${\mathcal G}$ in polynomial time, as we describe next. 
\begin{figure}[t] \centering{\epsfig{file=Gprime.eps, width=7.5cm}} \caption{Construction of ${\mathcal G}'$ from ${\mathcal G}$.} \label{Gprime} \end{figure} Let ${\mathcal G}'$ be a graph defined on the same vertex set, $V$, as ${\mathcal G}$, having the following properties (see Figure~\ref{Gprime}): \begin{itemize} \item[(P1)] ${\mathcal G}'$ is loopless; \item[(P2)] a pair of distinct vertices is adjacent in ${\mathcal G}'$ iff it is adjacent in ${\mathcal G}$; and \item[(P3)] in ${\mathcal G}'$, there are exactly two edges between each pair of adjacent vertices. \end{itemize} It is evident from the definition that $(V_1,\ldots,V_t)$ is a path-decomposition of ${\mathcal G}$ iff it is a path-decomposition of ${\mathcal G}'$. Therefore, $\text{pw}({\mathcal G}') = \text{pw}({\mathcal G})$. Define ${\overline{\cG}}$ to be the graph obtained by adding an extra vertex, henceforth denoted by $x$, to ${\mathcal G}'$, along with a pair of parallel edges from $x$ to each $v \in V$ (see Figure~\ref{Gbar}). Clearly, ${\overline{\cG}}$ is constructible directly from ${\mathcal G}$ in $O(|V|^2)$ time. But more importantly, the pathwidth of the cycle matroid, $M({\overline{\cG}})$, of ${\overline{\cG}}$ relates very simply to the pathwidth of ${\mathcal G}$, as made precise by the following proposition. \\[-6pt] \begin{proposition} $\text{pw}(M({\overline{\cG}})) = \text{pw}({\mathcal G}) + 1$. \\[-6pt] \label{pw_prop} \end{proposition} Before proving the result, we present some of its implications. For any field ${\mathbb F}$, $M({\overline{\cG}})$ is ${\mathbb F}$-representable. Indeed, if $D({\overline{\cG}})$ is any directed graph obtained by arbitrarily assigning orientations to the edges of ${\overline{\cG}}$, then the vertex-arc incidence matrix of $D({\overline{\cG}})$ is an ${\mathbb F}$-representation of $M({\overline{\cG}})$ \cite[Proposition~5.1.2]{oxley}. It is easily verified that such an ${\mathbb F}$-representation of $M({\overline{\cG}})$ can be constructed directly from ${\mathcal G}$ in $O(|V|^3)$ time. Now, suppose that there were a polynomial-time algorithm for computing the pathwidth of an arbitrary ${\mathbb F}$-representable matroid, given an ${\mathbb F}$-representation for it. Then, given any graph ${\mathcal G}$, we can construct an ${\mathbb F}$-representation, $A$, of $M({\overline{\cG}})$, and then compute the pathwidth of $M[A] = M({\overline{\cG}})$, all in polynomial time. Therefore, by Proposition~\ref{pw_prop}, we have a polynomial-time algorithm to compute the pathwidth of ${\mathcal G}$. However, the graph pathwidth problem is NP-hard. So, if there exists a polynomial-time algorithm for computing the pathwidth of an ${\mathbb F}$-representable matroid, then we must have $P=NP$. This implies the following result. \\[-6pt] \begin{theorem} Let ${\mathbb F}$ be a fixed field. The problem of computing the pathwidth of $M[A]$, for an arbitrary matrix $A$ over ${\mathbb F}$, is NP-hard. \\[-6pt] \label{pw_NP_thm} \end{theorem} As a corollary, we have that computing the trellis-width of a code is NP-hard. \\[-6pt] \begin{corollary} Let ${\mathbb F}$ be a fixed finite field. The problem of computing the trellis-width of an arbitrary linear code over ${\mathbb F}$, specified by any of its generator matrices, is NP-hard. \\[-6pt] \label{tw_NP_cor} \end{corollary} \begin{figure}[t] \centering{\epsfig{file=Gbar.eps, width=7.5cm}} \caption{Construction of ${\overline{\cG}}$ from ${\mathcal G}'$.} \label{Gbar} \end{figure} The remainder of this section is devoted to the proof of Proposition~\ref{pw_prop}.
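Before we begin, we remark that Proposition~\ref{pw_prop} is easy to confirm by brute force on very small graphs. The following Python sketch is an added illustration, not part of the proof; it assumes two standard facts, namely that the vertex-edge incidence matrix of a graph over $GF(2)$ is a $GF(2)$-representation of its cycle matroid (the unoriented analogue, over $GF(2)$, of \cite[Proposition~5.1.2]{oxley}), and that the pathwidth of a graph equals its vertex separation number. Both pathwidths are computed by dynamic programming over subsets of the ground set, so only tiny inputs are feasible.
\begin{verbatim}
from functools import lru_cache

def gf2_rank(vecs):
    # Gaussian elimination over GF(2); vectors are integer bitmasks.
    basis = {}
    for v in vecs:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def matroid_pathwidth(cols):
    # cols[i]: bitmask of column i of a GF(2) representation of M.
    n, full = len(cols), (1 << len(cols)) - 1

    @lru_cache(None)
    def rank(S):
        return gf2_rank([cols[i] for i in range(n) if S >> i & 1])

    def lam(S):                       # connectivity function of M
        return rank(S) + rank(full ^ S) - rank(full)

    @lru_cache(None)
    def best(S):                      # min over orderings of max prefix lam
        if S == 0:
            return 0
        return max(lam(S), min(best(S & ~(1 << i))
                               for i in range(n) if S >> i & 1))
    return best(full)

def graph_pathwidth(nv, edges):
    # Pathwidth as the vertex separation number, by the same subset DP.
    nbr = [0] * nv
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u
    full = (1 << nv) - 1

    def boundary(S):                  # placed vertices with unplaced neighbours
        return sum(1 for v in range(nv) if S >> v & 1 and nbr[v] & ~S & full)

    @lru_cache(None)
    def best(S):
        if S == 0:
            return 0
        return max(boundary(S), min(best(S & ~(1 << v))
                                    for v in range(nv) if S >> v & 1))
    return best(full)

def gbar_columns(nv, edges):
    # Gbar: double every edge of G and join a new apex x (index nv) to
    # every vertex by a pair of parallel edges; return incidence columns.
    cols = []
    for u, v in edges:
        cols += 2 * [(1 << u) | (1 << v)]
    for v in range(nv):
        cols += 2 * [(1 << nv) | (1 << v)]
    return cols

edges = [(0, 1), (1, 2)]              # G = path on three vertices, pw(G) = 1
print(graph_pathwidth(3, edges))      # 1
print(matroid_pathwidth(gbar_columns(3, edges)))  # expected 2 = pw(G) + 1
\end{verbatim}
The subset recursion is valid because $\lambda_M$ of a prefix of an ordering depends only on the underlying set, not on the order in which the prefix was built.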
Since $\text{pw}({\mathcal G}') = \text{pw}({\mathcal G})$, for the purpose of our proof, we may assume that ${\mathcal G}' = {\mathcal G}$. Thus, from now until the end of this section, we take ${\mathcal G}$ to be a loopless graph satisfying property (P3) above. Note that ${\overline{\cG}}$ also satisfies (P3). For each pair of adjacent vertices $u,v$ in ${\mathcal G}$ or ${\overline{\cG}}$, we denote by $l_{uv}$ and $r_{uv}$ the two edges between $u$ and $v$. Let $V$ and $E$ denote the sets of vertices and edges of ${\mathcal G}$, and let $\overline{V}$ and $\overline{E}$ denote the corresponding sets of ${\overline{\cG}}$. We thus have $\overline{V} = V \stackrel{\cdot}{\cup} \{x\}$, and $\overline{E} = E \stackrel{\cdot}{\cup} \left(\bigcup_{v \in V} \{l_{xv},r_{xv}\}\right)$. Set $M = M({\overline{\cG}})$, so that $E(M) = \overline{E}$. Note that since ${\overline{\cG}}$ is connected (each $v \in V$ is adjacent to $x$), we have $\rank(M) = |\overline{V}|-1 = |V|$. We will first prove that $\text{pw}(M) \leq \text{pw}({\mathcal G}) + 1$. Let ${\mathcal V} = (V_1,\ldots,V_t)$ be a path-decomposition of ${\mathcal G}$. We need the following fact about ${\mathcal V}$: for each $j \in [t]$, \begin{equation} \bigcup_{i \leq j} V_i \ \cap \ \bigcup_{k \geq j} V_k \ = \ V_j. \label{Vj_eq} \end{equation} The above equality follows from the fact that a path-decomposition, by definition, has the property that for $1 \leq i < j < k \leq t$, $V_i \cap V_k \subset V_j$. For $j \in [t]$, let $F_j$ be the set of edges of ${\mathcal G}$ that have both their end-points in $V_j$. By condition (ii) in the definition of path-decomposition, $\bigcup_{j=1}^t F_j = E$. Now, let $\overline{F_j} = F_j \cup \left(\bigcup_{v \in V_j} \{l_{xv},r_{xv}\}\right)$, so that $\bigcup_{j=1}^t \overline{F_j} = \overline{E}$. \\[-6pt] \begin{Def} An ordering $(e_1,\ldots,e_n)$ of the elements of a matroid $M$ is said to \emph{induce} an ordered partition $(E_1,\ldots,E_t)$ of $E(M)$ if for each $j \in [t]$, $\{e_{n_{j-1}+1},$ $e_{n_{j-1}+2},\ldots,e_{n_j}\} = E_j$, where $n_j = \left|\bigcup_{i \leq j} E_i\right|$ (and $n_0 = 0$). \\[-6pt] \end{Def} Let $\pi = (e_1,\ldots,e_n)$ be any ordering of $\overline{E}$ that induces the ordered partition $(E_1,E_2,\ldots,E_t)$, where for each $j \in [t]$, $E_j = \overline{F_j} - \bigcup_{i < j} \overline{F_i}$. We claim that the width of $\pi$ is at most one more than the width of the path-decomposition ${\mathcal V}$. \\[-6pt] \begin{lemma} $w_M(\pi) \leq w_{{\mathcal G}}({\mathcal V}) + 1$. \\[-6pt] \label{width_lemma} \end{lemma} \begin{proof} Observe first that \begin{eqnarray} w_M(\pi) &=& \max_{j \in [t]} \max_{1 \leq k \leq n_j-n_{j-1}} \lambda_M\left(\bigcup_{i < j} E_i \cup \{e_{n_{j-1}+1}, \ldots, e_{n_{j-1}+k}\}\right) \nonumber \\ & \leq & \max_{j \in [t]} \max_{E' \subset E_j} \lambda_M \left(\bigcup_{i < j} E_i \cup E'\right). \label{wM_eq} \end{eqnarray} Let $X = \bigcup_{i < j} E_i \cup E'$ for some $j \in [t]$ and $E' \subset E_j$, and consider $\lambda_M(X) = r_M(X) + r_M(\overline{E} - X) - r_M(\overline{E})$. Since ${\overline{\cG}}$ is a connected graph, $r_M(\overline{E}) = |\overline{V}|-1 = |V|$. If $v$ is a vertex of ${\overline{\cG}}$ incident with an edge in $X$, then $v \in \bigcup_{i \leq j} V_i \stackrel{\cdot}{\cup} \{x\}$. So, the subgraph of ${\overline{\cG}}$ induced by $X$ has its vertices contained in $\bigcup_{i \leq j} V_i \stackrel{\cdot}{\cup} \{x\}$.
Therefore, $r_M(X) \leq \left|\bigcup_{i \leq j} V_i \stackrel{\cdot}{\cup} \{x\}\right| - 1 = \left|\bigcup_{i \leq j} V_i\right|$. Next, consider $\overline{E} - X = (\bigcup_{k > j} E_k) \cup (E_j - E')$. Reasoning as above, the subgraph of ${\overline{\cG}}$ induced by $\overline{E} - X$ has its vertices contained in $\bigcup_{k \geq j} V_k \stackrel{\cdot}{\cup} \{x\}$. Hence, $r_M(\overline{E}-X) \leq \left|\bigcup_{k \geq j} V_k\right|$. Therefore, we have \begin{eqnarray*} \lambda_M(X) & \leq & \left|\bigcup_{i \leq j} V_i\right| + \left|\bigcup_{k \geq j} V_k\right| - |V| \\ &=& \left|\bigcup_{i \leq j} V_i \cap \bigcup_{k \geq j} V_k\right| \ \ = \ \ |V_j|, \end{eqnarray*} the last equality arising from (\ref{Vj_eq}). Hence, carrying on from (\ref{wM_eq}), $$ w_M(\pi) \leq \max_{j \in [t]} |V_j| = w_{{\mathcal G}}({\mathcal V}) + 1, $$ as desired. \end{proof} \mbox{} \\[-6pt] The fact that $\text{pw}(M) \leq \text{pw}({\mathcal G})+1$ easily follows from the above lemma. Indeed, we may choose ${\mathcal V}$ to be an optimal path-decomposition of ${\mathcal G}$. Then, by Lemma~\ref{width_lemma}, there exists an ordering $(e_1,\ldots,e_n)$ of $E(M)$ such that $w_M(e_1,\ldots,e_n) \leq \text{pw}({\mathcal G})+1$. Hence, $\text{pw}(M) \leq w_M(e_1,\ldots,e_n) \leq \text{pw}({\mathcal G})+1$. \\[-6pt] We prove the reverse inequality in two steps, first showing that $\text{pw}({\overline{\cG}}) = \text{pw}({\mathcal G})+1$, and then showing that $\text{pw}(M) \geq \text{pw}({\overline{\cG}})$. \\[-6pt] \begin{lemma} $\text{pw}({\overline{\cG}}) = \text{pw}({\mathcal G}) + 1$. \\[-6pt] \label{pw_lemma} \end{lemma} \begin{proof} Clearly, if ${\mathcal V} = (V_1,\ldots,V_t)$ is a path-decomposition of ${\mathcal G}$, then $\overline{\cV} = (V_1 \cup \{x\}, \ldots, V_t \cup \{x\})$ is a path-decomposition of ${\overline{\cG}}$. Hence, choosing ${\mathcal V}$ to be an optimal path-decomposition of ${\mathcal G}$, we have that $\text{pw}({\overline{\cG}}) \leq w_{{\overline{\cG}}}(\overline{\cV}) = w_{{\mathcal G}}({\mathcal V})+1 = \text{pw}({\mathcal G})+1$. For the inequality in the other direction, we will show that there exists an optimal path-decomposition, $\widetilde{\cV} = (\widetilde{V}_1, \ldots, \widetilde{V}_s)$, of ${\overline{\cG}}$ such that $x \in \widetilde{V}_i$ for all $i \in [s]$. We then have ${\mathcal V} = (\widetilde{V}_1 - \{x\}, \ldots, \widetilde{V}_s - \{x\})$ being a path-decomposition of ${\mathcal G}$, and hence, $\text{pw}({\mathcal G}) \leq w_{{\mathcal G}}({\mathcal V}) = w_{{\overline{\cG}}}(\widetilde{\cV})-1 = \text{pw}({\overline{\cG}})-1$. Let $\overline{\cV} = (\overline{V}_1,\ldots,\overline{V}_t)$ be an optimal path-decomposition of ${\overline{\cG}}$, and let $i_0 =\min\{i: x \in \overline{V}_i\}$ and $i_1 = \max\{i: x \in \overline{V}_i\}$. Since $\overline{V}_i \cap \overline{V}_k \subset \overline{V}_j$ for $i < j < k$, we must have $x \in \overline{V}_i$ for each $i \in [i_0,i_1]$. We claim that $(\overline{V}_{i_0}, \overline{V}_{i_0+1}, \ldots, \overline{V}_{i_1})$ is a path-decomposition of ${\overline{\cG}}$. We only have to show that $\bigcup_{i=i_0}^{i_1} \overline{V}_i = \overline{V}$, and that for each pair of adjacent vertices $u,v \in \overline{V}$, $\{u,v\} \subset \overline{V}_i$ for some $i \in [i_0,i_1]$. To see why the first assertion is true, consider any $v \in \overline{V}$, $v \neq x$.
Since $x$ is adjacent to $v$, and $\overline{\cV}$ is a path-decomposition of ${\overline{\cG}}$, $\{x,v\} \subset \overline{V}_i$ for some $i \in [t]$. However, $x \in \overline{V}_i$ iff $i \in [i_0,i_1]$, and so, $\{x,v\} \subset \overline{V}_i$ for some $i \in [i_0,i_1]$. In particular, $v \in \overline{V}_i$ for some $i \in [i_0,i_1]$. For the second assertion, suppose that $u,v$ is a pair of vertices adjacent in ${\overline{\cG}}$. Obviously, $\{u,v\} \subset \overline{V}_j$ for some $j \in [t]$. Suppose that $j \notin [i_0,i_1]$. We consider the case when $j > i_1$; the case when $j < i_0$ is similar. As $\bigcup_{i=i_0}^{i_1} \overline{V}_i = \overline{V}$, there exist $i_2, i_3 \in [i_0,i_1]$ such that $u \in \overline{V}_{i_2}$ and $v \in \overline{V}_{i_3}$. Without loss of generality (WLOG), $i_2 \leq i_3$. If $i_2 = i_3$, then there exists $i \in [i_0,i_1]$ such that $\{u,v\} \subset \overline{V}_i$. If $i_2 < i_3$, we have $u \in \overline{V}_{i_2} \cap \overline{V}_j$ and $i_2 < i_3 < j$. Hence, $u \in \overline{V}_{i_3}$ as well, and so once again, we have an $i \in [i_0,i_1]$ such that $\{u,v\} \subset \overline{V}_i$. Thus, $(\overline{V}_{i_0}, \overline{V}_{i_0+1}, \ldots, \overline{V}_{i_1})$ is a path-decomposition of ${\overline{\cG}}$, with the property that $x \in \overline{V}_i$ for all $i \in [i_0,i_1]$. It must be an optimal path-decomposition, since it is a subsequence of the optimal path-decomposition $\overline{\cV}$. \end{proof} \mbox{}\\[-6pt] To complete the proof of Proposition~\ref{pw_prop}, it remains to show that $\text{pw}(M) \geq \text{pw}({\overline{\cG}})$. We introduce some notation at this point. Recall that the two edges between a pair of adjacent vertices $u$ and $v$ in ${\overline{\cG}}$ (or ${\mathcal G}$) are denoted by $l_{uv}$ and $r_{uv}$. We define \begin{eqnarray*} L_{{\mathcal G}} &=& \{l_{uv}: u,v \text{ are adjacent vertices in } {\mathcal G}\}, \\ R_{{\mathcal G}} &=& \{r_{uv}: u,v \text{ are adjacent vertices in } {\mathcal G}\}, \end{eqnarray*} $L_x = \bigcup_{v \in V} \{l_{xv}\}$ and $R_x = \bigcup_{v \in V} \{r_{xv}\}$, where $x$ is the distinguished vertex in $\overline{V} - V$. Thus, $L_{\mathcal G} \cup R_{\mathcal G} = E$ and $E \cup L_x \cup R_x = \overline{E}$. Note that, by construction of ${\overline{\cG}}$, $\text{cl}_M(L_x) = \text{cl}_M(R_x) = \overline{E}$, where $\text{cl}_M$ denotes the closure operator of $M$. We will need the fact that there exists an optimal ordering $(e_1,\ldots,e_n)$ of $\overline{E}$ that induces a certain ordered partition of $\overline{E}$ of the form $$ (L_1,A_1,B_1,R_1,L_2,A_2,B_2,R_2,\ldots,L_t,A_t,B_t,R_t), $$ where for each $j \in [t]$, $L_j \subset L_x$, $A_j \subset L_{\mathcal G}$, $B_j \subset R_{\mathcal G}$, and $R_j \subset R_x$. This will follow from a re-ordering argument given further below. But first, we make some simple observations about orderings of $\overline{E}$. Given an ordering of $\overline{E}$, we may assume, WLOG, that for each pair of adjacent vertices $u,v \in \overline{V}$, $l_{uv}$ appears before $r_{uv}$ in the ordering; we denote this by $l_{uv} < r_{uv}$. We call such an ordering of $\overline{E}$ a \emph{normal} ordering. \\[-6pt] \begin{lemma} Let $(e_1,\ldots,e_n)$ be a normal ordering of $\overline{E}$.
Then, for $1 \leq j \leq n-1$, we have \begin{itemize} \item[(a)] $\lambda_M(e_1,\ldots,e_{j+1}) = \lambda_M(e_1,\ldots,e_j) + 1$ iff $e_{j+1} \notin \text{cl}_M(e_1,\ldots,e_j)$; and \item[(b)] $\lambda_M(e_1,\ldots,e_{j+1}) = \lambda_M(e_1,\ldots,e_j) - 1$ iff $e_{j+1} \notin \text{cl}_M(e_{j+2},\ldots,e_n)$. \\[-6pt] \end{itemize} \label{conn_fn_lemma} \end{lemma} \begin{proof} We only prove (a), as the proof of (b) is similar. It is easy to deduce from the definition of the connectivity function that $\lambda_M(e_1,\ldots,e_{j+1}) = \lambda_M(e_1,\ldots,e_j) + 1$ iff $e_{j+1} \notin \text{cl}_M(e_1,\ldots,e_j)$ and $e_{j+1} \in \text{cl}_M(e_{j+2},\ldots,e_n)$. Now, if $e_{j+1} \notin \text{cl}_M(e_1,\ldots,e_j)$, then $e_{j+1} = l_{uv}$ for some $u,v$. (If not, \emph{i.e.}, if $e_{j+1} = r_{uv}$, then since $l_{uv} < r_{uv}$, we must have $l_{uv} \in \{e_1,\ldots,e_j\}$, and so, $e_{j+1} = r_{uv} \in \text{cl}_M(l_{uv}) \subset \text{cl}_M(e_1,\ldots,e_j)$, a contradiction.) Therefore, $\{e_{j+2},\ldots,e_n\}$ contains $r_{uv}$, and hence, $e_{j+1} = l_{uv} \in \text{cl}_M(e_{j+2},\ldots,e_n)$. We have thus shown that if $e_{j+1} \notin \text{cl}_M(e_1,\ldots,e_j)$, then $e_{j+1} \in \text{cl}_M(e_{j+2},\ldots,e_n)$. Part (a) of the lemma now follows. \end{proof} \mbox{} \\[-6pt] We now describe a procedure that takes as input a normal ordering of $\overline{E}$, and produces as output a re-ordering of $\overline{E}$ with certain desirable properties. \\[-6pt] \pagebreak \noindent\underline{\textsc{Re-ordering Algorithm}} \\[-6pt] \emph{Input}: a normal ordering $(e_1,\ldots,e_n)$ of $\overline{E}$. \\[-6pt] \emph{Initialization}: $j = 0$. \\[-4pt] \begin{tabular}{ll} \underline{Step 0}: & If $j=0$, set $X_j = \emptyset$; \\ & else, set $X_j = \text{cl}_M(e_1,\ldots,e_j) - \{e_1,\ldots,e_j\}$. \\[6pt] \underline{Step 1}: & If $X_j = \emptyset$, \\ & \ \ \ find the least $k > j$ such that \\ & \ \ \ \ \ \ for some $m > j$, $e_m \in L_x \cap \text{cl}_M(e_1,\ldots,e_k)$; \\ & \ \ \ set $(e_1',\ldots,e_n') = (e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{m-1},e_{m+1},\ldots,e_n)$.\\[6pt] & If $X_j \neq \emptyset$, \\ & \ \ \ if $L_x \cap X_j \neq \emptyset$, find an $m > j$ such that $e_m \in L_x \cap X_j$; \\ & \ \ \ else, if $L_{\mathcal G} \cap X_j \neq \emptyset$, find an $m > j$ such that $e_m \in L_{\mathcal G} \cap X_j$; \\ & \ \ \ else, if $R_{\mathcal G} \cap X_j \neq \emptyset$, find an $m > j$ such that $e_m \in R_{\mathcal G} \cap X_j$; \\ & \ \ \ else, if $R_x \cap X_j \neq \emptyset$, find an $m > j$ such that $e_m \in R_x \cap X_j$; \\ & \ \ \ set $(e_1',\ldots,e_n') = (e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{m-1},e_{m+1},\ldots,e_n)$.\\[6pt] \underline{Step 2}: & Replace $j$ by $j+1$. \\ & If $j < n$, replace $(e_1,\ldots,e_n)$ by $(e_1',\ldots,e_n')$, \\ & \ \ \ and return to Step 0; \\ & else, output $(e_1',\ldots,e_n')$. \end{tabular} \mbox{}\\[10pt] Denote by $(e_1^*,\ldots,e_n^*)$ the final output generated by the above algorithm. Set $X_0^* = \emptyset$, and for $j \in [n]$, $X_j^* = \text{cl}_M(e_1^*,\ldots,e_j^*) - \{e_1^*,\ldots,e_j^*\}$. 
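The only matroid primitive that the algorithm repeatedly needs is the closure operator $\text{cl}_M$. As a minimal sketch of how it can be computed from a representation (an added illustration in the same hypothetical Python setting as the sketch in Section~\ref{NP_section}, reusing \texttt{gf2\_rank} from there): an element lies in the closure of a set exactly when adjoining its column does not increase the rank.
\begin{verbatim}
def closure(S, cols):
    # cl_M(S) in the vector matroid of 'cols' (integer-bitmask columns
    # over GF(2)): e lies in cl_M(S) iff r(S + e) = r(S).
    r_S = gf2_rank([cols[i] for i in S])
    return {e for e in range(len(cols))
            if gf2_rank([cols[i] for i in S] + [cols[e]]) == r_S}
\end{verbatim}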
Stepping through the algorithm, one may easily check that $(e_1^*,\ldots,e_n^*)$ has the following property: for $0 \leq j \leq n-1$, if $X_j^* = \emptyset$, then $e_{j+1}^* \in L_x$, and if $X_j^* \neq \emptyset$, then $$ e_{j+1}^* \in \begin{cases} L_x \cap X_j^*, & \text{ if } L_x \cap X_j^* \neq \emptyset \\ L_{\mathcal G} \cap X_j^*, & \text{ if } L_x \cap X_j^* = \emptyset, \text{ but } L_{{\mathcal G}} \cap X_j^* \neq \emptyset \\ R_{{\mathcal G}} \cap X_j^*, & \text{ if } L_x \cap X_j^* = L_{{\mathcal G}} \cap X_j^* = \emptyset, \text{ but } R_{{\mathcal G}} \cap X_j^* \neq \emptyset \\ R_x \cap X_j^*, & \text{ if } L_x \cap X_j^* = L_{{\mathcal G}} \cap X_j^* = R_{{\mathcal G}} \cap X_j^* = \emptyset, \text{ but } R_x \cap X_j^* \neq \emptyset. \\ \end{cases} $$ The following claim can be readily deduced from this property, and we leave the details to the reader. \\[-6pt] \begin{claim} (a)\ The ordering $(e_1^*,\ldots,e_n^*)$ induces an ordered partition of $\overline{E}$ of the form $$ (L_1,A_1,B_1,R_1,L_2,A_2,B_2,R_2,\ldots,L_t,A_t,B_t,R_t), $$ where for each $j \in [t]$, $L_j \subset L_x$, $A_j \subset L_{\mathcal G}$, $B_j \subset R_{\mathcal G}$ and $R_j \subset R_x$. Moreover, for each $u,v \in \overline{V}$, $l_{uv} \in L_j \cup A_j$ iff $r_{uv} \in B_j \cup R_j$. \\[6pt] (b)\ For the ordered partition in (a), we have for each $j \in [t]$, $$ A_j \cup B_j \subset \text{cl}_M(\bigcup_{i \leq j} L_i) - \text{cl}_M(\bigcup_{i < j} L_i). $$ \\[-6pt] \label{re-ordering_claim} \end{claim} The crucial property of $(e_1^*,\ldots,e_n^*)$ is the following. \\[-6pt] \begin{lemma} If $(e_1^*,\ldots,e_n^*)$ is the output of the Re-ordering Algorithm in response to the input $(e_1,\ldots,e_n)$, then $w_M(e_1^*,\ldots,e_n^*) \leq w_M(e_1,\ldots,e_n)$. \\[-6pt] \label{re-ordering_lemma} \end{lemma} \begin{proof} Steps~0--1 of the algorithm go through $n$ iterations, indexed by $j \in \{0,1,\ldots,n-1\}$. In the $j$th iteration, Step 1 is given a normal ordering $(e_1,\ldots,e_n)$, in response to which it produces an ordering $(e_1',\ldots,e_n')$, which is also normal. To prove the lemma, it is enough to show that $w_M(e_1',\ldots,e_n') \leq w_M(e_1,\ldots,e_n)$. So, suppose that the algorithm is in its $j$th iteration ($0 \leq j \leq n-1$). We first dispose of the case when $X_j \neq \emptyset$. Then, $$ (e_1',\ldots,e_n') = (e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{m-1},e_{m+1},\ldots,e_n) $$ for some $m > j$ such that $e_m \in X_j$. Observe that if $1 \leq s \leq j$ or if $m \leq s \leq n$, then $(e_1',\ldots,e_s')$ is just a re-ordering of $(e_1,\ldots,e_s)$, and hence $\lambda_M(e_1',\ldots,e_s') = \lambda_M(e_1,\ldots,e_s)$. So, consider $j < s < m$. In this case, $(e_1',\ldots,e_s') = (e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{s-1})$. Since $e_m \in \text{cl}_M(e_1,\ldots,e_j)$, we have $r_M(e_1',\ldots,e_s') = r_M(e_1,\ldots,e_j,e_{j+1},\ldots,e_{s-1})$. On the other hand, \begin{eqnarray} r_M(e_{s+1}',\ldots,e_n') &=& r_M(e_s,\ldots,e_{m-1},e_{m+1},\ldots,e_n) \nonumber \\ &\leq& r_M(e_s,\ldots,e_{m-1},e_m,e_{m+1},\ldots,e_n). \label{rM_eq1} \end{eqnarray} Hence, $\lambda_M(e_1',\ldots,e_s') \leq \lambda_M(e_1,\ldots,e_{s-1})$. Therefore, for any $s \in [n]$, we have shown that there exists a $t \in [n]$ such that $\lambda_M(e_1',\ldots,e_s') \leq \lambda_M(e_1,\ldots,e_t)$. It follows that $w_M(e_1',\ldots,e_n') \leq w_M(e_1,\ldots,e_n)$. \\[-6pt] We must now deal with the case when $X_j = \emptyset$, \emph{i.e.}, $\text{cl}_M(e_1,\ldots,e_j) = \{e_1,\ldots,e_j\}$.
Note that if $L_x \subset \{e_1,\ldots,e_j\}$, then since $\text{cl}_M(L_x) = \overline{E}$, we have $\text{cl}_M(e_1,\ldots,e_j) = \overline{E}$. Therefore, $\{e_1,\ldots,e_j\} = \overline{E}$, which means that $j = n$, a contradiction. Therefore, there must exist some $m > j$ such that $e_m \in L_x$. Let $k^*$ be the least integer $k > j$ such that there exists $e_m \in L_x \cap \text{cl}_M(e_1,\ldots,e_k)$ for some $m > j$. By choice of $k^*$, we have $m \geq k^*$. For this $m$, we again have $$ (e_1',\ldots,e_n') = (e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{m-1},e_{m+1},\ldots,e_n). $$ As before, if $1 \leq s \leq j$ or if $m \leq s \leq n$, then $\lambda_M(e_1',\ldots,e_s') = \lambda_M(e_1,\ldots,e_s)$. For $k^* < s < m$, we have $$ r_M(e_1',\ldots,e_s') = r_M(e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{s-1}) = r_M(e_1,\ldots,e_j,e_{j+1},\ldots,e_{s-1}), $$ as $e_m \in \text{cl}_M(e_1,\ldots,e_{k^*}) \subset \text{cl}_M(e_1,\ldots,e_{s-1})$. And as in (\ref{rM_eq1}), $r_M(e_{s+1}',\ldots,e_n') \leq r_M(e_s,\ldots,e_n)$. Hence, $\lambda_M(e_1',\ldots,e_s') \leq \lambda_M(e_1,\ldots,e_{s-1})$. We are left with $j+1 \leq s \leq k^*$. Note that by choice of $k^*$, $e_m \notin \text{cl}_M(e_1,\ldots,e_{s-1})$. Therefore, \begin{eqnarray*} r_M(e_1',\ldots,e_s') &=& r_M(e_1,\ldots,e_j,e_m,e_{j+1},\ldots,e_{s-1}) \\ &=& 1 + r_M(e_1,\ldots,e_j,e_{j+1},\ldots,e_{s-1}). \end{eqnarray*} Since (\ref{rM_eq1}) again applies, we have that \begin{equation} \lambda_M(e_1',\ldots,e_s') \leq 1 + \lambda_M(e_1,\ldots,e_{s-1}). \label{l_eq1} \end{equation} Observe that, since $e_{j+1} \notin \text{cl}_M(e_1,\ldots,e_j)$, by Lemma~\ref{conn_fn_lemma}(a), \begin{equation} \lambda_M(e_1,\ldots,e_{j+1}) = \lambda_M(e_1,\ldots,e_j) + 1. \label{l_eq2} \end{equation} Furthermore, by choice of $k^*$, $e_m \notin \text{cl}_M(e_1,\ldots,e_{k^*-1})$, but $e_m \in \text{cl}_M(e_1,\ldots,e_{k^*})$, which together imply that $e_{k^*} \notin \text{cl}_M(e_1,\ldots,e_{k^*-1})$. Hence, again by Lemma~\ref{conn_fn_lemma}(a), \begin{equation} \lambda_M(e_1,\ldots,e_{k^*}) = \lambda_M(e_1,\ldots,e_{k^*-1}) + 1. \label{l_eq3} \end{equation} Therefore, from (\ref{l_eq1})--(\ref{l_eq3}), we find that for $s = j+1$ or $s=k^*$, we have $\lambda_M(e_1',\ldots,e_s') \leq \lambda_M(e_1,\ldots,e_s)$. We claim that for $j+1 < s < k^*$, we have $\lambda_M(e_1,\ldots,e_{s-1}) \leq \lambda_M(e_1,\ldots,e_s)$, so that by induction, $\lambda_M(e_1,\ldots,e_{s-1}) \leq \lambda_M(e_1,\ldots,e_{k^*-1})$. This would then imply, via (\ref{l_eq1}) and (\ref{l_eq3}), that $\lambda_M(e_1',\ldots,e_s') \leq \lambda_M(e_1,\ldots,e_{k^*})$. Thus, for any $s \in [n]$, we have a $t \in [n]$ such that $\lambda_M(e_1',\ldots,e_s') \leq \lambda_M(e_1,\ldots,e_t)$. Therefore, $w_M(e_1',\ldots,e_n') \leq w_M(e_1,\ldots,e_n)$, which would complete the proof of the lemma. To prove our claim, it is enough to show that when $j+1 < s < k^*$, we have $e_s \in \text{cl}_M(e_{s+1},\ldots,e_n)$. Indeed, it then follows from Lemma~\ref{conn_fn_lemma}(b) that $\lambda_M(e_1,\ldots,e_s) \geq \lambda_M(e_1,\ldots,e_{s-1})$. So, suppose that $e_s \notin \text{cl}_M(e_{s+1},\ldots,e_n)$ for some $j+1 < s < k^*$. Then, $e_s = r_{uv}$ for some $u,v \in \overline{V}$. (Otherwise, if $e_s = l_{uv}$, then since $r_{uv} > l_{uv}$, we would have $e_s \in \text{cl}_M(e_{s+1},\ldots,e_n)$.) Note that $l_{uv} \notin \{e_1,\ldots,e_j\}$; otherwise, the fact that $\{e_1,\ldots,e_j\}$ is a flat of $M$ would imply that $r_{uv} \in \{e_1,\ldots,e_j\}$. So, $l_{uv} \in \{e_{j+1},\ldots,e_{s-1}\}$.
Suppose that $e_s = r_{xv}$ for some $v \in \overline{V}$. Then, $l_{xv} \in \{e_{j+1},\ldots,e_{s-1}\}$, which contradicts the choice of $k^*$. Therefore, $e_s \notin R_x$, meaning that $e_s = r_{uv}$ for some $u,v \in V$. Now, if $l_{xu},l_{xv} \in \{e_{s+1},\ldots,e_n\}$, then $e_s \in \text{cl}_M(e_{s+1},\ldots,e_n)$, as $(l_{xu},l_{xv},r_{uv})$ is a triangle in ${\overline{\cG}}$. So, WLOG, $l_{xu} \notin \{e_{s+1},\ldots,e_n\}$. By choice of $k^*$, $l_{xu} \notin \{e_{j+1},\ldots,e_s\}$. Therefore, $l_{xu} \in \{e_1,\ldots,e_j\}$. But now, $l_{xv} \in \text{cl}_M(e_1,\ldots,e_{s-1})$, as $(l_{xu},l_{uv},l_{xv})$ is a triangle in ${\overline{\cG}}$. However, $l_{xv} \notin \{e_1,\ldots,e_j\}$; otherwise, we would have $l_{xu}, l_{xv} \in \{e_1,\ldots,e_j\}$, which, since $\{e_1,\ldots,e_j\}$ is a flat and $(l_{xu},l_{uv},l_{xv})$ is a triangle, would imply that $l_{uv} \in \{e_1,\ldots,e_j\}$. Thus, $l_{xv} = e_{m^*}$ for some $m^* > j$. As already noted, $l_{xv} \in \text{cl}_M(e_1,\ldots,e_{s-1})$, and so our choice of $k^*$ is again contradicted. Therefore, the assumption that $e_s \notin \text{cl}_M(e_{s+1},\ldots,e_n)$ always leads to a contradiction, and we conclude that it is false. This completes the proof of the lemma. \end{proof} \mbox{} \\[-6pt] We can now furnish the last remaining piece of the proof of Proposition~\ref{pw_prop}. \\[-6pt] \begin{lemma} $\text{pw}(M) \geq \text{pw}({\overline{\cG}})$. \\[-6pt] \label{pw_lemma2} \end{lemma} \begin{proof} Let $(e_1,\ldots,e_n)$ be an optimal ordering of $\overline{E}$. WLOG, $(e_1,\ldots,e_n)$ may be assumed to be normal. Let $(e_1^*,\ldots,e_n^*)$ be the output of the Re-ordering Algorithm in response to the input $(e_1,\ldots,e_n)$. Then, $(e_1^*,\ldots,e_n^*)$ has the properties listed in Claim~\ref{re-ordering_claim}, and, by Lemma~\ref{re-ordering_lemma}, is also an optimal ordering of $\overline{E}$. Now, $(e_1^*,\ldots,e_n^*)$ induces an ordered partition $(L_1,A_1,B_1,R_1,\ldots,L_t,A_t,B_t,R_t)$ of $\overline{E}$, as in Claim~\ref{re-ordering_claim}(a). For $j \in [t]$, define $Y_j = \bigcup_{i < j} (L_i \cup A_i \cup B_i \cup R_i) \cup (L_j \cup A_j)$, and $Y_j' = \overline{E} - Y_j \ = \bigcup_{i > j} (L_i \cup A_i \cup B_i \cup R_i) \cup (B_j \cup R_j)$. Letting ${\overline{\cG}}[Y_j]$ and ${\overline{\cG}}[Y_j']$ denote the subgraphs of ${\overline{\cG}}$ induced by $Y_j$ and $Y_j'$, respectively, set $V_j = V({\overline{\cG}}[Y_j]) \cap V({\overline{\cG}}[Y_j'])$. In other words, $V_j$ is the set of vertices common to both ${\overline{\cG}}[Y_j]$ and ${\overline{\cG}}[Y_j']$. It is easily checked that ${\mathcal V} = (V_1,\ldots,V_t)$ is a path-decomposition of ${\overline{\cG}}$. Note that $$ |V_j| = |V({\overline{\cG}}[Y_j])| + |V({\overline{\cG}}[Y_j'])| - |\overline{V}|. $$ We next observe that ${\overline{\cG}}[Y_j]$ and ${\overline{\cG}}[Y_j']$ are connected graphs. From Claim~\ref{re-ordering_claim}(b), we have that $Y_j \subset \text{cl}_M(\bigcup_{i \leq j} L_i)$. Therefore, for any edge $l_{uv}$ (or $r_{uv}$) in $Y_j - \bigcup_{i \leq j} L_i$, both $l_{xu}$ and $l_{xv}$ must be in some $L_i$, $i \leq j$. Thus, in ${\overline{\cG}}[Y_j]$, each vertex $v \neq x$ is adjacent to $x$, which shows that ${\overline{\cG}}[Y_j]$ is connected. Consider any vertex $v \neq x$ in ${\overline{\cG}}[Y_j']$, such that $r_{xv} \notin Y_j'$. Then, $r_{uv} \in Y_j'$ for some $u \neq x$. So, $r_{uv} \in B_k$ for some $k \geq j$.
By Claim~\ref{re-ordering_claim}(b), $r_{uv} \in \text{cl}_M(\bigcup_{i \leq k} L_i) - \text{cl}_M(\bigcup_{i < k} L_i)$. This implies that either $l_{xu} \in L_k$ or $l_{xv} \in L_k$. Hence, either $r_{xu} \in R_k$ or $r_{xv} \in R_k$. However, $r_{xv}$ cannot be in $R_k$, since $r_{xv} \notin Y_j'$, and so, $r_{xu} \in R_k$. Thus, $(r_{xu},r_{uv})$ forms a path in ${\overline{\cG}}[Y_j']$ from $x$ to $v$. It follows that ${\overline{\cG}}[Y_j']$ is connected. Therefore, \begin{eqnarray*} \lambda_M(Y_j) &=& r_M(Y_j) + r_M(Y_j') - r_M(\overline{E}) \\ &=& (|V({\overline{\cG}}[Y_j])| - 1) + (|V({\overline{\cG}}[Y_j'])| - 1) - (|\overline{V}|-1) = |V_j| - 1. \end{eqnarray*} Hence, $$ \text{pw}(M) = w_M(e_1^*,\ldots,e_n^*) \geq \max_{j \in [t]} \lambda_M(Y_j) = \max_{j \in [t]} |V_j| - 1 = w_{\overline{\cG}}({\mathcal V}) \geq \text{pw}({\overline{\cG}}), $$ which proves the lemma. \end{proof} \mbox{} \\[-6pt] The proof of Proposition~\ref{pw_prop} is now complete. \section{Matroids of Bounded Pathwidth\label{bounded_pw_section}} Theorem~\ref{pw_NP_thm} shows that the following decision problem is NP-complete. \\[-6pt] \begin{tabular}{rl} \textbf{Problem:} & \textsc{Matroid Pathwidth} \\ \multicolumn{2}{l}{Let ${\mathbb F}$ be a fixed field.} \\[2pt] \textbf{Instance:} & An $m \times n$ matrix $A$ over ${\mathbb F}$, and an integer $w > 0$. \\ \textbf{Question:} & Is there an ordering $(e_1,\ldots,e_n)$ of the elements of $M = M[A]$, \\ & such that $w_M(e_1,\ldots,e_n) \leq w$? \\[4pt] \end{tabular} \noindent Similarly, Corollary~\ref{tw_NP_cor} shows that the corresponding decision problem for code trellis-width (over a fixed finite field ${\mathbb F}$) is NP-complete. In this section, we consider the situation when the parameter $w$ above is a fixed constant, and therefore, not considered to be part of the problem instance. In contrast to the NP-completeness of \textsc{Matroid Pathwidth}, we believe that the following decision problem and its coding-theoretic counterpart are solvable in polynomial time. \\[-6pt] \begin{tabular}{rl} \textbf{Problem:} & \textsc{Weak Matroid Pathwidth} \\ \multicolumn{2}{l}{Let ${\mathbb F}_q = GF(q)$ be a fixed finite field, and $w$ a fixed positive integer.} \\[2pt] \textbf{Instance:} & An $m \times n$ matrix $A$ over ${\mathbb F}_q$. \\ \textbf{Question:} & Is there an ordering $(e_1,\ldots,e_n)$ of the elements of $M = M[A]$, \\ & such that $w_M(e_1,\ldots,e_n) \leq w$? \\[4pt] \end{tabular} Our optimism above stems from the fact that the property of having pathwidth bounded by $w$ is preserved by the minors of a matroid. To be precise, let ${\mathcal P}_{w,q}$ be the class of matroids representable over the finite field ${\mathbb F}_q = GF(q)$, that have pathwidth at most $w$. By Lemma~\ref{minor_pathwidth_lemma}, ${\mathcal P}_{w,q}$ is minor-closed. Since pathwidth is an upper bound on the branchwidth of a matroid, all matroids in ${\mathcal P}_{w,q}$ have branchwidth at most $w$. Now, Geelen and Gerards have shown that if ${\mathcal M}$ is any minor-closed class of ${\mathbb F}_q$-representable matroids having bounded branchwidth, then ${\mathcal M}$ has finitely many excluded minors \cite[Theorem~1.4]{GW02}. As a result, we have the following theorem. \\[-6pt] \begin{theorem} For any integer $w > 0$ and finite field ${\mathbb F}_q$, ${\mathcal P}_{w,q}$ has finitely many excluded minors.
Consequently, the code family $$ {\mathfrak C}({\mathcal P}_{w,q}) = \{{\mathcal C}:\ {\mathcal C} \text{ is a linear code over ${\mathbb F}_q$ such that } \text{tw}({\mathcal C}) \leq w\} $$ also has finitely many excluded minors. \\[-6pt] \label{Pwq_thm} \end{theorem} Theorem~\ref{Pwq_thm} shows that deciding whether or not a given ${\mathbb F}_q$-representable matroid $M$ belongs to ${\mathcal P}_{w,q}$ can be accomplished by testing whether or not $M$ contains as a minor one of the finitely many excluded minors of ${\mathcal P}_{w,q}$. The Minor-Recognition Conjecture of Geelen, Gerards and Whittle \cite[Conjecture~1.3]{GGW} states that, for any fixed ${\mathbb F}_q$-representable matroid $N$, testing a given ${\mathbb F}_q$-representable matroid for the presence of an $N$-minor can be done in polynomial time. So, if this conjecture is true --- and there is evidence to support its validity \cite{GGW} --- then membership of an ${\mathbb F}_q$-representable matroid in the class ${\mathcal P}_{w,q}$ can be decided in polynomial time. Hence, assuming the validity of the Minor-Recognition Conjecture, \textsc{Weak Matroid Pathwidth} is solvable in polynomial time. While the finiteness of the list of excluded minors for ${\mathcal P}_{w,q}$ implies, modulo the Minor-Recognition Conjecture, the existence of a polynomial-time algorithm for \textsc{Weak Matroid Pathwidth}, an actual implementation of such an algorithm would require the explicit determination of the excluded minors. As a relatively easy exercise, we prove the following theorem.\\[-6pt] \begin{theorem} A matroid is in ${\mathcal P}_{1,q}$ iff it contains no minor isomorphic to any of the matroids $U_{2,4}$, $M(K_4)$, $M(K_{2,3})$ and $M^*(K_{2,3})$. \\[-6pt] \label{P1q_thm} \end{theorem} We first verify the easy ``only if'' part of the above theorem. \\[-6pt] \begin{proposition} $U_{2,4}$, $M(K_4)$, $M(K_{2,3})$ and $M^*(K_{2,3})$ are not in ${\mathcal P}_{1,q}$. \\[-6pt] \label{TC1_prop1} \end{proposition} \begin{proof} If $(e_1,e_2,e_3,e_4)$ is any ordering of the elements of $M = U_{2,4}$, then $\lambda_M(e_1,e_2) = r_M(e_1,e_2) + r_M(e_3,e_4) - \rank(M) = 2+2-2 = 2$. Since also $\lambda_M(J) \leq \rank(M) = 2$ for every $J$, it follows that $\text{pw}(U_{2,4}) = 2$. Now consider $M = M(K_4)$. For any ordering $(e_1,\ldots,e_6)$ of $E(K_4)$, we have $r_M(e_1,e_2,e_3) \geq 2$, with equality iff $\{e_1,e_2,e_3\}$ is a triangle, in which case $\{e_4,e_5,e_6\}$ is a triad. It follows that $r_M(e_1,e_2,e_3) + r_M(e_4,e_5,e_6) \geq 5$. Hence, $w_M(e_1,\ldots,e_6) \geq \lambda_M(e_1,e_2,e_3) \geq 2$. The proof for $M = M(K_{2,3})$ is very similar. For any $J \subset E(K_{2,3})$ with $|J|=3$, $r_M(J) = 3$, since $K_{2,3}$ has no circuits of size less than 4. Therefore, for any ordering $(e_1,\ldots,e_6)$ of $E(K_{2,3})$, $w_M(e_1,\ldots,e_6) \geq \lambda_M(e_1,e_2,e_3) = 3+3 - 4 = 2$. Thus, $\text{pw}(M(K_{2,3})) \geq 2$, and by duality, $\text{pw}(M^*(K_{2,3})) \geq 2$ as well. \end{proof} \mbox{} \\[-6pt] We now prove the ``if'' part of Theorem~\ref{P1q_thm}. For the duration of the proof, we take $M$ to be a matroid that contains no minor isomorphic to the matroids listed in the statement of the theorem. Since $M(K_4)$ is a minor of each of the matroids $F_7$, $F_7^*$, $M(K_5)$, $M^*(K_5)$, $M(K_{3,3})$ and $M^*(K_{3,3})$, $M$ contains none of these as minors. Therefore, $M = M({\mathcal G})$ for some planar graph ${\mathcal G}$ (cf.\ \cite[Theorem~13.3.1 and Proposition~5.2.6]{oxley}). Evidently, we may take ${\mathcal G}$ to be connected as a graph.
Since ${\mathcal P}_{1,q}$ is closed under direct sums, we may assume that $M$ is 2-connected. Therefore, ${\mathcal G}$ is either a graph consisting of a single vertex with a self-loop incident with it, or ${\mathcal G}$ is a loopless graph. In the former case, $M \cong U_{0,1}$, which is in ${\mathcal P}_{1,q}$. So, we may assume that ${\mathcal G}$ is loopless. If ${\mathcal G}$ has exactly two vertices, then $M \cong U_{1,n}$ for some $n$, which is also in ${\mathcal P}_{1,q}$. Hence, we may assume that $|V({\mathcal G})| \geq 3$, in which case, ${\mathcal G}$ is 2-connected as a graph \cite[Corollary~8.2.2]{oxley}. Moreover, if ${\mathcal G}^*$ is any geometric dual of ${\mathcal G}$, then, since $M^* = M({\mathcal G}^*)$ is 2-connected, by the same argument as above, we may assume that ${\mathcal G}^*$ is also 2-connected as a graph. \begin{figure}[t] \centering{\epsfig{file=umbrella.eps, width=4.5cm}} \caption{An ``umbrella'' graph. A dotted line between a pair of vertices represents zero or more parallel edges between them.} \label{umbrella} \end{figure} At this point, we need the following definition. We call a graph an \emph{umbrella} if it is of the form shown in Figure~\ref{umbrella}. Formally, an umbrella is a graph $H$ that consists of a circuit on $m+1$ vertices $u_0,u_1,\ldots,u_m$, and in addition, for each $i \in [m]$, zero or more parallel edges between $u_0$ and $u_i$. Note that $H - u_0$ is a simple path, where $H - u_0$ denotes the graph obtained from $H$ by deleting the vertex $u_0$ and all edges incident with it. Returning to our proof, we have $M = M({\mathcal G})$ for a loopless, 2-connected, planar graph ${\mathcal G}$, such that any geometric dual of ${\mathcal G}$ is also 2-connected. \\[-6pt] \begin{lemma} ${\mathcal G}$ has a geometric dual ${\mathcal G}^*$ that is isomorphic to an umbrella. \\[-6pt] \label{umb_lemma1} \end{lemma} We prove the lemma using the concept of an outerplanar graph. A planar graph is said to be \emph{outerplanar} if it has a planar embedding in which every vertex lies on the exterior (unbounded) face. We will refer to such a planar embedding of the graph as an \emph{outerplanar embedding}. Outerplanar graphs were characterized by Chartrand and Harary \cite{CH67} as graphs that do not contain $K_4$ or $K_{2,3}$ as a minor. \\[-6pt] \emph{Proof of Lemma~\ref{umb_lemma1}\/}: Since $M({\mathcal G})$ contains no $M(K_4)$- or $M(K_{2,3})$-minor, ${\mathcal G}$ cannot contain $K_4$ or $K_{2,3}$ as a minor. Therefore, by the Chartrand-Harary result mentioned above, ${\mathcal G}$ is outerplanar. Let ${\mathcal G}^*$ be the geometric dual of an outerplanar embedding of ${\mathcal G}$. Let $x$ be the vertex of ${\mathcal G}^*$ corresponding to the exterior face of the outerplanar embedding of ${\mathcal G}$. By a result of Fleischner \emph{et al.}\ \cite[Theorem~1]{FGH74}, ${\mathcal G}^* - x$ is a forest. In fact, since ${\mathcal G}^*$ is 2-connected, ${\mathcal G}^* - x$ is a tree. We claim that no vertex of ${\mathcal G}^* - x$ has degree greater than two, and hence, ${\mathcal G}^* - x$ is a simple path. Indeed, suppose that ${\mathcal G}^* - x$ has a vertex $u$ adjacent to three other vertices $v_1,v_2,v_3$. Since $G^*$ is 2-connected, there are paths $\pi_1$, $\pi_2$ and $\pi_3$ in ${\mathcal G}^*$ from $v_1$, $v_2$ and $v_3$, respectively, to $x$ that do not pass through $u$. Also, since ${\mathcal G}^* - x$ is a tree, these paths must be internally disjoint in ${\mathcal G}^*$. 
The graph ${\mathcal G}^*$ thus has a subgraph as depicted in Figure~\ref{deg3_obs}. But this subgraph is obviously contractible to $K_{2,3}$, and hence ${\mathcal G}^*$ has $K_{2,3}$ as a minor. However, this is impossible, as $M^* = M({\mathcal G}^*)$ does not have $M(K_{2,3})$ as a minor. \begin{figure}[!t] \centering{\epsfig{file=deg3_obstruction.eps, width=3.5cm}} \caption{If ${\mathcal G}^* - x$ has a vertex of degree at least 3, then ${\mathcal G}^*$ has a $K_{2,3}$ minor.} \label{deg3_obs} \end{figure} Thus, ${\mathcal G}^* - x$ is a simple path. The two degree-one vertices (end-points) of this path must be adjacent to $x$ in ${\mathcal G}^*$; otherwise, ${\mathcal G}^*$ is not 2-connected. It follows that ${\mathcal G}^*$ is isomorphic to an umbrella. \endproof \mbox{}\\[-6pt] To complete the proof of Theorem~\ref{P1q_thm}, we show that $M({\mathcal G}^*) \in {\mathcal P}_{1,q}$, so that by duality, $M = M^*({\mathcal G}^*) \in {\mathcal P}_{1,q}$. This is done by the following lemma. \begin{lemma} If $H$ is an umbrella, then $M(H) \in {\mathcal P}_{1,q}$. \label{umb_lemma2} \end{lemma} \begin{proof} Let $H$ be an umbrella on $m+1$ vertices $u_0, u_1, \ldots, u_m$, where $u_0$ is the vertex such that $H - \{u_0\}$ is a simple path. For $i \in [m]$, let $E_i$ denote the set of edges between $u_0$ and $u_i$. Also, for $j \in [m-1]$, let $e_j$ denote the edge between $u_j$ and $u_{j+1}$. Consider any ordering of the edges of $H$ that induces the ordered partition $$ (E_1,e_1,E_2,e_2,\ldots,E_{m-1},e_{m-1},E_m). $$ Let $J = \left(\bigcup_{i=1}^{j-1} (E_i \cup \{e_i\})\right) \cup X$, with $X \subset E_j$ ($X$ may be empty). Note that the subgraph, $H[J]$, of $H$ induced by the edges in $J$ is incident only with vertices in $\{u_0,u_1,\ldots,u_j\}$. Therefore, setting $M = M(H)$, $r_M(J) = |V(H[J])|-1 \leq j$. Similarly, the subgraph of $H$ induced by the edges in $E(H) - J$ is incident only with vertices in $\{u_j,u_{j+1},\ldots,u_m,u_0\}$, and so, $r_M(E(H)-J) \leq m-j+1$. Thus, $\lambda_M(J) \leq j + (m-j+1) - m = 1$, and it follows that $\text{pw}(M) \leq 1$. Being graphic, $M$ is ${\mathbb F}_q$-representable, and hence, $M \in {\mathcal P}_{1,q}$. \end{proof} \mbox{} \\[-6pt] This completes the proof of Theorem~\ref{P1q_thm}. \\[-6pt] As a corollary to the theorem, we give a coding-theoretic characterization of the code family ${\mathfrak C}({\mathcal P}_{1,q})$. In coding theory, an ${\mathbb F}_q$-representation of a uniform matroid is called a \emph{maximum-distance separable (MDS) code}. For any field ${\mathbb F}_q$, the matrices $G_4$, $G_{2,3}$ and $G_{2,3}^*$ below are ${\mathbb F}_q$-representations of $M(K_4)$, $M(K_{2,3})$ and $M^*(K_{2,3})$, respectively. $$ G_4 = \left[ \begin{array}{cccccc} 1 & 0 & 0 & 1 & 0 & -1 \\ 0 & 1 & 0 & 1 & 1 & -1 \\ 0 & 0 & 1 & 0 & 1 & -1 \\ \end{array} \right]; $$ $$ G_{2,3} = \left[ \begin{array}{cccccc} 1 & 0 & 0 & 0 & -1 & -1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 \end{array} \right]; \ \ G^*_{2,3} = \left[ \begin{array}{cccccc} 1 & -1 & 0 & -1 & 1 & 0 \\ 1 & 0 & -1 & -1 & 0 & 1 \\ \end{array} \right]. $$ The matroids $M(K_4)$, $M(K_{2,3})$ and $M^*(K_{2,3})$, being binary, are uniquely representable over ${\mathbb F}_q$, in the matroid-theoretic sense \cite[Section~6.3 and Theorem~10.1.3]{oxley}. We let ${\mathcal C}(K_4)$, ${\mathcal C}(K_{2,3})$ and ${\mathcal C}(K_{2,3})^\perp$ denote the codes over ${\mathbb F}_q$ generated by the matrices $G_4$, $G_{2,3}$ and $G_{2,3}^*$, respectively. 
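As an aside, the pathwidth computations above are easy to verify by brute force for such small instances. The following sketch is our own illustration (assuming Python with \texttt{numpy}; ranks are computed over the rationals, which suffices for the examples shown, while for a representation over ${\mathbb F}_q$ one would substitute a finite-field rank routine). It evaluates $w_M$ directly from a representing matrix and confirms that $\text{pw}(U_{2,4}) = 2$, as in Proposition~\ref{TC1_prop1}, and that the cycle matroid of a triangle, the smallest umbrella, has pathwidth $1$, consistent with Lemma~\ref{umb_lemma2}.

\begin{verbatim}
# Brute-force matroid pathwidth from a representing matrix (a sketch).
from itertools import permutations
import numpy as np

def rank(A, cols):
    cols = list(cols)
    return np.linalg.matrix_rank(A[:, cols]) if cols else 0

def width(A, order):
    # w_M(order) = max over proper prefixes of the connectivity function
    # lambda_M(prefix) = r(prefix) + r(complement) - r(E).
    n = A.shape[1]
    r_E = rank(A, range(n))
    return max(rank(A, order[:j]) + rank(A, order[j:]) - r_E
               for j in range(1, n))

def pathwidth(A):
    n = A.shape[1]
    return min(width(A, p) for p in permutations(range(n)))

# U_{2,4}: four columns in general position in the plane.
U24 = np.array([[1, 0, 1, 1],
                [0, 1, 1, 2]])
print(pathwidth(U24))  # 2

# M(K_3), the cycle matroid of a triangle (an umbrella with m = 2).
K3 = np.array([[1, 0, 1],
               [0, 1, 1]])
print(pathwidth(K3))   # 1
\end{verbatim}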
\\[-6pt] \begin{corollary} Let ${\mathbb F}_q$ be an arbitrary finite field. A linear code ${\mathcal C}$ over ${\mathbb F}_q$ has trellis-width at most one iff it contains no minor equivalent to any of the following: \begin{itemize} \item[(i)] a $[4,2]$ MDS code; \item[(ii)] a code obtainable by applying an automorphism of ${\mathbb F}_q$ to one of the codes ${\mathcal C}(K_4)$, ${\mathcal C}(K_{2,3})$ and ${\mathcal C}(K_{2,3})^\perp$. \\[-6pt] \end{itemize} \label{CP1q_cor} \end{corollary} The problem of finding the complete set of excluded minors for ${\mathcal P}_{w,q}$ quickly becomes difficult for $w > 1$. The main obstacle is that we may only assume the basic property of 2-connectedness for such excluded minors. The class ${\mathcal P}_{w,q}$ is not even closed under 2-sums, so excluded minors for the class need not be 3-connected. An illustration of this is given by the following result, which provides a partial list of excluded minors for ${\mathcal P}_{2,q}$. \\[-6pt] \begin{figure}[!t] \centering{\epsfig{file=TC2_ex_minors.eps, width=12cm}} \caption{Some of the planar graphs whose cycle matroids are excluded minors for ${\mathcal P}_{2,q}$.} \label{P2q_ex_minors} \end{figure} \begin{proposition} For any finite field ${\mathbb F}_q$, the matroids $F_7$, $F_7^*$, $M(K_5)$, $M^*(K_5)$, $M(K_{3,3})$, $M^*(K_{3,3})$, and $M({\mathcal G})$, where ${\mathcal G}$ is any of the planar graphs in Figure~\ref{P2q_ex_minors}, are excluded minors for ${\mathcal P}_{2,q}$. If $q \geq 4$, then $U_{3,6}$ is also an excluded minor for ${\mathcal P}_{2,q}$. \\[-6pt] \label{P2q_prop} \end{proposition} We omit the proof, as it is only a matter of verifying that for each matroid $M$ listed in the proposition, $M \notin {\mathcal P}_{2,q}$, but $M \punc e, M \shorten e \in {\mathcal P}_{2,q}$ for any $e \in E(M)$. We point out that the cycle matroids of all but the three leftmost graphs in Figure~\ref{P2q_ex_minors} are not 3-connected. \section*{Acknowledgment} The author would like to thank Jim Geelen for contributing some of his ideas to this paper, and Alexander Vardy for pointers to the prior literature on trellis complexity.
{ "timestamp": "2007-05-10T05:00:54", "yymm": "0705", "arxiv_id": "0705.1384", "language": "en", "url": "https://arxiv.org/abs/0705.1384", "abstract": "We relate the notion of matroid pathwidth to the minimum trellis state-complexity (which we term trellis-width) of a linear code, and to the pathwidth of a graph. By reducing from the problem of computing the pathwidth of a graph, we show that the problem of determining the pathwidth of a representable matroid is NP-hard. Consequently, the problem of computing the trellis-width of a linear code is also NP-hard. For a finite field $\\F$, we also consider the class of $\\F$-representable matroids of pathwidth at most $w$, and correspondingly, the family of linear codes over $\\F$ with trellis-width at most $w$. These are easily seen to be minor-closed. Since these matroids (and codes) have branchwidth at most $w$, a result of Geelen and Whittle shows that such matroids (and the corresponding codes) are characterized by finitely many excluded minors. We provide the complete list of excluded minors for $w=1$, and give a partial list for $w=2$.", "subjects": "Discrete Mathematics (cs.DM); Information Theory (cs.IT)", "title": "Matroid Pathwidth and Code Trellis Complexity", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713848528318, "lm_q2_score": 0.7185943925708562, "lm_q1q2_score": 0.708010492315767 }
https://arxiv.org/abs/2205.08494
Covariance Estimation: Optimal Dimension-free Guarantees for Adversarial Corruption and Heavy Tails
We provide an estimator of the covariance matrix that achieves the optimal rate of convergence (up to constant factors) in the operator norm under two standard notions of data contamination: We allow the adversary to corrupt an $\eta$-fraction of the sample arbitrarily, while the distribution of the remaining data points only satisfies that the $L_{p}$-marginal moment with some $p \ge 4$ is equivalent to the corresponding $L_2$-marginal moment. Despite requiring the existence of only a few moments, our estimator achieves the same tail estimates as if the underlying distribution were Gaussian. As a part of our analysis, we prove a dimension-free Bai-Yin type theorem in the regime $p > 4$.
\section{Introduction} Estimation of the covariance matrix is a classic topic. In high-dimensional statistics the role of sample covariance matrices is central to Principal Component Analysis (PCA) and to linear least squares. Most of the existing work focuses on estimation of covariance matrices under different structural assumptions allowing minimax estimation in the high-dimensional setup. We refer to the line of work \cite{bickel2008covariance, bickel2008regularized, lam2009sparsistency, el2008operator, cai2010optimal, cai2013optimal} and the recent surveys \cite{cai2016estimating, el2018random}. A line of research on non-asymptotic guarantees for sample covariance matrices was initiated by the question of Kannan, Lov{\'a}sz and Simonovits \cite{kannan1997random} on the computation of the volume of a convex body. For the class of log-concave measures the optimal $\sqrt{\frac{d}{N}}$ ($d$ is the dimension and $N$ is the sample size) rate of convergence in operator norm was first obtained in the renowned work of Adamczak, Litvak, Pajor and Tomczak-Jaegermann \cite{adamczak2010quantitative}. Since then, several authors have focused on making fewer assumptions on the distributions \cite{srivastava2013covariance, mendelson2012generic, mendelson2014singular, guedon2017interval}. The best known result in this direction is due to K. Tikhomirov \cite{tikhomirov2018sample}, who proved the optimal rate of convergence $\sqrt{\frac{d}{N}}$ for the sample covariance matrix assuming only the existence of $p > 4$ moments. However, as discussed by Chen, Gao and Ren \cite{chen2018robust}, even a single outlier in the sample can compromise the statistical performance of the sample covariance matrix. Thus, one is interested in estimators of the covariance matrix robust to adversarial contamination of the data \cite{cheng2019faster, minsker2022robust}. For a standard perspective on robustness, building on the ideas of contaminated models, influence functions and breakdown points, we refer to the classical books \cite{hampel1971general, huber1981robust, rousseeuw1987robust}. On the other hand, there is a growing interest in getting dimension-free guarantees for estimating the covariance matrix. Given the covariance $\Sigma$, the \emph{effective rank} (see \cite{vershynin_2012}) of $\Sigma$ is defined as \[ \mathbf{r}(\Sigma) = \frac{\tr(\Sigma)}{\|\Sigma\|}, \] where, for the rest of the paper, $\|\cdot\|$ denotes the operator norm of a matrix and the Euclidean norm of a vector. Note that $1 \le \mathbf{r}(\Sigma) \le d$, and $\mathbf{r}(\Sigma)$ can be much smaller than $d$: for instance, if $\Sigma = \operatorname{diag}(1, 1/d, \ldots, 1/d)$, then $\mathbf{r}(\Sigma) = 2 - 1/d < 2$ in any dimension $d$. Koltchinskii and Lounici \cite{koltchinskii2017operators} proved the optimal high probability bound for the sample covariance matrix in the Gaussian case that depends on the effective rank rather than the dimension. Since then, their result has been recovered and extended multiple times via different techniques \cite{van2017structured,Vershynin2016HDP, zhivotovskiy2021dimension, koltchinskii2020asymptotically}. Finally, we mention the recent interest in getting the so-called \emph{sub-Gaussian estimators} when the data is heavy-tailed. This direction was initiated by O. Catoni in \cite{catoni2012challenging}, where the sub-Gaussian estimation of the mean of a random variable is considered. Speaking informally, one aims to construct statistical estimators performing as well as the sample mean does for the Gaussian distribution, while making assumptions on the distribution that are as weak as possible. For a recent survey with focus on multivariate mean estimation, we refer to \cite{lugosi2019mean}.
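Returning to the point about contamination made above, the following minimal numerical sketch, which is our own illustration (assuming Python with \texttt{numpy}) and not part of the estimators developed below, shows how a single corrupted point inflates the operator-norm error of the sample covariance matrix.

\begin{verbatim}
# One adversarially placed point ruins the sample covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
N, d = 2000, 10
X = rng.normal(size=(N, d))      # true covariance is I_d

def sample_cov_error(X):
    S = (X.T @ X) / X.shape[0]   # sample covariance (zero-mean data)
    return np.linalg.norm(S - np.eye(d), ord=2)

print(sample_cov_error(X))       # of order sqrt(d/N), about 0.07
X[0] = 100.0 * np.ones(d)        # replace a single point
print(sample_cov_error(X))       # about 100^2 * d / N = 50
\end{verbatim}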
The central ideas behind robust mean estimation have found applications in many related problems such as regression \cite{hsu2016heavy, brownlees2015empirical, lugosi2019risk, chinot2019robust, mendelson2019unrestricted, mourtada2022distribution}, covariance estimation \cite{catoni2016pac, catoni2017dimension, mendelson2020robust, ostrovskii2019affine, hardle2021robustifying, minsker2022robust}, and clustering \cite{klochkov2021robust}. For related results in the context of covariance estimation for heavy-tailed distributions, we refer to the recent survey \cite{ke2019user}. Our goal is to provide an estimator that simultaneously achieves all the properties described above: \begin{itemize} \item We allow adversarial contamination and recover the optimal dependence on the contamination level based on the number of moments of the underlying distribution. \item Our bounds do not contain unnecessary logarithmic factors, and the convergence rates coincide with the classical asymptotic result of Bai and Yin \cite{baiyin1994}, provided that the distribution has at least four moments. \item The convergence rates scale with the effective rank $\mathbf{r}(\Sigma)$ rather than the dimension $d$. \item We allow distributions satisfying certain weak norm equivalence assumptions instead of the more restrictive Gaussian/log-concave assumptions appearing in the literature. At the same time, we provide the same high probability bounds as if the data were Gaussian. \end{itemize} We begin with the following definition. We say that the distribution of a zero mean random vector $X$ satisfies the $L_p-L_2$ \emph{norm equivalence} (\emph{hypercontractivity}), if for all $v \in \mathbb{R}^d$ and $2 \le q \le p$, \begin{equation} \label{eq:momeqv} ({\mathbb E}|\langle X, v\rangle|^{q})^{1/q} \le \kappa(q)({\mathbb E}|\langle X, v\rangle|^{2})^{1/2}, \end{equation} where $\kappa(\cdot)$ is a function of $q$ and $\langle\cdot, \cdot\rangle$ denotes the inner product. Without loss of generality we assume that $\kappa$ is a non-decreasing function. We say that the sample $\widetilde X_1, \ldots, \widetilde X_N$ is $\eta$-\emph{corrupted} if it is obtained from the sample $X_1, \ldots, X_N$ of independent copies of $X$ by replacing at most $\eta N$ points by arbitrary vectors that might depend on $X_{1}, \ldots, X_N$. This corruption model is described in detail in \cite{lugosi2021robust} and captures the standard setups in robust statistics such as the Huber contamination model \cite{huber1964}. This model is sometimes called the model of $\eta$-\emph{corruption} or the \emph{strong contamination model} \cite{diakonikolas2019recent}. We present a simplified version of our main result. \begin{theorem}[Informal] \label{thm:informalmain} Assume that $X$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_p-L_2$ \emph{norm equivalence} with $p \ge 4$. Fix the corruption level $\eta \in [0, 1]$ and the confidence level $\delta \in (0, 1)$. Assume that $\widetilde{X}_{1}, \ldots, \widetilde{X}_N$ is an $\eta$-corrupted sample.
There is an estimator $\widehat{\Sigma}_{\eta, \delta} = \widehat{\Sigma}_{\eta, \delta}(\widetilde{X}_{1}, \ldots, \widetilde{X}_N)$ depending on $\eta, \delta$ such that if $N \ge c(p)(\mathbf{r}(\Sigma) + \log(1/\delta))$, then with probability at least $1 - \delta$, it holds that \[ \left\|\widehat{\Sigma}_{\eta, \delta} - \Sigma\right\| \le C\|\Sigma\|\left(\sqrt{\frac{\mathbf{r}(\Sigma) + \log(1/\delta)}{N}} + \kappa(p)^2\eta^{1-2/p}\right), \] where $C > 0$ is a constant that depends only on the value $\kappa(4)$, and $c(p)$ depends only on $p$ and $\kappa(p)$. Moreover, under these assumptions, no estimator can perform better up to multiplicative constant factors. \end{theorem} We show that the term $\kappa(p)^2\eta^{1-2/p}$ scales as $\sqrt{\eta}$ when $p = 4$ and as $\eta\log(1/\eta)$ for sub-Gaussian distributions. We also show that both rates cannot be improved. A detailed version of Theorem \ref{thm:informalmain} with the corresponding explicit estimators is stated in Theorem \ref{thm:thecasepfour} in the regime $p = 4$ and in Theorem \ref{thm:thecasep_greater_four} in the regime $p > 4$, respectively. For a detailed discussion of the optimality of our result, we refer to Section \ref{sec:optimality}. Since our estimator is robust both to heavy tails and adversarial corruption, it should necessarily differ from the sample covariance matrix. However, it still has a relatively simple form. The estimator depends on a specifically tuned scalar $\lambda > 0$, which in turn depends on some parameters including $\delta$ and $\eta$ (the details are postponed to Theorem \ref{thm:thecasepfour} and Theorem \ref{thm:thecasep_greater_four}). We define the truncation function \begin{equation} \label{eq:truncfunction} \psi(x) = \begin{cases} x,\quad &\textrm{for}\; x \in [-1, 1]; \\ \operatorname{sign}(x),\quad &\textrm{for}\; |x| > 1. \end{cases} \end{equation} Then, when $p = 4$, our estimator has the following form \begin{equation} \label{eq:minmaxestimator} \widehat{\Sigma}_{\eta, \delta}= \mathop{\mathrm{argmin}}\limits_{\widehat{\Sigma} \in \mathbb{S}_{+}^d}\sup\limits_{v \in S^{d - 1}}\left|\frac{1}{\lambda N}\sum\limits_{i = 1}^N\psi(\lambda\langle \widetilde X_i, v\rangle^2) - v^{\top}\widehat{\Sigma} v \right|, \end{equation} where $\mathbb{S}_{+}^d$ is the set of $d$ by $d$ positive semi-definite matrices and $S^{d - 1}$ denotes the unit sphere in $\mathbb{R}^d$. A simple observation is that, as $\lambda \to 0$, our minimizer coincides with the sample covariance matrix $\frac{1}{N}\sum\nolimits_{i = 1}^N\widetilde X_i\otimes \widetilde X_i$. For a fixed value of $\lambda$, our estimator can be seen as a second-order extension of the classical trimmed mean estimator \cite{tukey1963less, bickel1965some, stigler1973asymptotic, lugosi2021robust}. We aim to remove a fraction of extreme observations and then average over the remaining sample. The only difference is that, in the matrix case, we need to take into account all possible directions in the unit sphere.
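To illustrate the effect of the truncation numerically, the following sketch is our own illustration (assuming Python with \texttt{numpy}); it evaluates the truncated quadratic form appearing in \eqref{eq:minmaxestimator} along a single fixed direction for one particular choice of $\lambda$, rather than solving the min--max problem itself, and the tuning constants are chosen for illustration only.

\begin{verbatim}
# The truncated functional (1/(lam*N)) sum_i psi(lam * <X_i, v>^2)
# versus the plain empirical second moment along a fixed direction v,
# on a standard Gaussian sample with one corrupted point.
import numpy as np

rng = np.random.default_rng(1)

def psi(x):
    # truncation function: identity on [-1, 1], sign(x) outside
    return np.clip(x, -1.0, 1.0)

def truncated_second_moment(X, v, lam):
    return np.mean(psi(lam * (X @ v) ** 2)) / lam

N, d = 2000, 10
X = rng.normal(size=(N, d))       # E <X, v>^2 = 1 for unit v
X[0] = 1e3 * np.ones(d)           # a single corrupted point
v = np.zeros(d); v[0] = 1.0

lam = np.sqrt(d / N)              # lambda of order sqrt(r(Sigma)/N)
print(np.mean((X @ v) ** 2))      # ruined by the outlier, about 500
print(truncated_second_moment(X, v, lam))  # close to 1
\end{verbatim}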
The result of Theorem \ref{thm:informalmain} has multiple advantages over the best known results in the literature: \begin{itemize} \item \textbf{Adversarial corruption:} One of the well-known results in the adversarial corruption setup is due to Chen, Gao and Ren \cite{chen2018robust}. They analyze a version of Tukey's median under the restrictive assumption of elliptical distributions satisfying certain growth conditions. Moreover, their results are dimension-dependent. See \cite{minsker2022robust} for some recent extensions. \item \textbf{Heavy-tails:} The best-known result in the literature in the heavy-tailed setup (the only assumption is $L_4-L_2$ hypercontractivity) is due to S. Mendelson and the second author \cite{mendelson2020robust}. However, their results can be further improved. First, there is an additional logarithmic factor $\log \mathbf{r}(\Sigma)$ due to the application of the non-commutative Bernstein inequality in the analysis\footnote{We note that under the $L_4-L_2$ norm equivalence the logarithmic factor can be removed, as follows from previous arguments. In particular, without adversarial contamination the technique of Catoni and Giulini \cite[Proposition 4.1]{catoni2017dimension} can be adapted to recover the desired rate; their estimator has a complicated form and depends on some additional parameters of the distribution. Similarly, for the trimmed mean the result of the second author \cite[Lemma 5]{zhivotovskiy2021dimension} can be used to prove the desired bound. Our analysis adapts the latter derivations in several proofs.}. Second, because their estimator is based on Median-of-Means, it is not easy to get the dependence on $\eta$ better than $\sqrt{\eta}$ in Theorem \ref{thm:informalmain}\footnote{The best known dependence on $\eta$ in multivariate mean estimation via Median-of-Means scales as $\eta^{3/4}$ and appears in \cite[Section 3.1]{minsker2018uniform}.}. \end{itemize} The most involved step is to get the correct dependence on the corruption level $\eta$ when $p > 4$ in \eqref{eq:momeqv}. One of the technical results used in this paper is a dimension-free version of the classical Bai-Yin theorem \cite{baiyin1994}. In a nutshell, the result of Bai and Yin implies that if $X_1, \ldots, X_N$ are independent copies of a zero mean random vector $X$ in $\mathbb{R}^d$ with unit covariance and with independent, identically distributed coordinates having bounded fourth moments, that is $p = 4$, then \[ \left\|\frac{1}{N}\sum\limits_{i = 1}^NX_i\otimes X_i - I_d\right\| \to \sqrt{\frac{d}{N}} \] almost surely as $d, N \to \infty$ so that $d/N \to \beta \in (0, 1]$. Moreover, if the distribution has only $p < 4$ moments, no convergence at such a rate is possible. Our result is non-asymptotic and somewhat stronger, since we do not require that the coordinates of $X_i$ are independent. Moreover, our focus is on a dimension-free bound. At the same time, we require that $p > 4$, and the constant in the bound depends on how well $p - 4$ is separated from zero. \begin{theorem}[A dimension-free Bai-Yin type theorem] \label{thm:baiyin} Assume that $Y$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_p-L_2$ \emph{norm equivalence} with $p > 4$. Let $Y_1, \ldots, Y_N$ be a sample of independent copies of $Y$. Consider the truncated vectors $X_i = Y_i\ind{\|Y_i\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}}$ for $i = 1, \ldots, N$. If $N \ge c(p)\mathbf{r}(\Sigma)$, then it holds that \[ {\mathbb E}\left\|\frac{1}{N}\sum\limits_{i = 1}^N \left(X_i \otimes X_i - {\mathbb E} X_i\otimes X_i\right)\right\| \le C(p)\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}, \] where $c(p)$ and $C(p)$ are non-increasing and both satisfy $C(p), c(p) \to \infty$ as $p \to 4$. \end{theorem} A natural attempt to prove this bound would be to apply the matrix Bernstein inequality (see the survey \cite{tropp15} for more details on matrix concentration inequalities) as in \cite{mendelson2020robust}.
However, in this case, we would get an additional multiplicative $\log \mathbf{r}(\Sigma)$-term that does not allow us to recover the optimal Bai-Yin bound in Theorem \ref{thm:informalmain} for $p > 4$. The proof of Theorem \ref{thm:baiyin} is based on the combination of the approach developed by K. Tikhomirov \cite{tikhomirov2018sample} with the variational approach applied by the second author in \cite{zhivotovskiy2021dimension}. \begin{remark} Whether the bound of Theorem \ref{thm:baiyin} holds for $p = 4$ is a well-known open problem (see \cite{vershynin2012close} and the survey \cite{vershynin_2012}), even in the isotropic case where $\Sigma = I_d$. The analysis of Theorem \ref{thm:informalmain} bypasses the bound of Theorem \ref{thm:baiyin} in the regime where $p = 4$. This makes our analysis different from the approach used in \cite{mendelson2020robust}. \end{remark} \paragraph{On the breakdown point of our estimator.} Informally speaking, the breakdown point of an estimator is defined as the largest proportion of outliers in the data for which the estimator gives a non-vacuous result \cite{hampel1971general}. The form of the bounds we are interested in is a high probability bound of the form $\|\widehat\Sigma - \Sigma\| \le \varepsilon \|\Sigma\|$, where $\widehat \Sigma$ is our estimator and $\varepsilon > 0$ is a precision parameter. Since $\widehat \Sigma = 0$ satisfies this inequality with $\varepsilon = 1$, in Theorem \ref{thm:informalmain} we focus on $\eta \in [0, c]$, where $c$ is a small enough constant so that, for a large enough sample, we have $\varepsilon < 1$. \paragraph{Practical considerations.} Our main focus is on the statistical properties of the estimator. In fact, we aim to achieve the best possible statistical performance, computational questions aside. While there is some algorithmic progress in the case of adversarial contamination (see the survey \cite{diakonikolas2019recent}), a recent paper of Hopkins, Li and Zhang \cite{hopkins2020robust} provides evidence that achieving the same for heavy-tailed distributions is computationally hard, at least when Median-of-Means estimators are used. For some practical estimators in the context of heavy-tailed covariance estimation, we refer to \cite{ke2019user, hardle2021robustifying, minsker2022robust}. \paragraph{Technical overview.} Our analysis extends three separate arguments in the literature. First, we prove a dimension-free version of the Bai-Yin theorem (Theorem \ref{thm:baiyin}) that allows us to eliminate unnecessary logarithmic factors in our bounds. The proof of this result consists of two parts: the analysis of the so-called \emph{peaky part} is based on the arguments of K. Tikhomirov \cite{tikhomirov2018sample}, which in itself improves the line of research \cite{bourgain1996random, adamczak2010quantitative, mendelson2014singular, guedon2017interval}. The analysis of the \emph{spread part} is based on the variational inequality techniques applied by the second author in \cite{zhivotovskiy2021dimension}. The latter approach traces back to the works of O. Catoni and co-authors on robust mean estimation \cite{catoni2007pacbayes, audibert2011robust, catoni2016pac, catoni2017dimension}. Extending the techniques in \cite{zhivotovskiy2021dimension}, we prove Theorem \ref{thm:informalmain} in the case where $p = 4$. Proposition \ref{prop:estlargesteigenvalue} provides an optimal estimator of the largest eigenvalue of the covariance matrix.
Finally, the proof of Theorem \ref{thm:informalmain} in the regime $p > 4$ is based on combining Theorem \ref{thm:baiyin} with a second-order version of the multivariate trimmed mean estimator of Lugosi and Mendelson \cite{lugosi2021robust}. Our lower bounds are based on reductions to the corresponding lower bounds in the multivariate mean estimation setup. \paragraph{Structure of the paper.} The rest of the paper is organized as follows. In Section \ref{sec:baiyin}, we present a proof of Theorem \ref{thm:baiyin}. In Section \ref{sec:pfour}, we provide a proof of Theorem \ref{thm:informalmain} in the regime $p = 4$. In Section \ref{sec:opernorm}, we provide an auxiliary result on estimating the largest eigenvalue of the covariance matrix. Then, using Theorem \ref{thm:baiyin}, in Section \ref{sec:pmorefour} we give a proof of Theorem \ref{thm:informalmain} in the regime $p > 4$. We conclude with a detailed discussion of the lower bounds, showing the optimality of our results, in Section \ref{sec:optimality}. \paragraph{Notation.} Throughout the proofs, $C(p)$ and $c(p)$ denote constants depending only on $p$ and (possibly) on $\kappa(p)$, where $\kappa(\cdot)$ is given by \eqref{eq:momeqv}. The exact values of $C(p)$ and $c(p)$ may change from line to line. For an integer $N$, we set $[N] = \{1, \ldots, N\}$. Let $\ind{A}$ denote the indicator of an event $A$. The symbol $\mathbb{R}_{+}$ denotes the set of positive reals. For any two functions (or random variables) $f, g$ defined on some common domain, the notation $f \lesssim g$ means that there is an absolute constant $c$ such that $f \le cg$, and $f\sim g$ means that $f\lesssim g$ and $g\lesssim f$. For $(x_i)_{i = 1}^m \in \mathbb{R}^m$, the sequence $(x_i^*)_{i = 1}^m \in \mathbb{R}^m$ is the non-increasing rearrangement of $(|x_i|)_{i = 1}^m$. For a set $I \subseteq [N]$, let $I^c = [N]\setminus I$. Let $\mathbb{S}_{+}^d$ denote the set of $d$ by $d$ positive semi-definite matrices. The symbol $\|\cdot\|$ denotes the operator norm of a matrix or the Euclidean norm of a vector, depending on the context. The symbol $\|a\|_{0}$ denotes the number of non-zero components of the vector $a$. For a random variable $X$, let $\|X\|_{\infty}$ denote its essential supremum. \section{Proof of Theorem \ref{thm:baiyin}} \label{sec:baiyin} We start by proving the dimension-free version of the Bai-Yin theorem. Fix $\lambda > 0$. In the notation of Theorem \ref{thm:baiyin}, let us write the following decomposition: \begin{align*} \sup_{v\in S^{d-1}} \left| \frac{1}{N}\sum\nolimits_{i=1}^N \langle X_i,v\rangle^2 - {\mathbb E}\langle X,v\rangle^2\right| &\le \underbrace{\sup_{v\in S^{d-1}}\frac{1}{N}\sum\nolimits_{i=1}^N \langle X_i,v\rangle^2 \ind{\lambda\langle X_i,v\rangle^2>1}}_{\textrm{Peaky part}} \\ &\qquad+ \underbrace{\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum\nolimits_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right|}_{\textrm{Spread part}}~, \end{align*} where the terminology comes from \cite[Chapter 14.3]{Talagrand2014}. Our analysis consists of two steps: we first analyze the \emph{peaky part} and then the \emph{spread part}. \subsection{Dimension-free upper bound on the peaky part} To analyze this term we follow the strategy of K. Tikhomirov in \cite{tikhomirov2018sample}, which in itself improves a line of research \cite{bourgain1996random, adamczak2010quantitative, mendelson2014singular, Talagrand2014, guedon2017interval}.
Although this part of our proof follows their steps, we need several modifications to avoid any explicit dependence on the dimension. Following \cite{tikhomirov2018sample}, given two sets $C,I \subset [N]$ and a positive integer $k$, we define the quantities $f(k,C)$, $g(k,C,I)$ and $W_{v,i}$ as \begin{equation*} f(k,C)= \sup_{\substack{\|y\|_2=1 \\ \|y\|_0 \le k \\ \supp{y}\subset C }} \left\|\sum_{i=1}^N y_i X_i\right\|^2, \end{equation*} \begin{equation*} g(k,C,I)= \sup_{\substack{\|y\|_2=1 \\ \|y\|_0 \le k}} \sup_{\substack{\|z\|_2=1 \\ \|z\|_0 \le k}} \left\langle \sum_{i\in I\cap C}y_iX_i,\sum_{j\in I^c \cap C}z_jX_j \right\rangle, \end{equation*} \begin{equation} \label{eq:wvi} W_{v,i}= \left\langle X_i,\sum_{j=1}^N v_jX_j \right\rangle. \end{equation} In particular, following the standard derivation in \cite[Chapter 14.3]{Talagrand2014}, we have \begin{equation} \label{eq:relationfanda} \sup\limits_{v \in S^{d - 1}}\sup_{|I|\le k}\sum\limits_{i \in I}\langle v, X_i\rangle^2= \sup_{|I|\le k}\sup_{\sum_{i\in I}a_i^2 \le 1} \left\|\sum_{i\in I}a_iX_i\right\|^2 = f(k, [N]). \end{equation} We show below that, to control the peaky part, it is sufficient to upper bound the quantity $f(k, [N])$. The role of $g(k, C, I)$ is as follows: using a decoupling argument (see the derivation in \cite[page 11]{tikhomirov2018sample}), we obtain \[ f(k,C) \le \max_{i\le N} \|X_i\|^2 + 2^{-N+2} \sum_{I\subset[N]}g(k,C,I). \] Since $\max_{i\le N}\|X_i\|^2 \le (N\tr(\Sigma)\|\Sigma\|)^{1/2}$ by assumption, we restrict our attention to bounding $g(k,C,I)$. We use the following auxiliary bound. \begin{lemma} \label{lem:normofthevec} Let $Y \in \mathbb{R}^d$ be a zero mean random vector with covariance $\Sigma$ satisfying the $L_p-L_2$ norm equivalence for $p>4$ with $\kappa(\cdot)$ as in \eqref{eq:momeqv}. Denote $X = Y\ind{\|Y\| \le R}$, where $R > 0$, and let $\kappa = \kappa(4)$. Then, for any $t > 0$, \[ \mathbb{P}(\|X\|_2\ge t) \le \kappa^{4}\frac{\tr(\Sigma)^2}{t^4}. \] \end{lemma} \begin{proof} We have \begin{equation*} {\mathbb E}\|X\|_2^4 = {\mathbb E}\left(\sum\nolimits_{i=1}^d \langle X,e_i\rangle^2\right)^2 ={\mathbb E} \sum\nolimits_{i=1}^d \langle X,e_i\rangle^4 + {\mathbb E}\sum\nolimits_{k\neq l} \langle X,e_k\rangle^2\langle X,e_l\rangle^2. \end{equation*} We use that, for all $i\in [d]$, \[ ({\mathbb E}\langle X,e_i\rangle^4)^{1/4} \le ({\mathbb E}\langle Y,e_i\rangle^4)^{1/4}\le \kappa ({\mathbb E}\langle Y,e_i\rangle^2)^{1/2} = \kappa\,\Sigma_{ii}^{1/2}. \] Combining the two displays with H\"{o}lder's inequality applied to the cross terms, we obtain \begin{align*} {\mathbb E}\|X\|_2^4 &\le \kappa^4 \sum\nolimits_{i=1}^d \Sigma_{ii}^2 + \sum\nolimits_{k \neq l} ({\mathbb E}\langle X,e_k\rangle^4)^{1/2}({\mathbb E}\langle X,e_l\rangle^4)^{1/2} \\ &\le \kappa^4\left(\sum\nolimits_{j=1}^d \Sigma_{jj}^2 + \sum\nolimits_{k \neq l} \Sigma_{kk} \Sigma_{ll}\right) = \kappa^4 \tr(\Sigma)^2. \end{align*} Markov's inequality concludes the proof. \end{proof} Following \cite{tikhomirov2018sample}, the next step is to split the vectors $X_1, \ldots, X_N$ into several groups so that the vectors in each group are almost orthogonal. Fix $H > 0$. Consider the random graph $\mathcal G_{H} = ([N], E)$, where the set of edges is \[ E = \left\{(i, j): 1 \le i < j \le N, \quad |\langle X_i, X_j\rangle| > H\max\limits_{k \le N}\|X_k\|\right\}. \] If we color this graph so that no two adjacent vertices share the same color, then any two vectors of the same color have a small inner product.
Let $\chi(\mathcal G_{H})$ denote the chromatic number of $\mathcal G_{H}$. We now introduce the sets $\{\mathcal{C}_m^H\}_{m = 1}^N$ that partition $[N]$, such that $\mathcal{C}_m^H = \emptyset$ for $m > \chi(\mathcal G_{H})$ and each $\mathcal{C}_m^H$ contains the vertices of the same color. Following the proof of the dimension-dependent result, we provide a dimension-free bound on the chromatic number of this random graph. \begin{proposition} \label{prop:coloringprop} Let $Y \in \mathbb{R}^d$ be a zero mean random vector with covariance $\Sigma$ satisfying the $L_p-L_2$ norm equivalence for $p>4$ with $\kappa(\cdot)$ as in \eqref{eq:momeqv}. Define, for an arbitrary positive $R$, $X = Y\ind{\|Y\| \le R}$. Then, for any $H>0$ and any integer $m>1$, with probability at least $1-(\kappa(p)^p\|\Sigma\|^{p/2}NH^{-p})^{m-1} N\kappa(p)^4 \tr(\Sigma)^2 H^{-4}$, \[ \chi(\mathcal{G}_H) \le m. \] \end{proposition} \begin{proof} We consider the greedy coloring process. Let $Y(1), Y(2), \ldots$ be an auxiliary random process such that $Y(1)=1$ and \begin{equation*} Y(i)= \min \{ r\in \mathbb{N}: \forall j<i \ \text{with}\ Y(j)=r \ \text{we have}\ |\langle X_i,X_j\rangle|\le H\|X_j\| \}. \end{equation*} Note that, by definition, $\chi(\mathcal{G}_H) \le \max_{i\in [N]}Y(i)$. Next, for each $i>1$ and $m\ge 1$, we have \begin{align*} \mathbb{P}(Y(i)=m+1) &\le \mathbb{P}(\text{there is}\ l\le i-1 \ \text{such that} \ |\langle X_i,X_l\rangle|> H\|X_l\| \ \text{and}\ Y(l)=m)\\ & \le \sum_{l=1}^{i-1} \mathbb{P}(|\langle X_i,X_l\rangle|> H\|X_l\|\ \text{and}\ Y(l)=m)\\ & = \sum_{l=1}^{i-1} \mathbb{P}(|\langle X_i,X_l\rangle|> H\|X_l\|\ |\ Y(l)=m)\mathbb{P}(Y(l)=m)\\ & \le \kappa(p)^p\frac{\|\Sigma\|^{p/2}}{H^p}\sum_{l=1}^{i-1} \mathbb{P}(Y(l)=m) \\ & \le \kappa(p)^p\frac{\|\Sigma\|^{p/2}}{H^p} \mathbb{E}|\{j\le N: Y(j) = m\}|, \end{align*} where we used that $X_i$ is independent of the pair $(X_l, Y(l))$, together with Markov's inequality and \eqref{eq:momeqv}: \begin{align*} \mathbb{P}(|\langle X_i,X_l\rangle|> H\|X_l\|\ |\ Y(l)=m) &= \mathbb{P}(|\langle X_i,X_l\rangle|> H\|X_l\| \ \textrm{and} \ \|X_l\| \neq 0\ |\ Y(l)=m) \\ & \le \frac{\kappa(p)^{p}\|\Sigma\|^{p/2}}{H^{p}}. \end{align*} We obtain the following recursion: \begin{equation*} \mathbb{E}|\{j\le N: Y(j) = m+1\}| \le \kappa(p)^p\|\Sigma\|^{p/2}\frac{N}{H^p} \mathbb{E}|\{j\le N: Y(j) = m\}|. \end{equation*} Now we apply Lemma \ref{lem:normofthevec} and use the monotonicity of $\kappa(\cdot)$ to obtain \begin{equation*} \mathbb{E}|\{j\le N: Y(j) = 2\}| \le N \mathbb{P}(\|X\|>H) \le N\kappa(p)^4 \tr(\Sigma)^2 H^{-4}. \end{equation*} Combining the estimates above, we get \[ \mathbb{E}|\{j\le N: Y(j) = m+1\}|\le (\kappa(p)^p\|\Sigma\|^{p/2}NH^{-p})^{m-1} N\kappa(p)^4 \tr(\Sigma)^2 H^{-4}. \] Finally, we obtain \begin{align*} \mathbb{P}(\chi(\mathcal{G}_{H}) \ge m+1) &\le \mathbb{P}(\exists j\le N: \ Y(j)=m+1) \\ & \le \mathbb{E}|\{j\le N: Y(j) = m+1\}| \\ &\le (\kappa(p)^p\|\Sigma\|^{p/2}NH^{-p})^{m-1} N\kappa(p)^4 \tr(\Sigma)^2 H^{-4}. \end{align*} \end{proof} We need the following result. It can be seen as one of the main ingredients of the proof and follows from the so-called \emph{Sparsifying} lemma \cite[Lemma 4.1]{tikhomirov2018sample}. Our key observation here is that this lemma does not explicitly involve the dimension $d$. Recall that for $(x_i)_{i = 1}^m \in \mathbb{R}^m$ the sequence $(x_i^*)_{i = 1}^m \in \mathbb{R}^m$ is the non-increasing rearrangement of $(|x_i|)_{i = 1}^m$, and that $v$ is an $s$-sparse vector if $\|v\|_0 \le s$.
We remark that the statement below is deterministic and holds for any realization of $X_1, \ldots, X_N$. \begin{proposition}[Proposition 4.4 in \cite{tikhomirov2018sample}] \label{prop:sparsification} There exists an absolute constant $C>0$ such that the following holds. Fix $I \subseteq [N]$ and let $\gamma \in (0, 1/3)$ with $k \ge 24/\gamma^2$ and $N \ge 128C\gamma^{-2}k$. Set $t = \lfloor \log_2 \frac{\gamma^2 k}{24}\rfloor$ and $k_j = \lfloor \frac{k}{2^j}\rfloor$ for $0 \le j \le t$. There are subsets $\mathcal N_j, \mathcal N^{\prime}_j$ for $0 \le j \le t - 1$, supported on $I$ and $I^c$ respectively and consisting of unit $\gamma k_j$-sparse vectors, such that \[ \max\left\{|\mathcal N_j|, |\mathcal N^{\prime}_j| \right\} \le \left(\frac{CN}{\gamma k_j}\right)^{2\gamma k_j}, \] and for any $\mathcal C \subseteq [N]$, \[ g(k, \mathcal C, I) \le C\gamma^{-2}\left(\log k \max\limits_{i \neq j \in \mathcal C}|\langle X_i, X_j\rangle| + \sum\limits_{j = 0}^{t - 1}\sqrt{k_j}\left(\sup\limits_{u \in \mathcal N_j}\left(A_{u}\right)^*_{\left\lfloor \frac{k_{j + 1}}{16}\right\rfloor} + \sup\limits_{v \in \mathcal N^{\prime}_j}\left(B_{v}\right)^*_{\left\lfloor \frac{k_{j + 1}}{16}\right\rfloor}\right)\right), \] where the random vectors $A_u \in \mathbb{R}^{|I^c|}$ and $B_v \in \mathbb{R}^{|I|}$ are given by $A_u = (|W_{u, i}|)_{i \in I^c}$ and $B_v = (|W_{v, i}|)_{i \in I}$, and $W_{u, i}, W_{v, i}$ are given by \eqref{eq:wvi}. \end{proposition} The next result is a dimension-free analog of Proposition 5.1 in \cite{tikhomirov2018sample} in the regime $p > 4$. \begin{proposition} \label{prop:gfuncupperbound} There exists an absolute constant $C>0$ such that if $I\subset [N]$ with $|I|\le s$ is fixed and $\log \frac{N}{s}\ge C$, then simultaneously for all $\mathcal{C}\subset [N]$, with probability at least $1-1/N^3$, \begin{equation*} g(s,\mathcal{C},I) \le C\left(\log^2\frac{N}{s}\log s\max_{i\neq j \in \mathcal{C}}|\langle X_i,X_j\rangle| + p\kappa(p) \|\Sigma\|^{1/2}\log^2\frac{N}{s}\sqrt{s}\left( \frac{N}{s}\right)^{1/p} \sqrt{f(s, [N])}\right) . \end{equation*} \end{proposition} \begin{proof} We describe the required changes to the original proof. First, notice that if $s<C \log^2\frac{N}{s}$, then \begin{equation*} g(s,\mathcal{C},I)\le s\max_{i\neq j \in \mathcal{C}}|\langle X_i,X_j\rangle|<C \log^2\frac{N}{s}\max_{i\neq j \in \mathcal{C}}|\langle X_i,X_j\rangle|, \end{equation*} and the claim trivially follows. For the rest of the proof we assume $s\ge C \log^2\frac{N}{s}$. We define $\gamma=\frac{1}{\log(N/s)}$, $t=\lfloor\log_2\frac{\gamma^2 s}{C}\rfloor$ and $k_j=\lfloor\frac{s}{2^j}\rfloor$. We choose $C>24$ so that Proposition \ref{prop:sparsification} applies with this choice of $\gamma, t$ and $k_j$. For a fixed $0 \le j < t$, consider $u \in \mathcal N_j$, where $\mathcal N_j$ is given by Proposition \ref{prop:sparsification}. By definition, $u$ is supported on $I$. Therefore, recalling \eqref{eq:wvi}, for any $l \in I^{c}$ we have $W_{u,l}=\langle X_l,\sum_{i\in I}u_iX_i\rangle$. Observe that, conditionally on the realization of $X_i, i \in I$, the $W_{u,l}$, $l \in I^c$, are independent random variables. Therefore, by the norm equivalence assumption \eqref{eq:momeqv}, we have \[ {\mathbb E}[|W_{u,l}|^p|X_i, i\in I] \le \kappa(p)^p \|\Sigma\|^{p/2} \left\|\sum\nolimits_{i\in I}u_iX_i\right\|^p \le \kappa(p)^p \|\Sigma\|^{p/2}f(s, [N])^{p/2}. \] For the same vector $u$ and the same set $I$, consider the random vector \[ A_u = (|W_{u, l}|)_{l \in I^c}. \]
We apply the standard bound \cite[Lemma 2.5]{tikhomirov2018sample} to control the coordinates of $A_u$ with \[ \tau_j=(32e)^{1/p}\|\Sigma\|^{1/2}\kappa(p)\sqrt{f(s, [N])}\left(\frac{N}{k_{j+1}}\right)^{p^{-1}\left(1+256\gamma\right)} \] to obtain \begin{equation*} \begin{split} \mathbb{P}((A_u)^*_{\lfloor k_{j + 1}/16\rfloor} \ge \tau_j)&\le \left(\frac{e\kappa(p)^p\|\Sigma\|^{p/2}f(s, [N])^{p/2}N}{\tau_j^p\lfloor k_{j+1}/16\rfloor}\right)^{\lfloor k_{j+1}/16\rfloor}\\ & \le \left(\frac{k_{j+1}}{N}\right)^{4\gamma k_j}. \end{split} \end{equation*} Taking a union bound over the net $\mathcal{N}_j$, whose size is bounded in Proposition \ref{prop:sparsification}, we get, for some absolute constant $C_1 > 0$, \begin{equation*} \mathbb{P}\left(\sup\limits_{u \in \mathcal N_j}(A_u)^*_{\lfloor k_{j + 1}/16\rfloor}\ge \tau_j\right)\le \left(\frac{k_{j+1}}{N}\right)^{4\gamma k_j}|\mathcal{N}_j| \le \left(\frac{C_1k_j}{\gamma N}\right)^{2\gamma k_j}\le \left(\frac{k_j}{N}\right)^{\gamma k_j} \le \left(\frac{k_t}{N}\right)^{\gamma k_t} \le \frac{1}{N^4}, \end{equation*} where we assumed $N\ge C_1^2\gamma^{-2}k_j$. Repeating the same arguments for $\mathcal{N}^{\prime}_j$ and summing over all $j$ in Proposition \ref{prop:sparsification}, we obtain, with probability at least $1 - 1/N^3$, \[ g(s, \mathcal C, I) \lesssim \gamma^{-2}\left(\log s \max_{i\neq j \in \mathcal{C}}|\langle X_i,X_j\rangle| + \sum\nolimits_{j = 0}^{t - 1}\tau_j\sqrt{k_j}\right). \] The sum $\sum\nolimits_{j = 0}^{t - 1}\tau_j\sqrt{k_j}$ can be upper bounded in exactly the same way as at the end of the proof of Proposition 5.1 in \cite{tikhomirov2018sample}. Using $p > 4$, we conclude the proof. \end{proof} The next result applies Proposition \ref{prop:gfuncupperbound} to upper bound $f(s,\mathcal{C}_m^H)$, where $\mathcal{C}_m^H$ is a color class of the sample. \begin{lemma} \label{lem:individualcolor} Let $s\le N$ be fixed and assume that $\log\frac{N}{s}\ge C_1$, where $C_1>0$ is a large enough absolute constant. Let $\mathcal{C}_m^{H}$ be a color class of the sample $X_1,\ldots,X_N$ with threshold $H>0$. Then there exists an absolute constant $C>0$ such that, with probability at least $1-\frac{1}{N^2}$, \[ f(s,\mathcal{C}_m^H) \le \max_{i\le N}\|X_i\|^2 + C\log^2\frac{N}{s}\left(H\log s \max_{i\le N}\|X_i\| + p\kappa(p)\|\Sigma\|^{1/2}\sqrt{s}\left( \frac{N}{s}\right)^{1/p}\sqrt{f(s,[N])}\right). \] \end{lemma} \begin{proof} The proof follows exactly the same steps (with an application of Proposition \ref{prop:gfuncupperbound} in our case) as the proof of Lemma 5.2 in \cite{tikhomirov2018sample}, replacing the dimension (denoted by $n$ there) by $s$. \end{proof} Our final step is to upper bound the desired function $f(s,[N])$ using the upper bound on $f(s,\mathcal{C}_m^H)$ for the different color classes. Our new observation is that, since we are only interested in the regime $p > 4$, we can worsen the dependence on some logarithmic factors appearing in the corresponding result in \cite{tikhomirov2018sample}. This allows us to build our analysis on the somewhat weaker Lemma \ref{lem:normofthevec} (compare it with the dimension-dependent result in \cite[Lemma 2.6]{tikhomirov2018sample}). \begin{lemma} \label{lem:upboundcolor} Let $s,N$ satisfy $\log\frac{N}{s}\ge C_1$, where $C_1>0$ is a large enough absolute constant.
Then there exists $\chi = \chi(p)$ depending only on $p$ such that \begin{equation*} f(s,[N]) \lesssim \chi^2 \max_{i\le N}\|X_i\|^2+ \kappa(p)^2\|\Sigma\|s\left(\frac{N}{s}\right)^{4/(4+p)}\log^4\frac{N}{s} + \chi^2p^2\kappa(p)^2\|\Sigma\|s\log^4\frac{N}{s}\left( \frac{N}{s}\right)^{2/p}, \end{equation*} with probability at least $1-N^{(4 - p)(\chi - 2)/(p + 4)} s^{-(p^2(\chi- 1) + 4p)/(2p + 8)} (\log s)^{p(\chi - 1) + 4} \mathbf{r}^2(\Sigma)-\chi/N^2$. \label{lem:upboundcolor_statement} \end{lemma} \begin{proof} Recall that $\chi(\mathcal{G}_H)$ denotes the chromatic number of the random graph $\mathcal{G}_H$, whose color classes $\mathcal{C}_m^{H}$ partition the sample $X_1,\ldots,X_N$. We fix $H =\kappa\|\Sigma\|^{1/2}(\frac{N}{s})^{2/(4+p)} \frac{\sqrt{s}}{\log s}$, where $\kappa = \kappa(p)$, and apply Lemma \ref{lem:individualcolor} together with a union bound to obtain that, with probability at least $1-\chi/N^2$, \begin{align*} \frac{1}{\chi}\sum_{m=1}^{\chi} f(s,\mathcal{C}_m^{H}) &\le \max_{i\le N}\|X_i\|^2 \\ &\quad+ C\log^2\frac{N}{s}\left(H\log s \max_{i\le N}\|X_i\| + p\kappa\|\Sigma\|^{1/2}\sqrt{s}\left( \frac{N}{s}\right)^{1/p} \sqrt{f(s,[N])}\right). \end{align*} We now apply the dimension-free coloring bound of Proposition \ref{prop:coloringprop} to obtain that the chromatic number $\chi(\mathcal{G}_H)$ is at most $\chi$, with probability at least \[ 1-(\kappa^p\|\Sigma\|^{p/2}NH^{-p})^{\chi-1} N\kappa^4 \tr(\Sigma)^2 H^{-4}. \] Now, we estimate the latter probability. We have \begin{align*} &(\kappa^p \|\Sigma\|^{p/2} N H^{-p})^{\chi-1}N\kappa^4\tr(\Sigma)^2H^{-4} \\ &\qquad= (N^{1-2p/(p+4)}s^{2p/(p+4)-p/2})^{\chi-1} N^{1-8/(4+p)}s^{8/(4+p) - 2} (\log s)^{p(\chi - 1) + 4} \mathbf{r}^2(\Sigma)\\ &\qquad=N^{(4 - p)(\chi - 2)/(p + 4)} s^{-(p^2(\chi- 1) + 4p)/(2p + 8)} (\log s)^{p(\chi - 1) + 4}\mathbf{r}^2(\Sigma). \end{align*} Following exactly the lines of the proof of Proposition 5.3 in \cite{tikhomirov2018sample}, we obtain \begin{align*} \chi(\mathcal{G}_H)^{-1}f(s,[N]) &\le \chi(\mathcal{G}_H)^{-1}\sum_{m=1}^{\chi(\mathcal{G}_H)} f(s,\mathcal{C}_m^{H}) \\ &\le\max_{i\le N}\|X_i\|^2+ C\kappa\|\Sigma\|^{1/2}\left(\frac{N}{s}\right)^{2/(4+p)} \sqrt{s}\log^2\frac{N}{s} \max_{i\le N}\|X_i\| \\ &\qquad+ Cp\kappa\|\Sigma\|^{1/2}\log^2\frac{N}{s}\sqrt{s}\left( \frac{N}{s}\right)^{1/p} \sqrt{f(s,[N])}, \end{align*} with probability at least $1-N^{(4 - p)(\chi - 2)/(p + 4)} s^{-(p^2(\chi- 1) + 4p)/(2p + 8)}(\log s)^{p(\chi - 1) + 4} \mathbf{r}^2(\Sigma)$. Now we solve the inequality above by applying the inequality $2ab \le \gamma a^2 + \frac{b^2}{\gamma}$ for $a, b \ge 0$ and $\gamma > 0$ twice and solving with respect to $f(s,[N])$. \end{proof} The following bound is the main result of this section. \begin{theorem} \label{thm:rearrengmentthm} Assume that for some large enough absolute constant $c > 0$ it holds that $N \ge c\,\mathbf{r}(\Sigma)$. For a large enough sample size $N$, simultaneously for all integers $s$ satisfying $\mathbf{r}(\Sigma) \le s \le N/c$, with probability at least $1 - \frac{c(p)}{N}$, it holds that \begin{equation} \label{eq:boundonf} f(s,[N]) \le C(p) \left( \max_{i\le N}\|X_i\|^2 + \|\Sigma\|s\left(\frac{N}{s}\right)^{4/(4+p)}\log^4\frac{N}{s} \right), \end{equation} where $C(p)$ and $c(p)$ depend only on $p$ and $\kappa(p)$. \end{theorem} \begin{proof} The proof is based on the application of Lemma \ref{lem:upboundcolor}. Let $s \ge \mathbf{r}(\Sigma)$ and fix $\chi = \max(10,\frac{4p}{p-4})$.
Consider two cases:
\begin{itemize}
\item If $\mathbf{r}(\Sigma) \le 100$, we have
\[
N^{(4 - p)(\chi - 2)/(p + 4)} s^{-(p^2(\chi- 1) +4p)/(2p + 8)}(\log s)^{p(\chi - 1) + 4} \mathbf{r}^2(\Sigma) \le c(p)N^{-2}.
\]
\item Otherwise, if $\mathbf{r}(\Sigma) > 100$, then we have $s \ge 100$ and $\log s \le s^{1/3}$. Therefore,
\[
N^{(4 - p)(\chi - 2)/(p + 4)} s^{-(p^2(\chi- 1) + 4p)/(2p + 8) + (p(\chi - 1) + 4)/3} \mathbf{r}^2(\Sigma) \le s^{-2}\mathbf{r}^2(\Sigma)N^{-2} \le N^{-2}.
\]
\end{itemize}
Therefore, for a fixed integer $s$ satisfying $\mathbf{r}(\Sigma) \le s \le N/c$, we have that $\log \frac{N}{s}$ is sufficiently large so that, by Lemma \ref{lem:upboundcolor} and the above calculations, with probability at least $1 - \frac{c(p)}{N^2}$,
\[
f(s,[N]) \le C(p) \left( \max_{i\le N}\|X_i\|^2 + \|\Sigma\|s\left(\frac{N}{s}\right)^{4/(4+p)}\log^4\frac{N}{s} \right).
\]
By taking the union bound with respect to the value of $s$ we conclude the proof.
\end{proof}
\subsection{Analysis of the spread part}
The main tool of this section is the following lemma, sometimes called the PAC-Bayesian inequality (see \cite[Proposition 2.1]{catoni2017dimension} or \cite{appert2021new} for a detailed proof taking care of measurability questions). In our proofs, we adapt several computations appearing in \cite{zhivotovskiy2021dimension}.
\begin{lemma}
\label{lem:pacbayes}
Assume that $X_i$, $i = 1, \ldots, N$, are i.i.d. random variables defined on some measurable space. Assume also that $\Theta$ (called the parameter space) is a subset of $\mathbb{R}^d$ for some $d \ge 1$. Let $\mu$ be a distribution (called the prior) on $\Theta$ and let $\rho$ be any distribution (called the posterior) on $\Theta$ such that $\rho \ll \mu$. Then, with probability at least $1 - \exp(-t)$, simultaneously for all such $\rho$,
\[
\frac{1}{N}\sum\limits_{i = 1}^N{\mathbb E}_{\rho}f(X_i, \theta) \le {\mathbb E}_{\rho}\log({\mathbb E}_X\exp(f(X, \theta))) + \frac{\mathcal{KL}(\rho, \mu) + t}{N}.
\]
Here $\theta$ is distributed according to $\rho$. Moreover,
\[
{\mathbb E}\sup\limits_{\rho}\left(\sum\limits_{i = 1}^N{\mathbb E}_{\rho}f(X_i, \theta) - N{\mathbb E}_{\rho}\log({\mathbb E}_X\exp(f(X, \theta))) - \mathcal{KL}(\rho, \mu)\right) \le 0.
\]
\end{lemma}
Our analysis will also exploit the following elementary relation.
\begin{lemma}[Lemma 4 in \cite{zhivotovskiy2021dimension}]
\label{lem:almostconvex}
Let the truncation function $\psi$ be given by \eqref{eq:truncfunction} and let $Z$ be a square integrable random variable. We have
\[
\psi({\mathbb E} Z) \le {\mathbb E}\log(1 + Z + Z^2) + \min\{1, {\mathbb E} Z^2/6\}.
\]
Moreover, for any $a > 0$, it holds that
\[
{\mathbb E}\log(1 + Z + Z^2) + a{\mathbb E}\min\{1, Z^2/6\} \le {\mathbb E}\log\left(1 + Z + \left(1 + \frac{(7 + \sqrt{6})(\exp(a) - 1)}{6}\right) Z^2\right).
\]
\end{lemma}
The main result of this section is the following.
\begin{proposition}
\label{prop:trimmedprocess}
Assume that $Y$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_p-L_2$ \emph{norm equivalence} with $p \ge 4$. Let $Y_1, \ldots, Y_N$ be a sample of independent copies of $Y$. Consider the truncated vectors $X_i = Y_i\ind{\|Y_i\| \le R}$ for $i = 1, \ldots, N$ and some $R > 0$.
For a fixed truncation level $\lambda > 0$, it holds that
\[
{\mathbb E}\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| \lesssim \frac{\mathbf{r}(\Sigma)}{\lambda N} + \lambda\kappa^{4}\|\Sigma\|^2,
\]
where $\kappa = \kappa(4)$. In particular, when $\lambda = \frac{1}{\kappa^2\|\Sigma\|} \sqrt{\frac{\mathbf{r}(\Sigma)}{N}}$, we have
\[
{\mathbb E}\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| \lesssim \kappa^2\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}.
\]
\end{proposition}
\begin{proof}
Our aim is to choose the distributions $\mu$ and $\rho$ in Lemma \ref{lem:pacbayes}. Let $ \Theta = (\mathbb{R}^d)^2. $ We choose $\mu$ to be a product of two multivariate Gaussians with mean zero and covariance $\beta^{-1}I_d$. For $v \in S^{d - 1}$, let $\rho_{v}$ be a product of two multivariate Gaussian distributions with mean $v$ and covariance $\beta^{-1}I_d$. In particular, if $(\theta, \nu)$ is distributed according to $\rho_{v}$, we have ${\mathbb E}_{\rho_{v}}(\theta, \nu) = (v, v)$. By the additivity of the $\mathcal{KL}$-divergence for product measures and the standard formula, we have
\[
\mathcal{KL}(\rho_{v}, \mu) = \beta.
\]
By the first part of Lemma \ref{lem:almostconvex} we have,
\begin{align}
\psi\left(\lambda\langle X, v\rangle^2\right) &=\psi\left(\lambda{\mathbb E}_{\rho_{v}}\langle X, \theta\rangle\langle X, \nu\rangle\right) \nonumber \\
&\le {\mathbb E}_{\rho_{v}}\log\left(1 + \lambda\langle X, \theta\rangle\langle X, \nu\rangle + \lambda^2\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right) \nonumber \\
&\qquad+ \min\{1, \lambda^{2}{\mathbb E}_{\rho_v}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2/6\}. \label{eq:twoparts}
\end{align}
Observe that
\begin{equation}
\label{eq:forthnorm}
{\mathbb E}_{\rho_v}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 = (\langle X, v\rangle^2 + \beta^{-1}\|X\|^2)^2 \le 2\langle X, v\rangle^4 + 2\beta^{-2}\|X\|^4,
\end{equation}
and we can write
\[
\min\{1, \lambda^{2}{\mathbb E}_{\rho_v}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2/6\} \le \min\{1, 2\lambda^2\langle X, v\rangle^4/6\} + \min\{1, 2\lambda^2\beta^{-2}\|X\|^4/6\}.
\]
Conditionally on $X$, the distribution of $\langle X, \theta\rangle$ is Gaussian with mean $\langle X, v\rangle$. Since it is symmetric around its mean, we have that $\Pr_{\rho_v}\left((\langle X, \theta\rangle)^2(\langle X, \nu\rangle)^2 \ge \langle X, v\rangle^4\right) \ge \frac{1}{4}$, and this holds trivially when $X = 0$. Therefore,
\[
\min\{1, 2\lambda^2\langle X, v\rangle^4/6\} \le 8{\mathbb E}_{\rho_v}\min\{1, \lambda^{2}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2/6\}.
\]
By the second part of Lemma \ref{lem:almostconvex}, we have for some absolute constant $c > 0$,
\begin{align*}
&{\mathbb E}_{\rho_{v}}\log\left(1 + \lambda\langle X, \theta\rangle\langle X, \nu\rangle + \lambda^2\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right) + 8{\mathbb E}_{\rho_v}\min\{1, \lambda^{2}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2/6\} \\
&\qquad\le {\mathbb E}_{\rho_{v}}\log\left(1 + \lambda\langle X, \theta\rangle\langle X, \nu\rangle + c\lambda^2\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right).
\end{align*}
Following the proof of Lemma \ref{lem:normofthevec} we get
\begin{equation}
\label{eq:normforth}
{\mathbb E}\|X\|^4 \le {\mathbb E}\|Y\|^4 \le \kappa^4(\tr(\Sigma))^2 \quad \text{and} \quad {\mathbb E}\langle X, v\rangle^2 \le {\mathbb E}\langle Y, v\rangle^2 \le \|\Sigma\|,
\end{equation}
where $v \in S^{d - 1}$. Using $\log(1 + y) \le y$ for $y \ge - 1$ and Fubini's theorem, we have
\begin{align*}
&{\mathbb E}_{\rho_{v}}\log{\mathbb E}\left(1 + \lambda\langle X, \theta\rangle\langle X, \nu\rangle + c\lambda^2\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right) \\
&\le \lambda {\mathbb E}_{\rho_{v}}{\mathbb E}\langle X, \theta\rangle\langle X, \nu\rangle + c\lambda^2{\mathbb E}\E_{\rho_v}\langle X, \theta\rangle^2\langle X, \nu\rangle^2 \\
&\le \lambda {\mathbb E}\langle X, v\rangle^2 + 2c\lambda^2({\mathbb E}\langle X, v\rangle^4 + \beta^{-2}{\mathbb E}\|X\|^4) \quad \text{(by \eqref{eq:forthnorm})} \\
&\le \lambda {\mathbb E}\langle X, v\rangle^2+ 2c\lambda^2\kappa^{4}(({\mathbb E}\langle X, v\rangle^2)^2 + \beta^{-2}(\tr(\Sigma))^2) \quad \text{(by \eqref{eq:normforth})} \\
&\le \lambda {\mathbb E}\langle X, v\rangle^2 + 2c\lambda^2\kappa^{4}(\|\Sigma\|^2 + \beta^{-2}(\tr(\Sigma))^2).
\end{align*}
We plug
\[
f(X, \theta, \nu) = \log\left(1 + \lambda\langle X, \theta\rangle\langle X, \nu\rangle + c\lambda^2\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right)
\]
in the second part of Lemma \ref{lem:pacbayes}. Choosing $\beta = \mathbf{r}(\Sigma)$ and dividing both sides by $N$, we have
\begin{equation}
\label{eq:expectedsuptruncated}
{\mathbb E}\sup\limits_{v \in S^{d - 1} \cup \{0\}}\left(\frac{1}{N}\sum\limits_{i = 1}^N {\mathbb E}_{\rho_v}f\left(X_i, \theta, \nu\right) - \lambda{\mathbb E}\langle X, v\rangle^2\right) \le \frac{\mathbf{r}(\Sigma)}{ N} + 4c\lambda^2\kappa^{4}\|\Sigma\|^2,
\end{equation}
where we added the $0$ vector by taking $\rho = \mu$ as a posterior distribution and observing that $0 = \mathcal{KL}(\mu, \mu) \le \beta$. By adding the $0$ vector we guarantee that the supremum in \eqref{eq:expectedsuptruncated} is always non-negative. Adding the term
\[
\frac{1}{N}{\mathbb E} \sum\nolimits_{i = 1}^N\min\{1, 2\lambda^2(\mathbf{r}(\Sigma))^{-2}\|X_i\|^4/6\} \le \lambda^2 \frac{\kappa^4\|\Sigma\|^2}{3},
\]
to the inequality \eqref{eq:expectedsuptruncated} and using the derivations in the beginning of the proof, we have
\[
{\mathbb E}\sup\limits_{v \in S^{d - 1} \cup \{0\}}\left(\frac{1}{N}\sum\limits_{i = 1}^N\psi(\lambda\langle X_i, v\rangle^2) - \lambda{\mathbb E}\langle X, v\rangle^2\right) \le \frac{\mathbf{r}(\Sigma)}{N} + \left(4c + \frac{1}{3}\right)\lambda^2\kappa^{4}\|\Sigma\|^2.
\]
The same argument works for $-\lambda$ instead. This leads to
\[
{\mathbb E}\sup\limits_{v \in S^{d - 1}}\left|\frac{1}{\lambda N}\sum\limits_{i = 1}^N\psi(\lambda\langle X_i, v\rangle^2) - {\mathbb E}\langle X, v\rangle^2 \right| \le 2\frac{\mathbf{r}(\Sigma)}{\lambda N} + 2\left(4c + \frac{1}{3}\right)\lambda\kappa^{4}\|\Sigma\|^2.
\]
Our choice of $\lambda$ concludes the proof.
\end{proof}
\subsection{Combining peaky and spread parts}
In this section we conclude the proof of Theorem \ref{thm:baiyin}.
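Before doing so, observe that the choice of the truncation level in Proposition \ref{prop:trimmedprocess} simply balances the two terms of its bound:
\[
\frac{\mathbf{r}(\Sigma)}{\lambda N} = \lambda\kappa(4)^{4}\|\Sigma\|^2 \quad \text{if and only if} \quad \lambda = \frac{1}{\kappa(4)^2\|\Sigma\|}\sqrt{\frac{\mathbf{r}(\Sigma)}{N}},
\]
and at this value both terms are equal to $\kappa(4)^2\|\Sigma\|\sqrt{\mathbf{r}(\Sigma)/N}$.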
Using our decomposition we have
\begin{align*}
{\mathbb E}\sup_{v\in S^{d-1}} \left| \frac{1}{N}\sum_{i=1}^N \langle X_i,v\rangle^2 - {\mathbb E}\langle X,v\rangle^2\right| &\le {\mathbb E}\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| \\
&\qquad+ {\mathbb E}\sup_{v\in S^{d-1}}\frac{1}{N}\sum_{i=1}^N \langle X_i,v\rangle^2 \ind{\lambda\langle X_i,v\rangle^2>1}.
\end{align*}
We choose $\lambda = \frac{1}{\kappa(4)^2\|\Sigma\|} \sqrt{\frac{\mathbf{r}(\Sigma)}{N}}$. By Proposition \ref{prop:trimmedprocess} we have
\[
{\mathbb E}\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| \lesssim \kappa(4)^2\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}.
\]
We proceed with the remaining term. Define the random set $I_v = \{i\in [N]: \langle X_i,v\rangle^2> \lambda^{-1}\}$. Let $m =\sup\limits_{v \in S^{d - 1}}|I_v|$. By \eqref{eq:relationfanda} we have
\begin{equation}
\label{eq:twoineqs}
\frac{m}{N\lambda} \le \sup_{v\in S^{d-1}}\frac{1}{N}\sum\nolimits_{i=1}^N \langle X_i,v\rangle^2 \ind{\lambda\langle X_i,v\rangle^2>1} \le \frac{1}{N}f(m, [N]).
\end{equation}
Observe that for any $p > 4$ we have $4/(4+p) < 1/2$, so there is $C(p) > 0$ such that for all $x \ge 1$,
\[
x^{4/(4+p)}\log^4 x \le C(p)\sqrt{x}.
\]
We now want to apply Theorem \ref{thm:rearrengmentthm} with the (random) value $m$. First, we assume that $\mathbf{r}(\Sigma) \le m \le N/c$. In this case, by Theorem \ref{thm:rearrengmentthm}, with probability at least $1-\frac{c(p)}{N}$, it holds that
\begin{equation*}
\frac{1}{N}f(m, [N]) \le C(p)\left(\frac{1}{N}\max_{i\le N}\|X_i\|^2 + \|\Sigma\| \left(\frac{m}{N}\right)^{1/2}\right).
\end{equation*}
By \eqref{eq:twoineqs} and since $\|X_i\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}$ we have on the same event
\begin{equation*}
m \le C(p) \lambda \left(\max_{i\le N}\|X_i\|^2 + \sqrt{mN} \|\Sigma\|\right) \le C(p)\frac{1}{\|\Sigma\|}\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}\left(\sqrt{mN} \|\Sigma\| + \sqrt{\tr(\Sigma)\|\Sigma \|N}\right) .
\end{equation*}
Solving the inequality above with respect to $m$ we have on the same event
\begin{equation*}
m \le C(p)\mathbf{r}(\Sigma).
\end{equation*}
We consider the case where $m$ does not satisfy $\mathbf{r}(\Sigma) \le m \le N/c$. If $m < \mathbf{r}(\Sigma)$, then we recover the same upper bound as above. If $m > N/c$, then on the event of Theorem \ref{thm:rearrengmentthm} for $k = \left\lfloor N/c -1\right\rfloor$ by \eqref{eq:twoineqs} and monotonicity we have
\[
\frac{k}{\lambda} \le f(k, [N]) \le C(p)\left(\sqrt{N\tr(\Sigma)\|\Sigma\|} + \|\Sigma\| \left(kN\right)^{1/2}\right).
\]
For our choice of $\lambda$ this bound leads to a contradiction if $N \ge c(p)\mathbf{r}(\Sigma)$ for large enough $c(p)$. We are now ready to plug our bound $m \le C(p)\mathbf{r}(\Sigma)$ in \eqref{eq:twoineqs} to obtain that, with probability at least $1-\frac{c(p)}{N}$,
\[
\frac{1}{N}f(C(p)\mathbf{r}(\Sigma), [N]) \le C(p)\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}.
\]
Finally, for any $v \in S^{d - 1}$, we have $|\langle X_i,v\rangle| \le \|X_i\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}$. Therefore,
\begin{align*}
{\mathbb E} \sup_{v\in S^{d-1}}\frac{1}{N}\sum_{i=1}^N \langle X_i,v\rangle^2 \ind{\lambda\langle X_i,v\rangle^2>1} &\le C(p)\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}} + \frac{c(p)}{N}\sqrt{N\tr(\Sigma)\|\Sigma\|} \\
&=(C(p) + c(p))\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}.
\end{align*}
The claim of Theorem \ref{thm:baiyin} follows. \qed
\section{Proof of Theorem \ref{thm:informalmain} in the regime \texorpdfstring{$p = 4$}{Lg}}
\label{sec:pfour}
First, we provide a high probability version of Proposition \ref{prop:trimmedprocess}. In the definition of our estimator we assume that both $\|\Sigma\|$ and $\tr(\Sigma)$ are known, so that our estimator can depend on these quantities. In Section \ref{sec:estimtraceandopernorm}, we provide optimal guarantees for estimating these parameters. The following result can be seen as a special case of Lemma 5 in \cite{zhivotovskiy2021dimension}. We provide a detailed sketch of the proof for the sake of completeness.
\begin{proposition}
\label{prop:uniftruncation}
Assume that $X$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_4-L_2$ \emph{norm equivalence}. Let $X_1, \ldots, X_N$ be a sample of independent copies of $X$. For a fixed truncation level $\lambda > 0$, with probability at least $1 - \delta$, it holds that
\[
\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| \lesssim \frac{\mathbf{r}(\Sigma) + \log(1/\delta)}{\lambda N} + \lambda\kappa^{4}\|\Sigma\|^2,
\]
where $\kappa = \kappa(4)$.
\end{proposition}
\begin{proof}
We repeat the lines of the proof of Proposition \ref{prop:trimmedprocess}. However, this time we plug
\[
f(X, \theta, \nu) = \log\left(1 + \lambda\langle X, \theta\rangle\langle X, \nu\rangle + c\lambda^2\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right)
\]
in the first part of Lemma \ref{lem:pacbayes}. This implies that, with probability at least $1 - \delta$,
\begin{equation}
\label{eq:uniformboundwithtrimming}
\sup\limits_{v \in S^{d - 1}}\left(\frac{1}{N}\sum\limits_{i = 1}^N {\mathbb E}_{\rho_v}f\left(X_i, \theta, \nu\right) - \lambda{\mathbb E}\langle X, v\rangle^2\right) \le \frac{\mathbf{r}(\Sigma) + \log(1/\delta)}{ N} + 4c\lambda^2\kappa^{4}\|\Sigma\|^2,
\end{equation}
where $c > 0$ is the same constant as in the proof of Proposition \ref{prop:trimmedprocess}. Furthermore, by the Bernstein inequality we have
\begin{align*}
\frac{1}{N}\sum\limits_{i = 1}^N\min\{1, 2\lambda^2(\mathbf{r}(\Sigma))^{-2}\|X_i\|^4/6\} &\le {\mathbb E}\min\{1, 2\lambda^2(\mathbf{r}(\Sigma))^{-2}\|X\|^4/6\} \\
&\ + \sqrt{\frac{2\log(1/\delta)}{N}{\mathbb E}\min\{1, 2\lambda^2(\mathbf{r}(\Sigma))^{-2}\|X\|^4/6\}} + \frac{2\log(1/\delta)}{3N},
\end{align*}
where we used that each summand belongs to the interval $[0, 1]$ and therefore the variance of each summand is bounded by its expectation. Following the proof of Lemma \ref{lem:normofthevec}, we have ${\mathbb E}\|X\|^4 \le \kappa^4(\tr(\Sigma))^2$. This implies that, with probability at least $1 - \delta$,
\begin{equation}
\label{eq:bernstein}
\frac{1}{N}\sum\limits_{i = 1}^N\min\{1, 2\lambda^2(\mathbf{r}(\Sigma))^{-2}\|X_i\|^4/6\} \le 2\lambda^2\kappa^4\|\Sigma\|^2/3 + \frac{3\log(1/\delta)}{N}.
\end{equation}
Using this line and the union bound together with \eqref{eq:uniformboundwithtrimming} and the derivations in the proof of Proposition \ref{prop:trimmedprocess}, we obtain the one-sided version of our claim. Repeating the same lines by replacing $\lambda$ by $-\lambda$ and using the union bound again, we finish the proof.
\end{proof}
Our next result concludes the proof of Theorem \ref{thm:informalmain} when $p = 4$. Recall that $\kappa$ denotes $\kappa(4)$.
\begin{theorem}
\label{thm:thecasepfour}
There is an absolute constant $C > 0$ such that the following holds.
Assume that $X$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_4-L_2$ norm equivalence. Fix the corruption level $\eta \in [0, 1]$ and the confidence level $\delta \in (0, 1)$. Assume that $\widetilde{X}_{1}, \ldots, \widetilde{X}_N$ is an $\eta$-corrupted sample. Then there exists an estimator $\widehat{\Sigma}_{\eta, \delta}$ such that, with probability at least $1 - \delta$,
\[
\left\|\widehat{\Sigma}_{\eta, \delta} - \Sigma\right\| \le C\kappa^2\|\Sigma\|\left(\sqrt{\frac{\mathbf{r}(\Sigma) + \log(1/\delta)}{N}} + \sqrt{\eta}\right).
\]
\end{theorem}
We begin by presenting the estimator. As we mentioned, we assume that $\eta, \|\Sigma\|, \tr(\Sigma)$ and the constant $\kappa$ are known. In Section \ref{sec:estimtraceandopernorm}, we discuss how to estimate $\|\Sigma\|$ and $\tr(\Sigma)$ up to multiplicative constant factors. The dependence on $\kappa$ in our estimator can also be ignored, though in this case we obtain a slightly weaker dependence on this parameter in the final bound.
\medskip
\begin{tcolorbox}
\label{Box:firstEstimator}
The estimator in Theorem \ref{thm:thecasepfour}
\begin{enumerate}
\item Given $\delta$, $\eta$, $\tr(\Sigma)$, $\|\Sigma\|$, $\kappa$, and an $\eta$-corrupted sample $\widetilde{X}_1,\ldots,\widetilde{X}_{N}$ we set
\[
\lambda = \frac{1}{\kappa^2\|\Sigma\|}\sqrt{\frac{\mathbf{r}(\Sigma) + \log(1/\delta) + \eta N}{N}}.
\]
\item Define
\[
\Gamma = \bigcap\limits_{v \in S^{d - 1}}\left\{A \in \mathbb{S}_{+}^d: \left|\frac{1}{\lambda N}\sum\limits_{i=1}^N \psi\left(\lambda\langle\widetilde{X}_i,v\rangle^2\right) - v^{\top}A v\right| \le C\lambda\kappa^4\|\Sigma\|^2/2\right\}.
\]
\item Let $\widehat{\Sigma}_{\eta,\delta}$ be any matrix in the set $\Gamma$. If the set is empty, we output $\widehat{\Sigma}_{\eta,\delta} = 0$.
\end{enumerate}
\end{tcolorbox}
\begin{proof}
Since the truncation function $\psi(\cdot)$ is bounded by one and non-negative on non-negative arguments, we have
\[
\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle \widetilde{X}_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| \le \sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle X_i,v\rangle^2) - {\mathbb E}\langle X,v\rangle^2 \right| + \frac{\eta}{\lambda}.
\]
Combining this with Proposition \ref{prop:uniftruncation} we have for some $C > 0$, with probability at least $1 - \delta$,
\[
\sup_{v\in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle \widetilde X_i,v\rangle^2) - v^{\top}\Sigma v \right| \le \frac{C}{4}\left(\frac{\mathbf{r}(\Sigma) + \log(1/\delta) + \eta N}{\lambda N} + \lambda\kappa^{4}\|\Sigma\|^2\right).
\]
Using the triangle inequality, the definition of $\widehat{\Sigma}_{\eta, \delta}$ and the line above, we have on the same event
\begin{align*}
\left\|\widehat{\Sigma}_{\eta, \delta} - \Sigma\right\| &\le \sup\limits_{v \in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle \widetilde X_i,v\rangle^2) - v^{\top}\widehat{\Sigma}_{\eta, \delta}v\right| + \sup\limits_{v \in S^{d-1}}\left|\frac{1}{N\lambda}\sum_{i=1}^N \psi(\lambda\langle \widetilde X_i,v\rangle^2) - v^{\top}\Sigma v\right| \\
&\le C\kappa^2\|\Sigma\|\left(\sqrt{\frac{\mathbf{r}(\Sigma) + \log(1/\delta)}{N}} + \sqrt{\eta}\right).
\end{align*}
Our choice of $\lambda$ concludes the proof.
\end{proof}
\begin{remark}
It is straightforward to verify that the matrix $\widehat{\Sigma}_{\eta, \delta}$ defined by \eqref{eq:minmaxestimator} satisfies the guarantee of Theorem \ref{thm:thecasepfour} and can be used as an estimator of the covariance matrix.
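Let us also note that, on the event of the proof above, the set $\Gamma$ is non-empty, since it contains $\Sigma$ itself: the chosen $\lambda$ balances the two terms of the deviation bound,
\[
\frac{\mathbf{r}(\Sigma) + \log(1/\delta) + \eta N}{\lambda N} = \lambda \kappa^{4}\|\Sigma\|^2,
\]
so that the right-hand side of the bound preceding the triangle inequality step equals $\frac{C}{2}\lambda\kappa^{4}\|\Sigma\|^2$, which is exactly the threshold in the definition of $\Gamma$.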
\end{remark}
Finally, we show how to estimate the value of the truncation parameter $\lambda$ using the corrupted observations.
\subsection{Estimating the truncation level \texorpdfstring{$\lambda$}{Lg} in the regime \texorpdfstring{$p = 4$}{Lg}}
\label{sec:estimtraceandopernorm}
An important aspect of the proof of Theorem \ref{thm:thecasepfour} is that we only need to know $\|\Sigma\|$ and $\tr(\Sigma)$ up to multiplicative constant factors. That is, we want to find two estimators $\widehat \varphi_1$ and $\widehat \varphi_2$ such that for some $c > 1$, with high probability with respect to the realization of the corrupted sample $\widetilde X_1, \ldots, \widetilde X_N$,
\[
\frac{1}{c}\tr(\Sigma) \le \widehat \varphi_1 \le c\tr(\Sigma) \quad \text{and} \quad \frac{1}{c}\|\Sigma\| \le \widehat \varphi_2 \le c\|\Sigma\|.
\]
If these estimators are available, we can assume without loss of generality that our initial sample is of size $3N$. We use the first $2N$ elements to estimate $\tr(\Sigma)$ and $\|\Sigma\|$ and then plug them into the truncation level $\lambda$ in Theorem \ref{thm:thecasepfour}. A similar sample splitting strategy is used in \cite{mendelson2020robust}. Our contribution is that we provide estimators $\widehat \varphi_1$ and $\widehat \varphi_2$ that take the adversarial corruption into account and do not contain an unnecessary logarithmic factor. For the rest of the section, we fix $\kappa = \kappa(4)$, where $\kappa(\cdot)$ is defined in \eqref{eq:momeqv}.
\subsection{Estimating \texorpdfstring{$\tr(\Sigma)$}{Lg}}
One can estimate $\tr(\Sigma)$ using, for example, the trimmed mean estimator analyzed in \cite{lugosi2021robust}. Let $e_1, \ldots, e_d$ denote the standard basis in $\mathbb{R}^d$. Observe that
\[
\tr(\Sigma) = \sum\nolimits_{i = 1}^d{\mathbb E}\langle X, e_i\rangle^2.
\]
Using the $L_4-L_2$ norm equivalence and the proof of Lemma \ref{lem:normofthevec} we have $\Var\left(\sum\nolimits_{i = 1}^d\langle X, e_i\rangle^2\right) = \Var\left(\|X\|^2\right) \le \kappa^4(\tr(\Sigma))^2$. Given an $\eta$-corrupted sample of size $2N$, the trimmed mean estimator $\widehat \varphi_1$ applied to the random variable $\|X\|^2 = \sum\nolimits_{i = 1}^d\langle X, e_i\rangle^2$ guarantees that, for any $\delta \in (\exp(-N)/4, 1)$, with probability at least $1 - 4\exp(-\varepsilon N/12)$,
\[
|\widehat \varphi_1 - \tr(\Sigma)| \le 10\sqrt{\varepsilon}\kappa^2\tr(\Sigma) ,
\]
where $\varepsilon = 8\eta + 12\frac{\log(4/\delta)}{N}$. This bound is presented in \cite[Theorem 1]{lugosi2021robust}. For the sake of brevity, we do not provide the details of the trimmed mean estimator. Observe that for this choice of $\varepsilon$, it holds that $1 - 4\exp(-\varepsilon N/12) \ge 1 - \delta$. Provided that $10\sqrt{\varepsilon}\kappa^2 \le \frac{1}{2}$ we obtain that, with probability at least $1 - \delta$,
\begin{equation}
\label{eq:estuptoconst}
\frac{1}{2} \tr(\Sigma) \le \widehat \varphi_1 \le 2 \tr(\Sigma).
\end{equation}
It is only left to check that it is sufficient to have $\eta \in \left[0, \frac{1}{80^2\kappa^4}\right]$ and $N \ge 12\cdot 80^2\kappa^4\log(4/\delta)$.
\subsection{Estimating \texorpdfstring{$\|\Sigma\|$}{Lg}}
\label{sec:opernorm}
A simple way of estimating $\|\Sigma\|$ is to first estimate the full covariance matrix $\Sigma$ as in Theorem \ref{thm:thecasepfour} and then compute the operator norm of the estimator.
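Indeed, if $\widehat{A}$ denotes any such estimator of $\Sigma$, then by the reverse triangle inequality
\[
\bigl|\,\|\widehat{A}\| - \|\Sigma\|\,\bigr| \le \|\widehat{A} - \Sigma\|,
\]
so a bound on the estimation error in the operator norm transfers directly to the estimation of $\|\Sigma\|$.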
However, the difficulty of estimating $\|\Sigma\|$ is that we are not allowed to choose the truncation level $\lambda$ depending on $\|\Sigma\|$. In particular, all previous results we are aware of lead to an additional logarithmic factor in the assumption on the sample size (see \cite{mendelson2020robust, catoni2017dimension}). Instead, we provide an adaptive estimator similar in spirit to the original robust estimator of O. Catoni \cite{catoni2012challenging} for estimating the mean of a random variable. To simplify the proof, we assume that the distribution of $X$ satisfies $\Pr(X = 0) = 0$. This assumption is mild and can always be satisfied if we add a small Gaussian perturbation to $X$ without changing the covariance matrix of $X$ too much.
\begin{proposition}
\label{prop:estlargesteigenvalue}
Assume that $X$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_4-L_2$ norm equivalence. Assume additionally that $\Pr(X = 0) = 0$. Let $c \ge 1$ be a large enough absolute constant. Fix the corruption level $\eta \in [0, \frac{1}{300c\kappa^4}]$ and the confidence level $\delta \in (0, 1/4)$. Assume that $\widetilde{X}_{1}, \ldots, \widetilde{X}_N$ is an $\eta$-corrupted sample. Then there is a unique value $\widehat{\alpha} > 0$ satisfying
\[
\frac{1}{N}\sup\limits_{v \in S^{d - 1}}\sum\nolimits_{i = 1}^N \psi\left(\widehat\alpha^2\langle \widetilde X_i, v\rangle^2\right) = \frac{1}{20c\kappa^4} + \eta.
\]
If $N \ge 100c\kappa^4\mathbf{r}(\Sigma)+ 400c\kappa^4\log(1/\delta)$, then with probability at least $1 - 4\delta$, it holds that
\[
\frac{1}{4} \|\Sigma\| \le \frac{1}{24c\kappa^4\widehat{\alpha}^2} \le 4\|\Sigma\|.
\]
\end{proposition}
\begin{proof}
We begin with the analysis of an uncorrupted sample $X_1, \ldots, X_N$. Our first aim is to choose the distributions $\mu$ and $\rho$ in Lemma \ref{lem:pacbayes}. Let $ \Theta = (\mathbb{R}^d)^2 $ and choose $\mu$ to be a product of two multivariate Gaussians with mean zero and covariance $\beta^{-1}I_d$. For $v \in S^{d - 1}$ and $\alpha > 0$, let $\rho_{\alpha, v}$ be a product of two multivariate Gaussian distributions with mean $\alpha v$ and covariance $\beta^{-1}I_d$. In particular, if $(\theta, \nu)$ is distributed according to $\rho_{\alpha, v}$, we have ${\mathbb E}_{\rho_{\alpha, v}}(\theta, \nu) = (\alpha v, \alpha v)$. By the additivity of the $\mathcal{KL}$-divergence for product measures and the standard formula, we have
\[
\mathcal{KL}(\rho_{\alpha, v}, \mu) = \alpha^2\beta.
\]
For the rest of the proof we sometimes write $\rho$ instead of $\rho_{\alpha, v}$. By the first part of Lemma \ref{lem:almostconvex} we have,
\begin{align*}
\psi\left(\alpha^2\langle X, v\rangle^2\right) &=\psi\left({\mathbb E}_{\rho}\langle X, \theta\rangle\langle X, \nu\rangle\right) \\
&\le {\mathbb E}_{\rho}\log\left(1 + \langle X, \theta\rangle\langle X, \nu\rangle + \left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right) + \min\{1, {\mathbb E}_{\rho}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2/6\}.
\end{align*}
Observe that
\[
{\mathbb E}_{\rho}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 = (\alpha^2\langle X, v\rangle^2 + \beta^{-1}\|X\|^2)^2 \le 2\alpha^4\langle X, v\rangle^4 + 2\beta^{-2}\|X\|^4,
\]
and we can write
\[
\min\{1, {\mathbb E}_{\rho}\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2/6\} \le \min\{1, 2\alpha^4\langle X, v\rangle^4/6\} + \min\{1, 2\beta^{-2}\|X\|^4/6\}.
\]
Repeating the lines of the proof of Proposition \ref{prop:trimmedprocess}, we get
\[
{\mathbb E}_{\rho}\log{\mathbb E}\left(1 + \langle X, \theta\rangle\langle X, \nu\rangle + c\left(\langle X, \theta\rangle\langle X, \nu\rangle\right)^2 \right) \le \alpha^2 v^{\top}\Sigma v + 2c\kappa^{4}(\alpha^4\|\Sigma\|^2 + \beta^{-2}(\tr(\Sigma))^2),
\]
where $c \ge 1$ is an absolute constant. Using the first part of Lemma \ref{lem:pacbayes} we obtain that, with probability at least $1 - \delta$, simultaneously for all $v \in S^{d - 1}$ and $\alpha > 0$,
\begin{align*}
\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle X_i, v\rangle^2\right) &\le \alpha^2 v^{\top}\Sigma v + 2c\kappa^{4}(\alpha^4\|\Sigma\|^2 + \beta^{-2}(\tr(\Sigma))^2) \\
&\qquad+ \frac{\alpha^2 \beta + \log(1/\delta)}{N} + \frac{1}{N}\sum\nolimits_{i = 1}^N\min\{1, 2\beta^{-2}\|X_i\|^4/6\}.
\end{align*}
Using the Bernstein inequality as in \eqref{eq:bernstein} we have, with probability at least $1 - \delta$,
\[
\frac{1}{N}\sum\nolimits_{i = 1}^N\min\{1, 2\beta^{-2}\|X_i\|^4/6\} \le 2\kappa^4\beta^{-2}(\tr(\Sigma))^2/3 + \frac{3\log(1/\delta)}{N}.
\]
We choose $\beta = 10c\kappa^4 \tr(\Sigma)$ and, using the union bound, obtain that, with probability at least $1 - 2\delta$, simultaneously for all $v \in S^{d - 1}$ and $\alpha > 0$,
\[
\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle X_i, v\rangle^2\right) \le \alpha^2 v^{\top}\Sigma v + 2c\kappa^{4}\alpha^4\|\Sigma\|^2 + \frac{2}{75c\kappa^4} + \frac{10c\kappa^4\alpha^2\tr(\Sigma)}{N} + \frac{4\log(1/\delta)}{N}.
\]
Since $N \ge 100c\kappa^4\mathbf{r}(\Sigma)+ 400c\kappa^4\log(1/\delta)$, we have on the same event
\[
\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle X_i, v\rangle^2\right) \le \alpha^2 v^{\top}\Sigma v + \frac{1}{10}\alpha^2\|\Sigma\|+ 2c\kappa^{4}\alpha^4\|\Sigma\|^2 + \frac{11}{300c\kappa^4}.
\]
Returning to the $\eta$-corrupted sample $\widetilde X_1, \ldots, \widetilde X_N$ and since $|\psi(\cdot)| \le 1$,
\begin{equation}
\label{eq:sumofcorrupt}
\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle \widetilde X_i, v\rangle^2\right) \le \alpha^2 v^{\top}\Sigma v + \frac{1}{10}\alpha^2\|\Sigma\|+ 2c\kappa^{4}\alpha^4\|\Sigma\|^2 + \frac{11}{300c\kappa^4} + \eta.
\end{equation}
Observe that the function $\varphi: \mathbb{R}_{+} \to \mathbb{R}_{+}$ given by
\[
\varphi(\alpha) = \frac{1}{N}\sup\limits_{v \in S^{d - 1}}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle \widetilde X_i, v\rangle^2\right)
\]
is continuous (it is easy to check that $\varphi(\cdot)$ is a supremum of equi-Lipschitz functions) and non-decreasing. Furthermore, there is $w \in S^{d - 1}$ such that $\langle X_i, w\rangle \neq 0$ for all $i \in [N]$. Indeed, to construct such a vector we pick a random vector $U$ distributed as a zero mean multivariate Gaussian random vector with unit covariance. Since $X_i \neq 0$ almost surely for all $i \in [N]$, we have that $\langle U, X_i\rangle$ is a zero mean Gaussian (conditionally on $X_i$) with non-zero variance. Thus, taking $w = U/\|U\|$ we have almost surely $\langle X_i, w\rangle \neq 0$ for all $i \in [N]$. We have $\varphi(0) = 0$ and $\varphi\left(\frac{1}{\min_{i \in [N]}|\langle X_i, w\rangle|}\right) \ge 1 - \eta$, since at least $(1 - \eta)N$ of the vectors $\widetilde{X}_i$ coincide with the corresponding $X_i$, while the remaining summands are non-negative. Therefore, since $\kappa, c \ge 1$ and $\eta \le \frac{1}{300c\kappa^4}$, there is $\widehat{\alpha} > 0$ such that
\[
\varphi(\widehat{\alpha}) = \frac{1}{20c\kappa^4} + \eta.
\]
Using \eqref{eq:sumofcorrupt}, we get on the same event
\[
2c\kappa^{4}\widehat{\alpha}^4\|\Sigma\|^2 + \frac{11}{10}\widehat{\alpha}^2\|\Sigma\| - \frac{1}{75c\kappa^4} \ge 0.
\]
Solving the corresponding quadratic inequality with respect to $\widehat{\alpha}^2\|\Sigma\|$ (only the positive root plays a role) we get
\[
\frac{-11/10 + \sqrt{(11/10)^2 + 8/75}}{4c\kappa^{4}\widehat{\alpha}^2} \le \|\Sigma\|.
\]
This provides a lower bound on $\|\Sigma\|$ in terms of $\widehat{\alpha}^2$. We proceed with an upper bound. For $v \in S^{d - 1}$ and $\alpha > 0$ let $\rho_{\alpha, -\alpha, v}$ be a product of two multivariate Gaussian distributions with means $\alpha v$ and $-\alpha v$ respectively and the same covariance matrix $\beta^{-1}I_d$. Repeating the proof, we obtain the following analog of inequality \eqref{eq:sumofcorrupt}. With probability at least $1 - 2\delta$, for all $v \in S^{d - 1}$ and $\alpha > 0$,
\begin{equation}
\label{eq:lowertailquadraticeq}
-\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle \widetilde X_i, v\rangle^2\right) \le -\alpha^2 v^{\top}\Sigma v + \frac{1}{10}\alpha^2\|\Sigma\|+ 2c\kappa^{4}\alpha^4\|\Sigma\|^2 + \frac{11}{300c\kappa^4} + \eta.
\end{equation}
Let $v_1$ be a maximizer of $v^{\top}\Sigma v$ in $S^{d - 1}$. We have $v_1^{\top}\Sigma v_1 =\|\Sigma\|$. Furthermore, we have
\[
-\sup\limits_{v \in S^{d - 1}}\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle \widetilde X_i, v\rangle^2\right) \le -\frac{1}{N}\sum\nolimits_{i = 1}^N \psi\left(\alpha^2\langle \widetilde X_i, v_1\rangle^2\right).
\]
Since $-\varphi(\cdot)$ is non-increasing, this implies that, on the event where \eqref{eq:lowertailquadraticeq} holds, we have, simultaneously for all $\alpha \in [0, \widehat{\alpha}]$,
\[
-\frac{1}{20c\kappa^4} - \eta \le -\frac{9}{10}\alpha^2 \|\Sigma\|+ 2c\kappa^{4}\alpha^4\|\Sigma\|^2 + \frac{11}{300c\kappa^4} + \eta.
\]
Since $\eta \le \frac{1}{300c\kappa^4}$, this inequality reduces to
\begin{equation}
\label{eq:onalpha}
2c\kappa^{4}\alpha^4\|\Sigma\|^2 -\frac{9}{10}\alpha^2 \|\Sigma\| + \frac{7}{75c\kappa^4} \ge 0.
\end{equation}
As a function of $\alpha^2 \|\Sigma\|$, the corresponding quadratic has two non-negative roots:
\[
x_1 = \frac{9/10 - \sqrt{(9/10)^2 - 56/75}}{4c\kappa^4}\quad \text{and} \quad x_2 = \frac{9/10 + \sqrt{(9/10)^2 - 56/75}}{4c\kappa^4}.
\]
We show that since \eqref{eq:onalpha} holds for all $\alpha \in [0, \widehat{\alpha}]$, we should have that $\widehat{\alpha}^2\|\Sigma\| \le x_1$. If $\widehat{\alpha}^2\|\Sigma\| \in (x_1, x_2)$, then \eqref{eq:onalpha} already fails for $\alpha = \widehat{\alpha}$. Assume now that $\widehat{\alpha}^2\|\Sigma\| \ge x_2$. In this case $0 \le (x_{1} + x_2)/2 \le \widehat{\alpha}^2\|\Sigma\|$ and thus \eqref{eq:onalpha} should be satisfied for $\alpha^2\|\Sigma\| = (x_{1} + x_2)/2$. The obtained contradiction proves that $\widehat{\alpha}^2\|\Sigma\| \le x_1$. Finally, by the union bound we have, with probability at least $1 - 4\delta$,
\[
\frac{-11/10 + \sqrt{(11/10)^2 + 8/75}}{4c\kappa^{4}\widehat{\alpha}^2} \le \|\Sigma\| \le \frac{9/10 - \sqrt{(9/10)^2 - 56/75}}{4c\kappa^4\widehat{\alpha}^2}.
\]
Algebraic computations (the numerator on the left-hand side is at least $\frac{1}{24}$, while the numerator on the right-hand side is at most $\frac{2}{3}$) conclude the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:informalmain} in the regime \texorpdfstring{$p > 4$}{Lg}}
\label{sec:pmorefour}
In this section, we prove Theorem \ref{thm:informalmain} in the regime $p>4$. We begin with the following symmetrized version of Theorem \ref{thm:baiyin}.
\begin{corollary}
\label{cor:baiyin}
Assume that $Y$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_p-L_2$ \emph{norm equivalence} with $p > 4$.
Let $Y_1, \ldots, Y_N$ be a sample of independent copies of $Y$. Consider the truncated vectors $X_i = Y_i\ind{\|Y_i\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}}$ for $i = 1, \ldots, N$. Let $\varepsilon_1, \ldots, \varepsilon_N$ be independent Rademacher random signs. If $N \ge c(p)\mathbf{r}(\Sigma)$, then it holds that
\[
{\mathbb E}\sup\limits_{v \in S^{d - 1}}\left|\frac{1}{N}\sum\limits_{i = 1}^N \varepsilon_i\left\langle X_i, v\right\rangle^2\right| \le C(p)\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}},
\]
where $c(p)$ and $C(p)$ depend only on $p$ and $\kappa(p)$, and the expectation is taken with respect to both $X_i$ and $\varepsilon_i$, $i = 1, \ldots, N$.
\end{corollary}
\begin{proof}
We use the standard desymmetrization argument \cite[Section 2.1]{koltchinskii2011oracle}. Let $X^{\prime}_1, \ldots, X^{\prime}_N$ be independent copies of $X_1, \ldots, X_N$ and let ${\mathbb E}^{\prime}$ denote the expectation with respect to these random vectors. Using the triangle inequality, Jensen's inequality and the contraction principle \cite[Theorem 4.4]{ledoux2013probability} we get
\begin{align*}
{\mathbb E}\sup\limits_{v \in S^{d - 1}}\left|\sum\limits_{i = 1}^N \varepsilon_i\left\langle X_i, v\right\rangle^2\right| &\le {\mathbb E}\sup\limits_{v \in S^{d - 1}}\left|\sum\limits_{i = 1}^N \varepsilon_i(\left\langle X_i, v\right\rangle^2 - {\mathbb E}^{\prime}\left\langle X^{\prime}_i, v\right\rangle^2)\right| + {\mathbb E}\sup\limits_{v \in S^{d - 1}}\left|\sum\limits_{i = 1}^N \varepsilon_i{\mathbb E}\left\langle X_i, v\right\rangle^2\right| \\
&\le{\mathbb E}\E^{\prime}\sup\limits_{v \in S^{d - 1}}\left|\sum\limits_{i = 1}^N \left\langle X_i, v\right\rangle^2 - \left\langle X^{\prime}_i, v\right\rangle^2\right| + \|\Sigma\|\sqrt{N} \\
&\le2{\mathbb E}\sup\limits_{v \in S^{d - 1}}\left|\sum\limits_{i = 1}^N \left\langle X_i, v\right\rangle^2 - {\mathbb E}\left\langle X_i, v\right\rangle^2\right| + \|\Sigma\|\sqrt{N}.
\end{align*}
The bound of Theorem \ref{thm:baiyin} concludes the proof.
\end{proof}
In the regime $p > 4$, we show how to adapt the proposed estimator to obtain the optimal dependence on the contamination level $\eta$. The main insight is to interpret our estimator as a second order version of the trimmed mean estimator of Lugosi and Mendelson \cite{lugosi2021robust}. Therefore, our proof mainly follows their steps; however, two important modifications are required. First, we need to exploit Theorem \ref{thm:baiyin} when controlling the expected supremum of quadratic processes, while in \cite{lugosi2021robust} the control of the corresponding process follows from a simple application of the Cauchy--Schwarz inequality. To do so, we also need to truncate the norms of our observations as explained after Proposition \ref{prop:truncdoesnothurt}. Second, since the quadratic forms of interest are non-negative, we consider only a one-sided truncation. The main theorem of this section is the following result.
\begin{theorem}
\label{thm:thecasep_greater_four}
Assume that $X$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_p-L_2$ norm equivalence with $p>4$. Fix the corruption level $\eta \in [0, 1]$ and the confidence level $\delta \in (0, 1)$.
Then there exists an estimator $\widehat{\Sigma}_{\eta, \delta}$ such that, with probability at least $1 - \delta$,
\[
\left\|\widehat{\Sigma}_{\eta, \delta} - \Sigma\right\| \le C(p)\|\Sigma\|\left(\sqrt{\frac{\mathbf{r}(\Sigma) + \log(1/\delta)}{N}} + \kappa(p)^2\eta^{1-2/p}\right).
\]
Here $C(p)$ is a non-increasing function of $p$ that satisfies $C(p) \to \infty$ as $p \to 4$.
\end{theorem}
Let us first recall the form of our estimator in the case where $p = 4$. For some specifically chosen $\lambda > 0$, the estimator in Theorem \ref{thm:thecasepfour} is defined by the following set
\[
\Gamma = \bigcap\limits_{v \in S^{d - 1}}\left\{A \in \mathbb{S}_{+}^d: \left|\frac{1}{\lambda N}\sum\limits_{i=1}^N \psi\left(\lambda\langle\widetilde{X}_i,v\rangle^2\right) - v^{\top}A v\right| \le C\lambda\kappa^4\|\Sigma\|^2/2\right\},
\]
where $\psi(\cdot)$ is the truncation function given by \eqref{eq:truncfunction}. We defined $\widehat{\Sigma}_{\eta,\delta}$ to be any matrix in $\Gamma$. In the regime $p > 4$, we introduce a more involved estimator that uses a direction-dependent value of the one-sided threshold. For simplicity, we assume that we observe a sample of size $2N$. For a set of positive semi-definite matrices $\Gamma$, we define its diameter as $\Delta(\Gamma)=\sup_{A,B \in \Gamma}\|A-B\|$.
\medskip
\begin{tcolorbox}
\label{Box:Estimator}
The estimator in Theorem \ref{thm:thecasep_greater_four}
\begin{enumerate}
\item Given the confidence level $\delta$, the corruption level $\eta$ and the $\eta$-corrupted sample $\widetilde{Y}_1,\ldots,\widetilde{Y}_{2N}$, we split the sample in two parts of equal size and truncate each vector: $\widetilde{Z}_i=\widetilde{Y}_i \ind{\|\widetilde{Y}_i\|\le R}$ for $i\in \{1,\ldots,N\}$ and $\widetilde{X}_{i}=\widetilde{Y}_{N+i} \ind{\|\widetilde{Y}_{N+i}\|\le R}$ for $i\in \{1,\ldots,N\}$, where $R=(N\tr(\Sigma)\|\Sigma\|)^{1/4}$.
\item Set $\varepsilon = \max\left(20\eta, 560\frac{\log(2/\delta)}{N}\right)$.
\item For the first half of the sample, define $q_v=(\langle \widetilde{Z}_i,v\rangle^2)_{N\varepsilon/2}^{\ast}$, the $\lceil N\varepsilon/2\rceil$-th largest of the values $\langle \widetilde{Z}_i,v\rangle^2$, $i \in [N]$.
\item For the second half of the sample $\widetilde{X}_1,\ldots,\widetilde{X}_N$, we proceed as follows: For every $Q>0, v\in S^{d - 1}$, define the trimming level $\lambda_v(Q) = (q_v +Q)^{-1}$ and set
\begin{equation*}
\Gamma(Q)= \bigcap\limits_{v\in S^{d-1}}\left\{A \in \mathbb{S}_{+}^d: \left|\frac{1}{\lambda_v(Q)N}\sum_{i=1}^N \psi\left(\lambda_v(Q)\langle\widetilde{X}_i,v\rangle^2\right) - v^{\top}A v\right| \le 4\varepsilon Q\right\}.
\end{equation*}
\item Let $\widehat{\Sigma}_{\eta,\delta}$ be any matrix in the set $\Gamma(2^{i^{\ast}})$, where $i^{\ast}$ minimizes the diameter $\Delta(\Gamma(2^{i}))$ over all integers $i$ such that $\Gamma(2^{i})$ is non-empty.
\end{enumerate}
\end{tcolorbox}
\begin{remark}
Due to the bounds in Section \ref{sec:estimtraceandopernorm}, we explicitly assume the knowledge of the operator norm $\|\Sigma\|$ and the effective rank $\mathbf{r}(\Sigma)$ up to multiplicative constant factors.
\end{remark}
Our first technical observation is that, by truncating the norms of the vectors at the level $R = (N\tr(\Sigma)\|\Sigma\|)^{1/4}$, the covariance matrix does not change too much. At the same time, the truncation allows us to apply Theorem \ref{thm:baiyin} in our analysis.
\begin{proposition}
\label{prop:truncdoesnothurt}
Assume that $Y$ is a zero mean random vector with covariance $\Sigma$ satisfying $L_p-L_2$ \emph{norm equivalence} with $p \ge 4$.
Consider the truncated vector $X = Y\ind{\|Y\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}}$ and define $\widetilde \Sigma = {\mathbb E} X\otimes X$. It holds that
\[
\|\Sigma - \widetilde \Sigma\| \le \kappa(4)^4\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}}.
\]
Moreover, $\|\widetilde \Sigma\| \le \|\Sigma\|$ and $\tr(\widetilde \Sigma) \le \tr(\Sigma)$.
\end{proposition}
\begin{proof}
The proof follows from the proof of Lemma 2.1 in \cite{mendelson2020robust}. For the second part of the claim, we observe that $\Sigma - \widetilde \Sigma$ is positive semi-definite.
\end{proof}
The result of Proposition \ref{prop:truncdoesnothurt} allows us to focus on estimating the matrix of second moments $\widetilde \Sigma$, since, by the triangle inequality, the estimator $\widehat{\Sigma}_{\eta,\delta}$ satisfies
\[
\|\widehat{\Sigma}_{\eta,\delta} - \Sigma\| \le \|\widehat{\Sigma}_{\eta,\delta} - \widetilde\Sigma\| + \kappa(4)^4\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}},
\]
and the last term only affects the multiplicative constant factors in our bound. We proceed by defining the following quantity
\begin{equation*}
Q_0 = C_{Q_0}\max\left\{\frac{1}{\varepsilon}\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)+\log(2/\delta)}{N}}, \varepsilon^{-2/p}\kappa(p)^2\|\Sigma\|\right\}.
\end{equation*}
Here $C_{Q_0} = C_{Q_0}(p)>0$ is to be chosen later; it depends only on $p$ and is non-increasing with respect to $p$. The reader should keep in mind that the order of the convergence rate will be dictated by $\varepsilon Q_0$. The first term in the definition above is responsible for the rate in the uncontaminated case and the second term captures the dependence on $\eta$ according to the value of $p$. We start with an important lemma that is the second order analogue of \cite[Lemma 1]{lugosi2021robust}. For the rest of this section, $\widetilde{\Sigma}$ is the matrix of second moments of the truncated vector $X = Y\ind{\|Y\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}}$ and $\varepsilon = \max\left(20\eta, 560\frac{\log(2/\delta)}{N}\right)$.
\begin{lemma}
\label{lem:empirical_quantile}
Let $Y \in \mathbb{R}^d$ be a mean zero random vector satisfying the $L_p-L_2$ norm equivalence assumption with $p>4$. Let $Y_1,\ldots,Y_N$ be i.i.d. copies of $Y$ and set $Z_i=Y_i\ind{\|Y_i\| \le (N\tr(\Sigma)\|\Sigma\|)^{1/4}}$ for every $i\in [N]$. Then, with probability at least $1-\delta/2$,
\begin{equation*}
\sup_{v\in S^{d-1}}\left|\left\{i \in [N]:\langle Z_i,v\rangle^2-v^{\top}\widetilde{\Sigma} v \ge Q_0\right\}\right|\le \frac{\varepsilon}{4}N.
\end{equation*}
\end{lemma}
\begin{proof}
For simplicity of notation, we write $\overline{Z}_i(v)=\langle Z_i,v\rangle^2-v^{\top}\widetilde{\Sigma} v$. The proof is a standard application of a small ball argument in empirical process theory. Consider a function $\xi:\mathbb{R}\rightarrow \mathbb{R}$ defined by
\begin{equation*}
\xi(x)=
\begin{cases}
0, &x\le Q_0/2,\\
\frac{2x}{Q_0} -1, &x\in (Q_0/2,Q_0], \\
1, &x>Q_0.
\end{cases}
\end{equation*}
It is clear that $\ind{\overline{Z}_i(v)\ge Q_0}\le \xi(\overline{Z}_i(v))\le\ind{\overline{Z}_i(v)\ge Q_0/2} $ and $\xi$ is a Lipschitz function with constant $2/Q_0$.
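Indeed, $\xi$ vanishes on $(-\infty, Q_0/2]$, equals $1$ on $(Q_0, \infty)$, and interpolates linearly in between, so that for all $x, y \in \mathbb{R}$,
\[
|\xi(x) - \xi(y)| \le \frac{2}{Q_0}|x - y|.
\]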
By symmetrization \cite{gine1984some} and the contraction lemma for Rademacher processes \cite[Theorem 4.4]{ledoux2013probability}, we have
\begin{equation*}
\begin{split}
\mathbb{E}\sup_{v\in S^{d-1}}\frac{1}{N}\sum_{i=1}^N \ind{\overline{Z}_i(v)\ge Q_0} &\le \mathbb{E}\sup_{v\in S^{d-1}}\frac{1}{N}\sum_{i=1}^N \xi(\overline{Z}_i(v))\\
& \le 2\mathbb{E}\sup_{v\in S^{d-1}}\frac{1}{N}\left|\sum_{i=1}^N \varepsilon_i\xi(\overline{Z}_i(v))\right| + \sup_{v\in S^{d-1}}\mathbb{E}\xi(\overline{Z}(v))\\
&\le \frac{4}{Q_0}\mathbb{E}\sup_{v\in S^{d-1}}\frac{1}{N}\left|\sum_{i=1}^N \varepsilon_i\overline{Z}_i(v)\right| + \sup_{v\in S^{d-1}}\mathbb{E}\xi(\overline{Z}(v)).
\end{split}
\end{equation*}
Now, we define $C_{BY} = C_{BY}(p)>0$ to be the constant in the conclusion of Corollary \ref{cor:baiyin}. We remark that $C_{BY}$ is at most an absolute constant for any $p \ge c > 4$, where $c$ is another absolute constant. We now apply Corollary \ref{cor:baiyin} and the same arguments as in its proof and obtain
\begin{equation*}
\begin{split}
\mathbb{E}\sup_{v \in S^{d - 1}}\frac{1}{N}\left|\sum_{i=1}^N \varepsilon_i (\langle Z_i,v\rangle^2- v^{\top}\widetilde{\Sigma}v)\right| &\le \mathbb{E}\sup_{v \in S^{d - 1}}\frac{1}{N}\left|\sum_{i=1}^N \varepsilon_i \langle Z_i,v\rangle^2\right| + \frac{\|\Sigma\|}{N}\mathbb{E}\left|\sum_{i=1}^N \varepsilon_i\right| \\
&\leq 2C_{BY} \|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}} \le \frac{Q_0\varepsilon}{128}.
\end{split}
\end{equation*}
The last steps follow from the fact that $\mathbf{r}(\Sigma)\ge 1$ and $\|\widetilde{\Sigma}\|\le \|\Sigma\|$. By the definition of $Q_0$ we conclude that the first term is at most $\frac{\varepsilon}{32}$ if we choose a sufficiently large constant $C_{Q_0}>0$. We now proceed to bound the second term. We use Markov's inequality together with the $L_p-L_2$ norm equivalence to obtain
\begin{equation*}
\begin{split}
\mathbb{E}\xi\left(\langle Z,v\rangle^2-v^{\top}\widetilde{\Sigma} v\right) \le \mathbb{P}\left(\langle Z,v\rangle^2\ge Q_0/2+v^{\top}\widetilde{\Sigma} v\right) &\leq \frac{\kappa(p)^p}{\left(Q_0/2+v^{\top}\widetilde{\Sigma} v\right)^{p/2}}(\mathbb{E}\langle Z,v\rangle^2)^{p/2}\\
&\leq \left(\frac{2\kappa(p)^2\|\Sigma\|}{Q_0}\right)^{p/2}.
\end{split}
\end{equation*}
Again, by the definition of $Q_0$, we can choose $C_{Q_0}>0$ such that the right hand side above is at most $\frac{\varepsilon}{32}$. By Talagrand's concentration inequality for the supremum of an empirical process (Massart's version \cite{massart2000constants}), with probability at least $1-\exp(-x)$,
\begin{equation*}
\sup_{v\in S^{d-1}}\frac{1}{N}\sum_{i=1}^N \ind{\overline{Z}_i(v)\ge Q_0} \le \frac{\varepsilon}{8} + \sqrt{\frac{8x}{N}}\frac{\sqrt{\varepsilon}}{8} + \frac{35 x}{N}.
\end{equation*}
We choose $x = \varepsilon N/560$ to conclude the proof.
\end{proof}
\noindent We now describe the first part of the estimation procedure. First, we assume without loss of generality that $\varepsilon < 1$. When the conclusion of the previous lemma holds, we immediately obtain that, for every $v\in S^{d-1}$ and $Q\in (2Q_0,4Q_0)$,
\begin{equation*}
\quad q_v -v^{\top}\widetilde{\Sigma} v + Q \le 5Q_0 \quad \text{and} \quad q_v -v^{\top}\widetilde{\Sigma} v + Q \ge Q_0.
\end{equation*}
To see why the latter condition holds, suppose by contradiction that $q_v -v^{\top}\widetilde{\Sigma} v + Q \le Q_0$; then $q_v \le v^{\top}\widetilde{\Sigma} v -Q_0$.
Since $\varepsilon < 1$, by the choice of $Q_0$, we have $Q_0 > \|\Sigma\|$ and then we get $0\le q_v \le v^{\top}\widetilde{\Sigma} v -Q_0 <0$, which is clearly a contradiction. Moreover, both inequalities above still hold for an $\eta$-corrupted sample, since $\varepsilon\ge 20\eta$ and, by the definition of $\varepsilon$, the fraction of points exceeding the threshold is at most $\frac{\varepsilon}{4}+\eta \le \varepsilon(\frac{1}{20}+\frac{1}{4})<\frac{\varepsilon}{2}$. So we use the samples $\widetilde{Z}_1,\ldots,\widetilde{Z}_N$ to estimate $q_v$ and, since they are independent of the second half of the sample, we work conditionally on the event of Lemma \ref{lem:empirical_quantile}. The second part of the estimation procedure consists in proving that $\Gamma(Q)$ is indeed non-empty. The formal statement is the proposition below.
\begin{proposition}
\label{prop:U_Q(v)}
Under the notation of Theorem \ref{thm:thecasep_greater_four}, fix $Q \in [2Q_0,4Q_0]$ and assume that the event of Lemma \ref{lem:empirical_quantile} holds. There exists an absolute constant $C>0$ such that if $C_{Q_0}\ge C$, then, with probability at least $1-\delta/2$,
\begin{equation*}
\sup_{v\in S^{d-1}} \left|\frac{1}{\lambda_v(Q)N}\sum_{i=1}^N \psi\left(\lambda_v(Q)\langle \widetilde{X}_i,v\rangle^2\right) - v^{\top}\widetilde{\Sigma} v\right| \le 4\varepsilon Q.
\end{equation*}
\end{proposition}
It will be convenient to work with a two-sided truncation function in the analysis, so we define the following function, for arbitrary non-zero $\lambda_1, \lambda_2$ with $\frac{1}{\lambda_1} \le \frac{1}{\lambda_2}$,
\begin{equation*}
\psi_{\lambda_1,\lambda_2}(x)=\begin{cases}
\frac{1}{\lambda_2}, \quad x>\frac{1}{\lambda_2}\\
x, \quad x\in [\frac{1}{\lambda_1},\frac{1}{\lambda_2}]\\
\frac{1}{\lambda_1}, \quad x<\frac{1}{\lambda_1}.
\end{cases}
\end{equation*}
In particular, for $\lambda > 0$ this notation is consistent with the one-sided truncation in the sense that $\frac{1}{\lambda}\psi(\lambda x) = \psi_{-\lambda,\lambda}(x)$.
\begin{proof}
First, for every $v\in S^{d-1}$, due to the assumption that the event of Lemma \ref{lem:empirical_quantile} holds, the uncorrupted sample $X_1, \ldots, X_N$ satisfies
\begin{equation*}
\left| \frac{1}{\lambda_v(Q)N}\sum_{i=1}^N \psi\left(\lambda_v(Q)\langle \widetilde{X}_i,v\rangle^2\right) - \frac{1}{\lambda_v(Q)N}\sum_{i=1}^N \psi\left(\lambda_v(Q)\langle X_i,v\rangle^2\right)\right| \le 2\eta Q \le \frac{\varepsilon Q}{10}.
\end{equation*}
We center the empirical process to obtain that
\begin{equation*}
\frac{1}{\lambda_v(Q)N}\sum_{i=1}^N \psi\left(\lambda_v(Q)\langle X_i,v\rangle^2\right) -v^{\top}\widetilde{\Sigma} v = \frac{1}{\lambda_v^{c}(Q)N}\sum_{i=1}^N \psi\left(\lambda_v^c(Q)(\langle X_i,v\rangle^2-v^{\top}\widetilde{\Sigma} v)\right),
\end{equation*}
where $\lambda_v^c(Q)=(q_v+Q-v^{\top}\widetilde{\Sigma} v)^{-1}$. To see why this holds, observe that both sides are equal except when $\langle X_i,v\rangle^2 - v^{\top}\widetilde{\Sigma} v \le -q_v - Q+v^{\top}\widetilde{\Sigma} v$. In this case, $0\le \langle X_i,v\rangle^2 \le -q_v -Q + 2v^{\top}\widetilde{\Sigma} v$. Since $Q\ge 2\|\Sigma\|$ if $C_{Q_0}\ge 2$ and $\varepsilon < 1$, the latter inequality cannot happen.
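Explicitly, since $q_v \ge 0$ and $v^{\top}\widetilde{\Sigma} v \le \|\Sigma\|$,
\[
-q_v - Q + 2v^{\top}\widetilde{\Sigma} v \le 2\|\Sigma\| - Q \le 0 \le \langle X_i,v\rangle^2,
\]
so no summand falls below the lower truncation level.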
Now, on the event of Lemma \ref{lem:empirical_quantile}, $Q_0\le q_v -v^{\top}\widetilde{\Sigma} v + Q\le 5Q_0 $ and therefore
\begin{equation*}
\frac{1}{\lambda_v^{c}(Q)N}\sum_{i=1}^N \psi\left(\lambda_v^c(Q)(\langle X_i,v\rangle^2-v^{\top}\widetilde{\Sigma} v)\right) \le \frac{1}{N}\sum_{i=1}^N\psi_{-\frac{1}{Q_0},\frac{1}{5Q_0}}\left(\langle X_i,v\rangle^2-v^{\top}\widetilde{\Sigma} v\right).
\end{equation*}
We define $\overline{U}_Q(v)$ to be the right hand side in the inequality above. We first study how the empirical process of interest concentrates around the mean. To simplify the analysis, we consider a simpler quantity, $\overline{W}_Q(v)=\frac{1}{N}\sum_{i=1}^N \psi_{\frac{-1}{3Q},\frac{1}{3Q}}\left(\langle X_i,v\rangle^2 -v^{\top}\widetilde{\Sigma}v\right)$. The starting point for the concentration part is
\begin{align*}
\sup_{v\in S^{d-1}}(\overline{U}_Q(v)- \mathbb{E}\overline{U}_Q(v)) &\le \sup_{v\in S^{d-1}}(\overline{U}_Q(v)- \overline{W}_Q(v)) + \sup_{v\in S^{d-1}}(\overline{W}_Q(v)- \mathbb{E}\overline{W}_Q(v)) \\
&\qquad+ \sup_{v\in S^{d-1}}(\mathbb{E}\overline{W}_Q(v)- \mathbb{E}\overline{U}_Q(v))\\
&= (a) + (b) + (c).
\end{align*}
For the first term, observe that it is non-zero if and only if the argument of the function is above $5Q_0$ or below $-Q_0$; the latter cannot happen because of the same contradiction argument as above. The magnitude of the difference is at most $3Q$. By Lemma \ref{lem:empirical_quantile}, there are at most $\varepsilon N/4$ points in the range $(5Q_0,\infty)$; therefore, we obtain that the first term (a) is at most $\frac{3\varepsilon Q}{4}$. Similarly, we apply Markov's inequality together with the $L_p-L_2$ norm equivalence assumption to obtain that
\begin{equation*}
\begin{split}
\sup_{v\in S^{d-1}}\mathbb{E}\left(\overline{W}_Q(v) - \overline{U}_Q(v)\right) &\le 3Q\mathbb{P}(\langle X,v\rangle^2 - v^{\top}\widetilde{\Sigma} v >5Q_0)\\
&\leq 3Q \left(\frac{\kappa(p)^2\|\Sigma\|}{5Q_0}\right)^{p/2} \le \frac{3\varepsilon Q}{32}.
\end{split}
\end{equation*}
This concludes the bound for (c). The second term (b) is handled with Talagrand's concentration inequality for empirical processes (Massart's version \cite{massart2000constants}). Observe that, for every $v\in S^{d-1}$,
\begin{equation*}
\left|\psi_{-\frac{1}{3Q},\frac{1}{3Q}}\left(\langle X,v\rangle^2 - v^{\top}\widetilde{\Sigma} v\right)\right| \le 3Q \quad \text{and} \quad \mathbb{E}\left|\psi_{-\frac{1}{3Q},\frac{1}{3Q}}\left(\langle X,v\rangle^2 - v^{\top}\widetilde{\Sigma} v\right)\right|^2 \le \mathbb{E}\langle X,v\rangle^4 \leq \kappa(4)^4 \|\Sigma\|^2.
\end{equation*}
Moreover, the function $\psi_{-\frac{1}{Q},\frac{1}{Q}}(x)$, for every $Q>0$, is a $1$-Lipschitz function that passes through the origin. By the same symmetrization and contraction arguments as in the proof of Lemma \ref{lem:empirical_quantile}, we obtain that
\begin{equation*}
\mathbb{E}\sup_{v\in S^{d-1}}\left|\overline{W}_Q(v) - \mathbb{E}\overline{W}_Q(v)\right| \le 2 \mathbb{E}\sup_{v\in S^{d-1}}\left|\frac{1}{N}\sum_{i=1}^N \varepsilon_i \langle X_i,v\rangle^2\right| \leq 2C_{BY} \|\Sigma\| \sqrt{\frac{\mathbf{r}(\Sigma)}{N}}.
\end{equation*}
Therefore, by Talagrand's concentration inequality for empirical processes, with probability at least $1-2e^{-x}$,
\begin{equation*}
\sup_{v\in S^{d-1}}|\overline{W}_Q(v)-\mathbb{E}\overline{W}_Q(v)| \leq 4C_{BY}\|\Sigma\|\sqrt{\frac{\mathbf{r}(\Sigma)}{N}} +\kappa(4)^2\|\Sigma\|\sqrt{\frac{8x}{N}} + 105Q \frac{x}{N}.
\end{equation*}
We choose $x = 2\log (2/\delta)$ to obtain that the right hand side above is less than $\frac{\varepsilon Q}{2}$. Now, the problem boils down to controlling uniformly the typical magnitude of $\overline{U}_Q(v)$. For simplicity, we define $\overline{X}_i(v)=\langle X_i,v\rangle^2-v^{\top}\widetilde{\Sigma}v$ and write
\begin{equation*}
\begin{split}
\sup_{v\in S^{d-1}}\mathbb{E}\overline{U}_Q(v) &= \sup_{v\in S^{d-1}}\mathbb{E}[\psi_{-\frac{1}{Q_0},\frac{1}{5Q_0}}(\langle X,v\rangle^2 - v^{\top}\widetilde{\Sigma} v)(\ind{\overline{X}(v)\le 5Q_0}+\ind{\overline{X}(v)> 5Q_0})]\\
&\le \sup_{v\in S^{d-1}}5Q_0\mathbb{E}\ind{\overline{X}(v)\ge 5Q_0}\\
&\le 5Q\frac{\varepsilon}{64},
\end{split}
\end{equation*}
where the term corresponding to $\ind{\overline{X}(v)\le 5Q_0}$ is non-positive, since $\psi_{-\frac{1}{Q_0},\frac{1}{5Q_0}}(z) \le z$ and ${\mathbb E}\,\overline{X}(v) = 0$. The final bound for the first part becomes $\varepsilon Q(\frac{1}{10}+\frac{3}{4}+\frac{3}{32}+\frac{5}{64}+\frac{1}{2})\le 2\varepsilon Q$. For the bound in the other direction, observe that
\begin{equation*}
\begin{split}
v^{\top}\widetilde{\Sigma} v-\frac{1}{\lambda_v(Q)N}\sum_{i=1}^N\psi(\lambda_v(Q)\langle X_i,v\rangle^2) &= \frac{1}{\lambda_v^c(Q)N}\sum_{i=1}^N\psi\left(\lambda_v^c(Q)\left(v^{\top}\widetilde{\Sigma} v- \langle X_i,v\rangle^2\right)\right)\\
&\le \frac{1}{N}\sum_{i=1}^N\psi_{-\frac{1}{Q_0},\frac{1}{5Q_0}}\left(v^{\top}\widetilde{\Sigma} v-\langle X_i,v\rangle^2\right).
\end{split}
\end{equation*}
The same analysis applies, which concludes the proof.
\end{proof}
The proof of Theorem \ref{thm:thecasep_greater_four} follows directly from Lemma \ref{lem:empirical_quantile} and Proposition \ref{prop:U_Q(v)}. Indeed, consider the event of success $\mathcal{E}$ to be the intersection of the events of Lemma \ref{lem:empirical_quantile} and Proposition \ref{prop:U_Q(v)}. It occurs with probability at least $1-\delta$ as desired. The argument is standard in the literature \cite{lugosi2021robust} and is similar to Lepskii's method \cite{lepskii1992asymptotically}. Consider $i_0$ to be the smallest integer satisfying $2^{i_0} \in (2Q_0,4Q_0)$. By Lemma \ref{lem:empirical_quantile} we know that at most $\frac{\varepsilon N}{4}$ samples exceed the level $q_v + 2^{i_0}$. We conclude that the difference between $U_Q(v)$ and $U_{2Q}(v)$ is at most $\frac{\varepsilon Q}{2}$. By induction this implies that $\Gamma(2^{i}) \subset \Gamma(2^{i+1})$ for every $i\ge i_0$; that is, the sets are nested. On the event $\mathcal{E}$, the set $\Gamma(2^{i_0})$ contains $\widetilde{\Sigma}$ and has diameter at most $8\varepsilon 2^{i_0}$, so that, by the choice of $i^{\ast}$, $\Delta(\Gamma(2^{i^{\ast}})) \le 8\varepsilon 2^{i_0}$. Since the sets are nested, both $\widehat{\Sigma}_{\eta,\delta}$ and $\widetilde{\Sigma}$ belong to $\Gamma(2^{\max(i_0, i^{\ast})})$, whose diameter is at most $8\varepsilon 2^{i_0}$ in either case. Therefore, $\|\widehat{\Sigma}_{\eta,\delta}-\widetilde{\Sigma}\|\le 8\varepsilon 2^{i_0} \le 32\varepsilon Q_0$. We conclude the proof by applying Proposition \ref{prop:truncdoesnothurt}. \qed
\section{Optimality}
\label{sec:optimality}
In Theorem \ref{thm:informalmain} we showed that there is an estimator for the covariance matrix under heavy tails and adversarial corruption. It is natural to ask about the optimality of our statistical guarantees. It is well known that the term $C\|\Sigma\| \sqrt{\mathbf{r}(\Sigma)/N}$ appears in the minimax lower bound for covariance estimation \cite[Theorem 2]{lounici2014high} as well as in the lower bound for the performance of the sample covariance matrix in the Gaussian case \cite[Theorem 4]{koltchinskii2017operators}. Clearly, this term is necessary because the adversary can always leave the sample uncorrupted and the multivariate Gaussian distribution satisfies the norm equivalence assumption \eqref{eq:momeqv}. Further, we show that the term $C\|\Sigma\| \sqrt{\log(1/\delta)/N}$ cannot be improved in general.
As shown in \cite{mendelson2020robust} (following the lower bound in \cite{lugosi2019near}), the term scaling as $R\sqrt{\log(1/\delta)/N}$ appears in the lower bound for any estimator of the covariance matrix. Here $R^2$ is the so-called \emph{weak variance} defined by
\[
R^2 = \sup\nolimits_{v \in S^{d - 1}}{\mathbb E}\left(v^{\top}(X \otimes X - {\mathbb E} X \otimes X)v\right)^2.
\]
In the multivariate Gaussian case, using the standard relation between moments, we have
\[
R^2 \ge \sup\nolimits_{v \in S^{d - 1}}{\mathbb E}\langle v, X\rangle^4 - \|\Sigma\|^2 = 3\sup\nolimits_{v \in S^{d - 1}}\left({\mathbb E}\langle v, X\rangle^2\right)^2 - \|\Sigma\|^2 = 2\|\Sigma\|^2.
\]
This shows the necessity of the term $C\|\Sigma\| \sqrt{\log(1/\delta)/N}$ in Theorem \ref{thm:informalmain}.

Finally, we discuss the convergence rate with respect to the fraction of corrupted samples $\eta$. Our key argument to derive the minimax optimality with respect to $\eta$ is to reduce our problem to a mean estimation problem. For mean estimation, a simple computation yields a lower bound with respect to $\eta$. To describe this result, we first recall a basic definition.
\begin{definition}
For a random variable $X$, we define the quantile
\begin{equation*}
Q_q(X) = \sup\{M\in \mathbb{R}: \mathbb{P}(X \ge M) \ge 1-q\}.
\end{equation*}
\end{definition}
Formally, the following simple lower bound holds.
\begin{proposition}[Inequality 2.3 in \cite{lugosi2021robust}]
Let $X$ be a random variable with mean $\mu$, finite variance, and an absolutely continuous distribution. Suppose that $\widetilde{X}_1,\ldots,\widetilde{X}_N$ is an $\eta$-corrupted sample drawn according to the distribution of $X$. Let $\overline{X}=X-\mathbb{E}X$ and define $\epsilon(\overline{X},\eta)$ as follows:
\begin{equation*}
\epsilon (\overline{X},\eta)= \max\left\{\mathbb{E}|\overline{X}-Q_{\eta/2}(\overline{X})|\ind{\overline{X}\le Q_{\eta/2}(\overline{X})},\mathbb{E}|\overline{X}-Q_{1-\eta/2}(\overline{X})|\ind{\overline{X}\ge Q_{1-\eta/2}(\overline{X})}\right\}.
\end{equation*}
Then no estimator $\widehat{\mu}=\widehat{\mu}(\widetilde{X}_1,\ldots,\widetilde{X}_N)$ of the mean $\mu$ can guarantee an error better than $\epsilon(\overline{X},\eta)$; that is, in the worst case over $\eta$-corrupted samples,
\begin{equation*}
|\widehat{\mu} - \mu| \ge \epsilon(\overline{X},\eta).
\end{equation*}
\end{proposition}
To obtain the minimax rates with respect to $\eta$, it is enough to restrict our attention to the one-dimensional case. Assume that the distribution of a zero mean random variable $X$ satisfies the $L_4-L_2$ norm equivalence \eqref{eq:momeqv}. That is, for some $\kappa \ge 1$, it holds that $\left({\mathbb E} X^4\right)^{1/4} \le \kappa \left({\mathbb E} X^2\right)^{1/2}$. Denote $Y = X^2$ and observe that $Y$ is non-negative. In this case, the estimation of the variance of $X$ can be seen as the estimation of the mean of $Y$. The norm equivalence assumption can be rewritten as $ \left({\mathbb E} Y^2\right)^{1/2} \le \kappa^2{\mathbb E} Y. $
\begin{example}[Optimality of the $\sqrt{\eta}$-term when $p = 4$]
\label{ex:sqrteta}
We first construct a few intermediate random variables before defining $Y$. Define
\begin{equation*}
Y_1=\begin{cases} -\frac{1}{\sqrt{\eta}}, &\text{with probability} \ \frac{\eta}{2},\\ -1, &\text{with probability} \ \frac{1-\eta}{2},\\ 1, &\text{with probability} \ \frac{1-\eta}{2},\\ \frac{1}{\sqrt{\eta}}, &\text{with probability} \ \frac{\eta}{2}. \end{cases}
\end{equation*}
Clearly $\mathbb{E}Y_1 =0$. Let $\sigma_{Y_1}$ denote the standard deviation of $Y_1$.
It holds that
\begin{equation*}
\sigma_{Y_1}=(\mathbb{E}Y_1^2)^{1/2} = \sqrt{2-\eta} \le \sqrt{2}.
\end{equation*}
Assume that $\eta \le 1/4$. We have $Q_{\eta/2}(Y_1) = -1$. Indeed, $\mathbb{P}(Y_1\ge -1) = 1-\frac{\eta}{2}$ and $\mathbb{P}(Y_1\ge 1) = \frac{1}{2}< 1-\frac{\eta}{2}$. We conclude that
\begin{equation*}
\epsilon(Y_1,\eta)\ge \mathbb{E}|Y_1+1|\ind{Y_1\le -1} = \left(\frac{1}{\sqrt{\eta}}-1\right)\frac{\eta}{2} = \frac{\sqrt{\eta}}{2}-\frac{\eta}{2}\ge \frac{1}{4}\sqrt{\eta}.
\end{equation*}
We now consider the random variable $Y_2=\frac{\sigma^2}{\sqrt{2-\eta}}Y_1$, where $\sigma > 0$ is a real number. It still holds that $\mathbb{E}Y_2 = 0$, but now $(\mathbb{E}Y_2^2)^{1/2} = \sigma^2$. By homogeneity, we have
\[
\epsilon(Y_2,\eta)\ge \frac{\sigma^2\sqrt{\eta}}{4\sqrt{2-\eta}} \ge \frac{\sigma^2\sqrt{\eta}}{4\sqrt{2}}.
\]
Finally, observe that $Y=Y_2 +\|Y_2\|_{\infty}$ is a non-negative random variable and $\epsilon(Y,\eta) = \epsilon(Y_2,\eta)$, since the difference between $Y$ and $Y_2$ is a constant. We conclude by observing that ${\mathbb E} Y = \|Y_2\|_{\infty}$ and ${\mathbb E} Y^2 = {\mathbb E} Y_2^2 + \|Y_2\|^2_{\infty} \le 2\|Y_2\|^2_{\infty}$. Thus, the norm equivalence assumption $\left({\mathbb E} Y^2\right)^{1/2} \le \kappa^2{\mathbb E} Y$ holds with $\kappa^2 = \sqrt{2}$.
\end{example}
We now present an example that attains the sub-exponential minimax rate in mean estimation and therefore confirms the optimality of our covariance estimation results in the sub-Gaussian case. Indeed, observe that if $X$ is a sub-Gaussian random variable, then $Y = X^2$ is a sub-exponential random variable (see \cite[Lemma 2.7.6]{Vershynin2016HDP}). Therefore, when making a reduction from covariance estimation to a mean estimation problem as in Example \ref{ex:sqrteta}, we have to analyze the mean estimation problem in the sub-exponential regime. We note that a similar sub-Gaussian construction appears in \cite[Remark before Section 2.1]{lugosi2021robust}\footnote{To make their construction work in the Gaussian case, we changed the definition of $Q$, so that it becomes an $\eta/4$-quantile instead of the $\eta/2$-quantile claimed in \cite{lugosi2021robust}.}.
\begin{example}[Optimality of the $\eta\log(1/\eta)$-term in sub-Gaussian covariance estimation]
Let $Y= \min\{1,|g|^2\}\ind{|g|^{2}< Q} + |g|^2\ind{|g|^2\ge Q}$, where $g$ is a standard Gaussian random variable and $Q$ is such that $\mathbb{P}(|g|^2 \ge Q) = \eta/4$. Consequently, we have $\mathbb{P}(|g|\ge \sqrt{Q}) = \eta/4$. It is easy to see that for $\eta$ sufficiently small, it holds that $Q > 1$. Let us now prove that $Q_{1 - \eta/2}(Y) = 1$. Indeed, since $\Pr(Y \ge 1) = \Pr(|g| \ge 1) > \eta/2$ for small enough $\eta$, we have $Q_{1 - \eta/2}(Y) \ge 1$. It is clear that if $Q_{1 - \eta/2}(Y) > 1$, then $Q_{1 - \eta/2}(Y) \ge Q$, which contradicts $\mathbb{P}(|g|^2 \ge Q) = \eta/4 < \eta/2$. Thus, we have
\[
\epsilon(\overline{Y},\eta) \ge \mathbb{E}|\overline{Y}-Q_{1-\eta/2}(\overline{Y})|\ind{\overline{Y}\ge Q_{1-\eta/2}(\overline{Y})} = \mathbb{E}(Y-1)\ind{Y\ge 1}.
\]
This leads to
\begin{align*}
\mathbb{E}(Y-1)\ind{Y\ge 1} &= \mathbb{E}Y\ind{1 \le Y < Q} + \mathbb{E}Y\ind{Y\ge Q} - \Pr(Y \ge 1) \\
&=\Pr(1 \le |g|^2 < Q) + \mathbb{E}|g|^2\ind{|g|^2\ge Q} - \Pr(|g|^2 \ge 1) \\
&= \mathbb{E}|g|^2\ind{|g|^2\ge Q} - \Pr(|g|^2 \ge Q).
\end{align*}
Using the standard computation (see \cite[Exercise 2.1.4]{Vershynin2016HDP}), we obtain
\[
\mathbb{E}|g|^2\ind{|g|^2\ge Q} = \frac{2}{\sqrt{2\pi}}\sqrt{Q}\exp(-Q/2) + \Pr(|g| \ge \sqrt{Q}).
\]
Therefore, since $\eta/4= \mathbb{P}(|g|\ge \sqrt{Q})$, by the standard Gaussian integration \cite[Proposition 2.1.2]{Vershynin2016HDP} we have
\[
\epsilon(\overline{Y},\eta) \ge \frac{2}{\sqrt{2\pi}}\sqrt{Q}\exp(-Q/2) \ge Q\Pr(|g| \ge \sqrt{Q}) = \frac{Q\eta}{4}.
\]
Using the same Gaussian integration formula, we conclude that, in order to get $\Pr(|g| \ge \sqrt{Q}) = \eta/4$, we need to choose $Q\sim \log\left(1/\eta\right)$. This implies that
\[
\epsilon(\overline{Y},\eta)\ge \frac{Q\eta}{4} \gtrsim \eta \log\left(1/\eta\right).
\]
We claim that the rate $\eta\log\left(1/\eta\right)$ is sharp. To see this, we compute the upper bound with respect to $\eta$ in Theorem \ref{thm:informalmain}. If the original random variable $X$ is sub-Gaussian, then $\kappa(p)^2 \sim p$ for every $p\ge 1$ (see \cite[Proposition 2.5.2]{Vershynin2016HDP}), and the upper bound scales as $p\eta^{1-2/p}$. We choose $p\sim \log\left(1/\eta\right)$ to optimize this expression and obtain the claimed rate.
\end{example}
\paragraph{Acknowledgments.} The authors would like to thank Afonso Bandeira for several valuable discussions. Nikita Zhivotovskiy is funded in part by ETH Foundations of Data Science (ETH-FDS).
\bibliographystyle{abbrv}
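As a small numerical addendum (a sketch added for illustration only, not part of the arguments above), the following self-contained Python snippet checks the two example computations of the Optimality section: the bound $\epsilon(Y_1,\eta)\ge \sqrt{\eta}/4$ for $\eta \le 1/4$, and the fact that solving $\mathbb{P}(|g|\ge\sqrt{Q})=\eta/4$ forces $Q$ of order $\log(1/\eta)$. Only the standard library is used; the identity $\Pr(|g|\ge t)=\mathrm{erfc}(t/\sqrt{2})$ together with a bisection replaces any quantile table.
\begin{verbatim}
# Numerical sanity check of the two optimality examples (added sketch).
import math

for eta in (0.1, 0.01, 0.001):
    # Example 1: eps(Y_1, eta) >= E|Y_1 + 1| 1{Y_1 <= -1}
    #          = (1/sqrt(eta) - 1) * eta/2 = sqrt(eta)/2 - eta/2.
    eps1 = (1 / math.sqrt(eta) - 1) * eta / 2
    assert eps1 >= math.sqrt(eta) / 4          # holds since eta <= 1/4

    # Example 2: P(|g| >= t) = erfc(t / sqrt(2)); bisection solves
    # P(|g| >= sqrt(Q)) = eta/4 for Q.
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.erfc(math.sqrt(mid / 2)) > eta / 4:
            lo = mid
        else:
            hi = mid
    Q = (lo + hi) / 2
    tail_term = 2 / math.sqrt(2 * math.pi) * math.sqrt(Q) * math.exp(-Q / 2)
    # Q is of order log(1/eta), and the displayed lower bound holds:
    print(eta, Q / math.log(1 / eta), tail_term >= Q * eta / 4)
\end{verbatim}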
{ "timestamp": "2022-05-18T02:38:54", "yymm": "2205", "arxiv_id": "2205.08494", "language": "en", "url": "https://arxiv.org/abs/2205.08494", "abstract": "We provide an estimator of the covariance matrix that achieves the optimal rate of convergence (up to constant factors) in the operator norm under two standard notions of data contamination: We allow the adversary to corrupt an $\\eta$-fraction of the sample arbitrarily, while the distribution of the remaining data points only satisfies that the $L_{p}$-marginal moment with some $p \\ge 4$ is equivalent to the corresponding $L_2$-marginal moment. Despite requiring the existence of only a few moments, our estimator achieves the same tail estimates as if the underlying distribution were Gaussian. As a part of our analysis, we prove a dimension-free Bai-Yin type theorem in the regime $p > 4$.", "subjects": "Statistics Theory (math.ST); Data Structures and Algorithms (cs.DS); Probability (math.PR)", "title": "Covariance Estimation: Optimal Dimension-free Guarantees for Adversarial Corruption and Heavy Tails", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713839878681, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7080104916942088 }
https://arxiv.org/abs/0707.3425
Norms and spectral radii of linear fractional composition operators on the ball
We give a new proof that every linear fractional map of the unit ball induces a bounded composition operator on the standard scale of Hilbert function spaces on the ball, and obtain norm bounds analogous to the standard one-variable estimates. We also show that Cowen's one-variable spectral radius formula extends to these operators. The key observation underlying these results is that every linear fractional map of the ball belongs to the Schur-Agler class.
\section{Introduction}
\subsection{Background}
Given a set $\Omega$, a collection $\mathcal F$ of functions $f:\Omega\to \mathbb C$ and a map $\varphi:\Omega\to \Omega$, one can define a \emph{composition operator}
$$
C_\varphi : f\mapsto f\circ \varphi.
$$
Typically $\Omega$ is a domain in $\mathbb{C}$ or $\mathbb{C}^m$, $\varphi$ is a holomorphic map and $\mathcal F$ is a Banach space of holomorphic functions. Broadly, one is interested in extracting properties of $C_\varphi$ acting on $\mathcal F$ (boundedness, spectral properties, etc.) from function-theoretic or dynamical properties of $\varphi$. The most studied case is that of $\Omega=\mathbb{D}$, the open unit disk in $\mathbb{C}$, and $\mathcal F$ the Hardy space $H^2$. In this case it follows from the Littlewood subordination principle that every holomorphic self-map $\varphi$ of $\mathbb{D}$ induces a bounded composition operator on $H^2$. A theorem of C. Cowen computes the spectral radius of $C_\varphi$. The purpose of the present paper is to extend Cowen's theorem to a certain class of composition operators acting on the standard scale of holomorphic spaces on the open unit ball $\mathbb{B}^m \subset \mathbb{C}^m$.

The primary difficulty in studying composition operators on the ball is that not every holomorphic self-map $\varphi$ induces a bounded composition operator on the standard spaces. Moreover, in many cases even when boundedness can be established, it is difficult to obtain useful norm estimates. In \cite{jury-pams} we showed that every self-map $\varphi$ of the ball belonging to the \emph{Schur-Agler class} $\mathcal{S}_m$ (defined below) induces a bounded composition operator on the standard scale of spaces, and moreover obeys a norm estimate analogous to the one-variable case. Since every self-map of the unit disk belongs to the Schur-Agler class, one's intuition is that the maps $\varphi\in\mathcal{S}_m$ should have more behavior in common with self-maps of the disk than do generic self-maps of the ball. In this paper we show that the linear fractional maps of $\mathbb{B}^m$ introduced by Cowen and MacCluer \cite{MR1768872} belong to the Schur-Agler class and obtain norm bounds. We then use this result together with an explicit parametrization of the non-elliptic linear fractional maps obtained by Bracci et al.\,\cite{bracci-contreras-diazmadrigal} to obtain a formula for the spectral radius, which extends Cowen's result to linear fractional maps in higher dimensions. Moreover we conjecture that this formula should hold for all maps in the Schur-Agler class.

The paper is organized as follows: we conclude this introductory section by defining the Schur-Agler class $\mathcal S_m$ and describing its relevant properties. In Section~\ref{S:lft} we prove that every linear fractional map of $\mathbb{B}^m$ belongs to $\mathcal S_m$ and obtain a norm estimate for the induced composition operators; from the norm estimate we deduce a prototype expression for the spectral radius. In Section~\ref{S:specrad} we prove the spectral radius formula for linear fractional maps and describe some of the geometric difficulties (absent in the one-variable case) encountered in trying to extend the formula to all Schur-Agler mappings.

\subsection{The Schur-Agler class}
Let $\mathbb{B}^m$ denote the open unit ball of $\mathbb{C}^m$. We will write $\langle \cdot , \cdot \rangle$ for the standard Hermitian inner product on $\mathbb{C}^m$ and $|z|=\sqrt{\langle z,z\rangle}$ for the Euclidean length.
It will often be convenient to write points of $\mathbb C^m$ in the form $z=(z_1, z^\prime)$ with $z_1\in \mathbb C$ and $z^\prime =(z_2, \dots, z_m)\in\mathbb C^{m-1}$.
\begin{defn}
The \emph{Schur-Agler class} $\mathcal{S}_m$ is the set of all holomorphic mappings $\varphi:\mathbb{B}^m\to \mathbb{B}^m$ for which the Hermitian kernel
\begin{equation}\label{E:dBR}
k^\varphi(z,w)=\frac{1-\langle \varphi(z), \varphi(w)\rangle}{1-\langle z,w \rangle}
\end{equation}
is positive semidefinite.
\end{defn}
The kernel (\ref{E:dBR}) will be called the \emph{de Branges-Rovnyak kernel} associated to $\varphi$. When $m=1$ these are the classical de Branges-Rovnyak kernels \cite{MR0215065, MR1289670}. The functions $\varphi$ for which $k^\varphi$ is positive are precisely those admitting a representation as a transfer function of a multivariate linear system \cite{MR1846055}, but we will not use this representation explicitly. It is an elementary but important fact that $\mathcal{S}_m$ is closed under composition:
\begin{thm}
If $\varphi, \psi\in\mathcal{S}_m$ then so is $\varphi\circ\psi$.
\end{thm}
\begin{proof}
The kernel $k^{\varphi\circ\psi}$ may be factored as
$$
k^{\varphi\circ\psi}(z,w)=k^\varphi(\psi(z), \psi(w))\cdot k^\psi(z,w),
$$
which is a pointwise product of positive kernels and hence positive.
\end{proof}
\noindent In particular iterates of Schur-Agler mappings remain in the Schur-Agler class. It will be proved in the next section that every linear fractional map of $\mathbb{B}^m$ belongs to $\mathcal{S}_m$; in particular every automorphism of the ball belongs to the Schur-Agler class.
\begin{defn}
Let $m, \beta$ be positive integers. The space $H^2_{m,\beta}$ is the space of holomorphic functions on the unit ball $\mathbb{B}^m$ with reproducing kernel
$$
k_\beta(z,w)=\frac{1}{(1-\langle z,w\rangle)^\beta}.
$$
\end{defn}
When $\beta=1$ this is the Drury-Arveson space, which is strictly smaller than the classical Hardy space on the ball but often the more appropriate setting for multivariable operator theory; see e.g. \cite{MR1882259, MR1668582}. When $\beta=m$ we obtain the classical Hardy space and $\beta=m+1$ gives the Bergman space. This scale of spaces can be extended to non-integral values of $\beta$ via Calderon interpolation, and all of the results of this paper are valid for this larger scale. However, since the primary values of interest are $\beta=1, m$ and $m+1$, we omit the details. It was shown in \cite{jury-pams} that every $\varphi\in\mathcal{S}_m$ induces a bounded composition operator on each of the spaces $H^2_{m, \beta}$, satisfying a ``one-variable style'' norm estimate, in particular an estimate which depends only on the value of $\varphi$ at $0$. In fact when $m=1$ this is precisely the ``classical'' norm estimate for composition operators on the standard scale of Hilbert function spaces. In higher dimensions, a related upper bound was obtained by Bayart \cite[Theorem 4.1]{MR2296311}, which applies to certain univalent mappings (not necessarily in $\mathcal{S}_m$) but which depends both on $\varphi(0)$ and on global estimates for derivatives of $\varphi$.
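Before stating the norm bound, it may be helpful to see the definition of $\mathcal{S}_m$ in action. The following Python sketch (an added illustration under stated assumptions, not part of the development) samples random points of $\mathbb{B}^2$ and checks that Gram matrices of the kernel (\ref{E:dBR}) are positive semidefinite for an automorphism $\varphi_a$ of the ball; the formula used for $\varphi_a$ is Rudin's standard one, taken here as an assumption, and the observed positivity is consistent with the remark above that automorphisms belong to $\mathcal{S}_m$.
\begin{verbatim}
# Added sketch: test PSD-ness of the de Branges-Rovnyak kernel for an
# automorphism phi_a of B^2 (Rudin's formula, assumed) at random points.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.3 + 0.1j, -0.2j])               # base point a in B^2

def herm(z, w):
    # Hermitian inner product <z, w> = sum_k z_k conj(w_k)
    return np.sum(z * np.conj(w))

def phi(z):
    # Rudin's involutive automorphism phi_a of the unit ball
    s = np.sqrt(1 - herm(a, a).real)
    Pz = (herm(z, a) / herm(a, a)) * a          # projection onto span(a)
    return (a - Pz - s * (z - Pz)) / (1 - herm(z, a))

pts = []
while len(pts) < 40:                            # random points in B^2
    z = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    pts.append(z * rng.uniform(0, 0.95) / np.linalg.norm(z))

G = np.array([[(1 - herm(phi(z), phi(w))) / (1 - herm(z, w))
               for w in pts] for z in pts])
print("min eigenvalue of Gram matrix:", np.linalg.eigvalsh(G).min())
# expected: nonnegative up to rounding error
\end{verbatim}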
\begin{thm}\label{T:SA_boundedness}
If $\varphi\in\mathcal{S}_m$ then $C_\varphi$ is bounded on $H^2_{m, \beta}$ and
\begin{equation}\label{E:SA_boundedness}
\left(\frac{1}{1-|\varphi(0)|^2}\right)^{\beta/2}\leq \|C_\varphi\|\leq \left( \frac{1+|\varphi(0)|}{1-|\varphi(0)|}\right)^{\beta/2}.
\end{equation}
\end{thm}
\begin{proof}
The upper bound is proved in \cite{jury-pams}; the lower bound is generic for composition operators acting on reproducing kernel Hilbert spaces: since $k_\beta(\cdot,0)\equiv 1$,
\begin{align*}
\|C_\varphi^*\| &\geq \|C_\varphi^* k_\beta(\cdot,0)\|\\ &= \| k_\beta (\cdot,\varphi(0))\| \\&= \left(\frac{1}{1-|\varphi(0)|^2}\right)^{\beta/2}.
\end{align*}
\end{proof}
We obtain immediately an expression for the spectral radius of $C_\varphi$. In what follows we let $\varphi_n$ denote the $n$-th iterate of $\varphi$, and observe that $C_\varphi^n =C_{\varphi_n}$.
\begin{cor}\label{C:proto_specrad}
If $\varphi\in\mathcal{S}_m$ then the spectral radius of $C_\varphi$ acting on $H^2_{m,\beta}$ is
\begin{equation}\label{E:proto_specrad}
\lim_{n\to \infty} (1-|\varphi_n(0)|)^{-\beta/(2n)}.
\end{equation}
\end{cor}
\begin{proof}
Since $\mathcal{S}_m$ is closed under composition, we may iterate the norm inequality (\ref{E:SA_boundedness}) to obtain
$$
\|C_\varphi^n\|=\|C_{\varphi_n}\|\sim(1-|\varphi_n(0)|)^{-\beta/2}.
$$
Since $r(C_\varphi)=\lim \|C_\varphi^n\|^{1/n}$, the corollary follows.
\end{proof}
The expression (\ref{E:proto_specrad}) should not really be regarded as a formula for the spectral radius, unless some method of evaluating the limit is available. In one dimension (for maps without interior fixed points), the limit can be evaluated in terms of the angular derivative at the Denjoy-Wolff point. The evaluation of this limit for linear fractional mappings in higher dimensions is the purpose of the next section; we obtain a result analogous to the one-variable case, where the dilatation coefficient (defined below) plays the role of the angular derivative. Intuitively, one may expect that Schur-Agler mappings of $\mathbb{B}^m$ may exhibit a stronger affinity with self-maps of $\mathbb D$ than do generic self-maps of $\mathbb{B}^m$. The reason for this is that \emph{every} self-map of $\mathbb{D}$ belongs to $\mathcal S_1$, while for $m>1$, $\mathcal S_m$ is always a proper subset of the self-maps of $\mathbb{B}^m$. In particular any fact about self-maps of $\mathbb{D}$ which can be proved using only the positivity of the de Branges-Rovnyak kernel ought to have an analogue for the Schur-Agler class; though of course this analogy cannot be taken too literally. Finally, a bit of notation: given two sequences of positive numbers $a_n, b_n$, we write $a_n\sim b_n$ to mean that there exist strictly positive constants $C_1, C_2$ such that
$$
C_1 \leq \frac{a_n}{b_n}\leq C_2
$$
for all $n$.

\section{Linear fractional maps}\label{S:lft}
We now prove that the linear fractional maps of $\mathbb{B}^m$ introduced by Cowen and MacCluer \cite{MR1768872} belong to $\mathcal{S}_m$. By the theorem and its corollary we obtain a new proof of the boundedness of linear fractional composition operators on the standard spaces, as well as the norm estimate (\ref{E:SA_boundedness}). Following Cowen and MacCluer \cite{MR1768872}, a linear fractional map on $\mathbb{B}^m$ is defined to be a function of the form
\begin{equation}
\varphi(z)=\frac{Az+B}{\langle z, C\rangle +D}
\end{equation}
where $A$ is an $m\times m$ matrix, $B,C$ are column vectors in $\mathbb{C}^m$, and $D$ is a complex number.
Here $\langle \cdot, \cdot\rangle$ denotes the standard inner product on $\mathbb{C}^m$. Clearly, the parameters $A,B, C, D$ are not uniquely determined, since they may all be multiplied by a fixed scalar without changing $\varphi$. It is shown in \cite{MR1768872} that such a map takes $\mathbb{B}^m$ into itself if and only if for some choice of $A,B, C, D$ representing $\varphi$, the $(m+1)\times (m+1)$ matrix
\begin{equation}
T=\begin{pmatrix} A & B \\ C^* & D \end{pmatrix}
\end{equation}
is contractive with respect to the indefinite sesquilinear form on $\mathbb{C}^{m+1}$ defined by
\begin{equation}
[v,w]=\langle Jv, w\rangle
\end{equation}
where $J$ is the matrix
\begin{equation}
J= \begin{pmatrix} I_m & 0 \\ 0 & -1 \end{pmatrix}.
\end{equation}
That is, $T$ must satisfy
$$
[Tv, Tv]\leq [v,v]
$$
for all $v\in\mathbb{C}^{m+1}$. This contractivity condition is satisfied if and only if the matrix $J-T^* JT$ is positive semidefinite. We will make use of the condition in this latter form. It is then proved in \cite{MR1768872} that every such map induces a bounded composition operator on the standard scale of spaces (at least when $\beta \geq m$), though this proof is indirect and in particular does not provide an estimate for the norm of $C_\varphi$. We will prove that $C_\varphi$ is bounded by appeal to Theorem~\ref{T:SA_boundedness}, and prove that the de Branges-Rovnyak kernel $k^\varphi$ is positive by exhibiting an explicit factorization, which we obtain from a factorization of the (assumed positive) matrix $J-T^*JT$. We can now state the factorization result:
\begin{thm}\label{T:LFT_factorization}
Every linear fractional map $\varphi:\mathbb{B}^m\to \mathbb{B}^m$ belongs to the Schur-Agler class $\mathcal S_m$.
\end{thm}
\begin{proof}
Let $T$ be an $(m+1)\times(m+1)$ matrix which is contractive with respect to $[\cdot, \cdot]$ and has the form
\begin{equation}
T=\begin{pmatrix} A & B \\ C^* & D \end{pmatrix}
\end{equation}
and let $\varphi$ denote the associated linear fractional transformation. (By the remarks preceding the proof, every linear fractional self-map of $\mathbb B^m$ arises in this way.) Factor $J-T^*JT$ as
\begin{equation}
J-T^*JT =X^*X
\end{equation}
with
\begin{equation}
X=\begin{pmatrix} X_{11} & X_{12} \\ X_{21}^* & X_{22} \end{pmatrix}.
\end{equation}
Now define a function $L:\mathbb{B}^m\to \mathbb{C}^{m+1}$ by
\begin{equation}
L(z) = X\begin{pmatrix} z\\ 1\end{pmatrix}=\begin{pmatrix} X_{11} z +X_{12} \\ \langle z, X_{21}\rangle +X_{22}\end{pmatrix}.
\end{equation}
We now claim that the de Branges-Rovnyak kernel can be factored as
\begin{equation}\label{E:factorization}
k^\varphi(z,w)= \frac{1}{\langle z, C\rangle+D} \left(1+\frac{\langle L(z), L(w)\rangle}{1-\langle z,w\rangle}\right) \frac{1}{\overline{\langle w, C\rangle+D}}
\end{equation}
from which it is apparent that $k^\varphi$ is positive.
To verify (\ref{E:factorization}), we first write out $k^\varphi(z,w)$ as
\begin{align}
\begin{split}
k^\varphi(z,w) &=\frac{1}{\langle z, C\rangle+D} \frac{1}{\overline{\langle w, C\rangle+D}} \\ &\phantom{==}\times \frac{(\langle z, C\rangle+D)\overline{(\langle w, C\rangle+D)} -\langle Az +B, Aw+B\rangle }{1-\langle z,w\rangle}
\end{split}
\end{align}
Working with the factor on the second line, we verify that its numerator is equal to $1-\langle z,w\rangle +\langle L(z), L(w)\rangle$, which proves (\ref{E:factorization}):
\begin{align*}
1-\langle z,w\rangle +\langle L(z), L(w)\rangle &= 1-\langle z,w\rangle +\left\langle X^*X \begin{pmatrix} z \\ 1\end{pmatrix},\begin{pmatrix} w \\ 1\end{pmatrix} \right\rangle\\
&= 1-\langle z,w\rangle +\left\langle (J-T^*JT) \begin{pmatrix} z \\ 1\end{pmatrix},\begin{pmatrix} w \\ 1\end{pmatrix}\right\rangle \\
&= -\left\langle JT \begin{pmatrix} z \\ 1\end{pmatrix},T\begin{pmatrix} w \\ 1\end{pmatrix}\right\rangle \\
&= (\langle z, C\rangle+D)\overline{(\langle w, C\rangle+D)} -\langle Az +B, Aw+B\rangle.
\end{align*}
\end{proof}

\section{Spectral radii}\label{S:specrad}
We begin with some basic definitions and results about the iteration of self-maps of the ball, and then describe some known results. Suppose that $\varphi:\mathbb{B}^m\to \mathbb{B}^m$ is a holomorphic mapping which does not fix any point of $\mathbb{B}^m$. MacCluer \cite{MR694933} showed that an analogue of the Denjoy-Wolff theorem holds: there exists a unique point $\zeta\in\partial\mathbb{B}^m$ such that the iterates of $\varphi$ converge uniformly to $\zeta$ on compact subsets of $\mathbb{B}^m$. This point will be called the \emph{Denjoy-Wolff} point of $\varphi$. Moreover, it follows from \cite[Theorem 1.3]{MR694933} that
$$
0< \liminf_{z\to \zeta} \frac{1-|\varphi(z)|^2}{1-|z|^2} =\alpha \leq 1
$$
and hence by the Julia-Caratheodory theorem on the ball \cite[Theorem 8.5.6]{MR601594} the complex directional derivative $D_\zeta \varphi$ has a radial limit\footnote{In fact this limit exists in the wider sense of \emph{restricted $K$-limit} (or \emph{hypoadmissible limit}) but we will not require this notion at the moment.} $\alpha$ at $\zeta$; this number is called the \emph{dilatation coefficient} of $\varphi$. (When $m=1$, $\alpha$ is the angular derivative of $\varphi$ at $\zeta$.) The following is then a special case of Julia's theorem on the ball (\cite[Theorem 1.3]{MR694933} and \cite[Theorem 8.5.3]{MR601594}):
\begin{thm}
Let $\varphi:\mathbb{B}^m\to \mathbb{B}^m$ with Denjoy-Wolff point $\zeta\in\partial\mathbb B^m$ and dilatation coefficient $\alpha$. Then for all $z\in \mathbb{B}^m$,
\begin{equation}\label{E:julia}
\frac{|1-\langle \varphi(z), \zeta\rangle|^2}{1-|\varphi(z)|^2} \leq \alpha\frac{|1-\langle z, \zeta\rangle|^2}{1-|z|^2}.
\end{equation}
\end{thm}
We now divide the self-maps of $\mathbb{B}^m$ into three classes:
\begin{defn}
A holomorphic self-map $\varphi$ of $\mathbb{B}^m$ will be called
\begin{itemize}
\item \emph{elliptic} if $\varphi$ fixes a point of $\mathbb{B}^m$,
\item \emph{parabolic} if $\varphi$ has no fixed point and dilatation coefficient $1$, and
\item \emph{hyperbolic} if $\varphi$ has no fixed point and dilatation coefficient $\alpha<1$.
\end{itemize}
\end{defn}
In one dimension, Cowen \cite{MR695941} obtained the following formula for the spectral radius of composition operators on $H^2(\mathbb{D})$:
\begin{thm}
Let $\varphi:\mathbb{D}\to \mathbb{D}$. If $\varphi$ is elliptic then the spectral radius of $C_\varphi$ is $1$; if $\varphi$ is non-elliptic then the spectral radius is $\alpha^{-1/2}$.
\end{thm}
For linear fractional maps in higher dimensions, MacCluer \cite{MR760878} obtained the full spectrum for automorphic symbols $\varphi$ acting on the Hardy space (our case $\beta=m$); it follows from these results that the spectral radius is $1$ for elliptic automorphisms and $\alpha^{-m/2}$ otherwise. More recently Bayart \cite{MR2296311} obtained the full spectrum for certain parabolic maps conjugate to generalized Heisenberg translations of the Siegel half-space; for these parabolic maps the spectral radius is $1$. The spectral radius formulae we obtain will be valid for all elliptic and parabolic maps in the Schur-Agler class; it is only in the hyperbolic case that we restrict to linear fractional maps. Indeed in the elliptic and parabolic cases the proof we now give is identical to Cowen's in dimension $1$.
\begin{thm}\label{T:spectral_radius}
Let $\varphi\in\mathcal{S}_m$. If $\varphi$ is elliptic or parabolic, then the spectral radius of $C_\varphi$ on $H^2_{m,\beta}$ is $1$.
\end{thm}
\begin{proof}
If $\varphi$ is elliptic, then $C_\varphi$ is similar (via conjugation by an automorphism) to a composition operator $C_\psi$ with $\psi(0)=0$. Since $\mathcal{S}_m$ is automorphism invariant, $\psi\in\mathcal{S}_m$, and hence $\|C_{\psi_n}\|=1$ for all $n$ by Theorem~\ref{T:SA_boundedness}; thus $r(C_\varphi)=r(C_\psi)=1$. Now assume $\varphi$ is parabolic with Denjoy-Wolff point $\zeta\in\partial\mathbb{B}^m$. If $z_n$ is a sequence in $\mathbb{B}^m$ such that $z_n\to \zeta$, $\varphi(z_n)\to \zeta$, and the limit
$$
M=\lim_{n\to \infty} \left( \frac{1-|\varphi(z_n)|}{1-|z_n|}\right)
$$
exists, then $M\geq 1$. It follows that
$$
\liminf_{n\to\infty} \left(\frac{1-|\varphi_n(0)|}{1-|\varphi_{n-1}(0)|}\right)\geq 1.
$$
Therefore
\begin{align*}
\lim_{n\to\infty} (1-|\varphi_n(0)|)^{-1/(2n)} &= \lim_{n\to\infty} \left( \prod_{k=1}^{n} \frac{1-|\varphi_{k-1}(0)|}{1-|\varphi_k(0)|} \right)^{1/(2n)} \\ & \leq \limsup_{n\to\infty} \left( \frac{1-|\varphi_{n-1}(0)|}{1-|\varphi_n(0)|}\right)^{1/2} \\ &\leq 1.
\end{align*}
Thus $r(C_\varphi)\leq 1$ by Corollary~\ref{C:proto_specrad}, and since $1$ is an eigenvalue, $r(C_\varphi)=1$.
\end{proof}
The evaluation of the limit (\ref{E:proto_specrad}) in the hyperbolic case requires a more detailed analysis of the orbit $\{\varphi_n(0)\}$, which can be carried out explicitly in the case of linear fractional maps. The proof exploits a parametrization of non-elliptic linear fractional maps (conjugated to the Siegel half-space) obtained by Bracci, Contreras and Diaz-Madrigal \cite[Lemma 4.1 and Proposition 4.2]{bracci-contreras-diazmadrigal}.
\begin{thm}\label{T:hyperbolic_specrad}
Let $\varphi$ be a hyperbolic linear fractional map of $\mathbb{B}^m$ with dilatation coefficient $\alpha<1$. Then
\begin{equation}\label{E:hyperbolic_specrad}
\lim_{n\to \infty} (1-|\varphi_n(0)|^2)^{1/n} =\alpha.
\end{equation}
\end{thm}
\begin{proof}
Conjugating $\varphi$ by a rotation of $\mathbb C^m$, we may assume the Denjoy-Wolff point is $e_1=(1, 0,\dots 0)$; clearly (\ref{E:hyperbolic_specrad}) is unchanged.
It will be convenient to move the problem to the Siegel right half-space
$$
\mathbb{H}^m =\{(w_1,w^\prime)\in\mathbb{C}\times\mathbb{C}^{m-1}:\text{Re}\ w_1>\|w^\prime\|^2\}
$$
which is biholomorphically equivalent to $\mathbb{B}^m$ via the generalized Cayley transform
$$
\psi(z_1, z^\prime)=\left( \frac{1+z_1}{1-z_1}, \frac{z^\prime}{1-z_1}\right)
$$
and its inverse
$$
\psi^{-1}(w_1, w^\prime)=\left(\frac{w_1-1}{w_1+1}, \frac{2w^\prime}{w_1+1}\right).
$$
This correspondence extends continuously to identify $\partial\mathbb{B}^m$ with the one-point compactification of $\partial\mathbb{H}^m$, with $e_1$ taken to the point at infinity. In particular one may calculate that for any $z=(z_1, z^\prime)\in\mathbb{B}^m$, if $w=\psi(z)$ then
$$
1-|z|^2= \frac{4}{|w_1+1|^2}(\text{Re}\ w_1 -\|w^\prime\|^2).
$$
By \cite[Lemma 4.1]{bracci-contreras-diazmadrigal} a map $\varphi$ satisfying our hypotheses is conjugate to a map $\tilde{\varphi}:\mathbb{H}^m \to \mathbb{H}^m$ of the form
\begin{equation}\label{E:BCD_repn}
\tilde\varphi(w_1, w^\prime)=\frac{1}{\alpha}\left(w_1+c+\langle w^\prime,b\rangle ,\ Aw^\prime+d\right)
\end{equation}
for a suitable scalar $c\in\mathbb{C}$, vectors $b,d\in\mathbb{C}^{m-1}$ and an $(m-1)\times (m-1)$ matrix $A$. Of course these parameters satisfy a number of relations, determined by the condition that $\tilde\varphi$ maps $\mathbb H^m$ into itself; the only one we will require explicitly is the fact that $\|A\|\leq \alpha^{1/2} <1$ \cite[Lemma 4.1(i)]{bracci-contreras-diazmadrigal}. Let us now write
$$
\tilde\varphi_n(1,0)=(u_n, v_n)
$$
with $u_n\in\mathbb{C}, v_n\in\mathbb{C}^{m-1}$. Our goal is now to show that
\begin{equation}\label{E:halfspace_specrad}
\lim_{n\to \infty} \left( \frac{4}{|u_n+1|^2}(\text{Re}\ u_n -\|v_n\|^2)\right)^{1/n}=\alpha.
\end{equation}
Since $\tilde{\varphi}$ has Denjoy-Wolff point $\infty$, it follows in particular that $|u_n|\to \infty$ and hence $|u_n|\sim |u_n +1|$. Thus, to establish (\ref{E:halfspace_specrad}) it suffices to show
\begin{equation}
|u_n|\sim \frac{1}{\alpha^n}
\end{equation}
and
\begin{equation}
(\text{Re}\ u_n -\|v_n\|^2)\sim \frac{1}{\alpha^n}.
\end{equation}
To do this, we will obtain fairly explicit expressions for $u_n$ and $v_n$; we begin by introducing some notation. For each integer $n\geq 0$ define
$$
\beta_n=\sum_{k=0}^n \alpha^k
$$
and polynomials
$$
p_n(z)=\sum_{k=0}^n \beta_{n-k} z^k, \qquad q_n(z)=\sum_{k=0}^n \alpha^{n-k} z^k.
$$
It is straightforward to verify the following recurrence relations:
\begin{gather}
\beta_{n+1}=\alpha \beta_n +1 \\ p_{n+1}(z)=\alpha p_n(z) +\sum_{k=0}^{n+1} z^k \\ q_{n+1}(z)=zq_n(z)+\alpha^n
\end{gather}
Using these one may also deduce
\begin{equation}
q_n(z)+p_{n-1}(z)=p_n(z).
\end{equation}
With these identities established one can verify by induction that
$$
\tilde{\varphi}(1,0)=\frac1\alpha(1+c, d)
$$
and for all $n\geq 2$
$$
\tilde{\varphi}_n(1,0)=\frac{1}{\alpha^n}\left(1+\beta_{n-1} c +\langle p_{n-2}(A)d, b\rangle, q_{n-1}(A)d\right).
$$
So in particular
$$
u_n = \frac{1}{\alpha^n}(1+\beta_{n-1} c +\langle p_{n-2}(A)d, b\rangle).
$$
Now define
$$
x_n:=\alpha^n u_n = (1+\beta_{n-1} c +\langle p_{n-2}(A)d, b\rangle).
$$
We observe that the real part of $x_n$ must always be strictly positive, and we will show that $x_n\to x$ with $\text{Re}\ x\geq 1$. This establishes the claimed asymptotic behavior of $|u_n|$.
The convergence of $x_n$ depends upon the convergence of the polynomials $p_n$; in particular the following fact:
\begin{claim}
The sequence of polynomials $p_n$ converges to
$$
\frac{1}{1-\alpha}\frac{1}{1-z}
$$
uniformly in the disk $|z|\leq \sqrt{\alpha}$.
\end{claim}
\emph{Proof of claim:} Let $\|\cdot \|_\infty$ denote the supremum norm over the closed disk of radius $\sqrt{\alpha}$. Then for every $n$
\begin{align}
\left\|(1-\alpha)p_n -\sum_{k=0}^{n+1} z^k\right\|_\infty &=\left\| \sum_{k=0}^{n+1} \alpha^{n-k+1}z^k\right\|_\infty \\ &\leq \alpha^{n+1} \left\|\sum_{k=0}^{n+1} \alpha^{-k}z^k\right\|_\infty \\ &\leq \alpha^{n+1}\frac{\alpha^{-(n+2)/{2}}-1}{\alpha^{-1/2}-1}
\end{align}
which tends to $0$ as $n\to \infty$. Since $\sum_{k=0}^n z^k\to (1-z)^{-1} $ uniformly in this disk, the claim is proved.

Using now the crucial fact that $\|A\|\leq \sqrt{\alpha}$, we conclude that $x_n$ converges to
\begin{equation}\label{E:xdef}
x=1+\frac{1}{1-\alpha}(c+\langle (I-A)^{-1}d, b\rangle ).
\end{equation}
Now define
$$
u=(I-A)^{-1}d
$$
and observe that $Au+d=u$. Since $\tilde{\varphi}$ maps the closure of $\mathbb{H}^m$ into itself, it follows that $\tilde\varphi(\|w^\prime\|^2, w^\prime)\in\overline{\mathbb H^m}$ for all $w^\prime\in \mathbb C^{m-1}$; that is,
$$
\alpha \|w^\prime\|^2 +\alpha \text{Re}\ \langle w^\prime,b\rangle +\alpha \text{Re}\ c \geq \|Aw^\prime+d\|^2.
$$
Applying this with $w^\prime=u$ gives
$$
\text{Re}\ \langle u,b\rangle +\text{Re}\ c \geq \frac{1-\alpha}{\alpha} \|u\|^2 \geq 0
$$
and hence $\text{Re}\ x\geq 1$. We now consider $\alpha^n (\text{Re}\ u_n-\|v_n\|^2)$. Since $\text{Re}\ u_n-\|v_n\|^2\geq 0$, the upper bound follows immediately from the upper bound for $\alpha^n|u_n|$. To prove boundedness from below, we return momentarily to the ball. Iterating Julia's inequality (\ref{E:julia}) $n$ times, starting from the point $0$, gives
$$
\frac{|1-\langle \varphi_n(0), e_1\rangle|^2}{1-|\varphi_n(0)|^2}\leq \alpha^n
$$
for all $n$. Transferring this inequality to $\mathbb{H}^m$ we obtain
$$
\alpha^n (\text{Re}\ u_n-\|v_n\|^2)\geq 1
$$
for all $n$.
\end{proof}
\begin{cor}
If $\varphi$ is a hyperbolic linear fractional map of $\mathbb{B}^m$ with dilatation coefficient $\alpha$, then the spectral radius of $C_\varphi$ acting on $H^2_{m, \beta}$ is $\alpha^{-\beta/2}$.
\end{cor}
\begin{proof}
Combine Theorem~\ref{T:hyperbolic_specrad} and Corollary~\ref{C:proto_specrad}.
\end{proof}
To summarize, combining the two spectral radius results we have extended Cowen's spectral radius formula to linear fractional maps in higher dimensions:
\begin{thm}\label{T:lft_spectral_radius}
Let $\varphi$ be a linear fractional self-map of $\mathbb{B}^m$. The spectral radius of $C_\varphi$ acting on $H^2_{m,\beta}$ is $1$ if $\varphi$ is elliptic; if $\varphi$ is non-elliptic with dilatation coefficient $\alpha$ the spectral radius is $\alpha^{-\beta/2}$.
\end{thm}
\begin{conj}
The spectral radius formulae of Theorem~\ref{T:lft_spectral_radius} are valid for all $\varphi\in\mathcal{S}_m$.
\end{conj}
\noindent By Theorem~\ref{T:spectral_radius} the conjecture is true for elliptic and parabolic maps.
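For readers who wish to experiment, the following Python sketch (an added illustration, not part of the proof) iterates a hyperbolic linear fractional map in the half-space coordinates of (\ref{E:BCD_repn}) and observes the limit (\ref{E:hyperbolic_specrad}) numerically. The parameter values below are a hypothetical choice with $m=2$ (so $w^\prime$ is a scalar); one checks by hand that they define a self-map of $\mathbb{H}^2$ and that $|A|\le\sqrt{\alpha}$.
\begin{verbatim}
# Added sketch: iterate a hyperbolic map in half-space coordinates and
# watch (1 - |phi_n(0)|^2)^{1/n} approach alpha.  Hypothetical parameters.
alpha, c, b, A, d = 0.25, 1.0, 0.0, 0.4, 0.1

w1, wp = 1.0 + 0j, 0.0 + 0j        # Cayley image of the origin z = 0
for n in range(1, 61):
    w1, wp = (w1 + c + wp * b.conjugate()) / alpha, (A * wp + d) / alpha
    # identity from the proof: 1 - |z_n|^2 = 4(Re w1 - |w'|^2)/|w1 + 1|^2
    one_minus = 4 * (w1.real - abs(wp) ** 2) / abs(w1 + 1) ** 2
    if n % 20 == 0:
        print(n, one_minus ** (1 / n))   # tends to alpha = 0.25
\end{verbatim}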
In the hyperbolic case, one may try to prove the conjecture by a method analogous to Cowen's proof in the disk \cite{MR695941}, namely, by proving that the iterates $\varphi_n(0)$ converge to the Denjoy-Wolff point sufficiently well so that
$$
\lim_{n\to \infty}\frac{1-|\varphi_n(0)|^2}{1-|\varphi_{n-1}(0)|^2}=\alpha.
$$
In one variable, this is accomplished by showing that when $\alpha <1$, the iterates $\varphi_n(0)$ converge nontangentially to the Denjoy-Wolff point; the above limit then follows from the Julia-Caratheodory theorem. In the ball, one needs \emph{restricted} convergence in order to invoke the corresponding version of Julia-Caratheodory: to define this, fix a point $\zeta\in\partial\mathbb{B}^m$ and consider a curve $\Gamma:[0, 1)\to \mathbb{B}^m$ such that $\Gamma(t)\to \zeta$ as $t\to 1$. Let $\gamma(t)=\langle \Gamma(t), \zeta\rangle\zeta$ be the projection of $\Gamma$ onto the complex line through $\zeta$. The curve $\Gamma$ is called \emph{special} if
\begin{equation}
\lim_{t\to 1}\frac{|\Gamma -\gamma|^2}{1-|\gamma|^2}=0
\end{equation}
and \emph{restricted} if it is special and in addition
\begin{equation}
\frac{|\zeta-\gamma|}{1-|\gamma|^2}\leq C
\end{equation}
for some constant $C>0$. We say that a function $f:\mathbb{B}^m\to \mathbb{C}$ has \emph{restricted $K$-limit} $L$ at $\zeta$ if $\lim_{z\to \zeta}f(z)=L$ along every restricted curve. Now, if $\varphi$ is a non-elliptic self-map of $\mathbb{B}^m$ with Denjoy-Wolff point $\zeta$ and dilatation coefficient $\alpha$, it follows from the Julia-Caratheodory theorem that the function
$$
\frac{1-|\varphi(z)|^2}{1-|z|^2}
$$
has restricted $K$-limit $\alpha$ at $\zeta$. Thus the conjecture is true for any hyperbolic $\varphi$ for which $\varphi_n(0)\to \zeta$ restrictedly. However the following shows that in general we need not have restricted convergence, even for linear fractional maps.
\begin{prop}
Let $\varphi$ be a hyperbolic linear fractional map with Denjoy-Wolff point $e_1$ and dilatation coefficient $\alpha$, and let $\tilde\varphi$ be the conjugate mapping of $\mathbb{H}^m$ given by (\ref{E:BCD_repn}). If $\varphi_n(0)\to e_1$ restrictedly, then
\begin{equation}\label{E:restricted_little_o}
\|q_{n-1}(A)d\|^2=o(\alpha^n).
\end{equation}
\end{prop}
\begin{proof}
If $\varphi_n(0)\to e_1$ restrictedly then
$$
\lim_{n\to \infty} \frac{|\varphi_n(0)-\langle \varphi_n(0), e_1\rangle e_1|^2}{1-|\langle \varphi_n(0), e_1\rangle |^2} =0.
$$
Under the Cayley transform, this is equivalent to
$$
\lim_{n\to \infty} \frac{\|v_n\|^2}{\text{Re}\ u_n} =0,
$$
which is in turn the same as
$$
\lim_{n\to \infty} \frac{1}{\alpha^n}\frac{\|q_{n-1}(A)d\|^2}{\text{Re}\ x_n}=0.
$$
Since $\text{Re}\ x_n\sim 1$, this proves the proposition.
\end{proof}
Using the parametrization (\ref{E:BCD_repn}) it is straightforward to construct hyperbolic linear fractional maps for which the condition (\ref{E:restricted_little_o}) does not hold.\footnote{The corresponding ``big O'' condition is always satisfied.} To do this, fix $0<\alpha<1$ and let $A$ be the diagonal matrix with each diagonal entry equal to $\sqrt{\alpha}$. Let $d$ be any unit vector in $\mathbb{C}^{m-1}$ and define $b=2\alpha^{-1/2} d$, $c=\alpha^{-1}$.
Then $\tilde\varphi$ defined by (\ref{E:BCD_repn}) is conjugate to a hyperbolic linear fractional map for which (\ref{E:restricted_little_o}) is violated: we calculate
$$
\alpha^{-n}\|q_{n-1}(A)d\|^2 = \alpha^{n-2} \left( \sum_{k=0}^{n-1} \alpha^{-k/2}\right)^2 =\left( \frac{1-\alpha^{n/2}}{\sqrt{\alpha}\,(1-\alpha^{1/2})}\right)^2
$$
which is greater than $1$ for all $n\ge 1$. Even though the orbit $\varphi_n(0)$ need not approach the Denjoy-Wolff point restrictedly, it can be shown (at least when $m=2$) that when $\varphi$ is a linear fractional map, the limit
$$
\lim_{n\to \infty}\frac{1-|\varphi_n(0)|^2}{1-|\varphi_{n-1}(0)|^2}
$$
exists and equals $\alpha$. We do not know if this is true of general Schur-Agler mappings.
\begin{ques}
If $\varphi\in\mathcal{S}_m$ is hyperbolic with dilatation coefficient $\alpha$, is it true that
$$
\lim_{n\to \infty}\frac{1-|\varphi_n(0)|^2}{1-|\varphi_{n-1}(0)|^2}
$$
exists and equals $\alpha$?
\end{ques}
An affirmative answer to this question would prove the conjecture. If on the other hand the limit exists for some $\varphi$ but has a value different from $\alpha$ (necessarily larger) then the conjecture would be false. One may try to answer the question by looking for a stronger form of the Julia-Caratheodory theorem in the ball (valid for Schur-Agler mappings). Some results in this direction are obtained in \cite{jury-preprint}, but so far these results are not sufficient to answer the question.
\bibliographystyle{plain}
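As a closing numerical aside (an added sketch, with the hypothetical choice $m=2$, $\alpha=1/4$ in the construction above), one can iterate the map and watch both phenomena at once: the quantity $\alpha^{-n}\|q_{n-1}(A)d\|^2=\alpha^{n}\|v_n\|^2$ stays bounded away from zero, so (\ref{E:restricted_little_o}) fails, while the ratio of consecutive values of $1-|\varphi_n(0)|^2$ still appears to tend to $\alpha$, in line with the remark before the Question.
\begin{verbatim}
# Added sketch: the explicit counterexample with m = 2, alpha = 1/4,
# A = sqrt(alpha), d = 1, b = 2/sqrt(alpha), c = 1/alpha.
alpha = 0.25
A, d, b, c = alpha ** 0.5, 1.0, 2 / alpha ** 0.5, 1 / alpha

w1, wp = 1.0 + 0j, 0.0 + 0j        # Cayley image of the origin
prev = None
for n in range(1, 41):
    w1, wp = (w1 + c + wp * b) / alpha, (A * wp + d) / alpha
    one_minus = 4 * (w1.real - abs(wp) ** 2) / abs(w1 + 1) ** 2
    if n % 10 == 0:
        # first value stays of order one; second approaches alpha = 0.25
        print(n, alpha ** n * abs(wp) ** 2, one_minus / prev)
    prev = one_minus
\end{verbatim}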
{ "timestamp": "2007-07-23T20:10:22", "yymm": "0707", "arxiv_id": "0707.3425", "language": "en", "url": "https://arxiv.org/abs/0707.3425", "abstract": "We give a new proof that every linear fractional map of the unit ball induces a bounded composition operator on the standard scale of Hilbert function spaces on the ball, and obtain norm bounds analogous to the standard one-variable estimates. We also show that Cowen's one-variable spectral radius formula extends to these operators. The key observation underlying these results is that every linear fractional map of the ball belongs to the Schur-Agler class.", "subjects": "Functional Analysis (math.FA)", "title": "Norms and spectral radii of linear fractional composition operators on the ball", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713839878681, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7080104916942088 }
https://arxiv.org/abs/2108.01133
Asymptotic behavior of the principal eigenvalue and basic reproduction ratio for periodic patch models
This paper is devoted to the study of the asymptotic behavior of the principal eigenvalue and basic reproduction ratio associated with periodic population models in a patchy environment for small and large dispersal rates. We first deal with the eigenspace corresponding to the zero eigenvalue of the connectivity matrix. Then we investigate the limiting profile of the principal eigenvalue of an associated periodic eigenvalue problem as the dispersal rate goes to zero and infinity, respectively. We further establish the asymptotic behavior of the basic reproduction ratio in the case of small and large dispersal rates. Finally, we apply these results to a periodic Ross-Macdonald patch model.
\section{Introduction}
In 2007, Allen et al. \cite{allen2007Asymptotic} studied the following epidemic model in a patchy environment:
\begin{equation}\label{equ:SIS:auto}
\begin{aligned}
\frac{\mathrm{d} S_i}{\mathrm{d} t}= d_S \sum_{j =1}^{n} l_{ij} S_j - \beta_i \frac{S_i I_i}{S_i + I_i} + \gamma_i I_i, &~ i=1,\cdots,n,\\
\frac{\mathrm{d} I_i}{\mathrm{d} t}= d_I \sum_{j =1}^{n} l_{ij} I_j + \beta_i \frac{S_i I_i}{S_i + I_i} - \gamma_i I_i, &~ i=1,\cdots,n.\\
\end{aligned}
\end{equation}
Here $n \geq 2$ is the number of patches, $S_i(t)$ and $I_i(t)$ are the numbers of susceptible and infected individuals in patch $i$ at time $t$, respectively. The parameters $d_S$ and $d_I$ are the migration rates of the susceptible and infected populations; $l_{ij}$ is a nonnegative constant which denotes the degree of movement from patch $j$ to patch $i$ for $j \neq i$ and $l_{ii}=- \sum_{j \neq i}l_{ji}$ is the degree of movement from patch $i$ to all other patches; $\beta_i \geq 0$ and $\gamma_i >0$ are disease transmission and recovery rates at patch $i$, respectively. Let $L=(l_{ij})_{n \times n}$, $F=\mathrm{diag}(\beta_1,\cdots,\beta_n)$ and $V=\mathrm{diag}(\gamma_1,\cdots,\gamma_n)$. Following \cite{diekmann1990definition,van2002reproduction}, the basic reproduction ratio of system \eqref{equ:SIS:auto} is expressed as $\mathcal{R}_0(d_I)=r((V-d_I L)^{-1}F)$, $d_I \geq 0$, where $r((V-d_I L)^{-1}F)$ is the spectral radius of $(V-d_I L)^{-1}F$. Recall that a square matrix is said to be cooperative if its off-diagonal elements are nonnegative, and nonnegative if all elements are nonnegative; a square matrix is said to be irreducible if it is not similar, via a permutation, to a block lower triangular matrix, and reducible if otherwise; and the spectral bound (also called the stability modulus) of a square matrix $A$ is defined as $s(A)=\sup \{\mathrm{Re} \lambda: \lambda \text{ is an eigenvalue of } A\}$. Under the assumption that the migration matrix $L$ of infected individuals is symmetric and irreducible, Allen et al. \cite{allen2007Asymptotic} showed that
$$
\lim\limits_{d_I \rightarrow 0^+} s(d_I L -V+F) =\max_{1 \leq i \leq n}(\beta_i -\gamma_i), ~\lim\limits_{d_I \rightarrow +\infty} s(d_I L -V+F)= \frac{1}{n}\sum_{i =1}^{n} (\beta_i -\gamma_i),
$$
$$
\lim\limits_{d_I \rightarrow 0^+}\mathcal{R}_0(d_I)= \mathcal{R}_0(0)=\max_{1 \leq i \leq n} \frac{\beta_i}{\gamma_i}, \text{ and } \lim\limits_{d_I \rightarrow +\infty}\mathcal{R}_0(d_I)= \frac{\sum_{i =1}^{n}\beta_i}{\sum_{i =1}^{n} \gamma_i}.
$$
Without assuming the symmetry of $L$, Gao and Dong \cite{gao2019travel,gao2020fast} and Chen et al. \cite{chen2020asymptoticJMB} recently proved the same limiting properties for $s(d_I L -V+F)$ and $\mathcal{R}_0(d_I)$ as $d_I \rightarrow 0^+$, and generalized the other two limits into
$$
\lim\limits_{d_I \rightarrow +\infty} s(d_I L -V+F)= \sum_{i =1}^{n} (\beta_i -\gamma_i) q_i, \quad \lim\limits_{d_I \rightarrow +\infty}\mathcal{R}_0(d_I)= \frac{\sum_{i =1}^{n}\beta_i q_i}{\sum_{i =1}^{n} \gamma_i q_i},
$$
where $\bm{q}=(q_1,\cdots,q_n)^T$ is a right eigenvector of $L$ corresponding to the eigenvalue $0$ such that $\sum_{i=1}^{n} q_i=1$. Note that the connectivity matrix obtained from the linearization of system \eqref{equ:SIS:auto} at the disease-free equilibrium refers to the migration matrix of infected individuals.
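The limits just recalled are easy to observe numerically. The following Python sketch (with hypothetical three-patch parameter values) computes $\mathcal{R}_0(d_I)=r((V-d_I L)^{-1}F)$ for an irreducible migration matrix $L$ with zero column sums and compares the small- and large-dispersal values with $\max_i \beta_i/\gamma_i$ and $\sum_i \beta_i q_i/\sum_i \gamma_i q_i$.
\begin{verbatim}
# Added sketch (hypothetical 3-patch parameters): limits of R_0(d_I).
import numpy as np

beta = np.array([0.8, 0.3, 0.5])    # transmission rates beta_i
gamma = np.array([0.4, 0.6, 0.5])   # recovery rates gamma_i
F, V = np.diag(beta), np.diag(gamma)
L = np.array([[-0.6,  0.2,  0.3],
              [ 0.4, -0.5,  0.1],
              [ 0.2,  0.3, -0.4]])  # zero column sums, irreducible

def R0(d):
    # spectral radius of the next-generation matrix (V - d L)^{-1} F
    return max(abs(np.linalg.eigvals(np.linalg.solve(V - d * L, F))))

# right eigenvector q of L for the zero eigenvalue, normalized to sum 1
w, vecs = np.linalg.eig(L)
q = np.real(vecs[:, np.argmin(abs(w))])
q /= q.sum()

print(R0(1e-8), (beta / gamma).max())        # small-dispersal limit
print(R0(1e8), beta @ q / (gamma @ q))       # large-dispersal limit
\end{verbatim}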
In many multi-population models in a patchy environment, however, the connectivity matrix is reducible, although the migration matrix for each population is irreducible (see, e.g., \cite{gao2014periodic,gao2012multipatch}). Thus, a natural question is how to further characterize the above limiting profiles for $s(d_I L -V+F)$ and $\mathcal{R}_0(d_I)$ without the irreducibility condition on the connectivity matrix. Such problems have been explored for reaction-diffusion systems (see, e.g., \cite{allen2008asymptotic,wang2012basic,magal2019basic,chen2020asymptoticSIAP,dancer2009principal,lam2016asymptotic,zhang2020asymptotic}). In the case where the connectivity matrix is symmetric, this question is much easier than the associated problem for reaction-diffusion systems. It is worth pointing out that the limiting problem for large dispersal rate is highly nontrivial when the connectivity matrix is non-symmetric. For time-periodic patch population models (see, e.g., \cite{gao2014periodic,zhang2007periodic}), we may conjecture that similar limiting results on the principal eigenvalue and basic reproduction ratio hold true. This conjecture was confirmed for reaction-diffusion systems (see, e.g., \cite{hutson2001evolution,yang2019dynamics,zhang2020asymptotic,peng2012reaction,peng2015effects}). However, it seems that these methods and arguments may not be well adapted to such periodic patch models due to the lack of irreducibility and symmetry for the connectivity matrix.

The purpose of this paper is to address the aforementioned two questions for patch population models. Motivated by \cite{wang2004epidemic,zhang2007periodic,allen2007Asymptotic,gao2012multipatch,gao2020fast}, we assume that the connectivity matrix $L$ satisfies the following property:
\begin{itemize}
\item[(H1)] $L=(l_{ij})_{n \times n} $ is an $n\times n$ cooperative matrix with zero column sums.
\end{itemize}
Then we have the following elementary observation, which plays a key role in our analysis.
\begin{thmx}[see Lemmas \ref{lem:L_K}, \ref{lem:L_K_M} and \ref{lem:PQM}] \label{thm:A}
Assume that {\rm (H1)} holds. Let $\alpha_0$ be the algebraic multiplicity of the zero eigenvalue of $L$. Then the following statements are valid:
\begin{itemize}
\item[\rm (i)] There exist nonnegative matrices $P=(p_{hj})_{\alpha_0 \times n}$ and $Q=(q_{il})_{n \times \alpha_0}$ such that $PL$ and $LQ$ are zero matrices and $PQ$ is the $\alpha_0 \times \alpha_0$ identity matrix.
\item[\rm (ii)] If $M$ is an $n \times n$ cooperative matrix, then $PMQ$ is an $\alpha_0 \times \alpha_0$ cooperative matrix.
\item[\rm (iii)] Let $\hat{P}=(\hat{p}_{hj})_{\alpha_0 \times n}$ and $\hat{Q}=(\hat{q}_{il})_{n \times \alpha_0}$ be two nonnegative matrices such that $\hat{P}L$ and $L\hat{Q}$ are zero matrices and $\hat{P}\hat{Q}$ is the $\alpha_0 \times \alpha_0$ identity matrix. Then $P M Q$ is similar to $\hat{P} M \hat{Q}$.
\end{itemize}
\end{thmx}
We remark that the rows of $P$ and the columns of $Q$ are left and right eigenvectors of $L$ corresponding to the zero eigenvalue, respectively. Note that any autonomous system can be regarded as a periodic one with the period being any given positive number. As a straightforward consequence of our general result for periodic systems (see Theorem \ref{thm:C} below), we have the following result on the limiting profiles of the spectral bound and basic reproduction ratio with small and large dispersal rate for autonomous patch models.
\begin{thmx}[]\label{thm:B}
Assume that {\rm (H1)} holds, $-V$ is an $n \times n $ cooperative matrix, and $F$ is an $n \times n $ nonnegative matrix. Let $P$ and $Q$ be defined as in Theorem \ref{thm:A}, $\tilde{V}:= PVQ$ and $\tilde{F}:=PFQ$. Then the following statements are valid:
\begin{itemize}
\item[\rm (i)] $\lim\limits_{d \rightarrow 0^+} s(d L -V +F)= s(-V+F)$ and $\lim\limits_{d \rightarrow +\infty} s(d L -V +F)= s(-\tilde{V}+\tilde{F})$.
\item[\rm (ii)] If, in addition, $s(dL-V)<0$ for all $d \geq 0$ and $s(-\tilde{V})<0$, then $\lim\limits_{d \rightarrow 0^+} \mathcal{R}_0 (d) =\mathcal{R}_0(0)$ and $\lim\limits_{d \rightarrow +\infty} \mathcal{R}_0(d)= \tilde{\mathcal{R}}_0$, where $\mathcal{R}_0(d):=r((V-dL)^{-1}F)$, $\forall d \geq 0$, and $\tilde{\mathcal{R}}_0:= r (\tilde{V}^{-1} \tilde{F})$.
\end{itemize}
\end{thmx}
Note that the additional conditions $s(dL-V)<0$ for all $d \geq 0$ and $s(-\tilde{V})<0$ are used to guarantee that the associated basic reproduction ratios $\mathcal{R}_0(d)$ and $\tilde{\mathcal{R}}_0$ are well defined (see, e.g., \cite{van2002reproduction}), and $s(-\tilde{V}+\tilde{F})$ is independent of the choice of $P$ and $Q$ due to Theorem \ref{thm:A}. In the case where $L$ is irreducible, the results in Theorem \ref{thm:B} were established in \cite{gao2019travel,gao2020fast,chen2020asymptoticJMB}.

To present our main result for time-periodic systems, we use $T>0$ to denote the period throughout this paper. Let $F(t)$ and $V(t)$ be two continuous $n\times n$ matrix-valued functions of $t \in \mathbb{R}$ such that
\begin{itemize}
\item[(H2)] $F(t+T)=F(t)$, $V(t+T)=V(t)$, $F(t)$ is nonnegative, and $-V(t)$ is cooperative for all $t \in \mathbb{R}$.
\end{itemize}
For any $t \in \mathbb{R}$, let $\tilde{F}(t):=PF(t)Q$ and $\tilde{V}(t):=PV(t)Q$, where $P$ and $Q$ are defined as in Theorem \ref{thm:A}. For any $d \geq 0$, let $\{\Phi_{d}(t,s): t \geq s\}$ be the evolution family on $\mathbb{R}^n$ of $ \frac{\mathrm{d} \bm{v}}{\mathrm{d} t}=d L \bm{v} -V(t) \bm{v}, $ and let $\{\tilde{\Phi}(t,s): t \geq s\}$ be the evolution family on $\mathbb{R}^{\alpha_0}$ of $\frac{\mathrm{d} \bm{v}}{\mathrm{d} t}= - \tilde{V}(t) \bm{v}$ (see Definition \ref{def:evol}), where $\alpha_0$ is the algebraic multiplicity of the zero eigenvalue of $L$. Let $\omega(\Phi)$ be the exponential growth bound of an evolution family $\Phi$ (see Definition \ref{def:evol}). We further assume that
\begin{itemize}
\item[(H3)] $\omega(\Phi_{d})<0$ for all $d \geq 0$ and $\omega(\tilde{\Phi})<0$.
\end{itemize}
For any $d \geq 0$, let $\lambda_{d}^{*}$ be the principal eigenvalue of the periodic eigenvalue problem (see Definition \ref{def:principal} and Theorem \ref{thm:existence}):
$$\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=d L \bm{u} -V(t) \bm{u} +F(t) \bm{u} - \lambda \bm{u}.$$
According to \cite{bacaer2006epidemic, wang2008threshold}, the basic reproduction ratio $\mathcal{R}_0(d)$ is well defined for the following periodic ODE system (see section \ref{sec:R0}):
\begin{equation}\label{equ:dLVF}
\frac{\mathrm{d} \bm{v}}{\mathrm{d} t}=d L \bm{v} -V(t) \bm{v} +F(t) \bm{v}.
\end{equation}
In view of Theorem \ref{thm:A}, we see that $-\tilde{V}(t)$ is cooperative for any $t \in \mathbb{R}$. Moreover, $\tilde{F}(t)$ is nonnegative for any $t \in \mathbb{R}$.
Let $\tilde{\lambda}^{*}$ be the principal eigenvalue of the periodic eigenvalue problem: $$\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}= -\tilde{V}(t) \bm{u} + \tilde{F}(t) \bm{u} - \lambda \bm{u},$$ and $\tilde{\mathcal{R}}_0$ be the basic reproduction ratio of the following periodic equation (see section \ref{sec:R0}): \begin{equation}\label{equ:tdLVF} \frac{\mathrm{d} \bm{v}}{\mathrm{d} t}=-\tilde{V}(t) \bm{v} + \tilde{F}(t) \bm{v}. \end{equation} Then we have the following result on the asymptotic behavior of $\lambda_{d}^{*}$ and $\mathcal{R}_0(d)$ for periodic patch models. \begin{thmx}[see Theorems \ref{thm:eig} and \ref{thm:R0}] \label{thm:C} Assume that {\rm (H1)--(H3)} hold. Then the following statements are valid: \begin{itemize} \item[\rm (i)] $\lim\limits_{d \rightarrow 0^+} \lambda_{d}^{*}= \lambda_{0}^{*}$ and $\lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*}= \tilde{\lambda}^{*}$. \item[\rm (ii)] $\lim\limits_{d \rightarrow 0^+} \mathcal{R}_0(d)= \mathcal{R}_0(0)$ and $\lim\limits_{d \rightarrow +\infty} \mathcal{R}_0(d)= \tilde{\mathcal{R}}_0$. \end{itemize} \end{thmx} We should point out that $\tilde{\lambda}^{*}$ is independent of the choice of $P$ and $Q$ (see Lemma \ref{lem:PQO}). The statements (i) and (ii) in Theorem \ref{thm:C} are straightforward consequences of Theorems \ref{thm:eig} and \ref{thm:R0}, respectively. In Theorem \ref{thm:R0}, we also introduce a metric space of parameters to discuss the continuity of the basic reproduction ratio with respect to parameters. Since the Poincar\'e (period) map of system \eqref{equ:dLVF}, which is a square matrix, is continuous with respect to the dispersal rate $d \in[0,+\infty)$, so is the principal eigenvalue due to the standard matrix perturbation theory. To obtain the limiting profile of the principal eigenvalue as the dispersal rate goes to infinity, we distinguish two cases. In the case where the Poincar\'e map (matrix) of \eqref{equ:tdLVF} is irreducible, we use some ideas inspired by \cite{hale1986large,hale1987varying,hale1989shadow,hutson2001evolution,zhang2020asymptotic}, where the asymptotic behavior of the positive steady states or periodic solutions was derived for large diffusion coefficients. In the case where such a matrix is reducible, we combine the perturbation technique and the results for appropriate subsystems such that the Poincar\'e maps of the associated limiting systems are irreducible. In our recent paper \cite{zhang2020asymptotic}, we established the continuity of the basic reproduction ratio with respect to parameters under the setting of Thieme \cite{thieme2009spectral}, which enables us to reduce the limiting profile of the basic reproduction ratio to the asymptotic behavior of the principal eigenvalue of the associated periodic eigenvalue problem with parameters. In the current paper, we give a more general result in this regard and then use it to prove Theorem \ref{thm:C} (ii). The remaining part of this paper is organized as follows. In the next section, we present some basic properties of cooperative matrices and prove a general result in order to study the continuity of the basic reproduction ratio with respect to parameters. In section \ref{sec:eig}, we study the asymptotic behavior of the principal eigenvalue for periodic cooperative ODE systems with large dispersal rate. 
In section \ref{sec:R0}, we prove the continuity of the basic reproduction ratio with respect to the dispersal rate and investigate the limiting profile of the basic reproduction ratio as the dispersal rate goes to infinity. As an illustrative example, we also apply these analytic results to a periodic Ross-Macdonald patch model.

\section{Preliminaries}
In this section, we present some properties of cooperative matrices and prove a general result in order to study the continuity of the basic reproduction ratio with respect to parameters. Throughout the whole paper, we denote $\bm{0}=(0,\cdots,0)^T$ for any finite dimension. Moreover, without ambiguity, $0$ refers to the zero matrix.
\begin{lemma}\label{lem:L_K:0}
Assume that {\rm (H1)} holds and $L$ can be split into a block lower triangular matrix
$$
L= \left( \begin{array}{ccc} L_{11}& \cdots & L_{1\alpha} \\ \vdots& \ddots& \vdots\\ L_{\alpha 1}& \cdots & L_{\alpha \alpha}\\ \end{array} \right)
$$
such that $L_{hh}$ is an $n_h \times n_h $ irreducible matrix for $1 \leq h \leq \alpha$ with $\sum_{l=1}^{\alpha} n_l=n$, and $L_{hl}=0$ for $1 \leq h < l \leq \alpha$. Then for any fixed $1 \leq l \leq \alpha$, $s(L_{ll})=0$ if $L_{hl}=0$ for all $1 \leq l < h \leq \alpha$, and $s(L_{ll})<0$ otherwise. Equivalently, $L_{hl}=0$ for all $1 \leq h \neq l \leq \alpha$ if $s(L_{ll})=0$, and there is some $h_0 \neq l$ such that $L_{h_0l}$ is a nonzero matrix otherwise.
\end{lemma}
\begin{proof}
Let $\bm{e}=(1,\cdots,1)^T$ and $\bm{e}_l=(1,\cdots,1)^T$ be the $n$-dimensional and $n_l$-dimensional all-ones vectors for any $1 \leq l \leq \alpha$, respectively. It is easy to see that $\bm{e}^T L= \bm{0}^T$. For a fixed $1 \leq l_0 \leq \alpha$, an easy computation yields that
$$
\sum_{h =1}^{\alpha}(\bm{e}_h)^T L_{h l_0}=\sum_{h =l_0}^{\alpha}(\bm{e}_h)^T L_{h l_0} =\bm{0}^T.
$$
If $L_{hl_0}$ is a zero matrix for all $1 \leq l_0 < h \leq \alpha$, then $(\bm{e}_{l_0})^T L_{l_0l_0} = \bm{0}^T$. This implies that $s(L_{l_0l_0})=0$. Otherwise, by the irreducibility of $L_{l_0l_0}$, we conclude that $s(L_{l_0l_0})<0$ due to \cite[Theorem II.1.11]{berman1994nonnegative}.
\end{proof}
\begin{itemize}
\item[(H1)$'$] $L=(l_{ij})_{n \times n} $ is an $n\times n$ cooperative matrix with $s(L)=0$, and $L$ can be split into a block lower triangular matrix
$$
L= \left( \begin{array}{ccc} L_{11}& \cdots & L_{1\alpha} \\ \vdots& \ddots& \vdots\\ L_{\alpha 1}& \cdots & L_{\alpha \alpha}\\ \end{array} \right)
$$
such that $L_{hh}$ is an $n_h \times n_h $ irreducible matrix for $1 \leq h \leq \alpha$ with $\sum_{l=1}^{\alpha} n_l=n$, $L_{hl}=0$ for $1 \leq h < l \leq \alpha$, and $L_{hl}=0$ for all $l \in \Lambda_0$ and $1\leq h \leq \alpha$ with $h \neq l$, where
$$\Lambda_0:=\{1 \leq l \leq \alpha: s(L_{ll})=0 \}, \text{ and } \Lambda_0^{c}:=\{1 \leq l \leq \alpha: s(L_{ll})<0 \}.$$
Let $\alpha_0$ and $\alpha_0^c$ denote the numbers of elements in $\Lambda_0$ and $\Lambda_0^{c}$, respectively.
\end{itemize}
When applying Lemma \ref{lem:L_K:0}, we choose $\alpha=1$ if $L$ is irreducible, and write $L$ as such a block lower triangular matrix via a permutation if $L$ is reducible. Accordingly, Lemma \ref{lem:L_K:0} implies that (H1) is sufficient for (H1)$'$ to hold.
\begin{lemma}\label{lem:L_K}
Assume that {\rm (H1)$'$} holds. Let $\bm{\nu}$ be an $\alpha_0$-dimensional vector defined by $ \bm{\nu}=(\nu_1,\cdots,\nu_{\alpha_0})^T $ with $\nu_l= \alpha_0^c +l$, $\forall 1 \leq l \leq \alpha_0$.
Then the following statements are valid: \begin{itemize} \item[\rm (i)] If $\Lambda_0^c \neq \emptyset$, then $\Lambda_0^c=\{1,\cdots,\alpha_0^c\}$ and $\Lambda_0=\{\alpha_0^c+1,\cdots,\alpha\}$, via a permutation. \item[\rm (ii)] The algebraic multiplicity of the zero eigenvalue of $L$ is $\alpha_0$, and there exist $\alpha_0$ linearly independent left nonnegative eigenvectors $(\bm{p}_l)^T:=((\bm{p}_l^1)^T,\cdots,(\bm{p}_l^\alpha)^T)$, $1 \leq l \leq \alpha_0$ of $L$ and right nonnegative eigenvectors $\bm{q}_l=((\bm{q}_l^1)^T,\cdots,(\bm{q}_l^\alpha)^T)^T$, $1 \leq l \leq \alpha_0$ of $L$ corresponding to $0$ such that $(\bm{p}_l )^T\bm{q}_h=\delta_{lh}$ for $1 \leq l, h \leq \alpha_0$, where $\bm{p}_l^i$ and $\bm{q}_l^i$ are $n_i$-dimensional vectors and $\delta_{lh}$ denotes the Kronecker delta (that is, $\delta_{lh}=1$ if $l=h$ and $\delta_{lh}=0$ otherwise). Moreover, $\bm{p}_l^{\nu_l} \gg \bm{0}$, $\bm{q}_l^{\nu_l} \gg \bm{0}$, $L_{\nu_l\nu_l}\bm{q}_l^{\nu_l}=\bm{0}$ and $(\bm{p}_{l}^{\nu_l})^TL_{\nu_l\nu_l}=\bm{0}^T$, $\forall 1 \leq l \leq \alpha_0$. \item[\rm (iii)] Define $$ P:= \left(\begin{matrix} \bm{p}_1, \cdots, \bm{p}_{\alpha_0} \end{matrix} \right)^T \text{ and } Q:= \left(\begin{matrix} \bm{q}_1, \cdots, \bm{q}_{\alpha_0} \end{matrix} \right). $$ Then $PQ$ is an $\alpha_0 \times \alpha_0$ identity matrix. Moreover, $PL=0$ and $LQ=0$. \end{itemize} \end{lemma} \begin{proof} (i) can be derived via a permutation due to (H1)$'$. (ii) We only consider the case of $\alpha_0^c >0$, since the case of $\alpha_0^c =0$ can be obtained similarly. For any $1 \leq l \leq \alpha_0$, choose $\bm{q}^{\nu_l} \gg 0$ such that $L_{\nu_l\nu_l}\bm{q}^{\nu_l}=\bm{0}$, and define $$ \bm{q}_l:=((\bm{q}_l^1)^T,\cdots,(\bm{q}_l^\alpha)^T)^T, $$ where $\bm{q}_l^i$ is an $n_i$-dimensional vector, $\bm{q}_l^{\nu_l}=\bm{q}^{\nu_l}$, and $\bm{q}_l^i=\bm{0}$ if $i \neq \nu_l$. This implies that $L\bm{q}_l=\bm{0}$ for any $1 \leq l \leq \alpha_0$. For any $1 \leq l \leq \alpha_0$, choose $\bm{p}^{\nu_l} \gg 0$ such that $(\bm{p}^{\nu_l})^T L_{\nu_l\nu_l}=\bm{0}^T$ with $(\bm{p}^{\nu_l})^T\bm{q}^{\nu_l}=1$. Define $$ \bm{p}_l:=((\bm{p}_l^1)^T,\cdots,(\bm{p}_l^\alpha)^T)^T, $$ where $\bm{p}_l^i$ is an $n_i$-dimensional vector, $\bm{p}_l^{\nu_l}=\bm{p}^{\nu_l}$, $\bm{p}_l^i=\bm{0}$ if $i \in \Lambda_0$ with $i \neq \nu_l$, and $\bm{p}_l^i$ for $i \in \Lambda_0^c$ is determined by the following equations: \begin{equation}\label{equ:p:Lhl} \sum_{i =1}^\alpha (\bm{p}_l^i)^T L_{ih}=\bm{0}^T, ~1 \leq h \leq \alpha. \end{equation} This is equivalent to $$ \sum_{i \in \Lambda_0^c} (\bm{p}_l^i)^T L_{ih}=-\sum_{i \in \Lambda_0} (\bm{p}_l^i)^T L_{ih},~ 1 \leq h \leq \alpha, $$ where the equations with $h \in \Lambda_0$ hold automatically by (H1)$'$ and the choice of $\bm{p}^{\nu_l}$. Notice that the coefficient matrix $$\tilde{L}=\left( \begin{array}{ccc} L_{11}& \cdots & L_{1\alpha_0^c} \\ \vdots& \ddots& \vdots\\ L_{\alpha_0^c1}& \cdots & L_{\alpha_0^c\alpha_0^c}\\ \end{array} \right)$$ of the remaining equations is a block lower triangular matrix whose diagonal blocks are $L_{ii}$ for $i \in \Lambda_0^c$. It then follows that $s(\tilde{L})= \max_{i \in \Lambda_0^c}s(L_{ii})<0$, and hence, \eqref{equ:p:Lhl} admits a unique solution. Moreover, $\bm{p}_l^i$ is nonnegative for $i \in \Lambda_0^c$, since $\tilde{L}$ is cooperative with $s(\tilde{L})<0$ (so that $-\tilde{L}^{-1}$ is nonnegative) and the right-hand sides above are nonpositive. Thus, $\bm{p}_l^T L =\bm{0}^T$, $\forall 1 \leq l \leq \alpha_0$. In view of the above arguments, it easily follows that the algebraic multiplicity of the zero eigenvalue of $L$ is no less than $\alpha_0$. To obtain the reverse inequality, it suffices to prove the following two claims.
{\it Claim 1.} If $L\bm{q}=\bm{0}$ with $\bm{q}=((\bm{q}^1)^T,\cdots,(\bm{q}^\alpha)^T)^T$, where $\bm{q}^l$ is an $n_l$-dimensional vector, then $\bm{q}^{l}=\bm{0}$ for $l \in \Lambda_0^c$ and $L_{ll}\bm{q}^l=\bm{0}$ for $l \in \Lambda_0$. {\it Claim 2.} If $L^m\bm{q}=\bm{0}$ for some $m>1$ with $\bm{q}=((\bm{q}^1)^T,\cdots,(\bm{q}^\alpha)^T)^T$, where $\bm{q}^l$ is an $n_l$-dimensional vector, then $\bm{q}^{l}=\bm{0}$ for $l \in \Lambda_0^c$ and $L_{ll}\bm{q}^l=\bm{0}$ for $l \in \Lambda_0$. We postpone the proof of these claims and first complete the proof of statement (ii). By the irreducibility of $L_{ll}$, it then follows from Claim 1 that $\bm{q}$ is a linear combination of $\{\bm{q}_l \}_{1 \leq l \leq \alpha_0 }$ for any $\bm{q}$ with $L\bm{q}=\bm{0}$, and hence, the geometric multiplicity of the zero eigenvalue of $L$ is no more than $\alpha_0$. Similarly, it follows from Claim 2 that the algebraic multiplicity of the zero eigenvalue of $L$ is no more than $\alpha_0$. Thus, the desired conclusion holds. We now return to the proof of Claim 1, and first show that $\bm{q}^{l}=\bm{0}$ for all $l \in \Lambda_0^c$ by induction. It is easy to see that $L_{11}\bm{q}^1=\bm{0}$. Thus, $s(L_{11})<0$ implies that $\bm{q}^1=\bm{0}$. Assuming that $\bm{q}^{l}=\bm{0}$ for $1 \leq l \leq l_0$ with ${l_0} \in \Lambda_0^c$, it suffices to prove that $\bm{q}^{{l_0}+1}=\bm{0}$ if $l_0 + 1 \in \Lambda_0^c$. In view of $L\bm{q}=\bm{0}$, we have $$ L_{({l_0}+1)({l_0}+1)}\bm{q}^{{l_0}+1} =\sum_{i=1}^{\alpha} L_{({l_0}+1) i} \bm{q}^{i} =\bm{0}, $$ due to $L_{({l_0}+1) i }=0$ if $i > l_0 +1$, and $\bm{q}^{i}=\bm{0}$ if $i < l_0 +1$. Thus, $\bm{q}^{l}=\bm{0}$ for all $l \in \Lambda_0^c$. By (H1)$'$, $L_{hl}=0$ if $ h,l \in \Lambda_0$ with $h \neq l$. It then follows that $$L_{ll}\bm{q}^l=\sum_{i =1}^{\alpha}L_{li}\bm{q}^i=\bm{0},~l \in \Lambda_0.$$ We next verify Claim 2. Since $L$ is a block lower triangular matrix, $\mathrm{diag}(L_{11}^m,\cdots,L_{\alpha \alpha}^m)$ is the block diagonal of $L^m$. Repeating the argument of Claim 1 for $L^m$, we have $\bm{q}^{l}=\bm{0}$ for $l \in \Lambda_0^c$ and $L_{ll}^{m}\bm{q}^l=\bm{0}$ for $l \in \Lambda_0$. Thus, the irreducibility of $L_{ll}$ implies that $L_{ll}\bm{q}^l=\bm{0}$, since $0=s(L_{ll})$ is then a simple eigenvalue of $L_{ll}$ and hence $\ker(L_{ll}^{m})=\ker(L_{ll})$. \end{proof} In view of Lemma \ref{lem:L_K}, we observe that $\alpha_0$ is not only the number of elements in $\Lambda_0$, but also the algebraic multiplicity of the zero eigenvalue of $L$. In the rest of this paper, we use the same notations $\bm{\nu}$, $\bm{p}_l$, $\bm{q}_l$, $P$ and $Q$ as in Lemma \ref{lem:L_K}. \begin{lemma}\label{lem:L_K_M} Assume that {\rm (H1)$'$} holds, $\Lambda_0^c=\{1,\cdots,\alpha_0^c\}$ and $\Lambda_0=\{\alpha_0^c+1,\cdots,\alpha\}$ whenever $\Lambda_0^c \neq \emptyset$. Let $M$ be a cooperative matrix such that $$ M= \left( \begin{array}{ccc} M_{11}& \cdots & M_{1\alpha} \\ \vdots& \ddots& \vdots\\ M_{\alpha 1}& \cdots & M_{\alpha \alpha}\\ \end{array} \right), $$ where $M_{hl}$ is an $n_h \times n_l $ matrix for $1 \leq h,l \leq \alpha$. Then the following statements are valid: \begin{itemize} \item[\rm (i)] $PMQ$ is cooperative. \item[\rm(ii)] Let $\bm{b}$ be an $\alpha_0$-dimensional vector defined by $ \bm{b}=(b_1,\cdots,b_{\alpha_0})^T $ with $1 \leq b_i \leq \alpha_0$ and $b_i \neq b_j$ if $i \neq j$. Define a matrix $\tilde{M}:=(\tilde{m}_{hl})_{\alpha_0 \times \alpha_0}$ by $\tilde{m}_{hl}=\bm{p}_{b_h}^T M \bm{q}_{b_l}$. Then $\tilde{M}$ is similar to $PMQ$, via a permutation.
If $\tilde{M}$ is reducible, then $\tilde{M}$, via a reordering of the components of $\bm{b}$, can be split into $$ \tilde{M}= \left( \begin{array}{ccc} \tilde{M}_{11}& \cdots & \tilde{M}_{1\tilde{n}} \\ \vdots& \ddots& \vdots\\ \tilde{M}_{\tilde{n}1}& \cdots & \tilde{M}_{\tilde{n}\tilde{n}}\\ \end{array} \right), $$ where $\tilde{M}_{ii}$ is an $\alpha_i \times \alpha_i$ irreducible matrix for all $1 \leq i \leq \tilde{n}$ with $\sum_{i=1}^{\tilde{n}} \alpha_i =\alpha_0$, and $\tilde{M}_{ij}=0$ for all $1 \leq i < j \leq \tilde{n}$. \item[\rm (iii)] Let $\bm{b}=((\bm{b}^1)^T,\cdots,(\bm{b}^{\tilde{n}})^T)^T=(b_1,\cdots,b_{\alpha_0})^T$, where $\bm{b}^i=(b^i_1,\cdots,b^i_{\alpha_i})^T$. Then $\tilde{M}_{ii}$ is still an $\alpha_i \times \alpha_i$ irreducible matrix for all $1 \leq i \leq \tilde{n}$ with $\sum_{i=1}^{\tilde{n}} \alpha_i =\alpha_0$, and $\tilde{M}_{ij}=0$ for all $1 \leq i < j \leq \tilde{n}$, after reordering the components of $\bm{b}^i$ so that $b^i_1< \cdots < b^i_{\alpha_i}$. \item[\rm (iv)] For any $1 \leq i \leq \tilde{n}$, let $$ \tilde{\nu}^{i}_{j}:= \begin{cases} j,&1 \leq j \leq \alpha_0^c,\\ \alpha_0^c +b^{i}_{j-\alpha_0^c}, & 1+ \alpha_0^c \leq j \leq \alpha_0^c +\alpha_i, \end{cases} $$ if $\alpha_0^c>0$ and $\tilde{\nu}^{i}_{j}:=b^{i}_{j}$, $1 \leq j \leq \alpha_i$, if $\alpha_0^c=0$, and define $$\bm{p}_{l,i}:=((\bm{p}_{l,i}^{1})^T,\cdots,(\bm{p}_{l,i}^{\alpha_0^c+\alpha_i})^T)^T,~ \bm{q}_{l,i}:=((\bm{q}_{l,i}^{1})^T,\cdots,(\bm{q}_{l,i}^{\alpha_0^c+\alpha_i})^T)^T,$$ $L^{i}:=(L_{hl}^i)_{(\alpha_0^c+\alpha_i)\times (\alpha_0^c+\alpha_i)}$, $M^{i}:=(M_{hl}^i)_{(\alpha_0^c+\alpha_i)\times (\alpha_0^c+\alpha_i)}$, and $\tilde{M}^{i}:=(\tilde{m}_{hl}^{i})_{\alpha_i \times \alpha_i}$ by $$ \bm{p}_{l,i}^{j}= \bm{p}_{b_{l}^{i}}^{\tilde{\nu}_{j}^{i}}, ~\bm{q}_{l,i}^{j}= \bm{q}_{b_{l}^{i}}^{\tilde{\nu}_{j}^{i}},~ 1 \leq j \leq \alpha_0^c+ \alpha_i,~ 1\leq l \leq \alpha_i, $$ $$ L_{hl}^{i}= L_{\tilde{\nu}^{i}_h \tilde{\nu}^{i}_l},~M_{hl}^{i}= M_{\tilde{\nu}^{i}_h \tilde{\nu}^{i}_l}, ~1 \leq h, l \leq \alpha_0^c +\alpha_i, $$ and $$ \tilde{m}_{hl}^{i}=\bm{p}_{h,i}^T M^i \bm{q}_{l,i}, ~1 \leq h, l \leq \alpha_i. $$ Then for any $1 \leq i \leq \tilde{n}$, we have $\tilde{M}_{ii}=\tilde{M}^{i}$, and for any $1\leq l \leq \alpha_i$, $$ \bm{p}_{l,i}^T\bm{q}_{l,i}=1,~ \bm{p}_{l,i}^T\bm{q}_{h,i}=0,~ h\neq l,~ L^{i}\bm{q}_{l,i}=\bm{0}, \text{ and } \bm{p}_{l,i}^T L^{i}=\bm{0}^T. $$ \end{itemize} \end{lemma} \begin{proof} We only consider the case of $\alpha_0^c >0$, since the case of $\alpha_0^c =0$ can be addressed in a similar way. (i) For any $1 \leq l \leq \alpha_0$, $\bm{q}_{l}^{\nu_{l}} \neq \bm{0}$ and $\bm{q}_{l}^{j} = \bm{0}$, $j \neq \nu_{l}$, and $\bm{p}_{l}^{\nu_{l}} \neq \bm{0}$ and $\bm{p}_{l}^{j} = \bm{0}$, $j \neq \nu_{l}$ with $j>\alpha_0^c$. An easy computation yields that \begin{equation} \bm{p}_{h}^T M \bm{q}_{l} = \sum_{j =1}^{\alpha_0^c} (\bm{p}_{h}^{j})^T M_{j \nu_{l}} \bm{q}_{l}^{\nu_{l}} + (\bm{p}_{h}^{\nu_{h}})^T M_{\nu_{h} \nu_{l}} \bm{q}_{l}^{\nu_{l}},~ 1\leq h,l \leq \alpha_0. \end{equation} Since $M_{j\nu_{l}}$ is nonnegative for all $1 \leq j \leq \alpha_0^c$ and $M_{\nu_{h} \nu_{l}}$ is nonnegative for $h \neq l$, it follows that $PMQ$ is cooperative. Note that exchanging the order of the components of $\bm{b}$ amounts to simultaneously permuting the corresponding rows and columns of $\tilde{M}$. Thus, statements (ii) and (iii) follow from \cite[Section 2.3]{berman1994nonnegative}.
(iv) It is easy to see that $\tilde{\nu}^{i}_{1}< \cdots<\tilde{\nu}^{i}_{\alpha_i+\alpha_0^c} $ and $\tilde{\nu}^{i}_{l+\alpha_0^c}=b_{l}^{i}+\alpha_{0}^{c}=\nu_{b^{i}_{l}}$, $\forall 1\leq l \leq \alpha_i$. Thus, for any $1\leq i \leq \tilde{n}$, $1 \leq l \leq \alpha_i$, $$\bm{q}_{l,i}^{\nu_l}=\bm{q}_{l,i}^{\alpha_0^c+l}=\bm{q}_{b_{l}^{i}}^{\tilde{\nu}_{\alpha_0^c+l}^{i}}=\bm{q}_{b_l^i}^{\nu_{b_l^i}} \neq \bm{0}, ~\bm{q}_{l,i}^{j}= \bm{q}_{b_{l}^{i}}^{\tilde{\nu}_{j}^{i}}= \bm{0}, ~\forall j \neq \nu_l,$$ $$\bm{p}_{l,i}^{\nu_l}=\bm{p}_{l,i}^{\alpha_0^c+l}=\bm{p}_{b_{l}^{i}}^{\tilde{\nu}_{\alpha_0^c+l}^{i}}=\bm{p}_{b_l^i}^{\nu_{b_l^i}} \neq \bm{0},~ \bm{p}_{l,i}^{j}= \bm{p}_{b_{l}^{i}}^{\tilde{\nu}_{j}^{i}}= \bm{0}, ~\forall j \neq \nu_l, \text{ with } j>\alpha_0^c, $$ $$ M_{j \nu_{l}}^i =M_{j (l+ \alpha_0^c)}^i =M_{\tilde{\nu}_{j}^{i}\tilde{\nu}_{l+ \alpha_0^c}^{i}} =M_{j\tilde{\nu}_{l+ \alpha_0^c}^{i}} =M_{j \nu_{b_l^i}},~ \forall 1 \leq j \leq \alpha_0^c, $$ and $$ M_{\nu_{h} \nu_{l}}^i =M_{(h+ \alpha_0^c) (l+ \alpha_0^c)}^i =M_{\tilde{\nu}_{h+ \alpha_0^c}^{i}\tilde{\nu}_{l+ \alpha_0^c}^{i}} =M_{\nu_{b_h^i} \nu_{b_l^i}}, ~ \forall 1 \leq h \leq \alpha_i. $$ Thus, for any $1\leq i \leq \tilde{n}$, $1 \leq l,h \leq \alpha_i$, $$ \begin{aligned} \tilde{m}_{hl}^{i} &=\bm{p}_{h,i}^T M^i \bm{q}_{l,i} = \sum_{j =1}^{\alpha_0^c} (\bm{p}_{h,i}^{j})^T M_{j \nu_{l}}^i \bm{q}_{l,i}^{\nu_{l}} + (\bm{p}_{h,i}^{\nu_{h}})^T M_{\nu_{h} \nu_{l}}^i \bm{q}_{l,i}^{\nu_{l}}\\ &=\sum_{j =1}^{\alpha_0^c} (\bm{p}_{b_h^i}^{j})^T M_{j \nu_{b_l^i}} \bm{q}_{b_l^i}^{\nu_{b_l^i}} + (\bm{p}_{b_h^i}^{\nu_{b_h^i}})^T M_{\nu_{b_h^i} \nu_{b_l^i}} \bm{q}_{b_l^i}^{\nu_{b_l^i}}. \end{aligned} $$ We also have $$ \tilde{m}_{hl}= \bm{p}_{b_h}^T M \bm{q}_{b_l} = \sum_{j =1}^{\alpha_0^c} (\bm{p}_{b_h}^{j})^T M_{j \nu_{b_l}} \bm{q}_{b_l}^{\nu_{b_l}} + (\bm{p}_{b_h}^{\nu_{b_h}})^T M_{\nu_{b_h} \nu_{b_l}} \bm{q}_{b_l}^{\nu_{b_l}}, ~1\leq h,l \leq \alpha_0. $$ Since $b_{h}^{i}=b_{h}$ if $i=1$ and $b_{h}^{i}=b_{h+\sum_{j =1}^{i-1} \alpha_j}$ if $i\geq 2$, we obtain that $ \tilde{m}_{hl}^{i}=\tilde{m}_{hl}$ for all $1\leq h,l \leq \alpha_i$ if $i =1$ and $ \tilde{m}_{hl}^{i}=\tilde{m}_{(h+\sum_{j=1}^{i-1} \alpha_j) (l+\sum_{j=1}^{i-1} \alpha_j)}$ for all $1\leq h,l \leq \alpha_i$ if $i \geq 2$. This yields that $\tilde{M}^{i}=\tilde{M}_{ii}$, $\forall 1 \leq i \leq \tilde{n}$. Similarly, we can show that $$ \bm{p}_{l,i}^T\bm{q}_{l,i}=1,~ \bm{p}_{l,i}^T\bm{q}_{h,i}=0,~ h\neq l,~ L^{i}\bm{q}_{l,i}=\bm{0}, \text{ and } \bm{p}_{l,i}^T L^{i}=\bm{0}^T $$ hold for any $ 1 \leq i \leq \tilde{n}, ~1\leq l \leq \alpha_i$. \end{proof} Note that we define $M^i$ by choosing all indices $\{\tilde{\nu}_j^i: 1 \leq j \leq \alpha_0^c +\alpha_i\}$ from $M$, and define $L^{i}$, $\bm{p}_{l,i}$, $\bm{q}_{l,i}$ by using the same indices of $L$, $\bm{p}_{l}$, $\bm{q}_{l}$, respectively. Thus, the analysis of a reducible matrix $\tilde{M}$ can be reduced to that of its irreducible diagonal blocks. \begin{lemma}\label{lem:PQM} Assume that {\rm (H1)$'$} holds. Let $\hat{P}=(\hat{p}_{lj})_{\alpha_0 \times n}$ and $\hat{Q}=(\hat{q}_{ih})_{n \times \alpha_0 }$ be two nonnegative matrices such that $\hat{P}L=0$, $L\hat{Q}=0$, and $\hat{P}\hat{Q}=I$, where $I$ is an $\alpha_0 \times \alpha_0$ identity matrix. If $M$ is an $n\times n$ matrix, then $PMQ$ is similar to $\hat{P}M\hat{Q}$. \end{lemma} \begin{proof} For any $1\leq l \leq \alpha_0$, let $\bm{\hat{p}}_l:=(\hat{p}_{l1},\cdots,\hat{p}_{ln})^T$.
It is easy to see that $\bm{\hat{p}}_l^T$ is a left eigenvector of $L$ corresponding to the zero eigenvalue. Since $PQ=I$ and $\hat{P}\hat{Q}=I$, the matrices $P$, $Q$, $\hat{P}$ and $\hat{Q}$ share the same rank $\alpha_0$. This implies that $\{\bm{p}_i^T: 1 \leq i \leq \alpha_0\}$ and $\{\bm{\hat{p}}_i^T: 1 \leq i \leq \alpha_0\}$ are two bases of the left eigenspace of $L$ corresponding to the zero eigenvalue due to Lemma \ref{lem:L_K}. Thus, there exists an $\alpha_0 \times \alpha_0$ invertible matrix $A$ such that $AP=\hat{P}$. Similarly, there exists an $\alpha_0 \times \alpha_0$ invertible matrix $B$ such that $QB=\hat{Q}$. It then follows that $AB=APQB=\hat{P}\hat{Q}=I$, and hence, $A=B^{-1}$. Therefore, $\hat{P}M\hat{Q}=APMQA^{-1}$. \end{proof} In order to study the continuity of the basic reproduction ratio with respect to parameters, we next generalize the results in \cite[Theorems 2.1 and 2.2]{zhang2020asymptotic}. Let $(\Theta,\rho_{\Theta})$ be a metric space with metric $\rho_{\Theta}$ and let $H(\mu, \theta)$ be a mapping from $\mathbb{R}_+ \times \Theta$ to $\mathbb{R}$. Assume that for any $\theta \in \Theta$, one of the following two properties holds: \begin{itemize} \item[(P1)] There exists a unique $\mu(\theta)>0$ such that $H(\mu(\theta),\theta)=0$, $H(\mu,\theta)<0$ for all $\mu > \mu(\theta)$ and $H(\mu,\theta)>0$ for all $\mu < \mu(\theta)$. \item[(P2)] $H(\mu,\theta)<0$ for all $\mu > 0$. \end{itemize} For convenience, we define $\mu(\theta):=0$ in the case (P2). Then we have the following observation. \begin{lemma}\label{lem:R0:continuity} Assume that for any $\theta \in \Theta$, either {\rm(P1)} or {\rm (P2)} holds. Let $\theta_0 \in \Theta$ be given. If $H(\mu,\theta)$ converges to $H(\mu,\theta_0)$ as $\theta \rightarrow \theta_0$ for any $\mu>0$, then $\lim\limits_{\theta \rightarrow \theta_0}\mu(\theta)= \mu(\theta_0)$. \end{lemma} \begin{proof} We proceed according to two cases: {\it Case 1.} (P1) holds for $\theta_0$. For any $\epsilon \in (0,\mu(\theta_0))$, it follows from (P1) that $$H(\mu(\theta_0)-\epsilon,\theta_0)>0, \text{ and }H(\mu(\theta_0)+\epsilon,\theta_0)<0.$$ Thus, there exists $\delta>0$ such that if $\rho_{\Theta} (\theta,\theta_0)<\delta$, then $$H(\mu(\theta_0)-\epsilon,\theta)>0,\text{ and }H(\mu(\theta_0)+\epsilon,\theta)<0.$$ In particular, $H(\mu(\theta_0)-\epsilon,\theta)>0$ rules out (P2) for such $\theta$, so (P1) holds and implies that $$ \mu(\theta_0)-\epsilon <\mu(\theta)< \mu(\theta_0)+\epsilon $$ provided that $\rho_{\Theta}(\theta,\theta_0)<\delta$. That is, $\lim\limits_{\theta \rightarrow \theta_0}\mu(\theta)= \mu(\theta_0)$. {\it Case 2.} (P2) holds for $\theta_0$. It suffices to show that $\lim\limits_{\theta \rightarrow \theta_0}\mu(\theta)=0= \mu(\theta_0)$. For any given $\epsilon>0$, the assumption (P2) implies that $H(\epsilon,\theta_0)<0$. Then there exists $\delta>0$ such that $H(\epsilon,\theta)<0$ if $\rho_{\Theta}(\theta,\theta_0)<\delta$. In view of (P1) or (P2), we conclude that $0 \leq \mu(\theta)< \epsilon$ provided that $\rho_{\Theta}(\theta,\theta_0)<\delta$. \end{proof} \section{The principal eigenvalue}\label{sec:eig} In this section, we investigate the asymptotic behavior of the principal eigenvalue for periodic cooperative patch models with large dispersal rate. We first recall some properties of time-periodic evolution families.
\begin{definition}\label{def:evol} A family of bounded linear operators $\varUpsilon(t,s)$, $t,s \in \mathbb{R}$ with $t\geq s$, on a Banach space $E$ is called a $T$-periodic evolution family provided that $$ \varUpsilon(s,s)=I,\quad \varUpsilon(t,r)\varUpsilon(r,s)=\varUpsilon(t,s),\quad \varUpsilon(t+T,s+T)=\varUpsilon(t,s), $$ for all $t,s,r \in \mathbb{R}$ with $t\geq r \geq s$, and for each $e\in E$, $\varUpsilon(t,s)e$ is a continuous function of $(t,s)$ with $t \geq s$. The exponential growth bound of the evolution family $\{\varUpsilon(t,s): t\geq s\}$ is defined as $$ \omega(\varUpsilon) = \inf \{\tilde{\omega} \in \mathbb{R} : \exists M \geq 1: \forall t,s \in \mathbb{R},~t \geq s: \Vert \varUpsilon(t,s)\Vert_{E} \leq M e^{\tilde{\omega }(t-s)} \}. $$ \end{definition} \begin{lemma}{\sc (\cite[Proposition A.2]{thieme2009spectral})} \label{lem:w_theta:equ} Let $\{ \varUpsilon(t,s): t\geq s \}$ be a $T$-periodic evolution family on a Banach space $E$. Then $\omega(\varUpsilon)=\frac{\ln r(\varUpsilon(T,0))}{T}=\frac{\ln r(\varUpsilon(T+\tau,\tau))}{T},~ \forall \tau\in [0,T]$. \end{lemma} Let $M(t)=(m_{ij}(t))_{n \times n}$ be a continuous $n\times n$ matrix-valued function of $t \in \mathbb{R}$ such that \begin{itemize} \item[(H4)] $M(t)=M(t+T)$ and $M(t)$ is cooperative for all $t \in \mathbb{R}$. \end{itemize} Motivated by population models in a patchy environment, we consider the following periodic ODE system \begin{equation}\label{equ:sys:periodic} \frac{\mathrm{d} \bm{v}}{\mathrm{d} t}=d L \bm{v} + M(t) \bm{v}, \end{equation} and the associated eigenvalue problem: \begin{equation}\label{equ:eig:periodic} \frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=d L \bm{u} + M(t) \bm{u} -\lambda \bm{u}. \end{equation} \begin{definition}\label{def:principal} $\lambda^*$ is called the principal eigenvalue of \eqref{equ:eig:periodic} if it is a real eigenvalue with a nonnegative eigenfunction and the real parts of all other eigenvalues are not greater than $\lambda^*$. \end{definition} For any $d \geq 0$, according to \cite[Section 7.3]{krasnoselskij1964positive} or \cite[Chapter 5]{pazy1983semigroups}, system \eqref{equ:sys:periodic} admits a unique evolution family $\{\mathbb{O}_d(t,s): t \geq s \}$ on $\mathbb{R}^n$ with $\mathbb{O}_d(t,s) \bm{\phi}=\bm{u}(t,s;\bm{\phi})$, $\forall t\geq s$ and $\bm{\phi} \in \mathbb{R}^n$, where $\bm{u}(t,s;\bm{\phi})$ is the unique solution at time $t$ of \eqref{equ:sys:periodic} with initial data $\bm{\phi}$ at time $s$. In view of \cite[Theorem 7.17]{krasnoselskij1964positive} and Lemma \ref{lem:w_theta:equ}, we have the following result (see also \cite[Theorem 2.7]{liang2017principal}). \begin{theorem}\label{thm:existence} Assume that {\rm (H1)} and {\rm (H4)} hold. Then the eigenvalue problem \eqref{equ:eig:periodic} admits the principal eigenvalue $\lambda_d^*= \omega (\mathbb{O}_d)= \frac{\ln r(\mathbb{O}_d(T,0))}{T}$ for all $d \geq 0$. \end{theorem} The subsequent result is a consequence of standard comparison arguments. \begin{lemma}\label{lem:comparison} Assume that {\rm (H4)} holds. Let $\hat{M}(t)=(\hat{m}_{ij}(t))_{n \times n}$ be a continuous $n\times n$ matrix-valued function of $t \in \mathbb{R}$ with $\hat{M}(t)=\hat{M}(t+T)$ such that $\hat{M}(t)$ is cooperative for all $t \in \mathbb{R}$. Let $\lambda^*_M$ and $\lambda^*_{\hat{M}}$ be the principal eigenvalues of $\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}= M(t) \bm{u} -\lambda \bm{u}$ and $\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}= \hat{M}(t) \bm{u} -\lambda \bm{u}$, respectively.
If $m_{ij}(t) \geq \hat{m}_{ij}(t)$, $\forall 1 \leq i,j \leq n$, $t \in \mathbb{R}$, then $\lambda^*_{M} \geq \lambda^*_{\hat{M}}$. Further, if $M(t)$ can be split into $$ M(t)= \left( \begin{array}{cc} M_{11}(t)& M_{12}(t) \\ M_{2 1}(t)& M_{2 2}(t)\\ \end{array} \right), $$ then $\lambda_M^* \geq \lambda^*_{M_{11}}$, where $\lambda^*_{M_{11}}$ is the principal eigenvalue of $\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}= M_{11}(t) \bm{u} -\lambda \bm{u}$. \end{lemma} From now on, we let $\lambda_{d}^{*}$ be the principal eigenvalue of \eqref{equ:eig:periodic} and $\bm{u}_d=(u_{d,1},\cdots,u_{d,n})^T$ be a nonnegative eigenfunction corresponding to $\lambda_{d}^{*}$ for any given $d \geq 0$. For convenience, we normalize $\bm{u}_d$ by $\max_{1 \leq i \leq n} \max_{t \in \mathbb{R}} u_{d,i}(t)=1$. \begin{lemma}\label{lem:bounded:1} Assume that {\rm (H1)} and {\rm (H4)} hold. Then there exists a real number $C>0$, independent of $d$, such that $ \vert \lambda_{d}^{*} \vert \leq C$. \end{lemma} \begin{proof} Let $\overline{M}:= \max_{1 \leq i,j \leq n} \{\max_{ t\in \mathbb{R}} m_{ij}(t)\}$ and $\underline{M}:= \min_{1 \leq i,j \leq n}\{ \min_{ t\in \mathbb{R}} m_{ij}(t)\}$, and define two $n \times n$ matrices $M^1:=(m_{ij}^1)_{n\times n} $ by $m_{ij}^{1}=\overline{M}$ and $M^2:=\mathrm{diag}(\underline{M},\cdots,\underline{M})$. Let $\overline{\lambda}$ and $\underline{\lambda}$ be the principal eigenvalues of $dL+ M^1$ and $dL+ M^2$, respectively. We use $\bm{e}=(1,\cdots,1)^T$ to denote the $n$-dimensional vector of ones. By the Perron-Frobenius theorem (see, e.g., \cite[Theorem 4.3.1]{smith2008monotone}), it then follows from $\bm{e}^T(dL+ M^1)=n\overline{M}\bm{e}^T$ and $\bm{e}^T(dL+ M^2)=\underline{M}\bm{e}^T$ that $\overline{\lambda}= n \overline{M}$ and $\underline{\lambda}= \underline{M}$. In view of Lemma \ref{lem:comparison}, we have $\underline{\lambda}\leq \lambda_{d}^{*} \leq \overline{\lambda}$. \end{proof} For any $d>0$, we define $$\tilde{u}^l_d(t):= \bm{p}_l^T \bm{u}_d(t),~ \forall 1 \leq l \leq \alpha_0,~\tilde{\bm{u}}_d(t):= \sum_{l=1}^{\alpha_0} \tilde{u}^l_d(t) \bm{q}_l \text{ and } \hat{\bm{u}}_d(t):=\bm{u}_d(t) - \tilde{\bm{u}}_d(t). $$ \begin{lemma}\label{lem:hat:ud} Assume that {\rm (H1)} and {\rm (H4)} hold. Then $\sup_{t \in \mathbb{R}}\Vert \hat{\bm{u}}_d(t) \Vert_{\mathbb{R}^n} \rightarrow 0$ as $d \rightarrow +\infty$. \end{lemma} \begin{proof} Our arguments are motivated by \cite{hale1986large,hale1987varying,hale1989shadow,hutson2001evolution,zhang2020asymptotic}. Define $$ X_1:={\rm Span}\{\bm{q}_l\}_{1 \leq l \leq \alpha_0} \text{ and } X_2:=\{\bm{q} \in \mathbb{R}^n: \bm{p}_l^T \bm{q}=0,~ 1 \leq l \leq \alpha_0 \}. $$ It then follows that $$ \mathbb{R}^n= X_1 \oplus X_2. $$ Let $S_d(t)$ be the semigroup generated by $dL$, that is, $S_d(t)=e^{dLt}$. It is easy to see that $S_d(t) X_1 \subseteq X_1$ and $S_d(t) X_2 \subseteq X_2$. According to \cite[Theorem 7.3]{daners1992abstract}, we then have $$ \Vert S_d(t) \bm{\phi} \Vert_{\mathbb{R}^n} \leq C_1 e^{-\gamma_0 d t} \Vert \bm{\phi} \Vert_{\mathbb{R}^n}, ~ \forall \bm{\phi} \in X_2 $$ for some $\gamma_0>0$ and $C_1>0$, independent of $t$ and $d$.
We multiply \eqref{equ:eig:periodic} from the left by $\bm{p}_l^T$ to obtain $$ \frac{\mathrm{d} }{\mathrm{d} t} \tilde{u}_d^{l}= \bm{p}_l^T M(t) \bm{u}_d- \lambda_{d}^{*}\tilde{u}_d^{l}, ~ \forall 1 \leq l \leq \alpha_0, $$ and then multiply the above equation by $\bm{q}_l$ to get $$ \frac{\mathrm{d} }{\mathrm{d} t} (\tilde{u}_d^{l}\bm{q}_l)= [\bm{p}_l^T M(t) \bm{u}_d] \bm{q}_l- \lambda_{d}^{*}\tilde{u}_d^{l}\bm{q}_l, ~ \forall 1 \leq l \leq \alpha_0. $$ Summing these equations over $l$ yields \begin{equation}\label{equ:bmu:d} \frac{\mathrm{d} }{\mathrm{d} t} \tilde{\bm{u}}_d= \sum_{l=1}^{\alpha_0}[\bm{p}_l^T M(t) \bm{u}_d] \bm{q}_l- \lambda_{d}^{*}\tilde{\bm{u}}_d. \end{equation} Subtracting \eqref{equ:bmu:d} from \eqref{equ:eig:periodic}, we have $$ \frac{\mathrm{d} }{\mathrm{d} t} \hat{\bm{u}}_d= d L \hat{\bm{u}}_d + M(t) \bm{u}_d- \sum_{h=1}^{\alpha_0}[\bm{p}_h^T M(t) \bm{u}_d] \bm{q}_h- \lambda_{d}^{*}\hat{\bm{u}}_d. $$ Clearly, for any $1 \leq l \leq \alpha_0$, $$ \bm{p}_l^T L \hat{\bm{u}}_d =0,~ \bm{p}_l^T \hat{\bm{u}}_d=\bm{p}_l^T (\bm{u}_d-\tilde{\bm{u}}_d)= \tilde{u}_d^l- \tilde{u}_d^l(\bm{p}_l^T \bm{q}_l)= 0, $$ and $$ \bm{p}_l^T\left(M(t) \bm{u}_d- \sum_{h=1}^{\alpha_0}[\bm{p}_h^T M(t) \bm{u}_d] \bm{q}_h\right)= \bm{p}_l^TM(t) \bm{u}_d- [\bm{p}_l^T M(t) \bm{u}_d] (\bm{p}_l^T \bm{q}_l)=0. $$ That is, $L \hat{\bm{u}}_d \in X_2$, $\hat{\bm{u}}_d \in X_2$, and $M(t) \bm{u}_d- \sum_{h=1}^{\alpha_0}(\bm{p}_h^T M(t) \bm{u}_d) \bm{q}_h \in X_2$. By Lemma \ref{lem:bounded:1}, there exists a $C_2>0$, independent of $d$ and $t$, such that $$ \left \Vert M(t) \bm{u}_d- \sum_{h=1}^{\alpha_0}[\bm{p}_h^T M(t) \bm{u}_d] \bm{q}_h \right\Vert_{\mathbb{R}^n} \leq C_2 \text{ and } \vert \lambda_{d}^{*} \vert \leq C_2. $$ By the variation-of-constants formula, we obtain $$ \hat{\bm{u}}_d(t) = S_d(t) \hat{\bm{u}}_d(0) + \int_{0}^{t} S_d(t-s) \left\{ M(s) \bm{u}_d(s) - \sum_{h=1}^{\alpha_0}\left[\bm{p}_h^T M(s) \bm{u}_d(s) \right] \bm{q}_h- \lambda_{d}^{*}\hat{\bm{u}}_d(s) \right\} \mathrm{d} s, $$ for all $t \geq 0$. An easy computation gives rise to $$ \Vert \hat{\bm{u}}_d (t) \Vert_{\mathbb{R}^n} \leq C_1 e^{-\gamma_0 d t} \Vert \hat{\bm{u}}_d (0) \Vert_{\mathbb{R}^n} +C_2 \int_{0}^{t} e^{-\gamma_0 d (t-s)} \mathrm{d} s+ C_2 \int_{0}^{t} e^{-\gamma_0 d (t-s)} \Vert \hat{\bm{u}}_d (s) \Vert_{\mathbb{R}^n} \mathrm{d} s. $$ Choose $\gamma_1 \in (0,\gamma_0)$, and define $\zeta_d(t):= e^{\gamma_1 d t } \Vert \hat{\bm{u}}_d(t) \Vert_{\mathbb{R}^n}$, $\overline{\zeta}_d(t):= \sup \{\zeta_d(s): 0 \leq s \leq t \}$ and $$ C_3:= \int_{0}^{\infty} e^{-s[1 -\gamma_1(\gamma_0)^{-1}]} \mathrm{d} s. $$ It then follows that $$ \zeta_d (t) \leq C_1 e^{-(\gamma_0-\gamma_1)dt} \zeta_d (0) + C_2 C_3(\gamma_0 d)^{-1} e^{\gamma_1 d t} + C_2 C_3 (\gamma_0 d)^{-1} \overline{\zeta}_d (t), ~ t \geq 0, $$ and hence, taking the supremum over $[0,t]$ and noting that $e^{-(\gamma_0-\gamma_1)ds} \leq 1$, that $e^{\gamma_1 d s}$ is increasing in $s$, and that $\overline{\zeta}_d$ is nondecreasing, $$ \overline{\zeta}_d (t) \leq C_1 \zeta_d (0) + C_2 C_3(\gamma_0 d)^{-1} e^{\gamma_1 d t} + C_2 C_3 (\gamma_0 d)^{-1} \overline{\zeta}_d (t), ~ t \geq 0. $$ For any $d >0$, let $\xi (d):=C_2 C_3 (\gamma_0 d)^{-1}$. Notice that $\xi (d) \rightarrow 0$ as $d \rightarrow +\infty$. From now on, we assume that $d$ is large enough such that $\xi(d) < \frac{1}{2}$, which implies that $(1 -\xi(d))^{-1} \leq 2$.
This leads to $$ \begin{aligned} \overline{\zeta}_d (t) &\leq (1 -\xi(d))^{-1}[C_1 \zeta_d (0) + C_2 C_3(\gamma_0 d)^{-1} e^{\gamma_1 d t}]\\ &\leq 2[C_1 \zeta_d (0) + C_2 C_3(\gamma_0 d)^{-1} e^{\gamma_1 d t}], \end{aligned} $$ and hence, $$ \Vert \hat{\bm{u}}_d(t) \Vert_{\mathbb{R}^n} \leq e^{-\gamma_1 d t} \overline{\zeta}_d (t) \leq 2[C_1 \zeta_d (0) e^{-\gamma_1 d t} + C_2 C_3(\gamma_0 d)^{-1}]. $$ Letting $t \rightarrow +\infty$, we obtain $$ \limsup\limits_{t \rightarrow +\infty} \Vert \hat{\bm{u}}_d(t) \Vert_{\mathbb{R}^n} \leq 2 C_2 C_3 (\gamma_0 d)^{-1}. $$ Since $\hat{\bm{u}}_d(t)$ is periodic in $t \in \mathbb{R}$, it follows that $$ \Vert \hat{\bm{u}}_d(t) \Vert_{\mathbb{R}^n} \leq 2 C_2 C_3 (\gamma_0 d)^{-1}, ~\forall t \in \mathbb{R}. $$ This yields the desired conclusion. \end{proof} Define $\tilde{M}(t)=(\tilde{m}_{hl}(t))_{\alpha_0 \times \alpha_0}$ by $\tilde{m}_{hl}(t)=\bm{p}_h^T M(t) \bm{q}_l$, that is, $\tilde{M}(t)=P M(t) Q$. Let $\{\tilde{O}(t,s): t \geq s \}$ be the evolution family on $\mathbb{R}^{\alpha_0}$ of $$ \frac{\mathrm{d} \bm{v}}{\mathrm{d} t}= \tilde{M}(t) \bm{v}, $$ and let $\tilde{\lambda}^{*}$ be the principal eigenvalue of $$ \frac{\mathrm{d} \bm{u}}{\mathrm{d} t}= \tilde{M}(t) \bm{u} -\lambda \bm{u}. $$ It is easy to see that $\omega(\tilde{O})= \tilde{\lambda}^{*}$ due to Theorem \ref{thm:existence}. The following result indicates that $\tilde{\lambda}^{*}$ is independent of the choice of $P$ and $Q$. \begin{lemma}\label{lem:PQO} Assume that {\rm (H1)$'$} holds. Let $\widehat{P}=(\widehat{p}_{lj})_{\alpha_0 \times n}$ and $\widehat{Q}=(\widehat{q}_{ih})_{n \times \alpha_0 }$ be two nonnegative matrices such that $\widehat{P}L=0$, $L\widehat{Q}=0$ and $\widehat{P}\widehat{Q}=I$, where $I$ is an $\alpha_0 \times \alpha_0$ identity matrix. Let $\widehat{M}(t):=\widehat{P} M(t) \widehat{Q}$ and $\{\widehat{O}(t,s): t \geq s \}$ be the evolution family on $\mathbb{R}^{\alpha_0}$ of $\frac{\mathrm{d} \bm{v}}{\mathrm{d} t}= \widehat{M}(t) \bm{v}$. Then $\widehat{O}(T,0)$ is similar to $\tilde{O}(T,0)$. Moreover, $\omega(\widehat{O})=\omega(\tilde{O})$. \end{lemma} \begin{proof} According to Lemma \ref{lem:PQM}, there exists an $\alpha_0 \times \alpha_0$ invertible matrix $A$ such that $AP=\widehat{P}$ and $QA^{-1}=\widehat{Q}$. By the change of variable $\bm{w}=A^{-1}\bm{v}$, we then transform $\frac{\mathrm{d} \bm{v}}{\mathrm{d} t}= \widehat{M}(t) \bm{v}$ into $\frac{\mathrm{d} \bm{w}}{\mathrm{d} t} = \tilde{M}(t) \bm{w}$. Thus, we have $\widehat{O}(T,0)=A[\tilde{O}(T,0)]A^{-1}$. \end{proof} \begin{lemma}\label{lem:eig:irreducible} Assume that {\rm (H1)} and {\rm (H4)} hold. If $\tilde{O}(T,0)$ is irreducible, then $\lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*}= \tilde{\lambda}^{*}$. \end{lemma} \begin{proof} For any $1 \leq h \leq \alpha_0$, we multiply \eqref{equ:eig:periodic} from the left by $\bm{p}_h^T$ to obtain \begin{equation}\label{equ:udh} \frac{\mathrm{d} }{\mathrm{d} t}\tilde{u}_d^h= \bm{p}_h^TM(t) \bm{u}_d -\lambda_{d}^{*} \tilde{u}_d^h. \end{equation} Then there exists $C_1>0$, independent of $d$, such that $$ \left\vert \frac{\mathrm{d} }{\mathrm{d} t}\tilde{u}_d^h \right\vert \leq C_1, ~\forall 1 \leq h \leq \alpha_0.
$$ By the Arzel\`{a}--Ascoli theorem (see, e.g., \cite[Theorem I.28]{reed1980methods}), it follows that there exists a sequence $d_m \rightarrow +\infty$ such that $\lambda_{d_m}^{*} \rightarrow \lambda_{\infty}$ and $\vert \tilde{u}^h_{d_m} (t) - \tilde{u}^h_{\infty} (t)\vert \rightarrow 0$ uniformly for $t \in \mathbb{R}$, $1 \leq h \leq \alpha_0$, as $m \rightarrow +\infty$, for some $\lambda_{\infty}$ and $\tilde{u}^h_{\infty} \in C(\mathbb{R},\mathbb{R}_+)$ with $\tilde{u}^h_{\infty}(t+T)=\tilde{u}^h_{\infty}(t)$, $\forall t \in \mathbb{R},~ 1 \leq h \leq \alpha_0$. We integrate \eqref{equ:udh} from $0$ to $t$ to obtain $$ \tilde{u}_d^h(t)- \tilde{u}_d^h(0) =\int_{0}^{t} [\bm{p}_h^TM(s) \tilde{\bm{u}}_d(s)+ \bm{p}_h^TM(s) \hat{\bm{u}}_d(s) -\lambda_{d}^{*} \tilde{u}_d^h(s) ]\mathrm{d} s. $$ By Lemma \ref{lem:hat:ud}, letting $m \rightarrow +\infty$, for any $1 \leq h \leq \alpha_0$, we have $$ \tilde{u}_{\infty}^h(t)- \tilde{u}_{\infty}^h(0) =\int_{0}^{t} \left[\bm{p}_h^TM(s) \left(\sum_{l=1}^{\alpha_0} \tilde{u}_{\infty}^l(s) \bm{q}_l \right)-\lambda_{\infty} \tilde{u}_{\infty}^h(s) \right]\mathrm{d} s, $$ and hence, $$ \frac{\mathrm{d} }{\mathrm{d} t} \tilde{u}_{\infty}^h(t) =\sum_{l=1}^{\alpha_0}[\bm{p}_h^TM(t) \bm{q}_l ] \tilde{u}_{\infty}^l(t) -\lambda_{\infty} \tilde{u}_{\infty}^h(t). $$ Letting $\bm{\phi}=(\tilde{u}_{\infty}^1 (0),\cdots, \tilde{u}_{\infty}^{\alpha_0}(0))^T$, we see that $$ \bm{\phi}=e^{-\lambda_{\infty} T}\tilde{O}(T,0) \bm{\phi}. $$ Moreover, $\bm{\phi}\neq \bm{0}$; otherwise $\tilde{u}^h_{\infty}\equiv 0$ for all $1 \leq h \leq \alpha_0$, and hence $\bm{u}_{d_m}=\tilde{\bm{u}}_{d_m}+\hat{\bm{u}}_{d_m} \rightarrow \bm{0}$ uniformly by Lemma \ref{lem:hat:ud}, contradicting the normalization of $\bm{u}_d$. With the irreducibility of $\tilde{O}(T,0)$, the Perron-Frobenius theorem (see, e.g., \cite[Theorem 4.3.1]{smith2008monotone}) then leads to $\lambda_{\infty}=\tilde{\lambda}^{*}$. Since every convergent subsequence of $\{\lambda_{d}^{*}\}$ thus has the same limit $\tilde{\lambda}^{*}$, the desired conclusion follows. \end{proof} To remove the irreducibility condition on $\tilde{O}(T,0)$ in Lemma \ref{lem:eig:irreducible}, below we prove the same conclusion as in Lemma \ref{lem:bounded:1} under weaker conditions. \begin{lemma} Assume that {\rm (H1)$'$} and {\rm (H4)} hold. Then there exists $C>0$, independent of $d$, such that $ \vert \lambda_{d}^{*} \vert \leq C$. \end{lemma} \begin{proof} We proceed according to two cases: {\it Case 1.} $\Lambda_0^c= \emptyset$. The proof is motivated by the arguments for Lemma \ref{lem:bounded:1}. Define $$\overline{M}:= \max_{1 \leq i,j \leq n} \{\max_{ t\in \mathbb{R}} m_{ij}(t)\}, \text{ and }\underline{M}:= \min_{1 \leq i,j \leq n} \{\min_{ t\in \mathbb{R}} m_{ij}(t)\}.$$ For any $1 \leq i \leq \alpha_0$, choose $\bm{p}^i\gg0$ such that $(\bm{p}^i)^T L_{ii}= \bm{0}^T$. Let $\bm{p}=((\bm{p}^1)^T,\cdots,(\bm{p}^{\alpha_0})^T)^T=(p_{1},\cdots,p_{n})^T$. Thus, $\bm{p}^T L=\bm{0}^T$. Without loss of generality, we assume that $\min_{1 \leq j \leq n}p_j= 1$. Define two $n \times n$ matrices $M^1:=(m_{ij}^1)_{n\times n} $ by $m_{ij}^{1}=\overline{M} p_j$, $\forall 1 \leq i,j \leq n$, and $M^2:=\mathrm{diag}(\underline{M},\cdots,\underline{M} )$. Let $\overline{\lambda}$ and $\underline{\lambda}$ be the principal eigenvalues of $dL+ M^1$ and $dL+ M^2$, respectively. In view of $\bm{p}^T(dL+ M^1)= (\sum_{j =1}^{n} p_j)\overline{M}\bm{p}^T$ and $\bm{p}^T(dL+ M^2)=\underline{M}\bm{p}^T$, the Perron-Frobenius theorem (see, e.g., \cite[Theorem 4.3.1]{smith2008monotone}) implies that $\overline{\lambda}= (\sum_{j =1}^{n} p_j)\overline{M}$ and $\underline{\lambda}= \underline{M}$. By Lemma \ref{lem:comparison}, it easily follows that $\underline{\lambda}\leq \lambda_{d}^{*} \leq \overline{\lambda}$. {\it Case 2.} $\Lambda_0^c \neq \emptyset$.
Without loss of generality, in view of (H1)$'$, we assume that $\Lambda_0^c=\{1,\cdots,\alpha_0^c\}$ and $\Lambda_0=\{\alpha_0^c+1,\cdots,\alpha\}$, and still write $\nu_l:=\alpha_0^c+l$, $ 1\leq l \leq \alpha_0$, as in Lemma \ref{lem:L_K}. Let us first prove that $\lambda_{d}^{*}$ has a lower bound independent of $d$. We split the matrix-valued function $M(t)$ into the block form $$ M(t)= \left( \begin{array}{ccc} M_{11}(t)& \cdots & M_{1\alpha}(t) \\ \vdots& \ddots& \vdots\\ M_{\alpha 1}(t)& \cdots & M_{\alpha \alpha}(t)\\ \end{array} \right), $$ where $M_{hl}$ is an $n_h \times n_l $ matrix for $1 \leq h,l \leq \alpha$. Define a matrix $\hat{L}= \mathrm{diag}(L_{\nu_1\nu_1},\cdots,L_{\alpha\alpha})$ and a matrix-valued function $\hat{M}(t)$ by $$ \hat{M}(t)=\left( \begin{array}{ccc} M_{\nu_1\nu_1}(t)& \cdots & M_{\nu_1 \alpha}(t) \\ \vdots& \ddots& \vdots\\ M_{\alpha\nu_1}(t)& \cdots & M_{\alpha \alpha}(t)\\ \end{array} \right). $$ Let $\hat{\lambda}_{d}^{*}$ be the principal eigenvalue of $$ \frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=d \hat{L} \bm{u} + \hat{M}(t) \bm{u} -\lambda \bm{u}. $$ Since $s(L_{ll})=0$ for all $\nu_1 \leq l \leq \alpha$, and $\hat{M}(t)$ is cooperative for any $t \in \mathbb{R}$, it then follows from Lemma \ref{lem:comparison} that $\hat{\lambda}_{d}^{*} \leq \lambda_{d}^{*}$. By the proof of Case 1, $\hat{\lambda}_{d}^{*}$ has a lower bound independent of $d$, and so does $\lambda_{d}^{*}$. We next show that $\lambda_{d}^{*}$ has an upper bound independent of $d$. Define a matrix $\overline{L}$ by $$ \overline{L}= \left( \begin{array}{ccc} \overline{L}_{11}& \cdots & \overline{L}_{1\alpha} \\ \vdots& \ddots& \vdots\\ \overline{L}_{\alpha 1}& \cdots & \overline{L}_{\alpha \alpha}\\ \end{array} \right), $$ where $\overline{L}_{hl}=L_{hl}$ for $1 \leq h \leq \alpha_0^c$, $1 \leq l \leq \alpha$, and for $\nu_1 \leq h,l \leq \alpha$, while $\overline{L}_{hl}=L_{hl} + \bm{e}_{n_h} \bm{e}_{n_l}^T$ for $ \nu_1 \leq h \leq \alpha$, $1 \leq l \leq \alpha_0^c $. Here $\bm{e}_{n_h} =(1,\cdots,1)^T$ is an $n_h$-dimensional vector. For any $1 \leq l \leq \alpha_0$, choose $\bm{p}^{\nu_l} \gg 0$ such that $(\bm{p}^{\nu_l})^T L_{\nu_l\nu_l}=\bm{0}^T$. Since all elements of $\overline{L}_{hl}$ are positive for $ \nu_1 \leq h \leq \alpha$, $1 \leq l \leq \alpha_0^c $, and $L_{ll}$ is irreducible for $1 \leq l \leq \alpha_0^c$, by arguments similar to those for \eqref{equ:p:Lhl}, there exist $\bm{p}^i \gg 0$, $1 \leq i \leq \alpha_0^c$, such that $ \sum_{i =1}^\alpha (\bm{p}^i)^T \overline{L}_{ih}=\bm{0}^T,~ 1 \leq h \leq \alpha_0^c. $ Define $ \bm{p}:=((\bm{p}^1)^T,\cdots,(\bm{p}^\alpha)^T)^T $, where $\bm{p}^i$ is an $n_i$-dimensional vector. By repeating the arguments for the upper bound in Case 1, we obtain the desired conclusion. \end{proof} \begin{remark}\label{rem:eig:irreducible} Assume that {\rm (H1)$'$} and {\rm (H4)} hold. If $\tilde{O}(T,0)$ is irreducible, then $\lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*}= \tilde{\lambda}^{*}$. \end{remark} The following result provides a powerful tool to analyze the matrix $\tilde{O}(T,0)$ in the case where it is reducible. \begin{lemma}\label{lem:widehat:O} For any $\alpha_0$-dimensional vector $\bm{b}=(b_1,\cdots,b_{\alpha_0})^T$ with $1 \leq b_i \leq \alpha_0$ and $b_i \neq b_j$ if $i \neq j$, define $\widehat{M} (t)=(\widehat{m}_{hl}(t))_{\alpha_0 \times \alpha_0}$ by $\widehat{m}_{hl}(t)=\bm{p}_{b_h}^T M(t) \bm{q}_{b_l}$.
Let $\{\widehat{O}(t,s): t\geq s \}$ be the evolution family of $\frac{\mathrm{d} \bm{v}}{\mathrm{d} t}= \widehat{M}(t) \bm{v}$ on $\mathbb{R}^{\alpha_0}$. Then the matrix $\tilde{O}(T,0)$ is similar to the matrix $\widehat{O}(T,0)$. If, in addition, $\tilde{O}(T,0)$ is reducible, then $\widehat{O}(T,0)$ is a block lower triangular matrix after choosing a suitable $\bm{b}$. \end{lemma} This lemma can be proved by arguments similar to those of Lemmas \ref{lem:L_K_M} and \ref{lem:PQO}; we omit the details. The following two results are straightforward consequences of \cite[Lemmas 3.5 and 3.7]{zhang2020asymptotic}. \begin{lemma}\label{lem:split} Write $A:=\tilde{O}(T,0)=(a_{ij})_{\alpha_0 \times \alpha_0}$ and let $$ A= \left( \begin{matrix} A_{11} & \cdots & A_{1 \tilde{n}}\\ \vdots & \ddots &\vdots\\ A_{\tilde{n} 1} & \cdots & A_{\tilde{n} \tilde{n}} \end{matrix} \right),\quad \text{and} \, \, \, \tilde{M}(t)= \left( \begin{matrix} \tilde{M}_{11}(t) & \cdots & \tilde{M}_{1 \tilde{n}}(t)\\ \vdots & \ddots &\vdots\\ \tilde{M}_{\tilde{n} 1} (t)& \cdots & \tilde{M}_{\tilde{n} \tilde{n}}(t) \end{matrix} \right), $$ where $A_{ii}$ is an $\alpha_i \times \alpha_i$ matrix with $\sum_{i=1}^{\tilde{n}} \alpha_i =\alpha_0$, and $\tilde{M}_{ii}(t)$ is an $\alpha_i \times \alpha_i$ matrix-valued function of $t \in \mathbb{R}$. If $A_{ij}$ are zero matrices for all $1 \leq i < j \leq \tilde{n}$, then so are $\tilde{M}_{ij}(t)$ for any $ t\in \mathbb{R}$. Moreover, if $\tilde{\lambda}_{i}^{*}$ denotes the principal eigenvalue of $\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=\tilde{M}_{ii} (t) \bm{u} - \lambda \bm{u},~t>0$, then $\tilde{\lambda}_{i}^{*}= \frac{\ln r(A_{ii})}{T}$ and $\tilde{\lambda}^{*}=\max_{1 \leq i \leq \tilde{n}} \tilde{\lambda}_{i}^{*}$. \end{lemma} \begin{lemma}\label{lem:conti_limits} Let $g$ be a continuous function on $(a,b)$ and write $g_+=\limsup_{x \rightarrow b} g(x)$ and $g_-=\liminf_{x \rightarrow b} g(x)$. Then for any $c \in [g_-,g_+]$, there exists a sequence $x_k \rightarrow b$ as $k \rightarrow \infty$ with $x_k \in (a,b)$ such that $\lim\limits_{k \rightarrow \infty} g(x_k)=c$. \end{lemma} Now we are in a position to prove the main result of this section. \begin{theorem}\label{thm:eig} Assume that {\rm (H1)} and {\rm (H4)} hold. Then the following statements are valid: \begin{itemize} \item[\rm (i)] $\lim\limits_{d \rightarrow 0^+} \lambda_{d}^{*}= \lambda_{0}^{*}$ and $\lim\limits_{d \rightarrow \hat{d}} \lambda_{d}^{*}= \lambda_{\hat{d}}^{*}$ for any $\hat{d}>0$. \item[\rm (ii)] $\lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*}= \tilde{\lambda}^{*}$. \end{itemize} \end{theorem} \begin{proof} (i) We only prove that $\lim\limits_{d \rightarrow 0^+} \lambda_{d}^{*}= \lambda_{0}^{*}$, since $\lim\limits_{d \rightarrow \hat{d}} \lambda_{d}^{*}= \lambda_{\hat{d}}^{*}$ can be derived for any $\hat{d}>0$ in a similar way. Since solutions of \eqref{equ:sys:periodic} depend continuously upon parameters (see, e.g., \cite[Section I.3]{hale1969ordinary}), it follows that $\mathbb{O}_d(T,0)$ converges to $\mathbb{O}_0(T,0)$ in the matrix norm as $d \rightarrow 0^+$. For the definition of the matrix norm, we refer to \cite[Section II.2]{steward1990matrices}. Therefore, the desired statement (i) follows from matrix perturbation theory (see, e.g., \cite{kato1976perturbation,steward1990matrices}). (ii) Our proof is motivated by the arguments for \cite[Theorem 3.3]{zhang2020asymptotic}. Since the conclusion has been proved in the case where $\tilde{O}(T,0)$ is irreducible in Lemma \ref{lem:eig:irreducible}, we only need to consider the case where $\tilde{O}(T,0)$ is reducible.
We proceed in three steps. {\it Step 1.} $\lambda_{\infty}:=\lim_{d\rightarrow +\infty} \lambda_{d}^{*}$ exists. According to Lemma \ref{lem:bounded:1}, both $\lambda_+:=\limsup_{d\rightarrow +\infty} \lambda_{d}^{*}$ and $\lambda_-:=\liminf_{d \rightarrow +\infty} \lambda_{d}^{*}$ exist, and $C_1 \leq \lambda_-,\lambda_+ \leq C_2$ for some $C_1$ and $C_2$. It suffices to prove that $\lambda_-=\lambda_+$. Suppose, by contradiction, that $\lambda_-< \lambda_+$. For any $\hat{\lambda} \in [\lambda_-, \lambda_+]$, Lemma \ref{lem:conti_limits}, applied to the continuous function $d \mapsto \lambda_{d}^{*}$ (see statement (i)), provides a sequence $d_k \rightarrow +\infty$ with $\lambda_{d_k}^{*} \rightarrow \hat{\lambda}$; by repeating the arguments in the proof of Lemma \ref{lem:eig:irreducible} along this sequence, there exists a nonzero nonnegative vector $\bm{\phi}$ such that $$ \bm{\phi}=e^{-\hat{\lambda}T}\tilde{O}(T,0) \bm{\phi}. $$ This implies that $e^{\hat{\lambda}T}$ is an eigenvalue of $\tilde{O}(T,0)$ for any $\hat{\lambda} \in [\lambda_-, \lambda_+]$, which is impossible since $\tilde{O}(T,0)$ has only finitely many eigenvalues. {\it Step 2.} $\lambda_{\infty} \leq \tilde{\lambda}^{*}$. For any given $\epsilon>0$, let $M^{\epsilon}=(m_{ij}^{\epsilon})_{n \times n}$ and $\tilde{M}^{\epsilon}=(\tilde{m}_{hl}^{\epsilon})_{\alpha_0 \times \alpha_0}$ be two continuous matrix-valued functions of $t \in \mathbb{R}$ with $m_{ij}^{\epsilon}(t)=m_{ij}(t)+\epsilon$, $\forall t \in \mathbb{R}$ and $\tilde{m}_{hl}^{\epsilon}(t)=\bm{p}_h^T M^{\epsilon}(t) \bm{q}_l$, $\forall t \in \mathbb{R}$. Let $\tilde{\lambda}^{*}(\epsilon)$ be the principal eigenvalue of the eigenvalue problem $$ \frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=\tilde{M}^{\epsilon}(t) \bm{u} - \lambda \bm{u}, ~t>0. $$ Let $\lambda_{d}^{*}(\epsilon)$ be the principal eigenvalue of the eigenvalue problem \eqref{equ:eig:periodic} with $M$ replaced by $M^{\epsilon}$. Clearly, $\lambda_{d}^{*}(\epsilon) \geq \lambda_{d}^{*}$ for all $\epsilon>0$ due to Lemma \ref{lem:comparison}. Note that $\tilde{m}_{hl}^{\epsilon}(t)=\tilde{m}_{hl}(t)+\epsilon (\bm{p}_h^T\bm{e})(\bm{e}^T\bm{q}_l)$ with $(\bm{p}_h^T\bm{e})(\bm{e}^T\bm{q}_l)>0$, where $\bm{e}=(1,\cdots,1)^T$, so the Poincar\'e map associated with $\tilde{M}^{\epsilon}$ is irreducible and Lemma \ref{lem:eig:irreducible} applies. It then follows that $$\tilde{\lambda}^{*}(\epsilon) =\lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*}(\epsilon) \geq \lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*} =\lambda_{\infty}, \forall \epsilon >0.$$ Since $\tilde{\lambda}^{*}(\epsilon) \rightarrow \tilde{\lambda}^{*}$ as $\epsilon \rightarrow 0^+$, we conclude that $\lambda_{\infty}\leq \lim\limits_{\epsilon \rightarrow 0^+}\tilde{\lambda}^{*}(\epsilon)= \tilde{\lambda}^{*}$. {\it Step 3.} $\lambda_{\infty} \geq \tilde{\lambda}^{*}$. We only consider the case of $\alpha_0^c>0$, since the case of $\alpha_0^c=0$ can be addressed in a similar way. Without loss of generality, by Lemma \ref{lem:L_K}, we assume that $\Lambda_0^c=\{1,\cdots,\alpha_0^c\}$ and $\Lambda_0=\{\alpha_0^c+1,\cdots,\alpha\}$. Based on Lemma \ref{lem:widehat:O}, we can redefine $\tilde{M} (t)=(\tilde{m}_{hl}(t))_{\alpha_0 \times \alpha_0}$ by $\tilde{m}_{hl}(t)=\bm{p}_{b_h}^T M(t) \bm{q}_{b_l}$ for a specified $\alpha_0$-dimensional vector $\bm{b}$ such that \begin{itemize} \item[(1)] $\bm{b}=(b_1,\cdots,b_{\alpha_0})^T$ with $1 \leq b_i \leq \alpha_0$ and $b_i \neq b_j$ if $i \neq j$. \item[(2)] The matrix $A:=\tilde{O}(T,0)$ can be split into $$ A= \left( \begin{matrix} A_{11} & \cdots & A_{1 \tilde{n}}\\ \vdots & \ddots &\vdots\\ A_{\tilde{n} 1} & \cdots & A_{\tilde{n} \tilde{n}} \end{matrix} \right), $$ where $A_{ij}$ is an $\alpha_i \times \alpha_j$ matrix with $\sum_{i=1}^{\tilde{n}} \alpha_i =\alpha_0$, $A_{ij}=0$ for all $1 \leq i < j \leq \tilde{n}$ and $A_{ii}$ is irreducible for all $1 \leq i \leq \tilde{n}$. \item[(3)] $\bm{b}=((\bm{b}^1)^T,\cdots,(\bm{b}^{\tilde{n}})^T)^T$, where $\bm{b}^i=(b^i_1,\cdots,b^i_{\alpha_i})^T$ with $b^i_1< \cdots < b^i_{\alpha_i}$ for all $1 \leq i \leq \tilde{n}$.
\end{itemize} Here (3) is achievable because both (1) and (2) are still valid after exchanging the components of $\bm{b}^i$, due to Lemma \ref{lem:L_K_M}. By Lemma \ref{lem:split}, $\tilde{M}(t)$ can be split into $$ \tilde{M}(t)= \left( \begin{matrix} \tilde{M}_{11}(t) & \cdots & \tilde{M}_{1 \tilde{n}}(t)\\ \vdots & \ddots &\vdots\\ \tilde{M}_{\tilde{n} 1} (t)& \cdots & \tilde{M}_{\tilde{n} \tilde{n}}(t) \end{matrix} \right), $$ where $\tilde{M}_{ij}$ is an $\alpha_i \times \alpha_j$ matrix-valued function with $\tilde{M}_{ij}(t)=0$ for all $1 \leq i < j \leq \tilde{n}$. Let $\tilde{\lambda}_{i}^{*}$ be the principal eigenvalue of the eigenvalue problem $$\frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=\tilde{M}_{ii}(t) \bm{u} - \lambda \bm{u},~t>0.$$ Thus, Lemma \ref{lem:split} yields that $\tilde{\lambda}_{i}^{*}= \frac{\ln r(A_{ii})}{T}$ and $\tilde{\lambda}^{*}=\max_{1 \leq i \leq \tilde{n}} \tilde{\lambda}_{i}^{*}$. For any $1 \leq i \leq \tilde{n}$ and $t \in \mathbb{R}$, define $L^{i}$, $M^{i}(t)$ and $\tilde{M}^{i}(t)$ as $L^{i}$, $M^{i}$ and $\tilde{M}^{i}$ in Lemma \ref{lem:L_K_M}. For any $1 \leq i \leq \tilde{n}$, $1 \leq l \leq \alpha_i$, choose $\bm{p}_{l,i}$ and $\bm{q}_{l,i}$ in the same way as in Lemma \ref{lem:L_K_M}. It then follows from Lemma \ref{lem:L_K_M} that for any $1 \leq i \leq \tilde{n}$, $\tilde{M}_{ii}(t)=\tilde{M}^{i}(t)$, $\forall t \in \mathbb{R}$, and for any $1\leq l \leq \alpha_i$, $$ \bm{p}_{l,i}^T\bm{q}_{l,i}=1,~ \bm{p}_{l,i}^T\bm{q}_{h,i}=0,~ h\neq l,~ L^{i}\bm{q}_{l,i}=\bm{0}, \text{ and } \bm{p}_{l,i}^T L^{i}=\bm{0}^T. $$ Let $\lambda_{d,i}^{*}$ be the principal eigenvalue of \begin{equation}\label{equ:eig:periodic:i} \frac{\mathrm{d} \bm{u}}{\mathrm{d} t}=d L^i \bm{u} + M^i(t) \bm{u} -\lambda \bm{u}. \end{equation} Since $A_{ii}$ is irreducible and (H1)$'$ holds for $L^i$, we conclude that $\lambda_{d,i}^{*} \rightarrow \tilde{\lambda}_{i}^{*}$ as $d \rightarrow +\infty$ due to Lemma \ref{lem:eig:irreducible} and Remark \ref{rem:eig:irreducible}. It then follows from Lemma \ref{lem:comparison} that $\lambda_{d,i}^{*} \leq \lambda_{d}^{*}$. Notice that $$ \tilde{\lambda}_{i}^{*}=\lim_{d \rightarrow +\infty} \lambda_{d,i}^{*} \leq \lim_{d \rightarrow +\infty} \lambda_{d}^{*} =\lambda_{\infty}, ~1 \leq i \leq \tilde{n}. $$ Thus, Lemma \ref{lem:split} implies that $\tilde{\lambda}^{*}=\max_{1 \leq i \leq \tilde{n}} \tilde{\lambda}_{i}^{*} \leq \lambda_{\infty}$. \end{proof} \begin{remark}\label{rem:eig:reducible} Assume that {\rm (H1)$'$} and {\rm (H4)} hold. Then $\lim\limits_{d \rightarrow 0^+} \lambda_{d}^{*}= \lambda_{0}^{*}$, $\lim\limits_{d \rightarrow \hat{d}} \lambda_{d}^{*}= \lambda_{\hat{d}}^{*}$ for any $\hat{d}>0$, and $\lim\limits_{d \rightarrow +\infty} \lambda_{d}^{*}= \tilde{\lambda}^{*}$. \end{remark} \section{The basic reproduction ratio}\label{sec:R0} In this section, we study the continuity of the basic reproduction ratio with respect to parameters and investigate its asymptotic behavior as the dispersal rate goes to infinity for a periodic patch model. In order to discuss the continuity of the basic reproduction ratio with respect to parameters, we introduce a metric space $(\mathcal{X}, \rho_{\mathcal{X}})$ with metric $\rho_{\mathcal{X}}$.
For any given $\chi \in \mathcal{X}$, let $F_{\chi}(t)=(f_{ij,\chi}(t))_{n\times n}$ and $V_{\chi}(t)=(v_{ij,\chi}(t))_{n\times n}$ be two continuous $n\times n$ matrix-valued functions of $t \in \mathbb{R}$ such that \begin{enumerate} \item[(H2)$'$] $F_{\chi}(t+T)=F_{\chi}(t)$, $V_{\chi}(t+T)=V_{\chi}(t)$, $F_{\chi}(t)$ is nonnegative, and $-V_{\chi}(t)$ is cooperative for all $\chi \in \mathcal{X}$ and $t \in \mathbb{R} $. \end{enumerate} Let $\tilde{F}_{\chi}(t)=(\tilde{f}_{hl,\chi}(t))_{\alpha_0 \times \alpha_0}= PF_{\chi}(t)Q$ and $\tilde{V}_{\chi}(t)=(\tilde{v}_{hl,\chi}(t))_{\alpha_0 \times \alpha_0}= PV_{\chi}(t)Q$ for all $t \in \mathbb{R}$. For any $d \geq 0 $, let $\{\Phi_{d,\chi}(t,s): t \geq s \}$ be the evolution family on $\mathbb{R}^{n}$ of $\frac{\partial \bm{v}}{\partial t}= d L \bm{v} - V_{\chi}(t) \bm{v}$, $t\geq s$. We use $\{\tilde{\Phi}_{\chi} (t,s): t \geq s\}$ to denote the evolution family on $\mathbb{R}^{\alpha_0}$ of $ \frac{\partial \bm{v}}{\partial t}= - \tilde{V}_{\chi}(t) \bm{v}$, $t\geq s$. We further assume that \begin{enumerate} \item[(H3)$'$] $\omega (\Phi_{d,\chi})<0$ for all $d \geq 0$ and $\omega (\tilde{\Phi}_{\chi})<0$. \end{enumerate} It is easy to see that (H2)$'$ and (H3)$'$ are generalizations of (H2) and (H3), respectively. For any $\mu >0$ and $d \geq 0$, let $\{\mathbb{U}_{d,\chi}^{\mu}(t,s): t \geq s \} $ be the evolution family on $\mathbb{R}^{n}$ of \begin{equation}\label{equ:U_kappa} \frac{\partial \bm{v}}{\partial t}= d L \bm{v} - V_{\chi}(t) \bm{v}+ \frac{1}{\mu} F_{\chi} (t) \bm{v}, ~t \geq s, \end{equation} and let $\{\tilde{\mathbb{U}}_{\chi}^{\mu}(t,s): t \geq s \} $ be the evolution family on $\mathbb{R}^{\alpha_0}$ of \begin{equation}\label{equ:U_infty} \frac{\partial \bm{v}}{\partial t}= - \tilde{V}_{\chi}(t) \bm{v} +\frac{1}{\mu} \tilde{F}_{\chi} (t) \bm{v},~t \geq s. \end{equation} Define $$ \mathbb{X}:=\{ \bm{u} \in C(\mathbb{R},\mathbb{R}^n): \bm{u}(t)=\bm{u}(t+T), ~t \in \mathbb{R} \}, $$ $$ \tilde{\mathbb{X}}:=\{ \bm{u} \in C(\mathbb{R},\mathbb{R}^{\alpha_0}): \bm{u}(t)=\bm{u}(t+T), ~t \in \mathbb{R} \}, $$ $$ \mathbb{X}_+:=\{ \bm{u} \in C(\mathbb{R},\mathbb{R}_+^n): \bm{u}(t)=\bm{u}(t+T),~t \in \mathbb{R}\}, $$ and $$ \tilde{\mathbb{X}}_+:=\{ \bm{u} \in C(\mathbb{R},\mathbb{R}_+^{\alpha_0}): \bm{u}(t)=\bm{u}(t+T),~t \in \mathbb{R}\} $$ with the maximum norm $\Vert \bm{u} \Vert_{\mathbb{X}} = \max_{1 \leq i \leq n}\max_{0 \leq t \leq T} \vert u_i (t) \vert$ for $\bm{u}=(u_1,u_2,\cdots,u_n)^T$ and $\Vert \bm{u} \Vert_{\tilde{\mathbb{X}}} = \max_{1 \leq i \leq \alpha_0}\max_{0 \leq t \leq T} \vert u_i (t) \vert$ for $\bm{u}=(u_1,u_2,\cdots,u_{\alpha_0})^T$. Then $(\mathbb{X},\mathbb{X}_+)$ and $(\tilde{\mathbb{X}},\tilde{\mathbb{X}}_+)$ are two ordered Banach spaces. For any $d \geq 0$, we define a bounded linear positive operator $\mathbb{Q}_{d,\chi}: \mathbb{X} \rightarrow \mathbb{X}$ by $$ [\mathbb{Q}_{d,\chi} \bm{u}](t): = \int_{0}^{+\infty} \Phi_{d,\chi}(t,t-s)F_{\chi}(t-s)\bm{u}(t-s) \mathrm{d} s, ~ t \in \mathbb{R},~ \bm{u} \in \mathbb{X}, $$ and $\mathcal{R}_0(d,\chi):=r(\mathbb{Q}_{d,\chi})$. Define $\tilde{\mathbb{Q}}_{\chi} : \tilde{\mathbb{X}} \rightarrow \tilde{\mathbb{X}}$ by $$ [\tilde{\mathbb{Q}}_{\chi} \bm{u}] (t):= \int_{0}^{+\infty} \tilde{\Phi}_{\chi}(t,t-s)\tilde{F}_{\chi}(t-s) \bm{u}(t-s) \mathrm{d} s, ~ t \in \mathbb{R},~ \bm{u} \in \tilde{\mathbb{X}}, $$ and $\tilde{\mathcal{R}}_0(\chi):=r(\tilde{\mathbb{Q}}_{\chi})$. 
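In practice, $\mathcal{R}_0(d,\chi)$ can also be approximated numerically: by Lemma \ref{lem:R0:equivalent} below (see also \cite{wang2008threshold,liang2019basic}), whenever $\mathcal{R}_0(d,\chi)>0$ it is the unique $\mu>0$ at which the Poincar\'e map of \eqref{equ:U_kappa} has spectral radius one. The following Python sketch implements this characterization by bisection. It is only a minimal illustration under simplifying assumptions (the time-dependent matrices \texttt{V} and \texttt{F} are supplied as callables of $t$, the spectral radius of the Poincar\'e map is assumed to be nonincreasing in $\mu$ and to cross one inside the initial bracket, and the tolerances are arbitrary), not the exact algorithm of \cite{wang2008threshold,liang2019basic}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(d, mu, L, V, F, T):
    # Poincare map U(T,0) of v' = (d*L - V(t) + F(t)/mu) v,
    # assembled column by column from the canonical basis of R^n.
    n = L.shape[0]
    cols = []
    for k in range(n):
        e = np.zeros(n); e[k] = 1.0
        sol = solve_ivp(lambda t, v: d*L @ v - V(t) @ v + F(t) @ v/mu,
                        (0.0, T), e, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)

def basic_reproduction_ratio(d, L, V, F, T,
                             mu_lo=1e-6, mu_hi=50.0, tol=1e-8):
    # R0 is the unique mu > 0 with r(U_mu(T,0)) = 1; since r is
    # nonincreasing in mu, bisection applies (assuming the initial
    # bracket [mu_lo, mu_hi] contains the root).
    def rad(mu):
        return np.max(np.abs(np.linalg.eigvals(monodromy(d, mu, L, V, F, T))))
    while mu_hi - mu_lo > tol:
        mu = 0.5*(mu_lo + mu_hi)
        if rad(mu) > 1.0:
            mu_lo = mu   # omega(U^mu) > 0, hence R0 > mu
        else:
            mu_hi = mu
    return 0.5*(mu_lo + mu_hi)
\end{verbatim}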
The subsequent result is a straightforward consequence of \cite[Theorems 2.1 and 2.2]{wang2008threshold}. \begin{lemma}\label{lem:R0:equivalent} Assume that {\rm (H1)}, {\rm (H2)$'$} and {\rm (H3)$'$} hold. Then the following statements are valid for any $\mu >0$ and $\chi \in \mathcal{X}$: \begin{itemize} \item[\rm (i)] For any $d \geq 0$, $\mathcal{R}_0(d,\chi) -\mu$ has the same sign as $\omega(\mathbb{U}_{d,\chi}^{\mu})$. \item[\rm (ii)] $\tilde{\mathcal{R}}_0(\chi) -\mu$ has the same sign as $\omega(\tilde{\mathbb{U}}_{\chi}^{\mu})$. \end{itemize} \end{lemma} Now we are ready to prove the main result of this section. \begin{theorem}\label{thm:R0} Assume that {\rm (H1)}, {\rm (H2)$'$} and {\rm (H3)$'$} hold, and there exists $\chi_0 \in \mathcal{X}$ such that $V_{\chi}$ and $F_{\chi}$ converge to $V_{\chi_0}$ and $F_{\chi_0}$ in the matrix norm as $\chi \rightarrow \chi_0$, respectively. Then the following statements are valid: \begin{itemize} \item[\rm (i)] $\lim\limits_{d\rightarrow 0^+,\chi \rightarrow \chi_0} \mathcal{R}_0(d,\chi)= \mathcal{R}_0(0,\chi_0)$, and $\lim\limits_{d\rightarrow \hat{d},\chi \rightarrow \chi_0} \mathcal{R}_0(d,\chi)= \mathcal{R}_0(\hat{d},\chi_0)$ for any $\hat{d}>0$. \item[\rm (ii)] $\lim\limits_{d\rightarrow +\infty,\chi \rightarrow \chi_0} \mathcal{R}_0(d,\chi)= \tilde{\mathcal{R}}_{0}(\chi_0).$ \end{itemize} \end{theorem} \begin{proof} (i) Without loss of generality, we only prove $$\lim\limits_{d\rightarrow 0^+,\chi \rightarrow \chi_0} \mathcal{R}_0(d,\chi)= \mathcal{R}_0(0,\chi_0).$$ In this case, we choose $\Theta=\mathbb{R}_+ \times \mathcal{X}$ and $\theta_0:=(0,\chi_0)\in \Theta$. Define $$H(\mu,\theta):=\omega(\mathbb{U}_{d,\chi}^{\mu}),~ \forall \mu \in \mathbb{R}_+,~ \theta:=(d,\chi)\in \Theta.$$ According to Lemma \ref{lem:R0:equivalent}, for any $d \geq 0$ and $\chi \in \mathcal{X}$, $H(\mathcal{R}_0(d,\chi),(d,\chi))=0$, $H(\mu,(d,\chi))<0$ for all $\mu > \mathcal{R}_0(d,\chi)$ and $H(\mu,(d,\chi))>0$ for all $\mu < \mathcal{R}_0(d,\chi)$. By Lemma \ref{lem:R0:continuity}, it suffices to show that for any $ \mu >0$, $\lim\limits_{d \rightarrow 0^+,\chi \rightarrow \chi_0}\omega(\mathbb{U}_{d,\chi}^{\mu}) = \omega(\mathbb{U}^{\mu}_{0,\chi_0}) $, which can be derived by arguments similar to those in the Claim of \cite[Theorem 4.1]{zhang2020asymptotic} and Theorem \ref{thm:eig}. (ii) Now we choose $\Theta=({\rm Int}(\mathbb{R}_+) \times \mathcal{X})\cup \{\theta_0\}$, where $\theta_0:=(0,\chi_0)$. Let $\kappa= \frac{1}{d}$ and $\theta:=(\kappa,\chi)\in \Theta$ with $d >0$. Define $$H(\mu,\theta):=\omega(\mathbb{U}_{d,\chi}^{\mu}),~ \forall \mu \in \mathbb{R}_+,~ \theta:=(\kappa,\chi)\in \Theta \setminus \{ \theta_0 \},$$ and $$ H(\mu,\theta_0)= \omega(\tilde{\mathbb{U}}_{\chi_0}^{\mu}),~ \forall \mu \in \mathbb{R}_+. $$ According to Lemma \ref{lem:R0:equivalent}, for any $\kappa > 0$ and $\chi \in \mathcal{X}$, writing $d=1/\kappa$, we have $H(\mathcal{R}_0(d,\chi),(\kappa,\chi))=0$, $H(\mu,(\kappa,\chi))<0$ for all $\mu > \mathcal{R}_0(d,\chi)$ and $H(\mu,(\kappa,\chi))>0$ for all $\mu < \mathcal{R}_0(d,\chi)$. By Lemma \ref{lem:R0:continuity}, it suffices to show that for any $ \mu >0$, $\lim\limits_{d \rightarrow +\infty,\chi \rightarrow \chi_0}\omega(\mathbb{U}_{d,\chi}^{\mu}) = \omega(\tilde{\mathbb{U}}^{\mu}_{\chi_0}) $, which can be derived by arguments similar to those in the Claim of \cite[Theorem 4.1]{zhang2020asymptotic} and Theorem \ref{thm:eig}.
\end{proof} To finish this section, we apply Theorem \ref{thm:R0} to a periodic Ross-Macdonald model in a patch environment. According to \cite{gao2014periodic}, we consider the following $T$-periodic patch system: \begin{subequations}\label{eqn:model:RM}\small \begin{align} &\frac{\mathrm{d} H_i}{\mathrm{d} t} = d \sum_{j=1}^{m} l_{ij}^H H_j(t), &1 \leq i \leq m,\label{eqn:model:RM:a}\\ &\frac{\mathrm{d} V_i}{\mathrm{d} t} = \epsilon_i(t) - \mu_i(t) V_i(t),&1 \leq i \leq m,\label{eqn:model:RM:b}\\ &\frac{\mathrm{d} h_i}{\mathrm{d} t} = \sigma_{1i} \beta_i (t)\frac{H_i(t) -h_i(t)}{H_i(t)} v_i(t) - \gamma_i h_i(t) + d\sum_{j=1}^{m} l_{ij}^H h_j(t),&1 \leq i \leq m,\label{eqn:model:RM:c}\\ &\frac{\mathrm{d} v_i}{\mathrm{d} t} = \sigma_{2i} \beta_i(t) \frac{h_i(t)}{H_i(t)} (V_i(t)-v_i(t)) - \mu_i(t) v_i(t),&1 \leq i \leq m\label{eqn:model:RM:d}. \end{align} \end{subequations} Here $H_i(t)$ and $V_i(t)$ are the total populations of humans and mosquitoes in patch $i$ at time $t$, respectively; $h_i(t)$ and $v_i(t)$ denote the numbers of infectious humans and mosquitoes in patch $i$ at time $t$, respectively; $\epsilon_i(t)>0$ is the recruitment rate of mosquitoes in patch $i$ at time $t$; $\mu_i(t)>0$ is the mortality rate of mosquitoes in patch $i$ at time $t$; $\sigma_{1i}>0$ ($\sigma_{2i}>0$) is the transmission probability from infectious mosquitoes (humans) to susceptible humans (mosquitoes) in patch $i$; $\beta_i(t)>0$ is the mosquito biting rate in patch $i$ at time $t$; $\gamma^{-1}_i>0$ is the human infectious period; $l_{ij}^H$ is the degree of human migration from patch $j$ to patch $i$ for $i \neq j$; $-l_{ii}^H$ is the total degree of human migration from patch $i$ to all other patches; $d$ is the migration coefficient. We assume that there is no death or birth during travel, so the migration coefficients of humans satisfy $\sum_{j=1}^{m} l_{ji}^H=0,~ \forall 1 \leq i \leq m $; the functions $\epsilon_i$, $\mu_i$ and $\beta_i$ are $T$-periodic and continuous on $\mathbb{R}$. We further assume that the total human population satisfies $N^H=\sum_{j=1}^{m} H_j(0)>0$. By \cite[Lemma 3.1]{gao2014periodic}, it then follows that \eqref{eqn:model:RM:a} admits a globally asymptotically stable equilibrium $\bm{H}^*=(H_{1}^*,\cdots,H_{m}^*)^T$, which is independent of $d>0$ and $t \in \mathbb{R}$; and that \eqref{eqn:model:RM:b} admits a globally asymptotically stable $T$-periodic solution $\bm{V}^*(t)=(V_{1}^*(t),\cdots,V_{m}^*(t))^T$, which is independent of $d>0$. Moreover, $\sum_{j=1}^{m} H_{j}^*=\sum_{j=1}^{m} H_j(0)$. We linearize system \eqref{eqn:model:RM} at the disease-free periodic solution $$ (H_{1}^*,\cdots,H_{m}^*,V_{1}^*,\cdots,V_{m}^*,0,\cdots,0,0,\cdots,0)^T $$ to obtain \begin{equation} \begin{cases} \frac{\mathrm{d} h_i}{\mathrm{d} t} = \sigma_{1i} \beta_i (t) v_i(t) - \gamma_i h_i(t) + d\sum_{j=1}^{m} l_{ij}^{H} h_j(t),&1 \leq i \leq m,\\ \frac{\mathrm{d} v_i}{\mathrm{d} t} = \sigma_{2i} \beta_i(t) \frac{V_i^*(t)}{H_{i}^*}h_i(t) - \mu_i(t) v_i(t),&1 \leq i \leq m. \end{cases} \end{equation} We next choose $n=2m$, and hence, $ \mathbb{X}:=\{ \bm{u} \in C(\mathbb{R},\mathbb{R}^{2m}): \bm{u}(t)=\bm{u}(t+T), ~t \in \mathbb{R} \}. $ Let $$ V_{11}(t):=(\delta_{ij}\gamma_i)_{m \times m},~ V_{22}(t):=(\delta_{ij}\mu_i(t))_{m \times m},\text{ and } V(t):= \left( \begin{array}{cc} V_{11}(t)& 0 \\ 0& V_{22}(t) \end{array} \right).
Define
$$ F_{12}(t):=(\delta_{ij}\sigma_{1i} \beta_i (t))_{m \times m},~ F_{21}(t):=\left(\delta_{ij}\sigma_{2i} \beta_i(t) \frac{V_i^*(t)}{H_{i}^*}\right)_{m \times m}, $$
and
$$ F(t):= \left( \begin{array}{cc} 0& F_{12} (t) \\ F_{21}(t)& 0 \end{array} \right). $$
Let $L=(l_{ij})_{2m \times 2m}$ be the cooperative matrix with zero column sums defined by $l_{ij}=l_{ij}^H$ for $1\leq i,j \leq m$ and $l_{ij}=0$ otherwise. For any $d \geq 0$, let $\{\Phi_{d}(t,s): t \geq s \}$ be the evolution family on $\mathbb{R}^{2m}$ of
$$ \frac{\partial \bm{v}}{\partial t}= d L \bm{v} - V(t) \bm{v}, ~t\geq s, $$
define a bounded linear positive operator $\mathbb{Q}_{d}: \mathbb{X} \rightarrow \mathbb{X}$ by
$$ [\mathbb{Q}_{d} \bm{u}](t): = \int_{0}^{+\infty} \Phi_{d}(t,t-s)F(t-s)\bm{u}(t-s) \mathrm{d} s, ~ t \in \mathbb{R},~ \bm{u} \in \mathbb{X}, $$
and set $\mathcal{R}_0 (d):=r(\mathbb{Q}_{d})$. According to Theorem \ref{thm:R0}, $\mathcal{R}_0(d)$ is continuous with respect to $d\in(0,+\infty)$. Indeed, Theorem \ref{thm:R0} shows that the basic reproduction ratio is continuous with respect to all parameters in the model.

Now we turn to the limiting profile of $\mathcal{R}_0(d)$ as $d \rightarrow +\infty$. Let $\bm{q}=(q_1,\cdots,q_m)^T$ be a strongly positive vector such that $L^H\bm{q}=\bm{0}$ and $\sum_{i =1}^{m} q_i =1$, where $L^H=(l_{ij}^H)_{m \times m}$. Notice that $L$ is a reducible matrix. Since $L^H$ is irreducible, we have $\alpha_0=m+1$ by Lemma \ref{lem:L_K}, and hence $ \tilde{\mathbb{X}}:=\{ \bm{u} \in C(\mathbb{R},\mathbb{R}^{m+1}): \bm{u}(t)=\bm{u}(t+T), ~t \in \mathbb{R} \} $. Moreover, $P=(p_{hj})_{\alpha_0\times 2m}$ and $Q=(q_{il})_{2m \times\alpha_0}$ can be defined by $p_{1j}=1$ for $1\leq j \leq m$, $p_{h(h+m-1)}=1$ for $2 \leq h \leq \alpha_0$, $q_{i1}=q_{i}$ for $1\leq i \leq m$, $q_{(l+m-1)l}=1$ for $2\leq l \leq \alpha_0$, and $p_{hj}=0$, $q_{il}=0$ otherwise. For any $t \in \mathbb{R}$, define $\tilde{V}(t):=PV(t)Q$ and $\tilde{F}(t):= PF(t)Q$. Let $\{\tilde{\Phi}(t,s): t \geq s \}$ be the evolution family on $\mathbb{R}^{\alpha_0}$ of
$$\frac{\partial \bm{v}}{\partial t}= - \tilde{V}(t) \bm{v}, ~t\geq s,$$
and define a bounded linear positive operator $\tilde{\mathbb{Q}}: \tilde{\mathbb{X}} \rightarrow \tilde{\mathbb{X}}$ by
$$ [\tilde{\mathbb{Q}} \bm{u}](t): = \int_{0}^{+\infty} \tilde{\Phi}(t,t-s)\tilde{F}(t-s)\bm{u}(t-s) \mathrm{d} s, ~ t \in \mathbb{R},~ \bm{u} \in \tilde{\mathbb{X}}, $$
and $\tilde{\mathcal{R}}_0:=r(\tilde{\mathbb{Q}})$. It then follows from Theorem \ref{thm:C} that $\mathcal{R}_0(d) \rightarrow \tilde{\mathcal{R}}_0$ as $d \rightarrow +\infty$.

Finally, we numerically compute $\mathcal{R}_0$ by using the algorithm developed in \cite{wang2008threshold,liang2019basic}. The baseline parameters, derived from \cite{gao2014periodic}, are $m=2$, $T=365$, $N^H=500$, $\sigma_{1i}=0.2$, $\sigma_{2i}=0.3$, $\gamma_i=0.02$, $\mu_i(t)\equiv 0.1$, $\epsilon_1(t)=12.5-5 \cos(\frac{2\pi t}{T})-5 \cos(\frac{4\pi t}{T})$, $\epsilon_2(t)=12.5- 5\cos(\frac{2\pi t}{T})$, $\beta_i(t)=0.028 \epsilon_i(t)$, $l_{12}^H=1$, and $l_{21}^H=1$. Our numerical results show that the basic reproduction ratios on patches 1 and 2 are $\mathcal{R}_0^{(1)}=1.5340$ and $\mathcal{R}_0^{(2)}=1.4478$, respectively. From Figure \ref{fig1} we observe that the dependence of $\mathcal{R}_0$ on $d$ may be rather complicated: $\mathcal{R}_0$ is decreasing for sufficiently small and sufficiently large $d$, while it is increasing on an intermediate interval.
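The $d$-dependence shown in Figure \ref{fig1} can in principle be reproduced with a few lines of code. The following is a minimal illustrative Python sketch (not the implementation behind the figure): instead of discretizing the operator $\mathbb{Q}_d$, it uses the equivalent characterization from \cite[Theorem 2.1]{wang2008threshold} that $\mathcal{R}_0(d)$ is the unique $\mu>0$ for which the monodromy matrix of $\bm{w}'=\big(F(t)/\mu - V(t) + dL\big)\bm{w}$ has spectral radius $1$, together with the closed-form periodic solution $\bm{V}^*(t)$. The bisection bracket and solver tolerances below are ad hoc choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

T, m = 365.0, 2
w = 2 * np.pi / T
sigma1, sigma2, gamma, mu = 0.2, 0.3, 0.02, 0.1
LH = np.array([[-1.0, 1.0], [1.0, -1.0]])   # l_12^H = l_21^H = 1, zero column sums
Hstar = np.array([250.0, 250.0])            # N^H = 500, split evenly (L^H symmetric)
Vmat = np.diag([gamma, gamma, mu, mu])      # V = diag(V_11, V_22), constant here
Lbig = np.zeros((4, 4)); Lbig[:2, :2] = LH  # only humans migrate

def eps(t):                                 # recruitment rates epsilon_i(t)
    return np.array([12.5 - 5*np.cos(w*t) - 5*np.cos(2*w*t),
                     12.5 - 5*np.cos(w*t)])

def Vstar(t):   # periodic solution of V' = eps(t) - mu V, in closed form
    def mode(k):                            # response to the term -5 cos(k w t)
        return -5*(mu*np.cos(k*w*t) + k*w*np.sin(k*w*t))/(mu**2 + (k*w)**2)
    return np.array([12.5/mu + mode(1) + mode(2), 12.5/mu + mode(1)])

def F(t):
    b = 0.028 * eps(t)                      # biting rates beta_i(t)
    out = np.zeros((4, 4))
    out[:2, 2:] = np.diag(sigma1 * b)
    out[2:, :2] = np.diag(sigma2 * b * Vstar(t) / Hstar)
    return out

def rho(lam, d):
    """Spectral radius of the monodromy matrix of w' = (F(t)/lam - V + d L) w."""
    rhs = lambda t, y: ((F(t)/lam - Vmat + d*Lbig) @ y.reshape(4, 4)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(4).ravel(), rtol=1e-8, atol=1e-10)
    return np.max(np.abs(np.linalg.eigvals(sol.y[:, -1].reshape(4, 4))))

def R0(d, lo=1e-3, hi=50.0, tol=1e-6):
    # rho(., d) is nonincreasing, and R_0(d) is its unique root of rho = 1,
    # so bisection applies; the bracket [lo, hi] is our guess.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid, d) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(R0(1.0))   # R_0 at migration coefficient d = 1
\end{verbatim}
Sweeping \texttt{R0(d)} over a grid of $d$ values then traces out a curve of the kind displayed in Figure \ref{fig1}; the exact values depend mildly on the solver tolerances.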
The figure further shows that $\mathcal{R}_0(d) \rightarrow \max(\mathcal{R}_0^{(1)},\mathcal{R}_0^{(2)})$ as $d\rightarrow 0$, and $\mathcal{R}_0(d) \rightarrow \tilde{\mathcal{R}}_0=1.5028$ as $d\rightarrow +\infty$. For the corresponding time-averaged autonomous system, we find that its basic reproduction number is $\bar{\mathcal{R}}_0=1.3555$, which is independent of $d$. This suggests that the use of a time-averaged autonomous model may underestimate the disease severity in some transmission settings.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{R0d.pdf}
\caption{{\small \it $\mathcal{R}_0$ initially decreases, then increases, and finally decreases with respect to $d$. }}\label{fig1}
\end{figure}
\ {\noindent \bf Acknowledgements.} L. Zhang's research is supported in part by the National Natural Science Foundation of China (11901138) and the Natural Science Foundation of Shandong Province (ZR2019QA006), and X.-Q. Zhao's research is supported in part by the NSERC of Canada. We are grateful to the anonymous referees for their careful reading and valuable comments, which led to an improvement of our original manuscript.
{ "timestamp": "2021-08-04T02:02:33", "yymm": "2108", "arxiv_id": "2108.01133", "language": "en", "url": "https://arxiv.org/abs/2108.01133", "abstract": "This paper is devoted to the study of the asymptotic behavior of the principal eigenvalue and basic reproduction ratio associated with periodic population models in a patchy environment for small and large dispersal rates. We first deal with the eigenspace corresponding to the zero eigenvalue of the connectivity matrix. Then we investigate the limiting profile of the principal eigenvalue of an associated periodic eigenvalue problem as the dispersal rate goes to zero and infinity, respectively. We further establish the asymptotic behavior of the basic reproduction ratio in the case of small and large dispersal rates. Finally, we apply these results to a periodic Ross-Macdonald patch model.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "Asymptotic behavior of the principal eigenvalue and basic reproduction ratio for periodic patch models", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713831229043, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7080104910726507 }
https://arxiv.org/abs/2009.03708
Winning Strategy for the Multiplayer and Multialliance Zeckendorf Games
Edouard Zeckendorf proved that every positive integer $n$ can be uniquely written \cite{Ze} as the sum of non-adjacent Fibonacci numbers, known as the Zeckendorf decomposition. Based on Zeckendorf's decomposition, we have the Zeckendorf game for multiple players. We show that when the Zeckendorf game has at least $3$ players, none of the players have a winning strategy for $n\geq 5$. Then we extend the multi-player game to the multi-alliance game, finding some interesting situations in which no alliance has a winning strategy. This includes the two-alliance game, and some cases in which one alliance always has a winning strategy.
\section{Introduction}
\subsection{Rules of the Zeckendorf Game}
The Fibonacci sequence is one of the most celebrated sequences in mathematics, with a number of beautiful properties. Among these properties is a theorem of Edouard Zeckendorf \cite{Ze}. Zeckendorf proved that every positive integer $n$ can be uniquely written as the sum of distinct non-consecutive Fibonacci numbers. This sum is also known as the \textit{Zeckendorf decomposition} of $n$. We define the Fibonacci sequence as $F_1 = 1, F_2 = 2, F_3 = 3$, and $F_{n+1}=F_n+F_{n-1}$; if we instead used $F_1=F_2=1$, we would lose uniqueness.

Baird-Smith, Epstein, Flint and Miller \cite{BEFMgen,BEFM} created a game based on the Zeckendorf decomposition. We quote from \cite{BEFM} to describe the game. We first introduce some notation. By $\{1^n\}$ or $\{{F_1}^n\}$ we mean $n$ copies of $1$, or $F_1$. If we have $3$ copies of $F_1$, $2$ copies of $F_2$, and $7$ copies of $F_4$, we write either $\{{F_1}^3 + {F_2}^2 + {F_4}^7 \}$ or $\{1^3 + 2^2 + 5^7\}$.
\begin{defi}[The Zeckendorf Game]\label{defi:zg} At the beginning of the game, there is an unordered list of $n$ 1's. Let $F_1 = 1, F_2 = 2$, and $F_{i+1} = F_i + F_{i-1}$; therefore the initial list is $\{{F_1}^n\}$. On each turn, a player can do one of the following moves.
\begin{enumerate}
\item If the list contains two consecutive Fibonacci numbers, $F_{i-1}, F_i$, these can be combined to create $F_{i+1}$. We denote this move by $F_{i-1} + F_i = F_{i+1}$.
\item If the list has two of the same Fibonacci number, $F_i, F_i$, then
\begin{enumerate}
\item if $i=1$, $F_1, F_1$ can be combined to create $F_2$, denoted by $1+1=2$,
\item if $i=2$, a player can change $F_2, F_2$ to $F_1, F_3$, denoted by $2+2=1+3$, and
\item if $i \geq 3$, a player can change $F_i, F_i$ to $F_{i-2}, F_{i+1}$, denoted by $F_i + F_i = F_{i-2}+ F_{i+1}$.
\end{enumerate}
\end{enumerate}
The players alternate moving. The game ends when one player moves to create the Zeckendorf decomposition.
\end{defi}
In the following results, $p$ represents the number of players in the Zeckendorf game.
\subsection{Previous Results}
Baird-Smith, Epstein, Flint and Miller \cite{BEFM} proved the following results about the Zeckendorf game.
\begin{thm} Every game terminates in a finite number of moves at the Zeckendorf decomposition. \end{thm}
\begin{thm}\label{thm1} In the two-player game $(p=2)$, for any $n>2$, player $2$ always has a winning strategy. \end{thm}
It is worth noting that the proof of Theorem \ref{thm1} is non-constructive, and it is still an open problem to find an explicit winning strategy for player $2$.
\subsection{New Results}
\begin{thm}\label{thm2} When $n \geq 5$, for any $p \geq 3$ no player has a winning strategy. \end{thm}
Now we extend the multi-player game to games with two or more alliances, or teams. We use $t$ to represent the number of teams. We then have the following theorems.
\begin{thm}\label{thm3} For any $n \geq 2k^2+4k$ and $t \geq 3$, if each team has exactly $k=t-1$ consecutive players, then no team has a winning strategy. \end{thm}
\begin{thm}\label{thm4} Let $n\geq 30$ and $p=6$. If one alliance has $4$ players and the other alliance has $2$ players, then the $4$-player alliance always has a winning strategy. \end{thm}
\begin{thm}\label{thm5} Let $n\geq 32$ and $p\geq 7$. If there are two alliances, one alliance with $p-2$ players (called the big alliance), and the other with exactly $2$ players (called the small alliance), then the big alliance always has a winning strategy.
\end{thm}
Lastly, we extend this to larger alliances with the following theorems.
\begin{thm}\label{thm6} For any $n \geq 4pb+2p-2b$, if one alliance contains more than two-thirds of the players and there is some integer $b$ such that if a player $i$ is not on the alliance, player $(i - b)\bmod p$ is on it, and the alliance has at least $2b$ players in a row, then that alliance has a winning strategy. \end{thm}
\begin{thm}\label{thm7} Let $n\geq 2p+4b$. If there is some integer $b$ such that if a player $i$ is not on the alliance then player $(i - b)\bmod p$ is on the alliance, and the alliance has at least $3b$ players in a row, then that alliance has a winning strategy. \end{thm}
\begin{thm}\label{thm8} Assume we have one big alliance (of size $2d$) and one small alliance (of size $d$). In this two-alliance game consisting of $3d$ players ($d$ can be any positive integer), when the small alliance consists of $d$ consecutive players and the big alliance consists of $2d$ consecutive players, if $n\geq 12d^2+4d$, then the big alliance always has a winning strategy. \end{thm}
\section{Winning Strategy for the Zeckendorf Game}
\subsection{Proof of Theorem \ref{thm2}}
Note: in all the following proofs of this section, player $0$ means player $p$ (indices are taken $\bmod\ p$). To prove Theorem \ref{thm2}, we introduce the following property:
\begin{property}\label{p1} Suppose player $m$ has a winning strategy $(1\leq m \leq p)$. For any $p\geq 3$, if player $m$ is not the player who takes step $2$ listed below, then any winning path of player $m$ does not contain the following $3$ consecutive steps: Step $1: 1+1=2$. Step $2: 1+1=2$. Step $3: 2+2=1+3$.
\end{property}
\begin{proof} Suppose player $m$ has a winning strategy and some winning path contains these $3$ consecutive steps. Then there exists a player $a$ with $1\leq a\leq p$ and $a \neq m$ such that player $a-1 \pmod p$ can take step $1$, player $a$ can take step $2$, and player $a+1 \pmod p$ can take step $3$. Note that instead of doing $1+1=2$, player $a$ can do $1+2=3$; this reaches the same position with one move fewer, so every subsequent move shifts to the previous player. Hence player $m-1 \pmod p$ has a winning strategy, which is a contradiction. Therefore, by this stealing strategy, Property \ref{p1} holds.
\end{proof}
We now prove Theorem \ref{thm2} by splitting it into the following $2$ lemmas, Lemma \ref{lem2.1.1} and Lemma \ref{lem2.1.2}.
\begin{lem}\label{lem2.1.1} When $n \geq 13$, for any $p \geq 4$ no player has a winning strategy. \end{lem}
\begin{proof} Suppose player $m$ has a winning strategy $(1\leq m\leq p)$. Consider the following two cases.\\
\begin{itemize}
\item[Case 1.] If $m\geq 4$, then players $1, 2, 3$ can do the following: Player $1: 1+1=2$. Player $2: 1+1=2$. Player $3: 2+2=1+3$. This contradicts Property \ref{p1}, so player $m$ does not have a winning strategy for any $m\geq 4$.\\
\item[Case 2.] If $m\leq 3$, then after player $m$'s first move, players $m+1, m+2, m+3$ can do the following: Player $m+1: 1+1=2$. Player $m+2: 1+1=2$. Player $m+3: 2+2=1+3$. This contradicts Property \ref{p1}, so player $m$ does not have a winning strategy for any $m\leq 3$.
\end{itemize}
By Case $1$ and Case $2$, Lemma \ref{lem2.1.1} is proved.
\end{proof}
\begin{lem}\label{lem2.1.2} When $n \geq 13$, for $p=3$ no player has a winning strategy. \end{lem}
\begin{proof} Suppose player $m$ has a winning strategy $(1\leq m\leq 3)$. After player $m$'s first move, players $m+1$ and $m+2$ can do the following $($if $m=3$, we can start the following process from the first step of the game$)$: Player $m+1: 1+1=2$ (Step $1$). Player $m+2: 1+1=2$ (Step $2$).
Player $m$: player $m$ can do anything (Step $3$).\\
Note that if player $m$ does $2+2=1+3$, then Steps $1, 2$, and $3$ violate Property \ref{p1}, which is a contradiction. If player $m$ does anything other than $2+2=1+3$, then after player $m$'s first move the other $2$ players can do the following (continuing after the first $3$ steps listed above with $2$ more steps; if $m=3$, player $m+1$ is player $1$): Player $m+1: 1+1=2$ (Step $1$). Player $m+2: 1+1=2$ (Step $2$). Player $m$: player $m$ can do anything (Step $3$). Player $m+1: 1+1=2$ (Step $4$). Player $m+2: 2+2=1+3$ (Step $5$). Note that Step $3$ removes at most one $2$, but Step $1$ and Step $2$ generate two $2$'s in total, so there will be at least one $2$ remaining after Step $3$. Therefore, player $m+1$ can do $1+2=3$ instead in Step $4$. By doing so, player $m-1\pmod p$ now has a winning strategy, which is a contradiction. Thus, by this stealing strategy, Lemma \ref{lem2.1.2} is proved.
\end{proof}
By Lemmas \ref{lem2.1.1} and \ref{lem2.1.2}, and brute-force computations for $n<13$, Theorem \ref{thm2} is proved.
\subsection{Proof of Theorem \ref{thm3}}
For the following proofs, team $0$ means team $t$ (indices are taken $\bmod\ t$). Note that player $tk$'s next player is player $1$, and we regard player $tk$ and player $1$ as two consecutive players. Therefore, without loss of generality, in all the following proofs we assume that team $1$ has players $1, 2, 3,\dots, k$; team $2$ has players $k+1, k+2,\dots, 2k$; team $3$ has players $2k+1, 2k+2,\dots, 3k$; and so on. For this proof we utilize the following property.
\begin{property}\label{p2} Suppose team $m$ has a winning strategy $(1\leq m \leq t)$. For any $t\geq 3$ and $k=t-1$, if none of the middle $k$ players listed below belong to team $m$, then any winning path for team $m$ does not contain the following $3k$ consecutive steps: First $k$ players all do: $1+1=2$. Middle $k$ players all do: $1+1=2$. Last $k$ players all do: $2+2=1+3$.
\end{property}
\begin{proof} Suppose team $m$ has a winning strategy and some winning path for team $m$ contains such $3k$ consecutive steps. Then there exists $q$ with $1\leq q\leq p$ such that player $q$ belongs to team $m$ and takes the last step of the game. Instead of doing $1+1=2$, the middle $k$ players can all do $1+2=3$. By doing so, player $q-k$ becomes the player who takes the last step. Note that team $m$ has $k$ players, so player $q-k$ belongs to team $m-1 \pmod t$. Hence team $m-1 \pmod t$ has a winning strategy, contradicting our assumption. Therefore, by this stealing strategy, Property \ref{p2} holds.
\end{proof}
After proving Property \ref{p2}, we prove Theorem \ref{thm3} by splitting it into the following $2$ lemmas, Lemmas \ref{lem3} and \ref{lem4}.
\begin{lem}\label{lem3} When $n\geq2k^2+4k$, for any $t \geq 4$ and $k=t-1$ no team has a winning strategy. \end{lem}
\begin{proof} Suppose team $m$ has a winning strategy $(1\leq m\leq t)$. Note that the last player in team $m$ is player $mk$, so the first player after team $m$ is player $mk+1 \pmod p$. Also, there are $t-1=k$ other teams, each with $k$ players, where $k\geq 4-1=3$; therefore there are $k^2\geq 3k$ consecutive players from other teams. After all members of team $m$ have made their first moves, the consecutive $t-1=k$ other teams can do the following $($if $m=t$, we start the following steps from the first step of player $1)$. In all the following, players' numbers are taken mod $p$.
From player $mk+1$ to player $(m+1)k$, all do: $1+1=2$. From player $(m+1)k+1$ to player $(m+2)k$, all do: $1+1=2$. From player $(m+2)k+1$ to player $(m+3)k$, all do: $2+2=1+3$. Since none of these $3k$ players is from team $m$, this contradicts Property \ref{p2}, so team $m$ does not have a winning strategy. Therefore, Lemma \ref{lem3} is proved.
\end{proof}
\begin{lem}\label{lem4} When $n\geq 30$, for $t=3$ and $k=2$ no team has a winning strategy. \end{lem}
\begin{proof} Suppose team $m$ has a winning strategy $(1\leq m\leq 3)$. Note that the game has $3$ teams and $6$ players in total, so all players' numbers listed below are taken mod $6$, and all teams' numbers listed below are taken mod $3$. Team $m+1$ has players $2m+1$ and $2m+2$; team $m-1$ has players $2m+3$ and $2m+4$; team $m$ has players $2m-1$ and $2m$. After the first move of player $2m$ (the last player from team $m$), consider the following moves $($if $m=3$, the same process can start from the first step of player $1)$: Player $2m+1: 1+1=2$ $($Step $1)$. Player $2m+2: 1+1=2$ $($Step $2)$. Player $2m+3: 1+1=2$ $($Step $3)$. Player $2m+4: 1+1=2$ $($Step $4)$. Player $2m-1:$ anything $($Step $5)$. Player $2m:$ anything $($Step $6)$. Player $2m+1: 1+1=2$ $($Step $7)$. Player $2m+2: 1+1=2$ $($Step $8)$. Player $2m+3: 1+1=2$ $($Step $9)$. Player $2m+4: 1+1=2$ $($Step $10)$. Player $2m-1:$ anything $($Step $11)$. Player $2m:$ anything $($Step $12)$. $($Note: steps $5, 6, 11, 12$ can be anything because they are controlled by team $m.)$ Now we prove this lemma in $2$ cases.\\
\begin{itemize}
\item[Case 1.] If steps $5$ and $6$ are both $2+2=1+3$, then steps $1,2,3,4,5,6$ contradict Property \ref{p2}, so team $m$ has no winning strategy. Similarly, if steps $11$ and $12$ are both $2+2=1+3$, then steps $7,8,9,10,11,12$ contradict Property \ref{p2}, so team $m$ has no winning strategy.\\
\item[Case 2.] Otherwise, if one of steps $5,6$ is not $2+2=1+3$ and one of steps $11,12$ is not $2+2=1+3$, then after player $2m$'s first move we play the following (the first $12$ steps above, continued by $4$ more steps; if $m=3$, the same process can start from the very first step of player $1$): Player $2m+1: 1+1=2$ $($Step $1)$. Player $2m+2: 1+1=2$ $($Step $2)$. Player $2m+3: 1+1=2$ $($Step $3)$. Player $2m+4: 1+1=2$ $($Step $4)$. Player $2m-1:$ anything $($Step $5)$. Player $2m:$ anything $($Step $6)$. Player $2m+1: 1+1=2$ $($Step $7)$. Player $2m+2: 1+1=2$ $($Step $8)$. Player $2m+3: 1+1=2$ $($Step $9)$. Player $2m+4: 1+1=2$ $($Step $10)$. Player $2m-1:$ anything $($Step $11)$. Player $2m:$ anything $($Step $12)$. Player $2m+1: 1+1=2$ $($Step $13)$. Player $2m+2: 1+1=2$ $($Step $14)$. Player $2m+3: 2+2=1+3$ $($Step $15)$. Player $2m+4: 2+2=1+3$ $($Step $16)$. Note that one of steps $5,6$ is not $2+2=1+3$ and one of steps $11,12$ is not $2+2=1+3$, so steps $5,6$ take away at most three $2$'s in total, and steps $11,12$ take away at most three $2$'s in total. Also note that steps $1,2,3,4,7,8,9,10$ generate eight $2$'s in total. So after step $12$ there will be at least two $2$'s remaining. Therefore, players $2m+1$ and $2m+2$ can both do $1+2=3$ instead of $1+1=2$ in steps $13$ and $14$. Since team $m$ has a winning strategy, either player $2m-1$ or player $2m$ takes the last step.
If player $2m-1$ originally takes the last step, then under the stealing strategy above player $2m-1-2=2m-3$ takes the last step; that is, player $2m+3$ $($mod $6)$ takes the last step, so team $m-1$ now has a winning strategy, which contradicts our assumption. If player $2m$ originally takes the last step, then under the stealing strategy above player $2m-2$ takes the last step; that is, player $2m+4$ $($mod $6)$ takes the last step, so team $m-1$ now has a winning strategy, which again contradicts our assumption.
\end{itemize}
In both cases we reach a contradiction via a stealing strategy, so Lemma \ref{lem4} is proved.
\end{proof}
Therefore, by Lemmas \ref{lem3} and \ref{lem4}, Theorem \ref{thm3} is proved.
\subsection{Proof of Theorem \ref{thm4}}
Note: in the following proof, since player $6$'s next player is player $1$, player $6$ and player $1$ are considered as two consecutive players. The $4$-player alliance has $3$ possible configurations.\\
\begin{itemize}
\item[Case $1$.] If the $4$-player alliance consists of $4$ consecutive players, then the $2$-player alliance also consists of $2$ consecutive players. To show that the $2$-player alliance does not have a winning strategy, regard the $4$ consecutive players of the big alliance as $2$ teams, each with $2$ consecutive players. Then, according to Lemma \ref{lem4}, the $2$-player alliance does not have a winning strategy. Since the game is finite and deterministic, one of the two alliances can always force making the last move, so the $4$-player alliance always has a winning strategy in this case.\\
\item[Case $2$.] Suppose the $4$-player alliance is separated into two parts, each consisting of $2$ consecutive players. Note that this situation is equivalent to the $3$-player game in which $2$ of the players are now on the same team. According to Lemma \ref{lem2.1.2}, the single player in the $3$-player game does not have a winning strategy; equivalently, the $2$-player alliance does not have a winning strategy in this case. Therefore, the $4$-player alliance always has a winning strategy in this case.\\
\item[Case $3$.] Suppose the $4$-player alliance is separated into two parts, where one part has $3$ consecutive players and the other part has only $1$ player. Then the $2$ players of the $2$-player alliance are separated from each other (if they were not separated, the $4$ players of the $4$-player alliance would be consecutive). Suppose the $2$-player alliance has a winning strategy; then there always exists a player $q$ from the $2$-player alliance who takes the last step. Suppose the $3$ consecutive players of the $4$-player alliance are players $a$, $a+1 \pmod 6$, and $a+2 \pmod 6$. Starting from player $a$'s first move, play: Player $a: 1+1=2$. Player $a+1: 1+1=2$. Player $a+2: 2+2=1+3$. Note that if player $a+1$ does $1+2=3$ instead, then player $q-1$ becomes the player who takes the last step. Since the $2$ players of the $2$-player alliance are separated, player $q-1$ belongs to the $4$-player alliance. Therefore, the $4$-player alliance now has a winning strategy, which contradicts our assumption. Thus, by a stealing strategy, the $4$-player alliance has a winning strategy in this case.
\end{itemize}
By Cases $1$, $2$ and $3$, Theorem \ref{thm4} follows.
\subsection{Proof of Theorem \ref{thm5}}
We first consider the $7$-player game.
\begin{lem}\label{lem5} When $n\geq 32$, if one alliance has $5$ players and the other alliance has $2$ players, then the $5$-player alliance always has a winning strategy. \end{lem}
\begin{proof} We prove this lemma by considering $2$ cases.\\
\begin{itemize}
\item[Case $1$.] If the $2$ players of the $2$-player alliance are not consecutive, then the $5$-player alliance is separated into $2$ parts (considering player $7$ and player $1$ as two consecutive players). By the pigeonhole principle, one of these $2$ parts contains at least $3$ consecutive players (call it the large part). Suppose the $2$-player alliance has a winning strategy; then there exists a player $q$ from the $2$-player alliance who takes the last step. Suppose the first player in the large part is player $a$. Starting from player $a$'s first move, play: Player $a: 1+1=2$. Player $a+1: 1+1=2$. Player $a+2: 2+2=1+3$. Note that instead of doing $1+1=2$, player $a+1$ can do $1+2=3$. Then player $q-1$ becomes the player who takes the last step of the winning path. Since the $2$ players of the $2$-player alliance are not consecutive, player $q-1$ belongs to the $5$-player alliance. As a result, the $5$-player alliance now has a winning strategy, which contradicts our assumption. Therefore, by the stealing strategy, the $5$-player alliance has a winning strategy in this case.\\
\item[Case $2$.] If the $2$ players of the $2$-player alliance are consecutive, then the $5$ players of the $5$-player alliance are also consecutive. Suppose the $5$ consecutive players of the $5$-player alliance start with player $a$. Starting with player $a$'s first move, play: Player $a: 1+1=2$ $($Step $1)$. Player $a+1: 1+1=2$ $($Step $2)$. Player $a+2: 1+1=2$ $($Step $3)$. Player $a+3: 1+1=2$ $($Step $4)$. Player $a+4: 1+1=2$ $($Step $5)$. Player $a+5:$ anything $($Step $6)$. Player $a+6:$ anything $($Step $7)$. (Note that player $a+5$ and player $a+6$ are in the $2$-player alliance.) Player $a: 1+1=2$ $($Step $8)$. Player $a+1: 1+1=2$ $($Step $9)$. Player $a+2: 1+1=2$ $($Step $10)$. Player $a+3: 2+2=1+3$ $($Step $11)$. Player $a+4: 2+2=1+3$ $($Step $12)$. Suppose the $2$-player alliance has a winning strategy; then there always exists a player $q$ from the $2$-player alliance who takes the last step. Note that steps $6$ and $7$ can take away at most four $2$'s in total, while steps $1, 2, 3, 4, 5, 8$ generate six $2$'s in total. As a result, after step $8$ there will be at least two $2$'s remaining. Therefore, player $a+1$ in step $9$ and player $a+2$ in step $10$ can both do $1+2=3$ instead. Then player $q-2$ becomes the player who takes the last step. Since the $2$ players of the $2$-player alliance are consecutive, player $q-2$ belongs to the $5$-player alliance. Therefore, the $5$-player alliance now has a winning strategy, which contradicts our assumption. Thus, by the stealing strategy, Case $2$ is proved.
\end{itemize}
By our analysis in Cases $1$ and $2$, Lemma \ref{lem5} is proved.
\end{proof}
Now let us look at games with $8$ or more players.
\begin{lem}\label{lem6} In a $p$-player game with $2$ alliances, when $n \geq 22$ and $p\geq 8$, if one alliance has $p-2$ players (call it the big alliance; it has at least $6$ players) and the other alliance has $2$ players, then the big alliance always has a winning strategy.
\end{lem} \begin{proof} We prove this lemma by considering $2$ cases.\\ \begin{itemize} \item[Case 1.] If the $2$ players of the $2$-player alliance are not consecutive, then the big alliance will be separated by $2$ parts. Note that the big alliance has at least 6 players. By pigeonhole principle, there will be at least one part having at least $3$ players (let's call it big part). Suppose $2$-player alliance has a winning strategy. Then for any winning path, there exists a player $q$ in the $2$-player alliance who takes the last step. Suppose the first player in the big part is player $a$, and let's do the following starting from player $a$'s first move: Player $a: 1+1=2$ $($Step $1)$. Player $a+1: 1+1=2$ $($Step $2)$. Player $a+2: 2+2=1+3$ $($Step $3)$. Note that instead of doing $1+1=2,$ player $a+1$ can do $1+2=3$ instead in step $2$. Now player $q-1$ becomes the player who takes the last step. Since the $2$ players in the $2$-player alliance are not consecutive, player $q-1$ belongs to the big alliance, so the big alliance now has the winning strategy, which contradicts with our assumption. Therefore, we proved case $1$ by using stealing strategy.\\ \item[Case 2.] If the $2$ players of the $2$-player alliance are consecutive, then the $p-2$ players of the big alliance are consecutive. Suppose $2$-player alliance has a winning strategy, then for any winning path, there exists a player $q$ from the $2$-player alliance who takes the last step. Suppose the big alliance's $p-2$ consecutive players start with player $a$. $($Note the the big alliance has at least $6$ players, so players $a, a+1, a+2, a+3, a+4, a+5$ are all in the big alliance$).$ Let's do the following starting from player $a$'s first move: Player $a: 1+1=2$ $($Step $1)$. Player $a+1: 1+1=2$ $($Step $2)$. Player $a+2: 1+1=2$ $($Step $3)$. Player $a+3: 1+1=2$ $($Step $4)$. Player $a+4: 2+2=1+3$ $($Step $5)$. Player $a+5: 2+2=1+3$ $($Step $6)$. Note that player $a+2$ in step $3$ and player $a+3$ in step $4$ can both do $1+2=3$ instead. If they both do so, then player $q-2$ becomes the player who takes the last step. Since the $2$ players in the $2$-player alliance are consecutive, player $q-2$ belongs to the big alliance. Therefore, the big alliance now has the winning strategy, which contradicts with our assumption. By using the stealing strategy, we proved case $2$. \end{itemize} Thus, by our analysis in Case $1$, and Case $2$, Lemma \ref{lem6} is proved. \end{proof} By Lemmas \ref{lem5} and \ref{lem6}, Theorem \ref{thm5} is proved. \subsection{Proof Of Theorem \ref{thm6}} Say we have an alliance $a$ with over two-thirds of the players. For a sufficiently large $n$, it can then force the creation of an arbitrary number of $2$'s eventually, as players on the alliance can each produce at least one per round: if each plays $1 + 1 = 2$, opposed players can each remove only two per round by playing $2 + 2 = 1 + 3$, meaning each round, alliance $a$ can net increase the number of $2$'s by at least one. As such, at some point, alliance $a$ can force the creation of at least $b$ $2$'s. By assumption, alliance $a$ has at least $2b$ subsequent players. Say we have at least $b$ $2$'s and are about to begin the turns of those players. The first $b$ players could all play $1+1=2$, and the next $b$ players could all play $2+2=1+3$, as the first $b$ players would create $b$ $2$'s and the next $b$ players would use up those and the preexisting $b$ $2$'s. 
Relabel the turns so that the turn after this is turn $2b$. Now assume the opposing players have a winning strategy. Then from the resulting state there is a winning strategy for the alliance that moves whenever the opposed players do; in particular, this alliance moves on turns $2b+p_1, 2b+p_2, 2b+p_3, \dots, 2b+p_n$, where $p_1, p_2, \dots$ are chosen to make this the case. Now note that the $2b$ players can instead play as follows: the first $b$ can all play $1+2=3$, resulting in the same state as the one reached after all $2b$ players last time. As such, the same strategy as before gives the win to the alliance that moves on turns $b+p_1, b+p_2, b+p_3, \dots, b+p_n$. By assumption, whenever a player is in the opposition, the player moving $b$ turns earlier is on alliance $a$. We know the players with turns $2b+p_1, 2b+p_2, 2b+p_3, \dots, 2b+p_n$ are opposed to alliance $a$, so the players with turns $b+p_1, b+p_2, b+p_3, \dots, b+p_n$ must be on alliance $a$. As such, this would be a winning strategy for alliance $a$, contradicting the assumption that the opposed players have a winning strategy. Hence alliance $a$ must have a winning strategy.
\subsection{Proof of Theorem \ref{thm7}}
Assume we have an alliance $a$ with $3b$ consecutive players at some point, and $n \geq 2p+4b$. First, let us examine the first turn of the $3b$ players. As $n$ is at least $2p+4b$, regardless of when they start there will be at least enough $1$'s in the game for the first $2b$ of them to play $1+1=2$. If they do so, the next $b$ can all play $2+2=1+3$. Relabel the turns so that the turn after this is turn $3b$. Now assume the opposing players have a winning strategy. Then from the resulting state there is a winning strategy for the alliance that moves whenever the opposed players do; in particular, this alliance moves on turns $3b+p_1, 3b+p_2, 3b+p_3, \dots, 3b+p_n$, where $p_1, p_2, \dots$ are chosen to make this the case. Now note that the $3b$ players can instead play as follows: the first $b$ players can all play $1+1=2$ and the next $b$ players can all play $1+2=3$, resulting in the same state as the one reached after all $3b$ players last time. As such, the same strategy as before gives the win to the alliance that moves on turns $2b+p_1, 2b+p_2, 2b+p_3, \dots, 2b+p_n$. By assumption, whenever a player is in the opposition, the player moving $b$ turns earlier is on alliance $a$. We know the players with turns $3b+p_1, 3b+p_2, 3b+p_3, \dots, 3b+p_n$ are opposed to alliance $a$, so the players with turns $2b+p_1, 2b+p_2, 2b+p_3, \dots, 2b+p_n$ must be on alliance $a$. As such, this would be a winning strategy for alliance $a$, contradicting the assumption that the opposed players have a winning strategy. Hence alliance $a$ must have a winning strategy.
\subsection{Proof of Theorem \ref{thm8}}
Let the $2d$ players in the big alliance all do $1+1=2$ for the first $d$ rounds (one round means every player takes a move, and we define the first round as starting from the big alliance's first move). If in any of the first $d$ rounds the $d$ consecutive players of the small alliance all do $2+2=1+3$, then we can let the second half of the $2d$ players of the big alliance (which is $d$ consecutive players) all do $1+2=3$ instead in the next round. In this case, suppose that the small alliance has a winning strategy.
Then for any winning path there exists a player $q$ from the small alliance who takes the last step. Note that player $q$ belongs to the small alliance, so player $q-d$ belongs to the big alliance. Since player $q$'s last winning step becomes player $q-d$'s last step, the big alliance now has a winning strategy, which contradicts our assumption. Otherwise, in each of the first $d$ rounds there is at least one player from the small alliance who does not play $2+2=1+3$, and that player's move removes at most one $2$. Since the $2d$ big-alliance players create $2d$ $2$'s in such a round while the remaining $d-1$ small-alliance players remove at most $2(d-1)$ of them, at least one $2$ is generated in each round. As a result, after $d$ rounds there are at least $d$ $2$'s available for the stealing. After that, in the $(d+1)$-th round, we let the first half of the big alliance (the first $d$ consecutive players) all do $1+1=2$, and the second half of the big alliance (the last $d$ consecutive players) all do $2+2=1+3$. In this case, suppose that the small alliance has a winning strategy. Then for any winning path there exists a player $q$ from the small alliance who takes the last step. Note that player $q$ belongs to the small alliance, so player $q-d$ belongs to the big alliance. In this round, the first half of the big alliance (the first $d$ consecutive players) can all do $1+2=3$ instead, so player $q$'s last winning step becomes player $q-d$'s last step. As a result, the big alliance now has a winning strategy, which contradicts our assumption. Therefore, the big alliance always has a winning strategy, and the theorem is proved.
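The brute-force computations for small $n$ invoked in the proof of Theorem \ref{thm2} are easy to reproduce. The following is a minimal illustrative Python sketch (the state encoding and function names are ad hoc choices for this sketch); as above, ``player $m$ has a winning strategy'' means that player $m$ can force making the final move against arbitrary play by the other $p-1$ players.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def legal_moves(state):
    """All positions reachable in one move; state[i] = number of copies of
    F_{i+1} on the list, with F_1 = 1, F_2 = 2, F_3 = 3, ... as above."""
    c, K, out = list(state), len(state), []
    for i in range(K - 2):                     # F_i + F_{i+1} = F_{i+2}
        if c[i] and c[i + 1]:
            s = c[:]; s[i] -= 1; s[i + 1] -= 1; s[i + 2] += 1; out.append(tuple(s))
    if c[0] >= 2:                              # 1 + 1 = 2
        s = c[:]; s[0] -= 2; s[1] += 1; out.append(tuple(s))
    if c[1] >= 2:                              # 2 + 2 = 1 + 3
        s = c[:]; s[1] -= 2; s[0] += 1; s[2] += 1; out.append(tuple(s))
    for i in range(2, K - 1):                  # F_i + F_i = F_{i-2} + F_{i+1}
        if c[i] >= 2:
            s = c[:]; s[i] -= 2; s[i - 2] += 1; s[i + 1] += 1; out.append(tuple(s))
    return out

def has_winning_strategy(n, p, m):
    """Can player m (players 1..p move cyclically, player 1 first) force
    taking the last move?  Assumes n >= 2, so the opening position has a move."""
    fib = [1, 2, 3]
    while fib[-1] <= n:                        # allocate enough slots; the total
        fib.append(fib[-1] + fib[-2])          # value n is invariant under moves
    start = tuple([n] + [0] * (len(fib) + 1))

    @lru_cache(maxsize=None)
    def win(state, turn):                      # turn is the 0-indexed mover
        nxt = (turn + 1) % p
        if turn == m - 1:                      # player m needs one good move
            return any(not legal_moves(s) or win(s, nxt)
                       for s in legal_moves(state))
        return all(legal_moves(s) and win(s, nxt)
                   for s in legal_moves(state))

    return win(start, 0)
\end{verbatim}
For instance, \texttt{has\_winning\_strategy(10, 3, 2)} asks whether player $2$ can force the win in the $3$-player game on $n=10$; by Theorem \ref{thm2} the answer should be negative for every player once $n \geq 5$. The recursion is well founded because, by the termination theorem quoted in the introduction, the move graph of the game is acyclic.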
{ "timestamp": "2020-10-22T02:07:10", "yymm": "2009", "arxiv_id": "2009.03708", "language": "en", "url": "https://arxiv.org/abs/2009.03708", "abstract": "Edouard Zeckendorf proved that every positive integer $n$ can be uniquely written \\cite{Ze} as the sum of non-adjacent Fibonacci numbers, known as the Zeckendorf decomposition. Based on Zeckendorf's decomposition, we have the Zeckendorf game for multiple players. We show that when the Zeckendorf game has at least $3$ players, none of the players have a winning strategy for $n\\geq 5$. Then we extend the multi-player game to the multi-alliance game, finding some interesting situations in which no alliance has a winning strategy. This includes the two-alliance game, and some cases in which one alliance always has a winning strategy.%We examine what alliances, or combinations of players, can win, and what size they have to be in order to do so. We also find necessary structural constraints on what alliances our method of proof can show to be winning. Furthermore, we find some alliance structures which must have winning strategies.%We also extend the Generalized Zeckendorf game from $2$-players to multiple players. We find that when the game has $3$ players, player $2$ never has a winning strategy for any significantly large $n$. We also find that when the game has at least $4$ players, no player has a winning strategy for any significantly large $n$.", "subjects": "Number Theory (math.NT)", "title": "Winning Strategy for the Multiplayer and Multialliance Zeckendorf Games", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713891776497, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7080104894857979 }
https://arxiv.org/abs/0906.1397
Resilient pancyclicity of random and pseudo-random graphs
A graph $G$ on $n$ vertices is \textit{pancyclic} if it contains cycles of length $t$ for all $3 \leq t \leq n$. In this paper we prove that for any fixed $\epsilon>0$, the random graph $G(n,p)$ with $p(n)\gg n^{-1/2}$ asymptotically almost surely has the following resilience property. If $H$ is a subgraph of $G$ with maximum degree at most $(1/2 - \epsilon)np$ then $G-H$ is pancyclic. In fact, we prove a more general result which says that if $p \gg n^{-1+1/(l-1)}$ for some integer $l \geq 3$ then for any $\epsilon>0$, asymptotically almost surely every subgraph of $G(n,p)$ with minimum degree greater than $(1/2+\epsilon)np$ contains cycles of length $t$ for all $l \leq t \leq n$. These results are tight in two ways. First, the condition on $p$ essentially cannot be relaxed. Second, it is impossible to improve the constant 1/2 in the assumption for the minimum degree. We also prove corresponding results for pseudo-random graphs.
\section{Introduction} \label{introduction_section} A typical result in graph theory can be stated as ``Under certain conditions, a graph $G$ possesses a property $\mathcal{P}$''. Once this type of a result is established, it is natural to ask ``How strongly does $G$ possess $\mathcal{P}$?''. In fact, several important results in extremal graph theory can be viewed as an answer to this question for various graph properties (we will provide some concrete examples after introducing necessary definitions). In this paper we will study this question in the context of random and pseudo-random graphs. The random graph model we consider is the binomial random graph $G(n,p)$. The random graph $G(n,p)$ denotes the probability space whose points are graphs with vertex set $[n] = \{1,\ldots,n\}$ where each pair of vertices forms an edge randomly and independently with probability $p$. We say that $G(n,p)$ possesses a graph property $\mathcal{P}$ \textit{asymptotically almost surely}, or a.a.s. for brevity, if the probability that $G(n,p)$ possesses $\mathcal{P}$ tends to 1 as $n$ goes to infinity. The pseudo-random graphs we will study are $(n,d,\lambda)$-graphs with $\lambda = o(d)$, where an $(n,d,\lambda)$-graph is a $d$-regular graph on $n$ vertices whose second largest (in absolute value) eigenvalue of the adjacency matrix is bounded by $\lambda$. The abundance of structure and results arising from this simple looking definition is quite surprising (see, e.g., \cite{MR2223394} for more details). A graph property is called \textit{monotone increasing (decreasing)} if it is preserved under edge addition (deletion). The main concept studied in this paper and briefly outlined above is that of \textit{resilience}. Formally, following \cite{MR2462249}, we define: \begin{dfn} Let $\mathcal{P}$ be a monotone increasing (decreasing) graph property. \begin{itemize} \item[(i)] (Global resilience) The global resilience of $G$ with respect to $\mathcal{P}$ is the minimum number $r$ such that by deleting (adding) $r$ edges from $G$ one can obtain a graph not having $\mathcal{P}$. \item[(ii)] (Local resilience) The local resilience of a graph $G$ with respect to $\mathcal{P}$ is the minimum number $r$ such that by deleting (adding) at most $r$ edges at each vertex of $G$ one can obtain a graph not having $\mathcal{P}$. \end{itemize} \end{dfn} Using this terminology, one can state the celebrated theorem of Tur\'{a}n \cite{MR0018405} as ``The complete graph on $n$ vertices $K_n$ has global resilience $\frac{n^2}{2r} - \frac{n}{2}$ with respect to being $K_{r+1}$-free''. Another classical theorem, that of Dirac (see, e.g., \cite{MR2159259}) can be rephrased as ``$K_n$ has local resilience $\left\lfloor n/2 \right\rfloor$ with respect to Hamiltonicity''. As these examples suggest, the notion of resilience lies in the center of extremal graph theory. In \cite{MR2462249}, Sudakov and Vu have initiated the systematic study of global and local resilience of random and pseudo-random graphs. They obtained resilience results with respect to various properties such as perfect matching, hamiltonicity, chromatic number and having a nontrivial automorphism (this result appeared in their earlier paper with Kim \cite{MR1945368}). For example, they showed that if $p > \log^4 n / n$ then a.a.s. any subgraph of $G(n,p)$ with minimum degree $(1/2+o(1))np$ is hamiltonian. 
Notice that this result can be viewed as a generalization of Dirac's theorem mentioned above, since the complete graph is the random graph $G(n,p)$ with $p=1$. As we will see, this connection is very natural, and most of the resilience results can be viewed as generalizations of classical graph theory results to random and pseudo-random graphs. There are several other papers that obtained resilience-type results. Krivelevich and Frieze \cite{MR2430433} gave a (not tight) lower bound on the resilience of $G(n,p)$ with respect to being hamiltonian in the range of $p$ not covered by the above-mentioned result of Sudakov and Vu. Dellamonica, Kohayakawa, Marciniszyn and Steger \cite{MR2383452} studied the global resilience of random graphs with respect to containing a cycle of length at least $(1-\alpha)n$ for a fixed $\alpha$ as a generalization of a theorem of Woodall \cite{MR0318000}. Recently, Ben-Shimon, Krivelevich and Sudakov \cite{MR000000} investigated the resilience of random regular graphs with respect to being hamiltonian.

A graph on $n$ vertices is called \textit{pancyclic} if it contains cycles of length $t$ for all $3 \leq t \leq n$. In this paper, we study the resilience of random and pseudo-random graphs with respect to this property. Similarly to the above-mentioned results, our result can also be viewed as a generalization of a classical result in graph theory -- that of Bondy \cite{MR0285424}. It says that if $G$ is a graph on $n$ vertices with minimum degree greater than $n/2$, then $G$ is pancyclic. The corresponding theorems we prove are the following.
\begin{thm} \label{pancyclic_thm3} If $p \gg n^{-1/2}$ then $G(n,p)$ asymptotically almost surely has local resilience $(1/2 + o(1))np$ with respect to being pancyclic. \end{thm}
\begin{thm} \label{pancyclic_pseudo_thm00} Let $G=(V,E)$ be an $(n,d,\lambda)$-graph satisfying $d^2 /n \gg \lambda$. Then $G$ has local resilience $(1/2 + o(1))d$ with respect to being pancyclic. \end{thm}
Our results are asymptotically tight in two ways. First, one cannot improve the constant $1/2$, since both random and pseudo-random graphs can be made bipartite by randomly partitioning the vertices into two equal-size parts and deleting the edges inside the parts. In this way we typically obtain a subgraph with minimum degree about one half of the original degree which does not contain any odd cycles. Second, the restrictions on the parameters are also essentially tight. To see this for random graphs, note that if $p \ll n^{-1/2}$ then typically each vertex has degree $(1+o(1))np$ and the number of triangles containing each vertex is at most $O(n^2p^3) \ll np$. Therefore deleting the edges of all triangles leaves all degrees essentially unchanged. For pseudo-random graphs this can be derived from a variant of the construction of Alon \cite{MR1302331} (see, e.g., \cite{MR2223394}) which gives a triangle-free $(n,d,\lambda)$-graph with $d^3 / \lambda^2 = \Theta (n)$.

We can also prove more general results for sparser graphs. Let the \textit{girth} of a graph be the length of its shortest cycle and the \textit{circumference} be the length of its longest cycle. Brandt, Faudree and Goddard \cite{MR1611825} called a graph \textit{weakly pancyclic} if it contains cycles of length $t$ where $t$ ranges from its girth up to its circumference. The following theorems, generalizing Theorems \ref{pancyclic_thm3} and \ref{pancyclic_pseudo_thm00}, are motivated by this concept of weak pancyclicity.
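As a brief computational aside before the statements: pancyclicity of a concrete graph can be verified by exhaustive search, which is convenient for experimenting with the examples above on small instances. The following minimal illustrative Python sketch (ad hoc function names; exponential worst-case running time, so it is meant only for small $n$) looks for a cycle of each length from $3$ to $n$.
\begin{verbatim}
import itertools, random

def has_cycle(adj, t):
    """True iff the graph given by adjacency sets contains a cycle of
    length exactly t; each cycle is searched from its minimum vertex."""
    def dfs(start, v, depth, visited):
        if depth == t:
            return start in adj[v]     # close the cycle back to its minimum vertex
        for u in adj[v]:
            if u > start and u not in visited:
                visited.add(u)
                if dfs(start, u, depth + 1, visited):
                    return True
                visited.remove(u)
        return False
    return any(dfs(s, s, 1, {s}) for s in range(len(adj)))

def is_pancyclic(adj):
    n = len(adj)
    return all(has_cycle(adj, t) for t in range(3, n + 1))

# Sample G(n,p) and test it (seeded for reproducibility).
n, p, rng = 10, 0.5, random.Random(0)
adj = [set() for _ in range(n)]
for u, v in itertools.combinations(range(n), 2):
    if rng.random() < p:
        adj[u].add(v); adj[v].add(u)
print(is_pancyclic(adj))
\end{verbatim}
Restricting the search to vertices larger than the starting vertex ensures that each cycle is examined only from its minimum vertex, which prunes the search without losing any cycles.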
\begin{thm} \label{pancyclic_thm4} For any fixed integer $l \geq 3$, if $p \gg n^{-1+1/(l-1)}$ then $G(n,p)$ asymptotically almost surely has local resilience $(1/2 + o(1))np$ with respect to containing cycles of length $t$ for all $l \leq t \leq n$. \end{thm}
\begin{thm} \label{pancyclic_pseudo_thm01} Let $k$ be either $3$ or an even integer with $k \geq 4$, and let $G=(V,E)$ be an $(n,d,\lambda)$-graph satisfying $d^{k-1} /n \gg \lambda^{k-2}$. Then $G$ has local resilience $(1/2 + o(1))d$ with respect to containing cycles of length $t$ for all $k \leq t \leq n$. \end{thm}
The above results are not exactly weak pancyclicity results, since an adversary allowed to delete half of the edges at each vertex might decide to keep a $3$-cycle while removing all cycles of lengths $4$ through $l-1$. Still, it is best to view these results in the context of weak pancyclicity. As before, the result for random graphs is asymptotically tight. Indeed, note that if $p \ll n^{-1+1/(l-1)}$ then typically each vertex of the random graph has degree $(1+o(1))np$ and the number of cycles of length $l$ containing each vertex is at most $O(n^{l-1}p^l) \ll np$. Therefore we can delete a few edges at each vertex to remove every $l$-cycle. We suspect that our result for pseudo-random graphs is asymptotically tight as well. Note that the assumption $d^{k-1} /n \gg \lambda^{k-2}$ in particular implies $\lambda = o(d)$, since $d^{k-2} \geq d^{k-1} /n \gg \lambda^{k-2}$, so even when we do not explicitly mention $\lambda = o(d)$, we are always in this situation. Although odd integers $k > 3$ are omitted from the pseudo-random graph result, in this case $d^{k-1} /n \gg \lambda^{k-2}$ implies $d^{k} /n \gg \lambda^{k-1}$, and so, using the result for $k+1$ (which is now even), we can find cycles of length $t$ for all $k+1 \leq t \leq n$. We believe that the result of Theorem \ref{pancyclic_pseudo_thm01} is valid also for odd $k\ge 5$, but at present we do not have enough tools to verify it. We will address this point in more detail in the concluding remarks.

The rest of this paper is organized as follows. In Section \ref{preliminaries_section} we collect some known results which we need later to prove our main theorems. In Section \ref{randomgraphproperty_section} we establish properties of random graphs and use them in Section \ref{proofofrandomgraphthm_section} to prove Theorem \ref{pancyclic_thm4}. In Sections \ref{pseudorandomgraphproperty_section} and \ref{proofofpseudorandomgraphthm_section} we follow the same pattern to prove the pseudo-random graph analog, Theorem \ref{pancyclic_pseudo_thm01}. The last section contains some concluding remarks and open problems. \\
\noindent\textbf{Notation.} $G=(V,E)$ denotes a graph with vertex set $V$ and edge set $E$. We use $v \sim w$ to indicate that $v,w$ are adjacent. $\Delta(G), \delta(G)$ denote the maximum degree and the minimum degree of $G$, respectively. For a set $X \subset V$, let $N(X)$ be the collection of all vertices $v$ which are adjacent to at least one vertex in $X$. If $X=\{u\}$ is a singleton set, we denote its neighborhood by $N(u)$. Let $N^{(0)}(v) := \{ v \}$ and let $N^{(k)}(v)$ be the set of vertices at distance exactly $k$ from $v$. This can also be defined recursively as $N^{(k)}(v) = N(N^{(k-1)}(v)) \backslash (N^{(k-1)}(v) \cup N^{(k-2)}(v))$. Note that $N^{(1)}(v) = N(v)$. For a set $X$, we denote by $E(X)$ the set of edges in the induced subgraph $G[X]$ and by $e(X)= |E(X)|$ its size.
Similarly, for two sets $X$ and $Y$, we denote by $E(X,Y)$ the set of ordered pairs $(x,y) \in E$ such that $x \in X$ and $y \in Y$, and set $e(X,Y) = |E(X,Y)|$. Note that $e(X,X) = 2e(X)$. If we have several graphs, then the graph we are currently working with will be indicated by a subscript. For example, $N^{(k)}_G(v)$ is the $k$-th neighborhood of $v$ in the graph $G$. A cycle of length $l$ is denoted by $C_l$.

We also utilize the following standard asymptotic notation. For two functions $f(n)$ and $g(n)$, write $f(n)=\Omega(g(n))$ if there exists a constant $C>0$ such that $\liminf_{n \rightarrow \infty} f(n)/g(n) \geq C$. If there is a subscript, as in $\Omega_\epsilon$, this means that the constant $C$ may depend on $\epsilon$. We write $f(n)=o(g(n))$ or $f(n) \ll g(n)$ if $\limsup_{n\rightarrow \infty}f(n)/g(n) = 0$. Also, $f(n)=O(g(n))$ if there exists a positive constant $C>0$ such that $\limsup_{n\rightarrow \infty} f(n)/g(n) \leq C$. Throughout the paper, $\log$ denotes the natural logarithm. To simplify the presentation, we often omit floor and ceiling signs whenever these are not crucial and make no attempt to optimize the absolute constants involved. We also assume that the order $n$ of all graphs tends to infinity and is therefore sufficiently large whenever necessary.
\section{Preliminaries} \label{preliminaries_section}
In this section we collect various results to be used later in the proofs of the theorems.
\subsection{Resilience}
The local resilience of random graphs with respect to being hamiltonian \cite{MR2462249} and containing fixed cycles (\cite{MR1339852}, \cite{MR1394514}, \cite{MR2145507}) has been studied before, and our arguments for the proof of the main theorems will use these results. The following results about the local resilience of random and pseudo-random graphs with respect to hamiltonicity were proved in \cite{MR2462249}.
\begin{thm} \label{pancyclic_thm1} For every fixed $\epsilon > 0$, if $p \geq \log^4n /n$ then the random graph $G(n,p)$ with probability $1 - o(n^{-1})$ has local resilience at least $(1/2 - \epsilon)np$ with respect to being hamiltonian. \end{thm}
\begin{rem} The above formulation is stronger than the original statement since it explicitly states the success probability to be $1 - o(n^{-1})$. This conclusion follows from the original argument if one carefully performs the error probability calculations. We will need this stronger estimate on the success probability for our application. \end{rem}
During the proof we will work with graphs that are similar to $(n,d,\lambda)$-graphs but are not necessarily regular. The particular graphs we will encounter are graphs $G=(V,E)$ on $n$ vertices that have minimum degree at least $(1 - \epsilon)d$ and satisfy the constraint
$$\big|e(X,Y) - \frac{d}{n}|X||Y|\big| \leq \lambda \sqrt{|X||Y|} \mbox{ for all } X,Y \subset V $$
on the number of edges between sets. We will call such graphs \textit{$(n,\epsilon, d,\lambda)$-graphs}. Observe that $(n,0,d,\lambda)$-graphs are a more general concept than $(n,d,\lambda)$-graphs, as the definition does not place specific assumptions on graph eigenvalues.
\begin{thm} \label{pancyclic_pseudo_thm2} Fix $\epsilon, \epsilon'$ with $0 \leq 5\epsilon' < \epsilon$ and an integer $k \geq 3$, and let $G$ be an $(n,\epsilon', d,\lambda)$-graph satisfying $d^{k-1}/n = \omega(n) \lambda^{k-2}$ for an arbitrary function $\omega(n)$ increasing to infinity. Then for large enough $n$, $G$ has local resilience at least $(1/2 - \epsilon)d$ with respect to being hamiltonian.
\end{thm}
\begin{rem} The theorem above does not appear in the original paper \cite{MR2462249}, and unfortunately we cannot directly apply the result of Sudakov and Vu. However, they proved a general theorem which can be modified to work under the assumption above; the necessary modification is given in the Appendix. \end{rem}
Next we state the results of Haxell, Kohayakawa and {\L}uczak (\cite{MR1339852}, \cite{MR1394514}) about the local resilience of random graphs with respect to containing a fixed cycle $C_l$, and of Sudakov, Szab\'{o} and Vu \cite{MR2145507} about the local resilience of pseudo-random graphs with respect to containing a triangle.
\begin{thm} \label{pancyclic_thm2} For any fixed integer $l \geq 3$ and $\epsilon > 0$, there exists a constant $C=C(l, \epsilon)$ such that, if $p \geq C n^{-1+1/(l-1)}$ then $G(n,p)$ a.a.s. has local resilience at least $(1/2 - \epsilon)np$ with respect to containing $C_l$. \end{thm}
\begin{thm} \label{pancyclic_pseudo_thm1} Let $G$ be an $(n,d,\lambda)$-graph satisfying $d^2 / n \geq \omega(n) \lambda$ for an arbitrary function $\omega(n)$ tending to infinity. Then $G$ has local resilience $(1/2 + o(1))d$ with respect to containing a triangle. \end{thm}
\begin{rem} Both theorems were originally stated in a global resilience form, but for convenience we state them above in a slightly weaker local resilience form. Also, the conclusion of Theorem \ref{pancyclic_thm2} (as stated) for even cycles is weaker than in the original paper. \end{rem}
\subsection{Extremal Graph Theory}
The following simple but useful lemma allows one to find a subgraph of large minimum degree in a graph with large average degree (see, e.g., \cite{MR2159259}, Proposition 1.2.2).
\begin{lemma} \label{lemma_findingmindegree} Let $G=(V,E)$ be a graph on $n$ vertices with at least $dn/2$ edges. Then $G$ contains a subgraph $G' \subset G$ with minimum degree at least $d/2$. \end{lemma}
The next theorem is a classical result of Bondy and Simonovits \cite{MR0340095} about even cycles in graphs.
\begin{thm} \label{bondysimonovits1_thm} Let $k$ be a positive integer and $G=(V,E)$ be a graph on $n$ vertices satisfying $|E| > 90kn^{1 + 1/k}$. Then $G$ contains a cycle of length $2k$. \end{thm}
We will also need the celebrated P\'{o}sa rotation-extension lemma (see \cite{MR2321240}, Ch. 10, Problem 20). This lemma will help us find long paths in graphs with expansion properties.
\begin{lemma} \label{lemma_pancyclic3} Let $G=(V,E)$ be a graph such that $|N(X)\setminus X| \geq 2|X| -1$ for all $X \subset V$ with $|X| \leq t$. Then for any vertex $v \in V$ there exists a path of length $3t-2$ in $G$ that has $v$ as an endpoint. \end{lemma}
\subsection{Concentration}
The following two well-known concentration results (see, for example, \cite{MR1782847}, Theorems 2.3 and 2.10) will be used several times during the proof. We denote by $Bi(n,p)$ a binomial random variable with parameters $n$ and $p$.
\begin{thm}(Chernoff inequality) If $X \sim Bi(n,p)$ and $\epsilon>0$, then
\[ P\big(|X - \mathbb{E}[X]| \geq \epsilon \mathbb{E}[X]\big) \leq e^{-\Omega_\epsilon(\mathbb{E}[X])}. \]
\end{thm}
Let $m,n$ and $N$ be positive integers with $m,n < N$, let $X = [N], X' = [n]$, and let $A$ be an $m$-element subset of $X$ chosen uniformly at random. Then the distribution of the random variable $|A \cap X'|$ is called the \textit{hypergeometric distribution} with parameters $N, n$ and $m$.
\begin{thm} \label{lemma_hypergeometric} Let $X$ have the hypergeometric distribution with parameters $N, n$ and $m$. Then,
\[ P\big(|X - \mathbb{E}[X]| \geq \epsilon \mathbb{E}[X]\big) \leq e^{-\Omega_\epsilon(\mathbb{E}[X])}. \]
\end{thm}
\section{Properties of Random Graphs} \label{randomgraphproperty_section}
In this section we establish properties of random graphs to be used later to prove Theorem \ref{pancyclic_thm4}. First we formally show a rather expected monotonicity property -- (relative) local resilience with respect to cycles can only grow with the edge probability $p(n)$.
\begin{prop} \label{probrestriction_prop} Let $l$ be fixed and let $p'=p'(n)$ satisfy: $0 < p' \leq p \leq 1$ and $np' \gg \log n$. If $G(n, p')$ a.a.s. has local resilience at least $(1/2 - \epsilon/2)np'$ with respect to containing cycles of length $t$ for all $l \leq t \leq n$, then $G(n,p)$ a.a.s. has local resilience at least $(1/2 - \epsilon)np$ with respect to the same property. \end{prop}
\begin{pf} Let $\mathcal{P}$ be the property of having local resilience at least $(1/2 - \epsilon/2)np'$ with respect to containing cycles of length $t$ for all $l \leq t \leq n$. Define $q = p' / p$ and consider the following two-round process of exposing the edges of $G(n,p')$. In the first round, every edge appears with probability $p$ (call this graph $G_1$). Then, in the second round, every edge that appeared in the first round will remain with probability $q$ and will be deleted with probability $1-q$ (call this graph $G_2$). Then $G_1$ has the same distribution as $G(n,p)$ and $G_2$ has the same distribution as $G(n,p')$. By our assumption, we know that $G_2$ a.a.s. has property $\mathcal{P}$. Now define $X$ to be the event that $G_1$ satisfies $P(G_2 \notin \mathcal{P} | G_1) \geq 1/2$. Then $(1/2)P(X) \leq P(G_2 \notin \mathcal{P})=o(1)$ and therefore $P(X) = o(1)$. Thus a.a.s. in $G(n,p)$, $P(G_2 \notin \mathcal{P} | G_1) < 1/2$, or in other words, $P(G_2 \in \mathcal{P} | G_1) \geq 1/2$. Let $\mathcal{A}$ be the collection of graphs $G_1\in G(n,p)$ having this property. Now, given any subgraph $H$ of $G_1$ with maximum degree at most $(1/2 - \epsilon)np$, select every edge with probability $q$ to get a graph $H'$. Then, by the Chernoff inequality, each vertex of $H'$ has degree at most $(1/2 - \epsilon/2)np'$ with probability at least $1 - e^{-\Omega_\epsilon(np')} = 1 - o(n^{-1})$. Therefore $H'$ has maximum degree at most $(1/2 - \epsilon/2)np'$ with probability at least $1-o(1)$. Finally, to put things together, condition on the event that $G=G(n,p) \in \mathcal{A}$. By the first part of the proof, a.a.s. $G \in \mathcal{A}$, so if we can prove the claim under this assumption then we are done. Given a subgraph $H \subset G$ with maximum degree at most $(1/2 - \epsilon)np$, sample every edge of $G$ with probability $q$ to obtain subgraphs $H' \subset G' \subset G$. Since $G \in \mathcal{A}$, we know that $P(G' \in \mathcal{P} | G) \geq 1/2$, and by the second part of the proof we know that $P\big( \Delta(H') \leq (1/2 - \epsilon/2)np'\big) \geq 1-o(1)$. Thus, these two events have a non-empty intersection and therefore it is possible to find subgraphs $H' \subset G' \subset G$ such that $G' \in \mathcal{P}$ and $\Delta(H') \leq (1/2 - \epsilon/2)np'$. Then $G' - H'$ (and hence $G - H$) must contain cycles of length $t$ for all $l \leq t \leq n$.
\end{pf} \begin{rem} Note that there is nothing special about the property of ``containing cycles'' and in fact if for some $\log n/n\ll p' \leq p$ and $\alpha > 0$, we have a monotone increasing graph property $\mathcal{Q}$ such that $G(n,p')$ a.a.s. has local resilience at least $(\alpha+o(1))np'$ with respect to having property $\mathcal{Q}$ then $G(n,p)$ a.a.s. has local resilience at least $(\alpha+o(1))np$ with respect to having property $\mathcal{Q}$. \end{rem} From now on we may assume that $p = Cn^{-1 + 1/(l-1)}$ instead of $p \geq C n^{-1 + 1/(l-1)}$ since if we can prove the theorem under this condition then we can extend it to the whole range using the previous proposition. Moreover we will assume that the constant $C$ is large enough without further mention. In the next two lemmas we establish some expansion properties of random graphs. \begin{lemma} \label{pancyclic_random_lemma10} Fix a positive integer $l$ and $0< \epsilon <1$ and let $G=(V,E)$ be a random graph $G(n,p)$ with $p = C n^{-1+1/(l-1)}$. Then a.a.s. every subset $X \subset V$ of size $|X| \leq (2C)^{l-1}n^{-1}p^{-2}$ satisfies $(1 - \epsilon)|X|np \leq |N(X)| \leq (1 + \epsilon)|X|np$. \end{lemma} \begin{pf} Fix a set $X \subset V$ of size $|X| \leq (2C)^{l-1}n^{-1}p^{-2}$. For each $v \in V$ let $Y_v$ be the indicator random variable of the event that $v \in N(X)$. Since $|X|p \leq (2C)^{l-1}n^{-1}p^{-1} = 2^{l-1}C^{l-2}n^{-1/(l-1)} = o(1)$, we have $P(Y_v = 1) = 1 - (1-p)^{|X|} = (1+o(1))|X|p$. Consider the random variable $Y=\sum_{v \in V \backslash X} Y_v= |N(X)\setminus X|$ and note that \[ \mathbb{E}[Y] = \sum_{v \in V \backslash X} P(Y_v =1) = (n-|X|)(1+o(1))|X|p = (1+o(1))|X|np. \] Moreover, the variables $Y_v$ for $v \in V \backslash X$ are mutually independent, so we can apply the Chernoff inequality to get \[ P\big( |Y - \mathbb{E}[Y]| \geq (\epsilon/3) \mathbb{E}[Y]\big) \leq e^{-\Omega_\epsilon(\mathbb{E}[Y])}. \] Combining this with the estimate on $\mathbb{E}[Y]$, we have \[ P\big( (1 - 2\epsilon/3) |X|np \leq Y \leq (1 + 2\epsilon/3) |X|np \big) \geq 1-e^{-\Omega_\epsilon(\mathbb{E}[Y])} = 1-e^{-\Omega_\epsilon(|X|np)} \] for large enough $n$. Finally, note that $0 \leq |N(X)| - Y \leq |X| = o(|X|np)$ and thus for large enough $n$, we have $(1 - \epsilon) |X|np \leq |N(X)| \leq (1 + \epsilon)|X|np$ with probability at least $1-e^{-\Omega_\epsilon(|X|np)}$. Taking the union bound over all choices of $X$, we get \begin{eqnarray} \sum_{1 \leq |X| \leq (2C)^{l-1}n^{-1}p^{-2}} e^{-\Omega_\epsilon(|X|np)} &=& \sum_{1 \leq k \leq (2C)^{l-1}n^{-1}p^{-2}} \binom{n}{k} e^{-\Omega_\epsilon(knp)} \leq \sum_{1 \leq k \leq (2C)^{l-1}n^{-1}p^{-2}} \left(\frac{en}{k} e^{-\Omega_\epsilon(np)}\right)^k \nonumber \\ &\leq& \sum_{1 \leq k \leq (2C)^{l-1}n^{-1}p^{-2}} ne^{-\Omega_\epsilon(np)} \leq n^2e^{-\Omega_\epsilon(np)} = o(1). \nonumber \end{eqnarray} This implies the assertion of the lemma. \end{pf} \begin{lemma} \label{pancyclic_random_lemma1} Fix a positive integer $l$ and $0< \epsilon <1$ and let $G=G(n,p)$ be a random graph with $p = C n^{-1+1/(l-1)}$. Then a.a.s. $G$ has the following property. If $H$ is a subgraph of $G$ with maximum degree at most $(1/2 - \epsilon)np$ and $G' = G-H$, then every set $X$ with $|X| \geq 2^{-l}(np)^{l-2}$ satisfies $|N_{G'}(X)| \geq (1/2 + \epsilon/2)n$. \end{lemma} \begin{pf} It is enough to show that a.a.s. for any $H$ as above the claim holds for every set $X$ of size exactly $|X| = 2^{-l}(np)^{l-2}$, since $|N_{G'}(X)|$ can only grow when $X$ grows. Fix a set $X$ of size $2^{-l}(np)^{l-2}$ and let $Y \subset V$ be a set of size $|Y| \geq (1/2 - \epsilon/2)n$ disjoint from $X$.
Then we have $\mathbb{E}[e_G(X,Y)] = |X||Y|p >2^{-l-2}(np)^{l-1}=2^{-l-2}C^{l-1}n$ and by the Chernoff inequality, \begin{eqnarray} P\Big(\big|e_G(X, Y) - |X||Y|p\big| \geq (\epsilon/4)|X||Y|p\Big)&<& e^{-\Omega_\epsilon(|X||Y|p)} \leq e^{-\Omega_\epsilon(2^{-l-2}C^{l-1}n)}. \label{pancyclic_lemma1_eqn3} \end{eqnarray} Thus with probability at least one minus the right hand side of (\ref{pancyclic_lemma1_eqn3}) we have $e_G(X,Y) \geq (1 - \epsilon/4)|X||Y|p \geq (\frac{1}{2} - 3\epsilon/4)|X|np$. Since there are at most $2^{2n}$ possible choices of the pairs $X, Y$ and the right hand side of (\ref{pancyclic_lemma1_eqn3}) is $\ll 2^{-2n}$ for large enough $C$, we a.a.s. have $e_G(X,Y) > (\frac{1}{2} - \epsilon)|X|np$ for every pair $X,Y$ as above. On the other hand, we know that $e_H(X,Y) \leq (1/2 - \epsilon)np|X|$. Therefore a.a.s. $e_{G'}(X,Y) \geq e_G(X,Y) - e_H(X,Y) > 0$. This implies that $N_{G'}(X) \cap Y \neq \emptyset$ for all $Y$ with $|Y| \geq (1/2 - \epsilon/2)n$. Thus $|N_{G'}(X)| \geq n - (1/2 - \epsilon/2)n = (1/2 + \epsilon/2)n$. \end{pf} We also need the following lemma that proves an expansion property for subgraphs of $G(n,p)$ with large minimum degree. \begin{lemma} \label{lemma_pancyclic2} If $p = C n^{-1+1/(l-1)}$ and $\epsilon' > 0$ then a.a.s. every subgraph $G' \subset G(n,p)$ with minimum degree at least $\epsilon' np$ satisfies the following expansion property. For all $X \subset V$ with $|X| \leq \frac{1}{80}\epsilon' n$, $|N_{G'}(X)\backslash X| \geq 2|X|$. \end{lemma} \begin{pf} Assume to the contrary that there exists a set $X \subset V$ such that $|X| \leq \frac{1}{80}\epsilon' n$ and $|N_{G'}(X) \backslash X| < 2|X|$, and let $Y = X \cup N_{G'}(X)$ so that $|Y| \leq 3|X| \leq \frac{1}{20} \epsilon' n$. Then by the minimum degree condition we know that $e_{G'}(Y) \geq \frac{1}{2} |X| \epsilon' np \geq \frac{1}{8}|Y| \epsilon' np$. Now we will estimate the probability that such an event can happen for a set $Y$ with $|Y| = a$. We can restrict the range to $\epsilon' np \leq a \leq \frac{1}{20} \epsilon' n$ since $G'$ has minimum degree at least $\epsilon' np$. The probability that there exists a set of size $a$ which spans at least $\frac{1}{8}a \epsilon' np$ edges is at most \begin{eqnarray} \binom{n}{a} \binom{a(a-1)/2}{a\epsilon'np/8} p^{\epsilon' a np/8 } &\leq& \left(\frac{en}{a}\right)^a \left(\frac{4ea}{\epsilon' n p}\right)^{\epsilon' a n p/8} p^{\epsilon' a np/8 } = \bigg(\frac{en}{a} \Big(\frac{4ea}{\epsilon' n }\Big)^{\epsilon' np/8} \bigg)^{a} \nonumber \\ &\ll& \bigg( \Big(\frac{e}{4}\Big)^{\epsilon' np/8} \bigg)^{a} \ll n^{-2}\,. \nonumber \end{eqnarray} Summing over all $\epsilon' np \leq a \leq \frac{1}{20}\epsilon' n$ we get that the probability that there is a set violating the assertion of the lemma is $o(1)$. \end{pf} \section{Proof of Theorem \ref{pancyclic_thm4} } \label{proofofrandomgraphthm_section} In this section we prove Theorem \ref{pancyclic_thm4}. First we need an additional lemma which gives us more properties of a random graph with deleted edges. \begin{lemma} \label{lemma_pancyclic1} For every integer $l\geq 3$ and $\epsilon>0$ there exists $C=C(l,\epsilon)$ such that if $p = C n^{-1+1/(l-1)}$ then $G=G(n,p)$ a.a.s. has the following properties. Let $H$ be a subgraph of $G$ with maximum degree at most $(\frac{1}{2} - \epsilon)np$, $G'=G-H$ and $v \in V$, then \begin{itemize} \item[(a)] For every $1 \leq i \leq l-2$, $2^{-i} (np)^{i} \leq |N^{(i)}_{G'}(v)| \leq (1 + \epsilon)^i(np)^{i}$.
\item[(b)] $|N^{(l-1)}_{G'}(v)| \geq (\frac{1}{2} + \frac{\epsilon}{3})n$. \item[(c)] For every vertex $w \in V$ whose distance from $v$ is at least $l-2$, $|N^{(l-2)}_G(v) \cap N_G(w)| \leq \log n$. \end{itemize} \end{lemma} \begin{pf} Let $v$ be a vertex of $G$ and for simplicity of notation let $Y_j = N_{G'}^{(j)}(v)$ for $j=1,\ldots, l-2$. (a) By using induction we will show that $2^{-i} (np)^{i} \leq |Y_i| \leq (1+\epsilon)^i(np)^{i}$ for all $1 \leq i \leq l-2$. For the initial case $i=1$, by Lemma \ref{pancyclic_random_lemma10} we have $|Y_1| \leq |N^{(1)}_{G}(v)| \leq (1 + \epsilon)np$. By the same lemma we also know that $|N_G^{(1)}(v)| \geq (1 - \epsilon)np$. Therefore $|Y_1| \geq |N^{(1)}_{G}(v)| - |N^{(1)}_{H}(v)| \geq np/2$. Now assume that we have established the claim up to some $i \leq l-3$ and let us look at the case $i+1$. First notice that $(np)^{l-3} = C^{l-1}n^{-1}p^{-2}$ by the choice of $p$, so $(1+\epsilon)^{l-3}(np)^{l-3} \leq (2C)^{l-1}n^{-1}p^{-2}$ and we can apply Lemma \ref{pancyclic_random_lemma10}. Then the upper bound easily follows as $|Y_{i+1}| \leq |N_{G}(Y_i)| \leq (1+\epsilon)np|Y_i| \leq (1+\epsilon)^{i+1}(np)^{i+1}$ by that lemma and the inductive hypothesis. To obtain the lower bound, we use that $|N_{H}(Y_i)| \leq \Delta(H)|Y_i|$ and that $|N_{G}(Y_i)| \geq (1 - \epsilon/2)np|Y_i|$ by Lemma \ref{pancyclic_random_lemma10} (applied with $\epsilon/2$ in place of $\epsilon$). Therefore, \[ |N_{G'}(Y_i)| \geq |N_{G}(Y_i)| - |N_{H}(Y_i)| \geq (1 - \epsilon/2)np|Y_i| - (1/2 - \epsilon)np|Y_i| = (1/2 + \epsilon/2)np|Y_i|. \] Recall the recursive formula $Y_{i+1} = N_{G'}(Y_i) \setminus (Y_i \cup Y_{i-1})$. By the inductive hypothesis it is easy to check that $|Y_{i-1}| = o(|Y_i|)$. Thus, \[ |Y_{i+1}| \geq |N_{G'}(Y_i)| - |Y_i| - |Y_{i-1}| \geq (1/2 + \epsilon/2)np|Y_i| - o(np|Y_i|) \geq 2^{-i-1} (np)^{i+1}\,, \] which completes the proof of the first part. (b) By part (a) we have $2^{-l+2}(np)^{l-2} \leq |Y_{l-2}| \leq (1+\epsilon)^{l-2}(np)^{l-2}$ and $|Y_{l-3}| \leq (1+\epsilon)^{l-3}(np)^{l-3}$. Apply Lemma \ref{pancyclic_random_lemma1} to get $|N_{G'}(Y_{l-2})| \geq (1/2 + \epsilon/2)n$. Then $$|Y_{l-1}| \geq |N_{G'}(Y_{l-2})| - |Y_{l-2}| - |Y_{l-3}| \geq (1/2 + \epsilon/2)n - (2np)^{l-2}\,,$$ and therefore for large enough $n$, $|Y_{l-1}| \geq (1/2 + \epsilon/3)n$. (c) Condition on the event that $|N^{(i)}_{G}(v)| \leq (1+\epsilon)^{i}(np)^{i}$ for all $i=0,\ldots,l-2$ and let $X = \cup_{i=0}^{l-3} N^{(i)}_{G}(v)$. Notice that so far we have only exposed the edges inside $G[X]$ and the edges connecting $X$ to $N^{(l-2)}_{G}(v)$. Therefore for any vertex $w \notin X$ which is at distance at least $l-2$ from $v$, the edges between $w$ and $N_{G}^{(l-2)}(v)$ are not yet exposed. Thus we can bound the probability that $w$ has degree at least $\log n$ in $N^{(l-2)}_{G}(v)$ as follows: \begin{eqnarray} \binom{|N^{(l-2)}_{G}(v)|}{\log n} p^{\log n} &<& \bigg(\frac{e|N^{(l-2)}_{G}(v)|p}{\log n}\bigg)^{\log n} < \bigg(\frac{e2^{l-2}(np)^{l-2}p}{\log n}\bigg)^{\log n} = \bigg(\frac{e2^{l-2}C^{l-1}}{\log n}\bigg)^{\log n}. \nonumber \end{eqnarray} Since the last estimate is $o(n^{-2})$, a.a.s. every pair of vertices as above satisfies the claim. \end{pf} Now we are ready to prove Theorem \ref{pancyclic_thm4}. First we restate it in a more accurate and general form. \begin{thm} For every integer $l \geq 3$ and every $\epsilon>0$ there exists $C=C(l,\epsilon)$ such that if $p \geq Cn^{-1+1/(l-1)}$ then $G(n,p)$ almost surely has local resilience at least $(1/2 - \epsilon)np$ with respect to containing cycles of length $t$ for all $l \leq t \leq n$.
\end{thm} \begin{pf} By Proposition \ref{probrestriction_prop} we may assume that $p = Cn^{-1+1/(l-1)}$ where $C$ is taken to be the maximal of the corresponding constants in Theorem \ref{pancyclic_thm2} and Lemma \ref{lemma_pancyclic1}. Let $G=G(n,p)$, $H$ be a subgraph of maximum degree $\Delta(H) \leq (\frac{1}{2} - \epsilon)np$, and $G' = G - H$. The proof consists of three parts. In each part we will show the existence of short, medium length, and long cycles, respectively, in $G'$.\\ \noindent \textbf{Short Cycles.}\, The existence of cycles of length $l$ to $2l-2$ in $G'$ is a direct corollary of Haxell, Kohayakawa and {\L}uczak's Theorem \ref{pancyclic_thm2}.\\ \noindent \textbf{Medium Length Cycles.}\, Now we show the existence of cycles of length $2l-1$ up to $\frac{1}{320}\epsilon n$. Fix a vertex $v \in V$ and let $Y=N_{G'}^{(l-1)}(v)$. Then by Lemma \ref{lemma_pancyclic1} part (b) a.a.s. $|Y| \geq (\frac{1}{2} + \epsilon/3)n$. By applying the Chernoff inequality and then taking the union bound over sets $Y$ of appropriate sizes, we know that a.a.s. $e_G(Y) \geq (1 - \epsilon/6)\binom{|Y|}{2}p \geq \frac{1}{4}|Y|np$. By the restriction on the maximum degree of $H$ we know that $e_H(Y) \leq \frac{1}{2}(\frac{1}{2} - \epsilon)|Y|np$. Therefore, \[ e_{G'}(Y) \geq e_{G}(Y) - e_{H}(Y) \geq \frac{1}{2}\epsilon|Y|np \geq \frac{1}{4}\epsilon n^2p. \] Thus by Lemma \ref{lemma_findingmindegree}, we can find a subgraph $G_1 \subset G'[Y]$ with minimum degree at least $\frac{1}{4}\epsilon np$. Fix any vertex $v_{l-1} \in V(G_1)$ and let $v_i \in N^{(i)}_{G'}(v)$ for $i= 1,\ldots, l-2$ be the vertices of a path $vv_1 v_2 \ldots v_{l-1}$ in $G'$ from $v$ to $v_{l-1}$. Delete every vertex in $N^{(l-2)}_{G'}(v_1) \cap N^{(l-1)}_{G'}(v) \subset N^{(l-1)}_{G'}(v)$ except $v_{l-1}$ from $G_1$ to obtain $G_2$. Then by Lemma \ref{lemma_pancyclic1} part (c), $\delta(G_2) \geq \delta(G_1) - \log n$, and so for large enough $n$, $G_2$ has minimum degree at least $\frac{1}{8}\epsilon np$. Now by Lemma \ref{lemma_pancyclic2}, $G_2$ has the property that every subset $X$ of size $|X| \leq \frac{1}{640}\epsilon n$ satisfies $|N_{G_2}(X)\backslash X| \geq 2|X|$. Therefore by P\'{o}sa's rotation-extension Lemma \ref{lemma_pancyclic3} we can find a path $P$ of length at least $\frac{1}{320}\epsilon n$ starting at $v_{l-1}$ inside $G_2$. Let this path be $P=v_{l-1} w_1 \ldots w_{\epsilon'n}$ where $\epsilon' \geq \frac{1}{320}\epsilon$. Finally observe that for any vertex $w_t$ ($t>0$) there is a path $vz_1z_2 \ldots z_{l-2}w_t$ in $G'$ such that $z_j \in N^{(j)}_{G'}(v)$ for $j=1,\ldots,l-2$. Moreover, since we deleted the vertices that can be reached from $v_1$, we have $v_j \neq z_j$ for all $1 \leq j \leq l-2$. Thus we have a cycle $vv_1\ldots v_{l-1}w_1\ldots w_tz_{l-2}\ldots z_1 v$ which has length $t + 2l - 2$. Since $t$ can be arbitrarily chosen in the range $1 \leq t \leq \epsilon' n$, we are done with the second part of the proof.\\ \noindent \textbf{Long Cycles.}\, Let $\alpha = \frac{1}{320}\epsilon$. In this part we will show how to find all cycles of length from $\alpha n$ to $n$ in $G'$. For a fixed integer $n^*$ satisfying $\alpha n \leq n^* \leq n$ choose uniformly at random $n^*$ vertices $V^{*}$ out of $V$ and let $G^{*}=G[V^{*}], H^{*}=H[V^{*}]$. Let $\mathcal{P}$ be the property of a graph on $n^*$ vertices having local resilience at least $(1/2-\epsilon/2)n^*p$ with respect to hamiltonicity. We claim that with probability $1-o(n^{-1})$, $P(G^{*} \in \mathcal{P} |G) \geq 1/2$.
First note that $G^{*}$ has distribution $G(n^*, p)$ and apply Sudakov and Vu's Theorem \ref{pancyclic_thm1} to get $P(G^{*} \notin \mathcal{P}) = o(n^{-1})$. Let $A$ be the event, in the probability space $G(n,p)$, that $P(G^{*} \notin \mathcal{P} |G) \geq 1/2$. Then we have $(1/2)P(A) \leq P(G^{*} \notin \mathcal{P}) = o(n^{-1})$. Therefore $P(A) = o(n^{-1})$, or in other words $P(G^{*} \in \mathcal{P} |G) \geq 1/2$ with probability at least $1-o(n^{-1})$. Let $\mathcal{A}_{n^{*}}$ be the collection of graphs $G$ having this property. On the other hand, observe that the degree of a vertex in $H^{*}$ follows the hypergeometric distribution, with mean at most $(1/2-\epsilon)n^*p$, and thus we can apply Lemma \ref{lemma_hypergeometric}. Hence for a vertex $v \in V^{*}$, \[ P\big(\deg_{H^{*}}(v) \geq (1/2 - \epsilon/2)n^*p \big) \leq e^{ -\Omega_{\epsilon}(n^*p) } \leq e^{-\Omega_{\epsilon} (np)}\,, \] thus a.a.s. every vertex in $V^{*}$ has degree at most $(1/2 - \epsilon/2)n^* p$ in $H^{*}$. We can conclude that if $G \in \mathcal{A}_{n^*}$ then there exists a set $V^{*}$ of size $n^*$ such that $G^{*} \in \mathcal{P}$ and $\Delta(H^{*}) \leq (1/2 - \epsilon/2)n^*p$. This gives a Hamilton cycle inside $G^{*} - H^{*}$, which is a cycle of length $n^*$ inside $G' = G-H$. Finally note that since $G \in \mathcal{A}_{n^*}$ with probability at least $1 - o(n^{-1})$, cycles of length $n^*$ exist with probability at least $1-o(n^{-1})$ for any fixed $n^*$ by the previous observation. Therefore by taking the union bound we can see that a.a.s. $G'$ simultaneously contains cycles of length $n^*$ for all $\alpha n \leq n^* \leq n$. This concludes the proof. \end{pf} \section{Properties of pseudo-random graphs} \label{pseudorandomgraphproperty_section} Here we collect properties of pseudo-random graphs which we will use later to prove Theorem \ref{pancyclic_pseudo_thm01}. The main fact that we use about $(n,d,\lambda)$-graphs is the following formula established by N. Alon (see, e.g., \cite{MR2223394}) which connects eigenvalues and edge distribution. \begin{lemma} \label{pancyclic_pseudo_lemma0} If $G=(V,E)$ is an $(n,d,\lambda)$-graph, then for any $X,Y \subset V$ we have, \[ \Big|e(X,Y) - \frac{d}{n}|X||Y|\Big| \leq \lambda \sqrt{|X||Y|}. \] \end{lemma} As in Section \ref{randomgraphproperty_section} we will prove several lemmas that establish some expansion properties of pseudo-random graphs. These lemmas correspond to Lemma \ref{pancyclic_random_lemma10}, Lemma \ref{pancyclic_random_lemma1} and Lemma \ref{lemma_pancyclic2} in the random graph case. \begin{lemma} \label{pancyclic_pseudo_lemma10} Let $\epsilon, \epsilon'$ be such that $0 \leq 5\epsilon' < \epsilon$, let $k \geq 3$, and let $G = (V,E)$ be an $(n,\epsilon',d,\lambda)$-graph with $d^{k-1} /n = \omega(n) \lambda^{k-2}$ where $\omega(n) \rightarrow \infty$. Then $G$ has the following property. If $H$ is a subgraph of $G$ with $\Delta (H) \leq (1/2 - \epsilon) d$ and $G' = G-H$, then every set $X$ with $|X| \leq \epsilon n/4$ satisfies $|N_{G'}(X)| \geq \min \big(\epsilon n /2, \frac{d^2}{4\lambda^2} |X|\big)$. \end{lemma} \begin{pf} Let $Y = N_{G'}(X)$ and assume that $|Y| \leq \epsilon n/2$, as otherwise we are done.
Since $G'$ has minimum degree at least $(1-\epsilon')d - \Delta(H) \geq (1-\epsilon')d - (1/2-\epsilon)d \geq (1/2 + 4\epsilon/5)d$, we have \begin{eqnarray} e_{G'}(X, Y) &\geq& (1/2 + 4\epsilon/5)d|X| - 2e_G(X) \geq (1/2 + 4\epsilon/5)d|X| - (d|X|^2/n + \lambda |X|) \nonumber \\ &\geq& (1/2 + 4\epsilon/5)d|X| - (\epsilon d/4 + \lambda) |X| \geq (1/2 + \epsilon/2)d|X|. \label{eq:111} \end{eqnarray} On the other hand, since $|Y| \leq \epsilon n/2$, we have: \begin{eqnarray} e_G(X, Y) &\leq& \frac{d|X| |Y|}{n} + \lambda \sqrt{|X||Y|} \leq (\epsilon/2) d |X| + \lambda \sqrt{|X||Y|}. \label{eq:112} \end{eqnarray} Therefore by (\ref{eq:111}),(\ref{eq:112}) and $e_{G'}(X,Y) \leq e_{G}(X,Y)$ we have, $(1/2 + \epsilon/2)d|X| \leq (\epsilon/2) d |X| + \lambda \sqrt{|X||Y|}$ which implies $|Y| \geq \frac{d^2}{4\lambda^2} |X|$. \end{pf} \begin{lemma} \label{pancyclic_pseudo_lemma1} Let $\epsilon, \epsilon'$ be as in Lemma \ref{pancyclic_pseudo_lemma10}, let $k \geq 3$, and let $G = (V,E)$ be an $(n,\epsilon',d,\lambda)$-graph with $d^{k-1} /n = \omega(n) \lambda^{k-2}$ where $\omega(n) \rightarrow \infty$. Then for any function $\delta=\delta(n)$ such that $1 \ll \delta \ll d^2/\lambda^2$, $G$ has the following property. If $H$ is a subgraph of $G$ with $\Delta (H) \leq (1/2 - \epsilon) d$ and $G' = G-H$, then every set $X$ with $|X| \geq \delta (\lambda^2 / d^2)n$ satisfies $|N_{G'}(X)| \geq (1/2 + \epsilon/2)n$. \end{lemma} \begin{pf} We only have to verify this for sets of size exactly $\delta(\lambda^2 / d^2)n$, since $|N_{G'}(X)|$ can only grow when $X$ grows. So assume to the contrary that there exists $X \subset V$ of size $|X| = \delta(\lambda^2 / d^2)n$ which has $|N_{G'}(X)| < (1/2 + \epsilon/2)n$ and define $Y = V \backslash (X \cup N_{G'}(X))$. Since $|X|=o(n)$, we have $|Y| > (1/2 - 2\epsilon/3)n$. Therefore by Lemma \ref{pancyclic_pseudo_lemma0}, \begin{eqnarray} e_G(X, Y) \geq \frac{d|X||Y|}{n} - \lambda \sqrt{|X||Y|} \geq (1/2 - 2\epsilon/3)\delta (\lambda^2/d) n - \sqrt{\delta}(\lambda^2/d) n > (1/2 - \epsilon) \delta (\lambda^2/d) n, \nonumber \end{eqnarray} where the last inequality holds for large $n$ because $\delta \rightarrow \infty$. On the other hand by the maximum degree restriction, $e_H(X,Y) \leq (1/2 - \epsilon)d|X| = (1/2 - \epsilon)\delta (\lambda^2 /d) n$. But since there are no edges of $G'$ between $X$ and $Y$ we must have $0=e_{G'}(X,Y) \geq e_G(X,Y) - e_H(X,Y) > 0$ which gives us a contradiction. \end{pf} The next lemma proves an expansion property for subgraphs of $(n,d,\lambda)$-graphs with large minimum degree. \begin{lemma} \label{pancyclic_pseudo_lemma2} Let $G=(V,E)$ be an $(n,d,\lambda)$-graph with $\lambda =o(d)$, and let $G'$ be a subgraph of $G$ with $\delta(G') \geq \epsilon d$ for some fixed constant $\epsilon > 0$. Then every $X \subset V(G')$ with $|X| \leq \epsilon n /10$ satisfies $|N_{G'}(X) \backslash X| \geq 2|X|$. \end{lemma} \begin{pf} Assume to the contrary that there exists a set $X \subset V(G')$ with $|X| \leq \epsilon n/10$ and $|N_{G'}(X) \backslash X| < 2|X|$. Let $A = X \cup N_{G'}(X)$ and note that $|A|< 3|X|$. Then by Lemma \ref{pancyclic_pseudo_lemma0}, \[ e_G(A)=e_G(A,A)/2 \leq \frac{d}{2n}|A|^2 + \frac{\lambda}{2}|A| \leq |X| (9\epsilon d/10 + 3\lambda)/2. \] On the other hand, since $G'$ has minimum degree at least $\epsilon d$, we have \[ e_G(A) \geq e_{G'}(A) \geq |X|\epsilon d/2 \] which is a contradiction, since $\lambda = o(d)$. \end{pf} \section{Proof of Theorem \ref{pancyclic_pseudo_thm01}} \label{proofofpseudorandomgraphthm_section} In this section we prove Theorem \ref{pancyclic_pseudo_thm01}. As in the random graph case, we need an additional lemma which gives us more properties of a pseudo-random graph with deleted edges.
\begin{lemma} \label{pancyclic_pseudo_lemma11} Fix $\epsilon > 0, k \geq 3$ and let $G = (V,E)$ be an $(n,d,\lambda)$-graph with $d^{k-1} /n = \omega(n) \lambda^{k-2}$ where $\omega(n) \rightarrow \infty$. Let $H$ be a subgraph of $G$ with $\Delta (H) \leq (1/2 - \epsilon) d$, $G'=G-H$ and $v \in V$. Then there exist $l$ with $1 \leq l \leq \lfloor(k-1)/2\rfloor$ and sets $X_i(v), Y_i(v)$ for $i=0, 1, \ldots, l$ such that: \begin{itemize} \item[(a)] $X_0(v) = Y_0(v) = \{ v \}$, $X_i(v) \cap Y_i(v) = \emptyset, |X_i(v)| = |Y_i(v)|$ for all $i \neq 0$; \item[(b)] $|X_{i+1}(v)|\geq \frac{d^2}{16\lambda^2}|X_i(v)|$, $|Y_{i+1}(v)|\geq \frac{d^2}{16\lambda^2}|Y_i(v)|$ for all $i=0,\ldots,l-2$ and $|X_{i}(v)|=|Y_{i}(v)| \leq (\epsilon/(8k))n$ for all $i=0,1,\ldots,l$; \item[(c)] Let $Z_i(v) = \cup_{j=0}^{i} (X_j(v) \cup Y_j(v)) $. Then $X_{i+1}(v) \subset N_{G'}(X_i(v)) \backslash Z_i(v)$, $Y_{i+1}(v) \subset N_{G'}(Y_i(v)) \backslash Z_i(v)$ for all $0 \leq i \leq l-1$. \item[(d)] $|X_{l}(v)|=|Y_{l}(v)| \geq \delta (\lambda^2 / d^2) n$ for some function $\delta=\delta(n) \rightarrow \infty$. \end{itemize} \end{lemma} \begin{pf} Let $\delta = \delta(n)=\min(d / \lambda, (\omega(n))^{1/2})$ and note that indeed $\delta \rightarrow \infty$. Given a vertex $v \in V$, we will inductively construct sets $X_i = X_i(v), Y_i = Y_i(v)$ satisfying the conditions above. Since $G'$ has minimum degree at least $(1/2 + \epsilon)d$, we can put $d/4$ vertices of $N_{G'}(v)$ into $X_1$ and put another $d/4$ vertices into $Y_1$. Suppose that for some $i \leq \lfloor (k-1)/2 \rfloor - 1$ we have already constructed $X_0, Y_0, \ldots, X_i, Y_i$ satisfying conditions $(a),(b),(c),(d)$. Next we show how to construct $X_{i+1}, Y_{i+1}$. Let $Z_i = \cup_{j=0}^{i} (X_j \cup Y_j)$ and note that $\big| Z_i \big| \leq 2(i+1)|X_i| \leq k|X_i|$. If $|X_i| = |Y_i| \geq \delta (\lambda^2 / d^2) n$ then define $l=i$ and stop the process. Otherwise $|X_i| = |Y_i| < \delta (\lambda^2 / d^2) n \leq (\lambda /d)n = o(n)$ and by Lemma \ref{pancyclic_pseudo_lemma10} we have that $|N(X_i)|, |N(Y_i)| \geq \min (\epsilon n /2, \frac{d^2}{4\lambda^2} |X_i|)$, where here and in the rest of this proof all neighborhoods are taken in $G'$. Thus \[ \big| N(X_i) \backslash Z_i \big| \geq \min\Big(\epsilon n /2 - k|X_i|, \frac{d^2}{4\lambda^2} |X_i| - k|X_i| \Big) \geq \min \Big(\epsilon n /4 , \frac{d^2}{8\lambda^2}|X_i|\Big) \] and a similar inequality also holds for $N(Y_i)$. Therefore, by splitting the vertices of $N(X_i) \cap N(Y_i)$ between $X_{i+1}$ and $Y_{i+1}$ we can always choose $X_{i+1} \subset N(X_i) \backslash Z_i$ and $Y_{i+1} \subset N(Y_i) \backslash Z_i$ so that $X_{i+1} \cap Y_{i+1} = \emptyset$ and $|X_{i+1}|=|Y_{i+1}| \geq (1/2) \min (\epsilon n /4 , \frac{d^2}{8\lambda^2}|X_i|)$. If $\epsilon n /4 \leq \frac{d^2}{8\lambda^2} |X_i|$ then $|X_{i+1}|, |Y_{i+1}| \geq \epsilon n / 8$, so stop the process and define $l=i+1$. Otherwise we can make $|X_{i+1}|, |Y_{i+1}| \geq \frac{d^2}{16\lambda^2} |X_i|$ and continue. Note that (b) holds in this case. If the process does not terminate after constructing $X_1, \ldots, X_{\lfloor (k-1)/2 \rfloor}$ and $Y_1, \ldots, Y_{\lfloor (k-1)/2 \rfloor}$ then by property (b) we get that $\frac{d}{4} (\frac{d^2}{16\lambda^2})^{\lfloor (k-1)/2 \rfloor - 1} \leq \big|X_{\lfloor (k-1)/2 \rfloor}\big| < \delta \frac{\lambda^2}{d^2} n$.
This implies: \[ \delta \frac{\lambda^2}{d^2} n > \frac{d}{4}\left(\frac{d^2}{16\lambda^2}\right)^{\lfloor (k-1)/2 \rfloor - 1} \geq \frac{d}{4}\left(\frac{d^2}{16\lambda^2}\right)^{k/2 - 2} = \frac{d^{k-3}}{4^{k-3}\lambda^{k-4}} = \frac{\omega(n)}{4^{k-3}} \cdot \frac{\lambda^2}{d^2} n\] which is a contradiction, since $\delta \ll \omega(n)$. Finally note that we can always shrink the final sets $X_l, Y_l$ so that they become smaller than $(\epsilon /(8k))n$. Since $|X_{l-1}|=|Y_{l-1}| < \delta (\lambda^2/d^2)n \ll (\epsilon/(8k)) n$, (b) holds for all $i=0,1,\ldots,l$. Thus we can find sets as claimed. \end{pf} We are ready to prove Theorem \ref{pancyclic_pseudo_thm01}. First we restate it here with more quantifiers. \begin{thm} Fix $\epsilon>0$ and let $k$ be either 3 or an even integer satisfying $k \geq 4$, and let $G=(V,E)$ be an $(n,d,\lambda)$-graph satisfying $d^{k-1} /n = \omega(n) \lambda^{k-2}$ where $\omega(n) \rightarrow \infty$. Then for large enough $n$, $G$ has local resilience at least $(1/2 - \epsilon)d$ with respect to containing cycles of length $t$ for $k \leq t \leq n$. \end{thm} \begin{pf} Let $H$ be a subgraph of $G$ with $\Delta(H) \leq (1/2 - \epsilon)d$ and let $G' = G-H$. If $d>(1-\epsilon)n$, then $G'$ has minimum degree larger than $(1/2+\epsilon)d>(1/2+\epsilon)(1-\epsilon)n>n/2$ for small enough $\epsilon>0$. Hence by Bondy's theorem, mentioned in the introduction, $G'$ is pancyclic. Assume therefore that $d\leq(1-\epsilon)n$. This implies that $\lambda=\Omega(\sqrt{d})$. Indeed, let $A$ be the adjacency matrix of $G$ and $d=\lambda_1, \ldots, \lambda_n$ be its eigenvalues. The trace of $A^2$ is the number of ones in $A$, which implies that $$ 2|E(G)|=nd=Tr(A^2)=\sum_{i=1}^n\lambda_i^2\le d^2+(n-1)\lambda^2.$$ Solving the above inequality for $\lambda$ gives $\lambda^2 \geq d(n-d)/(n-1) \geq \epsilon dn/(n-1) \geq \epsilon d$, which establishes the claim. The proof of the theorem consists of three parts. In each part we will show the existence of short, medium length, and long cycles, respectively.\\ \noindent \textbf{Short Cycles.}\, For $k=3$, the existence of $3$-cycles is a direct corollary of Sudakov, Szab\'{o} and Vu's Theorem \ref{pancyclic_pseudo_thm1}. Also in this case we have that $d^3/\lambda^2\geq d^2/\lambda \gg n$. Therefore for $k=3$ the existence of cycles of length $4, \ldots, n$ follows from the proof of the case $k=4$. So from now on we assume that $k=2k'$ is even. Since $\lambda = \Omega(\sqrt{d})$, we have $d^{k-1} = \omega(n) \lambda^{k-2} n = \omega(n) \Omega(d^{k/2 - 1}) n$. Therefore $d \gg n^{2/k}$ and by Bondy and Simonovits' Theorem \ref{bondysimonovits1_thm} $G'$ must have a $k$-cycle.\\ \noindent \textbf{Medium Length Cycles.}\, The next step is to prove the existence of cycles of length from $k+1$ up to $\epsilon n/ 20$. Fix a vertex $v$ and apply Lemma \ref{pancyclic_pseudo_lemma11} to find sets $X_1=X_1(v), \ldots, X_l=X_l(v), Y_1=Y_1(v), \ldots, Y_l=Y_l(v)$ where $l \leq k'-1$ with $|X_{l}|=|Y_{l}| \geq \delta (\lambda^2 / d^2) n$ and $|X_{i}|=|Y_{i}| \leq (\epsilon/(8k))n$ for all $i=1, \ldots, l$. Then $|\cup_{i=0}^{l}X_{i} \cup Y_{i}| \leq \epsilon n /4$. By Lemma \ref{pancyclic_pseudo_lemma1} we know that $|N_{G'}(X_l)| \geq (1/2 + \epsilon/2)n$ and so if we let $Z = N_{G'}(X_l) \backslash \big(\cup_{i=0}^{l}X_{i} \cup Y_{i}\big)$ then $|Z| \geq (1/2 + \epsilon/4)n$. Since $\lambda=o(d)$, by Lemma \ref{pancyclic_pseudo_lemma0}, we have $e_G(Z) \geq d|Z|^2/(2n) - \lambda|Z|/2 \geq d|Z|/4$. On the other hand $e_H(Z) \leq (1/2-\epsilon)d|Z|/2$. Hence $e_{G'}(Z) \geq e_{G}(Z) - e_{H}(Z) \geq \epsilon d|Z|/2$.
This implies by Lemma \ref{lemma_findingmindegree} that inside $G'[Z]$ we can find a subgraph $G_1 \subset G'$ which has minimum degree at least $\epsilon d/2$. Then using Lemma \ref{pancyclic_pseudo_lemma0} it is easy to check that $|V(G_1)| \geq \epsilon n /3$ and so we can choose a set $W \subset V(G_1) \subset Z$ of size $\epsilon n / 8$. Then by Lemma \ref{pancyclic_pseudo_lemma1}, we have that both $|N_{G'}(W)|, |N_{G'}(Y_l)| \geq (1/2 + \epsilon/2)n$. Therefore the set $\big(N_{G'}(W) \cap N_{G'}(Y_l)\big) \backslash \big(\cup_{i=0}^{l}X_{i} \cup Y_{i}\big)$ has size at least $\epsilon n-\epsilon n/4>0$. In particular, there must exist a vertex $p \in \big(N_{G'}(W) \cap N_{G'}(Y_l)\big) \backslash \big(\cup_{i=0}^{l}X_{i} \cup Y_{i}\big)$, and let $y_l \in Y_l$, $w \in W$ be neighbors of $p$ in $G'$. Since $y_l \in Y_l$, by the definition of $Y_l$, there exists a path $vy_1y_2 \ldots y_l$ in $G'$ from $v$ to $y_l$ such that $y_i \in Y_i$ for $i=1,\ldots,l$. If $p \in V(G_1)$ then let $G_2 = G_1 \backslash \{ p\}$, and otherwise let $G_2 = G_1$; in either case $G_2$ has minimum degree at least $\epsilon d / 4$. Now by Lemma \ref{pancyclic_pseudo_lemma2} every set $X \subset V(G_2)$ of size $|X| \leq \epsilon n /40$ satisfies $|N_{G_2}(X)\backslash X| \geq 2|X|$. Then by P\'{o}sa's rotation-extension Lemma \ref{lemma_pancyclic3} we know that there exists a path $P = v_0 v_1 \ldots v_t $ starting at $v_0 = w$ which has length $t \geq \epsilon n/ 20$ inside $G_2$. For an arbitrary $v_i \in P$ ($i \geq 0$), since $v_i \in V(G_2) \subset N_{G'}(X_l)$, there is a path $vx_1\ldots x_lv_i$ in $G'$ such that $x_j \in X_j$ for $j=1,\ldots,l$. Thus we have a cycle $v x_1 x_2 \ldots x_l v_i v_{i-1} \ldots v_0 p y_l y_{l-1} \ldots y_1v$ which has length $2(l + 1) + i + 1 \leq k+i+1$. Since $i$ can be arbitrarily chosen in the range $0 \leq i \leq \epsilon n/20$, we are done with the second part of the proof.\\ \noindent \textbf{Long Cycles.}\, The final step is to prove the existence of cycles of length $\epsilon n/20$ to $n$. Pick $\epsilon /20 \leq \alpha \leq 1$ such that $n^{*} = \alpha n$ is an integer. Let $V^{*} \subset V$ be a set of size $n^{*}$ chosen uniformly at random and $G^{*}=G[V^{*}], H^{*}=H[V^{*}]$ be the induced subgraphs of $G$ and $H$, respectively. Since $d \gg n^{2/k}$, by the concentration of the hypergeometric distribution (Lemma \ref{lemma_hypergeometric}), for every vertex $v \in V^{*}$ we have that \begin{eqnarray} P\big(\deg_{H^{*}}(v) \geq (1/2 - \epsilon /2)\alpha d \big) &\leq& e^{-\Omega_\epsilon(\alpha d)} \leq e^{-\Omega_\epsilon(n^{2/k})}. \nonumber \end{eqnarray} Similarly, with probability $1-o(n^{-1})$, the graph $G^{*}$ has minimum degree at least $(1-\epsilon/6)\alpha d$ and therefore if $n$ is large enough, for every $\epsilon n/20\le n^*\le n$ there exists a choice of $V^{*}$ where $\Delta(H^{*}) \leq (1/2 - \epsilon/2)\alpha d$ and $\delta(G^{*}) \geq (1-\epsilon/6)\alpha d$. Moreover since $G^{*}$ is an induced subgraph of $G$, its edge distribution is still governed by the estimate from Lemma \ref{pancyclic_pseudo_lemma0}. Therefore $G^{*}$ is an $(\alpha n, \epsilon/6, \alpha d, \lambda)$-graph. Now by Sudakov and Vu's Theorem \ref{pancyclic_pseudo_thm2}, for large enough $n$, $G^{*}$ has local resilience at least $(1/2 - \epsilon/2)\alpha d$ with respect to being hamiltonian. Thus for the choice of $V^{*}$ as above, $G^{*}-H^{*}$ must contain a Hamilton cycle, which is a cycle of length $n^{*}$ in $G'$. This concludes the proof.
\end{pf} \section{Concluding Remarks} \label{concludingremarks_section} \subsection{Cycles through a given vertex} In this paper we found cycles inside a subgraph of random and pseudo-random graphs. We proved that under certain conditions there exist cycles of various lengths \textit{somewhere} in the subgraph. In fact, we can find most of these cycles even if we fix a vertex $v$ and insist that a cycle of desired length passes through this vertex. Theorem \ref{pancyclic_thm4} says that for any fixed integer $l \geq 3$, if $p \gg n^{-1+1/(l-1)}$ then $G(n,p)$ almost surely has local resilience $(1/2 + o(1))np$ with respect to containing cycles of length $t$ for $l \leq t \leq n$. By carefully examining the proof, one can realize that when finding medium length and long cycles, we can insist that the cycle pass through a fixed vertex. Thus we have that for any fixed vertex $v$, there is a cycle of length $t$ for $2l-1 \leq t \leq n$ which passes through $v$. Observe in addition that for every odd integer $t$ with $l \leq t < 2l-1$, we cannot guarantee a cycle of length $t$ through a fixed vertex $v$, as can be seen using Lemma \ref{pancyclic_random_lemma10}. Let $G=G(n,p)$. This lemma implies that for any fixed vertex $v$, $|N^{(i)}_G(v)| = o(n)$ for all $1 \leq i \leq l-2$. Therefore typically the degrees of the vertices inside $N^{(i)}_G(v)$ are all $o(np)$, and we can delete every edge inside these sets without violating the maximum degree condition on $H$ to get a graph $G-H$ which does not contain a cycle of length $t$ through $v$. Similarly, Theorem \ref{pancyclic_pseudo_thm01} says that if $k$ is either 3 or an even integer satisfying $k \geq 4$ and $G=(V,E)$ is an $(n,d,\lambda)$-graph satisfying $d^{k-1} /n \gg \lambda^{k-2}$, then $G$ has local resilience $(1/2 + o(1))d$ with respect to containing cycles of length $t$ for $k \leq t \leq n$. Again, even if we fix a vertex, we can force all the medium length and long cycles to contain it. Namely, for any fixed vertex $v$, there exist cycles of every length from $k+1$ up to $n$ passing through $v$. \subsection{Paths through a given pair of vertices} Another possible and rather straightforward extension of our results is to show that random and pseudo-random graphs are locally resilient with respect to the following property: for every given pair of vertices $u,v$ and for every given length $l\ge t$ (where $t$ is a constant depending on our choice of parameters, quite similarly to the situation in Theorems \ref{pancyclic_thm4} and \ref{pancyclic_pseudo_thm01}) there is a path of length $l$ between $u$ and $v$. We do not provide full details, but here is a very short sketch of the argument. For medium length paths (between $t$ and $\delta n$, for some constant $\delta>0$) the proof is obtained by a rather trivial modification of the corresponding proofs for medium length cycles in Theorems \ref{pancyclic_thm4} and \ref{pancyclic_pseudo_thm01}. For example, in the pseudo-random case, instead of growing sets $X_i(v)$, $Y_i(v)$ as in the proof of Theorem \ref{pancyclic_pseudo_thm01}, we grow disjoint sets $X_i(u)$ and $X_i(v)$ until they reach a substantial size, and then find a subgraph $G_1$ with large minimum degree on at least $(1/2+\delta)n$ vertices in the neighborhood of $X_i(u)$. Since $|V(G_1)|\ge (1/2+\delta)n$, the set $X_i(v)$ has a neighbor $w$ in $G_1$. Due to expansion properties of $G_1$, there is a path of linear length in it starting from $w$. This path can be used to find paths of medium length between $u$ and $v$.
For paths of linear length, the key is the ability to find a Hamilton path through a given pair of vertices $u,v$ in an edge deleted random or pseudo-random graph. Here we can argue as follows. First, if $e=(u,v)\not\in E(G)\setminus E(H)$ (where $G$ is the original (pseudo-)random graph, and $H$ is the graph of deleted edges meeting the imposed condition on maximum degree), add $e$ to the graph; clearly nothing really changes in its edge distribution. Then, find a path $P$ of linear length with $e$ somewhere in the middle (i.e., at some sizable distance from both ends), and then grow $P$ and close it to a Hamilton cycle through rotations and extensions as usual, each time forbidding the rotations to touch an interval of constant length of $P$ surrounding $e$; our expansion assumptions make it easy to meet this restriction. The Hamilton cycle $C$ obtained in this way is guaranteed to contain $e$. Finally, omit $e$ from $C$, thus getting a Hamilton path between $u$ and $v$. \subsection{Open Problems} We believe that Theorem \ref{pancyclic_pseudo_thm1} can be extended (with appropriate adjustments) to cycles of an arbitrary but fixed odd length. More specifically, it is plausible that for an odd $k\ge 5$, if $G$ is an $(n,d,\lambda)$-graph and $d^{k-1}/n \gg \lambda^{k-2}$, then the local resilience of $G$ with respect to containing a cycle of length $k$ is $(1/2-o(1))d$. The validity of this conjecture would allow us to extend the assertion of Theorem \ref{pancyclic_pseudo_thm01} to all $k\ge 3$. A more natural generalization of Theorem \ref{pancyclic_pseudo_thm1} (actually, of its original global resilience form as in \cite{MR2145507}) is the following conjecture. \begin{conj} Let $k\geq 5$ be an odd integer and $G$ be an $(n,d,\lambda)$-graph satisfying $d^{k-1} / n \gg \lambda^{k-2}$. Then $G$ has global resilience $(1/4 + o(1))nd$ with respect to being $C_k$-free. \end{conj}
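Note that the constant $1/4$ in this conjecture would be asymptotically best possible: splitting the vertex set of an $(n,d,\lambda)$-graph with $\lambda = o(d)$ into two halves $S$ and $\bar{S}$ and deleting all edges inside the two halves (by Lemma \ref{pancyclic_pseudo_lemma0}, these number $nd/2 - e(S,\bar{S}) = (1/4 + o(1))nd$) leaves a bipartite graph, which contains no odd cycles at all.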
{ "timestamp": "2009-06-08T01:50:01", "yymm": "0906", "arxiv_id": "0906.1397", "language": "en", "url": "https://arxiv.org/abs/0906.1397", "abstract": "A graph $G$ on $n$ vertices is \\textit{pancyclic} if it contains cycles of length $t$ for all $3 \\leq t \\leq n$. In this paper we prove that for any fixed $\\epsilon>0$, the random graph $G(n,p)$ with $p(n)\\gg n^{-1/2}$ asymptotically almost surely has the following resilience property. If $H$ is a subgraph of $G$ with maximum degree at most $(1/2 - \\epsilon)np$ then $G-H$ is pancyclic. In fact, we prove a more general result which says that if $p \\gg n^{-1+1/(l-1)}$ for some integer $l \\geq 3$ then for any $\\epsilon>0$, asymptotically almost surely every subgraph of $G(n,p)$ with minimum degree greater than $(1/2+\\epsilon)np$ contains cycles of length $t$ for all $l \\leq t \\leq n$. These results are tight in two ways. First, the condition on $p$ essentially cannot be relaxed. Second, it is impossible to improve the constant 1/2 in the assumption for the minimum degree. We also prove corresponding results for pseudo-random graphs.", "subjects": "Combinatorics (math.CO)", "title": "Resilient pancyclicity of random and pseudo-random graphs", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.985271387015241, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7080104879319032 }
https://arxiv.org/abs/math/0607448
Uniqueness of the (22,891,1/4) spherical code
We use techniques of Bannai and Sloane to give a new proof that there is a unique (22,891,1/4) spherical code; this result is implicit in a recent paper by Cuypers. We also correct a minor error in the uniqueness proof given by Bannai and Sloane for the (23,4600,1/3) spherical code.
\section[Introduction]{\hlabel{sec1}Introduction} An $(n,N,t)$ spherical code is a set of $N$ points on the unit sphere $S^{n-1} \subset {\mathbb R}^n$ such that no two distinct points in the set have inner product greater than $t$. In other words, the angles between them are all at least $\cos^{-1} t$. The fundamental problem is to maximize $N$ for a given value of $t$, or equivalently to minimize $t$ given $N$. Of course, for specific values of $N$ and $t$, maximality of $N$ given $t$ is not equivalent to minimality of $t$ given $N$, but complete solutions of these problems for all parameter values would be equivalent. Linear programming bounds are a powerful tool for proving upper bounds on $N$ given $t$ (see \cite{DGS}, \cite{KL}, or Chapter~{9} in \cite{CS}). In particular, they prove sharp bounds in a number of important special cases, listed in \cite{L}. Once a code has been proved optimal, it is natural to ask whether it is unique up to orthogonal transformations. That is known in every case to which linear programming bounds apply except for one infinite family that is not always unique (see Appendix~A of \cite{CK} for an overview). However, one should not expect uniqueness to hold in general for optimal spherical codes: for example, the $D_5$ kissing arrangement appears to be an optimal $40$-point spherical code in ${\mathbb R}^5$, but there is at least one other $(5,40,1/2)$ code (see \cite{Leech}). One noteworthy case is the $(22,891,1/4)$ code. A proof of uniqueness is implicit in the recent paper \cite{Cu} by Cuypers, but we are unaware of any explicit discussion of uniqueness in the literature (by contrast, every other case has been explicitly analyzed). In this paper, we apply techniques from \cite{BS} to give a new proof that it is unique. This code arises naturally in the study of the Leech lattice in ${\mathbb R}^{24}$ (see \cite{E} or \cite{CS} for background). In the sphere packing derived from the Leech lattice, each sphere is tangent to $196560$ others. The points of tangency form a $(24,196560,1/2)$ code known as the kissing configuration of the Leech lattice. It can be viewed as a packing in $23$-dimensional spherical geometry, whose kissing configuration is a $(23,4600,1/3)$ code. The $(22,891,1/4)$ code is obtained by taking the kissing configuration once more; it is well defined because the automorphism group of the Leech lattice acts distance transitively on the $(24,196560,1/2)$ code. All three of these codes are optimal (in fact, universally optimal---see \cite{CK}), although that is not known for the $(21,336,1/5)$ code that comes next in the sequence. The linear programming bounds are not sharp for the $(21,336,1/5)$ code, and we make no conjecture as to whether it is optimal. Its kissing configuration is a $(20,170,1/6)$ code whose symmetry group does not even act transitively: there are two orbits of points, one with $10$ points (forming the midpoints of the edges of a regular $4$-dimensional simplex) and one with $160$ points. Because of the lack of transitivity, this configuration has two different types of kissing configurations, and it seems fruitless to continue examining iterated kissing configurations. The $(20,170,1/6)$ code is not universally optimal and probably not even optimal. One can also construct the $(22,891,1/4)$ code using a $6$-dimensional Hermitian space over ${\mathbb F}_4$. 
Points in the configuration correspond to $3$-dimensional totally isotropic subspaces, with the inner product between two points ($-1/2$, $1/4$, or $-1/8$) determined by the dimension of the intersection of the corresponding subspaces ($2$, $1$, or $0$, respectively). The graph with these subspaces as vertices and with edges between pairs of subspaces with intersection dimension $2$ is the dual polar graph associated with the group $\textup{PSU}(6,2)$ (see Section~{9.4} in \cite{BCN}). In the paper \cite{Cu}, it is implicit in the proof of Proposition~2.2 that a $(22,891,1/4)$ spherical code must have the combinatorial structure of a $(2,4,20)$ regular near hexagon, which is equivalent to this dual polar space structure (see \cite{SY}). Uniqueness then follows from the classification of all polar spaces of rank at least $3$ by Tits in \cite{T}. By contrast, our proof makes use of entirely different machinery. The linear programming bounds not only prove bounds on spherical codes, but also provide additional information about the codes that achieve a given bound. When used with the auxiliary polynomial $(x+1/2)^2(x+1/8)^2(x-1/4)$, they prove that every code in $S^{21}$ with maximal inner product $1/4$ has size at most $891$, and that equality is achieved iff all inner products between distinct vectors are in $\{-1/2,-1/8,1/4\}$ and the code is a spherical $5$-design. Recall that a spherical $t$-design is a finite subset of the sphere $S^{n-1} \subset {\mathbb R}^n$ such that for every polynomial function $p \colon {\mathbb R}^{n} \to {\mathbb R}$ of total degree at most $t$, the average of $p$ over the design equals its average over the entire sphere. The techniques we use to prove uniqueness were developed by Bannai and Sloane in \cite{BS}, and we follow their approach quite closely. (Note that their paper is reprinted as Chapter~{14} of \cite{CS}.) They proved uniqueness for the $(24,196560,1/2)$ and $(23,4600,1/3)$ codes, as well as analogous codes derived from the $E_8$ root lattice. Here we correct a minor error in their proof for the $(23,4600,1/3)$ code. They construct a lattice $L$ and conclude their proof by saying ``and hence $L$ must be the Leech lattice,'' but in fact it is not the Leech lattice (it is a sublattice of index~$2$). At the end of this paper we explain the problem and how to correct it. One small difference between this paper and \cite{BS} is that the $(22,891,1/4)$ code is not a tight spherical design, whereas all the designs dealt with in \cite{BS} are tight. (A tight spherical $(2e+1)$-design in ${\mathbb R}^n$ is one with $2\binom{n+e-1}{n-1}$ points, which by Theorem~5.12 of \cite{DGS} is a lower bound for the number of points.) However, no fundamental changes in the techniques are needed. The only important difference is that we cannot conclude that the $(22,891,1/4)$ code is the only $891$-point spherical $5$-design in ${\mathbb R}^{22}$, as we could if it were tight. \section[Uniqueness of the $(22,891,1/4)$ code]{\hlabel{sec2}Uniqueness of the $(22,891,1/4)$ code} \begin{theorem} \bothlabel{theorem:891} There is a unique $(22,891,1/4)$ spherical code, up to orthogonal transformations of ${\mathbb R}^{22}$. \end{theorem} Let $\mathcal{C}$ be such a code. We begin with the observation that by the sharpness of the linear programming bounds, $-1/2$, $-1/8$, and $1/4$ are the only possible inner products that can occur between distinct points in $\mathcal{C}$. 
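To see where these inner products come from, note that the auxiliary polynomial $f(x) = (x+1/2)^2(x+1/8)^2(x-1/4)$ mentioned above satisfies $f(x) \leq 0$ for $-1 \leq x \leq 1/4$, with equality exactly at its roots $-1/2$, $-1/8$, and $1/4$. In the linear programming argument, a code meeting the bound must satisfy $f(\langle u,v \rangle) = 0$ for every pair of distinct points $u,v$ in it, which is why the inner products are restricted to this set.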
Let $u_1,\dots,u_{891}$ be the points in $\mathcal{C}$, and let \begin{align*} U_i &= (1, 1/\sqrt{3}, \sqrt{8/3}\, u_i), \\ V_0 &= (2,0,\dots, 0), \textup{ and} \\ V_1 &= (1,\sqrt{3},0, \dots, 0) \end{align*} be vectors in ${\mathbb R}^{24}$. The slightly nonstandard notation $(1, 1/\sqrt{3}, \sqrt{8/3}\, u_i)$ of course means the concatenation of the vectors $(1,1/\sqrt{3})$ and $\sqrt{8/3}\, u_i$. It is easy to check that all these vectors have norm $4$ and the inner product between any two of them is an integer; specifically, $\langle U_i,U_j \rangle$ is $4$, $2$, $1$, or $0$ according as $\langle u_i, u_j \rangle$ is $1$, $1/4$, $-1/8$, or $-1/2$, respectively. Let $L$ be the lattice spanned by $U_1,\dots,U_{891}$, $V_0$, and $V_1$. It follows that $L$ is an even lattice (i.e., all vectors have even norms). We will show that $L$ is uniquely determined, up to orthogonal transformations of ${\mathbb R}^{24}$ that fix $V_0$ and $V_1$, as is $\{U_1,\dots,U_{891}\}$. In what follows, vectors in ${\mathbb R}^{24}$ are generally denoted by uppercase letters and vectors in ${\mathbb R}^{22}$ by lowercase letters. One exception is the standard basis $e_1,\dots,e_{24}$ of ${\mathbb R}^{24}$. \begin{lemma} \bothlabel{lemma:norm4} The minimal norm $\langle V,V \rangle$ for $V \in L \setminus \{0\}$ is $4$. \end{lemma} \begin{proof} Suppose there exists $V \in L$ with $\langle V, V\rangle = 2$. Then $\langle V,W \rangle \in \{0,\pm 1, \pm 2\}$ for all $W \in \{V_0,V_1,U_1,\dots,U_{891}\}$, because $\langle V,W \rangle \in \mathbb{Z}$ and $|\langle V,W \rangle| \leq |V| |W| = 2\sqrt{2}$. Now let $V = (x,y/\sqrt{3},\sqrt{8/3}\,u)$ with $u \in {\mathbb R}^{22}$ and $x,y \in {\mathbb R}$. We note that $x$ and $y$ must be integers of the same parity, from the description of the generators of the lattice $L$. Also, we must have $x^2 + y^2/3 \leq 2$, by the condition on the norm of $V$. This implies that $(x,y) \in \{(0,0), (0, \pm 2), (\pm 1, \pm 1) \}$. We can furthermore assume that $(x,y) \in \{(0,0), (0, 2), (1, \pm 1) \}$, because otherwise we replace $V$ with $-V$. If $(x,y) = (0,2)$, then $\langle V,V_1 \rangle = 2$ and thus $|V_1-V|^2=2$, so we can replace $V$ with $V_1-V$ and $(x,y)$ with $(1,1)$. If $(x,y) = (1,-1)$, then we can replace $V$ with $V_0-V$ and $(x,y)$ with $(1,1)$. We can therefore assume that $(x,y)$ is $(0,0)$ or $(1,1)$. If $(x,y)=(1,1)$, then we claim that there exists an $i$ such that $\langle V, U_i \rangle = 2$, in which case replacing $V$ with $V-U_i$ reduces to the case of $(x,y)=(0,0)$. To prove the existence of such an $i$, consider the point $u \in {\mathbb R}^{22}$, which has $|u|=1/2$. For each $i$, if $\langle V, U_i \rangle \in \{-2,-1,0,1\}$, then $\langle u,u_i \rangle \in \{-5/4,-7/8,-1/2,-1/8\}$. If that were always the case, then the set $\{2u,u_1,\dots,u_{891}\}$ would be a $(22,892,1/4)$ spherical code, which is impossible. We are left with only one case, namely that $(x,y) = (0,0)$. Then $|u| = \sqrt{3}/{2}$. The inner products $\langle u, u_i \rangle$ must lie in the set $\{0 ,\pm 3/8, \pm 3/4\}$, corresponding to the restriction that $\langle V, U_i \rangle \in \{0, \pm 1, \pm 2\}$. Let $N_{0}, N_{3/8}, N_{-3/8}, N_{3/4}, N_{-3/4}$ be the numbers of vectors $u_i$ that have inner products $0,3/8,-3/8,3/4,-3/4$, respectively, with $u$. 
Now from the fact that $\mathcal{C}$ is a $5$-design (which we obtain from the linear programming bounds), we observe that for every polynomial $p(x)$ of degree at most $5$, $$ \frac{\sum_{\alpha \in \{0,3/8,-3/8,3/4,-3/4\}} N_\alpha p(\alpha)}{891} = \int_{S^{21}} p\big(\langle z, u \rangle\big) \,d\mu(z), $$ where the surface measure $\mu$ on $S^{21}$ has been normalized to have total volume $1$. The right side does not depend on the direction of $u$, only on its magnitude, and it is easily evaluated when $p(x)=x^i$: for $i$ odd it vanishes, and for $i$ even it equals $$ |u|^i \frac{i!(22/2-1)!}{(i/2+22/2-1)!(i/2)!2^i} = \frac{i!\,10!}{(10+i/2)!(i/2)!} \left(\frac{\sqrt{3}}{4}\right)^{i}. $$ We write down five equations corresponding to the monomials $p(x)=1$, $x$, $x^2$, $x^3$, and $x^4$ and solve the resulting system of equations to get $$ (N_0, N_{3/8}, N_{-3/8}, N_{3/4}, N_{-3/4}) = (657, 120, 120, -3, -3). $$ (The two equations coming from the odd monomials force $N_{3/8}=N_{-3/8}$ and $N_{3/4}=N_{-3/4}$, after which the remaining three equations determine these common values and $N_0$.) The negative numbers give us the contradiction. \end{proof} As an immediate corollary we observe that the (integral) inner product between two minimal vectors of $L$ cannot be $\pm 3$ and so must lie in $\{0,\pm 1, \pm 2, \pm 4\}$: if $\langle U,V \rangle = 3$ with $U$ and $V$ minimal vectors, then $|U-V|^2 = |U|^2 + |V|^2 - 2\langle U,V \rangle = 2$, contradicting Lemma~\ref{lemma:norm4} (the case $\langle U,V \rangle = -3$ follows by replacing $V$ with $-V$). \begin{table}\hlabel{table:intnums} \begin{align*} P_1(1/4,1/4) &= 336 & P_1(-1/8,-1/8) &= 512 & P_1(-1/2,-1/2) &= 42\\[3pt] P_{1/4}(1/4,1/4) &= 170 & P_{1/4}(-1/8,-1/8) &= 320 & P_{1/4}(-1/2,-1/2) &= 5\\ P_{1/4}(1/4,-1/8) &= 160 & P_{1/4}(1/4,-1/2) &= 5 & P_{1/4}(-1/8,-1/2) &= 32\\[3pt] P_{-1/8}(1/4,1/4) &= 105 & P_{-1/8}(-1/8,-1/8) &= 280 & P_{-1/8}(-1/2,-1/2) &= 0\\ P_{-1/8}(1/4,-1/8) &= 210 & P_{-1/8}(1/4,-1/2) &= 21 & P_{-1/8}(-1/8,-1/2) &= 21\\[3pt] P_{-1/2}(1/4,1/4) &= 40 & P_{-1/2}(-1/8,-1/8) &= 256 & P_{-1/2}(-1/2,-1/2) &= 1\\ P_{-1/2}(1/4,-1/8) &= 256 & P_{-1/2}(1/4,-1/2) &= 40 & P_{-1/2}(-1/8,-1/2) &= 0\\ \end{align*} \caption{Intersection numbers for a $(22,891,1/4)$ code.} \label{table:intnums} \end{table} It follows from Theorem~7.4 in \cite{DGS} that because $\mathcal{C}$ is a $5$-design in which $3$ inner products other than $1$ occur and $5 \ge 2\cdot3 - 2$, the points in $\mathcal{C}$ form a $3$-class association scheme when pairs of points are grouped according to their inner products. In other words, given $\alpha,\beta,\gamma \in \{-1/2,-1/8,1/4,1\}$, there is a number $P_\gamma(\alpha,\beta)$ such that whenever $\langle u_i,u_j \rangle = \gamma$, there are exactly $P_\gamma(\alpha,\beta)$ points $u_k$ such that $\langle u_i,u_k \rangle = \alpha$ and $\langle u_j,u_k \rangle = \beta$. These numbers are called intersection numbers and are determined in the proof of the theorem in \cite{DGS}. We have tabulated them in Table~\ref{table:intnums} (note that $P_\gamma(\alpha,\beta) = P_\gamma(\beta,\alpha)$, $P_\gamma(\alpha,1)$ is the Kronecker delta function $\delta_{\alpha,\gamma}$, and $P_1(\alpha,\beta) = 0$ unless $\alpha=\beta$). \begin{lemma} \bothlabel{lemma:Dn} The lattice $L$ contains a sublattice isometric to $\sqrt{2}D_{24}$ and containing $V_0$ and $V_1$. \end{lemma} Recall that the minimal norm in $D_n$ is $2$, so it is $4$ in $\sqrt{2}D_n$. \begin{proof} We prove by induction on $n$ that there exist minimal vectors $G_1,\dots,G_n \in L$ such that $\langle G_1, G_2 \rangle = 0$, $\langle G_1, G_3 \rangle = -2$, $\langle G_i,G_{i+1}\rangle = -2$ for $2 \le i \le n-1$, and all other inner products vanish.
In other words, for $3 \le k \le n$, the vectors $G_1,\dots,G_k$ span a copy of $\sqrt{2}D_k$, as one can see from the Dynkin diagram of $D_k$: \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(7.55684,2.4)(-0.7,-0.2) \put(0,0){\circle*{0.15}} \put(1,1){\circle*{0.15}} \put(0,2){\circle*{0.15}} \put(2.41421,1){\circle*{0.15}} \put(3.81842,1){\circle*{0.15}} \put(6.65684,1){\circle*{0.15}} \put(1,1){\line(-1,1){1}} \put(1,1){\line(-1,-1){1}} \put(1,1){\line(1,0){1.4142135}} \put(2.41421,1){\line(1,0){1.4142135}} \put(3.81842,1){\line(1,0){1.0142135}} \put(6.65684,1){\line(-1,0){1.0142135}} \put(5.24263,1){\raisebox{-0.0175cm}{\hskip -0.2cm $\dots$}} \put(-0.2,2){\raisebox{-3pt}{\hskip -0.4cm $G_1$}} \put(-0.2,0){\raisebox{-3pt}{\hskip -0.4cm $G_2$}} \put(1,1.2){$G_3$} \put(2.41421,1.2){\hskip -0.2cm $G_4$} \put(3.81842,1.2){\hskip -0.2cm $G_5$} \put(6.65684,1.2){\hskip -0.2cm $G_k$} \end{picture} \end{center} In what follows we refer to this copy as $\sqrt{2}D_k$, and we write $G_1 = -\sqrt{2}(E_1+E_2)$ and $G_i = \sqrt{2}(E_{i-1}-E_{i})$ for $i \ge 2$, so $E_1,\dots,E_n$ is an orthonormal basis of the ambient space ${\mathbb R} D_n = {\mathbb R} \otimes_{\mathbb Z} \sqrt{2}D_n$ of $\sqrt{2} D_n$. We will furthermore choose this sublattice to contain $V_0$ and $V_1$ when $n \ge 5$. For $n=3$, the existence of such vectors follows immediately from the fact that the intersection numbers $P_1(-1/2,-1/2)=42$ and $P_{-1/2}(1/4,1/4)=40$ are positive. Choose $G_1=U_i$ for any $i$. Then among $U_1,\dots,U_{891}$ there are $42$ choices for $G_2$, and $40$ choices among $-U_1,\dots,-U_{891}$ for $G_3$ given $G_2$. Now suppose the assertion holds up to dimension $n$, with $3 \le n < 24$. As a first step we show that there are at least $43$ minimal vectors $W$ in $L$ such that $\langle G_i, W \rangle = 2$ for $i \in \{1,2\}$, whereas in $\sqrt{2}D_n$ there are only $2n-4 \le 42$, namely $G_1+G_2 + \dots +G_k$ and $-G_3-G_4-\dots-G_k$ with $3 \le k \le n$. (Checking this assertion for $\sqrt{2}D_n$ is a straightforward exercise in manipulating coordinates.) The next lattice $\sqrt{2}D_{n+1}$ will be spanned by $\sqrt{2}D_n$ and such a vector $W$. To construct these vectors $W$ we work as follows. Renumbering the vectors if necessary, we can assume that $U_1,\dots,U_{40}$ satisfy $\langle G_i, U_j \rangle = 2$ for $i \in \{1,2\}$ and $1 \le j \le 40$ (because $P_{-1/2}(1/4,1/4)=40$). The vectors $V_0$ and $V_1$ also satisfy $\langle G_i, V_j \rangle = 2$ for $i \in \{1,2\}$ and $j \in \{0,1\}$. We must still find one more choice of $W$. To do so, note that $P_{-1/2}(-1/2,-1/2)=1$. Hence there is a unique vector $U_\ell$ such that $\langle U_\ell, G_1\rangle = \langle U_\ell, G_2\rangle = 0$. The vector $V_2 = V_0 - U_\ell$ is another choice for $W$ (we could also choose $V_1-U_\ell$, but we will not require that many possibilities). The $43$ vectors $U_1,\dots,U_{40},V_0,V_1,V_2$ are all distinct: the only possible danger is if $V_2$ equals one of the other vectors. Because $V_2=V_0-U_\ell$, clearly $V_2 \ne V_0$, and $V_2 \ne V_1$ follows from looking at the second coordinate in the definitions of $V_0,V_1,U_i$. Similarly, $V_2 = U_i$ is impossible because comparing second coordinates shows that $V_0 \ne U_i+U_\ell$. Thus, there are at least $43$ minimal vectors $W$ satisfying $\langle G_k, W \rangle = 2$ for $k = 1,2$, whereas in $\sqrt{2}D_n$ there are at most $42$. 
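(In coordinates, $G_1+G_2+\cdots+G_k = \sqrt{2}(-E_2-E_k)$ and $-G_3-G_4-\cdots-G_k = \sqrt{2}(E_k-E_2)$; a minimal vector $\sqrt{2}(\pm E_i \pm E_j)$ of $\sqrt{2}D_n$ has inner product $2$ with both $G_1 = -\sqrt{2}(E_1+E_2)$ and $G_2 = \sqrt{2}(E_1-E_2)$ precisely when its $E_1$-coefficient vanishes and its $E_2$-coefficient is $-\sqrt{2}$, which yields exactly the $2(n-2)$ vectors $\sqrt{2}(-E_2 \pm E_j)$ with $3 \leq j \leq n$.)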
Choose a minimal vector $W$ with this property such that $W \notin \sqrt{2}D_n$, and in particular choose $W=V_0$ or $W=V_1$ if possible. (That will ensure that $V_0,V_1 \in \sqrt{2}D_n$ if $n \ge 5$.) This vector $W$ cannot be in ${\mathbb R} D_n$: if $W = \sum_{i=1}^n c_i E_i$, then $\langle G_k, W \rangle = 2$ for $k \in \{1,2\}$ implies $c_1 =0$ and $c_2=-\sqrt{2}$. For $3 \leq i \leq n$, $\sqrt{2}(E_2\pm E_i)$ is a minimal vector in $\sqrt{2}D_n \subset L$, and therefore $\langle W, \sqrt{2} (E_2 \pm E_i)\rangle \in \{0, \pm 1, \pm 2\}$. (The inner product cannot be $\pm 4$ because $\sqrt{2}(E_2\pm E_i)\in \sqrt{2}D_n$ but $W\not\in\sqrt{2}D_n$.) Because $\langle W, \sqrt{2} (E_2 \pm E_i)\rangle = -2 \pm \sqrt{2}c_i$, it follows that $c_3 = c_4 = \dots = c_n = 0$, which contradicts $\langle W,W \rangle = 4$. Choose $E_{n+1}$ so that $\{E_1,\dots, E_{n+1}\}$ is an orthonormal basis for ${\mathbb R} D_n \oplus {\mathbb R} W$, and let $W = c_1 E_1 + \dots + c_{n+1} E_{n+1}$. Then the same calculation gives $c_1 = 0$, $c_2 = -\sqrt{2}$, $c_3 = \dots = c_n = 0$, and $c_{n+1} = \pm \sqrt{2}$. Thus, $\sqrt{2}D_n$ and $W$ span a copy of $\sqrt{2}D_{n+1}$ contained in $L$. \end{proof} It will be convenient in the rest of the proof of Theorem~\ref{theorem:891} to change coordinates to agree with the standard coordinates for the Leech lattice (see \cite[p.~131]{CS}). To do so, choose coordinates so that $L$ contains the usual lattice $\sqrt{2}D_{24}$ (i.e., $\sqrt{2}$ times all the integral vectors with even coordinate sum), with $V_0,V_1 \in \sqrt{2}D_{24}$. We can furthermore assume that $V_0 = (4,4,0,\dots, 0)/\sqrt{8}$ and $V_1 = (4,0,4,0,0,\dots,0)/\sqrt{8}$, because the automorphism group of $\sqrt{2}D_{24}$ acts transitively on pairs of minimal vectors with inner product $2$. (We write the vectors in this way, with $4/\sqrt{8}$ instead of $\sqrt{2}$, because it will prove helpful in dealing with the Leech lattice.) In these new coordinates, all inner products are of course preserved, but the coordinates for $U_1,\dots,U_{891}$ are no longer the same as those we previously used. Let $e_1,\dots,e_{24}$ denote the standard basis of ${\mathbb R}^{24}$ with respect to the new coordinates. From this point on, all uses of coordinates refer to the new coordinates. We wish to show that the vectors $U_1,\dots,U_{891}$ are uniquely determined, up to orthogonal transformations of ${\mathbb R}^{24}$ fixing $V_0$ and $V_1$, which include of course permutations and sign changes of the last $21$ coordinates. Let $W = (w_1,\dots, w_{24})/\sqrt{8}$ be one of the $U_i$'s. Then $$ \sum_{i=1}^{24} w_i^2 = 8|W|^2 = 32, $$ $$ (w_i \pm w_j)/2 = \big\langle W, \sqrt{2}(e_i \pm e_j) \big\rangle \in \{0, \pm 1, \pm 2, \pm 4\} $$ for $i \ne j$, $$ (w_1+w_2)/2 = \langle W, V_0 \rangle = 2, $$ and $$ (w_1+w_3)/2 = \langle W, V_1 \rangle = 2. $$ {}From the above conditions we see that each $w_i$ is an integer (because $(w_i+w_j)/2$ and $(w_i-w_j)/2$ are), and that they are all at most $4$ in absolute value and of the same parity. A little more work shows that the only possibilities are \begin{equation}\bothlabel{eq:cases} \sqrt{8}W = \begin{cases}\hlabel{I} 4(e_1\pm e_j) &\textup{with $j \ge 4$}, \\ \hlabel{II} 4(e_2+e_3), \\\hlabel{III} 2(e_1+e_2+e_3) +2 \sum_{k=1}^5 \pm e_{j_k} &\textup{with $3 < j_1 < j_2 < \dots < j_5$, or} \\\hlabel{IV} 3e_1 + e_2 + e_3 + \sum_{j=4}^{24} \pm e_j. \end{cases} \end{equation} To prove this, note first that $w_1 \ge 0$ since $(w_1+w_2)/2=2$ and $w_2 \le 4$. 
If $w_1=0$, then $w_2=w_3=4$, and $w_i=0$ for $i>3$ because $|W|^2=4$. If $w_1=1$, then $w_2=w_3=3$ and hence $(w_2+w_3)/2 = 3$, which is impossible. If $w_1=2$, then $w_2=w_3=2$; the constraint that $(w_1\pm w_i)/2 \in \{0, \pm 1, \pm 2, \pm 4\}$ rules out $w_i = \pm 4$, so all remaining coordinates are in $\{0, \pm 2\}$, and there must be five more $\pm 2$'s because $|W|^2=4$. If $w_1=3$, then $w_2=w_3=1$, and $(w_1\pm w_i)/2 \in \{0, \pm 1, \pm 2, \pm 4\}$ rules out $w_i = \pm 3$ for $i>1$, so all remaining coordinates must be $\pm 1$. Finally, if $w_1=4$, then $w_2=w_3=0$, and $(w_1\pm w_i)/2 \in \{0, \pm 1, \pm 2, \pm 4\}$ implies $w_i \in \{0, \pm 4\}$; exactly one more coordinate must be $\pm 4$ because $|W|^2=4$. Call the cases enumerated in Equation \eqref{eq:cases} above Case~\shref{I}, Case~\shref{II}, Case~\shref{III}, and Case~\shref{IV}, respectively. By abuse of notation, view $\{0,1\}^{21}$ as being contained in ${\mathbb Z}^{21} = \sum_{i=4}^{24} {\mathbb Z} e_i$. We define a code $\mathcal{D} \subset \{0,1\}^{21}$ by stipulating that $c \in \mathcal {D}$ iff $(2(e_1+e_2+e_3) + 2c+4z)/\sqrt{8}$ is one of the $U_i$'s for some $z \in {\mathbb Z}^{21}$. This corresponds to Case~\shref{III} above. The codewords in $\mathcal{D}$ have weight $5$, and the minimum distance between codewords is at least $8$, since the minimum distance between vectors of the lattice $L$ is $2$. (If $(2(e_1+e_2+e_3) + 2c_1+4z_1)/\sqrt{8}$ and $(2(e_1+e_2+e_3) + 2c_2+4z_2)/\sqrt{8}$ are both as above, then $(2(c_1-c_2)+4(z_1-z_2))/\sqrt{8} \in L$. One can add an element of $\sqrt{2}D_{24}$ to cancel all of $4(z_1-z_2)/\sqrt{8}$ except for one coordinate, and another to cancel the remaining coordinate at the cost of changing the sign of one of the $\pm 2$'s occurring in $2(c_1-c_2)/\sqrt{8}$. Then if the distance between the codewords $c_1$ and $c_2$ in $\mathcal{D}$ is less than $8$, the resulting vector in $L$ has length less than $2$.) It follows from the linear programming bounds for constant-weight binary codes (see \cite[p.~545]{MS}) that the largest such code has size $21$. In particular, it is a projective plane over ${\mathbb F}_4$ (the points are coordinates and the lines are the supports of the codewords), or equivalently an $S(2,5,21)$ Steiner system, and it is thus unique up to permutations of the coordinates (Satz~1 in \cite{W}). Also, for each codeword of $\mathcal{D}$, we can only use at most half of the possible sign assignments in the $\pm 2$'s in Case~\shref{III}, since otherwise we would get two elements of $L$ that agree except for one sign and are thus at distance $(2-(-2))/\sqrt{8} = \sqrt{2}$, which is again a contradiction. This gives a total of at most $2^4 \cdot 21 = 336$ possible minimal vectors for Case~\shref{III}. Similarly, for Case~\shref{IV}, define a code $\mathcal{E} \subset \{0,1\}^{21}$ so that $c \in \mathcal{E}$ iff $$ \left( 3e_1 + e_2 + e_3 + 2c -\sum_{i=4}^{24} e_i\right)/\sqrt{8} $$ is one of the $U_i$'s. We note as before that codewords have distance at least $8$ from each other, and also at most $16$ (otherwise two $U_i$'s would be too far apart). The largest such code has $512$ codewords, as is easily proved using linear programming bounds (see Theorem~20 of Chapter~{17} in \cite[p.~542]{MS}), if one takes into account both the minimal and the maximal distance. This is more subtle than it might at first appear, because the linear programming bounds are not in fact sharp if one uses only the minimal distance. 
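As an aside, the combinatorial facts just used about the code $\mathcal{D}$ — and two more that will be needed at the end of the proof, namely that the incidence matrix of the projective plane over ${\mathbb F}_4$ has rank $10$ over ${\mathbb F}_2$ and that the words $111d$ span exactly $512$ words of the form $000c$ — can all be confirmed mechanically. The following throw-away sketch (Python, standard library; the concrete coordinate model of $PG(2,4)$ and all helper names are our own choices, not part of the proof) builds the $21$ lines as weight-$5$ codewords and checks their pairwise distance, their ${\mathbb F}_2$-rank, and the span count:
\begin{verbatim}
from itertools import product, combinations

# GF(4) = {0, 1, w, w+1} encoded as 0..3; addition is bitwise xor
MUL = [[0,0,0,0], [0,1,2,3], [0,2,3,1], [0,3,1,2]]

def dot(u, v):           # GF(4) inner product
    s = 0
    for a, b in zip(u, v):
        s ^= MUL[a][b]
    return s

# points of PG(2,4): nonzero triples, first nonzero coordinate set to 1
points = [p for p in product(range(4), repeat=3)
          if any(p) and p[min(i for i in range(3) if p[i])] == 1]
assert len(points) == 21

# codewords of D = incidence vectors of the 21 lines {p : <h,p> = 0}
D = [tuple(int(dot(h, p) == 0) for p in points) for h in points]
assert all(sum(d) == 5 for d in D)                      # weight 5
assert all(sum(a != b for a, b in zip(d1, d2)) == 8     # distance 8
           for d1, d2 in combinations(D, 2))

# 24-bit words 111d; GF(2)-rank via a standard xor basis
rows = [int('111' + ''.join(map(str, d)), 2) for d in D]
basis = {}                               # leading bit -> basis vector
for r in rows:
    while r:
        h = r.bit_length() - 1
        if h not in basis:
            basis[h] = r
            break
        r ^= basis[h]
print("GF(2)-rank:", len(basis))         # expect 10

span = {0}
for b in basis.values():
    span |= {v ^ b for v in span}
print("words 000c in span:", sum(v < 2**21 for v in span))  # expect 512
\end{verbatim}
We emphasize that the $512$ printed here counts span elements; the bound of $512$ for Case~\shref{IV} itself comes from the linear programming argument above.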
We conclude that there are at most $512$ vectors in Case~\shref{IV}. In all, the number of possible $U_i$'s is at most $2\cdot21+1+336+512 = 891$. On the other hand, we already know that there are $891$ of them. This forces the codes $\mathcal{D}$ and $\mathcal{E}$ to have the greatest possible size. In particular, $\mathcal{D}$ is uniquely determined, up to permutation of coordinates. These coordinate permutations are orthogonal transformations of ${\mathbb R}^{24}$ that fix $V_0$ and $V_1$ and preserve $\sqrt{2} D_{24}$. To complete the proof of uniqueness, it will be enough to show that after performing further such transformations that preserve the code $\mathcal{D}$, we can specify all the vectors of Cases~\shref{III} and \shref{IV} exactly. Let $W_0 = \left(3e_1 + e_2 + e_3 +2c_0 -\sum_{i=4}^{24} e_i\right)/\sqrt{8}$ be a fixed vector from Case~\shref{IV}. Let $i_1,\dots,i_r$ be the places (between $4$ and $24$) where $c_0$ has a $1$. Then let $\phi$ be the composition of reflections in the corresponding hyperplanes (i.e., change the signs of those coordinates). Applying $\phi$ clearly fixes $V_0$ and $V_1$ and it takes the vector $W_0$ to $\left(3e_1 + e_2 + e_3 -\sum_{i=4}^{24} e_i\right)/\sqrt{8}$. It also preserves the code $\mathcal{D}$. Thus, we can assume that $W_0 = \left(3e_1 + e_2 + e_3 -\sum_{i=4}^{24} e_i\right)/\sqrt{8}$. Now we try to determine the precise form of the vectors of Case~\shref{III}. We know that they have $\pm 2$ entries in the positions of the code $\mathcal{D}$; the only question is if we can pin down the positions of the signs. Let $d \in \mathcal{D}$ be a codeword, and let $V$ be any vector in Case~\shref{III} with $\pm 2$'s at the positions specified by the codeword $d$. Suppose $r$ of these are $-2$'s and $5-r$ are $2$'s. Then taking the inner product with $W_0$, we get $$ \langle W_0, V \rangle = \frac{1}{8} (6+2+2 + 2r - 2(5-r)) = \frac{4r}{8}. $$ Since this inner product is an integer, we deduce that $r$ is even. For each codeword $d$, this gives $2^5/2 = 2^4 = 16$ possible vectors. Thus the maximum number of allowed vectors is $21 \cdot 16 = 336$, which we already know is the number of vectors from Case~\shref{III}. Therefore equality holds, and we have specified all the vectors of Case~\shref{III}. Namely, they are all vectors of the form $$ 2e_1+2e_2+2e_3+ 2\sum_{j \textup{ such that } d_j = 1} \pm e_j, $$ where an even number of minus signs are used and $d$ ranges over all codewords in $\mathcal{D}$. Now we claim that the lattice $L$ is generated by $\sqrt{2} D_{24}$, the vectors in Case~\shref{III}, and $W_0$, which implies that the vectors in Case~\shref{IV} are uniquely determined. (Recall that they are the only remaining vectors in $L$ that satisfy the constraints enumerated in the paragraph before Equation \eqref{eq:cases}.) To show that $L$ is generated, it suffices to show that the vectors in Case~\shref{IV} are, because all other generators are already included. For this, let $W = \left(3e_1+ e_2 +e_3 + 2c - \sum_{i=4}^{24} e_i \right)/\sqrt{8}$ be any vector in Case~\shref{IV}. Then $W-W_0 = 2c/\sqrt{8}$ is in the lattice $L$ and it is enough to show that it is in the span of the above generators excluding $W_0$. Equivalently, we must show that $c$ is in the span of $2(e_i \pm e_j)$ and $e_1+e_2+e_3+d$ with $d \in \mathcal{D}$. Because $2c/\sqrt{8} \in L$ and $L$ is even, the weight of $c$ must be a multiple of $4$. 
Therefore, what we need to show is that in ${\mathbb F}_2^{24}$, the codeword $000c$ is in the span of the codewords $111d$ for $d \in \mathcal{D}$, where of course $000c$ denotes the concatenation of $(0,0,0)$ with $c$. (When we work modulo $2$, the vectors $2(e_i \pm e_j)$ vanish. Fortunately, that is not a problem, because $000c$ and $111d$ all have weights divisible by $4$. It follows from congruence modulo $2$ that the difference in ${\mathbb Z}^{24}$ between $000c$ and a sum of vectors of the form $111d$ is not only in the span of the vectors $2e_i$ but in fact in the span of $2(e_i \pm e_j)$.) Conversely, any vector of the form $000c$ that is in the span of the codewords $111d$ for $d \in \mathcal{D}$ will correspond to a vector in Case~\shref{IV}. Of course one must take the sum of an even number of words $111d$ to arrive at a word of the form $000c$. It is easily checked that the code $\mathcal{D}$ spans a $10$-dimensional subspace of ${\mathbb F}_2^{21}$ (simply check that the incidence matrix of the projective plane over ${\mathbb F}_4$ has rank $10$ over ${\mathbb F}_2$; this is easily checked directly or by using a general formula that is implicit in \cite{GM} and explicit in \cite{MM} and \cite{S}). Hence the codewords of the form $111d$ with $d \in \mathcal{D}$ span $512$ words of the form $000c$. Converting back to vectors, these give us $512$ vectors of the form $W-W_0$ with $W$ in Case~\shref{IV}. However, we know that the total number of $W$'s in Case~\shref{IV} is $512$. Therefore all of them must come from this construction. In other words, this shows that $000c$ is always in the span of $111d$ with $d \in \mathcal{D}$, which is what we wanted to prove. This concludes the proof of Theorem~\ref{theorem:891}. \section[Uniqueness of the $(23,4600,1/3)$ code]{\hlabel{sec3}Uniqueness of the $(23,4600,1/3)$ code} Now we add a correction to the proof in \cite{BS} that there is a unique code of size $4600$ and maximum inner product $1/3$ in $S^{22}$, up to orthogonal transformations of ${\mathbb R}^{23}$. As mentioned above, this code is derived from the Leech lattice by taking the kissing arrangement twice. \begin{theorem} \bothlabel{theorem:4600} There is a unique $(23,4600,1/3)$ spherical code, up to orthogonal transformations of ${\mathbb R}^{23}$. \end{theorem} \begin{proof} Let $u_1,\dots,u_{4600}$ be the points in the code, and set $$ V_0 = (2,0,\dots,0) \in {\mathbb R}^{24} $$ and $$ U_i = (1, \sqrt{3} u_i). $$ Let $L$ be the lattice spanned by $V_0$ and $U_1,\dots,U_{4600}$. The analogues of Lemmas~\ref{lemma:norm4} and~\ref{lemma:Dn} go through as before. However, it is then stated in \cite{BS} that $L$ is the Leech lattice, which is not correct (for by construction, every element of $L$ has even inner product with $V_0$, which is not true for every vector in the kissing configuration of the Leech lattice). However, one can take the path that we have described above. Briefly, we have the following setup: Choose new coordinates so that $L$ contains the usual lattice $\sqrt{2}D_{24}$ and $V_0 = (4e_1+4e_2)/\sqrt{8}$. 
The vectors in $L$ that could possibly have inner product $2$ with $V_0$ are of the form $(w_1,\dots,w_{24})/\sqrt{8}$ with $$ (w_1,w_2,\dots,w_{24}) = \begin{cases}\hlabel{Itwo} 4(e_1\pm e_j) & \textup{with $j \geq 3$,} \\\hlabel{IItwo} 4(e_2\pm e_j) & \textup{with $j \geq 3$,}\\\hlabel{IIItwo} 2(e_1+e_2) +2 \sum_{k=1}^6 \pm e_{j_k} & \textup{with $2 < j_1 < j_2 < \dots < j_6$,} \\\hlabel{IVtwo} 3e_1 + e_2 + \sum_{j=3}^{24} \pm e_j, & \textup{or} \\\hlabel{Vtwo} e_1 + 3e_2 + \sum_{j=3}^{24} \pm e_j. \end{cases} $$ Call these cases Case~\twoshref{I} through Case~\twoshref{V}. Once again we enumerate the possibilities: Cases~\twoshref{I} and~\twoshref{II} lead to $44$ vectors each. Case~\twoshref{III} leads to a $(22,8,6)$ code, which has at most $77$ elements by the linear programming bounds for constant-weight codes. Therefore there are at most $2^5 \cdot 77 = 2464$ vectors from Case~\twoshref{III} (as in the previous case only half of the possible sign patterns can occur). Finally, Cases~\twoshref{IV} and~\twoshref{V} both lead to $(22,8)$ codes, so they give at most $2^{10} = 1024$ vectors each by the linear programming bounds for binary codes. The total number of possible vectors is $4600$ exactly, i.e., as many as we started with. Therefore the numbers must be exact, and in particular, we can normalize the code $\mathcal{D}$ corresponding to Case~\twoshref{III}, by the uniqueness of the $(3,6,22)$ Steiner system (Satz~4 in \cite{W}). We need to show that the vectors of Cases~\twoshref{IV} and~\twoshref{V} are determined (up to isometries fixing $V_0$, $\sqrt{2}D_{24}$, and the code $\mathcal{D}$) from this. Let $W_0 = \left(3e_1 + e_2 - \sum_{i=3}^{24} e_i \right)/\sqrt{8}$ be a vector from Case~\twoshref{IV}, which we can assume after applying isometries as before. Let $V$ be a vector from Case~\twoshref{III} with $\pm 2$'s at locations in the codeword $d \in \mathcal{D}$, and suppose there are $r$ minus signs and $6-r$ plus signs. Then $$ \langle W_0, V \rangle = \frac{1}{8} (6+2 + 2r -2(6-r)) = \frac{-4+ 4r}{8}, $$ which forces $r$ to be odd. Now we get $2^6/2$ vectors for each codeword, for $77$ codewords. Again exactness shows us that all the vectors of Case~\twoshref{III} are uniquely determined. Next we would like to show that $\sqrt{2}D_{24}$, $W_0$, and the vectors of Cases~\twoshref{I}, \twoshref{II} and~\twoshref{III} span the lattice $L$; in particular, their span must contain the remaining vectors from Cases~\twoshref{IV} and~\twoshref{V}. It suffices to deal with Case~\twoshref{IV} since clearly Case~\twoshref{V} is obtained by subtracting vectors of Case~\twoshref{IV} from $4(e_1+e_2)/\sqrt{8} = V_0$. For Case~\twoshref{IV}, we employ the same technique used in Case~\shref{IV} for the $(22,891,1/4)$ code. It amounts to showing that the linear span of the $77$ codewords $11d$ with $d \in \mathcal{D}$ contains exactly $1024$ vectors of the form $00c$ with $c \in {\mathbb F}_2^{22}$, which is easily checked on a computer. \end{proof} \section*{Acknowledgements} We thank Eiichi Bannai for pointing out the reference \cite{Cu} and the anonymous referee for helpful feedback.
{ "timestamp": "2007-06-14T03:05:11", "yymm": "0607", "arxiv_id": "math/0607448", "language": "en", "url": "https://arxiv.org/abs/math/0607448", "abstract": "We use techniques of Bannai and Sloane to give a new proof that there is a unique (22,891,1/4) spherical code; this result is implicit in a recent paper by Cuypers. We also correct a minor error in the uniqueness proof given by Bannai and Sloane for the (23,4600,1/3) spherical code.", "subjects": "Metric Geometry (math.MG)", "title": "Uniqueness of the (22,891,1/4) spherical code", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713857177955, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7080104869995661 }
https://arxiv.org/abs/1008.1207
Higher Order SPT-Functions
Andrews' spt-function can be written as the difference between the second symmetrized crank and rank moment functions. Using the machinery of Bailey pairs, a combinatorial interpretation is given for the difference between higher order symmetrized crank and rank moment functions. This implies an inequality between crank and rank moments that was only known previously for sufficiently large n and fixed order. This combinatorial interpretation is in terms of a weighted sum of partitions. A number of congruences for higher order spt-functions are derived.
\section{Introduction} \label{sec:intro} Andrews \cite{An08b} defined the function $\spt(n)$ as the number of smallest parts in the partitions of $n$. He related this function to the second rank moment. He also proved some surprising congruences mod $5$, $7$ and $13$. Namely, he showed that \begin{equation} \spt(n) = n p(n) - \frac{1}{2} N_2(n), \mylabel{eq:sptid} \end{equation} where $N_2(n)$ is the second rank moment function and $p(n)$ is the number of partitions of $n$, and he proved that \begin{align*} \spt(5n+4) &\equiv 0 \pmod{5},\\ \spt(7n+5) &\equiv 0 \pmod{7},\\ \spt(13n+6) &\equiv 0 \pmod{13}. \end{align*} As noted in \cite{Ga10a}, \eqn{sptid} can be rewritten as $$ \spt(n) = \frac{1}{2}(M_2(n) - N_2(n)), $$ where $M_2(n)$ is the second crank moment function. Rank and crank moments were introduced by A.~O.~L.~Atkin and the author \cite{At-Ga}. Bringmann \cite{Br08} studied analytic, asymptotic and congruence properties of the generating function for the second rank moment as a quasi-weak Maass form. Further congruence properties of Andrews' spt-function were found by the author \cite{Ga10a}, Folsom and Ono \cite{Fo-On} and Ono \cite{On10}. In \cite{Ga10a} it was conjectured that \begin{equation} M_{2k}(n) > N_{2k}(n), \mylabel{eq:MNineq} \end{equation} for all $k \ge 1$ and $n\ge 1$. Here $M_{2k}(n)$ and $N_{2k}(n)$ are the $2k$-th crank and $2k$-th rank moment functions. For each fixed $k$, the inequality was proved for sufficiently large $n$ by Bringmann, Mahlburg and Rhoades \cite{Br-Ma-Rh}, who determined the asymptotic behaviour for the difference $M_{2k}(n)-N_{2k}(n)$ (see Section \sect{remarks}). The first few cases of the conjecture were previously proved by Bringmann and Mahlburg \cite{Br-Ma}. In this paper we prove the inequality unconditionally for all $n$ and $k$ by finding a combinatorial interpretation for the difference between symmetrized crank and rank moments. Analytic and arithmetic properties of higher order rank moments were studied by Bringmann, Lovejoy and Osburn \cite{Br-Lo-Os} and by Bringmann, the author and Mahlburg \cite{Br-Ga-Ma}. Andrews \cite{An07a} defined the $k$-th symmetrized rank function by $$ \eta_k(n) = \sum_{m=-n}^n \binom{m+\lfloor\frac{k-1}{2}\rfloor}{k} N(m,n), $$ where $N(m,n)$ is the number of partitions of $n$ with rank $m$. Andrews gave a new interpretation of the symmetrized rank function in terms of Durfee symbols. As a natural analog to the symmetrized rank function we define the $k$-th symmetrized crank function by $$ \mu_k(n) = \sum_{m=-n}^n \binom{m+\lfloor\frac{k-1}{2}\rfloor}{k} M(m,n), $$ where $M(m,n)$ is the number of partitions of $n$ with crank $m$, for $n\ne1$. For $n=1$ we define $$ M(-1,1)=1, M(0,1)=-1, M(1,1)=1,\quad\mbox{and otherwise $M(m,1)=0$.} $$ One of our main results is the following identity \begin{align} &\sum_{n=1}^\infty \left( \mu_{2k}(n) - \eta_{2k}(n)\right) q^n \mylabel{eq:mainid}\\ &= \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2(q^{n_1+1};q)_\infty}. \nonumber \end{align} When $k=1$ this result reduces to \eqn{sptid}. In equation \eqn{mainid} and throughout this paper we use standard $q$-notation \cite{Ga-Ra-book}.
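All of the statements above are easy to test numerically. The following throw-away sketch (Python, standard library only; the helper names are ours, and the modified crank convention at $n=1$ is sidestepped by starting the crank check at $n=2$) verifies \eqn{sptid}, the equivalent form $\spt(n)=\tfrac12(M_2(n)-N_2(n))$, and the three congruences for small $n$:
\begin{verbatim}
def partitions(n, m=None):
    # all partitions of n as weakly decreasing tuples
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def spt(n):   # total number of smallest parts in the partitions of n
    return sum(p.count(min(p)) for p in partitions(n))

def crank(p): # Dyson's crank of a partition
    w = p.count(1)
    return p[0] if w == 0 else sum(1 for x in p if x > w) - w

for n in range(1, 21):
    ps = list(partitions(n))
    N2 = sum((p[0] - len(p))**2 for p in ps)   # second rank moment
    M2 = sum(crank(p)**2 for p in ps)          # second crank moment
    assert 2*spt(n) == 2*n*len(ps) - N2        # eq. (sptid)
    if n >= 2:                                 # n = 1 needs modified M(m,1)
        assert 2*spt(n) == M2 - N2

for n in range(3):
    assert spt(5*n + 4) % 5 == 0
    assert spt(7*n + 5) % 7 == 0
    assert spt(13*n + 6) % 13 == 0
print("checked (sptid), spt = (M_2 - N_2)/2, and the mod 5, 7, 13 cases")
\end{verbatim}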
We compare equation \eqn{mainid} with the identity \beq \sum_{n=1}^\infty \mu_{2k}(n) q^n = \frac{1}{(q)_\infty} \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2}, \mylabel{eq:Mukidintro} \eeq which is proved in Section \sect{bailey}. Some remarks about this identity are also given in Section \sect{remarks}. In Section \sect{symM} we show that many of Andrews' results \cite{An07a} for symmetrized rank moments can be extended to symmetrized crank moments. In Section \sect{bailey} we prove a general result for Bailey pairs from which our main identity \eqn{mainid} follows. In Section \sect{MNineq}, we use an analog of Stirling numbers of the second kind to show how ordinary moments can be expressed in terms of symmetrized moments and how our main identity implies the inequality \eqn{MNineq}. For each $k\ge1$, we are able to define a higher-order spt function $\spt_k(n)$ so that $$ \spt_k(n) = \mu_{2k}(n) - \eta_{2k}(n), $$ for all $k\ge1$ and $n\ge1$. In Section \sect{sptk} we give the combinatorial definition of $\spt_k(n)$ in terms of a weighted sum over the partitions of $n$. We note that when $k=1$, $\spt_k(n)$ coincides with Andrews' spt-function. In Section \sect{congsptk} we prove a number of congruences for the higher order spt-functions. In Section \sect{remarks} we make some concluding remarks and close the paper with a table of $\spt_k(n)$ for small $n$ and $k$. \section{Symmetrized crank moments}\label{sec:symM} In this section we collect some results for symmetrized crank moments. Many of Andrews' results and proofs for symmetrized rank moments have analogs for symmetrized crank moments; thus we omit some details. We will need the following forms of the crank generating function and its derivatives: \begin{align*} C(z,q) &:= \sum_{n=0}^\infty \sum_{m=-n}^n M(m,n) z^m q^n \\ &= \frac{1}{(q)_\infty} \left(1 + \sum_{n=1}^\infty \frac{ (1-z)(1-z^{-1}) (-1)^n q^{n(n+1)/2}(1 + q^n)} {(1-zq^n)(1-z^{-1}q^n)}\right)\qquad\\ & \hspace{3in}\mbox{(by \cite[Eq.(7.15), p.70]{Ga88a})}\\ &= \frac{1}{(q)_\infty} \left(1 + \sum_{n=1}^\infty (-1)^n q^{n(n+1)/2} \left( \frac{1-z}{1-zq^n} + \frac{1-z^{-1}}{1-z^{-1}q^n}\right)\right)\\ &= \frac{(1-z)} {(q)_\infty} \sum_{n=-\infty}^\infty \frac{(-1)^n q^{n(n+1)/2}}{1-zq^n}, \end{align*} and $$ C^{(j)}(z,q) = \frac{-j!}{(q)_\infty} \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^nq^{n(n-1)/2 + jn}(1-q^n)} {(1-zq^n)^{j+1}}, $$ for $j\ge0$. By \cite[Theorem 1]{An07a} we know that $\eta_k(n)=0$ if $k$ is odd. In a similar fashion we find that $\mu_k(n)=0$ if $k$ is odd. We will need \begin{theorem}[Andrews \cite{An07a}] \begin{equation} \sum_{n=1}^\infty \eta_{2k}(n) q^n = \frac{1}{(q)_\infty} \sum_{n=1}^\infty (-1)^{n-1} q^{n(3n-1)/2 + kn} \frac{(1+q^n)} {(1-q^n)^{2k}} =\frac{1}{(q)_\infty} \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(3n+1)/2 + kn}} {(1-q^n)^{2k}}. \mylabel{eq:symNid} \end{equation} \end{theorem} This theorem has a crank analog. \begin{theorem} \begin{equation} \sum_{n=1}^\infty \mu_{2k}(n) q^n = \frac{1}{(q)_\infty} \sum_{n=1}^\infty (-1)^{n-1} q^{n(n-1)/2 + kn} \frac{(1+q^n)} {(1-q^n)^{2k}} =\frac{1}{(q)_\infty} \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(n+1)/2 + kn}} {(1-q^n)^{2k}}.
\mylabel{eq:symMid} \end{equation} \end{theorem} \begin{proof} As in the proof of \cite[Theorem 2]{An07a} we have \begin{align*} \sum_{n=1}^\infty \mu_{2k}(n) q^n & = \frac{1}{(2k)!} \left.\left(\left(\frac{\partial}{\partial z}\right)^{2k} z^{k-1} C(z,q) \right)\right\vert_{z=1}\\ &= \frac{1}{(2k)!} \sum_{j=0}^{k-1} \binom{2k}{j} (k-1)(k-2) \cdots (k-j) C^{(2k-j)}(1,q)\\ &=\frac{1}{(q)_\infty} \sum_{j=0}^{k-1} \binom{k-1}{j} \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(n-1)/2 + (2k-j)n}(1-q^n)} {(1-q^n)^{2k-j+1}},\\ &=\frac{1}{(q)_\infty} \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(n-1)/2 + 2kn}} {(1-q^n)^{2k}} \left(1 + \frac{q^{-n}}{(1-q^n)^{-1}}\right)^{k-1}\\ &=\frac{1}{(q)_\infty} \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(n+1)/2 + kn}} {(1-q^n)^{2k}}. \end{align*} \end{proof} \section{Rank moments, crank moments and Bailey Chains}\label{sec:bailey} In \cite{Pa10}, Alexander Patkowski used a limiting form of Bailey's Lemma to obtain a partition identity analogous to \eqn{sptid}, which relates an spt-like function to the second rank moment. We consider a similar limiting form that iterates Bailey's Lemma and obtain a general theorem for Bailey pairs (see Theorem \thm{mainthm} below). Then we show how our main identity \eqn{mainid} for the difference between symmetrized crank and rank moments follows from using well-known Bailey pairs. In this section we use the standard notation found in \cite{Ga-Ra-book}. \begin{definition} \label{def:baileypair} A pair of sequences $(\alpha_n(a,q),\beta_n(a,q))$ is called a Bailey pair with parameters $(a,q)$ if $$ \beta_n(a,q) = \sum_{r=0}^n \frac{\alpha_r(a,q)}{(q;q)_{n-r} (aq;q)_{n+r}} $$ for all $n\ge 0$. \end{definition} \begin{theorem}[Bailey's Lemma] Suppose $(\alpha_n(a,q),\beta_n(a,q))$ is a Bailey pair with parameters $(a,q)$. Then $(\alpha_n'(a,q), \beta_n'(a,q))$ is another Bailey pair with parameters $(a,q)$, where $$ \alpha_n'(a,q)=\frac{(\rho_1,\rho_2;q)_n}{(aq/\rho_1,aq/\rho_2;q)_n} \left(\frac{aq}{\rho_1\rho_2}\right)^n \alpha_n(a,q) $$ and $$ \beta_n'(a,q)= \sum_{k=0}^n \frac{(\rho_1,\rho_2;q)_k (aq/\rho_1\rho_2;q)_{n-k}} {(aq/\rho_1,aq/\rho_2;q)_{n}(q;q)_{n-k}} \left(\frac{aq}{\rho_1\rho_2}\right)^k\beta_k(a,q). $$ \end{theorem} For more information on Bailey's Lemma and its applications see \cite[Ch.3]{An-CBMS-book}. We will need the following limit which is an easy exercise. \begin{equation} \lim_{\rho_2\to1} \lim_{\rho_1\to1} \frac{1}{(1-\rho_1)(1-\rho_2)} \left(1 - \frac{ (q)_k (q/\rho_1 \rho_2)_k}{(q/\rho_1)_k (q/\rho_2)_k}\right) = \sum_{j=1}^k \frac{q^j}{(1-q^j)^2}. \mylabel{eq:easylimit} \end{equation} \begin{theorem} \label{thm:mainthm} Suppose $(\alpha_n,\beta_n)=(\alpha_n(1,q),\beta_n(1,q))$ is a Bailey pair with $a=1$, and $\alpha_0=\beta_0=1$. Then \begin{align*} &\sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{ (q)_{n_1}^2 q^{n_1 + n_2 + \cdots + n_k} \beta_{n_1}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2}\\ &= \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2} + \sum_{r=1}^\infty \frac{q^{kr} \alpha_r}{ (1-q^r)^{2k}}.
\end{align*} \end{theorem} \begin{proof} From Bailey's Lemma we have \begin{align*} \sum_{j=1}^n &\frac{ (\rho_1)_j (\rho_2)_j (q/\rho_1\rho_2)_{n-j} (q/\rho_1\rho_2)^j \beta_j} { (q)_{n-j}}\\ & = \frac{ (q/\rho_1)_n (q/\rho_2)_n}{(q)_n^2} \left(1 - \frac{ (q)_n (q/\rho_1\rho_2)_n } { (q/\rho_1)_n (q/\rho_2)_n } \right) \\ & \qquad \qquad + (q/\rho_1)_n (q/\rho_2)_n \sum_{r=1}^n \frac{ (\rho_1)_r (\rho_2)_r } {(q)_{n-r} (q)_{n+r} (q/\rho_1)_r (q/\rho_2)_r } \left( \frac{q}{\rho_1 \rho_2}\right)^r \alpha_r. \end{align*} We divide both sides by $(1-\rho_1)(1-\rho_2)$, let $\rho_1\to1$, $\rho_2\to1$, and use \eqn{easylimit} to obtain $$ \sum_{j=1}^n (q)_{j-1}^2 q^j \beta_j = \sum_{j=1}^n \frac{q^j}{(1-q^j)^2} + (q)_n^2 \sum_{r=1}^n \frac{ q^r \alpha_r}{(q)_{n-r} (q)_{n+r} (1-q^r)^2}. $$ Letting $n\to\infty$ we have $$ \sum_{j=1}^\infty (q)_{j-1}^2 q^j \beta_j = \sum_{j=1}^\infty \frac{q^j}{(1-q^j)^2} + \sum_{r=1}^\infty \frac{ q^r \alpha_r}{(1-q^r)^2}, $$ which is the case $k=1$ of the theorem. Now we suppose that the theorem is true for $k=K-1$, so that \begin{align*} &\sum_{n_K \ge n_{K-1} \ge \cdots \ge n_2 \ge 1} \frac{ (q)_{n_2}^2 q^{n_2 + \cdots + n_K} \beta_{n_2}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_2})^2}\\ &= \sum_{n_K \ge n_{K-1} \ge \cdots \ge n_2 \ge 1} \frac{q^{n_2 + \cdots + n_K}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_2})^2} + \sum_{r=1}^\infty \frac{q^{(K-1)r} \alpha_r}{ (1-q^r)^{2K-2}}. \end{align*} We now replace $(\alpha_n,\beta_n)$ by the Bailey pair $(\alpha_n',\beta_n')$ in Bailey's Lemma to obtain \begin{align*} &\sum_{\substack{n_K \ge n_{K-1} \ge \cdots \ge n_2 \ge 1\\ n_2\ge n_1\ge0}} \frac{ (q)_{n_2}^2 q^{n_2 + \cdots + n_K} (\rho_1)_{n_1} (\rho_2)_{n_1} (q/\rho_1\rho_2)_{n_2-n_1} (q/\rho_1\rho_2)^{n_1} \beta_{n_1}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_2})^2 (q)_{n_2-n_1} (q/\rho_1)_{n_2} (q/\rho_2)_{n_2}}\\ &= \sum_{n_K \ge n_{K-1} \ge \cdots \ge n_2 \ge 1} \frac{q^{n_2 + \cdots + n_K}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_2})^2} + \sum_{r=1}^\infty \frac{q^{(K-1)r} (\rho_1)_r (\rho_2)_r (q/\rho_1\rho_2)^r \alpha_r} {(1-q^r)^{2K-2} (q/\rho_1)_r (q/\rho_2)_r}, \end{align*} and \begin{align*} &\sum_{n_K \ge n_{K-1} \ge \cdots \ge n_2 \ge n_1\ge1} \frac{ (q)_{n_2}^2 q^{n_2 + \cdots + n_K} (\rho_1)_{n_1} (\rho_2)_{n_1} (q/\rho_1\rho_2)_{n_2-n_1} (q/\rho_1\rho_2)^{n_1} \beta_{n_1}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_2})^2 (q)_{n_2-n_1} (q/\rho_1)_{n_2} (q/\rho_2)_{n_2}}\\ &= \sum_{n_K \ge n_{K-1} \ge \cdots \ge n_2 \ge 1} \frac{q^{n_2 + \cdots + n_K}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_2})^2} \left(1 - \frac{ (q)_{n_2} (q/\rho_1\rho_2)_{n_2}} { (q/\rho_1)_{n_2} (q/\rho_2)_{n_2} } \right) \\ &\qquad + \sum_{r=1}^\infty \frac{q^{(K-1)r} (\rho_1)_r (\rho_2)_r (q/\rho_1\rho_2)^r \alpha_r} {(1-q^r)^{2K-2} (q/\rho_1)_r (q/\rho_2)_r}. \end{align*} We divide both sides by $(1-\rho_1)(1-\rho_2)$, let $\rho_1\to1$, $\rho_2\to1$, and use \eqn{easylimit} to obtain \begin{align*} &\sum_{n_K \ge n_{K-1} \ge \cdots \ge n_1 \ge 1} \frac{ (q)_{n_1}^2 q^{n_1 + n_2 + \cdots + n_K} \beta_{n_1}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_1})^2}\\ &= \sum_{n_K \ge n_{K-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_K}} {(1-q^{n_K})^2 (1-q^{n_{K-1}})^2 \cdots (1-q^{n_1})^2} + \sum_{r=1}^\infty \frac{q^{Kr} \alpha_r}{ (1-q^r)^{2K}}, \end{align*} which is the result for $k=K$. The general result follows by induction. 
\end{proof} \begin{cor} \begin{equation} \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2} = \sum_{n=1}^\infty (-1)^{n-1} q^{n(n-1)/2 + kn} \frac{ (1+q^n) } { (1-q^n)^{2k} }. \mylabel{eq:symMid2} \end{equation} \end{cor} \begin{proof} The result follows from Theorem \thm{mainthm} using the well-known Bailey pair \cite[pp.27-28]{An-CBMS-book} $$ \alpha_n = \begin{cases} 1 & n=0,\\ (-1)^n q^{n(n-1)/2}(1+q^n) & n\ge1, \end{cases}\qquad \beta_n = \begin{cases} 1 & n=0,\\ 0 & n\ge1. \end{cases} $$ \end{proof} We note that we can rewrite \eqn{symMid2} as \beq \sum_{n=1}^\infty \mu_{2k}(n) q^n = \frac{1}{(q)_\infty} \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2}, \mylabel{eq:Mukid} \eeq after using \eqn{symMid}. \begin{cor} \begin{align} &\sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{(q)_{n_1} q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2} \mylabel{eq:symNid2}\\ &= \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2} + \sum_{n=1}^\infty (-1)^{n} q^{n(3n-1)/2 + kn} \frac{ (1+q^n) } { (1-q^n)^{2k} }. \nonumber \end{align} \end{cor} \begin{proof} The result follows from Theorem \thm{mainthm} using the well-known Bailey pair \cite[p.28]{An-CBMS-book} $$ \alpha_n = \begin{cases} 1 & n=0,\\ (-1)^n q^{n(3n-1)/2}(1+q^n) & n\ge1, \end{cases}\qquad \beta_n = \frac{1}{(q)_n}. $$ \end{proof} \begin{cor} \begin{align} &\sum_{n=1}^\infty \left( \mu_{2k}(n) - \eta_{2k}(n)\right) q^n \mylabel{eq:sptkid}\\ &= \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2(q^{n_1+1};q)_\infty}. \nonumber \end{align} \end{cor} \begin{proof} After dividing both sides of \eqn{symNid2} by $(q)_\infty$ and using \eqn{symMid2} we have \begin{align*} &\sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2(q^{n_1+1};q)_\infty} \\ &= \frac{1}{(q)_\infty}\left( \sum_{n=1}^\infty (-1)^{n-1} q^{n(n-1)/2 + kn} \frac{ (1+q^n) } { (1-q^n)^{2k} } - \sum_{n=1}^\infty (-1)^{n-1} q^{n(3n-1)/2 + kn} \frac{ (1+q^n) } { (1-q^n)^{2k} } \right) \\ &=\sum_{n=1}^\infty \left( \mu_{2k}(n) - \eta_{2k}(n)\right) q^n, \end{align*} by \eqn{symMid} and \eqn{symNid}. \end{proof} \section{Rank and crank moment inequalities}\label{sec:MNineq} In this section we prove the conjectured inequality \eqn{MNineq} for rank and crank moments. We need to relate ordinary and symmetrized moments. This is achieved by defining an analog of Stirling numbers of the second kind. This approach was suggested by Mike Hirschhorn. We define a sequence of polynomials $$ g_k(x) = \prod_{j=0}^{k-1} (x^2 - j^2), $$ for $k\ge 1$. We want a sequence of numbers $S^{*}(n,k)$ such that $$ x^{2n} = \sum_{k=1}^n S^{*}(n,k) g_k(x), $$ for $n\ge 1$. \begin{definition} We define the sequence $S^{*}(n,k)$ ($1\le k \le n$) recursively by \begin{enumerate} \item $S^{*}(1,1)=1$, \item $S^{*}(n,k)=0$ if $k\le 0$ or $k > n$, and \item $S^{*}(n+1,k) = S^{*}(n,k-1) + k^2 S^{*}(n,k)$, for $1 \le k \le n+1$.
\end{enumerate} \end{definition} Below is a table of $S^{*}(n,k)$ for small $n$: \begin{center} $1$\\ $1$ \qquad $1$\\ $1$ \qquad $5$ \qquad $1$\\ $1$ \qquad $21$ \qquad $14$ \qquad $1$\\ $1$ \qquad $85$ \qquad $147$ \qquad $30$ \qquad $1$\\ $1$ \qquad $341$ \qquad $1408$ \qquad $649$ \qquad $55$ \qquad $1$ \end{center} We note that if we replace $k^2$ by $k$ in the recurrence we obtain the Stirling numbers of the second kind. The numbers $S^{*}(n,k)$ first occur in a paper of MacMahon \cite[p.106]{MacM19}. Mikl\'os B\'ona reminded me that Neil Sloane's Online Encyclopedia of Integer Sequences \cite{Sl-OEIS} can also handle $2$-dimensional sequences. One just needs to input the first few terms of \beq \left\{\left\{ S^{*}(n,k)\right\}_{k=1}^n\right\}_{n=1}^\infty = 1, 1, 1, 1, 5, 1, 1, 21, 14, 1, 1, 85, 147, 30, 1, \dots, \mylabel{eq:Ssseq} \eeq to find the sequence labelled \texttt{A036969} \cite{Sl-A036969}, where more references can be found. We have \begin{lemma} \label{lem:stirling} For $n\ge 1$, $$ x^{2n} = \sum_{k=1}^n S^{*}(n,k) g_k(x). $$ \end{lemma} \begin{proof} We proceed by induction on $n$. The result is true for $n=1$ since $S^{*}(1,1)=1$ and $g_1(x)=x^2$. We now suppose the result is true for $n=m$, so that $$ x^{2m} = \sum_{k=1}^m S^{*}(m,k) g_k(x). $$ We have $g_{k+1}(x) = (x^2 - k^2) g_k(x)$ and $$ x^2 g_k(x) = g_{k+1}(x) + k^2 g_k(x), $$ for $k\ge 1$. Thus \begin{align*} x^{2m+2} &= \sum_{k=1}^m S^{*}(m,k) x^2 g_k(x) \\ &= \sum_{k=1}^m S^{*}(m,k)( g_{k+1}(x) + k^2 g_k(x))\\ &= \sum_{k=1}^{m+1} (S^{*}(m,k-1) + k^2 S^{*}(m,k)) g_k(x)\\ &= \sum_{k=1}^{m+1} S^{*}(m+1,k) g_k(x), \end{align*} and the result is true for $n=m+1$ and true for all $n$ by induction. \end{proof} We can now express ordinary moments in terms of symmetrized moments. \begin{theorem} For $k\ge 1$ \begin{align} \mu_{2k}(n) &= \frac{1}{(2k)!} \sum_{m=-n}^n g_k(m) \,M(m,n), \mylabel{eq:M2symM}\\ \eta_{2k}(n) &= \frac{1}{(2k)!} \sum_{m=-n}^n g_k(m) \,N(m,n), \mylabel{eq:N2symN}\\ M_{2k}(n) &= \sum_{j=1}^k (2j)! \, S^{*}(k,j) \, \mu_{2j}(n), \mylabel{eq:symM2M}\\ N_{2k}(n) &= \sum_{j=1}^k (2j)! \, S^{*}(k,j) \, \eta_{2j}(n). \mylabel{eq:symN2N} \end{align} \end{theorem} \begin{proof} Suppose $k\ge1$. Then \begin{align*} \mu_{2k}(n) &= \sum_{m=-n}^n \binom{m+k-1}{2k} M(m,n)\\ &= \frac{1}{(2k)!}\sum_{m=-n}^n (m+k-1)(m+k-2)\cdots(m-k) M(m,n)\\ &= \frac{1}{(2k)!}\sum_{m=-n}^n (m^2-(k-1)^2)(m^2-(k-2)^2)\cdots(m^2-1)m(m-k) M(m,n)\\ &= \frac{1}{(2k)!}\sum_{m=-n}^n g_k(m) M(m,n), \end{align*} since $M(-m,n)=M(m,n)$ for all $m$ (by symmetry the odd term $-km\prod_{j=1}^{k-1}(m^2-j^2)$ sums to zero, leaving $g_k(m)$). This gives \eqn{M2symM} and similarly \eqn{N2symN}. Using Lemma \lem{stirling} and \eqn{M2symM} we see that \begin{align*} M_{2k}(n) &= \sum_{m=-n}^n m^{2k} M(m,n)\\ &= \sum_{m=-n}^n \left(\sum_{j=1}^k S^{*}(k,j) g_j(m)\right)M(m,n)\\ &= \sum_{j=1}^k (2j)! \, S^{*}(k,j) \, \mu_{2j}(n), \end{align*} which is \eqn{symM2M}. Equation \eqn{symN2N} follows similarly. \end{proof} We can now deduce our crank-rank moment inequality. \begin{cor} For $1\le k \le n$ $$ M_{2k}(n) > N_{2k}(n). $$ \end{cor} \begin{proof} Suppose $j\ge1$. Then from \eqn{sptkid} we have $$ \sum_{n=1}^\infty \left( \mu_{2j}(n) - \eta_{2j}(n)\right) q^n = \frac{q^j}{(1-q)^{2j} (q^2;q)_\infty} + \cdots, $$ and we see that $$ \mu_{2j}(n) > \eta_{2j}(n), $$ for all $n\ge j\ge 1$.
Now using \eqn{symM2M}, \eqn{symN2N} and the fact that the coefficients $S^{*}(k,j)$ are positive integers we have \begin{align*} M_{2k}(n) - N_{2k}(n) &= \sum_{j=1}^k (2j)!\, S^{*}(k,j) \,\left(\mu_{2j}(n) - \eta_{2j}(n)\right)\\ & \ge 2 \left(\mu_{2}(n) - \eta_{2}(n)\right) > 0, \end{align*} for all $n\ge 1$. \end{proof} \section{Higher order spt-functions}\label{sec:sptk} In this section we define a higher-order spt function $\spt_k(n)$ so that $$ \spt_k(n) = \mu_{2k}(n) - \eta_{2k}(n), $$ for all $k\ge1$ and $n\ge1$. The idea is to interpret the right side of \eqn{sptkid} in terms of partitions. \begin{definition} For a partition $\pi$ with $m$ different parts $$ n_1 < n_2 < \cdots < n_m, $$ we define $f_j=f_j(\pi)$ to be the frequency of part $n_j$ for $1 \le j \le m$. \end{definition} We note that $f_1=f_1(\pi)$ is the number of smallest parts in the partition $\pi$ and Andrews' function $$ \spt(n) = \sum_{\pi \vdash n} f_1(\pi). $$ \begin{definition} Let $k\ge 1$. For a partition $\pi$ we define a weight $$ \omega_k(\pi) = \sum_{\substack{ m_1 + m_2 + \cdots + m_r = k \\ 1 \le r \le k}} \binom{f_1 + m_1 -1}{2m_1 -1} \sum_{2 \le j_2 < j_3 < \cdots < j_r} \binom{f_{j_2} + m_2}{2m_2} \binom{f_{j_3} + m_3}{2m_3} \cdots \binom{f_{j_r} + m_r}{2m_r}, $$ and $$ \spt_k(n) = \sum_{\pi \vdash n} \omega_k(\pi). $$ \end{definition} We note that the outer sum above is over all compositions $m_1 + m_2 + \cdots + m_r$ of $k$. \begin{example}[$k=1$] There is only one composition of $1$, so $\omega_1(\pi)=f_1(\pi)$ and $$ \spt_1(n) = \spt(n). $$ \end{example} \begin{example}[$k=2$] There are two compositions of $2$, namely $2$ and $1+1$, $$ \omega_2(\pi) = \binom{f_1+1}{3} + f_1 \sum_{2\le j} \binom{f_j+1}{2}, $$ and $$ \spt_2(n) = \sum_{\pi \vdash n} \omega_2(\pi). $$ We calculate $\spt_2(4)$. There are five partitions of $4$: $$ \begin{array}{lll} 4 & f_1=1 & \omega_2 = 0\\ 3+1 & f_1=f_2=1 & \omega_2 = 1\\ 2+2 & f_1=2 & \omega_2 = 1\\ 2+1+1 & f_1=2, f_2=1 & \omega_2 = 1+2=3\\ 1+1+1+1 & f_1=4 & \omega_2 = 10 \end{array} $$ Hence $\spt_2(4) = 0 + 1 + 1 + 3 + 10 = 15$. \end{example} \begin{example}[$k=3$] There are four compositions of $3$, namely $3$, $2+1$, $1+2$ and $1+1+1$. Hence the definition of $\omega_3(\pi)$ has four terms: $$ \omega_3(\pi) = \binom{f_1+2}{5} + \binom{f_1 + 1}{3} \sum_{2\le j} \binom{f_j+1}{2} + f_1 \sum_{2\le j} \binom{f_j+2}{4} + f_1 \sum_{2\le j < k} \binom{f_j+1}{2}\binom{f_k+1}{2}, $$ and $$ \spt_3(n) = \sum_{\pi \vdash n} \omega_3(\pi). $$ To illustrate, we calculate $\spt_3(5)$. There are seven partitions of $5$: $$ \begin{array}{lll} 5 & f_1=1 & \omega_3 = 0\\ 4+1 & f_1=f_2=1 & \omega_3 = 0\\ 3+2 & f_1=f_2=1 & \omega_3 = 0\\ 3+1+1 & f_1=2, f_2=1 & \omega_3 = 1\\ 2+2+1 & f_1=1, f_2=2 & \omega_3 = 1\\ 2+1+1+1 & f_1=3, f_2=1 & \omega_3 = 1 + 4 = 5\\ 1+1+1+1+1 & f_1=5& \omega_3 = 21\\ \end{array} $$ Hence $\spt_3(5) = 0 + 0 + 0 + 1 +1 + 5 + 21 = 28$. \end{example} Our goal in this section is to prove \begin{theorem} \label{thm:sptkthm} For $1\le k \le n$ $$ \spt_k(n) = \mu_{2k}(n) - \eta_{2k}(n). $$ \end{theorem} \begin{proof} First we need the elementary identities $$ \sum_{n=j}^\infty \binom{n+j-1}{2j-1}x^n = \frac{x^j}{(1-x)^{2j}}\quad \mbox{and}\quad \sum_{n=j}^\infty \binom{n+j}{2j}x^n = \frac{x^j}{(1-x)^{2j+1}}. $$ To give the idea of the proof we first consider the case $k=4$.
From \eqn{sptkid} we have \begin{align*} &\sum_{n=4}^\infty \left( \mu_{8}(n) - \eta_{8}(n)\right) q^n \\ & = \sum_{1\le m \le j \le k \le n} \frac{q^{m+j+k+n}}{ (1-q^m)^2 (1-q^j)^2 (1-q^k)^2 (1-q^n)^2 (q^{m+1};q)_\infty}\\ & = \sum_{1 \le m=j=k=n } + \sum_{1 \le m=j=k<n } + \sum_{1 \le m=j<k=n } + \sum_{1 \le m<j=k=n } + \\ & \quad \sum_{1 \le m=j<k<n } + \sum_{1 \le m<j=k<n } + \sum_{1 \le m<j<k=n } + \sum_{1 \le m<j<k<n } \\ & = \sum_{m=1}^\infty \frac{q^{4m}}{(1-q^m)^8} \prod_{i> m} \frac{1}{(1-q^i)} + \sum_{1\le m < n} \frac{q^{3m}}{(1-q^m)^6} \frac{q^n}{(1-q^n)^3} \prod_{\substack{i> m\\ i\ne n}} \frac{1}{(1-q^i)} \\ & + \sum_{1\le m < n} \frac{q^{2m}}{(1-q^m)^4} \frac{q^{2n}}{(1-q^n)^5} \prod_{\substack{i> m\\ i\ne n}} \frac{1}{(1-q^i)} + \sum_{1\le m < n} \frac{q^{m}}{(1-q^m)^2} \frac{q^{3n}}{(1-q^n)^7} \prod_{\substack{i> m\\ i\ne n}} \frac{1}{(1-q^i)} \\ & + \sum_{1\le m < k < n} \frac{q^{2m}}{(1-q^m)^4} \frac{q^{k}}{(1-q^k)^3} \frac{q^{n}}{(1-q^n)^3} \prod_{\substack{i> m\\ i\ne k,n}} \frac{1}{(1-q^i)} \\ & + \sum_{1\le m < k < n} \frac{q^{m}}{(1-q^m)^2} \frac{q^{2k}}{(1-q^k)^5} \frac{q^{n}}{(1-q^n)^3} \prod_{\substack{i> m\\ i\ne k,n}} \frac{1}{(1-q^i)} \\ & + \sum_{1\le m < k < n} \frac{q^{m}}{(1-q^m)^2} \frac{q^{k}}{(1-q^k)^3} \frac{q^{2n}}{(1-q^n)^5} \prod_{\substack{i> m\\ i\ne k,n}} \frac{1}{(1-q^i)} \\ & + \sum_{1\le m < j < k < n} \frac{q^{m}}{(1-q^m)^2} \frac{q^{j}}{(1-q^j)^3} \frac{q^{k}}{(1-q^k)^3} \frac{q^{n}}{(1-q^n)^3} \prod_{\substack{i> m\\ i\ne j,k,n}} \frac{1}{(1-q^i)}. \end{align*} There are eight compositions of $4$: $4$, $3+1$, $2+2$, $1+3$, $2+1+1$, $1+2+1$, $1+1+2$, and $1+1+1+1$. Each of the eight sums above has the form $$ \sum_{1 \le n_1 < n_{j_2} < \cdots < n_{j_r}} \frac{q^{m_1 n_1}}{(1-q^{n_1})^{2 m_1}} \frac{q^{m_2 n_{j_2}}}{(1-q^{n_{j_2}})^{2 m_2+1}} \cdots \frac{q^{m_r n_{j_r}}}{(1-q^{n_{j_r}})^{2 m_r+1}} \prod_{\substack{i> n_1\\ i\not\in\{n_{j_2},\dots,n_{j_r}\}}} \frac{1}{(1-q^i)}, $$ where $m_1 + m_2 + \cdots + m_r$ is a composition of $k=4$. This sum can be written as \begin{align*} &\sum_{\substack{{1 \le n_1 < n_{j_2} < \cdots < n_{j_r}}\\ {f_1\ge m_1, f_{j_2}\ge m_{2}, \dots, f_{j_r}\ge m_{r}} }} \binom{f_1 + m_1 -1 }{2m_1 - 1} \binom{f_{j_2} + m_2 }{2m_2} \cdots \binom{f_{j_r} + m_r }{2m_r} \\ &\qquad \qquad \qquad \qquad \qquad \qquad\cdot \, q^{f_1 n_1 + f_{j_2} n_{j_2} + \cdots + f_{j_r} n_{j_r}} \prod_{\substack{i> n_1\\ i\not\in\{n_{j_2},\dots,n_{j_r}\}}} \frac{1}{(1-q^i)}. \end{align*} We see that this is the generating function for certain weighted partitions in which $n_1$ is the smallest part, $n_1 < n_{j_2} < \cdots < n_{j_r}$ is an $r$-subset of the parts of the partition, and $f_j$ is the frequency of part $n_j$ for each $j$. It follows that $$ \sum_{n=1}^\infty \left( \mu_{8}(n) - \eta_{8}(n)\right) q^n = \sum_{n=1}^\infty \sum_{\pi \vdash n} \omega_4(\pi)\, q^n = \sum_{n=4}^\infty \spt_4(n) q^n. $$ The proof of the general case is completely analogous. Now suppose $k\ge 1$. From \eqn{sptkid} we have \begin{align*} &\sum_{n=1}^\infty \left( \mu_{2k}(n) - \eta_{2k}(n)\right) q^n \nonumber\\ &= \sum_{1\le n_1 \le n_{2} \le \cdots \le n_{k}} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_1})^2 (1-q^{n_{2}})^2 \cdots (1-q^{n_k})^2(q^{n_1+1};q)_\infty}. \end{align*} We partition this sum into $2^{k-1}$ subsums by changing each ``$\le$'' in the general inequality ${n_1 \le n_{2} \le \cdots \le n_k}$ to either ``$=$'' or ``$<$''. In this way each subsum corresponds to a unique composition $m_1 + m_2 + \cdots + m_r$ of $k$ (where $1 \le r \le k$).
We proceed just as in the case $k=4$ and the general result follows. \end{proof} \section{Congruences for higher order spt-functions}\label{sec:congsptk} In \cite{Br-Ga-Ma} it was shown that given any prime $\ell>3$ with $k$ and $j$ fixed there are infinitely many arithmetic progressions $An+B$ such that $$ \eta_{2k}(An + B) \equiv 0 \pmod{\ell^j}. $$ Using known results for crank moments \cite[\S 7]{Br-Ga-Ma} and standard techniques \cite{Br-Ga-Ma}, \cite{Br08} we may deduce the analog of this result for higher order spt-functions. In this section we prove a number of nice explicit congruences for higher order spt-functions. Many of the congruences follow from known results for rank and crank moments \cite{At-Ga}. \begin{theorem} \begin{align} \spt_2(n) &\equiv 0 \pmod{5},\quad \mbox{if $n\equiv 0,1,4\pmod{5}$} \mylabel{eq:spt2mod5}\\ \spt_2(n) &\equiv 0 \pmod{7},\quad \mbox{if $n\equiv 0,1,5\pmod{7}$} \mylabel{eq:spt2mod7}\\ \spt_2(n) &\equiv 0 \pmod{11},\quad \mbox{if $n\equiv 0\pmod{11}$}. \mylabel{eq:spt2mod11} \end{align} \end{theorem} \begin{proof} By definition, $$ \spt_2(n) = \mu_4(n) - \eta_4(n) = \frac{1}{24}(M_4(n) - M_2(n) - N_4(n) + N_2(n)). $$ From \cite[(5.6)]{At-Ga} we have $$ N_4(n) = -\frac{2}{3}(3n+1) M_2(n) + \frac{8}{3} M_4(n) + (1-12n) N_2(n), $$ and \begin{equation} 24 \spt_2(n) = (2n - \tfrac{1}{3}) M_2(n) - \frac{5}{3} M_4(n) + 12n N_2(n). \mylabel{eq:spt2id} \end{equation} The congruence \eqn{spt2mod5} now follows from \begin{align*} M_2(n) &= 2n p(n), \qquad \mbox{(\cite[(1.27)]{At-Ga})}\\ N_2(n) &\equiv (n+4) p(n),\quad \mbox{for $n\not\equiv0,3\pmod{5}$} \qquad \mbox{(\cite[p.285]{Ga10a})},\\ p(5n+4) &\equiv 0 \pmod{5}. \end{align*} To begin the proof of \eqn{spt2mod7} we use \eqn{spt2id} to obtain $$ \spt_2(n) \equiv M_4(n) + 3(n+1) M_2(n) + 4n N_2(n) \pmod{7}. $$ From \cite[p.285]{Ga10a} \begin{equation} N_2(n) \equiv (6n+1) p(n) \pmod{7},\quad \mbox{for $n\not\equiv0,2,6\pmod{7}$} \mylabel{eq:N2mod7} \end{equation} so that \begin{equation} \spt_2(n) \equiv M_4(n) + 3(n+1) M_2(n)\pmod{7},\quad \mbox{for $n\equiv0,1,5\pmod{7}$.} \mylabel{eq:spt2mod7id} \end{equation} From \cite[(1.21)]{At-Ga} we have $$ M_4(7n+5) \equiv M_2(7n+5) \equiv 0\pmod{7},\quad\mbox{and}\quad \spt_2(7n+5)\equiv0\pmod{7}. $$ From \cite[(6.5)]{At-Ga} \begin{equation} (n+2) M_4(n) \equiv -(6n^2+4n+1) M_2(n)\pmod{7}, \mylabel{eq:M42mod7} \end{equation} so that \begin{align} M_4(7n)&\equiv 3 M_2(7n) \equiv 0 \pmod{7},\quad \mbox{(since $M_2(n)=2np(n)$)}, \mylabel{eq:M420mod7}\\ M_4(7n+1) &\equiv M_2(7n+1) \pmod{7}, \mylabel{eq:M421mod7} \end{align} and $$ \spt_2(7n)\equiv\spt_2(7n+1)\equiv0\pmod{7}, $$ by \eqn{spt2mod7id}. The proof of \eqn{spt2mod11} is similar to that of \eqn{spt2mod5} and \eqn{spt2mod7}. From \eqn{spt2id} we have $$ \spt_2(n) \equiv M_4(n) + (n+9) M_2(n) + 6n N_2(n) \pmod{11}. $$ From \cite[(6.6)]{At-Ga} $$ (n+5)^3 M_4(n) \equiv (5n^4 + 10n^3 + 8n^2 + 8n+ 9) M_2(n)\pmod{11}, $$ so that $$ M_4(11n)\equiv M_2(11n)\equiv0\pmod{11}, $$ and $$ \spt_2(11n)\equiv0\pmod{11}.
$$ \end{proof} \begin{theorem} \begin{align} \spt_3(n) &\equiv 0 \pmod{7},\quad \mbox{if $n\not\equiv 3,6\pmod{7}$}, \mylabel{eq:spt3mod7}\\ \spt_3(n) &\equiv 0 \pmod{2},\quad \mbox{if $n\equiv 1\pmod{4}$.} \mylabel{eq:spt3mod2} \end{align} \end{theorem} \begin{proof} From \cite[(5.6)-(5.7)]{At-Ga} and the definition of $\spt_3(n)$ we have \begin{align} \spt_3(n) & = - \frac{7}{7920} M_6(n) + \frac{1}{1584}(60n+13) M_4(n) + \frac{1}{3960}(7-78n-108n^2) M_2(n) \nonumber\\ &\qquad - \frac{1}{20} n(1+3n) N_2(n), \mylabel{eq:spt3id} \end{align} and \begin{equation} \spt_3(n) \equiv n(5n+4) M_2(n) + (3 + 2n) M_4(n) + n(3n+1) N_2(n)\pmod{7}. \mylabel{eq:spt3mod7id} \end{equation} This implies that $$ \spt_3(7n+2)\equiv0\pmod{7}. $$ Known results for the rank and crank \cite[(1.18),(1.21)]{At-Ga} imply that $$ \spt_3(7n+5)\equiv0\pmod{7}. $$ The congruences \eqn{N2mod7}, \eqn{M420mod7}, \eqn{M421mod7} and \eqn{spt3mod7id} imply that $$ \spt_3(7n)\equiv\spt_3(7n+1)\equiv0\pmod{7}. $$ The congruences \eqn{M42mod7} and \eqn{spt3mod7id} imply that $$ \spt_3(7n+4)\equiv2M_2(7n+4) + 3N_2(7n+4)\pmod{7}. $$ From \eqn{N2mod7} and the fact that $M_2(n) = 2np(n)$ we have $$ M_2(7n+4)\equiv p(7n+4), \quad N_2(7n+4) \equiv 4 p(7n+4) \pmod{7} $$ and $$ \spt_3(7n+4)\equiv0 \pmod{7}. $$ We now turn to the congruence \eqn{spt3mod2}. First we note that the term $$ \frac{1}{20} n(1+3n) N_2(n) \equiv 0 \pmod{2}, $$ when $n\equiv 1 \pmod{4}$ since $N_2(n)\equiv0\pmod{2}$. We define $$ s_3(n) = - \frac{7}{7920} M_6(n) + \frac{1}{1584}(60n+13) M_4(n) + \frac{1}{3960}(7-78n-108n^2) M_2(n) $$ so that $$ \spt_3(4n+1) \equiv s_3(4n+1) \pmod{2}. $$ By \cite[Theorem 4.2]{At-Ga}, the function $$ S_3(q) := \sum_{n=1}^\infty s_3(n) q^n \in P \mathcal{W}_3, $$ where $\mathcal{W}_n$ is a space of quasimodular forms of weight bounded by $2n$ defined in \cite[(3.7)]{At-Ga}, and \beq P = P(q) = \frac{1}{(q)_\infty}. \mylabel{eq:Pdef} \eeq We define the functions $$ P_3(q) = \sum_{n=1}^\infty p_3(n) q^n := P(q) \sum_{n=1}^\infty \sigma_3(n) q^n, $$ and $$ P_5(q) = \sum_{n=1}^\infty p_5(n) q^n := P(q) \sum_{n=1}^\infty \sigma_5(n) q^n. $$ Let $\delta_q = q\frac{d}{dq}$. By \cite[(3.29) and Lemma 4.1]{At-Ga} the functions $\delta_q(P)$, $\delta_q^2(P)$, $\delta_q^3(P)$, $P_3$, $\delta_q(P_3)$, and $P_5 \in P \mathcal{W}_3$. Since $\dim \mathcal{W}_3 = 6$ by \cite[Cor.3.6]{At-Ga}, there is a linear relation between these functions and $S_3(q)$. A calculation gives that $$ s_3(n) = \frac{n}{270}(5 - 12n - 147n^2) p(n) + \frac{1}{12}(6n+1) p_3(n) - \frac{7}{540} p_5(n) $$ and $$ 4 s_3(n) \equiv 6n (1+n^2) p(n) + (3+2n) p_3(n) + 7 p_5(n) \pmod{8}. $$ Since $d^3\equiv d^5\pmod{8}$ it follows that $$ \sigma_3(n)\equiv\sigma_5(n) \pmod{8}\quad\mbox{and}\quad p_3(n)\equiv p_5(n) \pmod{8}. $$ Hence $$ 4 s_3(n) \equiv 6n (1+n^2) p(n) + (10+2n) p_3(n)\pmod{8}, $$ and $$ s_3(4n + 1) \equiv p(4n+1) + p_3(4n+1)\pmod{2}. $$ It is well known that $$ \delta_q(P) = \sum_{n=1}^\infty n p(n) q^n = P(q) \sum_{n=1}^\infty \sigma(n) q^n. $$ Since $\sigma(n)\equiv\sigma_3(n)\pmod{2}$ it follows that $$ n p(n) \equiv p_3(n) \pmod{2}, $$ $$ p(4n+1) \equiv p_3(4n+1) \pmod{2}, $$ and $$ s_3(4n+1)\equiv 0 \pmod{2}, $$ which completes the proof of \eqn{spt3mod2}. \end{proof} \begin{theorem} \label{thm:spt4mod3} \begin{equation} \spt_4(3n) \equiv 0 \pmod{3}. 
\mylabel{eq:spt4mod3} \end{equation} \end{theorem} \begin{proof} From \eqn{symNid} and \eqn{symMid} we have \begin{align*} \sum_{n=1}^\infty \spt_4(n) q^n & = \frac{1}{(q)_\infty} \left( \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(n+1)/2 + 4n}} {(1-q^n)^{8}} -\sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(3n+1)/2 + 4n}} {(1-q^n)^{8}} \right)\\ &\equiv \frac{1}{(q)_\infty} \left( \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(n+1)/2 + 4n}(1-q^n)} {(1-q^{9n})} \right. \\ & \qquad\qquad \left. -\sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n-1}q^{n(3n+1)/2 + 4n}(1-q^n)} {(1-q^{9n})} \right) \pmod{3}. \end{align*} Here we have used $(1-q^n)^{9} \equiv 1-q^{9n} \pmod{3}$. Before we can proceed we need some results for the rank and crank mod $9$. We define $$ S_k(b) = S_k(b,t) := \sum_{\substack{n=-\infty\\n\ne0}}^\infty \frac{(-1)^{n}q^{n(kn+1)/2 + bn}} {(1-q^{tn})}, $$ so that \begin{align*} \sum_{n=1}^\infty \spt_4(n) q^n &\equiv \frac{1}{(q)_\infty}\left(-S_1(4,9) + S_1(5,9) + S_3(4,9) - S_3(5,9) \right) \pmod{3}. \end{align*} Now let $M(r,t,n)$ denote the number of partitions of $n$ with crank congruent to $r$ mod $t$ and let $N(r,t,n)$ denote the number of partitions of $n$ with rank congruent to $r$ mod $t$. Then by \cite[(2.13)]{At-Sw} and \cite[(2.5)]{Ek00} we have $$ \sum_{n=0}^\infty N(r,t,n)q^n = \frac{1}{(q)_\infty}\left(S_3(r,t)+S_3(t-r,t) \right) $$ and $$ \sum_{n=0}^\infty M(r,t,n)q^n = \frac{1}{(q)_\infty}\left(S_1(r,t)+S_1(t-r,t) \right). $$ From \cite[(2.3)]{Ek00} and \cite[(6.2)]{At-Sw} $$ S_k(b,t) = - S_k(t-1-b,t), $$ for $k=1,3$. Hence $$ \sum_{n=0}^\infty M(4,9,n)q^n = \frac{1}{(q)_\infty}(S_1(4,9) + S_1(5,9)) = \frac{1}{(q)_\infty} S_1(5,9) $$ and $$ \sum_{n=0}^\infty N(4,9,n)q^n = \frac{1}{(q)_\infty}(S_3(4,9) + S_3(5,9)) = \frac{1}{(q)_\infty} S_3(5,9) $$ since $$ S_1(4,9) = S_3(4,9) = 0. $$ It follows that $$ \spt_4(n) \equiv M(4,9,n) - N(4,9,n) \pmod{3}. $$ Lewis \cite[(1a)]{Le92} has shown that $$ M(4,9,3n) = N(4,9,3n) $$ and our congruence \eqn{spt4mod3} follows. \end{proof} If we try the approach of using quasimodular forms to prove the congruence \eqn{spt4mod3} we are led to a congruence for the Ramanujan tau-function. \begin{cor} \begin{align} \qquad\qquad \tau(n) &\equiv \left( 588+297\,n+258\,{n}^{2} +9\,{n}^{3} +108\,{n}^{4}+486\,{n}^{5}\right) \sigma_{{1}} \left( n \right) \nonumber\\ & + \left( 60+255\,n+189\,{n}^{2}+612\,{n}^{3}+162\,{n}^{4} \right) \sigma_{{3}} \left( n\right) \mylabel{eq:taucong}\\ & + \left(306+297\,n+ 540\,{n}^{2}+180\,{n}^{3} \right) \sigma_{{5}} \left( n \right) + \left( 177+576\,n+454\,{n}^{2}\right) \sigma_{{7}} \left( n \right)\nonumber\\ & + \left(201+ 690\,n\right) \sigma_{{9}} \left( n \right) +117\,\sigma_{{11}} \left( n \right) \pmod{3^6}. \nonumber \end{align} \end{cor} \begin{proof} From \cite[(5.6)-(5.8)]{At-Ga} and the definition of $\spt_4(n)$ we see that \begin{align} \spt_4(n) &= -{\frac {67}{7362432}}M_8(n) +{\frac{1}{2629440}}(491+1176n)M_6(n) \nonumber\\ &-{\frac{1}{1051776}}(1309+8400n+5856{n}^{2})M_4(n)\nonumber\\ &+{\frac{1}{3067680}}(-851+10966n+21204{n}^{2}+12162{n}^{3})M_2(n)\nonumber\\ &+{\frac{1}{140}}(n+4{n}^{2}+3{n}^{3})N_2(n). \mylabel{eq:spt4id} \end{align} We define \begin{align*} s_4(n) &= -{\frac {67}{7362432}}M_8(n) +{\frac{1}{2629440}}(491+1176n)M_6(n) \nonumber\\ &-{\frac{1}{1051776}}(1309+8400n+5856{n}^{2})M_4(n)\nonumber\\ &+{\frac{1}{3067680}}(-851+10966n+21204{n}^{2}+12162{n}^{3})M_2(n)\nonumber \end{align*} so that $$ \spt_4(3n)\equiv s_4(3n) \pmod{3}.
$$ By \cite[Theorem 4.2]{At-Ga}, the function $$ S_4(q) := \sum_{n=1}^\infty s_4(n) q^n \in P \mathcal{W}_4, $$ $$ S^{*}_4(q) := \left(\delta_q^2-1\right)S_4(q) = \sum_{n=1}^\infty (n^2-1) s_4(n) q^n \in P \mathcal{W}_6, $$ and \begin{equation} S^{*}_4(q) \equiv 0\pmod{3}, \mylabel{eq:S4starmod3} \end{equation} by Theorem \thm{spt4mod3}. By \cite[(3.29)]{At-Ga} the functions $\delta_q^j(\Phi_{2k+1})$ ($0 \le j \le 5-k$, $0 \le k \le 5$), and $\Delta \in \mathcal{W}_6$, where $$ \Phi_j=\Phi_j(q) = \sum_{n=1}^\infty \frac{n^j q^n}{1-q^n} = \sum_{m,n\ge1} n^j q^{nm} = \sum_{n=1}^\infty \sigma_j(n) q^n, $$ and $$ \Delta = \Delta(q) = \sum_{n=1}^\infty \tau(n) q^n = q\prod_{n=1}^\infty (1-q^n)^{24}. $$ Since $\dim \mathcal{W}_6 = 22$ by \cite[Cor.3.6]{At-Ga}, there is a linear relation between these functions and $S^{*}_4(q)/P$. In fact, we can write the function $S^{*}_4(q)/P$ as a linear combination of the $22$ functions $\delta_q^j(\Phi_{2k+1})$ ($0 \le j \le 5-k$, $0 \le k \le 5$), and $\Delta$. The coefficients in this linear combination are rational numbers, and we find that we need to multiply each coefficient by $3^5$ to obtain $3$-integral rationals. The congruence \eqn{S4starmod3} then implies a congruence mod $3^6$ between the arithmetic functions $n^j\sigma_{2k+1}(n)$ ($0 \le j \le 5-k$, $0 \le k \le 5$), and $\tau(n)$. Solving this congruence for $\tau(n)$ gives the result \eqn{taucong}. \end{proof} Ashworth \cite{As-thesis} (see also \cite{Ko76}) has also obtained congruences for $\tau(n)$ mod powers of $3$. Ashworth's congruences have a different form and depend on the residue of $n$ mod $3$. \section{Concluding remarks}\label{sec:remarks} It should be pointed out that Bringmann, Mahlburg and Rhoades \cite{Br-Ma-Rh} have proved that there are positive constants $\alpha_{k}$ and $\beta_{k}$ such that \begin{align} M_{2k}(n) \sim N_{2k}(n) &\sim \alpha_{k} n^k \, p(n) \mylabel{eq:asymp1}\\ M_{2k}(n) - N_{2k}(n) &\sim \beta_{k} n^{k-\frac{1}{2}} \, p(n), \mylabel{eq:asymp2} \end{align} as $n\to\infty$ when $k$ is fixed. This implies that \eject \begin{equation} \spt_k(n) \sim \frac{\beta_{k}}{(2k)!} n^{k-\frac{1}{2}} \, p(n), \mylabel{eq:sptkapprox} \end{equation} as $n\to\infty$ when $k$ is fixed. It would be interesting to consider whether the new identity \eqn{mainid} could lead to an elementary upper bound for $\spt_k(n)$. Folsom and Ono \cite{Fo-On} found nontrivial congruences for Andrews' spt-function mod $2$ and $3$. Ono \cite{On10} also found simple explicit congruences for Andrews' spt-function modulo every prime $>3$. These congruences are related to the action of a weight $\tfrac{3}{2}$ Hecke operator. It would be interesting to determine whether such behavior continues for the higher degree spt-functions and higher weight Hecke operators. The function \beq A_k(q) = \sum_{n_k \ge n_{k-1} \ge \cdots \ge n_1 \ge 1} \frac{q^{n_1 + n_2 + \cdots + n_k}} {(1-q^{n_k})^2 (1-q^{n_{k-1}})^2 \cdots (1-q^{n_1})^2} \mylabel{eq:Akdef} \eeq occurs in equation \eqn{Mukidintro} so that \beq \sum_{n=1}^\infty \mu_{2k}(n) q^n = \frac{1}{(q)_\infty} \, A_k(q). \mylabel{eq:MukAkid} \eeq The function $A_k(q)$ was first studied by MacMahon \cite{MacM19} as a generalization of \beq A_1(q) = \sum_{n=1}^\infty \sigma_1(n) q^n = \sum_{n=1}^\infty \frac{q^n}{(1-q^n)^2}. \mylabel{eq:A1} \eeq He conjectured that the coefficients of $A_k(q)$ could be expressed in terms of divisor functions.
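For $k=1$, MacMahon's observation is immediate from \eqn{A1}; for $k\ge2$ it can at least be confirmed numerically through \eqn{MukAkid}. The following throw-away sketch (Python, standard library; the truncation order, the series helpers, and the use of the ordinary crank and rank for $n\ge2$ are our own choices) checks \eqn{A1}, the identity $A_2(q) = (q)_\infty \sum_n \mu_4(n) q^n$, and, via Theorem \thm{sptkthm}, the value $\spt_2(4)=15$ from the table below:
\begin{verbatim}
from math import factorial

N = 14  # truncation order for all q-series (coefficients of q^0..q^N)

def mul(a, b):  # product of two truncated power series
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def block(m):   # q^m/(1-q^m)^2 = sum_{f>=1} f q^{f m}
    s = [0] * (N + 1)
    for f in range(1, N // m + 1):
        s[f * m] = f
    return s

A1 = [0] * (N + 1)
for m in range(1, N + 1):
    A1 = [x + y for x, y in zip(A1, block(m))]
assert all(A1[n] == sum(d for d in range(1, n + 1) if n % d == 0)
           for n in range(1, N + 1))         # A_1 coefficients = sigma_1(n)

A2 = [0] * (N + 1)
for n1 in range(1, N + 1):
    for n2 in range(n1, N + 1):
        A2 = [x + y for x, y in zip(A2, mul(block(n1), block(n2)))]

euler = [1] + [0] * N                        # (q)_infinity, truncated
for k in range(1, N + 1):
    f = [1] + [0] * N
    f[k] = -1
    euler = mul(euler, f)

def partitions(n, m=None):
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def crank(p):
    w = p.count(1)
    return p[0] if w == 0 else sum(1 for x in p if x > w) - w

def binom(a, k):  # generalized binomial coefficient for integer a
    num = 1
    for i in range(k):
        num *= a - i
    return num // factorial(k)

mu4 = [0] * (N + 1)   # mu_4(1) = 0 under either crank convention
eta4 = [0] * (N + 1)
for n in range(2, N + 1):
    ps = list(partitions(n))
    mu4[n] = sum(binom(crank(p) + 1, 4) for p in ps)
    eta4[n] = sum(binom(p[0] - len(p) + 1, 4) for p in ps)

assert mul(euler, mu4) == A2                 # eq. (MukAkid) with k = 2
assert mu4[4] - eta4[4] == 15                # spt_2(4), cf. the table
print("checked A_1, (MukAkid) for k = 2, and spt_2(4) = 15")
\end{verbatim}
Such checks are of course limited to the truncation order.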
This conjecture was recently proved by Andrews and Rose \cite{An-Ro} by showing that in general $A_k(q)$ is a quasimodular form. The result also follows from \eqn{MukAkid}, \eqn{symM2M} and the fact that the generating function for $M_{2k}(n)$ is $P(q)$ times a quasimodular form, which was proved by Atkin and the author \cite[Theorem 4.2]{At-Ga}. Andrews and Rose's proof is more direct. Andrews and Rose were motivated by a certain curve-counting problem on Abelian surfaces. \section{Table}\label{sec:sptktab} For reference we include a table of $\spt_k(n)$ for $1\le k \le 6$, $1\le n\le29$. $$ \begin {array}{rrrrrrr} n{\backslash}k&1&2&3&4&5&6\\ \noalign{\smallskip}1&1&0 &0&0&0&0\\ \noalign{\smallskip}2&3&1&0&0&0&0\\ \noalign{\smallskip}3&5&5&1 &0&0&0\\ \noalign{\smallskip}4&10&15&7&1&0&0\\ \noalign{\smallskip}5&14&35 &28&9&1&0\\ \noalign{\smallskip}6&26&75&85&45&11&1\\ \noalign{\smallskip}7 &35&140&217&166&66&13\\ \noalign{\smallskip}8&57&259&497&505&287&91 \\ \noalign{\smallskip}9&80&435&1036&1341&1013&456\\ \noalign{\smallskip} 10&119&735&2044&3223&3081&1834\\ \noalign{\smallskip}11&161&1155&3787& 7149&8372&6293\\ \noalign{\smallskip}12&238&1841&6797&14916&20824&19125 \\ \noalign{\smallskip}13&315&2765&11648&29480&48192&52781 \\ \noalign{\smallskip}14&440&4200&19558&55902&105117&134643 \\ \noalign{\smallskip}15&589&6125&31703&101892&217945&321622 \\ \noalign{\smallskip}16&801&8975&50645&180245&433017&726650 \\ \noalign{\smallskip}17&1048&12731&78674&309297&828346&1564696 \\ \noalign{\smallskip}18&1407&18179&120932&518859&1534271&3231635 \\ \noalign{\smallskip}19&1820&25235&181664&849563&2759132&6432859 \\ \noalign{\smallskip}20&2399&35180&270600&1366441&4837638&12395504 \\ \noalign{\smallskip}21&3087&48055&395682&2154789&8283014&23195905 \\ \noalign{\smallskip}22&3998&65681&574329&3348972&13894554&42287433 \\ \noalign{\smallskip}23&5092&88299&820834&5119981&22856717&75274166 \\ \noalign{\smallskip}24&6545&118895&1166109&7733835&36968045&131143033 \\ \noalign{\smallskip}25&8263&157690&1634668&11520100&58818578& 223982780\\ \noalign{\smallskip}26&10486&209230&2279242&16985374& 92258215&375713010\\ \noalign{\smallskip}27&13165&274510&3142903& 24746334&142699970&619712403\\ \noalign{\smallskip}28&16562&359779& 4312063&35735413&218041302&1006599177\\ \noalign{\smallskip}29&20630& 466970&5859616&51073008&329162610&1611563058\end {array} $$ \noindent \textbf{Acknowledgements} \noindent Firstly, I would like to thank Richard McIntosh for showing me Alexander Patkowski's paper \cite{Pa10}. It was Patkowski's idea of using a limiting form of Bailey's Lemma to derive spt-like results that first got me started on the way to generalizing Andrews' spt-function. Secondly, I would like to thank Mike Hirschhorn for hosting my stay at UNSW in June 2010, and suggesting to me that I take a look at Stirling numbers of the second kind. This was quite helpful in relating ordinary and symmetrized moments. Finally, I would like to thank George Andrews, Mikl\'os B\'ona, Kathrin Bringmann, Jeremy Lovejoy and Karl Mahlburg for their comments and suggestions. \bibliographystyle{amsplain}
{ "timestamp": "2010-11-02T01:01:09", "yymm": "1008", "arxiv_id": "1008.1207", "language": "en", "url": "https://arxiv.org/abs/1008.1207", "abstract": "Andrews' spt-function can be written as the difference between the second symmetrized crank and rank moment functions. Using the machinery of Bailey pairs a combinatorial interpretation is given for the difference between higher order symmetrized crank and rank moment functions. This implies an inequality between crank and rank moments that was only know previously for sufficiently large n and fixed order. This combinatorial interpretation is in terms of a weighted sum of partitions. A number of congruences for higher order spt-functions are derived.", "subjects": "Number Theory (math.NT); Combinatorics (math.CO)", "title": "Higher Order SPT-Functions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713848528318, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7080104863780081 }
https://arxiv.org/abs/2110.14306
Integrability and solvability of polynomial Liénard differential systems
We provide the necessary and sufficient conditions of Liouvillian integrability for Liénard differential systems describing nonlinear oscillators with a polynomial damping and a polynomial restoring force. We prove that Liénard differential systems are not Darboux integrable excluding subfamilies with certain restrictions on the degrees of the polynomials arising in the systems. We demonstrate that if the degree of a polynomial responsible for the restoring force is greater than the degree of a polynomial producing the damping, then a generic Liénard differential system is not Liouvillian integrable with the exception of linear Liénard systems. However, for any fixed degrees of the polynomials describing the damping and the restoring force we present subfamilies possessing Liouvillian first integrals. As a by-product of our results, we find a number of novel Liouvillian integrable subfamilies. In addition, we study the existence of non-autonomous Darboux first integrals and non-autonomous Jacobi last multipliers with a time-dependent exponential factor.
\section{Introduction}\label{Sec_I}

Performing classifications of integrable or solvable subfamilies for a given multi-parameter system of ordinary differential equations is a very difficult problem. The aim of the present article is to solve the integrability problem for the following systems of first-order ordinary differential equations
\begin{equation}
\label{Lienard_gen}
x_t=y,\quad y_t=-f(x)y-g(x).
\end{equation}
We suppose that $f(x)$ and $g(x)$ are polynomials
\begin{equation}
\label{Lienard_fg}
f(x)=f_0x^m+\ldots+f_m, \quad g(x)=g_0x^{n}+\ldots+g_{n},\quad f_0 g_0\neq0
\end{equation}
with coefficients from the field $\mathbb{C}$. Systems \eqref{Lienard_gen} are named in honor of the French physicist and engineer Alfred-Marie Li\'{e}nard \cite{Lienard01}. These systems describe oscillators with a polynomial damping $f(x)$ and a polynomial restoring force $g(x)$. In addition, Li\'{e}nard differential systems have a variety of other applications in physics, chemistry, biology, economics, etc. For example, systems \eqref{Lienard_gen} arise as traveling wave reductions of the following general families of reaction-convection-diffusion equations
\begin{equation}
\label{PDE_RCD}
u_t=Du_{xx}+A(u)u_x+B(u),\quad u=u(x,t),
\end{equation}
where $D$ is a diffusion coefficient, $A(u)$ describes a nonlinear convective flux and $B(u)$ is responsible for a reaction force. Indeed, substituting the traveling-wave ansatz $u(x,t)=v(z)$ with $z=x-ct$ into \eqref{PDE_RCD} yields $Dv_{zz}+(A(v)+c)v_z+B(v)=0$, which is of the form \eqref{Lienard_gen} after rescaling whenever $A$ and $B$ are polynomials.

If the variable $y$ is privileged with respect to the variable $x$, then the function $y(x)$ satisfies the following family of Abel differential equations of the second kind
\begin{equation}
\label{Lienard_y_x}
yy_x+f(x)y+g(x)=0.
\end{equation}
The integrability properties of Li\'{e}nard differential systems have been investigated by various methods and within various theories. Let us enumerate the most important studies:
\begin{enumerate}
\item local analysis \cite{Cherkas01, Christopher_Lienard, LV_Lienard_int, Gasull_Lienard_int, Llibre04};
\item classical Lie symmetry analysis \cite{Lakshmanan01, Lakshmanan02} and $\lambda$ symmetries \cite{Ruiz01};
\item local \cite{Chiellini01, Abel01, Abel02, Lienard-Riccati, Choudhury_Lienard} and non-local transformations \cite{Berkovich01, DS2019, Guha_Lienard, Sin2020, DS2021, Sin2021};
\item differential Galois theory \cite{Morales-Ruiz01};
\item extended Prelle--Singer method \cite{Lakshmanan03} and the Darboux theory of integrability~\cite{Llibre06, Cheze01, Stachowiak, Demina07, Demina13, Demina17, Demina16, DS2019, DS2021}.
\end{enumerate}
A collection of integrable and solvable subfamilies of Li\'{e}nard differential systems is presented by A. D. Polyanin and V.~F.~Zaitsev in \cite{Polyanin}. The transformation $y(x)=1/w(x)$ brings Abel differential equations of the second kind \eqref{Lienard_y_x} to Abel differential equations of the first kind
\begin{equation}
\label{Abel}
w_x=g(x)w^3+f(x)w^2.
\end{equation}
Consequently, certain results available for such Abel differential equations can be transferred to equations \eqref{Lienard_y_x} and related Li\'{e}nard differential systems \cite{Chiellini01, Abel01, Abel02}. Let us note that many scientific works dealing with the global integrability problem present only sufficient conditions of integrability. Thus, these works do not provide classifications of integrable Li\'{e}nard differential systems and Abel differential equations. Meanwhile, it is an important scientific problem to find all integrable families for fixed degrees of the polynomials $f(x)$ and $g(x)$.
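As a quick symbolic sanity check of this change of variables (a minimal SymPy sketch; the symbol names are our own), one can verify that $y=1/w$ indeed maps \eqref{Lienard_y_x} to \eqref{Abel}:
\begin{verbatim}
# Check that y = 1/w maps y*y_x + f(x)*y + g(x) = 0 to the
# first-kind Abel form w_x = g(x)*w^3 + f(x)*w^2.
import sympy as sp

x = sp.symbols('x')
w, f, g = sp.Function('w'), sp.Function('f'), sp.Function('g')

y = 1 / w(x)
second_kind = y*sp.diff(y, x) + f(x)*y + g(x)
first_kind = sp.diff(w(x), x) - g(x)*w(x)**3 - f(x)*w(x)**2
# Multiplying the second-kind equation by -w^3 reproduces the first-kind one.
print(sp.simplify(-w(x)**3 * second_kind - first_kind))  # prints 0
\end{verbatim}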
This article is devoted to the study of the general integrability properties of polynomial Li\'{e}nard differential systems. We focus on the necessary and sufficient conditions of Darboux and Liouvillian integrability. J. Llibre and C. Valls \cite{Llibre06} proved that Li\'{e}nard differential systems \eqref{Lienard_gen} under the condition $\deg g\leq \deg f$ do not have Liouvillian first integrals excluding the trivial case $g(x)=\alpha f(x)$, $\alpha\in\mathbb{C}$. Consequently, we only need to study systems \eqref{Lienard_gen} satisfying the restriction $\deg g>\deg f$. Throughout this article it is supposed that $f(x)\not\equiv0$. Any Li\'{e}nard differential system is Hamiltonian with a polynomial first integral
\begin{equation}
\label{Lienard_Hamiltonian}
I(x,y)=y^2+2\int_{0}^{x}g(s)ds
\end{equation}
whenever the relation $f(x)\equiv0$ is valid.

We shall prove that Li\'{e}nard differential systems satisfying the restrictions $\deg g> \deg f$ and $\deg g\neq2 \deg f+1$ are not Darboux integrable, while there are Liouvillian integrable subfamilies. In contrast, Li\'{e}nard differential systems in the case $\deg g=2 \deg f+1$ exhibit a variety of rational, Darboux, and Liouvillian first integrals existing under certain restrictions on the parameters. This fact has also been observed by many authors \cite{Lakshmanan01, Lakshmanan02, Ruiz01, Chiellini01, Choudhury_Lienard, Guha_Lienard, Lakshmanan03}. Our main tools include the modern Darboux theory of integrability \cite{Singer, Christopher}, the method of Puiseux series \cite{Demina12, Demina11}, and the local theory of invariants \cite{Demina_Gine_Valls}.

We do not impose any non-trivial restrictions on the coefficients of the polynomials $f(x)$ and $g(x)$ with the exception of Li\'{e}nard differential systems from the family $\deg g=2 \deg f+1$. We mainly study systems that are non-resonant near infinity whenever the restriction $\deg g=2 \deg f+1$ is valid. To be more precise, we say that a system \eqref{Lienard_gen} is resonant near infinity whenever the highest-degree coefficients $f_0$ and $g_0$ satisfy a resonance condition. This condition is explicitly given in Section \ref{S:Lienard_IAC} and arises only in the case $\deg g=2 \deg f+1$. Let us note that the subset of resonant systems is of zero Lebesgue measure in the set of all polynomial systems \eqref{Lienard_gen} with fixed degrees of the polynomials $f(x)$ and $g(x)$. Our results are also valid in the resonant case, but they are not complete. For all other polynomial Li\'{e}nard differential systems we present a complete classification of Liouvillian integrable subfamilies. In addition, we classify polynomial Li\'{e}nard differential systems possessing non-autonomous Darboux first integrals and non-autonomous Jacobi last multipliers with a time-dependent exponential factor.

We demonstrate that the integrability properties of Li\'{e}nard differential systems are substantially different in the following three cases:
\begin{center}
\begin{enumerate}[(A)]
\item $\qquad$ $\deg f<\deg g<2\deg f+1$;
\item $\qquad$ $\deg g=2\deg f+1$;
\item $\qquad$ $\deg g>2\deg f+1$.
\end{enumerate}
\end{center}

This article is organized as follows. Sections \ref{S:Darboux}, \ref{S:Local}, and \ref{S:Lienard_IAC} contain a review of the known results and several preliminary observations on the methods we use in the subsequent part. In Section \ref{S:Darboux}, we describe the Darboux theory of integrability and consider some related questions.
Section \ref{S:Local} is devoted to the method of Puiseux series and to the local theory of invariants. In Section \ref{S:Lienard_IAC}, the results on invariant algebraic curves of Li\'{e}nard differential systems are described. In Section \ref{S:Lienard}, we present some integrability properties valid for a generic polynomial Li\'{e}nard differential system. In Sections \ref{S:Lienard_A}, \ref{S:Lienard_B}, \ref{S:Lienard_C}, we investigate the integrability and solvability of Li\'{e}nard differential systems from families ($A$), ($B$), and ($C$), respectively. In Section \ref{S:Example_L24}, we consider an example: we study the Li\'{e}nard differential systems satisfying the restrictions $\deg f=2$ and $\deg g=4$.

\section{The Darboux theory of integrability}\label{S:Darboux}

The main aim of the present section is to describe some basic aspects of the Darboux theory of integrability. We focus on the problem of finding Darboux and Liouvillian first integrals of polynomial differential systems in the plane
\begin{equation}
\label{DS}
x_t=P(x,y),\quad y_t=Q(x,y),
\end{equation}
where $P(x,y)$ and $Q(x,y)$ are relatively prime elements of the ring $\mathbb{C}[x,y]$. By $\mathbb{C}[x,y]$ we denote the ring of bivariate polynomials with complex-valued coefficients. The vector field related to system \eqref{DS} is defined as
\begin{equation}
\label{VF}
\mathcal{X}=P(x,y)\frac{\partial}{\partial x}+Q(x,y)\frac{\partial}{\partial y}.
\end{equation}

\begin{definition}
A non-constant function $I(x,y)$: $D\subset\mathbb{C}^2\rightarrow\mathbb{C}$ is called a first integral of differential system \eqref{DS} and the related vector field $\mathcal{X}$ on an open subset $D\subset \mathbb{C}^2$ if $I(x(t), y(t)) = C$ with $C$ being a constant for all values of $t$ such that the solution $(x(t), y(t))$ of system \eqref{DS} is defined in $D$.
\end{definition}

If $I(x,y)$ is of class at least $C^1$ in $D$, then $I(x,y)$ is a first integral of differential system~\eqref{DS} if and only if $\mathcal{X} I=0$.

\begin{definition}
A non-constant function $M(x,y)$: $D\subset\mathbb{C}^2\rightarrow\mathbb{C}$ is called an integrating factor of differential system \eqref{DS} and the related vector field $\mathcal{X}$ in an open subset $D\subset \mathbb{C}^2$ if the differential form $M(x,y)(P(x,y)dy-Q(x,y)dx)$ is exact in $D$. In other words, there exists a function $I(x,y)$ of class at least $C^1$ in $D$ such that the relation $M(x,y)(P(x,y)dy-Q(x,y)dx)=dI(x,y)$ is valid.
\end{definition}

If an integrating factor $M(x,y)$ is of class at least $C^1$ in $D$, then it satisfies the linear first-order partial differential equation $\mathcal{X}M=-(\text{div}\,\mathcal{X})\, M$, where $\text{div}\,\mathcal{X}=P_x+Q_y$ is the divergence of the vector field $\mathcal{X}$.

A function $I(x,y)$ is referred to as a Liouvillian function of two variables $x$ and $y$ if it belongs to a Liouvillian extension of the field of rational functions $\mathbb{C}(x,y)$ over $\mathbb{C}$. Generally speaking, any Liouvillian function can be represented as a finite superposition of algebraic functions, antiderivatives, and exponentials \cite{Singer}.
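For example, the polynomial first integral \eqref{Lienard_Hamiltonian} of the Hamiltonian case $f(x)\equiv0$ can be checked against this definition symbolically; a minimal SymPy sketch (the symbol names are our own) reads as follows.
\begin{verbatim}
# Check X I = 0 for X = y d/dx - g(x) d/dy (the case f = 0) and the
# first integral I = y^2 + 2*int_0^x g(s) ds.
import sympy as sp

x, y, s = sp.symbols('x y s')
g = sp.Function('g')

I = y**2 + 2*sp.integrate(g(s), (s, 0, x))
XI = y*sp.diff(I, x) - g(x)*sp.diff(I, y)
print(sp.simplify(XI))  # prints 0
\end{verbatim}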
A function $\Phi(x,y)$ is called \textit{a Darboux function} of two variables $x$ and $y$ if it can be presented in the form
\begin{equation}
\begin{gathered}
\label{Darboux_function}
\Phi(x,y)=F^{d_1}_1(x,y)\ldots F^{d_K}_K(x,y)\exp\{R(x,y)\},
\end{gathered}
\end{equation}
where $F_1(x,y)\in\mathbb{C}[x,y]$, $\ldots$, $F_K(x,y)\in\mathbb{C}[x,y]$, $R(x,y)\in\mathbb{C}(x,y)$, $d_1,\ldots, d_K\in\mathbb{C}$. We see that any Darboux function is a Liouvillian function. The converse is not generally true. A differential system \eqref{DS} is called \textit{Darboux (Liouvillian) integrable} if it possesses a Darboux (Liouvillian) first integral. It is known that the problem of establishing Darboux or Liouvillian integrability of a differential system \eqref{DS} can be reduced to the problem of constructing all irreducible invariant algebraic curves of \eqref{DS} and all exponential invariants of \eqref{DS}; for more details see \cite{Zhang, Singer, Christopher}.

\begin{definition}\label{D:IAC}
The curve $F(x,y)=0$ with $F(x,y)\in \mathbb{C}[x,y]\setminus\mathbb{C}$ is an invariant algebraic curve of a differential system \eqref{DS} whenever the following condition $F_t|_{F=0}=(PF_x+QF_y)|_{F=0}=0$ is valid.
\end{definition}

If the polynomial $F(x,y)$ producing the invariant algebraic curve $F(x,y)=0$ is irreducible in $\mathbb{C}[x,y]$, then the ideal generated by $F(x,y)$ is radical. Consequently, there exists an element $\lambda(x,y)$ of the ring $\mathbb{C}[x,y]$ such that $F(x,y)$ satisfies the partial differential equation $P(x,y)F_x+Q(x,y)F_y=\lambda(x,y) F$. The polynomial $\lambda(x,y)$ is called \textit{the cofactor} of the invariant algebraic curve $F(x,y)=0$. It is straightforward to show that the degree of $\lambda(x,y)$ is at most $L-1$, where $L$ is the maximum of the degrees of the polynomials $P(x,y)$ and $Q(x,y)$. We conclude that an invariant algebraic curve of differential system \eqref{DS} is formed from solutions of the latter. A solution of differential system \eqref{DS} either has empty intersection with the zero set of $F(x,y)$ or is entirely contained in $F(x,y) = 0$. The generating polynomial $F(x,y)$ of an invariant algebraic curve $F(x,y) = 0$ is referred to as \textit{a Darboux polynomial} or \textit{an algebraic invariant}.

\begin{definition}
A function $E(x,y)=\exp[g(x,y)/f(x,y)]$ with the relatively prime polynomials $g(x,y)$, $f(x,y)\in\mathbb{C}[x,y]$ is called an exponential invariant of a differential system~\eqref{DS} whenever the following condition $\mathcal{X}E=\varrho(x,y)E$ is valid, where $\varrho(x,y)\in\mathbb{C}[x,y]$.
\end{definition}

The polynomial $\varrho(x,y)$ is referred to as \textit{the cofactor} of the exponential invariant $E(x,y)$. It is straightforward to show that the product of the exponential invariants $E_1(x,y)$ and $E_2(x,y)$ with the cofactors $\varrho_1(x,y)$ and $\varrho_2(x,y)$, respectively, is an exponential invariant possessing the cofactor $\varrho(x,y)=\varrho_1(x,y)+\varrho_2(x,y)$. It is known that the polynomial $f(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}$ arising in an exponential invariant $E(x,y)=\exp[g(x,y)/f(x,y)]$ produces an invariant algebraic curve $f(x,y)=0$ of the system under consideration \cite{Christopher}.

It turns out that the study of autonomous first integrals and autonomous integrating factors is sometimes restrictive, even if an autonomous differential system is under consideration.
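Before turning to the non-autonomous setting, we give a toy symbolic illustration of Definition \ref{D:IAC} and of the notion of a cofactor (the example is ours and is not taken from the classification below): for the linear Li\'{e}nard system $x_t=y$, $y_t=-3y-2x$, the line $y+x=0$ is an invariant algebraic curve with cofactor $\lambda=-2$.
\begin{verbatim}
# Toy example (ours): for x_t = y, y_t = -3*y - 2*x the line
# F = y + x = 0 is invariant with polynomial cofactor lambda = -2.
import sympy as sp

x, y = sp.symbols('x y')
P, Q = y, -3*y - 2*x
F = y + x
XF = P*sp.diff(F, x) + Q*sp.diff(F, y)
print(sp.cancel(XF / F))  # prints -2
\end{verbatim}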
In this article we do not consider the Darboux theory of integrability for non-autonomous systems in the general case; for more details see \cite{Llibre11, Pantazi01, Demina15}. A non-autonomous first integral $I(x,y,t)$ and a non-autonomous integrating factor $M(x,y,t)$ of a differential system \eqref{DS} are defined similarly to the autonomous case. They satisfy the linear partial differential equations $I_t+\mathcal{X}I=0$ and $M_t+\mathcal{X}M=-(\text{div}\,\mathcal{X})\, M$, respectively, whenever $I(x,y,t)$ and $M(x,y,t)$ are functions of class at least $C^1$ in $D\subset\mathbb{C}^3$. Non-autonomous integrating factors are commonly referred to as Jacobi last multipliers or simply Jacobi multipliers.

The following theorems are the essence of the modern Darboux theory of integrability.

\begin{theorem}\label{T:Darboux_rat}
A polynomial differential system \eqref{DS} is Darboux integrable if and only if it has a rational integrating factor.
\end{theorem}

The fact that a Darboux integrable differential system \eqref{DS} has a rational integrating factor was derived by J. Chavarriga et al., see \cite{Chavarriga_rat}. The converse statement was established by C. Christopher et al., see \cite{Christopher_elemntary_FI}.

\begin{theorem}\label{T:Liouville}
A polynomial differential system \eqref{DS} is Liouvillian integrable if and only if it has a Darboux integrating factor.
\end{theorem}

Theorem \ref{T:Liouville} was proved by M. F. Singer \cite{Singer}.

\begin{theorem}\label{T:L23_Non_aut_FI}
A polynomial differential system \eqref{DS} possesses a first integral of the form
\begin{equation}
\begin{gathered}
\label{FI_t_gen}
I(x,y,t)=\prod_{j=1}^{K}F^{d_j}_j(x,y)\exp\left\{\frac{S(x,y)}{R(x,y)}\right\}\exp{(\omega t)},\quad \omega,\, d_1,\, \ldots,\, d_K\in\mathbb{C},
\end{gathered}
\end{equation}
where $F_1(x,y)$, $\ldots$, $F_K(x,y)$ are pairwise relatively prime irreducible bivariate polynomials from the ring $\mathbb{C}[x,y]$, $S(x,y)$ and $R(x,y)$ are relatively prime bivariate polynomials from the ring $\mathbb{C}[x,y]$, if and only if $F_1(x,y)=0$, $\ldots$, $F_K(x,y)=0$, $R(x,y)=0$ are invariant algebraic curves of the system and $E(x,y)=\exp\{S(x,y)/R(x,y)\}$ is an exponential invariant of the system such that the following condition
\begin{equation}
\begin{gathered}
\label{NDFI_gen_cond}
\sum_{j=1}^{K}d_j\lambda_j(x,y)+\varrho(x,y)+\omega =0
\end{gathered}
\end{equation}
holds identically. In this expression $\lambda_j(x,y)$ is the cofactor of the invariant algebraic curve $F_j(x,y)=0$ and $\varrho(x,y)$ is the cofactor of the exponential invariant $E(x,y)$.
\end{theorem}

Theorem \ref{T:L23_Non_aut_FI} follows from the classical theory of Darboux integrability, see~\cite{Zhang}. We name a first integral of Theorem \ref{T:L23_Non_aut_FI} as a non-autonomous Darboux first integral provided that $\omega\neq 0$. If $\omega=0$, then function \eqref{FI_t_gen} gives a Darboux first integral.
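As an illustration of Theorem \ref{T:L23_Non_aut_FI} with $K=1$ and a trivial exponential invariant, one can verify symbolically the non-autonomous first integral that appears for family ($A$) in Lemma \ref{L:Lienard_Integrability_time1} below; in the following minimal SymPy sketch (the symbol names are our own) $F$ denotes an antiderivative of $f$.
\begin{verbatim}
# Sketch: with F' = f and g = omega*(F(x) - omega*x - z0), the function
# I = (y + F(x) - omega*x - z0)*exp(omega*t) satisfies I_t + X I = 0.
import sympy as sp

x, y, t, omega, z0 = sp.symbols('x y t omega z0')
f, F = sp.Function('f'), sp.Function('F')  # F is an antiderivative of f

g = omega*(F(x) - omega*x - z0)
I = (y + F(x) - omega*x - z0)*sp.exp(omega*t)
total = sp.diff(I, t) + y*sp.diff(I, x) - (f(x)*y + g)*sp.diff(I, y)
total = total.subs(sp.Derivative(F(x), x), f(x))  # impose F' = f
print(sp.simplify(total))  # prints 0
\end{verbatim}
Here the cofactor of the invariant line $y+F(x)-\omega x-z_0=0$ is $\lambda_1=-\omega$, so condition \eqref{NDFI_gen_cond} holds with $d_1=1$ and $\varrho=0$.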
\begin{theorem}\label{T:L23_Non_aut_JLM}
Under the assumptions of Theorem \ref{T:L23_Non_aut_FI} a polynomial differential system \eqref{DS} possesses a Jacobi last multiplier of the form
\begin{equation}
\begin{gathered}
\label{JLM_gen}
M(x,y,t)=\prod_{j=1}^{K}F^{d_j}_j(x,y)\exp\left\{\frac{S(x,y)}{R(x,y)}\right\}\exp{(\omega t)},\quad \omega,\, d_1,\, \ldots,\, d_K\in\mathbb{C},
\end{gathered}
\end{equation}
if and only if $F_1(x,y)=0$, $\ldots$, $F_K(x,y)=0$, $R(x,y)=0$ are invariant algebraic curves of the system and $E(x,y)=\exp\{S(x,y)/R(x,y)\}$ is an exponential invariant of the system such that the following condition
\begin{equation}
\begin{gathered}
\label{JLM_gen_cond}
\sum_{j=1}^{K}d_j\lambda_j(x,y)+\varrho(x,y)+\omega =-\text{div}\,\mathcal{X}
\end{gathered}
\end{equation}
is identically valid. In this expression $\lambda_j(x,y)$ is the cofactor of the invariant algebraic curve $F_j(x,y)=0$ and $\varrho(x,y)$ is the cofactor of the exponential invariant $E(x,y)$.
\end{theorem}

Theorem \ref{T:L23_Non_aut_JLM} with the restriction $\omega=0$ was derived by C.~Christopher \cite{Christopher}. The case $\omega\neq0$ was considered in article \cite{Demina15}. A Jacobi last multiplier of Theorem \ref{T:L23_Non_aut_JLM} will be referred to as a non-autonomous Darboux--Jacobi last multiplier whenever $\omega\neq0$.

These theorems suggest the following algorithm for searching for autonomous and non-autonomous Darboux first integrals and Jacobi last multipliers:
\begin{enumerate}
\item find all relatively prime irreducible invariant algebraic curves and all exponential invariants with linearly independent cofactors;
\item find, or prove the non-existence of, complex numbers $d_1$, $\ldots$, $d_K$, $\omega$ such that condition \eqref{NDFI_gen_cond} or \eqref{JLM_gen_cond} is identically satisfied; the polynomial $\varrho(x,y)$ arising in conditions \eqref{NDFI_gen_cond} and \eqref{JLM_gen_cond} equals the sum of the cofactors of exponential invariants found at the first step.
\end{enumerate}
Let us note that there exist certain estimates of the number of pairwise distinct invariants which guarantee the existence of rational, Darboux, or Liouvillian first integrals in the autonomous case. For more details see books \cite{Zhang, Ilyashenko} and the references therein. The first step of this algorithm is extremely difficult. This is due to the absence of \textit{a priori} upper bounds on the degrees of bivariate polynomials giving irreducible invariant algebraic curves. It is shown in articles \cite{Demina06, Demina11, Demina12} that the method of Puiseux series, which is described in the next section, can facilitate the first step.

It is straightforward to see that integrating factors and Jacobi last multipliers are defined modulo multiplication by a non-zero constant. Two integrating factors or Jacobi last multipliers producing a constant ratio are supposed to be equivalent. We do not distinguish them. The uniqueness of integrating factors and Jacobi last multipliers is understood exactly in this sense.

In general, the existence of only one independent non-autonomous first integral is not sufficient for the complete integrability of the system under study in the framework of the Darboux theory. However, the knowledge of a non-autonomous Darboux first integral can be used to derive the general solutions. Several examples are given in article \cite{Demina16}.
Moreover, we demonstrate that some Li\'{e}nard differential systems from family ($B$) simultaneously have independent autonomous and non-autonomous Darboux first integrals. Eliminating the variable $y=x_t$ from these first integrals, one can find explicit expressions of the general solutions; see also article \cite{Ruiz01}, where some similar examples are presented.

To conclude this section, let us note that trajectories lying in a zero set of inverse Jacobi last multipliers are of importance in the qualitative theory of differential systems~\cite{Zhang}. Consequently, the classification of Li\'{e}nard differential systems possessing Darboux--Jacobi last multipliers seems to be a significant problem.

\section{The method of Puiseux series and the local theory of invariants}\label{S:Local}

We start with a brief review of the theory of fractional-power (or Puiseux) series. A Puiseux series around a point $x_0\in\mathbb{C}$ can be presented as
\begin{equation}
\begin{gathered}
\label{Puiseux_null}
y(x)=\sum_{l=0}^{+\infty}c_l(x-x_0)^{\frac{l_0+l}{n_0}},\quad c_0\neq0,
\end{gathered}
\end{equation}
where $l_0\in\mathbb{Z}$ and $n_0\in\mathbb{N}$. Without loss of generality we suppose that the number $n_0$ is relatively prime to the greatest common divisor of the numbers $\{l_0+l$ : $c_l\neq0$, $l\in\mathbb{N}\cup\{0\}\}$. A Puiseux series around the point $x=\infty$ takes the form
\begin{equation}
\begin{gathered}
\label{Puiseux_inf}
y(x)=\sum_{l=0}^{+\infty}b_lx^{\frac{l_0-l}{n_0}},\quad b_0\neq0,
\end{gathered}
\end{equation}
where $l_0\in\mathbb{Z}$ and $n_0\in\mathbb{N}$. Again we assume that the number $n_0$ is relatively prime to the greatest common divisor of the numbers $\{l_0-l$ : $b_l\neq0$, $l\in\mathbb{N}\cup\{0\}\}$.

Let us consider the algebraic equation $F(x,y)=0$, where $F(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}[x]$. Giving preference to one of the variables with respect to another, a solution $y$ of this equation viewed as a function of $x$ can be locally expanded into a convergent Puiseux series. This statement is known as the Newton--Puiseux theorem. For instance, the equation $y^2=x^3$ has the two Puiseux series solutions $y=\pm x^{3/2}$ near the point $x=\infty$, which are of the form \eqref{Puiseux_inf} with $n_0=2$ and $l_0=3$.

The set of all Puiseux series given by \eqref{Puiseux_null} or \eqref{Puiseux_inf} forms an algebraically closed field, which we denote by $\mathbb{C}_{x_0}\{x\}$ or $\mathbb{C}_{\infty}\{x\}$, respectively. The ring of polynomials in one variable $y$ with coefficients from the fields $\mathbb{C}_{x_0}\{x\}$ or $\mathbb{C}_{\infty}\{x\}$ is denoted as $\mathbb{C}_{x_0}\{x\}[y]$ or $\mathbb{C}_{\infty}\{x\}[y]$, respectively.

Suppose $S(x,y)$ is an element of the ring $\mathbb{C}_{\infty}\{x\}[y]$. Let us introduce two operators of projection acting on this ring. The first operator $\{S(x,y)\}_+$ gives the sum of the monomials of $S(x,y)$ with non-negative integer powers. In other words, $\{S(x,y)\}_+$ yields the polynomial part of $S(x,y)$. Analogously, the projection $\{S(x,y)\}_{-}=S(x,y)-\{S(x,y)\}_+$ produces the non-polynomial part of $S(x,y)$. It is straightforward to show that these projections are linear operators. In addition, we see that the action of the projection operators can be extended to the ring of the Puiseux series in $y$ near the point $y=\infty$ with coefficients from the field $\mathbb{C}_{\infty}\{x\}$. We denote this ring as $\mathbb{C}_{\infty}\{x\}[\{y\}]$. Thus, we get the relation $\{S(x,y)\}_+\in\mathbb{C}[x,y]$, where $S(x,y)\in\mathbb{C}_{\infty}\{x\}[\{y\}]$. We endow the fields $\mathbb{C}_{x_0}\{x\}$ and $\mathbb{C}_{\infty}\{x\}$ with the differential operator $\partial_x$.
Analogously, we endow the rings $\mathbb{C}_{x_0}\{x\}[y]$ and $\mathbb{C}_{\infty}\{x\}[y]$ with the differential operators $\partial_x$ and $\partial_y$. The action of these differential operators is standard. The following basic statements are proved in articles \cite{Demina07, Demina12, Demina11}.

\begin{lemma}\label{L_Puiseux series}
Let $y(x)$ be a Puiseux series from one of the fields $\mathbb{C}_{x_0}\{x\}$ or $\mathbb{C}_{\infty}\{x\}$. Suppose that the series $y(x)$ satisfies the equation $F(x,y(x))=0$, where $F(x,y)\in \mathbb{C}[x,y]\setminus \mathbb{C}[x]$ gives an invariant algebraic curve $F(x,y)=0$ of a system \eqref{DS}. Then the series $y(x)$ solves the equation
\begin{equation}
\label{ODE_y}
P(x,y)y_x-Q(x,y)=0.
\end{equation}
\end{lemma}

Using the algebraic closedness of the field of Puiseux series $\mathbb{C}_{\infty}\{x\}$, it is straightforward to find the general structure of irreducible invariant algebraic curves and their cofactors.

\begin{theorem}\label{T:Darboux_pols}
Let $F(x,y)=0$, where $F(x,y)\in \mathbb{C}[x,y]\setminus\mathbb{C}[x]$, be an irreducible invariant algebraic curve of a differential system~\eqref{DS}. Then the polynomial $F(x,y)$ and the cofactor $\lambda(x,y)\in\mathbb{C}[x,y]$ of the curve $F(x,y)=0$ take the form
\begin{equation}
\label{General_Fl}
F(x,y)=\left\{\mu(x)\prod_{j=1}^{N}\left\{y-y_{j,\infty}(x)\right\}\right\}_{+},
\end{equation}
\begin{equation}
\label{General_Fl_cof}
\begin{gathered}
\lambda(x,y)=\left\{\sum_{m=0}^{+\infty}\sum_{j=1}^{N}\frac{\{Q(x,y)-P(x,y)(y_{j,\infty})_{x}\}(y_{j,\infty})^m}{y^{m+1}} +P(x,y)\sum_{m=0}^{+\infty}\sum_{l=1}^{L}\frac{\nu_lx_l^m}{x^{m+1}}\right\}_{+},
\end{gathered}
\end{equation}
where $y_{1,\infty}(x)$, $\ldots$, $y_{N,\infty}(x)$ are pairwise distinct Puiseux series in a neighborhood of the point $x=\infty$ that satisfy equation \eqref{ODE_y}, $x_1$, $\ldots$, $x_L$ are pairwise distinct zeros of the polynomial $\mu(x)\in\mathbb{C}[x]$ with multiplicities $\nu_1$, $\ldots$, $\nu_L\in\mathbb{N}$ and $L\in\mathbb{N}\cup\{0\}$. Moreover, the degree of $F(x,y)$ with respect to $y$ does not exceed the number of distinct Puiseux series of the form \eqref{Puiseux_inf} satisfying equation \eqref{ODE_y} whenever this number is finite. If $\mu(x)=\mu_0$, where $\mu_0\in\mathbb{C}$, then we suppose that $L=0$ and the sum over the zeros of $\mu(x)$ is absent in the expression of the cofactor~$\lambda(x,y)$.
\end{theorem}

This theorem introduces an algebraic tool enabling one to find invariant algebraic curves explicitly; for more details see article \cite{Demina18}. The method of Puiseux series can also be used whenever one wishes to find exponential invariants with non-polynomial arguments. The following theorem, giving necessary conditions for an exponential invariant related to an invariant algebraic curve to exist, is proved in article~\cite{Demina07}.

\begin{theorem}\label{T:Exp_fact}
Let the polynomial $f(x,y)\in\mathbb{C}[x,y]\setminus \mathbb{C}[x]$ give an invariant algebraic curve $f(x,y)=0$ of a differential system~\eqref{DS}. We denote the cofactor of the invariant algebraic curve $f(x,y)=0$ by $\lambda(x,y)\in\mathbb{C}[x,y]$.
Suppose that this system admits an exponential invariant $E = \exp(g/f)$ related to the algebraic curve $f(x,y)=0$. Then for each non-constant Puiseux series $y_{j,\infty}(x)$ in a neighborhood of the point $x=\infty$ that satisfies the equation $f(x,y)=0$ there exists a number $q\in\mathbb{Q}$ such that the Puiseux series for the function $\lambda(x,y_{j,\infty}(x))/P(x,y_{j,\infty}(x))$ in a neighborhood of the point $x=\infty$ is
\begin{equation}
\begin{gathered}
\label{Exp_fact_cond}
\frac{\lambda(x,y_{j,\infty}(x))}{P(x,y_{j,\infty}(x))}=\sum_{k=n}^{+\infty}b_kx^{-\frac{k}{n}},\quad b_n=q.
\end{gathered}
\end{equation}
\end{theorem}

Now our aim is to introduce a notion of local invariant curves that are given by elements from one of the rings $\mathbb{C}_{x_0}\{x\}[y]$ and $\mathbb{C}_{\infty}\{x\}[y]$. Let $x_0$ be a point of the extended complex plane $\overline{\mathbb{C}}=\mathbb{C}\cup\{\infty\}$. Elements of the ring $\mathbb{C}_{x_0}\{x\}[y]$ are referred to as Puiseux polynomials. Note that the polynomials $P(x,y)$ and $Q(x,y)$ can be regarded as Puiseux polynomials for any~$x_0\in \overline{\mathbb{C}}$. Again giving preference to the variable $y$, we shall deal with formal Puiseux series solutions $y=y_{x_0}(x)$ of the algebraic first-order ordinary differential equation~\eqref{ODE_y}. We can perform the same analysis choosing the variable $x$ as the dependent one.

\begin{definition}
The curve $F(x,y)=0$, where $F(x,y)\in \mathbb{C}_{x_0}\{x\}[y]$, is called a local invariant curve of a differential system \eqref{DS} whenever the following condition $F_t|_{F=0}=(PF_x+QF_y)|_{F=0}=0$ is valid.
\end{definition}

If the element $F(x,y)\in\mathbb{C}_{x_0}\{x\}[y]$ gives a local invariant curve $F(x,y)=0$ of differential system \eqref{DS}, then there exists the Puiseux polynomial $\lambda(x,y)\in\mathbb{C}_{x_0}\{x\}[y]$ such that the equation $\mathcal{X}F(x,y)=\lambda(x,y) F(x,y)$ is satisfied \cite{Demina_Gine_Valls}. By analogy with the algebraic case, the Puiseux polynomial $F(x,y)$ will be called a local invariant and $\lambda(x,y)$ the cofactor of the related local invariant curve. The theory of local invariants is introduced in article \cite{Demina_Gine_Valls}, where all the statements given below are proved.

\smallskip

In what follows we call \textit{elementary} the local invariants and associated local invariant curves given by the polynomials $F(x,y)=h_{x_0}(x)$ and $F(x,y)=y-y_{x_0}(x)$, where $h_{x_0}(x)$, $y_{x_0}(x)\in\mathbb{C}_{x_0}\{x\}$. Any local invariant $F(x,y)$ can be represented as a product of elementary local invariants, and the cofactor of the local invariant $F(x,y)$ is the sum of the cofactors of all the factors.

\begin{theorem}\label{T:inv_curve_prim}
Let $y_{x_0}(x)$ be a Puiseux series near the point $x=x_0$. The element $F(x,y)=y-y_{x_0}(x)$ of the ring $\mathbb{C}_{x_0}\{x\}[y]$ gives a local invariant curve of differential system \eqref{DS} if and only if the series $y=y_{x_0}(x)$ satisfies equation~\eqref{ODE_y}.
\end{theorem}

It is straightforward to obtain an explicit expression of the cofactor similar to that presented in Theorem \ref{T:Darboux_pols}. We introduce the operator of projection $\left\{S(x,y)\right\}_{+,\,y}$ yielding the polynomial part with respect to $y$ of the element $S(x,y)\in\mathbb{C}_{x_0,\infty}\{x\}[\{y\}]$. The symbol $\mathbb{C}_{x_0,\infty}\{x\}[\{y\}]$ denotes the ring of Puiseux series in $y$ near the point $y=\infty$ with coefficients from the field $\mathbb{C}_{x_0}\{x\}$.
In other words, we obtain the relation $\left\{S(x,y)\right\}_{+,\,y}\in\mathbb{C}_{x_0}\{x\}[y]$.

\begin{theorem}\label{T:coff_local2}
Let $F(x,y)=y-Y_{x_0}(x)$, $Y_{x_0}(x)\in\mathbb{C}_{x_0}\{x\}$ give an elementary local invariant curve $F(x,y)=0$ of differential system \eqref{DS}. Then its cofactor takes the form
\begin{equation}
\label{Gen_coff1-2}
\lambda(x,y)=\left\{\sum_{m=0}^{+\infty}\frac{\{Q(x,y)-P(x,y)(Y_{x_0}(x))_x\}Y_{x_0}^m(x)}{y^{m+1}}\right\}_{+,\,y}.
\end{equation}
\end{theorem}

Similarly to the case of local invariant curves, we can introduce a concept of local exponential invariants.

\begin{definition}
A function $E(x,y)=\exp[g(x,y)/f(x,y)]$ with relatively prime elements $g(x,y)$ and $f(x,y)$ of the ring $\mathbb{C}_{x_0}\{x\}[y]$ is called \textit{a local exponential invariant} of a differential system \eqref{DS} if $E(x,y)$ satisfies the partial differential equation $\mathcal{X}E(x,y)=\varrho(x,y)E(x,y)$, where $\varrho(x,y)$ is a Puiseux polynomial.
\end{definition}

Again the Puiseux polynomial $\varrho(x,y)$ will be referred to as the cofactor of the local exponential invariant $E(x,y)$. Note that local exponential invariants belong to a differential extension of the field of fractions of the ring $\mathbb{C}_{x_0}\{x\}[y]$.

\begin{lemma}\label{L:exp_factor_inv_curve}
Let $E(x,y)=\exp[g(x,y)/f(x,y)]$, $f(x,y)\in\mathbb{C}_{x_0}\{x\}[y] \setminus \mathbb{C}_{x_0}\{x\}$ be a local exponential invariant of a differential system \eqref{DS}. Then $f(x,y)=0$ is a local invariant curve of the system in question.
\end{lemma}

We shall say that the local exponential invariants
\begin{equation}
\label{Gen_coff11}
\begin{gathered}
E(x,y)=\exp\left[g_l(x)y^l\right],\quad g_l(x)\in \mathbb{C}_{x_0}\{x\},\quad l\in\mathbb{N}\cup\{0\};\hfill \\
E(x,y)=\exp\left[ \frac{u(x)}{ \{y-y_{x_0}(x)\}^n}\right],\quad y_{x_0}(x), \,\,u(x)\in \mathbb{C}_{x_0}\{x\},\quad n\in\mathbb{N}
\end{gathered}
\end{equation}
are \textit{elementary local exponential invariants}. Since $E^{\alpha}(x,y)$ with $\alpha\in\mathbb{C}$ is a local exponential invariant whenever $E(x,y)$ is, the highest-order coefficients of the series $g_l(x)$ and $u(x)$ can be fixed in advance. Any local exponential invariant $E(x,y)$ is a product of elementary local exponential invariants, and its cofactor is the sum of the cofactors of all the factors.

We conclude that if system \eqref{DS} has a global invariant (algebraic or exponential), then for any $x_0\in\mathbb{\overline{C}}$ this invariant must be a product of local elementary invariants in such a way that the sum of their cofactors is a true, not Puiseux, polynomial. This observation is used in subsequent sections.

\section{Invariant algebraic curves of polynomial Li\'{e}nard differential systems}\label{S:Lienard_IAC}

With the aim of studying the integrability properties of polynomial Li\'{e}nard differential systems we need information on the existence of their invariant algebraic curves. The general structure of bivariate polynomials giving irreducible invariant algebraic curves is derived in articles \cite{Demina12, Demina18}.

\begin{theorem}\label{T1:Lienard_gen}
Let $F(x,y)=0$, $F(x,y)\in \mathbb{C}[x,y]\setminus\mathbb{C}$ be an irreducible invariant algebraic curve of a Li\'{e}nard differential system \eqref{Lienard_gen} from family ($A$).
Then the polynomial $F(x,y)$ and the cofactor $\lambda(x,y)$ of the invariant algebraic curve $F(x,y)=0$ take the form
\begin{equation}
\begin{gathered}
\label{Lienard1_F}
F(x,y)=\left\{\prod_{j=1}^{N-k}\left\{y-y^{(1)}_{j,\infty}(x)\right\}\left\{y-y^{(2)}_{N,\infty}(x)\right\}^{k}\right\}_{+},
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\label{Lienard1_cof}
\lambda(x,y)=-Nf-(N-k)q_x-kp_x,
\end{gathered}
\end{equation}
where $k=0$ or $k=1$, $N\in\mathbb{N}$, and $y^{(1)}_{1,\infty}(x)$, $\ldots$, $y^{(1)}_{N-k,\infty}(x)$, $y^{(2)}_{N,\infty}(x)$ are the series
\begin{equation}
\begin{gathered}
\label{Lienard1_F_series}
(I):\,y^{(1)}_{j,\infty}(x)=q(x)+\sum_{l=0}^{+\infty}b_{m+1+l}^{(j)}x^{-l},\quad j=1,\ldots, N-k;\\
(II):\,y^{(2)}_{N,\infty}(x)=p(x)+\sum_{l=0}^{+\infty}b_{n-m+l}^{(N)}x^{-l}.\hfill
\end{gathered}
\end{equation}
The coefficients of the series of type (II) and of the polynomials
\begin{equation}
\begin{gathered}
\label{Lienard1_F_series_q}
q(x)=-\frac{f_0}{m+1}x^{m+1}+\sum_{l=1}^{m} q_{m+1-l} x^l\in\mathbb{C}[x],\\
p(x)=-\frac{g_0}{f_0}x^{n-m}+\sum_{l=1}^{n-m-1} p_{n-m-l} x^l\in\mathbb{C}[x]
\end{gathered}
\end{equation}
are uniquely determined. The coefficients $b_{m+1}^{(j)}$, $j=1,\ldots, N-k$ are pairwise distinct. All other coefficients $b_{m+1+l}^{(j)}$, $l\in\mathbb{N}$ are expressible via $b_{m+1}^{(j)}$, where $j=1,\ldots, N-k$. The corresponding product in \eqref{Lienard1_F} is understood to be equal to $1$ whenever $k=1$ and $N=1$.
\end{theorem}

\begin{theorem}\label{T:L_two_curves}
A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($A$) with fixed coefficients of the polynomials $f(x)$ and $g(x)$ has at most two distinct irreducible invariant algebraic curves simultaneously. If two distinct irreducible invariant algebraic curves exist, then the first has $k=1$ in representation \eqref{Lienard1_F} and the second is given by a first-degree polynomial with respect to $y$ and takes the form $y-q(x)-z_0=0$.
\end{theorem}

\textit{Corollary.} Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($A$) are not integrable with a rational first integral.

Theorems \ref{T1:Lienard_gen} and \ref{T:L_two_curves} are proved in article \cite{Demina12}.

The structure of polynomials producing invariant algebraic curves for Li\'{e}nard differential systems from family ($B$) is closely related to the properties of the following quadratic equation
\begin{equation}\label{eq:DP2_5_2}
p^2-\varrho p+(m+1)\varrho=0,
\end{equation}
where we have introduced the notation
\begin{equation}\label{eq:DP2_5_3}
\varrho=4(m+1)-\frac{f_0^2}{g_0}.
\end{equation}
The set of all positive rational numbers will be denoted by $\mathbb{Q}^+$. Let $p_1$ and $p_2$ be the roots of equation \eqref{eq:DP2_5_2}.

\begin{theorem}\label{T:Lienard_degenerate}
Suppose $F(x,y)=0$, $F(x,y)\in \mathbb{C}[x,y]\setminus\mathbb{C}$ is an irreducible invariant algebraic curve of a Li\'{e}nard differential system \eqref{Lienard_gen} from family ($B$) and equation \eqref{eq:DP2_5_2} has no solutions in $\mathbb{Q}^+$. One of the following statements holds.
\begin{enumerate}
\item If $p_1p_2\neq0$, then the polynomial $F(x,y)$ is of degree at most two with respect to $y$ and
\begin{equation}
\begin{gathered}
\label{F_L2_5_1}
F(x,y)=\left\{\left\{y-y^{(1)}_{\infty}(x)\right\}^{s_1}\left\{y-y^{(2)}_{\infty}(x)\right\}^{s_2}\right\}_{+},\hfill\\
\lambda(x,y)=-(s_1+s_2)f(x)-\left\{s_1\left(y^{(1)}_{\infty}\right)_x+s_2\left(y^{(2)}_{\infty}\right)_x\right\}_{+},\hfill\\
y^{(k)}_{\infty}(x)=\sum_{l=0}^{\infty}b_l^{(k)}x^{m+1-l},\quad b_0^{(k)}=\frac{f_0}{p_k-2(m+1)},\quad k=1, 2,
\end{gathered}
\end{equation}
where $s_1$ and $s_2$ are either $0$ or $1$ independently, $s_1+s_2>0$. The Puiseux series $y^{(k)}_{\infty}(x)$, $k=1$, $2$ are Laurent series and possess uniquely determined coefficients.
\item If $p_1p_2=0$, then $p_1=p_2=0$. The polynomial $F(x,y)$ and its cofactor $\lambda(x,y)$ take the form
\begin{equation}
\begin{gathered}
\label{F_L2_5_4}
F(x,y)=y+\frac{f_0}{2(m+1)}x^{m+1}-\sum_{l=1}^{m+1}b_{l}x^{m+1-l},\hfill\\
\lambda(x,y)=-f(x)+\frac{f_0}{2}x^{m}-\sum_{l=1}^{m}(m+1-l)b_{l}x^{m-l},\hfill\\
\end{gathered}
\end{equation}
where the coefficients $b_1$, $\ldots $, $b_{m+1}$ are uniquely determined. In addition, the following relation $4(m+1)g_0-f_0^2=0$ is valid.
\end{enumerate}
\end{theorem}

This theorem is proved in \cite{Demina18}. Note that the Li\'{e}nard differential systems such that the related equation \eqref{eq:DP2_5_2} has a positive rational solution are also studied in~\cite{Demina18}. We do not reproduce the related results here.

The structure of polynomials giving invariant algebraic curves and their cofactors for Li\'{e}nard differential systems satisfying the condition $\deg g>2\deg f+1$ is derived in~\cite{Demina18}. The following theorem is valid.

\begin{theorem}\label{T1:Lienard_2m+1}
Let $F(x,y)=0$, $F(x,y)\in \mathbb{C}[x,y]\setminus\mathbb{C}$ be an irreducible invariant algebraic curve of a Li\'{e}nard differential system~\eqref{Lienard_gen} from family ($C$). Then the polynomial $F(x,y)$ and the cofactor $\lambda(x,y)$ of the invariant algebraic curve $F(x,y)=0$ take the form
\begin{equation}
\begin{gathered}
\label{Lienard2_F}
F(x,y)=\left\{\prod_{j=1}^{N_1}\left\{y-y^{(1)}_{j,\infty}(x)\right\}\prod_{j=1}^{N_2}\left\{y-y^{(2)}_{j,\infty}(x)\right\}\right\}_{+},
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\label{Lienard2_cof}
\lambda(x,y)=-(N_1+N_2)f-\left\{N_1h^{(1)}_x+N_2h^{(2)}_x\right\}_+,
\end{gathered}
\end{equation}
where the Puiseux series $y^{(1,2)}_{j,\infty}(x)$ are given by the relations
\begin{equation}
\begin{gathered}
\label{Lienard2_F_series}
y^{(1,2)}_{j,\infty}(x)=h^{(1,2)}(x)+\sum_{k=2(n+1)}^{+\infty}b^{(1,2)}_{k,\,j}x^{\frac{n+1}{2}-\frac{k}{2}},\quad h^{(1,2)}(x)=\sum_{k=0}^{2n+1}b^{(1,2)}_{k}x^{\frac{n+1}{2}-\frac{k}{2}},
\end{gathered}
\end{equation}
$N_1$, $N_2\in\mathbb{N}\cup\{0\}$, and $N_1+N_2\geq1$. The coefficients $b_{2(n+1),\,j}^{(1,2)}$ with the same upper index are pairwise distinct and all the coefficients $b_{l,\,j}^{(1,2)}$ with $l>2(n+1)$ are expressible via $b_{2(n+1),\,j}^{(1,2)}$. If $n$ is an odd number, then the corresponding Puiseux series are Laurent series and $b_{2l-1}^{(1,2)}=0$, $b_{2l-1,\,j}^{(1,2)}=0$ for $l\in\mathbb{N}$. In addition, if $n$ is odd, then one of the numbers $N_1$, $N_2$ equals $1$ and the other equals $0$. If $n$ is an even number, then~$N_1=N_2$.
\end{theorem}

All the Puiseux series arising in Theorems \ref{T1:Lienard_gen}, \ref{T:Lienard_degenerate}, \ref{T1:Lienard_2m+1} solve the equation \eqref{Lienard_y_x} related to the Li\'{e}nard differential system under consideration. The following theorem provides the necessary and sufficient conditions for a Li\'{e}nard differential system \eqref{Lienard_gen} to have an invariant algebraic curve.

\begin{theorem}\label{T:Darboux_pols_computation_Lienard}
The polynomial $F(x,y)\in \mathbb{C}[x,y]\setminus\mathbb{C}$ of degree $N>0$ with respect to $y$ gives an invariant algebraic curve $F(x,y)=0$ of a Li\'{e}nard differential system \eqref{Lienard_gen} if and only if there exist $N$ Puiseux series $y_{1,\infty}(x)$, $\ldots$, $y_{N,\infty}(x)$ from the field $\mathbb{C}_{\infty}\{x\}$ that solve equation~\eqref{Lienard_y_x} and satisfy the condition
\begin{equation}
\begin{gathered}
\label{Existance_F_conditions_Lienard}
\left\{\sum_{j=1}^{N}y_{j,\infty}(x)\right\}_{-}=0.
\end{gathered}
\end{equation}
\end{theorem}

The theorems presented in this section are essentially based on the method of Puiseux series.

\section{Integrability of a generic Li\'{e}nard differential system}\label{S:Lienard}

We begin this section by investigating the existence of exponential invariants with polynomial arguments.

\begin{lemma}\label{L:Lienard_exp_inv1}
If a Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the condition $\deg g>\deg f$ has an exponential invariant of the form $E(x,y)=\exp[h(x,y)]$, $h(x,y)\in\mathbb{C}[x,y]$ with the cofactor $\varrho(x,y)\in\mathbb{C}[x,y]$ of degree at most $\deg g-1$, then the polynomial $\varrho(x,y)$ is divisible by $y$ in the ring $\mathbb{C}[x,y]$.
\end{lemma}

\begin{proof}
It is straightforward to show that the polynomial $h(x,y)$ satisfies the following linear inhomogeneous partial differential equation
\begin{equation}
\label{Lienard_exp_inv1_1}
yh_x-\{f(x)y+g(x)\}h_y=\varrho(x,y).
\end{equation}
Let us represent the functions $h(x,y)$ and $\varrho(x,y)$ as polynomials in $y$ with coefficients that are polynomials in $x$. Substituting the expressions $h(x,y)=h_0(x)+h_1(x)y+\ldots$ and $\varrho(x,y)=\varrho_0(x)+\varrho_1(x)y+\ldots$ into equation \eqref{Lienard_exp_inv1_1} and selecting the coefficients of $y^0$, we get the relation $g(x)h_1(x)=-\varrho_0(x)$. Since the degree of the polynomial $\varrho_0(x)$ is at most $\deg g-1$, we conclude that $h_1(x)\equiv0$ and $\varrho_0(x)\equiv 0$. As a result the polynomial $\varrho(x,y)$ is given by the relation $\varrho(x,y)=(\varrho_1(x)+\ldots)y$. This completes the proof.
\end{proof}

As a direct consequence of this lemma, we establish that Darboux or Liouvillian integrable Li\'{e}nard differential systems necessarily have invariant algebraic curves.

\begin{theorem}\label{T:Lienard_nonintegrability}
If a Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the condition $\deg g>\deg f$ has no invariant algebraic curves, then this system is not integrable with a Darboux or Liouvillian first integral.
\end{theorem}

\begin{proof}
It is a simple result of the Darboux theory of integrability that a differential system \eqref{DS} without invariant algebraic curves cannot have rational first integrals. Suppose that a Li\'{e}nard differential system without invariant algebraic curves possesses a Darboux first integral. Then this first integral is given by an exponential invariant with a zero cofactor. Consequently, the argument of the exponential function in the invariant provides a rational first integral.
This is a contradiction. If a Li\'{e}nard differential system \eqref{Lienard_gen} without invariant algebraic curves is Liouvillian integrable, then there exists a Darboux integrating factor and this integrating factor equals some exponential invariant of the form $E(x,y)=\exp[h(x,y)]$, $h(x,y)\in\mathbb{C}[x,y]$. Calculating the divergence of the vector field
\begin{equation}
\label{Lienard_Inegrability_VF}
\mathcal{X}=y\frac{\partial}{\partial x}-(f(x)y+g(x))\frac{\partial}{\partial y},
\end{equation}
we get $\text{div}\,\mathcal{X}=-f(x)$. The divergence is independent of $y$, while by Lemma \ref{L:Lienard_exp_inv1} the cofactor of the exponential invariant $E(x,y)=\exp[h(x,y)]$ is divisible by $y$ provided that $\deg g>\deg f$. Consequently, condition \eqref{JLM_gen_cond} cannot be satisfied in the autonomous case.
\end{proof}

Thus, we conclude that Liouvillian integrable Li\'{e}nard differential systems~\eqref{Lienard_gen} with the restriction $\deg f<\deg g$ must have at least one invariant algebraic curve.

\begin{theorem}\label{T:Lienard_Intgerability_generic}
A generic Li\'{e}nard differential system \eqref{Lienard_gen} with fixed degrees of the polynomials $f(x)$ and $g(x)$ is neither Darboux nor Liouvillian integrable provided that the following restrictions $\deg g>\deg f$ and ($\deg f, \deg g$) $\neq (0,1)$ are valid.
\end{theorem}

\begin{proof}
Let us denote the set of all Li\'{e}nard differential systems with fixed degrees of the polynomials $f(x)$ and $g(x)$ as $L_{m,n}$. Any particular Li\'{e}nard differential system can be identified with a point in $\mathbb{C}^{m+n}\times(\mathbb{C}\setminus\{0\})^2$. In view of Theorem \ref{T:Lienard_nonintegrability}, we need to prove that the subset of Li\'{e}nard differential systems~\eqref{Lienard_gen} without invariant algebraic curves is of full Lebesgue measure in the set $L_{m,n}$. Note that here we speak about finite invariant algebraic curves. Extending systems \eqref{Lienard_gen} from the complex plane $\mathbb{C}^2$ to the complex projective plane $\mathbb{C}P^2$, we see that the line at infinity is always invariant for Li\'{e}nard differential systems.

We begin by considering systems from family ($A$). The subset of Li\'{e}nard differential systems such that the related equations~\eqref{Lienard_y_x} possess a family of formal Puiseux series solutions $y^{(1)}_{\infty}(x)$ has Lebesgue measure zero. Indeed, there always arises a compatibility condition enabling this family of series to exist. We only need to show that the compatibility condition cannot be identically satisfied. To this end we track the appearance of the coefficient $g_{n-m}$ in the series $y^{(1)}_{\infty}(x)$. We use the following representation
\begin{equation}
\label{Lienard_Inegrability_Generic_PS}
y^{(1)}_{\infty}(x)=\sum_{l=0}^{+\infty}v_l(x)\left(g_{n-m}\right)^l,
\end{equation}
where $\{v_l(x)\}$ are elements of the field $\mathbb{C}_{\infty}\{x\}$. The compatibility condition arises when one tries to find the coefficient of $x^0$. Substituting representation \eqref{Lienard_Inegrability_Generic_PS} into equation \eqref{Lienard_y_x} and setting to zero the coefficients of $g^l_{n-m}$ for $l=0$, $1$, we find the ordinary differential equations
\begin{equation}
\label{Lienard_Inegrability_Generic_PS2}
v_0v_{0,x}+f(x)v_0+\hat{g}(x)=0,\quad v_0v_{1,x}+(f(x)+v_{0,x})v_1+x^m=0,
\end{equation}
where we use the designation $\hat{g}(x)=g(x)-g_{n-m}x^m$.
The dominant behavior of the series $v_0(x)$ near the point $x=\infty$ is
\begin{equation}
\label{Lienard_Inegrability_Generic_PS3}
v_0(x)=-\frac{f_0}{m+1}x^{m+1}+o(x^{m+1}),\quad x\rightarrow\infty.
\end{equation}
Analyzing the ordinary differential equation for the series $v_1(x)$, we see that this equation should have a solution with the dominant behavior $v_1(x)=e_0x^0$, $e_0\in\mathbb{C}\setminus\{0\}$, $x\rightarrow\infty$, but it does not. In addition, the equations for $v_l(x)$, $l\geq 2$ have a zero solution whenever $v_1(x)$ is zero. Thus, the compatibility condition enabling the series $y^{(1)}_{\infty}(x)$ to exist is given by a first-degree polynomial with respect to $g_{n-m}$ and cannot hold identically. Consequently, it is necessary to consider systems with invariant algebraic curves given by bivariate polynomials not involving the series $y^{(1)}_{\infty}(x)$ in their factorization in the ring $\mathbb{C}_{\infty}\{x\}[y]$. In view of Theorem \ref{T1:Lienard_gen}, the corresponding irreducible invariant algebraic curve is given by the polynomial $F(x,y)=y-p(x)-z_1$, where $z_1\in\mathbb{C}$ and the polynomial $p(x)$ of degree $n-m$ is described by expression~\eqref{Lienard1_F_series_q}. Now we consider the subset of Li\'{e}nard differential systems with the invariant algebraic curve $y-p(x)-z_1=0$. Since the related equation~\eqref{Lienard_y_x} possesses the polynomial solution $y(x)=p(x)+z_1$, we find the polynomial $g(x)$. The result is $g(x)=-(p+z_1)(p_x+f)$. We see that the dimension of the subset of Li\'{e}nard differential systems under consideration is less than the dimension of $L_{m,n}$.

We turn to systems from family ($B$). It follows from Theorem \ref{T:Lienard_degenerate} that if a generic Li\'{e}nard differential system from family ($B$) possesses irreducible invariant algebraic curves, then their generating polynomials are of degrees either $1$ or $2$ with respect to $y$. Let us suppose that there exists the irreducible invariant algebraic curve $y-q_l(x)=0$. In this expression $q_l(x)=\{y^{(l)}_{\infty}(x)\}_+$, $l=1$, $2$ is a polynomial of degree $m+1$. By analogy with systems from family ($A$), we can find the polynomial $g(x)$. We see from the relation $g(x)=-q_l(f+q_{l,x})$ that the subset of Li\'{e}nard differential systems with the invariant algebraic curve $y-q_l(x)=0$ is of dimension $2m+3$, while the dimension of $L_{m,n}$ is $m+n+2=3(m+1)$. Hence the subset in question is of zero Lebesgue measure in $L_{m,n}$ provided that $m>0$.

Now we suppose that Li\'{e}nard differential systems from family ($B$) have the hyperelliptic invariant algebraic curve $(y+u(x))^2+w(x)=0$. By Theorem \ref{T:Lienard_degenerate} this curve is unique. In addition, we see that the polynomial $u(x)$ is of degree $m+1$ and the polynomial $w(x)$ is of degree at most $2m+2$. Following H.~\.{Z}o{\l}\c{a}dek \cite{Zoladec01}, we substitute the bivariate polynomial $F(x,y)=(y+u(x))^2+w(x)$ into the partial differential equation
\begin{equation}
\label{Lienard_Inegrability_Generic_PDE}
yF_x-(f(x)y+g(x))F_y-\lambda(x,y)F=0.
\end{equation}
Further, we set to zero the coefficients of different powers of $y$. Since the cofactor $\lambda(x,y)$ is independent of $y$, we express the polynomials $f(x)$ and $g(x)$ via $u(x)$ and $w(x)$. The result~is
\begin{equation}
\label{Lienard_Inegrability_Generic_PS4}
f(x)=u_x+\frac{uw_x}{2w},\quad g(x)=\frac{w_x}{2}+\frac{u^2w_x}{2w}.
\end{equation}
We see from these expressions that any zero of the polynomial $w(x)$ is also a zero of the polynomial $u(x)$. The polynomial $u(x)$ is parameterized by at most $m+2$ parameters and the polynomial $w(x)$ adds only one new parameter. Thus, we conclude that the dimension of Li\'{e}nard differential systems from family ($B$) with the hyperelliptic invariant algebraic curve $(y+u(x))^2+w(x)=0$ is at most $m+3$, while the dimension of $L_{m,2m+1}$ is $3m+3$.

Finally, we are left with systems from family ($C$). Let us track the dependence of the Puiseux series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$ on the coefficient $f_m$. We introduce the representation
\begin{equation}
\label{Lienard_Inegrability_Generic_PS_Cn}
y^{(k)}_{\infty}(x)=\sum_{l=0}^{+\infty}v_l^{(k)}(x)\left(f_{m}\right)^l,\quad k=1,2,
\end{equation}
where $\{v_l^{(k)}(x)\}$ are elements of the field $\mathbb{C}_{\infty}\{x\}$. Let us introduce the following designation: $\hat{f}(x)=f(x)-f_m$. Substituting these representations into equation~\eqref{Lienard_y_x} and setting to zero the coefficients of $f^l_{m}$ with $l=0$, $1$, we find the ordinary differential equations
\begin{equation}
\label{Lienard_Inegrability_Generic_PS2_Cn}
v_0v_{0,x}+\hat{f}(x)v_0+g(x)=0,\quad v_0v_{1,x}+(\hat{f}(x)+v_{0,x})v_1+v_0=0.
\end{equation}
Note that we omit the upper index. Let us suppose that $n$ is even. Puiseux series from the field $\mathbb{C}_{\infty}\{x\}$ that satisfy these equations are of the form
\begin{equation}
\label{Lienard_Inegrability_Generic_PS3_Cn}
\begin{gathered}
v_0^{(k)}(x)=\sum_{j=0}^{+\infty}c_j^{(k)}x^{\frac{n+1-j}{2}},\quad k=1,2,\quad c_0^{(1,2)}=\pm \frac{\sqrt{-2(n+1)g_0}}{n+1};\\
v_1^{(k)}(x)=\sum_{j=0}^{+\infty}e_j^{(k)}x^{\frac{2-j}{2}},\quad k=1,2,\quad e_0^{(1,2)}=- \frac{2}{n+1}.\hfill
\end{gathered}
\end{equation}
Theorem \ref{T:Darboux_pols_computation_Lienard} gives the following necessary condition for an invariant algebraic curve $F(x,y)=0$ to exist: $N_1b_{n+3}^{(1)}+N_2b_{n+3}^{(2)}=0$. Recall that $b_{n+3}^{(k)}$ is a coefficient of the family of series $y^{(k)}_{\infty}(x)$ and $N_k$ is the number of times the family of series $y^{(k)}_{\infty}(x)$ enters the factorization of $F(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. Using Theorem \ref{T1:Lienard_2m+1}, we get $N_1=N_2$. Calculating the first five coefficients of the series $v_1^{(k)}(x)$, we see that $e_{4}^{(1)}+e_{4}^{(2)}$ is not identically zero for any non-zero polynomial $f(x)$ of degree $m<(n-1)/2$. Hence the expression $N_1(b_{n+3}^{(1)}+b_{n+3}^{(2)})$ is a polynomial with respect to $f_m$ possessing a non-zero coefficient of $f_m$ in the generic case.

Now we assume that $n$ is odd. The Puiseux series $v_0^{(k)}(x)$ and $v_1^{(k)}(x)$ solving equations \eqref{Lienard_Inegrability_Generic_PS2_Cn} are of the form \eqref{Lienard_Inegrability_Generic_PS3_Cn}, where the coefficients $\left\{c_j^{(k)}\right\}$ and $\left\{e_j^{(k)}\right\}$ with odd lower indices equal zero. Further, we again consider the necessary condition $N_1b_{n+3}^{(1)}+N_2b_{n+3}^{(2)}=0$, but now the numbers $N_1$ and $N_2$ may not be equal. Calculating the first three non-trivial coefficients of the series $v_1^{(k)}(x)$, we see that the expression $N_1e_{4}^{(1)}+N_2e_{4}^{(2)}$ is not identically zero for any numbers $N_1$, $N_2\geq0$, $N_1+N_2>0$ and any non-zero polynomial $f(x)$ of degree $m<(n-1)/2$.
Consequently, the expression $N_1b_{n+3}^{(1)}+N_2b_{n+3}^{(2)}$, regarded as a polynomial with respect to $f_m$, possesses a non-zero coefficient of $f_m$ in the generic case. We conclude that a generic Li\'{e}nard differential system from family ($C$) has no finite invariant algebraic curves. \end{proof} It is straightforward to see that if $(\deg f,\deg g)=(0,1)$, then the associated Li\'{e}nard differential systems are linear. They always have invariant lines and are Darboux integrable. \section{Integrability of Li\'{e}nard differential systems from family ($A$)}\label{S:Lienard_A} We have proved in the previous section that if a polynomial Li\'{e}nard differential system is Liouvillian integrable, then it has at least one invariant algebraic curve. Let us investigate the existence of exponential invariants with non-polynomial arguments. Here and in what follows we use the designations of Theorem \ref{T1:Lienard_gen}. In particular, the polynomials $q(x)$ and $p(x)$ give the initial parts of the Puiseux series near the point $x=\infty$ that solve equation \eqref{Lienard_y_x} whenever $\deg f<\deg g<2\deg f+1$. These polynomials are presented in expression \eqref{Lienard1_F_series_q}. \begin{lemma}\label{L:Lienard_exp_inv2} Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($A$) do not have exponential invariants of the form $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$, where $h(x,y)\in\mathbb{C}[x,y]$ and $r(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}$ are coprime polynomials. \end{lemma} \begin{proof} The proof is by contradiction. Let $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$ be an exponential invariant of a Li\'{e}nard differential system \eqref{Lienard_gen} from family ($A$). Since the polynomial $r(x,y)$ is not a constant, we see that $r(x,y)=0$ is an invariant algebraic curve of the differential system under consideration \cite{Christopher}. According to the results of Theorem \ref{T1:Lienard_gen} we can represent the polynomial $r(x,y)$ in the form $r(x,y)=F_1^{n_1}(x,y)F_2^{n_2}(x,y)$, where $n_1$, $n_2\in\mathbb{N}_0$, $n_1+n_2>0$, and the polynomials $F_1(x,y)$ and $F_2(x,y)$ produce invariant algebraic curves of the related differential system. These polynomials are given by expression \eqref{Lienard1_F} with $N=1$, $k=0$ (the invariant algebraic curve $F_1(x,y)=0$) and by expression \eqref{Lienard1_F} with $N\in\mathbb{N}$, $k=1$ (the invariant algebraic curve $F_2(x,y)=0$). Relation \eqref{Lienard1_cof} yields the explicit expressions of the cofactors \begin{equation} \label{Lienard_exp_inv1_cof1} \begin{gathered} \lambda_1(x,y)=-f(x)-q_x(x)=o(x^m),\quad x\rightarrow \infty,\hfill \\ \lambda_2(x,y)=-Nf(x)-(N-1)q_x(x)-p_x(x) =-f_0x^m+o(x^m),\quad x\rightarrow \infty. \end{gathered} \end{equation} related to these invariant algebraic curves, respectively. The cofactor of the invariant algebraic curve $r(x,y)=0$ equals $\lambda(x,y)=n_1\lambda_1(x,y)+n_2\lambda_2(x,y)$. If the Li\'{e}nard differential system under consideration possesses only one irreducible invariant algebraic curve, for example $F_1(x,y)=0$, then we assume that $n_2=0$ and vice versa. Here and in what follows, $O$ denotes the zero element of the field $\mathbb{C}_{\infty}\{x\}$. Let us suppose that $y^{(2)}_{\infty}(x)$ is a Puiseux series of type ($II$) such that the following condition $\displaystyle F_2\left(x,y^{(2)}_{\infty}(x)\right)=O$ is valid. Substituting $y=y^{(2)}_{\infty}(x)$ into the function $ \lambda(x,y)/y$, we can expand it into a Puiseux series near infinity. 
Supposing that $n_2>0$, we find the dominant behavior near the point $x=\infty$ of this series. The result is
\begin{equation}
\label{Lienard_exp_inv1_3}
\frac{\lambda\left(x,y^{(2)}_{\infty}(x)\right)}{y^{(2)}_{\infty}(x)}=\frac{n_2f_0^2}{g_0}x^{2m-n}+o\left(x^{2m-n}\right), \quad x\rightarrow\infty.
\end{equation}
The inequality $\deg f<\deg g<2\deg f+1$ yields $2m-n>-1$. Using Theorem~\ref{T:Exp_fact}, we come to a contradiction. Consequently, we should set $n_2=0$. The polynomial $r(x,y)$ now takes the form $r(x,y)=F_1^{n_1}(x,y)$ with $F_1(x,y)=y-q(x)-z_0$ and $z_0\in\mathbb{C}$. Equation $\mathcal{X}E=\varrho(x,y) E$ produces the following linear inhomogeneous partial differential equation
\begin{equation}
\label{Lienard_exp_inv1_3a}
yh_x-(f(x)y+g(x))h_y=n_1\lambda_{1}(x,y)h+\varrho(x,y) F^{n_1}_{1}(x,y), \, \varrho(x,y)\in\mathbb{C}[x,y]
\end{equation}
satisfied by the polynomial $h(x,y)$. Let us consider the truncated Puiseux series $y(x)=q(x)+z_0$ that solves equation \eqref{Lienard_y_x} whenever the invariant algebraic curve $F_{1}(x,y)=0$ exists. Recall that $y(x)$ is a zero of the polynomial $F_{1}(x,y)$. Considering the restriction $H(x)=h(x,y)|_{y=y(x)}$, we obtain the ordinary differential equation
\begin{equation}
\label{Lienard_exp_inv1_4}
\left(q(x)+z_0\right)\frac{d H}{d x}=n_1\lambda_{1}(x,y) H,\quad \lambda_1(x,y)=-f(x)-q_x(x).
\end{equation}
Since the polynomials $h(x,y)$ and $F_{1}(x,y)$ are relatively prime, we conclude that $H(x)\not\equiv 0$. Indeed, assuming the converse and using B\'{e}zout's theorem, we see that the bivariate polynomials $h(x,y)$ and $r(x,y)$ have a common factor. It follows from the relations $\deg q=m+1$ and $\deg \lambda_1\leq m-1$ that equation \eqref{Lienard_exp_inv1_4} does not have non-zero polynomial solutions. This fact contradicts the existence of exponential invariants $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$ with a non-constant polynomial $r(x,y)$.
\end{proof}

Now let us establish that Li\'{e}nard differential systems \eqref{Lienard_gen} do not have Darboux first integrals whenever $\deg f<\deg g<2\deg f+1$.

\begin{theorem}\label{T:Lienard_Integrability1}
Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($A$) are not Darboux integrable.
\end{theorem}

\begin{proof}
Using Theorem \ref{T:L_two_curves}, we see that a Li\'{e}nard differential system \eqref{Lienard_gen} with the restriction $\deg f<\deg g<2\deg f+1$ has at most two distinct irreducible invariant algebraic curves simultaneously. As in the proof of Lemma \ref{L:Lienard_exp_inv2}, we denote these curves as $F_1(x,y)=0$ and $F_2(x,y)=0$. Without loss of generality, we can suppose that the generating polynomials of these algebraic curves are given by expression \eqref{Lienard1_F} with $N=1$, $k=0$ and $N\in\mathbb{N}$, $k=1$, respectively. If there exists a first integral that is a Darboux function, then it must be of the form
\begin{equation}
\label{Lienard_Inegrability_Gen1}
I(x,y)=F_1^{d_1}(x,y)F_2^{d_2}(x,y),\quad d_1,d_2\in\mathbb{C},\quad |d_1|+|d_2|>0.
\end{equation}
Indeed, Lemma \ref{L:Lienard_exp_inv2} forbids the existence of exponential factors given by invariants $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$, where $h(x,y)\in\mathbb{C}[x,y]$ and $r(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}$.
By Lemma \ref{L:Lienard_exp_inv1} exponential invariants $E(x,y)=\exp[h(x,y)]$ that could arise in an expression of the first integral have cofactors divisible by $y$, while invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ have cofactors that are independent of~$y$. Recall that there are no rational first integrals by the corollary to Theorem \ref{T:L_two_curves}. Consequently, first integrals that are Darboux functions do not have exponential factors. Let us note that if the Li\'{e}nard differential system in question has only one irreducible invariant algebraic curve, for example $F_1(x,y)=0$, then we suppose that $d_2=0$ and vice versa. The cofactors $\lambda_1(x,y)$ and $\lambda_2(x,y)$ of invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ are given in relation \eqref{Lienard_exp_inv1_cof1}. First integral \eqref{Lienard_Inegrability_Gen1} exists provided that the following condition $d_1\lambda_1(x,y)+d_2\lambda_2(x,y)=0$ is satisfied. This condition can be rewritten as
\begin{equation}
\label{Lienard_Inegrability_Gen2}
d_1[f(x)+q_x(x)]+d_2[Nf(x)+(N-1)q_x(x)+p_x(x)]=0.
\end{equation}
The highest-degree term in condition \eqref{Lienard_Inegrability_Gen2} is $d_2f_0x^m$. Since $f_0\neq0$, we get $d_2=0$. Consequently, the first integral can be chosen as a rational function with $d_1=1$. Again we recall the corollary to Theorem \ref{T:L_two_curves}, which excludes the existence of rational first integrals.
\end{proof}

Further, our aim is to study the existence of non-autonomous Darboux first integrals with a time-dependent exponential factor.

\begin{lemma}\label{L:Lienard_Integrability_time1}
A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($A$) has a non-autonomous Darboux first integral with a time-dependent exponential factor \eqref{FI_t_gen} if and only if $\deg g=\deg f +1$ and the following condition
\begin{equation}
\label{Lienard_Inegrability_1g}
g(x)=\omega\left(\int_{0}^xf(s)ds-\omega x-z_0\right),\quad \omega,z_0\in\mathbb{C},\quad \omega\neq0
\end{equation}
is valid. A first integral takes the form
\begin{equation}
\label{Lienard_Inegrability_1}
I(x,y,t)=\left(y+\int_{0}^xf(s)ds-\omega x-z_0\right)\exp(\omega t).
\end{equation}
There are no other independent non-autonomous Darboux first integrals with a time-dependent exponential factor.
\end{lemma}

\begin{proof}
If the polynomial $g(x)$ is given by relation \eqref{Lienard_Inegrability_1g}, then it is straightforward to verify that function \eqref{Lienard_Inegrability_1} is a non-autonomous first integral of the related differential system; a symbolic sketch of this verification is given below. Let us establish the necessity of condition \eqref{Lienard_Inegrability_1g}. We repeat the reasoning given in the proof of Theorem \ref{T:Lienard_Integrability1}. By Theorems~\ref{T:L23_Non_aut_FI} and \ref{T:L_two_curves} a non-autonomous first integral \eqref{FI_t_gen} reads as
\begin{equation}
\label{Lienard_Inegrability_t_Gen1}
\begin{gathered}
I(x,y,t)=F_1^{d_1}(x,y)F_2^{d_2}(x,y)\exp(\omega t),\, d_1,d_2,\omega\in\mathbb{C},\, |d_1|+|d_2|>0,\, \omega\neq0.
\end{gathered}
\end{equation}
Now we need to study the following condition
\begin{equation}
\label{Lienard_Inegrability_t_Gen2}
d_1[f(x)+q_x(x)]+d_2[Nf(x)+(N-1)q_x(x)+p_x(x)]-\omega=0.
\end{equation}
Similarly to the case of Theorem \ref{T:Lienard_Integrability1}, we obtain $d_2=0$. Further, it is without loss of generality to set $d_1=1$.
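The sufficiency verification mentioned at the beginning of the proof is easy to carry out with a computer algebra system. A minimal sketch, assuming the flow convention $x_t=y$, $y_t=-(f(x)y+g(x))$ associated with equation \eqref{Lienard_y_x}; the choice $f(x)=x$ is a sample only, while $\omega$ and $z_0$ stay symbolic:
\begin{verbatim}
# Sketch: sufficiency part of the lemma; sample choice f(x) = x,
# omega and z0 symbolic; flow x_t = y, y_t = -(f(x)*y + g(x)).
import sympy as sp

x, y, t = sp.symbols('x y t')
w, z0 = sp.symbols('omega z_0')

f = x                                  # sample f(x); any polynomial works
F = sp.integrate(f, (x, 0, x))         # int_0^x f(s) ds
g = w*(F - w*x - z0)                   # condition on g(x) from the lemma
I = (y + F - w*x - z0)*sp.exp(w*t)     # candidate first integral

dI = sp.diff(I, t) + y*sp.diff(I, x) - (f*y + g)*sp.diff(I, y)
print(sp.simplify(dI))                 # prints 0
\end{verbatim}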
Considering relation \eqref{Lienard_Inegrability_t_Gen2} as an ordinary differential equation for the polynomial $q(x)$ and performing the integration, we find its solution
\begin{equation}
\label{Lienard_Inegrability_t_Gen2_2}
q(x)=-\int_{0}^xf(s)ds+\omega x
\end{equation}
satisfying the condition $q(0)=0$ presented in \eqref{Lienard1_F_series_q}. Finally, we note that the invariant algebraic curve $F_1(x,y)=0$ given by the polynomial $F_1(x,y)=y-q(x)-z_0$ exists whenever the series of type ($I$) defined in \eqref{Lienard1_F_series} terminates at the zero term. In this case equation \eqref{Lienard_y_x} has a polynomial solution $y(x)=q(x)+z_0$. Substituting our results into equation \eqref{Lienard_y_x}, we find expression \eqref{Lienard_Inegrability_1g}. Consequently, we obtain the following equality $\deg g=\deg f+1$. The absence of other independent non-autonomous Darboux first integrals \eqref{FI_t_gen} follows from the previous considerations and the uniqueness of the invariant algebraic curve $F_1(x,y)=0$.
\end{proof}

It follows from Lemma \ref{L:Lienard_Integrability_time1} that the ordinary differential equation \eqref{Lienard_y_x} related to a Li\'{e}nard differential system from family ($A$) possessing a non-autonomous Darboux first integral has a polynomial solution $y(x)=q(x)+z_0$. We recall that integrating factors and Jacobi last multipliers with a constant ratio belong to the same equivalence class. We do not distinguish between them.

\begin{theorem}\label{T:Lienard_Integrability2}
A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($A$) is Liouvillian integrable if and only if the following assertions are valid:
\begin{enumerate}
\item the system under consideration possesses two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$, where the polynomial $F_1(x,y)=y-q(x)-z_0$ is given by expression~\eqref{Lienard1_F} with $N=1$, $k=0$ and the polynomial $F_2(x,y)\in\mathbb{C}[x,y]$ takes the form \eqref{Lienard1_F} with $N\in\mathbb{N}$, $k=1$;
\item the polynomials $q(x)$ and $p(x)$ giving the initial parts of the Puiseux series near the point $x=\infty$ that solve equation \eqref{Lienard_y_x} identically satisfy the condition
\begin{equation}
\label{Lienard_Inegrability_C1_cond}
(n-m)[f(x)+q_x(x)]+(m+1)p_x(x)=0.
\end{equation}
\end{enumerate}
The related Li\'{e}nard differential system has the unique Darboux integrating factor
\begin{equation}
\label{Lienard_Inegrability_Int_fact}
M(x,y)=\frac{(y-q(x)-z_0)^{\frac{N(m+1)-(n+1)}{m+1}}}{F_2(x,y)}.
\end{equation}
\end{theorem}

\begin{proof}
Using Theorem \ref{T:Liouville}, we see that a Liouvillian integrable differential system should have a Darboux integrating factor. Arguing as in the proof of Theorem~\ref{T:Lienard_Integrability1}, we conclude that such an integrating factor does not involve exponential invariants and is of the form
\begin{equation}
\label{Lienard_Inegrability_IF1}
M(x,y)=F_1^{d_1}(x,y)F_2^{d_2}(x,y),\quad d_1,d_2\in\mathbb{C},\quad |d_1|+|d_2|>0.
\end{equation}
In this expression the polynomials $F_1(x,y)$ and $F_2(x,y)$ define invariant algebraic curves. The polynomial $F_1(x,y)$ is given by expression \eqref{Lienard1_F} with $N=1$ and $k=0$. The polynomial $F_2(x,y)$ is produced by the same expression with $N\in\mathbb{N}$ and $k=1$. Again we note that if the Li\'{e}nard differential system in question has only one irreducible invariant algebraic curve, for example $F_1(x,y)=0$, then we set $d_2=0$ and vice versa.
Calculating the divergence of vector field \eqref{Lienard_Inegrability_VF} yields $\text{div}\,\mathcal{X}=-f(x)$. Hence the necessary and sufficient condition for Darboux integrating factor~\eqref{Lienard_Inegrability_IF1} to exist takes the form $d_1\lambda_1(x,y)+d_2\lambda_2(x,y)=f(x)$, where $\lambda_1(x,y)$ and $\lambda_2(x,y)$ are the cofactors of invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$, respectively. These cofactors are given in expression \eqref{Lienard_exp_inv1_cof1}. The condition enabling the existence of the Darboux integrating factor explicitly reads as
\begin{equation}
\label{Lienard_Inegrability_IF2}
d_1[f(x)+q_x(x)]+d_2[Nf(x)+(N-1)q_x(x)+p_x(x)]=-f(x).
\end{equation}
Balancing the highest-degree terms in this expression, we get $d_2=-1$. Further, we recall that the invariant algebraic curve $F_1(x,y)=0$ is given by the polynomial $F_1(x,y)=y-q(x)-z_0$. If $d_1=0$, then the curve $F_1(x,y)=0$ either does not exist or does not enter the explicit expression \eqref{Lienard_Inegrability_IF1} of the integrating factor. Now let us consider the representation of the cofactors $\lambda_1(x,y)$ and $\lambda_2(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. They are of the form
\begin{equation}
\label{Lienard_Inegrability_C1_n1}
\begin{gathered}
\lambda_1(x,y)=-f(x)-q_x(x),\, \lambda_2(x,y)=-\sum_{j=1}^{N-1}\left[f(x)+\left(y^{(1)}_{j,\infty}\right)_x\right]-\left[f(x)+\left(y^{(2)}_{N,\infty}\right)_x\right].
\end{gathered}
\end{equation}
Substituting these expressions into the condition $d_1\lambda_1(x,y)+d_2\lambda_2(x,y)=f(x)$ with $d_2=-1$, we find
\begin{equation}
\label{Lienard_Inegrability_C1_n2}
\begin{gathered}
d_1[f(x)+q_x(x)]-\sum_{j=1}^{N-1}\left[f(x)+\left(y^{(1)}_{j,\infty}\right)_x\right]=\left(y^{(2)}_{N,\infty}\right)_x.
\end{gathered}
\end{equation}
The Puiseux series $y(x)=y^{(1)}_{j,\infty}(x)$ and the polynomial $y(x)=q(x)+z_0$ solve equation \eqref{Lienard_y_x}. Thus, we obtain
\begin{equation}
\label{Lienard_Inegrability_C1_n3}
\begin{gathered}
f(x)+q_x(x)=-\frac{g(x)}{q(x)+z_0},\quad f(x)+\left(y^{(1)}_{j,\infty}\right)_x=-\frac{g(x)}{y^{(1)}_{j,\infty}}.
\end{gathered}
\end{equation}
Again we suppose that if there is no polynomial solution $y(x)=q(x)+z_0$, then $d_1=0$. Using expressions \eqref{Lienard_Inegrability_C1_n3}, we rewrite condition \eqref{Lienard_Inegrability_C1_n2} in the form
\begin{equation}
\label{Lienard_Inegrability_C1_n4}
\begin{gathered}
g(x)\left[\sum_{j=1}^{N-1}\frac{1}{y^{(1)}_{j,\infty}}-\frac{d_1}{q(x)+z_0}\right]=\left(y^{(2)}_{N,\infty}\right)_x.
\end{gathered}
\end{equation}
Let us find the highest-order terms in a neighborhood of the point $x=\infty$. Using the asymptotic formulae
\begin{equation}
\label{Lienard_Inegrability_C1_n5}
\begin{gathered}
g(x)=g_0x^n+o(x^n),\quad q(x)=-\frac{f_0}{m+1}x^{m+1}+o(x^{m+1}),\quad x\rightarrow\infty,\hfill\\
y^{(1)}_{j,\infty}(x)=-\frac{f_0}{m+1}x^{m+1}+o(x^{m+1}),\quad y^{(2)}_{N,\infty}(x)=-\frac{g_0}{f_0}x^{n-m}+o(x^{n-m}),\quad x\rightarrow\infty,
\end{gathered}
\end{equation}
we collect the coefficients of $x^{n-m-1}$ in expression \eqref{Lienard_Inegrability_C1_n4}. This yields
\begin{equation}
\label{Lienard_Inegrability_C1_n6}
d_1=N-1-\frac{n-m}{m+1}.
\end{equation}
Using inequalities $m<n<2m+1$, we see that the parameter $d_1$ cannot be zero.
Substituting relation \eqref{Lienard_Inegrability_C1_n6} and $d_2=-1$ into expressions \eqref{Lienard_Inegrability_IF1} and \eqref{Lienard_Inegrability_IF2}, we find condition \eqref{Lienard_Inegrability_C1_cond} and the Darboux integrating factor as given in \eqref{Lienard_Inegrability_Int_fact}.
\end{proof}

\textit{Remark.} Condition \eqref{Lienard_Inegrability_C1_cond} is identically satisfied whenever $\deg g=\deg f +1$. Indeed, if $n=m+1$, then we get
\begin{equation}
\label{Lienard_Inegrability_C1_n7}
f(x)+q_x(x)=\frac{(m+1)g_0}{f_0},\quad p(x)=-\frac{g_0}{f_0}x.
\end{equation}
Consequently, Li\'{e}nard differential systems satisfying the restriction $\deg g=\deg f +1$ and possessing two distinct irreducible invariant algebraic curves are always Liouvillian integrable.

We see from Theorem \ref{T:Lienard_Integrability2} that equation \eqref{Lienard_y_x} related to a Liouvillian integrable Li\'{e}nard differential system from family ($A$) has a polynomial solution given by the expression $y(x)=q(x)+z_0$. In addition, using condition \eqref{Lienard_Inegrability_C1_cond} and equation \eqref{Lienard_y_x}, we can represent the polynomials $f(x)$ and $g(x)$ in the integrable cases as
\begin{equation}
\label{Lienard_Inegrability_C1_n8}
f(x)=-q_x(x)-\frac{m+1}{n-m}p_x(x),\quad g(x)=\frac{m+1}{n-m}[q(x)+z_0]p_x(x).
\end{equation}
Recall that the polynomial $q(x)$ is of degree $m+1$ and the polynomial $p(x)$ is of degree $n-m$. It follows from Theorem \ref{T:Lienard_Integrability2} that Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($A$) are Liouvillian integrable if and only if the polynomials $f(x)$ and $g(x)$ are given by expressions~\eqref{Lienard_Inegrability_C1_n8} and the systems possess an invariant algebraic curve with the generating polynomial taking the form \eqref{Lienard1_F}, where $N\in\mathbb{N}$ and $k=1$.

Now let us study the integrability of Li\'{e}nard differential systems such that the related equation \eqref{Lienard_y_x} has two distinct polynomial solutions. The following theorem is valid.

\begin{theorem}\label{T:Lienard_IntegrabilityA_partial}
A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($A$) with two distinct invariant algebraic curves given by first-degree polynomials with respect to $y$ is Liouvillian integrable if and only if the system is of the form
\begin{equation}
\label{Lienard_Inegrability_Apartial_1}
x_t=y,\quad y_t=\left[k\beta v^{k-1}(x)+(k+l)v^{l-1}(x)\right]v_xy-k\left[\beta v^k(x)+v^l(x)\right]v^{l-1}(x)v_x,
\end{equation}
where $\beta\in\mathbb{C}\setminus\{0\}$, $v(x)$ is a polynomial of degree $(n-m)/l$, $k$ and $l$ are relatively prime natural numbers such that the restriction $(m+1)l=(n-m)k$ holds. The related Li\'{e}nard differential system has the unique Darboux integrating factor
\begin{equation}
\label{Lienard_Inegrability_Int_Fact_A_p1}
M(x,y)=\{y-\beta v^k(x)-v^l(x)\}^{-\frac{l}{k}}\left\{y-v^l(x)\right\}^{-1}
\end{equation}
and the invariant algebraic curves $y-\beta v^k(x)-v^l(x)=0$, $y-v^l(x)=0$. A Liouvillian first integral is of the form
\begin{equation}
\begin{gathered}
\label{Lienard_Inegrability_FI_A_p1}
I(x,y)=\frac{k\beta^{\frac{l}{k}}}{k-l}\{y-\beta v^k(x)-v^l(x)\}^{\frac{k-l}{k}}+ \sum_{j=0}^{m}\exp\left[-\frac{\pi l(2j+1)i}{k}\right]\\
\times\ln\left\{\{y-\beta v^k(x)-v^l(x)\}^{\frac{1}{m+1}} -\exp\left[\frac{\pi(2j+1)i}{m+1}\right][\beta v^k(x)]^{\frac{1}{m+1}}\right\}.
\end{gathered}
\end{equation}
\end{theorem}

\begin{proof}
We begin the proof by introducing new designations for the polynomial solutions of equation \eqref{Lienard_y_x}: we set $\tilde{q}(x)=q(x)+z_0$ and $\tilde{p}(x)=p(x)+z_1$. Condition~\eqref{Lienard_Inegrability_C1_cond} and equation \eqref{Lienard_y_x} provide explicit expressions \eqref{Lienard_Inegrability_C1_n8} for the polynomials $f(x)$ and $g(x)$. Substituting the relation $y(x)=\tilde{p}(x)$ and expressions \eqref{Lienard_Inegrability_C1_n8} into equation~\eqref{Lienard_y_x}, we solve the resulting first-order linear differential equation for the polynomial $\tilde{q}(x)$. In this way, we get
\begin{equation}
\label{Lienard_Inegrability_C1_n9}
\tilde{q}(x)=\beta \tilde{p}^{\frac{m+1}{n-m}}(x)+\tilde{p}(x),
\end{equation}
where $\beta\in\mathbb{C}$ is a constant of integration. Recalling the fact that $\tilde{q}(x)$ is a polynomial of degree $m+1$ and $\tilde{p}(x)$ is a polynomial of degree $n-m$, we find that $\beta\neq0$. Introducing relatively prime natural numbers $k$ and $l$ according to the rule $(m+1)l=(n-m)k$, we represent the polynomials $\tilde{p}(x)$ and $\tilde{q}(x)$ as
\begin{equation}
\label{Lienard_Inegrability_C1_p_q}
\tilde{p}(x)=v^l(x),\quad \tilde{q}(x)=\beta v^k(x) + v^l(x).
\end{equation}
In this expression $v(x)$ is an arbitrary polynomial of degree $(n-m)/l$. Substituting $N=1$ into expression~\eqref{Lienard_Inegrability_Int_fact}, we find the integrating factor as given in relation \eqref{Lienard_Inegrability_Int_Fact_A_p1}. Finally, we calculate the line integral
\begin{equation}
\label{Lienard_InegrabilityA_first_integrals}
\begin{gathered}
I(x,y)=\int_{(x_0,y_0)}^{(x,y)}M(x,y)\left[ydy+\left\{f(x)y+g(x)\right\}dx\right]
\end{gathered}
\end{equation}
and obtain first integral \eqref{Lienard_Inegrability_FI_A_p1}, where $i$ is the imaginary unit.
\end{proof}

\textit{Remark.} The family of systems \eqref{Lienard_Inegrability_Apartial_1} can be transformed to the following simple form
\begin{equation}
\label{Lienard_Inegrability_Apartial_1_Sundman}
s_{\tau}=z,\quad z_{\tau}=\left[k\beta s^{k-1}+(k+l)s^{l-1}\right]z -k\left[\beta s^k+s^l\right]s^{l-1}
\end{equation}
via the generalized Sundman transformation $s(\tau)=v(x)$, $z(\tau)=y$, $d\tau=v_x(x)dt$. Substituting $v(x)=s$, $y=z$ into \eqref{Lienard_Inegrability_FI_A_p1}, we find a Liouvillian first integral for systems \eqref{Lienard_Inegrability_Apartial_1_Sundman}.

Let us demonstrate that for any fixed degrees of the polynomials $f(x)$ and $g(x)$ there exist Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($A$) that have Liouvillian first integrals. With this aim we suppose that the polynomial $\tilde{p}(x)$ takes the form $\tilde{p}(x)=x^{n-m}$. We find the polynomial $\tilde{q}(x)$ using expression \eqref{Lienard_Inegrability_C1_n9}. The result is $\tilde{q}(x)=\beta x^{m+1}+x^{n-m}$. Thus, we conclude that the following family of Li\'{e}nard differential systems
\begin{equation}
\label{Lienard_Inegrability_C1_n10}
\begin{gathered}
x_t=y,\quad y_t=\left[(m+1)\beta x^m+(n+1)x^{n-m-1}\right]y -(m+1)\left(\beta x^n+x^{2n-2m-1}\right)
\end{gathered}
\end{equation}
is Liouvillian integrable. The related Darboux integrating factor reads as
\begin{equation}
\label{Lienard_Inegrability_C1_n10_DIF}
\begin{gathered}
M(x,y)=\left(y-x^{n-m}\right)^{-1}\left(y-\beta x^{m+1}-x^{n-m}\right)^{\frac{m-n}{m+1}}.
\end{gathered}
\end{equation}
In addition, if the numbers $n$ and $m$ are given by $n=l(k+1)-1$ and $m=lk-1$, where $l$, $k\in\mathbb{N}$, $k>1$, then relations \eqref{Lienard_Inegrability_C1_n8} and \eqref{Lienard_Inegrability_C1_n9} produce Liouvillian integrable systems
\begin{equation}
\label{Lienard_Inegrability_C1_n11}
\begin{gathered}
x_t=y,\quad y_t=\left[k\beta \tilde{p}^{k-1}(x)+k+1\right]\tilde{p}_x(x)y -k\left(\beta\tilde{p}^{k-1}(x)+1\right)\tilde{p}(x)\tilde{p}_x(x),
\end{gathered}
\end{equation}
where $\tilde{p}(x)$ is a polynomial of degree $n-m=l$. Using expression \eqref{Lienard_Inegrability_Int_Fact_A_p1}, we see that systems~\eqref{Lienard_Inegrability_C1_n11} possess the Darboux integrating factor
\begin{equation}
\label{Lienard_Inegrability_C1_n12}
\begin{gathered}
M(x,y)=\{y-\beta\tilde{p}^k(x)-\tilde{p}(x)\}^{-\frac1{k}}\{y-\tilde{p}(x)\}^{-1}.
\end{gathered}
\end{equation}
In the case $k=2$ a related Liouvillian first integral is fairly simple; let us present another expression of a first integral explicitly:
\begin{equation}
\label{Lienard_Inegrability_C1_n13}
\begin{gathered}
I(x,y)=\text{arctanh}\,\left\{\sqrt{\frac{\beta \tilde{p}^{\,2}(x)}{F(x,y)}}\right\}+\sqrt{\beta F(x,y)},\quad F(x,y)=\beta\tilde{p}^{\,2}(x)+\tilde{p}(x)-y.
\end{gathered}
\end{equation}
Finally, let us note that there exist Liouvillian integrable Li\'{e}nard differential systems from family ($A$) such that the degree with respect to $y$ of the polynomial $F_2(x,y)$ in expression \eqref{Lienard_Inegrability_Int_fact} is greater than $1$. In fact, the degree with respect to $y$ of the polynomial $F_2(x,y)$ can be an arbitrary natural number. This fact was established in article \cite{Demina13}.

Our next step is to investigate the existence of non-autonomous Darboux--Jacobi last multipliers. The cases $\deg g=\deg f +1$ and $\deg g\neq \deg f +1$ will be considered separately.

\begin{lemma}\label{L:Lienard_Integrability_t2}
A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the condition $\deg f+1<\deg g<2\deg f+1$ has a non-autonomous Darboux--Jacobi last multiplier of the form \eqref{JLM_gen} if and only if the following assertions are valid:
\begin{enumerate}
\item the system under consideration possesses two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$, where the polynomial $F_1(x,y)=y-q(x)-z_0$ is given by expression~\eqref{Lienard1_F} with $N=1$, $k=0$ and the polynomial $F_2(x,y)\in\mathbb{C}[x,y]$ reads as \eqref{Lienard1_F} with $N\in\mathbb{N}$, $k=1$;
\item the polynomials $q(x)$ and $p(x)$ giving the initial parts of the Puiseux series near the point $x=\infty$ that solve equation \eqref{Lienard_y_x} identically satisfy the condition
\begin{equation}
\label{Lienard_Inegrability_C1_cond_time}
(n-m)[f(x)+q_x(x)]+(m+1)[p_x(x)+\omega]=0,
\end{equation}
where $\omega\in\mathbb{C}\setminus\{0\}$ is a constant.
\end{enumerate}
The non-autonomous Darboux--Jacobi last multiplier is unique and takes the form
\begin{equation}
\label{Lienard_Inegrability_Int_fact_time}
M(x,y,t)=\frac{(y-q(x)-z_0)^{\frac{N(m+1)-(n+1)}{m+1}}}{F_2(x,y)}\exp(\omega t).
\end{equation}
\end{lemma}

\begin{proof}
We use Theorem \ref{T:L23_Non_aut_JLM} and repeat the proof of the previous theorem. The only difference is in condition \eqref{Lienard_Inegrability_IF2}. Now this condition reads as
\begin{equation}
\label{Lienard_Inegrability_t_IF2}
d_1[f(x)+q_x(x)]+d_2[Nf(x)+(N-1)q_x(x)+p_x(x)]-\omega=-f(x),
\end{equation}
where $\omega\neq0$.
The related non-autonomous Darboux--Jacobi last multiplier is given by the expression \begin{equation} \label{Lienard_Inegrability_new1_time_add2} M(x,y,t)=F_1^{d_1}(x,y)F_2^{d_2}(x,y)\exp[\omega t]. \end{equation} Similarly to the case of Theorem \ref{T:Lienard_Integrability2}, we find the values of $d_1$ and $d_2$. They are $d_1=N-1-(n-m)/(m+1)$ and $d_2=-1$. Both of them are non-zero. Consequently, a Li\'{e}nard differential system \eqref{Lienard_gen} with a non-autonomous Darboux--Jacobi last multiplier has two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$. Substituting explicit values of the parameters $d_1$ and $d_2$ into condition \eqref{Lienard_Inegrability_t_IF2} yields relation \eqref{Lienard_Inegrability_C1_cond_time}. If there exist two distinct non-autonomous Darboux-Jacobi last multipliers~\eqref{JLM_gen}, then their ratio is a Darboux first integral either autonomous or non-autonomous of the form \eqref{FI_t_gen}. By Theorem \ref{T:Lienard_Integrability1} and Lemma \ref{L:Lienard_Integrability_time1} such a situation is impossible. \end{proof} \begin{lemma}\label{L:Lienard_Integrability_t2_add} A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the condition $\,$ $\deg g = \deg f+1$ has a non-autonomous Darboux--Jacobi last multiplier of the form \eqref{JLM_gen} if and only if the system under study possesses the irreducible invariant algebraic curve $F_2(x,y)=0$ given by relation \eqref{Lienard1_F} with $N\in\mathbb{N}$ and $k=1$. A non-autonomous Darboux--Jacobi last multiplier is of the from \begin{equation} \label{Lienard_Inegrability_Int_fact_time_add} M(x,y,t)=\frac{\exp\left[\frac{g_0}{f_0}\left\{1-(N-1)(m+1) \right\}t\right]}{F_2(x,y)}. \end{equation} There are no other non-autonomous Darboux--Jacobi last multipliers whenever the invariant algebraic curve $F_2(x,y)=0$ is unique. If, in addition, there exists the invariant algebraic curve $F_1(x,y)=0$ presented in expression~\eqref{Lienard1_F} with $N=1$ and $k=0$, then the system under consideration has a family of non-autonomous Darboux--Jacobi last multipliers \begin{equation} \label{Lienard_Inegrability_Int_fact_time_addn} M(x,y,t)=\frac{F_1^{d_1}(x,y)\exp\left[\frac{g_0}{f_0}\left\{1+\{d_1-(N-1)\}(m+1) \right\}t\right]}{F_2(x,y)},\, d_1\in\mathbb{C}. \end{equation} \end{lemma} \begin{proof} Similarly to the case of Lemma \ref{L:Lienard_Integrability_t2}, we see that a non-autonomous Darboux--Jacobi last multiplier of the form \eqref{JLM_gen} exists if and only if the system under consideration has at least one invariant algebraic curve and condition~\eqref{Lienard_Inegrability_t_IF2} is valid. Again we suppose that $d_j=0$ whenever the invariant algebraic curve $F_j(x,y)=0$ does not exist. The explicit expression of a non-autonomous Darboux--Jacobi last multiplier is given by \eqref{Lienard_Inegrability_new1_time_add2}. Balancing the highest-degree terms in condition \eqref{Lienard_Inegrability_t_IF2} yields $d_2=-1$. This fact proves the existence of the invariant algebraic curve $F_2(x,y)=0$. Further, we find that the polynomials $f(x)+q_x(x)$ and $p_x(x)$ are constants, see relation \eqref{Lienard_Inegrability_C1_n7}. Substituting equalities $d_2=-1$, $f(x)+q_x(x)=(m+1)g_0/f_0$ and $p_x(x)=-g_0/f_0$ into condition \eqref{Lienard_Inegrability_t_IF2}, we obtain \begin{equation} \label{Lienard_Inegrability_Int_fact_time_add_pq} (m+1)g_0d_1+\left\{1-(N-1)(m+1)\right\}g_0-f_0\omega=0. 
\end{equation}
Setting $d_1=0$, we find the value of $\omega$ as given in relation \eqref{Lienard_Inegrability_Int_fact_time_add}. If the Li\'{e}nard differential system in question does not have the invariant algebraic curve $F_1(x,y)=0$, then non-autonomous Darboux--Jacobi last multiplier \eqref{Lienard_Inegrability_Int_fact_time_add} is unique. In the converse case, non-autonomous Darboux--Jacobi last multiplier \eqref{Lienard_Inegrability_Int_fact_time_add} also exists. In addition, we recall that the system has time-dependent Darboux first integral \eqref{Lienard_Inegrability_1}. It is straightforward to show that the product of a first integral and a non-constant Jacobi last multiplier is another Jacobi last multiplier.
\end{proof}

It is demonstrated in \cite{Demina17} that Li\'{e}nard differential systems with non-autonomous Darboux--Jacobi last multipliers \eqref{Lienard_Inegrability_Int_fact_time_add} indeed exist. In addition, let us note that the famous Duffing--van der Pol oscillators belong to family ($A$). The classification of Liouvillian integrable Duffing--van der Pol oscillators is performed in \cite{Demina07}.

\section{Integrability of Li\'{e}nard differential systems from family ($B$)}\label{S:Lienard_B}

Let us consider families of Li\'{e}nard differential systems with fixed degrees of the polynomials $f(x)$ and $g(x)$ such that the following relation $\deg g=2\deg f +1$ is satisfied. It has been established in \cite{Demina18} that, if no restrictions are imposed on the highest-degree coefficients $f_0$ and $g_0$ of the polynomials $f(x)$ and $g(x)$, then the Fuchs indices of the Puiseux series near the point $x=\infty$ that solve the related equations \eqref{Lienard_y_x} depend on the parameters $f_0$ and $g_0$. Consequently, performing the classification of irreducible invariant algebraic curves and Darboux or Liouvillian first integrals is a very difficult problem whenever only the degrees of the polynomials $f(x)$ and $g(x)$ are fixed. The method of Puiseux series can deal with each case of a positive rational Fuchs index separately. Interestingly, such a degeneracy leads to a variety of distinct integrable cases arising in Li\'{e}nard differential systems from family ($B$).

This section is mainly devoted to the non-resonant case. We say that a Li\'{e}nard differential system from family ($B$) is resonant near infinity if equation~\eqref{eq:DP2_5_2} possesses a solution in $\mathbb{Q}^+$. For convenience, we introduce the parameter $\delta$ according to the rule
\begin{equation}\label{eq:DP2_5_sig}
g_0=\frac{f^2_0-\delta^2}{4(m+1)},\quad \delta\in\mathbb{C},\quad \delta\neq\pm f_0.
\end{equation}
Using this normalization, we solve equation \eqref{eq:DP2_5_2}. As a result we find the Fuchs indices. They take the form
\begin{equation}\label{Lienard_degenerate_Int1}
p_1=\frac{2(m+1)\delta}{\delta-f_0},\quad p_2=\frac{2(m+1)\delta}{\delta+f_0}.
\end{equation}
There are no positive rational Fuchs indices if and only if the condition $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$ is valid. In addition, we assume that the following inequality $\deg f>0$ holds. The integrability problem for Li\'{e}nard differential systems~\eqref{Lienard_gen} under restrictions $\deg f=0$ and $\deg g=1$ is simple, see the end of Section \ref{S:Lienard}.

In this section we use the designations of Theorem \ref{T:Lienard_degenerate}. In particular, the Puiseux series near the point $x=\infty$ that solve equation \eqref{Lienard_y_x} are denoted as $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$.
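The effect of the normalization \eqref{eq:DP2_5_sig} is already visible at the leading order: substituting the ansatz $y\approx ax^{m+1}$ into equation \eqref{Lienard_y_x} and keeping the dominant terms yields the quadratic equation $(m+1)a^2+f_0a+g_0=0$ for the leading coefficient. A minimal symbolic sketch of this balance (the assumption $\delta>0$ is imposed only so that $\sqrt{\delta^2}$ simplifies):
\begin{verbatim}
# Sketch: leading-order balance y ~ a*x**(m+1) for family (B) gives
# (m+1)*a**2 + f0*a + g0 = 0; with g0 = (f0**2 - delta**2)/(4*(m+1))
# the roots are (delta - f0)/(2*(m+1)) and -(delta + f0)/(2*(m+1)).
import sympy as sp

a, m, f0 = sp.symbols('a m f_0')
delta = sp.symbols('delta', positive=True)  # only so sqrt(delta**2) = delta

g0 = (f0**2 - delta**2)/(4*(m + 1))
roots = sp.solve(sp.Eq((m + 1)*a**2 + f0*a + g0, 0), a)
print([sp.simplify(r) for r in roots])
\end{verbatim}
The two roots, $(\delta-f_0)/(2(m+1))$ and $-(\delta+f_0)/(2(m+1))$, agree with the dominant behavior of the series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$ displayed below.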
Introducing the variable $\delta$ instead of the parameter $g_0$, we see that these series have the following dominant behavior
\begin{equation}\label{Lienard_degenerate_Puiseux_series_dominant}
\begin{gathered}
y^{(1)}_{\infty}(x)=\frac{\delta-f_0}{2(m+1)}x^{m+1}+o(x^{m+1}),\quad x\rightarrow\infty;\\
y^{(2)}_{\infty}(x)=-\frac{\delta+f_0}{2(m+1)}x^{m+1}+o(x^{m+1}),\quad x\rightarrow\infty.
\end{gathered}
\end{equation}
Let us denote the polynomial parts of the series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$ as $q_1(x)$ and $q_2(x)$, respectively. Thus, we have the equalities
\begin{equation}\label{Lienard_degenerate_Puiseux_series_Polynomial_parts}
\begin{gathered}
q_1(x)=\left\{y^{(1)}_{\infty}(x)\right\}_+,\quad q_2(x)=\left\{y^{(2)}_{\infty}(x)\right\}_+.
\end{gathered}
\end{equation}
If $\delta=0$, then the series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$ coincide. Let us omit the indices and set $q(x)=\left\{y_{\infty}(x)\right\}_+$.

We begin by investigating the existence of exponential invariants related to invariant algebraic curves.

\begin{lemma}\label{L:Lienard_exp_degenerate}
Let $h(x,y)\in\mathbb{C}[x,y]$ and $r(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}$ be relatively prime polynomials. A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$ has exponential invariants $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$ if and only if the following statements are valid:
\begin{enumerate}
\item $\delta=0$;
\item there exists the invariant algebraic curve $F(x,y)=0 $ with the generating polynomial $F(x,y)=y-q(x)$;
\item the ordinary differential equation
\begin{equation}\label{Lienard_degenerate_Int2}
q(x)u_x(x)+[f(x)+q_x(x)]u(x)=0
\end{equation}
has a non-zero polynomial solution $u(x)$.
\end{enumerate}
The related exponential invariant is of the form
\begin{equation}\label{Lienard_degenerate_exp_inv_explicit}
E(x,y)=\exp\left[\frac{u(x)}{y-q(x)}\right]
\end{equation}
and possesses the cofactor $\varrho(x,y)=u_x(x)$.
\end{lemma}

\begin{proof}
It follows from Theorem \ref{T:Lienard_degenerate} that any Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$ has at most two distinct irreducible invariant algebraic curves simultaneously. The degrees with respect to $y$ of polynomials producing irreducible invariant algebraic curves are either $1$ or $2$. If there exists an irreducible invariant algebraic curve of degree $2$ with respect to $y$, then it is unique. Further, there can arise at most two distinct irreducible invariant algebraic curves of degree $1$ with respect to $y$. Since $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$ is an exponential invariant, we conclude that $r(x,y)=0$ is an invariant algebraic curve of the related system. Let us represent exponential invariants in the form
\begin{equation}\label{Lienard_degenerate_exp_inv_explicit_gen}
E(x,y)=\exp\left[\frac{h(x,y)}{F^{n_1}_1(x,y)F^{n_2}_2(x,y)}\right],\quad n_1,n_2\in\mathbb{N}_0,\quad n_1+n_2>0,
\end{equation}
where $F_1(x,y)=0$ and $F_2(x,y)=0$ are irreducible invariant algebraic curves. Without loss of generality we set $n_k=0$ whenever the invariant algebraic curve $F_k(x,y)=0$ does not exist. Here $k=1$ or $k=2$.

\textit{Case 1.} Let us suppose that there exists only one irreducible invariant algebraic curve $F_1(x,y)=0$ of degree $2$ with respect to $y$. Using relation \eqref{F_L2_5_1}, we find the cofactor $\lambda_1(x,y)$.
The result is
\begin{equation}\label{Lienard_degenerate_exp_inv_cof1}
\lambda_1(x,y)=-2f(x)-\left\{\left(y^{(1)}_{\infty}\right)_x+\left(y^{(2)}_{\infty}\right)_x\right\}_+.
\end{equation}
The dominant behavior of the cofactor near the point $x=\infty$ is $\lambda_1(x,y)=-f_0x^{m}+o(x^{m})$. Now let us take one of the Puiseux series, for example, the series $y^{(1)}_{\infty}(x)$. We find the asymptotic relation
\begin{equation}\label{Lienard_degenerate_exp_inv_cof1_add}
\frac{\lambda_1\left(x,y^{(1)}_{\infty}(x)\right)}{y^{(1)}_{\infty}(x)}= -\frac{2(m+1)f_0}{(\delta-f_0)x}+o\left(\frac1{x}\right),\quad x\rightarrow\infty.
\end{equation}
Since $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$, we conclude that the conditions of Theorem \ref{T:Exp_fact} are not satisfied whenever $\delta\neq0$; hence exponential invariants do not exist provided that $\delta\neq 0$. The case $\delta=0$ will be considered separately.

\textit{Case 2.} Let us suppose that the differential system under consideration has two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$. The polynomials producing the curves are of degree $1$ with respect to $y$ and have the cofactors
\begin{equation}\label{Lienard_degenerate_exp_inv_cof2}
\lambda_k(x,y)=-f(x)-\left\{\left(y^{(k)}_{\infty}\right)_x\right\}_+,\quad k=1,2.
\end{equation}
The cofactor of the invariant algebraic curve $F^{n_1}_1(x,y)F^{n_2}_2(x,y)=0$ is given by the polynomial $\lambda(x,y)=n_1\lambda_1(x,y)+n_2\lambda_2(x,y)$ and has the following dominant behavior near the point $x=\infty$:
\begin{equation}\label{Lienard_degenerate_exp_inv_cof2_add}
\lambda(x,y)=\frac12\left[n_2(\delta-f_0)-n_1(\delta+f_0)\right]x^{m}+o(x^{m}),\quad x\rightarrow \infty.
\end{equation}
Suppose that one of the numbers $n_1$ and $n_2$ is non-zero, for example, $n_1$. Further, we consider the following asymptotic relation
\begin{equation}\label{Lienard_degenerate_exp_inv_cof2_addn}
\frac{\lambda\left(x,y^{(1)}_{\infty}(x)\right)}{y^{(1)}_{\infty}(x)}=(m+1)\left(n_2-\frac{(\delta+f_0)n_1}{\delta-f_0}\right) \frac{1}{x}+o\left(\frac1{x}\right),\quad x\rightarrow\infty.
\end{equation}
Using Theorem \ref{T:Exp_fact} and the condition $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$, we see that there are no exponential invariants \eqref{Lienard_degenerate_exp_inv_explicit_gen} whenever $\delta\neq0$.

\textit{Case 3.} Let us suppose that the differential system in question has only one irreducible invariant algebraic curve: either $F_1(x,y)=0$ or $F_2(x,y)=0$. If $\delta\neq0$, then arguing as in the previous case, we prove the non-existence of exponential invariants.

\textit{Case 4.} Now let us suppose that $\delta=0$. Recall that the Puiseux series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$ merge in this case. The unique Puiseux series centered at the point $x=\infty$ that satisfies equation \eqref{Lienard_y_x} is denoted as $y_{\infty}(x)$. In what follows we use the local theory of invariants considered in Section \ref{S:Local}. Let a Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta=0$ have the irreducible invariant algebraic curve $F(x,y)=0$, where $F(x,y)=y-q(x)$. If in addition the system possesses an exponential invariant $E(x,y)=\exp\{h(x,y)/F^{k}(x,y)\}$ for some $k\in\mathbb{N}$, then it is without loss of generality to suppose that the degree of the polynomial $h(x,y)$ with respect to $y$ is at most $k-1$.
There exists a finite number of local elementary exponential invariants
\begin{equation}
\label{Lienard_degenerate_exp_inv1_loc1}
\begin{gathered}
E_j(x,y)=\exp\left[\frac{u_j(x)}{\{y-y_{\infty}(x)\}^{k_j}}\right],\quad u_j(x)\in \mathbb{C}_{\infty}\{x\},\\
k_j\in\mathbb{N},\quad j=1,\ldots, K,\quad K\in\mathbb{N}
\end{gathered}
\end{equation}
such that the exponential invariant $E(x,y)$ equals the product $\displaystyle \prod_{j=1}^KE_j(x,y)$. We assume that the following inequalities $1\leq k_1<k_2<\ldots<k_K\leq k$ are valid. Let us denote the cofactor of the local elementary exponential invariant $E_j(x,y)$ by $\varrho_j(x,y)\in\mathbb{C}_{\infty}\{x\}[y]$. We see that the following expression $\displaystyle \sum_{j=1}^K\varrho_j(x,y)$ equals the cofactor $\varrho(x,y)$ of the invariant $E(x,y)$. Note that the cofactor $\varrho(x,y)$ is an element of the ring~$\mathbb{C}[x,y]$. Substituting the explicit representation of $E_j(x,y)$ into the partial differential equation $\mathcal{X}E_j(x,y)=\varrho_j(x,y)E_j(x,y)$, we find
\begin{equation}
\label{Lienard_degenerate_exp_inv1_2n}
\begin{gathered}
yu_{j,x}=k_j\lambda_j(x,y)u_j(x)+\varrho_j(x,y)\left\{y-y_{\infty}(x)\right\}^{k_j}.
\end{gathered}
\end{equation}
In this expression $\lambda_j(x,y)\in\mathbb{C}_{\infty}\{x\}[y]$ is the cofactor of the local elementary invariant $y-y_{\infty}(x)$. Using Theorem \ref{T:coff_local2}, we obtain the cofactor $\lambda_j(x,y)$. The result is
\begin{equation}
\label{Lienard_degenerate_exp_inv1_cof_local}
\begin{gathered}
\lambda_j(x,y)=-f(x)-\left\{y_{\infty}(x)\right\}_x.
\end{gathered}
\end{equation}
Analyzing expression \eqref{Lienard_degenerate_exp_inv1_2n}, we see that $K=1$, $k_1=k=1$, $\varrho_1(x,y)=u_{1,x}(x)$, and the series $u_1(x)$ satisfies the following ordinary differential equation
\begin{equation}
\label{Lienard_degenerate_exp_inv1_3n}
\begin{gathered}
y_{\infty}(x)u_{1,x}(x)+\left(f(x)+\left\{y_{\infty}(x)\right\}_x\right)u_1(x)=0.
\end{gathered}
\end{equation}
Since there exists the invariant algebraic curve $y-q(x)=0$, we conclude that the Puiseux series $y_{\infty}(x)$ is in fact the polynomial $q(x)$: $y_{\infty}(x)=q(x)$. Consequently, the exponential invariant $E(x,y)=\exp\{h(x,y)/F(x,y)\}$ exists if and only if $h(x,y)=u_1(x)$ and $u_1(x)$ is a polynomial. Omitting the index, we get expressions \eqref{Lienard_degenerate_Int2} and \eqref{Lienard_degenerate_exp_inv_explicit}.
\end{proof}

\textit{Remark.} Balancing the highest-degree terms in equation \eqref{Lienard_degenerate_Int2} shows that the polynomial $u(x)$ is of degree $m+1$.

Our next step is to derive the necessary and sufficient conditions of Darboux integrability for non-resonant Li\'{e}nard differential systems from family ($B$). The case $\delta=0$ will be considered separately.

\begin{theorem}\label{T:Lienard_degenerate_Darboux1}
A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ is Darboux integrable if and only if the system is of the form
\begin{equation}
\label{Lienard_degenerate_Darboux_explicit}
x_t=y,\quad y_t=\frac{2f_0}{f_0-\delta}q_{1,\,x}y-\frac{f_0+\delta}{f_0-\delta}q_{1,\,x}q_1,
\end{equation}
where $q_1(x)$ is a polynomial of degree $m+1$ with the highest-degree coefficient $(\delta-f_0)/(2\{m+1\})$.
A related Darboux first integral reads as
\begin{equation}\label{Lienard_degenerate_Darboux_FI1}
I(x,y)=\left[y-q_1(x)\right]^{\delta-f_0}\left[y-\frac{(f_0+\delta)}{(f_0-\delta)}q_1(x)\right]^{\delta+f_0}.
\end{equation}
\end{theorem}

\begin{proof}
It is straightforward to verify that expression \eqref{Lienard_degenerate_Darboux_FI1} gives a Darboux first integral whenever all other conditions of the theorem are valid. Let us prove the converse statement. It follows from Lemmas \ref{L:Lienard_exp_inv1} and \ref{L:Lienard_exp_degenerate} that Darboux first integrals of Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ do not have exponential factors. By Theorem \ref{T:Lienard_degenerate} the systems under consideration have at most two distinct irreducible invariant algebraic curves simultaneously.

First, we consider a Li\'{e}nard differential system \eqref{Lienard_gen} that satisfies the conditions $\deg g=2\deg f+1$, $\delta/f_0\not\in\mathbb{Q}$ and possesses only one irreducible invariant algebraic curve. If there exists a Darboux first integral, then it can be chosen as a bivariate polynomial $I(x,y)=F(x,y)$ producing the invariant algebraic curve $F(x,y)=0$. Using Theorem \ref{T:Lienard_degenerate}, we see that the generating polynomial $F(x,y)$ is of degree at most $2$ with respect to $y$. The Darboux first integral $I(x,y)=F(x,y)$ exists if and only if the cofactor $\lambda(x,y)$ of the invariant algebraic curve $F(x,y)=0$ is identically zero. We need to consider three possibilities. Let us write down the related cofactors and their dominant behavior near the point $x=\infty$. With the help of relations \eqref{Lienard_degenerate_Puiseux_series_dominant} and \eqref{Lienard_degenerate_Puiseux_series_Polynomial_parts}, we get
\begin{equation}
\label{Lienard_degenerate_Darboux_FI2}
\begin{gathered}
N=1:\, \lambda(x,y)=-f(x)-q_{1,x},\, \lambda(x,y)=-\frac12(\delta+f_0)x^m+o(x^m),\quad x\rightarrow\infty;\hfill\\
N=1:\, \lambda(x,y)=-f(x)-q_{2,x},\, \lambda(x,y)=\frac12(\delta-f_0)x^m+o(x^m),\quad x\rightarrow\infty;\hfill\\
N=2:\, \lambda(x,y)=-2f(x)-q_{1,x}-q_{2,x},\, \lambda(x,y)=-f_0x^m+o(x^m),\quad x\rightarrow\infty.\hfill\\
\end{gathered}
\end{equation}
In these expressions $N$ denotes the degree of the polynomial $F(x,y)$ with respect to $y$. We conclude that the cofactors are not identically zero. Thus, there are no Darboux first integrals.

Now, we consider a Li\'{e}nard differential system \eqref{Lienard_gen} that satisfies the conditions $\deg g=2\deg f+1$, $\delta/f_0\not\in\mathbb{Q}$ and possesses two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$. By Theorem \ref{T:Lienard_degenerate} the polynomials $F_1(x,y)$ and $F_2(x,y)$ are of the form $F_1(x,y)=y-q_1(x)$ and $F_2(x,y)=y-q_2(x)$. If there exists a Darboux first integral, then it can be represented in the form
\begin{equation}
\label{Lienard_degenerate_Darboux_FI3}
\begin{gathered}
I(x,y)=F_1^{d_1}(x,y)F_2^{d_2}(x,y),\quad d_1,d_2\in\mathbb{C},\quad |d_1|+|d_2|>0.
\end{gathered}
\end{equation}
This first integral exists if and only if the following condition $d_1\lambda_1(x,y)+d_2\lambda_2(x,y)=0$ is satisfied.
Finding the dominant behavior near the point $x=\infty$ of the cofactors
\begin{equation}
\label{Lienard_degenerate_Darboux_FI3_dom}
\begin{gathered}
\lambda_1(x,y)=-\frac12(\delta+f_0)x^m+o(x^m),\quad \lambda_2(x,y)=\frac12(\delta-f_0)x^m+o(x^m),
\end{gathered}
\end{equation}
we obtain $d_1=\delta-f_0$, $d_2=\delta+f_0$ and the expression
\begin{equation}\label{Lienard_degenerate_Darboux_condition1}
2\delta f(x)+(\delta-f_0)q_{1,x}(x)+(\delta+f_0)q_{2,x}(x)=0.
\end{equation}
Recall that one of the parameters $d_1$ or $d_2$ can be chosen arbitrarily. By Theorem \ref{T:Lienard_degenerate} equation~\eqref{Lienard_y_x} related to the Li\'{e}nard differential system under consideration possesses two distinct polynomial solutions $y(x)=q_1(x)$ and $y(x)=q_2(x)$. Thus, we obtain the following relations
\begin{equation}
\label{Lienard_degenerate_Darboux_explicit_add2}
f(x)+q_{1,x}(x)=-\frac{g(x)}{q_{1}(x)},\quad f(x)+q_{2,x}(x)=-\frac{g(x)}{q_{2}(x)}.
\end{equation}
Substituting these relations and the values of $d_1$ and $d_2$ into the condition on the cofactors $d_1[f(x)+q_{1,x}(x)]+d_2[f(x)+q_{2,x}(x)]=0$ yields the expression $q_2(x)=-d_2q_1(x)/d_1$. Finally, we use relations \eqref{Lienard_degenerate_Darboux_condition1} and \eqref{Lienard_degenerate_Darboux_explicit_add2} in order to derive the explicit representations of the polynomials $f(x)$ and $g(x)$.
\end{proof}

\textit{Corollary.} The family of first-order ordinary differential equations
\begin{equation}
\label{Lienard_degenerate_Darboux_explicit_add}
yy_x-\frac{2f_0}{f_0-\delta}q_{1,\,x}y+\frac{f_0+\delta}{f_0-\delta}q_{1,\,x}q_1=0
\end{equation}
associated with systems \eqref{Lienard_degenerate_Darboux_explicit} has two distinct polynomial solutions of the form $y(x)=q_1(x)$ and $y(x)=(f_0+\delta)q_1(x)/(f_0-\delta)$.

\textit{Remark 1.} Suppose that the assumptions of Theorem \ref{T:Lienard_degenerate_Darboux1} hold with the exception of the condition $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$. Then function~\eqref{Lienard_degenerate_Darboux_FI1} is still a Darboux first integral of the related Li\'{e}nard differential system. In addition, the corollary to Theorem \ref{T:Lienard_degenerate_Darboux1} is also valid. However, there may exist other resonant Li\'{e}nard differential systems from family ($B$) with Darboux first integrals.

\textit{Remark 2.} Expression \eqref{Lienard_degenerate_Darboux_explicit} provides a set of systems with rational first integrals~\eqref{Lienard_degenerate_Darboux_FI1} provided that $f_0/\delta$ is a rational number.

Next, let us study the Darboux integrability in the case $\delta=0$.

\begin{theorem}\label{T:Lienard_degenerate_Darboux2}
A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta=0$ is Darboux integrable if and only if the system can be represented in the form
\begin{equation}\label{Lienard_degenerate_Darboux_del0_f_g}
x_t=y,\quad y_t=2q_x(x)y-q(x)q_x(x),
\end{equation}
where $q(x)$ is a polynomial of degree $m+1$. A related Darboux first integral reads~as
\begin{equation}\label{Lienard_degenerate_Darboux_FI1_del0}
I(x,y)=\left[y-q(x)\right]\exp\left[-\frac{q(x)}{y-q(x)}\right].
\end{equation}
\end{theorem}

\begin{proof}
By direct computations we verify that expression \eqref{Lienard_degenerate_Darboux_FI1_del0} gives a Darboux first integral of a Li\'{e}nard differential system \eqref{Lienard_gen} with the polynomials $f(x)$ and $g(x)$ satisfying relations~\eqref{Lienard_degenerate_Darboux_del0_f_g} and $\delta=0$.
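A minimal sketch of this direct computation, with $q(x)$ kept as an arbitrary function:
\begin{verbatim}
# Sketch: I = (y - q)*exp(-q/(y - q)) is constant along the flow of
# x_t = y, y_t = 2*q'(x)*y - q(x)*q'(x); q(x) is kept arbitrary.
import sympy as sp

x, y = sp.symbols('x y')
q = sp.Function('q')(x)
qx = sp.diff(q, x)

I = (y - q)*sp.exp(-q/(y - q))
dI = y*sp.diff(I, x) + (2*qx*y - q*qx)*sp.diff(I, y)
print(sp.simplify(dI))                 # prints 0
\end{verbatim}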
Thus, we have established the sufficiency of the conditions presented in the theorem. Let us prove their necessity. Suppose that a Li\'{e}nard differential system from family ($B$) with $\delta=0$ possesses a Darboux first integral. A Darboux integrable differential system~\eqref{DS} has at least one invariant algebraic curve. It follows from Theorem \ref{T:Lienard_degenerate} that the Li\'{e}nard differential system in question possesses at most one irreducible invariant algebraic curve. This curve is of the form $y-q(x)=0$ and has the cofactor $\lambda(x,y)=-f(x)-q_x(x)$. Thus, we have established the existence of the invariant algebraic curve $y-q(x)=0$. According to Lemma~\ref{L:Lienard_exp_degenerate}, the Li\'{e}nard differential system under consideration may have exponential invariants associated with the invariant algebraic curve $y-q(x)=0$. These invariants take the form \eqref{Lienard_degenerate_exp_inv_explicit} and possess the cofactor $\varrho(x,y)=u_x(x)$, where the polynomial $u(x)$ satisfies equation \eqref{Lienard_degenerate_Int2}. Using Lemma \ref{L:Lienard_exp_inv1}, we conclude that exponential invariants with a polynomial argument cannot enter explicit expressions of Darboux first integrals. Consequently, a Darboux first integral can be represented in the form
\begin{equation}\label{Lienard_degenerate_Darboux_FI1_del0_test}
I(x,y)=\left[y-q(x)\right]^{d}\exp\left[\frac{u(x)}{y-q(x)}\right],\quad d\in\mathbb{C},
\end{equation}
where we suppose that $u(x)\equiv0$ whenever exponential invariants \eqref{Lienard_degenerate_exp_inv_explicit} do not exist. If $d=0$, then the related Li\'{e}nard differential system has an invariant algebraic curve $u(x)=0$ independent of $y$. This is impossible due to Theorem \ref{T:Lienard_degenerate}. Thus, it is without loss of generality to set $d=1$. The cofactors of all the invariants identically satisfy the relation $\lambda(x,y)+\varrho(x,y)=0$ provided that first integral \eqref{Lienard_degenerate_Darboux_FI1_del0_test} exists. As a result, we get the expression $f(x)=u_x(x)-q_x(x)$. Substituting this expression into equation \eqref{Lienard_degenerate_Int2} yields $u(x)=-q(x)$. This relation proves the existence of exponential invariants \eqref{Lienard_degenerate_exp_inv_explicit}. Since $y=q(x)$ is a polynomial solution of equation \eqref{Lienard_y_x} and $f(x)=-2q_x(x)$, we find the polynomial $g(x)$. The result is $g(x)=q(x)q_x(x)$.
\end{proof}

Interestingly, Li\'{e}nard differential systems \eqref{Lienard_degenerate_Darboux_explicit} and \eqref{Lienard_degenerate_Darboux_del0_f_g} are those characterized by the so-called Chiellini integrability condition
\begin{equation}
\label{Lienard_Chiellini}
\frac{d}{dx}\left[\frac{g(x)}{f(x)}\right]=\alpha f(x),\quad \alpha\in\mathbb{C}\setminus\{0\}.
\end{equation}
This condition was originally introduced by A. Chiellini \cite{Chiellini01}. Chiellini integrable Li\'{e}nard differential systems can be transformed to linear systems $s_{\tau}=z$, $z_{\tau}=-z-\alpha s$ via the generalized Sundman transformation $s(\tau)=\int f(x) dx$, $z(\tau)=y$, $d\tau=f(x)dt$, see \cite{Berkovich01}. Some other properties of Chiellini integrable Li\'{e}nard differential systems are presented in~\cite{Choudhury_Lienard}. A symbolic check of the Chiellini condition for systems \eqref{Lienard_degenerate_Darboux_del0_f_g} is sketched below.

Our next step is to study the existence of non-autonomous Darboux first integrals with a time-dependent exponential factor~\eqref{FI_t_gen}.
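The symbolic check announced above is immediate: for systems \eqref{Lienard_degenerate_Darboux_del0_f_g} one has $f(x)=-2q_x(x)$ and $g(x)=q(x)q_x(x)$, and condition \eqref{Lienard_Chiellini} holds with $\alpha=1/4$. A minimal sketch:
\begin{verbatim}
# Sketch: Chiellini condition for f = -2*q'(x), g = q(x)*q'(x);
# d/dx (g/f) - f/4 vanishes identically, so alpha = 1/4.
import sympy as sp

x = sp.symbols('x')
q = sp.Function('q')(x)

f = -2*sp.diff(q, x)
g = q*sp.diff(q, x)
print(sp.simplify(sp.diff(g/f, x) - f/4))   # prints 0
\end{verbatim}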
\begin{lemma}\label{L:Lienard_degenerate_Darboux_time1}
A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$, $\deg f\neq0$, and $\delta/f_0\not\in\mathbb{Q}$ possesses a non-autonomous Darboux first integral \eqref{FI_t_gen} if and only if the system reads as
\begin{equation}\label{Lienard_degenerate_Darboux_system_time}
\begin{gathered}
x_t=y,\quad y_t=-\left\{f_0(x-x_0)^m+\frac{(m+2)\omega}{2(m+1)\delta}\right\}y- \frac{f_0^2-\delta^2}{4(m+1)}(x-x_0)^{2m+1}\\
-\frac{f_0\omega}{2(m+1)\delta}(x-x_0)^{m+1}-\frac{\omega^2}{4(m+1)\delta^2}(x-x_0),
\end{gathered}
\end{equation}
where $x_0\in\mathbb{C}$ and $\omega\in\mathbb{C}\setminus\{0\}$. A related non-autonomous Darboux first integral takes the form
\begin{equation}\label{Lienard_degenerate_Darboux_FI1_time}
\begin{gathered}
I(x,y,t)=\left[y+\frac{f_0-\delta}{2(m+1)}(x-x_0)^{m+1}+\frac{\omega}{2(m+1)\delta}(x-x_0)\right]^{\delta-f_0}\\
\times\left[y+\frac{\delta+f_0}{2(m+1)}(x-x_0)^{m+1}+\frac{\omega}{2(m+1)\delta}(x-x_0)\right]^{\delta+f_0}\exp(\omega t).
\end{gathered}
\end{equation}
\end{lemma}

\begin{proof}
It is straightforward to verify that expression \eqref{Lienard_degenerate_Darboux_FI1_time} is a non-autonomous Darboux first integral of systems \eqref{Lienard_degenerate_Darboux_system_time}. We only need to prove the converse statement. Supposing that a non-resonant Li\'{e}nard differential system from family ($B$) possesses a non-autonomous Darboux first integral \eqref{FI_t_gen}, we use the arguments given in the proof of Theorem \ref{T:Lienard_degenerate_Darboux1} to represent this first integral as
\begin{equation}\label{Lienard_degenerate_Darboux_FI1_time_a}
I(x,y,t)=\left[y-q_1(x)\right]^{\delta-f_0}\left[y-q_2(x)\right]^{\delta+f_0}\exp(\omega t) ,\quad \omega\neq0,
\end{equation}
where $q_1(x)$ and $q_2(x)$ are distinct polynomial solutions of equation \eqref{Lienard_y_x}. In addition, we get the following condition
\begin{equation}\label{Lienard_degenerate_Darboux_condition1_time}
2\delta f(x)+(\delta-f_0)q_{1,\,x}(x)+(\delta+f_0)q_{2,\,x}(x)-\omega=0.
\end{equation}
Further, we express the polynomial $f(x)$ from this condition. Substituting $y(x)=q_1(x)$ into equation \eqref{Lienard_y_x}, we obtain the polynomial $g(x)$. Let us introduce the polynomial $v(x)$ according to the rule $q_2(x)-q_1(x)=v(x)$. Requiring that the function $q_2(x)=q_1(x)+v(x)$ is a solution of equation \eqref{Lienard_y_x}, we get the relation
\begin{equation}\label{Lienard_degenerate_Darboux_condition1_time_eq_v}
[(\delta-f_0)v(x)+2\delta q_1(x)]v_x(x)+\omega v(x)=0.
\end{equation}
Since $q_1(x)$ and $v(x)$ are polynomials, the ratio $v(x)/v_x(x)$ is a polynomial of the first degree, and we obtain the ordinary differential equation $\beta(x-x_0)v_x=v$, where $\beta$, $x_0\in\mathbb{C}$. Integrating this equation yields $v(x)=v_0(x-x_0)^{1/\beta}$ with $v_0\in\mathbb{C}$ being a constant of integration. It follows from expression $q_2(x)-q_1(x)=v(x)$ that $v(x)$ is a polynomial of degree $m+1$. Thus, we obtain $\beta=1/(m+1)$. As a result the polynomials $q_1(x)$ and $q_2(x)$ can be represented in the form
\begin{equation}\label{Lienard_degenerate_Darboux_condition1_time_eq_q_12}
\begin{gathered}
q_1(x)=\frac{\delta-f_0}{2(m+1)}(x-x_0)^{m+1}-\frac{\omega}{2(m+1)\delta}(x-x_0),\\
q_2(x)=-\frac{\delta+f_0}{2(m+1)}(x-x_0)^{m+1}-\frac{\omega}{2(m+1)\delta}(x-x_0).
\end{gathered}
\end{equation}
Finally, we find the polynomials $f(x)$ and $g(x)$ from condition \eqref{Lienard_degenerate_Darboux_condition1_time} and equation \eqref{Lienard_y_x} recalling the fact that, for example, $y(x)=q_1(x)$ is a solution of the latter. The uniqueness of the independent non-autonomous Darboux first integral \eqref{Lienard_degenerate_Darboux_FI1_time} follows from the uniqueness of the polynomials $q_1(x)$, $q_2(x)$ and the dominant behavior of the cofactors given by expression~\eqref{Lienard_degenerate_Darboux_FI3_dom}.
\end{proof}

\textit{Remark.} Suppose that the assumptions of Lemma \ref{L:Lienard_degenerate_Darboux_time1} hold with the exception of the condition $\delta/f_0\not\in\mathbb{Q}\setminus\{0\}$. Then function~\eqref{Lienard_degenerate_Darboux_FI1_time} is still a non-autonomous Darboux first integral of the related Li\'{e}nard differential system. However, there may exist other resonant Li\'{e}nard differential systems from family ($B$) with non-autonomous Darboux first integrals of the form~\eqref{FI_t_gen}.

Let us note that if $\delta=\pm mf_0/(m+2)$, then any system \eqref{Lienard_degenerate_Darboux_system_time} has not only a non-autonomous Darboux first integral \eqref{Lienard_degenerate_Darboux_FI1_time}, but also an independent Darboux first integral~\eqref{Lienard_degenerate_Darboux_FI1}, where $q_1(x)$ is given by the relation
\begin{equation}\label{Lienard_degenerate_Darboux_time_GS_q1}
\begin{gathered}
q_1(x)=-\frac{f_0 x^{m+1}}{(m+1)(m+2)}-\frac{(m+2)\omega x}{2(m+1)f_0 m}.
\end{gathered}
\end{equation}
Let $I_1(x,y)$ be Darboux first integral~\eqref{Lienard_degenerate_Darboux_FI1} and $I_2(x,y,t)$ be non-autonomous Darboux first integral~\eqref{Lienard_degenerate_Darboux_FI1_time}. Eliminating $y$ from the relations $I_1(x,y)=C_1$ and $I_2(x,y,t)=C_2$, we can find the general solution of a system \eqref{Lienard_degenerate_Darboux_system_time} under the condition $\delta=\pm mf_0/(m+2)$. Note that such a system is resonant near $x=\infty$. The general solution in the case $m=2$ previously appeared in \cite{Ruiz01}, see also \cite{DS2021}.

\begin{lemma}\label{L:Lienard_degenerate_Darboux_time2}
A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$, $\deg f\neq0$, and $\delta=0$ possesses a non-autonomous Darboux first integral~\eqref{FI_t_gen} if and only if the system is of the form
\begin{equation}\label{Lienard_degenerate_Darboux_del0_f_g_time}
\begin{gathered}
x_t=y,\quad y_t=-\left\{f_0(x-x_0)^m+\frac{(m+2)\omega}{m+1}\right\}y -\frac{(x-x_0)}{4(m+1)}\left\{f_0(x-x_0)^{m}+2\omega\right\}^2,
\end{gathered}
\end{equation}
where $x_0\in\mathbb{C}$ and $\omega\in\mathbb{C}\setminus\{0\}$. A related non-autonomous Darboux first integral reads as
\begin{equation}\label{Lienard_degenerate_Darboux_FI1_del0_time}
\begin{gathered}
I(x,y,t)=\exp\left[\frac{f_0(x-x_0)^{m+1}}{2(m+1)y+f_0(x-x_0)^{m+1}+2\omega(x-x_0)}\right]\\
\times\left[y+\frac{f_0(x-x_0)^{m+1}}{2(m+1)}+\frac{\omega(x-x_0)}{m+1}\right]\exp(\omega t).
\end{gathered}
\end{equation}
\end{lemma}

\begin{proof}
By direct computations we verify that expression \eqref{Lienard_degenerate_Darboux_FI1_del0_time} is a time-dependent first integral of system \eqref{Lienard_degenerate_Darboux_del0_f_g_time}; a sketch of this computation is given below. Let us prove the converse statement.
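A minimal sketch of the direct computation just mentioned, with the sample values $m=1$, $x_0=0$ and with $f_0$, $\omega$ symbolic:
\begin{verbatim}
# Sketch: direct check of the time-dependent first integral for delta = 0;
# sample values m = 1, x_0 = 0; f0 and omega symbolic.
import sympy as sp

x, y, t = sp.symbols('x y t')
f0, w = sp.symbols('f_0 omega')
m = 1

rhs = -(f0*x**m + (m + 2)*w/(m + 1))*y \
      - x*(f0*x**m + 2*w)**2/(4*(m + 1))          # y_t of the system
I = sp.exp(f0*x**(m + 1)/(2*(m + 1)*y + f0*x**(m + 1) + 2*w*x)) \
    *(y + f0*x**(m + 1)/(2*(m + 1)) + w*x/(m + 1))*sp.exp(w*t)

dI = sp.diff(I, t) + y*sp.diff(I, x) + rhs*sp.diff(I, y)
print(sp.simplify(dI))                             # prints 0
\end{verbatim}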
We suppose that a Li\'{e}nard differential system from family ($B$) satisfies the restriction $\delta=0$ and possesses a non-autonomous Darboux first integral~\eqref{FI_t_gen}. Repeating the arguments used in the proof of Theorem \ref{T:Lienard_degenerate_Darboux2}, we represent such a first integral in the form
\begin{equation}\label{Lienard_degenerate_Darboux_FI1_del0_time_new} I(x,y,t)=\left[y-q(x)\right]^d\exp\left[\frac{u(x)}{y-q(x)}\right]\exp(\omega t),\quad d\in\mathbb{C},\quad\omega\in\mathbb{C}\setminus\{0\}. \end{equation}
In addition, we note that the related system possesses the invariant algebraic curve $y-q(x)=0$ with the cofactor $\lambda(x,y)=-f(x)-q_x(x)$. By Lemma \ref{L:Lienard_exp_degenerate} the cofactor $\varrho(x,y)$ of the exponential invariant $E(x,y)=\exp[u(x)/(y-q(x))]$ reads as $\varrho(x,y)=u_x(x)$. Condition~\eqref{JLM_gen_cond} relating these cofactors and the parameter $\omega$ takes the form
\begin{equation}\label{Lienard_degenerate_cond_cof_time1} d[f(x)+q_x(x)]-u_x(x)-\omega=0,\quad d\in\mathbb{C}. \end{equation}
Let us begin with the case $d=0$. We conclude from condition \eqref{Lienard_degenerate_cond_cof_time1} that $u(x)$ is a first-degree polynomial. Using the remark to Lemma \ref{L:Lienard_exp_degenerate}, we find the value of $m$. The result is $m=0$. As mentioned at the beginning of this section, we do not consider Li\'{e}nard differential systems with the restriction $m=0$ ($\deg f=0$). We turn to the case $d\neq 0$. Without loss of generality, we set $d=1$. Now let us suppose that the restriction $u(x)\equiv0$ is valid; this covers, in particular, the situation in which the related system has no exponential invariants. Finding the dominant behavior of the cofactor $\lambda(x,y)=-f(x)-q_x(x)$ near the point $x=\infty$, we obtain
\begin{equation}\label{Lienard_degenerate_cof_dom_time1} \lambda(x,y)=-\frac{f_0}{2}x^m+o(x^m),\quad x\rightarrow \infty. \end{equation}
Recalling the inequality $m>0$, we see that condition \eqref{Lienard_degenerate_cond_cof_time1} is not satisfied. Thus, we conclude that the exponential invariant exists. Further, we eliminate the polynomial $f(x)$ from relations \eqref{Lienard_degenerate_Int2} and \eqref{Lienard_degenerate_cond_cof_time1}. As a result we get the following expression
\begin{equation}\label{Lienard_degenerate_DI_q} q(x)=-u(x)-\omega\frac{u(x)}{u_x(x)}. \end{equation}
Consequently, the ratio $u(x)/u_x(x)$ is a polynomial. It is straightforward to see that this polynomial is of the first degree and can be represented as $\beta(x-x_0)$, where $\beta$, $x_0\in\mathbb{C}$. Integrating the ordinary differential equation $\{\beta(x-x_0)\}u_x(x)=u(x)$, we obtain $u(x)=u_0(x-x_0)^{1/\beta}$, where $u_0\in\mathbb{C}$ is a constant of integration. We recall that $u(x)$ is a polynomial of degree $m+1$. As a result we get $\beta=1/(m+1)$. Substituting the relation $u(x)=u_0(x-x_0)^{m+1}$ into expressions \eqref{Lienard_degenerate_DI_q} and \eqref{Lienard_degenerate_cond_cof_time1}, we find the polynomials $q(x)$ and $f(x)$. In addition, we choose the parametrization $u_0=f_0/(2\{m+1\})$. We find the polynomial $g(x)$ recalling the fact that $y(x)=q(x)$ is a polynomial solution of the related equation \eqref{Lienard_y_x}.
Thus, we see that if the Li\'{e}nard differential system in question has a non-autonomous Darboux first integral~\eqref{FI_t_gen}, then there exist the invariant algebraic curve $y-q(x)=0$ and the exponential invariant $E(x,y)=\exp[\alpha u(x)/(y-q(x))]$, where $\alpha\in\mathbb{C}$, the polynomial $q(x)$ is given by expression \eqref{Lienard_degenerate_DI_q}, and the polynomial $u(x)$ is $u(x)=f_0(x-x_0)^{m+1}/(2\{m+1\})$. \end{proof}
Below we shall prove that Li\'{e}nard differential systems \eqref{Lienard_degenerate_Darboux_system_time} and \eqref{Lienard_degenerate_Darboux_del0_f_g_time} are Liouvillian integrable. Let us study the Liouvillian integrability of Li\'{e}nard differential systems from family ($B$). We begin with a particular case characterized by Darboux integrating factors of a special form. Note that in Theorem \ref{T:Lienard_degenerate_Liouville_polynomial} we use novel designations for polynomial solutions of equation \eqref{Lienard_y_x} related to Li\'{e}nard differential systems. We need novel designations because the polynomials $p_1(x)$ and $p_2(x)$ may have coinciding dominant terms. After considering this special case, we shall turn to non-resonant systems.
\begin{theorem}\label{T:Lienard_degenerate_Liouville_polynomial} A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($B$) possesses the Darboux integrating factor
\begin{equation}\label{Lienard_degenerate_Liouville_IF_polynomial} M(x,y)=[y-p_1(x)]^{d_1}[y-p_2(x)]^{d_2}, \quad d_1,d_2\in\mathbb{C}\setminus\{0\}, \end{equation}
where $p_1(x)$ and $p_2(x)$ are distinct polynomials, if and only if one of the following assertions is valid.
\begin{enumerate}
\item The system is of the form \eqref{Lienard_degenerate_Darboux_explicit} and the polynomials $p_1(x)$ and $p_2(x)$ are linearly dependent: $p_2(x)=(f_0+\delta)/(f_0-\delta)p_1(x)$. In this case the parameters $d_1$ and $d_2$ can be chosen as $d_1=d_2=-1$ and the following relations $p_1(x)=q_1(x)$, $p_2(x)=q_2(x)$ are valid. In fact, there exists a family of Darboux integrating factors~\eqref{Lienard_degenerate_Liouville_IF_polynomial} that are products of the integrating factor $M_0(x,y)=[y-q_1(x)]^{-1}[y-q_2(x)]^{-1}$ and the Darboux first integrals $I^{\varkappa}(x,y)$, where the function $I(x,y)$ is given by expression \eqref{Lienard_degenerate_Darboux_FI1} and $\varkappa\in\mathbb{C}$.
\item The system reads as
\begin{equation}\label{Lienard_degenerate_Liouville_system_polynomial} \begin{gathered} x_t=y,\quad y_t=\left[\beta(l+k)u^{k-1}+\frac{\{(2d_1+1)l+k\}l}{k-l}u^{l-1}\right]u_xy\\ -\left[l\beta^2u^{2k-1}+\frac{\{(2d_1+1)l+k\}l\beta}{k-l}u^{k+l-1}+\frac{(ld_1+k)(d_1+1)l^2}{(k-l)^2}u^{2l-1}\right]u_x, \end{gathered} \end{equation}
where $k$ and $l$ are relatively prime natural numbers, not both equal to unity, $u(x)$ is a polynomial of degree $(m+1)/\max\{k,l\}$ and $\beta\in\mathbb{C}\setminus\{0\}$. The polynomials $p_1(x)$ and $p_2(x)$ can be represented as
\begin{equation}\label{Lienard_degenerate_Liouville_system_polynomial_q} \begin{gathered} p_1(x)=\beta u^k(x)+\frac{(d_1+1)l}{k-l}u^l(x),\quad p_2(x)=\beta u^k(x)+\frac{(ld_1+k)}{k-l}u^l(x) \end{gathered} \end{equation}
and the parameter $d_2$ is given by the relation $d_2=-(d_1+1+k/l)$.
\end{enumerate}
\end{theorem}
\begin{proof} Expression \eqref{Lienard_degenerate_Liouville_IF_polynomial} gives a Darboux integrating factor of a Li\'{e}nard differential system if and only if the system possesses the invariant algebraic curves $y-p_1(x)=0$ and $y-p_2(x)=0$ such that the following condition $d_1\lambda_1(x,y)+d_2\lambda_2(x,y)-f(x)=0$ is identically satisfied. It is straightforward to find the cofactor $\lambda_j(x,y)$ of the invariant algebraic curve $y-p_j(x)=0$. The result is $\lambda_j(x,y)=-f(x)-p_{j,\,x}(x)$, $j=1$, $2$. Thus, we arrive at the condition
\begin{equation}\label{Lienard_degenerate_Liouville_polynomial_cond1} \begin{gathered} (d_1+d_2+1)f(x)+d_1p_{1,\,x}(x)+d_2p_{2,\,x}(x)=0. \end{gathered} \end{equation}
If the following restriction $d_2=-1-d_1$ is valid, then integrating equation \eqref{Lienard_degenerate_Liouville_polynomial_cond1} with respect to the polynomial $p_1(x)$, we obtain $p_1(x)=(d_1+1)p_2(x)/d_1+\beta$, where $\beta\in\mathbb{C}$ is a constant of integration. Recalling the fact that $y=p_1(x)$ and $y=p_2(x)$ are polynomial solutions of equation \eqref{Lienard_y_x}, we find the polynomials $f(x)$ and $g(x)$. The polynomial $f(x)$ can be represented in the form
\begin{equation}\label{Lienard_degenerate_Liouville_polynomial_cond1_fg} \begin{gathered} f(x)=-\frac{(2d_1+1)p_2(x)+d_1(d_1+1)\beta}{d_1(p_2(x)+d_1\beta)}p_{2,\,x}(x). \end{gathered} \end{equation}
Analyzing this expression, we conclude that the function on the right-hand side is not a polynomial whenever $p_2(x)$ is non-constant. Hence the case $d_2=-1-d_1$ is impossible. Let us consider the case $d_2\neq-1-d_1$. We find the polynomials $f(x)$ and $g(x)$ from condition \eqref{Lienard_degenerate_Liouville_polynomial_cond1} and equation \eqref{Lienard_y_x}, where we set $y(x)=p_1(x)$. Substituting the resulting expressions into equation \eqref{Lienard_y_x} and recalling the fact that $y(x)=p_2(x)$ is a solution of the latter, we obtain the equation
\begin{equation}\label{Lienard_degenerate_Liouville_polynomial_cond1_q1_2} \begin{gathered} \{(d_2+1)p_1(x)+d_1p_2(x)\}p_{1,\,x}(x)-\{d_2p_1(x)+(d_1+1)p_2(x)\}p_{2,\,x}(x)=0. \end{gathered} \end{equation}
Introducing the polynomial $v(x)$ according to the rule $v(x)=p_2(x)-p_1(x)$, we substitute the relation $p_2(x)=p_1(x)+v(x)$ into equation \eqref{Lienard_degenerate_Liouville_polynomial_cond1_q1_2}. Integrating the result with respect to the polynomial $p_1(x)$, we obtain
\begin{equation}\label{Lienard_degenerate_Liouville_polynomial_cond1_q1} \begin{gathered} d_2=-2-d_1:\quad p_1(x)=-(d_1+1)v(x)\ln v(x)+\beta v(x);\hfill\\ d_2\neq-2-d_1:\quad p_1(x)=\beta v^{-d_2-d_1-1}(x)-\frac{d_1+1}{d_2+d_1+2}v(x), \end{gathered} \end{equation}
where $\beta\in\mathbb{C}$ is a constant of integration. Analyzing the first possibility, we need to set $d_1=-1$. As a result, we get Darboux integrable family \eqref{Lienard_degenerate_Darboux_explicit} of Li\'{e}nard differential systems. The parameter $\beta$ can be derived with the help of the dominant behavior of the polynomials $p_1(x)$ and $p_2(x)$, which now coincide with $q_1(x)$ and $q_2(x)$, respectively. Recalling the fact that the product of an integrating factor and a first integral is again an integrating factor, we obtain the family of integrating factors $M_0(x,y)I^{\varkappa}(x,y)$, where $M_0(x,y)=[y-q_1(x)]^{-1}[y-q_2(x)]^{-1}$, $\varkappa\in\mathbb{C}$, and the Darboux first integral $I(x,y)$ is given by expression \eqref{Lienard_degenerate_Darboux_FI1}. Now we turn to the case $d_2\neq-2-d_1$.
If $\beta=0$, then we get the equality $p_2(x)=-(d_2+1)p_1(x)/(d_1+1)$. In addition, we obtain the following representations of the polynomials $f(x)$ and $g(x)$: $f(x)=(d_2-d_1)p_{1,\,x}(x)/(d_1+1)$ and $g(x)=-(d_2+1)p_1(x)p_{1,\,x}(x)/(d_1+1)$. Considering these expressions, we again arrive at Darboux integrable systems \eqref{Lienard_degenerate_Darboux_explicit}. Thus, in what follows we may set $\beta\neq0$. Recalling the restriction $d_2\neq-1-d_1$, we introduce relatively prime natural numbers $k$ and $l$, not both equal to unity, satisfying the condition $d_2+d_1+1=-k/l$. Analyzing expression \eqref{Lienard_degenerate_Liouville_polynomial_cond1_q1}, we conclude that there exists a polynomial $u(x)$ such that the following relation $v(x)=u^l(x)$ holds. In this way we express the polynomials $p_1(x)$ and $p_2(x)$ via the polynomial $u(x)$. The result is given in expression \eqref{Lienard_degenerate_Liouville_system_polynomial_q}. By construction the degree of the polynomial $u(x)$ equals $(m+1)/\max\{k,l\}$. \end{proof}
Using integrating factor \eqref{Lienard_degenerate_Liouville_IF_polynomial}, we find the following expression of a Liouvillian first integral
\begin{equation}\label{Lienard_degenerate_Liouvillian_FI} \begin{gathered} I(x,y)=\frac{p_2(x)B\left(\frac{y-p_1(x)}{p_2(x)-p_1(x)};1+d_1,-d_1-\frac{k}{l}\right)}{\{p_2(x)-p_1(x)\}^{\frac{k}{l}}}- \frac{B\left(\frac{y-p_1(x)}{p_2(x)-p_1(x)};1+d_1,1-d_1-\frac{k}{l}\right)}{\{p_2(x)-p_1(x)\}^{\frac{k}{l}-1}} \end{gathered} \end{equation}
of systems \eqref{Lienard_degenerate_Liouville_system_polynomial}. The polynomials $p_1(x)$ and $p_2(x)$ are given by relation \eqref{Lienard_degenerate_Liouville_system_polynomial_q}. The symbol $B(s;\alpha,\delta)$ denotes the incomplete beta function
\begin{equation}\label{Lienard_degenerate_Liouvillian_incomplete_beta} B(s;\alpha,\delta)=\int_0^s z^{\alpha-1}(1-z)^{\delta-1}dz. \end{equation}
The family of systems \eqref{Lienard_degenerate_Liouville_system_polynomial} can be transformed to the following simple form
\begin{equation} \label{Lienard_Inegrability_Bpartial_0_Sundman} \begin{gathered} s_{\tau}=z,\quad z_{\tau}=\left[\beta(l+k)s^{k-1}+\frac{\{(2d_1+1)l+k\}l}{k-l}s^{l-1}\right]z\\ -\left[l\beta^2s^{2k-1}+\frac{\{(2d_1+1)l+k\}l\beta}{k-l}s^{k+l-1}+\frac{(ld_1+k)(d_1+1)l^2}{(k-l)^2}s^{2l-1}\right] \end{gathered} \end{equation}
via the generalized Sundman transformation $s(\tau)=u(x)$, $z(\tau)=y$, $d\tau=u_x(x)dt$. Substituting $u(x)=s$, $y=z$ into \eqref{Lienard_degenerate_Liouvillian_FI} and \eqref{Lienard_degenerate_Liouville_system_polynomial_q}, we find a Liouvillian first integral for systems~\eqref{Lienard_Inegrability_Bpartial_0_Sundman}. A careful examination of expression \eqref{Lienard_degenerate_Liouville_system_polynomial_q} shows that systems \eqref{Lienard_degenerate_Liouville_system_polynomial} are resonant near infinity whenever the following inequality $k>l$ is valid. Indeed, two distinct polynomial solutions of equation \eqref{Lienard_y_x} have coinciding dominant behavior near the point $x=\infty$ only in the resonant case. Let us find the necessary and sufficient conditions of the Liouvillian integrability in the non-resonant case.
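Before stating the result, we record a quick symbolic sanity check of Theorem \ref{T:Lienard_degenerate_Liouville_polynomial}. The following minimal \texttt{sympy} sketch is our illustration; the sample choices $k=2$, $l=3$, $u(x)=x$ are ours and are not part of the theorem. It verifies the divergence condition $\partial_x(My)+\partial_y(M\{-f(x)y-g(x)\})=0$ for integrating factor \eqref{Lienard_degenerate_Liouville_IF_polynomial} of systems \eqref{Lienard_degenerate_Liouville_system_polynomial}.
\begin{verbatim}
# Sketch (sample values k = 2, l = 3, u(x) = x): check that M(x, y)
# from (Lienard_degenerate_Liouville_IF_polynomial) is an integrating
# factor of (Lienard_degenerate_Liouville_system_polynomial).
import sympy as sp

x, y = sp.symbols('x y')
b, d1 = sp.symbols('beta d_1', nonzero=True)
k, l = 2, 3                      # relatively prime sample values
u = x                            # sample polynomial u(x)
d2 = -(d1 + 1 + sp.Rational(k, l))

p1 = b*u**k + (d1 + 1)*l*u**l/(k - l)
p2 = b*u**k + (l*d1 + k)*u**l/(k - l)

f = -(b*(l + k)*u**(k - 1)
      + ((2*d1 + 1)*l + k)*l*u**(l - 1)/(k - l))*sp.diff(u, x)
g = (l*b**2*u**(2*k - 1)
     + ((2*d1 + 1)*l + k)*l*b*u**(k + l - 1)/(k - l)
     + (l*d1 + k)*(d1 + 1)*l**2*u**(2*l - 1)/(k - l)**2)*sp.diff(u, x)

M = (y - p1)**d1*(y - p2)**d2
# divergence condition: d/dx(M*y) + d/dy(M*(-f*y - g)) = 0
cond = y*sp.diff(M, x) - (f*y + g)*sp.diff(M, y) - f*M
print(sp.simplify(cond/M))       # expected output: 0
\end{verbatim}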
\begin{theorem}\label{T:Lienard_degenerate_Liouville1} A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ is Liouvillian integrable if and only if the system is either Darboux integrable and reads as \eqref{Lienard_degenerate_Darboux_explicit} or takes the form \eqref{Lienard_degenerate_Liouville_system_polynomial}, where $k<l$ and the following normalization
\begin{equation}\label{Lienard_degenerate_Liouvillian_normalization} d_1=\frac{(k-l)f_0}{2l\delta }-\frac{l+k}{2l},\quad u_0=\left(-\frac{\delta}{m+1}\right)^{\frac1{l}} \end{equation}
is introduced. By $u_0$ we denote the leading coefficient of the polynomial $u(x)$. In the case of systems \eqref{Lienard_degenerate_Liouville_system_polynomial} the Darboux integrating factor is the following
\begin{equation}\label{Lienard_degenerate_Darboux_IF} M(x,y)=\left\{y-q_1(x)\right\}^{\frac{(k-l)f_0}{2l\delta }-\frac{l+k}{2l}}\left\{y-q_2(x) \right\}^{\frac{(l-k)f_0}{2l\delta }-\frac{l+k}{2l}} \end{equation}
with the polynomials $q_j(x)\equiv p_j(x)$, $j=1$, $2$ given by expression \eqref{Lienard_degenerate_Liouville_system_polynomial_q}.
\end{theorem}
\begin{proof} By direct computations we verify that systems \eqref{Lienard_degenerate_Darboux_explicit} and \eqref{Lienard_degenerate_Liouville_system_polynomial} are Liouvillian integrable provided that all other conditions of the theorem are satisfied. This observation proves the sufficiency of these conditions. Let us prove their necessity. It follows from Lemmas \ref{L:Lienard_exp_inv1} and \ref{L:Lienard_exp_degenerate} that exponential invariants cannot arise in a Darboux integrating factor. Consequently, a Darboux integrating factor is constructed from generating polynomials of invariant algebraic curves. It is straightforward to see that if there are no invariant algebraic curves, then Darboux integrating factors do not exist. In view of Theorem \ref{T:Lienard_degenerate} we need to consider three distinct cases.
\textit{Case 1.} Let us suppose that a Liouvillian integrable Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ has only one irreducible invariant algebraic curve with a generating polynomial of the first degree with respect to $y$. This curve reads as $y-q_k(x)=0$, $k=1$ or $k=2$, and has the cofactor $\lambda(x,y)=-f(x)-q_{k,x}(x)$. A Darboux integrating factor can be represented in the form
\begin{equation}\label{Lienard_degenerate_Liouville_IF_1} M(x,y)=[y-q_k(x)]^{d_k},\quad d_k\in\mathbb{C}\setminus\{0\}. \end{equation}
This integrating factor exists if and only if the following condition
\begin{equation}\label{Lienard_degenerate_Liouville_cond_1} d_k\{f(x)+q_{k,x}(x)\}+f(x)=0 \end{equation}
is identically valid. Balancing the coefficients of $x^m$ in this relation, we obtain
\begin{equation}\label{Lienard_degenerate_Liouville_cond_2} k=1:\quad d_1=-\frac{2f_0}{\delta+f_0};\quad k=2:\quad d_2=\frac{2f_0}{\delta-f_0}. \end{equation}
Expressing $f(x)$ and $g(x)$ from relation \eqref{Lienard_degenerate_Liouville_cond_1} and equation \eqref{Lienard_y_x} with $y(x)=q_k(x)$, we see that our Li\'{e}nard differential system is of the form \eqref{Lienard_degenerate_Darboux_explicit} and possesses two distinct irreducible invariant algebraic curves. This is a contradiction.
\textit{Case 2.} Let us suppose that a Liouvillian integrable Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ has an irreducible invariant algebraic curve with a generating polynomial of the second degree with respect to $y$. This curve is given by the expression $\{[y-y^{(1)}_{\infty}(x)][y-y^{(2)}_{\infty}(x)]\}_{+}=0$. Its cofactor reads as $\lambda(x,y)=-2f(x)-q_{1,x}(x)-q_{2,x}(x)$. A Darboux integrating factor can be represented in the form
\begin{equation}\label{Lienard_degenerate_Liouville_IF_2} M(x,y)=F^{d}(x,y),\, F(x,y)=\{[y-y^{(1)}_{\infty}(x)][y-y^{(2)}_{\infty}(x)]\}_{+},\, d\in\mathbb{C}\setminus\{0\} \end{equation}
and exists if and only if the following condition
\begin{equation}\label{Lienard_degenerate_Liouville_cond_3} d\{2f(x)+q_{1,x}(x)+q_{2,x}(x)\}+f(x)=0 \end{equation}
is identically satisfied. Considering the coefficients of $x^m$ in this condition yields the value of $d$: $d=-1$. Further, we represent the polynomial $F(x,y)$ in the form $F(x,y)=y^2+v(x)y+w(x)$, where $v(x)$, $w(x)\in\mathbb{C}[x]$. Substituting expression $M(x,y)=F^{-1}(x,y)$ into the partial differential equation
\begin{equation}\label{Lienard_degenerate_Liouville_PDE_1} yM_x-[f(x)y+g(x)]M_y-f(x)M=0, \end{equation}
we clear the denominator. Setting to zero the coefficients of different powers of $y$, we get $w(x)=\beta v^2(x)$, where $\beta\in\mathbb{C}\setminus\{0\}$. This equality contradicts the irreducibility of the polynomial~$F(x,y)$: the discriminant of $F(x,y)$ with respect to $y$ becomes the perfect square $(1-4\beta)v^2(x)$, and $F(x,y)$ factors into two polynomials of the first degree with respect to $y$.
\textit{Case 3.} Now we assume that a Liouvillian integrable Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ has two distinct irreducible invariant algebraic curves $y-q_1(x)=0$ and $y-q_2(x)=0$. Their cofactors are the following $\lambda_1(x,y)=-f(x)-q_{1,x}(x)$ and $\lambda_2(x,y)=-f(x)-q_{2,x}(x)$, respectively. Condition \eqref{JLM_gen_cond} enabling the existence of a Darboux integrating factor
\begin{equation}\label{Lienard_degenerate_Liouville_IF_3} M(x,y)=[y-q_1(x)]^{d_1}[y-q_2(x)]^{d_2},\quad d_1,d_2\in\mathbb{C},\quad |d_1|+|d_2|>0 \end{equation}
is of the form
\begin{equation}\label{Lienard_degenerate_Liouville_cond_4} d_1\{f(x)+q_{1,x}(x)\}+d_2\{f(x)+q_{2,x}(x)\}+f(x)=0. \end{equation}
All the Li\'{e}nard differential systems from family ($B$) with an integrating factor of the form~\eqref{Lienard_degenerate_Liouville_IF_3} have been identified in Theorem \ref{T:Lienard_degenerate_Liouville_polynomial}. We need to extract non-resonant systems from those given by expression \eqref{Lienard_degenerate_Liouville_system_polynomial}. Thus, the polynomials $q_1(x)$ and $q_2(x)$ necessarily have distinct dominant terms. This fact yields the inequality $k<l$. Finally, we need to introduce the normalization adopted at the beginning of this section. Using relations \eqref{Lienard_degenerate_Puiseux_series_dominant} and \eqref{Lienard_degenerate_Puiseux_series_Polynomial_parts}, we obtain expression \eqref{Lienard_degenerate_Liouvillian_normalization}. \end{proof}
\textit{Corollary.} Li\'{e}nard differential systems \eqref{Lienard_degenerate_Darboux_system_time} with a non-autonomous Darboux first integral \eqref{Lienard_degenerate_Darboux_FI1_time} are Liouvillian integrable.
These systems have the Darboux integrating factor
\begin{equation}\label{Lienard_degenerate_Liouville_IF_time_partial} \begin{gathered} M(x,y)=\frac{\left[y+\frac{\delta+f_0}{2(m+1)}(x-x_0)^{m+1}+\frac{\omega}{2(m+1)\delta}(x-x_0)\right]^{\frac{mf_0-(m+2)\delta}{2(m+1)\delta}}} {\left[y+\frac{f_0-\delta}{2(m+1)}(x-x_0)^{m+1}+\frac{\omega}{2(m+1)\delta}(x-x_0)\right]^\frac{mf_0+(m+2)\delta}{2(m+1)\delta}}. \end{gathered} \end{equation}
\begin{proof} We establish the validity of the statement substituting relations
\begin{equation}\label{Lienard_degenerate_Liouville_IF_time_parameters} \begin{gathered} k=1,\, l=m+1,\, u(x)=u_0(x-x_0),\, \beta=-\frac{\omega}{2(m+1)\delta u_0},\, u_0=\left\{-\frac{\delta}{m+1}\right\}^{\frac1{m+1}} \end{gathered} \end{equation}
into expressions \eqref{Lienard_degenerate_Liouville_system_polynomial} and \eqref{Lienard_degenerate_Darboux_IF}. In addition, we recall that the parameter $d_1$ reads as~\eqref{Lienard_degenerate_Liouvillian_normalization}. \end{proof}
\begin{theorem}\label{T:Lienard_degenerate_Liouville2} A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta=0$ is Liouvillian integrable if and only if the system has the irreducible invariant algebraic curve $y-q(x)=0$ and an exponential invariant $E(x,y)=\exp[u(x)/(y-q(x))]$ such that one of the following assertions is valid.
\begin{enumerate}
\item The system is Darboux integrable and takes the form \eqref{Lienard_degenerate_Darboux_del0_f_g}. A related Darboux integrating factor reads as
\begin{equation}\label{Lienard_degenerate_Liouville_IF_main2} M(x,y)=\frac{1}{[y-q(x)]^{2}}. \end{equation}
The polynomial $u(x)$ arising in the exponential invariant is $u(x)=\alpha q(x)$, $ \alpha\in\mathbb{C}\setminus\{0\}$.
\item The system is of the form
\begin{equation}\label{Lienard_degenerate_Liouville_f_g_Case2} \begin{gathered} x_t=y,\quad y_t=-\left[\frac{2l^2}{l-k}v^{l-1}-(l+k)\beta v^{k-1}\right]v_xy\\ + \left[\frac{2l^2\beta}{l-k}v^{l+k-1}-\frac{l^3}{(l-k)^2}v^{2l-1}-l\beta^2 v^{2k-1}\right]v_x, \end{gathered} \end{equation}
where $\beta\in\mathbb{C}\setminus\{0\}$, $v(x)$ is a non-constant polynomial, $k$ and $l$ are relatively prime natural numbers satisfying the inequality $k<l$. The associated Darboux integrating factor reads~as
\begin{equation}\label{Lienard_degenerate_Liouville_IF_main3} M(x,y)=[y-q(x)]^{-\frac{l+k}{l}}\exp\left[\frac{v^l(x)}{y-q(x)}\right], \quad q(x)=-\frac{l}{l-k}v^l+\beta v^k. \end{equation}
In addition, the following relation $m+1=l\deg v$ is valid.
\end{enumerate}
\end{theorem}
\begin{proof} It is straightforward to verify that expressions \eqref{Lienard_degenerate_Liouville_IF_main2} and \eqref{Lienard_degenerate_Liouville_IF_main3} are Darboux integrating factors of Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the restrictions $\deg g=2\deg f+1$ and $\delta=0$ whenever all other conditions of the theorem are satisfied. Let us prove the converse statement. Suppose we consider a Liouvillian integrable Li\'{e}nard differential system \eqref{Lienard_gen} such that $\deg g=2\deg f+1$ and $\delta=0$.
By Theorems \ref{T:Liouville}, \ref{T:Lienard_degenerate} and Lemmas \ref{L:Lienard_exp_inv1}, \ref{L:Lienard_exp_degenerate} the system has the irreducible invariant algebraic curve $y-q(x)=0$ and a Darboux integrating factor that can be represented in the form
\begin{equation}\label{Lienard_degenerate_Liouville_IF_main4} M(x,y)=[y-q(x)]^{d}\exp\left[\frac{u(x)}{y-q(x)}\right],\quad d\in\mathbb{C}, \end{equation}
where we suppose that $u(x)\equiv0$ whenever the exponential invariant $E(x,y)=\exp[u(x)/(y-q(x))]$ related to the invariant algebraic curve $y-q(x)=0$ either does not exist or is not involved in an explicit expression of the integrating factor. Condition \eqref{JLM_gen_cond} with $\omega=0$ now takes the form
\begin{equation}\label{Lienard_degenerate_Liouville_cond 1} d[f(x)+q_x(x)]-u_x(x)+f(x)=0. \end{equation}
Further, we shall consider several distinct cases separately.
\textit{Case 1.} Let us begin with the case $u(x)\equiv0$. Substituting the asymptotic relations $f(x)=f_0x^m+o(x^m)$ and $q(x)=-f_0x^{m+1}/(2\{m+1\})+o(x^{m+1})$, $x\rightarrow\infty$, into condition \eqref{Lienard_degenerate_Liouville_cond 1} and setting to zero the coefficient of $x^m$, we obtain the equalities $d=-2$ and $f(x)=-2q_x(x)$. Recalling the fact that $y=q(x)$ is the polynomial solution of equation \eqref{Lienard_y_x}, we find the polynomial $g(x)$ as given in~\eqref{Lienard_degenerate_Darboux_del0_f_g}. Using Theorem~\ref{T:Lienard_degenerate_Darboux2}, we conclude that the related Li\'{e}nard differential system possesses a Darboux first integral \eqref{Lienard_degenerate_Darboux_FI1_del0} and the exponential invariants $E(x,y)=\exp[\alpha q(x)/(y-q(x))]$, $ \alpha\in\mathbb{C}\setminus\{0\}$. Note that integrating the differential form
\begin{equation}\label{Lienard_degenerate_Liouville_differential_form} \frac{ydy+(qq_x-2q_xy)dx}{\{y-q(x)\}^2} \end{equation}
yields a first integral in the form $\ln I(x,y)$ with the function $I(x,y)$ given by expression~\eqref{Lienard_degenerate_Darboux_FI1_del0}.
\textit{Case 2.} Now let us suppose that the polynomial $u(x)$ is not identically zero. We see that the system under consideration has the exponential invariant $E(x,y)=\exp[u(x)/(y-q(x))]$ with the polynomial $u(x)$ satisfying equation~\eqref{Lienard_degenerate_Int2}. Expressing $f(x)$ from the latter equation and substituting the result into condition \eqref{Lienard_degenerate_Liouville_cond 1} yields the relation
\begin{equation}\label{Lienard_degenerate_Liouville_cond 2} (d+1)qu_x+q_xu+uu_x=0. \end{equation}
This relation, viewed as an ordinary differential equation with respect to the polynomial $q(x)$, can be integrated. Thus, we find the expressions
\begin{equation}\label{Lienard_degenerate_Liouville_cond 3} \begin{gathered} d\neq-2:\quad q(x)=-\frac1{d+2}u+\beta u^{-(d+1)};\quad d=-2:\quad q(x)=(\beta - \ln u)u,\hfill \end{gathered} \end{equation}
where $\beta\in\mathbb{C}$ is a constant of integration. If $d=-2$, then $q(x)$ is not a polynomial. Further, we set $d\neq-2$. The case $\beta=0$ again leads to a Darboux integrable family of Li\'{e}nard differential systems given in Theorem \ref{T:Lienard_degenerate_Darboux2}. Thus, we suppose that the constant $\beta$ is non-zero.
Recalling the fact that $q(x)$ and $u(x)$ are polynomials with the dominant behavior $q(x)=-f_0x^{m+1}/(2\{m+1\})+o(x^{m+1})$ and $u(x)=u_0x^{m+1}+o(x^{m+1})$, $u_0\in\mathbb{C}\setminus\{0\}$ near the point $x=\infty$, we find two possibilities
\begin{equation}\label{Lienard_degenerate_Liouville_cond 4} \begin{gathered} d=-1:\quad u(x)=\beta-q(x);\quad d=-\frac{l+k}{l}:\quad u(x)=v^l(x).\hfill \end{gathered} \end{equation}
In these expressions $l$ and $k$ are relatively prime natural numbers satisfying the restriction $k<l$ and $v(x)$ is a non-constant polynomial. Analyzing the possibility $d=-1$, we substitute the equality $u(x)=\beta-q(x)$ into relation \eqref{Lienard_degenerate_Int2} and find the polynomial $f(x)$. The result is
\begin{equation}\label{Lienard_degenerate_Liouville_cond 5} f(x)=\frac{(2q(x)-\beta)q_x(x)}{\beta-q(x)}. \end{equation}
Let $x_0$ be a zero of the polynomial $\beta-q(x)$. Considering the behavior near $x_0$ of the rational function on the right-hand side of expression \eqref{Lienard_degenerate_Liouville_cond 5}, we see that $f(x)$ is not a polynomial whenever $\beta\neq0$. Finally, we suppose that the following relations $d=-(l+k)/l$ and $u(x)=v^l(x)$ are valid. We use expressions \eqref{Lienard_degenerate_Liouville_cond 3}, \eqref{Lienard_degenerate_Liouville_cond 1}, and \eqref{Lienard_y_x} to find the polynomials $f(x)$, $g(x)$, and $q(x)$ as given in relations \eqref{Lienard_degenerate_Liouville_f_g_Case2} and \eqref{Lienard_degenerate_Liouville_IF_main3}. In addition, we verify that equation \eqref{Lienard_degenerate_Int2} is identically satisfied. The proof is complete. \end{proof}
\textit{Corollary 1.} Li\'{e}nard differential systems \eqref{Lienard_degenerate_Darboux_del0_f_g_time} possessing a non-autonomous Darboux first integral \eqref{Lienard_degenerate_Darboux_FI1_del0_time} are Liouvillian integrable with the Darboux integrating factor
\begin{equation}\label{Lienard_degenerate_Liouville_IF_main3_partial} \begin{gathered} M(x,y)=\exp\left[\frac{mf_0(x-x_0)^{m+1}}{(m+1)\{2(m+1)y+f_0(x-x_0)^{m+1}+2\omega(x-x_0)\}}\right]\\ \times\left[y+\frac{f_0(x-x_0)^{m+1}}{2(m+1)}+\frac{\omega(x-x_0)}{m+1}\right]^{-\frac{m+2}{m+1}}. \end{gathered} \end{equation}
\begin{proof} We prove the validity of the statement substituting relations
\begin{equation}\label{Lienard_degenerate_Liouville_IF_main3_parameters} \begin{gathered} k=1,\, l=m+1,\, v(x)=v_0(x-x_0),\, \beta=-\frac{\omega}{(m+1)v_0},\, v_0=\left\{\frac{mf_0}{2(m+1)^2}\right\}^{\frac1{m+1}} \end{gathered} \end{equation}
into expressions \eqref{Lienard_degenerate_Liouville_f_g_Case2} and \eqref{Lienard_degenerate_Liouville_IF_main3}. \end{proof}
\textit{Corollary 2.} If the following inequality $k>l$ holds, then systems \eqref{Lienard_degenerate_Liouville_f_g_Case2} are also Liouvillian integrable Li\'{e}nard differential systems from family ($B$). But these systems are resonant near infinity. The related Darboux integrating factor is again given by expression \eqref{Lienard_degenerate_Liouville_IF_main3}.
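The defining property of integrating factor \eqref{Lienard_degenerate_Liouville_IF_main3} is also easy to confirm symbolically. The following minimal \texttt{sympy} sketch is our illustration, not part of the statements above; the sample choices $k=1$, $l=2$, $v(x)=x$ are ours. It checks the divergence condition $\partial_x(My)+\partial_y(M\{-f(x)y-g(x)\})=0$ for systems \eqref{Lienard_degenerate_Liouville_f_g_Case2}.
\begin{verbatim}
# Sketch (sample values k = 1, l = 2, v(x) = x): check that M(x, y)
# from (Lienard_degenerate_Liouville_IF_main3) is an integrating
# factor of (Lienard_degenerate_Liouville_f_g_Case2).
import sympy as sp

x, y = sp.symbols('x y')
b = sp.symbols('beta', nonzero=True)
k, l = 1, 2                      # relatively prime, k < l
v = x                            # sample non-constant polynomial v(x)
vx = sp.diff(v, x)

f = (2*l**2*v**(l - 1)/(l - k) - (l + k)*b*v**(k - 1))*vx
g = -(2*l**2*b*v**(l + k - 1)/(l - k)
      - l**3*v**(2*l - 1)/(l - k)**2 - l*b**2*v**(2*k - 1))*vx

q = -l*v**l/(l - k) + b*v**k
M = (y - q)**sp.Rational(-(l + k), l)*sp.exp(v**l/(y - q))

# divergence condition: d/dx(M*y) + d/dy(M*(-f*y - g)) = 0
cond = y*sp.diff(M, x) - (f*y + g)*sp.diff(M, y) - f*M
print(sp.simplify(cond/M))       # expected output: 0
\end{verbatim}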
A Liouvillian first integral produced by integrating factor \eqref{Lienard_degenerate_Liouville_IF_main3} reads as
\begin{equation}\label{Lienard_degenerate_Liouville_FI} I(x,y)=v^{l-k}(x)\gamma\left(-\frac{l-k}{l},\frac{v^l(x)}{q(x)-y}\right)- \frac{q(x)}{v^{k}(x)}\gamma\left(\frac{k}{l},\frac{v^l(x)}{q(x)-y}\right), \end{equation}
where the polynomial $q(x)$ is given in expression \eqref{Lienard_degenerate_Liouville_f_g_Case2} and $\gamma(\delta,s)$ is the lower incomplete Gamma function
\begin{equation}\label{lower_incomplete_Gamma} \gamma(\delta,s)=\int_0^st^{\delta-1}\exp(-t)dt. \end{equation}
Note that we need to consider the analytic continuation of this integral for complex or real non-positive values of~$s$. If $k=1$ and $l=2$, then we obtain another representation of a Liouvillian first integral
\begin{equation}\label{Lienard_degenerate_Liouville_FI_a} \begin{gathered} I(x,y)=2\sqrt{\beta v(x)-2v^2(x)-y}\exp\left[\frac{v^2(x)}{y+2v^2(x)-\beta v(x)}\right]\\ -\sqrt{\pi}\beta\erfc\left[\frac{v(x)}{\sqrt{\beta v(x)-2v^2(x)-y}}\right], \end{gathered} \end{equation}
where $\erfc(s)$ is the complementary error function
\begin{equation}\label{error_function} \erfc(s)=\frac{2}{\sqrt{\pi}}\int_s^{\infty}\exp\left(-t^2\right)dt. \end{equation}
The family of systems \eqref{Lienard_degenerate_Liouville_f_g_Case2} can be transformed to the following simple form
\begin{equation} \label{Lienard_Inegrability_Bpartial_1_Sundman} \begin{gathered} s_{\tau}=z,\quad z_{\tau}=-\left[\frac{2l^2}{l-k}s^{l-1}-(l+k)\beta s^{k-1}\right]z\\ + \frac{2l^2\beta}{l-k}s^{l+k-1}-\frac{l^3}{(l-k)^2}s^{2l-1}-l\beta^2 s^{2k-1} \end{gathered} \end{equation}
via the generalized Sundman transformation $s(\tau)=v(x)$, $z(\tau)=y$, $d\tau=v_x(x)dt$. Substituting $v(x)=s$, $y=z$ into \eqref{Lienard_degenerate_Liouville_FI}, we find a Liouvillian first integral for systems \eqref{Lienard_Inegrability_Bpartial_1_Sundman}. The Liouvillian integrable families of Li\'{e}nard differential systems given in Theorems \ref{T:Lienard_degenerate_Liouville_polynomial}, \ref{T:Lienard_degenerate_Liouville1}, and \ref{T:Lienard_degenerate_Liouville2} seem to be new, with the exception of systems \eqref{Lienard_degenerate_Darboux_del0_f_g_time}. The latter were presented by T.~Stachowiak \cite{Stachowiak}. Now let us investigate the existence of Jacobi last multipliers with a time-dependent exponential factor.
\begin{lemma}\label{L:Lienard__degenerate Liouville_t1} A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta/f_0\not\in\mathbb{Q}$ has a non-autonomous Darboux--Jacobi last multiplier of the form \eqref{JLM_gen} if and only if one of the following assertions is valid.
\begin{enumerate}
\item The system under consideration possesses one irreducible invariant algebraic curve $y-q_k(x)=0$, where $k=1$ or $k=2$, such that the polynomials $f(x)$ and $q_k(x)$ identically satisfy the condition
\begin{equation} \label{Lienard_degenerate_Inegrability_time1} \begin{gathered} k=1:\, (f_0-\delta)f(x)+2f_0q_{1,x}(x)+(f_0+\delta)\omega=0,\, \omega\in\mathbb{C}\setminus\{0\},\\ k=2:\, (f_0+\delta)f(x)+2f_0q_{2,x}(x)+(f_0-\delta)\omega=0,\, \omega\in\mathbb{C}\setminus\{0\}.
\end{gathered} \end{equation}
A related Darboux--Jacobi last multiplier reads as
\begin{equation} \label{Lienard_degenerate_Inegrability_time2} \begin{gathered} k=1:\quad M(x,y,t)=[y-q_1(x)]^{-\frac{2f_0}{\delta+f_0}}\exp(\omega t),\\ k=2:\quad M(x,y,t)=[y-q_2(x)]^{\frac{2f_0}{\delta-f_0}}\exp(\omega t).\hfill \end{gathered} \end{equation}
\item The system under consideration possesses one irreducible invariant algebraic curve $F(x,y)=0$ with $\displaystyle F(x,y)=\left\{\left[y-y^{(1)}_{\infty}(x)\right]\left[y-y^{(2)}_{\infty}(x)\right]\right\}_+$ such that the polynomials $f(x)$, $\displaystyle q_1(x)=\left\{y^{(1)}_{\infty}(x)\right\}_+$, and $\displaystyle q_2(x)=\left\{y^{(2)}_{\infty}(x)\right\}_+$ identically satisfy the condition
\begin{equation} \label{Lienard_degenerate_Inegrability_time3} \begin{gathered} f(x)+q_{1,x}(x)+q_{2,x}(x)+\omega=0,\, \omega\in\mathbb{C}\setminus\{0\}. \end{gathered} \end{equation}
A related Darboux--Jacobi last multiplier reads as
\begin{equation} \label{Lienard_degenerate_Inegrability_time4} \begin{gathered} M(x,y,t)=\frac{\exp(\omega t)}{F(x,y)},\quad F(x,y)=\left\{\left[y-y^{(1)}_{\infty}(x)\right]\left[y-y^{(2)}_{\infty}(x)\right]\right\}_+. \end{gathered} \end{equation}
\item The system under consideration possesses two distinct irreducible invariant algebraic curves $y-q_1(x)=0$ and $y-q_2(x)=0$ such that the polynomials $f(x)$, $\displaystyle q_1(x)$, and $\displaystyle q_2(x)$ identically satisfy the condition
\begin{equation} \label{Lienard_degenerate_Inegrability_time5} \begin{gathered} \,[(2d_2+1)\delta-f_0]f(x)+[(\delta-f_0)d_2-2f_0]q_{1,x}(x)+(\delta+f_0)d_2q_{2,x}(x)\\ -(\delta+f_0)\omega=0,\quad d_2\in\mathbb{C},\quad\omega\in\mathbb{C}\setminus\{0\}. \end{gathered} \end{equation}
A related Darboux--Jacobi last multiplier reads as
\begin{equation} \label{Lienard_degenerate_Inegrability_time6} \begin{gathered} M(x,y,t)=[y-q_1(x)]^{\frac{(\delta-f_0)d_2-2f_0}{\delta+f_0}}[y-q_2(x)]^{d_2}\exp(\omega t). \end{gathered} \end{equation}
\end{enumerate}
\end{lemma}
This lemma is proved similarly to Theorem \ref{T:Lienard_degenerate_Liouville1}.
\begin{lemma}\label{L:Lienard__degenerate Liouville_t2} A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g=2\deg f+1$ and $\delta=0$ has a non-autonomous Darboux--Jacobi last multiplier of the form \eqref{JLM_gen} if and only if one of the following assertions is valid.
\begin{enumerate}
\item The system under consideration possesses the irreducible invariant algebraic curve $y-q(x)=0$ such that the polynomials $f(x)$ and $g(x)$ can be represented~as
\begin{equation} \label{Lienard_degenerate_Inegrability_time_del0_1} \begin{gathered} f(x)=-2q_x(x)-\omega,\quad g(x)=q(x)(q_x(x)+\omega),\quad \omega\in\mathbb{C}\setminus\{0\}. \end{gathered} \end{equation}
A related Darboux--Jacobi last multiplier reads as
\begin{equation} \label{Lienard_degenerate_Inegrability_time_del0_2} \begin{gathered} M(x,y,t)=\frac{\exp(\omega t)}{[y-q(x)]^{2}}. \end{gathered} \end{equation}
\item The system under consideration possesses the irreducible invariant algebraic curve $y-q(x)=0$ and the exponential invariant $E(x,y)=\exp[u(x)/(y-q(x))]$ such that the polynomials $f(x)$, $q(x)$, and $u(x)$ identically satisfy the condition
\begin{equation} \label{Lienard_degenerate_Inegrability_time_del0_3} \begin{gathered} (d+1)f(x)+dq_x(x)=u_x(x)+\omega,\quad d\in\mathbb{C},\quad \omega\in\mathbb{C}\setminus\{0\}.
\end{gathered} \end{equation}
A related Darboux--Jacobi last multiplier reads as
\begin{equation} \label{Lienard_degenerate_Inegrability_time_del0_4} \begin{gathered} M(x,y,t)=[y-q(x)]^{d}\exp\left[\frac{u(x)}{y-q(x)}\right]\exp(\omega t). \end{gathered} \end{equation}
\end{enumerate}
\end{lemma}
The proof of this lemma is analogous to the proof of Theorem \ref{T:Lienard_degenerate_Liouville2}. Concluding this section, let us note that autonomous and non-autonomous Darboux first integrals of non-resonant Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the conditions $\deg f=1$, $\deg g=3$ and $\deg f=2$, $\deg g=5$ are classified in \cite{DS2019, DS2021}.
\section{Integrability of Li\'{e}nard differential systems from family ($C$)}\label{S:Lienard_C}
We start investigating integrability properties of Li\'{e}nard differential systems from family~($C$) by proving the absence of exponential invariants related to invariant algebraic curves. We use designations of Theorem \ref{T1:Lienard_2m+1}. In particular, by $h^{(1)}(x)$ and $h^{(2)}(x)$ we denote the initial parts of the Puiseux series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$, respectively. These initial parts involve monomials with exponents exceeding $-(n+1)/2$. Recall that we use the designations $\deg g=n$ and $\deg f =m$. Thus, the following inequality $n>2m+1$ is valid.
\begin{lemma}\label{L:Lienard_exp_inv3} Suppose a Li\'{e}nard differential system~\eqref{Lienard_gen} from family ($C$) is not integrable with a rational first integral. Then this system does not have exponential invariants of the form $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$, where $h(x,y)\in\mathbb{C}[x,y]$ and $r(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}$ are relatively prime polynomials.
\end{lemma}
\begin{proof} We shall use the local theory presented in Section~\ref{S:Local}. If a Li\'{e}nard differential system~\eqref{Lienard_gen} has an exponential invariant $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$ with $r(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}$, then $r(x,y)\in\mathbb{C}[x,y]\setminus\mathbb{C}[x]$ and we may assume without loss of generality that the degree of the polynomial $h(x,y)$ with respect to $y$ is less than the degree of the polynomial $r(x,y)$ with respect to $y$. There exists a finite number of local elementary exponential invariants
\begin{equation} \label{Lienard_exp_inv1_loc1} \begin{gathered} E_j(x,y)=\exp\left[\frac{u_j(x)}{\{y-Y_{j,\infty}(x)\}^{n_j}}\right],\quad u_j(x),\, Y_{j,\infty}(x)\in \mathbb{C}_{\infty}\{x\},\\ n_j\in\mathbb{N},\quad j=1,\ldots, K,\quad K\in\mathbb{N} \end{gathered} \end{equation}
such that the exponential invariant $E(x,y)$ equals the product $E_1(x,y)\times\ldots\times E_K(x,y)$. It follows from Theorem \ref{T:inv_curve_prim} and Lemma \ref{L:exp_factor_inv_curve} that each series $Y_{j,\infty}(x)$ satisfies equation \eqref{Lienard_y_x}. In fact, the series $Y_{j,\infty}(x)$ coincides with one of the series $y^{(1,2)}_{j,\infty}(x)$ presented in Theorem~\ref{T1:Lienard_2m+1}. Let us denote the cofactor of the local elementary exponential invariant $E_j(x,y)$ by $\varrho_j(x,y)\in\mathbb{C}_{\infty}\{x\}[y]$. We see that the following expression $\varrho_1(x,y)+\ldots+\varrho_K(x,y)$ equals the cofactor $\varrho(x,y)$ of the invariant $E(x,y)$. Recall that the cofactor $\varrho(x,y)$ is an element of the ring~$\mathbb{C}[x,y]$.
Substituting the explicit representation of $E_j(x,y)$ into the partial differential equation $\mathcal{X}E_j(x,y)=\varrho_j(x,y)E_j(x,y)$, we get
\begin{equation} \label{Lienard_exp_inv1_2n} \begin{gathered} yu_{j,x}=n_j\lambda_j(x,y)u_j(x)+\varrho_j(x,y)\left\{y-Y_{j,\infty}(x)\right\}^{n_j}. \end{gathered} \end{equation}
In this expression $\lambda_j(x,y)\in\mathbb{C}_{\infty}\{x\}[y]$ is the cofactor of the local elementary invariant $F_j(x,y)=y-Y_{j,\infty}(x)$. Using Theorem \ref{T:coff_local2}, we find the cofactor $\lambda_j(x,y)$. The result is
\begin{equation} \label{Lienard_exp_inv1_cof_local} \begin{gathered} \lambda_j(x,y)=-f(x)-\{Y_{j,\infty}(x)\}_x. \end{gathered} \end{equation}
Analyzing expression \eqref{Lienard_exp_inv1_2n}, we see that $n_j=1$, $\varrho_j(x,y)=u_{j,x}(x)$, and the series $u_j(x)$ satisfies the following ordinary differential equation
\begin{equation} \label{Lienard_exp_inv1_3n} \begin{gathered} Y_{j,\infty}(x)u_{j,x}(x)+(f(x)+[Y_{j,\infty}(x)]_x)u_j(x)=0. \end{gathered} \end{equation}
Using the dominant behavior of the series $Y_{j,\infty}(x)$ given by the monomial $b_{0,j}x^{(n+1)/2}$, we find the dominant monomial of the series $u_{j}(x)$. Thus, we get the relation $u_{j}(x)=\sigma_{j}x^{-(n+1)/2}+o(x^{-(n+1)/2})$, where $\sigma_j\in\mathbb{C}$ and $x\rightarrow \infty$. Hence the cofactor $\varrho_j(x,y)=u_{j,x}(x)$ does not have monomials with non-negative exponents. Consequently, the polynomial $\displaystyle \varrho(x,y)=\varrho_1(x,y)+\ldots+\varrho_K(x,y)$ should be identically zero. We conclude that the argument $h(x,y)/r(x,y)$ of the exponential invariant $E(x,y)=\exp\left\{h(x,y)/r(x,y)\right\}$ is a rational first integral of the differential system in question. It is a contradiction. \end{proof}
\textit{Remark.} It will be shown below that Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the condition $\deg g>2\deg f+1$ do not have rational first integrals. Thus, the assumption that the system is not integrable with a rational first integral can be removed from the statement of Lemma \ref{L:Lienard_exp_inv3}. Now our aim is to prove that Li\'{e}nard differential systems \eqref{Lienard_gen} do not have Darboux first integrals provided that $\deg g>2\deg f+1$.
\begin{theorem}\label{T:Lienard_Integrability1_C} Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($C$) are not Darboux integrable.
\end{theorem}
\begin{proof} Let us suppose that a Li\'{e}nard differential system~\eqref{Lienard_gen} with $\deg g>2\deg f+1$ has a Darboux first integral. In view of Theorem \ref{T:Darboux_rat} the system possesses a rational integrating factor. Consequently, there exist $K\in\mathbb{N}$ pairwise distinct irreducible invariant algebraic curves $F_1(x,y)=0$, $\ldots$, $F_K(x,y)=0$ and $K$ non-zero integers $d_1$, $\ldots$, $d_K$ such that the following condition
\begin{equation} \label{Lienard_Inegrability_new1} \sum_{j=1}^{K}d_j\lambda_j(x,y)=f(x),\quad d_1,\ldots, d_{K}\in\mathbb{Z} \end{equation}
is valid. In this expression $\lambda_j(x,y)$ is the cofactor of the invariant algebraic curve $F_j(x,y)=0$. Suppose the family of Puiseux series $y^{(l)}_{\infty}(x)$ arises $N_{l,j}$ times in the factorization of the polynomial $F_j(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. Here $l=1$, $2$ and we use the designations of Theorem \ref{T1:Lienard_2m+1}.
The cofactor $\lambda_j(x,y)$ reads~as
\begin{equation} \label{Lienard_Inegrability_coff_n1} \begin{gathered} \lambda_j(x,y)=-(N_{1,j}+N_{2,j})f(x)-\left\{N_{1,j}h^{(1)}_x(x)+N_{2,j}h^{(2)}_x(x)\right\}_+,\\ N_{1,j},N_{2,j}\in\mathbb{N}_0,\quad N_{1,j}+N_{2,j}>0, \end{gathered} \end{equation}
where $h^{(l)}(x)$ is the initial part of Puiseux series $y^{(l)}_{\infty}(x)$ introduced in Theorem~\ref{T1:Lienard_2m+1}. Substituting expression \eqref{Lienard_Inegrability_coff_n1} into relation \eqref{Lienard_Inegrability_new1} yields
\begin{equation} \label{Lienard_Inegrability_new2} \begin{gathered} \sum_{j=1}^{K}d_jN_{1,j}\left(f(x)+\left\{h^{(1)}_x(x)\right\}_+\right) +\sum_{j=1}^{K}d_jN_{2,j}\left(f(x)+\left\{h^{(2)}_x(x)\right\}_+\right) =-f(x). \end{gathered} \end{equation}
Let us suppose that $n$ is odd. The dominant behavior of the truncated series $h^{(1)}(x)$ and $h^{(2)}(x)$ is $b_0x^{(n+1)/2}$ and $-b_0x^{(n+1)/2}$, respectively. Setting to zero the coefficient of $x^{(n-1)/2}$ in expression \eqref{Lienard_Inegrability_new2}, we obtain
\begin{equation} \label{Lienard_Inegrability_new3} \begin{gathered} \sum_{j=1}^{K}d_jN_{1,j}=\sum_{j=1}^{K}d_jN_{2,j}. \end{gathered} \end{equation}
If $n$ is even, then by Theorem \ref{T1:Lienard_2m+1} we get $N_{1,j}=N_{2,j}$. Consequently, relation~\eqref{Lienard_Inegrability_new3} is identically satisfied. As a result, condition \eqref{Lienard_Inegrability_new2} takes the form
\begin{equation} \label{Lienard_Inegrability_new4} \begin{gathered} B\left(2f(x)+\left\{h^{(1)}_x(x)+h^{(2)}_x(x)\right\}_+\right) =-f(x),\quad B=\sum_{j=1}^{K}d_jN_{1,j}. \end{gathered} \end{equation}
Obviously, $B$ is a non-zero integer: if $B=0$, then condition \eqref{Lienard_Inegrability_new4} yields $f(x)\equiv0$, which is impossible. Now let us consider the equation
\begin{equation} \label{Lienard_Inegrability_ODE_alt} yy_x+\varepsilon f(x)y+g(x)=0. \end{equation}
Puiseux series near the point $x=\infty$ that satisfy this equation coincide with the Puiseux series $y_{j,\infty}^{(1,2)}(x)$ presented in Theorem \ref{T1:Lienard_2m+1} provided that $\varepsilon=1$. Analogously to the case $\varepsilon=1$, we find two families of Puiseux series near the point $x=\infty$ solving equation \eqref{Lienard_Inegrability_ODE_alt}. We denote these families as $Y_{\infty}^{(1)}(x)$ and $Y_{\infty}^{(2)}(x)$. It is straightforward to see that these series can be represented as
\begin{equation} \label{Lienard_Inegrability_PS_alt} Y_{\infty}^{(1,2)}(x)=\sum_{k=0}^{\infty}\varepsilon^kv^{(1,2)}_k(x), \end{equation}
where the coefficients $v^{(1,2)}_k(x)$ are Puiseux series near the point $x=\infty$. Substituting expression \eqref{Lienard_Inegrability_PS_alt} into equation \eqref{Lienard_Inegrability_ODE_alt} and setting to zero the coefficients of different powers of $\varepsilon$, we find ordinary differential equations for the series $v^{(1,2)}_k(x)$. The first two equations take the form
\begin{equation} \label{Lienard_Inegrability_ODE_for_coeff} v_0v_{0,x}+g(x)=0,\quad v_0v_{1,x}+v_{0,x}v_1+f(x)v_0=0, \end{equation}
where the upper index is omitted. Thus, we get
\begin{equation} \label{Lienard_Inegrability_coeff_expl} v^{(2)}_0(x)=-v^{(1)}_0(x),\quad v^{(1,2)}_1(x)=-\frac{2f_0}{2m+n+3}x^{m+1}+o(x^{m+1}),\, x\rightarrow\infty. \end{equation}
Analyzing other ordinary differential equations for the series $v^{(1,2)}_k(x)$ with $k\geq 2$, we find $v^{(1,2)}_k(x)=o(x^{m+1})$, $x\rightarrow\infty$, $k\geq 2$. We recall that the Puiseux series $y_{j,\infty}^{(1,2)}(x)$ with the same upper index have coinciding initial parts involving monomials with exponents exceeding $-(n+1)/2$.
These initial parts are given by $h^{(1)}(x)$ and $h^{(2)}(x)$. Substituting our results into condition \eqref{Lienard_Inegrability_new4} and considering the coefficients of the leading term $x^m$, we come to the equation
\begin{equation} \label{Lienard_Inegrability_new5} \begin{gathered} B\left(2f_0-\frac{4(m+1)f_0}{2m+n+3}\right)=-f_0. \end{gathered} \end{equation}
Solving this equation, we find the expression
\begin{equation} \label{Lienard_Inegrability_new6} \begin{gathered} B=-\frac{m+1}{n+1}-\frac12. \end{gathered} \end{equation}
The inequality $n>2m+1$ shows that $B$ is not an integer: indeed, it yields $0<(2m+n+3)/\{2(n+1)\}<1$. This is a contradiction. \end{proof}
\textit{Corollary 1.} A Li\'{e}nard differential system~\eqref{Lienard_gen} from family ($C$) has at most one invariant algebraic curve $F(x,y)=0$ with the property $N_1=N_2$.
\begin{proof} Recall that the variables $N_1$ and $N_2$ give the number of distinct Puiseux series $y^{(1)}_{\infty}(x)$ and $y^{(2)}_{\infty}(x)$ from the field $\mathbb{C}_{\infty}\{x\}$ arising in the factorization of the polynomial producing the algebraic curve $F(x,y)=0$, respectively. For more details see Theorem~\ref{T1:Lienard_2m+1}. Note that we do not require the polynomial $F(x,y)$ to be irreducible. The cofactor of such an algebraic curve takes the form
\begin{equation} \label{Lienard_Inegrability_coff_n1_add} \begin{gathered} \lambda(x,y)=-2N_1f(x)-N_1\left\{h^{(1)}_x(x)+h^{(2)}_x(x)\right\}_+. \end{gathered} \end{equation}
If there exists another invariant algebraic curve with the property $N_1=N_2$, then the cofactors of the two curves are dependent over the ring $\mathbb{Z}$, and the system under consideration possesses a rational first integral. This fact contradicts Theorem \ref{T:Lienard_Integrability1_C}. \end{proof}
\textit{Corollary 2.} A Li\'{e}nard differential system~\eqref{Lienard_gen} from family ($C$) cannot have two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ with the property $N_{1,1}N_{2,2}-N_{1,2}N_{2,1}=0$, where $N_{l,j}$ is the number of times the family of Puiseux series $y^{(l)}_{\infty}(x)$ enters the factorization of the polynomial $F_j(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$.
\begin{proof} The proof is by contradiction. Suppose a Li\'{e}nard differential system~\eqref{Lienard_gen} satisfying the condition $\deg g>2\deg f+1$ possesses two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ with the property $N_{1,1}N_{2,2}-N_{1,2}N_{2,1}=0$. First of all, let us assume that one of the numbers $N_{l,j}$, where $l$, $j= 1$,~$2$, is zero. Without loss of generality, we choose $N_{2,1}=0$. This gives $N_{1,1}N_{2,2}=0$. The polynomial $F_j(x,y)$, $j=1$, $2$ should have at least one Puiseux series near the point $x=\infty$ in its factorization. Thus, we get $N_{1,1}>0$, $N_{2,2}=0$, and $N_{1,2}>0$. By Theorem \ref{T1:Lienard_2m+1} there exists at most one irreducible invariant algebraic curve possessing only one family of Puiseux series near the point $x=\infty$ in the factorization. This yields a contradiction. Now suppose that all the numbers $N_{l,j}$, where $l$, $j= 1$,~$2$, are non-zero.
Introducing the variable
\begin{equation} \label{Lienard_Inegrability_coff_n1_number} \begin{gathered} \varkappa=\frac{N_{1,1}}{N_{2,1}}=\frac{N_{1,2}}{N_{2,2}}, \end{gathered} \end{equation}
we represent the cofactors $\lambda_1(x,y)$ and $\lambda_2(x,y)$ of the invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ in the form
\begin{equation} \label{Lienard_Inegrability_coff_n1_add_coff} \begin{gathered} \lambda_j(x,y)=-N_{2,j}\left[(\varkappa+1) f(x)+\left\{\varkappa h^{(1)}_x(x)+ h^{(2)}_x(x)\right\}_+\right],\quad j=1,2. \end{gathered} \end{equation}
We conclude that the cofactors are dependent over the ring $\mathbb{Z}$. Consequently, the Li\'{e}nard differential system under study has a rational first integral. This is a contradiction. \end{proof}
We see from Corollary $1$ and Theorem \ref{T1:Lienard_2m+1} that if $n=\deg g$ is an even number, then a Li\'{e}nard differential system~\eqref{Lienard_gen} satisfying the condition $\deg g>2\deg f+1$ cannot have more than one irreducible invariant algebraic curve simultaneously. Next, let us investigate the existence of non-autonomous Darboux first integrals. The following lemma is valid.
\begin{lemma}\label{L:Lienard_Integrability_time2} A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($C$) has a non-autonomous Darboux first integral with a time-dependent exponential factor~\eqref{FI_t_gen} if and only if $\deg f=0$ and one of the following assertions is valid.
\begin{enumerate}
\item There exists an irreducible invariant algebraic curve $F(x,y)=0$ such that the family of Puiseux series $y^{(1)}_{\infty}(x)$ arises in the factorization of the polynomial $F(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$ as many times as the family $y^{(2)}_{\infty}(x)$ does, i.e., $N_{1}=N_{2}$. A first integral takes the form
\begin{equation} \label{Lienard_Inegrability_FI_C1_deg_f_0} \begin{gathered} I(x,y,t)=F(x,y)\exp\left[\frac{2(n+1)f_0N_1 t}{n+3}\right]. \end{gathered} \end{equation}
\item There exist two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ such that the following relation $N_{1,j}\neq N_{2,j}$, $j=1$, $2$ is valid, where $N_{l,j}$ is the number of times the family of Puiseux series $y^{(l)}_{\infty}(x)$ enters the factorization of the polynomial $F_j(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. A first integral reads~as
\begin{equation} \label{Lienard_Inegrability_FI_C2_deg_f_0} \begin{gathered} I(x,y,t)=\frac{\left\{F_1(x,y)\right\}^{N_{1,2}-N_{2,2}}}{\left\{F_2(x,y)\right\}^{N_{1,1}-N_{2,1}}} \exp\left[\frac{2(n+1)f_0\Omega t}{n+3}\right] \end{gathered} \end{equation}
with the parameter $\Omega$ given by the relation $\Omega= N_{1,2}N_{2,1}-N_{1,1}N_{2,2}$.
\end{enumerate}
There are no other independent non-autonomous Darboux first integrals with a time-dependent exponential factor \eqref{FI_t_gen}.
\end{lemma}
\begin{proof} By Lemmas \ref{L:Lienard_exp_inv1} and \ref{L:Lienard_exp_inv2} exponential factors cannot enter an explicit expression of a non-autonomous Darboux first integral \eqref{FI_t_gen}.
It follows from Theorem \ref{T:L23_Non_aut_FI} that a Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the condition $\deg g>2\deg f+1$ has a first integral~\eqref{FI_t_gen} if and only if there exist $K\in\mathbb{N}$ pairwise distinct irreducible invariant algebraic curves $F_1(x,y)=0$, $\ldots$, $F_K(x,y)=0$ and $K+1$ non-zero complex numbers $d_1$, $\ldots$, $d_K$, $\omega$ such that the following condition
\begin{equation} \label{Lienard_Inegrability_new1_time} \sum_{j=1}^{K}d_j\lambda_j(x,y)+\omega=0,\quad d_1,\ldots, d_{K},\omega\in\mathbb{C}\setminus\{0\} \end{equation}
is valid. In this expression $\lambda_j(x,y)$ is the cofactor of the invariant algebraic curve $F_j(x,y)=0$. The related first integral reads as
\begin{equation} \label{Lienard_Inegrability_new1_time_add} I(x,y,t)=\prod_{j=1}^{K}F_j^{d_j}(x,y)\exp[\omega t]. \end{equation}
Suppose the family of Puiseux series $y^{(l)}_{\infty}(x)$ arises $N_{l,j}$ times in the factorization of the polynomial $F_j(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. Here $l=1$, $2$ and again we use the designations of Theorem \ref{T1:Lienard_2m+1}. The cofactor $\lambda_j(x,y)$ is given by relation~\eqref{Lienard_Inegrability_coff_n1}. The necessary and sufficient condition for first integral \eqref{Lienard_Inegrability_new1_time_add} to exist reads~as
\begin{equation} \label{Lienard_Inegrability_new2_time} \begin{gathered} \sum_{j=1}^{K}d_jN_{1,j}\left(f(x)+\left\{h^{(1)}_x(x)\right\}_+\right) +\sum_{j=1}^{K}d_jN_{2,j}\left(f(x)+\left\{h^{(2)}_x(x)\right\}_+\right)=\omega. \end{gathered} \end{equation}
This condition is not satisfied whenever relation \eqref{Lienard_Inegrability_new3} is not valid. Further, we rewrite condition \eqref{Lienard_Inegrability_new2_time} in the form
\begin{equation} \label{Lienard_Inegrability_new4_time} \begin{gathered} B\left(2f(x)+\left\{h^{(1)}_x(x)+h^{(2)}_x(x)\right\}_+\right)=\omega,\quad B=\sum_{j=1}^{K}d_jN_{1,j}, \end{gathered} \end{equation}
where, unlike the case of Theorem \ref{T:Lienard_Integrability1_C}, the parameter $B$ may be complex-valued. If $m>0$, then setting to zero the coefficient of $x^m$ in condition \eqref{Lienard_Inegrability_new4_time}, we arrive at the expression
\begin{equation} \label{Lienard_Inegrability_new5_time} \begin{gathered} B\left(2f_0-\frac{4(m+1)f_0}{2m+n+3}\right)=0, \end{gathered} \end{equation}
which implies $B=0$ and, in view of \eqref{Lienard_Inegrability_new4_time}, $\omega=0$. This is a contradiction. Consequently, we should set $m=0$. Relations \eqref{Lienard_Inegrability_new3} and \eqref{Lienard_Inegrability_new4_time} now become
\begin{equation} \label{Lienard_Inegrability_new6_time} \begin{gathered} \sum_{j=1}^{K}d_jN_{1,j}=\sum_{j=1}^{K}d_jN_{2,j},\quad 2f_0(n+1)\sum_{j=1}^{K}d_jN_{1,j}=(n+3)\omega. \end{gathered} \end{equation}
If the original Li\'{e}nard differential system has only one irreducible invariant algebraic curve ($K=1$), then algebraic system \eqref{Lienard_Inegrability_new6_time} is solvable if and only if the following relation $N_{1,1}=N_{2,1}$ is valid. We note that the parameter $d_1\neq0$ can be chosen arbitrarily. Thus, we set $d_1=1$. The second equation in \eqref{Lienard_Inegrability_new6_time} produces the value of $\omega$. As a result, we obtain non-autonomous Darboux first integral~\eqref{Lienard_Inegrability_FI_C1_deg_f_0}, where the index $j=1$ is omitted. Now let us suppose that the Li\'{e}nard differential system under consideration has at least two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ satisfying the restriction $N_{1,j}\neq N_{2,j}$, $j=1$, $2$. We see that algebraic system \eqref{Lienard_Inegrability_new6_time} is always solvable.
Indeed, setting $K=2$ and recalling the fact that one of the exponents $d_1$ and $d_2$ can be chosen arbitrarily, we obtain a solution
\begin{equation} \label{Lienard_Inegrability_FI_C6_deg_f_0} \begin{gathered} d_1=N_{1,2}-N_{2,2},\quad d_2=N_{2,1}-N_{1,1},\quad \omega=\frac{2(n+1)f_0\Omega }{n+3}, \end{gathered} \end{equation}
where we use the designation $\Omega= N_{1,2}N_{2,1}-N_{1,1}N_{2,2}$. Hence we have found time-dependent Darboux first integral \eqref{Lienard_Inegrability_FI_C2_deg_f_0}. By Corollary $2$ to Theorem~\ref{T:Lienard_Integrability1_C}, the following relation $\Omega\neq0$ is valid. If a Li\'{e}nard differential system has two independent non-autonomous Darboux first integrals with a time-dependent exponential factor~\eqref{FI_t_gen}, then this system is Darboux integrable. This fact contradicts Theorem~\ref{T:Lienard_Integrability1_C}. \end{proof}
\textit{Remark.} Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the conditions of item~$2$ can only arise when $n=\deg g$ is an odd number.
\smallskip
There exist Li\'{e}nard differential systems \eqref{Lienard_gen} with non-autonomous Darboux first integrals \eqref{Lienard_Inegrability_FI_C1_deg_f_0} and \eqref{Lienard_Inegrability_FI_C2_deg_f_0}. An example was given in \cite{Demina16}. Finally, we turn to the Liouvillian integrability.
\begin{theorem}\label{T:Lienard_Integrability3} A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($C$) is Liouvillian integrable if and only if the relation
\begin{equation} \label{Lienard_Inegrability_IF_C3_Cond} \begin{gathered} 4(m+1)f(x)+(2m+n+3)\left\{h^{(1)}_x(x)+h^{(2)}_x(x)\right\}_+=0 \end{gathered} \end{equation}
is identically satisfied and one of the following assertions is valid.
\begin{enumerate}
\item There exists an irreducible invariant algebraic curve $F(x,y)=0$ such that the family of Puiseux series $y^{(1)}_{\infty}(x)$ arises in the factorization of the polynomial $F(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$ as many times as the family $y^{(2)}_{\infty}(x)$ does, i.e., $N_{1}=N_{2}$. In this case the system has the unique Darboux integrating factor
\begin{equation} \label{Lienard_Inegrability_IF_C1_deg_f_0} \begin{gathered} M(x,y)=\left\{F(x,y)\right\}^{-\frac{2m+n+3}{2(n+1)N_{1}}}. \end{gathered} \end{equation}
\item There exist two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ such that the following relation $N_{1,j}\neq N_{2,j}$, $j=1$, $2$ is valid, where $N_{l,j}$ is the number of times the family of Puiseux series $y^{(l)}_{\infty}(x)$ enters the factorization of the polynomial $F_j(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. In this case the system has the unique Darboux integrating factor
\begin{equation} \label{Lienard_Inegrability_IF_C2_deg_f_0} \begin{gathered} M(x,y)=\frac{\left\{F_1(x,y)\right\}^{\frac{(2m+n+3)(N_{2,2}-N_{1,2})}{2(n+1)\Omega}}} {\left\{F_2(x,y)\right\}^{\frac{(2m+n+3)(N_{2,1}-N_{1,1})}{2(n+1)\Omega}}}. \end{gathered} \end{equation}
The following designation $\Omega= N_{1,2}N_{2,1}-N_{1,1}N_{2,2}$ is introduced in expression~\eqref{Lienard_Inegrability_IF_C2_deg_f_0}.
\end{enumerate}
\end{theorem}
\begin{proof} It follows from Theorem \ref{T:Liouville} that a Liouvillian integrable differential system \eqref{DS} has a Darboux integrating factor. By Lemmas \ref{L:Lienard_exp_inv1} and \ref{L:Lienard_exp_inv2} exponential invariants cannot enter an explicit expression of a Darboux integrating factor.
Thus, the Darboux integrating factor reads as
\begin{equation}
\label{Lienard_Inegrability_IF_Darboux1}
\begin{gathered}
M(x,y)=\prod_{j=1}^KF_j^{d_j}(x,y),\quad d_1,\ldots,d_K\in\mathbb{C},\quad K\in\mathbb{N},
\end{gathered}
\end{equation}
where the polynomials $F_1(x,y)$, $\ldots$, $F_K(x,y)$ give pairwise distinct irreducible invariant algebraic curves $F_1(x,y)=0$, $\ldots$, $F_K(x,y)=0$ of a Li\'{e}nard differential system. Without loss of generality, we suppose that the numbers $d_1$, $\ldots$, $d_K$ are all non-zero. The cofactor $\lambda_j(x,y)$ of the invariant algebraic curve $F_j(x,y)=0$ is given by relation \eqref{Lienard_Inegrability_coff_n1}. The necessary and sufficient condition $d_1\lambda_1(x,y)+\ldots+d_K\lambda_K(x,y)=-\text{div}\,\mathcal{X}$ for Darboux integrating factor~\eqref{Lienard_Inegrability_IF_Darboux1} to exist now takes the form
\begin{equation}
\label{Lienard_Inegrability_IF_C3}
\begin{gathered}
\sum_{j=1}^{K}d_jN_{1,j}\left(f(x)+\left\{h^{(1)}_x(x)\right\}_+\right) +\sum_{j=1}^{K}d_jN_{2,j}\left(f(x)+\left\{h^{(2)}_x(x)\right\}_+\right) =-f(x).
\end{gathered}
\end{equation}
This condition is not satisfied whenever relation \eqref{Lienard_Inegrability_new3} is not valid. Using relation \eqref{Lienard_Inegrability_new3}, we simplify condition \eqref{Lienard_Inegrability_IF_C3}. Thus, we get
\begin{equation}
\label{Lienard_Inegrability_IF_C4}
\begin{gathered}
B\left(2f(x)+\left\{h^{(1)}_x(x)+h^{(2)}_x(x)\right\}_+\right)=-f(x),\quad B=\sum_{j=1}^{K}d_jN_{1,j},
\end{gathered}
\end{equation}
where unlike the case of Theorem \ref{T:Lienard_Integrability1_C} the parameter $B$ may be complex-valued. Proceeding as in the proof of Theorem \ref{T:Lienard_Integrability1_C}, we find the following equality
\begin{equation}
\label{Lienard_Inegrability_IF_C5}
\begin{gathered}
B\left(2f_0-\frac{4(m+1)f_0}{2m+n+3}\right)=-f_0,
\end{gathered}
\end{equation}
which gives the value of $B$. The result is
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n1}
\begin{gathered}
B=-\frac{2m+n+3}{2(n+1)}.
\end{gathered}
\end{equation}
Substituting this equality into condition \eqref{Lienard_Inegrability_IF_C4} yields relation \eqref{Lienard_Inegrability_IF_C3_Cond}. Finally, we are left with the following algebraic system
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n2}
\begin{gathered}
\sum_{j=1}^{K}d_jN_{1,j}=\sum_{j=1}^{K}d_jN_{2,j},\quad \sum_{j=1}^{K}d_jN_{1,j}=-\frac{2m+n+3}{2(n+1)}
\end{gathered}
\end{equation}
with respect to the unknowns $d_1$, $\ldots$, $d_K$. Suppose the Li\'{e}nard differential system under study has only one irreducible invariant algebraic curve ($K=1$). Algebraic system \eqref{Lienard_Inegrability_IF_C3_n2} is satisfied if and only if $N_{1,1}=N_{2,1}$. Omitting the index $j$, we find the value of $d$ and Darboux integrating factor \eqref{Lienard_Inegrability_IF_C1_deg_f_0}. Now we assume that the Li\'{e}nard differential system in question possesses at least two distinct irreducible invariant algebraic curves. Setting $K=2$, we see that the determinant of algebraic system \eqref{Lienard_Inegrability_IF_C3_n2} equals $\Omega= N_{1,2}N_{2,1}-N_{1,1}N_{2,2}$. By Corollary $2$ to Theorem \ref{T:Lienard_Integrability1_C} we get $\Omega\neq0$. Consequently, algebraic system \eqref{Lienard_Inegrability_IF_C3_n2} has a unique solution. As a result we obtain Darboux integrating factor~\eqref{Lienard_Inegrability_IF_C2_deg_f_0}.
If there are no invariant algebraic curves, or if the unique irreducible invariant algebraic curve satisfies the condition $N_1\neq N_2$, then system \eqref{Lienard_Inegrability_IF_C3_n2} is inconsistent. Suppose the Li\'{e}nard differential system has two distinct Darboux integrating factors. Then their ratio is a Darboux first integral. This fact contradicts Theorem~\ref{T:Lienard_Integrability1_C}.
\end{proof}
\textit{Corollary.} A Liouvillian integrable Li\'{e}nard differential system \eqref{Lienard_gen} from family ($C$) has at most two distinct irreducible invariant algebraic curves simultaneously provided that $n=\deg g$ is an odd number.
\begin{proof}
Assuming that a Liouvillian integrable Li\'{e}nard differential system has three or more pairwise distinct irreducible invariant algebraic curves, we use Theorem \ref{T:Lienard_Integrability3} to find at least two distinct Darboux integrating factors. Consequently, the system possesses a Darboux first integral. It is a contradiction.
\end{proof}
\textit{Remark 1.} Item~$2$ can only arise if the number $n=\deg g$ is odd. Moreover, integrating factor \eqref{Lienard_Inegrability_IF_C2_deg_f_0} transforms into integrating factor~\eqref{Lienard_Inegrability_IF_C1_deg_f_0} whenever the following condition $N_{1,1}+N_{1,2}=N_{2,1}+N_{2,2}$ holds. In this case the polynomial $F(x,y)$ in~\eqref{Lienard_Inegrability_IF_C1_deg_f_0} is reducible: $F(x,y)=F_1(x,y)F_2(x,y)$.
\textit{Remark 2.} Relation \eqref{Lienard_Inegrability_IF_C3_Cond} is identically satisfied whenever $m=0$ ($\deg f=0$). This statement follows from relations \eqref{Lienard_Inegrability_PS_alt} and \eqref{Lienard_Inegrability_ODE_for_coeff}.
Let us obtain all Liouvillian integrable Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($C$) with a hyperelliptic invariant algebraic curve $y^2+u(x)y+v(x)=0$, where $u(x)$, $v(x)\in\mathbb{C}[x]$.
\begin{theorem}\label{T:Lienard_IntegrabilityC_partial} A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($C$) with a hyperelliptic invariant algebraic curve $y^2+u(x)y+v(x)=0$, where $u(x)$, $v(x)\in\mathbb{C}[x]$, is Liouvillian integrable if and only if the system is of the form
\begin{equation}
\label{Lienard_Inegrability_Cpartial_1}
x_t=y,\quad y_t=-\frac{(k+2l)}{4}w^{l-1}w_xy-\frac{k}8\left(w^{2l-1}+4\beta w^{k-1}\right)w_x,
\end{equation}
where $\beta\in\mathbb{C}\setminus\{0\}$, $w(x)$ is a polynomial of degree $(m+1)/l$, $k$ and $l$ are relatively prime natural numbers such that the following relation $(m+1)k=(n+1)l$ is valid. The associated Li\'{e}nard differential system has the unique Darboux integrating factor
\begin{equation}
\label{Lienard_Inegrability_Int_Fact_C_p1}
M(x,y)=\left\{y^2+w^ly+\frac14w^{2l}+\beta w^k\right\}^{-\left(\frac12+\frac{l}{k}\right)}
\end{equation}
and the hyperelliptic invariant algebraic curve reads as $4y^2+4w^ly+w^{2l}+4\beta w^k=0$. A Liouvillian first integral is of the form
\begin{equation}
\begin{gathered}
\label{Lienard_Inegrability_FI_C_p1}
I(x,y)=\frac{(2l-k)(2y+w^l)}{4kw^{\frac{k}2}\beta^{\frac12+\frac{l}{k}}} {}_2F_1\left(\frac12,\frac12+\frac{l}{k};\frac32;-\frac{(2y+w^l)^2}{4\beta w^k}\right) +\left\{y^2+w^ly+\frac14w^{2l}+\beta w^k\right\}^{\frac12-\frac{l}{k}},
\end{gathered}
\end{equation}
where ${}_2F_1(\alpha,\delta;\sigma;s)$ is the hypergeometric function.
\end{theorem}
\begin{proof}
It is straightforward to verify that system \eqref{Lienard_Inegrability_Cpartial_1} is Liouvillian integrable with an integrating factor and a first integral given by expressions \eqref{Lienard_Inegrability_Int_Fact_C_p1} and \eqref{Lienard_Inegrability_FI_C_p1}, respectively. Since $\beta\neq0$ and $w(x)$ is a polynomial of degree $(m+1)/l$, we conclude that system \eqref{Lienard_Inegrability_Cpartial_1} is from family ($C$). Now our goal is to prove the converse statement. Let us suppose that a Li\'{e}nard differential system \eqref{Lienard_gen} from family ($C$) is Liouvillian integrable and possesses a hyperelliptic invariant algebraic curve $y^2+u(x)y+v(x)=0$. By Theorem \ref{T:Lienard_Integrability3} condition \eqref{Lienard_Inegrability_IF_C3_Cond} is identically satisfied and the system has an integrating factor given by expression~\eqref{Lienard_Inegrability_IF_C1_deg_f_0}, where the polynomial $F(x,y)$ can be chosen in the form $F(x,y)=y^2+u(x)y+v(x)$. In addition, we set $N_1=1$. Substituting integrating factor \eqref{Lienard_Inegrability_IF_C1_deg_f_0} into the partial differential equation $yM_x-[f(x)y+g(x)]M_y-f(x)M=0$ and equating to zero the coefficients of different powers of $y$ yields the relations
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n3}
\begin{gathered}
f(x)=\frac{2m+n+3}{4(m+1)}u_x,\quad g(x)=\frac12 v_x+\frac{n-2m-1}{8(m+1)}uu_x
\end{gathered}
\end{equation}
and the following equation
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n4}
\begin{gathered}
uv_x-\frac{n+1}{m+1}u_xv+\frac{n-2m-1}{4(m+1)}u^2u_x=0.
\end{gathered}
\end{equation}
Let us note that condition \eqref{Lienard_Inegrability_IF_C3_Cond} produces an explicit expression of the polynomial $f(x)$ similar to that given in relations \eqref{Lienard_Inegrability_IF_C3_n3}. Integrating equation \eqref{Lienard_Inegrability_IF_C3_n4} with respect to the function $v(x)$, we obtain
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n5}
\begin{gathered}
v(x)=\beta u^{\frac{n+1}{m+1}}+\frac14u^2,
\end{gathered}
\end{equation}
where $\beta\in\mathbb{C}$ is a constant of integration. Using Theorem \ref{T1:Lienard_2m+1} and the arguments given in the proof of Theorem \ref{T:Lienard_Integrability1_C}, we conclude that $u(x)$ is a polynomial of degree $m+1$ and $v(x)$ is a polynomial of degree $n+1$. Thus, we see that $\beta$ is non-zero (otherwise $\deg v=2(m+1)<n+1$). Further, we introduce relatively prime natural numbers $k$ and $l$ satisfying the relation $(m+1)k=(n+1)l$. It follows from expression \eqref{Lienard_Inegrability_IF_C3_n5} that there exists a polynomial $w(x)$ of degree $(m+1)/l$ such that the polynomial $u(x)$ can be represented in the form $u(x)=w^l(x)$. Hence we obtain the equality $v(x)=\beta w^k(x)+\frac14w^{2l}(x)$. Substituting the explicit representations of the polynomials $u(x)$ and $v(x)$ into relations \eqref{Lienard_Inegrability_IF_C3_n3}, we find the polynomials $f(x)$ and $g(x)$ as given in \eqref{Lienard_Inegrability_Cpartial_1}. Expressing the number $n$ from the relation $(m+1)k=(n+1)l$, we find Darboux integrating factor \eqref{Lienard_Inegrability_Int_Fact_C_p1} giving Liouvillian first integral \eqref{Lienard_Inegrability_FI_C_p1}.
\end{proof}
\textit{Remark 1.} We do not require that the polynomial $y^2+u(x)y+v(x)$ is irreducible. See also Remark 1 to Theorem \ref{T:Lienard_Integrability3}.
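\smallskip
The computational verification invoked at the beginning of the previous proof is easy to reproduce symbolically. The following sketch (ours, added for illustration; the concrete choices $k=3$, $l=1$, $w(x)=x$, $\beta=1$ are not from the original text) uses \texttt{Python} with \texttt{sympy} to confirm that $M=F^{\sigma}$ with $\sigma=-(1/2+l/k)$ satisfies the partial differential equation $yM_x-[f(x)y+g(x)]M_y-f(x)M=0$ for system \eqref{Lienard_Inegrability_Cpartial_1}; dividing this equation by $F^{\sigma-1}$ reduces it to a polynomial identity.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative choices: any coprime k, l, any beta != 0 and any
# polynomial w(x) would do.  Here m = 0, l = 1, k = 3, w(x) = x.
k, l, beta = 3, 1, sp.Integer(1)
w = x

f = sp.Rational(k + 2*l, 4)*w**(l - 1)*sp.diff(w, x)
g = sp.Rational(k, 8)*(w**(2*l - 1) + 4*beta*w**(k - 1))*sp.diff(w, x)

# F = 0 is the hyperelliptic invariant curve; M = F**sigma is the
# candidate Darboux integrating factor.
F = y**2 + w**l*y + sp.Rational(1, 4)*w**(2*l) + beta*w**k
sigma = -(sp.Rational(1, 2) + sp.Rational(l, k))

# After division by F**(sigma - 1) the PDE becomes the polynomial
# identity sigma*(y*F_x - (f*y + g)*F_y) - f*F = 0:
lhs = sigma*(y*sp.diff(F, x) - (f*y + g)*sp.diff(F, y)) - f*F
assert sp.expand(lhs) == 0
print("integrating factor verified")
\end{verbatim}
The same script with other admissible data (for instance $l=2$, $k=3$ and a quadratic $w$) runs equally well, since the underlying polynomial identity holds for arbitrary $w$, $k$, $l$, and $\beta$.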
\textit{Remark 2.} The family of systems \eqref{Lienard_Inegrability_Cpartial_1} can be transformed to the following simpler form
\begin{equation}
\label{Lienard_Inegrability_Cpartial_1_Sundman}
s_{\tau}=z,\quad z_{\tau}=-\frac{(k+2l)}{4}s^{l-1}z-\frac{k}8\left(s^{2l-1}+4\beta s^{k-1}\right)
\end{equation}
via the generalized Sundman transformation $s(\tau)=w(x)$, $z(\tau)=y$, $d\tau=w_x(x)dt$. Substituting $w(x)=s$, $y=z$ into \eqref{Lienard_Inegrability_FI_C_p1}, we find a Liouvillian first integral for systems \eqref{Lienard_Inegrability_Cpartial_1_Sundman}.
\smallskip
It follows from Theorem \ref{T1:Lienard_2m+1} that equation \eqref{Lienard_y_x} related to a Li\'{e}nard differential system~\eqref{Lienard_gen} from family ($C$) may have a polynomial solution only if $n=\deg g(x)$ is an odd number. Such a polynomial solution gives rise to an invariant algebraic curve with the generating polynomial of the first degree with respect to $y$. Let us study the Liouvillian integrability of Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($C$) possessing invariant algebraic curves with generating polynomials of the first degree with respect to $y$. Since arbitrary coefficients arise in the non-polynomial part of the series $y_{\infty}^{(l)}(x)$, $l=1$, $2$, we conclude that equation \eqref{Lienard_y_x} has at most two distinct polynomial solutions simultaneously provided that the inequality $\deg g> 2\deg f +1$ holds. In what follows we denote these polynomial solutions by $y=p_1(x)$ and $y=p_2(x)$. Note that the following relations $p_l(x)=\{h^{(l)}(x)\}_+$, $l=1$, $2$ are valid, where $h^{(l)}(x)$ is the initial part of the series $y_{\infty}^{(l)}(x)$.
\begin{theorem}\label{T:Lienard_IntegrabilityC_polynomial} A Li\'{e}nard differential system \eqref{Lienard_gen} from family ($C$) with two distinct invariant algebraic curves given by first-degree polynomials with respect to $y$ is Liouvillian integrable if and only if $n=\deg g(x)$ is an odd number, the system is of the form~\eqref{Lienard_Inegrability_Cpartial_1} and the other conditions of Theorem \ref{T:Lienard_IntegrabilityC_partial} are satisfied with the additional restriction: either $k$ is an even number or otherwise $(m+1)/l$ is an even number and the polynomial $w(x)$ has only double roots. The polynomials $p_1(x)$ and $p_2(x)$ producing the invariant algebraic curves $y-p_1(x)=0$ and $y-p_2(x)=0$ can be represented in the form
\begin{equation}
\label{Lienard_Inegrability_Cpartial_polynomials1}
p_1(x)=\sqrt{\beta} w^{\frac{k}{2}}+\frac{1}{2}w^l(x),\quad p_2(x)=-\sqrt{\beta} w^{\frac{k}{2}}+\frac{1}{2}w^l(x),\quad \beta\in\mathbb{C}\setminus\{0\}.
\end{equation}
\end{theorem}
\begin{proof}
We use item $2$ of Theorem \ref{T:Lienard_Integrability3} and the arguments given in the proof of Theorem \ref{T:Lienard_IntegrabilityC_partial}. Let us note that the hyperelliptic invariant algebraic curve of Theorem \ref{T:Lienard_IntegrabilityC_partial} with the generating polynomial $y^2+w^ly+\frac14w^{2l}+\beta w^k=(y+w^l/2)^2+\beta w^k$ splits into two distinct invariant algebraic curves $y-p_1(x)=0$ and $y-p_2(x)=0$ if and only if $n$ is an odd number and either $k$ is an even number or otherwise $(m+1)/l$ is an even number and $w(x)$ is a polynomial with only double roots. In addition, recall that the degree of the polynomial $w(x)$ equals $(m+1)/l$.
\end{proof}
\textit{Remark.} This theorem can also be proved directly without using Theorem \ref{T:Lienard_IntegrabilityC_partial}.
As an example, see Theorem \ref{T:Lienard_IntegrabilityA_partial}. Further, our goal is to demonstrate that there exist Liouvillian integrable Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($C$) for any choice of the numbers $m=\deg f(x)$ and $n=\deg g(x)$. Setting $u(x)=x^{m+1}$ in expression \eqref{Lienard_Inegrability_IF_C3_n5}, we find the following Liouvillian integrable Li\'{e}nard differential systems from family ($C$)
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n6}
\begin{gathered}
x_t=y,\quad y_t=-\frac{2m+n+3}{4}x^my-\frac{n+1}{8}\left(4\beta x^n+x^{2m+1}\right).
\end{gathered}
\end{equation}
The related Darboux integrating factor reads as
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n7}
\begin{gathered}
M(x,y)=\left(y^2+x^{m+1}y+\beta x^{n+1}+\frac14x^{2(m+1)}\right)^{-\frac{2m+n+3}{2(n+1)}}.
\end{gathered}
\end{equation}
The numbers $n=\deg g$ and $m=\deg f$ can be chosen arbitrarily. In addition, if the following conditions $n=l(m+1)-1$, $l\in\mathbb{N}$, and $l>2$ hold, then we obtain another particular family of Liouvillian integrable Li\'{e}nard differential systems
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n8}
\begin{gathered}
x_t=y,\quad y_t=-\frac{l+2}{4}u_xy-\frac{l}{8}\left(4\beta u^{l-1}+u\right)u_x,
\end{gathered}
\end{equation}
where $u(x)$ is an arbitrary polynomial of degree $m+1$. The associated Darboux integrating factor can be represented in the form
\begin{equation}
\label{Lienard_Inegrability_IF_C3_n9}
\begin{gathered}
M(x,y)=\left(y^2+uy+\beta u^{l}+\frac14u^2\right)^{-\frac{l+2}{2l}}.
\end{gathered}
\end{equation}
Now let us study the existence of non-autonomous Darboux--Jacobi last multipliers. The case $\deg f=0$ is simple. There are families of distinct Jacobi last multipliers arising as products of integrating factors \eqref{Lienard_Inegrability_IF_C1_deg_f_0}, \eqref{Lienard_Inegrability_IF_C2_deg_f_0} and non-autonomous first integrals $I^{\varkappa}(x,y,t)$, where $\varkappa\in\mathbb{C}$ and the function $I(x,y,t)$ is given by relations \eqref{Lienard_Inegrability_FI_C1_deg_f_0} and \eqref{Lienard_Inegrability_FI_C2_deg_f_0}.
\begin{lemma}\label{L:Lienard_Integrability_t3} A Li\'{e}nard differential system \eqref{Lienard_gen} satisfying the conditions $\deg g>2\deg f+1$ and $\deg f>0$ has a non-autonomous Darboux--Jacobi last multiplier of the form \eqref{JLM_gen} if and only if there exists a non-zero complex number $\omega$ such that the relation
\begin{equation}
\label{Lienard_Inegrability_IF_C3_Cond_time}
\begin{gathered}
4(m+1)f(x)+(2m+n+3)\left\{h^{(1)}_x(x)+h^{(2)}_x(x)\right\}_++2(n+1)\omega=0
\end{gathered}
\end{equation}
is identically satisfied and one of the following assertions is valid.
\begin{enumerate}
\item There exists an irreducible invariant algebraic curve $F(x,y)=0$ such that the family of Puiseux series $y^{(1)}_{\infty}(x)$ arises in the factorization of the polynomial $F(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$ as many times as the family $y^{(2)}_{\infty}(x)$ does, i.e. $N_{1}=N_{2}$. In this case the system has the unique Darboux--Jacobi last multiplier
\begin{equation}
\label{Lienard_Inegrability_IF_C1_deg_f_0_time}
\begin{gathered}
M(x,y,t)=\left\{F(x,y)\right\}^{-\frac{2m+n+3}{2(n+1)N_{1}}}\exp[\omega t].
\end{gathered}
\end{equation}
\item There exist two distinct irreducible invariant algebraic curves $F_1(x,y)=0$ and $F_2(x,y)=0$ such that the following relation $N_{1,j}\neq N_{2,j}$, $j=1$, $2$ is valid, where $N_{l,j}$ is the number of times the family of Puiseux series $y^{(l)}_{\infty}(x)$ enters the factorization of the polynomial $F_j(x,y)$ in the ring $\mathbb{C}_{\infty}\{x\}[y]$. In this case the system has the unique Darboux--Jacobi last multiplier
\begin{equation}
\label{Lienard_Inegrability_IF_C2_deg_f_0_time}
\begin{gathered}
M(x,y,t)=\frac{\left\{F_1(x,y)\right\}^{\frac{(2m+n+3)(N_{2,2}-N_{1,2})}{2(n+1)\Omega}}}{ \left\{F_2(x,y)\right\}^{\frac{(2m+n+3)(N_{2,1}-N_{1,1})}{2(n+1)\Omega}}}\exp[\omega t],
\end{gathered}
\end{equation}
where the parameter $\Omega$ is given by the relation $\Omega= N_{1,2}N_{2,1}-N_{1,1}N_{2,2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We repeat the proof of Theorem \ref{T:Lienard_Integrability3}. The only difference is in condition~\eqref{Lienard_Inegrability_IF_C3}. In the non-autonomous case this condition takes the form
\begin{equation}
\label{Lienard_Inegrability_IF_C3_time}
\begin{gathered}
\sum_{j=1}^{K}d_jN_{1,j}\left(f(x)+\left\{h^{(1)}_x(x)\right\}_+\right) +\sum_{j=1}^{K}d_jN_{2,j}\left(f(x)+\left\{h^{(2)}_x(x)\right\}_+\right) =\omega-f(x),
\end{gathered}
\end{equation}
where $\omega$ is a non-zero complex constant.
\end{proof}
We have established that Li\'{e}nard differential systems \eqref{Lienard_gen} from family ($C$) have neither rational nor Darboux first integrals. Let us note that the famous Duffing oscillators belong to family ($C$). These oscillators are studied in articles \cite{Demina07, Demina16} in detail.
\section{Quartic Li\'{e}nard differential equations with a quadratic damping function}\label{S:Example_L24}
The aim of the present section is to demonstrate that the necessary and sufficient conditions of Liouvillian integrability presented in the previous sections can be used to find all Liouvillian integrable subfamilies of Li\'{e}nard differential systems without performing the classification of irreducible invariant algebraic curves. As an example, we consider Li\'{e}nard differential systems with the restrictions $\deg f= 2$ and $\deg g =4$:
\begin{equation}
\begin{gathered}
\label{Lienard1_DS24_main}
x_t=y,\quad y_t=-(\zeta x^2+\beta x+\alpha)y-(\varepsilon x^4+\xi x^3+ex^2+\sigma x+\delta),\quad \zeta\varepsilon\neq0.
\end{gathered}
\end{equation}
By introducing suitable rescalings and shifts, we may set $\zeta=3$, $\varepsilon=-3$, and $\beta=0$ without loss of generality. In what follows we work with the following systems
\begin{equation}
\begin{gathered}
\label{Lienard1_DS24}
x_t=y,\quad y_t=-(3 x^2+\alpha)y+3 x^4-\xi x^3-ex^2-\sigma x-\delta.
\end{gathered}
\end{equation}
Let us solve the integrability problem for systems \eqref{Lienard1_DS24}.
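The Darboux machinery developed above is easy to check on concrete data. As an illustration, the following sketch (ours; it uses \texttt{Python} with \texttt{sympy} and the case ($I$) data of Theorem \ref{T:L24_untegrability} below) verifies that the two curves involved are invariant, recovers their cofactors by polynomial division, and confirms the integrating-factor balance $d_1\lambda_1+d_2\lambda_2=-\text{div}\,\mathcal{X}=f$.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

# Case (I) parameter values of the theorem below:
f = 3*x**2 - sp.Rational(25, 12)                       # f(x) = 3x^2 + alpha
g = (-3*x**4 - 7*x**3 - 5*x**2
     - sp.Rational(25, 36)*x + sp.Rational(125, 432))  # y_t = -f(x)*y - g(x)

F1 = (y + x**3 + sp.Rational(3, 2)*x**2
      + sp.Rational(5, 12)*x - sp.Rational(25, 216))
F2 = y - x**2 - sp.Rational(5, 3)*x - sp.Rational(25, 36)

cofactors = []
for F in (F1, F2):
    XF = sp.expand(y*sp.diff(F, x) - (f*y + g)*sp.diff(F, y))
    lam, rem = sp.div(XF, F, y)   # invariance: X(F) = lam*F, remainder 0
    assert sp.expand(rem) == 0
    cofactors.append(lam)

# M = F1**(-2/3) * F2**(-1) is a Darboux integrating factor exactly when
# d1*lam1 + d2*lam2 = -div(X) = f:
d1, d2 = -sp.Rational(2, 3), -1
assert sp.expand(d1*cofactors[0] + d2*cofactors[1] - f) == 0
print("case (I) verified")
\end{verbatim}
Such checks do not replace the classification arguments, but they make it easy to validate candidate invariant algebraic curves and integrating factors produced by the method of Puiseux series.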
\begin{theorem}\label{T:L24_untegrability} Quartic Li\'{e}nard differential systems with a quadratic damping function \eqref{Lienard1_DS24} are Liouvillian integrable if and only if the tuple of the parameters $(\alpha, \xi, \delta, \sigma, e)$ equals one of the following:
\begin{equation}
\label{L24_Integrability_main}
\begin{gathered}
I:\quad(\alpha, \xi, \delta, \sigma, e )=\left(-\frac{25}{12}, -7, \frac{125}{432}, -\frac{25}{36},-5\right);\\
II:\quad(\alpha, \xi, \delta, \sigma, e )=\left( -\frac{61}{12}, -7, \frac{3905}{432}, \frac{623}{36},4\right).\hfill
\end{gathered}
\end{equation}
The related Darboux integrating factors can be represented as
\begin{equation}
\label{L24_Integrability_Integrating_factors}
\begin{gathered}
I:\quad M(x,y)= \frac{1}{\left( y+{x}^{3}+\frac32{x}^{2}+\frac {5}{12}x-{\frac{25}{216}} \right)^{\frac23} \left( y-{x}^{2}-\frac53x-{\frac{25}{36}} \right)};\\
II:\quad M(x,y)=\frac{\left(y+x^3 + \frac32x^2 - \frac{31}{12}x - \frac{781}{216}\right)^{\frac13}}{{y}^{2} +{\frac { \left( 6x+5 \right) \left( 6x-13 \right) \left( 6x+11 \right) y}{216}}-{\frac { \left( 6x-13 \right) \left( 6x+11 \right) ^{2} \left( 6x+5 \right) ^{2}}{7776}} }.
\end{gathered}
\end{equation}
\end{theorem}
\begin{proof}
Our proof is based on the results of Theorems \ref{T1:Lienard_gen} and \ref{T:Lienard_Integrability2}. The Puiseux series given in relation \eqref{Lienard1_F_series} are now the following
\begin{equation}
\begin{gathered}
\label{Lienard24_PS1}
y^{(1)}_{\infty}(x)=-x^3-\frac{3}{2}x^2+\left(\xi-\alpha+\frac{9}{2}\right)x+b_3+\sum_{l=1}^{\infty}b_{l+3}x^{-l};\hfill\\
y^{(2)}_{\infty}(x)=x^2-\frac{1}{3}(\xi+2)x+\frac13\left(\xi+2-\alpha-e\right)+\sum_{l=1}^{\infty}a_{l+2}x^{-l}.
\end{gathered}
\end{equation}
The Puiseux series $y^{(1)}_{\infty}(x)$ has an arbitrary coefficient $b_3$ and exists whenever the restriction $e=3(27+6\xi-4\alpha)/4$ holds. The Puiseux series $y^{(2)}_{\infty}(x)$ possesses uniquely determined coefficients. Note that we use new designations for the coefficients of the Puiseux series $y^{(2)}_{\infty}(x)$. The series $y^{(1)}_{\infty}(x)$ terminates at the zero term under the condition
\begin{equation}
\begin{gathered}
\label{Lienard24_Invc_2con}
\delta=\frac{1}{24}(2\xi+9)(18\alpha+4\xi\alpha-4\sigma-4\xi^2-36\xi-81).
\end{gathered}
\end{equation}
Thus, we see that systems \eqref{Lienard1_DS24} possess the invariant algebraic curve $F_1(x,y)=0$ of Theorem~\ref{T:Lienard_Integrability2} whenever $e=3(27+6\xi-4\alpha)/4$ and $\delta$ is of the form \eqref{Lienard24_Invc_2con}. The related polynomial and the cofactor can be represented as
\begin{equation}
\begin{gathered}
\label{Lienard24_Invc_2}
F_1(x,y)=y+x^3+\frac{3}{2}x^2-\left(\xi-\alpha+\frac{9}{2}\right)x-\frac13(\sigma+\xi^2-\xi\alpha)\\
-\frac14(27+12\xi-6\alpha),\quad \lambda_1(x,y)=3x-\xi-\frac92.
\end{gathered}
\end{equation}
Condition \eqref{Lienard_Inegrability_C1_cond} gives the following restriction: $\xi=-7$. Finally, we use Theorem~\ref{T:Darboux_pols_computation_Lienard} to find an irreducible invariant algebraic curve that exists simultaneously with $F_1(x,y)=0$ and is given by expression \eqref{Lienard1_F} where $k=1$ and $N\in\mathbb{N}$. As a result, we obtain the values of the parameters as presented in relation \eqref{L24_Integrability_main}.
The related invariant algebraic curves are given by the polynomials
\begin{equation}
\begin{gathered}
\label{Lienard24_Invc_3}
(I):\quad F_2(x,y)=y-{x}^{2}-\frac53x-{\frac{25}{36}},\quad \lambda_2(x,y)=-3x^2-2x+\frac{5}{12};\hfill\\
(II):\quad F_2(x,y)={y}^{2} +{\frac { \left( 6x+5 \right) \left( 6x-13 \right) \left( 6x+11 \right) y}{216}}-{\frac { \left( 6x-13 \right)}{7776}}\hfill\\
\times\left( 6x+11 \right)^{2} \left( 6x+5 \right)^{2},\quad\lambda_2(x,y)=-3x^2 + x + \frac{71}{12}.
\end{gathered}
\end{equation}
We calculate explicit expressions of Darboux integrating factors with the help of expression~\eqref{Lienard_Inegrability_Int_fact}.
\end{proof}
In case ($I$) a Liouvillian first integral is given by expression \eqref{Lienard_Inegrability_FI_A_p1}, where one sets
\begin{equation}
\begin{gathered}
\label{Lienard24_LFI_v}
l=2,\quad k=3,\quad \beta=-1,\quad v(x)=x+\frac56.
\end{gathered}
\end{equation}
In case ($II$) a Liouvillian first integral reads as
\begin{equation}
\begin{gathered}
\label{Lienard24_LFI_v2}
I(x,y)=\frac{\sqrt{w(x)}}{w(x)}\sum_{l=0}^2\left(\left\{\sqrt{w(x)}-v(x)\right\} U(x)\ln\left(z^{\frac13}-U(x)\exp\left(\frac{2\pi l i}{3}\right)\right)\right.\\
\left.+ \left\{\sqrt{w(x)}+v(x)\right\} V(x)\ln\left(z^{\frac13}-V(x)\exp\left(\frac{2\pi l i}{3}\right)\right)\right)\exp\left(\frac{2\pi l i}{3}\right)+6z^{\frac13},
\end{gathered}
\end{equation}
where we have introduced the notation
\begin{equation}
\begin{gathered}
\label{Lienard24_LFI_U_V}
U(x)= \left\{u(x)-v(x)+\sqrt{w(x)}\right\}^{\frac13},\quad V(x)= \left\{u(x)-v(x)-\sqrt{w(x)}\right\}^{\frac13},\quad z=y+u(x).
\end{gathered}
\end{equation}
The polynomials $u(x)$, $v(x)$, $w(x)$ take the form
\begin{equation}
\begin{gathered}
\label{Lienard24_LFI_u_v_w}
u(x)=\frac{\left(6 x +11\right) \left(36 x^{2}-12 x -71\right)}{216},\quad v(x)=\frac{\left(6 x +5\right) \left(6 x -13\right) \left(6 x +11\right)}{432},\\
w(x)=\frac{\left(6 x -13\right) \left(6 x +11\right)^{3} \left(6 x +5\right)^{2}}{186624}.
\end{gathered}
\end{equation}
Concluding this section we note that the method of Puiseux series and the explicit expression \eqref{General_Fl_cof} of the cofactor of an invariant algebraic curve greatly facilitate the classification of integrable multi-parameter planar differential systems.
\section{Conclusion}\label{S:Conclusion}
This work completely solves the Liouvillian integrability problem for polynomial Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the condition $\deg g\neq2\deg f+1$. In the case $\deg g=2\deg f+1$ our results are complete for the non-resonant systems. We say that a Li\'{e}nard differential system with the restriction $\deg g=2\deg f+1$ is resonant near infinity if equation~\eqref{eq:DP2_5_2} possesses a positive rational solution. The resonance condition introduces a restriction on the highest-degree coefficients $f_0$ and $g_0$ of the polynomials $f(x)$ and $g(x)$. We have established that a generic nonlinear polynomial Li\'{e}nard differential system \eqref{Lienard_gen} with fixed degrees of the polynomials $f(x)$ and $g(x)$ is not Liouvillian integrable provided that the following restriction $\deg g>\deg f$ is valid. However, as we have demonstrated, Liouvillian integrable subfamilies exist for any degrees of the polynomials $f(x)$ and $g(x)$ whenever $\deg g>\deg f$. Besides that, we have classified polynomial Li\'{e}nard differential systems possessing non-autonomous Darboux first integrals and non-autonomous Jacobi last multipliers with a time-dependent exponential factor.
Some of our results describing explicit families of Liouvillian integrable Li\'{e}nard differential systems are gathered in Table~\ref{Tb:Lienard_IC}.
\bigskip
\textit{Remarks to Table \ref{Tb:Lienard_IC}}
\begin{enumerate}
\item The natural numbers $l$ and $k$ are both greater than $1$.
\item Symbols $D$, $E$, and $L$ mean Darboux, elementary, and Liouvillian, respectively. Symbols $\overline{D}$ and $\overline{E}$ mean non-Darboux and non-elementary.
\item Family $(A)_1$ gives all Liouvillian integrable families of Li\'{e}nard differential systems \eqref{Lienard_gen} such that the related equation \eqref{Lienard_y_x} possesses two distinct polynomial solutions and the following inequalities $\deg f<\deg g<2\deg f+1$ are valid.
\item Families $(B)_1$ and $(B)_2$ produce all non-resonant Darboux integrable Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the restriction $\deg g=2\deg f+1$.
\item Families $(B)_1$, $(B)_2$, $(B)_3$, and $(B)_4$ include all non-resonant Liouvillian integrable Li\'{e}nard differential systems \eqref{Lienard_gen} satisfying the restriction $\deg g=2\deg f+1$. Note that families $(B)_3$ and $(B)_4$ also involve integrable resonant systems. Consequently, additional restrictions should be imposed if one is interested only in the non-resonant case. These restrictions are described in Theorems \ref{T:Lienard_degenerate_Liouville1} and \ref{T:Lienard_degenerate_Liouville2}.
\item Family $(C)_1$ gives all Liouvillian integrable families of Li\'{e}nard differential systems~\eqref{Lienard_gen} with $\deg g>2\deg f+1$ possessing either a hyperelliptic invariant algebraic curve or two distinct invariant algebraic curves with generating polynomials of the first degree with respect to $y$.
\end{enumerate}
\begin{table}[h!]
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline Family & $f(x)$, $g(x)$ & $M(x,y)$ & $I(x,y)$, type of first integral \\ \hline
$(A)_1$ & $f(x)=-\left[k\beta v^{k-1}+(k+l)v^{l-1}\right]v_x $ & $\frac{z^{-\frac{l}{k}}}{y-v^l} $ & $\displaystyle \frac{k\beta^{\frac{l}{k}}}{k-l}z^{\frac{k-l}{k}}+ \sum_{j=0}^{m}\exp\left[-\frac{\pi l(2j+1)i}{k}\right]$ \\
Th \ref{T:Lienard_IntegrabilityA_partial} & $g(x)=k\left[\beta v^k+v^l\right]v^{l-1}v_x $ & & $ \times\ln\left\{z^{\frac{1}{m+1}} -\exp\left[\frac{\pi(2j+1)i}{m+1}\right][\beta v^k]^{\frac{1}{m+1}}\right\}$ \\
& $z=y-\beta v^k-v^l$, $\frac{m+1}{n-m}=\frac{k}{l}$, $(l,k)=1$ & & $\overline{D}EL$ \\
& $v(x)\in\mathbb{C}[x]$, $\deg v=\frac{n-m}{l}$, $\beta\in\mathbb{C}\setminus\{0\}$ & & \\ \hline
$(B)_1$ & $f(x)=-\frac{2f_0}{f_0-\delta}q_{1,\,x} $, $g(x)=\frac{f_0+\delta}{f_0-\delta}q_{1,\,x}q_1$ & $\frac{1}{y-\frac{(f_0+\delta)}{(f_0-\delta)}q_1} $ & $ \left[y-q_1\right]^{\delta-f_0}\left[y-\frac{(f_0+\delta)}{(f_0-\delta)}q_1\right]^{\delta+f_0}$ \\
Th \ref{T:Lienard_degenerate_Darboux1} & $q_1(x)\in\mathbb{C}[x]$, $\delta$, $f_0\in\mathbb{C}\setminus\{0\}$ & $\times\frac{1}{y-q_1}$ & $DEL$ \\
& $q_1(x)=\frac{\delta-f_0}{2\{m+1\}}x^{m+1}+o(x^{m+1})$ & & \\ \hline
$(B)_2$ & $f(x)=-2q_x $, $g(x)=qq_x$ & $\frac{1}{\left\{y-q\right\}^2} $ & $ \left[y-q(x)\right]\exp\left[-\frac{q(x)}{y-q(x)}\right]$ \\
Th \ref{T:Lienard_degenerate_Darboux2} & $q(x)\in\mathbb{C}[x]$, $\deg q=m+1$ & & $DEL$ \\ \hline
$(B)_3$ & $f(x)=-\left[\frac{\{(2d_1+1)l+k\}l}{k-l}u^{l-1}\right.
$ & $\frac{[y-p_1]^{d_1}}{[y-p_2]^{d_1+1+\frac{k}{l}}}$ & $ \frac{p_2B\left(\frac{y-p_1}{p_2-p_1};1+d_1, -d_1-\frac{k}{l}\right)}{\{p_2-p_1\}^{\frac{k}{l}}}$ \\
Th \ref{T:Lienard_degenerate_Liouville_polynomial} & $ \left.+\beta(l+k)u^{k-1}\right]u_x$ & & $- \frac{B\left(\frac{y-p_1}{p_2-p_1};1+d_1,1-d_1-\frac{k}{l}\right)}{\{p_2-p_1\}^{\frac{k}{l}-1}}$ \\
& $g(x)=\left[l\beta^2u^{2k-1}+ \frac{\{(2d_1+1)l+k\}l\beta}{k-l}\right.$ & & $\overline{D} \overline{E}L$ ($d_1\not\in\mathbb{Q}$) \\
& $\times u^{k+l-1}\left.+\frac{(ld_1+k)(d_1+1)l^2}{(k-l)^2}u^{2l-1}\right]u_x$ & & \\
& $p_1(x)=\beta u^k(x)+\frac{(d_1+1)l}{k-l}u^l(x)$ & & \\
& $p_2(x)=\beta u^k(x)+\frac{(ld_1+k)}{k-l}u^l(x)$ & & \\
& $(l,k)=1$, $d_1$, $\beta\in\mathbb{C}\setminus\{0\}$ & & \\
& $u(x)\in\mathbb{C}[x]$, $\deg u=\frac{m+1}{\max\{k,l\}}$ & & \\ \hline
$(B)_4$ & $f(x)=\left[\frac{2l^2}{l-k}v^{l-1}-(l+k)\beta v^{k-1}\right]v_x $ & $\frac{\exp\left[\frac{v^l}{y-q}\right]}{[y-q]^{\frac{l+k}{l}}}$ & $ v^{l-k}\gamma\left(-\frac{l-k}{l},\frac{v^l}{q-y}\right)-\frac{q}{v^{k}}\gamma\left(\frac{k}{l},\frac{v^l}{q-y}\right)$ \\
Th \ref{T:Lienard_degenerate_Liouville2} & $g(x)= \left[\frac{l^3}{(l-k)^2}v^{2l-1}+l\beta^2 v^{2k-1}\right. $ & & $\overline{D} \overline{E}L$ \\
& $\left.-\frac{2l^2\beta}{l-k}v^{l+k-1}\right]v_x $ & & \\
& $q(x)=-\frac{l}{l-k}v^l+\beta v^k $, $\beta\in\mathbb{C}\setminus\{0\}$ & & \\
& $v(x)\in\mathbb{C}[x]$, $\deg v=\frac{m+1}{\max\{k,l\}}$, $(l,k)=1$ & & \\ \hline
$(C)_1$ & $f(x)=\frac{(k+2l)}{4}w^{l-1}w_x $ & $z^{-\left(\frac12+\frac{l}{k}\right)}$ & $ {}_2F_1\left(\frac12,\frac12+\frac{l}{k};\frac32;-\frac{(2y+w^l)^2}{4\beta w^k}\right)$\\
Th \ref{T:Lienard_IntegrabilityC_partial} & $g(x)= \frac{k}8\left(w^{2l-1}+4\beta w^{k-1}\right)w_x $ & & $\times\frac{(2l-k)(2y+w^l)}{4kw^{\frac{k}2}\beta^{\frac12+\frac{l}{k}}}+z^{\frac12-\frac{l}{k}}$ \\
Th \ref{T:Lienard_IntegrabilityC_polynomial} & $z=\left[y+\frac{w^l}{2}\right]^2+\beta w^k$, $\frac{n+1}{m+1}=\frac{k}{l}$ & & $\overline{D} \overline{E}L$ \\
& $w(x)\in\mathbb{C}[x]$, $\deg w=\frac{m+1}{l}$, $(l,k)=1$ & & \\ \hline
\end{tabular}
\caption{The explicit Liouvillian integrable families of Li\'{e}nard differential systems.}
\label{Tb:Lienard_IC}
\end{table}
Let us note that families $(B)_1$ and $(B)_2$ are those given by the Chiellini integrability condition $\{f(x)/g(x)\}_x=\alpha f(x)$, see~\cite{Chiellini01}. Remarkably, Chiellini integrable Li\'{e}nard differential systems can be linearized via generalized Sundman transformations~\cite{Berkovich01}. Other integrable families from Table \ref{Tb:Lienard_IC} can also be transformed to a simpler form via generalized Sundman transformations, see the remarks and comments to Theorems \ref{T:Lienard_IntegrabilityA_partial}, \ref{T:Lienard_degenerate_Liouville_polynomial}, \ref{T:Lienard_degenerate_Liouville2}, and \ref{T:Lienard_IntegrabilityC_partial}. These systems, with the exception of a number of particular cases that appear in \cite{Lakshmanan01, Lakshmanan02, Lakshmanan03, Polyanin, Stachowiak}, seem to be new. Let us enumerate some unsolved problems related to the integrability and solvability of Li\'{e}nard differential systems.
Despite the fact that the subset of resonant Li\'{e}nard differential systems is of Lebesgue measure zero in the set of all polynomial Li\'{e}nard differential systems satisfying the condition $\deg g=2\deg f+1$, it is an interesting open problem to perform a classification of invariant algebraic curves and integrable subfamilies of particular resonant polynomial Li\'{e}nard differential systems provided that only the resonance condition is imposed on the parameters of the systems. The method of Puiseux series~\cite{Demina11, Demina18} can deal with each family of resonant Li\'{e}nard differential systems characterized by a fixed positive rational Fuchs index separately. Note that several novel families of Liouvillian integrable resonant systems are presented in Theorems~\ref{T:Lienard_degenerate_Liouville_polynomial} and \ref{T:Lienard_degenerate_Liouville2}, see also Corollary $2$ after the latter theorem. In addition, a number of integrable resonant families with $\deg f=2$ and $\deg g =5$ are found via $\lambda$-symmetries in~\cite{Ruiz01}. These families possess an exciting property: they simultaneously have autonomous and non-autonomous Darboux first integrals given by expression \eqref{FI_t_gen} with $\omega=0$ and $\omega\neq0$, respectively. This fact allowed A.~Ruiz and C.~Muriel to obtain nice expressions of the general solutions. Systems \eqref{Lienard_degenerate_Darboux_system_time} with $\delta=\pm mf_0/(m+2)$ have the same property. Along with this, it is a difficult open problem to perform a classification of integrable polynomial Li\'{e}nard differential systems with non-Liouvillian first integrals. At the moment only particular examples that can be transformed to linear equations are available; for more details see articles \cite{Lienard-Riccati, Morales-Ruiz01, Gine_Airy, Demina_Gine_Valls}. Another important problem is to study rational Li\'{e}nard differential systems. If the functions $f(x)$ and $g(x)$ in expression \eqref{Lienard_gen} are rational, then systems~\eqref{Lienard_gen} give rise to the following polynomial differential systems in the plane
\begin{equation}
\label{Lienard_gen_rational}
x_t=h(x)y,\quad y_t=-\tilde{f}(x)y-\tilde{g}(x),\quad h(x), \tilde{f}(x), \tilde{g}(x)\in\mathbb{C}[x].
\end{equation}
Investigating the analytic and qualitative properties of these systems with respect to the degrees of the polynomials $h(x)$, $\tilde{f}(x)$, and $\tilde{g}(x)$ is a future challenge.
\section{Acknowledgments}
This research was supported by Russian Science Foundation grant 19--71--10003.
\nocite{*}
\bibliographystyle{apa}
{ "timestamp": "2022-06-24T02:09:48", "yymm": "2110", "arxiv_id": "2110.14306", "language": "en", "url": "https://arxiv.org/abs/2110.14306", "abstract": "We provide the necessary and sufficient conditions of Liouvillian integrability for Liénard differential systems describing nonlinear oscillators with a polynomial damping and a polynomial restoring force. We prove that Liénard differential systems are not Darboux integrable excluding subfamilies with certain restrictions on the degrees of the polynomials arising in the systems. We demonstrate that if the degree of a polynomial responsible for the restoring force is greater than the degree of a polynomial producing the damping, then a generic Liénard differential system is not Liouvillian integrable with the exception of linear Liénard systems. However, for any fixed degrees of the polynomials describing the damping and the restoring force we present subfamilies possessing Liouvillian first integrals. As a by-product of our results, we find a number of novel Liouvillian integrable subfamilies. In addition, we study the existence of non-autonomous Darboux first integrals and non-autonomous Jacobi last multipliers with a time-dependent exponential factor.", "subjects": "Exactly Solvable and Integrable Systems (nlin.SI)", "title": "Integrability and solvability of polynomial Liénard differential systems", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.98527138442035, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7080104860672292 }
https://arxiv.org/abs/2001.01499
A decomposition of Calderón--Zygmund type and some observations on differentiation of integrals on the infinite-dimensional torus
In this note we will show a Calderón--Zygmund decomposition associated with a function $f\in L^1(\mathbb{T}^{\omega})$. The idea relies on an adaptation of a more general result by J. L. Rubio de Francia in the setting of locally compact groups. Some related results about differentiation of integrals on the infinite-dimensional torus are also discussed.
\section{Introduction} In \cite{rdf78}, Jos\'{e} L. Rubio de Francia (JLR) showed a result on differentiation of integrals in the context of a locally compact group $G$, which contained a decomposition of Calder\'on--Zygmund type \cite[Ch. I, Lemma 1]{CZdecomp} under certain conditions. In view of the publication date, his study can be considered contemporary with the one by Edwards and Gaudry in \cite[Ch. 2]{edwardsgau}. In this note we revisit and adapt those results by JLR to the case of the (compact, abelian) group $\mathbb{T}^\omega$ (the infinite torus) defined below. Indeed, our goals are the following:
\begin{enumerate}
\item Present a decomposition of Calder\'{o}n--Zygmund (CZ) type in $\mathbb{T}^\omega$, devised by JLR.
\item Observe some issues on differentiation of integrals in $\mathbb{T}^\omega$.
\end{enumerate}
Let $\mathbb{T}=\{z\in\mathbb{C}\,\vert\, \abs{z}=1\}$ be the one-dimensional torus, identified naturally with the interval $[0,1)$ of the real line through the group isomorphism $e^{2\pi it}\longleftrightarrow t$. We can also identify $\mathbb{T}$ with the additive quotient group $\mathbb{R}/\mathbb{Z}$. We denote by $\mathbb{T}^\omega$ the compact group formed by the product of countably many copies of $\mathbb{T}$ (\emph{complete direct sum}, \cite[B7.]{rudinfg}). We will call it briefly the \emph{infinite torus}. The operation in the group $\mathbb{T}^\omega$ is the sum (mod 1) of real sequences, with identity element $\bar{0}:=(0,0,\ldots)$. For a fixed $n\in\mathbb{N}$ we can write the infinite torus, of points $x=(x_1,x_2,\ldots)$, as the Cartesian product $\mathbb{T}^n\times\mathbb{T}^{n,\omega}$ of an $n$-dimensional torus $\mathbb{T}^n$ of points $x_{(n}=(x_1,\ldots,x_n)$ times a copy of the infinite torus itself, denoted $\mathbb{T}^{n,\omega}$, of points $x^{(n}=(x_{n+1},x_{n+2},\ldots)$. We denote by $m$, or $dx$, the Haar measure (translation invariant) on $\mathbb{T}^\omega$, normalized such that $m(\mathbb{T}^\omega)=1$. This measure coincides \cite[\S 22]{hewitstrom} with the measure product of countably many copies of the Lebesgue measure $|\cdot|$ on $\mathbb{T}$, so the basic $m$-measurable sets are the so-called \emph{intervals}, i.e. subsets of $\mathbb{T}^\omega$ of the form $I=\prod_{j\in\mathbb{N}} I_j$, where $I_j$ is an interval of $\mathbb{T}$ for each $j$, and $\exists\,N\in\mathbb{N}$ such that $I_{j}=\mathbb{T}$ for all $j>N$. The measure of the interval $I$ is then $m(I)=\int_{\mathbb{T}^\omega} \chi_I(x)\,dx=\prod_{j=1}^N |I_j|$. The space $\mathbb{T}^\omega$ is metrizable. For instance, the function
\begin{equation}\label{metrics}
d(x,y)=\sum_{n=1}^\infty\frac{|x_n-y_n|}{2^n}\quad (x,y\in\mathbb{T}^\omega)
\end{equation}
defines a metric in $\mathbb{T}^\omega$ \cite[p. 157]{saks}. We write $\delta(S):=\sup_{x,y\in S}d(x,y)$ for the diameter of the set $S\subset\mathbb{T}^\omega$. The $\sigma$-algebra $\mathcal{B}$ of Borel sets in $\mathbb{T}^\omega$, which is the smallest $\sigma$-algebra containing the open intervals, coincides with the least $\sigma$-algebra containing the open balls with respect to the metric \eqref{metrics} \cite[II \S 2.4]{shiry}. The study of Harmonic Analysis on the infinite torus finds a motivation, on one hand, because it constitutes a logical extension of the $n$-dimensional setting in which estimates have to be obtained independent of the dimension $n$.
On the other hand, $\{e^{2\pi i x_k}\colon k=1,2,\ldots\}$ is a system of independent random variables uniformly distributed on the complex unit circle (i.e., of a complexified version of Rademacher's functions), whose natural completion in $L^2$ is the trigonometric system on $\mathbb{T}^\omega$. Then, the Fourier series of infinitely many variables turn out to be the complex analogue of the Walsh series. Fourier series in $\mathbb{T}^\omega$ also have connections with the Dirichlet series \cite{bohr} and with Prediction Theory \cite{helsonlow}. All these issues were already pointed out by JLR in \cite{rdf80} (see also references therein), where he studied pointwise and norm convergence of Fourier series of infinite variables, although the proofs are just sketched. There is considerable interest in the infinite torus from the point of view of Potential Theory, see \cite{bendikov,b3, b1,b2,berg}. Apart from this, problems of approximation theory on $\mathbb{T}^\omega$ have been analyzed for instance in \cite{Platonov}. The JLR decomposition of Calder\'on--Zygmund type in $\mathbb{T}^\omega$ will be shown in Section \ref{sec:condi} (see Subsection \ref{subsec:CZdes}), and the issues related to differentiation of integrals in $\mathbb{T}^\omega$ are contained in Section \ref{sec:dif}. To be precise, we will look at three differentiation bases. First, the \textit{Rubio de Francia restricted basis} $\mathcal{R}_0$, which is the family associated with the Calder\'on--Zygmund decomposition in Section \ref{sec:condi} and that differentiates $L^1(\mathbb{T}^\omega)$, see Corollary \ref{cor:RDFr}. Second, the \textit{Rubio de Francia basis} $\mathcal{R}$, which arises naturally in the light of the general results by JLR in \cite[Thm. 8]{rdf78}. For such a basis several questions remain open concerning differentiation and the associated maximal function, see Subsection \ref{subsec:DT}. Finally, we present a negative result of differentiation on $\mathbb{T}^\omega$ relative to the so-called \textit{extended Rubio de Francia basis} $\mathcal{R^\ast}$, see Subsection \ref{subsec:neg}.
\section*{Acknowledgments}
The original idea of the Calder\'on--Zygmund (CZ) decomposition presented in Section \ref{sec:condi} was sketched in a personal communication of JLR to the first author in 1977, in Madrid. The authors would like to thank the referees for their very careful reading and useful comments which indeed improved the presentation of the paper.
\section{A CZ decomposition in $\mathbb{T}^\omega$}
\label{sec:condi}
For completeness we will recall some concepts from Probability Theory used later in this section (see for instance \cite[p. 89--94]{steintopics}, \cite[Ch. 5]{edwardsgau}, \cite[2.7]{shiry}).
\begin{defi} Let $(X,\mathcal{A},\mu)$ be a finite measure space, and $\mathcal{B}$ be a $\sigma$-algebra contained in $\mathcal{A}$. The \textit{conditional expectation of $f$ given $\mathcal{B}$} is the (unique $\mu$-a.e.) $\mathcal{B}$-measurable function $E^{\mathcal{B}}f$ (the notation is that of \cite{neveu}), such that
\begin{equation}\label{especond}
\int_B f\,d\mu = \int_B (E^{\mathcal{B}}f)\,d\mu \quad\forall\, B\in\mathcal{B}.
\end{equation}
E.g., suppose that $\{B_n\}_{n=1}^\infty$ is a countable division of $X$ into $\mathcal{A}$-measurable sets of positive measure, and consider the least $\sigma$-algebra $\mathcal{B}$ which contains those sets (we write $\mathcal{B}:=\sigma(\{B_n\})$).
Then,
\begin{equation} \label{comparfm1}
E^{\mathcal{B}}f(x)=\sum_n f_{B_n}\chi_{B_n}(x)
\end{equation}
(where $f_{B}:=\frac1{\mu(B)}\int_{B} f\,d\mu$ and $\chi_S$ denotes the characteristic function of the set $S$), since the function $s(x)$ on the right-hand side of \eqref{comparfm1} is $\mathcal{B}$-measurable and $\int_{B_n}s\,d\mu=f_{B_n} \mu(B_n)=\int_{B_n}f\,d\mu$ holds.
\end{defi}
\begin{propi}\label{propi2} If $\mathcal{B}\subset \mathcal{C}$ are sub-$\sigma$-algebras of $\mathcal{A}$, then $E^{\mathcal{B}}(E^{\mathcal{C}}f)=E^{\mathcal{B}}f$ a.e.
\end{propi}
\begin{defi} Let $(X,\mathcal{A},\mu)$ be a finite measure space and let
$$ \mathcal{B}_1\subset \mathcal{B}_2\subset \cdots\subset \mathcal{B}_n\subset \mathcal{B}_{n+1}\subset \cdots $$
be an increasing sequence of sub-$\sigma$-algebras of $\mathcal{A}$. A sequence of functions $\{f_n\}_{n\in\mathbb{N}}\subset L^1(\mu)$ such that, for each $n\ge1$ the function $f_n$ is $\mathcal{B}_n$-measurable and $E^{\mathcal{B}_n}f_{n+1}=f_n$ (a.e.), is called a \emph{martingale}.
\end{defi}
For instance, for every $f\in L^p(\mu)$ ($1\le p\le \infty$) the sequence $f_n:=E^{\mathcal{B}_n}f$ ($n\in\mathbb{N}$) is a martingale, since $E^{\mathcal{B}_n}f_{n+1}=E^{\mathcal{B}_n}(E^{\mathcal{B}_{n+1}}f)=E^{\mathcal{B}_n}f$ a.e., according to Property~\ref{propi2}. Moreover the following holds:
\begin{teor} \label{maxmartinteo}
\emph{(i)} The maximal operator, associated to $\{\mathcal{B}_n\}$, defined on $L^1(\mu)$ by $E^\ast f(x):=\sup_{n}|f_n(x)|$, where $f_n=E^{\mathcal{B}_n}f$ ($n\in\mathbb{N}$), is of weak type $(1,1)$ \emph{(Doob's inequality \cite[VII, Thm. 3.2]{doob})}, and of strong type $(p,p)$, $1<p\le\infty$.
\emph{(ii)} Furthermore, $(f_n)$ converges almost everywhere. Actually,
$$ \lim_{n\to\infty}f_n(x)=(E^\mathcal{B} f)(x) \ \text{$\mu$-a.e.,} $$
where $\mathcal{B}=\sigma\bigl(\bigcup_{n=1}^\infty \mathcal{B}_n\bigr)$.
\end{teor}
\subsection{JLR on decomposition of CZ type in locally compact groups}
Let $G$ be a locally compact group with identity $e$ and Haar measure (left invariant) $m$, and $H$ be a discrete subgroup of $G$. We will first give a definition and a lemma.
\begin{defi} (\cite[Section 1]{rdf78}.)
\label{funddom}
An open subset $V$ of $G$ is called a \emph{fundamental domain} (FD) for the quotient group $G/H$ if these two conditions hold:
\begin{enumerate}
\item $VV^{-1}\cap H=\{e\}$ (or what is the same, the restriction $\pi|_V$ of the canonical projection $\pi\colon G\to G/H$ is $1-1$).
\item The complement of $VH$ in $G$ is a locally null set.\footnote{Cf. \cite[(20.11) Definition]{hewitstrom}. }
\end{enumerate}
\end{defi}
For example, the open interval $(0,1)$ is a FD for $\mathbb{R}/\mathbb{Z}$. For each $n\in\mathbb{N}$, the interval $\bigl(0,\frac1n\bigr)$ is a FD for $\mathbb{T}/R_n$, where $R_n:=\bigl\{0,\frac1n,\ldots,\frac{n-1}n\bigr\}$ is the subgroup of the $n$th roots of unity in $\mathbb{T}$.
\begin{lema} (\cite[Lemma 2]{rdf78}.) Assume that $G$ contains a sequence of discrete subgroups
$$ H_1\subset H_2\subset \cdots\subset H_n\subset \cdots\subset G $$
such that each $G/H_n$ is compact, and write $k_n:=\orden(H_{n+1}/ H_n)$. Then, there is a sequence of open sets
$$ V_1\supset V_2\supset \cdots\supset V_n\supset \cdots\text{,} $$
such that $V_n$ is a FD for $G/H_n$, and each $V_n$ is, except for a null set, the disjoint union of $k_n$ translates of $V_{n+1}$ by elements of $H_{n+1}$.
\end{lema}
The main result of JLR on decomposition of CZ type in this context is the following:
\begin{teor} \emph{(\cite[Thm.
8, within the first part of the proof]{rdf78}.)}
\label{teor8jlr78}
Assume additionally that $\cup_n H_n$ is dense in $G$ and that
\begin{equation}
\label{jlr78cond1}
\sup_n \,k_n=k<\infty.
\end{equation}
Then, for each $f\in L^1(G)$ and $a>\|f\|_1$ there is a disjoint sequence of open sets $S_j$ belonging to the family $\{tV_n\colon t\in H_n\text{,}\ n\in\mathbb{N}\}$, such that $|f(x)|\le a$ a.e. outside $A=\cup_j S_j$, $m(A)\le C\|f\|_1/a$ for a constant $C$ independent of $f$ and $a$, and $a\le |f|_{S_j}\le ka$ $(j=1,2,\ldots)$.
\end{teor}
\subsection{A decomposition of CZ type in $\mathbb{T}^\omega$}
\label{subsec:CZdes} \quad
\smallskip
Our subsequent Theorem \ref{descompCZ} will show the original proof of JLR Theorem \ref{teor8jlr78} in the case of the compact abelian group $G=\mathbb{T}^\omega$. Before its statement we must establish a suitable sequence of subgroups (which JLR actually taught us in the aforementioned personal communication); we present it below. The decomposition of CZ type in $\mathbb{T}^\omega$ will turn out to be associated with a certain family $\mathcal{R}_0$ (see \eqref{redtraslad}) of ``dyadic intervals''.
\begin{defis} (\cite[VII.43]{munroe}.) A \emph{net}\footnote{Do not confuse it with a net in the sense of \cite[Ch. 2]{kelley}, i.e. a directed set or generalized sequence. The concept of sequence of nets, originally in the Euclidean space, is due to de la Vall\'{e}e Poussin \cite[10.67]{poussin15}. Saks \cite[p. 153]{saks} generalizes it to metric measure spaces. See also \cite[\S 6]{jessen}.} in $\mathbb{T}^\omega$ is a countable class of disjoint measurable sets whose union is $\mathbb{T}^\omega$ except for a set of null measure. Let $\{\mathcal{M}_n\}_{n\in\mathbb{N}}$ be a sequence of nets. The sequence is called \emph{monotonic} if for each positive integer $n$, every set of $\mathcal{M}_{n+1}$ is a subset of some set of $\mathcal{M}_n$. In this case, for almost all $x$ there exists, for each $n\in\mathbb{N}$, a unique set $I_x^{(n)}\in\mathcal{M}_n$ such that $x\in I_x^{(n)}$.
\end{defis}
Remember that $R_k:=\{0,\frac1k,\ldots,\frac{k-1}k\}$, $k\in \mathbb{N}$.
In the following table we see the first terms of the increasing sequence of subgroups $H_m\subset\mathbb{T}^\omega$ proposed by JLR, as well as some of the first terms of the associated decreasing sequence of FD: $$ \begin{array}{r|l|l} m&H_m&V_m\\\hline 1^2&H_1=R_2\times\{\overline{0}^{(1}\}&V_1=(0,\frac 12)\times \mathbb{T}^{1,\omega}\\ 1^2+1&H_2=R_2\times R_2\times\{\overline{0}^{(2}\}&V_2=(0,\frac 12)^2\times \mathbb{T}^{2,\omega}\\ &H_3=R_4\times R_2\times\{\overline{0}^{(2}\}&\\ 2^2&H_4=R_4\times R_4\times\{\overline{0}^{(2}\}&V_4=(0,\frac 14)^2\times \mathbb{T}^{2,\omega}\\ &H_5=R_4\times R_4\times R_2\times\{\overline{0}^{(3}\}&\\ 2^2+2&H_6=R_4\times R_4\times R_4\times\{\overline{0}^{(3}\}&V_6=(0,\frac 14)^3\times \mathbb{T}^{3,\omega}\\ &H_7=R_8\times R_4\times R_4\times\{\overline{0}^{(3}\}&\\ &H_8=R_8\times R_8\times R_4\times\{\overline{0}^{(3}\}&V_{8}=(0,\frac 1{8})^2\times(0,\frac14)\times \mathbb{T}^{3,\omega}\\ 3^2&H_9=R_8\times R_8\times R_8\times\{\overline{0}^{(3}\}&V_9=(0,\frac 18)^3\times \mathbb{T}^{3,\omega}\\ &H_{10}=R_8\times R_8\times R_8\times R_2\times\{\overline{0}^{(4}\}&\\ \leaders\hbox{$\m@th \mkern3mu.\mkern3mu$}\hfill&\leaders\hbox{$\m@th \mkern3mu.\mkern3mu$}\hfill&\leaders\hbox{$\m@th \mkern3mu.\mkern3mu$}\hfill \end{array} $$ Actually, after $H_1=R_2\times\{\bar{0}^{(1}\}$ we define, for each $n\ge 1$ \begin{gather*} H_{n^2+j}=\widetilde{H}_{n^2+j}\times\{\bar{0}^{(n+1}\},\quad (1\le j\le 2n+1),\\ \intertext{and} \widetilde{H}_{n^2+j} :=\begin{cases} R_{2^n}\times\overset{(n)}{\cdots}\times R_{2^n}\times R_{2^j} \qquad \,\qquad\qquad \qquad \text{if $j\in\{1,\ldots,n\}$},\\[4pt] R_{2^{n+1}}\times\overset{(j-n)}{\cdots}\times R_{2^{n+1}}\times R_{2^n}\times\overset{(2n+1-j)}{\cdots}\times R_{2^n}\\[4pt] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \text{if $j\in\{n+1,\ldots,2n+1\}$}. \end{cases} \end{gather*} For each positive integer $m$, $H_m$ is a discrete (finite, of order $2^m$) subgroup of $\mathbb{T}^\omega$, $H_m\subset H_{m+1}$, and $\orden(H_{m+1}/H_m)=2$ for all $m\ge 1$. Moreover, each $\mathbb{T}^\omega/H_m$ is compact, because $\mathbb{T}^\omega$ is, and the union $\bigcup_m H_m$ is a dense subset of $\mathbb{T}^\omega$ as is easily checked. \begin{figure}[ht] \begin{center} \quad\hspace*{-30pt}\includegraphics[scale=.45]{FR-CZ_fig1.png} \caption{First members of the sequence $\{\widetilde{V}_m\}$. E.g., $\widetilde{V}_7=(0,\frac 18)\times(0,\frac 14)^2$. Two translations of $V_{m+1}$ by elements of $H_{m+1}$ cover (a.e.) $V_m$.}\label{CZ_fig1} \end{center} \end{figure} The associated decreasing sequence of open sets $\{V_m\}$, where for each $m\ge 1$, $V_m$ is a FD for $\mathbb{T}^\omega/H_m$, is defined by $V_1=(0,\tfrac12)\times \mathbb{T}^{1,\omega}$ and, for each $n\ge 1$, \begin{equation} \label{uvesubn} V_{n^2+j}=\widetilde{V}_{n^2+j}\times\mathbb{T}^{n+1,\omega}\quad (1\le j\le 2n+1), \end{equation} with $$ \widetilde{V}_{n^2+j} :=\begin{cases} (0,\tfrac1{2^{n}})^n\times (0,\tfrac1{2^{j}})&\text{if $j\in\{1,\ldots,n\}$},\\[4pt] (0,\tfrac1{2^{n+1}})^{j-n}\times (0,\tfrac1{2^{n}})^{2n+1-j} &\text{if $j\in\{n+1,\ldots,2n+1\}$} \end{cases} $$ (see Figure \ref{CZ_fig1}). We can consider the (finite) net $\mathcal{N}_m:=\{t+V_m\colon t\in H_m\}$ for each $m\in\mathbb{N}$. The sequence $\{\mathcal{N}_m\}_{m\in\mathbb{N}}$ is monotonic. 
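The diagonal pattern behind these definitions is purely combinatorial, so it is easy to make explicit. The following short \texttt{Python} sketch (ours, added for illustration) computes the dyadic side exponents of $\widetilde{V}_m$ from the index $m$ and checks that the Haar measure of $V_m$ halves at each step, in agreement with $\orden(H_{m+1}/H_m)=2$.
\begin{verbatim}
import math

def side_exponents(m):
    """Exponents (e_1, ..., e_d) such that
    V_m = (0, 2**-e_1) x ... x (0, 2**-e_d) x T^{d,omega}."""
    n = math.isqrt(m - 1)      # write m = n**2 + j with 1 <= j <= 2*n + 1
    j = m - n*n
    if j <= n:
        return (n,)*n + (j,)
    return (n + 1,)*(j - n) + (n,)*(2*n + 1 - j)

assert side_exponents(7) == (3, 2, 2)  # V_7 = (0,1/8) x (0,1/4)^2 x ...
assert side_exponents(9) == (3, 3, 3)  # V_9 = (0,1/8)^3 x T^{3,omega}

# m(V_{m+1}) = m(V_m)/2: the exponent sum grows by one at each step.
for m in range(1, 200):
    assert sum(side_exponents(m + 1)) == sum(side_exponents(m)) + 1
print("halving property verified")
\end{verbatim}
In particular, $m(V_m)=2^{-m}$, which matches the fact that $H_m$ has order $2^m$ and $V_m$ is a fundamental domain for $\mathbb{T}^\omega/H_m$.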
Our final family $\mathcal{R}_0$ of dyadic intervals in $\mathbb{T}^\omega$ is the union of this monotonic sequence of nets,
\begin{equation}\label{redtraslad}
\mathcal{R}_0:=\bigcup_m \mathcal{N}_m=\left\{t+V_m\colon m\in\mathbb{N}, t\in H_m \right\}.
\end{equation}
The announced result now reads as follows. The collection $\{I_j\}$ will be a \emph{Calder\'on-Zygmund decomposition} of intervals of $\mathcal{R}_0$ associated with the function $f$ at level~$a$.
\begin{teor}[Jos\'{e} L. Rubio de Francia] \label{descompCZ} Let $f\in L^1(\mathbb{T}^\omega)$ and $a> \|f\|_1$. There exists a closed set $F_a$ and an open set $\Omega_a=\mathbb{T}^\omega\setminus F_a$ such that
\begin{enumerate}
\item[(i)] $|f(x)|\le a$ for almost all $x\in F_a$.
\item[(ii)] The set $\Omega_a$ is the union of a sequence $\{I_j\}_{j=1}^\infty$ of pairwise disjoint intervals of the family $\mathcal{R}_0$ such that $a\le |f|_{I_j}\le 2a$ for all $I_j$.
\item[(iii)] Moreover, $m(\Omega_a)\le \frac{\|f\|_1}a$.
\end{enumerate}
\end{teor}
\begin{proof}
(See also \cite[Thm. 2.10 and 2.11]{duoandiko}.) For each $m\in\mathbb{N}$, let $\mathcal{B}_m:=\sigma(\mathcal{N}_m)$. Consider as well the trivial $\sigma$-algebra $\mathcal{B}_0=\{\emptyset,\mathbb{T}^\omega\}$ (which is generated by the open set $V_0:=\mathbb{T}^\omega=\bar{0}+\mathbb{T}^\omega$ and thus has the above general form if we also consider the trivial subgroup $H_0=\{\bar{0}\}$). We have
$$ \mathcal{B}_0\subset\mathcal{B}_1 \subset \mathcal{B}_2\subset \cdots \mathcal{B}_m\subset \mathcal{B}_{m+1}\subset \cdots. $$
Define, for $m=0,1,2,\ldots$
\begin{equation}\label{comparfm2}
f_m(x):=\sum_{t\in H_m}|f|_{t+V_m}\cdot\chi_{t+V_m}(x).
\end{equation}
The function $f_m$ is $\mathcal{B}_m$-measurable for each $m$, since it is constant on each interval $t+V_m$ of the net $\mathcal{N}_m$. Moreover, $f_m\ge 0$ and
$$ \int_{\mathbb{T}^\omega}f_m=\sum_{t\in H_m}|f|_{t+V_m}\cdot m(V_m)=\sum_{t\in H_m}\int_{t+V_m}|f|=\int_{\mathbb{T}^\omega}|f|=\|f\|_1<\infty $$
by hypothesis (we have used that $\mathbb{T}^\omega\setminus(H_m+V_m)$ is a null set, since $V_m$ is a FD for $\mathbb{T}^\omega/H_m$ for each $m$). If we compare \eqref{comparfm2} with \eqref{comparfm1} we will see that $f_m$ is the conditional expectation of $|f|$ with respect to the $\sigma$-algebra $\mathcal{B}_m$. By applying Theorem \ref{maxmartinteo}(ii) we deduce that $f_m(x)\to (E^{\mathcal{B}}|f|)(x)$ a.e. as $m\to\infty$, $\mathcal{B}$ being the least $\sigma$-algebra containing the union $\bigcup_{m=0}^\infty \mathcal{B}_m$. In our case, the set $\bigcup_m H_m$ is dense in $\mathbb{T}^\omega$, and thus every open set in $\mathbb{T}^\omega$ coincides, up to a null set, with a countable union of intervals of the nets $\mathcal{N}_m$. Then $\mathcal{B}$ is, modulo null sets, the $\sigma$-algebra of Borel sets in $\mathbb{T}^\omega$. Therefore the operator $E^{\mathcal{B}}$ of conditional expectation with respect to the $\sigma$-algebra $\mathcal{B}$ is the identity, that is, $E^{\mathcal{B}}g=g$ (a.e.) for all $g\in L^1(\mathbb{T}^\omega)$. We conclude that
\begin{equation}\label{convaemartin}
\lim_{m\to\infty}f_m(x)=\abs{f(x)} \quad\text{a.e. in $\mathbb{T}^\omega$}.
\end{equation}
Let
$$ f^*(x)=\sup_{m\in\mathbb{N}}f_m(x). $$
If $x\in F_a:=\{x\colon f^*(x)\le a\}$, we have $f_m(x)\le a $ for all $m$, and an application of \eqref{convaemartin} yields $\abs{f(x)}\le a$, so part (i) is proven.
The set $\Omega_a=\mathbb{T}^\omega\setminus F_a$ from part (ii) is defined as $\Omega_a =\{x\colon f^*(x)> a\}$, and the weak type $(1,1)$ inequality (Theorem \ref{maxmartinteo}(i)) for the maximal operator\footnote{According to Definitions \ref{defbasedepossel} below, the operator $E^\ast$ would be denoted as $M^{\mathcal{R}_0}$. } $E^\ast\colon f\mapsto f^*$ gives \begin{equation}\label{jlweak11} m(\Omega_a)\le \frac Aa\|f\|_1, \end{equation} where $A$ is a constant independent of $f$ and $a$. Finally, we have supposed that $\|f\|_1 < a$, thus $f_0(x)\le a$ holds for all $x$, and we can recognise the set $\Omega_a$ as the disjoint union of the sets $$ \Omega_a^{(n)}=\{x\colon f_i(x)\le a<f_n(x),\ 0\le i\le n-1\}, \quad n=1,2,\ldots. $$ For each $n\ge 1$, the set $\Omega_a^{(n)}$ is obviously $\mathcal{B}_n$-measurable, therefore it is the disjoint union of intervals of the form $t+V_n$ with $t\in H_n$. If $I_j^{(n)}$ is one of these intervals we have, on one hand, \begin{align} \frac1{m\bigl(I_j^{(n)}\bigr)}\int_{I_j^{(n)}}|f(x)|\,dx&= \frac1{m\bigl(I_j^{(n)}\bigr)}\int_{I_j^{(n)}}\bigl(E^{\mathcal{B}_n}|f|\bigr)(x)\,dx\label{czdetail}\\ &=\frac1{m\bigl(I_j^{(n)}\bigr)}\int_{I_j^{(n)}}f_n(x)\,dx\ge a\notag \end{align} because $f_n(x)>a$ $\forall x\in I_j^{(n)}\subset \Omega_a^{(k)}$. On the other hand, $I_j^{(n)}$ is contained in an interval of the form $s+ V_{n-1}$ ($s\in H_{n-1}$) which is not contained in $\Omega_a^{(n-1)}$ (we make the agreement that $\Omega_a^{(0)}=\emptyset$), so it is $f_{n-1}(x)\le a$ for all $ x\in s+ V_{n-1}$. By using also that $m\bigl(I_j^{(n)}\bigr)=m(V_n)=\frac12m(V_{n-1})$, we have \begin{align*} \frac1{m\bigl(I_j^{(n)}\bigr)}\int_{I_j^{(n)}}|f(x)|\,dx&\le \frac2{m(V_{n-1})}\int_{s+V_{n-1}}\abs{f(x)}\,dx\\ &=\frac2{m(V_{n-1})}\int_{s+V_{n-1}}\bigl(E^{\mathcal{B}_{n-1}}|f|\bigr) (x)\,dx\\ &= \frac2{m(V_{n-1})}\int_{s+V_{n-1}}f_{n-1}(x)\,dx\le 2a, \end{align*} which finishes the proof of (ii). Moreover, from \eqref{czdetail} it follows that $A=1$ in \eqref{jlweak11}. \end{proof} \medskip \noindent\textbf{Remarks.} \smallskip (1) It is well known that the standard use of the CZ decomposition of the open set $\Omega_a=\cup_j I_j$ involves \cite[Ch. I, proof of Lemma 2]{CZdecomp}, also in $\mathbb{T}^\omega$, a decomposition of the function $f$, at each level $a$, in the sum of \begin{equation*} g(x):=f(x)\chi_{F_a}(x)+\sum_j f_{I_j}\chi_{I_j}(x) \quad\text{and}\quad b(x):=f(x)-g(x) \end{equation*} ($f=g+b$, $g$ and $b$ \emph{good} and \emph{bad} (level $a$)-parts of $f$), verifying properties like the following: \begin{equation*} |g(x)|\le 2a \quad\text{(a.e.),} \qquad \int_{I_j} b(x)\,dx=0\ \text{ and } \ |b|_{I_j}\le 4a \ \text{ for all $j$, etc.,} \end{equation*} (see \cite[5.3.8]{grafakos}). \smallskip (2) We have seen that JLR \cite{rdf78} uses Theorem \ref{maxmartinteo} in his proof. The a.e. convergence part (ii) of this theorem plays the role which is played by the differentiation theorem (DT) in a standard proof of the classic result of this type. But here this is not just a style option, we believe, because DT is not a priori assured. These issues are the main content of the next section. \section{On differentiation of integrals in $\mathbb{T}^\omega$} \label{sec:dif} We will start establishing the concepts of differentiation basis and differentiation of integrals adapted to the infinite torus space, which we will adopt in this section. \begin{defis} (\cite[Section 6.1]{bruckner}, \cite[Ch. 2]{guzmanyellow}.) 
\label{defbasedepossel} For every $y\in\mathbb{T}^\omega$ let $\mathcal{B}(y)$ be a collection of measurable sets of positive measure that contain (or whose topological closures contain) the point $y$. \smallskip If $\{S_n\}_n\subset \mathcal{B}(y)$ and $\delta(S_n)\to 0$, we say that the sequence $S_n$ ``contracts to" $y$, and write $S_n\Rightarrow y$. Suppose that there exists at least a sequence $\{S_n\}\subset \mathcal{B}(y)$ such that $S_n\Rightarrow y$. Let $\mathcal{B}:=\bigcup_{y\in\mathbb{T}^\omega}\mathcal{B}(y)$, and suppose that $\mathcal{B}$ covers (a.e.) $\mathbb{T}^\omega$. We call $(\mathcal{B}, \Rightarrow)$ a \emph{differentiation basis}\footnote{If every $B\in \mathcal{B}$ is an open set and if $x\in B\in\mathcal{B}$ then $B\in\mathcal{B}(x)$, $\mathcal{B}$ is called a \emph{Busemann-Feller basis}.} (DB). \bigskip \textsc{Examples} (The names are ours): \begin{gather*} \mathcal{R}_0:=\left\{t+V_m\colon m\in\mathbb{N}\text{,}\ t\in H_m \right\} \quad \text{(\emph{restricted Rubio de Francia basis}),} \\ \mathcal{R}:=\left\{y+V_m\colon m\in\mathbb{N}\text{,}\ y\in \mathbb{T}^\omega \right\}\quad \text{(\emph{Rubio de Francia basis}),}\\ \mathcal{J}:=\left\{J\subset \mathbb{T}^\omega\colon \text{$J$ is an interval}\right\} \quad \text{(\emph{Jessen basis}), \cite{jessen1950,jessen1952}.} \end{gather*} Let $(\mathcal{B},\Rightarrow)$ be a differentiation basis in $\mathbb{T}^\omega$. Given $f\in L^1(\mathbb{T}^\omega)$, we define the \emph{upper and lower derivative} of $\int\!f$ with respect to $\mathcal{B}$ (and the Haar measure $m$) in the point $x\in\mathbb{T}^\omega$ by (without loss of generality we assume here that $f$ is a real function) \begin{equation*} \overline{D}\bigl( \textstyle\int\!f,x \bigr)=\displaystyle\sup_{\substack{\{B_n\}\subset\mathcal{B}\\B_n\Rightarrow x }}\bigl\{ \limsup_{n} f_{B_n}\bigr\} \quad\text{and}\quad \underline{D}\bigl( \textstyle\int\!f,x \bigr)=\displaystyle\inf_{\substack{\{B_n\}\subset\mathcal{B}\\B_n\Rightarrow x }}\bigl\{ \liminf_{n} f_{B_n}\bigr\}, \end{equation*} respectively. When \begin{equation}\label{Bdiferintf} \textstyle\overline{D}\Bigl(\int\!f,x \Bigr)=\underline{D}\Bigl( \int\!f,x \Bigr) =f(x)\quad \text{a.e.} \end{equation} holds, we write $D\bigl(\int\!f,x \bigr)=f(x)$ and say that the basis $\mathcal{B}$ \emph{differentiates} $\int\!f$ and that the \emph{derivative} of $\int\!f$ is $f$. A necessary condition for \eqref{Bdiferintf} is that \begin{equation*} \lim_{n\in\mathbb{N}} f_{B_n} =f(x)\quad \text{a.e.} \end{equation*} holds, for every sequence $\{B_n\}_{n\in\mathbb{N}}\subset \mathcal{B}$ such that $B_n\Rightarrow x$. When \eqref{Bdiferintf} is satisfied for all $f\in L^\infty$ (resp. $f\in L^1(\mathbb{T}^\omega)$), we say that $\mathcal{B}$ \emph{differentiates} $L^\infty(\mathbb{T}^\omega)$ (resp. $L^1(\mathbb{T}^\omega)$). Note that $L^\infty(\mathbb{T}^\omega)\subset L^1(\mathbb{T}^\omega)$ and thus, if the basis $\mathcal{B}$ \emph{does not} differentiate $L^\infty(\mathbb{T}^\omega)$, then also does not differentiate $L^1(\mathbb{T}^\omega)$. \end{defis} Let $\mathcal{B}$ be a DB in $\mathbb{T}^\omega$. Suppose that the \emph{maximal operator} associated with $\mathcal{B}$ given by $$ M^{\mathcal{B}}f(x)=\sup_{x\in B\in\mathcal{B}}|f|_B, \quad f\in L^1(m), $$ is well defined (i.e., we suppose that for each $f\in L^1(m)$, $M^{\mathcal{B}}f$ is measurable). The following result (due to de Guzm\'{a}n and Welland) holds, its proof is standard. \begin {teor} \emph{(\cite[Thm. 
1.1(a)]{guzmanywelland}.)} \label{gywell} If the operator $M^{\mathcal{B}}$ is of weak type $(1,1)$, then the basis $\mathcal{B}$ does differentiate $L^1(\mathbb{T}^\omega)$. \end{teor} \subsection{Differentiation Theorem on locally compact groups. The basis $\mathcal{R}$ in $\mathbb{T}^\omega$} \label{subsec:DT} \begin{teor} \emph{(\cite[Thm. 8]{rdf78}.)} \label{teor8jlr78ii} With the hypothesis of \emph{Theorem \ref{teor8jlr78}}, let $\mathcal{R}$ be the family formed by all sets of the form $yV_n$ with $y\in G$, $n=1,2,\ldots$. If \begin{equation}\label{jlr78cond2} \sup_n \,\frac{m(V_nV_n^{-1}V_n)}{m(V_n)}<\infty \end{equation} holds, then $M^{\mathcal{R}}$ is weak type $(1,1)$ and strong $(p,p)$ for $1<p\le \infty$. \end{teor} As an immediate consequence JLR gives the following result, which establishes a sufficient condition for the basis $\mathcal{R}$ to differentiate $L_{\text{loc}}^1(G)$ (in this setting, the notion of contraction of a sequence $(S_n)\subset \mathcal{R}$ to a point involves $m(S_n)\to 0$). \begin{corol} (\cite[Corol. 5]{rdf78}.) \label{corol5jlr78} Suppose, in addition to the hypothesis of Theorem~\ref{teor8jlr78ii}, that $V_n\subset U_n$ for a basis $\{U_n\}_{n\in\mathbb{N}}$ of neighbourhoods of $e$. Then: $$ \lim_{\substack{x\in R\in\mathcal{R}\\ m(R)\to 0}} f_R =f(x) \quad \text{(a.e.)} $$ for any locally integrable function $f$. \end{corol} In the case in which $G$ is the compact group $\mathbb{T}^\omega$ and $\mathcal{R}$ is our Rubio de Francia basis, the condition \eqref{jlr78cond2} does not hold, because e.g. for $n\ge 1$, \begin{gather*} V_{n^2}=\Bigl(0,\frac1{2^n} \Bigr)^n\times \mathbb{T}^{n,\omega}\\\intertext{and} V_{n^2}-V_{n^2}+V_{n^2}=\left(\Bigl[0,\frac1{2^{n-1}} \Bigr)\cup \Bigl(\frac{2^n-1}{2^n},1\Bigr)\right)^n\times \mathbb{T}^{n,\omega}, \end{gather*} so that $$ \frac{m(V_{n^2}-V_{n^2}+V_{n^2})}{m(V_{n^2})}=\frac{(3/2^n)^n}{(1/2^{n})^n}=3^n,\quad\text{and}\quad \sup_n\frac{m(V_{n}-V_{n}+V_{n})}{m(V_{n})}=\infty. $$ Therefore, we can not guarantee (in principle) the result of Theorem \ref{teor8jlr78ii} for the Rubio de Francia basis $\mathcal{R}$. The additional sufficient condition of the Corollary \ref{corol5jlr78} is satisfied because, for instance, the family $\{V_n-V_n\}_{n\in\mathbb{N}}$ is a basis of (symmetric) neighbourhoods of 0 in $\mathbb{T}^\omega$. On the Rubio de Francia basis $\mathcal{R}$ the following questions remain open: \begin{itemize} \item Does the converse of Theorem \ref{gywell} for the basis $\mathcal{R}$ in $\mathbb{T}^\omega$ hold? (in de Guzm\'{an} and Welland theorem in $\mathbb{R}^n$ \cite[Thm. 1.1(b)]{guzmanywelland} the BD $\mathcal{B}$ is required to be homothecy invariant). \item Is the operator $M^{\mathcal{R}}$ weak type (1,1)?\footnote{We owe this question to Sheldy J. Ombrosi.} \item Does $\mathcal{R}$ differentiate $L^\infty(\mathbb{T}^\omega)$? \end{itemize} Customizing Jessen's proof for the basis $\mathcal{J}$, we will prove below (Subsection \ref{subsec:neg}) that a certain basis $\mathcal{R}^\ast$ slightly wider than $\mathcal{R}$ does not differentiate $L^\infty(\mathbb{T}^\omega)$. \subsection{Bases $\mathcal{R}_0$ and $\mathcal{J}$} \begin{defi} (\cite[p. 153]{saks}, \cite[VII.43]{munroe}.) \label{defsnetfina} Let $\{\mathcal{M}_n\}_{n\in\mathbb{N}}$ be a monotonic sequence of nets in $\mathbb{T}^\omega$, and $\mathcal{M}:=\bigcup_n \mathcal{M}_n$. For each $y\in \mathbb{T}^\omega$ and each $k$, write $I_y^{(k)}$ for the unique element of the net $\mathcal{M}_k$ which contains $y$. 
We say that the sequence is \emph{indefinitely fine} if for each $x\in \mathbb{T}^\omega$ and each $\varepsilon>0$ there is $n_0\in\mathbb{N}$ such that $\delta\bigl(I_x^{(n_0)}\bigr)<\varepsilon$. \end{defi} Then, $I_x^{(n)} \Rightarrow x$, and $\mathcal{M}$ is a differentiation basis. The following result holds. \begin{teor} \emph{(\cite[43.7]{munroe}, cf. also \cite[\S 9]{jessen}, \cite[15.7]{saks}.)} \label{diferdeL1} If $\{\mathcal{M}_n\}_{n\in\mathbb{N}}$ is a monotonic sequence of nets indefinitely fine, then the basis $\mathcal{M}$ differentiates $L^1(\mathbb{T}^\omega)$. \end{teor} At the end of \cite{rdf78}, JLR pointed out, in the setting of the locally compact group $G$, that if $\mathcal{R}$ is defined to be only consisting of the sets $tV_n$ ($t\in H_n$), $n=1,2,\ldots$, then Theorem~\ref{teor8jlr78ii} is valid without assumptions \eqref{jlr78cond1} and \eqref{jlr78cond2}. In order to corroborate this statement with an example, we provide an immediate consequence of Theorem~\ref{diferdeL1}. It is, on the other hand, an immediate consequence of Theorem \ref{gywell}, because the maximal operator $M^{\mathcal{R}_0}$ is of weak type $(1,1)$. \begin{corol} \label{cor:RDFr} The basis $\mathcal{R}_0= \left\{t+V_m\colon m\in\mathbb{N}, t\in H_m\right\}$ does differentiate $L^1(\mathbb{T}^\omega)$. \end{corol} \begin{proof} The basis $\mathcal{R}_0$ is the union, for $m\in\mathbb{N}$, of the monotonic sequence of nets $\{\mathcal{N}_m\}$, where $\mathcal{N}_m= \left\{t+V_m\colon t\in H_m\right\}$. This sequence is indefinitely fine, because if $I\in \mathcal{N}_m$ and $(n-1)^2<m\le n^2$ ($n\ge 2$), it is easily seen that $$ \delta(I)\le \sum_{j=1}^{n-1} \frac1{2^{n-1+j}}+\frac1{2^{n+1}} +\sum_{j=n+1}^\infty\frac1{2^j}<\frac7{2^{n+1}}. $$ \end{proof} Any subfamily of a basis that differentiates $L^1$ and which is in turn a basis of differentiation, also differentiates $L^1$. In particular, the subfamily of cubic intervals of the base $\mathcal{R}_0$ that Saks already considered \cite[p. 158]{saks} (see \cite[p. 28]{bruckner}) $$ \mathcal{S}:=\bigcup_{m=1}^\infty \mathcal{S}_m\quad \text{ where } \quad \mathcal{S}_m:=\left\{t+V_{m^2}\colon t\in H_{m^2}\right\}, $$ also differentiates $L^1(\mathbb{T}^\omega)$. The question (posed by A. Zygmund, see \cite[p. 55]{jessen1950}) about the differentiation of integrals in $L^1(\mathbb{T}^\omega)$ with respect to this basis $\mathcal{J}$ of all the intervals of $\mathbb{T}^\omega$ was answered negatively around 1950 by Jessen \cite{jessen1950,jessen1952}. The counterexample proposed by Jessen refers to the characteristic function of certain measurable set of positive measure, so in fact he proves that the basis $\mathcal{J}$ \emph{does not} differentiate even $L^\infty(\mathbb{T}^\omega)$. It is indeed a curious phenomenon, since the basis formed by the \emph{intervals} (i.e., the parallelepipeds of edges parallel to the coordinate axes) of $\mathbb{T}^n$ does differentiate $L^\infty(\mathbb{T}^n)$ for all $n\in\mathbb{N}$ \cite[p. 74]{guzmanyellow}. \subsection{A negative result of differentiation on $\boldsymbol{\mathbb{T}^\omega}$: The basis $\boldsymbol{\mathcal{R}^\ast}$} \label{subsec:neg} \quad We begin with questions of nomenclature and notation to briefly represent some dyadic sets in $\mathbb{T}^\omega$. 
\begin{defis} For $m$, $q\in\mathbb{N}$, $m\ge 2$, and $q\le m$, write $\widetilde{\square}_{m,q}:=\bigl(0,\tfrac1{2^m} \bigr)^q$ and $$\square_{m,q}:=\widetilde{\square}_{m,q}\times\mathbb{T}^{q,\omega}.$$ Call $\square_{m,q}$ the \emph{$(m,q)$-cube}. E.g., $V_{m^2}=\square_{m,m}=\widetilde{\square}_{m,q}\times \bigl(0,\frac1{2^m}\bigr)^{m-q}\times\mathbb{T}^{m,\omega}$, for all $m\ge q$. Consider in $\mathbb{T}^{\omega}$, for $j\in\mathbb{N}$, the translation $\tau_j^{m}$ which adds $\tfrac1{2^m}$ to the coordinate $x_j$. Define the sets (we call them \emph{sacks} of the corresponding cubes) $$S(\square_{m,q}):=\square_{m,q}\cup\bigl(\cup_{j=1}^q\tau_j^{m}(\square_{m,q})\bigr)$$ and, for $y\in\mathbb{T}^\omega$, $S(y+\square_{m,q})=y+S(\square_{m,q})$. \smallskip On the other hand, write $W_{m,r}:=\square_{m,m}\cup \tau_r^{m}(\square_{m,m})$ $(1\le r\le m)$. Call $W_{m,r}$ a \emph{double $(m,m)$-cube}. We have $W_{m,r}=\widetilde{W}_{m,r}\times\mathbb{T}^{m,\omega}$, where $$ \widetilde{W}_{m,r}:=\prod_{j=1}^m (1+\delta_{rj})\cdot \bigl(0,\tfrac1{2^m} \bigr) \qquad\text{(Kronecker's delta).} $$ E.g., $W_{m,m}=V_{m^2-1}$, but $W_{m,j}\notin\mathcal{R}$ when $1\le j\le m-1$ (see Figure \ref{CZ_fig2}). \smallskip We define the \emph{extended Rubio de Francia basis} to be the collection \begin{equation}\label{extendedrdfbasis} \mathcal{R}^\ast:=\mathcal{R}\cup\{y+W_{m,r}\colon y\in\mathbb{T}^\omega,\ m\in\mathbb{N},\ m\ge 2,\ 1\le r\le m\}. \end{equation} \end{defis} \begin{figure}[ht] \begin{center} \quad\hspace*{-10pt}\includegraphics[scale=.6]{FR-CZ_fig2bea222.png \caption{First members of the family $\widetilde{W}_{m,r}$. }\label{CZ_fig2} \end{center} \end{figure} \begin{lema} \label{lemma18} Let $Q\in\{y+\square_{m,q}\colon y\in\mathbb{T}^\omega\}$. For each point $x\in S(Q)$ there is an interval $I_x\in\mathcal{R}^\ast$ such that $$ \frac{m( I_x\cap Q)}{m(I_x)}\ge \frac 12. $$ \end{lema} \begin{figure}[ht] \begin{center} \includegraphics[scale=.4]{FR-CZ_figsack.png} \caption{The sack of the cube $Q=\square_{3,3}$.}\label{CZ_figsack} \end{center} \end{figure} \begin{proof} First consider any case in which $y=\overline{0}$, $Q=\square_{m,q}$. Then, if $x\in Q$ we can take the interval $I_x^{0}=\widetilde{\square}_{m,q}\times\bigl(0,\frac1{2^m}\bigr)^{m-q}\times\mathbb{T}^{m,\omega}=V_{m^2}\in\mathcal{R}$. We have $I_x^{0}\subset Q$, and $$ \frac{m(I_x^{0}\cap Q)}{m(I_x^{0})}=1. $$ Otherwise, if $x\in\tau_r(Q)$ for any $r$, $1\le r\le q$, define ($\delta_{rj}$ is Kronecker's delta) $$ D_{m,r}^{q}:=\prod_{j=1}^q (1+\delta_{rj})\cdot \bigl(0,\tfrac1{2^m} \bigr)\subset \mathbb{T}^q. $$ Then, let us take $I_x^{0}=D_{m,r}^{q}\times\bigl(0,\tfrac1{2^m}\bigr)^{m-q}\times\mathbb{T}^{m,\omega}$. We have $I_x^{(0)}=W_{m,r}\in\mathcal{R}^\ast$, $I_x^{0}\cap Q=\widetilde{\square}_{m,q}\times\bigl(0,\frac1{2^m}\bigr)^{m-q}\times\mathbb{T}^{m,\omega}=\square_{m,m}$, and $$ \frac{m(I_x^{0}\cap Q)}{m(I_x^{0})}=\frac{m(\square_{m,m})}{m(W_{m,r})}=\frac12. $$ In a general case in which $y\ne \overline{0}$, take $I_x=y+I_x^{0}$. The lemma follows. \end{proof} \begin{lema} \label{lemajess52} Let $n\ge 2$ be a fixed integer. 
It is possible to find in $\mathbb{T}^\omega$ an enumerable family of pairwise disjoint intervals $\{Q_\alpha\}_{\alpha\in A_n}$, $Q_\alpha=y(\alpha)+\square_{m(\alpha),n}$ ($y(\alpha)\in \mathbb{T}^\omega$, $m(\alpha)\ge n$), with the sets $S(Q_\alpha)$ also pairwise disjoint, and: \begin{enumerate} \item If $C_n:=\bigcup_{\alpha} Q_\alpha$, and $N_n:=\mathbb{T}^\omega\setminus \bigl(\cup_{\alpha} S(Q_\alpha)\bigr)$, then $m(C_n)=\frac 1{n+1}$, and $m(N_n)=0$. \item For every $x\notin N_n$ there exists an interval $I_x^{n}\in \mathcal{R}^\ast$ (whose first $n$ edges are $\le 1/2^{n-1}$; in fact, $I_x^{n}$ is a translate either of a cube $V_{n^2}$, or of a double cube $W_{n^2-1,r}$) such that $$\frac{m(I_x^{n}\cap C_n)}{m(I_x^{n})}\ge \frac 12.$$ \end{enumerate} \end{lema} \begin{figure}[ht] \begin{center} \includegraphics[scale=.6]{FR-CZ_fig3.pdf} \includegraphics[scale=.35]{FR-CZ_fig4bea.png} \caption{The construction in Lemma \ref{lemajess52}, $n=2$ and 3.} \label{CZ_fig3} \end{center} \end{figure} \begin{proof} (In what follows, we write $\boldsymbol{t_1\ldots,t_n\overline{0}}$ for the point $\bigl(t_1,\ldots,t_n,0^{(n}\bigr)\in\mathbb{T}^\omega$.) In general, for each $n\ge 2$, we consider the division of $\mathbb{T}^\omega$ into $2^{n(n-1)}$ open cubes (call them \emph{$0$-cells}) of edge $1/2^{n-1}$ by the \lq\lq{}hyperplanes\rq\rq{} $$ x_i=\frac j{2^{n-1}} \qquad(i=1,\ldots,n;\ j=0,1,\ldots, 2^{n-1}-1). $$ The $0$-cells have the form $$ I_{i_1\ldots i_n}^{(n)}=\boldsymbol{\tfrac {i_1}{2^{n-1}}\cdots \tfrac {i_n}{2^{n-1}}\overline{0}}+\bigl(0,\tfrac1{2^{n-1}}\bigr)^n\times\mathbb{T}^{n,\omega}, $$ where $(i_1,\ldots, i_n)\in\{0,1,\ldots,2^{n-1}-1\}^n$. Each one of the $0$-cells is firstly subdivided in $2^n$ open cubic intervals of edge length $1/2^n$ ($1$-\emph{cells}). As an example, the $1$-cells of the $I_{0\ldots 0}^{(n)}$ $0$-cell have the form $$ \boldsymbol{\tfrac {j_1}{2^{n}}\cdots \tfrac {j_n}{2^{n}}\overline{0}}+\square_{n,n}, $$ where $(j_1,\ldots, j_n)\in\{0,1\}^n$. Among these $1$-cells, we call $\boldsymbol{0\ldots0\overline{0}}+\square_{n,n}$ the \emph{principal $1$-cell} and denote it by $Q_{0\ldots 0;1}^{(n)}$ (abbreviated by $Q_1^{(n)}$). The sack of $Q_1^{(n)}$ is $$ S(Q_1^{(n)})=Q_1^{(n)}\cup\Bigl(\bigcup_{j=1}^n \bigl(\boldsymbol{\tfrac{\delta_{1j}}{2^n}\cdots \tfrac{\delta_{nj}}{2^n}\overline{0}} +Q_1^{(n)}\bigr) \Bigr) \quad\text{(Kronecker's $\delta_{ij}$),} $$ and we have $m(S(Q_1^{(n)}))=(n+1)\cdot m(Q_1^{(n)})$. In the $0$-cell $I_{0\ldots 0}^{(n)}$, apart from the $(n+1)$ $1$-cells which form the sack $S(Q_1^{(n)})$, there remain other $2^n-(n+1)$ $1$-cells. In each one of these is carried out a subdivision into $2^n$ open cubes of edge $1/2^{n+1}$ ($2$-\emph{cells}), one of which is the principal $2$-cell $Q_{2;\beta}^{(n)}=\boldsymbol{y_\beta}+\square_{n+1,n}$, with an appropriate $\boldsymbol{y_\beta}\in\mathbb{T}^\omega$. There are $2^n-(n+1)$ principal $2$-cells for each principal $1$-cell. Now, we forget the sacks of the principal $2$-cells, and in all the remaining $2$-cells we proceed inductively. The family $\{Q_\alpha^{(n)}\}_{\alpha\in A_n}$ of the statement of the Lemma is formed by all the principal $k$-cells ($k\ge 1$) in this construction. The indicial set $A_n$ can be defined explicitly (look at Figure \ref{CZ_fig3}), but this is not essential, and for clarity of exposition we will avoid doing it. 
On the other hand, from the inductive definition it is immediate that $$ Q_\alpha^{(n)}\cap Q_\beta^{(n)}=\emptyset \quad \text{ and }\quad S(Q_\alpha^{(n)})\cap S(Q_\beta^{(n)})=\emptyset \quad \text{ for all $\alpha\ne \beta$ ($n\in\mathbb{N}$, $\alpha,\beta\in A_n$). } $$ \smallskip (1) Let now $C_n:=\bigcup_{\alpha\in A_n} Q_\alpha^{(n)}$. Then, $$ m(C_n)=\sum_{\alpha\in A_n}m(Q_\alpha^{(n)})=\frac{2^{n(n-1)}}{2^{n^2}}\sum_{k=0}^\infty\Bigl( \frac{2^n-(n+1)}{2^n} \Bigr)^k =\frac1{n+1}, $$ and $$ m\Bigl(\bigcup_{\alpha\in A_n} S(Q_\alpha^{(n)})\Bigr)=\sum_{\alpha\in A_n}m(S(Q_\alpha^{(n)})) =(n+1)\sum_{\alpha\in A_n}m(Q_\alpha^{(n)}) =(n+1) m(C_n)=1, $$ from which, denoting $N_n=\mathbb{T}^\omega\setminus \Bigl(\bigcup_{\alpha\in A_n} S(Q_\alpha^{(n)})\Bigr)$, we have $m(N_n)=0$. \smallskip (2) Let $x\notin N_n$. Then, there exists $\alpha_0\in A_n$ such that $x\in S(Q_{\alpha_0}^{(n)})$. Applying Lemma \ref{lemma18}, there exists an interval $I_x\in\mathcal{R}^\ast$ (in fact, either a cube of edge not greater than $1/2^n$, or a double cube) such that $$ \frac{m(I_x\cap Q_{\alpha_0}^{(n)} )}{m(I_x)}=\frac{m(I_x\cap C_n)}{m(I_x)}\ge \frac 12, $$ as required. \end{proof} \begin{teor} \label{contradif} The Rubio de Francia extended basis $\mathcal{R}^\ast$ does not differentiate $L^\infty(\mathbb{T}^\omega)$. \end{teor} \begin{proof} (This argumentation is taken from Jessen \cite{jessen1952}.) Choose an increasing sequence of positive integers $(n_p)_{p=1}^\infty$ ($n_1\ge 2$) such that $\sum_p 1/(n_p+1)\le 3/4$. Then, the union $C:=\bigcup_{p=1}^\infty C_{n_p}$ is a measurable set whose measure satisfies $0<\frac1{n_1+1}\le m(C)\le 3/4$, and the union $N:=\bigcup_{p=1}^\infty N_{n_p}$ is a null measurable set, since for each $p$, the set $N_{n_p}$ is measurable and $m(N_{n_p})=0$. If $x\notin N$, then $x\in\bigcup_{\alpha}S(Q_\alpha^{(n_p)})$ for every $p$, and thus there exists a sequence of indexes $(\alpha_p)_{p=1}^\infty$ such that $x\in S(Q_{\alpha_p}^{(n_p)})$ for each $p$. Then, applying Lemma \ref{lemajess52}(2), for each $p$ there exists an interval $I_x^{(p)}\in\mathcal{R}^\ast$ such that $\delta(I_x^{(p)})\le \delta(W_{n_p,1})<3/2^{n_p}$ (consequently these intervals $I_x^{(p)}$ form a sequence of $\mathcal{R}^\ast$ contracting to the point $x$), and $$ \frac{m(C\cap I_x^{(p)})}{m(I_x^{(p)})}\ge \frac 12. $$ Consider the characteristic function $\chi_C$. For all $x\in\mathbb{T}^\omega\setminus N$ (i.e., a.e. in $\mathbb{T}^\omega$), we have $$ \limsup_{p\to\infty} \frac1{m(I_x^{(p)})}\int_{I_x^{(p)}}\chi_C(y)\,dy=\limsup_{p\to\infty} \frac{m(C\cap I_x^{(p)})}{m(I_x^{(p)})}\ge \frac 12, $$ which immediately implies that $\overline{D}\bigl( \int\!\chi_C,x \bigr)\ge\frac12$ for almost all $x\in\mathbb{T}^\omega$. But $\chi_C(x)=0<\frac12$ for all $x\notin C$, a set of measure $\ge 1/4$. It follows that $\mathcal{R}^\ast$ does not differentiate $\int\!\chi_C$. \end{proof} \medskip \noindent\textbf{Remarks.} \smallskip (1) The proof of Theorem \ref{contradif} in fact shows that the subfamily extracted from $\mathcal{R}^\ast$ which is formed by the cubes $\{y+V_{m^2}\colon y\in\mathbb{T}^\omega\text{,}\ m\ge 2\}$ and the double cubes $\{y+W_{m,r}\colon y\in\mathbb{T}^\omega\text{,}\ m\ge 2\text{,}\ 1\le r\le m\}$ (this subfamily is not contained in the Rubio de Francia basis $\mathcal{R}$) does not differentiate $L^\infty(\mathbb{T}^\omega)$. 
\smallskip (2) The question whether the DB formed only by the cubes $\{y+V_{m^2}\colon y\in\mathbb{T}^\omega\text{,}\ m\ge 2\}$ does differentiate $L^\infty(\mathbb{T}^\omega)$ (with our notion of contraction of a sequence to a point) remains open for us at the moment. \smallskip (3) Dieudonn\'{e} \cite{dieudonne} also proves that the basis of intervals in $\mathbb{T}^\omega$ (in fact, the subfamily of cubic intervals in $[0,1]^\omega$), does not differentiate $L^\infty(\mathbb{T}^\omega)$ (see \cite[p. 28]{bruckner}). But Dieudonn\'{e} works with the notion of contraction to a point for generalized sequences in the Moore-Smith sense $\{S_\alpha\}_{\alpha\in D}\subset \mathcal{B}(y)$, being $D$ a directed set \cite[p. 81-86]{kelley}, as we explain next: Let $\mathcal{F}$ be the set of finite subsets of $\mathbb{N}$. For each $J\in\mathcal{F}$ we consider $$ \mathbb{T}^\omega=\mathbb{T}^J\times \mathbb{T}^{J,\omega} $$ in such a way that, if $x\in \mathbb{T}^\omega$, $x=(x_J,x_{J'})$ with $x_J\in\mathbb{T}^J$ and $x_{J'}\in\mathbb{T}^{J,\omega}$. Dieudonn\'{e} deals with the DB $\mathcal{D}=\bigcup_{x\in\mathbb{T}^\omega}\mathcal{D}(x)$ where $\mathcal{D}(x)$ is the net (according to the set $\mathbb{N}\times\mathcal{F}$ directed by the order relation $(n_1,J_1)\le(n_2,J_2)$ if and only if $n_1\le n_2$ and $J_1\subseteq J_2$) that consists of the cubic intervals \begin{equation}\label{cubosdedieu} V_{n,J}(x)=\widetilde{V}_{n,J}(x_J)\times \mathbb{T}^{J,\omega}, \quad (n\in\mathbb{N}; \ J\in\mathcal{F}), \end{equation} where $\widetilde{V}_{n,J}(x_J)\subset \mathbb{T}^J$ is the cube of center $x_J$ and side $1/n$. Dieudonn\'{e} defines a measurable set for whose characteristic function $f$, the means $f_{V_{n,J}(x)}$ cannot converge a.e. to $f(x)$ according to the directed set $\mathbb{N}\times\mathcal{F}$. \smallskip (4) A differentiation basis $\mathcal{B}$ satisfies the \emph{density property} if $\mathcal{B}$ differentiates $\chi_E$ for each measurable set $E$, i.e. for almost every $x\in\mathbb{T}^\omega$ we have, if $\{I_k\}$ is any arbitrary sequence of $\mathcal{B}(x)$ contracting to $x$, $$ \lim_{k\to\infty}\frac{m(E\cap I_k)}{m(I_k)}=\chi_E(x) $$ (\cite[p. 227]{busfeller}, \cite[p. 30]{hayes_pauc}, \cite[III.1]{guzmanyellow}). From our proof of Theorem \ref{contradif} it follows that the basis $\mathcal{R}^\ast$ does not satisfies the density property. In fact, it holds the following result (which for the space $\mathbb{R}^n$ can be found, for instance, in \cite[III, Thm. 1.4]{guzmanyellow}): \emph{The basis $\mathcal{B}$ differentiates $L^\infty(\mathbb{T}^\omega)$ if and only if satisfies the density property} \cite[Num. 11, C$\Leftrightarrow$D]{depossel}.
{ "timestamp": "2020-01-07T02:20:16", "yymm": "2001", "arxiv_id": "2001.01499", "language": "en", "url": "https://arxiv.org/abs/2001.01499", "abstract": "In this note we will show a Calderón--Zygmund decomposition associated with a function $f\\in L^1(\\mathbb{T}^{\\omega})$. The idea relies on an adaptation of a more general result by J. L. Rubio de Francia in the setting of locally compact groups. Some related results about differentiation of integrals on the infinite-dimensional torus are also discussed.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "A decomposition of Calderón--Zygmund type and some observations on differentiation of integrals on the infinite-dimensional torus", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713831229043, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.708010485134892 }
https://arxiv.org/abs/2003.09686
Symmetric strong diameter two property in tensor products of Banach spaces
We continue the investigation of the behaviour of diameter two properties in tensor products of Banach spaces. Our main result shows that the symmetric strong diameter two property is stable by taking projective tensor products. We also prove a result for the symmetric strong diameter two property for the injective tensor product.
\section{Introduction} In the last 15 years a lot of attention has been devoted to various diameter two properties since several classical Banach spaces failing the Radon-Nikod\'ym property have them (see \cite{ALL}--\cite{becerra_guerrero_octahedral_2015}, \cite{hlln}--\cite{ruedaarxiv}). According to \cite{ANP} a Banach space $X$ has the \emph{symmetric strong diameter 2 property} (SSD2P) whenever $n\in \mathbb N$, $S_1,\dots, S_n$ are slices of $B_X$ and $\varepsilon > 0$, there exist $x_i \in S_i$ and $y \in B_X$, independent of $i$, such that $x_i \pm y \in S_i$ for every $i \in \{1,\dots,n\}$ and $\|y\| > 1 - \varepsilon$. Geometrically the SSD2P of a Banach space says that, given a finite number of slices of the unit ball, there exists a direction such that all these slices contain a line segment of length almost 2 in this direction. Banach spaces with the SSD2P include, for example, uniform algbras, infinite-dimensional preduals of $L_1$-spaces, $M$-embedded spaces, and even certain Lipschitz spaces (see \cite{hlln} and \cite{LRZ}). If a Banach space has the SSD2P, then the space has the strong diameter two property (SD2P), that is, every convex combination of slices of the unit ball has diameter two \cite[Lemma~4.1]{ALN}. Observe that the SSD2P is strictly stronger than the SD2P, for example, $L_1[0,1]$ has the SD2P, but not the SSD2P \cite[Remark~3.3]{hlln}. In \cite{hlln} the following useful characterization of the SSD2P was obtained. \begin{theorem}[see {\cite[Theorem 2.1]{hlln}}]\label{thm: SSD2P char} Let $X$ be a Banach space. The following assertions are equivalent: \begin{enumerate} \item[(i)]\label{item:ssd2p-char-a} $X$ has the SSD2P. \item[(ii)]\label{item:ssd2p-char-b} Whenever $n \in \mathbb{N}$, $U_1, \ldots, U_n$ are nonempty relatively weakly open subsets of $B_X$ and $\varepsilon > 0$, there exist $x_i \in U_i$, $i \in \{1,\dots,n\}$, and $y \in B_X$ such that $x_i \pm y \in U_i$ for every $i \in \{1,\dots,n\}$ and $\|y\| > 1 - \varepsilon$. \item[(iii)]\label{item:ssd2p-char-d} Whenever $n \in \mathbb{N}$, $x_1, \dots, x_n \in S_X$, there exist nets $(y^i_\alpha) \subset S_X$ and $(z_\alpha) \subset S_X$ such that $y^i_\alpha \to x_i$ weakly, $z_\alpha \to 0$ weakly, and $\|y^i_\alpha \pm z_\alpha\|\rightarrow 1$ for every $i \in \{1,\dots,n\}$. \end{enumerate} \end{theorem} Recall from \cite{ALL} that a Banach space $X$ is \emph{almost square} (ASQ) if whenever $n \in \mathbb{N}$, and $x_1,\dotsc,x_n \in S_X$, there exists a sequence $(y_k) \subset S_X$ such that $y_k \to 0$ weakly and $\|x_i \pm y_k\| \to 1$ for every $i \in \{1,\dots,n\}$. Hence, by Theorem~\ref{thm: SSD2P char} (iii), one has that an ASQ Banach space always has the SSD2P. However, the converse fails, $C[0,1]$ has the SSD2P (see the proof of \cite[Proposition~4.6]{ALN2}) and is not ASQ (this can be easily seen by considering the constant 1 function). Spaces which are ASQ include $c_0(X_n)$, where $X_n$ are arbitrary Banach spaces, and Banach spaces $X$ which are M-ideals in $X^{**}$ (see \cite{ALL}). To summarize, we have the following diagram \[ \text{ ASQ }\Rightarrow \text{ SSD2P } \Rightarrow \text{ SD2P,} \] where none of the implications is reversible. Abrahamsen, Lima, and Nygaard asked in \cite[Section 5, (b)]{ALN} how are the diameter two properties in general preserved by tensor products. 
In \cite[Theorem~3.5]{becerra_guerrero_octahedral_2015} Becerra~Guerrero, {L}\'{o}pez-{P}\'{e}rez, and Rueda~Zoca proved that the SD2P is preserved from both factors by taking projective tensor product of Banach spaces. Actually, one can even weaken the hypothesis on one of the factors \cite[Theorem~2.2]{hlp}. Very recently Rueda Zoca showed that almost squareness is also preserved from both factors by taking projective tensor product \cite[Theorem~2.1]{rueda}. Therefore it is natural to wonder whether the SSD2P is also stable by forming projective tensor products of Banach spaces. In Section~\ref{sec: projective tensor} we will prove that it is indeed so (see Theorem~\ref{thm: SSD2P in proj tensor product}). This result is proven by making use of the equivalent characterization of the SSD2P from Theorem~\ref{thm: SSD2P char} and classical Rademacher techniques (see \cite{rueda} and \cite{rya}). In Section~\ref{sec: inj tensor} we will provide a sufficient condition on $X$ to assure the SSD2P in $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ for any nontrivial Banach space $Y$. We pass now to introduce some notation. All Banach spaces considered in this paper are nontrivial and over the real field. The closed unit ball of a Banach space $X$ is denoted by $B_X$ and its unit sphere by $S_X$. The dual space of $X$ is denoted by $X^\ast$ and the bidual by $X^{\ast\ast}$. By a \emph{slice} of $B_X$ we mean a set of the form \begin{equation*} S(B_X, x^*,\alpha) := \{ x \in B_X : x^*(x) > 1 - \alpha \}, \end{equation*} where $x^* \in S_{X^*}$ and $\alpha > 0$. Given two Banach spaces $X$ and $Y$, we will denote by $X\ensuremath{\widehat{\otimes}_\pi} Y$ the projective and by $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ the injective tensor product of $X$ and $Y$. Recall that the space $\mathcal{B}(X\times Y)$ of bounded bilinear forms defined on $X\times Y$ is linearly isometric to the topological dual of $X\ensuremath{\widehat{\otimes}_\pi} Y$. We refer to \cite{rya} for a detailed treatment and applications of tensor products. For a Banach space $X$ we denote by $\mathcal{L}(X)$ the space of all bounded and linear operators on $X$. By a \emph{multiplier} on $X$ we mean an element $T\in \mathcal{L}(X)$ such that every extreme point of $B_{X^*}$ becomes an eigenvector for $T^*$. The \emph{centralizer} of a real Banach space $X$ (denoted by $Z(X)$) coincides with the set of all multipliers on $X$. We refer to \cite{Behrends} for a detailed treatment of centralizers. \section{Projective tensor product}\label{sec: projective tensor} We begin by recalling a slight rephrasing of a Lemma from \cite{rueda}, which we will include without a proof for the sake of readability. \begin{lemma}[see {\cite[Lemma~2.2]{rueda}}]\label{lemma: rademacher} Let $X$ and $Y$ be Banach spaces. If $x,\tilde{x}\in B_X$ and $y,\tilde{y}\in B_Y$ are such that \[ \|x\pm \tilde{x}\|\leq 1\text{ and } \|y\pm \tilde{y}\|\leq 1, \] then \[ \|x\otimes y \pm \tilde{x}\otimes \tilde{y}\|\leq 1. \] \end{lemma} We are now ready to prove our main result as promised in the Introduction, which will provide a large class of Banach spaces with the SSD2P. \begin{theorem}\label{thm: SSD2P in proj tensor product} Let $X$ and $Y$ be Banach spaces. If $X$ and $Y$ have the SSD2P, then so does $X\ensuremath{\widehat{\otimes}_\pi} Y$. 
\end{theorem} \begin{proof} Let $n\in\mathbb{N}$ and consider the slices $S_i:=S(B_{X\ensuremath{\widehat{\otimes}_\pi} Y}, B_i, \alpha_i)$, $i\in \{1,\dots,n\}$, where $B_i$ are norm one bilinear forms and $\alpha_i>0$. Let $\varepsilon>0$ be such that $\varepsilon<\min_{i\in\{1,\dots,n\}} \alpha_i$. We will show that there are $z_i\in S_i$ and $z\in B_{X\ensuremath{\widehat{\otimes}_\pi} Y}$ such that $z_i\pm z\in S_i$ for every $i\in \{1,\dots,n\}$ and $\|z\|>1-\varepsilon$. Let $\delta>0$. Choose elements $u_i\otimes v_i\in S_X\otimes S_Y$ such that $B_i(u_i,v_i)>1-\delta$ for every $i\in \{1,\dots,n\}$. Consider first the slices $U_i:=S(B_X, \frac{B_i(\cdot, v_i)}{\|B_i(\cdot, v_i)\|}, \delta)$ of $B_X$. Since $X$ has the SSD2P, there are $x_i\in U_i$ and $x\in B_X$ such that $x_i\pm x\in U_i$ and $\|x\|> 1-\delta$. Therefore, \[ |B_i(x, v_i)| <\delta\|B_i(\cdot, v_i)\| \quad \text{for every $i\in \{1,\dots,n\}$,} \] because $\|x_i\pm x\|\leq 1$ and $B_i(x_i,v_i)>(1-\delta)\|B_i(\cdot, v_i)\|$. Consider now the relatively weakly open sets \[ V_i:=\{y\in B_Y\colon B_i(x_i, y)>(1-\delta)\|B_i(\cdot,v_i)\|\} \] and \[V^0_i:=\{y\in B_Y\colon |B_i(x, y)| <\delta\|B_i(\cdot, v_i)\|,\quad i\in \{1,\dots,n\} \}. \] Observe that $W_i:= V_i\cap V^0_i$ are nonempty relatively weakly open subsets of $B_Y$, because $v_i\in W_i$ for every $i\in \{1,\dots,n\}$. Since $Y$ also has the SSD2P, by Theorem~\ref{thm: SSD2P char}~(ii), there are $y_i\in W_i$ and $y\in B_Y$ such that $y_i\pm y\in W_i$ and $\|y\|> 1-\delta$. Observe that \[ |B_i(x, y)| <2\delta \|B_i(\cdot, v_i)\| \quad \text{for every $i\in \{1,\dots,n\}$,} \] because $|B_i(x, y_i\pm y)| <\delta \|B_i(\cdot, v_i)\|$ and $|B_i(x, y_i)| <\delta \|B_i(\cdot, v_i)\|$. For every $i\in \{1,\dots,n\}$ set $z_i:=x_i\otimes y_i$ and $z:=x\otimes y$. Clearly, $\|z_i\|\leq 1$ and $(1-\delta)^2\leq\|z\|\leq 1$. The fact that $\|z_i\pm z\|\leq 1$ follows from Lemma~\ref{lemma: rademacher}. Finally, \begin{align*} B_i(z_i)=B_i(x_i,y_i)>(1-\delta)\|B_i(\cdot,v_i)\|>(1-\delta)^2 \end{align*} and \begin{align*} B_i(z_i\pm z)&=B_i(x_i,y_i)\pm B_i(x,y)\\ &\geq B_i(x_i,y_i)-|B_i(x,y)|\\ &>(1-\delta)^2-2\delta \|B_i(\cdot, v_i)\|\\ &\geq 1-4\delta+\delta^2. \end{align*} Therefore, by choosing $\delta$ small enough, we can assure that the elements $z_i$ and $z$ are the ones we need. \end{proof} \begin{remark} Theorem \ref{thm: SSD2P in proj tensor product} remains no longer true if one assumes that only $X$ has the SSD2P. Indeed, by \cite[Corollary~3.9]{llr2}, the space $\ell_\infty\ensuremath{\widehat{\otimes}_\pi} \ell^3_3$ fails the SD2P (hence also the SSD2P) although $\ell_\infty$ has the SSD2P. \end{remark} \begin{remark} Recall a property for a Banach space $X$ that was used as part of the hypothesis in \cite[Theorem~3.2]{becerra_guerrero_octahedral_2015}: \begin{itemize} \item[(P)] there is a $u\in S_X$ such that for every $x\in S_X$ and every $\varepsilon>0$, there is an $x^*\in B_{X^*}$ satisfying $|x^*(x)|>1-\varepsilon\quad\text{and}\quad x^*(u)=1.$ \end{itemize} From \cite[Theorem~3.2]{becerra_guerrero_octahedral_2015} it follows that if $X$ has the SD2P and $Y^*$ has (P), then $X\ensuremath{\widehat{\otimes}_\pi} Y$ has the SD2P. However, a similar statement for the SSD2P is no longer true, because $\ell_\infty$ has (P), but $X\ensuremath{\widehat{\otimes}_\pi} \ell_1=\ell_1(X)$ which never has the SSD2P \cite[Theorem~3.1]{hlln}. 
\end{remark} \begin{remark}\label{rem: sequential SD2P} In \cite[Definition~2.6]{rueda} a formally stronger version of the SSD2P was introduced, which is called the sequential SD2P. Following the ideas in the proof of \cite[Theorem~2.1]{rueda} a bit more technical proof gives that the sequential SD2P is also stable by taking projective tensor products. \end{remark} For a Banach space $X$ one can consider the increasing sequence of its even duals \[ X\subset X^{\ast\ast}\subset X^{(4}\subset \dots \] Since every Banach space is isometrically embedded into its second dual, we can define $X^{(\infty}$ as the completion of the normed space $\bigcup_{n=0}^{\infty} X^{(2n}$. Observe that the proof of \cite[Theorem~3.4]{ABGLP2012} actually shows that: \begin{lemma}\label{lem: infinte centralizer} If $\dmn (Z(X^{(\infty}))=\infty$, then $X$ has the SSD2P. \end{lemma} \begin{remark} Banach spaces which satisfy the assumption of Lemma~\ref{lem: infinte centralizer} include, for example, infinite-dimensional preduals of $L_1$-spaces, infinite-dimensional $C^{*}$-algebras, and $\mathcal{L}(X,Y)$ whenever $Z(Y)$ is infinite-dimensional (see \cite[pp.~466]{ABGRP}). \end{remark} Therefore, by combining Thereom~\ref{thm: SSD2P in proj tensor product} with Lemma~\ref{lem: infinte centralizer}, we get a new consequence, which improves the result \cite[Corollary~3.8]{becerra_guerrero_octahedral_2015}, where under the same hypotheses it is obtained that $X\ensuremath{\widehat{\otimes}_\pi} Y$ has the SD2P. \begin{corollary} Let $X$ and $Y$ be Banach spaces. If $Z(X^{(\infty})$ and $Z(Y^{(\infty})$ are infinite-dimensional, then $X\ensuremath{\widehat{\otimes}_\pi} Y$ has the SSD2P. \end{corollary} \section{Injective tensor product}\label{sec: inj tensor} Recall from \cite[Corollary~2.7]{llr} that $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ is ASQ whenever $X$ is ASQ. However, it is not known whether a similar result holds for the SD2P \cite[Remark~4.5 (2)]{ruedaarxiv}. On the contary to stability of the SSD2P in projective tensor products, we will now prove that for injective tensor products it is enough to assume a sufficient condition on only one of the factors. Next result improves \cite[Theorem~5.3]{ABGRP}, where under the same hypotheses it is obtained that $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ has the diameter two property, however our argument follows essentially the same idea. \begin{theorem}\label{thm: SSD2P injective tensor} Let $X$ and $Y$ be Banach spaces. If $\sup\{\dmn (Z(X^{(2n}))\colon n\in\mathbb N)\}=\infty$, then $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ has the SSD2P. \end{theorem} \begin{proof} Arguing as in the beginning of the proof of \cite[Theorem~5.3]{ABGRP} one has that \begin{equation}\label{eq: inj tensor} (X\ensuremath{\widehat{\otimes}_\varepsilon} Y)^{(\infty}=(X^{(2n}\ensuremath{\widehat{\otimes}_\varepsilon} Y)^{(\infty}\quad \text{ for every $n\in \mathbb{N}$.} \end{equation} On the contrary, suppose that $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ fails the SSD2P, then, by Lemma~\ref{lem: infinte centralizer}, there is a $m\in \mathbb{N}$ such that $\dmn (Z((X\ensuremath{\widehat{\otimes}_\varepsilon} Y)^{(\infty}))\leq m$. Therefore, by (\ref{eq: inj tensor}), for every $n\in \mathbb N$ we have that $\dmn (Z((X^{(2n}\ensuremath{\widehat{\otimes}_\varepsilon} Y)^{(\infty}))\leq m$ also. 
Since $Z(X^{(2n}\ensuremath{\widehat{\otimes}_\varepsilon} Y)$ contains a copy of $Z(X^{(2n})\otimes Z(Y)$ (see \cite{Wickstead}) and $Z(Y)\neq 0$ we conclude that $\dmn Z(X^{(2n})\leq m$ for every $n\in \mathbb{N}$. This contradicts the assumption that $\sup\{\dmn (Z(X^{(2n}))\colon n\in\mathbb N)\}=\infty$. \end{proof} In the light of Theorem~\ref{thm: SSD2P injective tensor} and \cite[Corollary~2.7]{llr} it is natural to wonder. \begin{ques} If $X$ has the SSD2P, then $X\ensuremath{\widehat{\otimes}_\varepsilon} Y$ has the SSD2P? \end{ques} \section*{Acknowledgments} The author wishes to thank Abraham Rueda Zoca for his comments on the topic of this paper, in particular, for pointing out Remark~\ref{rem: sequential SD2P}.
{ "timestamp": "2020-03-24T01:08:03", "yymm": "2003", "arxiv_id": "2003.09686", "language": "en", "url": "https://arxiv.org/abs/2003.09686", "abstract": "We continue the investigation of the behaviour of diameter two properties in tensor products of Banach spaces. Our main result shows that the symmetric strong diameter two property is stable by taking projective tensor products. We also prove a result for the symmetric strong diameter two property for the injective tensor product.", "subjects": "Functional Analysis (math.FA)", "title": "Symmetric strong diameter two property in tensor products of Banach spaces", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713883126862, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7080104829264812 }
https://arxiv.org/abs/1808.03453
Stability for Intersecting Families of Perfect Matchings
A family of perfect matchings of $K_{2n}$ is $intersecting$ if any two of its members have an edge in common. It is known that if $\mathcal{F}$ is family of intersecting perfect matchings of $K_{2n}$, then $|\mathcal{F}| \leq (2n-3)!!$ and if equality holds, then $\mathcal{F} = \mathcal{F}_{ij}$ where $ \mathcal{F}_{ij}$ is the family of all perfect matchings of $K_{2n}$ that contain some fixed edge $ij$. In this note, we show that the extremal families are stable, namely, that for any $\epsilon \in (0,1/\sqrt{e})$ and $n > n(\epsilon)$, any intersecting family of perfect matchings of size greater than $(1 - 1/\sqrt{e} + \epsilon)(2n-3)!!$ is contained in $\mathcal{F}_{ij}$ for some edge $ij$. The proof uses the Gelfand pair $(S_{2n},S_2 \wr S_n)$ along with an isoperimetric method of Ellis.
\section{Introduction} Let $\mathcal{M}_{2n}$ be the collection of perfect matchings of the complete graph $K_{2n}$. A family of perfect matchings $\mathcal{F} \subseteq \mathcal{M}_{2n}$ is \emph{intersecting} if $m \cap m' \neq \emptyset$ for any $m,m' \in \mathcal{F}$. It is known that the largest intersecting families of $\mathcal{M}_{2n}$ are the \emph{canonically intersecting families}, which are of the form $ \mathcal{F}_{ij} = \{ m \in \mathcal{M}_{2n} : ij \in m\} \text{ for some } ij \in E(K_{2n})$, as witnessed by the following Erd\H{o}s-Ko-Rado-type result. \begin{theorem}\label{thm:ekr} \emph{\cite{GodsilMeagher,MeagherM05,Lindzey17}} If $\mathcal{F} \subseteq \mathcal{M}_{2n}$ is an intersecting family, then $$|\mathcal{F}| \leq (2n-3)!!.$$ Moreover, equality holds if and only if $\mathcal{F}$ is a canonically intersecting family. \end{theorem} Given such a characterization, a natural next step in extremal combinatorics is to show \emph{stability}, that large families are close in structure to the extremal families. Our main result is that the extremal families in Theorem~\ref{thm:ekr} are stable for sufficiently large $n$. \begin{theorem}\label{thm:main} For any $\epsilon \in (0,1/\sqrt{e})$ and $n > n(\epsilon)$, any intersecting family of $\mathcal{M}_{2n}$ of size greater than $(1 - 1/\sqrt{e} + \epsilon)(2n-3)!!$ is contained in a canonically intersecting family. \end{theorem} Our method of proof was originally used by Ellis~\cite{Ellis12} to prove the bipartite version of our main result, originally conjectured by Cameron and Ku~\cite{CameronK03}. In the sequel~\cite{Ellis11}, he showed this method can also be used to show stability results for \emph{t-intersecting families} of perfect matchings of $K_{n,n}$, that is, families such that any two members share $t$ edges. Theorem~\ref{thm:main} also provides an alternative proof of the characterization of the extremal families in Theorem~\ref{thm:ekr} for sufficiently large $n$; however, one can obtain a characterization holding for all $n$ using polyhedral techniques~\cite{Lindzey17, GodsilMeagher}. It was thought that these polyhedral techniques could be extended to the problem of characterizing the extremal $t$-intersecting families of perfect matchings of $K_{n,n}$~\cite[Theorem 27]{EllisFP11}, but this approach has recently been proven incorrect~\cite{Filmus17}. This refutation has sparked renewed interest in Ellis' method, as it currently provides the simplest proof of the following seminal result in Erd\H{o}s-Ko-Rado combinatorics, that the canonically $t$-intersecting families of perfect matchings of $K_{n,n}$ are the extremal $t$-intersecting families for sufficiently large $n$~\cite[pg. 37]{EllisFF15}. \begin{theorem}\label{thm:df} \emph{\cite{EllisFP11,Ellis11}} Let $t \in \mathbb{N}$. If $\mathcal{F}$ is a $t$-intersecting family of perfect matchings of $K_{n,n}$, then for sufficiently large $n$, we have \[ |\mathcal{F}| \leq (n-t)!.\] Moreover, equality holds if and only if $\mathcal{F}$ is a canonically $t$-intersecting family, that is, every member of $\mathcal{F}$ contains a fixed set of $t$ disjoint edges of $K_{n,n}$. \end{theorem} A well-known conjecture is that a nonbipartite analogue of Theorem~\ref{thm:df} also holds. \begin{conjecture} \cite{Lindzey17,GodsilMeagher} Let $t \in \mathbb{N}$. 
If $\mathcal{F}$ is a $t$-intersecting family of perfect matchings of $K_{2n}$, then for sufficiently large $n$, we have \[ |\mathcal{F}| \leq (2(n-t)-1)!!.\] Moreover, equality holds if and only if $\mathcal{F}$ is a canonically $t$-intersecting family, that is, every member of $\mathcal{F}$ contains a fixed set of $t$ disjoint edges of $K_{2n}$. \end{conjecture} \noindent This conjecture has resisted the usual combinatorial approaches in Erd\H{o}s-Ko-Rado combinatorics, which is not too surprising as there is also no known combinatorial proof of Theorem~\ref{thm:df}. Our main result suggests a possible algebraic route for characterizing the extremal $t$-intersecting families of $\mathcal{M}_{2n}$ for sufficiently large $n$ and resolving this conjecture. \section{Combinatorial and Algebraic Preliminaries} Let $\mathcal{M}_{2n}$ be the collection of perfect matchings of $K_{2n}$. Since $\mathcal{M}_{2n}$ is in one-to-one correspondence with partitions of $[2n] := \{1,2,\cdots,2n\}$ into parts of size two, we may write any perfect matching as a partition \[m = m_1~m_2|m_3~m_4|\cdots|m_{2n-1}~m_{2n} \text{ where } m_i \in [2n].\] Let $m^* := 1~2|3~4|\cdots |2n$-$1~2n$ be the \emph{identity perfect matching}. The \emph{symmetric group} $S_{2n}$ on $2n$ symbols acts transitively on $\mathcal{M}_{2n}$ under the following action: \[ \sigma m = \sigma(m_1)~\sigma(m_2)~|~\sigma(m_3)~\sigma(m_4)~|~\cdots~|~\sigma(m_{2n-1})~\sigma(m_{2n}).\] It is well-known that the \emph{hyperoctahedral group} $H_n := S_2 \wr S_n$ of order $(2n)!! := 2^nn!$ is the stabilizer of $m^*$. Since perfect matchings are in one-to-one correspondence with cosets of the quotient $S_{2n}/H_n$, it follows that $$|\mathcal{M}_{2n}| = (2n-1)!! := 1 \times 3 \times 5 \times \cdots \times (2n-3) \times (2n - 1).$$ Let $(\!( 2n-1)\!)_k := (2n-1) \times (2n-3) \times \cdots \times (2(n-k+1)-1)$ denote the \emph{odd double falling factorial}, which one may compare to the falling factorial $(n)_k := n(n-1)\cdots(n-k+1)$. For any two perfect matchings $m,m' \in \mathcal{M}_{2n}$, let $\Gamma(m,m') = \Gamma(m',m)$ be the multiset union $m \cup m'$. It is not hard to see that this graph is composed of disjoint even cycles. Let $k$ denote the number of connected components of $\Gamma(m,m')$, and let $2\lambda_i$ denote the number of vertices in a component. For any $m,m' \in \mathcal{M}_{2k}$, if we order the components from largest to smallest by number of vertices, we see that $\Gamma(m,m')$ can be identified with an \emph{(integer) partition} $2\lambda := (2\lambda_1, 2\lambda_2, \cdots , 2\lambda_k) \vdash 2n$. When referring to the Ferrer's diagram of a partition $\lambda \vdash n$, we call $\lambda$ a \emph{shape}. For any $\lambda \vdash n$, if there are $k$ parts that all have the same size $\lambda_i$, we use $\lambda_i^k$ to denote the multiplicity. Let $d(m,m'): \mathcal{M}_{2n} \times \mathcal{M}_{2n} \mapsto \lambda(n)$ denote the aforementioned bijection, where $\lambda(n)$ is the set of all integer partitions of $n$. Depending on the context, we shall refer to $d(m,m')$ as the \emph{cycle type of $m'$ with respect to m} (or vice versa since $d(m,m') = d(m',m)$). If one of the arguments is the identity perfect matching, then we say $d(m^*,m)$ is \emph{the cycle type of m}. Any part of size 1 of a matching's cycle type is called a \emph{fixed point}. Let $\text{fp}(m)$ be the number of fixed points of the cycle type of $m$. 
A \emph{derangement} of $\mathcal{M}_{2n}$ is a perfect matching $m \in \mathcal{M}_{2n}$ such that $\text{fp}(m) = 0$. The number of derangements of $M_{2n}$, denoted as $D_{2n}$, can be counted via a recurrence quite similar to the classic one for permutation derangements: \[D_{2n} = 2 (n - 1)(D_{2(n - 1)} + D_{2(n - 2)}),\] where $D_0 = 1$ and $D_2 = 0$. Alternatively, via the principle of inclusion-exclusion we have \begin{align*} D_{2n} &= \sum_{k=0}^n (-1)^k \binom{n}{k} (2(n-k)-1)!! = (2n-1)!! \sum_{k=0}^n (-1)^k \frac{(n)_k}{k!(\!(2n-1)\!)_k}, \end{align*} which after taking limits implies that $D_{2n} = (2n-1)!!~(1/\sqrt{e}+o(1))$. To give some insight into the conditions of Theorem~\ref{thm:main}, consider the following intersecting family \[ \mathcal{H}_{1,2} = \{ m \in \mathcal{F}_{1,2} : m \text{ intersects } (1~3)m^* \} \cup \{(1~3)m^*,(1~4)m^*\}.\] This family is not contained in any canonically intersecting family, and for every member $m \in \mathcal{H}_{1,2} \setminus \{(1~3)m^*,(1~4)m^*\}$, we have that $\{1,4\},\{2,3\},\{2,4\},\{1,3\} \notin m$ as well as $m \cap \{\{5,6\},\{7,8\},\cdots,\{2n-1,2n\}\} \neq \emptyset$. The number of perfect matchings $m \in \mathcal{M}_{2n}$ such that $m \cap m^* = \{\{1,2\}\}$ is $D_{2(n-1)}$. The number of perfect matchings such that $m \cap m^* = \{\{1,2\},\{3,4\}\}$ is $D_{2(n-2)}$. Since $|\mathcal{F}_{1,2}| = (2n-3)!!$, we see that the number of perfect matchings containing $\{1,2\}$ and an edge of $\{\{5,6\},\{7,8\},\cdots,\{2n-1,2n\}\}$ is $$|\mathcal{H}_{1,2}| - 2= (2n-3)!! - D_{2(n-1)} - D_{2(n-2)} = (1-1/\sqrt{e}+o(1))(2n-3)!!.$$ Note that relabeling the vertices of $K_{2n}$ gives isomorphic families $\mathcal{H}_{i,j}$ for any edge $ij$. The \emph{derangement graph} is the graph $\mathcal{D}_n$ such that two perfect matchings $m,m' \in \mathcal{M}_{2n}$ are adjacent in $\mathcal{D}_n$ if $d(m,m')$ has no parts of size 1. An \emph{independent set} of graph $\Gamma$ is a set of vertices $S \subseteq V(\Gamma)$ such that $uv \notin E(\Gamma)$ for all $u,v \in S$. Nonadjacent perfect matchings in the derangement graph are intersecting, thus its independent sets are intersecting families of perfect matchings. We now recall some basic facts about finite Gelfand pairs, whose proofs can be found in~\cite{CST,MacDonald95}. A basic understanding of group theory and finite group representation theory is assumed. In particular, we use many well-known facts from the representation theory of the symmetric group. The reader is referred to~\cite{MacDonald95,StanleyV201} for a more thorough treatment. Let $\mathbb{C}[G]$ be the group algebra $G$ over $\mathbb{C}$, and for any subgroup $K \leq G$, define the subalgebra $C(G,K) := \{ f \in \mathbb{C}[G] : f(kxk') = f(x)~\forall x \in G,~\forall k,k' \in K \}$. \begin{theorem}\label{thm:gelfandPair} \emph{\cite{MacDonald95}} Let $K \leq G$ be a finite group. Then the following are equivalent. \begin{enumerate} \item $(G,K)$ is a Gelfand Pair; \item The induced representation $1 \uparrow_K^G \cong \bigoplus_{i=1}^k V_i$ is multiplicity-free; \item The algebra $C(G,K)$ is commutative. \end{enumerate} \end{theorem} \noindent Let $(G,K)$ be a Gelfand pair and define $\chi_i$ to be the character of $V_i$. The functions \[ \phi_i(x) = \frac{1}{|K|} \sum_{k \in K} \overline{\chi_i} (xk) = \frac{1}{|K|} \sum_{k \in K} \chi_i (x^{-1}k) \] form an orthogonal basis for $C(G,K)$ and are called the \emph{spherical functions}. 
It it is helpful to think of the spherical functions as analogues of characters of irreducible representations, as they are constant on double cosets $Kg_iK$. It is well-known that $(S_{2n}, H_n)$ is a Gelfand pair, which implies the induced representation $1 \uparrow^{S_{2n}}_{H_n}$ admits the following unique decomposition into irreducible representations. \begin{theorem} \emph{\cite{Thrall42}}\label{thm:decomp} Let $\lambda = (\lambda_1, \lambda_2,\cdots,\lambda_k) \vdash n$ and $S^{2\lambda}$ be the Specht module of $S_{2n}$ corresponding to the partition $2\lambda := (2\lambda_1, 2\lambda_2,\cdots,2\lambda_k) \vdash 2n$. Then \[ 1 \uparrow^{S_{2n}}_{H_n} \cong \bigoplus_{\lambda \vdash n} S^{2\lambda}.\] \end{theorem} \noindent The eigenspaces of $\mathcal{D}_n$ are precisely the irreducibles $S^{2\lambda}$ stated in the theorem above, and we say that these irreducibles are the \emph{even irreducibles} of $S_{2n}$. For each $\lambda \vdash n$, let $$\Omega_\lambda := \{ m \in \mathcal{M}_{2n} : d(m,m^*) = \lambda \}$$ be the \emph{$\lambda$-sphere}, and define the \emph{$\lambda$-double-coset} as $H_n \sigma_{\lambda} H_n = \{\sigma \in S_{2n}: d(m^*,\sigma m^*) = \lambda\}$. \begin{proposition}\label{lem:sphereSize} \emph{\cite{MacDonald95}} Let $l(\lambda)$ denote the number of parts of $\lambda \vdash n$, $m_i$ denote the number of parts of $\lambda$ that equal $i$, and set $z_\lambda := \prod_{i \geq 1} i^{m_i} m_i!$. Then $\Omega_\lambda$ has size \[ |\Omega_\lambda| = \frac{|H_n|}{2^{l(\lambda)} z_\lambda}.\] \end{proposition} \begin{proposition}\label{prop:eigs} \emph{\cite{Lindzey17}} Let $\Lambda$ be the collection of all integer partitions of $n$ that have no parts of size 1. The eigenvalues $\{\eta_\mu\}_{\mu \vdash n}$ of $\mathcal{D}_n$ can be written as \[ \eta_\mu = \sum_{\lambda \in \Lambda} |\Omega_\lambda| \phi^\lambda_\mu\] where $\{ \phi_\mu \}_{\mu \vdash n}$ are the spherical functions of $(S_n,H_n)$ and $\phi_\mu^\lambda := \phi_\mu(\sigma)$, $\sigma \in H_n \sigma_\lambda H_n$. \end{proposition} \noindent For a more detailed discussion of the perfect matching derangement graph, see~\cite{Lindzey17,GodsilMeagher,KuW17}. \section{The Derangement Graph and the Ratio Bounds}\label{sec:derangement} The first step in most if not all algebraic proofs of Erd\H{o}s-Ko-Rado-type results is to construct a graph whose independent sets correspond to intersecting families, which in our case is the derangement graph $\mathcal{D}_n$. The following bound of Delsarte and Hoffman has been rather useful for bounding the size of independent sets in such graphs. \begin{theorem}[Ratio Bound~\cite{Delsarte73}]\label{thm:pseudoHoffman} Let $\Gamma$ be a $d$-regular graph with eigenvalues $d = \eta_1 \geq \eta_2 \geq \cdots \geq \eta_{\min}$ and corresponding eigenvectors $v_1, v_2 \cdots , v_{\min}$. If $S\subseteq V$ is an independent set of $\Gamma$, then \[ |S| \leq |V|\frac{-\eta_{\min}}{d - \eta_{\min}}.\] If equality holds, then $1_S \in \emph{Span}\left( \{v_1\} \cup \{v_i : \eta_i = \eta_{\min} \} \right)$. \end{theorem} \noindent See~\cite{GodsilMeagher} for a comprehensive account of the ratio bound in Erd\H{o}s-Ko-Rado Combinatorics.\\ We now give a short proof that the least eigenvalue of $\mathcal{D}_n$ is $\eta_{(n-1,1)} = -D_{2n}/2(n-1)$ and the magnitudes of its eigenvalues, aside from the least and greatest, are $O((2n-5)!!)$. The latter will be an essential ingredient in our proof of Theorem~\ref{thm:main}. 
For any shape $\lambda \vdash n$, we let $S^\lambda$ denote the \emph{irreducible representation of $S_n$} corresponding to $\lambda$ and define $f^\lambda := \dim~S^\lambda$. We say that an irreducible $S^\lambda$ is \emph{even} if all the parts of $\lambda$ have even size. Let $\rho \downarrow^G_K$ denote the \emph{restriction} of the representation $\rho$ of $G$ to $K$. \begin{theorem}[The Hook Rule~\cite{Sagan}] For any shape $\lambda \vdash n$ and cell $c \in \lambda$, let $h(c)$ denote the number of cells below $c$ in the same column, plus the number of cells to the right of $c$ in the same row, plus one for $c$ itself. Then $ f^\lambda = n!/\prod_{c \in \lambda} h(c)$. \end{theorem} \begin{theorem}[The Branching Rule~\cite{Sagan}] For any irreducible representation $S^\mu$ of $S_n$, we have $$ S^\mu \downarrow^{S_n}_{S_{n-1}} \cong \bigoplus_{\mu^-} S^{\mu^-}$$ where $\mu^-$ ranges over all shapes obtainable from $\mu$ by removing a cell $c$ such that $h(c) = 1$. \end{theorem} \noindent The following result is a well-known and easy to prove consequence of the branching rule. \begin{corollary}\label{cor:nomults} For any $\mu \vdash m$ and $2 \leq i < m$ such that $\mu \neq (m)$ and $\mu \neq (1^m)$, the representation $S^\mu \downarrow^{S_m}_{S_{m-i}}$ is reducible. Moreover, if $S^{2\mu}$ is an even irreducible and $1 \leq i < m$, then the representation $S^{2\mu} \downarrow^{S_{2m}}_{S_{2m-2i}}$ contains at least two even irreducibles unless $2\mu = (2m)$ or $(2)^m$. \end{corollary} \noindent A technique of James and Kerber~\cite{JamesKerber} allows us to show that every even irreducible of $S_{2n}$, apart from the first few in reverse-lexicographical order, has large degree. For the following proof, it is convenient to abuse notation and let a partition $\lambda$ also denote the irreducible $S^\lambda$. \begin{lemma}\label{lem:bound} For $n \geq 8$, the only even irreducibles $\lambda$ of $S_{2n}$ such that $f^\lambda < \binom{2n-4}{4}-\binom{2n-4}{3}$ are $(2n)$ and $(2n-2,2)$. \end{lemma} \begin{proof} We proceed by induction on $n \geq 8$. Suppose the claim is true for $S_{2(n-1)}$, but not true for $S_{2n}$. Let $\lambda \vdash 2n$ be an even partition other than $(2n)$ and $(2n-2,2)$ such that $f^\lambda < \binom{2n-4}{4}-\binom{2n-4}{3}$. If $\lambda \downarrow^{S_{2n}}_{S_{2(n-1)}}$ contains $(2n-2)$ or $(2n-4,2)$ as an irreducible representation, then by the branching rule, the only possibilities for $\lambda$ are $(2n),(2n-2,2),(2n-4,4),$ and $(2n-4,2^2)$, as illustrated below. \begin{center} \begin{tikzpicture} \node (max) at (-2,3) {$~~(2n)~~$}; \node (max2) at (0,3) {$~~(2n-2,2)~~$}; \node (max3) at (2,3) {$~~(2n-4,4)~~$}; \node (max4) at (4,3) {$~~(2n-4,2^2)~~$}; \node (a) at (-2,1.5) {$~~(2n-1)~~$}; \node (b) at (0,1.5) {$~~(2n-2,1)~~$}; \node (c) at (2,1.5) {$~~(2n-3,2)~~~$}; \node (cc) at (4,1.5) {$~~(2n-4,3)~~~~$}; \node (ccc) at (6,1.5) {$~~(2n-4,2,1)~~$}; \node (d) at (-1,0) {$~~(2n-2)~~$}; \node (f) at (2,0) {$~~(2n-4,2)~~$}; \draw (d) -- (a) -- (max) (f) -- (cc) -- (max3) (f) -- (ccc) -- (max4) (f) -- (c) -- (max2) -- (b) (d) -- (b); \end{tikzpicture} \end{center} By the hook formula, we have $$ f^\lambda < \binom{2n-4}{4}-\binom{2n-4}{3} \leq f^{(2n-4,4)} < f^{(2n-4,2^2)},$$ which rules out $(2n-4,4)$ and $(2n-4,2^2)$. We conclude that $(2n-2)$ and $(2n-4,2)$ are not constituents of $\lambda \downarrow^{S_{2n}}_{S_{2(n-1)}}$.
By the induction hypothesis, all other even irreducibles $\mu < (2n-4,2)$ of $S_{2(n-1)}$ have $$f^\mu \geq \binom{2(n-1)-4}{4} - \binom{2(n-1)-4}{3}.$$ Moreover, for $n\geq8$ we have $$2\left(\binom{2(n-1)-4}{4} - \binom{2(n-1)-4}{3}\right) \geq \binom{2n-4}{4}-\binom{2n-4}{3}.$$ Corollary~\ref{cor:nomults} implies that $\lambda \in \{(2n),(2^n)\}$. Since $f^{(2^n)} = \frac{1}{n+1}\binom{2n}{n} > \binom{2n-4}{4} - \binom{2n-4}{3}$, we have $\lambda = (2n)$. We conclude that the claim holds for $S_{2n}$, a contradiction.\end{proof} \noindent The following folklore result gives a crude upper bound on $|\eta_\lambda|$ for $\lambda \neq (n),(n-1,1)$. \begin{lemma}[The Trace Bound]\label{lem:trace} Let $\Gamma$ be a graph on $N$ vertices with eigenvalues $\{\eta_i\}_{i=1}^N$. Then $\sum_{i=1}^N \eta_i^2 = \emph{Tr}(A(\Gamma)^2) = 2|E(\Gamma)|.$ \end{lemma} \begin{lemma}\label{lem:bounds} For all $\lambda \neq (n), (n-1,1)$, we have $|\eta_{\lambda}| = O((2n-5)!!)$. \end{lemma} \begin{proof} By Lemma~\ref{lem:trace} we have $\sum_{\lambda \vdash n} (\sqrt{\dim 2\lambda}~\eta_\lambda)^2 = ((2n-1)!!)^2(1/\sqrt{e} + o(1))$, thus \begin{align*} |\eta_{\lambda}| &\leq \sqrt{\frac{((2n-1)!!)^2(1/\sqrt{e} + o(1))}{\dim 2\lambda}} = \frac{(2n-1)!!}{\sqrt{\dim 2\lambda}} \sqrt{1/\sqrt{e} + o(1)} = O((2n-5)!!), \end{align*} where the last equality follows from Lemma~\ref{lem:bound}. \end{proof} \begin{lemma}\label{lem:zonal} \emph{\cite[Ch. VII]{MacDonald95}} Let $\phi_{(n-1,1)}^\lambda$ be the zonal spherical function of the Gelfand pair $(S_{2n},H_n)$ corresponding to $(n-1,1)$, evaluated at $\Omega_\lambda$. Then $$\phi_{(n-1,1)}^\lambda = \frac{(2n-1)\emph{fp}(\lambda)-n}{2n(n-1)}.$$ \end{lemma} \noindent At the expense of using Gelfand pairs, we arrive at a shorter proof of the following. \begin{theorem}[Godsil and Meagher~\cite{GodsilM15}] The minimum eigenvalue of the perfect matching derangement graph is $\eta_{(n-1,1)} = -D_{2n}/2(n-1)$. \end{theorem} \begin{proof} By Lemma~\ref{lem:bounds}, $\eta_{(n)} = D_{2n}$ and $\eta_{(n-1,1)}$ are the only eigenvalues whose magnitudes can be $\omega((2n-5)!!)$. Derangement classes $\lambda \in \Lambda$ have no singleton parts, thus Lemma~\ref{lem:zonal} implies that $\phi_{(n-1,1)}^\lambda = -\frac{1}{2(n-1)}$ for every such $\lambda$. By Proposition~\ref{prop:eigs}, we have $\eta_{(n-1,1)} = -D_{2n}/2(n-1)$, which is negative and of magnitude $\omega((2n-5)!!)$, so it is indeed the minimum eigenvalue. \end{proof} \noindent A simple application of the ratio bound proves the first part of Theorem~\ref{thm:ekr}. We say two families $\mathcal{F}, \mathcal{G} \subseteq \mathcal{M}_{2n}$ are \emph{cross-intersecting} if $m \cap m' \neq \emptyset$ for all $m \in \mathcal{F}$ and $m' \in \mathcal{G}$. Using the so-called \emph{cross-ratio bound}, we easily obtain Theorem~\ref{cor:cross}, a ``cross-independent" version of the first part of Theorem~\ref{thm:ekr}. \begin{theorem}[Cross-Ratio Bound~\cite{AlonKKMS02}] Let $\Gamma$ be a $d$-regular graph with eigenvalues $d= |\eta_1| \geq |\eta_2| \geq \cdots \geq |\eta_{n}|$ and corresponding eigenvectors $v_1, v_2, \ldots , v_n$. Let $S,T \subset V$ be sets of vertices such that there are no edges between $S$ and $T$. Then \[\sqrt{\frac{|S||T|}{|V|^2}} \leq \frac{|\eta_2|}{d + |\eta_2|}.\] \end{theorem} \begin{theorem}\label{cor:cross} If $\mathcal{F},\mathcal{G} \subseteq \mathcal{M}_{2n}$ are cross-intersecting, then $|\mathcal{F}| \cdot |\mathcal{G}| \leq ((2n-3)!!)^2.$ \end{theorem} \noindent Let $\mathcal{H}$ be the graph over $\mathcal{M}_{2n}$ such that $m,m'$ are adjacent if and only if $m \cup m'$ is a Hamiltonian cycle of $K_{2n}$.
Similarly, let $\mathcal{H}'$ be the graph over $\mathcal{M}_{2n-1}$ such that $m,m'$ are adjacent if and only if $m \cup m'$ is a Hamiltonian path of $K_{2n-1}$. Observe that any maximum matching of $K_{2n-1}$ can be extended to a unique perfect matching of $K_{2n}$ by matching the unmatched vertex of $K_{2n-1}$ to the vertex labeled $2n$, and vice versa. This gives a bijection between Hamiltonian paths of $K_{2n-1}$ and Hamiltonian cycles of $K_{2n}$, and shows that $\mathcal{H} \cong \mathcal{H} '$. This, paired with~\cite[Corollary 5.2]{Lindzey17}, implies the following. \begin{lemma}\label{lem:eig} The minimum eigenvalue of $\mathcal{H}'$ is $-|H_{n-2}| = -2^{n-2}(n-2)!.$ \end{lemma} \begin{lemma}\label{lem:oddCross} If $\mathcal{F}, \mathcal{G} \subseteq \mathcal{M}_{2n-1}$ are cross-intersecting, then $|\mathcal{F}| \cdot |\mathcal{G}| \leq ((2n-3)!!)^2$. \end{lemma} \begin{proof} Note that $\mathcal{H}'$ is a subgraph of the maximum matching derangement graph (two maximum matchings of $K_{2n-1}$ are adjacent iff they share no edges). It follows that any pair of cross-intersecting families of maximum matchings of $K_{2n-1}$ are cross-independent sets in $\mathcal{H} '$. Lemma~\ref{lem:eig} together with the cross-ratio bound gives the result. \end{proof} \noindent For any intersecting family $\mathcal{F} \subseteq \mathcal{M}_{2n}$, we define the restriction $\mathcal{F} \downarrow_{ij} \subseteq \mathcal{F}$ as the subfamily of members that contain the edge $ij$, formally, $\mathcal{F} \downarrow_{ij} := \{ m \in \mathcal{F} : ij \in m \}$. \begin{lemma}\label{lem:cross} Let $\mathcal{F} \subseteq \mathcal{M}_{2n}$ be an intersecting family. Then for all $i$, $j$ and $k$ with $j \neq k$, we have \[|\mathcal{F} \downarrow_{ij} | \cdot |\mathcal{F} \downarrow_{ik}| \leq ((2n - 5)!!)^2.\] \end{lemma} \begin{proof} Without loss of generality, assume $i = 1$, $j = 2,$ and $k=3$. Note that $ \mathcal{F} \downarrow_{12} \cap~\mathcal{F} \downarrow_{13}~= \emptyset$. Assume both restrictions are nonempty; otherwise, the claim is trivial. Since $\mathcal{F}$ is an intersecting family, any two $m \in \mathcal{F} \downarrow_{12}$ and $m' \in \mathcal{F} \downarrow_{13}$ must share an edge of $E(K_{2n} \setminus \{1,2,3\})$. In other words, $\mathcal{F} \downarrow_{12}$ and $\mathcal{F} \downarrow_{13}$ are isomorphic to two families $\mathcal{G}$ and $\mathcal{G}'$ of $\mathcal{M}_{2n-3}$ that are cross-intersecting. The result now follows from Lemma~\ref{lem:oddCross}. \end{proof} \section{The Transposition Graph and McDiarmid's Bound} The \emph{perfect matching transposition graph} is the graph $\mathcal{T}_n$ such that $m,m' \in \mathcal{M}_{2n}$ are adjacent if $d(m,m') = (2,1^{n-2})$. In other words, two perfect matchings $m,m'$ are adjacent if they differ by a \emph{partner swap}, that is, a transposition $\tau$ such that $m' = \tau m$. This graph will be the combinatorial workhorse of our stability result. The \emph{h-neighborhood} of a set $X \subseteq V$ is the set of vertices $N_h(X) := \{ v \in V : \text{dist} (v,X) \leq h\}$ where $\text{dist} (v,X)$ is the length of a shortest path from $v$ to any vertex of $X$. It is instructive to think of these neighborhoods in the perfect matching transposition graph as balls of radius $h$ in a discrete metric space, as perfect matchings in a ball of small radius around some point in the transposition graph are all structurally quite similar, i.e., they share many edges.
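The graph $\mathcal{T}_n$ is small enough to examine directly for modest $n$. The following Python sketch (ours; a quick count of rewirings suggests that each perfect matching admits exactly $2\binom{n}{2} = n(n-1)$ partner swaps) builds $\mathcal{T}_4$, checks this regularity, and confirms by breadth-first search that the diameter is $n-1$, in line with Proposition~\ref{prop:trans} below.
\begin{verbatim}
from itertools import combinations
from collections import deque

def matchings(verts):
    # enumerate all perfect matchings of the list of vertices
    if not verts:
        yield frozenset(); return
    a, rest = verts[0], verts[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i+1:]):
            yield m | {frozenset((a, rest[i]))}

def neighbors(m):
    # all partner swaps: pick two pairs of m and rewire them
    out = []
    for e, f in combinations(m, 2):
        a, b = tuple(e); c, d = tuple(f)
        rest = m - {e, f}
        out.append(rest | {frozenset((a, c)), frozenset((b, d))})
        out.append(rest | {frozenset((a, d)), frozenset((b, c))})
    return out

n = 4
start = frozenset(frozenset((2*i - 1, 2*i)) for i in range(1, n + 1))
assert len(set(neighbors(start))) == n * (n - 1)   # n(n-1)-regular

dist, queue = {start: 0}, deque([start])           # BFS from m^*
while queue:
    m = queue.popleft()
    for m2 in neighbors(m):
        if m2 not in dist:
            dist[m2] = dist[m] + 1
            queue.append(m2)
assert len(dist) == 105             # (2n-1)!! vertices for n = 4
assert max(dist.values()) == n - 1  # diameter n-1, by vertex-transitivity
\end{verbatim}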
Like the permutation transposition graph, the perfect matching transposition graph admits a nice recursive structure. The following is not too hard to show. \begin{proposition} \label{prop:trans} The adjacency matrix of the perfect matching transposition graph of $\mathcal{M}_{2n}$ can be written as the following $(2n-1) \times (2n-1)$ block matrix \[ A(\mathcal{T}_n) \cong \begin{bmatrix} A(\mathcal{T}_{n-1}) & ~ & ~ & ~ \\ ~ & A(\mathcal{T}_{n-1}) & ~ & \emph{\Huge{*}}\\ \emph{\Huge{*}} & ~ & \ddots & ~ \\ ~ & ~ & ~ & A(\mathcal{T}_{n-1})\\ \end{bmatrix} \] where any off-diagonal block in the $*$ region is a $(2n-3)!! \times (2n-3)!!$ permutation matrix. Furthermore, $\mathcal{T}_n$ has diameter $n-1$. \end{proposition} A \emph{partition sequence} of a graph $\Gamma$ is a sequence $\mathcal{P}_0,\mathcal{P}_1, \cdots , \mathcal{P}_m$ of increasingly refined partitions of $V(\Gamma)$ where $\mathcal{P}_0 = \{V(\Gamma)\}$ is the trivial partition, $\mathcal{P}_m$ is the discrete partition into singleton blocks, along with a sequence of numbers $c_0,c_1,\cdots,c_m$ with the following property: for each $i \in \{1,2,\cdots,m\}$, whenever $A,B \in \mathcal{P}_i$, and $A,B \subseteq C \in \mathcal{P}_{i-1}$ for some $C$, then there is a bijection $\varphi : A \rightarrow B$ with $d_{\Gamma}(x,\varphi(x)) \leq c_i$ for all $x \in A$. We say that a partition sequence is \emph{nice} if $m = \text{diameter}(\Gamma)$ and $c_i \leq 1$ for all $i \in \{1,2,\cdots,m\}$. \begin{theorem}[McDiarmid's Bound~\cite{McDiarmid89}] Let $\Gamma = (V,E)$ be a graph that admits a partition sequence $\{\mathcal{P}_i\}_{i=0}^m, \{c_i\}_{i=0}^m$, and let $X \subset V$ be such that $|X| \geq a|V|$ for some $a \in (0,1)$. Then for any $h \in \mathbb{N}$ such that $$h > h_0 = \sqrt{\frac{1}{2} \sum_{i=0}^m c_i^2 \ln (1/a) },$$ the following holds: \[ |N_h(X)| \geq \left(1 - \exp \left(\frac{-2(h-h_0)^2}{\sum_{i=0}^m c_i^2} \right) \right)|V|.\] \end{theorem} \noindent By Proposition~\ref{prop:trans}, the perfect matching transposition graph admits a nice partition sequence, and so by McDiarmid's bound, we obtain the following. \begin{proposition} Let $X \subset \mathcal{M}_{2n}$ be such that $|X| \geq a(2n-1)!!$ for some $a \in (0,1)$. Then for any $h \in \mathbb{N}$ such that $$h > h_0 = \sqrt{\frac{n}{2} \ln (1/a) },$$ the following holds: \[ |N_h(X)| \geq \left(1 - \exp \left(\frac{-2(h-h_0)^2}{n} \right) \right)(2n-1)!!.\] \end{proposition} \section*{Proof of the Key Lemma} To prove Theorem~\ref{thm:main}, it suffices to show the following lemma, which we demonstrate below. \begin{lemma}[Key Lemma] For any $c \in (0,1)$, there exists a $C > 0$ such that the following holds. If $\mathcal{F} \subset \mathcal{M}_{2n}$ is an intersecting family with $|\mathcal{F}| \geq c(2n-3)!!$, then there exists an edge $ij$ such that $|\mathcal{F} \setminus \mathcal{F}\downarrow_{ij} | \leq C(2n-5)!!$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $\mathcal{F}$ be an intersecting family such that $|\mathcal{F}| \geq c(2n-3)!!$ and $c \in (1 - 1/\sqrt{e},1)$. The key lemma implies that there exists an edge $ij \in E(K_{2n})$ such that $|\mathcal{F} \setminus \mathcal{F} \downarrow_{ij} | = O((2n - 5)!!).$ This implies that \begin{align}\label{eq:2} | \mathcal{F} \downarrow_{ij} | \geq (c - O(1/n))(2n-3)!!. \end{align} For the sake of contradiction, suppose there exists an $m \in \mathcal{F}$ such that $ij \notin m$.
Since any member of $\mathcal{F} \downarrow_{ij}$ must share an edge with $m$, we have that \[ |\mathcal{F} \downarrow_{ij}| \leq (2n-3)!! - D_{2(n-1)} - D_{2(n-2)} = (1 - 1/\sqrt{e} - o(1))(2n-3)!!.\] This contradicts~(\ref{eq:2}) for $n$ sufficiently large depending on $c$, completing the proof. \end{proof} A few preliminary results are needed before starting the proof of the key lemma. First in this list is a generalization of the ratio bound. \begin{theorem}[Stability Version of Ratio Bound~\cite{Ellis12}]\label{thm:stableratio} Let $\Gamma = (V,E)$ be a $d$-regular graph on $N$ vertices with eigenvalues $\eta_{\min}, \cdots, \eta_{\max}=d$ ordered from least to greatest, and corresponding orthonormal eigenvectors $v_{\min}, \cdots, v_{\max}$. Define $\mu := \min \{ \eta_i : \eta_i \neq \eta_{\min} \}$. Let $X \subseteq V$ be a set of vertices of measure $\alpha := |X|/N$ and let $\ell$ denote the number of edges of the subgraph induced by $X$. Let $D$ be the Euclidean distance from the characteristic function $f$ of $X$ to the subspace $U = \emph{Span} \left( \{v_{\max}\} \cup \{v_i : \eta_i = \eta_{\min}\} \right)$. Then \[ D^2 \leq \alpha \frac{(1-\alpha)|\eta_{\min}| - d\alpha}{|\eta_{\min}| - |\mu|}+ 2\ell.\] \end{theorem} Theorem~\ref{thm:stableratio} together with the eigenvalue information on $\mathcal{D}_n$ provides us with upper bounds on how far any intersecting family is from $U$. Recall that the canonically intersecting families attain equality in the ratio bound for $\mathcal{D}_n$, which implies that $1_{\mathcal{F}_{ij}} \in U \cong S^{2(n)} \oplus S^{2(n-1,1)}$. We are concerned with how far a ``large" intersecting family $\mathcal{F}$ is from $U$, where ``large" means having size at least $c(2n-3)!!$ for some $c \in (0,1)$. Recall that the Euclidean distance $D$ from $1_{\mathcal{F}}$ to $U$ can be written as $D = \|P_{U^\perp} 1_{\mathcal{F}} \|_2$ where $P_V$ denotes the projection onto any subspace $V \leq \mathbb{R}[\mathcal{M}_{2n}]$. Since $S^{(2n)}$ is the space of constant functions, the projection of any characteristic function $1_{\mathcal{F}} \in \mathbb{R}[\mathcal{M}_{2n}]$ onto $S^{(2n)}$ is just $(|\mathcal{F}|/(2n-1)!!) 1_{\mathcal{M}_{2n}}$. More generally, we have the following. \begin{proposition}\label{prop:proj} \emph{\cite{Lindzey17, CST}} Let $E_{\mu} : \mathbb{R}[\mathcal{M}_{2n}] \rightarrow S^{2\mu}$ denote the orthogonal projection onto $S^{2\mu}$ where $\mu \vdash n$.
Then \[ [E_{\mu} f](m) = \frac{f^{2\mu}}{(2n-1)!!} \sum_{\lambda \vdash n} \left( \sum_{m' : d(m,m') = \lambda} f(m') \right) \phi_\mu^\lambda.\] \end{proposition} \begin{lemma}\label{lem:proj} The orthogonal projection $E_{(n-1,1)} : \mathbb{R}[\mathcal{M}_{2n}] \rightarrow S^{2(n-1,1)}$ of the characteristic function $f \in \mathbb{R}[\mathcal{M}_{2n}]$ of the family $\mathcal{F} \subseteq \mathcal{M}_{2n}$ can be written as \[ [E_{(n-1,1)} f](m) = \frac{1}{(2n-5)!! \cdot 2(n-1)} \left( \sum_{ij \in m} | \mathcal{F} \downarrow_{ij} | \right) - \frac{|\mathcal{F}|}{2(n-1)}\quad \text{for all $m \in \mathcal{M}_{2n}$.}\] \end{lemma} \begin{proof} Applying Proposition~\ref{prop:proj} and Lemma~\ref{lem:zonal} gives us \begin{align*} [E_{(n-1,1)} f](m) &= \frac{f^{2(n-1,1)}}{(2n-1)!!} \sum_{\lambda \vdash n} \left( \sum_{m' : d(m,m') = \lambda} f(m') \right) \phi_{(n-1,1)}^\lambda\\ &= \frac{f^{2(n-1,1)}}{(2n-1)!!} \sum_{\lambda \vdash n} \left( \sum_{m' \in \mathcal{F} : d(m,m') = \lambda} \phi_{(n-1,1)}^\lambda\right) \\ &= \frac{f^{2(n-1,1)}}{(2n-1)!!} \sum_{\lambda \vdash n} \left( \sum_{m' \in \mathcal{F} : d(m,m') = \lambda} \frac{(2n-1)\text{fp}(\lambda)-n}{2n(n-1)} \right) \\ &= \frac{f^{2(n-1,1)}}{(2n-3)!! \cdot 2n(n-1)} \sum_{\lambda \vdash n} \left( \sum_{m' \in \mathcal{F} : d(m,m') = \lambda} \text{fp}(\lambda) \right) - \frac{n|\mathcal{F}|}{2n(n-1)}\\ &= \frac{1}{(2n-5)!! \cdot 2(n-1)} \left( \sum_{ij \in m} | \mathcal{F} \downarrow_{ij} | \right) - \frac{|\mathcal{F}|}{2(n-1)} \end{align*} where the last equality follows from the hook formula and double-counting. \end{proof} We now begin the proof of the key lemma. Due to similarities in the asymptotics of perfect matchings and permutations, some steps follow from~\cite{Ellis12} \emph{mutatis mutandis}. Our notation is consistent with~\cite{Ellis12}. \begin{proof}[Proof of Key Lemma] Let $\mathcal{F}$ be an intersecting family such that $|\mathcal{F}| \geq c(2n-3)!!$ and $c \in (0,1)$. Let $f$ be the characteristic function of $\mathcal{F}$, and let $\alpha = |\mathcal{F}|/(2n-1)!!$. Let $D$ be the Euclidean distance from $f$ to $U$. By Theorem~\ref{thm:stableratio} (with $\ell = 0$, since the intersecting family $\mathcal{F}$ is an independent set of $\mathcal{D}_n$), we have \begin{align*} D^2 &\leq \alpha \frac{(1-\alpha)D_{2n}/2(n-1) - D_{2n}\alpha}{D_{2n}/2(n-1) - |\mu|}\\ &= \frac{|\mathcal{F}|}{(2n-1)!!}~~ \frac{1-\alpha - 2(n-1)\alpha}{1 - 2(n-1)|\mu|/D_{2n}}\\ &= \frac{|\mathcal{F}|}{(2n-1)!!}~~ \frac{1 - (2n-1)\alpha}{1 - O(1/n)}\\ &\leq \frac{|\mathcal{F}|}{(2n-1)!!}~~ (1 - (2n-1)\alpha)(1 + O(1/n)),\\ \end{align*} where the penultimate equality uses the fact that $|\mu| = o((2n-3)!!)$ from Lemma~\ref{lem:bounds}. Now let $\delta \in [0,1)$ be such that $|\mathcal{F}| = (1-\delta)(2n-3)!!$. We have $$\| P_{U^\perp} f \|_2^2 = \| f - P_Uf\|_2^2 = D^2 \leq \delta(1 + O(1/n)) \frac{|\mathcal{F}|}{(2n-1)!!},$$ which tends to zero as $n \rightarrow \infty$. This already shows that $f$ is ``close" to being a linear combination of canonically intersecting families, but we now seek a combinatorial explanation for this proximity. By Lemma~\ref{lem:proj}, the value $P_m := [E_{(n)}f+E_{(n-1,1)}f](m)$ of the projection of $f$ onto $U$ is \begin{align}\label{eq:1} P_m = \frac{1}{(2n-5)!! \cdot 2(n-1)} \left( \sum_{ij \in m} | \mathcal{F} \downarrow_{ij} | \right) - \frac{|\mathcal{F}|}{2(n-1)} + \frac{|\mathcal{F}|}{(2n-1)!!}, \end{align} for any $m \in \mathcal{M}_{2n}$.
Note that \[ \| f - P_Uf \|_2^2 = \frac{1}{(2n-1)!!} \left( \sum_{m\in \mathcal{F}}(1 - P_m)^2 + \sum_{m \not \in \mathcal{F}} P_m^2 \right) \leq \frac{|\mathcal{F}|}{(2n-1)!!} \delta (1 + O(1/n)),\] which gives us \begin{align*} \sum_{m \in \mathcal{F}}(1 - P_m)^2 + \sum_{m \not \in \mathcal{F}} P_m^2 \leq |\mathcal{F}| \delta (1 + O(1/n)). \end{align*} Pick $C > 0$ large enough so that $$\sum_{m \in \mathcal{F}}(1 - P_m)^2 + \sum_{m \not \in \mathcal{F}} P_m^2 \leq |\mathcal{F}| \delta (1 + O(1/n)) \leq |\mathcal{F}|(1 - 1/n)\delta(1 + C/n).$$ By the non-negativity of each term on the left-hand side of this inequality, at least $|\mathcal{F}|/n$ members of $\mathcal{F}$ satisfy $(1 - P_m)^2 < \delta(1 + C/n)$; therefore, there exists a set $$\mathcal{F}_1 = \{ m \in \mathcal{F} : (1-P_m)^2 < \delta(1 + C/n)\} $$ such that $|\mathcal{F}_1| \geq |\mathcal{F}|/n$. Similarly, suppose there are more than $$(2n-1)|\mathcal{F}|(1+O(1/n))/2 \geq (1-\delta)(2n-1)!!(1+O(1/n))/2$$ perfect matchings outside of $\mathcal{F}$ having $P_m^2 \geq 2\delta/(2n-1)$. Then \[ \sum_{m \not \in \mathcal{F}} P_m^2 > \frac{2\delta}{(2n-1)} (1-\delta)(2n-1)!!(1+O(1/n))/2 \geq |\mathcal{F}|\delta(1+O(1/n)), \] a contradiction; thus there also exists a set $$\mathcal{F}_0 = \{ m \not \in \mathcal{F} : P_m^2 < 2\delta/(2n-1)\} $$ such that $$|\mathcal{F}_0| \geq (2n-1)!! - (1-\delta)(2n-1)!!(1+O(1/n)) /2 - (1-\delta)(2n-3)!!.$$ The projections of the elements of $\mathcal{F}_0$ and $\mathcal{F}_1$ are close to 0 and 1 respectively. We now show that there exists an $m_1 \in \mathcal{F}_1$ and $m_0 \in \mathcal{F}_0$ that are close together in the transposition graph, which implies that the two share many edges. To this end, we claim that there is a path $p$ connecting $m_0$ and $m_1$ in the transposition graph $\mathcal{T}_n$ of length at most $2\sqrt{2n \ln n}$. To see this, take $a := 1/n^{4}$ and $h := 2h_0$ in McDiarmid's bound. Since $$|\mathcal{F}_1| \geq c(2n-3)!!/n \geq (2n-1)!!/n^4,$$ McDiarmid's bound gives us $$ |N_h(\mathcal{F}_1)| \geq \left(1 - \frac{1}{n^4} \right)(2n-1)!!.$$ Since $|\mathcal{F}_0| > (2n-1)!!/n^4$, we have $\mathcal{F}_0 \cap N_h(\mathcal{F}_1) \neq \emptyset$, thus there exists such a path $p$ in $\mathcal{T}_n$ of length no more than $2h_0 = 2\sqrt{2n \ln n}$, as desired. The foregoing shows there exist two perfect matchings $m_1 \in \mathcal{F}$, $m_0 \notin \mathcal{F}$ that are structurally quite similar, differing only in $O(\sqrt{n \log (n)})$ partner swaps, yet $$1 - \sqrt{\delta(1 +C/n)} < P_{m_1} \text{ and } P_{m_0} < \sqrt{2\delta/n}.$$ Combining inequalities reveals that $$P_{m_1} - P_{m_0} > (1-\sqrt{\delta} - O(1/\sqrt{n})).$$ By Equation (\ref{eq:1}), this implies that $m_1$ has many more edges in common with members of $\mathcal{F}$ than $m_0$ does, more formally, $$ \left( \sum_{ij \in m_1} |\mathcal{F} \downarrow_{ij}| \right) - \left(\sum_{ij \in m_0}| \mathcal{F} \downarrow_{ij} | \right) \geq (2n-5)!! \cdot 2(n-1)(1-\sqrt{\delta} - O(1/\sqrt{n})).$$ For any $m \in \mathcal{M}_{2n}$, let $m(v)$ denote the partner of $v \in V(K_{2n})$. Let $V(p)$ denote the vertices of $p$. Let $I \subseteq V(K_{2n})$ denote the set of vertices whose partner left them somewhere along the way, less dramatically, $$I := \{ v \in V(K_{2n}) : m(v) \neq m'(v) \text{ for some } m,m' \in V(p) \}.$$ Clearly $|I| \leq 4\ell$, where $\ell$ is the length of $p$, and for any $ v \notin I$, we have $m(v) = m'(v)$ for all $m,m' \in V(p)$.
We now have $$ \left( \sum_{ij \in m_1 : i \in I} |\mathcal{F} \downarrow_{ij}| \right) - \left(\sum_{ij \in m_0 : i \in I}| \mathcal{F} \downarrow_{ij} | \right) \geq (2n-5)!! \cdot 2(n-1)(1-\sqrt{\delta} - O(1/\sqrt{n})).$$ This of course implies that $$ \sum_{ij \in m_1 : i \in I} |\mathcal{F} \downarrow_{ij}| \geq (2n-5)!! \cdot 2(n-1)(1-\sqrt{\delta} - O(1/\sqrt{n})).$$ Averaging gives us $$ |\mathcal{F} \downarrow_{ij}| \geq \frac{(2n-5)!! \cdot 2(n-1)}{4\ell}(1-\sqrt{\delta} - O(1/\sqrt{n}))$$ for some edge $ij \in m_1$ with $i \in I$. Now we have $$ |\mathcal{F} \downarrow_{ij}| \geq \frac{(2n-5)!! \cdot 2(n-1)}{8\sqrt{2n\ln n}}(1-\sqrt{1-c} - O(1/\sqrt{n})) = \omega((2n-5)!!).$$ Lemma~\ref{lem:cross} implies that $|\mathcal{F} \downarrow_{ik}| = o((2n-5)!!)$ for all $k \neq j$. Summing over all $k \neq j$, we have $$| \mathcal{F} \setminus \mathcal{F} \downarrow_{ij}| = \sum_{k \neq j} |\mathcal{F} \downarrow_{ik}| = o((2n-3)!!).$$ This gives us \[ | \mathcal{F} \downarrow_{ij}| = |\mathcal{F}| - | \mathcal{F} \setminus \mathcal{F} \downarrow_{ij}| \geq (c-o(1))(2n-3)!!.\] Since $|\mathcal{F} \downarrow_{ij}| = \Omega((2n-3)!!)$, Lemma~\ref{lem:cross} again implies $$ |\mathcal{F} \downarrow_{ik}| = O((2n-7)!!)$$ for all $k \neq j$. Summing over all $k \neq j$ again gives $$| \mathcal{F} \setminus \mathcal{F} \downarrow_{ij}| = \sum_{k \neq j} |\mathcal{F} \downarrow_{ik}| = O((2n-5)!!),$$ which completes the proof of the key lemma. \end{proof} \subsubsection*{Acknowledgements} I'd like to thank an anonymous reviewer for pointing out some incorrect calculations in a previous draft, and for several comments that substantially improved the readability. \bibliographystyle{plain}
{ "timestamp": "2018-08-13T02:07:09", "yymm": "1808", "arxiv_id": "1808.03453", "language": "en", "url": "https://arxiv.org/abs/1808.03453", "abstract": "A family of perfect matchings of $K_{2n}$ is $intersecting$ if any two of its members have an edge in common. It is known that if $\\mathcal{F}$ is family of intersecting perfect matchings of $K_{2n}$, then $|\\mathcal{F}| \\leq (2n-3)!!$ and if equality holds, then $\\mathcal{F} = \\mathcal{F}_{ij}$ where $ \\mathcal{F}_{ij}$ is the family of all perfect matchings of $K_{2n}$ that contain some fixed edge $ij$. In this note, we show that the extremal families are stable, namely, that for any $\\epsilon \\in (0,1/\\sqrt{e})$ and $n > n(\\epsilon)$, any intersecting family of perfect matchings of size greater than $(1 - 1/\\sqrt{e} + \\epsilon)(2n-3)!!$ is contained in $\\mathcal{F}_{ij}$ for some edge $ij$. The proof uses the Gelfand pair $(S_{2n},S_2 \\wr S_n)$ along with an isoperimetric method of Ellis.", "subjects": "Combinatorics (math.CO)", "title": "Stability for Intersecting Families of Perfect Matchings", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713878802045, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7080104826157023 }
https://arxiv.org/abs/1907.00192
Recurrence along directions in multidimensional words
In this paper, we introduce and study new notions of uniform recurrence in multidimensional words. A $d$-dimensional word is called \emph{uniformly recurrent} if for all $(s_1,\ldots,s_d)\in\mathbb{N}^d$ there exists $n\in\mathbb{N}$ such that each block of size $(n,\ldots,n)$ contains the prefix of size $(s_1,\ldots,s_d)$. We are interested in a modification of this property. Namely, we ask that for each rational direction $(q_1,\ldots,q_d)$, each rectangular prefix occurs along this direction in positions $\ell(q_1,\ldots,q_d)$ with bounded gaps. Such words are called \emph{uniformly recurrent along all directions}. We provide several constructions of multidimensional words satisfying this condition, and more generally, a series of four increasingly stronger conditions. In particular, we study the uniform recurrence along directions of multidimensional rotation words and of fixed points of square morphisms.
\section{Introduction} Combinatorics on words in one dimension is a well-studied field of theoretical computer science with its origins in the early 20th century. The study of bidimensional words is less developed, even though many concepts and results are naturally extendable from the unidimensional case (see e.g.\ \cite{Berthe--Vuillon--2000,Cassaigne--1999,Charlier--Karki--Rigo--2010,Durand--Rigo--2013,Labbe--2018,Muchnik--2003,Semenov--1977,Vuillon--1998}). However, some problems on words become much more difficult in dimensions higher than one. One such question is the connection between local complexity and periodicity. In dimension one, the classical theorem of Morse and Hedlund states that if for some $n$ the number of distinct length-$n$ blocks of an infinite word is less than or equal to $n$, then the word is periodic. In the bidimensional case, a similar assertion is known as Nivat's conjecture, and much effort has been devoted to verifying this conjecture \cite{Cyr--Kra--2015,Kari--Szabados--2015,Nivat}. In this paper, we introduce and study new notions of multidimensional uniform recurrence. A first and natural attempt to generalize the notion of (simple) recurrence to the multidimensional setting quickly turns out to be rather unsatisfying. Recall that an infinite word $w\colon\mathbb{N}\to A$ (where $A$ is a finite alphabet) is said to be \emph{recurrent} if each prefix occurs at least twice (and hence every factor occurs infinitely often). A straightforward extension of this definition is to say that a bidimensional infinite word is recurrent whenever each rectangular prefix occurs at least twice (and hence every rectangular factor occurs infinitely often). However, with such a definition of bidimensional recurrence, the following bidimensional infinite word is recurrent, even though none of its columns is recurrent in the unidimensional sense. \[ \begin{matrix} \vdots & \vdots & \vdots & \vdots & \\ 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & \cdots \\ 1 & 1 & 1 & 1 & \cdots \end{matrix} \] In order to avoid this kind of undesirable phenomenon, a common strengthening is to ask that every prefix occurs uniformly, see for example \cite{Berthe--Vuillon--2000,Durand--Rigo--2013}. In the present work, we investigate several notions of recurrence of multidimensional infinite words $w\colon\mathbb{N}^d\to A$, generalizing the usual notion of uniform recurrence of unidimensional infinite words. This paper is organized as follows. In Section~\ref{sec:def}, we define two new notions of uniform recurrence of multidimensional infinite words: the URD words and the SURD words. We also make some first observations in the bidimensional setting. In Section~\ref{sec:origin}, we show that these two new notions of recurrence along directions do not depend on the choice of the origin. This leads us to the definition of the even stronger notion of SSURDO words. In Section~\ref{sec:gcd}, we prove that all multidimensional words obtained by placing some uniformly recurrent word along every rational direction are URD. In Section~\ref{sec:rotation}, we show that all multidimensional rotation words are URD but not SURD. Thus, the notion of SURD words is indeed stronger than that of URD words, justifying the introduced terminology. In Section~\ref{sec:morphism}, we study fixed points of multidimensional square morphisms. In particular, we provide some infinite families of SURD words.
We provide a complete characterization of SURD bidimensional infinite words that are fixed points of square morphisms of size $2$. In Section~\ref{sec:construction}, we show how to build uncountably many SURD bidimensional infinite words. In particular, the family of bidimensional infinite words so-obtained contains uncountably many non-morphic SURD elements. We end our study by discussing six open problems in Section~\ref{sec:perspectives}, including potential links with return words and symbolic dynamical aspects. \section{Definitions and first observations} \label{sec:def} Here and throughout the text, $A$ designates an arbitrary finite alphabet and $d$ is a positive integer. For $m,n\in\mathbb{N}$, the notation $[\![m,n]\!]$ designates the interval of integers $\{m,\ldots,n\}$ (which is considered empty for $n<m$). We write $(s_1,\ldots,s_d)\le (t_1,\ldots,t_d)$ (resp.\ $(s_1,\ldots,s_d)< (t_1,\ldots,t_d)$) if $s_i\le t_i$ (resp.\ $s_i< t_i$) for each $i\in[\![1,d]\!]$. A \emph{$d$-dimensional infinite word} over $A$ is a map $w\colon\mathbb{N}^d\to A$. A \emph{$d$-dimensional finite word} over $A$ is a map $w\colon[\![0,s_1-1]\!]\times \cdots\times [\![0,s_d-1]\!]\to A$, for some $(s_1,\ldots,s_d)\in\mathbb{N}^d$, which is called the \emph{size} of $w$. A finite word $f$ of size $(s_1,\ldots,s_d)$ is a \emph{factor} of a $d$-dimensional infinite word $w$ if there exists $\mathbf{p}\in\mathbb{N}^d$ such that for each $\mathbf{i}\in[\![0,s_1-1]\!]\times \cdots\times [\![0,s_d-1]\!]$, we have $f(\mathbf{i})=w(\mathbf{p}+\mathbf{i})$. In this case, we say that the factor $f$ occurs at \emph{position} $\mathbf{p}$ in $w$. Similarly, a \emph{factor} of a $d$-dimensional finite word $w$ of size $(t_1,\ldots,t_d)$ is a finite word $f$ of some size $(s_1,\ldots,s_d)\le(t_1,\ldots,t_d)$ for which there exists $\mathbf{p}\in[\![0,t_1-s_1]\!]\times \cdots\times [\![0,t_d-s_d]\!]$ such that for each $\mathbf{i}\in[\![0,s_1-1]\!]\times \cdots\times [\![0,s_d-1]\!]$, we have $f(\mathbf{i})=w(\mathbf{p}+\mathbf{i})$. In both cases (infinite and finite), if $\mathbf{p}=(0,\ldots,0)$ then the factor $f$ is said to be a \emph{prefix} of $w$. In some places, for the sake of clarity, we will allow ourselves to write $w_\mathbf{i}$ instead of $w(\mathbf{i})$. \begin{remark} In general, a factor need not be rectangular, i.e.\ of the form $[\![0,s_1-1]\!]\times \cdots\times [\![0,s_d-1]\!]$, but could be any polytope. Indeed, any occurrence of any given polytope is contained in a larger rectangular factor. If we are interested in bounding the gaps between occurrences of the polytope, then a bound on the gaps of the larger rectangular factor is sufficient. So, without loss of generality we can restrict our attention to rectangular factors only. Sometimes, multidimensional words are considered over $\mathbb{Z}^d$, i.e.\ $w\colon\mathbb{Z}^d\to A$. Although in our considerations it is more natural to consider one-way infinite words, since for example we will make use of fixed points of morphisms, most of our results and notions can be straightforwardly extended to words over $\mathbb{Z}^d$. For example, general relations between the considered notions hold in $\mathbb{Z}^d$ (Figure~\ref{fig:links_recurrence}), as well as our results for rotation words (Section~\ref{sec:rotation}) and non-morphic SURD examples (Section~\ref{sec:construction}). 
\end{remark} The following notion of uniform recurrence of multidimensional infinite words was studied by many authors, see for example~\cite{Berthe--Vuillon--2000,Durand--Rigo--2013}. \begin{definition}[UR] A $d$-dimensional infinite word $w$ is \emph{uniformly recurrent} if for every prefix $p$ of $w$, there exists a positive integer $b$ such that every factor of $w$ of size $(b,\ldots, b)$ contains $p$ as a factor. \end{definition} In the previous definition, it is clearly equivalent to ask the same for every factor and not only every prefix. Whenever $d=1$, this definition corresponds to the usual notion of uniform recurrence of infinite words. In the bidimensional setting, the uniform recurrence of the word is not linked to the uniform recurrence of all rows and columns. On the one hand, the fact that rows and columns of a bidimensional word $w\colon\mathbb{N}^2\to A$ are uniformly recurrent (in the unidimensional sense) does not imply that $w$ is UR. \begin{remark} \label{rem:convention-2D} We choose the convention of representing a bidimensional word $w\colon\mathbb{N}^2\to A$ by placing the rows from bottom to top, and the columns from left to right (as for Cartesian coordinates). See Figure~\ref{fig:rep2D}. \begin{figure}[htb] \[ \begin{matrix} \vdots & \vdots & \vdots & \vdots \\ w(0,3) & w(1,3) & w(2,3) & w(3,3) & \cdots \\ w(0,2) & w(1,2) & w(2,2) & w(3,2) & \cdots \\ w(0,1) & w(1,1) & w(2,1) & w(3,1) & \cdots \\ w(0,0) & w(1,0) & w(2,0) & w(3,0) & \cdots \end{matrix} \] \caption{Convention for the representation of bidimensional words.} \label{fig:rep2D} \end{figure} \end{remark} \begin{proposition} \label{prop:rows_columns_UR} Let $w\colon\mathbb{N}^2\to\{0,1\}$ be the bidimensional word obtained by alternating two kinds of rows: $1F$ and $0F$ where $F=01001010\cdots$ is the Fibonacci word, i.e.\ $F$ is the fixed point of the morphism $0\mapsto 01,\, 1\mapsto 0$ (see Figure~\ref{fig:URrows-nonUR}). The rows and the columns of $w$ are all uniformly recurrent but $w$ is not UR. \begin{figure}[htb] \[ \begin{matrix} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots \end{matrix} \] \caption{A non UR bidimensional word having uniformly recurrent rows and columns.} \label{fig:URrows-nonUR} \end{figure} \end{proposition} \begin{proof} The word $w$ is not UR since the square prefix $\left[\begin{smallmatrix}0&0\\1&0\end{smallmatrix}\right]$ only occurs within the first two columns. The columns of $w$ are uniformly recurrent since they are periodic. It is well known that the words $1F$ and $0F$ are uniformly recurrent, hence the rows of $w$ are uniformly recurrent. \end{proof} On the other hand, the fact that a bidimensional infinite word is UR does not imply that each of its rows/columns is uniformly recurrent either. The construction given by the following proposition is a modification of unidimensional Toeplitz words. \begin{proposition} \label{prop:UR_rows_columns} Let $w\colon\mathbb{N}^2\to\{0,1\}$ be the bidimensional word constructed as follows. The $n$-th row (with $n\in\mathbb{N}$) is indexed by $k$ if $n\equiv 2^k\pmod{2^{k+1}}$ and is indexed by $-1$ if $n=0$. Let $u_k=(10^{2^k-1})^\omega$ for $k\ge0$ and $u_{-1}=10^\omega$. 
Now fill the rows indexed by $k$ with the words $u_k$ (see Figure~\ref{fig:UR-non-recurrent-rows}). The bidimensional word $w$ is UR, but its first row is not recurrent. \begin{figure}[htb] \[ \begin{array}{c|c|cccccccccccccc} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots& \vdots \\ 13 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 12 &2 & 1&0&0&0&1&0&0&0&1&0&0&0 & \cdots \\ 11 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 10 &1 & 1&0&1&0&1&0&1&0&1&0&1&0 & \cdots \\ 9 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 8 &3 & 1&0&0&0& 0&0&0&0&1&0&0&0 & \cdots \\ 7 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 6 &1 & 1&0&1&0&1&0&1&0&1&0&1&0 & \cdots \\ 5 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 4 &2 & 1&0&0&0&1&0&0&0&1&0&0&0 & \cdots \\ 3 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 2 &1 & 1&0&1&0&1&0&1&0&1&0&1&0 & \cdots \\ 1 &0 & 1&1&1&1&1&1&1&1&1&1&1&1 & \cdots \\ 0 & -1 & 1&0&0&0&0&0&0&0&0&0&0&0 & \cdots \\ \hline n&k & u_k \end{array} \] \caption{A UR bidimensional infinite word with a non-recurrent row.} \label{fig:UR-non-recurrent-rows} \end{figure} \end{proposition} \begin{proof} Consider first the bidimensional infinite word $w'$ composed of the rows $u_k$ with $k\ge 0$, that is, the word $w$ without its first row. We show that each prefix of $w'$ appears according to a square network. Note that this network argument is also used in the proof of Proposition~\ref{proposition:toeplitz}. In $w'$, let $p$ be a prefix of some size $(s_1,s_2)\in\mathbb{N}^2$ and $N=\max(\lceil \log_2(s_1) \rceil,\lceil \log_2(s_2) \rceil)$. The prefix $p'$ of size $(2^N,2^N)$ appears periodically according to the periods $(2^{N+1},0)$ and $(0,2^{N+1})$. Therefore each factor of $w'$ of size $(2^{N+1}+2^N-1,2^{N+1}+2^N-1)$ contains $p'$. So it contains $p$ as well. Hence $w'$ is UR. Now let $p$ denote a prefix of $w$ of some size $(s_1,s_2)\in\mathbb{N}^2$. Let $N=\max(\lceil \log_2(s_1) \rceil,\lceil \log_2(s_2) \rceil)$. Using the previous paragraph, we know that the prefix of $w'$ of size $(2^N,2^N)$ occurs with periods $(2^{N+1},0)$ and $(0,2^{N+1})$. Since the $2^{N+1}$-th row of $w$ is filled with the infinite word $u_{N+1}=(10^{2^{N+1}-1})^\omega$ and since $2^{N+1}>s_1$, the prefix $p$ also appears at position $(0,2^{N+1})$ in $w$, i.e.\ at position $(0,2^{N+1}-1)$ in $w'$. As $w'$ is UR, $p$ occurs within every factor of $w'$ of size $(n,n)$ for some $n\in\mathbb{N}$. As $w$ is composed of $w'$ with an additional row $u_{-1}$, the prefix $p$ of $w$ occurs also within every factor of $w$ of size $(n+1,n+1)$. \end{proof} In order to obtain the uniform recurrence of all rows and columns in a bidimensional infinite word, we introduce a different version of uniform recurrence of multidimensional infinite words, which involves directions. Throughout this text, when we talk about a \emph{direction} $\mathbf{q}=(q_1,\ldots,q_d)$, we implicitly assume that $q_1,\ldots,q_d$ are coprime nonnegative integers. For the sake of conciseness, if $\mathbf{s}=(s_1,\ldots,s_d)$, we write $[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]$ in order to designate the $d$-dimensional interval $[\![0,s_1-1]\!]\times \cdots\times [\![0,s_d-1]\!]$. In particular, we set $\mathbf{0}=(0,\ldots,0)$ and $\mathbf{1}=(1,\ldots,1)$. In what follows, we will use the following notation. Let $w\colon\mathbb{N}^d\to A$ be a $d$-dimensional infinite word, let $\mathbf{s}\in\mathbb{N}^d$, and let $\mathbf{q}\in\mathbb{N}^d$ be a direction.
The \emph{word along the direction $\mathbf{q}$ with respect to the size $\mathbf{s}$ in $w$} is the unidimensional infinite word $\dir{w}{q}{s} \colon \mathbb{N} \to A^{[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]}$, where elements of $A^{[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]}$ are considered as letters, defined by \[ \forall \ell \in\mathbb{N},\ \forall \mathbf{i}\in [\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!],\ (\dir{w}{q}{s}(\ell))(\mathbf{i})=w(\mathbf{i}+\ell \mathbf{q}). \] See Figure~\ref{fig:def-wqs} for an illustration in the bidimensional case. \begin{figure}[htb] \centering \scalebox{0.7}{ \begin{tabular}{cc} \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0cm,inner sep=2pt] \tikzstyle{every path}=[draw=black,line width = 0.5pt] \foreach \x in {0,1,2,3} { \draw[fill=gray!30] (\x*0.3*7,\x*0.3*2.8) rectangle (\x*0.3*7+1.6,\x*0.3*2.8+1.2); \node at (\x*0.3*7+0.8,\x*0.3*2.8+1.4) {$y_\x$}; } \node[fill=white] at (0.8,1.4){$y_0=p$}; \draw (0,0) to (1.2*7,0); \draw (0,0) to (0,5.8); \tikzstyle{every path}=[draw=red,line width = 1pt] \draw (0,0.01) to (1.2*7,1.2*2.8); \node at (8.6,3.2) {\color{red}$\ell \mathbf{q}$} ; \tikzstyle{every path}=[draw=black,line width = 1pt, ->] \draw[<->] (0,-0.2) to node [below] {$s_1$} (1.6,-0.2); \draw[<->] (1.6+0.2,0) to node [right] {$s_2$} (1.6+0.2,1.2); \end{tikzpicture} & \hspace{1cm} \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0cm,inner sep=2pt] \tikzstyle{every path}=[draw=none,line width = 0.5pt] \draw[fill=gray!30,draw=none] (0,0) rectangle (1.6,1.2); \foreach \x in {1,2,3,4,5} { \draw[fill=gray!30,draw=none] (\x*0.15*7,\x*0.15*5.5) rectangle (\x*0.15*7+1.6,\x*0.15*5.5+1.2); \node at (\x*0.15*7+0.55,\x*0.15*5.5+1.4) {$y_\x$}; } \foreach \x in {0,1,2,3,4,5} { \draw[fill=none] (\x*0.15*7,\x*0.15*5.5) rectangle (\x*0.15*7+1.6,\x*0.15*5.5+1.2); } \node at (0.53,1.4){$y_0=p$}; \draw (0,0) to (7,0); \draw (0,0) to (0,5.8); \tikzstyle{every path}=[draw=red,line width = 1pt] \draw (0,0.01) to (1*7,1*5.5); \node at (7.3,5.4) {\color{red}$\ell \mathbf{q}$}; \tikzstyle{every path}=[draw=black,line width = 1pt, ->] \draw[<->] (0,-0.2) to node [below] {$s_1$} (1.6,-0.2); \draw[<->] (1.6+0.2,0) to node [right] {$s_2$} (1.6+0.2,1.2); \end{tikzpicture} \end{tabular} } \caption{The word $\dir{w}{q}{s}$ is built from the blocks of size $\mathbf{s}$ occurring at positions $\ell\mathbf{q}$ in $w$. Those blocks in $A^{[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]}$ may or may not overlap.} \label{fig:def-wqs} \end{figure} Note that, for any choice of direction $\mathbf{q}$, the first letter $\dir{w}{q}{s}(0)$ of the unidimensional infinite word $\dir{w}{q}{s}$ is the prefix of size $\mathbf{s}$ of the $d$-dimensional infinite word $w$. \begin{definition}[URD] \label{def:URD} A $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ is \emph{uniformly recurrent along all directions} (URD for short) if for all $\mathbf{s}\in\mathbb{N}^d$ and all directions $\mathbf{q}\in\mathbb{N}^d$, there exists $b\in\mathbb{N}$ such that each length-$b$ factor of $\dir{w}{q}{s}$ contains the letter $\dir{w}{q}{s}(0)$. \end{definition} Alternatively, we can say that the letter $\dir{w}{q}{s}(0)$ occurs infinitely often in $\dir{w}{q}{s}$ with gaps at most $b$. The same reformulation is also valid for further definitions of uniform recurrence. 
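To illustrate Definition~\ref{def:URD} on a concrete example, the following Python sketch (ours; the word and the helper name are purely illustrative) extracts a prefix of the unidimensional word $\dir{w}{q}{s}$ from a bidimensional word given as a function, and measures the gaps between occurrences of the letter $\dir{w}{q}{s}(0)$. For the periodic word $w(i,j) = (i+j) \bmod 2$, which is plainly URD, every gap is bounded uniformly in the direction $\mathbf{q}$.
\begin{verbatim}
from itertools import product

def dir_word(w, q, s, length):
    # blocks of size s at positions 0, q, 2q, ..., read as letters
    cells = list(product(*(range(si) for si in s)))
    return [tuple(w(tuple(c + ell * qi for c, qi in zip(i, q)))
                  for i in cells)
            for ell in range(length)]

w = lambda p: (p[0] + p[1]) % 2  # periodic "checkerboard" word

for q in [(1, 0), (0, 1), (1, 1), (2, 1), (3, 2)]:
    word = dir_word(w, q, s=(2, 2), length=50)
    occ = [k for k, block in enumerate(word) if block == word[0]]
    gaps = [b - a for a, b in zip(occ, occ[1:])]
    print(q, max(gaps))  # gap 1 if q1 + q2 is even, 2 otherwise
\end{verbatim}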
\begin{proposition} A $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ is URD if and only if for all $\mathbf{s}\in\mathbb{N}^d$ and all directions $\mathbf{q}\in\mathbb{N}^d$, the unidimensional word $\dir{w}{q}{s}$ is uniformly recurrent. \end{proposition} \begin{proof} The condition is clearly sufficient. Let us show that it is also necessary. Suppose that $w\colon\mathbb{N}^d\to A$ is URD and let $\mathbf{s},\mathbf{q}\in\mathbb{N}^d$ be some fixed size and direction. We show that any prefix of $\dir{w}{q}{s}$ appears infinitely often with bounded gaps in $\dir{w}{q}{s}$. Consider a prefix $p$ of $\dir{w}{q}{s}$ of some length $\ell$. Let $\mathbf{s}'=(\ell-1)\mathbf{q}+\mathbf{s}$. Since $w$ is URD, there exists $b'\in\mathbb{N}$ such that each length-$b'$ factor of $\dir{w}{q}{s'}$ contains the letter $\dir{w}{q}{s'}(0)$. This implies that each length-$(b'+\ell-1)$ factor of $\dir{w}{q}{s}$ contains $p$. \end{proof} We will see that the uniform recurrence along all directions implies that rows and columns are uniformly recurrent (see Proposition~\ref{prop:URD_rows_columns}). However, a URD word is not necessarily UR as shown by the following proposition. In the next section, we will show that UR does not imply URD either (see Corollary~\ref{cor:UR_not_imply_URD}). \begin{proposition}\label{prop:URD_isnot_UR} For any $d\ge 2$, there exists a $d$-dimensional URD word that is not UR. \end{proposition} \begin{proof} We give a sketch of a construction to avoid cumbersome details. Let $A$ be a finite alphabet containing at least two letters, say $0$ and $1$, and let $d\ge2$. Consider the following recursive procedure to construct uncountably many such $d$-dimensional infinite words. See Figure~\ref{fig:URD_isnot_UR} \begin{figure}[htb] \centering \begin{tikzpicture}[every node/.style={scale=0.7}] \tikzset{square matrix/.style={ matrix of nodes, column sep=-\pgflinewidth, row sep=-\pgflinewidth, nodes={ minimum height=#1, anchor=center, text width=#1, align=center, inner sep=0pt }, }, square matrix/.default=0.5cm } \matrix[square matrix] { |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? & ? & ? & ? & ? & ? & ? & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & ? & ? & ? & ? & ? & ? & |[fill=col1]| 0 & |[fill=col1]| 0 \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? & ? & ? & ? & ? & ? & ? & ? & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & ? & ? & ? & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? 
& |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? & ? & ? & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & ? & ? & ? & ? & ? & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & ? & ? & ? & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? & ? & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & ? & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & ? & ? & ? & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? 
& |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & ? & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col4]| 0 & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col4]| 0 & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & ? & ? & ? & ? & ? & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & ? 
& |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & ? \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & ? \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 & |[fill=col4]| 0 \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col3]| 1 & |[fill=col3]| 0 & |[fill=col4]| 1 & ? & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? & ? & ? & ? & ? & ? & ? & ? & ? & ? & |[fill=col4]| 0 \\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & |[fill=col4]| 0 & |[fill=col4]| 1 & ? & ? & ? & ? & ? & ? & |[fill=col4]| 0 \\ |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col2]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & |[fill=col3]| 0 & ? & ? 
\\ |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & |[fill=col4]| 0 & ? & |[fill=col2]| 1 & |[fill=col2]| 0 & |[fill=col2]| 1 & |[fill=col3]| 0 & ? & ? \\
|[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 & |[fill=col1]| 0 \\
|[fill=col0]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 & |[fill=col1]| 1 & |[fill=col1]| 0 \\
}; \end{tikzpicture} \caption{The first 5 steps of the construction of a URD word that is not UR, according to the procedure described in Proposition~\ref{prop:URD_isnot_UR}. The letters filled at steps $1,\ldots,5$ are respectively drawn on cells colored in blue, gray, pink, yellow and red. } \label{fig:URD_isnot_UR} \end{figure}
for an illustration of such a bidimensional binary word. On the first step, fill the position $\mathbf{0}$ with the letter $1$. On each step $n\ge 2$, consider the prefix $p_n$ of size $\mathbf{n}=(n,\ldots,n)$, which is partially filled. Choose arbitrary letters of $A$ to complete it (in Figure~\ref{fig:URD_isnot_UR}, we chose to complete with $0$'s at each step). For each direction $\mathbf{q}<\mathbf{n}$, choose a constant $b_\mathbf{q}$ and copy $p_n$ at all positions $\ell b_\mathbf{q} \mathbf{q}$ with $\ell \in \mathbb{N}$. Note that the word $\dir{w}{q}{n}$ may already be partially filled, but there always exists a constant $b_\mathbf{q}$ (possibly large) that allows us to perform this procedure. In one of the remaining factors of size $\mathbf{n}$ that do not contain any letter yet, write the letter $0$ at all $n^d$ positions (at each step of Figure~\ref{fig:URD_isnot_UR}, we chose such a square below the diagonal and as close as possible to the origin). Every word constructed in this way is URD but not UR, since it contains arbitrarily large hypercubes of $0$'s. \end{proof} A natural strengthening of the definition of URD words is to ask that the bound between consecutive occurrences of a prefix depends only on the size of the prefix and not on the chosen direction. \begin{definition}[SURD]\label{def:SURD} A $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ is \emph{strongly uniformly recurrent along all directions} (SURD for short) if for each $\mathbf{s}\in\mathbb{N}^d$, there exists $b\in\mathbb{N}$ such that, for each direction $\mathbf{q} \in\mathbb{N}^d$, each length-$b$ factor of $\dir{w}{q}{s}$ contains the letter $\dir{w}{q}{s}(0)$.
\end{definition} In Figure~\ref{fig:links_recurrence}, we summarize the relations between the different notions of recurrence we consider. \begin{figure}[htb] \centering \begin{tikzcd}[arrows=Rightarrow, column sep=1.7cm, row sep=0.8cm, every arrow/.append style={shift left=0.8ex}] \text{\small SSURDO} \arrow[shorten <= 3pt,shorten >= 3pt]{dd}{\text{Def.}~\ref{def:SSURDO}} \arrow{r}{\text{Prop.}~\ref{prop:SSURDO_imply_UR}} & \text{\small UR} \arrow[negated,shorten <= 3pt,shorten >= 3pt]{dd} {\text{Cor.}~\ref{cor:UR_not_imply_URD}} \arrow[negated]{dr}{\text{Prop.}~\ref{prop:UR_rows_columns}} & \\ & & \text{\small UR rows} \arrow[negated,shorten >= 6pt]{dl}{\text{Prop.}~\ref{prop:URD_rows_columns}} \arrow[negated]{ul}{\text{Prop.}~\ref{prop:rows_columns_UR}} \\ \text{\small SURD}= \text{\small SURDO} \arrow[negated,shorten <= 3pt,shorten >= 3pt]{uu}{\text{Ex.}~\ref{ex:SURD_isnot_SSURDO}\ } \arrow{r}{\text{Def.}~\ref{def:SURD}} \arrow[draw=red,shorten <= 3pt,shorten >= 3pt]{uur}{\text{\color{red} ?}} & \text{\small URD}=\text{\small URDO} \arrow[negated]{l}{\text{Prop.}~\ref{prop:URD_isnot_SURD}\ } \arrow[shorten <= 6pt]{ur}{\text{Prop.}~\ref{prop:URD_rows_columns}} \arrow[negated,shorten <= 3pt,shorten >= 3pt]{uu}{\text{Prop.}~\ref{prop:URD_isnot_UR}\ } & \end{tikzcd} \begin{tikzpicture}[overlay,remember picture] \node at (-4.8,-1.7){{\text{\footnotesize{Prop.~\ref{prop:URDisURDO}}}}}; \node at (-9.5,-1.7){{\text{\footnotesize{Prop.~\ref{prop:URDisURDO}}}}}; \end{tikzpicture} \caption{In black are drawn the links between the different notions of recurrence. In red is drawn an open question.} \label{fig:links_recurrence} \end{figure} \section{Uniform recurrence along all directions from any origin} \label{sec:origin} As a natural generalization of $d$-dimensional URD and SURD infinite words, we could ask that the recurrence property should not just be taken into account on the lines $\{\ell\mathbf{q}\colon\ell\in\mathbb{N}\}$ for all directions $\mathbf{q}$ but on all lines $\{\ell\mathbf{q}+\mathbf{p}\colon\ell\in\mathbb{N}\}$ for all origins $\mathbf{p}$ and directions $\mathbf{q}$. In fact, this would not be a real generalization; the proof of this claim is the purpose of the present section. \begin{definition}[URDO] \label{def2} A $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ is \emph{uniformly recurrent along all directions from any origin} (URDO for short) if for each $\mathbf{p}\in\mathbb{N}^d$, the translated $d$-dimensional infinite word $w^{(\mathbf{p})}\colon \mathbb{N}^d\to A,\ \mathbf{i}\mapsto w(\mathbf{i}+\mathbf{p})$ is URD. \end{definition} \begin{definition}[SURDO] A $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ is \emph{strongly uniformly recurrent along all directions from any origin} (SURDO for short) if for each $\mathbf{p}\in\mathbb{N}^d$, the translated $d$-dimensional infinite word $w^{(\mathbf{p})}\colon \mathbb{N}^d\to A,\ \mathbf{i}\mapsto w(\mathbf{i}+\mathbf{p})$ is SURD. \end{definition} \begin{proposition}\label{prop:URDisURDO}\ \begin{itemize} \item A $d$-dimensional infinite word is URD if and only if it is URDO. \item A $d$-dimensional infinite word is SURD if and only if it is SURDO. \end{itemize} \end{proposition} \begin{proof} Both conditions are clearly sufficient. Now we prove that they are necessary. 
Let $w\colon\mathbb{N}^d\to A$ be URD (respectively SURD), let $\mathbf{p},\mathbf{s} \in \mathbb{N}^d$ and let $f\colon [\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]\to A $ be the factor of $w$ of size $\mathbf{s}$ at position $\mathbf{p}$: for all $\mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]$, $f(\mathbf{i})=w(\mathbf{i}+\mathbf{p})$. We need to prove that for each direction $\mathbf{q}$, there exists $b\in\mathbb{N}$ (respectively, that there exists $b\in\mathbb{N}$ valid for all directions $\mathbf{q}$) such that each factor of length $b$ taken along the line $\ell\mathbf{q}+\mathbf{p}$ contains $f$. The situation is illustrated in Figure~\ref{fig:proof-URDO}. \begin{figure}[htb] \centering \scalebox{0.7}{ \begin{tikzpicture}[scale=1] \clip (-0.1,-0.1)rectangle (8,5.5); \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0cm,inner sep=2pt] \tikzstyle{every path}=[draw=black,line width = 0.5pt] \draw[fill=gray!30] (0,0) rectangle (6.6,3.2); \draw (0,0) to (8,0); \draw (0,0) to (0,7); \draw[fill=gray] (5,2) rectangle (6.6,3.2); \draw[dashed] (0,3.2) to (6.6,3.2); \draw[dashed] (6.6,0) to (6.6,3.2); \node at (5.8,2.6){$f$}; \tikzstyle{every path}=[draw=black,line width = 1pt, ->] \draw (0,0) to node [below] {$\mathbf{p}$} (5,2); \draw[<->] (5,2-0.2) to node [below] {$s_1$} (6.6,2-0.2); \draw[<->] (6.6+0.2,2) to node [right] {$s_2$} (6.6+0.2,3.2); \tikzstyle{every path}=[draw=red,line width = 1pt] \draw (0,0) to node [right] {\color{red}$\ell \mathbf{q}$} (2.8*1.3,7*1.3); \draw (0+5,0+2) to node [right] {\color{red}$\ell \mathbf{q}+\mathbf{p}$} (2+5,5+2); \end{tikzpicture}} \caption{Illustration of the proof of Proposition~\ref{prop:URDisURDO} in the bidimensional case.} \label{fig:proof-URDO} \end{figure} Consider the prefix $p$ of size $\mathbf{p}+\mathbf{s}$ of $w$. Since the word is URD (respectively SURD), for each direction $\mathbf{q}$ there exists $b'$ (respectively, there exists $b'$ valid for all directions $\mathbf{q}$) such that each factor of length $b'$ taken along the line $\ell\mathbf{q}$ contains $p$. Since $f$ occurs at position $\mathbf{p}$ in $p$, this implies the condition we need with $b=b'$. \end{proof} \begin{proposition}\label{prop:URD_rows_columns} If a bidimensional infinite word is URD, then all its rows and columns are uniformly recurrent, but the converse does not hold. \end{proposition} \begin{proof} Let $w$ be a URD bidimensional infinite word. From Proposition~\ref{prop:URDisURDO}, $w$ is also URDO. So, any translated word $w^{(\mathbf{p})}$ with $\mathbf{p}=(0,m)$ is also URD. Hence, in $w^{(\mathbf{p})}$ any factor whose size is of the form $(s,1)$ occurs along the direction $(1,0)$ with bounded gaps. In other words, every row is uniformly recurrent. The argument is similar for the columns. To see that the converse does not hold, consider again the bidimensional word of Proposition~\ref{prop:rows_columns_UR}. \end{proof} \begin{corollary}\label{cor:UR_not_imply_URD} A bidimensional infinite UR word is not necessarily URD. \end{corollary} \begin{proof} This follows from Propositions~\ref{prop:UR_rows_columns} and~\ref{prop:URD_rows_columns}. \end{proof} We can also ask the constant $b$ to be uniform for all the origins.
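The quantities involved in these definitions are easy to examine experimentally on finite windows. The following Python sketch (purely illustrative: the example word, the direction, the factor size and the window length are arbitrary choices of ours, and only a finite portion of $w$ is inspected) builds the directional word $\dir{w}{q}{s}$ of a bidimensional word $w$ and measures the largest gap between consecutive occurrences of its first letter, i.e., the quantity that the constant $b$ bounds in the definitions above.
\begin{verbatim}
def dir_word(w, q, s, length):
    # l-th letter of w_{q,s}: the factor of size s = (s1, s2) of w
    # at position l*q, encoded as a tuple of tuples.
    return [tuple(tuple(w(l*q[0] + i, l*q[1] + j) for j in range(s[1]))
                  for i in range(s[0]))
            for l in range(length)]

def max_gap(word):
    # Largest gap between consecutive occurrences of the first letter.
    occ = [l for l, x in enumerate(word) if x == word[0]]
    return max(b - a for a, b in zip(occ, occ[1:]))

w = lambda i, j: (i + j) % 2   # a doubly periodic word, hence SURD
print(max_gap(dir_word(w, (2, 3), (2, 2), 50)))   # prints 2
\end{verbatim}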
As previously, the notation $\dir{(w^{(\mathbf{p})})}{q}{s}$ designates the unidimensional infinite word along the direction $\mathbf{q}$ with respect to the size $\mathbf{s}$ in the translated $d$-dimensional infinite word $w^{(\mathbf{p})}\colon \mathbb{N}^d\to A,\, \mathbf{i}\mapsto w(\mathbf{i}+\mathbf{p})$. \begin{definition}[SSURDO]\label{def:SSURDO} A $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ is \emph{super strongly uniformly recurrent along all directions from any origin} (SSURDO for short) if for all $\mathbf{s}\in\mathbb{N}^d$, there exists $b\in\mathbb{N}$ such that, for each direction $\mathbf{q}\in\mathbb{N}^d$ and each origin $\mathbf{p}\in\mathbb{N}^d$, each length-$b$ factor of $\dir{(w^{(\mathbf{p})})}{q}{s}$ contains the letter $\dir{(w^{(\mathbf{p})})}{q}{s}(0)$. \end{definition} Doubly periodic words satisfy the latter definition (take $b$ the product of the coordinates of the periods) but there also exist SSURDO aperiodic words. One of them is given as the fixed point of a bidimensional morphism introduced in Section~\ref{sec:morphism} (see Proposition~\ref{prop:SSURDO}). Note that this notion of SSURDO words is distinct from that of SURD words (see Example~\ref{ex:SURD_isnot_SSURDO}). \begin{proposition}\label{prop:SSURDO_imply_UR} A $d$-dimensional SSURDO word is necessarily UR. \end{proposition} \begin{proof} Let $w$ be a $d$-dimensional SSURDO word and let $p$ be a prefix of $w$ of some size $\mathbf{s}$. Let $b$ be the bound from Definition~\ref{def:SSURDO} and $\mathbf{b}=(b,\ldots,b)$. It is enough to prove that any factor of $w$ of size $\mathbf{b}+\mathbf{s}-\mathbf{1}$ contains $p$ as a factor. Let $\mathbf{p}=(p_1,\ldots,p_d)$ and let $f$ be the factor of size $\mathbf{b}+\mathbf{s}-\mathbf{1}$ occurring in $w$ at position $\mathbf{p}$. For each $i\in[\![1,d]\!]$, we let $\mathbf{e}_i$ denote the direction $(0,\ldots,0,1,0,\ldots,0)$ with $1$ in the $i$-th coordinate. By definition, in the word $(w^{(\mathbf{0})})_{\mathbf{e}_1,\mathbf{s}}$, each factor of length $b$ contains $p$ (considered as a letter). Therefore, there exists a position $k_1\mathbf{e}_1$ with $p_1 \le k_1 \le p_1 +b-1$ where $p$ occurs in $w$. By definition again, in the word $(w^{(k_1\mathbf{e}_1)})_{\mathbf{e}_2,\mathbf{s}}$, each factor of length $b$ contains an occurrence of $p$. So there exists a position $k_1\mathbf{e}_1+k_2\mathbf{e}_2$ with $p_2\le k_2 \le p_2+b-1$ where $p$ occurs in $w$. Applying the same argument $d-2$ more times, we find a position $k_1 \mathbf{e}_1 +\cdots + k_d\mathbf{e}_d\in[\![\mathbf{p},\mathbf{p}+\mathbf{b}-\mathbf{1}]\!]$ where $p$ occurs in $w$. Thus, $p$ occurs as a factor of $f$ as desired. \end{proof} \section{Construction of URD multidimensional words using the $\gcd$} \label{sec:gcd} In this section, we consider a specific construction of $d$-dimensional infinite words starting from a single unidimensional infinite word. More precisely, for any $u\colon\mathbb{N}\to A$, we define a $d$-dimensional infinite word $w\colon\mathbb{N}^d\to A$ by setting \[ \forall \mathbf{i}\in\mathbb{N}^d, \ w(\mathbf{i})=u(\gcd(\mathbf{i})), \] where $\gcd(\mathbf{i})=\gcd(i_1,\ldots,i_d)$ if $\mathbf{i}=(i_1,\ldots,i_d)$. Otherwise stated, one places the infinite word $u$ in every rational direction: for all directions $\mathbf{q}\in\mathbb{N}^d$ and all $\ell\in\mathbb{N}$, we have $w(\ell \mathbf{q})=u(\ell)$. 
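This construction is immediate to implement. The following Python sketch (an illustration of ours; the Thue--Morse word is just one possible choice of uniformly recurrent base word $u$, and the check is performed on a finite range only) builds the bidimensional word $w$ and verifies the identity $w(\ell\mathbf{q})=u(\ell)$ along a direction with coprime coordinates.
\begin{verbatim}
from math import gcd

def u(n):
    # Thue-Morse word: parity of the binary digit sum of n
    # (a classical uniformly recurrent word).
    return bin(n).count("1") % 2

def w(i1, i2):
    # The bidimensional word w(i) = u(gcd(i)); note gcd(0, 0) = 0.
    return u(gcd(i1, i2))

# For a direction q with coprime coordinates, gcd(l*q1, l*q2) = l,
# so w restricted to the line {l*q : l in N} reproduces u.
q = (3, 5)
assert all(w(l*q[0], l*q[1]) == u(l) for l in range(1000))
\end{verbatim}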
\begin{lemma} \label{lemma:gcd} Let $\mathbf{q}=(q_1,\ldots,q_d)\in\mathbb{Z}^d$ be such that $q_1,\ldots,q_d$ are coprime, let $\alpha_1,\ldots,\alpha_d\in\mathbb{Z}$ be such that $\alpha_1 q_1+\cdots+ \alpha_d q_d=1$, and let $\mathbf{i}=(i_1,\ldots,i_d)\in\mathbb{Z}^d\setminus \mathbb{Z}\mathbf{q}$. Then, for all $\ell\in\mathbb{Z}$, we have \begin{align*} \gcd(\ell \mathbf{q}+\mathbf{i}) &=\gcd\big(\ell +\alpha_1 i_1+\cdots+ \alpha_d i_d,\ \gcd( i_jq_k-i_kq_j \colon j,k\in[\![1,d]\!])\big),\\ \gcd(\ell\mathbf{q}) &=|\ell|. \end{align*} In particular, the sequence $\big(\gcd(\ell\mathbf{q}+\mathbf{i})\big)_{\ell\in\mathbb{Z}}$ is periodic with period $\gcd(i_jq_k-i_kq_j\colon j,k\in[\![1,d]\!])$. \end{lemma} \begin{proof} Set $g=\gcd(\ell \mathbf{q}+\mathbf{i})$ and $G=\gcd\big(\ell +\alpha_1 i_1+\cdots+ \alpha_d i_d,\ \gcd( i_jq_k-i_kq_j \colon j,k\in[\![1,d]\!])\big)$. Then $g$ divides \[ \sum_{j=1}^d \alpha_j(\ell q_j+i_j) =\ell\sum_{j=1}^d \alpha_j q_j+\sum_{j=1}^d \alpha_ji_j =\ell +\sum_{j=1}^d \alpha_ji_j. \] Moreover, for all $j,k\in[\![1,d]\!]$, $g$ also divides $(\ell q_j+i_j)q_k-(\ell q_k+i_k)q_j =i_jq_k-i_kq_j$. This shows that $g$ divides $G$, so $g\le G$. Conversely, for all $k\in[\![1,d]\!]$, $G$ divides \[ \Big(\ell +\sum_{j=1}^d \alpha_ji_j\Big)q_k +\sum_{j\in[\![1,d]\!]} (i_kq_j-i_jq_k)\alpha_j=\ell q_k+i_k. \] We obtain that $G$ divides $g$, so $G\le g$, hence $g=G$. The particular case follows from the fact that $\gcd(a,b)=\gcd(a+b,b)$. \end{proof} An \emph{arithmetical subsequence} of a word $w\colon \mathbb{N}\to A$ is a word $v\colon \mathbb{N}\to A$ for which there exist $p,q\in\mathbb{N}$ with $q\ne 0$ such that, for all $\ell\in\mathbb{N}$, $v(\ell)=w(\ell q+p)$. A proof of the following result can be found in \cite{Avgustinovich--2011}. \begin{lemma}\label{lem:arithm} An arithmetical subsequence of a uniformly recurrent infinite word is uniformly recurrent. \end{lemma} \begin{example} Consider the occurrences of the prefix $01$ of the Thue--Morse word at positions that are multiples of $3$: \[ {\bf 01}1{\bf 01}0{\bf 01}1001{\bf 01}1{\bf 0 1} 0{\bf 01} 0110{\bf 01} 1{\bf 01}0{\bf 01} 1001{\bf 01}1001101001 {\bf 01}1{\bf 01}0{\bf 01}1001{\bf 01}10\cdots \] From Lemma~\ref{lem:arithm}, the distance between any two consecutive such occurrences is bounded. \end{example} \begin{theorem} For any uniformly recurrent word $u\colon\mathbb{N}\to A$, the $d$-dimensional word $w\colon\mathbb{N}^d\to A,\ \mathbf{i}\mapsto u(\gcd(\mathbf{i}))$ is URD. \end{theorem} \begin{proof} Let $u\colon\mathbb{N}\to A$ be a uniformly recurrent word and let $w\colon\mathbb{N}^d\to A,\ \mathbf{i}\mapsto u(\gcd(\mathbf{i}))$ be the corresponding $d$-dimensional word. Let $\mathbf{q}$ be a direction, let $p$ be a prefix of $w$ of some size $\mathbf{s}$ and let $y\colon\mathbb{N}\to A^{[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]}$ be the word defined by \[ \forall \ell \in\mathbb{N},\ \forall \mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!],\ (y(\ell))(\mathbf{i})=w(\ell \mathbf{q}+\mathbf{i}). \] We claim that $y$ contains the letter $y(0)=p$ with bounded gaps. By construction of $w$, we have \[ \forall \ell \in\mathbb{N},\ \forall \mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!],\ (y(\ell))(\mathbf{i})=u(\gcd(\ell \mathbf{q}+\mathbf{i})). \] Now the conclusion follows from Lemma~\ref{lemma:gcd} and the uniform recurrence of $u$.
More precisely, let \[ B=\prod_{\substack{\mathbf{0}\le (i_1,\ldots,i_d)<\mathbf{s} \\ (i_1,\ldots,i_d)\notin\mathbb{N}\mathbf{q}}} \gcd(i_jq_k-i_kq_j\colon j,k\in[\![1,d]\!]) \] and $r=\min\{\lceil\frac{s_1}{q_1}\rceil,\ldots,\lceil\frac{s_d}{q_d}\rceil\}$. By Lemma~\ref{lem:arithm}, the length-$r$ prefix of $u$ occurs at positions that are multiples of $B$ in $u$ infinitely often, with gaps bounded by some constant $C$. Then, by Lemma~\ref{lemma:gcd}, $p$ occurs infinitely often in $y$ with gaps at most $BC$. \end{proof} \section{Recurrence properties of multidimensional rotation words} \label{sec:rotation} We illustrate that the URD and SURD notions are distinct using a generalization of rotation words to the multidimensional setting. This generalization includes the bidimensional Sturmian words, which were proven to be UR~\cite{Berthe--Vuillon--2001}. \begin{definition} Let $\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_d)\in[0,1)^d$ and $\rho\in[0,1)$ be such that $1,\alpha_1,\ldots,\alpha_d$ are rationally independent and let $\{I_1,\ldots,I_k\}$ be a partition of $[0,1)$ into intervals that are half-open on the right. The \emph{$d$-dimensional (lower) rotation word} $w\colon\mathbb{N}^d\to [\![1,k]\!]$ (with parameters $\boldsymbol\alpha,I_1,\ldots,I_k,\rho$) is defined as \[ \forall \mathbf{i}\in\mathbb{N}^d,\ \forall j\in[\![1,k]\!],\quad w(\mathbf{i}) = j \iff (\rho+\mathbf{i} \cdot \boldsymbol\alpha) \bmod 1\in I_j \] (where $\mathbf{i} \cdot \boldsymbol\alpha$ is the scalar product $i_1\alpha_1+\cdots+i_d\alpha_d$). Similarly, we can also consider intervals that are half-open on the left. In this case, we talk about \emph{$d$-dimensional upper rotation words}. \end{definition} Note that for $d=2$, $I_1=[0,\alpha_1)$ and $I_2=[\alpha_1,1)$, we recover the definition of bidimensional Sturmian words from~\cite{Berthe--Vuillon--2001}. With the previous notation, for $\mathbf{s}\in\mathbb{N}^d$ and for $f$ a $d$-dimensional finite word of size $\mathbf{s}$ over the alphabet $\{1,\ldots,k\}$, we let \[ I_f=\bigcap_{\mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]} R_{\mathbf{i}\cdot\boldsymbol{\alpha}}^{-1}(I_{f(\mathbf{i})}) \] where $R_a\colon[0,1)\to[0,1),\ x\mapsto (x+a)\bmod 1$. Note that an intersection of intervals on the circle is a union of intervals (it does not have to be connected). Since $I_f$ is an intersection of finitely many intervals, it is also a finite union of nonempty disjoint intervals. We let $n(f)$ denote the number of such intervals and $I_{f,1}, \dots, I_{f,n(f)}$ the intervals themselves, so that \begin{equation} \label{eq:union} I_f=\bigcup_{j=1}^{n(f)} I_{f,j}. \end{equation} If $I_f$ is empty then the union is empty, meaning that there is no interval $I_{f,j}$ at all, or equivalently that $n(f)=0$. \begin{lemma} \label{lem:rotation} Let $w$ be a $d$-dimensional rotation word with parameters $\boldsymbol\alpha, I_1,\ldots, I_k, \rho$. \begin{itemize} \item A $d$-dimen\-sional finite word $f$ occurs as a factor of $w$ at some position $\mathbf{p}$ if and only if $(\rho+\mathbf{p} \cdot \boldsymbol\alpha) \bmod 1\in I_f$. \item A $d$-dimensional finite word $f$ is a factor of $w$ if and only if $I_f$ is nonempty. \end{itemize} \end{lemma} \begin{proof} The proof is an adaptation of that of~\cite[Lemma~1]{Berthe--Vuillon--2001}. Let $f$ be a $d$-dimensional finite word.
Then $f$ occurs in $w$ at position $\mathbf{p}$ if and only if for all $\mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]$ we have $(\rho+(\mathbf{p}+\mathbf{i}) \cdot \boldsymbol\alpha) \bmod 1\in I_{f(\mathbf{i})}$, which is equivalent to saying that $(\rho+\mathbf{p}\cdot\boldsymbol{\alpha})\bmod 1\in I_f$. If $I_f$ is nonempty then it is a nonempty union of half-open intervals, and hence $I_f$ has nonempty interior. Moreover, by Kronecker's theorem (see for example~\cite{Hardy-Wright--2008}) and since $\alpha_d$ is irrational, we know that the orbit $\{(\rho+p_d\alpha_d)\bmod 1\colon p_d\in\mathbb{N}\}$ of $\rho$ under the rotation $R_{\alpha_d}$ is dense in $[0,1)$. Therefore, if $I_f$ is nonempty then for any $p_1,\ldots,p_{d-1}\in\mathbb{N}$, there exists some $p_d\in\mathbb{N}$ such that $(\rho+p_1\alpha_1+\cdots+p_{d-1}\alpha_{d-1}+p_d\alpha_d)\bmod 1$ belongs to $I_f$, so $f$ occurs as a factor of $w$ at position $\mathbf{p}=(p_1,\ldots,p_d)$. \end{proof} \begin{proposition} \label{prop:URD_isnot_SURD} All $d$-dimensional rotation words are URD, but none of them are SURD. \end{proposition} \begin{proof} Consider a $d$-dimensional rotation word $w$ with parameters $\boldsymbol\alpha, I_1,\ldots, I_k, \rho$. First, we show that $w$ is URD. Let $\mathbf{q}\in\mathbb{N}^d$ be a direction and $\mathbf{s}=(s_1,\ldots,s_d)\in\mathbb{N}^d$. We claim that the unidimensional word $\dir{w}{\mathbf{q}}{\mathbf{s}}$ is the image of a unidimensional rotation word under a letter-to-letter projection. Indeed, by definition, for each $\ell$, the letter $\dir{w}{\mathbf{q}}{\mathbf{s}}(\ell)$ corresponds to the factor of size $\mathbf{s}$ occurring at position $\ell\mathbf{q}$ in $w$. By Lemma~\ref{lem:rotation}, we get that the word $\dir{w}{\mathbf{q}}{\mathbf{s}}$ is the coding of the rotation on the unit circle of the point $\rho$ under the irrational angle $\mathbf{q}\cdot\boldsymbol\alpha$ with respect to the interval partition $\{I_{f_1,1},\ldots,I_{f_1,n(f_1)},\ldots,I_{f_r,1},\ldots,I_{f_r,n(f_r)}\}$ where $f_1,\ldots,f_r$ are the factors of $w$ of size $\mathbf{s}$ and the intervals $I_{f_i,j}$ are defined as in~\eqref{eq:union}. Note that, since for each $i$, the intervals $I_{f_i,1},\ldots,I_{f_i,n(f_i)}$ are coded by the same ``letter'' $f_i$ in $\dir{w}{\mathbf{q}}{\mathbf{s}}$, we do not necessarily obtain a rotation word but a letter-to-letter projection of a rotation word. Now, we obtain that $w$ is URD as a direct consequence of the three-gap theorem \cite{Swierczkowski--1959,Slater--1967} stating the following: if $\delta$ is an irrational number and $I$ is an interval of the unit circle, then the gaps between the successive integers $j$ such that $\delta j \bmod 1 \in I$ take at most three values. So, the letter $\dir{w}{\mathbf{q}}{\mathbf{s}}(0)$ occurs in $\dir{w}{\mathbf{q}}{\mathbf{s}}$ with gaps bounded by the largest gap corresponding to $\delta=\mathbf{q}\cdot\boldsymbol\alpha$ and the interval $I=I_{\dir{w}{\mathbf{q}}{\mathbf{s}}(0),j}$, where $j\in[\![1,n(\dir{w}{\mathbf{q}}{\mathbf{s}}(0))]\!]$ is the index of the interval $I_{\dir{w}{\mathbf{q}}{\mathbf{s}}(0),j}$ containing $\rho$. However, $w$ is not SURD since the uniform recurrence constant of $\dir{w}{q}{1}$ can be arbitrarily large depending on the direction $\mathbf{q}$. Indeed, by Kronecker's theorem, for each integer $N$, one can choose $\mathbf{q}_N=(q_1,N, \ldots,N)$ so that $\ell(\mathbf{q}_N\cdot \boldsymbol{\alpha} \bmod 1) < \min(|I_1|,\ldots,|I_k|)$ for any $\ell\in[\![0,N]\!]$.
Therefore, the word $w_{\mathbf{q}_N,\mathbf{1}}$ contains all the factors $j^N$ for $j\in[\![1,k]\!]$. \end{proof} To end this section, we present an alternative proof of Proposition~\ref{prop:URD_isnot_SURD} using the notion of direct product of words. As it happens, this second proof reveals a property of $d$-dimensional rotation words which is stronger than the URD property (see Remark~\ref{rem:stronger-than-URD}). Further, we hope that this technique could be useful to prove that other families of $d$-dimensional infinite words are URD. Recall that the \emph{direct product} of two unidimensional words $v\colon\mathbb{N}\to A$ and $w\colon\mathbb{N}\to B$ (possibly over different alphabets $A$ and $B$) is defined as the word $v\times w\colon\mathbb{N}\to A\times B$ whose $i$-th letter is $(v(i),w(i))$; see for example~\cite{Salimov--2010}. The direct product of $k\ge 2$ unidimensional words can be defined inductively. First, we need a lemma based on Furstenberg's results \cite{Furstenberg--1981} and their consequences for the direct product of unidimensional rotation words. \begin{lemma} \label{lem:direct_prod_sturmian} Any direct product of unidimensional lower (resp.\ upper) rotation words is uniformly recurrent. \end{lemma} \begin{proof} Let $k\ge 2$ and consider $k$ unidimensional lower (resp.\ upper) rotation words $\mathcal{R}_1,\ldots,\mathcal{R}_k$. For each $i$, suppose that $\mathcal{R}_i$ has slope $\alpha_i$ and intercept $\rho_i$. Let $T_i$ be the transformation associated with $\mathcal{R}_i$, i.e.\ $T_i\colon[0,1)\to[0,1),\, x\mapsto (x+\alpha_i)\bmod 1$. By definition, $\mathcal{R}_i$ is the coding of the orbit of the intercept $\rho_i$ in the dynamical system $([0,1),T_i)$ with respect to some interval partition $(I_{i,1},\ldots,I_{i,\ell_i})$ of $[0,1)$ where each interval $I_{i,j}$ is half-open on the right. Moreover, the direct product of $k$ codings can be seen as the coding of the product dynamical system $([0,1)^k,T_1\times\cdots\times T_k)$ where $T_1\times\cdots\times T_k\colon (x_1,\ldots,x_k)\mapsto (T_1(x_1),\ldots,T_k(x_k))$. The maps $T_i$ correspond to the transformation $T$ defined in \cite[Prop. 5.4]{Furstenberg--1981} with $d=1$. Thus, from a dynamical point of view, every point of $([0,1),T_i)$ is recurrent \cite[Prop. 5.4]{Furstenberg--1981}, and the product of a recurrent point of $([0,1),T_i)$ with a recurrent point of $([0,1),T_j)$ is again recurrent for the product system \cite[Prop.\ 5.5]{Furstenberg--1981}. Such points are called \emph{strongly recurrent} by Furstenberg. Since the direct product of strongly recurrent points is also strongly recurrent \cite[Lem.\ 5.10]{Furstenberg--1981} and since strong recurrence implies uniform recurrence \cite[Thm. 5.9]{Furstenberg--1981}, we obtain that the product of any $k$ points of $([0,1),T_1), \ldots, ([0,1),T_k)$, respectively, is uniformly recurrent with respect to the product system $([0,1)^k,T_1\times\cdots\times T_k)$. Finally, since $\mathcal{R}_1,\ldots,\mathcal{R}_k$ are all rotation words with the same orientation of the intervals, there is no ambiguity in the coding, and the dynamical results can be translated in terms of words. Therefore, their direct product is uniformly recurrent. \end{proof} \begin{proof}[Alternative proof of the URD part of Proposition~\ref{prop:URD_isnot_SURD}] Consider a $d$-dimensional rotation word $w$ with parameters $\boldsymbol\alpha, I_1,\ldots, I_k, \rho$. Let $\mathbf{q}\in\mathbb{N}^d$ be a direction and $\mathbf{s}=(s_1,\ldots,s_d)\in\mathbb{N}^d$.
For any $\mathbf{p}\in\mathbb{N}^d$, the unidimensional word $(w(\ell\mathbf{q}+\mathbf{p}))_{\ell\in\mathbb{N}}$ is a rotation word. Indeed, it is the coding of the rotation of the point $\rho+\mathbf{p}\cdot\boldsymbol\alpha$ of the unit circle under the irrational angle $\mathbf{q}\cdot\boldsymbol\alpha$, with respect to the partition into the intervals $I_1,\ldots,I_k$. Therefore, the word $\dir{w}{q}{s}$ is a direct product of $s_1\cdots s_d$ unidimensional rotation words (of the same orientation): \[ \dir{w}{q}{s} =\bigtimes_{\mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]} \big(w(\ell\mathbf{q}+\mathbf{i})\big)_{\ell\in\mathbb{N}}. \] By Lemma~\ref{lem:direct_prod_sturmian}, we obtain that $\dir{w}{q}{s}$ is uniformly recurrent. This proves that $w$ is URD. \end{proof} \begin{remark} \label{rem:stronger-than-URD} We argue that the second proof of Proposition~\ref{prop:URD_isnot_SURD} shows that $d$-dimensional rotation words satisfy a property which is stronger than URD, yet different from SURD. For any direction $\mathbf{q}$ and any position $\mathbf{p}$, the rotation angle of $(w(\ell\mathbf{q}+\mathbf{p}))_{\ell\in\mathbb{N}}$ is independent of $\mathbf{p}$. Moreover, for any size $\mathbf{s}$, any direction $\mathbf{q}$ and any position $\mathbf{p}$, we have \[ \dir{w}{q}{s}^{(\mathbf{p})} =\bigtimes_{\mathbf{i}\in[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]} \big(w(\ell\mathbf{q}+\mathbf{i}+\mathbf{p})\big)_{\ell\in\mathbb{N}}. \] Thus, for any size $\mathbf{s}$ and any direction $\mathbf{q}$, there exists a constant $b$ such that for any $\mathbf{p}$, each factor of length $b$ of the unidimensional word $\dir{w}{q}{s}^{(\mathbf{p})}$ contains all factors of size $\mathbf{s}$. Indeed, the constant $b$ only depends on the rotation angles of the words $\big(w(\ell\mathbf{q}+\mathbf{i}+\mathbf{p})\big)_{\ell\in\mathbb{N}}$, hence it is independent of the origin $\mathbf{p}$ (which is stronger than URD), although it depends on the direction $\mathbf{q}$ (which is weaker than SURD). \end{remark} \begin{remark} In the particular case of rotation words with the same slope $\alpha$, one can directly prove (without using Furstenberg's results) that their direct product is uniformly recurrent, except in the exceptional cases described below. See~\cite{Didier--1998} for similar concerns on rotation words. The proof goes as follows. Let $\mathcal{R}_1,\ldots,\mathcal{R}_k$ be unidimensional lower (resp.\ upper) rotation words with intercepts $\rho_1,\ldots,\rho_k$, respectively. As in the proof of Proposition~\ref{prop:URD_isnot_SURD}, for each $i$, the factor of length $s$ at position $m$ in $\mathcal{R}_i$ corresponds to the interval containing the point $(\rho_i+m\alpha)\bmod 1$. For each $i$, let us shift all the intervals of the $i$-th circle by $\rho_1-\rho_i$. Now the factor at position $m$ of each $\mathcal{R}_i$ corresponds to the (shifted) interval containing the point $\rho_i+(\rho_1-\rho_i)+m\alpha = \rho_1+m\alpha$. Consider the intervals created as the intersections of all shifted intervals (we have at most $n_1\cdots n_k$ of them, where each $n_i$ is the number of intervals in the interval partition corresponding to $\mathcal{R}_i$). These new intervals correspond to the factors of the product $\mathcal{R}_1\times\cdots\times\mathcal{R}_k$. Namely, the factor at position $m$ of $\mathcal{R}_1\times\cdots\times\mathcal{R}_k$ corresponds to the interval containing the point $\rho_1+m\alpha$. This shows that $\mathcal{R}_1\times\cdots\times\mathcal{R}_k$ is a rotation word. So, it is uniformly recurrent by the three-gap theorem.
If some of the words are upper rotation words and some are lower rotation words, their direct product might not be uniformly recurrent. This occurs when some intersection of the intervals is a single point, which can only happen when the intervals of one of the words are half-open on the right, those of another are half-open on the left, and the common orbit $\{(\rho_1+m\alpha)\bmod 1\colon m\in\mathbb{N}\}$ actually contains this point. On the other hand, if one of the words never touches the interval endpoints (which corresponds to an orbit not containing zero), then the orientation does not play any role for this word and we can assume that it is the same as for the other words. \end{remark} \section{Fixed points of multidimensional square morphisms} \label{sec:morphism} Similarly to unidimensional words, one can define morphisms and their fixed points in any dimension; for example, see~\cite{Charlier--Karki--Rigo--2010,Rigo--Maes--2002,Salon--1987}. For other kinds of multidimensional substitutions, we refer to the survey \cite{PriebeFrank--2008}. For simplicity, we only consider constant-length morphisms. \begin{definition} A \emph{$d$-dimensional morphism} of constant size $\mathbf{s}=(s_1,\ldots,s_d)\in\mathbb{N}^d$ is a map $\varphi\colon A\to A^{[\![\mathbf{0},\mathbf{s}-\mathbf{1}]\!]}$. For each $a\in A$ and for each integer $n\ge 2$, $\varphi^n(a)$ is recursively defined as \[ \varphi^n(a)\colon [\![\mathbf{0},\mathbf{s}^n-\mathbf{1}]\!]\to A,\ \mathbf{i}\mapsto \Big(\varphi\big((\varphi^{n-1}(a))(\mathbf{q})\big)\Big)(\mathbf{r}), \] where $\mathbf{q}$ and $\mathbf{r}$ are defined by the componentwise Euclidean division of $\mathbf{i}$ by $\mathbf{s}$: $\mathbf{i}=\mathbf{q}\mathbf{s}+\mathbf{r}$. With this notation, the \emph{preimage} of the letter $\big(\varphi^n(a)\big)(\mathbf{i})$ is the letter $\big(\varphi^{n-1}(a)\big)(\mathbf{q})$. In the case $\mathbf{s}=(s,\ldots,s)$, we say that $\varphi$ is a \emph{$d$-dimensional square morphism of size $s$}. \end{definition} Note that $\varphi^n(a)$ is obtained by concatenating $\prod_{i=1}^d s_i$ copies of the images $\varphi^{n-1}(b)$ for the letters $b$ occurring in $\varphi(a)$. For instance, if $d=2$ and $\mathbf{s}=(s_1,s_2)$, the $n$-th image $\varphi^n(a)$ has size $\mathbf{s}^n=(s_1^n,s_2^n)$ and, with the convention of Remark~\ref{rem:convention-2D}, we have \[ \varphi^n(a)= \begin{bmatrix} \varphi^{n-1}(\varphi(a)_{0,s_2-1}) & \cdots & \varphi^{n-1}(\varphi(a)_{s_1-1,s_2-1})\\ \vdots & & \vdots \\ \varphi^{n-1}(\varphi(a)_{0,0}) & \cdots & \varphi^{n-1}(\varphi(a)_{s_1-1,0}) \end{bmatrix} \] where we have used the lighter notation $\varphi(a)_{i,j}$ instead of $\big(\varphi(a)\big)(i,j)$.
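The recursive definition translates directly into a procedure computing any letter of $\varphi^n(a)$. The following Python sketch (with a hypothetical bidimensional binary morphism chosen only for illustration, and indexed so that \texttt{phi[a][i1][i2]} plays the role of $\varphi(a)_{i_1,i_2}$) implements the componentwise Euclidean division $\mathbf{i}=\mathbf{q}\mathbf{s}+\mathbf{r}$; the letter computed at the quotient $\mathbf{q}$ is exactly the preimage in the sense above.
\begin{verbatim}
# Hypothetical bidimensional morphism of constant size s = (2, 2),
# encoded so that phi[a][i1][i2] stands for phi(a)_{i1,i2}.
phi = {
    0: ((1, 1), (0, 0)),
    1: ((1, 0), (1, 1)),
}
s = (2, 2)

def letter(a, n, i1, i2):
    # Letter of phi^n(a) at position (i1, i2), via the recursion
    # phi^n(a)(i) = phi( phi^{n-1}(a)(q) )(r)  with  i = q*s + r;
    # the letter phi^{n-1}(a)(q) is the preimage of the result.
    if n == 0:
        return a
    q1, r1 = divmod(i1, s[0])
    q2, r2 = divmod(i2, s[1])
    return phi[letter(a, n - 1, q1, q2)][r1][r2]

# Here phi[1][0][0] == 1, so phi is prolongable on 1 and, as n grows,
# letter(1, n, i1, i2) stabilizes to the fixed point phi^omega(1).
print([[letter(1, 3, i1, i2) for i2 in range(8)] for i1 in range(8)])
\end{verbatim}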
\begin{example} In Figure~\ref{fig:preimage}, \begin{figure}[htbp] \centering \scalebox{0.8}{ \begin{tikzpicture}[overlay,remember picture] \tikzstyle{every path}=[draw=gray,line width = 1pt] \coordinate (a) at ( $ (pic cs:Z1) - (0.1,0.1) $); \coordinate (b) at ( $ (pic cs:Z2) + (0.2,0.3) $); \draw[fill=gray!50] (a) rectangle (b); \end{tikzpicture} $\begin{array}{c|ccccccccccccccccccccccccccc} 7&1&0&1& 1&1&0\tikzmark{A2}& 1&0&1& 1&0&1& 1&0&1& 1&1&0& 1&0&1& 1&1&0& 1&0&1\\ 6&1&1&0& \tikzmark{A1}0&1&1& 1&1&0& 1&1&0& 1&1&0& 0&1&1& 1&1&0& 0&1&1& 1&1&0\\ 5&1&0&1& 1&0&1& 1&1&0& 1&1&0\tikzmark{B2} & 1&0&1& 1&0&1& 1&0&1& 1&0&1& 1&1&0\\ 4&1&1&0& 1&1&0& 0&1&1& \tikzmark{B1}0&1&1& 1&1&0& 1&1&0& 1&1&0& 1&1&0& 0&1&1 \\ 3&1&0\tikzmark{A}&1& 1&1&0& 1&0&1\tikzmark{Z2}& 1&0&1& 1&1&0& 1&0&1& 1&0&1& 1&0&1\tikzmark{C2}& 1&1&0\\ 2&1&1&0& 0\tikzmark{B}&1&1& 1&1&0& 1&1&0& 0&1&1& 1&1&0& 1&1&0& \tikzmark{C1}1&1&0& 0&1&1 \\ 1&1&0&1& 1&0&1& 1&1\tikzmark{C}&0& 1&0&1 & 1&0&1& 1&1&0& 1&1&0& 1&0&1& 1&0&1\\ 0&\tikzmark{Z1}1&1&0& 1&1&0& 0&1&1& 1&1&0& 1&1&0& 0&1&1& 0&1&1& 1&1&0& 1&1&0\\\hline &0&1&2& 3&4&5& 6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22&23&24&25&26 \end{array}$ \begin{tikzpicture}[overlay,remember picture] \tikzstyle{every path}=[draw=red,line width = 1pt] \coordinate (x) at ( $ (pic cs:A) + (0.12,0.3)$); \draw (x.south) --++(0,-0.4) --++(-0.4,0)--++(0,0.4)--++(0.4,0)--++(0,-0.4); \coordinate (a) at ( $ (pic cs:A1) - (0.1,0.1) $); \coordinate (b) at ( $ (pic cs:A2) + (0.2,0.4) $); \draw (a) rectangle (b); \draw[->] (x) to [bend left = 10] (a); \tikzstyle{every path}=[draw=blue,line width = 1pt] \coordinate (x) at ( $ (pic cs:B) + (0.12,0.3)$); \draw (x.south) --++(0,-0.4) --++(-0.4,0)--++(0,0.4)--++(0.4,0)--++(0,-0.4); \coordinate (a) at ( $ (pic cs:B1) - (0.1,0.1) $); \coordinate (b) at ( $ (pic cs:B2) + (0.2,0.4) $); \draw (a) rectangle (b); \draw[->] (x) to [bend left = 10] (a); \tikzstyle{every path}=[draw=purple,line width = 1pt] \coordinate (x) at ( $ (pic cs:C) + (0.12,0.3)$); \draw (x.south) --++(0,-0.4) --++(-0.4,0)--++(0,0.4)--++(0.4,0)--++(0,-0.4); \coordinate (a) at ( $ (pic cs:C1) - (0.1,0.1) $); \coordinate (b) at ( $ (pic cs:C2) + (0.2,0.4) $); \draw (a) rectangle (b); \draw[->] (x) to [bend left = 10] (a); \end{tikzpicture} } \caption{Third iteration of the morphism} \hspace{4.5cm} $0\mapsto\begin{bmatrix} 1 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix},\ 1\mapsto\begin{bmatrix} 1 & 0 & 1\\ 1 & 1 & 0 \end{bmatrix}$ starting from $1$. \label{fig:preimage} \end{figure} the third iteration of a bidimensional morphism $\varphi$ of size $\mathbf{s}=(3,2)$ is given. The gray zone corresponds to $\varphi^2(1)$. The preimages of the different letters are highlighted in colors. For instance, the preimage of $\varphi^3(1)_{4,7}$ (in red) is the letter $\varphi^2(1)_{1,3}$, as $(4,7)=(1,3)\mathbf{s}+(1,1)$ (where the product and the sum are understood componentwise). Note that it is also the preimage of $\varphi^3(1)_{3,6}$, $\varphi^3(1)_{3,7}$ and $\varphi^3(1)_{5,7}$, for example. \end{example} \begin{definition} Let $\varphi$ be a $d$-dimensional morphism such that there exists $a\in A$ with $\varphi(a)_{\mathbf{0}}=a$. We say that $\varphi$ is \emph{prolongable} on $a$; in this case, the limit $\lim_{n\to\infty}\varphi^n(a)$ is well defined. The limit $d$-dimensional infinite word so obtained is called the \emph{fixed point} of $\varphi$ beginning with $a$ and it is denoted by $\varphi^\omega(a)$. A $d$-dimensional infinite word is said to be \emph{pure morphic} if it is the fixed point of a $d$-dimensional morphism.
\end{definition} \begin{example} Figure~\ref{fig:sierpinski} \begin{figure}[htb] \centering \scalebox{0.7}{ \includegraphics[width=0.02\textwidth]{Sierpinski1}\quad \includegraphics[width=0.04\textwidth]{Sierpinski2}\quad \includegraphics[width=0.08\textwidth]{Sierpinski3}\quad \includegraphics[width=0.16\textwidth]{Sierpinski4}\quad \includegraphics[width=0.32\textwidth]{Sierpinski5} } \caption{The first five iterations of the 2D morphism.} \hspace{1.5cm}$0\mapsto\begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$, $1\mapsto\begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix}$ starting from $1$. \label{fig:sierpinski} \end{figure} depicts the first five iterations of a bidimensional square morphism, with the convention that a black (resp.\ white) cell represents the letter 1 (resp.\ 0). The limit object of this process is the famous Sierpinski gasket \cite{Stewart--1995}. \end{example} A first interesting observation is that, in order to study the uniform recurrence along all directions (URD) of $d$-dimensional infinite words of the form $\varphi^\omega(a)$ for a square morphism $\varphi$, we only have to consider the distances between consecutive occurrences of the letter $a$. \begin{proposition}\label{prop:reduction} Let $w$ be a fixed point of a $d$-dimensional square morphism of size $s$ and let $\mathbf{q}$ be a direction. If there exists $b\in\mathbb{N}$ such that $w(\mathbf{0})$ occurs infinitely often along $\mathbf{q}$ with gaps at most $b$, then for all $\mathbf{m}\in\mathbb{N}^d$, the prefix of size $\mathbf{m}$ of $w$ occurs infinitely often along $\mathbf{q}$ with gaps at most $s^{\lceil \log_s( \max\mathbf{m})\rceil} b$. \end{proposition} \begin{proof} Let $\mathbf{m}\in\mathbb{N}^d$ and let $p$ be the prefix of size $\mathbf{m}$ of $w$. Let $r$ be the integer defined by $s^{r-1}<\max\mathbf{m}\le s^r$. In the $d$-dimensional infinite word $w$, if the letter $w(\mathbf{0})$ occurs at position $\mathbf{i}$, then the image $\varphi^r(w(\mathbf{0}))$ occurs at position $s^r\mathbf{i}$. Therefore, and because we consider a square morphism, if $w(\mathbf{0})$ occurs infinitely often along $\mathbf{q}$ with gaps at most $b$, then $p$, being a prefix of $\varphi^r(w(\mathbf{0}))$ since $\max\mathbf{m}\le s^r$, occurs infinitely often along $\mathbf{q}$ with gaps at most $s^r b$. \end{proof} In order to provide a family of SURD $d$-dimensional infinite words, we introduce the following definition. \begin{definition} For an integer $s\ge2$ and $\mathbf{i}=(i_1,\ldots,i_d)\in(\mathbb{Z}/s\mathbb{Z})^d$ such that $i_1,\ldots,i_d$ are coprime, we define $\langle \mathbf{i}\rangle$ to be the additive subgroup of $(\mathbb{Z}/s\mathbb{Z})^d$ that is generated by $\mathbf{i}$: \[ \langle \mathbf{i}\rangle = \{ k\mathbf{i}\colon k\in \mathbb{Z}/s\mathbb{Z}\}. \] Then, we let $\mathcal{C}(s)$ be the family of all cyclic subgroups of $(\mathbb{Z}/s\mathbb{Z})^d$ generated by elements with $\gcd(\mathbf{i})=1$: \[ \mathcal{C}(s)=\{\langle \mathbf{i}\rangle\colon \mathbf{i}\in(\mathbb{Z}/s\mathbb{Z})^d, \ \gcd(\mathbf{i})=1\}. \] \end{definition} \begin{proposition} \label{proposition:main morphic} If $\varphi$ is a $d$-dimensional square morphism of size $s$ prolongable on $a\in A$ and such that, for every $C\in\mathcal{C}(s)$, there exists $\mathbf{i}\in C$ such that $\varphi(b)_\mathbf{i}=a$ for each $b\in A$, then its fixed point $\varphi^\omega(a)$ is SURD. More precisely, for each $\mathbf{m}\in\mathbb{N}^d$, the prefix of size $\mathbf{m}$ of $\varphi^\omega(a)$ occurs infinitely often along any direction with gaps at most $s^{\lceil \log_s( \max\mathbf{m})\rceil+1}$.
\end{proposition} \begin{proof} Let $\mathbf{q}\in\mathbb{N}^d$ be a given direction. Let $\mathbf{r}=\mathbf{q}\bmod s$ (componentwise) and let $g=\gcd(\mathbf{r})$. By hypothesis, there exists $\mathbf{i}\in\langle\frac 1g\mathbf{r}\rangle$ such that $(\varphi(b))_\mathbf{i}=a$ for each $b\in A$. Let $k\in[\![0,s-1]\!]$ be such that $\mathbf{i}=\frac kg\mathbf{r} \bmod s$. Then $k\mathbf{q}\equiv k\mathbf{r}\equiv g\mathbf{i}\pmod s$. Observe that $\gcd(g,s)$ divides $\mathbf{r}$ and $s$, hence also divides every coordinate of $\mathbf{q}$; since these coordinates are coprime, this implies that $\gcd(g,s)=1$. Let $\ell=g^{-1}k\bmod s$. Then $\ell \mathbf{q}\equiv \mathbf{i}\pmod s$. We obtain that for all $n\in\mathbb{N}$, $(\ell +ns)\mathbf{q}\equiv \mathbf{i}\pmod s$, hence $(\varphi^\omega(a))_{(\ell+ns)\mathbf{q}}=a$: a letter at a position congruent to $\mathbf{i}$ modulo $s$ lies at coordinate $\mathbf{i}$ of the image of its preimage, and this coordinate carries $a$ in the image of every letter. This proves that the letter $a$ occurs infinitely often in $\varphi^\omega(a)$ along the direction $\mathbf{q}$ with gaps at most $s$. Now let $\mathbf{m}\in\mathbb{N}^d$ and consider the prefix $p$ of size $\mathbf{m}$ of $\varphi^\omega(a)$. From the first part of the proof and by using Proposition~\ref{prop:reduction}, we obtain that $p$ occurs infinitely often along any direction with gaps at most $s^{\lceil \log_s( \max\mathbf{m})\rceil+1}$. \end{proof} Since each subgroup of $(\mathbb{Z}/s\mathbb{Z})^d$ contains $\mathbf{0}$, the following result is immediate. \begin{corollary}\label{cor:1} Let $\varphi$ be a $d$-dimensional square morphism of size $s$ such that $(\varphi(b))_\mathbf{0}=a$ for each $b\in A$. Then the fixed point $\varphi^\omega(a)$ is SURD. More precisely, for each $\mathbf{m}\in\mathbb{N}^d$, the prefix of size $\mathbf{m}$ of $\varphi^\omega(a)$ occurs infinitely often along any direction with gaps at most $s^{\lceil \log_s( \max\mathbf{m})\rceil+1}$. \end{corollary} When the alphabet $A$ is binary (in which case we assume without loss of generality that $A=\{0,1\}$), we speak of a \emph{binary} morphism, and we always consider that it has a fixed point beginning with $1$. \begin{example} By Corollary~\ref{cor:1}, the fixed point $\varphi^\omega(1)$ of \[ \varphi\colon 0\mapsto\begin{bmatrix} 0 & 0\\ 1 & 0 \end{bmatrix},\quad 1\mapsto\begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} \] is SURD: for all $(m,n)\in\mathbb{N}^2$, the prefix of size $(m,n)$ of $\varphi^\omega(1)$ occurs infinitely often along any direction with gaps at most $2^{\lceil \log_2(\max(m,n))\rceil+1}$. \end{example} \begin{remark} When the size $s$ is prime, the subgroups of $\mathcal{C}(s)$ corresponding to any two elements $\mathbf{i}$ and $\mathbf{j}$ with coprime coordinates either coincide or have only the element $\mathbf{0}$ in common. Therefore we have exactly $\frac{s^d-1}{s-1}$ distinct subgroups. In particular, for $d=2$, this gives $s+1$ distinct subgroups. Hence we can consider a partition of $(\mathbb{Z}/s\mathbb{Z})^2$ into $s+2$ sets: the $s+1$ subgroups deprived of $\mathbf{0}$, and $\{\mathbf{0}\}$ itself. When $s$ is not prime, the structure is a bit more complicated and we do not have such a nice partition. Below we consider examples illustrating the two situations.
\end{remark} \begin{example} The partition for $s=5$ and $d=2$ can be illustrated by the following picture, where each letter in $\{\alpha,\ldots, \zeta\}$ represents a subgroup: \[ \begin{bmatrix} \beta & \zeta & \epsilon & \delta & \gamma \\ \beta & \delta & \zeta & \gamma & \epsilon \\ \beta & \epsilon & \gamma & \zeta & \delta \\ \beta & \gamma & \delta & \epsilon & \zeta \\ 0 & \alpha & \alpha & \alpha & \alpha \end{bmatrix} \] Due to Proposition~\ref{proposition:main morphic}, in order to obtain a SURD fixed point of a bidimensional square morphism, it is enough to have the letter $a$, in the image of each letter $b\in A$, at one of the coordinates marked by each Greek letter. And by Corollary~\ref{cor:1}, having the letter $a$ at the coordinate $(0,0)$ in the image of each letter is enough. \end{example} \begin{example} For $s=6$ and $d=2$, one has 12 subgroups (which can be checked by considering the 36 possible pairs of remainders of the Euclidean division by $6$, out of which only the 21 coprime pairs are to be considered): \[ \begin{bmatrix}[c|c|c|c|c|c] \beta & \kappa & \theta & \zeta & \delta & \gamma \\ \hline \beta,\zeta,\lambda & \iota & \delta,\kappa,\epsilon & \lambda & \gamma,\theta,\iota & \epsilon \\ \hline \beta,\delta,\theta,\mu & \eta & \mu & \gamma,\zeta,\kappa,\eta & \mu & \eta \\ \hline \beta,\zeta,\lambda & \epsilon & \gamma,\theta,\iota & \lambda & \delta,\epsilon,\kappa & \iota \\ \hline \beta & \gamma & \delta & \zeta & \theta & \kappa \\ \hline 0 & \alpha & \alpha,\eta,\mu & \alpha,\lambda,\iota,\epsilon & \alpha,\eta,\mu & \alpha \end{bmatrix} \] Here is the correspondence between the 12 subgroups and the letters (where we do not write $(0,0)$, which belongs to every subgroup): \[ \begin{tabular}{c|l|l} $\alpha$ & $(1,0)$ & $\{(1,0),(2,0),(3,0),(4,0),(5,0)\}$\\ $\beta$ & $(0,1)$ & $\{(0,1),(0,2),(0,3),(0,4),(0,5)\}$\\ $\gamma$ & $(1,1)$ & $\{(1,1),(2,2),(3,3),(4,4),(5,5)\}$\\ $\delta$ & $(2,1),(4,5)$ & $\{(2,1),(4,2),(0,3),(2,4),(4,5)\}$\\ $\epsilon$ & $(1,2),(5,4)$ & $\{(1,2),(2,4),(3,0),(4,2),(5,4)\}$\\ $\zeta$ & $(3,1),(3,5)$ & $\{(3,1),(0,2),(3,3),(0,4),(3,5)\}$\\ $\eta$ & $(1,3),(5,3)$ & $\{(1,3),(2,0),(3,3),(4,0),(5,3)\}$\\ $\theta$ & $(4,1),(2,5)$ & $\{(4,1),(2,2),(0,3),(4,4),(2,5)\}$\\ $\iota$ & $(1,4),(5,2)$ & $\{(1,4),(2,2),(3,0),(4,4),(5,2)\}$\\ $\kappa$ & $(5,1),(1,5)$ & $\{(5,1),(4,2),(3,3),(2,4),(1,5)\}$\\ $\lambda$ & $(3,4),(3,2)$ & $\{(3,4),(0,2),(3,0),(0,4),(3,2)\}$\\ $\mu$ & $(4,3),(2,3)$ & $\{(4,3),(2,0),(0,3),(4,0),(2,3)\}$ \end{tabular} \] We remark that here the subgroups intersect. For example, the subgroups $\gamma$ and $\zeta$ have the element $(3,3)$ in common. Due to Proposition~\ref{proposition:main morphic}, in order to obtain a SURD word, it suffices to have the letter $a$ occupy, in the image of each letter, at least one of the elements of each subgroup. For example, this is the case for the fixed point of any morphism with $a$'s at the marked positions in the images of each letter: \[ \begin{bmatrix} * & * & * & * & * & *\\ a & * & a & * & * & *\\ * & * & * & a & * & *\\ * & * & a & * & * & *\\ * & * & * & * & * & *\\ * & * & * & * & a & *\\ \end{bmatrix} \] \end{example} \begin{corollary}\label{cor:morphism power} If $\psi$ is a $d$-dimensional square morphism of size $s$ such that for some integer $i$, its power $\varphi=\psi^i$ satisfies the conditions of Proposition~\ref{proposition:main morphic}, then the fixed point $\psi^\omega(a)$ is SURD.
More precisely, for all $\mathbf{m}\in\mathbb{N}^d$, the prefix of size $\mathbf{m}$ of $\psi^\omega(a)$ occurs infinitely often along any direction with gaps at most $s^{i\lceil \log_{s^i}( \max\mathbf{m})\rceil+i}$. \end{corollary} \begin{proof} Clearly, the fixed points of $\psi$ and $\varphi$ are the same. Now apply Proposition~\ref{proposition:main morphic} to $\varphi$, which is a square morphism of size $s^i$. \end{proof} \begin{example} The morphism \[ \psi\colon 0\mapsto \begin{bmatrix} 0 & 0 & 0\\ 1 & 1 & 1\\ 0 & 1 & 0 \end{bmatrix}, \quad 1\mapsto \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{bmatrix} \] satisfies the hypotheses of Corollary~\ref{cor:morphism power} for $s=3$, $i=2$. Indeed, it can be checked that for each $C\in\mathcal{C}(9)$, we can find a $1$ at the same position in $C$ in both images $\psi^2(0)$ and $\psi^2(1)$. \end{example} \begin{remark} \label{rem:primitivity} The hypotheses of Proposition~\ref{proposition:main morphic} should be compared to the primitivity property of a morphism. In the unidimensional case, a morphism is said to be primitive if its incidence matrix is primitive, or equivalently, if some power of the morphism is such that all letters appear in the image of every letter; see for example \cite{Durand--1998}. It is well known that fixed points of primitive morphisms are uniformly recurrent. This notion of primitivity generalizes naturally to any dimension $d$. However, if we are interested in studying the URD property, this natural generalization of primitivity is not the right notion: we should consider not only the number of times a letter occurs in the image of another letter, but also the positions where the letter occurs within each image. See Section~\ref{sec:perspectives} for some perspectives in this direction. \end{remark} Now we give a family of examples of SURD $d$-dimensional words which do not satisfy the hypotheses of Corollary~\ref{cor:morphism power}, showing that the latter does not give a necessary condition. We first need the following observation on unidimensional fixed points of morphisms. \begin{lemma}\label{lem:001-101} Let $\varphi$ be a unidimensional morphism of constant size $s$, with $s$ prime, prolongable on $a\in A$ and for which there exists $i\in [\![0,s-1]\!]$ such that $\varphi(b)_i=a$ for each $b\in A$. For all positive integers $m$, any factor of length $s$ of the infinite word $(\varphi^\omega(a)_{mk})_{k\in\mathbb{N}}$ contains the letter $a$. \end{lemma} \begin{proof} Let $w=\varphi^\omega(a)$ and let $m$ be a positive integer. The integer $m$ can be decomposed in a unique way as $m=s^e\ell$ with $e,\ell\in\mathbb{N}$ and $\ell\not\equiv 0\pmod s$. We prove the result by induction on $e\in\mathbb{N}$. If $e=0$ then $m\not\equiv 0\pmod s$, so $m$ is invertible modulo the prime $s$. Then for all $k\in\mathbb{N}$, at least one of the $s$ integers $m k$, $m (k+1),\ldots,m (k+s-1)$ is congruent to $i$ modulo $s$. Since the letter $a$ appears in the $i$-th place of the images of all letters, at least one of the letters $w_{m k}$, $w_{m (k+1)},\ldots,w_{m (k+s-1)}$ is equal to $a$. Now suppose that $e>0$ and that the result holds for $e-1$. Observe that, for every $k\in\mathbb{N}$, the preimage of the letter $w_{mk}=w_{s^e\ell k}$ is the letter $w_{\frac ms k}=w_{s^{e-1}\ell k}$. Since the morphism is prolongable on $a$ and since $m\equiv 0\pmod s$, for each $k\in\mathbb{N}$, the letter $w_{mk}$ is equal to $a$ if its preimage is $a$. But by the induction hypothesis, for all $k\in\mathbb{N}$, at least one of the $s$ preimages $w_{\frac ms k}$, $w_{\frac ms (k+1)},\ldots,w_{\frac ms (k+s-1)}$ is equal to $a$.
Therefore, we obtain that for all $k\in\mathbb{N}$, at least one of the $s$ letters $w_{m k}$, $w_{m (k+1)},\ldots,w_{m (k+s-1)}$ is equal to $a$ as well. \end{proof} \begin{proposition} \label{prop:SURDcor-not-necessary} If $\varphi$ is a $d$-dimensional square morphism of some prime size $s$ and prolongable on $a\in A$ such that \begin{enumerate} \item \label{eq:1} $\forall i_2,\ldots,i_d\in[\![0,s-1]\!]$, $\varphi(a)_{0,i_2,\ldots,i_d}=a$, \item \label{eq:2} $\exists i_1\in[\![0,s-1]\!]$, $\forall i_2,\ldots,i_d\in[\![0,s-1]\!]$, $\varphi(b)_{i_1,\ldots,i_d}=a$ for each $b\in A$, \end{enumerate} then $\varphi^\omega(a)$ is SURD. \end{proposition} \begin{proof} By Proposition~\ref{prop:reduction}, we only have to show that there exists a uniform bound $t$ such that the letter $a$ occurs infinitely often along any direction of $\varphi^\omega(a)$ with gaps bounded by $t$. It is sufficient to prove the result for the fixed point beginning with $1$ of the binary morphism $\psi$ satisfying the hypotheses \eqref{eq:1} and \eqref{eq:2} and having $0$ at all other coordinates in the images of both $0$ and $1$. Indeed, the fixed point $\varphi^\omega(a)$ of any morphism $\varphi$ satisfying \eqref{eq:1} and \eqref{eq:2} differs from this one only by replacing occurrences of $1$ by $a$ and occurrences of $0$ by arbitrary letters of the alphabet. For example, for $d=2$, the morphism $\psi$ is \[ \psi\colon 0\mapsto \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots &&&&&& \vdots\\ 0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}, \quad 1\mapsto \begin{bmatrix} 1 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots &&&&&& \vdots\\ 1 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix} \] (where the common columns of $1$'s are placed at position $i_1$ in both images). Each of the hyperplanes \[ H_k=\{\psi^\omega(1)_{k,i_2,\ldots,i_d}\colon i_2,\ldots,i_d\in\mathbb{N}\},\ \mathrm{\ for \ } k\in\mathbb{N}, \] of $\psi^\omega(1)$ contains either only $0$'s or only $1$'s. Therefore, for any direction $\mathbf{q}=(q_1,\ldots,q_d)$, we have $\psi^\omega(1)_{\ell \mathbf{q}}=\psi^\omega(1)_{\ell q_1,0,\ldots,0}$, hence the unidimensional word $\mathbb{N}\to A,\ \ell\mapsto \psi^\omega(1)_{\ell \mathbf{q}}$ is the word $\ell\mapsto \sigma^\omega(1)_{\ell q_1}$, where $\sigma^\omega(1)$ is the fixed point of the unidimensional morphism \[ \sigma\colon 0\mapsto \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}, \quad 1\mapsto \begin{bmatrix} 1 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix} \] (where, again, the common $1$'s are placed at position $i_1$ in both images). By Lemma~\ref{lem:001-101} applied with $m=q_1$ (for $q_1=0$ the word is constantly equal to $1$), we obtain that $\psi^\omega(1)$ is SURD with the uniform bound $t=s$. \end{proof} Note that the role of the first coordinate $i_1$ could be played by any of the other coordinates $i_2,\ldots,i_d$, with the ad hoc modifications in the statement of Proposition~\ref{prop:SURDcor-not-necessary}. Now we give a sufficient condition for a $d$-dimensional word not to be URD. \begin{proposition} \label{prop:suff-not-nec} Let $\varphi$ be a $d$-dimensional square morphism of a prime size $s$ prolongable on $a\in A$. Let $\mathbf{q}$ be a direction and let $C=\langle \mathbf{q} \bmod{s} \rangle$. If $\varphi(b)_\mathbf{i}\neq a$ for each $b\in A$ and $\mathbf{i}\in C$ except for $\varphi(a)_\mathbf{0}=a$, then $(\varphi^\omega(a)_{\ell\mathbf{q}})_{\ell\in\mathbb{N}}\in a (A\setminus \{a\})^\omega$.
In particular, $\varphi^\omega(a)$ is not recurrent along the direction $\mathbf{q}$. \end{proposition} \begin{proof} Suppose that the first occurrence of $a$ after the one at position $\mathbf{0}$ along the direction $\mathbf{q}$ is at position $\ell \mathbf{q}$. Since, for each $b\in A$, $\varphi(b)$ has non-$a$ letters at all the positions of $C\setminus \{\mathbf{0}\}$, the letter $\varphi^\omega(a)_{\ell \mathbf{q}}$ must be located at the coordinate $\mathbf{0}$ of the image of the letter $a$. In particular, the preimage of $\varphi^\omega(a)_{\ell \mathbf{q}}$ must be $a$. Because $s$ is prime, $\ell$ must be divisible by $s$, and the preimage of $\varphi^\omega(a)_{\ell \mathbf{q}}$ is $\varphi^\omega(a)_{\frac{\ell}{s}\mathbf{q}}$. But by the choice of $\ell$ and since $0<\frac{\ell}{s}<\ell$, we must also have $\varphi^\omega(a)_{\frac{\ell}{s}\mathbf{q}}\neq a$, a contradiction. \end{proof} The next result shows that the condition of Proposition~\ref{prop:suff-not-nec} is not necessary. \begin{proposition} The fixed point $\varphi^{\omega}(1)$ of the morphism \[ \varphi\colon 0\mapsto \begin{bmatrix} 1 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}, \quad 1\mapsto \begin{bmatrix} 1 & 1 & 1\\ 0 & 1 & 0\\ 1 & 1 & 0 \end{bmatrix} \] is not recurrent along the direction $(1,3)$. \end{proposition} \begin{proof} We let $w=\varphi^{\omega}(1)$. We show that the sequence we get along the direction $(1,3)$ is $10^\omega$. It can be checked directly that the first symbols are $100$; then we proceed by induction. Suppose to the contrary that this is not the case, and let $i$ be the smallest positive integer such that $w_{i,3i}=1$. We consider three cases: $i=3i'$, $i=3i'+1$, or $i=3i'+2$. In each case, our aim is to prove that $w_{i',3i'}=1$, contradicting the minimality of $i$. Case 1: $i=3i'$. In this case $\varphi(w_{i',3i'})_{0,0}=w_{i,3i}$. Since $\varphi(0)_{0,0}=0$ and $w_{i,3i}=1$ by the assumption, we must have $w_{i',3i'}=1$. Case 2: $i=3i'+1$. In this case $\varphi(w_{i',3i'+1})_{1,0}=w_{i,3i}$. Since $\varphi(0)_{1,0}=0$ and $w_{i,3i}=1$, we have $w_{i',3i'+1}=1$. The coordinate $(i',3i'+1)$ being a position $(i' \bmod 3, 1)$ in some image $\varphi(a)$, this is possible only in the case when $a=1$ and $i'\equiv 1 \pmod 3$. Indeed, this is the only non-$0$ position with second coordinate $1$ in $\varphi(0)$ and $\varphi(1)$. Therefore, we obtain $w_{i',3i'}=1$. Case 3: $i=3i'+2$. In this case $\varphi(w_{i',3i'+2})_{2,0}=w_{i,3i}$. Since $\varphi(1)_{2,0}=0$ and $w_{i,3i}=1$, we have $w_{i',3i'+2}=0$. The coordinate $(i',3i'+2)$ being a position $(i' \bmod 3, 2)$ in some image $\varphi(a)$, we must have $a=0$ and $i'\equiv 2 \pmod 3$. Indeed, this is the only non-$1$ position with second coordinate $2$ in $\varphi(0)$ and $\varphi(1)$. We obtain once again that $w_{i',3i'}=1$. \end{proof} The next theorem gives a characterization of the SURD fixed points of square binary morphisms of size $2$. \begin{theorem}\label{thm:characterization} Let $\varphi$ be a bidimensional binary square morphism of size $2$ prolongable on $1$. The fixed point $\varphi^\omega(1)$ is SURD if and only if either $\varphi(0)_{0,0}=1$ or $\varphi(1)=\left[\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right]$. \end{theorem} The ``if'' part follows from Corollary~\ref{cor:1}. The ``only if'' part is proven with a rather technical case-by-case analysis using certain properties of arithmetic progressions in the Thue--Morse word $\mathbf{t}=0110100110010110\cdots$.
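The two arithmetic-progression properties that we use (Lemmas~\ref{lemma:TM1} and~\ref{lemma:TM0} below) are easy to verify numerically. The following Python sketch is an experimental check of ours on a finite range of parameters, not a proof.
\begin{verbatim}
def t(n):
    # Thue-Morse word: t(n) = s_2(n) mod 2, where s_2(n) is the
    # number of 1's in the binary expansion of n.
    return bin(n).count("1") % 2

for l in range(1, 12):
    # Lemma TM1: with d = 2^l - 1, the letters t(d), t(2d), ...,
    # t(2^l * d) coincide and are equal to l mod 2.
    d = 2**l - 1
    assert {t(m * d) for m in range(1, 2**l + 1)} == {l % 2}
    # Lemma TM0: with d = 2^l + 1, t(0) = t(d) = ... = t(2^l * d) = 0.
    d = 2**l + 1
    assert all(t(m * d) == 0 for m in range(2**l + 1))
\end{verbatim}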
We first provide two useful lemmas about the Thue-Morse word \cite{Thue--1906}. Recall that this word is the fixed point of the unidimensional morphism $0\mapsto 01,\ 1\mapsto 10$. It can also be defined thanks to the function $s_2\colon\mathbb{N}\to \mathbb{N}$ that returns the sum $s_2(n)$ of the digits in the binary expansion of $n$: the $(n+1)$-th letter $t_n$ of the Thue-Morse word $\mathbf{t}$ is equal to $0$ if $s_2(n)\equiv 0\pmod 2$ and to $1$ otherwise.
\begin{lemma}\label{lemma:TM1}
For any $\ell\in\mathbb{N}$, the Thue--Morse word $\mathbf{t}=(t_n)_{n\in \mathbb{N}}$ satisfies $t_0=0$ and $t_d=t_{2d}=t_{3d}=\ldots=t_{2^\ell d}$ with $d=2^{\ell}-1$. Moreover, $t_d=1$ if $\ell$ is odd and $t_d=0$ if $\ell$ is even.
\end{lemma}
\begin{proof}
Let $m\in[\![1,2^\ell]\!]$. There exist $r$ odd and $i\geq 0$ such that $m = r2^i$. Denote by $r_j r_{j-1}\cdots r_1 r_0$ the binary expansion $(r)_2$ of $r$. In particular, $r_0=1$ since $r$ is odd. Also $r\leq m\leq 2^\ell$ and $r$ odd imply that $r<2^\ell$ and $|(r)_2|\leq \ell$. We have $(r2^\ell)_2=r_j r_{j-1}\cdots r_1 r_0 0^\ell$ and $(r2^\ell-1)_2=r_j r_{j-1}\cdots r_1 0 1^\ell$. Therefore,
\begin{align*}
s_2(r(2^\ell-1)) &= s_2(r2^\ell -1-(r-1)) \\
&= s_2(r)-1+\ell-s_2(r-1) \tag{as $|(r)_2|\leq \ell$}\\
&= s_2(r)-1+\ell-s_2(r)+1 \tag{as $r$ odd}\\
&= \ell.
\end{align*}
Since $m(2^\ell-1)=r(2^\ell-1)2^i$ and multiplying by a power of $2$ does not change the binary digit sum, $s_2(m(2^\ell-1))=s_2(r(2^\ell-1))=\ell$, and the conclusion follows.
\end{proof}
Note that the proof of the previous lemma is a modification of that of Lemma 3.2 in \cite{Bucci--2013}.
\begin{lemma} \label{lemma:TM0}
For any positive integer $\ell$, the Thue--Morse word $\mathbf{t}=(t_n)_{n\in \mathbb{N}}$ satisfies $t_{0}=t_{d}=t_{2d}=\ldots=t_{2^\ell d}=0$ with $d=2^\ell+1$.
\end{lemma}
\begin{proof}
Let $m\in[\![0,2^\ell-1]\!]$. Since $m<2^\ell$, $|(m)_2|\leq \ell$, so no carry occurs in the sum $m(2^\ell+1)=m2^\ell+m$ and $s_2(m(2^\ell+1))=s_2(m)+s_2(m)=2s_2(m)$ is even. It follows that $t_{m(2^\ell+1)}=0$. For $m=2^\ell$, we have $s_2(m(2^\ell+1))=s_2(2^{2\ell}+2^\ell)=2$ and $t_{m(2^\ell+1)}=0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:characterization}]
The condition is sufficient by Corollary~\ref{cor:1}. To prove that it is necessary, we show by a case analysis that the fixed points beginning with $1$ of all the other possible morphisms are not SURD. For the sake of clarity, we set $w=\varphi^\omega(1)$. First, note that $\varphi(1)_{i,j}=\varphi(0)_{i,j}=0$ for some $(i,j)\ne (0,0)$ implies that $w$ contains $10^\omega$ along the direction $(i,j)$. Hence for a given position $(i,j)$, it is sufficient to consider $(\varphi(1)_{i,j},\varphi(0)_{i,j})\in\{(0,1),(1,0),(1,1)\}$. The graph of our case analysis is depicted in Figure~\ref{fig:2x2}.
\begin{figure}[htb] \begin{center} \scalebox{0.9}{ \begin{tikzpicture}[scale=1] \clip (9.8,8.4) rectangle (-1.8,-1.7); \tikzstyle{01}=[draw=blue, line width=1pt] \tikzstyle{10}=[draw=red, line width=1pt] \tikzstyle{11}=[draw=green, line width=1pt] \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=2.7cm,inner sep=2pt] \tikzstyle{every path}=[draw=blue] \begin{scope}[shift={(0,8)}] \morph{1}{0}{.}{.}{0}{1}{.}{.}; \node(a1) at (1.5,-0.5){}; \end{scope} \begin{scope}[shift={(-2,4)}] \morph{1}{0}{0}{?}{0}{1}{1}{?}; \node(b1) at (1.5,-0.5){}; \end{scope} \draw[01] (a1.south)--(b1.north); \begin{scope}[shift={(0,4)}] \morph{1}{0}{1}{?}{0}{1}{0}{?}; \node(c1) at (1.5,-0.5){}; \end{scope} \draw[10] (a1.south)--(c1.north); \begin{scope}[shift={(2,4)}] \morph{1}{0}{1}{?}{0}{1}{1}{?}; \node(d1) at (1.5,-0.5){}; \end{scope} \draw[11] (a1.south)--(d1.north); \begin{scope}[shift={(1,0)}] \morph{1}{0}{1}{0}{0}{1}{1}{1}; \node(k1) at (1.5,-0.5){}; \end{scope} \draw[01] (d1.south)--(k1.north); \begin{scope}[shift={(3,0)}] \morph{1}{0}{1}{1}{0}{1}{1}{*}; \node(l1) at (1.5,-0.5){}; \end{scope} \draw[10] (d1.south) to [bend right=20](l1.north); \draw[11] (d1.south) to [bend left=20](l1.north); \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0.2cm,inner sep=2pt] \begin{scope}[shift={(1,-1.5)}] \node at (-2,4){Case 1}; \node at (0,4){Case 2}; \node at (2,4){Case 3}; \node at (1,0){Case 3.1}; \node at (3,0){Case 3.2}; \node at (4,3.95){Symm.}; \node at (4,3.45){Case 2}; \node at (6,4){Case 4}; \node at (8,3.95){Symm.}; \node at (8,3.45){Case 3}; \end{scope} \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=2.7cm,inner sep=2pt] \tikzstyle{every path}=[draw=blue] \begin{scope}[shift={(4,8)}] \morph{1}{1}{?}{.}{0}{0}{?}{.}; \node(a2) at (1.5,-0.5){}; \end{scope} \begin{scope}[shift={(8,8)}] \morph{1}{1}{?}{.}{0}{1}{?}{.}; \node(a3) at (1.5,-0.5){}; \end{scope} \begin{scope}[shift={(4,4)}] \morph{1}{1}{0}{?}{0}{0}{1}{?}; \node(b) at (1.5,-0.5){}; \end{scope} \draw[01] (a2.south)--(b.north); \begin{scope}[shift={(6,4)}] \morph{1}{1}{1}{*}{0}{*}{*}{*}; \node(c) at (1.5,-0.5){}; \end{scope} \draw[10] (a2.south) to [bend right=20] (c.north); \draw[11] (a2.south) to [bend left=20](c.north); \draw[10] (a3.south) to [bend right=20] (c.north); \draw[11] (a3.south) to [bend left=20](c.north); \begin{scope}[shift={(8,4)}] \morph{1}{1}{0}{*}{0}{1}{1}{*}; \node(d) at (1.5,-0.5){}; \end{scope} \draw[01] (a3.south)--(d.north); \end{tikzpicture} } \end{center} \caption{Square morphisms of size 2. A black (resp.\ white) cell corresponds to a position filled with letter $1$ (resp.\ $0$). A gray cell corresponds to a position which can contain any letter. The possible pairs $(\varphi(1)_{i,j},\varphi(0)_{i,j})$ with $i,j\in\{0,1\}$ are successively considered. A blue line corresponds to the pair $(\varphi(1)_{i,j},\varphi(0)_{i,j})=(0,1)$, a red one to $(1,0)$ and a green one to $(1,1)$.} \label{fig:2x2} \end{figure} \medskip {\bf Case 1} \[ \varphi\colon 0\mapsto \begin{bmatrix} 1 & *\\ 0 & 1 \end{bmatrix}, \quad 1\mapsto \begin{bmatrix} 0 & *\\ 1 & 0 \end{bmatrix} \] We show that the factor $10^{2^\ell-1}$ occurs along the direction $(p,q) =(2^{2\ell} (2^\ell-1),2^\ell +1)$ with $\ell$ odd (see Figure~\ref{fig:case1}). First note that the first row of $\varphi^\omega(1)$ is equal to $\bar{\mathbf{t}}$. Hence, the first $2^{2\ell}$ rows contain $\varphi^{2\ell}(\bar{\mathbf{t}})$. Let $d=2^{\ell}-1$. 
By Lemma~\ref{lemma:TM1}, the arithmetical subsequence $(\bar{t}_{md})_{m\in \mathbb{N}}$ begins with $10^{2^\ell}$. Thus, $w_{mp,0}=w_{md2^{2\ell},0}=w_{md,0}=0$ for $m\in\{1,\ldots,2^\ell\}$. To conclude, observe that the first column of $\varphi^{2\ell}(0)$ is a prefix of $\mathbf{t}$. By Lemma~\ref{lemma:TM0}, the arithmetical subsequence $(t_{mq})_{m\in \mathbb{N}}$ begins with $0^{2^\ell}$. Let $m\in[\![1,2^\ell-1]\!]$. As $(2^\ell -1)q= 2^{2\ell}-1 < 2^{2 \ell}$, the letter $w_{mp,mq}$ is inside a square $\varphi^{2\ell}(0)$ with the bottom left corner at position $(mp,0)$, hence $w_{mp,mq}=0$. \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0cm,inner sep=2pt] \tikzstyle{every path}=[draw=black,line width = 0.5pt] \draw (0,0) rectangle (6.4,6.4); \node at (3.2,3.2){\large $\varphi^{2\ell}(1)$}; \node at (0.2,0.2){\tiny 1}; \node at (0.2,0.5){\tiny 0}; \node at (0.2,0.8){\tiny 0}; \node at (0.2,1.1){\tiny 1}; \draw[dotted] (0.2,1.3)--(0.2,1.8); \node at (0.2,2){\tiny 1}; \draw[dotted] (0.2,2.7)--(0.2,3.1); \node at (0.2,3.8){\tiny 1}; \draw[dotted] (0.2,4.5)--(0.2,4.9); \node at (0.2,5.6){\tiny 1}; \draw[dotted] (0.2,5.9)--(0.2,6.3); \foreach \x in {10,20,30} { \draw (\x,0) rectangle (6.4+\x,6.4); \node at (\x+3.2,3.2){\large $\varphi^{2\ell}(0)$}; \node at (\x+0.2,0.2){\tiny 0}; \node at (\x+0.2,0.5){\tiny 1}; \node at (\x+0.2,0.8){\tiny 1}; \node at (\x+0.2,1.1){\tiny 0}; \draw[dotted] (\x+0.2,1.3)--(\x+0.2,1.8); \node at (\x+0.2,2){\tiny 0}; \draw[dotted] (\x+0.2,2.7)--(\x+0.2,3.1); \node at (\x+0.2,3.8){\tiny 0}; \draw[dotted] (\x+0.2,4.5)--(\x+0.2,4.9); \node at (\x+0.2,5.6){\tiny 0}; \draw[dotted] (\x+0.2,5.9)--(\x+0.2,6.3); } \foreach \x in {0,10,20} { \node at (\x+1.5+6.4,3){$\ldots$}; } \tikzstyle{every path}=[draw=red,line width = 1pt, ->] \draw (0.4,0.1) to [bend left = 5] node [above] {\color{red}$p$} (10,0.1); \draw (10+0.4,2) to [bend left = 5] node [above] {\color{red}$p$} (10+10,2); \draw (20+0.4,3.9) to [bend left = 5] node [above] {\color{red}$p$} (20+10,3.9); \draw (10+0.5,0.2+0.1) to [bend right = 20] node [right] {\color{red}$q$} (10+0.5,0.2+1.7); \draw (20+0.5,2.0+0.1) to [bend right = 20] node [right] {\color{red}$q$} (20+0.5,2.0+1.7); \draw (30+0.5,3.8+0.1) to [bend right = 20] node [right] {\color{red}$q$} (30+0.5,3.8+1.7); \end{tikzpicture} \end{center} \caption{Structure of Case 1 morphisms with $(p,q) =(2^{2\ell} (2^\ell-1),2^\ell +1)$ where $\ell$ is odd.}\label{fig:case1} \end{figure} \medskip {\bf Case 2} \[ \varphi\colon 0\mapsto \begin{bmatrix} 1 & *\\ 0 & 0 \end{bmatrix}, \quad 1\mapsto \begin{bmatrix} 0 & *\\ 1 & 1 \end{bmatrix} \] In this case we will prove that for all odd $n\in\mathbb{N}$, the factor $0^{2^n-1}$ occurs along the direction $(1, (2^n-1)2^n)$. More precisely, we claim that for all odd $n\in\mathbb{N}$ and all $i\in[\![1,2^n-1]\!]$, we have $w_{i,i(2^n-1)2^n}=0$. First, notice that there are only $0$'s on the bottom line of the images $\varphi^m(0)$ for all $m\in\mathbb{N}$, namely, $\varphi^m(0)_{i,0}=0$ for all $m\in\mathbb{N}$ and all $i\in[\![0, 2^m-1]\!]$. Second, we use Lemma~\ref{lemma:TM1} which gives $w_{0,i(2^n-1)}=0$ for all odd $n\in\mathbb{N}$ and all $i\in[\![1,2^n]\!]$. By applying the power morphism $\varphi^n$, we get $w_{0,i(2^n-1)2^n}=0$ for all odd $n\in\mathbb{N}$ and all $i\in[\![1, 2^n]\!]$. 
Since each of the latter points is the bottom left corner of a block $\varphi^n(0)$, whose bottom line contains only $0$'s, we obtain that $w_{i,i(2^n-1)2^n}=0$ for every $i\in[\![1, 2^n-1]\!]$, as desired.

\medskip

{\bf Case 3.1}
\[
\varphi\colon 0\mapsto
\begin{bmatrix}
1 & 1\\
0 & *
\end{bmatrix},
\quad
1\mapsto
\begin{bmatrix}
0 & 0\\
1 & *
\end{bmatrix}
\]
Similarly to Case 1, we can show that the factor $10^{2^\ell-1}$ occurs along the direction $(p,q) =(2^\ell +1,2^{2\ell} (2^\ell-1)+2^\ell +1)$ with $\ell$ odd. Indeed, in this case, the Thue-Morse word or its complement appears in the first column and in the diagonal; see Figure~\ref{fig:case3-1}.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.4]
\tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0cm,inner sep=2pt]
\tikzstyle{every path}=[draw=black,line width = 0.5pt]
\draw (0,0) rectangle (6.4,6.4);
\node at (5,1.3){\large $\varphi^{2\ell}(1)$};
\node at (0.2,0.2){\tiny 1};
\node at (0.5,0.5){\tiny 0};
\node at (0.8,0.8){\tiny 0};
\node at (1.1,1.1){\tiny 1};
\draw[dotted] (1.3,1.3)--(1.8,1.8);
\node at (2,2){\tiny 1};
\draw[dotted] (2.7,2.7)--(3.1,3.1);
\node at (3.8,3.8){\tiny 1};
\draw[dotted] (4.5,4.5)--(4.9,4.9);
\node at (5.6,5.6){\tiny 1};
\draw[dotted] (5.9,5.9)--(6.3,6.3);
\foreach \y in {10,20,30} {
\draw (0,\y) rectangle (6.4,\y+6.4);
\node at (5,1.3+\y){\large $\varphi^{2\ell}(0)$};
\node at (0.2,0.2+\y){\tiny 0};
\node at (0.5,0.5+\y){\tiny 1};
\node at (0.8,0.8+\y){\tiny 1};
\node at (1.1,1.1+\y){\tiny 0};
\draw[dotted] (1.3,1.3+\y)--(1.8,1.8+\y);
\node at (2,2+\y){\tiny 0};
\draw[dotted] (2.7,2.7+\y)--(3.1,3.1+\y);
\node at (3.8,3.8+\y){\tiny 0};
\draw[dotted] (4.5,4.5+\y)--(4.9,\y+4.9);
\node at (5.6,\y+5.6){\tiny 0};
\draw[dotted] (5.9,5.9+\y)--(6.3,\y+6.3);
}
\foreach \y in {0,10,20} {
\node at (3,1.5+6.4+\y){$\vdots$};
}
\tikzstyle{every path}=[draw=red,line width = 1pt, ->]
\draw (-0.1,0.3) to [bend left = 5] node [left] {\color{red}$q$} (-0.1,12);
\draw (1.8,12.2) to [bend left = 5] node [left] {\color{red}$q$} (1.8,23.8);
\draw (3.6,24) to [bend left = 5] node [left] {\color{red}$q$} (3.6,35.6);
\draw (0.2,12) to [bend left = 20] node [above] {\color{red}$p$} (0.2+1.6,12);
\draw (2,23.8) to [bend left = 20] node [above] {\color{red}$p$} (2.0+1.6,23.8);
\draw (3.8,35.6) to [bend left = 20] node [above] {\color{red}$p$} (3.8+1.6,35.6);
\end{tikzpicture}
\end{center}
\caption{Structure of Case 3.1 morphisms with $(p,q) =(2^\ell +1,2^{2\ell} (2^\ell-1)+2^\ell +1)$ where $\ell$ is odd.}
\label{fig:case3-1}
\end{figure}

\medskip

{\bf Case 3.2}
\[
\varphi\colon 0\mapsto
\begin{bmatrix}
1 & *\\
0 & 1
\end{bmatrix},
\quad
1\mapsto
\begin{bmatrix}
0 & 1\\
1 & 1
\end{bmatrix}
\]
In this case we will prove that the word is not recurrent along the direction $(2,1)$. More precisely, we show that $(w_{2i,i})_{i\in\mathbb{N}}=10^\omega$. Clearly $w_{0,0}=1$. We prove $w_{2i,i}=0$ for all $i\ge 1$ by induction on $i$. The base case $w_{2,1}=0$ is easily verified. Now let $i>1$ and suppose that $w_{2i',i'}=0$ for all $1\le i'<i$. If $i$ is even, then $w_{2i,i}=\varphi(w_{i,\frac i2})_{0,0}=0$, where the last equality comes from the induction hypothesis with $i'=\frac i2$ and the fact that $\varphi(0)_{0,0}=0$. If $i$ is odd, then $w_{2i,i}=\varphi(w_{i,\frac{i-1}{2}})_{0,1}$. Remark that $w_{i,\frac{i-1}{2}}$ is an element in the right column of a $2\times 2$ block which is an image of $0$ or $1$.
The element $w_{i-1,\frac{i-1}{2}}$ (which is equal to $0$ by the induction hypothesis with $i'=\frac{i-1}{2}$) lies in the same block, immediately to the left of $w_{i,\frac{i-1}{2}}$. Due to the forms of $\varphi(0)$ and $\varphi(1)$, if the left element is $0$, then the right element in the same line is $1$. So, $w_{i,\frac{i-1}{2}}=1$, hence $w_{2i,i}=\varphi(1)_{0,1}=0$.

\medskip

{\bf Case 4}
We can suppose that
\[
\varphi\colon 0\mapsto
\begin{bmatrix}
* & *\\
0 & *
\end{bmatrix},
\quad
1\mapsto
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix}
\]
for otherwise $\varphi(1)=\left[\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right]$.

\medskip

In this case we will prove that for all $n\in\mathbb{N}$, the factor $0^{2^n-1}$ occurs along the direction $(2^n-1,1)$. More precisely, for all $n\in\mathbb{N}$, we have $w_{j(2^n-1),j}=0$ for every $j\in[\![1, 2^n-1]\!]$. First, an easy induction on $n$ shows that there are $0$'s just above the diagonal from upper left to lower right in the images $\varphi^n(1)$ for all $n\ge 1$, namely $\varphi^n(1)_{2^n-j,j}=0$ for all $n\ge 1$ and all $j\in[\![1,2^n-1]\!]$. For example, for $n=3$, we have
\[
\varphi^3(1)
=\varphi^2
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix}
=\begin{bmatrix}[cccc|cccc]
1 & \mathbf{0} & * & * & * & * & * & * \\
1 & 1 & \mathbf{0} & * & * & * & * & * \\
1 & 0 & 1 & \mathbf{0} & * & * & * & * \\
1 & 1 & 1 & 1 & \mathbf{0} & * & * & * \\
\hline
1 & \mathbf{0} & * & * & 1 & \mathbf{0} & * & * \\
1 & 1 & \mathbf{0} & * & 1 & 1 & \mathbf{0} & * \\
1 & \mathbf{0} & 1 & \mathbf{0} & 1 & 0 & 1 & \mathbf{0} \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}.
\]
Second, since $w_{i,0}=1$ for all $i\in\mathbb{N}$, we obtain that, for all $n,k\in\mathbb{N}$, the square factor of size $2^n$ occurring at position $(2^n k,0)$ is equal to $\varphi^n(1)$. Therefore, we have $w_{2^n k + 2^n-j,j}=0$ for all $n,k\in\mathbb{N}$ and $j\in\{1,\ldots,2^n-1\}$. The claim follows by considering the latter equality with $k=j-1$.
\end{proof}
The previous theorem gives a characterization of strong uniform recurrence along all directions for fixed points of bidimensional square binary morphisms of size $2$. For larger sizes of the morphism, we gave several conditions that are either necessary (Proposition~\ref{prop:suff-not-nec}) or sufficient (Propositions~\ref{proposition:main morphic} and~\ref{prop:SURDcor-not-necessary}). An open problem is to find a necessary and sufficient condition in general (see Section~\ref{sec:perspectives}).

We end this section by a small discussion on the SSURDO notion. First we provide an example of an aperiodic SSURDO word. Then, we give an example of a SURD word that is not SSURDO.
\begin{proposition}\label{prop:SSURDO}
Let $\varphi$ be the square binary morphism defined by
\[
\varphi\colon 0\mapsto
\begin{bmatrix}
1 & 0 &0 \\
1 & 0 & 1 \\
1 & 0 & 0
\end{bmatrix},
\quad
1\mapsto
\begin{bmatrix}
1 & 0 &1 \\
1 & 0 & 1 \\
1 & 0 & 0
\end{bmatrix}.
\]
The fixed point $\varphi^\omega(1)$ is SSURDO.
\end{proposition}
\begin{proof}
Let $w=\varphi^\omega(1)$ and $\mathbf{q}$ be any direction. By definition of $\varphi$, for every position $\mathbf{p}\not\equiv (2,2) \pmod{3}$, we have $w(\mathbf{p})=w(\mathbf{p}\bmod{3})$. It follows that $w(\mathbf{p})= w(\mathbf{p}+3\mathbf{q})$ for all such $\mathbf{p}$. Consider now a position $\mathbf{p}\equiv (2,2)\pmod{3}$. Since $\gcd(\mathbf{q})=1$, we have $\mathbf{p}+\mathbf{q}\not \equiv (2,2)\pmod{3}$ and $\mathbf{p}+2\mathbf{q}\not \equiv (2,2)\pmod{3}$.
So $w(\mathbf{p}+\mathbf{q})=w((\mathbf{p}+\mathbf{q})\bmod{3})$ and $w(\mathbf{p}+2\mathbf{q})=w((\mathbf{p}+2\mathbf{q})\bmod{3})$. By checking all the possible values modulo $3$ of $\mathbf{p}+\mathbf{q}$ and $\mathbf{p}+2\mathbf{q}$, we can verify that $w(\mathbf{p}+\mathbf{q})\neq w(\mathbf{p}+2\mathbf{q})$. So the letter $w(\mathbf{p})$ is first repeated along the direction $\mathbf{q}$ within distance two, and is then repeated every three letters.

Now consider a position $\mathbf{p}$ and a factor $f$ of size $\mathbf{s}$ occurring at position $\mathbf{p}$. Let $i=\lceil \max (\log_3 \mathbf{s})\rceil$. We will show that $f$ occurs along $\mathbf{q}$ with gaps bounded by $3^{i+1}$. To do so, we will consider a covering of the grid by the square factors $\varphi^i(0)$ and $\varphi^i(1)$ and study the position of the factor $f$ relatively to this covering; see Figure~\ref{fig:SSURDO}.
\begin{figure}[htbp]
\centering
\scalebox{0.7}{
\begin{tikzpicture}[scale=1]
\clip(-0.6,-0.6) rectangle (6.3,4.8);
\tikzstyle{every node}=[shape=rectangle,fill=none,draw=none,minimum size=0cm,inner sep=2pt]
\tikzstyle{every path}=[draw=black,line width = 0.5pt]
\draw[fill=gray!50] (1.8,3.5) rectangle (2.9,4.1);
\node at (2.35,3.8){$f$};
\draw[fill=gray!50] (4,1.3) rectangle (5.1,1.9);
\foreach \x in {0,0.5,1,1.5,2} {
\draw (3*\x,0) to (3*\x,6.5);
\draw (0,3*\x) to (7,3*\x);
}
\node[fill=gray!50] at (4.55,1.6){$f'$};
\draw[<->] (0,-0.2) to node [below] {$3^i$} (1.5,-0.2);
\draw[<->] (-0.2,0) to node [left] {$3^i$} (-0.2,1.5);
\tikzstyle{every path}=[draw=black,line width = 1pt, ->]
\draw (0,0) to node [left] {$\mathbf{p}$} (1.8,3.5);
\draw (0,0) to node [above] {$\mathbf{p'}$} (4,1.3);
\end{tikzpicture}
}
\caption{The factor $f$ at position $\mathbf{p}$ occurs ``completely'' inside a factor of the form $\varphi^i(0)$ or $\varphi^i(1)$, while the factor $f'$ (of the same size) at position $\mathbf{p'}$ does not.}
\label{fig:SSURDO}
\end{figure}
If $f$ occurs ``completely'' inside a factor $\varphi^i(0)$ or $\varphi^i(1)$, i.e.\ if
\[
\exists \mathbf{k}\in\mathbb{N}^2,\quad 3^i\mathbf{k} \le \mathbf{p}\le \mathbf{p}+\mathbf{s} < 3^i(\mathbf{k}+\mathbf{1}),
\]
then we use the previous observation about the occurrence of any letter every three positions along $\mathbf{q}$ to conclude that $f$ occurs infinitely often along $\mathbf{q}$ with gaps bounded by $3^{i+1}$.

Now, suppose that $f$ does not ``completely'' occur inside a factor $\varphi^i(0)$ or $\varphi^i(1)$, i.e.\ that
\[
\exists \mathbf{k}\in\mathbb{N}^2,\quad 3^i\mathbf{k} \le \mathbf{p} < 3^i(\mathbf{k}+\mathbf{1})
\]
but there exists $j\in\{1,2\}$ such that
\[
p_j+s_j\ge 3^i(k_j+1)
\]
where $\mathbf{p}=(p_1,p_2)$, $\mathbf{s}=(s_1,s_2)$ and $\mathbf{k}=(k_1,k_2)$. Then $\mathbf{p}+\mathbf{s} < 3^i(\mathbf{k}+\mathbf{2})$ by definition of $i$. Consider the factor $z\colon [\![0,3^i-1]\!]\times[\![0,3^i-1]\!]\to A$ of size $(3^i,3^i)$ at position $3^i\mathbf{k}$: for all $\mathbf{i}\in[\![0,3^i-1]\!]\times[\![0,3^i-1]\!]$, $z(\mathbf{i})=w(\mathbf{i}+3^i\mathbf{k})$. This factor $z$ corresponds exactly to a square factor of the grid, that is, either $\varphi^i(0)$ or $\varphi^i(1)$. Hence it occurs along $\mathbf{q}$ from the position $3^i\mathbf{k}$ infinitely many times with gaps bounded by $3^{i+1}$. Now, an easy induction shows that $\varphi^j(0)$ and $\varphi^j(1)$ coincide everywhere except in position $(3^j-1,3^j-1)$ for any $j\in\mathbb{N}$.
It follows that any factor of size $(3^i, 3^i)$ occurring at a position of the form $3^{i}(x,y)$ extends in a unique way to a factor of size $(2\cdot 3^{i}-1, 2\cdot 3^i-1)$ occurring at the same position. Applying this to the factor $z$, we deduce that the distances between consecutive occurrences of $f$ along $\mathbf{q}$ from the position $\mathbf{p}$ coincide with the distances between consecutive occurrences of $z$ along $\mathbf{q}$ from the position $3^i\mathbf{k}$. Hence the conclusion.
\end{proof}
Thanks to Theorem~\ref{thm:characterization}, we are able to show that SURD does not imply SSURDO, as illustrated by the following example.
\begin{example} \label{ex:SURD_isnot_SSURDO}
The SURD and SSURDO properties define two distinct classes of words. Consider the fixed point $w$ of the square binary morphism $\varphi$ defined by
\[
\varphi\colon 0\mapsto
\begin{bmatrix}
1 & 1 \\
1 & 1
\end{bmatrix},
\quad
1\mapsto
\begin{bmatrix}
0 & 0 \\
1 & 0
\end{bmatrix},
\]
which is SURD by Theorem~\ref{thm:characterization}. We can show that for the size $\mathbf{s}=(1,1)$, the direction $\mathbf{q}=(1,0)$ and the translations $\mathbf{p}_n=(2^{n+1}-1,2^n-1)$ with $n\in\mathbb{N}$, the words $\dir{(w^{(\mathbf{p}_n)})}{q}{s}$ begin with $\bar{a}a^{3\cdot 2^n}\bar{a}$ where $a\in\{0,1\}$, by observing that $\mathbf{p}_{n+1}=2\mathbf{p}_n+(1,1)$. This is illustrated in Figure~\ref{fig:SURD_isnot_SSURDO}.
\begin{figure}[htbp]
\centering
\scalebox{0.8}{
\begin{tikzpicture}[overlay,remember picture]
\tikzstyle{every path}=[draw=white,line width = 1pt]
\coordinate (a) at ( $ (pic cs:bloc0a) - (0.1,0.08) $);
\coordinate (b) at ( $ (pic cs:bloc0b) + (0.2,0.28) $);
\draw[fill=gray!50] (a) rectangle (b);
\coordinate (a) at ( $ (pic cs:bloc1a) - (0.1,0.08) $);
\coordinate (b) at ( $ (pic cs:bloc1b) + (0.2,0.28) $);
\draw[fill=gray!50] (a) rectangle (b);
\coordinate (a) at ( $ (pic cs:bloc2a) - (0.1,0.08) $);
\coordinate (b) at ( $ (pic cs:bloc2b) + (0.2,0.28) $);
\draw[fill=gray!50] (a) rectangle (b);
\coordinate (a) at ( $ (pic cs:bloc3a) - (0.1,0.08) $);
\coordinate (b) at ( $ (pic cs:bloc3b) + (0.2,0.28) $);
\draw[fill=gray!50] (a) rectangle (b);
\end{tikzpicture}
$\begin{array}{c|ccccccccccccccccccccccccccc}
7 &0&0 &0&0 &0&0&0&0 &1&1&1&1&1&1&1&\tikzmark{bloc3a}1\tikzmark{D2b} &0&0&0&0&0&0&0&0&0&0&0\tikzmark{bloc3b}\\
6 &1&0 &1&0 &1&0&1&0 &1&1&1&1&1&1&\tikzmark{D2a}1&1 &1&0&1&0&1&0&1&0&1&0&1\\
5 &0&0 &0&0 &0&0&0&0 &0&0&1&1&0&0&1&1 &0&0&0&0&0&0&0&0&0&0&0\\
4 &1&0 &1&0 &1&0&1&0 &1&0&1&1&1&0&1&1 &1&0&1&0&1&0&1&0&1&0&1\\
3 &1&1 &1&1 &0&0&0&\tikzmark{bloc2a}0\tikzmark{D1b}\tikzmark{O2} &1&1&1&1&1&1&1&1 &1&1&1&1&0\tikzmark{bloc2b}&0&0&0&1&1&1\\
2 &1&1 &1&1 &1&0&\tikzmark{D1a}1&0 &1&1&1&1&1&1&1&1 &1&1&1&1&1&0&1&0&1&1&1 \\
1 &0&0 &1&\tikzmark{bloc1a}1\tikzmark{D0b}\tikzmark{O1} &0&0&0&0 &0&0&1\tikzmark{bloc1b}&1&0&0&1&1 &0&0&1&1&0&0&0&0&0&0&1\\
0 &1&\tikzmark{bloc0a}0\tikzmark{O0} &\tikzmark{D0a}1&1 &1&0\tikzmark{bloc0b}&1&0 &1&0&1&1&1&0&1&1 &1&0&1&1&1&0&1&0&1&0&1\\\hline
&0&1&2& 3&4&5& 6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22&23&24&25&26
\end{array}$
\begin{tikzpicture}[overlay,remember picture]
\tikzstyle{every path}=[draw=red,line width = 1pt]
\coordinate (x) at ( $ (pic cs:O0) + (0.12,0.3)$);
\coordinate (y) at ( $ (pic cs:O0) - (0.12,0.15)$);
\draw (x.south) --++(0,-0.4) --++(-0.4,0)--++(0,0.4)--++(0.4,0)--++(0,-0.4);
\coordinate (a) at ( $ (pic cs:D0a) - (0.1,0.1) $);
\coordinate (b) at ( $ (pic cs:D0b) + (0.2,0.38) $);
\draw (a) rectangle (b);
\draw[->] (y) to [bend right = 60] (a);
\tikzstyle{every path}=[draw=blue,line width = 1pt]
\coordinate (x) at ( $ (pic cs:O1) + (0.12,0.3)$);
\draw (x.south) --++(0,-0.4) --++(-0.4,0)--++(0,0.4)--++(0.4,0)--++(0,-0.4);
\coordinate (a) at ( $ (pic cs:D1a) - (0.1,0.1) $);
\coordinate (b) at ( $ (pic cs:D1b) + (0.2,0.38) $);
\draw (a) rectangle (b);
\draw[->] (x) to [bend left = 10] (a);
\tikzstyle{every path}=[draw=purple,line width = 1pt]
\coordinate (x) at ( $ (pic cs:O2) + (0.12,0.3)$);
\draw (x.south) --++(0,-0.4) --++(-0.4,0)--++(0,0.4)--++(0.4,0)--++(0,-0.4);
\coordinate (a) at ( $ (pic cs:D2a) - (0.1,0.1) $);
\coordinate (b) at ( $ (pic cs:D2b) + (0.2,0.38) $);
\draw (a) rectangle (b);
\draw[->] (x) to [bend left = 10] (a);
\end{tikzpicture}
}
\caption{A prefix of the fixed point of $\varphi\colon 0\mapsto \left[\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right]$, $1\mapsto \left[\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right]$.}
\label{fig:SURD_isnot_SSURDO}
\end{figure}
It follows that $w$ is not SSURDO.
\end{example}

\section{Non-morphic bidimensional SURD words}
\label{sec:construction}
In this section we provide a construction of non-morphic bidimensional SURD words. To construct such a word $w\colon \mathbb{N}^2\to A$ (where $A$ is any alphabet of size at least $2$), we proceed recursively. The construction is illustrated in Figure~\ref{fig:non-morphic}.
\begin{figure}[htb]
\[
\begin{matrix}
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
a & \cdot & a & \cdot & a & \cdot & a & \cdots \\
b & c & \cdot & \cdot & b & c & \cdot & \cdots \\
a & d & a & \cdot & a & d & a & \cdots \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdots \\
a & \cdot & a & \cdot & a & \cdot & a & \cdots \\
b & c & \cdot & \cdot & b & c & \cdot & \cdots \\
a & d & a & \cdot & a & d & a & \cdots
\end{matrix}
\]
\caption{Construction of a non-morphic SURD bidimensional word.}
\label{fig:non-morphic}
\end{figure}

\noindent\textbf{Step 0}. Pick some $a\in A$ and for each $(i,j)\in\mathbb{N}^2$, put $w(2i,2j)=a$.

\medskip

\noindent\textbf{Step 1}. Fill anything you want in positions $(0,1)$, $(1,0)$ and $(1,1)$. For each $(i,j)\in\mathbb{N}^2$, put $w(4i,4j+1)=w(0,1)$, $w(4i+1,4j)=w(1,0)$, $w(4i+1,4j+1)=w(1,1)$. Note that the filled positions are doubly periodic with period $4$.

\medskip

\noindent\textbf{Step $n\ge 2$}. At step $n$, we have filled all the positions $(i,j)$ for $i,j<2^{n}$, and the positions with filled values are doubly periodic with period $2^{n+1}$. Let $S$ be the set of pairs $(k,\ell)$ with $k,\ell<2^{n+1}$ which have not yet been filled in. Fill anything you want in the positions from $S$. Now for each $(k,\ell)\in\mathbb{N}^2$ and each $(k',\ell')\in S$, define $w(2^{n+2}k+k', 2^{n+2}\ell+\ell')=w(k',\ell')$. Note that the filled positions are doubly periodic with period $2^{n+2}$.

\begin{proposition} \label{proposition:toeplitz}
The bidimensional infinite word $w$ defined by the construction above is SURD. More precisely, for all $\mathbf{s}\in\mathbb{N}^2$, the prefix of size $\mathbf{s}$ of $w$ occurs infinitely often along any direction with gaps at most $2^{\lceil \log_2(\max\mathbf{s})\rceil+1}$.
\end{proposition}
\begin{proof}
Let $p$ be the prefix of $w$ of size $\mathbf{s}$ and let $\mathbf{q}$ be a direction. We show that the square prefix $p'$ of size $(2^k, 2^k)$ with $k=\lceil\log_2(\max\mathbf{s})\rceil$ appears within any consecutive $2^{k+1}$ positions along $\mathbf{q}$, hence this is also true for $p$ itself.
By construction, at step $k$ we have filled all the positions $\mathbf{i}$ for $\mathbf{i}<(2^k,2^k)$, and the positions with filled values are doubly periodic with periods $(2^{k+1},0)$ and $(0,2^{k+1})$. Therefore the factor of size $(2^k,2^k)$ occurring at any position $2^{k+1}\ell\,\mathbf{q}$, $\ell\in\mathbb{N}$, in $w$ is equal to $p'$. The claim follows.
\end{proof}
Observe that the morphic words satisfying Corollary~\ref{cor:1} for $s=2$ can be obtained by this construction. This construction can be generalized for any integer $s\ge 2$ instead of $2$. Moreover, at each step we can choose as a period any multiple of a previous period.
\begin{proposition} \label{proposition:toeplitz non morphic}
Among the bidimensional infinite words obtained by the construction above, there are uncountably many words which are not morphic.
\end{proposition}
\begin{proof}
The construction provides uncountably many bidimensional infinite words. However, there exist only countably many morphic words.
\end{proof}

\section{Perspectives}
\label{sec:perspectives}
There remain many open questions related to the new notions of directional recurrence introduced in this paper. For example, we would like to generalize the characterization given by Theorem~\ref{thm:characterization} to any morphism size.
\begin{question}
Find a characterization of strong uniform recurrence along all directions for bidimensional square binary morphisms of size bigger than $2$.
\end{question}
Another question concerns the missing relation between different notions of recurrence indicated in Figure~\ref{fig:links_recurrence}.
\begin{question}
Prove or disprove: Strong uniform recurrence along all directions implies uniform recurrence.
\end{question}
The original motivation to introduce new notions of recurrence comes from the study of return words. In the unidimensional case, a \emph{return word to $u$} in an infinite word $w$ is a factor starting at an occurrence of $u$ in $w$ and ending right before the next occurrence of $u$ in $w$. For instance, the set of return words to $u=011$ in the Thue-Morse word is equal to $\{011010, 011001, 01101001, 0110\}$. When the infinite word $w$ is uniformly recurrent, there are finitely many return words. By coding each return word to $u$ by its order of occurrence in $w$, one obtains the \emph{derivative of $w$ with respect to the prefix $u$}. Pursuing our example, the derivative of the Thue-Morse word with respect to $011$ begins with $12341243123431241234124$. Using these derivatives, Durand obtained in 1998 the following characterization of primitive pure morphic words, i.e.\ fixed points of morphisms having a primitive incidence matrix.
\begin{theorem}[Durand~\cite{Durand--1998}]
A word is primitive pure morphic if and only if the number of its derivatives is finite.
\end{theorem}
In dimensions higher than one, it is not clear how to generalize the notion of primitivity of a morphism in order to study the uniform recurrence along directions (see~Remark~\ref{rem:primitivity}). A generalization of Durand's result to a bidimensional setting was investigated by Priebe~\cite{Priebe--2000}. In that generalization, words are replaced by tilings, the primitive substitutive property by self-similarity and the notion of derived tilings involves Voronoï cells. Recall that a Voronoï tessellation is a partition of the plane into regions, called Voronoï cells, based on the distance to a set of given points, called \emph{seeds} \cite{Voronoi--1907}.
The Voronoï cell of a seed consists of all the points in the plane that are closer to it than to any other seed. Priebe aimed towards a characterization of self-similar tilings in terms of derived Voronoï tessellations and proved the following result.
\begin{theorem}[Priebe \cite{Priebe--2000}]
Let $\mathcal{T}$ be a tiling of the plane.
\begin{itemize}
\item If $\mathcal{T}$ is self-similar, then the number of its different derived Voronoï tilings is finite (up to similarity).
\item If the number of its different derived Voronoï tilings is finite (up to similarity), then $\mathcal{T}$ is pseudo-self-similar.
\end{itemize}
\end{theorem}
The bidimensional words we are considering in this paper are a particular case of tilings (see for instance Figure~\ref{fig:voronoi}, which has been reproduced from \cite{Priebe--2000}) where the letters correspond to colored unit squares ($1$ for black and $0$ for white).
\begin{figure}[htbp]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.215]{tiling.png}&
\includegraphics[scale=0.2]{locator_set.png}&
\includegraphics[scale=0.2]{voronoi_tiling.png}\\
(a) & (b) & (c)
\end{tabular}
\caption{A tiling (a), the set of positions where the factor $u\ =\ $\begin{tikzpicture}[baseline] \tikzstyle{1}=[shape=rectangle,fill=black,draw=black,minimum size=3,inner sep=4pt] \tikzstyle{0}=[shape=rectangle,fill=white,draw=black,minimum size=3,inner sep=4pt] \node[0] at (1.25,-0.03) {}; \node[1] at (1.25,0.25) {}; \node[0] at (1.54,0.25) {}; \node[1] at (1.54,-0.03) {}; \end{tikzpicture} occurs (b), and the associated Voronoï tessellation (c).\label{fig:voronoi}}
\end{figure}
The main drawback of this notion of derived tilings is that, starting from a bidimensional word, we do not obtain another bidimensional word in general (as illustrated in Figure~\ref{fig:voronoi}). Hence the following questions are natural.
\begin{question} \label{qu:der1}
Find a differential operator for $d$-dimensional words with respect to their prefixes, that is, an operator
\[
D\colon (A^{\mathbb{N}^d},\mathbb{N}^d)\to B^{\mathbb{N}^d}, (w,\mathbf{s})\mapsto D_\mathbf{s}(w)
\]
where $A$ and $B$ are potentially distinct alphabets and $D_\mathbf{s}(w)$ designates the \emph{derivative of $w$ with respect to its prefix of size $\mathbf{s}$}, such that the finiteness of the set
\[
\{D_\mathbf{s}(w)\colon \mathbf{s}\in\mathbb{N}^d\}
\]
would provide us with some nice property of the $d$-dimensional infinite word $w$ (such as being primitive substitutive, if one thinks of Durand's theorem).
\end{question}
Here is a variant of the previous question.
\begin{question} \label{qu:der2}
Find a differential operator for $d$-dimensional words with respect to their prefixes, that is, an operator
\[
D\colon (A^{\mathbb{N}^d},\mathbb{N}^d)\to B^{\mathbb{N}^d}, (w,\mathbf{s})\mapsto D_\mathbf{s}(w)
\]
where $A$ and $B$ are potentially distinct alphabets and $D_\mathbf{s}(w)$ designates the \emph{derivative of $w$ with respect to its prefix of size $\mathbf{s}$}, such that for all $w\in A^{\mathbb{N}^d}$ and all $\mathbf{s},\mathbf{t}\in\mathbb{N}^d$ we have
\[
D_\mathbf{s}(D_\mathbf{t}(w))=D_\mathbf{u}(w)
\]
for some well-chosen size $\mathbf{u}\in\mathbb{N}^d$.
\end{question}
The notion of SURD words introduced in the present paper provides us with a way of deriving $d$-dimensional words, which generalizes the unidimensional derivatives. The idea is as follows. Let $w\colon\mathbb{N}^d\to A$ be a SURD $d$-dimensional word and let $\mathbf{s}\in\mathbb{N}^d$.
Being SURD implies that there exists an integer $r\ge 1$ such that for all directions $\mathbf{q}$, there are at most $r$ distinct return words to the letter $w_{\mathbf{q},\mathbf{s}}(0)$ in the unidimensional word $w_{\mathbf{q},\mathbf{s}}$. We define the {\em derivative of $w$ with respect to the prefix of size $\mathbf{s}$} to be the $d$-dimensional word $D_\mathbf{s}(w)\colon\mathbb{N}^d\to [\![0,r-1]\!]$ such that for every direction $\mathbf{q}$, $((D_\mathbf{s}w)_{\ell\mathbf{q}})_{\ell\in\mathbb{N}}$ is the unidimensional derivative of $w_{\mathbf{q},\mathbf{s}}$ with respect to its first letter, obtained by coding the return words to $w_{\mathbf{q},\mathbf{s}}(0)$ in order of appearance from $0$ to $r-1$. For example, if $\varphi^\omega(1)$ is the fixed point of the morphism given in Example~\ref{ex:SURD_isnot_SSURDO} and depicted in Figure~\ref{fig:SURD_isnot_SSURDO}, then its derivative $D_{(1,2)}(\varphi^\omega(1))$ with respect to the prefix of size $(1,2)$ is depicted in Figure~\ref{fig:der1}.
\begin{figure}[htb]
\centering
\scalebox{0.8}{
$
\begin{array}{c|ccccccccccccccccccccccccccc}
7 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
6 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 2 & 1 & 2 & 1 & 1 & 1 & 2 & 3 & 2 & 1 & 1 & 1 & 1 \\
5 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 4 & 1 \\
4 & 0 & 1 & 2 & 0 & 4 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 1 & 0 & 1 & 2 & 0 & 3 & 1 & 2 & 0 & 2 & 1 & 2 \\
3 & 0 & 0 & 0 & 3 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 3 & 1 & 0 & 3 & 0 & 1 & 2 & 1 & 1 & 0 & 0 & 0 \\
2 & 1 & 1 & 2 & 0 & 2 & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 1 & 1 & 0 & 2 & 1 & 2 & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 2 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\
\hline
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26
\end{array}
$
}
\caption{The derivative of the fixed point $\varphi^\omega(1)$ of Figure~\ref{fig:SURD_isnot_SSURDO} w.r.t.\ the prefix of size $(1,2)$ when coding the return words in order of appearance along every direction.}
\label{fig:der1}
\end{figure}
We know from Corollary~\ref{cor:1} that return words to the prefix of size $(1,2)$ in $(\varphi^\omega(1))_{\mathbf{q},(1,2)}$ have length at most $4$ for any direction $\mathbf{q}$. Therefore, there could be at most $4^3=64$ such return words.
For instance, the letters on the diagonal correspond to the unidimensional derivative of $\varphi^\omega(1)_{(1,1),(1,2)}$ with respect to its first letter $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]$:
\[
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]}_{0}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]}_{1}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]}_{2}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]}_{3}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]}_{4}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]}_{0}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]}_{1}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]}_{0}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]}_{1}
\underbrace{
\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]
\left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]}_{2}
\cdots
\]
In the previous definition, we chose to code the return words with respect to their order of appearance in $w_{\mathbf{q},\mathbf{s}}$ for each $\mathbf{q}$. This means that two occurrences of the same letter $i\in[\![0,r-1]\!]$, one at a position $\ell\mathbf{q}$ and the other at a position $\ell'\mathbf{q}'$ for different directions $\mathbf{q}$ and $\mathbf{q}'$, might represent different return words. For example, in Figure~\ref{fig:der1}, the letter $1$ at position $(0,1)$ corresponds to the return word $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]$ but the letter $1$ at position $(1,0)$ corresponds to the return word $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]$. An alternative definition of derivatives would be to code the return words uniformly, i.e.\ independently of the considered direction $\mathbf{q}$. The derivative of $\varphi^\omega(1)$ with respect to the prefix of size $(1,2)$ obtained by following the latter convention is depicted in Figure~\ref{fig:der2}. The codes of the used return words are given in Table~\ref{tab:codes}. Note that the letter at position $(0,0)$ in the derivative is not well defined since, in general, the first return words to the prefix of size $\mathbf{s}$ along two different directions $\mathbf{q}$ and $\mathbf{q}'$ are not the same.
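For readers who wish to reproduce such computations, here is a Python sketch (our own illustration only; all names are ours, and we follow the convention of positions $(x,y)$ with the origin at the bottom left). It expands a finite prefix of the fixed point of the morphism of Example~\ref{ex:SURD_isnot_SSURDO}, reads the blocks of size $(1,2)$ along a direction, and cuts the resulting unidimensional word into return words to its first letter.
\begin{verbatim}
# Expand a prefix of the fixed point of the morphism of Example
# ex:SURD_isnot_SSURDO and compute return words to the prefix of size
# (1, 2) along a direction q.  Positions are (x, y), origin bottom left.

PHI = {  # PHI[a][(dx, dy)] = letter at offset (dx, dy) in the image of a
    0: {(0, 0): 1, (1, 0): 1, (0, 1): 1, (1, 1): 1},
    1: {(0, 0): 1, (1, 0): 0, (0, 1): 0, (1, 1): 0},
}

def fixed_point(n):
    """The square prefix phi^n(1), as a dict position -> letter."""
    w = {(0, 0): 1}
    for _ in range(n):
        w = {(2 * x + dx, 2 * y + dy): PHI[a][(dx, dy)]
             for (x, y), a in w.items()
             for (dx, dy) in ((0, 0), (1, 0), (0, 1), (1, 1))}
    return w

def blocks_along(w, q, s, length):
    """The blocks of size s at positions l*q, for l = 0, ..., length-1."""
    (qx, qy), (sx, sy) = q, s
    return [tuple(w[(l * qx + dx, l * qy + dy)]
                  for dx in range(sx) for dy in range(sy))
            for l in range(length)]

def return_words(seq):
    """Cut seq into return words to its first letter."""
    cuts = [i for i, b in enumerate(seq) if b == seq[0]]
    return [tuple(seq[i:j]) for i, j in zip(cuts, cuts[1:])]

w = fixed_point(8)                           # a 256 x 256 prefix
diag = blocks_along(w, (1, 1), (1, 2), 100)  # blocks along the diagonal
print(return_words(diag)[:5])
\end{verbatim}
Coding the return words produced this way in order of appearance, or with a fixed direction-independent table, yields the two notions of derivative discussed above.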
\begin{figure}[htb] \centering \scalebox{0.8}{ $ \begin{array}{c|ccccccccccccccccccccccccccc} 7 & 1 & 9 & 7 & 10 & 11 & 3 & 7 & 2 & 1 & 3 & 1 & 4 & 1 & 3 & 6 & 1 & 12 & 9 & 12 & 9 & 11 & 3 & 7 & 3 & 12 & 5 & 12 \\ 6 & 1 & 4 & 6 & 6 & 6 & 5 & 3 & 7 & 12 & 4 & 4 & 6 & 1 & 4 & 6 & 6 & 12 & 4 & 3 & 4 & 12 & 4 & 3 & 5 & 1 & 4 & 6 \\ 5 & 0 & 4 & 6 & 3 & 1 & 2 & 0 & 8 & 0 & 0 & 1 & 3 & 1 & 3 & 6 & 6 & 1 & 4 & 6 & 3 & 11 & 1 & 1 & 3 & 1 & 9 & 6 \\ 4 & 0 & 4 & 7 & 9 & 0 & 4 & 6 & 9 & 1 & 5 & 6 & 4 & 4 & 4 & 6 & 4 & 0 & 4 & 7 & 9 & 0 & 4 & 7 & 9 & 6 & 4 & 10 \\ 3 & 0 & 3 & 6 & 1 & 11 & 4 & 6 & 10 & 1 & 6 & 1 & 1 & 0 & 3 & 6 & 0 & 1 & 3 & 7 & 3 & 12 & 3 & 6 & 16 & 0 & 3 & 6 \\ 2 & 1 & 4 & 4 & 6 & 7 & 6 & 3 & 7 & 1 & 4 & 1 & 4 & 6 & 4 & 8 & 6 & 6 & 4 & 1 & 6 & 11 & 6 & 3 & 7 & 0 & 6 & 5 \\ 1 & 1 & 3 & 1 & 3 & 1 & 4 & 1 & 9 & 1 & 3 & 1 & 3 & 11 & 13 & 6 & 3 & 1 & 3 & 1 & 3 & 1 & 1 & 6 & 9 & 1 & 4 & 6 \\ 0 & ? & 4 & 4 & 5 & 5 & 5 & 4 & 4 & 5 & 4 & 4 & 5 & 4 & 4 & 5 & 5 & 5 & 4 & 4 & 5 & 5 & 5 & 4 & 4 & 5 & 5 & 5 \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 \\ \end{array} $ } \caption{The derivative of the fixed point $\varphi^\omega(1)$ of Figure~\ref{fig:SURD_isnot_SSURDO} w.r.t.\ the prefix of size $(1,2)$ when coding the return words uniformly.} \label{fig:der2} \end{figure} \begin{table} \centering \begin{tabular}{c|c} return word & code \\ \hline $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$& $0$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]$ & $1$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$ & $2$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right]$ & $3$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]$ & $4$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$ & $5$ \\ \end{tabular} $\quad$ \begin{tabular}{c|c} return word & code \\ \hline $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$ & $6$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$ & $7$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]$ & $8$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]$ & $9$ \\ $\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 
0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]$ & $10$ \\
$\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]$ & $11$ \\
\end{tabular}
$\quad$
\begin{tabular}{c|c}
return word & code \\
\hline
$\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$ & $12$ \\
$\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]$ & $13$ \\
$\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right]$ & $14$ \\
$\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right]$& $15$ \\
$\left[\begin{smallmatrix} 0 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 0 \\ 0 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 1 \\ \end{smallmatrix}\right] \left[\begin{smallmatrix} 1 \\ 0 \\ \end{smallmatrix}\right]$ & $16$ \\
\\
\end{tabular}
\caption{Codes of return words to the prefix $\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$ occurring in Figure~\ref{fig:der2}.\label{tab:codes}}
\end{table}
We do not know whether one of these definitions is a good candidate for answering Question~\ref{qu:der1}. Note that the second definition does not allow us to derive twice (because of the unknown letter at position $\mathbf{0}$), and hence cannot be a good candidate for answering Question~\ref{qu:der2}. In particular, in order to be able to derive twice, the SURD property must be preserved under differentiation.
\begin{question}
Does our first definition of multidimensional derivatives give rise to SURD words when starting from a SURD word?
\end{question}
Another aspect we did not treat in the paper is the symbolic dynamical one. It is well known that in the unidimensional case a word is uniformly recurrent if and only if the corresponding dynamical system is minimal (see e.g.\ \cite{Ferenczi--Monteil--2010}).
\begin{question}
What kind of dynamical properties are reflected by the modifications of the notion of uniform recurrence introduced in the paper?
\end{question}

\section{Acknowledgements}
We are grateful to Mathieu Sablik for inspiring discussions. The second author is partially supported by the Russian Foundation for Basic Research (grant 20-01-00488) and by the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”. The last author acknowledges partial funding via a Welcome Grant of the Universit\'e de Li\`ege.
{ "timestamp": "2020-06-18T02:17:23", "yymm": "1907", "arxiv_id": "1907.00192", "language": "en", "url": "https://arxiv.org/abs/1907.00192", "abstract": "In this paper we introduce and study new notions of uniform recurrence in multidimensional words. A $d$-dimensional word is called \\emph{uniformly recurrent} if for all $(s_1,\\ldots,s_d)\\in\\mathbb{N}^d$ there exists $n\\in\\mathbb{N}$ such that each block of size $(n,\\ldots,n)$ contains the prefix of size $(s_1,\\ldots,s_d)$. We are interested in a modification of this property. Namely, we ask that for each rational direction $(q_1,\\ldots,q_d)$, each rectangular prefix occurs along this direction in positions $\\ell(q_1,\\ldots,q_d)$ with bounded gaps. Such words are called \\emph{uniformly recurrent along all directions}. We provide several constructions of multidimensional words satisfying this condition, and more generally, a series of four increasingly stronger conditions. In particular, we study the uniform recurrence along directions of multidimentional rotation words and of fixed points of square morphisms.", "subjects": "Combinatorics (math.CO)", "title": "Recurrence along directions in multidimensional words", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713874477227, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7080104823049232 }
https://arxiv.org/abs/1503.01892
On exact linesearch quasi-Newton methods for minimizing a quadratic function
This paper concerns exact linesearch quasi-Newton methods for minimizing a quadratic function whose Hessian is positive definite. We show that by interpreting the method of conjugate gradients as a particular exact linesearch quasi-Newton method, necessary and sufficient conditions can be given for an exact linesearch quasi-Newton method to generate a search direction which is parallel to that of the method of conjugate gradients. We also analyze update matrices and give a complete description of the rank-one update matrices that give search directions parallel to those of the method of conjugate gradients. In particular, we characterize the family of such symmetric rank-one update matrices that preserve positive definiteness of the quasi-Newton matrix. This is in contrast to the classical symmetric rank-one update, where there is no freedom in choosing the matrix, and positive definiteness cannot be preserved. The analysis is extended to search directions that are parallel to those of the preconditioned method of conjugate gradients in a straightforward manner.
\section{Introduction}
In this paper we study the connection between the method of conjugate gradients, CG, and quasi-Newton methods, QN, on quadratic problems of the form
\begin{equation}\label{qp}
\min_{x \in \mathbb{R}^n}q(x)=\min_{x \in \mathbb{R}^n}\frac{1}{2}x^THx+c^Tx,\tag{QP}
\end{equation}
where $H=H^T \succ 0$ and $c \neq 0$. Solving \eqref{qp} is equivalent to solving a symmetric system of linear equations $Hx+c=0$. It is well-known that, on \eqref{qp}, QN using an update scheme in the one-parameter Broyden family generates identical iterates to those generated by CG, see, e.g., \cite{fletcherpractical, huang, Nazareth}.\footnote{In \cite{dixon}, Dixon shows that on any smooth function using perfect linesearch, the one-parameter Broyden family generates parallel search directions. Note that the connection with CG does not hold for general unconstrained problems.} In this paper we give necessary and sufficient conditions on a QN-method for this equivalence with CG on \eqref{qp}. These conditions admit a set of QN-schemes, which includes but is not limited to the one-parameter Broyden family, for which the search directions generated are parallel to those generated by CG. By the use of exact linesearch, parallel search directions give identical iterates. Further, we show that there is an infinite number of symmetric rank-one update schemes in this set of QN-schemes that are equivalent to CG.

Our focus in this paper is on quadratic programming. Besides being important in their own right, problems of the form \eqref{qp} often appear as subproblems in methods for solving general unconstrained problems. See \cite{recentadv} for a survey of methods for general unconstrained problems.

In Section~\ref{background}, we make a brief introduction to CG and QN as well as state some of our previous results on the topic. In Section~\ref{Results}, we present our results, which include necessary and sufficient conditions on QN such that CG and QN generate parallel search directions. In Section~\ref{Conclusion} we make some concluding remarks.

\section{Background}\label{background}
For solving \eqref{qp}, we consider linesearch methods of the following form. Given an initial guess $x_0$, let the initial gradient be given by $g_0=Hx_0+c$ and the initial search direction $p_0=-g_0$, the direction of steepest descent at $x_0$. In each iteration $k$, these three quantities are updated using a search direction $p_k$. The $x$-iterate is updated as
\begin{equation}\label{updatex}
x_{k+1}=x_k+\theta_kp_k,
\end{equation}
where $\theta_k$ is the step-length along $p_k$. On \eqref{qp}, it is natural to use exact linesearch, i.e., given a search direction $p_k$ the optimal step-length along $p_k$ is given by
\begin{equation}\label{exactlinesearch}
\theta_k=-\frac{p_k^Tg_k}{p_k^THp_k}.
\end{equation}
Further, the gradient is updated as
\begin{equation}\label{updateg}
g_{k+1}=g_k+\theta_kHp_k,
\end{equation}
which satisfies $g_{k+1}=Hx_{k+1}+c$. The iteration process is terminated at iteration $r$, with $r \le n$, when $g_{r}=0$, with $x_{r}$ the optimal solution to \eqref{qp}, or equivalently the unique solution of $Hx+c=0$. Since $H=H^T$, it can be shown that the method has the descent property $q(x_{k+1})<q(x_k)$ if $p_k^Tg_k \neq 0$.

Different linesearch methods arise depending on how the search direction $p_k$ is obtained in each iteration $k$. Next we define the method of conjugate gradients, CG, of Hestenes and Stiefel \cite{HestenesStiefel}.
\begin{definition}[The method of conjugate gradients (CG)]\label{def-cg}
For a given $x_0$, \emph{the method of conjugate gradients}, CG, is given by \eqref{updatex}, \eqref{exactlinesearch}, \eqref{updateg} with $p_k=p_k^{CG}$ given by
\begin{equation}\label{pcg}
p_0^{CG}=-g_0, \quad p_k^{CG} =-g_k+\frac{g_k^Tg_k}{g_{k-1}^Tg_{k-1}}p_{k-1}^{CG}, \quad k=1,\dots,r-1.
\end{equation}
\end{definition}
For CG it holds that, for all $k$,
\begin{equation}\label{orth}
g_k^Tp_i^{CG}=0, \quad i=0, \dots, k-1,
\end{equation}
and $g_k^Tg_i=0$, for $i=0, \dots, k-1$. In addition, it holds that $\{p_k^{CG}\}_{k=0}^{r-1}$ are mutually conjugate with respect to $H$. For an introduction to CG, see, e.g., \cite{cgwopain, saad, demmel}. In \cite{fletcherreeves}, CG is extended to general unconstrained problems.

Next we define what we will refer to as a quasi-Newton method.
\begin{definition}[Quasi-Newton method (QN)]\label{def-qn}
For a given $x_0$, a \emph{quasi-Newton method}, QN, is given by \eqref{updatex}, \eqref{exactlinesearch}, \eqref{updateg} with the search direction $p_k$ obtained by solving
\begin{equation}\label{pQN}
B_kp_k=-g_k, \quad k=0,\dots,r-1,
\end{equation}
with $B_0=I$. It is assumed that $B_k=B_k^T$ for $k=0, \dots, r-1$.
\end{definition}
Different quasi-Newton methods arise depending on the choice of $B_k$ in each iteration $k$. Throughout the paper we make the assumption that $B_k$ is nonsingular for all $k$. Our definition of QN assumes $B_k=B_k^T$ for all $k$; it is also possible to consider QN for a non-symmetric $B_k$, see, e.g., \cite{huang}. Quasi-Newton methods were first suggested by Davidon, see \cite{davidon}, and later modified and formalized by Fletcher and Powell, see \cite{fletcherpowell}. For an introduction to QN-methods, see, e.g., \cite[Chapter 4]{practicalopt}.

Note that the index $r$ in Definition~\ref{def-cg} is in general not the same as $r$ in Definition~\ref{def-qn} for an arbitrary choice of $B_k$. However, if conditions are imposed on $B_k$ such that $p_k$ and $p_k^{CG}$ are parallel for all $k$, then the termination index will be the same. In fact, if $p_k$ is parallel to $p_k^{CG}$ for all $k$, then identical $x$-iterates would be obtained for CG and QN by the use of exact linesearch. Since $B_k$ determines $p_k$ as in Definition~\ref{def-qn}, we are interested in answering the following question: what are the conditions on $B_k$ such that $p_k$ is parallel to $p_k^{CG}$, i.e., $p_k=\delta_kp_k^{CG}$ for some scalar $\delta_k \neq 0$?

In this paper, we will derive necessary and sufficient conditions on $B_k$ and show that these conditions are such that the set of QN-schemes admitted is strictly larger than the one-parameter Broyden family. In \cite{ForsgrenOdland}, we derived such conditions based on a sufficient condition to obtain mutually conjugate search directions. In the next section we briefly state some of our previous results.

\subsection{Previous results on sufficient conditions for equivalence of CG and QN}
In \cite{ForsgrenOdland}, we base the derivation of the conditions on $B_k$ in each iteration $k$ on a sufficient condition to obtain a search direction $p_k$ conjugate to $\{p_i\}_{i=0}^{k-1}$ with respect to $H$. In each iteration $k$, this sufficient condition on $B_k$ is given by
\begin{equation}\label{suff}
B_kp_i=H p_i, \quad i =0, \dots, k-1.
\end{equation}
Consider $B_k$ to be obtained by adding an update matrix $U_k$ to $B_{k-1}$, i.e.,
\begin{equation}\label{Bupdate}
B_k=B_{k-1}+U_k.
\end{equation}
We show that, based on \eqref{suff}, sufficient conditions on $U_k$ such that $p_k=\delta_kp_k^{CG}$, for some $\delta_k \neq 0$, are
\begin{subequations}\label{suffcondU}
\begin{eqnarray}
\mathcal{R}(U_k) &\subseteq &span\{g_{k-1}, g_k\}, \label{spanU} \\
U_{k}p_{k-1}&=&Hp_{k-1}-B_{k-1}p_{k-1}, \label{secantcond}
\end{eqnarray}
\end{subequations}
see \cite[Proposition 3.3 and Theorem 3.6]{ForsgrenOdland}. Note that \eqref{secantcond} is usually referred to as the secant condition, or the quasi-Newton condition. In \cite{ForsgrenOdland} we found that these sufficient conditions on $U_k$ are equivalent to the update scheme belonging to the one-parameter Broyden family \cite{broyden1967}. In effect,
\begin{align}
B_k^{\phi_k}& = B_{k-1}+\frac{1}{p_{k-1}^T Hp_{k-1}}(Hp_{k-1})(Hp_{k-1})^T \nonumber \\
\label{broyden}
&-\frac{1}{p_{k-1}^T B_{k-1}p_{k-1}}(B_{k-1}p_{k-1})(B_{k-1}p_{k-1})^T+\phi_k p_{k-1}^TB_{k-1}p_{k-1} w_kw_k^T,
\end{align}
with
$$
w_k=\frac{1}{p_{k-1}^T Hp_{k-1}}Hp_{k-1}-\frac{1}{p_{k-1}^T B_{k-1}p_{k-1}}B_{k-1}p_{k-1},
$$
and where $\phi_k$ is a free parameter, known as the \emph{Broyden parameter}. For example, $\phi_k=0$ gives the Broyden-Fletcher-Goldfarb-Shanno update, BFGS, see, e.g., \cite[Chapter 8]{nocedalwright}. It is well-known that the one-parameter Broyden family has the property of hereditary symmetry, i.e.\ if $B_{k-1}=B_{k-1}^T$ then $B_k=B_k^T$.

The assumptions made on previous iterations are summarized below.
\begin{assumption}\label{assum-B-}
In iteration $k$ of QN, it is assumed that
\begin{itemize}
\item[i)] $p_{i}=\delta_{i}p_{i}^{CG}$, for some $\delta_{i} \neq 0$, $i=0,\dots,k-1$.
\item[ii)] $B_{k-1}$ may be expressed as $B_{k-1}=I+V$, with $\mathcal{R}(V) \subseteq span\{g_0, \dots, g_{k-1}\}=\mathcal{K}_k(c,H)$.\footnote{$\mathcal{R}(V) \subseteq span\{g_0, \dots, g_{k-1}\}=\mathcal{K}_k(c,H)$ implies that $B_{k-1}g_k=g_k$.}
\end{itemize}
\end{assumption}
One of the main results in \cite{ForsgrenOdland} was the abovementioned sufficient conditions on $U_k$ in \eqref{suffcondU}. In this paper we will show that there are conditions on $B_k$, and hence on $U_k$, not based on \eqref{suff}, that are less restrictive and therefore admit a strictly larger set of update schemes than the one-parameter Broyden family. In fact, it is well-known that there are update schemes for which $p_k = \delta_k p_k^{CG}$ that are not included in the one-parameter Broyden family. For example, a memory-less BFGS update scheme, which entails \eqref{broyden} with $\phi_k=0$ and all $B_{k-1}$ replaced by $I$, gives $p_k = p_k^{CG}$, i.e., $\delta_k=1$, see e.g.\ \cite{shanno78}. It is therefore clear that basing the derivation of conditions on $B_k$ on the sufficient condition \eqref{suff} gives only a subset of all update schemes that satisfy $p_k = \delta_k p_k^{CG}$. In this paper we do not base our derivation on \eqref{suff} and in the next section we will state a framework that admits \emph{necessary and sufficient} conditions on $B_k$ such that $p_k = \delta_k p_k^{CG}$. In particular, we can obtain necessary and sufficient conditions on $U_k$ and show that neither \eqref{spanU} nor the secant condition \eqref{secantcond} is a necessary condition to get $p_k = \delta_k p_k^{CG}$. In addition, we will show that there is an infinite number of symmetric rank-one update schemes for QN that give equivalence with CG.
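Before turning to the general framework, it may be instructive to observe the behaviour described above numerically. The following Python sketch (our own illustration, assuming NumPy; it is not part of the analysis) runs QN with the BFGS update, i.e.\ \eqref{broyden} with $\phi_k=0$, using exact linesearch on a random instance of \eqref{qp}, and prints the cosine of the angle between each QN search direction and the corresponding CG direction.
\begin{verbatim}
import numpy as np

# QN with the BFGS update (Broyden parameter phi_k = 0) and exact
# linesearch on a random instance of (QP); compare each quasi-Newton
# direction with the CG direction.  Illustration only.

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)          # H symmetric positive definite
c = rng.standard_normal(n)

x = np.zeros(n)
g = H @ x + c                        # g_0
B = np.eye(n)                        # B_0 = I
p_cg = -g                            # p_0^CG

for k in range(n):
    p = np.linalg.solve(B, -g)       # B_k p_k = -g_k
    print(k, p @ p_cg / (np.linalg.norm(p) * np.linalg.norm(p_cg)))
    theta = -(p @ g) / (p @ H @ p)   # exact linesearch
    x = x + theta * p
    g_new = H @ x + c
    if np.linalg.norm(g_new) < 1e-12:
        break
    Hp = H @ p                       # BFGS update of B_k
    B = (B + np.outer(Hp, Hp) / (p @ Hp)
           - np.outer(B @ p, B @ p) / (p @ B @ p))
    # with exact linesearch the QN and CG iterates coincide, so the
    # QN gradients may drive the CG recursion
    p_cg = -g_new + (g_new @ g_new) / (g @ g) * p_cg
    g = g_new
\end{verbatim}
All printed cosines equal $1$ up to rounding, reflecting that the two methods generate parallel (here, in fact, identical) search directions on \eqref{qp}.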
\section{Results on the equivalence of CG and QN on \eqref{qp}}\label{Results}
\subsection{Necessary and sufficient conditions for equivalence of CG and QN}
In this section we present a framework for quasi-Newton methods that will admit necessary and sufficient conditions on $B_k$ such that the corresponding quasi-Newton method and the method of conjugate gradients generate parallel search directions. In the following lemma we give the foundation of this framework, which is based on observing the structure of the vector $p_k^{CG}$ in Definition~\ref{def-cg}.
\begin{lemma}\label{lem-pcg} Let the method of conjugate gradients, CG, given by Definition~\ref{def-cg}, be at step $k$. Then the vector $p_k^{CG}$, given in Definition~\ref{def-cg}, is the unique solution $u$ to
\begin{equation}\label{Apcg} A_ku=-g_k, \end{equation}
where the nonsingular matrix $A_k$ is given by
\begin{equation}\label{defA} A_0=I, \quad A_k=I+\frac{1}{g_{k-1}^Tg_{k-1}}p_{k-1}^{CG}g_k^T, \quad k=1,2,\dots,r-1. \end{equation} \end{lemma}
\begin{proof} For $k=0$, the result is immediate. For $k\ge 1$, the matrix $A_k$ is nonsingular and its inverse is given by
\begin{equation}\label{defAinv} A_k^{-1}=I-\frac{1}{g_{k-1}^Tg_{k-1}}p_{k-1}^{CG}g_k^T. \end{equation}
This can be seen since the product of $A_k$ given by \eqref{defA} and $A_k^{-1}$ given by \eqref{defAinv} is the identity matrix, as $g_k^Tp_{k-1}^{CG}= \nolinebreak 0$ by \eqref{orth}. Hence, the unique solution to \eqref{Apcg} is given by
$$ u=-A_k^{-1}g_k=-g_k+\frac{g_k^Tg_k}{g_{k-1}^Tg_{k-1}}p_{k-1}^{CG}, $$
in effect $u=p_k^{CG}$ as in Definition~\ref{def-cg}. \end{proof}
Note that, without loss of generality, any non-singular matrix $B_k$ may be expressed as $B_k=A_k^TW_kA_k$, with $A_k$ as in Lemma~\ref{lem-pcg} and some non-singular matrix $W_k$. In effect there is a one-to-one correspondence between $W_k$ and $B_k$, as $A_k$ is a non-singular matrix. In the following theorem we state necessary and sufficient conditions on the matrix $B_k$ in iteration $k$ of QN such that $p_k$, obtained by solving \eqref{pQN} in Definition~\ref{def-qn}, satisfies $p_k=\delta_kp_k^{CG}$ for some $\delta_k \neq 0$. We make use of Lemma~\ref{lem-pcg} and the fact that any matrix $B_k$ can be expressed as $B_k=A_k^TW_kA_k$.
\begin{theorem}\label{thm-iff} Let $p_k$ be defined as in Definition~\ref{def-qn}, i.e., $B_kp_k=-g_k$, with $B_k$ nonsingular. Let $A_k$ be given as in Lemma~\ref{lem-pcg} and let $W_k$ be defined by the decomposition $B_k=A_k^T W_k A_k$. Then, for any nonzero scalar $\delta_k$, it holds that $p_k=\delta_k p_k^{CG}$ if and only if
\[ B_k A_k^{-1} g_k = \frac1{\delta_k} g_k \]
if and only if
\[ W_k g_k = \frac1{\delta_k} g_k. \] \end{theorem}
\begin{proof} By Lemma~\ref{lem-pcg}, $A_k^{-1}g_k=-p_k^{CG}$, hence $p_k=\delta_k p_k^{CG}$ if and only if $B_k(-p_k^{CG})=(1/\delta_k)g_k$, i.e., $B_k A_k^{-1}g_k=(1/\delta_k)g_k$. Since $W_k=A_k^{-T}B_k A_k^{-1}$, the second part of the theorem holds as
$$ A_k^{-T}g_k=\big( I-\frac{1}{g_{k-1}^Tg_{k-1}}g_k(p_{k-1}^{CG})^T\big)g_k=g_k, $$
since $(p_{k-1}^{CG})^Tg_k=0$ by \eqref{orth}. \end{proof}
The necessary and sufficient conditions of Theorem~\ref{thm-iff} give a straightforward way to check if a matrix $B_k$ is such that the corresponding QN-method and CG will generate parallel search directions. In addition, the second part of Theorem~\ref{thm-iff} gives us the ability to construct any such \emph{symmetric} matrix $B_k$ by choosing an appropriate symmetric matrix $W_k$.
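This construction is straightforward to exercise numerically. The following sketch (Python with numpy; the quadratic, the seed and the choice $\delta_k=3$ are our own illustrative assumptions) performs one CG step, verifies Lemma~\ref{lem-pcg}, builds a symmetric $W_k$ with $W_kg_k=(1/\delta_k)g_k$ and checks that $B_k=A_k^TW_kA_k$ yields $p_k=\delta_kp_k^{CG}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, delta = 5, 3.0
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)
c = rng.standard_normal(n)

g0 = c.copy()                        # gradient at x_0 = 0
p0 = -g0
theta = -(g0 @ p0) / (p0 @ H @ p0)   # exact linesearch step
g1 = g0 + theta * H @ p0
p1_cg = -g1 + (g1 @ g1) / (g0 @ g0) * p0

A1 = np.eye(n) + np.outer(p0, g1) / (g0 @ g0)        # (defA)
assert np.allclose(np.linalg.solve(A1, -g1), p1_cg)  # Lemma lem-pcg

# any symmetric nonsingular W with W g1 = (1/delta) g1 works (Theorem thm-iff)
P = np.eye(n) - np.outer(g1, g1) / (g1 @ g1)   # projector onto complement of g1
R = rng.standard_normal((n, n))
W = np.outer(g1, g1) / (delta * (g1 @ g1)) + P @ (np.eye(n) + R @ R.T) @ P
B1 = A1.T @ W @ A1
p1 = np.linalg.solve(B1, -g1)
print(np.linalg.norm(p1 - delta * p1_cg))      # of order 1e-13
\end{verbatim}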
In Definition~\ref{def-qn} we assume $B_k$ symmetric; however, Theorem~\ref{thm-iff} requires symmetry of neither $B_k$ nor $W_k$. Note that Theorem~\ref{thm-iff} is in no way based on \eqref{suff}. We shall show that the necessary and sufficient conditions on $B_k$ in Theorem~\ref{thm-iff} admit a larger set of QN-schemes than the one-parameter Broyden family. For the case $\delta_k=1$, Theorem~\ref{thm-iff} gives necessary and sufficient conditions on $B_k$ such that $p_k$ and $p_k^{CG}$, in addition to being parallel, are of the exact same length. Note that if $p_{k-1}=\delta_{k-1}p_{k-1}^{CG}$ for $\delta_{k-1}\ne 0$, then $A_k$ defined as in Lemma~\ref{lem-pcg} may equivalently be expressed as
\begin{equation}\label{defA2} A_0=I, \quad A_k=I-\frac{1}{p_{k-1}^Tg_{k-1}}p_{k-1}g_k^T \end{equation}
since $g_{k-1}^Tg_{k-1}=-(p_{k-1}^{CG})^Tg_{k-1}$ by Definition~\ref{def-cg} and \eqref{orth}. From \eqref{defA2} we see that $A_k$ is independent of the scaling of the previous search direction $p_{k-1}$. Hence, if we consider a QN-scheme where, in each iteration $k$, $B_k$ is chosen as $B_k=A_k^TW_kA_k$ for some $W_k$ that satisfies the necessary and sufficient conditions of Theorem~\ref{thm-iff}, and $A_k$ as in \eqref{defA2}, the CG search directions are not needed explicitly but can be obtained as $p_k^{CG}=(1/\delta_k)p_k$, where $\delta_k$ is given by $\delta_k=(p_k^TB_kp_k)/(g_k^Tg_k)$. Consider the one-parameter Broyden family given by \eqref{broyden} for some non-zero $\phi_k$. Note that $w_k$ in \eqref{broyden} can be expressed as
\begin{equation}\label{wbroyden} w_k=\frac{1}{p_{k-1}^T Hp_{k-1}}Hp_{k-1}-\frac{1}{p_{k-1}^T B_{k-1}p_{k-1}}B_{k-1}p_{k-1}=\frac{1}{p_{k-1}^T B_{k-1}p_{k-1}}g_k, \end{equation}
by $B_{k-1}p_{k-1}=-g_{k-1}$ and \eqref{updateg}. In the following proposition we derive the expression for $W_k^{\phi_k}$ defined by the decomposition $B_k^{\phi_k}=A_k^T W_k^{\phi_k}A_k$ with $A_k$ as in \eqref{defA2}. In addition, we give the explicit expression for the scaling $\delta_k$ in $p_k=\delta_kp_k^{CG}$ in terms of the Broyden parameter $\phi_k$. This expression, denoted by $\delta_k(\phi_k)$, was found in \cite{ForsgrenOdland}, but the derivation is much simplified in the framework of this paper. Since \eqref{broyden} includes $B_{k-1}$, we assume that Assumption~\ref{assum-B-} holds.
\begin{proposition}\label{prop-broyden} Let Assumption~\ref{assum-B-} hold. If $B_k^{\phi_k}$ is given by \eqref{broyden} and $A_k$ is given by \eqref{defA2}, then $W_k^{\phi_k}$ defined by the decomposition $B_k^{\phi_k}= A_k^T W_k^{\phi_k} A_k$ is given by
\begin{equation}\label{Wbroyden} W_k^{\phi_k}=B_{k-1}+(\frac{1}{\theta_{k-1}}-1)\frac{1}{p_{k-1}^TB_{k-1} p_{k-1}}g_{k-1}g_{k-1}^T+\phi_k \frac{1}{p_{k-1}^TB_{k-1} p_{k-1}}g_kg_k^T, \end{equation}
where $\theta_{k-1}$ is the exact linesearch step along $p_{k-1}$ given by \eqref{exactlinesearch}. In addition, if $p_k$ is defined as in Definition~\ref{def-qn} for $B_k=B_k^{\phi_k}$, i.e., $B_k^{\phi_k}p_k=-g_k$, then $p_k=\delta_k(\phi_k) p_k^{CG}$, where
\begin{equation}\label{deltaphi} \delta_k(\phi_k)=\frac{1}{1+\phi_k\frac{g_k^Tg_k}{p_{k-1}^TB_{k-1} p_{k-1}}}. \end{equation} \end{proposition}
\begin{proof} In this proof we will omit all $k$-subscripts and replace $(k-1)$-subscripts by ``$-$''. Given $B^{\phi}$ as in \eqref{broyden}, the expression for $W^{\phi}$ is given by
\begin{align*} W^{\phi} & = A^{-T} B^{\phi}A^{-1} \\ & =A^{-T}B^0A^{-1}+\phi p_{-}^TB_{-}p_{-}A^{-T}ww^TA^{-1} \\ &=W^0+\phi p_{-}^TB_{-}p_{-}A^{-T}ww^TA^{-1}.
\end{align*}
The expression for $W^0$ is given by
$$ W^0=B_{-}+(\frac{1}{\theta_{-}}-1)\frac{1}{p_{-}^TB_{-} p_{-}}g_{-}g_{-}^T, $$
which is found by straightforward manipulations of $A^{-T}B^0A^{-1}$; see Lemma~\ref{W0deriv}. For the second part of the expression of $W^{\phi}$ it holds, using \eqref{wbroyden}, that
$$ w^TA^{-1}=\big( \frac{1}{p_{-}^T B_{-}p_{-}}g\big)^T \big( I+\frac{1}{p_{-}^Tg_{-}}p_{-}g^T\big)=\frac{1}{p_{-}^T B_{-}p_{-}}g^T=w^T, $$
as $g^Tp_{-}=0$, and hence
$$ p_{-}^TB_{-}p_{-}ww^T=\frac{1}{p_{-}^T B_{-}p_{-}}gg^T. $$
Hence, $W^{\phi}$ defined by the decomposition $B^{\phi}=A^T W^{\phi} A$ is given by \eqref{Wbroyden}. Further, as
$$ W^{\phi}g=\Big( B_{-}+(\frac{1}{\theta_{-}}-1)\frac{1}{p_{-}^TB_{-} p_{-}}g_{-}g_{-}^T+\phi\frac{1}{p_{-}^T B_{-}p_{-}}gg^T\Big)g=(1+\phi\frac{g^Tg}{p_{-}^T B_{-}p_{-}})g, $$
it holds, by Theorem~\ref{thm-iff}, that $p_k=\delta_k(\phi_k) p_k^{CG}$, with $\delta_k(\phi_k)$ as in \eqref{deltaphi}. \end{proof}
Note that, by Theorem~\ref{thm-iff}, $W_k=I$ and $W_k=B_{k-1}$ are choices for $W_k$ such that $B_k=A_k^TW_kA_k$ gives $p_k=\delta_kp_k^{CG}$.\footnote{Under Assumption~\ref{assum-B-}, $B_{k-1}g_k=g_k$.} By Proposition~\ref{prop-broyden}, these choices do not in general belong to the one-parameter Broyden family.\footnote{Note that for $\theta_{k-1}=1$ and $\phi_k=0$ we have $W_k^0=B_{k-1}$; this is the BFGS QN-scheme for $\theta_{k-1}=1$. However, $W_k=B_{k-1}$ is not in the one-parameter Broyden family for a general value of $\theta_{k-1}$.} Hence, the set of QN-schemes defined by Theorem~\ref{thm-iff} includes, but is not limited to, the one-parameter Broyden family. For a general $W_k$ we get the following expression for the matrix $B_k$:
\begin{align*} B_k & = A_k^T W_k A_k = (I - \frac1{g_{k-1}^T p_{k-1}} g_k p_{k-1}^T) W_k (I - \frac1{g_{k-1}^T p_{k-1}} p_{k-1} g_k^T) \\ & = W_k - \frac1{g_{k-1}^T p_{k-1}} g_k p_{k-1}^T W_k - \frac1{g_{k-1}^T p_{k-1}} W_k p_{k-1} g_k^T + \frac{p_{k-1}^T W_k p_{k-1}}{(g_{k-1}^T p_{k-1})^2} g_k g_k^T. \end{align*}
In particular, $W_k=I$ gives
\begin{align} B_k & = I - \frac1{g_{k-1}^T p_{k-1}} g_k p_{k-1}^T - \frac1{g_{k-1}^T p_{k-1}} p_{k-1} g_k^T + \frac{p_{k-1}^T p_{k-1}}{(g_{k-1}^T p_{k-1})^2} g_k g_k^T. \label{W=I} \end{align}
Analogously, $W_k=B_{k-1}$ gives
\begin{align} B_k & = B_{k-1} + \frac1{g_{k-1}^T p_{k-1}} g_k g_{k-1}^T + \frac1{g_{k-1}^T p_{k-1}} g_{k-1} g_k^T - \frac1{g_{k-1}^T p_{k-1}} g_k g_k^T. \label{W=B-} \end{align}
Next we state a corollary of Theorem~\ref{thm-iff} giving necessary and sufficient conditions on an update matrix $U_k$ defined by \eqref{Bupdate}, i.e., $U_k=B_k-B_{k-1}$, such that $p_k=\delta_kp_k^{CG}$. Note that, as for $W_k$ and $B_k$, there is a one-to-one correspondence between $U_k$ and $B_k$ given $B_{k-1}$.
\begin{corollary}\label{cor-iffU} Let Assumption~\ref{assum-B-} hold. Let $p_k$ be defined as in Definition~\ref{def-qn}, i.e., $B_kp_k=-g_k$, with $B_k$ nonsingular. Let $A_k$ be given by \eqref{defA} and let $U_{k}$ be defined by \eqref{Bupdate}, i.e., $U_{k}=B_k - B_{k-1}$. Then, for any nonzero scalar $\delta_k$, it holds that $p_k=\delta_k p_k^{CG}$ if and only if
\begin{equation}\label{condU} U_{k} A_k^{-1} g_k = (\frac1{\delta_k} -1 ) g_k - \frac{ g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}} g_{k-1}. \end{equation} \end{corollary}
\begin{proof} By Theorem~\ref{thm-iff} it holds that $p_k=\delta_k p_k^{CG}$ if and only if $B_kA_k^{-1}g_k=(1/\delta_k)g_k$.
For $B_k=B_{k-1}+U_k$ it holds that $p_k=\delta_k p_k^{CG}$ if and only if
\begin{align*} U_kA_k^{-1}g_k &=\frac1{\delta_k}g_k-B_{k-1}A_k^{-1}g_k \\ &=\frac1{\delta_k}g_k-B_{k-1}\big( g_k+\frac{g_k^Tg_k}{p_{k-1}^T g_{k-1}}p_{k-1} \big) \\ &=\frac1{\delta_k}g_k- g_k-\frac{g_k^Tg_k}{p_{k-1}^T B_{k-1}p_{k-1}}g_{k-1}, \end{align*}
and the statement of the corollary follows. \end{proof}
Note that in the right-hand side of \eqref{condU} in Corollary~\ref{cor-iffU}, the component along $g_{k-1}$ is non-zero. Given the necessary and sufficient conditions on $U_k$ in Corollary~\ref{cor-iffU}, we can now show that neither of the conditions on $U_k$ given by \eqref{suffcondU} is necessary to get $p_k=\delta_k p_k^{CG}$. Consider $U_k$ defined by $U_k=B_k-B_{k-1}$ for $B_k$ as in \eqref{W=I}, i.e.,
\begin{equation}\label{UforW=I} U_k=I - \frac1{g_{k-1}^T p_{k-1}} g_k p_{k-1}^T - \frac1{g_{k-1}^T p_{k-1}} p_{k-1} g_k^T + \frac{p_{k-1}^T p_{k-1}}{(g_{k-1}^T p_{k-1})^2} g_k g_k^T-B_{k-1}. \end{equation}
As $B_k=A_k^TA_k$ satisfies Theorem~\ref{thm-iff} for $\delta_k=1$, it holds that $U_k$ in \eqref{UforW=I} satisfies Corollary~\ref{cor-iffU} for $\delta_k=1$. However, for $U_k$ as in \eqref{UforW=I}, $U_kp_{k-1} \neq Hp_{k-1}-B_{k-1}p_{k-1}$, i.e., $U_k$ does not satisfy the secant condition \eqref{secantcond}. In addition, $\mathcal{R}(U_k)$ is not limited to $span\{g_{k-1}, g_k\}$, so \eqref{spanU} is not satisfied. Hence, the conditions on $U_k$ in \eqref{suffcondU} are not necessary conditions on $U_k$ to get $p_k=\delta_k p_k^{CG}$. For $U_k$ defined by $U_k=B_k-B_{k-1}$, for $B_k$ as in \eqref{W=B-}, it holds that \eqref{spanU} is satisfied, but the secant condition \eqref{secantcond} is not.
\subsection{Symmetric rank-one update schemes}
Next we consider the case when $U_k$ is a symmetric matrix of rank one. Under the sufficient condition \eqref{suff}, it is well-known that there is a unique symmetric rank-one update scheme in the one-parameter Broyden family, usually referred to as SR1, see, e.g., \cite[Chapter 9]{luenberger}. In fact, SR1 is uniquely defined by the secant condition \eqref{secantcond} alone. In this section we show that there are infinitely many symmetric rank-one update schemes that give $p_k=\delta_kp_k^{CG}$ for some $\delta_k \neq 0$. Using Corollary~\ref{cor-iffU} we can now state the following result regarding the case when the update matrix $U_k$ is a symmetric matrix of rank one.
\begin{lemma}\label{lem-rank1} Let Assumption~\ref{assum-B-} hold. Let $U_{k}$ be defined by \eqref{Bupdate}, i.e., $U_{k}=B_k - B_{k-1}$, and let $U_k$ be such that the condition in Corollary~\ref{cor-iffU} holds. If $U_{k}$ is symmetric and of rank one, then
\begin{equation}\label{Urank1} U_{k} = \frac1{ (\frac1{\delta_k} -1 ) g_k^T g_k - \frac{ (g_k^T g_k)^2}{p_{k-1}^T B_{k-1}p_{k-1}}} u_{k} u_{k}^T, \end{equation}
where
\[ u_{k}= (\frac1{\delta_k} -1 ) g_k - \frac{ g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}} g_{k-1}. \]
In addition, $U_k$ is well defined for all $\delta_k \neq 0$ except
\begin{equation}\label{baddelta} \hat{\delta}_k = \frac1{1 + \frac{g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}}}. \end{equation} \end{lemma}
\begin{proof} Let $U_k=\beta_k u_ku_k^T$, where $\beta_k$ is a scaling and $u_k$ is a vector in $\mathbb{R}^n$, both to be determined. By \eqref{condU} in Corollary~\ref{cor-iffU} it holds that
$$ \beta_k u_ku_k^TA_k^{-1} g_k = (\frac1{\delta_k} -1 ) g_k - \frac{ g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}} g_{k-1}.
$$
Hence, $u_k$ will be equal to the right-hand side of the above expression up to some arbitrary non-zero scaling. Let
$$ u_k=(\frac1{\delta_k} -1 ) g_k - \frac{ g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}} g_{k-1}, $$
so that the scaling of $u_k$ is contained in $\beta_k$, given by
$$ \beta_k=\frac1{u_k^TA_k^{-1} g_k}=\frac1{(\frac1{\delta_k} -1 ) g_k^T g_k - \frac{ (g_k^T g_k)^2}{p_{k-1}^T B_{k-1}p_{k-1}}}, $$
and \eqref{Urank1} follows. For $\delta_k=\hat{\delta}_k$ in \eqref{baddelta}, the expression \eqref{Urank1} is not well-defined due to division by zero. In addition, for $\delta_k=\hat{\delta}_k$, the vector $u_k$ will have the form $u_k=\alpha g_k-\alpha g_{k-1}$ for $\alpha=(g_k^Tg_k)/(p_{k-1}^T B_{k-1}p_{k-1})$, which implies that $U_kA_k^{-1}g_k=0$, and hence Corollary~\ref{cor-iffU} is not satisfied. \end{proof}
Note that, by Lemma~\ref{lem-rank1}, $u_k$ has a non-zero component along $g_{k-1}$ and that $\mathcal{R}(U_k) \subseteq span\{g_{k-1}, g_k\}$. Further, it holds that $U_{k}\succeq 0$ if $\delta_k < \hat{\delta}_k$. In Lemma~\ref{lem-rank1}, a well-defined update matrix $U_k$ is given for any choice of $\delta_k \neq \hat{\delta}_k$. Consequently, there is an infinite number of rank-one update schemes that give $p_k=\delta_kp_k^{CG}$, for $\delta_k \neq \hat{\delta}_k$. In the following proposition we state this result more precisely. In effect, any symmetric rank-one matrix with $u_k=\alpha_kg_k-\alpha_{k-1}g_{k-1}$, where $\alpha_{k-1} \neq 0$ and $\alpha_k \neq \alpha_{k-1}$, will give $p_k=\delta_kp_k^{CG}$, where $\delta_k$ is determined by the choices of $\alpha_k$ and $\alpha_{k-1}$.
\begin{proposition}\label{prop-rank1} Let Assumption~\ref{assum-B-} hold. Let $p_k$ be defined as in Definition~\ref{def-qn}, i.e., $B_kp_k=-g_k$, with $B_k$ nonsingular. Let $B_{k}$ be given by \eqref{Bupdate}, i.e., $B_k= B_{k-1}+U_{k}$, with
\begin{equation}\label{Urank1gen} U_{k} = \frac1{ \alpha_{k-1}(\alpha_k-\alpha_{k-1}) p_{k-1}^T B_{k-1}p_{k-1}} u_{k} u_{k}^T, \end{equation}
where
$$ u_k=\alpha_kg_k-\alpha_{k-1}g_{k-1}, $$
for any $\alpha_k$ and $\alpha_{k-1}$ such that $\alpha_{k-1} \neq 0$ and $\alpha_k \neq \alpha_{k-1}$. Then $p_k=\delta_kp_k^{CG}$ for
\begin{equation}\label{deltaalpha} \delta_k=\frac1{1 + \frac{\alpha_k}{\alpha_{k-1}}\frac{g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}}}. \end{equation} \end{proposition}
\begin{proof} Let $U_k=\beta_k u_ku_k^T$, with $\beta_k$ as in \eqref{Urank1gen} and $u_k=\alpha_kg_k-\alpha_{k-1}g_{k-1}$. Note that
\begin{align*} u_k&=\alpha_kg_k-\alpha_{k-1}g_{k-1}\\ &=\alpha_{k-1}\frac{p_{k-1}^T B_{k-1}p_{k-1}}{g_k^Tg_k}\big(\frac{\alpha_k}{\alpha_{k-1}}\frac{g_k^Tg_k}{p_{k-1}^T B_{k-1}p_{k-1}}g_k- \frac{g_k^Tg_k}{p_{k-1}^T B_{k-1}p_{k-1}}g_{k-1}\big), \end{align*}
hence with
$$ \frac{1}{\delta_k}-1=\frac{\alpha_k}{\alpha_{k-1}}\frac{g_k^Tg_k}{p_{k-1}^T B_{k-1}p_{k-1}} $$
it holds that
\begin{align*} u_k &=\alpha_{k-1}\frac{p_{k-1}^T B_{k-1}p_{k-1}}{g_k^Tg_k}\big((\frac{1}{\delta_k}-1)g_k- \frac{g_k^Tg_k}{p_{k-1}^T B_{k-1}p_{k-1}}g_{k-1}\big). \end{align*}
Further, $u_k^TA_k^{-1}g_k=(\alpha_k-\alpha_{k-1})g_k^Tg_k$. Then with
$$ \beta_k=\frac1{ \alpha_{k-1}(\alpha_k-\alpha_{k-1}) p_{k-1}^T B_{k-1}p_{k-1}}=\frac1{(\alpha_k-\alpha_{k-1})g_k^Tg_k}\frac1{\alpha_{k-1}\frac{p_{k-1}^T B_{k-1}p_{k-1}}{g_k^Tg_k}}, $$
it holds that $U_k=\beta_ku_ku_k^T$ with $u_k=\alpha_kg_k-\alpha_{k-1}g_{k-1}$ is equivalent to \eqref{Urank1} in Lemma~\ref{lem-rank1}.
Note that $\alpha_{k-1} \neq 0$ and $\alpha_k \neq \alpha_{k-1}$, so the above expression for $\beta_k$ is well defined. Hence, by Lemma~\ref{lem-rank1} and Corollary~\ref{cor-iffU}, $p_k=\delta_kp_k^{CG}$, with $\delta_k$ as in \eqref{deltaalpha}. \end{proof}
There are two ways in which the symmetric rank-one update matrix given by Proposition~\ref{prop-rank1} can fail to be well defined. The first is $\alpha_{k-1}=0$: then the component along $g_{k-1}$ vanishes and both \eqref{Urank1gen} and \eqref{deltaalpha} in Proposition~\ref{prop-rank1} become not well defined. The second is $\alpha_{k}=\alpha_{k-1}$: then \eqref{Urank1gen} in Proposition~\ref{prop-rank1} becomes not well defined due to division by zero. Further, for $\alpha_{k}=\alpha_{k-1}$, $\delta_k$ in Proposition~\ref{prop-rank1} is equal to $\hat{\delta}_k$ in Lemma~\ref{lem-rank1}. However, any other choice of $\alpha_{k}$ and $\alpha_{k-1}$ gives a symmetric rank-one update scheme for which $p_k=\delta_kp_k^{CG}$, with $\delta_k$ as in Proposition~\ref{prop-rank1}, and there is an infinite number of such choices. Note that it holds that $U_k \succeq 0$ if either i) $\alpha_{k-1}$ and $p_{k-1}^T B_{k-1}p_{k-1}$ have the same sign and $\alpha_k > \alpha_{k-1}$, or ii) $\alpha_{k-1}$ and $p_{k-1}^T B_{k-1}p_{k-1}$ have different signs and $\alpha_k < \alpha_{k-1}$. It is straightforward to show that i) and ii) together are equivalent to $\delta_k < \hat{\delta}_k$, with $\hat{\delta}_k$ as in Lemma~\ref{lem-rank1} and $\delta_k$ as in Proposition~\ref{prop-rank1}. Lemma~\ref{lem-rank1} and Proposition~\ref{prop-rank1} give the same result from different perspectives. In Lemma~\ref{lem-rank1}, the symmetric rank-one update matrix is stated given the value of $\delta_k$; one can derive the values of $\alpha_{k-1}$ and $\alpha_k$ depending on $\delta_k$. In Proposition~\ref{prop-rank1}, the symmetric rank-one update matrix is instead stated given the values of $\alpha_{k-1}$ and $\alpha_k$, and $\delta_k$ depends on these values. Consider SR1, the symmetric rank-one update scheme uniquely defined by the secant condition \eqref{secantcond}, given by
$$ U_{k} = \frac1{ p_{k-1}^T(H-B_{k-1})p_{k-1}} u_{k} u_{k}^T, $$
with
$$ u_{k}=Hp_{k-1}-B_{k-1}p_{k-1}=\alpha_kg_k-\alpha_{k-1}g_{k-1}, $$
for $\alpha_k=1/\theta_{k-1}$ and $\alpha_{k-1}=(1/\theta_{k-1}-1)$. For $\theta_{k-1}=1$, $\alpha_{k-1}=0$ and SR1 is not well defined by Proposition~\ref{prop-rank1}. Further, by Proposition~\ref{prop-rank1},
\[ \delta_k = \frac1{1 + \frac1{(1-\theta_{k-1})}\frac{g_k^T g_k}{p_{k-1}^T B_{k-1}p_{k-1}}}, \]
which is also not well defined for $\theta_{k-1}=1$. Hence, in terms of Proposition~\ref{prop-rank1}, for $\theta_{k-1}=1$, SR1 becomes not well defined as $\alpha_{k-1}=0$. However, it is not possible that SR1 becomes not well defined due to $\alpha_k=\alpha_{k-1}$, as $1/\theta_{k-1} \neq 1/\theta_{k-1} -1$ for all $\theta_{k-1}$. Note that for $\alpha_k=1/\theta_{k-1}$ and $\alpha_{k-1}=(1/\theta_{k-1}-1)$, there is no way to ensure $U_k \succeq 0$.
\subsection{Review of known results for the one-parameter Broyden family}\label{sec-review}
In this section we review some known results on the one-parameter Broyden family for which the proofs can be carried out in a straightforward manner based on Theorem~\ref{thm-iff}. The BFGS-update scheme is given by letting $\phi_k=0$ in \eqref{broyden}. By Proposition~\ref{prop-broyden} it holds that for BFGS $p_k=p_k^{CG}$, i.e., $\delta_k(0)=1$, which is a well-known result due to Nazareth, see \cite{Nazareth}.
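The scaling $\delta_k(\phi_k)$ in Proposition~\ref{prop-broyden}, and in particular $\delta_k(0)=1$ for BFGS, is easy to verify numerically. The following minimal sketch (Python with numpy; the quadratic and the sampled values of $\phi_k$ are our own illustrative assumptions) performs one step from $B_0=I$ and compares the QN direction with $\delta_k(\phi_k)p_k^{CG}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)
c = rng.standard_normal(n)

g0 = c.copy(); p0 = -g0; B0 = np.eye(n)
theta = -(g0 @ p0) / (p0 @ H @ p0)            # exact linesearch step
g1 = g0 + theta * H @ p0
p1_cg = -g1 + (g1 @ g1) / (g0 @ g0) * p0

Hp, Bp = H @ p0, B0 @ p0
w = Hp / (p0 @ Hp) - Bp / (p0 @ Bp)
for phi in [0.0, 0.5, 2.0, 10.0]:             # phi = 0 is BFGS
    B1 = (B0 + np.outer(Hp, Hp) / (p0 @ Hp)
             - np.outer(Bp, Bp) / (p0 @ Bp)
             + phi * (p0 @ Bp) * np.outer(w, w))    # (broyden)
    p1 = np.linalg.solve(B1, -g1)
    delta = 1.0 / (1.0 + phi * (g1 @ g1) / (p0 @ Bp))   # (deltaphi)
    print(phi, np.linalg.norm(p1 - delta * p1_cg))      # of order 1e-14
\end{verbatim}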
Further, by Lemma~\ref{W0deriv}, $W_k^0$ defined by the decomposition $B_k^0=A_k^TW_k^0A_k$ is given by
\begin{equation}\label{Wbfgs} W_k^0=B_{k-1}+(\frac{1}{\theta_{k-1}}-1)\frac{1}{p_{k-1}^TB_{k-1} p_{k-1}}g_{k-1}g_{k-1}^T. \end{equation}
The BFGS-update scheme is known to have the property of hereditary positive-definite\-ness, see, e.g., \cite[Chapter 9]{luenberger}. In the following proposition we prove this result using the expression for $W_k^0$ in \eqref{Wbfgs}.
\begin{proposition}\label{prop-bfgspd} Let Assumption~\ref{assum-B-} hold. Let $B_k^0$ be given by \eqref{broyden} for $\phi_k=0$, and let $W_k^{0}$ be defined by the decomposition $B_k^{0}=A_k^T W_k^{0} A_k$, for $A_k$ given by \eqref{defA2}, i.e., $W^0$ as in \eqref{Wbfgs}. For $B_{k-1} \succ 0$, it holds that $B_k^0 \succ 0$. \end{proposition}
\begin{proof} In this proof we will omit all $k$-subscripts and replace $(k-1)$-subscripts by ``$-$''. Since $A$ given by \eqref{defA2} is nonsingular, it is enough to show that if $B_{-} \succ 0$, then $W^0 \succ 0$. Consider $W^0$ as in \eqref{Wbfgs} and rewrite the expression as
$$ W^0=B_{-}^{1/2}\big(I+ (\frac{1}{\theta_{-}}-1)\frac{1}{p_{-}^TB_{-} p_{-}}B_{-}^{-1/2} g_{-}g_{-}^T B_{-}^{-1/2} \big) B_{-}^{1/2}=B_{-}^{1/2}(I+C) B_{-}^{1/2}. $$
By Sylvester's law of inertia, see, e.g., \cite[Chapter 8]{golubvanloan}, $W^0$ will have the same number of positive eigenvalues as $I+C$, since $B_{-}^{1/2}$ is nonsingular. Further, since $C$ is a symmetric matrix of rank one, $I+C$ will have only one eigenvalue that might not be equal to one. It remains to show that this eigenvalue is also positive. This eigenvalue, corresponding to the eigenvector $B_{-}^{-1/2}g_{-}$, is given by
$$ 1+ (\frac{1}{\theta_{-}}-1)\frac{1}{p_{-}^TB_{-} p_{-}}(B_{-}^{-1/2}g_{-})^T (B_{-}^{-1/2}g_{-})= 1+ (\frac{1}{\theta_{-}}-1)\frac{g_{-}^T B_{-}^{-1}g_{-}}{p_{-}^TB_{-} p_{-}} =\frac{1}{\theta_{-}}, $$
using $B_{-}p_{-}=-g_{-}$. Hence, the eigenvalue of $I+C$ that is not necessarily equal to one is given by $(1/\theta_{-}) >0$, since $\theta_{-} =(p_{-}^TB_{-} p_{-})/(p_{-}^TH p_{-})> 0$ for $B_{-} \succ 0$. Hence, if $B_{-} \succ 0$, then $W^0 \succ 0$, and the statement of the proposition follows. \end{proof}
Using the same strategy as in Proposition~\ref{prop-bfgspd} we can state the interval of the Broyden parameter $\phi_k$ such that $B_k^{\phi_k} \succ 0$ if $B_{k-1} \succ 0$. This result, see, e.g., \cite[Chapter 8]{nocedalwright}, is given in the following proposition using the expression for $W_k^{\phi_k}$ in Proposition~\ref{prop-broyden}.
\begin{proposition}\label{prop-broydenpd} Let Assumption~\ref{assum-B-} hold. Let $B_k^{\phi_k}$ be given by \eqref{broyden}, and let $W_k^{\phi_k}$ be defined by the decomposition $B_k^{\phi_k}=A_k^T W_k^{\phi_k} A_k$, for $A_k$ given by \eqref{defA2}, i.e., $W^{\phi_k}$ as in Proposition~\ref{prop-broyden}. For $B_{k-1} \succ 0$, it holds that $B_k^{\phi_k} \succ 0$ if
\begin{equation}\label{phipd} \phi_k >-\frac{p_{k-1}^TB_{k-1} p_{k-1}}{g_k^Tg_k}. \end{equation} \end{proposition}
\begin{proof} In this proof we will omit all $k$-subscripts and replace $(k-1)$-subscripts by ``$-$''. We will use the result of Proposition~\ref{prop-bfgspd}, i.e., $W^0 \succ 0$ for $B_{-} \succ 0$, to show that $W^{\phi} \succ 0$ for $\phi$ as in \eqref{phipd}.
Consider $W^{\phi}$ as in Proposition~\ref{prop-broyden} and rewrite the expression as
$$ W^{\phi}=(W^{0})^{1/2} \big( I+\phi\frac{1}{p_{-}^TB_{-} p_{-}}((W^{0})^{-1/2} g)((W^{0})^{-1/2} g)^T\big) (W^{0})^{1/2}=(W^{0})^{1/2} \big( I+C_{\phi}\big) (W^{0})^{1/2}. $$
By Sylvester's law of inertia, $W^{\phi}$ will have the same number of positive eigenvalues as $I+C_{\phi}$. Since $C_{\phi}$ is a symmetric matrix of rank one, it follows that the only eigenvalue of $I+C_{\phi}$ that might not be equal to one is the eigenvalue with eigenvector $(W^{0})^{-1/2} g$. This eigenvalue is given by
$$ 1+\phi\frac{1}{p_{-}^TB_{-} p_{-}}((W^{0})^{-1/2}g)^T((W^{0})^{-1/2}g)=1+\phi\frac{1}{p_{-}^TB_{-} p_{-}}g^T(W^{0})^{-1} g=1+\phi\frac{g^Tg}{p_{-}^TB_{-} p_{-}}, $$
since $W^{0}g=g$ and hence $(W^{0})^{-1}g=g$. Hence, in order for this eigenvalue to be positive it must hold that $\phi$ satisfies \eqref{phipd}. \end{proof}
For the particular value $\hat{\phi}_k =-(p_{k-1}^TB_{k-1} p_{k-1})/(g_k^Tg_k)$, the expression for $\delta_k(\phi_k)$ in Proposition~\ref{prop-broyden} is not well defined, and further
$$ W_k^{\hat{\phi}_k}g_k=0, $$
contradicting the condition in Theorem~\ref{thm-iff} that $W_k$ is a non-singular matrix. The value $\hat{\phi}_k$ is known as the \emph{degenerate value}.
\begin{comment} \subsection{Limited memory and memory-less update schemes} In this framework a description of limited memory and memory-less update schemes is straightforward. For example, take the BFGS-update scheme, $$ W_{BFGS}=B_{-}+(\frac{1}{\theta_{-}}-1)\frac{1}{g_{-}^T g_{-}}g_{-}g_{-}^T. $$ For $B=A^TWA$, the amount of previous information used is given by the choice of $B_{-}$. The only condition on $B_{-}$ is that $B_{-}g=g$. For $B_{-}=(B_{BFGS})_{-}$, i.e., we use full information and it holds that $(B_{BFGS})_{-}g=g$. In our framework a memory-less BFGS-update scheme would entail letting $B_{-}=I$, then clearly $B_{-}g=g$. A limited memory BFGS-update scheme would in our framework entail that $B_{-}$ contains some previous information, but not all and that $B_{-}$ is such that $B_{-}g=g$. For any update scheme in the one-parameter Broyden family, on a quadratic problem and using exact linesearch, it does not matter if we use all, some or no previous information. The search directions will be such that $p=\delta p^{CG}$ with $\delta \neq 0$ for all choices of $B_{-}$ such that $B_{-}g=g$. \end{comment}
\section{Conclusion}\label{Conclusion}
In this paper we have derived necessary and sufficient conditions on the matrix $B_k$ in a QN-method such that $p_k$, obtained by solving $B_kp_k=-g_k$, satisfies $p_k=\delta_k p_k^{CG}$ for some $\delta_k \neq 0$. These conditions are stated in Theorem~\ref{thm-iff}. It should be noted that a QN-method in which $B_k$ is chosen to satisfy the conditions of Theorem~\ref{thm-iff} in each iteration $k$ will not in general be such that $B_r = H$, where $r$ is the iteration where $g_r=0$. This behavior of $B_k$ is for example guaranteed if $B_k$ satisfies \eqref{suff}, see, e.g., \cite[Chapter 8]{nocedalwright}. We show that the set of QN-schemes that generate search directions parallel to those generated by CG is strictly larger than the one-parameter Broyden family. Further, we show that there is an infinite number of symmetric rank-one update schemes for QN that give search directions parallel to those of CG. In Proposition~\ref{prop-rank1}, we show that taking $u_k$ as almost any linear combination of $g_k$ and $g_{k-1}$ gives a $U_k$ for which $p_k=\delta_k p_k^{CG}$ for some $\delta_k \neq 0$.
In particular, we can attain all $\delta_k$ except $\hat{\delta}_k$ given by Lemma~\ref{lem-rank1}. In this paper, our focus is on the mathematical properties of CG and QN in exact arithmetic. We want to stress that considering the numerical properties in finite precision is of utmost importance, but such an analysis is beyond the scope of this paper. See, e.g., \cite{hager} for an illustration of a case where CG and QN generate identical iterates in exact arithmetic but the difference between numerically computed iterates for the two methods is large.
{ "timestamp": "2015-03-09T01:06:48", "yymm": "1503", "arxiv_id": "1503.01892", "language": "en", "url": "https://arxiv.org/abs/1503.01892", "abstract": "This paper concerns exact linesearch quasi-Newton methods for minimizing a quadratic function whose Hessian is positive definite. We show that by interpreting the method of conjugate gradients as a particular exact linesearch quasi-Newton method, necessary and sufficient conditions can be given for an exact linesearch quasi-Newton method to generate a search direction which is parallel to that of the method of conjugate gradients.We also analyze update matrices and give a complete description of the rank-one update matrices that give search direction parallel to those of the method of conjugate gradients. In particular, we characterize the family of such symmetric rank-one update matrices that preserve positive definiteness of the quasi-Newton matrix. This is in contrast to the classical symmetric-rank-one update where there is no freedom in choosing the matrix, and positive definiteness cannot be preserved.The analysis is extended to search directions that are parallel to those of the preconditioned method of conjugate gradients in a straightforward manner.", "subjects": "Optimization and Control (math.OC)", "title": "On exact linesearch quasi-Newton methods for minimizing a quadratic function", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713835553861, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7080104795079121 }
https://arxiv.org/abs/2302.06927
Algebraic certificates for the truncated moment problem
The truncated moment problem consists of determining whether a given finite-dimensional vector of real numbers y is obtained by integrating a basis of the vector space of polynomials of bounded degree with respect to a non-negative measure on a given set K of a finite-dimensional Euclidean space. This problem has plenty of applications, e.g., in optimization, control theory and statistics. When K is a compact semialgebraic set, the duality between the cone of moments of non-negative measures on K and the cone of non-negative polynomials on K yields an alternative: either y is a moment vector, or y is not a moment vector, in which case there exists a polynomial strictly positive on K making a linear functional depending on y vanish. Such a polynomial is an algebraic certificate of moment unrepresentability. We study the complexity of computing such a certificate using computer algebra algorithms.
\section{Introduction}
\paragraph{Problem statement} Let ${\bm{x}} = (x_1,\ldots,x_n)$ be variables, $\mathbb{R}[{\bm{x}}]$ be the ring of $n$-variate real polynomials and, for $d \in \mathbb{N}$, let $\mathbb{R}[{\bm{x}}]_{\leq d}$ be the vector space of real polynomials of degree at most $d$. The multivariate monomial with exponent ${\bm{\alpha}} = ({{\alpha}}_1,\ldots,{{\alpha}}_n) \in \mathbb{N}^n$ is denoted by ${\bm{x}}^{\bm{\alpha}} := x_1^{{{\alpha}}_1} \cdots x_n^{{{\alpha}}_n}$, and its total degree by $|{\bm{\alpha}}| = {{\alpha}}_1+\cdots+{{\alpha}}_n$. For $\g = (g_1,\ldots,g_k) \in \mathbb{R}[{\bm{x}}]^k$, the \fdef{basic semialgebraic set} associated with $\g$ is
\begin{equation} \label{semialgset} S(\g) = \{{\bm{a}} \in \mathbb{R}^n : g_1({\bm{a}}) \geq 0, \ldots, g_k({\bm{a}}) \geq 0\}. \end{equation}
Given $n,d \in \mathbb{N}$ and a sequence of real numbers ${\bm{y}} = (y_{\bm{\alpha}})_{{\bm{\alpha}} \in \mathbb{N}^n_d}$ indexed by $\mathbb{N}^n_d = \{{\bm{\alpha}} \in \mathbb{N}^n : \sum_{i=1}^n\alpha_i\leq d\}$, the {\it truncated moment problem} (below TMP) is the question of deciding whether there exists a nonnegative Borel measure $\mu$ on $\mathbb{R}^n$, with support in $K = S(\g)$, such that
\begin{equation} \label{moment} y_{\bm{\alpha}} = \int_K {\bm{x}}^{\bm{\alpha}} \, \mathrm{d}\mu({\bm{x}}), \,\,\,\,\,\,\,\,\, \text{ for all } {\bm{\alpha}} \in \mathbb{N}^n_d. \end{equation}
If this is the case, one says that ${\bm{y}}$ is {\it moment-re\-pre\-sen\-ta\-ble on $K$}. More generally, the monomial basis can be replaced by another linear basis of $\mathbb{R}[{\bm{x}}]_{\leq d}$ ({\it e.g.} Chebyshev polynomials). The TMP is the truncated version of the classical {\it full moment problem} \cite{kuhlmann2002positivity,s17}.
\paragraph{Overview and state of the art} The TMP is of central importance in data science. It is at the heart of several questions in optimization, control theory or statistics, to mention just a few application domains. It is a key ingredient of the moment-SOS (sum of squares) approach \cite{hkl20}, which consists of solving numerically non-convex non-linear problems at the price of solving a family of finite-dimensional conic optimization (typically semidefinite programming) problems. Mathematical foundations of the moment problem were recently surveyed in \cite{s17,f16}. The TMP can be interpreted as a {\em decision problem of the first order theory of the reals}, in which case the input-output data structure is as follows. The input is encoded by a finite-di\-men\-sio\-nal real vector ${\bm{y}}$, whose coordinates are indexed by a basis of the vector space of $n$-variate polynomials of degree $\leq d$, together with finitely-many polynomial inequalities defining a basic semialgebraic set $K \subset \mathbb{R}^n$. The output is a yes/no decision whether ${\bm{y}}$ is obtained by integrating the basis with respect to a non-negative measure supported on $K$. When $K$ is compact, the TMP is dual, in the convex analysis sense, to the problem of determining whether a polynomial is non-negative on the semialgebraic set $K$. This latter problem is at the heart of the development of real algebra in the twentieth century.
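To make the input data of the TMP concrete, the following minimal sketch (in Python with numpy; the dimensions, atoms and weights are our own illustrative assumptions and play no role in the paper) computes the truncated moment vector \eqref{moment} of an atomic measure; the TMP asks the inverse question, namely whether a given ${\bm{y}}$ arises in this way from some nonnegative measure supported on $K$:
\begin{verbatim}
import itertools
import numpy as np

n, d = 2, 2                                   # two variables, degree at most 2
exponents = [a for a in itertools.product(range(d + 1), repeat=n)
             if sum(a) <= d]                  # the index set N^n_d

# a 3-atomic measure mu = sum_i c_i * delta_{u_i}, atoms in the unit disk
atoms = np.array([[0.0, 0.0], [0.5, 0.5], [-0.5, 0.25]])
weights = np.array([1.0, 2.0, 0.5])

# y_alpha = integral of x^alpha d(mu) = sum_i c_i * u_i^alpha
y = {a: sum(c * np.prod(u ** np.array(a)) for c, u in zip(weights, atoms))
     for a in exponents}
for a in sorted(exponents, key=sum):
    print(a, y[a])
\end{verbatim}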
Whereas deciding nonnegativity of a polynomial of degree at least four can be challenging (NP-complete problems can be cast as instances of such positivity problems \cite{l09}), there exist algebraic certificates based on SOS for deciding strict positivity under compactness-like assumptions that can be computed by solving a semidefinite programming (SDP) problem \cite{powers1998algorithm}. For positivity over compact basic semialgebraic sets, these certificates have the form of linear combinations, with SOS coefficients, of polynomials that are explicitly nonnegative on $K$ \cite{putinar1993positive,schmudgen1991}. The size of the SDP is determined by the degree of the SOS-representation. Seminal papers \cite{ramana1997exact,porkolab1997complexity} have started to investigate the complexity of SDP in the context of rational arithmetic. More recent work is based on the determinantal structure of semidefinite programs \cite{henrion2016exact}. The specific case study of the computation of SOS certificates has recently received a lot of attention from the computer algebra community \cite{KLYZ08, kaltofen2012exact,henrion2016exact,henrion2019spectra,magron2021exact}, especially for the question whether certificates exist over the rational numbers \cite{peyrl2008computing, SaZhi10, guo2013computing,scheiderer2016sums}. In this work we make use of quantifier elimination for determining bounds on the complexity of computing certificates for unrepresentability, using quantitative results from~\cite{BPR96}. These also rely on recent advances in the complexity analysis of Putinar's Positivstellensatz \cite{baldi2022effective}. Note that these non-negativity problems can be solved with computer algebra algorithms which are root-finding algorithms; hence they do not provide algebraic certificates of non-negativity, but they do provide witnesses (real points) of negativity whenever such points exist. The first family of algorithms for doing so is based on the so-called Cylindrical Algebraic Decomposition~\cite{Collins}. It has complexity which is doubly exponential in the number of variables, which would lead, in the context of the TMP, to complexity bounds that are doubly exponential in $n + \tbinom{n+d}{d}$. The second family of algorithms, named the critical point method and initiated by~\cite{GV88}, has complexity which is singly exponential in the number of variables~(see \cite{BPR} and references therein). The purpose of this communication is to leverage these achievements and initiate the study and development of {\it computer algebra algorithms for solving the truncated moment problem for basic semialgebraic sets}.
\paragraph{Overview of the contribution} In the wake of the mentioned duality with moments, the existence of SOS-certificates for nonnegative polynomials on the dual side suggests that similar certificates might be used for the TMP on the primal side. On the one hand, when a measure exists whose partial moments coincide with ${\bm{y}}$, the measure itself is the natural algebraic proof that allows the user to verify directly that ${\bm{y}}$ is moment-representable. On the other hand, this paper shows the existence of explicit algebraic certificates of unrepresentability: these have the form of polynomials, positive on $K$, admitting a positivity certificate and orthogonal to the vector ${\bm{y}}$. Our contribution is based on the fact that the TMP, as a decision problem, is equivalent to the feasibility of a convex conic program in a finite dimensional vector space.
More precisely, the question is whether the interior of $\nneg{K}_d$, the cone of polynomials nonnegative on $K$ of degree at most $d$, intersects the vanishing locus of the Riesz functional ${\mathscr{L}}_{\bm{y}} : \mathbb{R}[{\bm{x}}]_{\leq d} \to \mathbb{R}$ defined by ${\mathscr{L}}_{\bm{y}}\left(\sum p_{\bm{\alpha}} {\bm{x}}^{\bm{\alpha}}\right) = \sum p_{\bm{\alpha}} y_{\bm{\alpha}}.$ When $K$ is compact, Tchakaloff's Theorem \cite{t57} states that ${\bm{y}}$ is moment-re\-pre\-sen\-ta\-ble whenever ${\mathscr{L}}_{\bm{y}}$ is nonnegative on $\nneg{K}_d$, in other words, if the mentioned conic program is {\it weakly feasible}. In this case there exists an atomic measure $\mu = \sum_{i=1}^s c_i \delta_{x_i}$ whose moment sequence of degree $\leq d$ is ${\bm{y}}$: such a measure is a (real) solution of a highly structured polynomial system of multivariate Vandermonde type, which we do not investigate here. On the other side of the coin, ${\bm{y}}$ is not moment-representable exactly when the conic program is {\it strongly feasible}: in algebraic terms, this means that there exists a polynomial $p \in \nneg{K}_d$, (strictly) positive on $K$, in the kernel of ${\mathscr{L}}_{\bm{y}}$. In our contribution we study algorithmic aspects of the computation of such {\it unrepresentability algebraic certificates} when ${\bm{y}}$ is not moment-representable. First, we show that if the quadratic module corresponding to the description of $K$ is archimedean, such certificates exist. We define an integer invariant called the unrepresentability degree, which measures the complexity of computing such a certificate. We give bounds on this degree that only depend on the input size of our algorithm. When the input vector ${\bm{y}}$ is defined over $\mathbb{Q}$, and if it is not moment-representable, we show that there exists a rational certificate of unrepresentability.
\section{Preliminaries}
\subsection{Nonnegative polynomials}
Let $K \subset \mathbb{R}^n$. A polynomial $f \in \mathbb{R}[{\bm{x}}]$ is called \fdef{nonnegative} on $K$ if $f({\bm{a}}) \geq 0$ for all ${\bm{a}} \in K$ and \fdef{positive} if $f({\bm{a}}) > 0$ for all ${\bm{a}} \in K$. We denote by
$$ \nneg{K}_d = \{f \in \mathbb{R}[{\bm{x}}]_{\leq d} : f({\bm{a}}) \geq 0, \,\forall{\bm{a}} \in K\} $$
the convex cone of polynomials of degree $\leq d$, nonnegative on $K$. If $K$ is semialgebraic, then $\nneg{K}_d$ is also semialgebraic by the theorem of Tarski on quantifier elimination over the reals \cite{Tarski}. We denote by $\Sigma_n \subset \mathbb{R}[{\bm{x}}]$ the \fdef{cone of sums of squares} of polynomials, and by $\Sigma_{n,2d} = \Sigma_n \cap \mathbb{R}[{\bm{x}}]_{\leq 2d}$ its degree-$2d$ part (remark that $\Sigma_{n,2d+1}=\Sigma_{n,2d}$). The cone $\Sigma_{n,2d}$ is full-dimensional in $\mathbb{R}[{\bm{x}}]_{\leq 2d}$ and is contained in $\nneg{\mathbb{R}^n}_{2d}$. Testing membership in cones of nonnegative polynomials over semialgebraic sets can be challenging. Indeed, testing nonnegativity of polynomials of degree $\geq 4$ is NP-hard \cite[Sec.~1.1]{l09}. Nevertheless, $\nneg{K}_d$ contains subcones that can be represented via linear matrix inequalities; thus testing membership in these subcones can be cast as a \fdef{semidefinite programming (SDP) problem}.
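For instance, deciding whether a univariate quartic is a sum of squares amounts to finding a positive semidefinite Gram matrix matching its coefficients in the basis $(1,x,x^2)$. The following minimal sketch (assuming the Python modeling package cvxpy together with an SDP-capable solver such as SCS is available; these tools are not used elsewhere in this paper) expresses this feasibility problem:
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Is f(x) = 2 - 2x + 3x^2 - 2x^3 + x^4 a sum of squares?  With the monomial
# basis m(x) = (1, x, x^2), f is SOS iff f = m' Q m for some PSD matrix Q.
f = np.array([2.0, -2.0, 3.0, -2.0, 1.0])        # f_0, ..., f_4

Q = cp.Variable((3, 3), symmetric=True)
constraints = [Q >> 0,
               Q[0, 0] == f[0],                  # coefficient of 1
               2 * Q[0, 1] == f[1],              # coefficient of x
               2 * Q[0, 2] + Q[1, 1] == f[2],    # coefficient of x^2
               2 * Q[1, 2] == f[3],              # coefficient of x^3
               Q[2, 2] == f[4]]                  # coefficient of x^4
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal': a PSD Gram matrix exists, hence f is SOS
\end{verbatim}
Here $f = (x^2-x)^2 + 2(x-\tfrac12)^2 + \tfrac32$, so the reported feasibility is consistent.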
Examples of such subcones are quadratic modules: for $K = S(\g)$ as in \eqref{semialgset} and $d \in \mathbb{N}$, and denoting $g_0:=1$, we define the \fdef{quadratic module} associated with $\g$ and its \fdef{truncation of order $d$} respectively by:
\begin{align*} Q(\g) &= \left\{\sum_{i=0}^k \sigma_i g_i : \sigma_i \in \Sigma_n\right\} \,\,\,\,\,\,\text{ and}\\ Q(\g)[d] &= \left\{\sum_{i=0}^k \sigma_i g_i : \sigma_i \in \Sigma_n, \,\deg(\sigma_i g_i) \leq d\right\}. \end{align*}
If $f \in Q(\g)$, we call the polynomials $[\sigma_0,\sigma_1,\ldots,\sigma_k]$ in an expression $f = \sum_{i=0}^k \sigma_i g_i$ a \fdef{SOS-certificate for $f \in Q(\g)$}. The sets $Q(\g)$ and $Q(\g)[d]$ depend on the polynomials $\g$ in the description of $K$. Denoting $Q(\g)_d := Q(\g) \cap \mathbb{R}[{\bm{x}}]_{\leq d}$, by construction one has $Q(\g)[d] \subset Q(\g)_d$ but, in general, this inclusion is strict: in other words, for some $\g$ there exists a polynomial $f \in Q(\g)$, of degree $d$, such that in any certificate $f = \sum_{i=0}^k \sigma_i g_i$, at least one product $\sigma_i g_i$ has degree $>d$. It is not even true in general that $Q(\g)_d \subset Q(\g)[D]$ for some possibly large $D$; quadratic modules for which such inclusions hold are called stable.
\begin{definition}[{\cite[Sec.~4.1]{marshall2008positive}}] For $d \in \mathbb{N}$, a quadratic module $Q(\g)$ is called \emph{stable in degree $d$} if there exists $D$ such that $Q(\g)_d \subset Q(\g)[D]$. It is called \emph{stable} if it is stable in every $d \in \mathbb{N}$ (we call the function $D=D(d)$ a \emph{stability function} for $Q(\g)$). \end{definition}
Stability in a given degree depends on the generators $\g$, whereas stability depends only on the quadratic module $Q(\g)$ and is equivalent to the existence of degree bounds for the representation $f = \sum_i \sigma_i g_i \in Q(\g)$ that only depend on the degree of $f$, see for instance \cite{netzer2009stability}. One example of a stable quadratic module is the SOS cone $\Sigma_n = Q(0)$: indeed, every polynomial in $\Sigma_{n,2d} = Q(0)_{2d}$ is a sum of squares of polynomials of degree at most $d$, that is, $Q(0)_{2d} = Q(0)[2d]$, and hence $D(2d)=D(2d+1)=2d$ is a stability function for $\Sigma_n$. As well as $\Sigma_{n,2d}$, truncated quadratic modules are \fdef{semidefinite representable sets}, that is, linear images of feasible sets of SDP problems (also known as projected spectrahedra or spectrahedral shadows in the literature): given a description $\g$ for $K = S(\g)$, computing one polynomial in $Q(\g)[D]$ amounts to solving a single SDP problem ({\it cf.} \Cref{ssec:naif_algo}).
\begin{definition}[{\cite{putinar1993positive}}] A quadratic module $Q(\g)$ is called \fdef{ar\-chi\-me\-de\-an} if there exists $u \in Q(\g)$ such that $S(u)$ is compact. \end{definition}
Remark that if $Q(\g)$ is archimedean, then $S(\g) \subset S(u)$, and thus $S(\g)$ is compact. Archimedeanity and stability are often mutually exclusive properties: indeed, for $n \geq 2$, an archimedean quadratic module is not stable.
\begin{theorem}[Putinar's Positivstellensatz \cite{putinar1993positive}] \label{putinar} Let $K = S(\g)$ be non-empty, and assume $Q(\g)$ is archimedean. Then every polynomial positive on $K$ belongs to $Q(\g)$. \end{theorem}
The problem of bounding the degree $D$ of the summands in a Putinar certificate $f=\sum_{i=0}^k \sigma_i g_i$ for a polynomial $f \in \nneg{K}_d$ is called the \fdef{effective} Putinar Positivstellensatz, see \cite{nie2007complexity,baldi2022effective}.
The work \cite{baldi2022effective} gives a bound for $D$ as a function of $d$, $n$, of the polynomial $f$ and of geometrical parameters of $K$, see \cite[Th.~1.7]{baldi2022effective}, {\it cf.} \Cref{bound_unrepr_deg}.
\subsection{Moments}
A \fdef{nonnegative Borel measure} (below \fdef{measure}, for short) $\mu$ is a bounded nonnegative countably additive set function on the $\sigma$-algebra of Borel sets $\mathscr{B}(\mathbb{R}^n)$. The \fdef{support} of $\mu$ is the complement of the largest open Borel set $A \in \mathscr{B}(\mathbb{R}^n)$ such that $\mu(A) = 0$, denoted by $\text{supp}(\mu)$. Let $K \subset \mathbb{R}^n$ be a Euclidean closed set. A measure $\mu$ is \fdef{supported} on $K$ if $\text{supp}(\mu) \subset K$ (in particular it satisfies $\mu(\mathbb{R}^n \setminus K) = 0$). For ${\bm{u}} \in K$, we denote by $\delta_{{\bm{u}}}$ the Dirac measure supported on the singleton $\{{\bm{u}}\}$. A finite linear combination $\sum_{i=1}^s c_i \delta_{{\bm{u}}_i}$ of $s$ Dirac measures is called $s$-\fdef{atomic}: its support is $\text{supp}(\sum_{i=1}^s c_i \delta_{{\bm{u}}_i})=\{{\bm{u}}_1,\ldots,{\bm{u}}_s\}$. For ${\bm{\alpha}} \in \mathbb{N}^n$, the \fdef{(monomial) moment of exponent ${\bm{\alpha}}$} of $\mu$ is the real number $\int_K {\bm{x}}^{{\bm{\alpha}}} \, \mathrm{d}\mu({\bm{x}})$ as in \eqref{moment}. We say that a measure $\mu$ satisfying \eqref{moment} is a \fdef{representing measure} for the sequence ${\bm{y}}=(y_{\bm{\alpha}})_{{\bm{\alpha}} \in \mathbb{N}^n_d}$. If $K$ is compact, the Stone-Weierstrass Theorem implies that a measure is uniquely determined by its (infinite-dimensional) sequence of monomial moments, see \cite[Cor.~3.3.1]{marshall2008positive}. In this work, we address the following inverse problem for semialgebraic sets:
\begin{problem}[Truncated Moment Problem {\cite[Ch.~17-18]{s17}}] \label{TMP} Let $K \subset \mathbb{R}^n$ be a basic closed semialgebraic set. Given a finite sequence ${\bm{y}}=(y_{{\bm{\alpha}}})_{{\bm{\alpha}} \in \mathbb{N}^n_d}$ of real numbers, determine whether ${\bm{y}}$ admits a representing measure supported on $K$. \end{problem}
For ${\bm{y}} = (y_{{\bm{\alpha}}})_{{\bm{\alpha}} \in \mathbb{N}^n}$, the matrix $M_d({\bm{y}}) = (y_{{\bm{\alpha}}+{\bm{\beta}}})_{{\bm{\alpha}},{\bm{\beta}} \in \mathbb{N}^n_d}$ is called the \fdef{moment matrix of order $d$} of ${\bm{y}}$. We recall that we denote by ${\mathscr{L}}_{\bm{y}} : \mathbb{R}[{\bm{x}}]_{\leq d} \to \mathbb{R}$ the \fdef{Riesz functional associated with ${\bm{y}}$}, defined by ${\mathscr{L}}_{\bm{y}}({\bm{x}}^{\bm{\alpha}})=y_{{\bm{\alpha}}}$ and extended linearly on $\mathbb{R}[{\bm{x}}]_{\leq d}$. Let now $K = S(\g)$ be a basic closed semialgebraic set, and let
\begin{align*} \mathscr{M}(K)_d = \Big\{& {\bm{y}} = (y_{{\bm{\alpha}}})_{{\bm{\alpha}} \in \mathbb{N}^n_d} \in \mathbb{R}^m : \exists\,\mu, \text{supp}(\mu) \subset K, \\ & \forall{\bm{\alpha}}\in\mathbb{N}^n_d, \, y_{{\bm{\alpha}}} = \int_K {\bm{x}}^{{\bm{\alpha}}} \, \mathrm{d}\mu({\bm{x}})\Big\} \end{align*}
denote the set of moments of order up to $d$ of nonnegative Borel measures with support in $K$. The set $\mathscr{M}(K)_d$ is in general not closed, as shown by the following example.
\begin{example}[{\cite[Rem.~3.147]{BPT}}] \label{example_not_closed} Let $n=1, d=4$ and ${\bm{y}}=(1,0,0,0,1)$. Its second moment matrix $M_2({\bm{y}})$ is positive semidefinite, but the vector ${\bm{y}}$ is not representable by a nonnegative univariate measure: indeed, $y_2 = 0$ forces any representing measure to be supported at the origin, which would give $y_4 = 0$, contradicting $y_4 = 1$. Thus ${\bm{y}} \not\in \mathscr{M}(\mathbb{R})_4$.
Nevertheless ${\bm{y}} \in \overline{\mathscr{M}(\mathbb{R})_4}$, the Euclidean closure of $\mathscr{M}(\mathbb{R})_4$: indeed ${\bm{y}} = \lim_{\epsilon \to 0} {\bm{y}}_\epsilon$, where ${\bm{y}}_\epsilon = (1,0,\epsilon^2,0,1)$ is the sequence of moments of degree $\leq 4$ of the $3$-atomic measure
$$ \mu_\epsilon = \frac{\epsilon^4}{2} \left(\delta_{\frac{1}{\epsilon}}+\delta_{-\frac{1}{\epsilon}}\right)+(1-\epsilon^4)\delta_0. $$
\hfill$\Box$ \end{example}
Let $V^\vee$ be the dual vector space of a real vector space $V$, that is, the set of $\mathbb{R}$-linear functionals $L : V \to \mathbb{R}$. If $C \subset V$ is a convex cone, the set $C^* = \{L \in V^\vee : L({\bm{a}}) \geq 0, \,\forall{\bm{a}} \in C\}$ is called the dual cone of $C$. It is straightforward to see that $\mathscr{M}(K)_d \subset \nneg{K}_d^*$, and a non-trivial result is that equality holds for $K$ compact:
\begin{theorem}[{Tchakaloff's Theorem \cite{t57}}] \label{tchakaloff} Let $K \subset \mathbb{R}^n$ be compact and $d \in \mathbb{N}$. Then
\[ \mathscr{M}(K)_d = \nneg{K}_d^* = \{{\bm{y}} \in \mathbb{R}^m : {\mathscr{L}}_{\bm{y}}(p) \geq 0, \:\forall p \in \nneg{K}_d\}. \] \end{theorem}
\Cref{tchakaloff} is a finite-dimensional version of the Riesz-Haviland Theorem \cite{cf08}. It is often used in the moment-SOS hierarchy, see \cite[Lemma 1.7]{hkl20}. A modern statement and proof can be found {\it e.g.} in \cite[Theorem 5.13]{l09}, see also \cite{blekhermanCoreVariety}.
\section{An algorithm for the moment problem}
We describe an algorithm based on semidefinite programming that solves Problem \ref{TMP} for compact basic semialgebraic sets. To do that, we first give a characterization of the interior of cones of nonnegative polynomials on compact sets $K \subset \mathbb{R}^n$ (\Cref{ssec:interior}). Next we interpret \Cref{TMP} as a conic feasibility problem and prove that its solvability is related to the feasibility type of the program (\Cref{ssec:conic}). Finally we describe our algorithm in \Cref{ssec:naif_algo}.
\subsection{Interior of $\nneg{K}_d$} \label{ssec:interior}
Let $K \subset \mathbb{R}^n$ be non-empty. For $f \in \mathbb{R}[{\bm{x}}]$, we set $f^* := \inf_{{\bm{x}} \in K} f({\bm{x}})$, possibly $-\infty$. It is straightforward to construct examples of positive polynomials $f \in \nneg{K}$, on a non-compact set $K$, such that $f \not\in \text{Int}(\nneg{K})$ even if $f^* > 0$ (for instance $1 = \lim_{\epsilon \to 0} (1-\epsilon x)$ with $1-\epsilon x \not\in \nneg{\mathbb{R}}_2$ for every $\epsilon > 0$, so $1 \not\in \text{Int}(\nneg{\mathbb{R}}_2)$). More generally, positive sum-of-squares polynomials of degree $<d$ lie in the boundary of the cone of nonnegative polynomials of degree $\leq d$ ({\it cf.} \cite[\S 4.4.3]{BPT}). The next folklore lemma shows that, for compact sets, the interior of $\nneg{K}_d$ is exactly the set of positive polynomials over $K$.
\begin{lemma} \label{lem:interior} Let $K \subset \mathbb{R}^n$ be non-empty, and let $d \in \mathbb{N}$. Then $\text{Int}(\nneg{K}_d) \subset \{f \in \mathbb{R}[{\bm{x}}]_{\leq d} : f^* > 0\}$. If $K$ is compact, equality holds, and $\text{Int}(\nneg{K}_d)$ consists of exactly those polynomials in $\mathbb{R}[{\bm{x}}]_{\leq d}$ that are positive on $K$. \end{lemma}
\begin{proof} If $f \in \mathbb{R}[{\bm{x}}]_{\leq d}$ is such that $f^* \leq 0$, then $(f-\epsilon)^* = f^*-\epsilon < 0$ for all $\epsilon > 0$, thus $f-\epsilon \not\in \nneg{K}_d$ for all $\epsilon > 0$, hence $f \not\in \text{Int}(\nneg{K}_d)$, which proves the sought inclusion $\subset$. Now assume $K$ is compact, and let $f \not\in \text{Int}(\nneg{K}_d)$.
Then $f$ is in the closure of the complement of $\nneg{K}_d$ in $\mathbb{R}[{\bm{x}}]_{\leq d}$, that is, $f$ is the pointwise limit $f = \lim_{k \to \infty} f_k$ of polynomials $f_k \not\in \nneg{K}_d$, in particular satisfying $f_k^* < 0$ for all $k$. Since $K$ is compact, $f_k^* = \min_{x \in K} f_k(x) = f_k(x_k)$ for some $x_k \in K$. Let $\overline{x} \in K$ be a limit point of $\{x_k\}_k$, which exists by the Bolzano-Weierstrass Theorem. Thus, up to extracting a subsequence, one has $0 \geq \lim_{k} \, f_k^* = \lim_{k} f_k(x_k) = f(\overline{x}) \geq f^*$, which shows the inclusion $\supset$, thus the equality $\text{Int}(\nneg{K}_d) = \{f \in \mathbb{R}[{\bm{x}}]_{\leq d} : f^* > 0\}$. Since the infimum of a polynomial function on a compact set is its minimum, one has $f^* > 0$ if and only if $\min_{x \in K} f(x) > 0$ if and only if $f$ is positive on $K$, as claimed. \end{proof}
\begin{remark} \Cref{putinar} and \Cref{lem:interior} ensure that if $Q(\g)$ is ar\-chi\-me\-de\-an, then $\text{Int}(\nneg{K}_d) \subset Q(\g) \cap \mathbb{R}[{\bm{x}}]_{\leq d} \subset \nneg{K}_d$. Thus under this assumption, \cite[Th.~1.7]{baldi2022effective} yields a degree bound $D = D(d,n,f,K)$ such that if $f \in \text{Int}(\nneg{K}_d)$ then $f \in Q(\g)[D]$. Since $Q(\g)[D]$ is semidefinite representable, it can be sampled through semidefinite programming: solving such an optimization problem yields an element of the boundary of $Q(\g)[D]$, thus this might not be sufficient to compute an element of $\text{Int}(\nneg{K}_d)$. \end{remark}
The following corollary shows that one can get elements of $\text{Int}(\nneg{K}_d)$ as well through semidefinite programming, from the knowledge of a polynomial description $\g$ of $K$.
\begin{corollary} \label{cor:minusone} Let $\g$ be such that $Q(\g)$ is archimedean, and let $K=S(\g)$. Let $f \in \text{Int}(\nneg{K}_d)$ and $0 < \delta < f^* = \min_K f$. Then $\frac{1}{\delta} f - 1 \in \text{Int}(\nneg{K}_d)$ and there exist $\sigma_0^\delta,\sigma_1^\delta, \ldots,\sigma_k^\delta \in \Sigma_n$ such that
$$ \frac{1}{\delta} f - 1 = \sigma_0^\delta+\sum_i \sigma_i^\delta g_i. $$ \end{corollary}
\begin{proof} The polynomial $f-\delta$ is positive on $K$, thus by \Cref{lem:interior}, $f-\delta \in \text{Int}(\nneg{K}_d)$ and hence ${(f-\delta)}/{\delta} = \frac{1}{\delta} f - 1 \in \text{Int}(\nneg{K}_d)$, since $\text{Int}(\nneg{K}_d)$ is a cone. Since $\frac{1}{\delta} f - 1$ is positive on $K$, we conclude by \Cref{putinar}. \end{proof}
\Cref{cor:minusone} can be rephrased as follows: if $Q(\g)$ is archimedean and $0 < \delta < f^* = \min_K f$, then $f/\delta \in 1+Q(\g)$. Remark that (unless $Q(\g)$ is stable in the degree of $f$) the degrees of the SOS-multipliers for a SOS-certificate $f/\delta \in 1+Q(\g)$ depend on $\delta$ and might be larger than the degrees for a SOS-certificate $f \in Q(\g)$.
\subsection{Moment problem as conic feasibility} \label{ssec:conic}
Let $C \subset V$ be a convex cone with non-empty interior, and let $L \subset V$ be an affine space. The \fdef{conic program} associated with $C$ and $L$ is called \fdef{feasible} if $L \cap C \neq \emptyset$, otherwise \fdef{infeasible}. It is called \fdef{strongly feasible} if $L \cap \text{Int}(C) \neq \emptyset$, and \fdef{weakly feasible} if it is feasible but not strongly feasible. If $L$ is a linear space (that is, if $0 \in L$), then $0 \in L \cap C$, thus the program is always feasible.
If $L$ is a hyperplane, then the corresponding program is weakly feasible if and only if $L \cap C$ is a proper face of $C$, and in this case $L$ is called a \fdef{supporting hyperplane} for $C$: geometrically, $C$ is contained in one of the two closed half-spaces bounded by $L$, and $L$ is tangent to the boundary of $C$.
\begin{proposition} \label{prop:feas} Let ${\bm{y}} = (y_{\bm{\alpha}})_{{\bm{\alpha}} \in \mathbb{N}^n_d} \in \mathbb{R}^m$, $m=\binom{n+d}{d}$, with $y_{\bm{0}}>0$. Let ${\mathscr{L}}_{\bm{y}} \in (\mathbb{R}[{\bm{x}}]_{\leq d})^\vee$ be the Riesz functional of ${\bm{y}}$, and $L_{\bm{y}} = \{p \in \mathbb{R}[{\bm{x}}]_{\leq d} : {\mathscr{L}}_{\bm{y}}(p)=0\}$. Let $K = S(\g) \subset \mathbb{R}^n$. The following are equivalent:
\begin{enumerate} \item[$A_1$.] ${\bm{y}} \in \mathscr{M}(K)_d$; \item[$A_2$.] The conic program $L_{\bm{y}} \cap \nneg{K}_d$ is weakly feasible; \item[$A_3$.] There exist ${\bm{u}}_1,\ldots,\allowbreak{\bm{u}}_s \in K$, with $s \leq m$, such that ${\bm{y}}$ admits a representing measure $\mu$ with $\text{supp}(\mu) = \{{\bm{u}}_1,\ldots,{\bm{u}}_s\}$ and $L_{\bm{y}} \cap \nneg{K}_d = \{p \in \nneg{K}_d : p({\bm{u}}_1) = 0, \ldots, p({\bm{u}}_s) = 0\}$. \end{enumerate}
Moreover, the following are equivalent and are strong alternatives to $A_1$-$A_2$-$A_3$:
\begin{enumerate} \item[$B_1$.] ${\bm{y}} \not\in \mathscr{M}(K)_d$; \item[$B_2$.] The conic program $L_{\bm{y}} \cap \nneg{K}_d$ is strongly feasible. \end{enumerate} \end{proposition}
\begin{proof} The fact that $A_1$ and $B_1$ are strong alternatives is obvious, and the fact that the program $L_{\bm{y}} \cap \nneg{K}_d$ is always feasible (indeed, $L_{\bm{y}}$ is linear) implies that $A_2$ and $B_2$ are strong alternatives. Hence we only have to prove the equivalence of $A_1,A_2$ and $A_3$. We first prove that $A_1$ is equivalent to $A_3$. For ${\bm{u}} \in K$, denote by $\lambda_{\bm{u}} = ({\bm{u}}^{\bm{\alpha}})_{{\bm{\alpha}} \in \mathbb{N}^n_d} \in \mathscr{M}(K)_d$ the sequence of moments of order $\leq d$ of the Dirac measure $\delta_{{\bm{u}}}$. By \cite[Th.~17.2]{s17}, ${\bm{y}}$ admits a representing measure supported on $K$ if and only if it admits a representing atomic measure $\mu = \sum_{i=1}^s c_i \delta_{{\bm{u}}_i}$, where $s \leq \dim \mathbb{R}[{\bm{x}}]_{\leq d} = \tbinom{n+d}{d}$, for some $c_i >0$ and ${\bm{u}}_1,\ldots,{\bm{u}}_s \in K$. For every $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$, one deduces
$$ {\mathscr{L}}_{\bm{y}}(p) = \int_K p \, \mathrm{d}\mu = \sum_{i=1}^s c_i p({\bm{u}}_i), $$
and thus $p \in L_{\bm{y}}$ if and only if $\sum_{i=1}^s c_i p({\bm{u}}_i) = 0$: then for $p \in \nneg{K}_d$, we conclude that $p$ must vanish on $\{{\bm{u}}_1,\ldots,{\bm{u}}_s\}$. We deduce that $A_1$ and $A_3$ are equivalent. We now prove that $A_1$ and $A_2$ are equivalent. By \Cref{tchakaloff}, we know that $A_1$ holds if and only if ${\mathscr{L}}_{\bm{y}}$ is non-negative over $\nneg{K}_d$: this is the case if and only if the cone $\nneg{K}_d$ is contained in the closed half-space $L_{\bm{y}}^+ = \{p \in \mathbb{R}[{\bm{x}}]_{\leq d} : {\mathscr{L}}_{\bm{y}}(p) \geq 0\}$, and since $0 \in L_{\bm{y}}$, this is equivalent to $L_{\bm{y}}$ being a supporting hyperplane and the program being weakly feasible. \end{proof}
When the conic program in \Cref{prop:feas} is weakly feasible, the set $L_{\bm{y}} \cap \nneg{K}_d$ is an exposed and proper face of $\nneg{K}_d$, defined by vanishing on the finite set given in Item $A_3$.
For more on the facial structure of $\nneg{K}_d$, see \cite{blekherman2015dimensional} and \cite[Sec.~4.4]{BPT}. On the contrary, if the conic program is strongly feasible, we give the following definition. \begin{definition} Let ${\bm{y}} \not\in \mathscr{M}(K)_d$. A polynomial $p \in L_{\bm{y}} \cap \text{Int}(\nneg{K}_d)$ is called an \fdef{un\-re\-pre\-sen\-ta\-bi\-li\-ty certificate} for ${\bm{y}}$ in $K$. \end{definition} \Cref{cor:equiv_B} shows how to compute explicit un\-re\-pre\-sen\-ta\-bi\-li\-ty certificates for \Cref{TMP}. \begin{corollary} \label{cor:equiv_B} Assume that $Q(\g)$ is ar\-chi\-me\-de\-an and that conditions $B_1$-$B_2$ of \Cref{prop:feas} hold. There exists $p \in \text{Int}(\nneg{K}_d)$ such that $p \in 1+Q(\g)$ and ${\mathscr{L}}_{\bm{y}}(p)=0$; moreover, $p^*>1$ can be made arbitrarily large. \end{corollary} \begin{proof} Property $B_2$ of \Cref{prop:feas} ensures that there exists $f \in L_{\bm{y}} \cap \text{Int}(\nneg{K}_d)$, that is, $f$ is positive over $K$ and ${\mathscr{L}}_{\bm{y}}(f)=0$. Let $0 < \delta < f^*$, and let $p = \frac{1}{\delta}f \in \text{Int}(\nneg{K}_d)$. From \Cref{cor:minusone} we get that $p-1 \in Q(\g)$, that is, $p \in 1+Q(\g)$. Moreover ${\mathscr{L}}_{\bm{y}}(p) = \frac{1}{\delta}{\mathscr{L}}_{\bm{y}}(f) = 0$, and $p^* = {f^*}/{\delta}>1$ can be made arbitrarily large by taking $\delta$ small. \end{proof} \begin{remark} If ${\bm{y}} \in \mathbb{Q}^m$, then $p$ in \Cref{cor:equiv_B} can be chosen with rational coefficients. Indeed, the hyperplane $L_{\bm{y}}$ is defined by an equation with rational coefficients, and $L_{\bm{y}} \cap \text{Int}(\nneg{K}_d)$ is a non-empty open subset of $L_{\bm{y}}$, hence it contains a rational point $f$. Choosing $\delta \in \mathbb{Q}$ in the proof of \Cref{cor:equiv_B} is thus sufficient to get a rational certificate. Nevertheless, let us recall from \cite{scheiderer2016sums} that there exist polynomials in $\mathbb{Q}[{\bm{x}}]$ that are sums of squares as elements of $\mathbb{R}[{\bm{x}}]$ but not as elements of $\mathbb{Q}[{\bm{x}}]$. In our context, this means that the rational unrepresentability certificate $p$ might not admit rational certificates of positivity showing that $p \in 1+Q(\g)$ (see also \cite{naldi2021conic} for the existence of rational certificates in conic programming). \end{remark} Any polynomial $p$ as in \Cref{cor:equiv_B} is such that $p-1 \in Q(\g)$. In particular, there exists $D>0$ such that $p-1 \in Q(\g)[D]$, that is, $p \in 1+Q(\g)[D]$. Bounds for $D$ are given, {\it e.g.}, in \cite[Th.~1.7]{baldi2022effective}. Below we give a bound on $D$ as a function of the input of \Cref{algo:naif}; see \Cref{unrepr_deg} and \Cref{bound_unrepr_deg}. The following example shows that the certificate of \Cref{cor:equiv_B} might exist also in non-archimedean contexts. \begin{example} \label{non_archimedean_certificate} The vector ${\bm{y}} = (1,1,0)$ is not a univariate moment vector (indeed, the moment matrix $M_1({\bm{y}}) = \left(\begin{smallmatrix} 1 & 1 \\ 1 & 0\end{smallmatrix}\right)$ is not positive semidefinite). Remark that $\mathbb{R} = S(0)$ and $Q(0) = \Sigma_1$ is not archimedean, but an unrepresentability certificate in the spirit of \Cref{cor:equiv_B} exists. Indeed, ${\mathscr{L}}_{\bm{y}}(p_0+p_1x+p_2x^2) = p_0+p_1$ and the following identity holds for all $p \in L_{\bm{y}}$: $$ p_0-p_0x+p_2x^2 = 1+ \begin{pmatrix} 1 & x \end{pmatrix} \begin{pmatrix} p_0-1 & -\frac{p_0}{2} \\ -\frac{p_0}{2} & p_2 \end{pmatrix} \begin{pmatrix} 1 \\ x \end{pmatrix}.
$$ The identity is a SOS-certificate for $p \in 1+Q(0)$ if and only if the Gram matrix on the right hand side is positive semidefinite. This yields a spectrahedral representation of the set of unrepresentability certificates of ${\bm{y}}$: $$ \left\{ p \in \mathbb{R}[{\bm{x}}]_{\leq 2} : p_1=-p_0, \left(\begin{smallmatrix} p_0-1 & -\frac{p_0}{2} \\ -\frac{p_0}{2} & p_2 \end{smallmatrix} \right) \succeq 0 \right\}. $$ For instance, the polynomial $p = 2-2x+x^2 = 1+(1-x)^2$ belongs to $(1+Q(0)) \cap L_{\bm{y}}$. We remark that the existence of such $p$ implies ${\bm{y}} \not\in \mathscr{M}(\mathbb{R})_{2}$. Indeed, if ${\bm{y}} \in \mathscr{M}(\mathbb{R})_{2}$, then by $A_3$ of \Cref{prop:feas} there would exist a representing measure with finite support, thus one would have ${\bm{y}} \in \mathscr{M}([-R,R])_{2}$ for some $R>0$. Now, since $1 + Q(0) \subset 1+Q(R^2-x^2)$ for every $R>0$, we deduce that $p \in 1+Q(R^2-x^2)$, thus $p \in \text{Int}(\nneg{[-R,R]}_2)$ (according to \Cref{lem:interior}) and thus ${\bm{y}} \not\in \mathscr{M}([-R,R])_{2}$ for every $R>0$ (by $B_2$, \Cref{prop:feas}). \hfill$\Box$ \end{example} The polynomial $p$ in \Cref{non_archimedean_certificate}, together with its positivity certificate $1+(1-x)^2$, allows one to check rigorously the un\-re\-pre\-sen\-ta\-bi\-li\-ty of ${\bm{y}}=(1,1,0)$. Nevertheless, in general the ar\-chi\-me\-de\-ani\-ty hypothesis cannot be dropped, as shown in \Cref{absence_certificate_non_archimedean}. \begin{example} \label{absence_certificate_non_archimedean} Let ${\bm{y}} = (1,0,0,0,1)$ be the vector of \Cref{example_not_closed}, with $\g=0$. The semidefinite program in \Cref{cor:equiv_B} is infeasible: indeed, $Q(0)=\Sigma_{1}$ and it is easy to check that there is no polynomial $p = \sum_{i=0}^4p_ix^i$ satisfying \begin{align*} {\mathscr{L}}_{\bm{y}}(p) = p_0 + p_4 & = 0 \\ p & = 1 + \begin{pmatrix} 1 & x & x^2 \end{pmatrix} X \begin{pmatrix} 1 & x & x^2 \end{pmatrix}^T \\ X & \succeq 0. \end{align*} Indeed, the constraints imply that $p_4 = -p_0 = -(1+X_{11}) < 0$, thus $p$ is negative at infinity; in particular, $p\not\in\nneg{\mathbb{R}}_4$. Nevertheless, for every $R>0$, with $\g=(R-x,R+x)$, \Cref{cor:equiv_B} ensures that there exists $p_R \in 1+Q(R-x,R+x)$ such that ${\mathscr{L}}_{\bm{y}}(p_R)=0$, which certifies that ${\bm{y}} \not\in \mathscr{M}([-R,R])_4$: the polynomial $p_R$ is any solution of the following parametric linear matrix inequality with $\sigma_0,\sigma_1,\sigma_2 \in \Sigma_{1}$: $$ p_0+p_1x+p_2x^2+p_3x^3-p_0x^4 = 1+\sigma_0+\sigma_1(R-x) + \sigma_2(R+x). $$ The fact that $\g=(R-x,R+x)$ is the natural description of $[-R,R]$ (see \cite[Sec.~2.7]{marshall2008positive}) implies that $Q(\g)$ is stable, with stability function $D(d)=d$ (see {\it e.g.} \cite[Prop.~3.3]{s17}), and hence one can assume $\sigma_0 \in \Sigma_{1,4}$ and $\sigma_1,\sigma_2 \in \Sigma_{1,2}$. \hfill$\Box$ \end{example} We close this series of examples with a bivariate one. \begin{example} \label{example_didier} Let $n=2,d=6$ and let ${\bm{y}} = (y_{{\bm{\alpha}}})_{{\bm{\alpha}} \in \mathbb{N}^2_6}$ be the vector in $\mathbb{R}^{28}$ whose non-zero entries are \begin{align*} y_{00} & = 32 & y_{22} & = 30 \\ y_{20} & = y_{02} = 34 & y_{60} & = y_{06} = 128 \\ y_{40} & = y_{04} = 43 & y_{42} & = y_{24} = 28. \end{align*} We claim that there is no nonnegative Borel measure supported on the unit ball $K = \{{\bm{a}}=(a_1,a_2) \in \mathbb{R}^2 : a_1^2+a_2^2 \leq 1\}$ whose moments up to degree $6$ agree with ${\bm{y}}$.
Remark that the semialgebraic set $K = S(1-x_1^2-x_2^2)$ is compact and $Q(1-x_1^2-x_2^2)$ is archimedean; in particular, it is not stable. We give below in \Cref{example_didier_suite} an unrepresentability certificate of small degree certifying that ${\bm{y}} \not\in \mathscr{M}(K)_d$, proving our claim. \hfill$\Box$ \end{example} \subsection{SDP-based algorithm} \label{ssec:naif_algo} The results of \Cref{ssec:conic} yield the following alternatives for \Cref{TMP}. Given ${\bm{y}} = ({\bm{y}}_{\bm{\alpha}})_{{\bm{\alpha}} \in \mathbb{N}^n_d}$ and $K=S(\g)$, according to \Cref{prop:feas}: \begin{itemize} \item either ${\bm{y}} \not\in \mathscr{M}(K)_d$, in which case a certificate of un\-re\-pre\-sen\-ta\-bi\-li\-ty is given by a polynomial $p \in \text{Int}(\nneg{K}_d)$ such that ${\mathscr{L}}_{\bm{y}}(p)=0$ (\Cref{cor:equiv_B}); \item or ${\bm{y}} \in \mathscr{M}(K)_d$, in which case there exists an atomic measure $\mu = \sum_{i=1}^s c_i \delta_{{\bm{u}}_i}$ representing ${\bm{y}}$ (Property $A_3$ in \Cref{prop:feas}). \end{itemize} We describe our main algorithm. \\ \fbox{ \parbox[c]{\textwidth}{ \begin{algorithm} \algoname{certify\_moment} \label{algo:naif} \begin{algorithmic}[1] \Require{ \Statex \textbullet~ $n,d \in \mathbb{N}$ \Statex \textbullet~ A vector ${\bm{y}} \in \mathbb{R}^m$, with $m=\tbinom{n+d}{d}$ \Statex \textbullet~ Polynomials $\g = (g_1,\ldots,g_k) \in \mathbb{R}[{\bm{x}}]^k$ \Statex \textbullet~ A threshold $D \in \mathbb{N}$ } \Ensure{ \Statex \textbullet~ Either $(p,\Sigma)$ where $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$ satisfies \Cref{cor:equiv_B}, and $\Sigma \in \Sigma_n^{k+1}$ is a certificate for $p \in 1+Q(\g)[D]$ \Statex \textbullet~ or a measure $\mu = \sum c_i \delta_{{\bm{u}}_i}$ satisfying $A_3$ in \Cref{prop:feas} } \State $(p,\Sigma) \gets {\bf find\_certificate}(n,d,{\bm{y}},\g,D)$ \label{step_sdp} \InlineIf{$p \neq []$}{\Return $(p,\Sigma)$} \label{if_return} \State \Return ${\bf find\_measure}(n,d,{\bm{y}},\g)$ \label{step:extract:meas} \end{algorithmic} \end{algorithm} }} \vspace{0.3cm} \Cref{algo:naif} depends on two subroutines. The first one, {\bf fi\-nd\_cer\-ti\-fi\-ca\-te} at Step \ref{step_sdp}, returns, if it exists, an un\-re\-pre\-sen\-ta\-bi\-li\-ty certificate $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$ for ${\bm{y}}$, together with a SOS-certificate for $p$ as element of $1+Q(\g)[D]$. \vspace{0.3cm} \fbox{ \parbox[c]{\textwidth}{ \begin{algorithm} \algoname{find\_certificate} \label{algo:find:certificate} \begin{algorithmic}[1] \State $p \gets []$, $\Sigma \gets []$, $g_0 \gets 1$ \State Find $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$ and $\sigma_0,\sigma_1,\ldots,\sigma_k \in \Sigma_n$ such that \begin{itemize} \item ${\mathscr{L}}_{\bm{y}}(p)=0$ \item $p = 1+\sum_{i=0}^k \sigma_i g_i$, $\deg(\sigma_i g_i) \leq D$ \label{algo:find:certificate:sdp} \end{itemize} \State $\Sigma \gets [\sigma_0, \sigma_1, \ldots, \sigma_k]$ \State \Return $(p, \Sigma)$ \end{algorithmic} \end{algorithm} }} \vspace{0.3cm} \Cref{algo:find:certificate} can be performed by solving one (finite-dimensional) SDP feasibility program whose unknowns are $p, \sigma_0, \allowbreak\sigma_1, \ldots, \sigma_k$. First, the constraint ${\mathscr{L}}_{\bm{y}}(p)=0$ is linear in $p$.
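As a concrete instance, the entire feasibility program for the data of \Cref{non_archimedean_certificate} ($n=1$, $d=D=2$, ${\bm{y}}=(1,1,0)$, $\g=0$) fits in a few lines. The sketch below uses the Python library \texttt{cvxpy}; it is our own illustration, and the variable names and solver defaults are not part of the algorithm.
\begin{verbatim}
import cvxpy as cp
import numpy as np

y = np.array([1.0, 1.0, 0.0])  # candidate moments (y_0, y_1, y_2)

p = cp.Variable(3)  # coefficients of p in the basis (1, x, x^2)
X = cp.Variable((2, 2), symmetric=True)  # Gram matrix of sigma_0 in basis (1, x)

constraints = [
    y @ p == 0,           # L_y(p) = 0
    p[0] == 1 + X[0, 0],  # constant coefficient of p = 1 + sigma_0
    p[1] == 2 * X[0, 1],  # coefficient of x
    p[2] == X[1, 1],      # coefficient of x^2
    X >> 0,               # sigma_0 is a sum of squares
]
cp.Problem(cp.Minimize(0), constraints).solve()
print(p.value)  # a feasible certificate; p = 2 - 2x + x^2 is one such point
\end{verbatim}
If the solver reports infeasibility, no certificate exists in degree $D$, and \Cref{algo:naif} proceeds to {\bf find\_measure}. Returning to the general reduction: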
Next, denoting $\delta_i=\lfloor (D-\deg(g_i))/2 \rfloor$, the constraints $\sigma_i \in \Sigma_n$ and $\deg(\sigma_i g_i) \leq D$ are equivalent to the existence of a symmetric matrix $X_i \succeq 0$, of size $\binom{\delta_i+n}{n}$, such that $\sigma_i = v^T X_i v$, where $v$ is a linear basis of $\mathbb{R}[{\bm{x}}]_{\leq \delta_i}$. Finally, the constraint $p = 1+\sum_{i=0}^k \sigma_i g_i$ in Step \ref{algo:find:certificate:sdp} is affine linear in $p$ and in the entries of $X_0, X_1, \ldots, X_k$. We give upper bounds for the value of $D$ in \Cref{bound_unrepr_deg}. The second routine, {\bf find\_measure}, is called if and only if \Cref{algo:naif} reaches Step \ref{step:extract:meas}. It returns an $s$-atomic measure representing the vector ${\bm{y}}$, for some $s \leq \binom{n+d}{d}$. \vspace{0.3cm} \fbox{ \parbox[c]{\textwidth}{ \begin{algorithm} \algoname{find\_measure} \label{algo:extract:measure} \begin{algorithmic}[1] \State $s \gets 1$ \While{$s \leq \binom{n+d}{d}$} \State Find ${{\bm{c}}} \in \mathbb{R}^s$ and $\bm{U} = ({\bm{u}}_1,\ldots,{\bm{u}}_s) \in (\mathbb{R}^n)^s$ s.t. \begin{itemize} \item ${\bm{u}}_1,\ldots,{\bm{u}}_s \in S(\g)$ \item $c_1{\bm{u}}_1^{\bm{\alpha}}+\cdots+c_s{\bm{u}}_s^{\bm{\alpha}} = y_{{\bm{\alpha}}}$ for all ${\bm{\alpha}} \in \mathbb{N}^n_d$ \end{itemize} \InlineIf{solution exists}{\Return $({{\bm{c}}},\bm{U})$} \State $s \gets s+1$ \EndWhile \end{algorithmic} \end{algorithm} }} \vspace{0.3cm} The routine {\bf find\_measure} can be performed by existing algorithms computing one point per connected component of basic semialgebraic sets, applied to the set of elements $({{\bm{c}}},\bm{U}) \in \mathbb{R}^s \times (\mathbb{R}^n)^s$ satisfying the inequalities defining $K$ and the polynomial equations $\sum_i c_i {\bm{u}}_i^{\bm{\alpha}} = y_{\bm{\alpha}}$, ${\bm{\alpha}} \in \mathbb{N}^n_d$. The equations have a multivariate Vandermonde structure. A precise complexity analysis of {\bf find\_measure} is left to future work. We now define an integer function of the input of \Cref{TMP}. \begin{definition} \label{unrepr_deg} Let $K=S(\g)$ and ${\bm{y}} \not\in \mathscr{M}(K)_d$. The \emph{unrepresentability degree} of ${\bm{y}}$ in $K$ is the minimum integer $D=D(n,d,{\bm{y}},\g)$ such that there exists $p \in 1+Q(\g)[D]$ satisfying ${\mathscr{L}}_{\bm{y}}(p)=0$. \end{definition} The unrepresentability degree is well defined, according to \Cref{cor:equiv_B}. We prove the correctness of \Cref{algo:naif}. \begin{theorem}[Correctness] \label{correctness} Let ${\bm{y}} = (y_{\bm{\alpha}})_{{\bm{\alpha}} \in \mathbb{N}^n_d}$, and let $\g=(g_1,\ldots,g_k) \in \mathbb{R}[{\bm{x}}]^k$ be such that $Q(\g)$ is archimedean. There exists $D = D(n,d,{\bm{y}},\g) \in \mathbb{N}$ such that \Cref{algo:naif} with input $(n,d,{\bm{y}},\g,D)$ terminates and is correct. \end{theorem} \begin{proof} Let $K=S(\g)$. Since $Q(\g)$ is archimedean, $K$ is compact and by \Cref{lem:interior}, $\text{Int}(\nneg{K}_d) = \{p \in \mathbb{R}[{\bm{x}}]_{\leq d} : p({\bm{a}}) > 0, \,\forall\,{\bm{a}}\in K\}$. By \Cref{putinar}, if a polynomial is positive on $K$, then it belongs to $Q(\g)$. We deduce that $\text{Int}(\nneg{K}_d) \subset Q(\g) \cap \mathbb{R}[{\bm{x}}]_{\leq d} \subset \nneg{K}_d$. We claim that, for $D \in \mathbb{N}$ large enough, ${\bm{y}} \not\in \mathscr{M}(K)_d$ if and only if {\bf certify\_moment} returns a polynomial $p$ at Step \ref{if_return}, that is, if and only if the semidefinite program at Step \ref{step_sdp} is feasible.
Assume ${\bm{y}} \not\in \mathscr{M}(K)_d$ and let $D$ be the unrepresentability degree of ${\bm{y}}$ in $K$. By \Cref{cor:equiv_B}, there exists a polynomial $p \in 1+Q(\g)[D] \subset \text{Int}(\nneg{K}_d)$ such that ${\mathscr{L}}_{\bm{y}}(p)=0$. In other words, {\bf find\_certificate} returns $(p,\Sigma)$ with $p \neq []$, and hence \Cref{algo:naif} returns its output at Step \ref{if_return}. For the reverse implication, suppose that the semidefinite program at Step \ref{step_sdp} is feasible for some degree $D$. Let $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$ be a solution of such program. Then $p = 1+q$ for some $q \in Q(\g)[D]$; in particular, $p^* \geq 1$, that is, $p$ is positive on $K$. Since ${\mathscr{L}}_{\bm{y}}(p)=0$, and by compactness of $K$, one has $p \in \text{Int}(\nneg{K}_d) \cap L_{\bm{y}}$, and again applying \Cref{prop:feas} one concludes ${\bm{y}}\not\in\mathscr{M}(K)_d$. This proves the claim. Finally, remark that this also shows that ${\bm{y}} \in \mathscr{M}(K)_d$ if and only if {\bf certify\_moment} reaches Step \ref{step:extract:meas}. If this is the case, {\bf find\_measure} computes the support and the weights of an $s$-atomic measure representing ${\bm{y}}$: such a measure exists for some $s \leq \binom{n+d}{d}$, according to \Cref{prop:feas}. \end{proof} \section{Bound on the unrepresentability degree} \label{bound_unrepr_deg} {\it A priori} bounds on the degree of Putinar certificates that only depend on the degree of the polynomial exist for stable quadratic modules. Nevertheless, stability and ar\-chi\-me\-de\-ani\-ty properties are mutually exclusive in dimension $n \geq 2$. In this section we give general bounds for the unrepresentability degree of a vector ${\bm{y}} \not\in \mathscr{M}({K})_d$ in a compact basic semialgebraic set $K = S(\g)$. A key ingredient is the use of existing quantitative analyses of computer algebra algorithms performing quantifier elimination over the reals. We first recall the following bound on the degree of a Putinar representation of a polynomial in $\text{Int}(\nneg{K}_d)$, for an archimedean quadratic module $Q(\g)$, given in \cite{baldi2022effective}. Let $f \in \text{Int}(\nneg{K}_d)$. We set $\epsilon(f) := {f^*}/{\|f\|}$, where $$ f^*=\min_{{\bm{a}} \in K} f({\bm{a}}) \,\,\,\,\,\,\,\,\, \text{ and } \,\,\,\,\,\,\,\,\, \|f\| = \max_{{\bm{a}} \in [-1,1]^{n}}f({\bm{a}}). $$ Under the following assumptions: \begin{itemize} \item $1-\sum_i x_i^2 \in Q(\g)$, \item $\|g_i\| \leq \frac12$ for all $i=1,\ldots,k$, \end{itemize} \cite[Th.~1.7]{baldi2022effective} ensures that there exists a function $\gamma = \gamma(n,\g)$ such that $f \in Q(\g)[D]$ for $D$ of the order of \begin{equation} \label{boundBM} \gamma(n,\g) \, d^{3.5 \L n} \, \epsilon(f)^{-2.5 \L n} \end{equation} where $\mathfrak{c}$ and $\L$ are the \L{}ojasiewicz coefficients defined in~\cite[Def.~2.4]{baldi2022effective}. \\ Now let $(n,d,{\bm{y}},\g)$ be the input of \Cref{algo:naif}. We assume in the whole section that ${\bm{y}} \in \mathbb{Q}^N$, with $N =\binom{n+d}{d}$, that the basic semialgebraic set $K = S(\g) \subset \mathbb{R}^{n}$ is defined by polynomial inequalities $\g=(g_1,\ldots,g_k) \in \mathbb{Q}[{\bm{x}}]^k$ and that $Q(\g)$ is archimedean. Assume ${\bm{y}} \not\in \mathscr{M}(K)_d$, and let $p \in 1+Q(\g)$ satisfy ${\mathscr{L}}_{\bm{y}}(p)=0$ and $p^* = 1+m$ for some arbitrary constant $m>0$. Such an unrepresentability certificate exists by \Cref{cor:equiv_B}.
From \eqref{boundBM}, one has that $p \in 1+Q(\g)[D]$ with $D$ depending on $$ \epsilon(p-1) = \frac{p^*-1}{\|p-1\|} \geq \frac{1}{1+\|p\|} $$ where the last inequality derives from \Cref{cor:equiv_B}, choosing without loss of generality $m=1$. In the following, we provide an upper bound $B$ on $\|p\|$: this will yield a lower bound on $\epsilon(p-1)$, hence an upper bound on $D$. As already said, to do so we consider a formulation, in terms of quantifier elimination over the reals, of the truncated moment problem in the sense of \Cref{prop:feas} and \Cref{cor:equiv_B}, and we use quantitative results on quantifier elimination over the reals from \cite{BPR}. Consider the following formula with quantified variables ${\bm{x}}=(x_1, \allowbreak\ldots, x_n)$ and with parameters the unknown coefficients $\{p_{\bm{\alpha}} : {\bm{\alpha}} \in \mathbb{N}^n_d\}$ of a polynomial $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$, identified with its coefficient vector in $\mathbb{R}^N$: \begin{equation} \label{eq:qe} {\mathscr{L}}_{\bm{y}}(p) = 0 \,\,\, \wedge \,\,\, \left(\forall {\bm{x}} \in \mathbb{R}^{n} \quad \bigwedge_{i=1}^k g_i({\bm{x}})\geq 0 \Rightarrow p({\bm{x}}) > 1 \right). \end{equation} \begin{lemma} \label{lem:formula4} Let $S \subset \mathbb{R}^N$ be the semialgebraic set defined by the quantifier-free formula obtained from \eqref{eq:qe} after eliminating the quantified variables. Then $S$ is an open subset of $L_{\bm{y}}$ with respect to the induced topology. \end{lemma} \begin{proof} For $p \in \mathbb{R}[{\bm{x}}]_{\leq d}$ and $A \subset \mathbb{R}[{\bm{x}}]_{\leq d}$, we set $p+A:=\{p+q : q \in A\}$. Since $Q(\g)$ is archimedean, $K$ is compact and, according to \Cref{lem:interior}, $\text{Int}(\nneg{K}_d)$ consists exactly of polynomials that are positive on $K$. Thus $S = L_{\bm{y}} \cap (1+\text{Int}(\nneg{K}_d))$. Moreover, by \Cref{prop:feas}, we know that $L_{\bm{y}} \cap \text{Int}(\nneg{K}_d)$ is non-empty and open in $L_{\bm{y}}$. Now, since the linear space $L_{\bm{y}}$ intersects the open convex cone $\text{Int}(\nneg{K}_d)$, so does the affine space $-1+L_{\bm{y}}$: indeed, if $\ell \in L_{\bm{y}} \cap \text{Int}(\nneg{K}_d)$, then $\lambda\ell - 1 \in (-1+L_{\bm{y}}) \cap \text{Int}(\nneg{K}_d)$ for $\lambda$ large enough. Thus $T = (-1+L_{\bm{y}}) \cap \text{Int}(\nneg{K}_d)$ is open in $-1+L_{\bm{y}}$ and hence $S = 1+T$ is open in $1+(-1+L_{\bm{y}})=L_{\bm{y}}$. \end{proof} Observe that the constraint ${\mathscr{L}}_{\bm{y}}(p)=0$ in \eqref{eq:qe} is linear in $p$ and ${\bm{y}} \neq 0$. Thus one of the coefficients of $p$, say $p_{{\bm{\alpha}}'}$, can be eliminated, yielding an equivalent formulation of the quantifier elimination problem \eqref{eq:qe}: \begin{equation} \label{eq:qe2} \forall {\bm{x}} \in \mathbb{R}^{n} \quad \bigwedge_{i=1}^k g_i({\bm{x}})\geq 0 \Rightarrow \tilde{p}({\bm{x}})>1 \end{equation} where $\tilde{p}$ is the polynomial obtained when substituting $p_{{\bm{\alpha}}'}$ in $p$ by a linear form in the other coefficients $p_{\bm{\alpha}}$ using ${\mathscr{L}}_{\bm{y}}(p) = 0$. Below we abuse notation and consider the set $S$ defined in \Cref{lem:formula4} and by the previous formulae as embedded in $L_{\bm{y}}$, identified with $\mathbb{R}^{N-1}$. We conclude that the set $S$ has non-empty interior in $\mathbb{R}^{N-1}$. We denote by $$ {d_{\g}} = \max \big\{\deg(g_i) : i=1,\ldots,k\big\} $$ the maximum degree of $\g$, and by $\tau_\g$ the maximum bit size of the coefficients of the $g_i$'s. Note that we can multiply in~\eqref{eq:qe2} the polynomials $g_i$ by the (positive) least common multiple of the denominators of their coefficients to obtain equivalent inequalities but with coefficients in $\mathbb{Z}$.
These least common multiples have height bounded by $\left(\binom{n+d_\g}{n} + 1\right)\tau_\g$. Further, we denote by $\tau_{\bm{y}}$ the maximum bit size of the coefficients of ${\bm{y}}$. As above, the equation ${\mathscr{L}}_{\bm{y}}(p) = 0$ can be rewritten with coefficients in $\mathbb{Z}$ of bit size bounded by $(N+1)\tau_{\bm{y}}$. We set \begin{equation} \label{tau} \tau = \max\left((N+1)\tau_{\bm{y}}, \left(\binom{n+d_\g}{n}+1 \right) \tau_\g\right). \end{equation} Note that $\tau$ is a bound on the integer coefficients of the polynomial constraints in \eqref{eq:qe2} once we have multiplied each of them by the least common multiple of the denominators of their coefficients. Finally, let $\delta = \max(d + 1, d_\g)$. Note that $\delta$ dominates the maximum degree of the polynomial constraints in~\eqref{eq:qe2}: indeed, the polynomial $\tilde{p} \in \mathbb{R}[{\bm{x}},p_{\bm{\alpha}} : {\bm{\alpha}} \in \mathbb{N}^n_d]$ has degree $d$ in ${\bm{x}}$ and degree $1$ with respect to its unknown coefficients. \begin{proposition} \label{prop:bound_eps} There exists $\tilde{p} \in {S}$ with $\|\tilde{p}\| \leq B$ for $B$ in $$ \tau^{O(1)} \left(k(\delta+1)\right)^{O(n(N-1))}. $$ \end{proposition} \begin{proof} We start by providing some quantitative bounds on the quantifier-free formula which defines ${S} \subset \mathbb{R}^{N-1}$, obtained by eliminating the quantified variables ${\bm{x}}=(x_1,\ldots,x_n)$ in \eqref{eq:qe2}. By \cite[Theorem 14.16]{BPR}, such a formula satisfies the following properties: \begin{itemize} \item it can be obtained with polynomials of degree lying in $(\delta+1)^{O(n)}$; \item the bit size of the coefficients of these polynomials lies in $\tau (\delta+1)^{O(n(N-1))}$; \item the formula is a disjunction of $k^{n+1}\delta^{O(n)}$ conjunctions of $k^{n+1}\delta^{O(n)}$ disjunctive formulas of polynomial inequalities involving $k^{n+1}\delta^{O(n)}$ polynomials. \end{itemize} Recall that since $S$ is open, it coincides with its interior $\text{Int}(S)$. We aim at computing one point with rational coordinates in $S$. To do this, we just put these disjunctions in closed form, replace non-strict inequalities by strict inequalities and call an algorithm for computing at least one point with rational coordinates in $S$; see e.g.~\cite[Theorem 4.1.2]{BPR96}. Note that the input to such an algorithm is a system of polynomial strict inequalities in $\mathbb{R}[p_{\bm{\alpha}} : {\bm{\alpha}} \in \mathbb{N}^n_d]$ of degree $(\delta+1)^{O(n)}$ with bit size coefficients in $\tau (\delta+1)^{O(n(N-1))}$. By~\cite[Theorem 4.1.2]{BPR96}, if the semialgebraic set defined by the input is non-empty, then it outputs a point with rational coordinates of bit size bounded by $\tau^{O(1)} \left( k (\delta+1)\right)^{O(n(N-1))}$. All in all, this bounds the bit size of the coefficients of some polynomial $\tilde{p} \in S$ (with rational coefficients). Since the number of these coefficients is $N-1$, the $2$-norm of $\tilde{p}$ still lies in \[\tau^{O(1)} \left( k (\delta+1)\right)^{O(n(N-1))}.\] Finally, observe that $\|\tilde{p}\|=\max_{{\bm{a}} \in [-1,1]^{n}}\tilde{p}({\bm{a}})$ is bounded above by the $2$-norm of $\tilde{p}$. \end{proof} \begin{corollary} \label{bound_pessimistic} Let $\tau_{\bm{y}}$ and $\tau_\g$ bound the bit-size of ${\bm{y}}$ and $\g$, respectively, and let $d_\g$ be a bound on the degrees of $g_1,\ldots,g_k$. Let $\tau$ be as in \eqref{tau} and $\delta=\max\{d+1,d_\g\}$.
If ${\bm{y}} \not\in \mathscr{M}(K)_d$, then the degree of unrepresentability of ${\bm{y}}$ in $K$ is in $$ \gamma(n,\g) \, d^{3.5 \L n} \, \tau^{O(\L n)} \, \left({k}(\delta+1)\right)^{O(\L n^2(N-1))}. $$ \end{corollary} \begin{proof} With the notation introduced in \eqref{boundBM}, by \Cref{cor:equiv_B} there exists $p \in 1+Q(\g)[D]$, and by applying \cite[Th.~1.7]{baldi2022effective} and \Cref{prop:bound_eps}, the degree $D$ is bounded above by \begin{align*} & \gamma(n,\g) \, d^{3.5 \L n} \, \epsilon(p-1)^{-2.5 \L n} \\ & \leq \gamma(n,\g) \, d^{3.5 \L n} \, (1+\|p\|)^{2.5 \L n} \\ & \leq \gamma(n,\g) \, d^{3.5 \L n} \, (\tau^{O(1)} \left({k}(\delta+1)\right)^{O(n(N-1))})^{2.5 \L n} \\ & = \gamma(n,\g) \, d^{3.5 \L n} \, \tau^{O(\L n)} \, \left({k}({\delta}+1)\right)^{O(\L n^2(N-1))}. \end{align*} \end{proof} We conclude with a bivariate example suggesting that the bound of \Cref{bound_pessimistic} is usually quite pessimistic. \begin{example}[{\Cref{example_didier}} continued] \label{example_didier_suite} Let ${\bm{y}} \in \mathbb{R}^{28}$ be the vector defined in \Cref{example_didier}. Consider the polynomial $$ p = 1+\frac89 (1-x_1^2-x_2^2). $$ One checks that $p \in 1+Q(1-x_1^2-x_2^2)$ and ${\mathscr{L}}_{\bm{y}}(p)= y_{00}\left(1+\tfrac89\right)-\tfrac89(y_{20}+y_{02}) = \tfrac{17}{9}\cdot 32 - \tfrac{8}{9}\cdot 68 = 0$. Since $K=S(1-x_1^2-x_2^2)$ is compact, $p$ certifies that the conic program defined in \Cref{prop:feas} is strongly feasible, in other words, that ${\bm{y}} \not\in \mathscr{M}(K)_d$. \hfill$\Box$ \end{example} \section{Conclusions and perspectives} The goal of this work was to undertake a systematic analysis of the computational complexity of the truncated moment problem on semialgebraic sets. Preliminary results concern the existence of algebraic certificates for vectors that are not representable as moments of measures, and upper bounds on the degree of SOS representations of these certificates. Our contribution offers several challenges and research directions in the computational aspects of the truncated moment problem; let us mention a few. One of these is the need for efficient algorithms for classes of polynomial systems with Vandermonde structure, such as those arising in the routine {\bf find\_measure}. A second one is to refine the quantifier-elimination bound given in \Cref{bound_pessimistic}. Unlike the viewpoint of the so-called effective Putinar Positivstellensatz introduced in \cite{baldi2022effective}, for which degree bounds depend on the polynomial itself, it is clear from our analysis that the complexity analysis of the TMP requires uniform degree bounds that only depend on the input of the TMP. One way of getting such uniform bounds is to consider manifestly positive polynomials such as those in $1+Q(\g)$ for compact $K=S(\g)$. A final perspective is to extend our analysis to the more general case of basic closed semialgebraic sets, not necessarily compact (see {\it e.g.} \cite{blekherman2012truncated} and \cite{blekhermanCoreVariety}). \section{Acknowledgments} The authors thank Lorenzo Baldi for discussions concerning the degree bounds of SOS-certificates.
This work is supported by the {European Commission Marie Sklodowska-Curie Innovative Training Network} POEMA (Polynomial Optimization, Efficiency through Moments and Algebra, 2019-2023); by the {Agence Nationale de la Recher\-che (ANR)}, grant agreements {ANR-18-CE33-0011} (SESAME), {ANR-19-CE40-0018} (De Rerum Natura), {ANR-21-CE48-0006-01} (HYPERSPACE); by the joint ANR-{Austrian Science Fund FWF} grant agreement {ANR-19-CE48-0015} (ECARP); by the EOARD-AFOSR grant agreement {FA8665-20-1-7029}.
{ "timestamp": "2023-02-15T02:09:46", "yymm": "2302", "arxiv_id": "2302.06927", "language": "en", "url": "https://arxiv.org/abs/2302.06927", "abstract": "The truncated moment problem consists of determining whether a given finitedimensional vector of real numbers y is obtained by integrating a basis of the vector space of polynomials of bounded degree with respect to a non-negative measure on a given set K of a finite-dimensional Euclidean space. This problem has plenty of applications e.g. in optimization, control theory and statistics. When K is a compact semialgebraic set, the duality between the cone of moments of non-negative measures on K and the cone of non-negative polynomials on K yields an alternative: either y is a moment vector, or y is not a moment vector, in which case there exists a polynomial strictly positive on K making a linear functional depending on y vanish. Such a polynomial is an algebraic certificate of moment unrepresentability. We study the complexity of computing such a certificate using computer algebra algorithms.", "subjects": "Algebraic Geometry (math.AG); Optimization and Control (math.OC)", "title": "Algebraic certificates for the truncated moment problem", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713831229043, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7080104791971331 }
https://arxiv.org/abs/1802.05770
Hyperbolicity of Links in Thickened Surfaces
Menasco showed that a non-split, prime, alternating link that is not a 2-braid is hyperbolic in $S^3$. We prove a similar result for links in closed thickened surfaces $S \times I$. We define a link to be fully alternating if it has an alternating projection from $S\times I$ to $S$ where the interior of every complementary region is an open disk. We show that a prime, fully alternating link in $S\times I$ is hyperbolic. Similar to Menasco, we also give an easy way to determine primeness in $S\times I$. A fully alternating link is prime in $S\times I$ if and only if it is "obviously prime". Furthermore, we extend our result to show that a prime link with fully alternating projection to an essential surface embedded in an orientable, hyperbolic 3-manifold has a hyperbolic complement.
\section{Introduction} \label{sec:intro} A link $L$ in a manifold $M$ is hyperbolic if its complement $M\setminus L$ supports a hyperbolic metric. In~\cite{menasco}, W. Menasco showed that a non-split, prime, alternating link in $S^3$ that is not a 2-braid link is hyperbolic. In this paper, we prove an analogous result for links that are embedded in another family of manifolds, namely the closed thickened surfaces $S\times I$, such that $S$ has nonpositive Euler characteristic. Note that in this paper, we restrict our attention to a closed surface $S$, although we believe that a version of our results holds for a surface with boundary. \begin{definition} Let $L$ be a link in a thickened surface $S \times I$, where $S$ is orientable or not, with the exception of the sphere and the projective plane. A projection of $L$ from $S\times I$ to $S$ is fully alternating if it is alternating on $S$ and the interior of every complementary region is an open disk. We say a link $L$ is fully alternating in $S\times I$ if it has a fully alternating projection from $S\times I$ to $S$. \end{definition} A sphere $F$ in $S \times I$ punctured twice by $L$ is essential in $S \times I \setminus L$ if it does not bound a ball containing an unknotted arc of $L$. \begin{definition} A link $L$ is \emph{prime} in $S\times I$ if there does not exist an essential twice-punctured sphere in $S \times I \setminus L$ such that both punctures are created by $L$. \end{definition} \begin{theorem} \label{thm:main} Let $S$ be a closed orientable surface. A prime fully alternating link $L$ in $S \times I$ is hyperbolic. \end{theorem} In order to prove these results, we need to consider surfaces in 3-manifolds. \begin{definition} A surface properly embedded in a compact manifold is \emph{essential} if it is incompressible and not boundary-parallel. \end{definition} Note that throughout the paper, we refer to surfaces being punctured by $L$ or having boundary on $\partial{N}(L)$. In these cases we think of the surface as being embedded in $S \times I \setminus L$ or $S \times I \setminus \mathring{N}(L)$, as appropriate. We use the fact that $S \times I \setminus L$ is homeomorphic to the interior of $S \times I \setminus \mathring{N}(L)$ as needed. \\ Menasco further showed that a non-split, alternating link $L$ in $S^3$ is prime if and only if, in any reduced alternating diagram, every circle in the projection crossing the link projection transversely twice bounds a disk in the projection plane containing no crossings of the projection~\cite{menasco}. We call this property ``obviously prime''. We extend this idea and determine the primeness of a link $L$ in $S\times I$ by looking at any given alternating projection of $L$ and seeing if it is obviously prime, as defined below. \\ We define a reduced projection to be a projection that does not have any unnecessary crossings, such as the crossing shown in Figure~\ref{fig:reduced}. Note that if we start with a fully alternating projection, its reduction is also fully alternating. \begin{figure}[h] \centering \includegraphics[scale=0.4]{reducedalternating.png} \caption{Crossing that can be reduced.} \label{fig:reduced} \end{figure} \begin{definition} A reduced, fully alternating projection $P$ of a link $L$ in $S \times I$ onto $S$ is \emph{obviously prime} if every disk in the projection surface with boundary intersecting $P$ transversely at two points intersects $P$ in an embedded arc.
\end{definition} \begin{theorem} \label{thm:primeiffobv} Let $L$ be a fully alternating link in $S\times I$ and let $P$ be a reduced fully alternating projection of $L$ to $S$. Then $L$ is prime if and only if $P$ is obviously prime. \end{theorem} To prove Theorems~\ref{thm:main} and \ref{thm:primeiffobv}, we build on techniques first appearing in~\cite{menasco}. The results from~\cite{menasco} rely on a critical theorem of W. Thurston: a compact, connected 3-manifold with torus boundary components has hyperbolic interior if and only if it does not contain any essential spheres, tori, or annuli (see~\cite{thurston}). \\\\ Menasco defines the notion of pairwise compressibility, which we adapt and refer to as meridional compressibility. \begin{definition} Let $L$ be a link in a 3-manifold $M$. Let $F$ be a surface embedded in $M\setminus L$. Then $F$ is \emph{meridionally incompressible} if for every disk $D$ with $D\cap F=\partial D$ which is punctured once by $L$, there exists another disk $D' \subset F \cup L$ with $\partial D = \partial D'$, such that $D'$ is punctured once by $L$, and $D$ can be isotoped to $D'$ fixing its boundary. Otherwise $F$ is \emph{meridionally compressible}. \end{definition} In order to prove Theorem~\ref{thm:main}, we show in Section~\ref{sec:SphereTorus} that for a link $L$ in $S\times I$ that is prime and fully alternating, $S \times I \setminus L$ does not contain any incompressible, meridionally incompressible spheres, tori, or annuli with both boundary components on $\partial(S\times I)$. Further, we show that if an embedded sphere or torus is essential, then it cannot be meridionally compressible. Thus we have eliminated essential spheres, tori, and a subset of the essential annuli in $S \times I \setminus L$. We then show in Section~\ref{sec:annulus} that $S \times I \setminus L$ does not contain any other essential annuli. \\\\ We prove Theorem~\ref{thm:primeiffobv} in Section~\ref{sec:prime}, thereby providing a means of easily identifying primeness of links in $S\times I$ with fully alternating projections. \\\\ In Section~\ref{sec:extensions}, we generalize Theorem \ref{thm:main} to links in a neighborhood of an essential surface embedded in a hyperbolic manifold. Note that an essential surface in a hyperbolic 3-manifold must have negative Euler characteristic. \begin{theorem} \label{thm:threemanifold} Let $M$ be a finite volume hyperbolic 3-manifold, possibly with cusps, such that any boundary is totally geodesic. Let $S$ be an essential, closed surface in $M$ with neighborhood $N$. For any link $L$ that is prime in $N$ with a fully alternating projection on $S$, the manifold $M\setminus L$ is hyperbolic. \end{theorem} In order to prove this, we first extend Theorem~\ref{thm:main} to prime, fully alternating links in any $I$-bundle over any closed surface, orientable or not. We then show that the hyperbolicity of the link complement in the $I$-bundle implies the hyperbolicity of $M \setminus L$. \\\\ Theorem \ref{thm:threemanifold} substantially increases the number of manifolds known to be hyperbolic. See Section~\ref{sec:extensions} for examples. The proof of the Virtual Haken Conjecture by I. Agol (see~\cite{haken}) gives an embedded incompressible surface in some finite cover $M'$ of any compact hyperbolic 3-manifold. For each such manifold $M'$, Theorem~\ref{thm:threemanifold} generates an infinite family of finite volume hyperbolic link complements. \\\\ A second application of Theorem~\ref{thm:main} is to tiling theory.
Specifically, consider a 4-regular tiling of $\mathbb{E}^2$ or $\mathbb{H}^2$ by edge-to-edge polygons such that the symmetry group of the tiling has compact fundamental domain. By adding alternating crossings at the vertices, the tiling becomes an infinite alternating weave. By taking the quotient by an orientation-preserving subgroup of the symmetry group of the tiling, we obtain the projection of a fully alternating link on a closed orientable surface $S$ of positive genus, where the link lives in $S\times I$. This association of tilings to hyperbolic links has appeared previously for certain Euclidean uniform tilings \cite{CKP} and for Euclidean and hyperbolic $k$-uniform tilings in~\cite{tiling}. By applying Theorem~\ref{thm:main}, we can now turn a broader class of tilings, with polygons not necessarily regular, into hyperbolic links in $S\times I$. \\ If the link in $S\times I$ associated to a tiling is hyperbolic, then we can calculate the volume of its complement. We can then assign to the infinite tiling a volume density, given by the volume of the complement of the embedded link in $S \times I$ divided by the number of crossings of the projection of $L$ to $S$ in the fundamental domain. Thus we can apply hyperbolic invariants coming out of hyperbolic manifold theory to tilings. Note that by adding bigon faces to 3-regular tilings, we can turn them into 4-regular tilings to obtain similar results. See~\cite{tiling} for more details and explicit calculations. \\ In recent work using different methods (cf.~\cite{HP}), Howie and Purcell have proved a theorem that is more general than our Theorem \ref{thm:threemanifold}, in the case that the manifold and surface are orientable. They do not require the surface to be incompressible, but rather have a weaker condition on representativity in the manifold. They also investigate the volumes and geometries of the resulting manifolds. See also \cite{CKP}, where hyperbolicity is proved for appropriate alternating links in a thickened torus, which is a case considered in our Theorem \ref{thm:main}. Note that in these two papers, our term ``obviously prime'' corresponds to what they define as ``weakly prime''. \section{Eliminating essential, meridionally incompressible surfaces} \label{sec:SphereTorus} Let $L$ be a fully alternating link in $S \times I$, where $S$ is a closed orientable surface of genus $g\geq 1$. Let $F$ be an essential, meridionally incompressible surface embedded in $S \times I \setminus L$. We consider one of the following cases: \begin{enumerate} \item $F$ is a sphere. \item There are no essential spheres in $S \times I \setminus L$, and $F$ is a torus. \item There are no essential spheres in $S \times I \setminus L$, and $F$ is an annulus whose boundary components both lie on $\partial (S \times I)$. \end{enumerate} Let $S_0=S\times \{\frac{1}{2}\}$. Consider a fully alternating projection of $L$ onto $S_0$. As in~\cite{menasco}, we place a ball at each crossing, which we hereafter refer to as a bubble $B$. We note that as $L$ is prime and fully alternating, the projection is connected. \\ Let the overstrand of $L$ at each crossing run over the top of the bubble, and the understrand run under the bottom, as depicted in Figure~\ref{fig:bubble}. In particular, both the overstrand and the understrand are in $\partial B$. We then define $S_+$ to be $S_0$ where the equatorial disk in each bubble is replaced by the upper hemisphere, denoted $\partial B_+$.
Similarly, define $S_-$ to be $S_0$ where the equatorial disk in each bubble is replaced by the lower hemisphere, denoted $\partial B_-$. \\ \begin{figure}[h] \centering \includegraphics[scale=0.2]{bubble} \caption{A bubble.} \label{fig:bubble} \end{figure} We clean up our surface $F$ relative to the bubbles by pushing $F$ radially away from the central vertical axis of each bubble $B$, as in~\cite{menasco}. As $F$ lives in the complement of $L$, we observe that what remains of $F$ forms saddles inside $B$, the boundaries of which lie on $\partial B$ and avoid the two arcs in $L\cap\partial B$. \\ We are interested in the intersection curves in $F\cap S_+$ and $F \cap S_-$. Hereafter we refer to $S_+$ and $F \cap S_+$, but consider all arguments and constructions as applied both to $F\cap S_+$ and $F \cap S_-$. \begin{lemma} \label{lem:trivialIffTrivial} An intersection curve is trivial on $F$ if and only if it is also trivial on $S_+$. \end{lemma} \begin{proof} Suppose an intersection curve $\alpha$ is trivial on $F$ and nontrivial on $S_+$. We fill the link in to work in $S\times I$. Note that $S_+$, which is incompressible in $S\times I\setminus L$, is also incompressible in $S\times I$. Consider the set of intersection curves contained in the disk bounded by $\alpha$ on $F$ that are nontrivial on $S_+$. Take one that is innermost on $F$; call it $\beta$. Then all intersection curves contained in the disk bounded by $\beta$ on $F$ must be trivial on $S_+$. Of all such trivial intersection curves contained in the disk bounded by $\beta$ on $F$, let $\gamma$ be the one that is innermost on $F$. Then the disk bounded by $\gamma$ on $F$ does not contain any other intersection curves, so we can isotope $F$ to remove this intersection curve. We can iterate this process for all trivial intersection curves contained in the disk bounded by $\beta$ on $F$. Then $\beta$ would bound a compression disk of $S_+$ on $F$, a contradiction. \\\\ Now suppose an intersection curve $\alpha'$ is trivial on $S_+$. In case (1), $\alpha'$ is clearly trivial on $F$. Consider cases (2) and (3), and note that by hypothesis $S \times I \setminus L$ does not contain any essential spheres. Assume for contradiction that $\alpha'$ is nontrivial on $F$. Consider the set of intersection curves contained in the disk bounded by $\alpha'$ on $S_+$ that are nontrivial on $F$. Take the one that is innermost on $S_+$; call it $\beta'$. Then all intersection curves contained in the disk bounded by $\beta'$ on $S_+$ must be trivial on $F$. Of all such trivial intersection curves contained in the disk bounded by $\beta'$ on $S_+$, let $\gamma'$ be the one that is innermost on $S_+$. We can consider the disks $D_1$ and $D_2$ bounded by $\gamma'$ on $F$ and $S_+$ respectively. Identify $D_1$ and $D_2$ along $\gamma'$ to get a sphere in $S \times I \setminus L$. Given that the boundaries $\partial (S \times I)$ are outside the sphere, and that there are no essential spheres in this case, this sphere must bound a ball. Therefore we can isotope the disk $D_1$ to the disk $D_2$, and push it slightly past $S_+$ to remove $\gamma'$. We can iterate this process for all trivial intersection curves contained in the disk bounded by $\beta'$ on $S_+$. Then $\beta'$ would bound a compression disk of $F$ on $S_+$, a contradiction. \end{proof} Thus we say that a component of $F\cap S_+$ is trivial if it is trivial on either $F$ or $S_+$. 
\\\\ We associate to each embedding of $F$ an ordered pair $(s,i)$, in which $s$ is the number of saddles in $F$ and $i$ is the number of intersection curves in $F\cap S_+$. Pick $F$ to be the embedding, within its isotopy class, whose ordered pair $(s,i)$ is minimal under lexicographical ordering. Note that as $F$ passes through a bubble $B$, the saddle corresponds to two intersection curves on $S_+$ that run parallel to the overstrand of $B$. We think of the overstrand as dividing $\partial B$ into two sides, and are interested in which side an intersection curve hits. \begin{lemma} \label{lem:isotopyBubbleCleanUp} There exists an isotopy of $F$ such that the following are true: \begin{enumerate} \item[(i)] The set of intersection curves $F \cap S_+$ is nonempty. \item[(ii)] Every intersection curve in $F \cap S_+$ intersects at least one bubble. \item[(iii)] Let $\alpha$ be an arc of an intersection curve in $F \cap S_+$ that begins and ends on the same side of a bubble $B$. Then $\alpha \cup \partial B_+$ must contain a nontrivial simple closed curve on $S_+$. \end{enumerate} \end{lemma} \begin{proof} \text{} \begin{enumerate}[label=(\roman*), wide] \item[(i)] We know that $F\cap S_+$ is nonempty, for otherwise $F$ would be either boundary-parallel or compressible. \\ \item[(iii)] Note that an intersection arc $\alpha$ that satisfies the hypotheses of condition (iii) must correspond to two saddles. If there is an additional intersection arc corresponding to a pair of saddles between these two, we consider the innermost pair of saddles. Using the technique in~\cite{adams toroidal} as in Figure~\ref{fig:bubbleSameSideTwice}, we isotope $F$ by pulling a neighborhood of an arc on $F$ to the bubble to form a band connecting the pair of saddles. If $\alpha \cup \partial B_+$ does not contain a simple closed curve that is nontrivial on $S_+$, we can pull the two saddles and the band through the bubble and out the other side. Note that we have then decreased the number of saddles of $F$, contradicting the minimality of $(s,i)$. \begin{figure}[h] \centering \includegraphics[width=4cm, height=5cm]{adams} \caption{Eliminate curve crossing a bubble twice on the same side~\cite{adams toroidal}.} \label{fig:bubbleSameSideTwice} \end{figure} \item[(ii)] We additionally show that there are no intersection curves that do not hit any bubbles. Assume for contradiction that such a curve $\alpha$ exists. \\\\ Suppose $\alpha$ is trivial. Because the projection of $L$ is fully alternating, it is connected. Since $\alpha$ does not pass through any bubbles, there can be no link components contained in the disk bounded by $\alpha$ on $S_+$. Of the set of intersection curves contained in the disk bounded by $\alpha$ on $S_+$, choose the one that is innermost; call it $\beta$. Take the union of the disks $D_1$ and $D_2$ bounded by $\beta$ on $F$ and $S_+$ respectively to obtain a sphere in $S \times I \setminus L$, which must bound a ball. Therefore we can isotope $D_1$ to $D_2$, and push it slightly past $S_+$ to remove $\beta$. We have thus reduced the number of intersection curves without affecting the number of saddles, contradicting that $F$ has the least number of intersection curves among all isotopies with the same number of saddles. \\\\ Now suppose $\alpha$ is nontrivial.
Then, since the projection of $L$ onto $S_0$ is fully alternating, and as a nontrivial intersection curve on $S_0$ does not bound a disk, $\alpha$ must pass through at least two disk regions on $S_0$ and therefore pass through a bubble, a contradiction. This shows that every intersection curve must intersect at least one bubble. \end{enumerate} \end{proof} Note that for the pairing $(s,i)$ defined above, we now know that both $s$ and $i$ are nonzero by parts (ii) and (i) respectively. \\\\ In order to contradict the existence of $F$ in $S\times I\setminus L$, we need an intersection curve that is trivial on $S_+$. By Lemma~\ref{lem:trivialIffTrivial}, it suffices to show that there exists a curve that is trivial on $F$. \begin{lemma} \label{lem:existsTrivialCurve} There exists an intersection curve that is trivial on $F$. \end{lemma} \begin{proof} First note that Lemma~\ref{lem:existsTrivialCurve} holds in case (1), as all curves on a sphere are trivial and $F \cap S_+$ is nonempty by Lemma~\ref{lem:isotopyBubbleCleanUp}~(i). Now consider cases (2) and (3), and suppose $F$ is a torus or an annulus. \\\\ Consider $F$ with all intersection curves both in $F\cap S_+$ and $F\cap S_-$ projected onto it. All saddles correspond to quadrilaterals, which we collapse to vertices to obtain a 4-regular graph on $F$, as in Figure~\ref{fig:EulerCharacteristic}. Notably, for any two adjacent faces in this graph, one corresponds to a region which is contained strictly above $S_+$, and the other to a region contained strictly below $S_-$.\\\\ \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[scale=0.1]{4RegGraphLargeFont.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[scale=0.1]{4RegGraphwVertices.png} \end{subfigure} \caption{Torus $F$ with projection of intersection curves from $F\cap S_+$ and $F\cap S_-$. Annular regions denoted $A$, disk regions denoted $D$; regions contained above $S_+$ denoted $+$, regions contained below $S_-$ denoted $-$. Quadrilateral saddles (left) collapsed to vertices (right). } \label{fig:EulerCharacteristic} \end{figure} We would like to show that at least one of the complementary regions of this graph is a disk, whose boundary corresponds to a trivial intersection curve of either $F \cap S_+$ or $F \cap S_-$. If none of the faces formed by the $4$-regular graph on $F$ are disks, then all faces must have nonpositive Euler characteristic contribution. We have: $$\mathcal{V}=\text{number of saddles}>0$$ $$\mathcal{E}=\frac{4\mathcal{V}}{2}=2\mathcal{V}$$ $$\chi(F) = \mathcal{V} - \mathcal{E} +\mathcal{F} \leq \mathcal{V} - \mathcal{E} = \mathcal{V}-2\mathcal{V}<0$$ But the Euler characteristic of $F$ (a torus or an annulus) is $0$, a contradiction. Thus there must be a disk region in the complement of the graph on $F$, which is bounded by a trivial intersection curve $\alpha$ in either $F\cap S_+$ or $F\cap S_-$. For convenience, we assume it is in $F\cap S_+$. \end{proof} Now we show that no trivial curve $\alpha$ intersects a bubble $B$ on both sides in such a way that the disk on $S_+$ bounded by $\alpha$ contains $L\cap \partial B_+$. \begin{lemma} \label{lem:noCurveBubbleBothSides} Let $\alpha$ be an arc of an intersection curve in $F \cap S_+$ that begins and ends on different sides of a bubble $B$. Then $\alpha \cup \partial B_+$ must contain a nontrivial simple closed curve on $S_+$. \end{lemma} \begin{proof} Of all arcs of $F \cap S_+$ that satisfy all the hypotheses, choose $\alpha$ to be the one that intersects $\partial B_+$ closest to the overstrand of $B$.
Assume $\alpha \cup \partial B_+$ does not contain a simple closed curve that is nontrivial on $S_+$. Any other intersection curve $\alpha'$ between the two arcs formed by $\alpha$ inside $\partial B_+$ must pass through $\partial B_+$ at least twice on the same side of $B$. Clearly $\alpha' \cup \partial B_+$ also does not contain a simple closed curve that is nontrivial on $S_+$, so by Lemma~\ref{lem:isotopyBubbleCleanUp}~(iii), no such $\alpha'$ exists. It follows that $\alpha$ is the innermost intersection curve inside $\partial B_+$. Then $\alpha$ corresponds to one saddle $\sigma$ in $B$. \\\\ Now we again use the idea of the arc $\mu\subset F$ running along the intersection curve $\alpha$ as in the proof of Lemma~\ref{lem:isotopyBubbleCleanUp}, and isotope it together with $F$ towards the bubble $B$. This allows us to find a once-punctured disk with boundary in $\sigma\cup \mu$ on $F$. If there are no other intersection curves in the region bounded by $\sigma\cup \mu$, we can pull $\mu$ towards the bubble without obstruction so that it sits over $B$. \\\\ The only case left to consider is when there exist other intersection curves in the region bounded by $\sigma\cup \mu$. For each such curve $\beta$, since $\alpha$ is trivial, $\beta$ is also trivial. Then all intersection curves in the region bounded by $\sigma\cup \mu$ are trivial and bound disks on $F$ to at least one side. Therefore, we can always isotope $\mu$ with $F$ along these disks so that $\mu$ sits over $B$. As $\alpha$ is the innermost intersection curve in $B$, there exists a circle in $\mu\cup\sigma$ wrapping once around the overstrand of $B$, thereby bounding a meridional compression disk for $F$, a contradiction. Therefore, no intersection curve satisfying the hypotheses can intersect a bubble on both sides. \end{proof} \begin{lemma} \label{lem:noMerIncSurface} Let $L$ be a fully alternating link in $S \times I$ and let $F$ in $S\times I\setminus L$ be a sphere, a torus, or an annulus with boundaries strictly on $\partial(S\times I)$. Then $F$ cannot be essential and meridionally incompressible. \end{lemma} \begin{proof} By Lemma~\ref{lem:existsTrivialCurve}, we know that there exists a trivial intersection curve. Consider a trivial intersection curve $\alpha$ that is innermost on $S_+$, bounding a disk $D$ on $S_+$. By Lemma~\ref{lem:isotopyBubbleCleanUp}, we know that $\alpha$ intersects at least one bubble. \\\\ As the projection is fully alternating, any time $\alpha$ enters a region through a bubble such that $\alpha$ is to the right (similarly left) of the overstrand, $\alpha$ must leave the region through a bubble such that it is to the left (similarly right) of the overstrand, as in Figure~\ref{fig:punchline}. \\ \begin{figure}[h] \centering \includegraphics[scale=0.3]{punchline.png} \caption{Innermost trivial curve $\alpha$.} \label{fig:punchline} \end{figure} Suppose that the disk bounded by $\alpha$ on $S_+$ contains the overstrand of a bubble that it intersects. Then because there are no other curves of intersection in $D$, $\alpha$ must hit the other side of that bubble, contradicting Lemma~\ref{lem:noCurveBubbleBothSides}. \\\\ Hence the disk bounded by $\alpha$ on $S_+$ does not contain the overstrand of any bubble that it intersects.
It follows that for some bubble $B$, the curve $\alpha$ passes through one side of $B$ such that the overstrand is on the left (similarly right), and then passes through the same side of $B$ such that the overstrand is on the right (similarly left), with the disk side of $\alpha$ being between these two passes. By Lemma~\ref{lem:isotopyBubbleCleanUp}~(iii), $\partial B_+ \cup \alpha$ contains a simple closed curve that is nontrivial on $S_+$, contradicting that $\alpha$ is trivial on $S_+$. \\ \begin{figure}[h] \centering \includegraphics[scale=0.3]{Curvebothsides.png} \caption{Two cases when $\partial B_+ \cup \alpha$ contains a simple closed curve that is nontrivial on $S_+$.} \label{fig:curveBothSides} \end{figure} \end{proof} Finally, in the case that $L$ is prime, we consider essential spheres and tori in $S \times I \setminus L$ that are meridionally compressible. This will eliminate all essential spheres and tori. \begin{lemma} \label{lem:noMerComSphereTorus} Let $L$ be a prime, fully alternating link in $S \times I$ and let $F$ be a sphere, a torus or an annulus with boundaries strictly on $\partial(S \times I)$. Then $F$ cannot be essential and meridionally compressible. \end{lemma} \begin{proof} It is clear that there are no meridionally compressible spheres. Let $F$ be an incompressible, meridionally compressible torus that is not boundary-parallel. Then a meridional compression yields a twice-punctured sphere $F'$ in $S \times I \setminus L$ that bounds a ball containing a nontrivial portion of $L$. This contradicts the assumption that $L$ is prime in $S \times I$. \\\\ If $F$ is an essential meridionally compressible annulus with boundary on $\partial(S \times I)$, apply the meridional compression. This results in two once-punctured disks, each with boundary on $\partial(S\times I)$. But by incompressibility of $S$ in $S \times I$, the boundary of each disk must be trivial on $S$. Hence, we can construct a sphere punctured once by $L$, a contradiction. \end{proof} \section{Eliminating essential annuli} \label{sec:annulus} In this section we eliminate essential annuli, which will allow us to prove Theorem~\ref{thm:main}. Let $A$ be an essential annulus in $M=S\times I\setminus \mathring{N}(L)$. As our annulus $A$ has two boundary components, exactly one of the following holds: \begin{enumerate} \item $A$ has boundary strictly on $\partial (S\times I)$. \item $A$ has boundary strictly on $\partial N(L)$. \item $A$ has boundary on both $\partial (S\times I)$ and $\partial N(L)$. \end{enumerate} \medskip By Lemmas \ref{lem:noMerIncSurface} and \ref{lem:noMerComSphereTorus}, Case (1) has already been eliminated. \begin{lemma} \label{lem:noAnnulusOnL} There are no essential annuli in $M$ with both boundary components on $\partial N(L)$. \end{lemma} \begin{proof} Assume that $A$ is an essential annulus with boundary strictly on $\partial N(L)$. Define $Q$ to be the neighborhood of the union of $A$ with the torus or tori of $\partial N(L)$ on which the annulus has boundary components. As in the proof of Lemma 1.16 in~\cite{hatcher}, one of the following must hold: \begin{enumerate} \itemsep0em \item[(a)] $A$ has boundary on two different tori, and $Q$ is the product of a twice-punctured disk with $S^1$. \item[(b)] $A$ has both boundary components on one torus, and $Q$ is the product of a twice-punctured disk with $S^1$. \item[(c)] $A$ has both boundary components on one torus, and $Q$ is a circle bundle over a punctured Möbius band.
\end{enumerate} Observe that cases (a) and (b) have three torus boundaries and case (c) has two torus boundaries. We have previously shown in Lemmas~\ref{lem:noMerIncSurface} and~\ref{lem:noMerComSphereTorus} that our manifold contains no essential tori; thus the tori of $\partial Q\setminus \partial M$ must be either boundary-parallel or compressible. In all cases, we show that constructing $M$ from $Q$ considerably restricts the structure that $M$ can take, from which we derive a contradiction. \\\\ There are at most two tori among the boundary components of $\partial Q\setminus \partial M$. If a torus $T$ of $\partial Q\setminus \partial M$ is boundary-parallel, it follows that $T$ has an external collar $T\times I$, which can be glued to $Q$ via $T$. We note this does not change $Q$ topologically. \\\\ If one of the tori $T$ of $\partial Q\setminus \partial M$ is compressible, consider a compression disk $D$ for $T$. A compression of $T$ along $D$ yields a sphere $R$, and as we have shown that $M$ does not contain essential spheres by Lemma~\ref{lem:noMerIncSurface}, the sphere $R$ must bound a ball $B$. Therefore, when we reverse the surgery on $R$ to recover $T$, the ball $B$ becomes a solid torus bounded by $T$. \\\\ Hence $M$ comes from $Q$ by gluing either a $T\times I$ or a solid torus to each component of $\partial Q\setminus \partial M$. However, gluing a solid torus to any boundary component of $Q$ lowers its number of boundary components below three. Note $M$ must have at least three boundary components: two from $S\times I$, plus one for each component of $L$. Therefore, only copies of $T\times I$ are glued onto $Q$, so $M$ is homeomorphic to $Q$. \\\\ Now, because $M$ as constructed from $Q$ has only torus boundary components, the only possibility is for $S$ to have genus $g=1$. Note that in case (c), the entire manifold only has two boundary components, thereby eliminating case (c) from consideration. \\\\ We observe that cases (a) and (b) are equivalent up to homeomorphism. Thus $M$ is a twice-punctured disk crossed with $S^1$: a manifold with three boundary components. These three boundary components correspond to the boundaries of $S\times I\setminus \mathring{N}(L)$, implying the link $L$ has only one component, corresponding to one of the punctures crossed with $S^1$. Observe that $L$ admits a projection without crossings onto either of the boundaries of $S\times I$. Thus the crossing number of $L$ in $S\times I$ is $0$. However, the crossing number $c(L)$ of an alternating knot on a given surface $S$ in $S\times I$ is realized by its reduced alternating projection (see~\cite{crossing number}), which by assumption has at least one crossing, a contradiction. \end{proof} \begin{lemma} \label{lem:noAnnulusOnLS} There are no essential annuli in $M$ with one boundary component on $\partial N(L)$ and one boundary component on $\partial (S\times I)$. \end{lemma} \begin{proof} Assume that $A$ is an essential annulus with one boundary component on a torus of $\partial N(L)$ and one boundary component on $\partial(S\times I)$. An argument similar to the proof of Lemma~\ref{lem:noAnnulusOnL} shows that $S$ has genus at least 2. \\\\ Reflect $M$ through its non-torus boundaries to get a second copy $M^R$ of $M$, and let $M'$ be the double of $M$ given by $M \cup_{\partial M}M^R$ with corresponding non-torus boundary components identified. We note that $M'= (S\times S^1)\setminus (\mathring{N}(L)\cup\mathring{N}(L^R))$ and that $\partial M'$ consists of the boundaries of the neighborhoods of $L$ and $L^R$.
Note that $A$ and its corresponding annulus $A^R$ are now glued along one of their boundaries to form $A' = A \cup A^R$, an essential annulus in $M'$ with its two boundary components on $\partial N(L)$ and $\partial N(L^R)$, respectively. \\\\ We first note that we can extend the fact that there are no essential spheres or tori in $S \times I \setminus L$ to $M'$. Indeed, any such sphere or torus could not be entirely contained in either $M$ or $M^R$ by Lemmas \ref{lem:noMerIncSurface} and \ref{lem:noMerComSphereTorus}. By the incompressibility of $S \times \partial I$, this eliminates spheres, and any such torus must intersect $M$ and $M^R$ in essential annuli with their boundaries on $S \times \partial I$. But these were also eliminated in Lemmas \ref{lem:noMerIncSurface} and \ref{lem:noMerComSphereTorus}. \\\\ Now, we prove the lemma when $L$ consists of $k$ components, $k\geq 2$. We observe that our new manifold $M'$ has $2k\geq 4$ boundary components. We have already shown that a manifold with no essential tori, which contains an essential annulus meeting only torus components of the boundary, has at most three boundary components, a contradiction. \\\\ Now let $L$ have one component. Then the annulus $A'$ has boundary on $\partial N(L)$ and $\partial N(L^R)$, which corresponds to case (a) of the proof of Lemma~\ref{lem:noAnnulusOnL}. Let $Q$ be the neighborhood of the annulus $A'$ and the tori $\partial N(L)$ and $\partial N(L^R)$. Then the outer torus of $Q$ cannot be essential, so it must be either compressible or boundary-parallel. Note that if it is boundary-parallel, the entire manifold $M'$ must have three boundary components. But $M'$ was obtained by doubling $M$, and hence must have an even number of boundary components, a contradiction. \\\\ If the outer torus of $Q$ is compressible, then $M'$ is obtained by gluing a solid torus to the outside of $Q$ as in the proof of Lemma~\ref{lem:noAnnulusOnL}. We know that $Q$ is a twice-punctured disk crossed with $S^1$, which we can think of as $T \times I$ minus a neighborhood of a nontrivial simple closed curve lying in $T\times\{\frac{1}{2}\}$. Gluing a solid torus to $T \times I$ along one of its boundaries gives us a solid torus, so the result is equivalent to a solid torus with a neighborhood of a $(p,q)$-curve removed, denoted $V_{p,q}$. \\ We wish to distinguish between $V_{p,q}$ and $M'$ to derive a contradiction. For $V_{p,q}$, we first compute the fundamental group. Cutting $V_{p,q}$ open along an annulus with both boundaries on the neighborhood of the $(p,q)$-curve, we get a solid torus with generator $\gamma$ and a $T\times I$ with generators $\alpha$ and $\beta$. We notice that $\gamma$ wraps $q$ times around the solid torus inside it, as in Figure~\ref{fig:pqthing}. \\ \begin{figure}[h] \centering \includegraphics[width=4.5cm, height = 4.125cm]{pqcurve} \caption{A cross-section of a solid torus minus the $(p,q)$-curve. Cutting along the annulus along the $(p,q)$-curve yields a solid torus glued to $T\times I$ along the $(p,q)$-annulus.} \label{fig:pqthing} \end{figure} Then $\pi_1(V_{p,q})=\langle \alpha,\beta,\gamma\ |\ \gamma^q=\alpha,\ \alpha\beta=\beta\alpha\rangle$; abelianizing, the relation $\gamma^q=\alpha$ eliminates the generator $\alpha$, so $H_1(V_{p,q})\cong\mathbb{Z}^2$. On the other hand, we know that $M'$ contains a copy of $S$ as an incompressible genus $g$ surface because $S\times S^1$ does. This implies that $H_1(M')$ contains a $\mathbb{Z}^{2g}$ subgroup, and since $g\geq 2$, we have a contradiction. \end{proof} Thus our manifold can contain no essential annuli.
Finally, we prove Theorem~\ref{thm:main}, i.e., that a prime, fully alternating link in $S\times I$ is hyperbolic. \begin{proof}[Proof of Theorem~\ref{thm:main}] We check the conditions in Thurston's Hyperbolization Theorem~\cite{thurston}. By Lemmas~\ref{lem:noMerIncSurface} and~\ref{lem:noMerComSphereTorus}, there are no essential spheres or tori in $S\times I\setminus \mathring{N}(L)$. By Lemmas~\ref{lem:noMerIncSurface},~\ref{lem:noMerComSphereTorus},~\ref{lem:noAnnulusOnL} and~\ref{lem:noAnnulusOnLS}, there are no essential annuli in $S\times I\setminus \mathring{N}(L)$. Therefore, a prime fully alternating link $L$ in $S\times I$ is hyperbolic. \end{proof} \section{Primeness of alternating links in $S \times I$} \label{sec:prime} In this section we prove Theorem~\ref{thm:primeiffobv}, i.e., that a link in $S\times I$ with a reduced fully alternating projection on $S$ is prime if and only if the projection is obviously prime. \\ Suppose a reduced, fully alternating projection $P$ of $L$ on $S$ is not obviously prime. Then we show that $L$ is not prime in $S\times I$. By definition, we can find a twice-punctured circle in $P$, which bounds a disk containing at least one crossing of $P$. A neighborhood of the disk is a ball $W$ in $S\times I$ containing an alternating portion of $L$. Since the least number of crossings of an alternating link occurs in any reduced alternating projection \cite{kauffman, murasugi, thistlethwaite}, $W$ contains a nontrivial portion of $L$. Since the boundary components of $S \times I$ lie outside $W$, we know that $\partial W$ must be essential, which shows that $L$ is not prime. \\\\ Suppose now that $L$ is a non-prime, fully alternating link in $S\times I$ with a reduced fully alternating projection $P$ on $S$. We want to show that the projection $P$ is not obviously prime. Let $F$ be an essential twice-punctured sphere in $S \times I \setminus L$. We can assume that $F$ is meridionally incompressible, since otherwise a meridional compression would generate two twice-punctured spheres, and we can iterate this process until we obtain an essential, meridionally incompressible twice-punctured sphere. \\\\ As in Section~\ref{sec:SphereTorus}, we project $L$ onto $S$, place bubbles at each crossing, and define the surfaces $S_+$, $S_-$ and $S_0$. We consider the intersection curves $F \cap S_+$ as before, although now we allow two places where intersection curves cross the link, created by the punctures. \begin{lemma} \label{lem:primeLemmas} Let $F$ be an essential, meridionally incompressible twice-punctured sphere in $S \times I \setminus L$. Then there exists an isotopy of $F$ such that the following are true: \begin{enumerate}[label=(\roman*)] \item Every intersection curve of $F \cap S_+$ is trivial on both $F$ and $S_+$. \item Every intersection curve in $F \cap S_+$ intersects a bubble or the link $L$ at least once. \item There are no easily removable intersections of $F$ with bubbles as in Figure~\ref{fig:easilyRemovable}. \end{enumerate} Further, let $\alpha$ be an arc of an intersection curve in $F \cap S_+$, and let $B$ be a bubble that $\alpha$ passes through, such that $\partial B_+ \cup \alpha$ does not contain a simple closed curve that is nontrivial on $S_+$. \begin{enumerate}[resume,label=(\roman*)] \item For any such pair $B$ and $\alpha$, the arc $\alpha$ does not pass through the same side of $\partial B$ more than once. \item For any such pair $B$ and $\alpha$, the arc $\alpha$ does not pass through both sides of $\partial B$.
\end{enumerate} \end{lemma} \begin{figure}[h] \includegraphics[scale=0.3]{2puncsphereremoval.png} \centering \caption{Easily removable intersection of $F$ and a bubble.} \label{fig:easilyRemovable} \end{figure} \begin{proof} \text{} \begin{enumerate}[label=(\roman*), wide] \item By the argument from the proof of Lemma~\ref{lem:trivialIffTrivial}, observe that an intersection curve is trivial on $F$ if and only if it is also trivial on $S_+$, where we define a curve to be nontrivial on $F$ if it separates the two punctures. Note that the argument for cases (2) and (3) applies because there are no essential spheres in $S \times I \setminus L$ by Lemma~\ref{lem:noMerIncSurface}. \\\\ Moreover, we claim that any intersection curve on $F\cap S_+$ is trivial on $S_+$. Suppose some intersection curve $\alpha$ on $F\cap S_+$ is nontrivial on $S_+$. Since $F$ is a twice-punctured sphere, $\alpha$ bounds either a disk or a once-punctured disk on $F$ to one side. Fill $L$ back into $S\times I\setminus L$ and consider the closure $D$ of the disk or the once-punctured disk bounded by $\alpha$. Then $D$ is a compression disk of $S_+$ in $S\times I$, contradicting that $S_+$ is incompressible in $S\times I$. Thus all intersection curves are trivial on $S_+$. Note that this argument also applies to intersection curves $F \cap S'$, where $S'$ is defined to be $S_0$ with the equatorial disk in each bubble replaced by either its upper or its lower hemisphere. \\ \item As in the proof of Lemma~\ref{lem:isotopyBubbleCleanUp}, we associate an ordered pair $(s,i)$ to each embedding of $F$ prior to isotopy, where $s$ is the number of saddles in $F$ and $i$ is the number of intersection curves in $F \cap S_+$. Pick $F$ to be an embedding whose ordered pair $(s,i)$ is minimal under lexicographic ordering. It then follows from the same argument that we can remove intersection curves that do not hit any bubbles and are not punctured by a link component. \\ \item We can assume that there are no easily removable intersections of $F$ with bubbles as in Figure~\ref{fig:easilyRemovable}, for otherwise we could isotope $F$ to eliminate a saddle. \\ \item Again it follows from the same argument that for any pair $B$ and $\alpha$, if $\partial B_+ \cup \alpha$ does not contain a simple closed curve that is nontrivial on $S_+$, then $\alpha$ does not intersect the same side of $\partial B$ more than once. \\ \item As in the proof of Lemma~\ref{lem:noCurveBubbleBothSides}, for any such pair $B$ and $\alpha$, if $\partial B_+ \cup \alpha$ does not contain a simple closed curve that is nontrivial on $S_+$, we get a once-punctured disk $D$ with $D \cap F = \partial D$. Since we have eliminated easily removable intersections of $F$ with the bubbles as in Figure~\ref{fig:easilyRemovable}, any such once-punctured disk $D$ must give a meridional compression of $F$, a contradiction. Therefore $\alpha$ does not intersect both sides of $\partial B$. \end{enumerate} \end{proof} Now we consider each intersection curve $\alpha$ of $F \cap S_+$. Note that $\alpha$ may hit bubbles and it can hit $L$ at most twice (such intersections correspond to the punctures of $F$). \begin{lemma} \label{lem:atLeastTwice} Every intersection curve of $F \cap S_+$ must intersect $L$ at least twice. \end{lemma} \begin{proof} First observe that in a fully alternating projection, if $\alpha$ intersects $L$, it enters an adjacent region.
If $\alpha$ entered the initial region through a bubble with the overstrand on the right (respectively left), it follows that when $\alpha$ leaves this adjacent region through a bubble, the overstrand is again on the right (respectively left). Hence the sum of the number of intersections of $\alpha$ with $L$ and of $\alpha$ with bubbles must be even. \\\\ Suppose $\alpha$ does not hit any bubbles. Then $\alpha$ must intersect $L$, since we eliminated intersection curves that do not hit bubbles or $L$. So, $\alpha$ must intersect $L$ an even number of times, which implies $\alpha$ must intersect $L$ at least twice. \\\\ Now suppose $\alpha$ only intersects a single bubble $B$ and does so once. Then on $S_-$, there is an arc $\beta$ of an intersection curve in $F \cap S_-$ that hits both sides of $\partial B$. By Lemma~\ref{lem:primeLemmas}~(v), $\partial B_+ \cup \beta$ must contain a nontrivial curve on $S_-$ (see Figure~\ref{fig:bubbleonce}(B)). Then $\alpha$ must have been a nontrivial curve on $S_+$, which contradicts the fact that all intersection curves are trivial on $S_+$. \\ \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[scale=0.3]{Prime_diagram_1.pdf} \caption{Intersection curve in $F \cap S_-$ which passes through both sides of a bubble $B$.} \label{fig:primeDiagram1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[scale=0.3]{Prime_diagram_1b.pdf} \caption{If $\partial B_+ \cup \beta$ contains a nontrivial curve, $\alpha$ must have been nontrivial.} \label{fig:primeDiagram1b} \end{subfigure} \caption{When $\alpha$ hits only one bubble once.} \label{fig:bubbleonce} \end{figure} Finally, suppose $\alpha$ intersects bubbles more than once, such that between any two bubbles intersected by $\alpha$, there exists an arc of $\alpha$ that connects the two bubbles without any punctures in between. Let $D$ be the disk bounded by $\alpha$ on $S_+$. Since $L$ is alternating, $D$ contains the overstrand of at least one bubble. \\\\ Consider each intersection of $\alpha$ with a bubble $B$, such that $\alpha$ bounds the overstrand of $B$ to the interior of $D$. For each such intersection, there are two intersection curves given by $F\cap S_-$ entering $D$ (Figure~\ref{fig:primeDiagram2}). Thus there are an even number of intersection curves entering $D$ on $S_-$, all of which connect within $D$, and thus form arcs with endpoints on $\alpha$. Of these arcs, choose the arc $\beta$ that is outermost in $D$. There are three possibilities: $\beta$ connects two sides of one bubble, $\beta$ connects two different bubbles with a bubble in between, or $\beta$ connects two curves passing through one side of one bubble. \\ \begin{figure}[h] \includegraphics[scale=0.5]{Primediagram2.png} \centering \caption{Intersection curves of $F \cap S_-$ entering $D$.} \label{fig:primeDiagram2} \end{figure} Assume $\beta$ intersects both sides of one bubble. We see from Lemma~\ref{lem:primeLemmas}~(v) that this can only occur when $\partial B_+\cup\beta$ contains a simple closed curve that is nontrivial; but this contradicts the fact that $\beta$ is contained in $D$. \\\\ Assume $\beta$ connects two different bubbles with a bubble $B$ in between (Figure~\ref{fig:primeDiagram3}). Again, it follows that $\beta$ must hit both sides of $B$, and as $\beta$ is contained in $D$, this contradicts Lemma~\ref{lem:primeLemmas}~(v).
\\ \begin{figure}[h] \includegraphics[scale=0.5]{Primediagram3.png} \centering \caption{When $\beta$ connects two different bubbles with a bubble $B$ in between.} \label{fig:primeDiagram3} \end{figure} Finally, assume $\beta$ connects two intersection arcs both passing through the same side of the same bubble (Figure~\ref{fig:primeDiagram4}). As $\partial B_+ \cup \beta$ does not contain a simple closed curve that is nontrivial on $S_-$, this configuration contradicts Lemma~\ref{lem:primeLemmas}~(iv), by which all such curves were removed. \begin{figure}[h] \includegraphics[scale=0.5]{Primediagram4.png} \centering \caption{When $\beta$ connects two strands passing through the same side of one bubble.} \label{fig:primeDiagram4} \end{figure} \\\\ Therefore $\alpha$ must intersect $L$ at least twice, and these intersections prevent the intersections with bubbles from being consecutive. \end{proof} Now we are ready to show that $P$ is not obviously prime. This completes the proof of Theorem~\ref{thm:primeiffobv}, i.e., that a fully alternating link $L$ in $S \times I$ with a reduced fully alternating projection $P$ is prime if and only if $P$ is obviously prime. \begin{proof}[Completion of Proof of Theorem~\ref{thm:primeiffobv}] Observe that since $F$ is a twice-punctured sphere, there can be only two intersections with $L$ among all intersection curves of $F \cap S_+$. By Lemma~\ref{lem:atLeastTwice}, $F\cap S_+$ consists of exactly one intersection curve $\alpha$, which intersects $L$ exactly twice. Since all intersection curves are trivial on $S_+$, $\alpha$ must be trivial on $S_+$. \\\\ Suppose $\alpha$ hits a bubble $B$. Let $\sigma$ be the uppermost saddle of $B$. Then, since $\alpha$ is the only intersection curve, both arcs of $\sigma \cap \partial B$ must belong to $\alpha$. If an arc of $\alpha \setminus B$ satisfies the hypothesis of Lemma~\ref{lem:primeLemmas}~(v), then we obtain a contradiction. \\\\ Suppose that no arc of $\alpha \setminus B$ satisfies the hypothesis of Lemma~\ref{lem:primeLemmas}~(v); in other words, suppose that for every arc $\beta$ of $\alpha \setminus B$, $\partial B_+ \cup \beta$ contains a simple closed curve that is nontrivial on $S_+$. Let $S'$ be the surface obtained from $S_+$ by replacing the upper hemisphere of $B$ with the lower hemisphere. Then $\alpha\cup \partial \sigma$ contains a simple closed curve $\gamma$ that is nontrivial on $S'$. But $\gamma$ is an intersection curve of $F$ and the surface $S'$. Therefore it is trivial on $F$ and bounds a disk on $F$; as $\gamma$ is nontrivial on $S'$, this disk is a compression disk for $S'$, contradicting the fact that $S'$ is incompressible in $S\times I$ (see Figure~\ref{fig:primeBubbleBothSides}). \\ \begin{figure}[h] \includegraphics[scale=0.3]{2puncspheretorus.png} \centering \caption{When $\alpha \cup \partial \sigma$ contains a nontrivial simple closed curve.} \label{fig:primeBubbleBothSides} \end{figure} Thus $\alpha$ does not intersect any bubbles, so $\alpha$ is a twice-punctured circle which bounds a disk $D$ on $S_+$. Since $F$ is an essential twice-punctured sphere, $D$ must contain at least one crossing of $L$. Thus $P$ is not an obviously prime projection of $L$. \end{proof} \section{Alternating links in more general 3-manifolds} \label{sec:extensions} Given an orientable hyperbolic 3-manifold $M$, we know that $M$ does not contain any essential spheres, tori or annuli by Thurston's theorem.
Furthermore, there are no projective planes or Klein bottles, because in an orientable 3-manifold, the boundary of the neighborhood of a projective plane is an essential sphere, and the boundary of the neighborhood of a Klein bottle is an essential torus. \begin{lemma}\label{lem:nbd} Let $S$ be a surface with negative Euler characteristic and let $N$ be an $I$-bundle over $S$. Let $L$ be a prime link in $N$ with a fully alternating projection on $S$. Then $N\setminus L$ is hyperbolic. \end{lemma} \begin{proof} We split into two cases: $N$ is orientable or $N$ is non-orientable. First suppose $N$ is orientable. By Theorem \ref{thm:main}, we already know that the lemma holds when $S$ is orientable, in which case $N=S\times I$. Hence, we only need to consider the case where $S$ is non-orientable. In this case, we use $N=S\widetilde{\times}I$ to denote the orientable twisted $I$-bundle over $S$. We know that we can always find an orientable double cover $\widetilde{N}=\widetilde{S}\times I$ of $N$, where $\widetilde{S}$ is orientable. We aim to show that the lift $\widetilde{L}$ of $L$ in $\widetilde{N}$ is fully alternating on $\widetilde{S}$ and prime in $\widetilde{N}$. \\\\ To show that $\widetilde{L}$ is alternating, we can think of $S$ as a polygon $R$ with edges identified so that $\widetilde{S}$ is two copies of $R$ glued together with edges identified, denoted by $\widetilde{R}$. Certainly $\widetilde{L}$ is alternating in the interior of $\widetilde{R}$, since $L$ is alternating in the interior of $R$ and along the glued edges. Because $L$ must alternate across identified edges of $R$, $\widetilde{L}$ must alternate across identified edges of $\widetilde{R}$. Thus, $\widetilde{L}$ is alternating on $\widetilde{S}$. We also know that disks in $S$ always lift to disks in $\widetilde{S}$, so $\widetilde{L}$ is fully alternating on $\widetilde{S}$. \\\\ To show that $\widetilde{L}$ is prime in $\widetilde{N}$, consider an essential twice-punctured sphere $\widetilde{A}$ in $\widetilde{N}\setminus\widetilde{L}$. We know $\widetilde{A}$ must map to an immersed twice-punctured sphere $A$ in $N$ where any self-intersections consist of double curves. We want to construct from $A$ an embedded twice-punctured sphere in $N$ that is essential, contradicting the assumption that $L$ is prime in $N$. \\\\ Any self-intersection curve on $A$ can be associated to two curves on $\widetilde{A}$ via the immersion. If both the curves associated to a self-intersection are trivial, then that self-intersection can be isotoped away since $\widetilde{N}$ does not contain any essential spheres. \\\\ If both the curves associated to a self-intersection are non-trivial, we can perform surgery on $A$ by smoothing the intersecting sheets in two possible ways to eliminate the non-trivial self-intersections as shown in cross-section in Figure~\ref{fig:smoothing}. Discarding the torus component in the second case, this results in two twice-punctured spheres $A_1$ and $A_2$ in $N$ on either side of the sheet containing the corresponding double-curve, with punctures coming from the link. Note that after smoothing $k$ double-curves associated to two non-trivial curves on $\widetilde{A}$ we obtain $2^k$ twice-punctured spheres, half of which lie on a single side of any smoothed double-curve and half of which lie on the other side.
So, if $A$ has $k$ double-curves associated to two non-trivial curves on $\widetilde{A}$, after smoothing them all we have $2^k$ twice-punctured spheres $\{A_i\}$, each with no double curves that are associated to two trivial or two non-trivial curves on $\widetilde{A}$. \\\\ \begin{figure}[h] \includegraphics[scale=.5]{surgery.pdf} \caption{Smoothing of a self-intersecting curve.} \label{fig:smoothing} \end{figure} Finally, if a self-intersection is associated to a trivial curve and a non-trivial curve, consider the innermost trivial curve $\alpha$ that is paired with a non-trivial curve via a double-curve in $A$. Since we have eliminated double-curves in $A$ associated to two trivial curves, the double-curve associated to $\alpha$ must be an innermost trivial curve on $A$ and hence bounds a disk in $A$. But since its other associated curve is non-trivial on $\widetilde{A}$, this disk lifts to a compression disk in $\widetilde{A}$ bounded by the corresponding non-trivial curve on $\widetilde{A}$, contradicting essentiality of $\widetilde{A}$ in $\widetilde{N}$. \\\\ Thus, each of the resulting $\{A_i\}$ has no self-intersections, and they are all incompressible since $\widetilde{A}$ is. It suffices to show that at least one of the $\{A_i\}$ is not boundary-parallel in $N$. Recall that a twice-punctured sphere is boundary-parallel if its closure bounds a ball containing a trivial arc of $L$. Note that each $A_i$ bounds a ball to at most one side and that each $A_i$ that bounds a ball must do so to the same side. So, if every $A_i$ bounds a ball containing a trivial arc of $L$, then $\widetilde{A}$ must bound a ball containing a trivial arc of $\widetilde{L}$ since balls in $N$ lift to balls in $\widetilde{N}$, contradicting our original choice of $\widetilde{A}$. \\\\ Now, suppose that $N$ is non-orientable. There are three sub-cases: \begin{enumerate} \item $N=S\times I$ for $S$ non-orientable \item $N=S\widetilde{\times} I$ for $S$ orientable \item $N=S\widetilde{\times} I$ for $S$ non-orientable. \end{enumerate} In sub-cases (1) and (2), $N$ is double-covered by $\widetilde{N}=\widetilde{S}\times I$ for $\widetilde{S}$ orientable. In sub-case (3), $N$ is double-covered by $\widetilde{N}=\widetilde{S}\widetilde{\times} I$ with $\widetilde{N}$ orientable and $\widetilde{S}$ non-orientable. By arguments similar to those in the case where $N$ is orientable, we know that the lift $\widetilde{L}$ of $L$ is prime and fully alternating in each orientable double-cover $\widetilde{N}$. \\\\ In all cases except sub-case (3), we must have that $\widetilde{N}\setminus\widetilde{L}$ is hyperbolic by Theorem~\ref{thm:main}. By the Mostow Rigidity Theorem, we know that the deck transformation associated to the covering $\widetilde{N}\setminus\widetilde{L}\to N\setminus L$ can be realized as an isometry. Thus, $N\setminus L$ must be hyperbolic as well. Finally, in sub-case (3), we reduce to the case where $N$ is orientable after taking the double cover and then apply Mostow Rigidity again to obtain the desired result. \end{proof} With this lemma, we are now ready to prove Theorem~\ref{thm:threemanifold}, which allows us to remove prime, fully alternating links on essential, closed surfaces in hyperbolic $3$-manifolds and preserve hyperbolicity. \begin{proof}[Proof of Theorem~\ref{thm:threemanifold}] We show that in the resulting manifold $M\setminus L$, we still have no essential spheres, tori or annuli. \\\\ First note that a regular neighborhood $N$ of $S$ in $M$ will be an $I$-bundle over $S$.
In the case that $M$ is non-orientable, we take an orientable double cover $\widetilde{M}$ of $M$ that lifts $N$ to its corresponding orientable double cover as given in the proof of Lemma~\ref{lem:nbd}. As in the argument at the end of the proof of Lemma~\ref{lem:nbd}, it suffices to show that $\widetilde{M}\setminus\widetilde{L}$ is hyperbolic. Since $M$ is hyperbolic by assumption, $\widetilde{M}$ is too, so we can reduce to the case where $M$ is orientable. \\\\ Now, we claim that any essential sphere, torus or annulus in $M\setminus L$ must intersect $N\setminus L$ in at least one disk or annulus. Suppose instead that there were an essential sphere, torus or annulus $F$ in $M\setminus L$ disjoint from $N$. \\\\ Note that there are no boundary-parallel spheres in $M$ because hyperbolic manifolds do not have spherical boundary components. If $F$ is a compressible sphere in $M$, it must bound a ball $B$ in $M$. Because $S$ is incompressible in $M$, $N$ cannot be contained in $B$, so $F$ will still bound a ball in $M\setminus L$. \\\\ If $F\subset M\setminus N$ is a torus that is boundary-parallel in $M$ but not in $M\setminus L$, then $F$ must contain a curve that bounds a disk $D$ in $M$ that is punctured in $M\setminus L$, i.e., $L$ must puncture $D$. Since $L\subset N$, $D$ must intersect $N$, and because $S$ is incompressible in $M$, $D$ must intersect $S$ trivially, in which case we can isotope it off of $N$. But then $D$ cannot be punctured by $L$, contradicting our assumption regarding boundary-parallelism. \\\\ If $F\subset M\setminus N$ is a torus that has a compression disk $D$ in $M$ but not in $M\setminus L$, then $D$ must intersect $S$. Since $S$ is incompressible, we know that $D$ must intersect $S$ in a trivial curve on $S$, so we can isotope $D$ off of $N$. As before, $D$ must not be punctured by $L$, contradicting the assumption that $F$ is incompressible in $M\setminus L$. \\\\ If $F\subset M\setminus N$ is an annulus that is boundary-parallel in $M$ but not in $M\setminus L$, then $F\cup \partial M$ must contain a curve consisting of one arc from $F$ and one arc from $\partial M$ that bounds a disk $D$ punctured by $L$. Thus, $D$ must intersect $S$, and since $S$ is incompressible in $M$, $D$ must intersect $S$ trivially, so we can isotope $D$ off of $N$, leading to a similar contradiction as in the previous cases. If $F\subset M\setminus N$ is an annulus that is compressible in $M$ but incompressible in $M\setminus L$, then all compression disks of $F$ in $M$ must be punctured by $L$, so we can apply a similar argument. \\\\ Thus, any essential sphere, torus or annulus in $M\setminus L$ must have non-empty intersection with $N$, and it follows that the intersection must contain a disk or annulus component. So, let $F$ be an essential sphere, torus or annulus with the least number of intersection curves with $\partial N$. \\\\ If $F$ intersects $N$ in at least one disk, we can push the disk out from the boundary to decrease the number of intersection curves while preserving essentiality, contradicting the minimality of the number of intersection curves. \\\\ If $F\cap N$ does not contain any disks, it must contain an annulus $A$. We know that $A$ must be incompressible because $F$ is. $A$ cannot be boundary-parallel in $N$, because otherwise we could push it out through the boundary and decrease the number of intersection curves in $F\cap \partial N$.
So, $A$ is an essential annulus completely contained in $N$, which contradicts Lemma~\ref{lem:nbd}. \end{proof} \begin{example} Here we provide an example of applying Theorem~\ref{thm:threemanifold} to generate a new hyperbolic manifold by removing a prime, fully alternating link from a hyperbolic 3-manifold with an essential closed surface. Let $M$ be the complement of the hyperbolic link shown in Figure~\ref{fig:manifoldwithsurface}. \begin{figure}[h] \includegraphics[width=8cm,height=6.5cm]{examplenew.pdf} \centering \caption{A link complement containing an essential closed surface.} \label{fig:manifoldwithsurface} \end{figure} As illustrated in Figure~\ref{fig:manifoldwithsurface}, there is a surface $S$ of genus 2 in $M$. \\\\ We show that $S$ is incompressible. We surger $S$ along the shaded punctured disks $E_1$ and $E_2$ in Figure~\ref{fig:manifoldwithsurface} to obtain a four-punctured sphere $S'$. Note that $S'$ is incompressible to the inside because the tangle contained in it is not rational. \\\\ Let $D \subset M$ be a potential compression disk of $S$ to its inside such that $D$ has the fewest possible intersection curves with $E_1$ and $E_2$. Then each $E_i$ intersects $D$ in arcs. If there exists an intersection arc, then there must exist an outermost arc, which together with an arc on $\partial D$ must cobound a disk $D' \subset D$ with boundary in $S'$. The disk $D'$ can then be isotoped onto $S'$, allowing us to isotope $D$ to eliminate the outermost intersection arc, contradicting the fact that we chose $D$ to have a minimal number of intersection arcs. We conclude that $S$ is incompressible to the inside. By symmetry, $S$ is also incompressible to the outside. Clearly, $S$ is not boundary-parallel, so it is essential. Now, Theorem~\ref{thm:threemanifold} implies that any prime, fully alternating link on $S$ can be removed from $M$ such that the resulting manifold is hyperbolic. For instance, the thick link in Figure~\ref{fig:addingLinkExample} is prime and fully alternating. Adding this link to the original link produces another hyperbolic link. \begin{figure}[h] \includegraphics[width=9.5cm,height=8cm]{AddLinkWithSurface} \centering \caption{A hyperbolic link obtained by removing a prime, fully alternating link from an essential surface on the complement of a hyperbolic link.} \label{fig:addingLinkExample} \end{figure} Note that the only assumption on the tangle inside $S'$ was that it was a nontrivial non-rational tangle of two arcs. So this yields many more examples, with no requirement that the inner and outer tangles be the same. \end{example} \subsection*{Acknowledgments} The authors are grateful for support they received from NSF Grant DMS-1659037 and the Williams College SMALL REU program.
https://arxiv.org/abs/1708.09237
Bordering for spectrally arbitrary sign patterns
We develop a matrix bordering technique that can be applied to an irreducible spectrally arbitrary sign pattern to construct a higher order spectrally arbitrary sign pattern. This technique generalizes a recently developed triangle extension method. We describe recursive constructions of spectrally arbitrary patterns using our bordering technique, and show that a slight variation of this technique can be used to construct inertially arbitrary sign patterns.
\section{Introduction} A number of methods have been developed to check that a specific pattern is spectrally or inertially arbitrary, such as the analytic nilpotent-Jacobian method and the algebraic nilpotent-centralizer method (see, e.g., \cite{CGKOVV, DJOD, GS, GS2}), and these have been applied to various classes of patterns (see, e.g., \cite{Britz, CV, P}). Recently in~\cite{KSVW}, a digraph method called \emph{triangle extension} has been developed for constructing higher order spectrally or inertially arbitrary patterns from lower order patterns. In this paper, we generalize the triangle extension method by formulating it as a matrix bordering technique (see Remark~\ref{te}). With this bordering technique, we construct higher order patterns (some of which cannot be obtained by triangle extension) that are spectrally or inertially arbitrary from lower order patterns. We give examples of new spectrally and inertially arbitrary sign patterns obtained by bordering. \subsection{Definitions and the nilpotent-Jacobian method} Given an order $n$ matrix $A=[a_{ij}]$, denote the characteristic polynomial of $A$ by $p_A(z)=\det(zI-A)$. A \emph{sign pattern} is a matrix $\mathcal{A}=[\alpha_{ij}]$ of order $n$ with entries in $\{0,+,-\}$. Let $$Q(\mathcal{A})=\{ A\ | \ a_{ij}=0 {\rm{\ if\ }} \alpha_{ij}=0, a_{ij}>0 {\rm{\ if\ }} \alpha_{ij}=+ {\rm{\ and \ }} a_{ij}<0 {\rm{\ if\ }} \alpha_{ij}=-\}.$$ If $A\in Q(\mathcal{A})$ for some pattern $\mathcal{A}$, then $A$ is a \emph{realization} of $\mathcal{A}$ and we sometimes refer to $\mathcal{A}$ as $\sgn(A)$. A pattern $\mathcal{A}$ is \emph{spectrally arbitrary} if for every degree $n$ monic polynomial $p(z)$ over $\mathbb{R}$, there is some real matrix $A$ such that $A\in Q(\mathcal{A})$ and $p_A(z)=p(z)$. A pattern $\mathcal{B}=[\beta_{ij}]$ is a \emph{superpattern} of $\mathcal{A}$ if $\alpha_{ij}\neq 0$ implies $\beta_{ij}=\alpha_{ij}$, and $\mathcal{A}$ is a \emph{subpattern} of $\mathcal{B}$. Two patterns $\mathcal{A}$ and $\mathcal{B}$ are \emph{equivalent} if $\mathcal{B}$ can be obtained from $\mathcal{A}$ via any combination of negation, transposition, permutation similarity and signature similarity. A matrix $A$ is \emph{nilpotent} if $A^k=0$ for some positive integer $k$; the smallest such $k$ is the \emph{index} of $A$. An order $n$ nilpotent matrix $A$ has characteristic polynomial $p_A(z)=z^n$. Suppose $\mathcal{A}$ is an order $n$ sign pattern with a nilpotent matrix $A\in Q(\mathcal{A})$ with $m\geq n$ nonzero entries $a_{i_1j_1}, a_{i_2j_2},\ldots, a_{i_mj_m}$. Let $X=X_A(x_1,x_2,\ldots,x_m)$ denote the matrix obtained from $A$ by replacing $a_{i_kj_k}$ with the variable $x_k$ for $k=1,\ldots, m$. Writing $p_{X}(z)=z^n+f_1z^{n-1}+\cdots+f_{n-1}z+f_n$ for some $f_i=f_i(x_1,x_2,\ldots,x_m)$, let $J=J_{X}$ be the $n \times m$ Jacobian matrix with $(i,j)$ entry equal to $\frac{\partial f_i}{\partial x_j}$ for $1\leq i \leq n$ and $1\leq j \leq m$. Let $J_{X=A}$ denote the Jacobian matrix evaluated at the nilpotent realization, that is, $J_{X=A}=J\vert_{(x_1,x_2,\ldots,x_m)=(a_{i_1j_1}, a_{i_2j_2},\ldots, a_{i_mj_m})}$. A nilpotent matrix $A$ \emph{allows a full-rank Jacobian} if the rank of $J_{X=A}$ is $n$. Finding a nilpotent matrix $A\in Q(\mathcal{A})$ that allows a full-rank Jacobian is known as the \emph{nilpotent-Jacobian method}. As noted in part (c) of Theorem~\ref{SAP}, this method guarantees that every superpattern of $\mathcal{A}$ is spectrally arbitrary.
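To make the computation in the nilpotent-Jacobian method concrete, the following small script (an illustration added here, not part of the original development, and assuming the Python library SymPy) carries it out for the order $2$ nilpotent matrix $A=\left[\begin{smallmatrix} 1&-1\\ 1&-1 \end{smallmatrix}\right]$, a realization of the pattern $\mathcal{T}_2$ displayed below.
\begin{verbatim}
# Illustrative sketch (not from the original paper), assuming SymPy:
# the nilpotent-Jacobian method for A = [[1,-1],[1,-1]].
import sympy as sp

x1, x2, x3, x4, z = sp.symbols('x1 x2 x3 x4 z')
X = sp.Matrix([[x1, x2], [x3, x4]])     # variables in place of the nonzeros
p = sp.expand((z*sp.eye(2) - X).det())  # p_X(z) = z^2 + f1*z + f2
f = [p.coeff(z, 1), p.coeff(z, 0)]      # f1 = -(x1+x4), f2 = x1*x4 - x2*x3
J = sp.Matrix(f).jacobian([x1, x2, x3, x4])
A = {x1: 1, x2: -1, x3: 1, x4: -1}      # a nilpotent realization
assert X.subs(A)**2 == sp.zeros(2)      # A is nilpotent
print(J.subs(A).rank())                 # prints 2: full-rank Jacobian
\end{verbatim}
Here the evaluated Jacobian is $\left[\begin{smallmatrix} -1&0&0&-1\\ -1&-1&1&1 \end{smallmatrix}\right]$, which has rank $2$, so the method applies.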
A matrix $A$ (or pattern $\mathcal{A}$) is \emph{reducible} if there is a permutation matrix $P$ such that $PAP^T$ (resp. $P\mathcal{A} P^T$) is block triangular with more than one nonempty diagonal block. Otherwise it is \emph{irreducible}. A matrix $A$ is \emph{nonderogatory} if the dimension of the eigenspace of every eigenvalue is equal to one. The following theorem combines known results from \cite{CGKOVV} and \cite{DJOD}. \begin{theorem}\label{SAP} Let $\mathcal{A}$ be a sign pattern of order $n$. If a nilpotent matrix $A\in Q(\mathcal{A})$ allows a full-rank Jacobian, then \begin{enumerate} \item[(a)] $A$ is irreducible, \item[(b)] $A$ is nonderogatory, and \item[(c)] every superpattern of $\mathcal{A}$ is spectrally arbitrary. \end{enumerate} \end{theorem} \begin{proof} Suppose $A\in Q(\mathcal{A})$ is a nilpotent matrix of order $n$ that allows a full-rank Jacobian. Part (c) is \cite[Theorem 3.1]{CGKOVV}, which is a reframing of the nilpotent-Jacobian method introduced in \cite{DJOD}. Part (b) is \cite[Corollary 4.5]{CGKOVV}. If $A$ is a reducible nilpotent matrix and $PAP^T$ is block triangular for some permutation matrix $P$, then the index of $A$ is at most the index of the largest order diagonal block of $PAP^T$. Thus the index of $A$ is bounded above by the order of the largest diagonal block, which is less than $n$. Since $A$ is nonderogatory by part (b), its index is $n$, and it follows that $A$ is irreducible, proving part (a). \end{proof} Because of part (c) of Theorem~\ref{SAP}, \emph{minimal} spectrally arbitrary patterns (that is, spectrally arbitrary patterns for which no proper subpattern is spectrally arbitrary) are of special interest. For $n=2$ and $n=3$, the minimal spectrally arbitrary patterns are well-known (see, e.g., \cite{Britz,CV}) and, up to equivalence, are: $$\mathcal{T}_2=\left[\begin{array}{cc} +&-\\ +&- \end{array} \right], \qquad \mathcal{T}_3=\left[\begin{array}{ccc} +&-&0\\+&0&-\\0&+&-\end{array}\right],$$ $$ \mathcal{U}_3=\left[\begin{array}{ccc} +&-&+\\+&-&0\\+&0&-\end{array}\right], \ \mathcal{V}_3=\left[\begin{array}{ccc} +&-&0\\+&0&-\\+&0&-\end{array}\right], \ \hbox{ and }\ \mathcal{W}_3=\left[\begin{array}{ccc} +&+&-\\+&0&-\\+&0&-\end{array}\right]. $$ \subsection{Bordering} Let $A=[a_{ij}]$ be an order $n$ matrix, $\mathbf{x},\mathbf{z}\in\mathbb{R}^n$, and let $B$ be the \emph{bordered} matrix of order $n+1$: \begin{equation}\label{B} B= \left[\begin{array}{cc} I_n&\mathbf{0}\\ \mathbf{x}^T&1 \end{array}\right] \left[\begin{array}{cc} A&\mathbf{z}\\ \mathbf{0}^T&0 \end{array}\right] \left[\begin{array}{cc} I_n&\mathbf{0}\\ -\mathbf{x}^T&1 \end{array}\right]= \left[\begin{array}{c|c} A-\mathbf{z}\mathbf{x}^T&\mathbf{z}\\ \hline \mathbf{x}^T(A-\mathbf{z}\mathbf{x}^T)&\mathbf{x}^T\mathbf{z} \end{array}\right]. \end{equation} Since this is a similarity transformation, it follows that $p_B(z)=zp_A(z)$, and thus $B$ is nilpotent if $A$ is nilpotent. Note that (\ref{B}) is a special case of a construction introduced in \cite[Theorem 3.1]{KOSVVV}. Let $\mathbf{e}_i =[0,\ldots,0,1,0,\ldots,0]^T$ with a $1$ in position $i$. In this paper, we focus on the special cases $\mathbf{z}=\mathbf{e}_j$ and $\mathbf{x}=b \mathbf{e}_k$ for some $b\neq 0$, which we call \emph{standard unit bordering}. In the next two sections we use bordering to construct higher order spectrally arbitrary patterns out of lower order patterns without having to recalculate a Jacobian matrix.
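The similarity identity $p_B(z)=zp_A(z)$ behind (\ref{B}) can also be checked symbolically. The following sketch (again an addition, assuming SymPy) verifies it for a generic matrix of order $2$ and arbitrary bordering vectors:
\begin{verbatim}
# Sketch (an addition, assuming SymPy): symbolic check that the bordered
# matrix of the similarity displayed above satisfies p_B(z) = z*p_A(z).
import sympy as sp

z = sp.Symbol('z')
A = sp.Matrix(2, 2, sp.symbols('a0:4'))   # generic order 2 matrix A
x = sp.Matrix(sp.symbols('u0:2'))         # the column vector x
w = sp.Matrix(sp.symbols('w0:2'))         # the column vector z, renamed w here
B = ((A - w*x.T).row_join(w)).col_join((x.T*(A - w*x.T)).row_join(x.T*w))
pA = (z*sp.eye(2) - A).det()
pB = (z*sp.eye(3) - B).det()
assert sp.simplify(pB - z*pA) == 0        # confirms p_B(z) = z*p_A(z)
\end{verbatim}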
In addition, at each stage, the construction provides an explicit nilpotent realization of the spectrally arbitrary pattern. \section{Standard unit bordering with equal indices}\label{unitequal} Let $A=[a_{ij}]$, and denote the $k$th row of $A$ by $r_k(A)$. Suppose $\mathbf{x}=a_{kk}\mathbf{z}=a_{kk}\mathbf{e}_k$ for some $a_{kk}\neq0$. Then $\mathbf{x}^TA=a_{kk}r_k(A)$ and $A-\mathbf{z}\mathbf{x}^T=A-a_{kk}P_{kk}$, where $P_{kk}$ has a $1$ in entry $(k,k)$ and zeros elsewhere. In this case, the matrix $B$ in (\ref{B}) is \begin{equation}\label{Be} B=\left[\begin{array}{c|c} A-a_{kk}P_{kk}&\mathbf{e}_k\\ \hline a_{kk}r_k(A-a_{kk}P_{kk})&a_{kk} \end{array}\right]. \end{equation} Let $A(u,v)$ denote the matrix obtained from $A$ by deleting row $u$ and column $v$. \begin{theorem}\label{borderT} Let $\mathcal{A}$ be a sign pattern of order $n$. Suppose $A=[a_{ij}]\in Q(\mathcal{A})$ is a nilpotent matrix and $A$ allows a full-rank Jacobian. Suppose $a_{kk}\neq0$ and $a_{kv}\neq 0$ for some $v\neq k$. If $\det{A(k,v)}\neq 0$, then $B$ in $(\ref{Be})$ is a nilpotent matrix that allows a full-rank Jacobian and hence every superpattern of $\mathcal{B}=\sgn(B)$ is spectrally arbitrary. \end{theorem} \begin{proof} Let $A\in Q(\mathcal{A})$ be a nilpotent matrix and $X_A$ be a matrix with the nonzero pattern of $\mathcal{A}$ having variable entries such that the Jacobian $J_{X_A=A}$ has rank $n$. For convenience, assume $k=n$ and the last row of $X_A$ is $[x_{n1},x_{n2},\ldots, x_{nn}]$, recognizing that some of these entries may be zero. Note that by assumption $x_{nv}$ and $x_{nn}$ are nonzero. Let $B$ be as in (\ref{Be}) and \begin{equation}\label{XB} X_B= \left[\begin{array}{ccc|c} &&&\\ &X_A-x_{nn}P_{nn}&&\mathbf{0}\\ &&&1\\ \hline &\mathbf{y}&0&x_{nn} \end{array} \right] \end{equation} with $\mathbf{y}=[y_1,y_2,\ldots, y_{n-1}]$ such that $y_i\neq 0$ if and only if $x_{ni}\neq 0$. (Note that, apart from their placement, the entries of $\mathbf{y}$ are new variables, independent of the variables in $X_A$.) Then $X_B$ has the nonzero pattern of $\mathcal{B}$. Using cofactor expansion along the last row of $X_B$ gives \begin{eqnarray*} p_{X_B}(z)&=&\det(zI_{n+1}-X_B)\\ &=&(z-x_{nn})\det(zI_n-X_A+x_{nn}P_{nn})+\sum_{\ell =1}^{n-1}(-1)^{n+\ell }y_\ell \det\left([zI_n-X_A](n,\ell )\right).\\ \end{eqnarray*} However, applying cofactor expansion along the last row of the determinant in the first summand gives \begin{eqnarray*} \det(zI_n-X_A+x_{nn}P_{nn})&=&z\det\left([zI_n-X_A](n,n)\right)\\ & & +\sum_{\ell =1}^{n-1}(-1)^{n+\ell }x_{n\ell }\det\left([zI_n-X_A](n,\ell )\right). \end{eqnarray*} Thus \begin{eqnarray*} p_{X_B}(z)&=&z\det(zI_n-{X_A}+x_{nn}P_{nn})-x_{nn}z\det([zI_n-X_A](n,n))\\ &+& \sum_{\ell =1}^{n-1}(-1)^{n+\ell }(y_\ell -x_{nn} x_{n\ell })\det\left([zI_n-X_A](n,\ell )\right). \end{eqnarray*} Since the determinant is linear in the rows (or using a rank 1 perturbation of a determinant), it follows that \begin{equation}\label{BB} p_{X_B}(z)=zp_{X_A}(z)+ \sum_{\ell =1}^{n-1}(-1)^{n+\ell }(y_\ell -x_{nn} x_{n\ell })\det\left([zI_n-X_A](n,\ell )\right). \end{equation} Focusing on the coefficients of $p_{X_{B}}(z)$, the second summand can be rewritten as $$\sum_{r=3}^{n+1}\left[\sum_{\ell =1}^{n-1}S_{r,\ell }(y_\ell -x_{nn}x_{n\ell })\right]z^{n-r+1}$$ for some polynomials $S_{r,\ell }$ in the variable entries of $X_A$. To consider the Jacobian of $X_B$, we assume the last columns of $J_{X_B}$ are indexed by the nonzeros of $x_{n1},\ldots,x_{nn},y_1,\ldots,y_{n-1}$.
Let $m$ be the number of nonzero entries of $\mathbf{y}$ and $w$ be the number of variables in $X_A$. Then the $(n+1)\times(w+m)$ Jacobian matrix $J_{X_B}$ is \begin{equation}\label{JacB} J_{X_B}= \left[\begin{array}{cc} J_{X_A} & O\\ \mathbf{0}^T&\mathbf{0}^T \end{array}\right] +\sum_{\ell =1}^{n-1}(y_\ell -x_{nn}x_{n\ell })M_\ell + \left[\begin{array}{cc} O & N\\ \end{array}\right] \end{equation} for some matrices $M_\ell $ and some $(n+1) \times (2m+1)$ matrix $N$ with columns indexed by the nonzeros of $x_{n1},\ldots,x_{nn},y_1,\ldots,y_{n-1}$. Note that by (\ref{B}) and (\ref{XB}), $y_\ell = a_{nn}a_{n\ell }=x_{nn}x_{n\ell }$ in the nilpotent realization, so that we can ignore each matrix $M_\ell $ in (\ref{JacB}), since its coefficient vanishes at the nilpotent realization. Further, the column of $N$ corresponding to $y_\ell $ is $\overrightarrow{N}_{y_\ell }=[0,0,S_{3,\ell },S_{4,\ell },\ldots,S_{n+1,\ell }]^T$ for $1\leq \ell \leq n-1$, and in addition, the column corresponding to $x_{n\ell }$ is $\overrightarrow{N}_{x_{n\ell }}=-x_{nn}\overrightarrow{N}_{y_\ell }$ for $1\leq \ell \leq n-1$ and $\overrightarrow{N}_{x_{nn}}=\sum_{\ell =1}^{n-1}-x_{n\ell }\overrightarrow{N}_{y_\ell }$. It follows that $N$ is column equivalent to $[\ O\ |\overrightarrow{N}_{y_1}|\overrightarrow{N}_{y_2}|\cdots|\overrightarrow{N}_{y_{n-1}}].$ From (\ref{BB}), with $z=0$, $$S_{n+1,\ell }=(-1)^{\ell -1}\det(X_A(n,\ell )),$$ giving $$\left.S_{n+1,\ell }\right\vert_{X_B=B}=(-1)^{\ell -1}\det(A(n,\ell )).$$ Thus, the condition that there exists an index $v\neq n$ such that $a_{nv}\neq 0$ and $\det{A(n,v)}\neq 0$ implies that $\left.S_{n+1,\ell }\right\vert_{X_B=B}\neq 0$ for some $\ell$, $1\leq \ell \leq n-1$. It follows that $J_{X_B=B}$ is equivalent to $$ \left[\begin{array}{cc} J_{X_A=A} & *\\ \mathbf{0}^T&\mathbf{s}^T \end{array}\right] $$ for some $\mathbf{s}\neq \mathbf{0}$. Hence $B$ allows a full-rank Jacobian. Thus by Theorem~\ref{SAP}, every superpattern of $\mathcal{B}$ is spectrally arbitrary. \end{proof} \begin{remark}\label{te}{\rm A method in \cite{KSVW} called \emph{triangle extension on arc} $(u,v)$ (in the digraph associated with $\mathcal{A}$) is equivalent to a special case of applying Theorem~\ref{borderT} to row $u$ of $A$ and entry $(u,v)$, namely in the situation that $a_{uu}$ and $a_{uv}$ are the only nonzero entries in row $u$ of $A$. } \end{remark} \begin{example}\label{E1} {\rm If \[\mathcal{A}=\left[\begin{array}{cccc} 0&+&0&0\\ 0&-&+&0\\ +&0&0&+\\ +&0&-&+\\ \end{array} \right] \mbox{\rm{\qquad and \qquad}} A=\left[\begin{array}{rrrr} 0&1&0&0\\ 0&-1&1&0\\ 1&0&0&1\\ 1&0&-1&1\\ \end{array} \right], \] then $A$ is nilpotent, $A\in Q(\mathcal{A})$, and $A$ allows a full-rank Jacobian. Hence $\mathcal{A}$ is spectrally arbitrary ($A$ is equivalent to the second matrix in Appendix A of \cite{CM}). Further, $a_{44}\neq 0$, $a_{41}\neq 0$ and $\det(A(4,1))\neq 0$. Applying Theorem~\ref{borderT} to row $4$ and entry $(4,1)$ gives a spectrally arbitrary pattern $\mathcal{B}_5$ with nilpotent matrix $B\in Q(\mathcal{B}_5)$ for \[ \mathcal{B}_5=\left[\begin{array}{ccccc} 0&+&0&0&0\\ 0&-&+&0&0\\ +&0&0&+&0\\+&0&-&0&+\\ +&0&-&0&+\\ \end{array} \right] {\mbox{\rm{\qquad and \qquad}}} B=\left[\begin{array}{rrrrr} 0&1&0&0&0\\ 0&-1&1&0&0\\ 1&0&0&1&0\\1&0&-1&0&1\\ 1&0&-1&0&1\\ \end{array} \right].
\] Note that since row $4$ of $A$ has more than one off-diagonal entry, triangle extension as described in \cite{KSVW} is not possible on the arc $(4,1)$ in the digraph associated with $\mathcal{A}$, demonstrating that Theorem~\ref{borderT} provides a more general technique than triangle extension in \cite{KSVW}. } \end{example} \begin{remark} {\rm Theorem~\ref{borderT} can be applied recursively. In particular, suppose $\det(A(n,v))\neq 0$ and Theorem~\ref{borderT} was applied to row $n$ and entry $(n,v)$ of $A$ to obtain $B$. It follows that $\det(B(n+1,v))=(-1)^n\det(A(n,v))\neq 0$ since there is only one nonzero entry in the last column of $B(n+1,v),$ namely $1$ in the last row. Thus Theorem~\ref{borderT} can now be applied to row $n+1$ and entry $(n+1,v)$ of $B$. } \end{remark} \begin{example}\label{BN} {\rm The sign pattern $\mathcal{B}_5$ in Example~\ref{E1} can be recursively bordered using Theorem~\ref{borderT}, starting with row $5$ and entry $(5,1)$, to obtain a spectrally arbitrary pattern of order $n\geq 6$, with $3n-4$ nonzero entries, of the form \[\mathcal{B}_n=\left[\begin{array}{ccccccc} 0&+&&&&&\\ 0&-&+&&&&\\ +&0&0&+&&O&\\ +&0&-&0&+&&\\ \vdots&\vdots&\vdots&\vdots&\ddots&\ddots&\\ +&0&-&0&\cdots&0&+\\ +&0&-&0&\cdots&0&+ \end{array} \right].\] Note that each nonzero entry of the nilpotent realization of $\mathcal{B}_n$ has magnitude $1$. It can be shown that $\mathcal{B}_5$ and $\mathcal{B}_4=\mathcal{A}$ in Example~\ref{E1} are minimally spectrally arbitrary. } \end{example} \section{Standard unit bordering with unequal indices}\label{unitunequal} Referring to (\ref{B}), suppose $\mathbf{x}=b \mathbf{e}_k$ for some $b\neq 0$ and $\mathbf{z}=\mathbf{e}_j$ for $j\neq k$; thus $\mathbf{x}^T\mathbf{z}=0$. With the $k$th row of $A$ denoted by $r_k(A)$, $\mathbf{x}^TA=br_k(A)$ and $A-\mathbf{z}\mathbf{x}^T=A-bP_{jk}$ where $P_{jk}$ has a $1$ in entry $(j,k)$ and zeros elsewhere. In this case, the matrix $B$ in (\ref{B}) is \begin{equation}\label{B2} B=\left[\begin{array}{c|c} A-bP_{jk}&\mathbf{e}_j\\ \hline br_k(A-bP_{jk})&0 \end{array}\right]. \end{equation} Recall that $X_A$ is obtained from $A$ by replacing some of the nonzero entries with variables. In the case that $J_{X=A}$ has rank $n$, we call a nonzero entry of $A$ \emph{Jacobian in $X_A$} if it is replaced by a variable in $X_A$; otherwise, the entry is \emph{non-Jacobian in $X_A$}. Note that a non-Jacobian entry may be zero. To simplify the next proof, for $U,V\subseteq \{1,2,\ldots,n\}$, let $A(U,V)$ denote the matrix obtained from $A$ by deleting the rows in $U$ and the columns in $V$. \begin{theorem}\label{borderT2} Let $\mathcal{A}$ be a sign pattern of order $n$. Suppose $A=[a_{ij}]\in Q(\mathcal{A})$ is a nilpotent matrix and $A$ allows a full-rank Jacobian. Suppose $a_{jk}$, $j\neq k$, is non-Jacobian for some choice of $X_A$. If $a_{kv}\neq0$ and $\det{A(j,v)}\neq 0$ for some $v$, then $B$ in $(\ref{B2})$ is a nilpotent matrix that allows a full-rank Jacobian and hence every superpattern of $\mathcal{B}=\sgn(B)$ is spectrally arbitrary. \end{theorem} \begin{proof} Let $A\in Q(\mathcal{A})$ be a nilpotent matrix and $X_A$ be a matrix with the nonzero pattern of $\mathcal{A}$ having variable entries such that the Jacobian $J_{X_A=A}$ has rank $n$ with no variable placed in position $(j,k)$. For convenience, assume that $j=1$, $k=n$, and the last row of $X_A$ is $[x_{n1},x_{n2},\ldots, x_{nn}]$, recognizing that some of these entries may be zero.
Let $B$ be as in (\ref{B2}) and \begin{equation}\label{XBQ} X_B= \left[\begin{array}{ccc|c} &&&1\\ &X_A-bP_{1n}&&\mathbf{0}\\ &&&\\ \hline &\mathbf{y}&&0 \end{array} \right] \end{equation} with $\mathbf{y}=[y_1,y_2,\ldots, y_{n}]$ such that $y_i\neq 0$ if and only if $x_{ni}\neq 0$. (Note that, apart from their placement, the entries of $\mathbf{y}$ are new variables, independent of the variables in $X_A$.) Then $X_B$ has the nonzero pattern of $\mathcal{B}$. Using cofactor expansion along the last row of $X_B$ gives \begin{eqnarray} p_{X_B}(z)&=&\det(zI_{n+1}-X_B)\nonumber \\ &=&z\det(zI_n-X_A+bP_{1n})+\sum_{\ell=1}^{n}(-1)^{\ell}y_\ell\det\left([zI_n-X_A](1,\ell)\right) \nonumber\\ &=& zp_{X_A}(z) +(-1)^{n+1}zb\det\left([zI_n-X_A](1,n)\right) \label{H1}\\ & & \qquad\qquad +\sum_{\ell=1}^{n}(-1)^{\ell}y_\ell\det\left([zI_n-X_A](1,\ell)\right).\nonumber \end{eqnarray} Let $W_\ell=z\det\left([zI_n-X_A](\{1,n\},\{\ell,n\})\right)$. Applying cofactor expansion on the determinant in the second summand of (\ref{H1}) gives \begin{eqnarray}\label{H2} z\det\left([zI_n-X_A](1,n)\right)=\sum_{\ell=1}^{n-1}(-1)^{n+\ell}x_{n\ell}W_\ell. \end{eqnarray} Using the fact that the last row of $zI_n-X_A$ is $[0,\cdots,0,z]-[x_{n1},x_{n2},\ldots, x_{nn}],$ and that a determinant is linear in the last row, \begin{eqnarray}\label{H3} \sum_{\ell=1}^{n}(-1)^{\ell}y_\ell\det\left([zI_n-X_A](1,\ell)\right) =\sum_{\ell=1}^{n-1}(-1)^{\ell}y_\ell W_\ell +\sum_{\ell=1}^{n}(-1)^{\ell}y_\ell U_\ell \end{eqnarray} for $$U_\ell=\det\left([zI_n-X_A](1,\ell)-z\left[\begin{array}{cc}O&\mathbf{0}\\\mathbf{0}^T&1\end{array}\right]\right), \qquad {\rm{with \quad}} 1\leq \ell \leq n-1,$$ and $U_n=\det([zI_n-X_A](1,n))$. Using (\ref{H2}) and (\ref{H3}) in (\ref{H1}) gives \begin{eqnarray*} p_{X_B}(z)&=&zp_{X_A}(z)+\sum_{\ell=1}^{n-1}\left((-1)^{\ell+1}bx_{n\ell}W_\ell+(-1)^{\ell}y_\ell W_\ell \right) +\sum_{\ell =1}^{n}(-1)^{\ell }y_\ell U_\ell . \end{eqnarray*} However, using cofactor expansion along the last row of the matrix in $U_\ell $ gives \begin{eqnarray*} U_\ell &=& \sum_{i=1}^{\ell -1}(-1)^{n+i}x_{ni}\det\left([zI_n-X_A](\{1,n\},\{i,\ell \}) \right)\\ &&+\sum_{i=\ell +1}^n(-1)^{n+i-1}x_{ni}\det\left([zI_n-X_A](\{1,n\},\{\ell ,i\}) \right). \end{eqnarray*} Thus \begin{eqnarray*} p_{X_B}(z)=zp_{X_A}(z)&+&\sum_{\ell =1}^{n-1}(-1)^{\ell }(y_\ell -bx_{n\ell })W_\ell \nonumber\\ &+&\sum_{1\leq i<\ell \leq n}(y_\ell x_{ni}-y_ix_{n\ell })(-1)^{n+i+\ell }\det\left([zI_n-X_A](\{1,n\},\{\ell ,i\})\right). \end{eqnarray*} Focusing on the coefficients of $p_{X_{B}}(z)$, we can rewrite $p_{X_B}(z)$ as \begin{equation}\label{pB} zp_{X_A}(z)+\sum_{r=3}^{n+1}\left[\sum_{\ell =1}^{n-1}S_{r,\ell }(y_\ell -bx_{n\ell })\right]z^{n-r+1} + \sum_{r=5}^{n+1}\left[\sum_{1\leq i<\ell \leq n}T_{r,i,\ell }(y_\ell x_{ni}-y_ix_{n\ell })\right]z^{n-r+1} \end{equation} for some polynomials $S_{r,\ell }$ and $T_{r,i,\ell }$ in the variable entries of $X_A$. Note that the variables in the last row of $X_A$ do not appear in $S_{r,\ell }$ or $T_{r,i,\ell }$. To consider the Jacobian of $X_B$, we assume the last columns of $J_{X_B}$ are indexed by the nonzeros of $x_{n1},\ldots,x_{nn}$, and $y_1,\ldots,y_{n}$. Let $m$ be the number of nonzero entries of $\mathbf{y}$ and $w$ be the number of variables in $X_A$.
Since the $(1,n)$ entry is non-Jacobian in $X_A$, the $(n+1)\times(w+m)$ Jacobian matrix $J_{X_B}$ is \begin{equation}\label{JacB2} J_{X_B}= \left[\begin{array}{cc} J_{X_A} & O\\ \mathbf{0}^T&\mathbf{0}^T \end{array}\right] +\sum_{\ell =1}^{n-1}(y_\ell -bx_{n\ell })M_\ell +\sum_{1\leq i < \ell \leq n}(y_\ell x_{ni}-y_ix_{n\ell })H_{i,\ell } + \left[\begin{array}{cc} O & N\\ \end{array}\right] \end{equation} for some matrices $M_\ell $, $H_{i,\ell }$ and some $(n+1) \times (2m)$ matrix $N$ with columns indexed by the nonzeros of $x_{n1},\ldots,x_{nn},y_1,\ldots,y_{n}$. Note that by (\ref{B2}) and (\ref{XBQ}), $y_\ell = ba_{n\ell }=bx_{n\ell }$ in the nilpotent realization, so that we can ignore each matrix $M_\ell $ and $H_{i,\ell }$ in (\ref{JacB2}), since their coefficients vanish at the nilpotent realization. Let $\mathbf{s}_\ell =[0,0,S_{3,\ell },S_{4,\ell },\ldots,S_{n+1,\ell }]^T$ and $\mathbf{t}_{i\ell }=[0,0,0,0,T_{5,i,\ell },T_{6,i,\ell },\ldots,T_{n+1,i,\ell }]^T.$ By (\ref{pB}), the column of $N$ corresponding to $y_\ell $ is $$\overrightarrow{N}_{y_\ell }=\mathbf{s}_\ell +\sum_{i=1}^{\ell -1}x_{ni}\mathbf{t}_{i\ell }-\sum_{i=\ell +1}^nx_{ni}\mathbf{t}_{i\ell },$$ and the column corresponding to $x_{n\ell }$ is $$\overrightarrow{N}_{x_{n\ell }}=-b\mathbf{s}_\ell -\sum_{i=1}^{\ell -1}y_{i}\mathbf{t}_{i\ell }+ \sum_{i=\ell +1}^ny_{i}\mathbf{t}_{i\ell }$$ for $1\leq \ell \leq n-1$. Thus, evaluated at the nilpotent realization with $y_i= ba_{ni}=bx_{ni}$, $\overrightarrow{N}_{x_{n\ell }}=-b\overrightarrow{N}_{y_\ell }$ for $1\leq \ell \leq n-1.$ Further, $\overrightarrow{N}_{y_n}=\sum_{i=1}^{n-1}x_{ni}\mathbf{t}_{in}$ with $\overrightarrow{N}_{x_{nn}}=\sum_{i=1}^{n-1}(-y_{i})\mathbf{t}_{in}$. It follows that $N\vert_{X_B=B}$ is column equivalent to $[\ O\ |\overrightarrow{N}_{y_1}|\overrightarrow{N}_{y_2}|\cdots|\overrightarrow{N}_{y_{n}}].$ From (\ref{H1}), with $z=0$, the $(n+1)$st entry of $\overrightarrow{N}_{y_\ell }$ is $$(-1)^{\ell }\det X_A(1,\ell ),$$ which evaluated at $X_B=B$ is $$ (-1)^{\ell }\det A(1,\ell ). $$ Thus, the hypothesis that there exists an index $v$ such that $a_{kv}\neq 0$ and $\det{A(j,v)}\neq 0$ implies that the $(n+1)$st entry of $\overrightarrow{N}_{y_v}$ is nonzero. It follows that $J_{X_B=B}$ is equivalent to \begin{equation}\label{J2} \left[\begin{array}{cc} J_{X_A=A} & *\\ \mathbf{0}^T&\mathbf{r}^T \end{array}\right] \end{equation} for some $\mathbf{r}\neq \mathbf{0}$. Hence $B$ allows a full-rank Jacobian. Thus by Theorem~\ref{SAP}, every superpattern of $\mathcal{B}$ is spectrally arbitrary. \end{proof} \begin{example}\label{Three}\rm{ Starting with $\mathcal{T}_2$, the unique spectrally arbitrary pattern of order $2$ up to equivalence \cite{DJOD}, the bordering technique of Theorem~\ref{borderT2} gives spectrally arbitrary patterns of order $3$. In particular, consider the nilpotent matrix \[A=\left[ \begin{array}{rr} 1&-1\\ 1&-1\\ \end{array}\right]\in Q(\mathcal{T}_2) \rm{\quad and\ let \quad} X_A=\left[ \begin{array}{rr} x_1&-1\\ x_2&-1\\ \end{array}\right].\] Then $J_{X=A}$ has full rank and entry $a_{12}$ is non-Jacobian in $X_A$. Thus, the bordering technique of Theorem~\ref{borderT2} gives the matrix \[\left[ \begin{array}{rcr} 1&-1-b&1\\ 1&-1&0\\ b&-b&0 \end{array}\right],\] providing different spectrally arbitrary patterns depending on the chosen value of $b\neq 0$. Taking $b=\frac{1}{2}$ gives a sign pattern equivalent to $\mathcal{W}_3$ (see \cite{Britz}) with a full-rank Jacobian.
Taking $b=-\frac{1}{2}$ gives a pattern equivalent to a superpattern of $\mathcal{V}_3$ (see \cite{Britz}) with a full-rank Jacobian. Taking $b=-1$ gives a pattern equivalent to $\mathcal{V}_3$ with a full-rank Jacobian. This last option, using $b=a_{12}$, maintains sparsity (i.e., it gives a minimal spectrally arbitrary pattern). }\end{example} \begin{remark}{\rm With a well-chosen example, Theorem~\ref{borderT2} can be applied recursively. In particular, note that in (\ref{J2}), the $n$ variables of $X_A$ are used to show that $B$ allows a full-rank Jacobian. Thus, at most one of the nonzero entries in row $n+1$ of $B$ needs to be Jacobian in $X_B.$ Further, an entry in row $n+1$ that is Jacobian in $X_B$ can be chosen to be any nonzero position $(n+1,v)$ for which $\det A(j,v)$ is nonzero. Note that, since the last column of $B$ has only one nonzero entry, $\det B(n+1,v)=(-1)^{j+n} \det A(j,v)$. Thus, if there is more than one $v$ with $a_{kv}\neq 0$, and $\det A(j,v)\neq 0$, then bordering can be repeated recursively, applying it to $(j,k)=(n+1,k)$ in $B$. }\end{remark} \begin{example}\label{KN}{\rm Consider the nilpotent realization $$\left[\begin{array}{rrr} 1&-1&0\\ 1&0&-1 \\ 1&0&-1 \end{array}\right] $$ of the spectrally arbitrary pattern $\mathcal{V}_3$. By applying Theorem~\ref{borderT2} with $b=k=1$, $j=3$ and $v=2$, and repeating recursively, increasing $j$ but keeping $b=k=1$ and $v=2$, a spectrally arbitrary pattern $\mathcal{K}_n$ is obtained for $n\geq 4$, with \[\mathcal{K}_n=\left[\begin{array}{ccccccc} +&-&&&&&\\ +&0&-&&&O&\\ 0&0&-&+&&&\\ 0&-&0&0&+&&\\ \vdots&\vdots&\vdots&\vdots&\ddots&\ddots&\\ 0&-&0&0&\cdots&0&+\\ +&-&0&0&\cdots&0&0 \end{array} \right].\] The nonzero entries in a nilpotent realization of $\mathcal{K}_n$ have magnitude $1$. }\end{example} As far as we know, the spectrally arbitrary sign patterns $\mathcal{B}_n$ in Example~\ref{BN} and $\mathcal{K}_n$ in Example~\ref{KN} have not previously appeared in the literature. \medskip \section{General bordering for $n=3$} In Theorems~\ref{borderT} and \ref{borderT2}, we restricted to bordering with standard unit vectors in the place of $\mathbf{x}$ and $\mathbf{z}$ in (\ref{B}). We next illustrate the more general bordering (\ref{B}) with a couple of examples. \begin{example}\label{T3}\rm{ Starting with the nilpotent realization $A$ of $\mathcal{T}_2$ given in Example~\ref{Three}, a nilpotent realization of $\mathcal{T}_3$ can be obtained as follows: \begin{eqnarray*} B=\left[ \begin{array}{rr|r} 1&0&0\\ 0&1&0\\ \hline -\frac{1}{2}&1&1\\ \end{array}\right] \left[ \begin{array}{rr|r} 1&-1&0\\ 1&-1&-1\\ \hline 0&0&0\\ \end{array}\right] \left[\begin{array}{rr|r} 1&0&0\\ 0&1&0\\ \hline \frac{1}{2}&-1&1\\ \end{array}\right]&=& \left[\begin{array}{rrr} 1&-1&0\\ \frac{1}{2}&0&-1\\ 0&\frac{1}{2}&-1\\ \end{array}\right]\in Q(\mathcal{T}_3). \end{eqnarray*} Matrix $B$ allows a full-rank Jacobian and hence (as is well-known \cite{DJOD}) every superpattern of $\mathcal{T}_3$ is spectrally arbitrary by Theorem~\ref{SAP}.
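This factorization and the nilpotency of $B$ can be checked symbolically; a minimal sketch using sympy:
\begin{verbatim}
from sympy import Matrix, Rational, eye, zeros

L = Matrix([[1, 0, 0], [0, 1, 0], [Rational(-1, 2), 1, 1]])
M = Matrix([[1, -1, 0], [1, -1, -1], [0, 0, 0]])
R = Matrix([[1, 0, 0], [0, 1, 0], [Rational(1, 2), -1, 1]])
B = L * M * R
assert L * R == eye(3)                 # R is the inverse of L
assert B == Matrix([[1, -1, 0],
                    [Rational(1, 2), 0, -1],
                    [0, Rational(1, 2), -1]])
assert B**3 == zeros(3, 3)             # B is nilpotent
\end{verbatim}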
}\end{example} \begin{example}\label{U3} \rm{Starting with the nilpotent realization $A$ of $\mathcal{T}_2$ given in Example~\ref{Three}, a nilpotent realization of $\U_3$ (see \cite{Britz}) can be obtained as follows: \begin{eqnarray*} B=\left[ \begin{array}{rr|r} 1&0&0\\ 0&1&0\\ \hline -1&2&1\\ \end{array}\right] \left[ \begin{array}{rr|r} 1&-1&\frac{1}{2}\\ 1&-1&0\\ \hline 0&0&0\\ \end{array}\right] \left[\begin{array}{rr|r} 1&0&0\\ 0&1&0\\ \hline 1&-2&1\\ \end{array}\right]&=& \left[\begin{array}{rrr} \frac{3}{2}&-2&\frac{1}{2}\\ 1&-1&0\\ \frac{1}{2}&0&-\frac{1}{2}\\ \end{array}\right]. \end{eqnarray*} In particular, matrix $B$ is a nilpotent realization of $\U_3$ that allows a full-rank Jacobian and hence every superpattern of $\U_3$ is spectrally arbitrary by Theorem~\ref{SAP}. }\end{example} As demonstrated in \cite{Britz}, every spectrally arbitrary sign pattern of order $3$ is a superpattern of one of the four patterns $\mathcal{T}_3$, $\U_3$, $\mathcal{V}_3$ and $\mathcal{W}_3$. From Example~\ref{Three}, every superpattern of $\mathcal{V}_3$ and $\mathcal{W}_3$ is spectrally arbitrary by Theorem~\ref{borderT2} using a standard unit bordering of $\mathcal{T}_2$. The other two order $3$ patterns can be obtained by using a general bordering of $\mathcal{T}_2$ as demonstrated in Examples~\ref{T3} and \ref{U3}. \begin{corollary} Every spectrally arbitrary sign pattern of order $3$ is a superpattern of a pattern obtained from $\mathcal{T}_2$ by bordering as in $(\ref{B})$. \end{corollary} \section{Inertially arbitrary borderings} We conclude by extending the main results in Sections~\ref{unitequal} and \ref{unitunequal} to obtain inertially arbitrary sign patterns. The \emph{inertia} of a matrix $A$ is the ordered triple $i(A)=(a,b,c)$ for which $a$ is the number of eigenvalues of $A$ with positive real parts, $b$ is the number with negative real parts, and $c$ is the number of eigenvalues with real parts zero. The \emph{refined inertia} of a matrix $A$ is the ordered $4$-tuple $ri(A)=(a,b,c_1,c_2)$ for which $c_1$ is the algebraic multiplicity of zero as an eigenvalue for $A$ and $c_1+c_2=c$. Then $c_2$ is the number of nonzero purely imaginary eigenvalues of $A$. A sign pattern $\mathcal{A}$ of order $n$ is \emph{inertially arbitrary} if, for every non-negative integer choice of $(a,b,c)$ with $a+b+c=n$, there is some matrix $A\in Q(\mathcal{A})$ with $i(A)=(a,b,c)$. As with nilpotent matrices, a matrix $A$ of order $n$ with refined inertia $(0,0, c_1,c_2)$ \emph{allows a full-rank Jacobian} if the Jacobian matrix $J_{X=A}$ has rank $n$. The next theorem combines \cite[Theorem $2.13$]{CF} and \cite[Corollary $4.5$]{CGKOVV}. \begin{theorem}\label{NJ-IAP} Let $\mathcal{A}$ be a sign pattern and $A\in Q(\mathcal{A})$ be a matrix with $ri(A)=(0,0,c_1,c_2)$ for some $c_1\geq 2$. If $A$ allows a full-rank Jacobian, then \begin{enumerate} \item[(a)] $A$ is nonderogatory, and \item[(b)] every superpattern of $\mathcal{A}$ is inertially arbitrary. \end{enumerate} \end{theorem} Note that, unlike the context of Theorem~\ref{SAP}, $A$ is not necessarily irreducible if $A$ allows a full-rank Jacobian in Theorem~\ref{NJ-IAP}. \begin{example}\label{Ione}{\rm Let $$A=\left[\begin{array}{rrrr} 1&-1&0&0\\ 1&-1&0&0\\ 0&0&1&-2\\ 0&0&1&-1\\ \end{array} \right] \qquad {\rm and \qquad} X_A=\left[\begin{array}{rrrr} 1&x_1&0&0\\ 1&x_2&0&0\\ 0&0&1&x_3\\ 0&0&1&x_4\\ \end{array} \right].
$$ This matrix $A \in Q(\mathcal{T}_2\oplus\mathcal{T}_2)$ is a nonderogatory reducible matrix with refined inertia $(0,0,2,2)$ and $J_{X_A=A}$ has rank $4$. Therefore, by Theorem~\ref{NJ-IAP} every superpattern of $$\mathcal{A}=\left[\begin{array}{rrrr} +&-&0&0\\ +&-&0&0\\ 0&0&+&-\\ 0&0&+&-\\ \end{array}\right]$$ is inertially arbitrary. Note that while it is known \cite{C} that $\mathcal{T}_2 \oplus \mathcal{T}_2$ is spectrally arbitrary (and hence inertially arbitrary), it is not yet known if every superpattern of $\mathcal{T}_2 \oplus \mathcal{T}_2$ is spectrally arbitrary. }\end{example} The proof of the next theorem is the same as that for Theorem~\ref{borderT} except it uses Theorem~\ref{NJ-IAP} instead of Theorem~\ref{SAP}. \begin{theorem}\label{borderIT} Let $\mathcal{A}$ be a sign pattern. Suppose $A=[a_{ij}]\in Q(\mathcal{A})$ is a matrix having refined inertia $(0,0,c_1,c_2)$ with $c_1\geq 2$, and $A$ allows a full-rank Jacobian. Suppose $a_{kk}\neq0$ and $a_{kv}\neq 0$ for some $v\neq k$. If $\det{A(k,v)}\neq 0$, then $B$ in $(\ref{Be})$ has refined inertia $(0,0,c_1+1,c_2)$, $B$ allows a full-rank Jacobian and every superpattern of $\mathcal{B}=\sgn(B)$ is inertially arbitrary. \end{theorem} \begin{example}{\rm Let $A$ be the matrix in Example~\ref{Ione}. With $k=2$ and $v=1$, Theorem~\ref{borderIT} implies that every superpattern of $$\mathcal{B}=\left[\begin{array}{rrrrr} +&-&0&0&0\\ +&0&0&0&+\\ 0&0&+&-&0\\ 0&0&+&-&0\\ -&0&0&0&-\\ \end{array}\right]$$ is inertially arbitrary. Note that $\mathcal{B}$ is spectrally arbitrary since $\mathcal{B}$ is equivalent to $\mathcal{T}_2 \oplus \mathcal{V}_3$, but it is not known if every superpattern of $\mathcal{B}$ is spectrally arbitrary. }\end{example} The proof of the next theorem is the same as that for Theorem~\ref{borderT2} except it uses Theorem~\ref{NJ-IAP} instead of Theorem~\ref{SAP}. \begin{theorem}\label{borderIT2} Let $\mathcal{A}$ be a sign pattern. Suppose $A=[a_{ij}]\in Q(\mathcal{A})$ is a matrix with refined inertia $(0,0,c_1,c_2)$ for some $c_1\geq 2$ and $A$ allows a full-rank Jacobian. Suppose $a_{jk}$, $j\neq k$, is non-Jacobian for some choice of $X_A$. If $a_{kv}\neq0$ and $\det{A(j,v)}\neq 0$ for some $v$, then $B$ in $(\ref{B2})$ has refined inertia $(0,0,c_1+1,c_2)$, $B$ allows a full-rank Jacobian, and every superpattern of $\mathcal{B}=\sgn(B)$ is inertially arbitrary. \end{theorem} \begin{example}{\rm Consider the matrix $$A= \left[\begin{array}{rrrrr} -1 &-1&-1&0&0\\ 2&1&1&0&0\\ 0&0&0&-1&-1\\ 0&-1&0&0&-1\\ -1&0&0&0&0 \end{array}\right], {\rm{\quad and \quad}} X_A= \left[\begin{array}{rrrrr} -1 &x_1&-1&0&0\\ 2&x_2&x_3&0&0\\ 0&0&0&-1&x_4\\ 0&x_5&0&0&-1\\ -1&0&0&0&0 \end{array}\right]. $$ Matrix $A$ has sign pattern $\mathcal{G}_5$ from \cite{KOV} (see also Section 5.3 of \cite{CGKOVV}), $A$ has refined inertia $(0,0,3,2)$, $J_{X=A}$ has full rank and entry $(1,3)$ is non-Jacobian. Thus with $j=1$, $k=3$, $v=4$, and $b=-1$ in Theorem~\ref{borderIT2}, we obtain the inertially arbitrary pattern $$\mathcal{B}=\left[\begin{array}{rrrrrr} - &-&0&0&0&+\\ +&+&+&0&0&0\\ 0&0&0&-&-&0\\ 0&-&0&0&-&0\\ -&0&0&0&0&0\\ 0&0&0&+&+&0 \end{array}\right].$$ Since $\mathcal{G}_{5}$ has no nilpotent realization, it follows that $\mathcal{B}$ has no nilpotent realization; thus $\mathcal{B}$ is not spectrally arbitrary.
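The refined inertia of this matrix $A$ can be confirmed symbolically from its characteristic polynomial; a small sketch using sympy:
\begin{verbatim}
import sympy as sp

A = sp.Matrix([[-1, -1, -1,  0,  0],
               [ 2,  1,  1,  0,  0],
               [ 0,  0,  0, -1, -1],
               [ 0, -1,  0,  0, -1],
               [-1,  0,  0,  0,  0]])
p = A.charpoly(sp.Symbol('z')).as_expr()
print(sp.factor(p))   # z**3*(z**2 + 1): eigenvalues 0, 0, 0, i, -i,
                      # confirming ri(A) = (0, 0, 3, 2)
\end{verbatim}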
Note that, for $n\geq 2$, if we use the sign pattern $\mathcal{G}_{2n+1}$ with the matrix $\tilde{A}_{2n+1}$ listed in Section 5.3 of \cite{CGKOVV}, then $\tilde{A}_{2n+1}$ has refined inertia $(0,0,2n-1,2)$, $\det(\tilde{A}_{2n+1}(1,4))\neq 0$, and entry $(1,3)$ is non-Jacobian. Thus, applying Theorem~\ref{borderIT2} with $j=1$, $k=3$, $v=4$, and $b=-1$ to $\tilde{A}_{2n+1}$, we can construct an inertially arbitrary sign pattern with no nilpotent realization of each even order $2n+2\geq 6$. In \cite{KOV}, only odd order sign patterns were provided with these conditions.} \end{example} \textbf{Acknowledgements.} This research was initiated when the third author visited the University of Victoria with support from the Pacific Institute for Mathematical Sciences (PIMS). The research is partially supported by the authors' NSERC Discovery Grants. The authors thank an anonymous referee for comments that helped to clarify parts of this paper. \bigskip \begin{center} \textbf{References} \end{center}
{ "timestamp": "2017-08-31T02:08:41", "yymm": "1708", "arxiv_id": "1708.09237", "language": "en", "url": "https://arxiv.org/abs/1708.09237", "abstract": "We develop a matrix bordering technique that can be applied to an irreducible spectrally arbitrary sign pattern to construct a higher order spectrally arbitrary sign pattern. This technique generalizes a recently developed triangle extension method. We describe recursive constructions of spectrally arbitrary patterns using our bordering technique, and show that a slight variation of this technique can be used to construct inertially arbitrary sign patterns.", "subjects": "Rings and Algebras (math.RA)", "title": "Bordering for spectrally arbitrary sign patterns", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692339078752, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.7079585089260273 }
https://arxiv.org/abs/2205.15857
Reflective Graphs, Ollivier curvature, effective diameter, and rigidity
We give a discrete Bonnet Myers type theorem for the effective diameter assuming positive Ollivier curvature. We prove that this diameter bound is attained if and only if the graph is a cocktail party graph, a Johnson graph, a halved cube, a Schläfli graph, a Gosset graph, or a cartesian product of the mentioned graphs with same Ollivier curvature. As a key step in the proof, we introduce the notion of reflective graphs as graphs such that for any two neighbors there exists a certain self-inverse automorphism mapping one neighbor to another. We classify these graphs as arbitrary cartesian products of the graphs mentioned before.
\section{Introduction} Let $G=(V,E)$ be a simple, finite, connected graph. We define the effective diameter \[ \diam_{\operatorname{eff}}(G) = \frac 1 {|V|^2} \sum_{x,y \in V} d(x,y) \] where $d$ is the combinatorial graph distance. We always assume that graphs are simple, finite and connected. The Ollivier Ricci curvature $\kappa$ of an edge $x\sim y$ is given by (see \cite{munch2017ollivier,ollivier2007ricci,lin2011ricci}) \[ \kappa(x,y) := \inf_{\substack{f(y)-f(x)=1\\ \|\nabla f\|_\infty = 1}} \Delta f(x) - \Delta f(y) \] where $\|\nabla f\|_\infty:= \max_{x\sim y} |f(y)-f(x)|$ and $\Delta: \mathbb{R}^V \to \mathbb{R}^V$, \[ \Delta f(x):=\sum_{y\sim x} (f(y)- f(x)). \] It is shown in \cite{jost2021characterizations} that Ollivier curvature coincides with Forman curvature, a curvature notion on CW complexes defined via a discrete Bochner formula. The degree $\operatorname{Deg}(x)$ of a vertex $x \in V$ is defined as $\operatorname{Deg}(x):= |\{y \in V: y \sim x\}|$. It is well known that a positive lower curvature bound $\kappa \geq K$ implies an upper diameter bound, see \cite{ollivier2009ricci,liu2016bakry,munch2017ollivier}, given by $ \operatorname{diam}(G):=\max_{x,y}d(x,y) \leq \frac{2\max \operatorname{Deg}}{K}. $ In \cite{cushing2018rigidity} it was investigated for which graphs equality holds. It turned out that under the additional assumption that for all $x\in V$ there exists $y \in V$ with $d(x,y)=\operatorname{diam}(G)$, the graphs for which equality holds are precisely the following: Cocktail party graphs, Johnson graphs $J(2n,n)$, halved cubes on $2\cdot 4^n$ vertices, Gosset graph, and Cartesian products of the mentioned graphs with same curvature. It is still open whether the additional assumption can be dropped, and this article might be useful for solving this question. Specifically, in this paper, we give a diameter bound for the effective diameter and classify all graphs for which this bound is attained. We now give the diameter bound. \begin{theorem} Let $G=(V,E)$ be a graph with $\kappa(x,y) \geq K >0$ for all $x\sim y$. Then, \[ \diam_{\operatorname{eff}}(G) \leq \frac{\max \operatorname{Deg}}{K}. \] \end{theorem} This theorem reappears as Theorem~\ref{thm:EffDiamBound}. We say a graph is effective Bonnet Myers sharp if the effective diameter bound holds with equality. We now characterize the effective Bonnet Myers sharp graphs. Throughout the paper, we use the abbreviation "T.f.a.e." for "The following are equivalent". \begin{theorem} Let $G=(V,E)$ be a graph with $0<K:=\min_{x\sim y} \kappa(x,y)$. T.f.a.e.: \begin{enumerate}[(i)] \item $\diam_{\operatorname{eff}}(G) = \frac{\max \operatorname{Deg}}{K}$, \item $G$ is a graph from the following list: \begin{enumerate}[(1)] \item Cocktail party graph, \item Johnson graph, \item Halved cube, \item Schläfli graph, \item Gosset graph, \item Cartesian products of the above graphs with same curvature. \end{enumerate} \end{enumerate} \end{theorem} This theorem reappears as Theorem~\ref{thm:effBMsharpChar}. It turns out that the effective Bonnet Myers sharp graphs all share a global combinatorial property which we call reflectiveness and discuss in the next section. We will show that reflectiveness almost characterizes effective Bonnet Myers sharpness. The only difference is that reflectiveness is compatible with arbitrary cartesian products, but effective Bonnet Myers sharpness is only compatible with cartesian products when the factors have the same curvature.
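To make the definitions concrete: the following minimal numerical sketch (assuming the Python libraries networkx and scipy; the function names are ours) computes $\diam_{\operatorname{eff}}$ and the Ollivier curvature of an edge via the linear program implicit in its definition, and confirms that the effective diameter bound above holds with equality for the cocktail party graph $CP(3)$, i.e. the octahedron:
\begin{verbatim}
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_kappa(G, x, y):
    # kappa(x,y) = min Delta f(x) - Delta f(y) subject to
    # f(y) - f(x) = 1 and |f(u) - f(v)| <= 1 on every edge
    nodes = list(G.nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    c = np.zeros(len(nodes))
    for z in G[x]:
        c[idx[z]] += 1
    c[idx[x]] -= G.degree(x)
    for w in G[y]:
        c[idx[w]] -= 1
    c[idx[y]] += G.degree(y)
    A_ub, b_ub = [], []
    for u, v in G.edges:
        row = np.zeros(len(nodes))
        row[idx[u]], row[idx[v]] = 1, -1
        A_ub += [row, -row]
        b_ub += [1, 1]
    A_eq = np.zeros((1, len(nodes)))
    A_eq[0, idx[y]], A_eq[0, idx[x]] = 1, -1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1],
                  bounds=[(None, None)] * len(nodes))
    return res.fun

def effective_diameter(G):
    d = dict(nx.all_pairs_shortest_path_length(G))
    return sum(d[u][v] for u in G for v in G) / G.number_of_nodes() ** 2

G = nx.complete_multipartite_graph(2, 2, 2)      # cocktail party graph CP(3)
K = min(ollivier_kappa(G, u, v) for u, v in G.edges)
print(effective_diameter(G), max(d for _, d in G.degree) / K)    # 1.0 1.0
\end{verbatim}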
The key argument for classifying reflective graphs is the following dichotomy: \begin{itemize} \item Locally disconnected $\Rightarrow$ Cartesian product \item Locally connected $\Rightarrow$ Distance transitive \end{itemize} Here, locally connected means that the induced subgraph of the neighborhood of any vertex is connected. We then show that in the locally connected case, the graph has to be Lichnerowicz sharp, i.e. $K=\lambda$ where $K$ is the minimal curvature, and $\lambda$ is the smallest positive eigenvalue of $-\Delta$. We then use the classification of distance regular Lichnerowicz sharp graphs with an additional spectral condition from \cite[Theorem~6.5]{cushing2018rigidity} to classify reflective graphs. In the appendix, we briefly explain what happens when replacing Ollivier by Bakry Emery curvature. It turns out that this is quite easy. Particularly, one gets the same diameter bound as in the Ollivier case. For Bakry Emery curvature however, this bound is attained only for hypercubes, and this almost immediately follows from \cite{liu2017rigidity}. \subsection{Historical background} Curvature is one of the fundamental concepts in geometry. Curvature bounds have many important analytic, geometric and topological implications. For example, the Bonnet Myers diameter bound states that a Riemannian manifold with a positive lower Ricci curvature bound has at most the diameter of the round sphere with the same curvature \cite{myers1941riemannian}. Moreover, Cheng's sphere theorem states that this diameter bound is only attained for the round sphere \cite{cheng1975eigenvalue}. In the last decade, there has been increasing interest in discrete Ricci curvature notions. There are four different basic discrete Ricci curvature notions, Ollivier curvature \cite{ollivier2007ricci,ollivier2009ricci}, Bakry Emery curvature \cite{schmuckenschlager1998curvature,lin2010ricci}, Forman curvature \cite{forman2003bochner,jost2021characterizations}, and entropic curvature \cite{erbar2012ricci,mielke2013geodesic}, and all of them come with several, sometimes non-linear, modifications \cite{bauer2015li,munch2014li,dier2017discrete, kempton2020large,eidi2020ollivier,ikeda2021coarse, asoodeh2018curvature,devriendt2022discrete, topping2021understanding}. Due to the lack of a chain rule for the discrete Laplacian, these curvature notions are all different. In this paper, we focus on the Ollivier curvature. Most of the curvature notions allow for a discrete Bonnet Myers theorem, i.e., an upper diameter bound under a positive Ricci bound, and this diameter bound is sharp for Ollivier and Bakry Emery. In the attempt to classify the graphs for which the diameter bound is attained, interesting relations to spectral graph theory were discovered \cite{cushing2018rigidity}. Particularly, it is conjectured that all these graphs are strongly spherical. While strongly spherical graphs are fully classified \cite{koolen2004structure}, the classification of spherical graphs is still open. It turns out that this paper is right at the interface between these questions: We give a complete classification of all graphs for which the effective diameter bound is attained. To do so, we introduce the notion of reflective graphs. It turns out that the reflective graphs are a strict subclass of spherical graphs and a strict superclass of strongly spherical graphs. \subsection{Structure of the paper} The paper is structured as follows. In the next section, we introduce and classify reflective graphs.
In Section~\ref{sec:Curvature}, we show that effective Bonnet Myers sharp graphs are reflective. Finally in Section~\ref{sec:Classifications}, we summarize the results and characterize effective Bonnet Myers sharp graphs, reflective graphs, and distance regular Lichnerowicz sharp graphs, improving Theorem~6.5 in \cite{cushing2018rigidity}. In the appendix, we briefly give effective diameter bounds in terms of Bakry Emery curvature and characterize equality. \section{Reflective Graphs}\label{sec:Reflective} Let $G=(V,E)$ be a simple, finite graph. Let $x\sim y$ be vertices. We write \begin{itemize} \item $V_x^y := \{x':d(x',x)<d(x',y) \}$, \item $V^{xy} := \{z:d(z,x)=d(z,y) \}$. \end{itemize} We remark that $(V_x^y,V_y^x,V^{xy})$ is a partition of $V$. \begin{definition} A map $\phi : V\to V$ is called a \emph{reflection} from $x$ to $y$ if \begin{enumerate}[(i)] \item $\phi$ is a graph automorphism, \item $\phi^2 = id$, \item $\phi(x)=y$, \item $E(V_x^y,V_y^x) = \{(x',\phi(x')) : x' \in V_x^y\}$, \item $\phi(z)=z$ for all $z \in V^{xy}$. \end{enumerate} We say a graph is reflective if for all $x\sim y$, there exists a reflection from $x$ to $y$. We remark that this reflection from $x$ to $y$ is unique, and we call it $\phi_{xy}$. \end{definition} \begin{definition} For vertices $x\sim y$, and $x'\sim y'$, we write $(x,y) \parallel (x',y')$ if $x' \in V_x^y$ and $y' \in V_y^x$. In this case, we say the edges are parallel. \end{definition} We remark that this relation depends on the order of the pairs. It is easy to check that $(x,y) \parallel (x',y')$ if and only if $x' \in V_{x}^y$ and $y' = \phi_{xy}(x')$. The aim of this section is to classify reflective graphs. In the next subsection, we give fundamental properties of parallel edges. Afterwards we show that locally disconnected reflective graphs are cartesian products, and that locally connected reflective graphs are distance transitive. Our next step is to prove that locally connected reflective graphs are Lichnerowicz sharp, and we then apply the classification of Lichnerowicz sharp distance regular graphs from \cite{cushing2018rigidity}. We remark that a similar dichotomy between locally connected and disconnected graphs also appears in the classification of strongly spherical graphs \cite{koolen2004structure}. \subsection{Parallel edges} We first give a useful characterization of parallel edges. \begin{lemma}\label{lem:ParallelEquivalence} Let $G=(V,E)$ be reflective. Let $x\sim y$ and $x'\sim y'$ be vertices. T.f.a.e.: \begin{enumerate}[(i)] \item $(x,y) \parallel (x',y')$, \item $V_x^y = V_{x'}^{y'}$ and $V_y^x = V_{y'}^{x'}$. \end{enumerate} Particularly, if any of the equivalent conditions is satisfied, then \[ d(x,z)-d(y,z) = d(x',z)-d(y',z) \quad \mbox{ for all } z \in V. \] \end{lemma} \begin{proof} We first show $(i) \Rightarrow (ii)$, and particularly, the first equation in $(ii)$. Suppose $z \in V_{x}^{y}$. We aim to show $z \in V_{x'}^{y'}$. Let $(x_k)_{k=0}^n$ be a shortest path from $x$ to $z$. We observe that $x_k \in V_{x}^y$. Let $y_k := \phi_{xy}(x_k) \in V_y^x$. We show via induction that $x_k \in V_{x'}^{y'}$ and $y_k \in V_{y'}^{x'}$. The induction base $k=0$ is clear. Now, assume $x_{k-1} \in V_{x'}^{y'}$ and $y_{k-1} \in V_{y'}^{x'}$. Suppose $x_k \in V^{x'y'}$ for which we find a contradiction. Then, $x_{k} \sim x_{k-1}$ implying \[ x_k = \phi_{x'y'}(x_k) \sim \phi_{x'y'}(x_{k-1}) =y_{k-1}.
\] This is a contradiction to $(iv)$ in the definition of reflections, as $y_{k-1} \in V_y^x$ has two different neighbors in $V_x^y$, namely $x_{k-1}$ and $x_k$, and the contradiction shows $x_k \notin V^{x'y'}$. Next, suppose $x_k \in V_{y'}^{x'}$ for which we find a contradiction. Then, $x_{k-1} \in V_{x'}^{y'}$ has two different neighbors in $V_{y'}^{x'}$, namely $x_k$ and $y_{k-1}$. We remark that they are different as $x_k \in V_x^y$ and $y_{k-1} \in V_y^x$. This is a contradiction to $(iv)$ in the definition of reflections. The above two cases show $x_k \in V_{x'}^{y'}$. Similarly, $y_k \in V_{y'}^{x'}$, finishing the induction. Thus, $z \in V_{x'}^{y'}$. This proves $V_x^y \subseteq V_{x'}^{y'}$. The reverse inclusion follows analogously, showing $V_x^y = V_{x'}^{y'}$. By interchanging the role of $x$ and $y$, the second equation in $(ii)$ follows immediately, proving $(i) \Rightarrow (ii)$. We finally prove $(ii) \Rightarrow (i)$. By $(ii)$, we have $x' \in V_{x'}^{y'}=V_x^y$ and $y' \in V_{y'}^{x'}=V_{y}^x$ proving $(ii) \Rightarrow (i)$ and finishing the proof of the equivalence. The "particularly" statement immediately follows from $(ii)$. \end{proof} We next show that one can find parallel edges close to every vertex. We write $B_1(x)=\{y :d(x,y) \leq 1\}$ for the balls of radius one. \begin{lemma}\label{lem:ParellelToBall} Let $x\sim y \in V$ and $z \in V$. Then, there exist $x',y' \in B_1(z)$ such that \[ (x,y) \parallel (x',y'). \] \end{lemma} \begin{proof} Suppose not. If $z \in V_x^y$, then $(x,y) \parallel (z,\phi(z))$. If $z \in V_y^x$, then $(x,y) \parallel (\phi(z),z)$. Hence, we can assume $z \in V^{xy}$. By induction, we assume $n:=d(x,z)$ to be minimal. Let $p \sim z$ on a geodesic from $z$ to $x$, i.e. $d(p,x)=n-1$. We notice \[ n-1 \leq d(p,y) \leq n. \] \begin{description} \item[Case 1: $d(p,y)=n-1$.] Let $x' := \phi_{pz}(x)$, and $y' :=\phi_{xy}(x')$. Then $n-1=d(x',z) = d(y',z)$. Moreover $(x',y') \parallel (x,y)$ contradicting minimality of $n$ as "$\parallel$" is an equivalence relation by Lemma~\ref{lem:ParallelEquivalence}. \item[Case 2: $d(p,y)=n$.] Let $y' :=\phi_{xy}(p)$. As $p \sim z \in V^{xy}$, we have $y' \sim z$. Hence, $(x,y) \parallel (p,y')$ and $p,y' \in B_1(z)$ being a contradiction. \end{description} Now the proof is finished by contradiction. \end{proof} \subsection{Cartesian products} We aim to show that locally disconnected graphs are cartesian products. We recall that locally disconnected means that the induced subgraph on the neighborhood of a vertex is disconnected. We now give a characterization of cartesian product graphs from \cite{MathoverflowCartesianProducts,imrich2007recognizing}. \begin{theorem}\cite[Theorem~3.1]{imrich2007recognizing} \label{thm:CartesianFacorization} Let $G=(V,E)$ be a graph. T.f.a.e.: \begin{enumerate}[(i)] \item $G$ is a cartesian product of two non-trivial graphs, \item The graph $(E,\sim)$ is disconnected where $(x,y) \sim (x',y')$ iff \begin{enumerate}[(a)] \item $d(x,x')+d(y,y')\neq d(x,y')+ d(y,x')$ or \item $\{x\}=\{x'\}= S_1(y) \cap S_1(y')$ maybe after interchanging $x$ and $y$ or $x'$ and $y'$. \end{enumerate} \end{enumerate} \end{theorem} We now investigate the connection between the relation "$\sim$" from the theorem above and the parallel relation "$\parallel$" for reflective graphs. \begin{lemma}\label{lem:parallelSim} Let $G=(V,E)$ be a reflective graph. Let $(x,y) \parallel (x',y') \in E$ and $(v,w) \in E$.
T.f.a.e.: \begin{enumerate}[(i)] \item $d(x,v)+d(y,w)\neq d(x,w)+d(y,v)$, \item $(x,y) \sim (v,w)$, \item $(x',y') \sim (v,w)$. \end{enumerate} \end{lemma} \begin{proof} For reflective graphs, condition (b) will never be decisive as $|S_1(y) \cap S_1(y')| \geq 2$ whenever $d(y,y')=2$, and if $d(y,y')=1$, then (a) is satisfied as well. This proves $(i)\Leftrightarrow (ii)$. We now prove $(i) \Rightarrow (iii)$. By Lemma~\ref{lem:ParallelEquivalence} for $z \in \{v,w\}$, \[ d(z,x) - d(z,y) = d(z,x') - d(z,y'). \] Therefore, \begin{align*} d(x',v) + d(y',w) &= d(v,x) - d(v,y) + d(v,y') + d(w,x') + d(w,y) - d(w,x) \\ &\neq d(v,y') + d(w,x') \end{align*} proving $(i) \Rightarrow (iii)$. The reverse direction $(iii) \Rightarrow (i)$ can be proven similarly. \end{proof} We next show that locally disconnected graphs are cartesian products. We write $S_n(x)=\{y:d(x,y)=n\}$ for the spheres. \begin{lemma}\label{lem:LocDisconnectedImpliesCartesianProduct} Let $G=(V,E)$ be a reflective graph. Let $z \in V$. Suppose the induced subgraph of $S_1(z)$ is disconnected. Then, $G$ is a cartesian product with non-trivial factors. \end{lemma} \begin{proof} Let $p,p' \in S_1(z)$ in different connected components. Suppose $G$ is not a cartesian product. Then, by Theorem~\ref{thm:CartesianFacorization}, there is a path $(z_k,p_k)_{k=0}^n$ with respect to $\sim$ and $z_0=z_n=z$ and $p_0=p$ and $p_n=p'$. By Lemma~\ref{lem:ParellelToBall}, there exist $(z_k',p_k') \in B_1(z)$ with $(z_k',p_k') \parallel (z_k,p_k)$. By Lemma~\ref{lem:parallelSim}, this is also a path with respect to $\sim$. Let $(x',y'):=(p_k',z_k')$ be the first edge with a vertex outside of $C \cup \{z\}$ where $C$ is the connected component of $p$ within $S_1(z)$. Then, $(x,y):=(z_{k-1}',p_{k-1}') \in C \cup \{z\}$. We now show via case distinction that $(x,y) \not\sim (x',y')$ which will be a contradiction. We will use that if $x\neq z \neq x'$, then, $d(x,x')=2$ as $x \in C$ but $x' \notin C$. Moreover, if $x',y' \neq z$, then both $x',y' \notin C$ as $x' \sim y'$. \begin{description} \item[Case 1:] $x=z=x'$. Then, $d(x,x') + d(y,y')=0+2=1+1=d(x,y')+d(y,x')$. \item[Case 2:] $x=z$ and $x',y' \neq z$. Then, $d(x,x') + d(y,y')=1+2=1+2=d(x,y')+d(y,x')$. \item[Case 3:] $z\neq x,y$ and $x'=z$. Then, $d(x,x') + d(y,y')=1+2=2+1=d(x,y')+d(y,x')$. \item[Case 4:] $z\neq x,y$ and $x',y' \neq z$. Then, $d(x,x') + d(y,y')=2+2=2+2=d(x,y')+d(y,x')$. \end{description} The remaining cases are analogous. This finishes the case distinction, giving the contradiction and finishing the proof. \end{proof} We finally prove that a cartesian product is reflective if and only if all factors are reflective. \begin{lemma}\label{lem:CartesianProductInheritsReflective} Let $G_i=(V_i,E_i)$ be graphs for $i=1,2$. T.f.a.e.: \begin{enumerate}[(i)] \item $G_1$ and $G_2$ are reflective, \item $G_1\times G_2$ is reflective. \end{enumerate} \end{lemma} \begin{proof} We first prove $(i)\Rightarrow (ii)$. Let $(x,y)=((x_1,x_2),(y_1,y_2))$ be an edge in $G_1 \times G_2$. W.l.o.g., $x_2=y_2$, as we can interchange $G_1$ and $G_2$. Then, $x_1 \sim y_1$, and we have a reflection $\phi_{x_1y_1}$ in $G_1$. We extend this to \[ \phi(z_1,z_2) := (\phi_{x_1y_1}(z_1), z_2). \] It is straightforward to check that $\phi$ is a reflection. This proves $(i) \Rightarrow (ii)$. We finally prove $(ii) \Rightarrow (i)$. We show that $G_1$ is reflective. Let $x_1 \sim y_1 \in V_1$ and $x_2=y_2 \in V_2$. Then, there is a reflection $\phi$ from $(x_1,x_2)$ to $(y_1,y_2)$.
We notice \[ V_{(x_1,x_2)}^{(y_1,y_2)} = (V_1)_{x_1}^{y_1} \times V_2. \] Hence, for all $z_1 \in V_1$, there exists $z_1' \in V_1$ such that for all $z_2 \in V_2$, \[ \phi(z_1,z_2) = (z_1',z_2). \] We define $\phi_{x_1y_1}(z_1):=z_1'$. Again, it is straightforward to check that $\phi_{x_1y_1}$ is a reflection. This proves $(ii) \Rightarrow (i)$ and finishes the proof. \end{proof} \subsection{Locally connected implies distance transitive} Having investigated the locally disconnected case, we now come to the locally connected case, and make use of the automorphisms $\phi_{xy}$ to show distance transitivity. We first show that the class of reflective graphs is closed under taking the induced subgraph on $V_x^y$ for $x\sim y$. We recall that a subset $W\subseteq V$ is convex if every shortest path with first and last vertex in $W$ must have all vertices in $W$. \begin{lemma}\label{lem:VxyIsometricReflective} Let $G=(V,E)$ be reflective and $x \sim y$. Then, the induced subgraph on $V_x^y$ is convex and reflective. \end{lemma} \begin{proof} We first show that $V_x^y$ is convex. Let $v=v_0 \sim \ldots \sim v_n = v'$ be a shortest path in $G$ with $v,v' \in V_x^y$. Let $w:= \phi_{xy}(v)$. By Lemma~\ref{lem:ParallelEquivalence}, we have $v' \in V_x^y=V_{v}^w$. Thus, $(w,v_0,\ldots,v_n)$ is a shortest path showing $v_k \in V_v^w = V_x^y$ showing that every shortest path from $v$ to $v'$ stays in $V_x^y$. We now show that the induced subgraph on $V_x^y$ is reflective. We first prove that $\phi_{vw} : V_x^y \to V_x^y$ for all neighbors $v,w \in V_x^y$. Let $v' \in V_x^y$. We want to show $\phi_{vw}(v') \in V_x^y$. If $v' \in V^{vw}$, then $\phi_{vw}(v')=v' \in V_x^y$. If $v' \in V_w^v$, we interchange $v$ and $w$, so we are left with the case $v' \in V_v^w$. Let $w'=\phi_{vw}(v')$. Then, $(v,w) \parallel (v',w')$. By Lemma~\ref{lem:ParallelEquivalence}, \begin{align*} d(v,x)-d(w,x) &= d(v',x)-d(w',x), \mbox{ and }\\ d(v,y)-d(w,y) &= d(v',y)-d(w',y). \end{align*} Subtracting the equations gives $d(w',y)-d(w',x)=1$ as $v,w,v' \in V_x^y$. Thus, $w' \in V_x^y$. Thus, $\phi_{vw}$ maps $V_x^y$ to itself as claimed. As $V_x^y$ is an isometric subgraph, we see that $\left(V_x^y\right)_v^w = V_v^w \cap V_x^y$ and $\left(V_x^y\right)_w^v = V_w^v \cap V_x^y$. Thus, $\phi_{vw}|_{V_x^y}$ is a reflection on $V_x^y$. This proves that the induced subgraph on $V_x^y$ is reflective as $v \sim w \in V_x^y$ were chosen arbitrarily. This finishes the proof. \end{proof} We now prove that the neighborhood of a vertex is isometric. We recall that a subset $W\subseteq V$ is called isometric if for all $w,w'$, there exists a geodesic from $w$ to $w'$ entirely in $W$. \begin{lemma}\label{lem:S1isometric} Let $G=(V,E)$ be reflective and $x \in V$. Assume $S_1(x)$ is connected. Then, $S_1(x)$ is isometric. \end{lemma} \begin{proof} Suppose not. Then, there exist vertices in $S_1(x)$ such that a shortest path within $S_1(x)$ between them has length at least three. Let the first four vertices of such a path be denoted by $y\sim z \sim z' \sim y' \in S_1(x)$. We aim to find $w\in S_1(x)$ with $y \sim w \sim y'$ which would be a contradiction. We define \[ v:=\phi_{xy}(z') = \phi_{xz'}(y). \] As $d(y,z')=2$, we get $v \sim y,z,z'$ and $v \not\sim x$. Moreover, $v \not \sim y'$ as otherwise, $v \in V_y^x$ would have two different neighbors in $V_x^y$, namely $y'$ and $z'$. Similar to $v$, we define \[ v':=\phi_{xy'}(z) = \phi_{xz}(y'). \] By similar arguments as before, we get $v' \sim z,z',y'$ and $v' \not\sim x,y$.
As $d(x,v)=d(y',v) = 2$, and as $v\sim z$, we get \[ v= \phi_{xy'}(v) \sim \phi_{xy'}(z) = v'. \] We finally define \[ w:= \phi_{xz}(v). \] Obviously, $w \in S_1(x)$. As $v\sim y$, we get \[ y=\phi_{xz}(y) \sim \phi_{xz}(v) = w. \] As $v\sim v'$, we get \[ w = \phi_{xz}(v) \sim \phi_{xz} (v') = y' \] as desired. This contradicts the assumption that shortest paths within $S_1(x)$ between $y$ and $y'$ have length three. The contradiction finishes the proof. \end{proof} We now show that intersections of the neighborhood of a vertex with certain spheres are isometric, and particularly, connected. \begin{lemma}\label{lem:connectedIntersection} Let $G=(V,E)$ be reflective. Let $x,x' \in V$ with $d(x,x')=n-1$ for some $n\geq 1$. Assume $S_1(x)$ is connected. Then, \[ S_1(x) \cap S_{n}(x') \mbox{ is isometric.} \] \end{lemma} \begin{proof} Suppose not. By induction, we can assume $n$ to be minimal. By Lemma~\ref{lem:S1isometric}, we can assume $n \geq 2$. Let $x''\sim x'$ be on a shortest path from $x'$ to $x$, i.e. $d(x,x'')=n-2$. We notice \[ S_1(x) \cap S_{n}(x') = S_1(x) \cap S_{n-1}(x'') \cap V_{x''}^{x'}. \] By induction hypothesis, we see that $S_1(x) \cap S_{n-1}(x'')$ is isometric. As $V_{x''}^{x'}$ is convex by Lemma~\ref{lem:VxyIsometricReflective}, it follows that $S_1(x) \cap S_{n-1}(x'') \cap V_{x''}^{x'}$ is also isometric as an intersection of an isometric and a convex set. This finishes the induction step and thus, the proof is complete. \end{proof} We finally show distance transitivity by suitably composing the automorphisms $\phi_{xy}$. \begin{lemma}\label{lem:DistTransitive} Let $G=(V,E)$ be reflective and locally connected. Then, $G$ is distance transitive. \end{lemma} \begin{proof} We first show vertex transitivity. Let $x,x' \in V $ and let $x=x_0\sim\ldots \sim x_n = x'$ be a path. Then, the automorphism $\phi_{x_{n}x_{n-1}} \circ \ldots \circ \phi_{x_1 x_0}$ maps $x$ to $x'$ showing vertex transitivity. We now show distance transitivity. Let $n \in \mathbb{N}$. Let $x,y,x',y' \in V$ with $d(x,y)=d(x',y')=n$. We aim to find an automorphism $\psi$ mapping $x$ to $x'$ and $y$ to $y'$. We proceed via induction over $n$. The case $n=0$ is clear by vertex transitivity. Now for $n>0$, we assume the claim to be true for $n-1$. Let $z\sim y$ on a shortest path from $x$ to $y$ and $z'\sim y'$ on a shortest path from $x'$ to $y'$. By induction assumption, we can assume $x=x'$ and $z=z'$. Hence, $y,y' \in S_1(z) \cap S_n(x)$. By Lemma~\ref{lem:connectedIntersection}, there is a path $y=y_0 \sim \ldots \sim y_m=y'$ in $S_n(x)$. Now we define \[ \psi:=\phi_{y_{m}y_{m-1}}\circ \ldots \circ \phi_{y_1y_0}. \] Clearly, $\psi(y) = y'$. Moreover as $d(y_k,x)=n$, we have $\phi_{y_ky_{k-1}}(x)=x$ for all $k$ showing that $\psi(x)=x=x'$. This finishes the induction, and the proof is now complete. \end{proof} \subsection{Lichnerowicz sharpness} In order to classify reflective graphs, we want to use the classification of Lichnerowicz sharp, distance regular graphs from \cite[Theorem~6.5]{cushing2018rigidity}. We now give the details. We recall \[ \Delta f(x) = \sum_{y\sim x} (f(y)-f(x)) \] and $Lip(1):=\{f \in \mathbb{R}^V : f(y)-f(x) \leq d(x,y) \mbox{ for all } x,y\in V\}$ and the Ollivier curvature $\kappa$ of an edge $x\sim y$ is \[ \kappa(x,y) = \inf_{\substack{f \in Lip(1)\\f(y)-f(x)=1}} \Delta f(x)-\Delta f(y). \] The Lichnerowicz estimate states that \[ \lambda \geq \min \kappa \] where $\lambda$ is the smallest positive eigenvalue of $-\Delta$.
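A concrete instance: for the cocktail party graph $CP(3)$ (the octahedron, which is distance regular with $b_0=4$ and $b_1=1$), the estimate holds with equality; a small numerical sketch (assuming networkx and numpy):
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.complete_multipartite_graph(2, 2, 2)         # CP(3): b0 = 4, b1 = 1
L = nx.laplacian_matrix(G).toarray().astype(float)  # L = -Delta
lam = np.sort(np.linalg.eigvalsh(L))[1]             # smallest positive eigenvalue
print(lam, 1 + 4 - 1)   # both are 4, so lambda = min kappa = 1 + b0 - b1
\end{verbatim}
Here $\min\kappa = 4$ is the value computed by the linear program sketched in the introduction, matching the formula $\kappa = 1+b_0-b_1$ of the next theorem.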
We show that reflective graphs are Lichnerowicz sharp, i.e., $K=\lambda$. For convenience, we restrict ourselves to locally connected graphs. We recall that a graph $G$ is distance regular with intersection array $(b_0,b_1,\ldots,b_{L-1}; c_1=1,\ldots, c_L)$ if it has diameter $L$, and if every vertex in $S_n(x)$ has $b_n$ neighbors in $S_{n+1}(x)$ and $c_n$ neighbors in $S_{n-1}(x)$, for arbitrary $x \in V$. We now show that locally connected reflective graphs are Lichnerowicz sharp. \begin{theorem}\label{thm:ReflectiveImpliesLichSharp} Let $G=(V,E)$ be locally connected and reflective. Assume $G$ has intersection array \[(b_0,b_1,\ldots,b_{L-1}; c_1=1,\ldots, c_L).\] Then, the non-normalized Ollivier curvature of all edges is \[ \kappa = 1+ b_0 - b_1. \] Moreover, $G$ is Lichnerowicz sharp, i.e. the smallest positive eigenvalue $\lambda$ of the non-normalized Laplacian $-\Delta$ satisfies \[ \lambda=\kappa. \] \end{theorem} \begin{proof} By the Lichnerowicz estimate $\lambda \geq \kappa$, it suffices to prove \[\kappa \geq 1+ b_0 - b_1 \geq \lambda.\] For showing $\lambda \leq 1+ b_0 - b_1$, we construct an eigenfunction $f$ with $\Delta f=-(1+b_0-b_1) f$. Let $x\sim y$. Let \[ f:=d(x,\cdot) - d(y,\cdot). \] We notice \[ b_1 = |S_2(x)\cap S_1(y)| = |V_y^x \cap S_1(y)| = |V_x^y \cap S_1(x)| \] and \[ b_0 = 1 + |V_x^y \cap S_1(x)| + |V^{xy} \cap S_1(x)|. \] Thus, \begin{align*} \Delta f(x) &= f(y)-f(x) + \sum_{z \in V^{xy}\cap S_1(x)}(f(z) - f(x)) + \sum_{z \in V_x^y\cap S_1(x)}(f(z)-f(x)) \\&= 2 + |V^{xy} \cap S_1(x)| \\&= 1+ b_0 - b_1 = -f(x)(1+ b_0 - b_1). \end{align*} Similarly, $\Delta f(y) = - f(y)(1+ b_0 - b_1)$. For $x' \in V_x^y$, we set $y'=\phi_{xy}(x')$ and get $f=d(x',\cdot)-d(y',\cdot)$ by Lemma~\ref{lem:ParallelEquivalence} showing $\Delta f(x') = -f(x')(1+ b_0 - b_1)$, and similarly for $y' \in V_y^x$. For $z \in V^{xy}$, we have with $\phi=\phi_{xy}$, \[ \Delta f(z) = \Delta (f\circ \phi)(z) = \Delta (d(y,\cdot)-d(x,\cdot))(z) = - \Delta f(z) \] giving $\Delta f(z)=0=f(z)$. This shows that $f$ is an eigenfunction proving $1+b_0-b_1 \geq \lambda$. We finally prove $\kappa \geq 1 + b_0 - b_1$. By distance transitivity, the curvature is constant. Let $x\sim y$. Let $f \in Lip(1)$ with $f(y)-f(x)=1$. We aim to show \[ \Delta f(x)-\Delta f(y) \geq 1+b_0 - b_1. \] Let $\phi:=\phi_{xy}$. We notice $\Delta f(y) = \sum_{z\sim x} (f(\phi(z))-f(y))$. Hence, \[ \Delta f(x)-\Delta f(y) = (f(y)-f(x))b_0 + \sum_{z \sim x} (f(z)- f(\phi(z))). \] We observe \begin{align*} \sum_{z\sim x} (f(z)- f(\phi(z))) = f(y)-f(x) + \sum_{x\sim z \in V_x^y } (f(z) - f(\phi(z))) + \sum_{x\sim z \in V^{xy}} (f(z)-f(\phi(z))) . \end{align*} The latter sum vanishes as $z=\phi(z)$ on $V^{xy}$. The second sum can be bounded from below by $-|S_1(x) \cap V_x^y|=-b_1$ as $z\sim \phi(z)$ and as $f \in Lip(1)$. Thus, \[ \sum_{z\sim x} (f(z)- f(\phi(z))) \geq 1 - b_1 \] giving \[ \Delta f(x)-\Delta f(y) \geq 1+b_0 - b_1 \] where we used $f(y)-f(x)=1$. This proves $\kappa \geq 1+b_0-b_1$ and finishes the proof of the theorem. \end{proof} \subsection{First step to classification} In this subsection, we aim to show that reflective graphs are cartesian products of Cocktail party graphs, Johnson graphs, halved cubes, the Schläfli graph, and the Gosset graph. We will prove in Section~\ref{sec:Classifications} that all graphs from the list are indeed reflective.
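The eigenfunction $f=d(x,\cdot)-d(y,\cdot)$ constructed in the proof above can also be checked numerically, e.g. for the Johnson graph $J(5,2)$, which has $b_0=6$ and $b_1=2$; a sketch (assuming networkx and numpy):
\begin{verbatim}
import itertools
import networkx as nx
import numpy as np

# Johnson graph J(5,2): 2-subsets of {0,...,4}, adjacent iff the subsets
# share exactly one element
V = list(itertools.combinations(range(5), 2))
G = nx.Graph((A, B) for A, B in itertools.combinations(V, 2)
             if len(set(A) & set(B)) == 1)
x = V[0]
y = next(iter(G[x]))
dist = dict(nx.all_pairs_shortest_path_length(G))
f = np.array([dist[x][v] - dist[y][v] for v in V], dtype=float)
L = nx.laplacian_matrix(G, nodelist=V).toarray()
print(np.allclose(L @ f, (1 + 6 - 2) * f))   # True: -Delta f = 5 f
\end{verbatim}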
We will use the characterization of distance regular Lichnerowicz sharp graphs with second largest adjacency matrix eigenvalue $\theta = b_1 -1$ from \cite{cushing2018rigidity}, which we recall now. \begin{theorem}[Theorem~6.5 in \cite{cushing2018rigidity}]\label{thm:LichSharpChar} Let $G=(V,E)$ be distance regular with intersection array \[ (b_0,b_1,\ldots,b_{L-1};1=c_1,\ldots,c_L). \] Assume the second largest eigenvalue $\theta$ of the adjacency matrix satisfies $\theta = b_1 - 1$. Moreover assume $G$ is Lichnerowicz sharp, i.e., $\min \kappa = \lambda$ where $\lambda$ is the smallest positive eigenvalue of $-\Delta$. Then, $G$ is a graph of the following list: \begin{enumerate}[(1)] \item Cocktail party graphs $CP(k)$, \item Hamming graphs $K_n^m$, \item Johnson graphs $J(n,k)$, \item Halved cubes $Q^{(2)}(n)$, \item Schläfli graph, \item Gosset graph. \end{enumerate} Conversely, all graphs from the list satisfy the properties above. \end{theorem} We remark that in \cite{cushing2018rigidity}, Lichnerowicz sharpness refers to the normalized Laplacian, which due to constant vertex degree is equivalent to our version. Moreover, we note that Hamming graphs are cartesian products of Johnson graphs. We now apply the above theorem to reflective graphs. \begin{lemma}\label{lem:locConnectedAndReflectiveImpliesList} Let $G=(V,E)$ be locally connected and reflective. Then, the graph $G$ is a graph from the list in Theorem~\ref{thm:LichSharpChar}. \end{lemma} \begin{proof} By Lemma~\ref{lem:DistTransitive}, the graph is distance regular. By Theorem~\ref{thm:ReflectiveImpliesLichSharp}, the graph is Lichnerowicz sharp. As $\theta = b_0 - \lambda$, Theorem~\ref{thm:ReflectiveImpliesLichSharp} furthermore implies $\theta = b_1 - 1$. Hence, Theorem~\ref{thm:LichSharpChar} is applicable finishing the proof. \end{proof} As locally disconnected reflective graphs are cartesian products by Lemma~\ref{lem:LocDisconnectedImpliesCartesianProduct}, we can now classify all reflective graphs. \begin{theorem}\label{thm:ReflectiveImpliesList} Let $G=(V,E)$ be reflective. Then, the graph $G$ is a cartesian product of graphs in the list of Theorem~\ref{thm:LichSharpChar}. \end{theorem} \begin{proof} Lemma~\ref{lem:LocDisconnectedImpliesCartesianProduct} and Lemma~\ref{lem:CartesianProductInheritsReflective} imply that $G$ is a cartesian product of locally connected, reflective graphs. Lemma~\ref{lem:locConnectedAndReflectiveImpliesList} implies that all factors are graphs from the list. This finishes the proof. \end{proof} We remark that the class of reflective graphs lies strictly between the spherical and strongly spherical graphs investigated in \cite{koolen2004structure}. While strongly spherical graphs are fully classified, the classification of spherical graphs is still open, and this article might be useful for answering this question. \section{Ollivier curvature and effective diameter}\label{sec:Curvature} In this section, we give the effective diameter bound mentioned in the introduction, and show that equality implies that the graph is reflective. We first recall the effective diameter. Let $G=(V,E)$ be a graph. Then, the effective diameter is \[ \diam_{\operatorname{eff}}=\diam_{\operatorname{eff}}(G) = \frac 1 {|V|^2} \sum_{x,y \in V} d(x,y).
\] \subsection{Diameter bounds} In this subsection we prove the effective diameter bound using the long range Ollivier curvature \[ \kappa(x,y) = \inf_{\substack{\|\nabla f\|_\infty = 1\\f(y)-f(x)=d(x,y)}} \frac{\Delta f(x)-\Delta f(y)}{d(x,y)} \] and using the fact that $\kappa(x,y) \geq K$ for all $x,y \in V$ as soon as this estimate holds for all neighbors $x\sim y$. \begin{theorem}\label{thm:EffDiamBound} Let $G=(V,E)$ be a graph with $\kappa(x,y) \geq K>0$ for all $x\sim y$. Then, \[ \diam_{\operatorname{eff}} \leq \frac{\max \operatorname{Deg}}{K}. \] \end{theorem} \begin{proof} We prove a stronger statement, namely that for all $x\in V$, \[ \frac 1 {|V|} \sum_y d(x,y) \leq \frac{\max \operatorname{Deg}}{K}. \] Let $x\in V$ and $f:=d(x,\cdot)$. Due to the curvature bound $\kappa \geq K$, we have for all $y \in V$, \[ \Delta f(x) - \Delta f(y) \geq Kd(x,y). \] Summing up over all $y \in V$ gives \[ |V|\operatorname{Deg}(x) = \sum_{y}(\Delta f(x)-\Delta f(y)) \geq K \sum_y d(x,y). \] Rearranging finishes the proof. \end{proof} \subsection{Rigidity and distance function as eigenfunction} We now consider effective Bonnet Myers sharp graphs, i.e., graphs attaining the effective diameter bound, and aim to show that they are reflective. This section is inspired by \cite[Sections~3.3.2 and 3.3.3]{munch2019discrete}. We first prove that $d(x,\cdot)$ is a shifted eigenfunction. \begin{lemma}\label{lem:BMsharpDistanceEigenfunction} Let $G=(V,E)$ be a graph with $\kappa(x,y)\geq K>0$ for all $x\sim y$. Suppose $\diam_{\operatorname{eff}} = \frac {\max \operatorname{Deg}}K$. Let $x \in V$. Then, \[ \Delta d(x,\cdot) = \operatorname{Deg}(x)- Kd(x,\cdot). \] Moreover, $\operatorname{Deg} = \max \operatorname{Deg}$ and $\kappa=K$. \end{lemma} \begin{proof} Let $x,y \in V$ and $f:=d(x,\cdot)$. By the proof of Theorem~\ref{thm:EffDiamBound}, and due to equality in the effective diameter bound, we get \[ \Delta f(x)-\Delta f(y) = K (f(y) - f(x)). \] Choosing $x\sim y$ and using $\kappa \geq K$ yields $\kappa(x,y)=K$. As in the proof of Theorem~\ref{thm:EffDiamBound} we estimated $\max \operatorname{Deg} \geq \operatorname{Deg}(x) \geq \frac K{|V|}\sum_y d(x,y)$, and due to equality we also get $\operatorname{Deg}(x)=\max \operatorname{Deg}$. We finally observe \[ \Delta f(y) = \Delta f(x) - K(f(y)-f(x)) = \operatorname{Deg}(x) - Kd(x,y). \] This finishes the proof as $f=d(x,\cdot)$. \end{proof} \subsection{Rigidity and reflectiveness} We aim to show that effective Bonnet Myers sharp graphs are reflective. We first give combinatorial implications of a positive curvature bound, namely that a positively curved edge has to be in many triangles, and there must be a perfect matching between the remaining neighbors. The following lemma is given in \cite[Proposition~3.1]{munch2019discrete}. For a version with the normalized graph Laplacian, also see \cite[Proposition~2.7]{cushing2018rigidity}. We recall $B_1(x)=\{y:d(x,y)\leq 1\}$. \begin{lemma}[{{\cite[Proposition~3.1]{munch2019discrete}}}]\label{lem:curvTriangleMatch} Let $G=(V,E)$ be a graph. Let $x\sim y$. Then, $(x,y)$ contains at least $\kappa(x,y)-2$ triangles. In case of equality, there is a perfect matching between the remaining neighbors of $x$ and the remaining neighbors of $y$, i.e., there is a bijective map $\phi:B_1(x) \setminus B_1(y) \to B_1(y) \setminus B_1(x)$ such that $z \sim \phi(z)$ for all $z$ in the domain of $\phi$. \end{lemma} We next show that there is a perfect matching between $V_x^y$ and $V_y^x$.
We recall $V_x^y$ is the set of vertices closer to $x$ than to $y$, and $V^{xy}$ is the set of vertices having same distance to $x$ and $y$. \begin{lemma} \label{lem:BMsharpMatchingVxyVyx} Let $G$ be effective Bonnet Myers sharp with curvature $K$. Let $x\sim y$. Then, for every $x' \in V_x^y$ there is exactly one $y' \in V_y^x$ with $x'\sim y'$, and exactly $K-2$ neighbors of $x'$ are in $V^{xy}$. \end{lemma} \begin{proof} Let $D=\max \operatorname{Deg}$. Let $x' \in V_x^y$. We notice $\Delta (d(x,\cdot) \wedge d(y,\cdot))(x) = D - 1$. Thus, \[ \Delta (d(x,\cdot) \wedge d(y,\cdot))(x') \leq D - 1 - d(x,x')K \] where "$\wedge$" denotes the minimum. By Lemma~\ref{lem:curvTriangleMatch}, the edge $(x,y)$ is contained in at least $K-2$ triangles, and thus, \[ \Delta (d(x,\cdot) \vee d(y,\cdot))(x) \leq D-1-(K-2) = D+1-K \] implying \[ \Delta (d(x,\cdot) \vee d(y,\cdot))(x') \leq D+1-K - d(x,x')K \] by the curvature bound, where "$\vee$" denotes the maximum. By Lemma~\ref{lem:BMsharpDistanceEigenfunction}, \[ \Delta (d(x,\cdot) + d(y,\cdot))(x') = 2D - K(d(x,x')+d(y,x'))=2D-K -2Kd(x,x'). \] Thus, adding up the above inequalities for $x'$, we get equality, meaning \[ \Delta (d(x,\cdot) \wedge d(y,\cdot))(x') = D - 1 - d(x,x')K. \] Moreover by Lemma~\ref{lem:BMsharpDistanceEigenfunction}, we have $\Delta d(x,\cdot)(x') = D-Kd(x,x')$, and thus, \[ \Delta (1_{V_y^x})(x') = \Delta d(x,\cdot)(x') - \Delta (d(x,\cdot) \wedge d(y,\cdot))(x') = 1 \] showing that $x'$ has exactly one neighbor in $V_y^x$. Similarly, $\Delta d(y,\cdot)(x') = D-Kd(x,x')-K$ and thus, \[ \Delta 1_{V_{x}^y}(x')=\Delta d(y,\cdot) (x') - \Delta (d(x,\cdot) \wedge d(y,\cdot))(x') = 1-K \] showing that $x'$ has exactly $K-1$ neighbors in $V\setminus V_x^y$, implying that $x'$ has exactly $K-2$ neighbors in $V^{xy}$. This finishes the proof. \end{proof} We are now ready to give the main result of this section. \begin{theorem}\label{thm:BMsharpImpliesReflective} Every effective Bonnet Myers sharp graph is reflective. \end{theorem} \begin{proof} Let $x\sim y$. By Lemma~\ref{lem:BMsharpMatchingVxyVyx}, there exists a perfect matching between $V_x^y$ and $V_y^x$ inducing a map $\phi$ mapping every $x' \in V_x^y$ to its unique neighbor in $V_y^x$, and conversely, and leaving all vertices in $V^{xy}$ fixed. We aim to show that $\phi$ is a reflection. All properties of a reflection are clear except that $\phi$ is an automorphism, which we prove now. Let $v\sim w$ be vertices. We aim to show that $\phi(v) \sim \phi(w)$. We proceed by case distinction. \begin{description} \item [Case 1:] $v \in V_x^y$ and $w \in V_y^x$. Then, $\phi(v)=w$ and $\phi(w)=v$, clearly showing $\phi(v) \sim \phi(w)$. \item [Case 2:] $v,w \in V^{xy}$. Then, $v= \phi(v)$ and $w=\phi(w)$, showing $\phi(v) \sim \phi(w)$. \item [Case 3:] $v \in V_x^y$ and $w \in V^{xy}$. By Lemma~\ref{lem:curvTriangleMatch}, the edge $(v,\phi(v))$ contains at least $K-2$ triangles. However, all common neighbors of $v$ and $\phi(v)$ must be in $V^{xy}$ by the first part of Lemma~\ref{lem:BMsharpMatchingVxyVyx}. By the second part of Lemma~\ref{lem:BMsharpMatchingVxyVyx}, $v$ has exactly $K-2$ neighbors in $V^{xy}$ showing that they must all be common neighbors of $v$ and $\phi(v)$. Specifically, $\phi(w)=w \sim \phi(v)$. \item[Case 4:] $v,w \in V_x^y$. By the case above, the edge $(v,\phi(v))$ contains exactly $K-2$ triangles. By Lemma~\ref{lem:curvTriangleMatch}, there must be a perfect matching between the remaining neighbors of $v$ and $\phi(v)$.
Specifically, there must be some $w' \notin B_1(v)$ with $w\sim w' \sim \phi(v)$. As $\phi(v)$ has at most one neighbor in $V_x^y$, namely $v$, we infer $w' \notin V_x^y$. As all neighbors of $\phi(v)$ in $V^{xy}$ are also neighbors of $v$, but $w'$ is not a neighbor of $v$, we obtain $w' \in V_y^x$. As $\phi(w)$ is the unique neighbor of $w$ in $V_y^x$, we see that $w'=\phi(w)$. Particularly, $\phi(v) \sim \phi(w)$. \end{description} The remaining cases can be proven analogously. As $\phi=\phi^{-1}$, the case distinction shows that $\phi$ is a graph automorphism, and thus a reflection. As $x\sim y$ are chosen arbitrarily, the claim of the theorem follows immediately. \end{proof} \subsection{Lichnerowicz sharpness} We recall that Lichnerowicz sharp graphs are graphs with $\lambda = K$. We also recall that distance regular Lichnerowicz sharp graphs with an additional spectral condition are precisely the graphs in the list of Theorem~\ref{thm:LichSharpChar}, see \cite[Theorem~6.5]{cushing2018rigidity}. We now prove that one can drop the additional spectral condition. We specifically prove effective Bonnet Myers sharpness which implies Lichnerowicz sharpness by Theorem~\ref{thm:BMsharpImpliesReflective} and Theorem~\ref{thm:ReflectiveImpliesLichSharp}. \begin{theorem}\label{thm:LichSharpDistRegImpliesEffBonnMyersSharp} Let $G=(V,E)$ be Lichnerowicz sharp and distance regular. Then, $G$ is effective Bonnet Myers sharp. \end{theorem} \begin{proof} Let $f \in \mathbb{R}^V$ be an eigenfunction of $-\Delta$ to the eigenvalue $K:=\min_{x\sim y} \kappa(x,y)$. Assume $f$ attains its minimum in $x \in V$. We recall that spheres of radius $n$ are denoted by $S_n$. We define another function $g \in \mathbb{R}^V$ via \[ g(y) := \frac 1{|S_{d(x,y)}(x)|} \sum_{z: d(x,z)=d(x,y)} f(z). \] By distance regularity, $\Delta g(y) = \Delta g(y')$ whenever $d(x,y)=d(x,y')$. Moreover, $g$ also has to be an eigenfunction to eigenvalue $K$. By minimality of $f(x)$ and as $f$ is not constant, we see that $g$ is not constant. We now show that $g=cd(x,\cdot) + C$ for some constants $c,C \in \mathbb{R}$. W.l.o.g., $\|\nabla g\|_\infty = 1$ and $g(z)-g(y)=1$ for some $z,y \in V$. As $g$ is constant on spheres around $x$, we can assume $y \in S_n(x)$ and $z \in S_{n+1}(x)$ for some $n$, or vice versa. As the vice versa case is analogous, we restrict to the case $y \in S_n(x)$ and $z \in S_{n+1}(x)$. We estimate \[ K=K(g(z)-g(y))=\Delta g(y)-\Delta g(z) \geq \inf_{\substack{h(z)-h(y)=1\\ \|\nabla h\|_\infty = 1}} \Delta h(y)-\Delta h(z) =\kappa(y,z)\geq K. \] Choosing $h$ the maximal Lipschitz extension of $g$ on $S_{n+2}(x)$ and the minimal Lipschitz extension of $g$ on $S_{n-1}(x)$, we see $\Delta h(z) \geq \Delta g(z)$ and $\Delta h(y) \leq \Delta g(y)$, showing that $g=h$ on $S_{n-1}(x)$ and $S_{n+2}(x)$. Particularly, the maximal gradient of $g$ is also attained between $S_{n-1}(x)$ and $S_n(x)$, and between $S_{n+1}(x)$ and $S_{n+2}(x)$. Using induction, it follows that $g= d(x,\cdot) + C$ for some constant $C \in \mathbb{R}$. We now show that $\sum_y d(x,y) = |V|\operatorname{Deg}(x)/K$. We first notice that \[-KC=-Kg(x)=\Delta g(x) = \operatorname{Deg}(x)\] showing $C=\frac {- \operatorname{Deg}(x)}K$. Moreover, \[ 0=\sum_y \Delta g(y) = -K\sum_y g(y) = -K\sum_y d(x,y) + \operatorname{Deg}(x)|V|. \] Rearranging shows $\sum_y d(x,y) = |V|\operatorname{Deg}(x)/K$.
By distance regularity, this equation holds true for all base points $x$, implying $\diam_{\operatorname{eff}} = \frac{1}{|V|^2} \sum_{x,y} d(x,y)=\operatorname{Deg}(x)/K = \max \operatorname{Deg}/K$. This finishes the proof. \end{proof} \section{Classifications}\label{sec:Classifications} We first characterize locally connected effective Bonnet Myers sharp graphs. \begin{theorem}\label{thm:LocConnectCharEverything} Let $G=(V,E)$ be a locally connected graph. T.f.a.e.: \begin{enumerate}[(i)] \item $G$ is effective Bonnet Myers sharp, \item $G$ is reflective, \item $G$ is distance regular and Lichnerowicz sharp, \item $G$ is a graph of the following list: \begin{itemize} \item Cocktail party graphs with at least $6$ vertices, \item Johnson graphs, \item Halved cubes, \item Schläfli graph, \item Gosset graph. \end{itemize} \end{enumerate} \end{theorem} We note that the cocktail party graph with four vertices is locally disconnected and therefore not on the list, and the cocktail party graph with two vertices is a Johnson graph, too. \begin{proof} Implication $(i) \Rightarrow (ii)$ follows from Theorem~\ref{thm:BMsharpImpliesReflective}. Implication $(ii) \Rightarrow (iv)$ follows from Lemma~\ref{lem:locConnectedAndReflectiveImpliesList}. Implication $(iv) \Rightarrow (iii)$ follows from Theorem~\ref{thm:LichSharpChar}. Finally, implication $(iii) \Rightarrow (i)$ follows from Theorem~\ref{thm:LichSharpDistRegImpliesEffBonnMyersSharp}, finishing the proof. \end{proof} We now drop the assumption of local connectedness. \begin{theorem}\label{thm:effBMsharpChar} Let $G=(V,E)$ be a graph. T.f.a.e.: \begin{enumerate}[(i)] \item $G$ is effective Bonnet Myers sharp, \item $G$ is reflective and has constant curvature, \item $G$ is a graph of the following list: \begin{itemize} \item Cocktail party graphs, \item Johnson graphs, \item Halved cubes, \item Schläfli graph, \item Gosset graph, \item Cartesian product of graphs above with same curvature. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} Implication $(i) \Rightarrow (ii)$ follows from Theorem~\ref{thm:BMsharpImpliesReflective} and Lemma~\ref{lem:BMsharpDistanceEigenfunction}. Implication $(ii) \Rightarrow (iii)$ follows from Theorem~\ref{thm:ReflectiveImpliesList}. We finally prove $(iii) \Rightarrow (i)$. By Theorem~\ref{thm:LocConnectCharEverything}, we have $G=G_1 \times \ldots \times G_n$, and all factors are effective Bonnet Myers sharp. As $\diam_{\operatorname{eff}}(G)=\sum_{i=1}^n\diam_{\operatorname{eff}}(G_i)$ and $\operatorname{Deg}_G = \sum_{i=1}^n \operatorname{Deg}_{G_{i}}$, and as all factors have the same curvature, we conclude that $G$ is also effective Bonnet Myers sharp, finishing the proof. \end{proof} We finally drop the constant curvature assumption and characterize reflective graphs. \begin{theorem} Let $G=(V,E)$ be a graph. T.f.a.e.: \begin{enumerate}[(i)] \item $G$ is reflective, \item $G$ is a graph of the following list: \begin{itemize} \item Cocktail party graphs, \item Johnson graphs, \item Halved cubes, \item Schläfli graph, \item Gosset graph, \item Cartesian product of graphs above. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} Implication $(i) \Rightarrow (ii)$ is given in Theorem~\ref{thm:ReflectiveImpliesList}. We finally prove $(ii) \Rightarrow (i)$. By Theorem~\ref{thm:LocConnectCharEverything}, we have $G=G_1\times\ldots \times G_n$, and all factors are reflective. By Lemma~\ref{lem:CartesianProductInheritsReflective}, this implies that $G$ is reflective, finishing the proof.
\end{proof} \section*{Acknowledgments} The author wants to thank Justin Salez for pointing out the relevance of the effective diameter. He moreover wants to thank Supanat Kamtue, Jack Koolen, Shiping Liu, and Norbert Peyerimhoff for useful and inspiring discussions. \printbibliography
{ "timestamp": "2022-06-01T02:22:34", "yymm": "2205", "arxiv_id": "2205.15857", "language": "en", "url": "https://arxiv.org/abs/2205.15857", "abstract": "We give a discrete Bonnet Myers type theorem for the effective diameter assuming positive Ollivier curvature. We prove that this diameter bound is attained if and only if the graph is a cocktail party graph, a Johnson graph, a halved cube, a Schläfli graph, a Gosset graph, or a cartesian product of the mentioned graphs with same Ollivier curvature. As a key step in the proof, we introduce the notion of reflective graphs as graphs such that for any two neighbors there exists a certain self-inverse automorphism mapping one neighbor to another. We classify these graphs as arbitrary cartesian products of the graphs mentioned before.", "subjects": "Combinatorics (math.CO); Differential Geometry (math.DG)", "title": "Reflective Graphs, Ollivier curvature, effective diameter, and rigidity", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692325496973, "lm_q2_score": 0.7248702880639792, "lm_q1q2_score": 0.7079585079415246 }
https://arxiv.org/abs/1403.1938
Extending the Prym map to toroidal compactifications of the moduli space of abelian varieties
The main purpose of this paper is to present a conceptual approach to understanding the extension of the Prym map from the space of admissible double covers of stable curves to different toroidal compactifications of the moduli space of principally polarized abelian varieties. By separating the combinatorial problems from the geometric aspects we can reduce this to the computation of certain monodromy cones. In this way we not only shed new light on the extension results of Alexeev, Birkenhake, Hulek, and Vologodsky for the second Voronoi toroidal compactification, but we also apply this to other toroidal compactifications, in particular the perfect cone compactification, for which we obtain a combinatorial characterization of the indeterminacy locus, as well as a geometric description up to codimension six, and an explicit toroidal resolution of the Prym map up to codimension four.
\section*{Introduction} A fundamental tool in the study of algebraic curves is the theory of Jacobians. Assigning to a curve its principally polarized Jacobian defines the Torelli period map $M_g\to A_g$ from the coarse moduli space of curves of genus $g$ to the coarse moduli space of principally polarized abelian varieties (ppav) of dimension $g$. It is a well-known fact, due to Mumford and Namikawa \cite{nam}, that the Torelli map extends to a morphism $\overline{M}_g\to \overline{A}_g^V$ from the Deligne--Mumford compactification to the second Voronoi toroidal compactification. More recently, Alexeev and Brunyate \cite{ab} have studied extensions of the Torelli map to other toroidal compactifications and have shown that the period map extends to a morphism to the perfect cone compactification $\overline{A}_g^P$, but not to a morphism to the central cone compactification $\overline{A}^{C}_g$ for $g\ge 9$, disproving a conjecture of Namikawa. While the Torelli map is injective for all $g$, for $g\ge 4$ it is not dominant. One geometric approach to understanding higher-dimensional ppav is via Prym varieties, which are ppav associated to connected \'etale double covers of curves. Associating to a cover its principally polarized Prym variety defines the Prym period map $R_{g+1}\to A_g$, where $R_{g+1}$ is the coarse moduli space of connected \'etale double covers of curves of genus $g+1$. The Prym period map is dominant for $g\le 5$, and has been used to provide a geometric approach to the Schottky problem for $g=4,5$, to study the rationality of threefolds, and to give a better understanding of the geometry of $A_4$ and $A_5$. In contrast to the case of Jacobians, it has been known since the work of Friedman and Smith \cite{fs} that the Prym period map does not extend to a morphism from Beauville's moduli space of admissible double covers $\overline{R}_{g+1}$ to any of the standard toroidal compactifications. Subsequent work of Alexeev, Birkenhake, and Hulek \cite{abh} and Vologodsky \cite{vologodsky} identifies the indeterminacy locus of the rational map $\overline{R}_{g+1}\dashrightarrow \overline{A}_g^V$; it is the closure of the locus of so-called Friedman--Smith covers with at least $4$ nodes (see \S \ref{secFSexamples}). In this paper, we investigate the problem of extending the Prym map to other toroidal compactifications. Our main results are: \begin{itemize} \item A complete combinatorial characterization of the indeterminacy locus of the Prym map to the perfect and central cone compactifications (Theorem \ref{teoprymext}). The techniques also give a complete combinatorial characterization of the indeterminacy locus of the Prym map to the second Voronoi compactification, providing another proof of \cite[Thm.~3.2]{abh}. \item A geometric characterization of the indeterminacy locus of the Prym map $\overline R_{g+1}\dashrightarrow \bar A_g^P$ to the perfect cone compactification up to codimension $6$ in $\overline R_{g+1}$ in terms of Friedman--Smith covers (Theorem \ref{teoindPM}). \item An explicit resolution of the Prym map $\overline R_{g+1}\dashrightarrow \overline{A}_g^P$ up to codimension $4$ (Theorem \ref{teoresPM}). This also resolves the Prym map to $\overline{A}_g^V$ and $\overline{A}_g^C$ up to codimension $4$. \end{itemize} In Appendix \ref{secappMDS}, Mathieu Dutour Sikiri\'c also proves an extension result to the central cone compactification (Theorem \ref{teoindC}). 
\smallskip In this paper, we approach the extension problem for the Prym map in terms of the Hodge theoretic framework of a general period map $\mathcal{M}\to \mathcal D/\Gamma$ from a moduli space to a classical period domain. This allows us to determine the conditions for extensions of period maps to moduli spaces that are compactified so that the monodromy transformations are of Picard--Lefschetz type (i.e.~given by rank 1 forms). In this way we separate the geometric aspects of the problem from the combinatorial issues involved in dealing with various admissible cone decompositions. In particular, the approach unifies the arguments for Jacobians and Pryms, and we discuss the Torelli map throughout for motivation. As a result, we also get a new proof of the extension results of \cite{abh} for $\overline{R}_{g+1}\dashrightarrow\overline{A}_g^V$. In \cite{abh}, the authors have the additional goal of determining compactified Pryms as stable semiabelic pairs; focusing here on the extension condition allows us to make a more direct, Hodge theoretic argument. With the work in \cite{abh}, translating from our results to the language of stable semiabelic pairs is straightforward (\S \ref{secModStAbVar}, \S \ref{secsubFibers}). In addition, one of our original motivations for this work was investigating the extension of the period map for cubic threefolds to a morphism from a suitable GIT compactification of the moduli space of threefolds to a suitable compactification of $A_5$, stemming from our work \cite{cml} and \cite{cml2}, and using some of the results of our work \cite{gh}. The methods we use in this paper apply in that setting also, and we will return to the study of the period map for cubic threefolds in subsequent work. \smallskip A few words about the structure of the paper. We start in Section \ref{secttoroidal} by reviewing some basic facts about the toroidal compactifications (second Voronoi, perfect, central) that we consider in our paper. We then discuss (Section \ref{sectHT}) the general framework of degenerations of Hodge structures and the connection to toroidal compactifications. This is mostly standard (see eg.~\cite{cattani} for an exposition), but we find it convenient to include a short discussion of this adapted to our needs. In Section \ref{sectPrym}, we briefly review the standard compactification of the moduli of Prym varieties by admissible covers (\cite{b}) and the associated combinatorial data (graphs with an involution, etc.). In Section \ref{sectMon}, we specialize the discussion of Section \ref{sectHT} to curves and Prym varieties and discuss the computation of the monodromy cones in terms of the dual graph. The monodromy cone for Jacobians is classical (eg.~\cite{nam76II}) and that of Pryms is essentially contained in \cite{fs} and \cite{abh}. Nonetheless, we believe that our presentation unifies, simplifies, and clarifies some of the arguments in the literature. Our goal will be to apply similar techniques to the study of other moduli spaces via Hodge theory in the future. With these preliminaries, new results start in Section \ref{sectExt}, where we recast the extension criteria for the Torelli map, and then prove combinatorial criteria, in terms of the dual graph, for the extension of the Prym map to various toroidal compactifications of $A_g$, obtaining Theorem \ref{teoprymext} and thus giving in addition a new proof of \cite[Thm.~3.2]{abh}. We then proceed to relate these combinatorial conditions to geometric conditions on the admissible covers. 
The so-called Friedman--Smith covers are central to this discussion and we describe in Section \ref{secFSexamples} their monodromy in detail: in Subsection \ref{secFSMonCone} we compute the monodromy cones, and in Theorem \ref{teoFSMCP} we discuss their properties with respect to the fans defining different toroidal compactifications. In Section \ref{sectindeterm}, we use these computations to describe the indeterminacy locus of the Prym map geometrically, and it is interesting to note that this behavior for the perfect cone compactification is quite different from that for the second Voronoi compactification. We are able to give a complete geometric characterization of the indeterminacy locus of the Prym map to the perfect cone compactification $\bar A_g^P$ up to codimension $6$ (Theorem \ref{teoindPM}), utilizing the recent results of Melo and Viviani \cite{MV12}. The computations also allow us to describe the resolution of the period map in terms of explicit, toroidal modifications of the moduli space of admissible covers. In Section \ref{sectRes} we describe the resolution of the period map to the perfect cone compactification completely up to codimension $4$ (Theorem \ref{teoresPM}). In Section \ref{secsubFibers} we start a discussion on the fibers of the Prym map. More precisely, we discuss which types of admissible covers are mapped to which strata. This also provides another link to \cite{abh} since we discuss the relationship between the monodromy cones and the degeneration data of $1$-parameter families, which in turn determine semiabelic varieties which are limits of Pryms. Many of the arguments in the paper regarding the Prym map in low codimension rely on working through a number of examples, and explicit computations of monodromy cones. These are somewhat lengthy and technical, and to maintain the structural unity of the argument we collect these explicit computations in the appendices. Appendix \ref{seccombinatorics} treats the combinatorics of the Friedman--Smith cones and relates these to the various cone decompositions. In Appendix \ref{secexamples} we discuss some examples where the Prym map extends; this comes down to proving that certain monodromy cones belong to either the second Voronoi, perfect cone or central cone decomposition. Appendix \ref{secDegen} contains some lengthy calculations where we discuss further degenerations of Friedman--Smith examples. In particular we compute their monodromy cones and discuss to which, if any, cone decompositions these belong. Finally, in Appendix \ref{secSimp} we discuss a method which allows us to simplify certain monodromy cones and thus to reduce to previous calculations. \subsection*{Acknowledgements} We are very grateful to Mathieu Dutour Sikiri\'c who was always willing to answer our questions on cone decompositions and who helped us check some of our guesses with his powerful computer programs. \subsection*{Notation} We will use calligraphic letters to refer to moduli stacks (e.g.~$\mathcal A_g$, $\mathcal R_{g+1}$, etc.), and Roman letters for the associated coarse moduli spaces (e.g.~$A_g$, $R_{g+1}$, etc.). Since all the spaces occurring here (with the exception of Alexeev's stack of stable semiabelic pairs) are Deligne--Mumford stacks, all the period maps are assumed to be locally liftable, and the extensions are insensitive to finite covers, there is essentially no difference between using stacks or the associated coarse moduli space.
In fact, we will typically stick to the coarse moduli space, except for the situations where we want to emphasize the modular meaning. \section{Brief review of toroidal compactifications}\label{secttoroidal} In this section, we briefly review the theory of toroidal compactifications of $A_g$ (see \cite{AMRT}, \cite{nam} and \cite{FC90} for more details), focusing on the three classically known toroidal compactifications (up to refinement of the fans, i.e.~blow-ups), that is, the perfect cone (also known as first Voronoi), second Voronoi, and central cone compactification. The purpose here is primarily to fix the notation and terminology needed later. \begin{notation} As is customary, when necessary, we will use subscripts (eg.~$H_\mathbb{Z}$) to indicate the coefficients for modules and algebraic groups. Unless specified, the coefficients are either $\mathbb{Q}$ or $\mathbb{R}$. \end{notation} \subsection{The Satake--Baily--Borel Compactification} Fix a free abelian group $H$ of rank $2g$, and a non-degenerate, skew-symmetric, bilinear form $Q$ on $H$. We let $D$ be the classifying space of polarized weight $1$ Hodge structures on $H$: $$ D:=\{F\in \operatorname{Grass}(g,H_\mathbb{C}): Q(F,F)=0, \ \ iQ(F,\overline F)>0\}\cong G_\mathbb{R}/K, $$ where $G_\mathbb{R}\cong \operatorname{Sp}(2g,\mathbb{R})$ and $K\cong U(g)$ is the maximal compact subgroup. Taking $Q$ to be the standard symplectic form, $D$ can be (canonically) identified with the Siegel upper half-space $\mathfrak{H}_g$, the space of symmetric $g\times g$ complex matrices with positive definite imaginary part. The fractional linear transformations $\tau\mapsto (a\tau+b)(c\tau+d)^{-1}$, for $\left(\begin{smallmatrix} a&b\\ c&d\end{smallmatrix}\right)\in \operatorname{Sp}(2g,\mathbb{Z})$ written in $g\times g$ block form, give an action of $G_\mathbb{Z}=\operatorname{Sp}(2g,\mathbb{Z})$ on $D\cong \mathfrak{H}_g$, and we set $$A_g:=\mathfrak{H}_g/\operatorname{Sp}(2g,\mathbb{Z}).$$ The Satake--Baily--Borel (SBB) compactification $A_g^*$ is a normal, projective compactification of $A_g$ that admits a stratification: $$A_g^*=A_g\sqcup A_{g-1}\sqcup\ldots\sqcup A_{0}.$$ We recall that $A_g^*$ and the above stratification are obtained (set-theoretically) by adding to $D$ the so-called rational boundary components $F_{W_0}$, and then taking the quotient with respect to the natural $G_\mathbb{Z}=\operatorname{Sp}(2g,\mathbb{Z})$ action. Namely, the rational boundary components $F_{W_0}$ of $D$ correspond to the choice of rational maximal parabolic subgroups $P_{W_0}\subset\operatorname{Sp}(Q,H_\mathbb{Q})$, which in turn correspond to the choice of a totally isotropic subspace $W_0\subseteq H_\mathbb{Q}$ (of which $P_{W_0}$ is then the stabilizer). Note that since $\operatorname{Sp}(2g,\mathbb{Z})$ acts transitively on the set of isotropic subspaces $W_0$ of $H_\mathbb{Q}$ of fixed dimension, the set of rational boundary components is essentially indexed by $\nu(=\dim W_0)\in\{0,\dots, g\}$. Furthermore, the choice of $W_0$ defines a weight filtration on $H_{\mathbb{Q}}$: \begin{equation}\label{eqfiltration} W_{-1}:=\{0\}\subseteq W_0\subseteq W_1:=(W_0)_Q^\perp\subseteq W_2:=H_{\mathbb{Q}}. \end{equation} The polarization $Q$ induces a polarization (non-degenerate symplectic form) $\bar Q$ on $\operatorname{Gr}_1^W=W_1/W_0$. It is then standard (eg.~\cite[p.84]{cattani}) that the boundary component $F_{W_0}$ is the classifying space $D_{g'}$ (with $g'=g-\nu$) of $\bar Q$-polarized Hodge structures on $\operatorname{Gr}_1^W$, giving the component $A_{g'}=F_{W_0}/G_{\mathbb{Z}}$ of $A_g^*$. (N.B.
$F_{\{0\}}=D$, and after the identification $F_{W_0}=D_{g'}=\mathfrak H_{g'}$, the action of $G_{\mathbb Z}$ restricts to the action of $\operatorname{Sp}(2g',\mathbb Z)$.) \subsection{Toroidal compactifications} The toroidal compactifications are certain refinements of the SBB compactification $A_g^*$, depending on a choice of a compatible collection of admissible cone decompositions, $\Sigma$. Each such choice gives a compactification $\overline{A}_g^{\Sigma}$ with a canonical map $ \overline{A}_g^{\Sigma}\to A_g^*$. Here we review a few points about the construction from the perspective of Hodge theory (essentially following \cite{cattani}). The construction is relative over $A_g^*$, and one starts by considering a totally isotropic subspace $W_0\subseteq H_{\mathbb{Q}}$ of dimension $\nu\le g$ and the corresponding boundary component of $A_g^*$. Consider then the real Lie subalgebra of $\mathfrak{sp}(Q,H_{\mathbb{R}})$ preserving $W_0$: $$ \mathfrak n(W_0):=\{N\in \mathfrak{sp}(Q,H_{\mathbb{R}})\mid \mathrm{Im}(N)\subseteq W_0\}. $$ Then for any $N\in \mathfrak n(W_0)$ we have $N^2=0$, and thus $N$ defines a weight filtration compatible with that induced by $W_0$, see \eqref{eqfiltration}. In other words, we have $$ \Im(N)=W_0(N)\subseteq W_0\subseteq W_1=W_0^\perp \subseteq W_1(N)=\ker (N)=\Im(N)^\perp, $$ and, in particular, a natural surjection \begin{equation}\label{eqnsurjection} \operatorname{Gr}_2^W(:=W_2/W_1)\twoheadrightarrow \operatorname{Gr}_2(N)(:=W_2(N)/W_1(N)). \end{equation} Furthermore, since $N$ is a nilpotent symplectic endomorphism, we get a natural isomorphism \begin{equation}\label{eqncomp} \begin{CD} \operatorname{Gr}_2(N) @>N>> \operatorname{Gr}_0(N) @>Q(N(\cdot),\cdot)>> \operatorname{Gr}_2(N)^\vee\\ v@>>> N(v)@>>>Q(N(\cdot),v), \end{CD} \end{equation} which can be interpreted as giving a non-degenerate bilinear form $Q_N$ on $\operatorname{Gr}_2(N)$. The form $Q_N$ turns out to be symmetric, and by pullback can be viewed as a form on $\operatorname{Gr}_2^W$; thus there is a natural map (defined over $\mathbb{Q}$) \begin{equation}\label{eqnW} \mathfrak n(W_0)\stackrel{\sim}{\longrightarrow} \operatorname{Hom}(\operatorname{Sym}^2 \operatorname{Gr}_2^W,\mathbb{R}), \end{equation} which (it is not hard to see) is an isomorphism. As described above, $\mathfrak n(W_0)$ is canonically identified with the Lie algebra of symmetric bilinear forms (or equivalently symmetric $g'\times g'$ matrices, with $g'=g-\nu$) on $\operatorname{Gr}_2^W$. With this identification, we consider the cone of positive definite $g'\times g'$ symmetric matrices $$ \mathfrak n(W_0)^+:=\{N\in \mathfrak n(W_0)\mid Q_N \textrm{ is positive definite}\}. $$ Let $\Sigma$ be a compatible collection of admissible cone decompositions (see \S \ref{secAdCD}). Now for each cone $\sigma_{W_0}\in \Sigma_{W_0}$, there is an associated space $B(\sigma_{W_0})$ together with a map $B(\sigma_{W_0})\to F_{W_0}$, where $F_{W_0}$ is the rational boundary component associated to $W_0$ (see eg.~\cite[p.91]{cattani}). These maps are compatible in the sense that if $\tau_{W_0}\le \sigma_{W_0}$ is a face, then there is a commutative diagram $$ \xymatrix@C=.5cm@R=.5cm{ B(\tau_{W_0}) \ar@{->}[rr] \ar@{->}[rd]& & B(\sigma_{W_0}) \ar@{->}[ld]\\ &F_{W_0}& } $$ One then sets $D^{\Sigma}=\bigcup_{W_0}\bigcup_{\sigma_{W_0}\in \Sigma_{W_0}}B(\sigma_{W_0})$. 
The action of $G_{\mathbb{Z}}=\operatorname{Sp}(2g,\mathbb{Z})$ extends to an action on $D^{\Sigma}$, and then (set-theoretically) $\bar A^\Sigma_g=D^{\Sigma}/G_{\mathbb{Z}}$, inducing also a natural map $\bar A^\Sigma_g\to A^*_g$. \subsection{Admissible cone decompositions for quadratic forms}\label{secAdCD} We now review some basic terminology and results about cone decompositions. Let $\Lambda$ be a free $\mathbb{Z}$-module of rank $g$. The space of quadratic forms on $\Lambda$ is $(\operatorname{Sym}^2 \Lambda)^\vee$, which comes equipped with a natural diagonal action of $\operatorname{GL}(\Lambda)=\operatorname{Aut}_{\mathbb{Z}}(\Lambda)$. One considers the open cone of positive definite quadratic forms $$C\subset (\operatorname{Sym}^2 \Lambda)^\vee\otimes_\mathbb{Z} \mathbb{R},$$ and then lets $\overline{C}^\mathbb{Q}$ be the rational closure. Obviously, $C$ and $\overline{C}^\mathbb{Q}$ are $\operatorname{GL}(\Lambda)$-invariant. For any subgroup $\Gamma\subseteq \operatorname{GL}(\Lambda)$ (typically we will be interested in $\Gamma= \operatorname{GL}(\Lambda)$), a $\Gamma$-admissible rational polyhedral decomposition $\Sigma$ (in short {\it admissible decomposition}) of $C$ is a $\Gamma$-invariant collection of (rational, convex, polyhedral) subcones covering $\overline{C}^\mathbb{Q}$ which satisfies certain natural axioms (see \cite{nam} or \cite[Ch.~IV, Def.~2.2, p.96]{FC90} for details), most notably the requirement that there are only finitely many orbits of cones of $\Sigma$ modulo the action of $\Gamma$. For the construction of the toroidal compactifications $\overline{A}_g^\Sigma$ one requires an admissible decomposition for the space of quadratic forms associated to each isotropic subspace $W_0$ (see \eqref{eqnW}). As discussed, all isotropic subspaces $W_0$ of fixed dimension are conjugate, and thus what one needs is an admissible decomposition for each lattice $\Lambda'$ of rank $0\le g'\le g$, compatible in the following sense. We say that $\Sigma'$ and $\Sigma$ are compatible if there exists a surjection $\Lambda \twoheadrightarrow \Lambda'$ so that $\Sigma'$ is obtained from $\Sigma$ via pull-back by the natural inclusion $\overline C^{\mathbb{Q}}(\Lambda')\subseteq \overline C^{\mathbb{Q}}(\Lambda)$. If this is the case for one surjection $\Lambda\twoheadrightarrow \Lambda'$, it will be true for all surjections. In particular, specifying an admissible decomposition for $\Lambda$ then specifies uniquely compatible admissible decompositions for all lattices $\Lambda'$ of smaller rank. In short, all we need to define a toroidal compactification $\bar A^\Sigma_g$ is an admissible cone decomposition for the rank $g$ lattice. Three admissible decompositions are classically known for $A_g$, namely the so-called second Voronoi, the perfect cone (or first Voronoi), and the central cone decomposition (these can, of course, be further subdivided). These decompositions are discussed in \cite[\S8, \S9]{nam}. We shall address in this paper all three decompositions and the associated toroidal compactifications. Though we will not review their definitions (the interested reader should see \cite{nam}), we will discuss the relevant facts about them in the following subsection. There is also another admissible decomposition known, namely that into $C$-types \cite{RB}, which is less well known to algebraic geometers. This coincides with the second Voronoi decomposition for $g\leq 4$, but for $g\geq 5$ second Voronoi is a proper refinement of the $C$-type decomposition.
To our knowledge no geometric interpretation of the corresponding toroidal compactification is known. \smallskip Finally, we recall some terminology. A cone $\sigma\subseteq \overline C^{\mathbb Q}$ is called \emph{basic} if the integral generators of its $1$-dimensional faces can be completed to a $\mathbb Z$-basis of $(\operatorname{Sym}^2\Lambda)^\vee$. It is called \emph{simplicial} if these generators can be completed to a $\mathbb Q$-basis; i.e.~if the generators are linearly independent. \subsection{Admissible cone decompositions and rank $1$ quadrics} In the geometric context of our paper, we will only be interested in cones spanned by rank $1$ quadrics (i.e.~squares of linear forms), since our (log of) monodromy operators will be rank one. For such cones it is essentially a combinatorial problem to decide if they belong to the second Voronoi, perfect, or central cone decompositions. These results are well known and we will refer the reader to \cite{ab} and \cite{MV12} for further details. For $\ell_1,\ldots,\ell_n\in \Lambda_{\mathbb R}^\vee\setminus\lbrace 0\rbrace$, let $\sigma:=\mathbb R_{\ge 0}\langle \ell_i^2\rangle_{i=1}^n$ be the corresponding cone generated by rank $1$ quadrics in $\operatorname{Sym}^2\left(\Lambda_{\mathbb R}^\vee\right)$. Given a basis for $\Lambda$, we will often refer to the cone $\sigma$ by writing the matrix whose $i$-th row is the expression for $\ell_i$ in terms of the dual basis to the given basis, and to any such matrix we will associate such a cone. In this setup, we then have the following combinatorial results that determine whether a set of linear forms in $\Lambda^\vee$ generates a cone contained in a cone of one of the three standard admissible decompositions. \begin{lem}[Second Voronoi] \label{lemsecvor} Let $\Lambda$ be a free $\mathbb{Z}$-module of rank $g$. Suppose $\ell_1,\ldots,\ell_n\in \Lambda^\vee$ are primitive, non-zero, linear forms. The following are equivalent: \begin{enumerate} \item $\{\ell_1^{2},\ldots, \ell_n^{2}\}$ lie in a common cone of the second Voronoi decomposition. \item $\mathbb{R}_{\ge 0}\langle \ell_1^{2},\ldots, \ell_n^{2}\rangle$ is a cone in the second Voronoi decomposition. \item Any $\mathbb{R}$-linearly independent subset $\{\ell_j\}_{j\in J}\subseteq \{\ell_1,\ldots,\ell_n\}$ is a $\mathbb{Z}$-basis of the $\mathbb Z$-module $ \mathbb{R}\langle \ell_j\rangle_{j\in J}\cap \Lambda^\vee$. \item Any $\mathbb{R}$-linearly independent subset $\{\ell_j\}_{j\in J}\subseteq \{\ell_1,\ldots,\ell_n\}$ of maximal rank is a $\mathbb{Z}$-basis of the $\mathbb Z$-module $ \mathbb{R}\langle \ell_j\rangle_{j\in J}\cap \Lambda^\vee $. \end{enumerate} \end{lem} \begin{proof} This is well known. We direct the reader to \cite[Lem.~4.5]{ab} and the references therein. \end{proof} One may take as a definition that a \emph{matroidal} cone is a second Voronoi cone generated by rank $1$ quadrics (this is essentially the content of Lemma \ref{lemsecvor}). It follows from the lemma that a face of a matroidal cone is matroidal, and moreover, that matroidal cones are simplicial. We denote by $\Sigma_{\text{mat}}\subseteq\Sigma_V$ the collection of matroidal cones.
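When the forms span $\Lambda^\vee_{\mathbb R}$, condition (4) of Lemma \ref{lemsecvor} amounts to a finite determinant check: every nonsingular maximal minor of the matrix of the $\ell_i$ must have determinant $\pm1$. The following minimal computer sketch (in Python/SymPy; the function name and the two test cases are ours and purely illustrative) implements this check. It confirms, for instance, that the graphic forms $x_1,x_2,x_3,x_1-x_2,x_2-x_3$ are matroidal, whereas the pair $x_1+x_2,\ x_1-x_2$ is not, since it spans a sublattice of index $2$.
\begin{verbatim}
from itertools import combinations
from sympy import Matrix

def is_matroidal(rows):
    # rows: integral linear forms (one per row) assumed to span the
    # dual lattice. Condition (4) of the lemma: every maximal
    # R-linearly independent subset is a Z-basis of the lattice it
    # spans, i.e. every nonsingular g x g minor has determinant +-1.
    g = len(rows[0])
    return all(Matrix(sub).det() in (0, 1, -1)
               for sub in combinations(rows, g))

print(is_matroidal([(1, 0, 0), (0, 1, 0), (0, 0, 1),
                    (1, -1, 0), (0, 1, -1)]))   # True (graphic)
print(is_matroidal([(1, 1), (1, -1)]))          # False (index 2)
\end{verbatim}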
To connect the discussion with that of \cite{abh}, we recall the notion of a dicing. Fix a collection of codimension-$1$ affine spaces $\{H_i\}_{i\in I}$ in $\Lambda_{\mathbb{R}}$. Let $\mathscr H=\bigcup_{i\in I}H_i$ be the associated arrangement of affine spaces. The arrangement $\mathscr H$ is stratified by the intersections of the $H_i$. We say that $\mathscr H$ defines a \emph{dicing} of $\Lambda$ if the union of the $0$-dimensional strata of $\mathscr H$ is exactly the lattice $\Lambda$. \begin{lem}\label{lemdice} Let $\Lambda$ be a free $\mathbb{Z}$-module of rank $g$. Suppose that $\ell_1,\ldots,\ell_n\in \Lambda^\vee$ are $\mathbb{R}$-linearly independent. Then $\ell_1,\ldots,\ell_n$ form a $\mathbb{Z}$-basis for $\Lambda^\vee$ if and only if they determine a dicing of $\Lambda _{\mathbb{R}}$. More precisely, this means that the collection of hyperplanes $$ H_{i,m}:=\{x\in \Lambda_{\mathbb{R}} : \ell_i(x)=m\} $$ with $i=1,\ldots,n$ and $m\in \mathbb{Z}$ defines a dicing of $\Lambda$. \end{lem} \begin{proof} This follows from the definitions and is left to the reader. \end{proof} \begin{rem}\label{remDelaunay} Associated to a quadratic form $q\in C$ is a so-called Delaunay decomposition of $\Lambda\otimes _{\mathbb Z}\mathbb R$. The second Voronoi decomposition is defined so that the Delaunay decomposition remains unchanged as the quadric varies within a given (open) second Voronoi cone. We will only be interested in quadratic forms that lie in second Voronoi cones generated by rank $1$ quadrics. In this case, the Delaunay decomposition has a well-known, and simple description (see \cite[Theorem 3.2]{ER2} or the proof of \cite[Lem.~3.1]{abh}): \emph{If $\ell_1,\ldots,\ell_n \in \Lambda^\vee$ span $\Lambda^\vee_{\mathbb R}$, and $\sigma =\mathbb{R}_{\ge 0}\langle \ell_1^{2},\ldots, \ell_n^{2}\rangle$ is a second Voronoi cone, then the Delaunay decomposition for any (positive definite) quadric $q\in \sigma^\circ$ is given by the (dicing) hyperplane arrangement associated to $\ell_1,\ldots,\ell_n$.} \end{rem} \begin{lem}[Perfect cone] \label{lempc} Let $\Lambda$ be a free $\mathbb{Z}$-module of rank $g$. Suppose $\ell_1,\ldots,\ell_n\in \Lambda^\vee$ are primitive, non-zero, linear forms. The following are equivalent. \begin{enumerate} \item $\{\ell_1^{2},\ldots, \ell_n^{2}\}$ lie in the same cone of the perfect cone decomposition. \item There exists a quadratic form $Q$ on $\Lambda^\vee_{\mathbb{R}}$ such that \begin{enumerate} \item $Q(\ell)>0$ for all $\ell \in \Lambda^\vee_{\mathbb{R}}\setminus \{0\}$; i.e.~$Q$ is positive definite. \item $Q(\ell)\ge 1$ for all $\ell \in \Lambda^\vee \setminus \{0\}$. \item $Q(\ell_i)=1$, $i=1,\ldots,n$. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} This follows from the definition of the perfect cone decomposition in \cite{nam}. (See also the \emph{proof} of \cite[Thm.~4.7]{ab}.) \end{proof} \begin{rem}\label{remMV} Since cones in the perfect cone decomposition are generated by rank $1$ quadrics, a cone in the perfect cone decomposition is a second Voronoi cone if and only if it is matroidal (i.e.~$\Sigma_P\cap \Sigma_V\subseteq \Sigma_{\text{mat}}$). Recently Melo and Viviani \cite[Thm.~A]{MV12} showed that matroidal cones are in the perfect cone decomposition (i.e.~$\Sigma_{\text{mat}}\subseteq \Sigma_P$), establishing that $\Sigma_V\cap\Sigma_P=\Sigma_{\text{mat}}$. Note in particular that the following special case of \cite[Thm.~A]{MV12} follows directly from the definitions and Lemma \ref{lempc}: \emph{if $\sigma\in \Sigma_{\text{mat}}$ is generated by at most $g$ rank 1 quadratic forms, then $\sigma\in \Sigma_P$. In particular, if $q\in \sigma\in \Sigma_P$ is a rank $1$ quadric, then $\mathbb{R}_{\ge 0}\langle q\rangle$ is a face of $\sigma$.} \end{rem}
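In small cases, the certificate required by Lemma \ref{lempc} can be exhibited and tested directly. The following sketch (Python; the choice of example and the finite box are ours and purely illustrative) verifies the criterion for the forms $x_1,x_2,x_1-x_2$, whose squares span the principal $A_2$ cone, using the hexagonal form $Q(v)=v_1^2+v_1v_2+v_2^2$. Positive definiteness is clear from $2Q(v)=v_1^2+v_2^2+(v_1+v_2)^2$, and since $Q(v)\ge \tfrac12\|v\|^2$, only lattice vectors with $\|v\|^2<2$ could violate condition (b), so a finite check is conclusive.
\begin{verbatim}
import itertools

# Candidate certificate for the principal A_2 cone
# <x1^2, x2^2, (x1-x2)^2>: the hexagonal form on the dual lattice.
Q = lambda v: v[0] ** 2 + v[0] * v[1] + v[1] ** 2

gens = [(1, 0), (0, 1), (1, -1)]
assert all(Q(v) == 1 for v in gens)                  # condition (c)
box = itertools.product(range(-5, 6), repeat=2)
assert all(Q(v) >= 1 for v in box if v != (0, 0))    # condition (b)
\end{verbatim}
This is consistent with \cite[Thm.~A]{MV12}, since the cone in question is matroidal.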
\begin{lem}[Central cone]\label{lemcc} Let $\Lambda$ be a free $\mathbb{Z}$-module of rank $g$. Suppose $\ell_1,\ldots,\ell_n\in \Lambda^\vee$ are primitive, non-zero, linear forms. The following are equivalent. \begin{enumerate} \item $\{\ell_1^{2},\ldots, \ell_n^{2}\}$ lie in the same cone of the central cone decomposition. \item There exists a quadratic form $Q$ on $\Lambda^\vee_{\mathbb{R}}$ such that \begin{enumerate} \item $Q(\ell)>0$ for all $\ell\in \Lambda^\vee_{\mathbb{R}}\setminus \{0\}$; i.e.~$Q$ is positive definite. \item $Q(\ell)\ge 1$ for all $\ell \in \Lambda^\vee\setminus \{0\}$. \item $Q(\ell_i)=1$, $i=1,\ldots,n$. \item $Q(\ell)\in \mathbb{Z}$ for all $\ell\in \Lambda^\vee$. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} This follows from the definition of the central cone decomposition in \cite{nam}. (See also the \emph{proof} of \cite[Thm.~4.8]{ab}.) \end{proof} \begin{rem} We note that all but the last condition above are the same as for the perfect cone compactification, and thus it turns out that {\em if a collection of \emph{rank 1} quadratic forms lies in a central cone, they also lie in a perfect cone}, but not vice versa (see also Remarks \ref{remTorPC} and \ref{remTorCC} below). \end{rem} Given an admissible cone decomposition $\Sigma$, we will denote by $\Sigma^{(1)}$ the collection of cones that are generated by rank $1$ quadrics. Note that if $\sigma\in \Sigma^{(1)}$ and $\tau$ is a face of $\sigma$, then $\tau\in \Sigma^{(1)}$. Note also that by definition $\Sigma_P=\Sigma_P^{(1)}$. We can summarize the discussion above as follows. $$ \sigma \in \Sigma^{(1)}_V\ (=\Sigma_{\text{mat}})\ \text { or } \sigma \in \Sigma^{(1)}_C\ \implies \sigma \in \Sigma_P\ \ (=\Sigma^{(1)}_P) $$ \begin{rem} \label{remlowdim} The metrics \begin{equation}\label{Acone} Q_A(\underline x):=\sum_{1\le i\le j \le n} x_ix_j, \ \ \ \ \ \ \ Q_D(\underline x):=\sum_{1\le i\le j \le n, (i,j)\ne (1,2)} x_ix_j \end{equation} define cones of type $A$ and $D$ respectively in the perfect cone (in fact also in the central cone) decomposition. Cones of type $A$ are matroidal, whereas for $n\ge 4$, the type $D$ cones are not (and also fail to be simplicial). \end{rem} \begin{rem} \label{relationsconedec} At this point we would like to recall the relation between the three known admissible decompositions. For $g= \operatorname{rank}\Lambda\leq 3$, all three decompositions (namely the second Voronoi, perfect cone and central cone) coincide. For $g=4$ it is still true that the perfect cone and the central cone decomposition coincide and the second Voronoi decomposition is a refinement of these. More precisely, the only non-basic cone of the perfect cone decomposition, namely the $D_4$ cone, is subdivided into basic cones in the second Voronoi decomposition (see \cite{ER} for details). For $g=5$ the second Voronoi decomposition is still a refinement of the perfect cone decomposition (\cite{RB}), but this is no longer the case for $g\geq 6$ (\cite{EB}). In general all three decompositions are different in the sense that none is a refinement of the other. \end{rem}
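As a quick sanity check on the metrics \eqref{Acone} (the code, in Python/NumPy, is ours and purely illustrative), note that $2Q_A(\underline x)=(\sum_i x_i)^2+\sum_i x_i^2$ and $2Q_D(\underline x)=(\sum_i x_i)^2+(x_1-x_2)^2+\sum_{i\ge 3} x_i^2$, so both forms are visibly positive definite; the sketch below confirms this numerically for small $n$.
\begin{verbatim}
import numpy as np

def gram(n, drop12=False):
    # Gram matrix of Q_A(x) = sum_{i<=j} x_i x_j; dropping the
    # x_1 x_2 term gives Q_D, as in the equation above.
    G = np.full((n, n), 0.5) + 0.5 * np.eye(n)
    if drop12:
        G[0, 1] = G[1, 0] = 0.0
    return G

for n in range(2, 9):
    assert np.linalg.eigvalsh(gram(n)).min() > 0        # Q_A pos. def.
    assert np.linalg.eigvalsh(gram(n, True)).min() > 0  # Q_D pos. def.
\end{verbatim}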
\section{Monodromy cones and extensions to toroidal compactifications}\label{sectHT} The central question addressed in this paper is that of extending the period map for Prym varieties to toroidal compactifications. The basic set-up for such a problem is that of a locally liftable map $\mathcal P:B^\circ\to D/\Gamma$ from a smooth base $B^\circ$ to a locally symmetric variety (eg.~maps arising from weight $1$ VHS associated to families of varieties $\mathfrak X^\circ / B^\circ$). We then consider a partial simple normal crossing smooth compactification $B^\circ\subset B$, and we ask for an extension of the map $\mathcal P$ to $B$, with target a given (fixed) toroidal compactification $\overline{D/\Gamma}^\Sigma$. Since the problem is essentially local, we may assume without loss of generality that $B^\circ$ is a polycylinder (i.e.~$B^\circ=(S^\circ)^k\times S^{n-k}\subset B=S^n$, where $S^\circ=S\setminus\{0\}$ and $S$ is the unit disk), and that the monodromy operators around the boundary divisors are unipotent. With this set-up the extension question has an elegant answer. Namely, one defines a monodromy cone associated to the period map $\mathcal P$, and then $\mathcal P$ extends if and only if the monodromy cone is compatible with the cones of the admissible decomposition $\Sigma$. We review this below, following Cattani \cite{cattani}, with a focus on weight $1$ variations of Hodge structure (although some of the considerations apply more generally). \subsection{Degenerations of weight $1$ Hodge structures} The monodromy cone for a variation of Hodge structures is a basic tool in understanding extensions of period maps. Here we review the definition of the log of monodromy, the monodromy cone, and the connection with quadratic forms. \subsubsection{The log of monodromy} We focus on the case of weight $1$ Hodge structures for simplicity. Let $\pi^\circ:\mathfrak X^\circ \to S^\circ$ be a smooth, projective morphism over the punctured disk $S^\circ$. Fix a base-point $\ast \in S^\circ$, with fiber $X_\ast=(\pi^\circ)^{-1}(\ast)$, and let $T$ be the associated monodromy operator on $H^1(X_\ast,\mathbb{Q})$. It is well known that $T$ is quasi-unipotent; in fact, since we are in weight $1$, we have $(T^n-Id)^2=0$ for some $n\ge 1$. For simplicity, we will assume further that $T$ is in fact unipotent; i.e.~$(T-Id)^2=0$. Since unipotent monodromy can be obtained after a finite base change, this assumption will not affect extension questions (see Remark \ref{remFBC}). Thus \begin{equation}\label{eqnlm1} N=\log T=T-Id\in \operatorname{End} H^1(X_\ast,\mathbb{Q}) \end{equation} is the log of monodromy operator. Note that $N\in \mathfrak{sp}(H,Q)$, where $H=H^1(X_\ast,\mathbb{Q})$ and $Q$ is the intersection pairing on $H$, and $N^2=0$. To relate with the discussion of Section \ref{secttoroidal}, we would like to view $N$ as a quadratic form. To this end we recall that there is a polarized limit mixed Hodge structure $H^1_{\lim}=H^1_{\lim}(N)$ on the torsion free quotient $H^1(X_\ast,\mathbb{Z})_\tau$, where the weight filtration $W_\bullet=W_\bullet(N) $ is defined (using $N^2=0$) by \begin{equation}\label{eqnNwf} W_{-1}=\{0\}\subset W_0=\Im(N)\subseteq W_1:=\ker (N) \subseteq W_2:=H^1(X_\ast,\mathbb{Q}). \end{equation} As in \eqref{eqncomp} and \eqref{eqnW} (which are essentially linear algebra statements about nilpotent symplectic endomorphisms), we can view the log of monodromy as a map \begin{eqnarray}\label{eqnlm3} Q(N(\cdot),\cdot ):\operatorname{Gr}_2(N) &\to& (\operatorname{Gr}_2(N))^\vee \in \operatorname{Hom}(\operatorname{Sym}^2 \operatorname{Gr}_2(N),\mathbb{Q})\\ \bar z &\mapsto& Q(N(\cdot),z),\nonumber \end{eqnarray} or equivalently as a symmetric bilinear form on $\operatorname{Gr}_2(N)$.
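The following standard example, stated only up to the sign conventions that vary in the literature, is worth keeping in mind, since it is essentially the only local model that will occur for us. \begin{rem} Suppose the family $\pi^\circ$ degenerates by acquiring a single node, with vanishing cycle $\delta\in H^1(X_\ast,\mathbb{Z})$. The Picard--Lefschetz formula then gives $T(x)=x\pm Q(x,\delta)\delta$, so that $N(x)=\pm Q(x,\delta)\delta$; since $Q(\delta,\delta)=0$, one checks directly that $N^2=0$. Moreover, $\ker(N)=\delta^\perp$, so the linear form $\ell_\delta:=Q(\cdot\,,\delta)$ descends to $\operatorname{Gr}_2(N)$, and under \eqref{eqnlm3} the operator $N$ corresponds, up to sign, to the rank $1$ quadratic form $\ell_\delta^{2}$. This is the sense in which all the log of monodromy operators appearing in this paper are of rank one. \end{rem}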
\begin{rem} Since we will need to explicitly compute monodromy in several cases, we note that with respect to a suitable symplectic basis on $H^1(X_\ast)$, we can write (eg.~\cite[Prop.~4.8]{nam}) \begin{equation*} T=\left( \begin{smallmatrix} 1_{g'}&0&0&0\\ 0&1_{\nu}&0&b\\ 0&0&1_{g'}&0\\ 0&0&0&1_{\nu}\\ \end{smallmatrix} \right) , \ \ N=\log T=\left( \begin{smallmatrix} 0&0&0&0\\ 0&0&0&b\\ 0&0&0&0\\ 0&0&0&0\\ \end{smallmatrix} \right), \end{equation*} with $b$ a symmetric non-degenerate $\nu\times \nu$ matrix, $\nu=\dim W_0=\dim\Im(N)$ and $g'=g-\nu$. The identification of $N$ with a quadratic form is simply obtained by considering the matrix $b$. The salient point of the discussion above is that $b$ should be viewed as a quadratic form on $\operatorname{Gr}_2(N)$, which is essential for compatibility issues as discussed below. \end{rem} \begin{rem} To a $1$-parameter unipotent degeneration of weight $1$ Hodge structures, one can associate either a limit Mixed Hodge structure (from the point of view of degenerations of Hodge structures following Schmid \cite{schmid} and Steenbrink \cite{steenbrink}) or a semiabelian variety (see \S \ref{secModStAbVar}). The two limit objects are canonically identified via the functorial equivalence of categories between semiabelian varieties and polarized weight $1$ MHS (e.g.~Deligne \cite[\S10]{deligne3}). From the perspective of the monodromy matrices discussed above, the $g'\times g'$ blocks correspond to the compact part of the limit semiabelian variety and are essentially irrelevant to the extension question. On the other hand, the $\nu\times \nu$ matrix $b$ defining the quadratic form is a key ingredient for extension questions. \end{rem} \subsubsection{Monodromy cones}\label{smoncone} We now consider families over higher dimensional bases. Let $\pi^\circ:\mathfrak X^\circ \to (S^\circ)^k\times S^{n-k}$ be a smooth, projective morphism. Fix a base-point $\ast \in (S^\circ)^n$, with fiber $X_\ast=(\pi^\circ)^{-1}(\ast)$, and let $T_i$ ($i=1,\ldots,k$) be the associated monodromy operators on $H^1(X_\ast,\mathbb{Q})$; i.e.~generators for the induced homomorphism $\mathbb{Z}^k\cong \pi_1((S^\circ)^k,\ast)\to \operatorname{Aut} H^1(X_\ast,\mathbb{Q})$. For simplicity, as before, we will assume further that the $T_i$ are in fact unipotent, and let $N_i=\log T_i=T_i-Id$ ($i=1,\ldots,k$) be the log of monodromy operators. Again, since this can be obtained after finite base change, this will not affect extension questions. We can now define the monodromy cone: \begin{equation}\label{eqnmc1} \sigma(\pi^\circ):=\mathbb{R}^+\langle N_1,\ldots, N_k\rangle\subseteq \mathfrak{sp}(H_\mathbb{R},Q), \end{equation} with $H=H^1(X_\ast,\mathbb{Q})$ and $Q$ the intersection pairing on $H$. As before, we would like to identify this cone with a cone of quadratic forms on a \emph{fixed} vector space. The point is that for each $\lambda_1,\ldots,\lambda_k>0$, we obtain a limit mixed Hodge structure $H^1_{\lim}(\underline \lambda)=H^1_{\lim}(\sum_{i=1}^k\lambda_iN_i)$ on $H^1(X_\ast,\mathbb{Q})$, with monodromy weight filtration $W_\bullet(\underline \lambda)=W_\bullet(\sum_{i=1}^k\lambda_iN_i)$ given by \eqref{eqnNwf} with $$N=\lambda_1N_1+\ldots+\lambda_kN_k.$$ It is well known (see eg.~\cite{cattani}) that for $\lambda_1,\ldots,\lambda_k>0$, \begin{equation}\label{eqnkerN} \ker (\lambda_1N_1+\ldots+\lambda_kN_k)=\bigcap_{i=1}^k\ker (N_i). \end{equation} Thus $W_1(\underline \lambda)$, and hence $\operatorname{Gr}_2(\underline \lambda)$, is independent of the $\lambda_i>0$.
Consequently, for $\lambda_1,\ldots,\lambda_k>0$, the operators $\lambda_1N_1+\ldots+\lambda_kN_k$ can all be viewed as quadratic forms on $\operatorname{Gr}_2:=H^1(X_\ast,\mathbb{Q})/\bigcap \ker (N_i)$. In conclusion, the monodromy cone can be identified with a cone of symmetric forms on the vector space $\operatorname{Gr}_2$ \begin{equation}\label{eqnmc3} \sigma(\pi^\circ):=\mathbb{R}^+\langle N_1,\ldots, N_k\rangle\subseteq \operatorname{Hom} (\operatorname{Sym}^2\operatorname{Gr}_2,\mathbb{R}). \end{equation} \begin{rem} Using $N^2=0$ and the symplectic form $Q$, there is a natural identification $W_0=W_1^\perp$, which then gives an identification $$ W_0(N)=\Im(N)=\sum \Im N_i. $$ \end{rem} \subsubsection{Closures of monodromy cones}\label{sectclosure} We now discuss the closure of the monodromy cone. Clearly, in terms of the description \eqref{eqnmc1}, we have \begin{equation}\label{eqncmc1} \overline{\sigma(\pi^\circ)}=\mathbb{R}_{\ge 0}\langle N_1,\ldots, N_k\rangle\subseteq \mathfrak{sp}(H_\mathbb{R},Q). \end{equation} However, in terms of \eqref{eqnmc3}, the description is not as obvious. The issue is that in setting $$ \operatorname{Gr}_2=\operatorname{Gr}_2(\underline\lambda)=H^1(X_\ast,\mathbb{Q})/\bigcap \ker (N_i), $$ the $N_i$ \emph{individually} are not naturally identified as quadratic forms in $\operatorname{Hom} (\operatorname{Sym}^2 \operatorname{Gr}_2,\mathbb{R})$; they are quadratic forms in $\operatorname{Hom} (\operatorname{Sym}^2 \operatorname{Gr}_2(W_\bullet(N_i)),\mathbb{R})$, respectively. To remedy this, set $\overline N_i$ ($i=1,\ldots,k$) to be the composition: \begin{footnotesize} \begin{equation}\label{eqnCLM} \begin{CD} \operatorname{Gr}_2(W_\bullet(\underline \lambda))@>\rho_i >> \operatorname{Gr}_2(W_\bullet(N_i)) @>N_i>> \operatorname{Gr}_0(W_\bullet(N_i)) @>\rho_i^\vee>> \operatorname{Gr}_0(W_\bullet(\underline \lambda)).\\ @| @| @| @| \\ H^1/\bigcap \ker (N_i) @>\rho_i >> H^1/\ker (N_i) @>Q(\cdot, N_i(\cdot))>> (H^1/\ker (N_i))^\vee @>\rho_i ^\vee>> (H^1/\bigcap \ker (N_i) )^\vee. \end{CD} \end{equation} \end{footnotesize} Then unwinding the definitions, we obtain \begin{equation}\label{eqncmc3} \overline{\sigma(\pi^\circ)}=\mathbb{R}_{\ge 0}\langle \overline N_1,\ldots, \overline N_k\rangle\subseteq \operatorname{Hom} (\operatorname{Sym}^2 \operatorname{Gr}_2,\mathbb{R}). \end{equation} \subsection{Monodromy cones in the geometric context} \label{secMGC} In the previous subsection we have discussed the abstract Hodge theoretic aspects associated to a degeneration. This discussion allows us to tie in with the theory of toroidal compactifications discussed in Section \ref{secttoroidal}. Further, via the discussion of \S\ref{sectclosure}, it reduces the computation of monodromy cones to the case of $1$-parameter bases. Here we assume that this $1$-parameter VHS arises from a $1$-parameter geometric family. In this situation, we would like to interpret the monodromy cones in terms of the geometry and combinatorics of the central fiber (i.e.~the limit geometric object). Namely, we assume here that there is a smooth family $\pi^\circ:\mathfrak X^\circ\to S^\circ$ which has an extension $\pi:\mathfrak X\to S$ to a projective morphism, with central fiber $X_0=\pi^{-1}(0)$ a simple normal crossing divisor in the family (this can be obtained after a finite base change by the semi-stable reduction theorem, and will not affect extension questions). Note this will also imply that the monodromy is unipotent.
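Before turning to the geometric interpretation, we record a minimal computer verification (Python/NumPy; the matrices form a hypothetical $g'=0$, $\nu=2$ example in the block normal form of the remark in \S\ref{sectHT} above, and the code is purely illustrative) of the linear algebra just discussed: the operators below are commuting and unipotent with $N_i^2=0$, and the cone spanned by the associated quadratic forms is the principal $A_2$ cone $\langle x_1^2,x_2^2,(x_1-x_2)^2\rangle$, which is matroidal.
\begin{verbatim}
import numpy as np

def unipotent(b):
    # T = [[I, b], [0, I]] with b the symmetric matrix of the
    # quadratic form attached to N = T - Id (block normal form).
    b = np.asarray(b, dtype=int)
    nu = b.shape[0]
    return np.block([[np.eye(nu, dtype=int), b],
                     [np.zeros((nu, nu), dtype=int), np.eye(nu, dtype=int)]])

T1 = unipotent([[1, 0], [0, 0]])     # x1^2
T2 = unipotent([[0, 0], [0, 1]])     # x2^2
T3 = unipotent([[1, -1], [-1, 1]])   # (x1 - x2)^2
I = np.eye(4, dtype=int)
for T in (T1, T2, T3):
    N = T - I
    assert np.all(N @ N == 0)        # (T - Id)^2 = 0 in weight 1
assert np.all(T1 @ T2 == T2 @ T1)    # local monodromies commute
\end{verbatim}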
As is well known, the central fiber $X_0$ carries a canonical Mixed Hodge Structure (MHS). Furthermore, the Clemens--Schmid exact sequence (eg.~\cite[p.109]{csseq}) relates the limit mixed Hodge structure on $X_\ast$ to the MHS on $X_0$. To recall the sequence we will let $i:X_\ast \to X_0$ be the Clemens collapsing map (the composition of the inclusion $X_\ast\subseteq \mathfrak X$ with the contraction $\mathfrak X\to X_0$), and we will denote by $\operatorname{PD}$ any of the Poincar\'e duality isomorphisms. In the weight $1$ case, the Clemens--Schmid exact sequence is $$ \xymatrix{ 0 \ar@{->}[r]& H^1(X_0) \ar@{->}[r]^{i^*}& H^1_{\lim}(X_\ast) \ar@{->}[r]^N& H^1_{\lim}(X_\ast) \ar@{->}[r]^{\beta=i_*\operatorname{PD}}& H_{1}(X_0) \ar@{->}[r]^\alpha & H^3(X_0) \ar@{->}[r]^{i^*} &\ldots } $$ Since we will not use the definition of $\alpha$, we refer the reader to \cite[p.108]{csseq}. The maps $\alpha$, $i^*$, $N$, and $\beta$ are morphisms of mixed Hodge structures of types $(2,2)$, $(0,0)$, $(-1,-1)$, and $(-1,-1)$ respectively. It follows that there are isomorphisms $$ H^1(\Gamma,\mathbb{Q}) =\operatorname{Gr}_0^{X_0}(H^1(X_0))\stackrel{i^*}{\longrightarrow} \operatorname{Gr}_0 $$ (where $\Gamma$ denotes the dual graph of $X_0$, and the first identification is given by the Mayer--Vietoris spectral sequence for $X_0$) and $$ \operatorname{Gr}_2\stackrel{\beta=i_*\operatorname{PD}}{\longrightarrow} \operatorname{Gr}_{0}^{X_0}(H_{1}(X_0))=\frac{(W_{-1}(H^{1}(X_0)))^\perp}{(W_{0}(H^{1}(X_0)))^\perp}=(\operatorname{Gr}_{0}^{X_0}(H^{1}(X_0)))^\vee. $$ Thus composing, we may view the log of monodromy as a map \begin{equation}\label{eqnCSLM} (\operatorname{Gr}_{0}^{X_0}(H^{1}(X_0)))^\vee \stackrel{\beta^{-1}}{\longrightarrow} \operatorname{Gr}_2 \stackrel{N}{\longrightarrow} \operatorname{Gr}_0 \stackrel{(i^*)^{-1}}{\longrightarrow} H^1(\Gamma,\mathbb{Q}). \end{equation} Again using the identification $ \operatorname{Gr}_{0}^{X_0}(H^{1}(X_0))=H^1(\Gamma,\mathbb{Q}) $, we may identify the spaces $H^1(\Gamma,\mathbb{Q})^\vee=H_1(\Gamma,\mathbb{Q})$ by the universal coefficients theorem, and the composition \eqref{eqnCSLM} allows us to view the log of monodromy as a map \begin{equation}\label{eqnlm4} N:H_1(\Gamma,\mathbb{Q})\to H^1(\Gamma,\mathbb{Q}) \in \operatorname{Hom} (\operatorname{Sym}^2 H_1(\Gamma,\mathbb{Q}),\mathbb{Q}). \end{equation} Consequently, for the case of a family of stable curves $\pi:\mathfrak X \to S^n$, smooth over $(S^\circ)^k\times S^{n-k}$, the monodromy cone is given by \begin{equation}\label{eqnmc4} \sigma(\pi^\circ):=\mathbb{R}^+\langle N_1,\ldots, N_k\rangle\subseteq \operatorname{Hom} (\operatorname{Sym}^2H_1(\Gamma,\mathbb{Q}),\mathbb{Q})_\mathbb{R} \end{equation} where $\Gamma$ is the dual graph of the curve $X_0=\pi^{-1}(0)$, and $N_i$ is the log of monodromy around the hyperplane $\{x_i=0\}$. In order to describe the closure of the monodromy cone, we introduce some further notation. Let $0\ne 0_i\in S^n$ be a point in the hyperplane $\{x_i=0\}$, sufficiently close to $0$. Let $X_{0_i}$ be the fiber over $0_i$, and let $\Gamma_i$ be the dual graph of $X_{0_i}$. The issue with describing the closure of the monodromy cone is that $N_i$ is not a quadratic form on $H_1(\Gamma,\mathbb{Q})$; rather, it is a quadratic form on $H_1(\Gamma_i,\mathbb{Q})$. We resolve this using \eqref{eqnCLM}, and the identification $\beta:\operatorname{Gr}_2 \to H_1(\Gamma,\mathbb{Q})$.
Thus, there exist morphisms \begin{equation}\label{eqnRHO} \begin{CD} H_1(\Gamma,\mathbb{Q})@>\rho_i >> H_1(\Gamma_i,\mathbb{Q}) \end{CD} \end{equation} so that setting $\overline N_i$ to be the composition \begin{equation}\label{eqnMonComp} \begin{CD} H_1(\Gamma,\mathbb{Q})@>\rho_i >> H_1(\Gamma_i,\mathbb{Q})@> N_i>> H^1(\Gamma_i,\mathbb{Q})@>\rho_i^\vee >> H^1(\Gamma,\mathbb{Q}), \end{CD} \end{equation} then the closure of the monodromy cone is given by \begin{equation}\label{eqncmc4} \overline{\sigma(\pi^\circ)}=\mathbb{R}_{\ge 0}\langle \overline N_1,\ldots, \overline N_k\rangle\subseteq \operatorname{Hom} (\operatorname{Sym}^2 H_1(\Gamma,\mathbb{Q}),\mathbb{Q})_\mathbb{R}. \end{equation} Finally, the map $\rho_i$ in \eqref{eqnRHO} can be described combinatorially. For each $j=1,\ldots,k$ there is a natural map of chain complexes $C_\bullet(\Gamma,\mathbb{Z})\to C_\bullet (\Gamma_j,\mathbb{Z})$ (see \S\ref{secSimp}, where $\Gamma_j$ is denoted $\Gamma/S^c$), inducing surjective maps \begin{equation}\label{eqnCM} H_1(\Gamma,\mathbb{Q})\to H_1(\Gamma_j,\mathbb{Q}). \end{equation} We claim that this map agrees with the map $\rho_j$ above. This follows from the definitions, and we sketch the argument here. The key point is the identification $$ \beta=i_*\operatorname{PD}:\operatorname{Gr}_2:=H^1_{\lim}/W_1\to \operatorname{Gr}_0^{X_0}(H_1):=H_1(X_0)/W_{-1}^{X_0}(H_1). $$ We define $W_{-1}^{X_0}(H_1)=(W_0^{X_0}(H^1))^\perp$. Dualizing the exact sequence (obtained from the Mayer--Vietoris spectral sequence) $$ 0\to W_0^{X_0}(H^1)\to H^1(X_0)\to H^1(\widehat X_0)\to 0 $$ we obtain that $(W_0^{X_0}(H^1))^\perp=H_1(\widehat X_0)$ using the universal coefficients theorem, where $\widehat X_0$ denotes the normalization of $X_0$. In short, we have $$ \beta=i_*\operatorname{PD}:\operatorname{Gr}_2\to H_1(X_0)/H_1(\widehat X_0). $$ Thus there is a commutative diagram \begin{footnotesize} $$ \begin{CD} \operatorname{Gr}_2(\sum \lambda_jN_j)=H^1(X_\ast)/\bigcap \ker (N_j)@>\rho >> \operatorname{Gr}_2(N_j) =H^1(X_\ast)/\ker (N_j)@.\\ @V\beta=i_*PD VV @V\beta=i_*PD VV\\ H_1(\Gamma)=H^1(\Gamma)^\vee=H_1(X_0)/H_1(\widehat X_0)@>(i_* \operatorname{PD}) (\operatorname{PD}^{-1} i_*^{-1})>> H_1(X_{0_j})/H_1(\widehat X_{0_j})=H^1(\Gamma_j)^\vee=H_1(\Gamma_j). \end{CD} $$ \end{footnotesize} The bottom row is easily seen to be the combinatorial map \eqref{eqnCM} above (note that $i_*^{-1}$ is only defined up to vanishing cycles, but nevertheless, $i_* i_*^{-1}$ is well defined as a map on the quotients). \begin{rem} One can also identify $\rho_j^\vee$ directly as well. Following the definitions, one finds that it is given by $$ H^1(\Gamma_j)=\operatorname{Gr}_0^{X_{0_j}}(H^1(X_{0_j})) \stackrel{i^*}{\cong }\operatorname{Gr}_0(N_j)=\Im(N_j)\hookrightarrow \Im(\sum N_j) =\Im(N)=\operatorname{Gr}_0(N)\ \stackrel{(i^*)^{-1}}{\cong } H^1(\Gamma). $$ This can also be identified with the dual of the combinatorial map given above. \end{rem}
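To illustrate the combinatorial description above on the simplest example (the code, in Python/SymPy, is ours and purely illustrative, and the basis choices are arbitrary): for the ``theta'' dual graph $\Gamma$, i.e.~two vertices joined by three edges (two smooth components meeting in three nodes), $H_1(\Gamma,\mathbb Q)$ has rank $2$, and the rank $1$ form $\ell_e^2$ classically attached to the node $e$ (with $\ell_e$ the $e$-th edge coordinate restricted to the cycle space, cf.~\eqref{eqnmc4} and Section \ref{sectMon}) can be computed as follows.
\begin{verbatim}
from sympy import Matrix

# Boundary map C_1 -> C_0 of the theta graph (rows: the two vertices,
# columns: the three edges, all oriented from one vertex to the other).
d = Matrix([[1, 1, 1],
            [-1, -1, -1]])
B = Matrix.hstack(*d.nullspace())   # columns: a basis of H_1(Gamma, Q)
# Row e of B expresses ell_e in the dual of the chosen cycle basis:
ells = [list(B.row(e)) for e in range(B.rows)]
print(ells)   # [[-1, -1], [1, 0], [0, 1]]
\end{verbatim}
Up to sign, the three forms are $x_1+x_2,\ x_1,\ x_2$, so the resulting cone $\langle (x_1+x_2)^2,\, x_1^2,\, x_2^2\rangle$ is the principal $A_2$ cone; in particular it is matroidal, and hence lies in both the second Voronoi and the perfect cone decompositions (cf.~Remark \ref{remMV}).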
\subsection{Monodromy cones and extensions of period maps}\label{secextpermap} Now that we have defined the pertinent terms, we can discuss the standard extension results for period maps to toroidal compactifications. We begin by making one remark. \begin{rem}\label{remFBC} Let $B$ be a smooth variety. It is a basic fact that if a rational map from $B$ to any of the compactifications of the moduli of abelian varieties extends after a finite base change, then the rational map itself extends. For this reason, we will be free in what follows to make finite base changes when considering extensions of period maps. \end{rem} \subsubsection{Extension via Hodge theory} Fix a compatible collection of admissible cone decompositions $\Sigma$, and let $\bar A^\Sigma_g$ be the associated toroidal compactification. Let $$f^\circ:(S^\circ)^k\times S^{n-k}\to A_g$$ be a locally liftable morphism (i.e.~one induced by a family of abelian varieties). After a finite base change, we may assume that the monodromy operators $T_i$ around the boundary divisors $\{x_i=0\}$ are unipotent (see Remark \ref{remFBC}). Then setting $N_i=\log T_i$, we have seen that for any $\lambda_1,\ldots,\lambda_k>0$, there is a fixed $Q$-isotropic subspace $W_0=\Im (\sum_i \lambda_iN_i)=\sum_i\Im(N_i)$. The Borel extension theorem (\cite{borel}) implies that $f^\circ$ extends to a morphism $f:S^n\to A^*_g$. The isotropic subspace $W_0$ determines a boundary component $F_{W_0}$, and in turn, a boundary component $A^*_{g'}$. The point $f(0)$ is the point of $A^*_{g'}$ associated to the pure weight $1$ Hodge structure determined by the first graded piece of the limit mixed Hodge structure of any semi-stable reduction of the restriction of the (induced) family (of say abelian varieties) to a one-parameter base. The following extension theorem is well known (see eg.~\cite[Thm.~7.29, Rem.~7.30]{nam}, \cite[Thm.~7.2]{AMRT}, \cite[Thm.~5.7, p.116]{FC90}): \begin{fact}\label{fctext} The map $f^\circ$ extends to a morphism $S^n\to \bar A^\Sigma_g$ if and only if the monodromy cone (as defined in \eqref{eqnmc3}) is contained in a cone in $\Sigma$ (more specifically, a cone in $\Sigma_{W_0}$). \end{fact} \subsubsection{Resolving period maps} \label{secRPM} We now recall how one can resolve the period map to a toroidal compactification $\bar A_g^\Sigma$ of the moduli of abelian varieties. Let us assume that we have a semiabelian scheme $\mathscr X\to B$ where $B$ is smooth, and that there is a simple normal crossing divisor $\Delta\subseteq B$ so that the family is abelian over $B^\circ=B-\Delta$. Fix a base point $0\in \Delta$. The variety $(B,\Delta)$ is toroidal at $0$, corresponding to the semi-group ring $\mathbb C[\mathbb N^k]$, where $k$ is the number of components of $\Delta$ meeting at $0$ (technically we mean that locally, the miniversal space is smooth over this toric variety). Fix a basis $e_1,\ldots,e_{k}$ for $\mathbb R^k$. We say that $\mathbb R_{\ge 0}\langle e_i\rangle$ is the cone associated to the toric data at $0$. Now let $\sigma(\mathscr X/B)$ be the monodromy cone associated to the semiabelian family at $0$. The period map defines a map of cones $$ \mu:\mathbb R_{\ge 0}\langle e_i\rangle \to \sigma(\mathscr X/B) $$ $$ e_i\mapsto \log T_{e_i} $$ where $T_{e_i}$ is the monodromy around the hyperplane associated to $e_i$. Now the admissible decomposition $\Sigma$ decomposes $\sigma(\mathscr X/B)$ into a fan $\mathscr F_0^\Sigma$ of cones. Fact \ref{fctext} states that the period map extends at $0$ if and only if $\mathscr F_0^\Sigma$ has just one cone of maximal dimension. Otherwise, there is an induced fan $\mu^{-1}\mathscr F_0^\Sigma$ decomposing the cone $\mathbb R_{\ge 0}\langle e_i\rangle$. Any fan $\mathscr F$ decomposing $\mathbb R_{\ge 0}\langle e_i\rangle$ determines a birational morphism to $\mathbb A^k_{\mathbb C}$.
Any fan $\mathscr F$ that refines $\mu^{-1}\mathscr F_0^\Sigma$ determines a birational modification of $B$ (in an \'etale neighborhood of $0$) that resolves the period map in an (\'etale) neighborhood of $0$. In particular, the fan $\mu^{-1}\mathscr F_0^\Sigma$ determines the minimal, toric birational modification that will resolve the period map. This minimal, toric birational modification is canonical and glues to give a birational modification for a period map on a moduli space. \subsection{Moduli stacks and abelian varieties} \label{secModStAbVar} We may also view the toroidal compactifications as compactifications of the moduli of abelian varieties. We begin by recalling the basic structure of toroidal compactifications from this perspective. Every toroidal compactification $\bar A_g^{\Sigma}$ has a canonical map $$ \varphi^{\Sigma}: \bar A_g^{\Sigma} \to A_g^{*} =A_g \sqcup A_{g-1} \sqcup \ldots \sqcup A_0 $$ which defines a stratification $\beta^{\Sigma}_i= (\varphi^{\Sigma})^{-1}(A_{g-i})$. The strata $\beta^{\Sigma}_i$ are themselves stratified as $\beta^{\Sigma}_i=\sqcup \beta(\sigma)$ where $\sigma$ runs through all $\operatorname{GL}(i,\mathbb{Z})$ orbits of cones in the decomposition $\Sigma$ of $\operatorname{Sym}^2(\mathbb{Z}^i)$ containing rank $i$ matrices. The strata $\beta(\sigma)$ themselves are of the form $\beta(\sigma)= {\mathcal T}(\sigma) /G(\sigma) $ where ${\mathcal T}(\sigma)$ is a torus bundle over the $i$-fold fiber product of the universal family $p: {\mathcal X}_{g-i} \to {\mathcal A}_{g-i}$. Indeed, $\pi(\sigma)=p^{\times i}\circ q(\sigma) : {\mathcal T}(\sigma) \to {\mathcal X}_{g-i}^{\times i} \to {\mathcal A}_{g-i}$ where $q(\sigma) $ is a torus bundle whose fibers have dimension $i(i+1)/2 - \dim(\sigma)$. More precisely, ${\mathcal T}(\sigma)= {\mathcal T}_i / {\mathcal T}_{\sigma}$ where the fibers of ${\mathcal T}_i$ and ${\mathcal T}_{\sigma}$ are $\operatorname{Sym}^2(\mathbb{Z}^i) \otimes \mathbb{C}^*$ and $(\langle \sigma \rangle \cap \operatorname{Sym}^2(\mathbb{Z}^i)) \otimes \mathbb{C}^*$ respectively. The group $G(\sigma)$ is the stabilizer of the cone $\sigma$ in $\operatorname{GL}(i,\mathbb{Z})$ and acts naturally on ${\mathcal T}(\sigma)$. The codimension of ${\mathcal T}(\sigma)$ in $\bar A_g^{\Sigma}$ is $\dim(\sigma)$. \subsubsection{The Faltings--Chai stacks} Faltings--Chai have given a stack theoretic interpretation of the toroidal compactifications. For each toroidal compactification $\bar A_g^\Sigma$, there is an irreducible, normal, proper, Deligne--Mumford $\mathbb{C}$-stack $\overline{\mathcal{A}}_g^\Sigma$ with coarse moduli space $\bar A_g^\Sigma$, and a semiabelian scheme $\mathcal X_g^\Sigma\to \bar{\mathcal A}_g^\Sigma$ extending the universal abelian variety $\mathcal X_g\to \mathcal A_g$ (\cite[Thm.~5.7 (5), p.117]{FC90}). While the stack does not represent a moduli functor of semiabelian varieties, we do have the following. A semiabelian scheme $X\to S$ over the disk such that the restriction $X^\circ\to S^\circ$ to the punctured disk is abelian (i.e.~a morphism $S\to \bar{\mathcal A}_g^\Sigma$ with $S^\circ \to \mathcal A_g$) is determined by a set of degeneration data (see also \cite{alexeev02}, \cite{abh}), namely: (D0) A principally polarized abelian variety $(A,M)$ (with $M$ an ample line bundle) inducing an isomorphism $\lambda_M:A\to \widehat A$, where $\widehat A=\operatorname{Pic}^0(A)$.
(D1a) A semiabelian variety $ 0\to \mathbb T\to G\to A\to 0 $ with split torus part, determined by a homomorphism $c:\Lambda\to \widehat A$, where $\Lambda$ is the character lattice of $\mathbb T$. (D1b) A second semiabelian variety $ 0\to \widehat {\mathbb T}\to \widehat G\to \widehat A\to 0 $ induced by a homomorphism $\hat c: \hat \Lambda\to A$, where $\hat \Lambda$ is the character lattice of $\widehat{\mathbb T}$. (D2) An isomorphism of lattices $\phi:\widehat \Lambda\to \Lambda$ so that $c\circ \phi=\lambda_M\circ \hat c$. (D3) A bihomomorphism $\tau:\hat \Lambda\times \Lambda \to (\hat c\times c)^*(\mathscr P^\circ )^{-1} $, where $\mathscr P^\circ$ is the rigidified Poincar\'e bundle with zero section removed. (D4) A cubical morphism $\psi:\widehat \Lambda \to \hat c^*(M^\circ)^{-1}$, where $M^\circ$ is the principal bundle obtained from removing the zero section of $M$. (D5) For each $\lambda\in \Lambda$, a section $\theta_\lambda \in \Gamma(A,M\otimes c(\lambda))$ satisfying some further compatibility conditions with the data above. This data, more precisely the bihomomorphism $\tau$, defines a quadratic form $B:\Lambda\times \Lambda\to \mathbb Q$, which in fact agrees with the log of monodromy for the $1$-parameter family (at least up to $\operatorname{GL}$-conjugation and scaling, which are irrelevant from the perspective of extending period maps). (D6) The Delaunay decomposition of $\Lambda_{\mathbb R}$ determined by $B$. Given this data, we can describe the image of the central point under the map $f: S\to \bar A_g^\Sigma$ as follows. The data (D0) determines the image of $0$ under the composition $S\to \bar {A}_g^\Sigma\to A_g^*$; in particular it determines the stratum $\beta_i$ described above, in which $f(0)$ lies. The quadratic form $B$ lies in a unique cone $\sigma\in \Sigma$ of minimal dimension. The point $f(0)$ then lies in the stratum $\beta(\sigma)\subseteq \beta_i$, described above. The remaining degeneration data determines the specific point $f(0)$ within $\beta(\sigma)$. More precisely, using the description of ${\mathcal T}_i$ given in \cite[Prop.~7.2]{ght}, the bihomomorphism $\tau$ defines a point in ${\mathcal T}_i$, and $f(0)$ is its image in $\beta(\sigma)= ({\mathcal T}_i/{\mathcal T}_{\sigma})/G(\sigma)$. \begin{rem}\label{remstack} Let $\overline {\mathcal M}$ be a smooth, Deligne--Mumford $\mathbb{C}$-stack containing an open substack $\mathcal M$ with normal crossing boundary divisor $\Delta=\overline{\mathcal M}\setminus \mathcal M$. Let $\overline M$ and $M$ be the respective coarse moduli spaces. Suppose there exists a morphism $\mathcal M\to \mathcal A_g$, inducing a morphism $M\to A_g$. Using the Abramovich--Vistoli purity lemma, one obtains the following: {\it The morphism $\mathcal M\to \mathcal A_g$ extends to a morphism $\overline{\mathcal M}\to \bar {\mathcal A}^\Sigma_g$ if and only if the morphism $M\to A_g$ extends to a morphism $\overline M \to \bar A^\Sigma_g$.} \end{rem} \begin{rem} The singularities of the stack $\bar{\mathcal A}_g^\Sigma$ can be read off from the cones in $\Sigma$. Basic cones give rise to smooth points of the stack. Simplicial but non-basic cones correspond to quotient singularities by finite abelian groups. More precisely, these groups are identified with the quotient of the lattice $(\operatorname{Sym}^2(\Lambda))^\vee \cap \langle \sigma \rangle$ by the sub-lattice generated by the integral generators of $\sigma$. Non-simplicial cones give rise to more general (toric) singularities of the moduli stack.
Singularities of the varieties $\bar A_g^\Sigma$ can also occur if the cones are basic; in fact, they already occur on ${A}_g$ itself. The singularities depend on the finite stabilizer of a point in the toric construction. The codimension of the singular locus of the stack $\bar{\mathcal A}_g^{P}$ is $10$, whereas it is $3$ in the case of $\bar{\mathcal A}_g^V$ \cite{DHS}. \end{rem} \subsubsection{Alexeev's stack of stable semiabelic pairs} \label{secSSAP} Alexeev has constructed a moduli space $\bar {\mathcal A}_g^A$ of complex stable semiabelic pairs that contains $\mathcal A_g$ as an open substack \cite{alexeev02}. The stack $\bar {\mathcal A}_g^A$ is a proper, algebraic (Artin) $\mathbb{C}$-stack with finite diagonal \cite[Thm.~5.10.1]{alexeev02}. Moreover, the stack admits a coarse moduli space, with a component that has normalization isomorphic to the second Voronoi compactification $\overline{A}_g^V$ \cite[Thm.~5.11.6, p.701]{alexeev02} (see also Olsson \cite{olsson}). The main point is that the degeneration data described above define locally relatively complete models which admit an action of the universal semiabelian variety. Gluing these to obtain a universal family over a compactification of ${\mathcal A}_g$, one is naturally led to the second Voronoi decomposition. We also recall how the degeneration data above determine a stable semiabelic pair. The Delaunay decomposition defines a fan on $\mathbb R\oplus \Lambda_{\mathbb R}$ with cones determined by the Delaunay decomposition shifted by $(1,0)\in \mathbb N\oplus \Lambda$. We use this to define an $\mathscr O_A$-algebra $\mathscr R$. As a module, $\mathscr R$ is freely generated by $M_\chi:=M^{\otimes d}\otimes c(\lambda)$ for each $\chi=(d,\lambda)\in \mathbb N\oplus \Lambda$. We define multiplication by the natural identification $M_{\chi_1}\otimes M_{\chi_2}=M_{\chi_1+\chi_2}$ when $\chi_1,\chi_2$ lie in a common cone $\delta$ of the fan over the Delaunay decomposition. When $\chi_1,\chi_2$ do not lie in a common cone, sections multiply to $0$. The morphisms $\tau$ and $\phi$ define a natural action of $\hat\Lambda$ on $\mathscr R$, and the action is properly discontinuous in the Zariski topology on the relative Proj, so that $X=(\operatorname{Proj}_A\mathscr R)/\hat\Lambda$ is a well-defined, polarized scheme. This is the stable semiabelic pair. Note that the fiber of $X$ over $A$ is a (possibly reducible) projective toric variety, obtained by gluing the toric varieties determined by the tiles of the Delaunay decomposition. \section{Prym varieties and admissible covers}\label{sectPrym} It is well known that $\mathcal M_g$ has a normal crossing compactification $\overline{\mathcal M}_g$ obtained by allowing stable curves, whose limiting Hodge theoretic behavior is controlled by a combinatorial object, the dual graph $\Gamma$. Prym varieties are abelian varieties obtained from connected \'etale double covers of curves. A normal crossing compactification for the moduli of connected \'etale double covers $\mathcal R_g$ (compatible with $\overline {\mathcal M}_g$) was constructed by Beauville by considering admissible double covers of stable curves. The associated combinatorial object governing the limiting Hodge theoretic behavior is a dual graph with involution. We briefly review this below. \subsection{Admissible covers} Let $C$ be a stable curve of genus $g+1\ge 2$.
Recall that an \emph{admissible double cover of $C$} is a finite, surjective morphism $\pi:\widetilde C \to C$ of stable curves such that: \begin{enumerate} \item The arithmetic genus of $\widetilde C$ is equal to $\tilde g=2g+1$. \item For each irreducible component $C'$ of $C$, the restriction $\pi: \pi^{-1}(C')\to C'$ has degree two (but $\pi^{-1}(C')$ may be reducible or disconnected). \item If $\iota: \widetilde C\to \widetilde C$ is the sheet interchange involution associated to $\pi$, the fixed points of $\iota$ are a subset of the nodes of $\widetilde C$, and at a fixed node the local branches of $\widetilde C$ are not exchanged. \end{enumerate} There exists a smooth, irreducible, proper Deligne--Mumford $\mathbb{C}$-stack $\overline{\mathcal R}_{g+1}$ parameterizing admissible double covers of stable curves of genus $g+1\ge 2$ \cite{b}. We denote by $\mathcal R_{g+1}$ the open sub-stack of connected, \'etale double covers of smooth curves. The forgetful functor $\overline{\mathcal R}_{g+1}\to \overline {\mathcal M}_{g+1}$ to the moduli of stable curves defines a degree $2^{2(g+1)}-1$ cover, ramified along an irreducible boundary divisor $\delta_0^{\operatorname{ram}}\subset\overline{\mathcal{R}}_{g+1}$ (see \S \ref{secBD}). The full boundary $\delta_{\overline{\mathcal R}_{g+1}}$ of $\overline {\mathcal R}_{g+1}$ is a simple normal crossing divisor with the property that \'etale locally at a point $\pi:\widetilde C\to C$, its irreducible components correspond to the nodes of $C$. We discuss the irreducible components of the boundary divisor $\delta_{\overline{\mathcal{R}}_{g+1}}$ in \S \ref{secBD} below. We denote by $\overline R_{g+1}$ and $R_{g+1}$ the coarse moduli spaces of the respective stacks. \subsection{Involutions of graphs} Here we fix our conventions on graphs. A graph $\Gamma$ is a set of {\em vertices} $V=V(\Gamma)$ and a set of {\em oriented edges} $\overrightarrow E=\overrightarrow E(\Gamma)$ together with maps $(\overrightarrow E \xymatrix{ \ar @{->}^s @< 2pt> [r] \ar@{->}_t @<-2pt> [r] & } V, \overrightarrow E \stackrel{\tau}{\to} \overrightarrow E)$, where $\tau$ is a fixed-point free involution, and $s$ and $t$ are maps satisfying $s(\overrightarrow e)=t(\tau(\overrightarrow e))$ for all $\overrightarrow e \in \overrightarrow E$. The maps $s$ and $t$ are called the \emph{source} and \emph{target} maps respectively. We define the set of \emph{(unoriented) edges} to be $E(\Gamma)=E:=\overrightarrow E/\tau$. Given an oriented edge $\overrightarrow e \in \overrightarrow E $ we will denote by $\underline {\overrightarrow e}$ the class of $\overrightarrow e$ in $E$. An \emph{orientation of an edge} $e\in E$ is a representative for $e$ in $\overrightarrow E$; we use the notation $\overrightarrow e$ and $\overleftarrow e$ for the two possible orientations of $e$. An \emph{orientation of a graph $\Gamma$} is a section $\phi:E\to \overrightarrow E$ of the quotient map. An \emph{oriented graph} consists of a pair $(\Gamma,\phi)$ where $\Gamma$ is a graph and $\phi$ is an orientation. A \emph{morphism of graphs} $\Gamma_1\to \Gamma_2$ consists of a pair of maps $$ V(\Gamma_1)\to V(\Gamma_2) \ \ \text{ and } \ \ \overrightarrow E(\Gamma_1)\to \overrightarrow E(\Gamma_2) $$ so that all of the associated diagrams commute. An \emph{involution $\iota$ of a graph} is an endomorphism of the graph such that $\iota^2$ is the identity. We can define morphisms of oriented graphs as well; an involution of an oriented graph is defined in the obvious way. 
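To illustrate these conventions with a minimal example: consider the graph with a single vertex $v$ and a single unoriented edge $e$ (a loop). Here $\overrightarrow E=\{\overrightarrow e,\overleftarrow e\}$ with $s(\overrightarrow e)=t(\overrightarrow e)=v$, the involution $\tau$ exchanges $\overrightarrow e$ and $\overleftarrow e$, and an orientation of the graph is a choice of one of the two oriented edges. This graph has exactly two involutions: the identity, and the morphism fixing $v$ and exchanging $\overrightarrow e$ with $\overleftarrow e$; in the terminology introduced below, only the first of these is admissible.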
Associated to a graph $\Gamma$ is a chain complex $$ (C_\bullet(\Gamma,\mathbb{Z}),\partial_\bullet), $$ where $C_0(\Gamma,\mathbb{Z})$ is the free $\mathbb{Z}$-module with basis $V(\Gamma)$, and $C_1(\Gamma,\mathbb{Z})$ is the quotient of the free $\mathbb{Z}$-module with basis $\overrightarrow E(\Gamma)$ by the relation $\overleftarrow e=-\overrightarrow e$ for every $e\in E(\Gamma)$. We denote by $[\overrightarrow e]$ the class of $\overrightarrow e$ in $C_1(\Gamma,\mathbb{Z})$. Note that while $\underline {\overrightarrow e}=\underline {\overleftarrow e}$ in $E$ (they correspond to the same unoriented edge), in $C_1(\Gamma,\mathbb{Z})$ we have $[\overrightarrow e]=-[\overleftarrow e]$. An orientation $\phi$ determines a basis $\{[\phi(e)]\}_{e\in E}$ for $C_1(\Gamma,\mathbb{Z})$, which identifies $C_1(\Gamma,\mathbb{Z})$ with the usual chain group of $1$-chains for the associated simplicial complex. The boundary map is defined by: \begin{equation*} \partial: C_1(\Gamma,\mathbb{Z}) \to C_0(\Gamma,\mathbb{Z}) \end{equation*} \begin{equation*} [\overrightarrow e] \mapsto t(\overrightarrow e)-s(\overrightarrow e). \end{equation*} We will denote by $H_\bullet (\Gamma,\mathbb{Z})$ the groups obtained from the homology of $C_\bullet (\Gamma,\mathbb{Z})$; the group $H_\bullet (\Gamma,\mathbb{Z})$ is isomorphic to the homology of the underlying topological space of $\Gamma$. From the definitions one can check immediately that an involution $\iota$ of a graph $\widetilde \Gamma$ induces an involution $\iota$ of the chain complex $C_\bullet(\widetilde \Gamma,\mathbb{Z})$. This in turn induces an involution $\iota$ on $H_\bullet (\widetilde \Gamma,\mathbb{Z})$; we denote by $H_1(\widetilde \Gamma, \mathbb{Z})^\pm$ the eigenspaces of the action of $\iota$. We define $$ H_1(\widetilde \Gamma,\mathbb{Z})^{[+]}=H_1(\widetilde \Gamma,\mathbb{Z})/H_1(\widetilde \Gamma,\mathbb{Z})^{-} \ \ \text { and } \ \ H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}=H_1(\widetilde \Gamma,\mathbb{Z})/H_1(\widetilde \Gamma,\mathbb{Z})^+. $$ Note that $$ H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}\cong \Im\left( \frac{1}{2}(\operatorname{Id} -\iota) \right) \subseteq \frac{1}{2}H_1(\widetilde \Gamma,\mathbb{Z}) $$ $$ H_1(\widetilde \Gamma,\mathbb{Z})^{[+]}\cong \Im\left( \frac{1}{2}(\operatorname{Id} +\iota) \right) \subseteq \frac{1}{2}H_1(\widetilde \Gamma,\mathbb{Z}). $$ As usual, we construct a cochain complex $C^\bullet(\widetilde \Gamma,\mathbb{Z})=\operatorname{Hom} (C_\bullet(\widetilde \Gamma,\mathbb{Z}),\mathbb{Z})$, and define the cohomology groups $H^\bullet(\widetilde \Gamma,\mathbb{Z})$ to be the homology of this complex. We have $H^i(\widetilde \Gamma,\mathbb{Z})=H_i(\widetilde \Gamma,\mathbb{Z})^\vee$ ($i=0,1$). In contrast, on the level of eigenspaces we have $$ H^i(\widetilde \Gamma,\mathbb{Z})^\pm=\left(H_i(\widetilde \Gamma,\mathbb{Z})^{[\pm]}\right)^\vee \ \ (i=0,1). $$ \begin{rem}\label{remcoedge} Here we make an observation that will be useful for later computations. To simplify the discussion, fix an orientation of the graph $\Gamma$. Then, by definition, $C^1(\Gamma,\mathbb{Z})=C_1(\Gamma,\mathbb{Z})^\vee=(\bigoplus_{e\in E}\mathbb{Z}\cdot e)^\vee=\bigoplus_{e\in E}\mathbb{Z}\cdot e^\vee$. We call the elements $e^\vee$ co-edges. There is by definition a surjection $C^1(\Gamma,\mathbb{Z})\twoheadrightarrow H^1(\Gamma,\mathbb{Z})$. Denote temporarily by $\widehat {e^\vee}$ the image of a co-edge $e^\vee$ in $H^1(\Gamma,\mathbb{Z})$.
Now note that if $\widehat {e_1^\vee}=\widehat {e_2^\vee}$, then for any $z\in H_1(\Gamma,\mathbb{Z})\subseteq C_1(\Gamma,\mathbb{Z})$, we have $e_1^\vee(z)=e_2^\vee(z)$. \end{rem} An \emph{admissible involution} of a graph $\widetilde \Gamma$ is an involution $\iota$ such that for all $\overrightarrow e\in \overrightarrow E$, $\iota(\overrightarrow e)\ne \overleftarrow e$. In other words, an involution is admissible if whenever an unoriented edge of the graph is fixed by the involution, the vertices at the endpoints of the edge are not interchanged by the involution (unless the edge is a loop, in which case the condition requires the associated oriented edges not to be interchanged). If $\pi:\widetilde C\to C$ is an admissible cover, then the associated involution $\iota$ of $\widetilde C$ induces a well-defined involution $\iota$ of the vertices $V(\Gamma_{\widetilde C})$ and of the unoriented edges $E(\Gamma_{\widetilde C})$. There is also a well-defined induced involution on the set of oriented edges. We will call this the \emph{induced admissible involution of the dual graph of $\widetilde C$}. \subsection{Prym varieties} Let $C$ be a stable curve of genus $g\ge 2$. The Jacobian $JC$ is defined to be the connected component of the identity in $\operatorname{Pic}(C)$. The Jacobian is a semiabelian variety of dimension $g$, which can be described explicitly as follows. Let $\nu:N\to C$ be the normalization and let $\Gamma=\Gamma_C$ be the dual graph of $C$. Then there is an exact sequence \begin{equation}\label{eqnExtJac} \begin{CD} 0@>>> H^1(\Gamma,\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{C}^*@>>> JC @>\nu^*>> JN @>>> 0. \end{CD} \end{equation} The extension determines a class in $$ \operatorname{Ext}^1(JN,H^1(\Gamma,\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{C}^*)=\operatorname{Hom} (H_1(\Gamma,\mathbb{Z}),\widehat {JN}), $$ where $\widehat {JN}=\operatorname{Pic}^0(JN)$ is the dual abelian variety. We refer the reader to \cite[p.76]{abh} for an explicit description of the extension class (see also \S \ref{secsubFibers}). For later reference we note that $JC$ is an extension of the torus $\mathbb{T}_C:=H^1(\Gamma,\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{C}^*$, which has character lattice \begin{equation}\label{eqnJClattice} \Lambda_C:=\operatorname{Hom} (\mathbb{T}_C,\mathbb{C}^*)=H_1(\Gamma,\mathbb{Z}). \end{equation} Now let $\pi:\widetilde C\to C$ be an admissible double cover of a stable curve $C$ of genus $g+1\ge 2$. We define the Prym variety $$ P:=P(\widetilde C/C)=\ker \left(\operatorname{Nm}:J\widetilde C\to JC\right)_0 $$ to be the connected component of the identity in the kernel of the norm map. The Prym variety is a semiabelian variety of dimension $g$, which can be explicitly described as follows. Let $\tilde \nu:\widetilde N\to \widetilde C$ and $\nu:N\to C$ be the normalizations, and let $\widetilde \Gamma=\Gamma_{\widetilde C}$ and $\Gamma=\Gamma_C$ be the dual graphs of $\widetilde C$ and $C$ respectively. Then there is an exact sequence \begin{equation}\label{eqnExtPrym} \begin{CD} 0@>>> H^1(\widetilde \Gamma,\mathbb{Z})^-\otimes_{\mathbb{Z}}\mathbb{C}^*@>>> P @>>> A @>>> 0, \end{CD} \end{equation} where $A$ is a finite cover of $P_N:=P(\widetilde N/N)=\ker \left(\operatorname{Nm}:J\widetilde N\to JN\right)_0$, the Prym variety of the normalization. The extension determines a class in $$ \operatorname{Ext}^1(A,H^1(\widetilde \Gamma,\mathbb{Z})^-\otimes_{\mathbb{Z}}\mathbb{C}^*)=\operatorname{Hom} (H_1(\widetilde \Gamma,\mathbb{Z})^{[-]},\widehat A).
$$ We direct the reader to \cite[\S 1, Prop.~1.5]{abh} for more details on the relationship between $A$ and $P_N$, as well as for an explicit description of the extension class (see also \S \ref{secsubFibers}). For later reference we note that $P$ is an extension of the torus $\mathbb{T}_P:=H^1(\widetilde \Gamma,\mathbb{Z})^-\otimes_{\mathbb{Z}}\mathbb{C}^*$, which has character lattice \begin{equation}\label{eqnPrymlattice} \Lambda_{\widetilde C/C}:=\operatorname{Hom} (\mathbb{T}_P,\mathbb{C}^*)=H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}. \end{equation} \subsection{The boundary divisors in $\overline{\calR}_{g+1}$}\label{secBD} In some arguments in what follows we will want to enumerate certain types of admissible covers. We thus review the enumeration of the irreducible boundary components of $\overline{\calR}_{g+1}$ following \cite{farkassurvey} and the references therein (see especially \cite{bernstein}, \cite{FL10}), noting also the corresponding descriptions in terms of vanishing cycles (see also the preprint version of \cite{fs}). Recall that for a smooth curve $C$ of genus $g$, there are natural identifications of the following sets: $$\text{ \{Conn.~\'et.~dbl.~cov.}\ \pi: \widetilde C\to C\}=H^1(C,\mathbb{Z}/2\mathbb{Z})-\{0\}$$ $$ =\{ \eta\in \operatorname{Pic}^0(C): \eta \ncong \mathscr O_C, \ \eta^{\otimes 2}\cong \mathscr O_C\}.$$ For a stable curve $C_0$ with a unique node, we will denote by $C_\ast$ a nearby smooth curve, and by $\gamma \in H^1(C_\ast,\mathbb{Z})$ the associated vanishing co-cycle. The irreducible boundary components of $\overline{\calR}_{g+1}$ are as follows. The preimage in $\overline{\calR}_{g+1}$ of the locus of irreducible stable curves $\delta_0\subset\overline{\mathcal{M}}_{g+1}$ has three irreducible components $\delta_0'$, $\delta_0''$, and $\delta_0^{\operatorname{ram}}$ defined as follows: $$ \begin{array}{lcl} \delta_0'&=&\{(C_0,a): C_0\in \delta_0^\circ, \ a\in H^1(C_\ast,\mathbb{Z}/2\mathbb{Z})-0,\ a \cdot \gamma =0, \text { but } a \notin\langle \gamma \rangle \}^-\\ \delta_0''&=& \{(C_0,a): C_0\in \delta_0^\circ,\ a\in H^1(C_\ast,\mathbb{Z}/2\mathbb{Z})-0,\ a \cdot \gamma =0, \text { and } a \in\langle \gamma \rangle \}^-\\ \delta_0^{\operatorname{ram}} &=& \{(C_0,a): C_0\in \delta_0^\circ, \ a\in H^1(C_\ast,\mathbb{Z}/2\mathbb{Z})-0, \ a \cdot \gamma \ne 0 \}^- \end{array} $$ In the above, and in what follows, we will denote by $\delta_i^\circ\subseteq \delta_i$ the locus of curves with a single node. The superscript minus after the sets above denotes taking the closure.
\begin{figure}[htb] \begin{equation*} \xymatrix{ \widetilde \Gamma \ \ \ \ & *{\bullet} \ar@{-}@(lu,ld)|-{\SelectTips{cm}{}\object@{>}}_{\tilde e^+} \ar@{-}@(ru,rd)|-{\SelectTips{cm}{}\object@{>}}^{\tilde e^-} & & \Gamma \ \ \ \ & *{\bullet} \ar@{-}@(lu,ld)|-{\SelectTips{cm}{}\object@{>}}_{e} & } \end{equation*} \caption{Dual graph of a generic admissible cover in $\delta_0'$.}\label{Fig:d0'} \end{figure} \begin{figure}[htb] \begin{equation*} \xymatrix{ \widetilde \Gamma & *{\bullet} \ar @{-}@/_1pc/[rr]|-{\SelectTips{cm}{}\object@{<}}_{\tilde e^-} \ar@{-} @/^1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}^{\tilde e^+} ^<{\tilde v^-}^>{\tilde v^+} &&*{\bullet} & & \Gamma \ \ \ \ & *{\bullet} \ar@{-}@(lu,ld)|-{\SelectTips{cm}{}\object@{>}}_{e}_<{v} & } \end{equation*} \caption{Dual graph of a generic admissible cover in $\delta_0''$.}\label{Fig:d0''} \end{figure} \begin{figure}[htb] \begin{equation*} \xymatrix{ \widetilde \Gamma \ \ \ \ & *{\bullet} \ar@{-}@(lu,ld)|-{\SelectTips{cm}{}\object@{>}}_{\tilde e} & & \Gamma \ \ \ \ & *{\bullet} \ar@{-}@(lu,ld)|-{\SelectTips{cm}{}\object@{>}}_{e} & } \end{equation*} \caption{Dual graph of a generic admissible cover in $\delta_0^{\operatorname{ram}}$.}\label{Fig:d0ram} \end{figure} The preimage in $\overline{\calR}_{g+1}$ of the boundary divisor $\delta_i\subset\overline{\mathcal{M}}_{g+1}$ has three irreducible components (only two for $i=(g+1)/2$, in which case the first two are the same) described as follows: $$ \begin{array}{lcl} \delta_i&=&\{(C,\eta): C=C_i\cup C_{g+1-i}\in \delta_i^\circ,\ \eta |_{C_{g+1-i}}\cong \mathscr O_{C_{g+1-i}} \}^- \\ \delta_{g+1-i}&=&\{(C,\eta): C=C_i\cup C_{g+1-i}\in \delta_i^\circ,\ \eta |_{C_i}\cong \mathscr O_{C_i} \}^- \\ \delta_{i,g+1-i} &=& \{(C,\eta): C=C_i\cup C_{g+1-i}\in \delta_i^\circ,\ \eta |_{C_i}\ncong \mathscr O_{C_i}, \ \eta |_{C_{g+1-i}}\ncong \mathscr O_{C_{g+1-i}} \}^- \end{array} $$ \begin{figure}[htb] \begin{equation*} \xymatrix@R=.3cm{ & *{\bullet} \ar @{-}@/_0pc/[rd]|-{\SelectTips{cm}{}\object@{>}}^{\tilde e^+} _<{\tilde v^+}&&&&&& & &\\ \widetilde \Gamma \ \ \ \ && *{\bullet}&&\Gamma &*{\bullet} \ar @{-}@/_0pc/[r]|-{\SelectTips{cm}{}\object@{>}}^{e} _<{ } &*{\bullet}\\ &*{\bullet}\ar@{-} @/^0pc/[ru]|-{\SelectTips{cm}{}\object@{>}}_{\tilde e^-} ^<{\tilde v^-} &&& &&& \\ } \end{equation*} \caption{Dual graph of a generic admissible cover in $\delta_i$ (or $\delta_{g+1-i}$).}\label{Fig:di} \end{figure} \begin{figure}[htb] \begin{equation*} \xymatrix{ \widetilde \Gamma \ \ \ \ & *{\bullet} \ar @{-}@/_1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}_{\tilde e^-} \ar@{-} @/^1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}^{\tilde e^+} &&*{\bullet} & & \Gamma \ \ \ \ & *{\bullet} \ar@{-} @/^0pc/[r]|-{\SelectTips{cm}{}\object@{>}}^{e} & *{\bullet} } \end{equation*} \caption{Dual graph of a generic admissible cover in $\delta_{i,g+1-i}$.}\label{Fig:digi} \end{figure} \section{Monodromy cones for Prym varieties}\label{sectMon} In this section, we compute the monodromy cones (in the terminology of \S\ref{sectHT}) for a boundary point $(\widetilde C,C)$ of $\overline{R}_{g+1}$ in terms of the combinatorics of the dual graph of $(\widetilde C, C)$ (discussed in \S\ref{sectPrym}). As a warm-up, we first review the classical case of Jacobians. The case of Pryms then naturally follows. While essentially equivalent computations can be found in \cite{fs} and \cite{abh}, our presentation for Prym varieties seems to be somewhat new. \subsection{Monodromy cones for stable curves} Let $C$ be a stable curve of genus $g\ge 2$.
Let $\mathscr C\to B$ be a miniversal deformation of $C$, with discriminant $\Delta$, and set $B^\circ=B_C^\circ=B-\Delta$. Denote by $0\in B$ the point corresponding to $C$. Let $\Gamma$ be the dual graph of $C$. We have $\dim B_C=3g-3$ and $\Delta$ is a collection of simple normal crossing hyperplanes, indexed by the nodes of $C$ (which in turn are indexed by the edges of $\Gamma$). Recall that the Jacobian of $C$ is a semiabelian variety obtained as an extension of the torus $\mathbb{T}_C= H^1(\Gamma,\mathbb{Z}) \otimes_{\mathbb{Z}} \mathbb{C}^*$, which has character lattice $$ \Lambda_C=H_1(\Gamma,\mathbb{Z}). $$ Since $B^\circ$ is locally (near $0$) a polycylinder and the associated monodromies are unipotent, we can apply the considerations of \S\ref{smoncone} and \S\ref{secMGC} and define the monodromy cone $$\sigma(C)\subseteq \overline C^{\mathbb{Q}}(\Lambda_C)$$ to be the cone spanned by the log of monodromies around the branches of $\Delta$ (see \eqref{eqnmc1}). More precisely, we recall that for each irreducible component $\Delta_e\subseteq \Delta$, corresponding to an edge $e\in \Gamma$, there is an associated quadratic form obtained from the log of monodromy around $\Delta_e$ (cf.~\eqref{eqnlm4}). The closure $\overline \sigma(C)$ is the cone generated by these quadratic forms (see \eqref{eqncmc4}): $$ \overline{\sigma}(C)=\mathbb{R}_{\ge 0}\langle \overline N_e\rangle_{e\in \Gamma}\subseteq \operatorname{Hom} (\operatorname{Sym}^2 H_1(\Gamma,\mathbb{Q}),\mathbb{Q})_\mathbb{R}. $$ We now state the following well-known description of the monodromy cone, and provide a sketch of the proof. \begin{pro}\label{projmoncone} Suppose that $C$ is a stable curve of genus $g\ge 2$. Let $e$ be an edge of the dual graph $\Gamma$ of $C$. Then $(e^\vee)^{2}$ is the quadratic form obtained as the log of monodromy around the corresponding component $\Delta_e$ of the discriminant. Consequently, the closure of the monodromy cone for $C$ is $$ \overline{\sigma}(C)=\mathbb{R}_{\ge 0}\langle (e^\vee)^2 \rangle_{e\in E(\Gamma)}. $$ \end{pro} \begin{proof} We start by describing the monodromy operators as in \eqref{eqnlm4}. Let us first consider the special case where $C$ has a single node. There are two possibilities: \begin{enumerate} \item[(0)] $e^\vee=0\in H^1(\Gamma,\mathbb{Q})$ (equivalently, $C\in \delta_i$, $i>0$). \item [(1)] $e^\vee\ne 0\in H^1(\Gamma,\mathbb{Q})$ (equivalently, $C\in \delta_0$). \end{enumerate} In case (0), $H_1(\Gamma,\mathbb{Q})=0$, so $JC$ is an abelian variety, the monodromy is trivial, and there is nothing to show. In case (1), $H_1(\Gamma,\mathbb{Q})= \mathbb{Q}\langle e\rangle$, where $e$ is the unique edge of $\Gamma$. As before, we view the log of monodromy as a map $$ \begin{CD} H_1(\Gamma,\mathbb Q)@> N_e>> H^1(\Gamma,\mathbb Q)=\left(H_1(\Gamma,\mathbb Q)\right)^\vee. \end{CD} $$ Since the monodromy operator $T_e$ is given by the well-known Picard--Lefschetz transformation, it follows that $ N_e(e)=e^\vee$. The associated quadratic form is then $(e^\vee)^2$. The general case follows by the arguments in \S \ref{secttoroidal}, \S \ref{sectHT} (esp.~\S\ref{secMGC} and \eqref{eqnMonComp}), which establish that $\overline N_e$ is given by the composition: $$ \begin{CD} H_1(\Gamma,\mathbb{Q})@>>> H_1(\Gamma_e,\mathbb{Q}) @> N_e >> H^1(\Gamma_e,\mathbb{Q})@>>> H^1(\Gamma,\mathbb{Q}) \end{CD} $$ where $\Gamma_e$ is the dual graph of the curve obtained from $C$ by smoothing all of the nodes except for the one corresponding to $e$.
\end{proof} \begin{rem} Let $\psi:S\to B$ be a morphism from the unit disc to a miniversal deformation space of the curve $C$, induced from a $1$-parameter deformation of $C$. For each edge $e\in \Gamma$, the dual graph of $C$, let $z_e$ be a local parameter defining the hyperplane $\Delta_e\subseteq B$ parameterizing curves with all nodes smoothed except the one corresponding to $e$. Then the log of monodromy for the family is given by $\sum_{e\in \Gamma}\operatorname{ord}\psi^*(z_e) (e^\vee)^2$. \end{rem} \subsection{Monodromy cones for admissible double covers} Let $\pi:\widetilde C\to C$ be an admissible cover of a stable curve $C$ of arithmetic genus $g+1\ge 2$. Let $B$ be the base of a miniversal deformation of the cover, with discriminant $\Delta$, and set $B^\circ=B-\Delta$. Denote by $0\in B$ the point corresponding to $\pi:\widetilde C\to C$. Let $\widetilde \Gamma$ (resp.~$\Gamma$) be the dual graph of $\widetilde C$ (resp.~$C$). We have $\dim B=3g$ and $\Delta$ is a collection of simple normal crossing hyperplanes, indexed by the nodes of $C$ (which are in turn indexed by the edges of $\Gamma$). Recall that the Prym variety of $\pi:\widetilde C\to C$ is a semiabelian variety obtained as an extension of the torus $\mathbb{T}_P= H^1(\widetilde \Gamma,\mathbb{Z})^- \otimes_{\mathbb{Z}} \mathbb{C}^*$, which has character lattice $$ \Lambda_{\widetilde C/C}=H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}. $$ We wish to describe the associated (log of) monodromy cone $$\sigma(\widetilde C/C)\subseteq \overline C^{\mathbb{Q}}(\Lambda_{\widetilde C/C}).$$ Recall that for each irreducible component $\Delta_e\subseteq \Delta$, corresponding to an edge $e\in \Gamma$, there is an associated quadratic form obtained as the log of monodromy around $\Delta_e$. The closure $\overline \sigma(\widetilde C/C)$ is the cone generated by these quadratic forms. From the perspective of Hodge theory, for each $1$-parameter family $f:S\to B$, with $f(0)=0\in B$, there is an associated $1$-parameter variation of Hodge structures determined by the family of Prym varieties. We will denote by $N^-$ the log of monodromy for this VHS. At the same time, there is a $1$-parameter VHS determined by the Jacobians of the covering curves, with log of monodromy $\widetilde N$. One can identify $\operatorname{Gr}_0(N^-)=(\operatorname{Gr}_0\widetilde N)^-=H^1(\widetilde \Gamma,\mathbb Q)^-$. Then as in \eqref{eqnlm3}, we can view the log of monodromy as a map \begin{equation}\label{eqnPrymLM} \begin{CD} H_1(\widetilde \Gamma,\mathbb Q)^-@>N^->> H^1(\widetilde \Gamma,\mathbb Q)^-, \end{CD} \end{equation} or dually as a positive definite quadratic form on $H_1(\widetilde \Gamma,\mathbb Q)^-$. Now to describe the closure of the monodromy cone, for each edge $e\in \Gamma$, one obtains a log of monodromy operator \begin{equation}\label{eqnPrecall2} \begin{CD} H_1(\widetilde \Gamma,\mathbb Q)^-@>\overline N_e^->> H^1(\widetilde \Gamma,\mathbb Q)^-, \end{CD} \end{equation} so that the associated bilinear form is symmetric and positive semi-definite. The closure of the monodromy cone is then given as: $$ \overline{\sigma}(\widetilde C/C)=\mathbb{R}_{\ge 0}\langle \overline N_e^-\rangle_{e\in \Gamma}\subseteq \operatorname{Hom} (\operatorname{Sym}^2 H_1(\widetilde \Gamma,\mathbb{Q})^-,\mathbb{Q})_\mathbb{R}. $$ \begin{pro}\label{propmoncone} Suppose that $\pi:\widetilde C\to C$ is an admissible cover of a stable curve of genus $g+1\ge 2$.
Let $e$ be an edge of the dual graph $\Gamma$ of $C$ and let $\tilde e$ be an edge of $\widetilde \Gamma$ lying above $e$. Then (up to a scalar factor) $(\tilde e^\vee-\iota\tilde e^\vee)^{2}$ is the quadratic form obtained as the log of monodromy around the corresponding component $\Delta_e$ of the discriminant. Consequently, the closure of the monodromy cone is $$ \overline \sigma(\widetilde C/C)=\mathbb{R}_{\ge 0}\langle (\tilde e^\vee-\iota \tilde e^\vee)^2 \rangle_{e\in E(\Gamma)}. $$ \end{pro} \begin{proof} Let $\pi:\widetilde C\to C$ be an admissible cover of a stable curve of genus $g+1\ge 2$. We will start by describing the closure of the monodromy cone via the log of monodromy as described in \eqref{eqnPrecall2}. First we will consider the special case where $C$ has a unique node. It is convenient to break this down further into three sub-cases. To do this, let us fix some notation. Let $e$ be the unique edge of $\Gamma$, the dual graph of $C$. Let $\tilde e$ be an edge of $\widetilde \Gamma$ lying over $e$. Then exactly one of the following holds: \begin{enumerate} \item[(0)] $\tilde e^\vee-\iota\tilde e^\vee=0\in H^1(\widetilde \Gamma,\mathbb{Q})$ (equivalently, $\widetilde C\to C\in \delta_{\overline{\mathcal R}_{g+1}}\setminus(\delta_{i,g+1-i}\cup \delta_0')$). \item [(1)] $\tilde e^\vee-\iota\tilde e^\vee = 2\tilde e^\vee\ne 0\in H^1(\widetilde \Gamma,\mathbb{Q})$ (equivalently, $\widetilde C\to C\in \delta_{i,g+1-i}$). \item [(2)] $\tilde e^\vee-\iota\tilde e^\vee \ne 2\tilde e^\vee, 0\in H^1(\widetilde \Gamma,\mathbb{Q})$ (equivalently, $\widetilde C\to C\in \delta_0'$). \end{enumerate} In case (0), $H^1(\widetilde \Gamma,\mathbb{Q})^-=0$, the Prym is an abelian variety, the monodromy is trivial, and there is nothing to show. In cases (1) and (2), $H_1(\widetilde \Gamma,\mathbb{Q})^{-}=\mathbb{Q}\langle \tilde e-\iota \tilde e\rangle$, where $e$ is the unique edge of $\Gamma$ and $\tilde e$ and $\iota \tilde e$ are the edges of $\widetilde \Gamma$ over $e$ interchanged by the involution. Now we will describe the log of monodromy as a map $$ \begin{CD} H_1(\widetilde \Gamma,\mathbb Q)^{-}@>N_e^->> H^1(\widetilde \Gamma,\mathbb Q)^-=\left(H_1(\widetilde \Gamma,\mathbb Q)^{-}\right)^\vee. \end{CD} $$ To do this, let us fix an appropriate base point $\ast\in B^\circ$ with $\widetilde C_\ast \to C_\ast$ an \'etale double cover. In suitable coordinates on $H^1(\widetilde C_\ast,\mathbb{C})^-$, the monodromy operator is given by $$ T_e=\left( \begin{smallmatrix} 1&2&0&\cdots&0\\ 0&1&0&\cdots & 0\\ \vdots && \ddots &&\vdots \\ 0&\cdots&0&1&0\\ 0&\cdots &0&0&1\\ \end{smallmatrix} \right) \ \ \ \text{(resp. } \ \ T_e=\left( \begin{smallmatrix} 1&1&0&\cdots&0\\ 0&1&0&\cdots & 0\\ \vdots && \ddots &&\vdots \\ 0&\cdots&0&1&0\\ 0&\cdots &0&0&1\\ \end{smallmatrix} \right) \text{)} $$ This follows from the classification of admissible covers in \S \ref{secBD} (see also \cite{fs}). Since the log of monodromy is given by $N_e=T_e-\operatorname{Id}$, it follows that (up to a scalar multiple) $N_e^-(\tilde e -\iota \tilde e)=\tilde e^\vee -\iota \tilde e^\vee $. Thus the associated quadratic form is $\left(\tilde e^\vee -\iota \tilde e^\vee\right)^2$.
The general case then follows by the arguments in \S \ref{secttoroidal}, \S \ref{sectHT} (esp.~\S\ref{secMGC} and \eqref{eqnMonComp}) by considering the composition $$ \begin{CD} H_1(\widetilde \Gamma,\mathbb{Q})^{-}@>>> H_1(\widetilde \Gamma_e,\mathbb{Q})^{-} @> N_e^- >> H^1(\widetilde \Gamma_e,\mathbb{Q})^-@>>> H^1(\widetilde \Gamma,\mathbb{Q})^-, \end{CD} $$ where $\widetilde \Gamma_e$ is the dual graph of the curve obtained from $\widetilde C$ by smoothing all of the nodes except those lying above the node of $C$ corresponding to $e$. \end{proof} \begin{rem}\label{remPrymMonCone} Let $\psi:S\to B$ be a morphism from the disk to a miniversal deformation space of the admissible cover $\widetilde C\to C$ induced from a $1$-parameter deformation of $\widetilde C\to C$. For each edge $e\in \Gamma$, the dual graph of $C$, let $z_e$ be a local parameter defining the hyperplane $\Delta_e\subseteq B$ parameterizing covers with all nodes smoothed except those corresponding to $e$. Then the log of monodromy for the family is given by $\sum_{e\in \Gamma} \operatorname{ord}\psi^*(z_e)(\tilde e^\vee-\iota\tilde e^\vee)^{2}$. \end{rem} \section{Extension criteria for the Torelli and Prym map}\label{sectExt} After the preliminaries of the previous sections, we can state our results regarding the extension of the period maps to various toroidal compactifications. In general, given a period map $\mathcal M\to \mathcal A_g$ and a normal crossing compactification $\mathcal M\subset \overline{\mathcal M}$, the question of extending to the boundary essentially boils down to two steps: the computation of monodromy cones (which we did in the previous section) and then a check that a monodromy cone is contained in one of the cones of the fan defining a toroidal compactification (a combinatorial statement). Of course, this general process is well known and occurs in various guises in the literature, but its systematic application in the case of admissible covers gives a good and uniform understanding of the extensions of Prym maps to toroidal compactifications. \subsection{Extension of the Torelli map} To motivate the arguments for the Prym map, we first review in this subsection the results of Alexeev and Brunyate \cite{ab} for the Torelli map. Throughout this subsection, we will use the following notation. Fix $g\ge 2$ and a stable curve $C$ in $\overline {\mathcal M}_g$. Let $\Gamma$ be the dual graph. Recall from Proposition \ref{projmoncone} that the closure of the monodromy cone for the stable curve $C$ is the cone $$ \overline \sigma(C):=\mathbb{R}_{\ge 0}\langle (e^\vee)^2\rangle_{e\in E(\Gamma)}\subseteq \left(\operatorname{Sym}^2H_1(\Gamma,\mathbb{Z})\right)^\vee_{\mathbb{R}}. $$ Fix a free $\mathbb{Z}$-module $\Lambda$ of rank $g$, and a $GL(\Lambda)$-admissible cone decomposition $\Sigma$ of $\overline C^{\mathbb{Q}}(\Lambda)$. Let $\bar A_{g}^{\Sigma}$ be the associated toroidal compactification. Fix a surjection $\Lambda \to \Lambda_C =H_1(\Gamma,\mathbb{Z})$, and denote by $\Sigma_C$ the $GL(\Lambda_C)$-admissible cone decomposition of $\overline C^{\mathbb{Q}}(\Lambda_C)$ (induced by the inclusion $\overline C^{\mathbb{Q}}(\Lambda_C)\hookrightarrow \overline C^{\mathbb{Q}}(\Lambda)$). Recall that $\Sigma_C$ does not depend on the surjection $\Lambda\to \Lambda_C$. We now compile results from the literature due to Mumford, Namikawa \cite{nam76II}, and Alexeev and Brunyate \cite{ab}. \begin{teo}\label{teotorelliext} Fix $g\ge 2$.
The Torelli map $$ J^\Sigma:\overline {M}_{g}\dashrightarrow \bar {A}_{g}^{\Sigma} $$ extends to a morphism in a neighborhood of a stable curve $C$ if and only if there exists a cone $\sigma\in \Sigma_C$ of the admissible decomposition containing the monodromy cone $\sigma(C)$. \begin{enumerate} \item \emph{(Mumford--Namikawa {\cite[Cor.~18.9]{nam76II}})} The Torelli map extends to a morphism to the second Voronoi compactification: $$ J^V:\overline M_g\to \overline{A}_g^V. $$ \item \emph{(Alexeev--Brunyate {\cite[Thm.~4.7, Thm.~6.7]{ab}})} The Torelli map extends to a morphism to the perfect cone compactification: $$ J^P:\overline M_g\to \overline{A}_g^P. $$ \item \emph{(Alexeev--Brunyate {\cite[Thm.~4.8]{ab}})} The Torelli map extends to a morphism to $\overline{A}_g^C$ in a neighborhood of $C\in \overline{M}_{g}$ if and only if there exists a quadratic form $Q$ on $H^1(\Gamma,\mathbb{R})$ such that \begin{enumerate} \item $Q(r)>0$ for all $r\in H^1(\Gamma,\mathbb{R})\setminus \{0\}$; i.e.~$Q$ is positive definite. \item $Q(\ell)\ge 1$ for all $\ell \in H^1(\Gamma,\mathbb{Z}) \setminus \{0\}$. \item $Q(e^\vee)=1$, for all $e\in E(\Gamma)$ such that $e^\vee\ne 0$. \item $Q(\ell)\in \mathbb{Z}$ for all $\ell \in H^1(\Gamma,\mathbb{Z})$. \end{enumerate} \end{enumerate} \end{teo} \begin{proof} The first statement of the theorem follows from the standard results on toroidal compactifications discussed in \S \ref{secextpermap}. We sketch the proofs of the remaining parts following Alexeev and Brunyate. (1) From Proposition \ref{projmoncone} and Lemma \ref{lemsecvor}, it follows that the Torelli map extends to a morphism to $\overline{A}_g^V$ in a neighborhood of $C$ if and only if for any collection $e_1,\ldots,e_m\in E(\Gamma)$ of edges such that the co-cycles $e_1^\vee,\ldots,e_m^\vee$ form a basis for $H^1(\Gamma,\mathbb{R})$, the co-cycles $e_1^\vee,\ldots,e_m^\vee$ in fact form a $\mathbb{Z}$-basis for $H^1(\Gamma,\mathbb{Z})$. On the other hand, an elementary result from graph theory (see e.g.~\cite[Lem.~3.3]{ab}, \cite[(J6), p.95]{abh}) asserts the following: \emph{For a graph $\Gamma$ and a collection of edges $e_1,\ldots, e_m\in E(\Gamma)$, the co-cycles $e_1^\vee,\ldots,e_m^\vee$ form a $\mathbb{Z}$-basis of $H^1(\Gamma,\mathbb{Z})$ if and only if the co-cycles $e_1^\vee,\ldots,e_m^\vee$ form an $\mathbb{R}$-basis of $H^1(\Gamma,\mathbb{R})$, if and only if the graph obtained from $\Gamma$ by removing the edges $\{e_1,\ldots,e_m\}$ is a spanning tree} (i.e.~$b_0=1$, $b_1=0$, and it contains all the vertices). This completes the proof of (1). (2) The monodromy cone is generated by rank $1$ quadrics and, as shown in the previous paragraph, is matroidal. Consequently, the monodromy cone is a perfect cone \cite[Thm.~A]{MV12} (see Remark \ref{remMV} and Remark \ref{remTorPC}). Thus the period map extends. (3) This is a restatement of Lemma \ref{lemcc}. \end{proof} \begin{rem}\label{remTorPC} For (2), it should be noted that from Lemma \ref{lempc} it follows that the Torelli map extends in a neighborhood of $C\in \overline M_g$ if and only if there exists a positive definite quadratic form $Q$ on $H^1(\Gamma,\mathbb{R})$ such that $Q(\ell)\ge 1$ for all $\ell \in H^1(\Gamma,\mathbb{Z}) -\{0\}$ and $Q(e^\vee)=1$, for all $e\in E(\Gamma)$ such that $e^\vee \ne 0$. In \cite[Thm.~6.7, p.194]{ab} Alexeev and Brunyate establish the existence of such quadratic forms, providing the proof of this case of \cite[Thm.~A]{MV12}.
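As a minimal illustration: if $\Gamma$ consists of a single vertex with a single loop $e$ (so that $C$ is irreducible with one node), then $H^1(\Gamma,\mathbb{R})=\mathbb{R}\langle e^\vee\rangle$, and the form $Q(xe^\vee)=x^2$ is positive definite, takes the value $1$ on $e^\vee$, and satisfies $Q(\ell)\ge 1$ for all $\ell\in H^1(\Gamma,\mathbb{Z})-\{0\}$. Since this $Q$ is moreover integer valued on $H^1(\Gamma,\mathbb{Z})$, the same example also illustrates the conditions of Theorem \ref{teotorelliext} (3).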
\end{rem} \begin{rem}\label{remTorCC} For $g\le 4$ the central cone compactification agrees with the perfect cone compactification, and consequently the Torelli map extends to a morphism to $\overline{A}_g^C$ for $g\le 4$. In fact, in \cite[Cor.~5.4]{ab}, \cite[Cor.~1.2]{AETAL} it is established that all dual graphs of genus $g\le 8$ admit a quadratic form as in Theorem \ref{teotorelliext} (3), and so the Torelli map extends to a morphism to $\overline{A}_g^C$ for all $g\le 8$. On the other hand, there are dual graphs of curves of all genera $g\ge 9$ that do not admit such quadratic forms \cite[Cor.~5.6]{ab}. Consequently, the Torelli map does not extend to a morphism to $\overline{A}_g^C$ for $g\ge 9$. \end{rem} \begin{rem} Recall from Remark \ref{remstack} that the Torelli map extends to a toroidal compactification, as a map of stacks, if and only if it extends as a map of the coarse moduli spaces. \end{rem} \subsection{Extension of the Prym map} Throughout this subsection, we will use the following notation. Fix $g+1\ge 2$ and $\pi:\widetilde C\to C$ an admissible double cover in $\overline{\mathcal R}_{g+1}$. Let $\widetilde \Gamma$ and $\Gamma$ be the dual graphs of $\widetilde C$ and $C$, respectively. To simplify the discussion, fix, once and for all, for each edge $e\in E(\Gamma)$, a choice of edge $\tilde e\in E(\widetilde \Gamma)$ lying over $e$. Having made this choice, for each edge $e\in E(\Gamma)$, fix a co-cycle $\ell_e\in H^1(\widetilde \Gamma,\mathbb{Z})^-$ by the rule: \begin{equation}\label{EQNdefle} \ell_e:= \left\{ \begin{array}{ll} \tilde e^\vee-\iota\tilde e^\vee& \text{if } \ \iota \tilde e^\vee \ne -\tilde e^\vee \in H^1(\widetilde \Gamma,\mathbb Z),\\ \tilde e^\vee& \text{if }\ \iota\tilde e^\vee=-\tilde e^\vee \in H^1(\widetilde \Gamma,\mathbb Z).\\ \end{array} \right. \end{equation} Recall from Proposition \ref{propmoncone} that the closure of the monodromy cone for the admissible cover is the cone $$ \overline \sigma(\widetilde C/C):=\mathbb{R}_{\ge 0}\langle \ell_e^2\rangle_{e\in E(\Gamma)}\subseteq \left(\operatorname{Sym}^2H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}\right)^\vee_{\mathbb{R}}. $$ Note that $\ell_e^2$ does not depend on the choice of $\tilde e$ lying over a fixed $e\in \Gamma$. \begin{rem}\label{REMle} The definition of $\ell_e$ is made to ensure that $\ell_e$ is primitive in $H^1(\widetilde \Gamma,\mathbb Z)^-$. It is important that one takes the condition $\iota \tilde e^\vee=-\tilde e^\vee$ as being in $H^1(\widetilde \Gamma,\mathbb Z)$. Note in particular that $\iota \tilde e^\vee$ never agrees with $-\tilde e^\vee$ in $C^1(\widetilde \Gamma,\mathbb Z)$, but always agrees with $-\tilde e^\vee$ viewed as a linear function on $H_1(\widetilde \Gamma,\mathbb Z)^{[-]}$. \end{rem} Fix a free $\mathbb{Z}$-module $\Lambda$ of rank $g$, and a $GL(\Lambda)$-admissible cone decomposition $\Sigma$ of $\overline C^{\mathbb{Q}}(\Lambda)$. Let $\bar A_{g}^{\Sigma}$ be the associated toroidal compactification. Fix a surjection $\Lambda \to \Lambda_{\widetilde C/C} =H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}$, and denote by $\Sigma_{\widetilde C/C}$ the $GL(\Lambda_{\widetilde C/C})$-admissible cone decomposition of $\overline C^{\mathbb{Q}}(\Lambda_{\widetilde C/C})$ (induced by the inclusion $\overline C^{\mathbb{Q}}(\Lambda_{\widetilde C/C})\hookrightarrow \overline C^{\mathbb{Q}}(\Lambda)$). Recall that $\Sigma_{\widetilde C/C}$ does not depend on the surjection $\Lambda\to \Lambda_{\widetilde C/C}$.
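To illustrate the definition \eqref{EQNdefle}: for an admissible cover with a single node lying in $\delta_{i,g+1-i}$ (Figure \ref{Fig:digi}), one has $\iota\tilde e^\vee=-\tilde e^\vee$ in $H^1(\widetilde \Gamma,\mathbb{Z})$ (see case (1) in the proof of Proposition \ref{propmoncone}), so that \eqref{EQNdefle} gives $\ell_e=\tilde e^\vee$, which is primitive, whereas $\tilde e^\vee-\iota\tilde e^\vee=2\tilde e^\vee$ is not. For a cover in $\delta_0'$ (Figure \ref{Fig:d0'}), on the other hand, $\iota \tilde e^\vee\ne -\tilde e^\vee$ in $H^1(\widetilde \Gamma,\mathbb{Z})$, and $\ell_e=\tilde e^\vee-\iota\tilde e^\vee$ is already primitive in $H^1(\widetilde \Gamma,\mathbb{Z})^-$.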
We now use this to prove an extension theorem for the Prym map. The case of the second Voronoi compactification gives another proof of \cite[Thm.~3.2 (1), (4)]{abh} (see Remark \ref{remABH} below), while the results for the perfect and central cone are new. \begin{teo}\label{teoprymext} Fix $g\ge 1$. The Prym map $$ P^\Sigma:\overline {R}_{g+1}\dashrightarrow \bar {A}_{g}^{\Sigma} $$ extends to a morphism in a neighborhood of an admissible cover $\pi:\widetilde C\to C$ if and only if there exists a cone $\sigma\in \Sigma_{\widetilde C/C}$ of the admissible decomposition containing the monodromy cone $\sigma(\widetilde C/C)$. \begin{enumerate} \item \emph{(Alexeev--Birkenhake--Hulek \cite[Thm.~3.2 (1), (4)]{abh})} The Prym map extends to a morphism to the second Voronoi compactification $\overline{A}_g^V$ in a neighborhood of $(\pi:\widetilde C\to C)\in \overline{R}_{g+1}$ if and only if: \vskip .2 cm \begin{enumerate} \item[(V)] For any collection $e_1,\ldots,e_m\in E(\Gamma)$ of edges such that the corresponding co-cycles $\ell_{e_1},\ldots,\ell_{e_m}$ form a basis for $H^1(\widetilde \Gamma,\mathbb{R})^-$, the co-cycles $\ell_{e_1},\ldots,\ell_{e_m}$ in fact form a $\mathbb{Z}$-basis for $H^1(\widetilde \Gamma,\mathbb{Z})^-$. \end{enumerate} \vskip .2 cm \item The Prym map extends to a morphism to the perfect cone compactification $\bar{ A}_{g}^{P}$ in a neighborhood of $(\pi:\widetilde C\to C)\in \overline{R}_{g+1}$ if and only if: \vskip .2 cm \begin{enumerate} \item[(P)] There exists a quadratic form $Q$ on $H^1(\widetilde \Gamma,\mathbb{R})^-$ such that: \begin{enumerate} \item $Q(r)>0$ for all $r\in H^1(\widetilde \Gamma,\mathbb{R})^--\{0\}$; i.e.~$Q$ is positive definite. \item $Q(\ell)\ge 1$ for all $\ell \in H^1(\widetilde \Gamma,\mathbb{Z})^- -\{0\}$. \item $Q(\ell_e)=1$, for all $e\in E(\Gamma)$ such that $\ell_e\ne 0$. \end{enumerate} \end{enumerate} \vskip .2 cm \item The Prym map extends to a morphism to the central cone compactification $\bar {A}_{g}^{C}$ in a neighborhood of $(\pi:\widetilde C\to C)\in \overline{R}_{g+1}$ if and only if: \begin{enumerate} \item[(C)] There exists a quadratic form $Q$ on $H^1(\widetilde \Gamma,\mathbb{R})^-$ such that in addition to satisfying \emph{(i)-(iii)} above, $Q$ also satisfies: \vskip .2 cm \begin{enumerate} \item[(iv)] $Q(\ell)\in \mathbb{Z}$ for all $\ell \in H^1(\widetilde \Gamma,\mathbb{Z})^-$. \end{enumerate} \end{enumerate} \end{enumerate} \end{teo} \begin{proof} The first statement of the theorem follows from the standard results on toroidal compactifications discussed in \S \ref{secextpermap}. (1) then follows from Proposition \ref{propmoncone} and Lemma \ref{lemsecvor}. (2) follows from Lemma \ref{lempc} and (3) from Lemma \ref{lemcc}. \end{proof} \begin{rem}\label{remABH} To see that Theorem \ref{teoprymext} (1) is equivalent to \cite[Thm.~3.2 (1), (4)]{abh} observe that it follows from Lemma \ref{lemdice} that (V) is equivalent to \begin{enumerate} \item[(V')] The linear functions $\{\ell_{e}\}_{e\in E(\Gamma)}$ define a dicing of the lattice $H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}$. \end{enumerate} Then note that as functions on $H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}$, the linear forms $\tilde e^\vee-\iota \tilde e^\vee$ and $2\tilde e^\vee$ agree (see Remark \ref{REMle}). Thus the condition (V') here is the same as the condition ($\ast$) of \cite[p.98]{abh}, and so (V) is equivalent to the condition for extension given in \cite[Thm.~3.2 (1), (4)]{abh}. 
\end{rem} \begin{rem} Recall from Remark \ref{remstack} that the following statement holds also for stacks. The Prym map $$ P^\Sigma:\overline {\mathcal R}_{g+1}\dashrightarrow \bar{\mathcal A}_g^\Sigma $$ extends to a morphism in a neighborhood of an admissible cover $\pi:\widetilde C\to C$ if and only if there exists a cone $\sigma\in \Sigma_{\widetilde C/C}$ of the admissible decomposition containing the monodromy cone $\sigma(\widetilde C/C)$. \end{rem} \section{Monodromy cones for Friedman--Smith covers}\label{secFSexamples} We now investigate a class of admissible covers discovered by Friedman and Smith \cite{fs}, who used these examples to show that the Prym map does not extend to the second Voronoi, perfect cone, or central cone compactifications. Alexeev, Birkenhake, and Hulek \cite{abh} and Vologodsky \cite{vologodsky} then showed that these examples characterize the indeterminacy locus of the Prym map to the second Voronoi compactification. In this section we give a detailed description of the monodromy cone for these examples with the aim of giving a geometric characterization of the indeterminacy locus of the Prym map to the perfect and central cone compactifications. In the subsequent sections we will actually need some more elaborate monodromy computations for further degenerations of these examples. The method for obtaining these is the same as the one discussed here, and thus all such further computations will be given in the appendix. In the main body of the paper we will reference those combinatorial results as needed. \subsection{Friedman--Smith covers} A \emph{Friedman--Smith cover with $2n\ge 2$ nodes} (see also Figure \ref{Fig:dgFSG}) is an admissible cover $\pi:\widetilde C\to C$ such that \begin{enumerate} \item $\widetilde C=\widetilde C_1\cup \widetilde C_2$ with $\widetilde C_1$ and $\widetilde C_2$ irreducible and smooth, and $$\widetilde C_1\cap \widetilde C_2=\{\tilde p_1^+,\tilde p_1^-,\ldots,\tilde p_{n}^+,\tilde p_{n}^-\}.$$ \item $\iota \widetilde C_i=\widetilde C_i$ for $i=1,2$, \item $\iota \tilde p_i^+=\tilde p_i^-$ for $i=1,\ldots,n$. \end{enumerate} \begin{rem} An admissible cover $\pi:\widetilde C\to C$ is called a {\em degeneration of a Friedman--Smith cover with $2n$ nodes} if it can be obtained from a Friedman--Smith cover by a further degeneration. More precisely, an admissible cover $\pi:\widetilde C\to C$ is such a degeneration if and only if $\widetilde C=\widetilde C_1\cup \widetilde C_2$ with $\widetilde C_1$ and $\widetilde C_2$ connected (possibly reducible), $\widetilde C_1\cap \widetilde C_2=\{\tilde p_1^+,\tilde p_1^-,\ldots,\tilde p_{n}^+,\tilde p_{n}^-\}$, $\iota \widetilde C_i=\widetilde C_i$ for $i=1,2$, and $\iota \tilde p_i^+=\tilde p_i^-$ for $i=1,\ldots,n$. \end{rem} For later use, we denote by $FS_{n}\subseteq \overline{R}_{g+1}$ the locus of Friedman--Smith covers with $2n$ nodes, and by $\overline{FS}_{n}$ its closure; i.e.~the locus of degenerations of Friedman--Smith covers. A \emph{(degeneration of a) Friedman--Smith graph} is a dual graph together with an involution, which can be obtained as the dual graph of a (degeneration of a) Friedman--Smith cover with induced involution. \begin{rem} The following is slightly stronger than a direct translation of the remark above into the language of graphs.
A graph $\widetilde \Gamma$ with admissible involution $\iota$ is a degeneration of a Friedman--Smith graph with at least $2n\ge 2$ edges if and only if $\widetilde \Gamma$ admits disjoint, connected subgraphs $\widetilde \Gamma_1,\widetilde \Gamma_2$ connected by exactly $2m\ge 2n$ edges $\tilde e_1^+,\tilde e_1^-,\ldots,\tilde e_m^+,\tilde e_m^-$, with $\iota (\widetilde \Gamma_i)=\widetilde \Gamma_i$ ($i = 1,2$), and $\iota \tilde e_i^+=\tilde e_i^-$ ($i=1,\ldots,m$), and furthermore $\widetilde \Gamma_1$ and $\widetilde \Gamma_2$ are not connected by an $\iota$-invariant path \cite[Lem.~1.2]{vologodsky}. \end{rem} One can see that $\overline{FS}_1=\bigcup \delta_{i,g+1-i}$ and for $n\ge 2$, $\overline{FS}_n$ has codimension $n$ (if non-empty), and is contained in the $n$-fold self-intersection of $\delta_0'$. In $\overline {\mathcal R}_{g+1}$ there are $ \lfloor \frac{g-n+2}{2} \rfloor $ irreducible components of $FS_{n}$, determined by the pairs of genera $(g(C_1),g(C_2))$ given by $$ (1,g-n+1), (2,g-n),\ldots, (\lfloor \frac{g-n+2}{2} \rfloor,\lfloor \frac{g-n+3}{2} \rfloor). $$ In particular, $FS_{n}=\emptyset$ in $\overline{\mathcal R}_{g+1}$ if $n\ge g+1$. Note also that the covers $\widetilde C_i\to C_i$ are \'etale, so that, in particular, the curves $\widetilde C_i$ have odd genus $2g(C_i)-1$. \begin{rem} The index $n$ for $\overline{FS}_{n}$ refers to the codimension of the locus in $\overline{R}_{g+1}$, or equivalently the number of edges in the dual graph of the base curve. We will use similar notational conventions for other loci occurring later in the paper. \end{rem} \subsection{The monodromy cone}\label{secFSMonCone} Let $\pi:\widetilde C\to C$ be a Friedman--Smith cover with $2n\ge 2$ nodes. The dual graph $\widetilde \Gamma$ of $\widetilde C$ has vertices $V(\widetilde \Gamma)=\{\tilde v_1,\tilde v_2\}$ and edges $E(\widetilde \Gamma)=\{\tilde e_1^+,\tilde e_1^-,\ldots,\tilde e_n^+,\tilde e_n^-\}$. The involution $\iota$ acts by $\iota(\tilde v_i)=\tilde v_i$ ($i=1,2$) and $\iota(\tilde e_i^+)=\tilde e_i^-$ ($i=1,\ldots,n$). For simplicity, we will fix a compatible orientation on $\widetilde \Gamma$, as in Figure \ref{Fig:dgFSG}; i.e.~for all $i$ set $t(\tilde e_i^{\pm})=\tilde v_2$ and $s(\tilde e_i^{\pm})=\tilde v_1$. \begin{figure}[htb] \begin{equation*} \xymatrix{ \widetilde \Gamma & *{\bullet} \ar @{-}@/_1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}_{\tilde e_n^+} \ar @{-} @/_2.5pc/[rr] |-{\SelectTips{cm}{}\object@{>}}_{\tilde e_n^-} \ar@{-} @/^1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}^{\tilde e_1^-} \ar@{-}@/^2.5pc/[rr]|-{\SelectTips{cm}{}\object@{>}}^{\tilde e_1^+}^<{\tilde v_1}^>{\tilde v_2} &\vdots &*{\bullet} &&\Gamma& *{\bullet} \ar @{-}@/_1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}_{e_n} \ar @{-} \ar@{-} @/^1pc/[rr]|-{\SelectTips{cm}{}\object@{>}}^{e_1} ^<{ v_1}^>{ v_2} &\vdots &*{\bullet} } \end{equation*} \caption{Dual graph of a Friedman--Smith example with $2n\ge 2$ nodes ($FS_n$).}\label{Fig:dgFSG} \end{figure} One has \begin{equation}\label{eqnh1fs} H_1(\widetilde \Gamma,\mathbb{Z})= \mathbb{Z}\langle \tilde e_1^+-\tilde e_1^-,\ldots,\tilde e_n^+-\tilde e_n^-, \tilde e_1^+-\tilde e_2^-,\ldots,\tilde e_{n-1}^+-\tilde e_n^-\rangle. \end{equation} Indeed, we have $b_1(\widetilde \Gamma)=\#E(\widetilde \Gamma)-\#V(\widetilde \Gamma)+b_0(\widetilde \Gamma)=2n-1$, since $\widetilde \Gamma$ is connected. The $2n-1$ elements listed above are in fact a generating set for $H_1(\widetilde \Gamma,\mathbb{Z})$, as can be easily detected from the associated matrix.
For instance, if one takes the elements in the interleaved order $\tilde e_1^+-\tilde e_1^-,\ \tilde e_1^+-\tilde e_2^-,\ \tilde e_2^+-\tilde e_2^-,\ \ldots,\ \tilde e_{n-1}^+-\tilde e_n^-,\ \tilde e_n^+-\tilde e_n^-$ and constructs a matrix with rows expressing these elements with respect to the basis $ \tilde e_1^-,\tilde e_1^+, \ldots, \tilde e_n^-,\tilde e_n^+$, one obtains a $(2n-1)\times (2n)$ matrix whose first $(2n-1)\times (2n-1)$ sub-matrix is upper triangular with all the diagonal entries equal to $\pm1$. Recall that $H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}=H_1(\widetilde \Gamma, \mathbb{Z})/H_1(\widetilde\Gamma,\mathbb{Z})^+$ and is isomorphic to the image of the map $$\frac{1}{2}(\operatorname{Id} -\iota):H_1(\widetilde \Gamma,\mathbb{Z})\to H_1(\widetilde \Gamma,\mathbb{R}).$$ From \eqref{eqnh1fs}, one has $$ H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}\cong \mathbb{Z}\langle \tilde e_1^+-\tilde e_1^-,\frac{1}{2}(\tilde e_1^+-\tilde e_1^-)+\frac{1}{2}(\tilde e_2^+-\tilde e_2^-),\ldots,\frac{1}{2}(\tilde e_{n-1}^+-\tilde e_{n-1}^-)+\frac{1}{2}(\tilde e_n^+-\tilde e_n^-)\rangle. $$ For brevity, set $$ z_1=\tilde e_1^+-\tilde e_1^- , \ z_2=\frac{1}{2}(\tilde e_1^+-\tilde e_1^-)+\frac{1}{2}(\tilde e_2^+-\tilde e_2^-), \ldots, \ z_n=\frac{1}{2}(\tilde e_{n-1}^+-\tilde e_{n-1}^-)+\frac{1}{2}(\tilde e_n^+-\tilde e_n^-) $$ so that $ H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}\cong \mathbb{Z}\langle z_1,\ldots,z_n\rangle. $ Then $$H^1(\widetilde \Gamma,\mathbb{Z})^-=\left(H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}\right)^\vee\cong \mathbb{Z}\langle z_1^\vee,\ldots,z_n^\vee \rangle .$$ Now observe that $$H^1(\widetilde \Gamma,\mathbb{Z})=\mathbb{Z}\langle (\tilde e_1^+)^\vee, (\tilde e_1^-)^\vee, \ldots, (\tilde e_n^+)^\vee, (\tilde e_n^-)^\vee \rangle/ \langle (\tilde e_1^+)^\vee+(\tilde e_1^-)^\vee+ \ldots+ (\tilde e_n^+)^\vee+ (\tilde e_n^-)^\vee \rangle. $$ It follows that for $i=1,\ldots,n$, $$ \begin{array}{ll} \iota(\tilde e_i^+)^\vee=(\tilde e_i^-)^\vee=-(\tilde e_i^+)^\vee& \text{if }\ n=1,\\ \iota (\tilde e_i^+)^\vee=(\tilde e_i^-)^\vee\ne -(\tilde e_i^+)^\vee& \text{if }\ n\ge 2.\\ \end{array} $$ Consequently, we may choose for $i=1,\ldots,n$, $$ \ell_{e_i}:= \left\{ \begin{array}{ll} (\tilde e_i^+)^\vee& \text{if }\ n=1,\\ (\tilde e_i^+)^\vee-(\tilde e_i^-)^\vee& \text{if } \ n\ge 2 .\\ \end{array} \right. $$ For $n=1$, $\ell_{e_1}$ is clearly a basis for $H^1(\widetilde \Gamma,\mathbb{Z})^-$, and so we note that condition (V) of Theorem \ref{teoprymext} holds in this case. Now consider the case $n\ge 2$. Evaluating the $\ell_{e_i}$ on the basis $z_1,\ldots, z_n$, we obtain that \begin{equation} \begin{array}{ccccccccccc} \ell_{e_1}& =& 2z_1^\vee& +&z_2^\vee&&&&&&\\ \ell_{e_2}& =& & &z_2^\vee&+&z_3^\vee&&&&\\ \vdots &\vdots &&&&\ddots&&&&\\ \vdots &\vdots &&&&&\ddots&&&\\ \ \ \ell_{e_{n-1}}& =& & &&&&&z_{n-1}^\vee&+&z_n^\vee\\ \ell_{e_{n}}& =& &&&&&&&&z_n^\vee\\ \end{array} \end{equation} Thus, with respect to these bases, the matrix expressing the generators $\ell_{e_1},\ldots,\ell_{e_n}$ of the monodromy cone in terms of $z_1^\vee,\ldots,z_n^\vee$ is: \begin{equation}\label{eqnFSMCo} \left(\begin{smallmatrix} 2&1&&&&&\\ &1&1&&&&\\ &&1&1&&&\\ &&&\ddots&\ddots&&\\ &&&&1&1&\\ &&&&&1&1\\ &&&&&&1\\ \end{smallmatrix}\right). \end{equation} Since the determinant of this matrix is $2$, it follows that for $n\ge 2$, $\{\ell_{e_1},\ldots,\ell_{e_n}\}$ is a basis for $H^1(\widetilde \Gamma,\mathbb{R})^-$, but is \emph{not} a $\mathbb{Z}$-basis for $H^1(\widetilde \Gamma,\mathbb{Z})^-$. Consequently, condition (V) does not hold for $n\ge 2$.
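For instance, in the smallest case where this fails, $n=2$, the matrix \eqref{eqnFSMCo} becomes $\left(\begin{smallmatrix} 2&1\\ 0&1\\ \end{smallmatrix}\right)$; that is, $\ell_{e_1}=2z_1^\vee+z_2^\vee$ and $\ell_{e_2}=z_2^\vee$. Then $$ z_1^\vee=\frac{1}{2}\left(\ell_{e_1}-\ell_{e_2}\right) $$ lies in $H^1(\widetilde \Gamma,\mathbb{Z})^-$ but not in $\mathbb{Z}\langle \ell_{e_1},\ell_{e_2}\rangle$, so that $\mathbb{Z}\langle \ell_{e_1},\ell_{e_2}\rangle$ has index $2$ in $H^1(\widetilde \Gamma,\mathbb{Z})^-$. Similarly, the quadratic form $\frac{1}{2}(\ell_{e_1}^2-\ell_{e_2}^2)$ takes integer values on $\operatorname{Sym}^2 H_1(\widetilde \Gamma,\mathbb{Z})^{[-]}$ and lies in the linear span, but not in the integral span, of $\ell_{e_1}^2$ and $\ell_{e_2}^2$; this illustrates the fact that the Friedman--Smith cone is simplicial but not basic for $n=2$ (see Theorem \ref{teoFSMCP} (1) below).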
\subsection{Properties of Friedman--Smith monodromy cones} With the above description of the Friedman--Smith monodromy cone, it is now a combinatorial problem to describe the relationship of the Friedman--Smith monodromy cone to the various cone decompositions. The details of the arguments are contained in the appendix. Here we compile the results for reference. \begin{teo}\label{teoFSMCP} A Friedman--Smith cone is: \begin{enumerate} \item Basic for $n=1$ and for $n\ge 3$, and simplicial but not basic for $n=2$. \item Matroidal if and only if $n=1$. Every proper face of a Friedman--Smith cone is matroidal. \item Contained in a cone in the perfect cone decomposition if and only if $n\ne 2,3$. In fact, a Friedman--Smith cone \emph{is} a cone in the perfect cone decomposition if and only if $n\ne 2,3,4$. \end{enumerate} \end{teo} \begin{proof} (1) See Lemma \ref{lemBasicCone}. (2) See Lemma \ref{lemFSMAT}. (3) See Proposition \ref{proFS=PC}. \end{proof} \begin{rem} In Appendix \ref{secappMDS}, Dutour Sikiri\'c shows that the Friedman--Smith monodromy cone is contained in a cone in the central cone decomposition if and only if $n\ne 2,3$. \end{rem} \begin{rem}\label{remFSSVD} It is an unpublished result of Alexeev that \emph{for $n\ge 2$, each cone in the barycentric subdivision of a Friedman--Smith cone is contained in a cone in the second Voronoi decomposition} (see \cite[p.3159]{vologodskyres}). It is easy to see that the decomposition of the Friedman--Smith cone into cones contained in second Voronoi cones must be a refinement of the barycentric subdivision. One can then argue as in Vologodsky \cite{vologodskyres} to show that the barycentric subdivision suffices. We will use this in the case $n=2,3$ in our investigation of the resolution of the period map to the perfect cone compactification (see Remark \ref{relationsconedec}). In the appendix (\S \ref{secFScd}) we give an explicit description of second Voronoi cones generated by rank $1$ quadrics that decompose the Friedman--Smith cone for $n=2,3$, providing another proof in these special cases. We will use these explicit cones in studying other monodromy cones of degenerations of Friedman--Smith covers, and also in describing Delaunay decompositions. \end{rem} \section{The indeterminacy locus of the Prym map}\label{sectindeterm} Here we further investigate the indeterminacy locus of the Prym map by reformulating the combinatorial characterization given in Theorem \ref{teoprymext} in terms of geometry. For the second Voronoi compactification, Vologodsky \cite[Thm.~0.1]{vologodsky} has shown that the combinatorial condition in Theorem \ref{teoprymext} (1) is equivalent to the cover being a degeneration of a Friedman--Smith cover with at least $4$ nodes. In other words, the indeterminacy locus for the Prym map to the second Voronoi compactification is equal to $\bigcup_{n\ge 2} \overline {FS}_n$. Consequently, here we focus on the period map to the perfect cone compactification, for which it turns out that the indeterminacy locus is smaller. While at the moment we are unable to obtain a statement in full generality analogous to \cite[Thm.~0.1]{vologodsky}, we describe completely the situation up to codimension $6$.
\begin{teo}\label{teoindPM} The indeterminacy locus of the Prym map $P_g^P:\overline R_{g+1}\dashrightarrow \bar A_g^P$ satisfies \begin{equation}\label{eqnIndLoc} \overline {FS}_2\cup \overline {FS}_3\subseteq Ind(P_g^P)\subseteq \overline {FS}_2\cup \overline {FS}_3\cup \partial \overline{FS}_4\cup\ldots \cup \partial \overline {FS}_{g} \end{equation} where $\partial \overline{FS}_{n}=\overline{FS}_{n}-FS_{n}$. Moreover, $$ \operatorname{codim}_{\overline R_{g+1}} Ind(P_g^P)\setminus \left(\overline {FS}_2\cup \overline {FS}_3\right)\ge 6. $$ \end{teo} \begin{rem} Since $\bar A^P_1=A^*_1$, it is immediate (from the Borel extension theorem) that for $g=1$ the Prym map is a morphism $\overline{R}_2\to\bar {A}_1^P$; in this case both $FS_2$ and $FS_3$ are empty. For $g=2$ the locus $FS_3$ is empty, so we have $Ind(P_2^P)=\overline {FS}_2$. Similarly, $Ind(P_3^P)=\overline {FS}_2\cup \overline {FS}_3$. For $P^P_4:\overline R_5\dashrightarrow \bar A_4^P$ we have $\partial \overline {FS}_4\setminus \left(\overline {FS}_2\cup \overline {FS}_3\right)\ne \emptyset$. In this case, the theorem above says roughly that the ``generic points'' of this locus do not lie in the indeterminacy locus. We note that in the course of the proof we will obtain a slightly stronger result than the statement of the theorem, by showing that the Prym map extends to $\overline{FS}_4$ and $\overline{FS}_5$ up to codimension two. \end{rem} \begin{rem} We have the following relationships among the indeterminacy loci. For $g\le 3$, $Ind(P_g^P)=Ind(P_g^V)=Ind(P_g^C)$. For $g\ge 4$, $Ind(P_g^P)\subsetneq Ind(P_g^V)$. In Appendix \ref{secappMDS}, Mathieu Dutour Sikiri\'c shows that for $g\ge 4$, $Ind(P_g^C)\nsupseteq Ind(P_g^V)$, and for $g\ge 9$, $Ind(P_g^C)\nsubseteq Ind(P_g^V)$. \end{rem} In the process of proving the results in this paper, we have considered a number of degenerations of Friedman--Smith covers. In these examples, the monodromy cone fails to be contained in a cone in the PCD if and only if the example lies in $\overline{FS}_2\cup \overline{FS}_3$. We thus pose the following question. \begin{ques} \label{quesindeterminacy} Is it true that the indeterminacy locus $Ind(P_g^P)$ is equal to $\overline{FS}_2\cup \overline{FS}_3$? \end{ques} \begin{proof}[Proof of Theorem \ref{teoindPM}] We start by showing: $$ \overline {FS}_2\cup \overline {FS}_3\subseteq Ind(P_g^P)\subseteq \overline {FS}_2\cup \overline {FS}_3\cup \partial \overline{FS}_4\cup\ldots \cup \partial \overline {FS}_{g}. $$ Theorem \ref{teoFSMCP} (3) implies that the covers in the loci $FS_2, FS_3$ have monodromy cones not contained in cones in the PCD. This gives the left inclusion. For the right inclusion, the results \cite[Thm.~3.2 (1), (4)]{abh} and \cite[Thm.~0.1]{vologodsky} imply that on $\overline R_{g+1}\setminus \left(\overline {FS}_2\cup\ldots \cup \overline {FS}_{g}\right)$ the monodromy cones are matroidal, which by \cite[Thm.~A]{MV12} are also perfect (see Remark \ref{remMV}). Moreover, in Theorem \ref{teoFSMCP} (3) we showed that for $4\le n\le g$ a cover in $FS_n$ has a monodromy cone contained in a cone in the PCD, and thus the period map extends there as well. We now prove $$ \operatorname{codim}_{\overline R_{g+1}} Ind(P_g^P)\setminus \left(\overline {FS}_2\cup \overline {FS}_3\right)\ge 6.$$ Since $\operatorname{codim} \overline{FS}_n=n$, it is enough to restrict attention to $\partial \overline{FS}_4$.
In fact we will show the stronger statement that $$ \operatorname{codim}_{\overline {FS}_{n}} (Ind(P_g^P) \cap \overline{FS}_n) \setminus \left(\overline {FS}_2\cup \overline {FS}_3\right)\ge 2 $$ for $n=4,5$. To achieve this, we simply need to enumerate the codimension $1$ degenerations in $\overline{FS}_n$ for $n=4,5$ and check that for each of them the monodromy cone is not contained in a cone in the PCD if and only if the degeneration also lies in $\overline {FS}_2$ or $\overline{FS}_3$. To be precise, we will consider all degenerations of an $\overline{FS}_n$ cover so that the dual graph of the base curve has exactly $n+1$ edges. The complement of this locus has codimension $2$ in $\overline {FS}_n$. We observe that the dual graph of the base of a degeneration of an $\overline{FS}_n$ cover has exactly $n+1$ edges if and only if the dual graph of the covering curve is obtained by replacing a vertex in the dual graph of an $FS_n$ cover (see Figure \ref{Fig:dgFSG}) with one of the dual graphs in Figures \ref{Fig:d0'}-\ref{Fig:digi}. Thus we have five cases to consider. First consider the case where we replace the vertex with a graph as in Figure \ref{Fig:d0''} (corresponding to $\delta_0''$). This gives rise to an $FS_{n}+W_1$ example (see Figure \ref{Fig:FSn+Wn} with $m=1$). The monodromy computation is made in \S \ref{secFS+W}, and establishes Lemma \ref{lemmcFSW} stating that for $n\le 7$, the monodromy cone is contained in a cone in the PCD if and only if the cover is not a degeneration of an $FS_2$ or $FS_3$ cover. Next consider the case where we replace the vertex with a graph as in Figure \ref{Fig:digi} (corresponding to $\delta_{i,g+1-i}$). This gives rise to $FS_{n_1+n_2}+FS_1$ examples with $n_1+n_2=n$ (see Figure \ref{Fig:FSn+FSm} with $m=1$). The monodromy computation is made in \S \ref{secFS+FS}, and establishes Lemma \ref{lemFS+FS} stating that the monodromy cone is contained in a cone in the PCD if and only if the cover is not a degeneration of an $FS_2$ or $FS_3$ cover. Next consider the case where we replace the vertex with a graph as in Figure \ref{Fig:di} (corresponding to $\delta_{i}$). This gives rise to $FS_{n_1+n_2}+\delta_i$ examples with $n_1+n_2=n$ (see Figure \ref{Fig:dgFSn+d}). The monodromy computation is made in \S \ref{secFS+d}, culminating in Lemma \ref{lemFS+d}, which shows that for $n\le 5$, the monodromy cone is contained in a cone in the PCD if and only if the cover is not a degeneration of an $FS_2$ or $FS_3$ cover. The cases where we replace the vertex with a graph as in Figure \ref{Fig:d0ram} ($\delta_0^{\operatorname{ram}}$) or Figure \ref{Fig:d0'} ($\delta_0'$) are similar. These give rise to $FS_n+B_1$ (resp.~$FS_n+EE_1$) examples (see \S \ref{secFS+B}, resp.~\S \ref{secFS+EE}). Lemmas \ref{AlemFSB} and \ref{AlemFSEE} show that the monodromy cone is contained in a cone in the PCD if and only if the cover is not a degeneration of an $FS_2$ or $FS_3$ cover. \end{proof} \section{Resolving the Prym map}\label{sectRes} As discussed previously, in contrast to the case of the Torelli map for curves, the Prym map is not regular along (certain components of) the Friedman--Smith locus. For geometric applications (e.g.~the study of moduli of cubic threefolds) it is important to have a regular map. Using the fact that the normal crossing compactifications and the toroidal compactifications have a toric structure at the boundary, it is always possible to refine the normal crossing compactification (by further toric blow-ups) to get a regular map.
In this section we resolve the Prym map up to codimension $4$. In the appendices we have worked out some further special cases in all genera for the perfect cone compactification; some special cases in all genera have also been considered by Alexeev and Vologodsky for the second Voronoi compactification (see \S \ref{secDR} and \cite{vologodskyres}). \begin{teo} \label{teoresPM} There is a closed locus $Z\subseteq \bigcup_{n=2}^g \partial \overline {FS}_n\subseteq \overline{R}_{g+1}$ with $\operatorname{codim}_{\overline{R}_{g+1}} Z\ge 4$, such that setting $U=\overline R_{g+1}\setminus Z$, the restriction to $U$ of the Prym period map $P_g^P:\overline R_{g+1}\dashrightarrow \bar A_g^P$ can be resolved in the following way: \begin{enumerate} \item The period map is regular on $U\setminus (\overline {FS}_2\cup \overline{FS}_3)$. \item If $x\in U\cap \overline {FS}_2$, then \'etale locally there are either 1, 2, or 3 components of $\overline{FS}_2$ meeting at $x$. If there are 1 or 2 components meeting, the period map is resolved by blowing up the union of the components. If there are 3 components meeting at $x$, the period map is resolved by the toric morphism determined by Figure \ref{Fig:3comp}. \item We have $U\cap \overline{FS}_3=FS_3$, and at a point $x\in FS_3$ the period map is resolved by blowing up the locus $FS_3$. \end{enumerate} In addition, for $g=2$ the period map $\overline R_3\dashrightarrow \bar A_2^P$ ($=\bar A_2^V,\bar A_2^C$) is resolved simply by blowing up $\overline {FS}_{2}$, which is irreducible (globally and \'etale locally). \end{teo} \begin{rem} One may take $Z$ in the theorem above so that for $n\ge 4$, $U\cap\overline {FS}_n=FS_n$. Then if in addition one blows up along $FS_n$ for $n\ge 4$, this resolves the period map on $U$ to the second Voronoi compactification. \end{rem} \begin{rem} In the appendix, we provide explicit resolutions of the period map to $\bar A_g^P$ for many more types of degenerations of Friedman--Smith covers. While these still do not cover enough special cases to resolve the period map $\overline{R}_{g+1}\dashrightarrow \bar A_g^P$ for any $g\ge 3$, in principle, these computations could be carried out further to completely resolve the period map for low $g$. \end{rem} \begin{rem}\label{remresR3R4} In part (2) of the theorem, in the case where $3$ components of $\overline {FS}_2$ meet, we point out that the birational modification is not the blow-up of the union of the $3$ components, nor is it obtained by blowing up the intersection of the $3$ components, followed by blowing up the strict transforms of the components (and neither of these birational modifications resolves the period map). \end{rem} \begin{proof} First let us define the locus $Z$ in the statement of the theorem. Let $Z_2\subseteq \partial \overline{FS}_2$ be the locus of degenerations whose dual graph is not obtained by replacing a vertex in the dual graph of an $FS_2$ cover (see Figure \ref{Fig:dgFSG}) with one of the dual graphs in Figures \ref{Fig:d0'}-\ref{Fig:digi}. Let $$Z=Z_2\cup \bigcup_{n=3}^g \partial \overline{FS}_n.$$ Thus $\operatorname{codim} Z\ge 4$, and on $U$ the period map only fails to be regular along $U\cap \overline{FS}_2$ and $U\cap \overline{FS}_3=FS_3$. From Remark \ref{remFSSVD}, at points of $FS_2$ and $FS_3$, the period map is resolved in a neighborhood by a blow-up of the $FS$ locus. The proof now proceeds in a similar fashion to the proof of Theorem \ref{teoindPM}.
We enumerate the dual graphs obtained from covers in $U\cap \partial \overline{FS}_2$, and for each of them decompose the corresponding monodromy cone into cones in the PCD. Recall that this provides a resolution of the period map in the following way. Given an admissible double cover $\widetilde C\to C$, the miniversal space has snc boundary with components in bijection with edges of the dual graph $\Gamma$ of $C$. Consequently, for each edge $e$ of $\Gamma$, there is a corresponding quadratic form obtained via the log of monodromy. This induces a map from the standard simplex with vertices indexed by the edges of $\Gamma$, to the closure of the monodromy cone. Decomposing the monodromy cone into cones contained in the admissible cone decomposition, and then pulling back to the standard simplex, gives a decomposition of the standard simplex, which determines the minimal resolution of the period map in a neighborhood of the admissible cover $\widetilde C\to C$. We now proceed to implement this, using the same enumeration as in the proof of Theorem \ref{teoindPM}. For the case of an $FS_2+W_1$ cover $\widetilde C\to C$, the dual graph $\Gamma$ has $3$ edges $e_1,e_2,f$. The cone decomposition is given in \S \ref{secFS+W} and has the form $$ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{e_2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{f} _>{e_1} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ }\ \ \ \ \ \ \ \ \ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{x_1^2} _>{(2x_1-x_2)^2} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ } $$ where $e_1\mapsto (2x_1-x_2)^2$, $e_2\mapsto x_2^2$, and $f\mapsto x_1^2$. \'Etale locally, the divisors corresponding to $f$, $e_1$, and $e_2$ are all of type $\delta_0'$. The intersection of the two copies of $\delta_0'$ corresponding to $e_1$ and $e_2$ is exactly the locus of Friedman--Smith covers, which are of type $\overline{FS}_2$. The decomposition above indicates that this locus is blown up in the minimal resolution. For the case of an $FS_{2+0}+FS_1$ cover, the cone decomposition is given in \S \ref{secFS+FS} (see Figure \ref{figFS20FS1decomp}) and (in similar notation to the example above) has the form $$ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{e_2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{f} _>{e_1} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ }\ \ \ \ \ \ \ \ \ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{x_3^2} _>{(2x_1-x_2)^2} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ } $$ where $e_1\mapsto (2x_1-x_2)^2$, $e_2\mapsto x_2^2$, and $f\mapsto x_3^2$. \'Etale locally, the divisors corresponding to $e_1$ and $e_2$ are of type $\delta_0'$. The divisor corresponding to $f$ is of type $\delta_{1,1}$ (the $\overline {FS}_1$ locus). The intersection of the two copies of $\delta_0'$ corresponding to $e_1$ and $e_2$ is exactly the locus of Friedman--Smith covers of type $\overline{FS}_2$. The decomposition above indicates that this locus is blown up in the minimal resolution.
For the case of an $FS_{1+1}+FS_1$ cover, it is shown in \S \ref{secFS+FS} (see also \S \ref{secDR} and \cite{vologodskyres}) that the cone decomposition is (in similar notation) \begin{equation}\label{Fig:3comp} \xymatrix@C=.4cm@R=.5cm{ &&*{\bullet} \ar@{-}[rrdd]|-{\SelectTips{cm}{}\object@{}}^>{e_2} ^<{f} \ar@{-}[lldd]|-{\SelectTips{cm}{}\object@{}}_>{e_1}&&\\ &*{} \ar@{-}[rd] \ar@{-}[rr]&&*{} \ar@{-}[ld] &\\ *{\bullet} \ar@{-}[rrrr] && *{}&&*{\bullet} } \ \ \ \ \ \ \ \ \ \ \ \ \xymatrix@C=.4cm@R=.5cm{ &&*{\bullet} \ar@{-}[rrdd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} ^<{(-x_2+2x_3)^2} \ar@{-}[lldd]|-{\SelectTips{cm}{}\object@{}}_>{(2x_1-x_2)^2}&&\\ &*{} \ar@{-}[rd] \ar@{-}[rr]&&*{} \ar@{-}[ld] &\\ *{\bullet} \ar@{-}[rrrr] && *{}&&*{\bullet} } \end{equation} \'Etale locally, the divisors corresponding to $f$, $e_1$ and $e_2$ are of type $\delta_0'$. In this case, each of the $3$ pairwise intersections of these divisors is an $\overline {FS}_2$ locus. The associated birational modification that resolves the period map is an isomorphism away from this locus, and introduces $3$ exceptional divisors. The corresponding birational modification of $\mathbb A^3_{\mathbb C}$ has fiber over the origin equal to $3$ copies of $\mathbb P^1_{\mathbb C}$ attached at a point. For the $FS_{2+0}+\delta_1$ example, from the analysis in \S \ref{secFS+d} we see that the cone decomposition is $$ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{e_2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{f} _>{e_1} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ }\ \ \ \ \ \ \ \ \ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{x_1^2} _>{(2x_1-x_2)^2} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ } $$ The divisors corresponding to $e_1$, $e_2$ and $f$ are of type $\delta_0'$. The Friedman--Smith locus ($\overline {FS}_2$) is given as the intersection of the two divisors corresponding to $e_1$ and $e_2$. The decomposition tells us that in the neighborhood of such a point, the minimal resolution is the blow-up of the $\overline {FS}_2$ locus. For the $FS_{1+1}+\delta_1$ example, in \S \ref{secFS+d} we see that the cone decomposition is given as \begin{equation*} \xymatrix@C=.4cm@R=.5cm{ &&*{\bullet} \ar@{-}[rrdd]|-{\SelectTips{cm}{}\object@{}}^>{E_1} ^<{f} \ar@{-}[lldd]|-{\SelectTips{cm}{}\object@{}}_>{e_1}&&\\ &*{} &&*{}\ar@{-}[ld] &\\ *{\bullet} \ar@{-}[rrrr] && *{}&&*{\bullet} } \ \ \ \ \ \ \ \ \ \ \ \ \xymatrix@C=.5cm@R=.5cm{ &&&&\\ &&\\ *{\bullet} \ar@{-}[rrrr]^>{x_2^2} ^<{(2x_1-x_2)^2}&& *{|}&&*{\bullet} } \end{equation*} where $e_1\mapsto (2x_1-x_2)^2$, $E_1\mapsto x_2^2$ and $f\mapsto (2x_1-x_2)^2$. The divisors corresponding to $e_1$, $E_1$ and $f$ are of type $\delta_0'$. In this case, the two components of the Friedman--Smith locus correspond to the intersection of the divisor corresponding to $e_1$ with the divisor corresponding to $E_1$, and also to the divisor corresponding to $f$ intersecting the divisor corresponding to $E_1$. These two loci are both of type $\overline {FS}_2$. The period map is resolved by blowing up the union of these loci.
For the $FS_{0+2}+\delta_1$ example, from the analysis in \S \ref{secFS+d} we see that the cone decomposition is $$ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{E_2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{f} _>{E_1} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ }\ \ \ \ \ \ \ \ \ \xymatrix{ & \ar@{}[rd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} \ar@{}[ld]|-{\SelectTips{cm}{}\object@{}}_<{} _>{(2x_1-x_2)^2} \ar@{}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{|} &*{\bullet}\\ } $$ where $E_1\mapsto (2x_1-x_2)^2$, $E_2\mapsto x_2^2$, and $f\mapsto 0$. The divisors corresponding to $E_1$, $E_2$ are of type $\delta_0'$, while the divisor corresponding to $f$ is of type $\delta_1$. The Friedman--Smith locus ($\overline {FS}_2$) is given as the intersection of the two divisors corresponding to $E_1$ and $E_2$. The decomposition tells us that in the neighborhood of such a point, the minimal resolution is the blow-up of the $\overline {FS}_2$ locus. For the $FS_{2}+B_1$ example, from the analysis in \S \ref{secFS+B} we see that the cone decomposition is $$ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{e_1} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{f} _>{e_2} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ }\ \ \ \ \ \ \ \ \ \xymatrix{ & \ar@{}[rd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} \ar@{}[ld]|-{\SelectTips{cm}{}\object@{}}_<{} _>{(2x_1-x_2)^2} \ar@{}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{|} &*{\bullet}\\ } $$ where $e_1\mapsto (2x_1-x_2)^2$, $e_2\mapsto x_2^2$, and $f\mapsto 0$. The divisors corresponding to $e_1$, $e_2$ are of type $\delta_0'$, while the divisor corresponding to $f$ is of type $\delta_0^{\operatorname{ram}}$. The Friedman--Smith locus ($\overline {FS}_2$) is given as the intersection of the two divisors corresponding to $e_1$ and $e_2$. The decomposition tells us that in the neighborhood of such a point, the minimal resolution is the blow-up of the $\overline {FS}_2$ locus. For the $FS_{2}+EE_1$ example, from the analysis in \S \ref{secFS+EE} we see that the cone decomposition is $$ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{e_2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{f} _>{e_1} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ }\ \ \ \ \ \ \ \ \ \xymatrix{ &*{\bullet} \ar@{-}[rd]|-{\SelectTips{cm}{}\object@{}}^>{x_2^2} \ar@{-}[ld]|-{\SelectTips{cm}{}\object@{}}_<{x_3^2} _>{(2x_1-x_2)^2} \ar@{-}[d]|-{\SelectTips{cm}{}\object@{}} & \\ *{\bullet} \ar@{-}[rr]|-{\SelectTips{cm}{}\object@{}}&*{} &*{\bullet}\\ } $$ The divisors corresponding to $e_1$, $e_2$ and $f$ are of type $\delta_0'$. The Friedman--Smith locus ($\overline {FS}_2$) is given as the intersection of the two divisors corresponding to $e_1$ and $e_2$. The decomposition tells us that in the neighborhood of such a point, the minimal resolution is the blow-up of the $\overline {FS}_2$ locus. The proof that the period map $\overline R_3\dashrightarrow \bar A_2$ is resolved by blowing up $\overline {FS}_{2}$ is similar.
There are more cases to consider, but in each case, the associated combinatorial data is a simplex that is star-subdivided along the edge corresponding to the two Friedman--Smith edges. \end{proof} \section{Fibers of the resolution} \label{secsubFibers} We now consider the question of describing the fibers of the resolution. We expect that with \cite{donagi} and the techniques we describe here, it should be possible to give complete descriptions of the fibers over certain strata in low genus. We will pursue this elsewhere; here we limit ourselves to the following. Given a point $x\in \bar A_g$ in a given stratum, describe the loci in the resolution of $\overline R_{g+1}$ that map to the same stratum. This can be rephrased in terms of $1$-parameter families, which is what we actually consider. Moreover, since $\bar A_g^V$, $\bar A_g^P$ and $\bar A_g^C$ coincide outside $\overline{\beta}_{4}$ (torus rank $4$ or more), for $\beta_0\cup\beta_1\cup \beta_2 \cup \beta_3$ we can work with any one of them, e.g.~we can adopt the language of the second Voronoi compactification as we shall do below. \subsection{Degeneration data for Pryms} Limits of $1$-parameter families of ppavs are determined by degeneration data, which in turn determine the limit point in the toroidal compactification (see \S\ref{secModStAbVar}). Here we recall from \cite{abh} the degeneration data for Pryms. We begin with an admissible cover $\widetilde C\to C$, and a $1$-parameter deformation associated to a map $\psi:S\to \operatorname{Def}_{\widetilde C/C}$, from the unit disk to the base of a miniversal deformation. Let $\Psi:S\to \bar A_{g}^\Sigma $ be the composition of $\psi$ with the rational map $\operatorname{Def}_{\widetilde C/C}\dashrightarrow \bar A_{g}^\Sigma$. First let us consider the degeneration data for the Jacobian of the covering curve $\widetilde C$. In this case, we saw in \eqref{eqnExtJac} that the generalized Jacobian $J\widetilde C$ corresponds to a morphism $\tilde c:H_1(\widetilde \Gamma,\mathbb Z)\to \widehat {J\widetilde N}$, where $\widetilde N$ is the normalization of $\widetilde C$. We recall the morphism $\tilde c$. A node on ${\widetilde C}$ corresponds to an edge $e$ in $C_1(\widetilde {\Gamma},\mathbb Z)=\bigoplus_j \mathbb{Z} {\tilde e_j}$ going from a vertex $\tilde v^+$ to a vertex $\tilde v^{-}$. Let $\tilde Q_+(e)$ be the point corresponding to $\tilde v^+$ in ${\widetilde N}$ and similarly for $\tilde v^-$ (if $\tilde v^+=\tilde v^-$ then it does not matter which of the two points above the double point is $\tilde Q_+(e)$ and which one is $\tilde Q_-(e)$). The map $\tilde c$ is defined by restricting the map $$ \tilde c: C_1({\tilde{\Gamma}},\mathbb Z) \to J{\widetilde N}_0, \ \ \ \ e \mapsto {\mathcal O}(\tilde Q_+(e)) \otimes {\mathcal O}(\tilde Q_-(e))^{-1} $$ to the sublattice $H_1(\widetilde \Gamma,\mathbb Z)$. The isomorphism $\phi:H^1(\widetilde \Gamma,\mathbb Z)\to H_1(\widetilde \Gamma,\mathbb Z)$ is the canonical isomorphism, and $\hat{\tilde c}:H^1(\widetilde \Gamma,\mathbb Z)\to J\widetilde N$ is defined as $\lambda^{-1}\circ \tilde c \circ \phi$, where $\lambda$ is the canonical principal polarization. The bihomomorphism $\tilde \tau$ is related to the Deligne symbol, and the quadratic form $\widetilde B$ is given by the valuation of $\tilde \tau$ or alternatively the log of monodromy computed in Proposition \ref{projmoncone}. Now let us describe the degeneration data for the Prym.
As discussed in \eqref{eqnExtPrym}, the generalized (open) Prym corresponds to a morphism $ c^-:H_1(\widetilde \Gamma,\mathbb Z)^{[-]}\to \widehat A$, where $A$ is a finite cover of the abelian variety $$P_N:=\ker(\operatorname{Nm}:J\widetilde N \to JN)_0$$ (see \cite[Prop.~1.5]{abh}). The map $c^-$ is given by the commutative diagram $$ \begin{CD} H_1(\widetilde \Gamma,\mathbb Z) @>{}>> H_1(\widetilde \Gamma,\mathbb Z)^{[-]}@.\\ @V \tilde c VV @V c^- VV\\ \widehat {J\widetilde N} @>{}>> \widehat A. \end{CD} $$ The bihomomorphism $\tau^-$ is given by the ``restriction'' of the bihomomorphism $\tilde \tau$ (\cite[\S 3.2, esp.~\S 2.2, p.93]{abh}). The quadratic form $B^-$ is again given by the valuation of $\tau^-$ or equivalently the log of monodromy computed in Proposition \ref{propmoncone}. In summary, $\Psi(0) \in \beta_i$ if and only if $\operatorname{rank} H_1(\widetilde \Gamma,\mathbb Z)^{[-]}=i$, and $\Psi(0)\in \beta(\sigma)$ if and only if $B^-\in \sigma$, where $\sigma$ is the minimal cone with this property. The remaining modulus for $\Psi(0)$ is determined by the remaining degeneration data. Note that by definition, $B^-\in \sigma(\widetilde C/C)$, the monodromy cone. If $\sigma(\widetilde C/C)$ is contained in a cone in $\Sigma$, then the Prym map extends in a neighborhood of $\widetilde C\to C$, and so $\Psi(0)$ depends only on $\widetilde C\to C$, and not on the $1$-parameter family. Otherwise, a decomposition of $\sigma(\widetilde C/C)$ into cones in $\Sigma$ shows the dependence of even the combinatorial data on the $1$-parameter family. More precisely, in the notation of Remark \ref{remPrymMonCone}, $B^-=\sum_{e\in \Gamma}\operatorname{ord} \psi^*(z_e)(\tilde e^\vee-\iota\tilde e^\vee)^2$, and the order of vanishing determines the subcone of $\sigma(\widetilde C/C)$ in which the quadratic form $B^-$ lies. We shall now illustrate the above discussion with several examples. As a final note, the combinatorics of rank $1$ quadratic forms on a lattice are best considered by using squares of primitive linear forms. This is the convention we use in the appendices. To match those descriptions to the ones in this section, it works best to describe the quadratic form $B^-$ as $$ B^-=\sum_{e\in \Gamma}\alpha_e\ell_e^2 $$ where $\ell_e$ is defined in \eqref{EQNdefle} and $$ \alpha_e:= \left\{ \begin{array}{ll} \operatorname{ord}\psi^*(z_e) & \text{if } \ \iota \tilde e^\vee \ne -\tilde e^\vee ,\\ 4\operatorname{ord}\psi^*(z_e)& \text{if }\ \iota\tilde e^\vee=-\tilde e^\vee.\\ \end{array} \right. $$ \subsection{The Friedman--Smith loci $FS_n$} In this section we consider the image of the strict transforms of the Friedman--Smith loci. That is, given a $1$-parameter family of admissible covers degenerating to a Friedman--Smith cover in $FS_n$, we want to describe the associated point in $\bar A_g^V$. As we have already seen, the Friedman--Smith loci $FS_{n}$ consist of several components. These can be enumerated as follows: the curve $C$ is reducible, more precisely $C=C_1 \cup C_2$ where $C_1$ and $C_2$ intersect in $n$ points. The curves $C_i$ are smooth and irreducible. If $g_i=g(C_i)$, then $g_1 + g_2 + n-1=g+1$. The components of $FS_{n}$ then correspond to the different possibilities for $g_1 \geq g_2 > 0$. From \S \ref{sectPrym}, \S \ref{secFSexamples} one can see that $\operatorname{rank}H_1(\widetilde \Gamma,\mathbb Z)^{[-]}=n$ and $A=P_N=P_{N_1}\times P_{N_2}$ (see also \cite{abh}). This determines the image in $A_{g-1}^*$.
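To illustrate the last two displays, here is a minimal sketch (our own; the helper names `gram_of_Bminus` and `fs_forms` are hypothetical, and the coordinates of the $\ell_{e_i}$ are those computed in \S\ref{secFSMonCone}) assembling the Gram matrix of $B^-=\sum_e\alpha_e\ell_e^2$ from a choice of vanishing orders. For an $FS_n$ cover with $n\ge 2$, every edge falls in the first case of the definition of $\alpha_e$.

```python
import numpy as np

def gram_of_Bminus(alphas, forms):
    """Gram matrix of B^- = sum_e alpha_e * l_e^2, where each l_e is given
    by its integer coordinate vector in a basis of H^1(Gamma~, Z)^-."""
    forms = np.asarray(forms, dtype=int)
    return sum(a * np.outer(v, v) for a, v in zip(alphas, forms))

def fs_forms(n):
    """Coordinates of l_{e_1}, ..., l_{e_n} for an FS_n cover (n >= 2) in the
    basis z_1, ..., z_n: l_{e_1} = 2z_1 + z_2, l_{e_i} = z_i + z_{i+1} for
    1 < i < n, and l_{e_n} = z_n."""
    forms = []
    for i in range(1, n + 1):
        v = [0] * n
        if i == 1:
            v[0], v[1] = 2, 1
        elif i < n:
            v[i - 1], v[i] = 1, 1
        else:
            v[n - 1] = 1
        forms.append(v)
    return forms

# FS_3: all orders of vanishing equal (the generic case) vs. alpha_3 larger
print(gram_of_Bminus([1, 1, 1], fs_forms(3)))
print(gram_of_Bminus([1, 1, 2], fs_forms(3)))
```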
For the remaining extension data, we will focus on the quadratic form $B^-$, starting with a general point on an irreducible component of $FS_2$. \subsubsection{The $FS_2$ locus} Here we are in the torus rank $2$ case, i.e.~$i=2$. Thus we are no longer in Mumford's partial compactification, but we are still in the range where all known toroidal compactifications coincide, in particular also the second Voronoi compactification and the perfect cone compactification. In the notation of \S \ref{secFScd}, where the decomposition of the monodromy cone is established, the form $B^-$ is given by $ \alpha_1(\tilde e_1^\vee-\iota \tilde e_1^\vee )^2+\alpha_2(\tilde e_2^\vee-\iota \tilde e_2^\vee )^2 $ with $\alpha_i=\operatorname{ord}\psi^*(z_{e_i})$, and the monodromy cone decomposes according to the cases $\alpha_1<\alpha_2$, $\alpha_1=\alpha_2$ and $\alpha_1>\alpha_2$. In the general case, $\alpha_1=\alpha_2$. As explained in Remark \ref{remFS2Del}, the associated Delaunay decomposition of $\mathbb{R}^2$ is the tiling by squares and the corresponding cone is equivalent to the standard cone $\sigma_{1+1}=\langle x_1^2,x_2^2 \rangle$. The degenerate Prym is in this case a $\mathbb{P}^1 \times \mathbb{P}^1$-bundle over $A$ with ``opposite'' coordinate lines $\{0\} \times \mathbb{P}^1$ and $\{\infty\} \times \mathbb{P}^1$ as well as $\mathbb{P}^1 \times \{0\}$ and $\mathbb{P}^1 \times \{\infty\}$ identified with a shift over $A$. This shift is determined by $c^-$. There is a further parameter $b\in \mathbb{C}^*$ which describes the gluing of the lines which are identified. This parameter is given by the bihomomorphism $\tau$ and varies with the chosen $1$-parameter family (even if $B^-$ does not change). In fact this is the parameter in the fibers of the $\mathbb{P}^1$-bundle given by blowing up the general point of an irreducible component of $FS_2$. We can also choose $1$-parameter families with $\alpha_1>\alpha_2$ (the case $\alpha_1<\alpha_2$ can be obtained by symmetry). Now, as explained in Remark \ref{remFS2Del}, the Delaunay decomposition changes: every square breaks up into two triangles and the corresponding cone is equivalent to $\sigma_{K_3} = \langle x_1^2, x_2^2, (x_1 + x_2)^2 \rangle$. The degenerate Prym is the union of two $\mathbb{P}^2$-bundles over $A$ with their coordinate lines identified appropriately, again with a shift over $A$ which is determined by $c^-$. In this case the torus bundle ${\mathcal T}(\sigma)$ has rank $0$ and the bihomomorphism $\tau$ is trivial. We shall see below that points in $\beta(\sigma_{1+1})$ can also arise from other degenerations. \subsubsection{The $FS_3$ locus} We shall now move on to general points on $FS_3$. As before, the crucial point is the form $B^-$. In the notation of \S \ref{secFScd}, where the decomposition of the monodromy cone is established, the form $B^-$ is given by $ \sum_{i=1}^3\alpha_i(\tilde e_i^\vee-\iota \tilde e_i^\vee )^2 $ with $\alpha_i=\operatorname{ord}\psi^*(z_{e_i})$, and the monodromy cone decomposes according to the cases $\alpha_1=\alpha_2=\alpha_3$, $\alpha_1<\alpha_2=\alpha_3$ and $\alpha_1<\alpha_2 < \alpha_3$, together with all permutations. Here we discuss the general case, $\alpha_1=\alpha_2=\alpha_3$. In Remark \ref{remFS3Del} it is shown that the associated cone is equivalent to $\sigma_{C_4}=\langle x_1^2, x_2^2, x_3^2, (x_1 + x_2 + x_3)^2 \rangle$. The associated Delaunay decomposition of $\mathbb{R}^3$ consists of $1$ octahedron and $2$ tetrahedra.
The homomorphism $c^-$ again defines shift parameters and $\tau$ defines gluing parameters, see \cite[Section 7.2]{ghdegenerations}. The latter depends on the $1$-parameter family. The $1$-parameter families with different orders of vanishing will result in $B^-$ lying in one of the other cones in the decomposition of the monodromy cone (see Remark \ref{remFS3Del}). \begin{rem} The interesting point to note here is that this is a codimension $1$ stratum in $\beta_3^{\Sigma}$ and hence the exceptional divisors over the general points of the irreducible components of $FS_3$ do {\em{not}} map dominantly to $\beta_3^{\Sigma}$, even for small $g$. Thus the question remains to find an example which maps to the (unique) maximal stratum in $\beta_3^{\Sigma}$, namely the stratum $\beta(\sigma_{1+1+1})$ where $\sigma_{1+1+1}= \langle x_1^2, x_2^2, x_3^2\rangle$. Indeed this is not difficult to find. We can take an elementary \'etale covering with $6$ nodes (discussed below). \end{rem} \begin{rem} More generally, for an $FS_n$ locus with $n\ge 2$, if both $g_1, g_2 > 1$, then the abelian variety $A$ is reducible. Hence the strict transforms of the corresponding Friedman--Smith loci cannot map dominantly onto $\beta_n$, even when mapping to the relevant stratum $A_{g-n}$ of $A_g^{*}$. For the remaining stratum $g_2=1$, it is possible that the map from this component of $FS_2$ to $\beta_2$ is dominant for $g\leq 5$. This is clear for $g=2$, but in general needs some discussion of the continuous parameters. The main issue is whether the projection under $q$ (see \S \ref{secModStAbVar}) maps surjectively to ${\mathcal X}^{\times 2}$. \end{rem} \subsection{Elementary \'etale examples} In this section we show that the elementary \'etale examples (see \S \ref{secEEE}) map to the (unique) maximal stratum in $\beta_3^{\Sigma}$, namely the stratum $\beta(\sigma_{1+1+1})$ where $\sigma_{1+1+1}= \langle x_1^2, x_2^2, x_3^2\rangle$. Indeed it is shown in \S \ref{secEEE} that the monodromy cone of an elementary \'etale example with $2n$ nodes is of type $\sigma_{1+\cdots+1}$. The associated degenerate abelian varieties are $(\mathbb{P}^1)^n$-bundles over abelian varieties with ``opposite sides'' glued with a shift. The shift is given by $c^-$, the gluing by $\tau$. In particular points in $\beta(\sigma_{1 + 1})$ can arise not only from $FS_2$ covers but also from elementary \'etale covers with $4$ nodes. \subsection{The Wirtinger locus $W_n$} We conclude this discussion with the Wirtinger examples $W_n$ (see \S \ref{secWE}). In this case $\widetilde C$ has two components which are exchanged by the involution and $2n$ nodes which are pairwise interchanged. In Remark \ref{remWDel} it is established that the $W_n$ monodromy cone is $\mathbb R_{\ge 0}\langle x_1^2,(x_1-x_2)^2, \ldots, (x_{n-2}-x_{n-1})^2,x_{n-1}^2\rangle$. For $n=3$, the Delaunay decomposition of $\mathbb R^2$ is the tiling by $2$-simplices obtained by slicing the standard square into $2$ triangles. The associated degenerate abelian variety is a union of $2$ copies of $\mathbb{P}^2$-bundles, corresponding to the slicing of the square into $2$ triangles. As always the gluing is determined by $c^-$; the bihomomorphism $\tau$ is trivial here. We also see that points in $\beta(\sigma_{K_3})$ can arise both from the blow-up of $FS_2$ and from Wirtinger examples $W_3$.
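The coincidence in $\beta(\sigma_{K_3})$ can be checked symbolically. The following small sympy sketch (ours, not from the text above) verifies that the $W_3$ monodromy cone is carried onto $\sigma_{K_3}$ by the unimodular substitution $x_2\mapsto -x_2$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
w3_cone = {sp.expand(q) for q in (x1**2, (x1 - x2)**2, x2**2)}
k3_cone = {sp.expand(q) for q in (x1**2, x2**2, (x1 + x2)**2)}

# apply the unimodular change of coordinates x2 -> -x2 to the W_3 generators
image = {sp.expand(q.subs(x2, -x2)) for q in w3_cone}
assert image == k3_cone
print("W_3 cone equals sigma_{K_3} after x2 -> -x2")
```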
For $n=4$ the toric part of the semi-abelic variety corresponds to the dicing of a cube into an octahedron and two tetrahedra, and thus consists of a complete intersection $F(2,2)$ of two quadrics in $\mathbb{P}^5$ and two copies of $\mathbb{P}^3$. We take the opportunity to correct here an error in \cite[Example 5.2.2]{abh}, where it was claimed that the toric parts are all projective spaces. We note that the general $FS_3$ degenerations are mapped to the same stratum.
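For the record, a short computation (ours; we take the generic point of the $W_4$ monodromy cone with all coefficients equal to $1$) shows that the resulting quadratic form has the tridiagonal Gram matrix of the $A_3$ root lattice, whose Delaunay decomposition is the classical dicing into octahedra and tetrahedra, matching the description above.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# generic point of the W_4 cone <x1^2, (x1-x2)^2, (x2-x3)^2, x3^2>
q = x1**2 + (x1 - x2)**2 + (x2 - x3)**2 + x3**2
gram = sp.hessian(sp.expand(q) / 2, (x1, x2, x3))
print(gram)  # the A_3 Gram matrix
assert gram == sp.Matrix([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
```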
{ "timestamp": "2015-03-11T01:03:17", "yymm": "1403", "arxiv_id": "1403.1938", "language": "en", "url": "https://arxiv.org/abs/1403.1938", "abstract": "The main purpose of this paper is to present a conceptual approach to understanding the extension of the Prym map from the space of admissible double covers of stable curves to different toroidal compactifications of the moduli space of principally polarized abelian varieties. By separating the combinatorial problems from the geometric aspects we can reduce this to the computation of certain monodromy cones. In this way we not only shed new light on the extension results of Alexeev, Birkenhake, Hulek, and Vologodsky for the second Voronoi toroidal compactification, but we also apply this to other toroidal compactifications, in particular the perfect cone compactification, for which we obtain a combinatorial characterization of the indeterminacy locus, as well as a geometric description up to codimension six, and an explicit toroidal resolution of the Prym map up to codimension four.", "subjects": "Algebraic Geometry (math.AG)", "title": "Extending the Prym map to toroidal compactifications of the moduli space of abelian varieties", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692318706084, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.7079585074492731 }
\title{Inversion and extension of the finite Hilbert transform on $(-1,1)$}
% https://arxiv.org/abs/1901.06334

\begin{abstract}
The principle of optimizing inequalities, or their equivalent operator theoretic formulation, is well established in analysis. For an operator, this corresponds to extending its action to larger domains, hopefully to the largest possible such domain (i.e., its \textit{optimal domain}). Some classical operators are already optimally defined (e.g., the Hilbert transform in $L^p(\mathbb{R})$, $1<p<\infty$) and others are not (e.g., the Hausdorff-Young inequality in $L^p(\mathbb{T})$, $1<p<2$, or Sobolev's inequality in various spaces). In this paper a detailed investigation is undertaken of the finite Hilbert transform $T$ acting on rearrangement invariant spaces $X$ on $(-1,1)$, an operator whose singular kernel is neither positive nor does it possess any monotonicity properties. For a large class of such spaces $X$ it is shown that $T$ is already optimally defined on $X$ (this is known for $L^p(-1,1)$ for all $1<p<\infty$, except $p=2$). The case $p=2$ is significantly different because the range of $T$ is a proper dense subspace of $L^2(-1,1)$. Nevertheless, by a completely different approach, it is established that $T$ is also optimally defined on $L^2(-1,1)$. Our methods are also used to show that the solution of the airfoil equation, which is well known for the spaces $L^p(-1,1)$ whenever $p\not=2$ (due to certain properties of $T$), can also be extended to the class of r.i.\ spaces $X$ considered in this paper.
\end{abstract}
\section{Introduction} For $1\le p\le2$ the Fourier transform $F$ maps $L^p(\mathbb{T})$ into $\ell^{p'}(\mathbb{Z})$, with $\frac{1}{p}+\frac{1}{p'}=1$. The Hausdorff-Young inequality $\|F(f)\|_{p'}\le \|f\|_p$ for $f\in L^p(\mathbb{T})$ ensures that $F$ is continuous. The following question was raised by R.\ E.\ Edwards, \cite[p. 206]{edwards}, 50 years ago: \textit{Given $1\le p\le 2$, what can be said about the space $\mathbf{F}^p(\mathbb{T})$ consisting of those functions $f\in L^1(\mathbb{T})$ having the property that $F(f\chi_A)\in \ell^{p'}(\mathbb{Z})$ for all sets $A$ in the Borel $\sigma$-algebra $\mathcal{B}_{\mathbb{T}}$ on $\mathbb{T}$?} A consideration of the functional \begin{equation}\label{fourier} f\mapsto\sup_{A\in\mathcal{B}_{\mathbb{T}}}\left\|F(\chi_Af)\right\|_{p'}, \end{equation} would be expected to be relevant in this regard. For $p=2$, the operator $F\colon L^2(\mathbb{T})\to \ell^2(\mathbb{Z})$ is a Banach space \textit{isomorphism}, which implies that $\mathbf{F}^2(\mathbb{T})=L^2(\mathbb{T})$. What about the case $1<p<2$? It turns out that the functional \eqref{fourier} is a norm, that $\mathbf{F}^p(\mathbb{T})\subseteq L^1(\mathbb{T})$ is a Banach function space (briefly, B.f.s.) \textit{properly} containing $L^p(\mathbb{T})$, and that $F\colon \mathbf{F}^p(\mathbb{T})\to \ell^{p'}(\mathbb{Z})$ is continuous. Moreover, $\mathbf{F}^p(\mathbb{T})$ is the \textit{largest} such space in a certain sense. For the above facts we refer to \cite{mockenhaupt-ricker}. The point is that the Hausdorff-Young inequality for functions in $L^p(\mathbb{T})$, $1<p<2$, \textit{can} be extended to its \textit{genuinely larger} optimal domain space $\mathbf{F}^p(\mathbb{T})$. For many classical inequalities in analysis, or their equivalent operator theoretic formulation, an investigation along the lines of the Hausdorff-Young inequality alluded to above can be quite fruitful. One has a linear operator $S$ defined on some B.f.s.\ $Z\subseteq L^0(\mu)$, with $(\Omega,\Sigma,\mu)$ a measure space, taking values in a Banach space $Y$ and a B.f.s.\ $X\subseteq Z$ such that $S\colon X\to Y$ is bounded. The above question posed by Edwards is also meaningful in this setting: What can be said about the space $X_S$ consisting of those functions $f\in Z$ satisfying $S(f\chi_A)\in Y$ for all $A\in\Sigma$? In particular, is $X_S$ genuinely larger than $X$? If so, can $X_S$ be equipped with a function norm such that $X\subseteq X_S$ continuously and $S$ has a $Y$-valued, continuous linear extension to $X_S$? And, of course, $X_S$ should be the \textit{largest} space with these properties. A few examples will illuminate this discussion. Let $\Omega\subset\mathbb{R}^n$ be a bounded domain with $|\Omega|=1$. The validity of the generalized Sobolev inequality $\|u^*\|_Y\le C \||\nabla u|^*\|_X$ for $u\in C_0^1(\Omega)$, where $v^*$ is the decreasing rearrangement of a function $v$ and $X,Y$ are rearrangement invariant (briefly, r.i.) spaces on $[0,1]$, is equivalent to the boundedness of the inclusion operator $j\colon W_0^1X(\Omega)\to Y(\Omega)$ for a suitable Sobolev space $W_0^1X(\Omega)$. 
By using a generalized Poincar\'e inequality, Cwikel and Pustylnik, \cite{cwikel-pustylnik}, and Edmunds, Kerman and Pick, \cite{edmunds-kerman-pick}, showed that the boundedness of $j$ is equivalent to the boundedness, from $X$ into $Y$, of the 1-dimensional operator $S$ associated with Sobolev's inequality, namely, $$ (S(f))(t):=\int_t^1f(s)s^{(1/n)-1}ds,\quad t\in[0,1], $$ which is generated by the kernel $K(t,s):=s^{(1/n)-1}\chi_{[t,1]}$ on $[0,1]\times[0,1]$. Accordingly, being able to extend the operator $S$ is equivalent to extending the imbedding $j$ and hence, to refining the generalized Sobolev inequality. The optimal extension of this kernel operator $S$ is treated in \cite{curbera-ricker-sm}; whether or not the initial space becomes genuinely larger depends on properties of $X$ and $Y$. A knowledge of the optimal domain of $S$ has implications for the compactness of the Sobolev imbedding $j$, \cite{curbera-ricker-tams}, \cite{curbera-ricker-ind}. For $0<\alpha<1$, the classical fractional integral operator in the spaces $L^p(0,1)$, $1\le p\le \infty$, has kernel (up to a constant) given by $K(t,s)=|s-t|^{\alpha-1}$. Its optimal extension has been investigated in \cite{curbera-ricker-nach}. For convolution (and more general Fourier multipliers) operators in $L^p(G)$, $1\le p<\infty$, with $G$ a compact abelian group, see \cite{mockenhaupt-okada-ricker}, \cite[Ch.7]{okada-ricker-sanchez} and the references therein. The optimal extension of the classical Hardy operator in $L^p(\mathbb{R})$, $1<p<\infty$, with kernel $K(t,s):=(1/t)\chi_{[0,t]}(s)$ has been investigated in \cite{delgado-soria}. In this paper we consider another classical singular integral operator. The Hilbert transform $H\colon L^p(\mathbb R)\to L^p(\mathbb R)$, for $1<p<\infty$ (whose boundedness is due to M.\ Riesz), is defined via convolution as a principal value integral; see, for example, \cite[\S6.7]{edwards-gaudry}. Since $H^2=-I$, the operator $H$ is a Banach space \textit{isomorphism} on $L^p(\mathbb R)$ for every $1<p<\infty$ and so there is \textit{no} larger B.f.s.\ which contains $L^p(\mathbb R)$ and such that $H$ has an $L^p(\mathbb R)$-valued extension to this space. A related operator is the Hilbert transform $H_{2\pi}$ of $2\pi$-periodic functions defined via the principal value integrals \begin{equation*}\label{FHT-2pi} (H_{2\pi}(f))(x)=p.v. \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-u)\cot(u/2)\,du \end{equation*} for every measurable $2\pi$-periodic function $f$ and for every point $x\in[-\pi,\pi]$ for which the p.v.-integral exists. For each $1<p<\infty$, the operator $H_{2\pi}$ is linear and continuous from $L^p(-\pi,\pi)$ into itself; denote this operator by $H^p_{2\pi}$. It is known that $H^p_{2\pi}$ has proper closed range, \cite[Sect. 9.1]{butzer-nessel}. Hence, $H^p_{2\pi}$ is surely not an isomorphism on $L^p(-\pi,\pi)$. Nevertheless, as for $H$, it turns out that there is \textit{no} genuinely larger B.f.s.\ containing $L^p(-\pi,\pi)$ such that $H^p_{2\pi}$ has an $L^p(-\pi,\pi)$-valued extension to this space, \cite[Example 4.20]{okada-ricker-sanchez}. The finite Hilbert transform $T(f)$ of $f\in L^1(-1,1)$ is the principal value integral \begin{equation*} (T(f))(t)=\lim_{\varepsilon\to0^+} \frac{1}{\pi} \left(\int_{-1}^{t-\varepsilon}+\int_{t+\varepsilon}^1\right) \frac{f(x)}{x-t}\,dx , \end{equation*} which exists for a.e.\ $t\in(-1,1)$ and is a measurable function. 
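For readers who wish to experiment, the following numerical sketch (ours; the helper `fht` is ad hoc, not from the literature cited above) evaluates $T(f)(t)$ via SciPy's built-in Cauchy principal value quadrature, and checks the result against the elementary closed form $T(\mathbf{1})(t)=\frac{1}{\pi}\log\frac{1-t}{1+t}$.

```python
import numpy as np
from scipy.integrate import quad

def fht(f, t):
    """Finite Hilbert transform T(f)(t) on (-1,1): quad with weight='cauchy'
    computes the principal value integral p.v. int f(x)/(x - t) dx."""
    val, _ = quad(f, -1.0, 1.0, weight='cauchy', wvar=t)
    return val / np.pi

# sanity check against the closed form T(1)(t) = (1/pi) log((1-t)/(1+t))
for t in (-0.5, 0.0, 0.7):
    exact = np.log((1 - t) / (1 + t)) / np.pi
    assert abs(fht(lambda x: 1.0, t) - exact) < 1e-7
print("p.v. quadrature matches the closed form for f = 1")
```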
It is known to have important applications to aerodynamics, via the resolution of the so-called \textit{airfoil equation}, \cite{cheng-rott}, \cite[Ch.11]{king}, \cite{reissner}, \cite{tricomi-1}, \cite{tricomi}. More recently, the finite Hilbert transform has also found applications to problems arising in image reconstruction; see, for example, \cite{katsevich-tovbis}, \cite{sidky-etal}. For each $1<p<\infty$ the linear operator $f\mapsto T(f)$ maps $L^p(-1,1)$ continuously into itself (denote this operator by $T_p$). Except when $p=2$, the operator $T_p$ behaves similarly, in some sense, to $H^p_{2\pi}$. Consequently, there is no larger B.f.s.\ containing $L^p(-1,1)$ such that $T_p$ has an $L^p(-1,1)$-valued extension to this space, \cite[Example 4.21]{okada-ricker-sanchez}. However, for $p=2$ the situation is significantly different, as already pointed out long ago in \cite[p.44]{sohngen}. One of the reasons is that the range of $T_2$ is a proper dense subspace of $L^2(-1,1)$. The arguments used for $T_p$ in the cases $1<p<2$ and $2<p<\infty$ do not apply to $T_2$. Moreover, they fail to indicate whether or not $T_2$ has an $L^2(-1,1)$-valued extension to a B.f.s.\ genuinely larger than $L^2(-1,1)$. The atypical behavior of $T$ when $p=2$ has also been observed in \cite{astala-etal}, where $T$ is considered to be acting in weighted $L^p$-spaces. Accordingly, the case $p=2$ requires different arguments. In this paper we consider the inversion and the extension of the finite Hilbert transform $T$ on function spaces on $(-1,1)$. In Section \ref{S3} we extend known properties of $T$ when it acts on the spaces $L^p(-1,1)$, for $p\not=2$, to a larger class of r.i.\ spaces $X$ on $(-1,1)$ satisfying certain restrictions on their Boyd indices, more precisely, that $0<\underline{\alpha}_X\le\overline{\alpha}_X<1/2$ or $1/2<\underline{\alpha}_X\le\overline{\alpha}_X<1$; see Theorems \ref{theo-3} and \ref{theo-4}. In particular, it is established that $T$ is a Fredholm operator in such r.i.\ spaces. This allows a refinement of the solution of the airfoil equation by extending it to such r.i.\ spaces; see Corollary \ref{cor-airfoil}. In Section \ref{S4} we apply the results of the previous section to prove (cf. Theorem \ref{theo-10}) the impossibility of extending the finite Hilbert transform when it acts on r.i.\ spaces $X$ satisfying $0<\underline{\alpha}_X\le\overline{\alpha}_X<1/2$ or $1/2<\underline{\alpha}_X\le\overline{\alpha}_X<1$. The proof relies on a deep result of Talagrand concerning $L^0$-valued measures. In the course of that investigation we establish a rather unexpected characterization of when a function $f\in L^1(-1,1)$ belongs to $X$ in terms of the set of $T$-transforms $\{T(f\chi_A): A\; \mathrm{measurable}\}$; see Proposition \ref{cor-8}. In the final Section \ref{S5} we address the case $p=2$. It is established (cf. Theorem \ref{theo-14}), via a completely different approach, that $T\colon L^2(-1,1)\to L^2(-1,1)$ does not have a continuous $L^2(-1,1)$-valued extension to any larger B.f.s. The argument relies on showing that the norm \begin{equation*} f\mapsto \sup_{|\theta|=1}\left\|T(\theta f)\right\|_2 \end{equation*} (the analogue of \eqref{fourier} in the appropriate setting) is equivalent to the usual norm in $L^2(-1,1)$. We conclude Section \ref{S5} by extending the above-mentioned characterization to show that $f\in L^2(-1,1)$ if and only if $T(f\chi_A)\in L^2(-1,1)$ for every measurable set $A\subseteq(-1,1)$; see Corollary \ref{cor-15}.
Not all r.i.\ spaces $X$ which $T$ maps into itself (i.e., satisfying $0<\underline{\alpha}_X\le\overline{\alpha}_X<1$) are covered. Except when $X=L^2(-1,1)$, for those r.i.\ spaces $X$ not satisfying the conditions $0<\underline{\alpha}_X\le\overline{\alpha}_X<1/2$ or $1/2<\underline{\alpha}_X\le\overline{\alpha}_X<1$ (e.g., the Lorentz spaces $L^{2,q}$ for $1\le q\le\infty$ with $q\not=2$) the techniques used here do not apply; see Remark \ref{rem-final}. \section{Preliminaries} \label{S2} In this paper the relevant measure space is $(-1,1)$ equipped with its Borel $\sigma$-algebra $\mathcal{B}$ and Lebesgue measure $|\cdot|$ (restricted to $\mathcal{B}$). We denote by $\text{sim }\mathcal{B}$ the vector space of all $\mathbb{C}$-valued, $\mathcal{B}$-simple functions and by $L^0(-1,1)=L^0$ the space (of equivalence classes) of all $\mathbb{C}$-valued measurable functions, endowed with the topology of convergence in measure. The space $L^p(-1,1)$ is denoted simply by $L^p$, for $1\le p\le\infty$. A \textit{Banach function space} (B.f.s.) $X$ on $(-1,1)$ is a Banach space $X\subseteq L^0$ satisfying the ideal property, that is, $g\in X$ and $\|g\|_X\le\|f\|_X$ whenever $f\in X$ and $|g|\le|f|$ a.e. The \textit{associate space} $X'$ of $X$ consists of all functions $g$ satisfying $\int_{-1}^1|fg|<\infty$, for every $f\in X$, equipped with the norm $\|g\|_{X'}:=\sup\{|\int_{-1}^1fg|:\|f\|_X\le1\}$. The space $X'$ is a closed subspace of the Banach space dual $X^*$ of $X$. The second associate space $X''$ of $X$ is defined as $X''=(X')'$. The norm in $X$ is absolutely continuous if, for every $f\in X$, we have $\|f\chi_A\|_X\to0$ whenever $|A|\to0$. The space $X$ satisfies the Fatou property if, whenever $\{f_n\}_{n=1}^\infty\subseteq X$ satisfies $0\le f_n\le f_{n+1}\uparrow f$ a.e.\ with $\sup_n\|f_n\|_X<\infty$, then $f\in X$ and $\|f_n\|_X\to\|f\|_X$. A \textit{rearrangement invariant} (r.i.) space $X$ on $(-1,1)$ is a B.f.s.\ such that if $g^*\le f^*$ with $f\in X$, then $g\in X$ and $\|g\|_X\le\|f\|_X$. Here $f^*\colon[0,2]\to[0,\infty]$ is the decreasing rearrangement of $f$, that is, the right continuous inverse of its distribution function: $\lambda\mapsto|\{t\in (-1,1):\,|f(t)|>\lambda\}|$. The associate space $X'$ of a r.i.\ space $X$ is again a r.i.\ space. Every r.i.\ space on $(-1,1)$ satisfies $L^\infty\subseteq X\subseteq L^1$, \cite[Corollary II.6.7]{bennett-sharpley}. Moreover, if $f\in X$ and $g\in X'$, then $fg\in L^1$ and $\|fg\|_{L^1}\le \|f\|_X \|g\|_{X'}$, i.e., H\"older's inequality is available. The fundamental function of $X$ is defined by $\varphi_X(t):=\|\chi_{A}\|_X$ for $A\in\mathcal{B}$ with $|A|=t$, for $t\in[0,2]$. In this paper \textit{all} B.f.s.' $X$ (hence, all r.i.\ spaces) are on $(-1,1)$ relative to Lebesgue measure and, as in \cite{bennett-sharpley}, satisfy the Fatou property. In this case $X''=X$ and hence, $f\in X$ if and only if $\int_{-1}^1|fg|<\infty$, for every $g\in X'$. Moreover, $X'$ is a norm-fundamental subspace of $X^*$, that is, $\|f\|_X=\sup_{\|g\|_{X'}\le1} |\int_{-1}^1fg|$ for $f\in X$, \cite[pp.12-13]{bennett-sharpley}. If $X$ is separable, then $X'=X^*$. 
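As a quick informal illustration of the definition of $f^*$ (a numerical sketch of ours, not needed for what follows): sorting sampled values of $|f|$ in decreasing order approximates $f^*$ on $[0,2]$, and for $f(x)=1-|x|$ the distribution function is $\lambda\mapsto 2(1-\lambda)$, so that $f^*(t)=1-t/2$.

```python
import numpy as np

N = 100_000
x = np.linspace(-1, 1, N, endpoint=False) + 1.0 / N   # midpoints of (-1,1)
t = 2.0 * (np.arange(N) + 0.5) / N                    # midpoints of (0,2)

# empirical decreasing rearrangement: sort the sampled values of |f|
f = lambda y: 1.0 - np.abs(y)
fstar = np.sort(np.abs(f(x)))[::-1]

# for f(y) = 1 - |y| one has f*(t) = 1 - t/2 on [0,2]; the empirical
# version matches to O(1/N)
assert np.max(np.abs(fstar - (1.0 - t / 2.0))) < 1e-3
print("empirical rearrangement matches f*(t) = 1 - t/2")
```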
The family of r.i.\ spaces includes many classical spaces appearing in analysis, such as the Lorentz $L^{p,q}$ spaces, \cite[Definition IV.4.1]{bennett-sharpley}, Orlicz $L^\varphi$ spaces \cite[\S4.8]{bennett-sharpley}, Marcinkiewicz $M_\varphi$ spaces, \cite[Definition II.5.7]{bennett-sharpley}, Lorentz $\Lambda_\varphi$ spaces, \cite[Definition II.5.12]{bennett-sharpley}, and the Zygmund $L^p(\text{log L})^\alpha$ spaces, \cite[Definition IV.6.11]{bennett-sharpley}. In particular, $L^p=L^{p,p}$, for $1\le p\le \infty$. The space weak-$L^1$, denoted by $L^{1,\infty}(-1,1)=L^{1,\infty}$, will play an important role; it is not a Banach space, \cite[Definition IV.4.1]{bennett-sharpley}. It satisfies $L^1\subseteq L^{1,\infty} \subseteq L^0$, with all inclusions continuous. The dilation operator $E_t$ for $t>0$ is defined, for each $f\in X$, by $E_t(f)(s):=f(st)$ for $-1\le st\le1$ and zero otherwise. The operator $E_t\colon X\to X$ is bounded with $\|E_t\|_{X\to X}\le \max\{t,1\}$. The \textit{lower} and \textit{upper Boyd indices} of $X$ are defined, respectively, by \begin{equation*} \underline{\alpha}_X\,:=\,\sup_{0<t<1}\frac{\log \|E_{1/t}\|_{X\to X}}{\log t} \;\;\mbox{and}\;\; \overline{\alpha}_X\,:=\,\inf_{1<t<\infty}\frac{\log \|E_{1/t}\|_{X\to X}}{\log t} , \end{equation*} \cite[Definition III.5.12]{bennett-sharpley}. They satisfy $0\le\underline{\alpha}_X\le \overline{\alpha}_X\le1$. Note that $\underline{\alpha}_{L^p}= \overline{\alpha}_{L^p}=1/p$. We recall a technical fact from the theory of r.i.\ spaces that will often be used; see, for example, \cite[Proposition 2.b.3]{lindenstrauss-tzafriri}. \begin{lemma}\label{lemma-1} Let $X$ be a r.i.\ space such that $0<\alpha<\underline{\alpha}_X\le \overline{\alpha}_X<\beta<1$. Then there exist $p,q$ satisfying $1/\beta<p<q<1/\alpha$ such that $L^q\subseteq X \subseteq L^p$ with continuous inclusions. \end{lemma} An important role will be played by the Marcinkiewicz space $L^{2,\infty}(-1,1)=L^{2,\infty}$, also known as weak-$L^2$, \cite[Definition IV.4.1]{bennett-sharpley}. It consists of those $f\in L^0$ satisfying \begin{equation}\label{L2oo} f^*(t)\le \frac{M}{t^{1/2}},\quad 0<t\le2, \end{equation} for some constant $M>0$. Consider the function $1/\sqrt{1-x^2}$ on $(-1,1)$. Since its decreasing rearrangement is $(1/\sqrt{1-x^2})^*(t)= 2/\sqrt{t(4-t)}$, which is bounded above and below by constant multiples of $t^{-1/2}$ on $(0,2]$ (in particular, $(1/\sqrt{1-x^2})^*(t)\le 2/t^{1/2}$), it follows that $1/\sqrt{1-x^2}$ belongs to $L^{2,\infty}$. Actually, for any r.i.\ space $X$ it is the case that $1/\sqrt{1-x^2}\in X$ if and only if $L^{2,\infty}\subseteq X$. Consequently, $L^{2,\infty}$ is the \textit{smallest} r.i.\ space which contains $1/\sqrt{1-x^2}$. Note that $\underline{\alpha}_{L^{2,\infty}}= \overline{\alpha}_{L^{2,\infty}}=1/2$. For all of the above and further facts on r.i.\ spaces see \cite{bennett-sharpley}, \cite{lindenstrauss-tzafriri}, for example. \section{Inversion of the finite Hilbert transform on r.i.\ spaces} \label{S3} In \cite[Ch.11]{king}, \cite{okada-elliot}, \cite[\S4.3]{tricomi} a detailed study of the inversion of the finite Hilbert transform was undertaken for $T$ acting on the spaces $L^p$ whenever $1<p<2$ and $2<p<\infty$. We study here the extension of those results to a larger class of spaces, namely, the r.i.\ spaces. The restrictions on $p$ indicated above for the $L^p$ spaces can be formulated for r.i.\ spaces in terms of their Boyd indices, namely, $0<\underline{\alpha}_X\le\overline{\alpha}_X<1/2$ and $1/2<\underline{\alpha}_X\le\overline{\alpha}_X<1$.
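In view of the role that $1/\sqrt{1-x^2}$ and $L^{2,\infty}$ play in what follows, we record an informal numerical check (a sketch of ours) of the weak-$L^2$ estimate \eqref{L2oo} for this function: the empirical rearrangement, computed as in the earlier sketch, satisfies $\sup_t f^*(t)\sqrt{t}\le 2$.

```python
import numpy as np

N = 1_000_000
x = np.linspace(-1, 1, N, endpoint=False) + 1.0 / N   # midpoints of (-1,1)
fstar = np.sort(1.0 / np.sqrt(1.0 - x**2))[::-1]       # empirical f*
t = 2.0 * (np.arange(N) + 0.5) / N

M = np.max(fstar * np.sqrt(t))
print(f"sup_t f*(t) sqrt(t) ~= {M:.4f}")   # about sqrt(2), so (L2oo) holds
assert M <= 2.0
```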
A result of Boyd, \cite[Theorem III.5.18]{bennett-sharpley}, allows the extension of Riesz's classical theorem on the boundedness of the Hilbert transform $H$ on the spaces $L^p(\mathbb{R})$, for $1<p<\infty$, to a certain class of r.i.\ spaces. Indeed, since $Tf=\chi_{(-1,1)}H(f\chi_{(-1,1)})$, it follows for a r.i.\ space $X$ with non-trivial lower and upper Boyd indices, that is, $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$, that $T$ maps $X$ boundedly into itself; this is indicated by simply writing $T_X$. Since $\underline{\alpha}_{X'}=1-\overline{\alpha}_X$ and $\overline{\alpha}_{X'}=1-\underline{\alpha}_X$, the condition $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$ implies that $0<\underline{\alpha}_{X'} \le \overline{\alpha}_{X'}<1$. Hence, $T_{X'}\colon X'\to X'$ is also bounded. The operator $T$ is not continuous on $L^1$. However, due to a result of Kolmogorov, \cite[Theorem III.4.9(b)]{bennett-sharpley}, $T\colon L^1\to L^{1,\infty}$ is continuous. It follows from the Parseval formula in Proposition \ref{prop-2}(b) below that the restriction of the dual operator $T_X^*\colon X^*\to X^*$ of $T_X$ to the closed subspace $X'$ of $X^*$ is precisely $-T_{X'}\colon X'\to X'$. In the study of the operator $T$ an important role is played by the particular function $1/\sqrt{1-x^2}$, which belongs to each $L^p$, $1\le p<2$. The reason is that \begin{equation}\label{0} T\Big(\frac{1}{\sqrt{1-x^2}}\Big)(t)= p.v. \frac{1}{\pi}\int_{-1}^{1}\frac{1}{\sqrt{1-x^2} (x-t)}\,dx=0,\quad -1< t< 1, \end{equation} and, moreover, that if $T(f)(t)=0$ for a.e.\ $t\in(-1,1)$ with $f$ a function belonging to some space $L^p$, $1<p<\infty$, then necessarily $f(x)=C/\sqrt{1-x^2}$ for some constant $C\in\mathbb C$; \cite[\S4.3 (14)]{tricomi}. Combining this observation with Lemma \ref{lemma-1} it follows, for every r.i.\ space $X$ satisfying $0<\underline{\alpha}_X\le\overline{\alpha}_X<1$, that $T_X$ is either injective or $\mathrm{dim}(\mathrm{Ker}(T_X))=1$. Recall that $L^{2,\infty}$ is the \textit{smallest} r.i.\ space containing the function $1/\sqrt{1-x^2}$, that is, $1/\sqrt{1-x^2}\in X$ if and only if $L^{2,\infty}\subseteq X$. The Parseval and Poincar\'e-Bertrand formulae are important tools for studying the finite Hilbert transform in the spaces $L^p$, $1<p<\infty$, \cite[\S 4.3]{tricomi}. It should be noted that a result of Love is essential in order to have a sharp version of the Poincar\'e-Bertrand formula, \cite{love}. The validity of both of these formulae can be extended to the setting of r.i.\ spaces. \begin{proposition}\label{prop-2} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. \begin{itemize} \item[(a)] Let $f\in L^1$ satisfy $fT_{X'}(g)\in L^1$ for all $g\in X'$. Then, for every $g\in X'$, the function $gT(f)\in L^1$ and $$ \int_{-1}^1fT_{X'}(g)=-\int_{-1}^1gT(f). $$ \item[(b)] The Parseval formula holds for the pair $X$ and $X'$, that is, \begin{equation*}\label{parseval} \int_{-1}^{1}fT_{X'}(g)=-\int_{-1}^{1}gT_X(f),\quad f\in X, g\in X'. \end{equation*} \item[(c)] The Poincar\'e-Bertrand formula holds for the pair $X$ and $X'$, that is, for all $f\in X$ and $g\in X'$ we have \begin{equation*}\label{poincare} T(gT_X(f)+fT_{X'}(g))=(T_X(f))(T_{X'}(g))-fg,\quad \mathrm{a.e.} \end{equation*} \end{itemize} \end{proposition} \begin{proof} (a) Assume first that $f\in L^\infty$. By Lemma \ref{lemma-1}, there exists $1<q<\infty$ satisfying $L^q\subseteq X$, so that $X'\subseteq L^{q'}$.
Then $$ \int_{-1}^1fT_{X'}(g)=-\int_{-1}^1gT_X(f)=-\int_{-1}^1gT(f),\quad g\in X', $$ via the Parseval formula for the pair $L^q$ and $L^{q'}$, \cite[Sect. 11.10.8]{king}, \cite[Sect. 4.2, 4.3]{tricomi}, because $f\in L^\infty\subseteq L^q$ and $g\in X'\subseteq L^{q'}$. Now let $f\in L^1$ be a general function satisfying the assumption of (a). Define $A_n:=|f|^{-1}([0,n])$ and $f_n:=f\chi_{A_n}\in L^\infty$ for $n\in\mathbb N$. Then $\lim_n f_n=f$ in $L^1$. It follows from Kolmogorov's Theorem that $\lim_nT(f_n)=T(f)$ in $L^{1,\infty}$. Since the inclusion $L^{1,\infty}\subseteq L^0$ is continuous, we can conclude that $\lim_nT(f_n)=T(f)$ in measure. Accordingly, by passing to a subsequence if necessary, we may assume that $\lim_nT_X(f_n)=\lim_nT(f_n)=T(f)$ pointwise a.e. Fix $g\in X'$. Given any $A\in\mathcal{B}$, the Dominated Convergence Theorem ensures that \begin{equation}\label{i} \lim_n f_nT_{X'}(g\chi_A)=fT_{X'}(g\chi_A),\quad \text{in } L^1, \end{equation} as $|f_nT_{X'}(g\chi_A)|\le|fT_{X'}(g\chi_A)|$ pointwise for $n\in\mathbb N$ and because $fT_{X'}(g\chi_A)\in L^1$ by assumption. For each $n\in\mathbb N$, the first part of this proof applied to $f_n\in L^\infty\subseteq X$ yields $\int_{-1}^1f_nT_{X'}(g\chi_A)=-\int_{-1}^1(g\chi_A)T_X(f_n)$. It follows from \eqref{i} that \begin{align*}\label{ii} \lim_n\int_AgT_X(f_n)&=\lim_n\int_{-1}^1 (g\chi_A)T_X(f_n) \nonumber \\ & = -\lim_n\int_{-1}^1 f_nT_{X'}(g\chi_A) = -\int_{-1}^1 fT_{X'}(g\chi_A). \end{align*} Since this holds for all sets $A\in\mathcal{B}$ and since $\lim_n gT_X(f_n)=gT(f)$ pointwise a.e., we can conclude that both $gT(f)\in L^1$ and \begin{equation}\label{iii} \lim_n gT_X(f_n)=gT(f),\quad \text{in } L^1; \end{equation} see, for example, \cite[Lemma 2.3]{lewis}. This and \eqref{i} with $A:=(-1,1)$ ensure that $\int_{-1}^1fT_{X'}(g)=-\int_{-1}^1gT(f)$. So, (a) is established. (b) Given any $f\in X$ and $g\in X'$, H\"older's inequality ensures that $fT_{X'}(g)\in L^1$. So, part (b) follows from (a). (c) Fix $f\in X$ and $g\in X'$. The proof of part (a) shows that there exists a sequence $\{f_n\}_{n=1}^\infty\subseteq L^\infty\subseteq X$ satisfying the conditions: \begin{itemize} \item[(i)] $\lim_n f_n=f$ and $\lim_n T_X(f_n)=T_X(f)$ pointwise a.e., as well as \item[(ii)] $\lim_n f_nT_{X'}(g)=fT_{X'}(g)$ in $L^1$ and $\lim_n gT_X(f_n)=gT_X(f)$ in $L^1$; \end{itemize} see \eqref{i} with $A:=(-1,1)$ and \eqref{iii}, respectively. Condition (ii) implies that \begin{equation}\label{iv} \lim_n T(gT_X(f_n)+f_nT_{X'}(g))=T(gT_X(f)+fT_{X'}(g)) \end{equation} in $L^{1,\infty}$ (via Kolmogorov's Theorem) and hence, in $L^0$. On the other hand, condition (i) implies that \begin{equation}\label{v} \lim_n \big((T_X(f_n))(T_{X'}(g))-f_ng\big)= (T_X(f))(T_{X'}(g))-fg \end{equation} pointwise a.e. As in the proof of part (a), select $1<q<\infty$ such that $L^q\subseteq X$. Since $f_n\in L^\infty\subseteq L^q$ for $n\in\mathbb N$ and $g\in X'\subseteq L^{q'}$, the Poincar\'e-Bertrand formula for the pair $L^q$ and $L^{q'}$ gives, for each $n\in\mathbb N$, that \begin{equation}\label{vi} T(gT_X(f_n)+f_nT_{X'}(g))=(T_X(f_n))(T_{X'}(g))-f_ng,\quad \mathrm{a.e.}, \end{equation} with the identities holding outside a null set which is independent of $n\in\mathbb N$. In view of \eqref{iv} and \eqref{v}, take the limit of both sides of \eqref{vi} in $L^0$ to obtain the identity $T(gT_X(f)+fT_{X'}(g))=(T_X(f))(T_{X'}(g))-fg$ in $L^0$. This is precisely the Poincar\'e-Bertrand formula for $f\in X$ and $g\in X'$.
\end{proof} We can now extend certain results obtained in \cite{okada-elliot}, \cite[\S4.3]{tricomi} for the spaces $L^p$ with $1<p<2$ to the larger family of r.i.\ spaces satisfying $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$. For each $f\in X$ define pointwise the measurable function \begin{equation}\label{T-hat} (\widehat{T}_X(f))(x):=\frac{-1}{\sqrt{1-x^2}}\, T_X(\sqrt{1-t^2}f(t))(x),\quad \mathrm{a.e. }\; x\in (-1,1). \end{equation} \begin{theorem}\label{theo-3} Let $X$ be a r.i.\ space satisfying $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1.$ \begin{itemize} \item[(a)] $\mathrm{Ker}(T_X)$ is the 1-dimensional subspace of $X$ spanned by the function $1/\sqrt{1-x^2}$. \item[(b)] The linear operator $\widehat{T}_X$ defined by \eqref{T-hat} maps $X$ boundedly into $X$ and satisfies $T_X\widehat T_X=I_X$ (the identity operator on $X$). Moreover, \begin{equation}\label{3-b} \int_{-1}^1(\widehat{T}_X(f))(x)\,dx=0,\quad f\in X. \end{equation} \item[(c)] The operator $T_X\colon X\to X$ is surjective. \item[(d)] The identity $\widehat T_XT_X=I_X -P_X$ holds, with $P_X$ the bounded projection given by \begin{equation}\label{3-d} f\mapsto P_X(f):=\left(\frac1\pi\int_{-1}^1 f(t)\,dt\right) \frac{1}{\sqrt{1-x^2}},\quad f\in X. \end{equation} \item[(e)] The operator $\widehat{T}_X$ is an isomorphism onto its range $\text{R}(\widehat{T}_X)$. Moreover, \begin{equation}\label{3-e} \text{R}(\widehat{T}_X)=\left\{f\in X: \int_{-1}^{1} f(x)dx=0\right\}. \end{equation} \item[(f)] The following decomposition of $X$ holds (with $\langle\cdot\rangle$ denoting linear span): \begin{equation}\label{3-f} X=\left\{f\in X: \int_{-1}^{1} f(x)dx=0\right\}\oplus \left\langle \frac{1}{\sqrt{1-x^2}}\right\rangle =\text{R}(\widehat{T}_X) \oplus \left\langle \frac{1}{\sqrt{1-x^2}}\right\rangle. \end{equation} \end{itemize} \end{theorem} \begin{proof} (a) Since $1/2<\underline{\alpha}_X$ we have $L^{2,\infty}\subseteq X$ and so $1/\sqrt{1-x^2}\in X$. Accordingly, $\langle \frac{1}{\sqrt{1-x^2}}\rangle\subseteq \text{Ker}(T_X)$. Conversely, let $f\in \text{Ker}(T_X)$. By Lemma \ref{lemma-1} there is $1<p<2$ such that $f\in L^p$. As noted prior to Proposition \ref{prop-2}, this implies that $f(x)=c/\sqrt{1-x^2}$ for some $c\in \mathbb C$. (b) Via Lemma \ref{lemma-1} there exist $1<p<q<2$ such that $1/q<\underline{\alpha}_X\le \overline{\alpha}_X<1/p$ and $L^q\subseteq X \subseteq L^p$. Consider the weight function $\rho(x):= 1/\sqrt{1-x^2}$ on $(-1,1)$. Appealing to results on boundedness of the Hilbert transform on weighted $L^p$ spaces, $T$ is bounded from the weighted space $L^p((-1,1),\rho)$ into itself and from the weighted space $L^q((-1,1),\rho)$ into itself, \cite[Ch.1, Theorem 4.1]{gohberg-krupnik}. This is equivalent to the fact that $$ f\mapsto \widehat{T}(f):=\frac{-1}{\sqrt{1-x^2}} T\big(\sqrt{1-x^2}f(x)\big), $$ is well defined on $L^p$ and bounded as an operator from $L^p$ into $L^p$ and from $L^q$ into $L^q$. The condition on the indices $1/q<\underline{\alpha}_X\le \overline{\alpha}_X<1/p$ allows us to apply Boyd's interpolation theorem, \cite[Theorem 2.b.11]{lindenstrauss-tzafriri}, to conclude that $\widehat{T}$ maps $X$ boundedly into $X$. According to \eqref{T-hat}, note that $\widehat{T}_X$ is the operator $\widehat{T}\colon X\to X$. To establish $T_X\widehat{T}_X=I_X$, choose $1<p<2$ such that $X\subseteq L^p$. It follows from (2.7) on p.46 of \cite{okada-elliot} that $T_{L^p}\widehat{T}_{L^p}=I_{L^p}$. Let $f\in X\subseteq L^p$.
Since all three operators $T_X$, $\widehat{T}_X$ and $I_X$ map $X$ into $X$ it follows that $T_X(\widehat{T}_X(f))=f=I_X(f)$. To establish \eqref{3-b} let $f\in X\subseteq L^p$, with $1<p<2$ as above. Then \eqref{3-b} above follows from the validity of \eqref{3-b} in $L^p$; see (2.6) on p.46 of \cite{okada-elliot}. (c) Follows immediately from $T_X\widehat{T}_X=I_X$. (d) Since $(1/\pi)\int_{-1}^1dx/\sqrt{1-x^2}=1$, it follows that $P_X$ as given in \eqref{3-d} is indeed a linear projection from $X$ onto the 1-dimensional subspace $\langle \frac{1}{\sqrt{1-x^2}}\rangle\subseteq X$. The boundedness of $P_X$ is a consequence of H\"older's inequality (applied to $f=\mathbf{1}\cdot f$ with $\mathbf{1}\in X'$ and $f\in X$ fixed), namely $$ \|P_X(f)\|_X\le \frac1\pi \left\|\frac{1}{\sqrt{1-x^2}}\right\|_X \|\mathbf{1}\|_{X'}\|f\|_X. $$ To verify that $P_X=I_X-\widehat{T}_XT_X$, fix $f\in X$. Then $T_X\widehat{T}_X=I_X$ implies that $T_X(I_X-\widehat{T}_XT_X)(f)=0$, that is, $$ (I_X-\widehat{T}_XT_X)(f)\in \text{Ker}(T_X). $$ According to part (a) there exists $c\in\mathbb C$ such that \begin{equation}\label{b} (I_X-\widehat{T}_XT_X)(f)=\frac{c}{\sqrt{1-x^2}}. \end{equation} But, $\int_{-1}^1\widehat{T}_X(T_X(f))(x)\,dx=0$ (by \eqref{3-b}) and so \eqref{b} implies that $$ \int_{-1}^1f(x)\,dx=c\int_{-1}^1dx/\sqrt{1-x^2}=c\pi, $$ that is, $c=(1/\pi) \int_{-1}^1f(x)\,dx$. So, again by \eqref{b}, we can conclude that $(I_X-\widehat{T}_XT_X)(f)=P_X(f)$. Since $f\in X$ is arbitrary, it follows that $I_X-\widehat{T}_XT_X=P_X$. (e) The identity $T_X\widehat{T}_X=I_X$ implies that $\widehat{T}_X$ is injective. So, $\widehat{T}_X\colon X\to R(\widehat{T}_X)$ is a linear bijection. To verify \eqref{3-e} suppose $f\in X$ satisfies $\int_{-1}^1f(x)\,dx=0$, i.e., $P_X(f)=0$. Then the identity $\widehat{T}_XT_X=I_X-P_X$ shows that $f=\widehat{T}_X(h)$ with $h:=T_X(f)\in X$, i.e., $f\in R(\widehat{T}_X)$. Conversely, suppose that $f=\widehat{T}_X(g)\in R(\widehat{T}_X)$ for some $g\in X$. Then $g=T_X(f)$ as $T_X\widehat{T}_X=I_X$. Accordingly, $$ f=\widehat{T}_X(g)=\widehat{T}_XT_X(f)=I_X(f)-P_X(f)=f-P_X(f) $$ and so $P_X(f)=0$. It is then clear from \eqref{3-d} that $\int_{-1}^1f(x)\,dx=0$, i.e., $f$ belongs to the right-hand side of \eqref{3-e}. This establishes \eqref{3-e}. Since the linear functional $f\mapsto \varphi_1(f):=\int_{-1}^1f(x)\,dx$, for $f\in X$, belongs to $X^*$, as $\mathbf{1}\in X'\subseteq X^*$, it follows via \eqref{3-e} that $R(\widehat{T}_X)=\text{Ker}(\varphi_1)$ and hence, $R(\widehat{T}_X)$ is a \textit{closed} subspace of $X$. Accordingly, $\widehat{T}_X\colon X\to R(\widehat{T}_X)$ is a Banach space isomorphism. (f) The identity $\widehat{T}_XT_X+P_X=I_X$ shows that each $f\in X$ has the form $f=\widehat{T}_X(T_X(f))+P_X(f)$ with $\widehat{T}_X(T_X(f))\in R(\widehat{T}_X)$ and, via \eqref{3-d}, $P_X(f)\in \langle1/\sqrt{1-x^2}\rangle$. So, it remains to show that the decomposition in \eqref{3-f} is a \textit{direct sum}. To this end, let $h\in R(\widehat{T}_X)\cap \langle1/\sqrt{1-x^2}\rangle$, in which case $h=\widehat{T}_X(f)$ for some $f\in X$ and $h=c/\sqrt{1-x^2}$ for some $c\in\mathbb C$, that is, $\widehat{T}_X(f)=c/\sqrt{1-x^2}$. Integrating both sides of this identity over $(-1,1)$ and appealing to \eqref{3-b} shows that $c=0$. Hence, $h=0$.
\end{proof} Next we extend certain results obtained in \cite{okada-elliot}, \cite[\S4.3]{tricomi}, for the spaces $L^p$ with $2<p<\infty$, to the larger family of r.i.\ spaces $X$ satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$. Then $1/2<\underline{\alpha}_{X'}\le \overline{\alpha}_{X'}<1$ and so $1/\sqrt{1-x^2}\in X'$. Hence, for every $f\in X$, the function $f(x)/\sqrt{1-x^2}\in L^1$. Accordingly, we can define pointwise the measurable function \begin{equation}\label{T-check} (\check{T}_X(f))(x):=-\sqrt{1-x^2}\, T\Big(\frac{f(t)}{\sqrt{1-t^2}}\Big)(x),\quad \mathrm{a.e. } \;x\in(-1,1). \end{equation} \begin{theorem}\label{theo-4} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2.$ \begin{itemize} \item[(a)] The operator $T_X\colon X\to X$ is injective. \item[(b)] The linear operator $\check{T}_X$ defined by \eqref{T-check} is bounded from $X$ into $X$ and satisfies $\check T_XT_X=I_X$. \item[(c)] The identity $T_X\check T_X=I_X -Q_X$ holds, with $Q_X$ the bounded projection given by \begin{equation}\label{B} f\in X\mapsto Q_X(f):=\left(\frac1\pi\int_{-1}^1 \frac{f(x)}{\sqrt{1-x^2}}\,dx\right) \mathbf{1}. \end{equation} \item[(d)] The range of $T_X$ is the closed subspace of $X$ given by \begin{equation}\label{C} R(T_X)=\left\{f\in X: \int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}dx=0\right\} =\mathrm{Ker}(Q_X). \end{equation} Moreover, $\check T_X$ is an isomorphism from $\text{R}(T_X)$ onto $X$. \item[(e)] The following decomposition of $X$ holds: \begin{equation}\label{D} X=\left\{f\in X: \int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}dx=0\right\}\oplus \left\langle \mathbf{1}\right\rangle= R(T_X)\oplus \left\langle \mathbf{1}\right\rangle. \end{equation} \end{itemize} \end{theorem} \begin{proof} (a) Since $\overline{\alpha}_X<1/2$ we have that $X\subsetneqq L^{2,\infty}$ and so, by the minimality of $L^{2,\infty}$ among the r.i.\ spaces containing $1/\sqrt{1-x^2}$ (see Section \ref{S2}), $1/\sqrt{1-x^2}\notin X$. Hence, $T_X$ is injective; see the discussion after \eqref{0}. (b) Via Lemma \ref{lemma-1} there exist $2<p<q<\infty$ such that $1/q<\underline{\alpha}_X\le \overline{\alpha}_X<1/p$ and $L^q\subseteq X \subseteq L^p$. Consider the weight function $\rho(x):= \sqrt{1-x^2}$ on $(-1,1)$. Appealing again to results on boundedness of the Hilbert transform on weighted $L^p$ spaces, $T$ is bounded from the weighted space $L^p((-1,1),\rho)$ into itself and from the weighted space $L^q((-1,1),\rho)$ into itself, \cite[Ch.1 Theorem 4.1]{gohberg-krupnik}. This is equivalent to the fact that $$ f\mapsto \check{T}(f):= -\sqrt{1-x^2}\,T\Big(\frac{f(x)}{\sqrt{1-x^2}}\Big), $$ is well defined on $L^p$ and bounded as an operator from $L^p$ into $L^p$ and from $L^q$ into $L^q$. The condition on the indices $1/q<\underline{\alpha}_X\le \overline{\alpha}_X<1/p$ allows us to apply Boyd's interpolation theorem, \cite[Theorem 2.b.11]{lindenstrauss-tzafriri}, to deduce that $\check{T}$ maps $X$ boundedly into $X$. According to \eqref{T-check} note that $\check{T}_X$ is the operator $\check{T}\colon X\to X$. To establish $\check T_XT_X=I_X$, recall that $X\subseteq L^p$. It follows from (2.10) on p.48 of \cite{okada-elliot} that $\check T_{L^p}T_{L^p}=I_{L^p}$. Let $f\in X\subseteq L^p$. Since all three operators $T_X$, $\check{T}_X$ and $I_X$ map $X$ into $X$ it follows that $\check T_X(T_X(f))=f=I_X(f)$. (c) It is routine to check that $Q_X$ is a linear projection onto the 1-dimensional space $\langle\mathbf{1}\rangle$.
Since $g(x)=1/\sqrt{1-x^2}\in X'$, the boundedness of $Q_X$ follows from \eqref{B} via H\"older's inequality, namely $$ \|Q_X(f)\|_X\le\frac1\pi\|g\|_{X'} \|\mathbf{1}\|_X\|f\|_X,\quad f\in X. $$ To establish the identity $T_X\check{T}_X=I_X-Q_X$, choose $2<p<\infty$ such that $X\subseteq L^p$. It follows from (2.11) on p.48 of \cite{okada-elliot} that $T_{L^p}\check{T}_{L^p}=I_{L^p}-Q_{L^p}$. Let $f\in X\subseteq L^p$. Since all four operators $T_X$, $\check{T}_X$, $Q_X$ and $I_X$ map $X$ into $X$ it follows that $T_X(\check{T}_X(f))=f-Q_X(f)=(I_X-Q_X)(f)$. (d) Using the identities $\check{T}_XT_X=I_X$ and $T_X\check{T}_X=I_X-Q_X$ one can argue as on p.48 of \cite{okada-elliot} to verify the identity \eqref{C}. In particular, since $Q_X$ is bounded, it follows that $R(T_X)=\text{Ker}(Q_X)$ is a \textit{closed} subspace of $X$. It is clear from $\check{T}_XT_X=I_X$ that $\check{T}_X$ maps $R(T_X)$ onto $X$ and also that $\check{T}_X$ restricted to $R(T_X)$ is injective, i.e., $\check{T}_X\colon R(T_X)\to X$ is a linear bijection and bounded. By the Open Mapping Theorem $\check{T}_X\colon R(T_X)\to X$ is actually a Banach space isomorphism. (e) As $Q_X$ is a bounded projection, we have $X=\text{Ker}(Q_X)\oplus R(Q_X)$. But, $\text{Ker}(Q_X)=R(T_X)$ by part (d) and $R(Q_X)=\langle\mathbf{1}\rangle$ by part (c). The direct sum decomposition \eqref{D} is then immediate. \end{proof} \begin{remark} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_{X}\le \overline{\alpha}_{X}<1/2$ or $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$. Then $T_X\colon X\to X$ is a Fredholm operator, that is, $\text{dim}( \text{Ker}(T_X))<\infty$, the range $R(T_X)$ is a closed subspace of $X$ and $\text{dim}( X/R(T_X))<\infty$. This holds when $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ because $\text{dim}( \text{Ker}(T_X))=1$ and $T_X$ is surjective; see Theorem \ref{theo-3}(a), (c). The operator $T_X$ is also Fredholm when $0<\underline{\alpha}_{X}\le \overline{\alpha}_{X}<1/2$ because it is injective, $R(T_X)$ is closed in $X$ and $\text{dim}( X/R(T_X))=1$; see (a), (d), (e) of Theorem \ref{theo-4}. \end{remark} A consequence of Theorems \ref{theo-3} and \ref{theo-4} is the possibility of extending the results in \cite[Ch.11]{king}, \cite{okada-elliot}, \cite[\S4.3]{tricomi}, concerning the \textit{inversion} of the airfoil equation \begin{equation}\label{airfoil} (T(f))(t)=p.v. \frac{1}{\pi} \int_{-1}^{1}\frac{f(x)}{x-t}\,dx=g(t), \quad \mathrm{a.e. }\; t\in(-1,1), \end{equation} within the class of $L^p$-spaces for $1<p<\infty$, $p\not=2$ (with $g\in L^p$ given), to the significantly larger class of r.i.\ spaces $X$ whose Boyd indices satisfy $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$ or $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1.$ \begin{corollary}\label{cor-airfoil} Let $X$ be a r.i.\ space. \begin{itemize} \item[(a)] Suppose that $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ and $g\in X$ is fixed. Then all solutions $f\in X$ of the airfoil equation \eqref{airfoil} are given by \begin{equation}\label{E} f(x)=\frac{-1}{\sqrt{1-x^2}}\; T_X\left(\sqrt{1-t^2}g(t)\right) (x) + \frac{\lambda}{\sqrt{1-x^2}},\quad \mathrm{a.e. } \;x\in(-1,1), \end{equation} with $\lambda\in\mathbb{C}$ arbitrary.
\item[(b)] Suppose that $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$ and $g\in X$ satisfies $\int_{-1}^{1} \frac{g(x)}{\sqrt{1-x^2}}dx=0.$ Then there is a unique solution $f\in X$ of the airfoil equation \eqref{airfoil}, namely $$ f(x):=-\sqrt{1-x^2}\; T_X\left(\frac{g(t)}{\sqrt{1-t^2}}\right)(x),\quad \mathrm{a.e. } \;x\in(-1,1). $$ \end{itemize} \end{corollary} \begin{proof} (a) In this case $1/\sqrt{1-x^2}\in X$. Given any $\lambda\in\mathbb{C}$ define the function $$ f(x):=\frac{-1}{\sqrt{1-x^2}}\; T_X\left(\sqrt{1-t^2}g(t)\right) (x) + \frac{\lambda}{\sqrt{1-x^2}} =\widehat{T}_X(g)(x)+\frac{\lambda}{\sqrt{1-x^2}}. $$ Then the identities $T_X\widehat{T}_X(g)=g$ and $T_X(\lambda/\sqrt{1-x^2})=0$ (see Theorem \ref{theo-3}) imply that $T_X(f)=g$. Conversely, suppose that $f\in X$ satisfies $T_X(f)=g$. It follows from $\widehat{T}_XT_X=I_X-P_X$ that $f-P_X(f)=\widehat{T}_X(g)$. By \eqref{3-d} there exists $\lambda\in\mathbb{C}$ such that $P_X(f)=\lambda/\sqrt{1-x^2}$ and hence, $f=\widehat{T}_X(g)+ \frac{\lambda}{\sqrt{1-x^2}}$. So, all solutions of the airfoil equation are indeed given by \eqref{E}. (b) Define $f(x):=-\sqrt{1-x^2}\, T(g(t)/\sqrt{1-t^2})(x)=\check{T}_X(g)$. By Theorem \ref{theo-4}(c) we have $$ T_X(f)=T_X\check{T}_X(g)=g-Q_X(g). $$ But, the hypothesis on $g\in X$ implies, via \eqref{C}, that $g\in\text{Ker}(Q_X)$ and so $T_X(f)=g$. The uniqueness of the solution $f$ is immediate as $T_X$ is injective (by Theorem \ref{theo-4}(a)). \end{proof} \begin{remark} The conditions $0<\underline{\alpha}_{X}\le \overline{\alpha}_{X}<1/2$ or $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ are not always satisfied, e.g., if $X=L^{2,q}$ with $1\le q\le\infty$. There also exist r.i.\ spaces $X$ such that $\underline{\alpha}_{X}<1/2< \overline{\alpha}_{X}$; see \cite[pp. 177--178]{bennett-sharpley}. \end{remark} \section{Extension of the finite Hilbert transform on r.i.\ spaces} \label{S4} The finite Hilbert transform $T\colon L^1\to L^{1,\infty}$ has the property that $T(L^1)\not\subseteq L^1$. Hence, for any r.i.\ space $X$ we necessarily have $T(L^1)\not\subseteq X$. On the other hand, if $X$ satisfies $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$, then $T(X)\subseteq X$ continuously. Do there exist any other B.f.s.' $Z\subseteq L^1$ such that $X\subsetneqq Z$ and $T(Z)\subseteq X$? As a consequence of Theorems \ref{theo-3} and \ref{theo-4}, for those r.i.\ spaces $X$ satisfying $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ or $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$, the answer is shown to be negative; see Theorem \ref{theo-10}. The proof of the following result uses important facts from the theory of vector measures, namely, a theorem of Talagrand concerning $L^0$-valued measures and the Dieudonn\'e-Grothendieck Theorem for bounded vector measures. \begin{proposition}\label{prop-7} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. Let $f\in L^1$. The following conditions are equivalent. \begin{itemize} \item[(a)] $T(f\chi_A)\in X$ for every $A\in\mathcal{B}$. \item[(b)] $\displaystyle \sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X<\infty.$ \item[(c)] $T(h)\in X$ for every $h\in L^0$ with $|h|\le |f|$ a.e. \item[(d)] $\displaystyle \sup_{|h|\le|f|}\|T(h)\|_X<\infty.$ \item[(e)] $T(\theta f)\in X$ for every $\theta\in L^\infty$ with $|\theta|=1$ a.e. \item[(f)] $\displaystyle \sup_{|\theta|=1}\|T(\theta f)\|_X<\infty.$ \item[(g)] $fT_{X'}(g)\in L^1$ for every $g\in X'$.
\end{itemize} Moreover, if any one of $\mathrm{(a)}$-$\mathrm{(g)}$ is satisfied, then \begin{equation}\label{norms} \sup_{A\in\mathcal{B}}\big\|T(\chi_A f)\big\|_X \le \sup_{|\theta|=1}\big\|T(\theta f)\big\|_X \le \sup_{|h|\le|f|}\big\|T(h)\big\|_X \le 4 \sup_{A\in\mathcal{B}}\big\|T(\chi_A f)\big\|_X. \end{equation} \end{proposition} \begin{proof} (a)$\Rightarrow$(b). Consider the $X$-valued, finitely additive measure \begin{equation}\label{nu} \nu\colon A\mapsto T(f\chi_A),\quad A\in\mathcal{B}. \end{equation} Let $J_X\colon X\to L^0$ denote the natural continuous linear embedding. Then the composition $J_X\circ \nu\colon\mathcal{B}\to L^0$ is $\sigma$-additive. To establish this let $A_n\downarrow\emptyset$ in $\mathcal{B}$. Then $\lim_nf\chi_{A_n}=0$ in $L^1$ and hence, $\lim_nT(f\chi_{A_n})=0$ in $L^{1,\infty}$ by Kolmogorov's Theorem. Since $L^{1,\infty}\subseteq L^0$ continuously, we also have $\lim_nT(f\chi_{A_n})=0$ in $L^0$. Consequently, $\lim_n(J_X\circ\nu)({A_n})=0$ in $L^0$ which verifies the $\sigma$-additivity of $J_X\circ\nu$. It follows from a result of Talagrand, \cite[Theorem B]{talagrand}, that there exist a non-negative function $\Psi_0\in L^0$ and a $\sigma$-additive vector measure $\mu_0\colon\mathcal{B}\to L^2$ such that $$ (J_X\circ\nu)({A})=\Psi_0\cdot \mu_0(A),\quad A\in\mathcal{B}, $$ where $\Psi_0\cdot \mu_0(A)$ is the pointwise product of two functions in $L^0$. Define $B_0:=\Psi_0^{-1}(\{0\})$. Then $\Psi:=\Psi_0+\chi_{B_0}\in L^0$ is strictly positive. Consider the $L^2$-valued vector measure $$ \mu\colon A\mapsto \chi_{(-1,1)\setminus B_0}\cdot \mu_0(A),\quad A\in\mathcal{B}. $$ For every $A\in\mathcal{B}$, we claim that $(J_X\circ\nu)({A})=\Psi\cdot\mu(A)$. This follows from \begin{align*} \Psi\cdot\mu(A)&=(\Psi_0+\chi_{B_0})\cdot\chi_{(-1,1)\setminus B_0}\cdot \mu_0(A) \\ &= \chi_{(-1,1)\setminus B_0}\cdot \Psi_0\cdot \mu_0(A) \\ &= \chi_{(-1,1)\setminus B_0}\cdot (J_X\circ\nu)({A}) + \chi_{ B_0}\cdot (J_X\circ\nu)({A}) \\ &= (J_X\circ\nu)({A}), \end{align*} where we have used $\chi_{ B_0}\cdot (J_X\circ\nu)({A}) =\chi_{ B_0}\cdot \Psi_0\cdot \mu_0({A})=0$, as $\Psi_0$ vanishes on $B_0$. Set $B_n:=\{x\in(-1,1): (n-1)<1/\Psi(x)\le n\}$, for $n\in\mathbb{N}$. Then the subset \begin{equation}\label{c2} \big\{\chi_{B_n\cap B}/\Psi:n\in\mathbb{N}, B\in\mathcal{B}\big\} \end{equation} of $L^\infty\subseteq X'\subseteq X^*$ is total for $X$. To verify this, let $g\in X$ satisfy $$ \int_{-1}^1g(x)\chi_{B_n\cap B}(x)/\Psi(x)\,dx=0,\quad n\in\mathbb{N}, B\in\mathcal{B}. $$ Then, for every $n\in\mathbb{N}$, the function $(g\chi_{B_n}/\Psi)\in X\subseteq L^1$ is 0 a.e. Since $1/\Psi$ is strictly positive on $(-1,1)=\cup_{n=1}^\infty B_n$, we have $g=0$ a.e. This implies that the subset \eqref{c2} of $X^*$ is total for $X$. Fix $n\in\mathbb{N}$ and $B\in\mathcal{B}$. Then the scalar-valued set function $A\mapsto\langle\nu(A),\chi_{B_n\cap B}/\Psi\rangle$, for $A\in\mathcal{B}$, is $\sigma$-additive. Indeed, as $\nu(A)\in X$ and $(\chi_{B_n\cap B}/\Psi)\in L^\infty\subseteq X'$ we have, for each $A\in\mathcal{B}$, that \begin{align*} \langle\nu(A),\chi_{B_n\cap B}/\Psi\rangle &= \int_{-1}^1\nu(A)(x)\chi_{B_n\cap B}(x)/\Psi(x)\,dx \\ &= \int_{-1}^1\mu(A)(x)\chi_{B_n\cap B}(x)\,dx =\langle\mu(A),\chi_{B_n\cap B}\rangle, \end{align*} which implies the desired $\sigma$-additivity because $\mu$ is $\sigma$-additive as an $L^2$-valued vector measure and $\chi_{B_n\cap B}\in L^2$.
Consequently, each $\mathbb{C}$-valued, $\sigma$-additive measure $A\mapsto\langle\nu(A),\chi_{B_n\cap B}/\Psi\rangle$ on $\mathcal{B}$, for $n\in\mathbb{N}$ and $B\in\mathcal{B}$, has bounded range. Recalling that the subset \eqref{c2} of $X^*$ is total for $X$, the Dieudonn\'e-Grothendieck Theorem, \cite[Corollary I.3.3]{diestel-uhl}, implies that $\nu$ has bounded range in $X$. Hence, (b) is established. (b)$\Rightarrow$(c). The \textit{semivariation} $\|\nu\|(\cdot)$ of the bounded, finitely additive, $X$-valued measure $\nu$ defined in \eqref{nu} satisfies both $$ \|\nu\|(A)=\sup\big\{\|T(\chi_A fs)\|_X: s\in\text{sim }\mathcal{B},\; |s|\le1\big\},\quad A\in\mathcal{B}, $$ and $$ \sup_{B\in \mathcal{B}, B\subseteq A}\|\nu(B)\|_X\le \|\nu\|(A) \le 4 \sup_{B\in\mathcal{B}, B\subseteq A}\|\nu(B)\|_X,\quad A\in\mathcal{B}, $$ \cite[p.2 and Proposition I.1.11]{diestel-uhl}. Thus, for $s\in\text{sim }\mathcal{B}$ with $s\not=0$, \begin{equation}\label{c3} \|T(fs)\|_X\le \Big(4\sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X\Big)\cdot \sup_{|x|<1}|s(x)|<\infty \end{equation} because $|s|\le\sup_{|x|<1}|s(x)|$ pointwise on $(-1,1)$, \cite[p.6]{diestel-uhl}. To obtain (c) from \eqref{c3}, take any $h\in L^0$ with $|h|\le|f|$ a.e. Then $h=f\varphi$ for some $\varphi\in L^0$ with $|\varphi|\le1$ a.e. Select a sequence $\{s_n\}_{n=1}^\infty\subseteq \text{sim }\mathcal{B}$ such that $|s_n|\le |\varphi|$ on $(-1,1)$ for all $n\in\mathbb{N}$ and $s_n\to\varphi$ uniformly on $(-1,1)$ as $n\to\infty$. Then the sequence $\{T(fs_n)\}_{n=1}^\infty$ is Cauchy in $X$ as \eqref{c3} yields $$ \|T(fs_j)-T(fs_k)\|_X \le \Big(4\sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X\Big)\cdot \sup_{|x|<1}|s_j(x)-s_k(x)| $$ for all $j,k\in\mathbb{N}$. Accordingly, $\{T(fs_n)\}_{n=1}^\infty$ has a limit in $X$, say $g$. Since the natural inclusion $X\subseteq L^{1,\infty}$ is continuous, we have $\lim_nT(fs_n)=g$ in $L^{1,\infty}$. On the other hand, since $\lim_nfs_n=f\varphi$ in $L^1$, Kolmogorov's Theorem gives $\lim_n T(fs_n)=T(f\varphi)$ in $L^{1,\infty}$. Thus, $T(h)=T(f\varphi)=g$ as elements of $L^0$. In particular, $T(h)\in X$ as $g\in X$. So, (c) is established. (c)$\Rightarrow$(d). Clearly (c)$\Rightarrow$(a) and we already know that (a)$\Rightarrow$(b). Thus, the previous arguments also imply the inequality \begin{equation}\label{c4} \sup_{|h|\le|f|}\|T(h)\|_X \le 4 \sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X. \end{equation} To see this consider any $h\in L^0$ with $|h|\le|f|$ a.e. Select $\varphi$ and $\{s_n\}_{n=1}^\infty\subseteq \text{sim }\mathcal{B}$ as in the previous paragraph. Then \eqref{c3} yields \begin{align*} \|T(h)\|_X &= \lim_{n}\|T(fs_n)\|_X \\ &\le \Big(4 \sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X\Big) \sup_{n\in\mathbb{N}} \sup_{|x|<1}|s_n(x)| \\ & = \Big(4 \sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X\Big) \sup_{|x|<1}|\varphi(x)| \\ &\le 4 \sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_X . \end{align*} (d)$\Rightarrow$(f)$\Rightarrow$(e). Clear. (e)$\Rightarrow$(a). Fix $A\in\mathcal{B}$. Since $|\chi_A\pm\chi_{(-1,1)\setminus A}|=1$ it follows from (e) that both $$ T(f\chi_A)+T(f\chi_{(-1,1)\setminus A})=T(f(\chi_A+\chi_{(-1,1)\setminus A}))\in X $$ and $$ T(f\chi_A)-T(f\chi_{(-1,1)\setminus A})=T(f(\chi_A-\chi_{(-1,1)\setminus A}))\in X. $$ These two identities imply that $T(f\chi_A)\in X$. (d)$\Rightarrow$(g). Fix $g\in X'$. Given $n\in\mathbb N$ define $A_n:=|f|^{-1}([0,n])$ and set $f_n:=f\chi_{A_n}\in L^\infty\subseteq X$.
Since $|f_n|\uparrow |f|$ pointwise on $(-1,1)$, the Monotone Convergence Theorem yields \begin{equation}\label{dg} \int_{-1}^1|f(x)|\cdot |(T_{X'}(g))(x)|\,dx=\lim_n \int_{-1}^1|f_n(x)|\cdot |(T_{X'}(g))(x)|\,dx. \end{equation} Select $\theta_1, \theta_2\in L^\infty$ with $|\theta_1|=1$ and $|\theta_2|=1$ pointwise such that $|f|=\theta_1 f$ and $|T_{X'}(g)|=\theta_2 T_{X'}(g)$ pointwise. In particular, $|f_n|=\theta_1 f_n$ pointwise for all $n\in\mathbb N$. Then Parseval's formula (cf. Proposition \ref{prop-2}(b)), H\"older's inequality and condition (d) ensure, for every $n\in\mathbb N$, that \begin{align*} \int_{-1}^1|f_n(x)|\cdot |(T_{X'}(g))(x)|\,dx &= \int_{-1}^1\theta_1(x)\theta_2(x)f_n(x)(T_{X'}(g))(x)\,dx \\ &= - \int_{-1}^1(T_{X}(\theta_1\theta_2f_n))(x)g(x)\,dx \\ & \le \|T_{X}(\theta_1\theta_2f_n)\|_X \|g\|_{X'} \\ & \le \sup_{|h|\le|f|}\|T(h)\|_X \|g\|_{X'} <\infty. \end{align*} This inequality and \eqref{dg} imply that (g) holds. (g)$\Rightarrow$(a). Fix any $A\in\mathcal{B}$. Then $(f\chi_A)T_{X'}(g)\in L^1$ for every $g\in X'$ by assumption. Apply Proposition \ref{prop-2}(a) to $f\chi_A$ in place of $f$ to obtain that $gT(f\chi_A)\in L^1$ for all $g\in X'$. Accordingly, $T(f\chi_A)\in X''=X$, which establishes (a). The equivalences (a)-(g) are thereby established. Suppose now that any one of (a)-(g) is satisfied. The second inequality of \eqref{norms} is clear. For the left-hand inequality fix $A\in\mathcal{B}$. Then $T(f\chi_A)= 1/2(T(\theta_1 f)+T(\theta_2 f))$, where $\theta_1=1$ and $\theta_2=\chi_A-\chi_{(-1,1)\setminus A}$ satisfy $|\theta_1|=1$ and $|\theta_2|=1$. Accordingly, $$ \|T(f\chi_A)\|_X\le 1/2(\|T(\theta_1 f)\|_X+\|T(\theta_2 f)\|_X) \le \sup_{|\theta|=1}\|T(\theta f)\|_X. $$ Finally, the last inequality in \eqref{norms} is precisely \eqref{c4} above. \end{proof} Another consequence of Theorems \ref{theo-3} and \ref{theo-4} is that membership of a function in a given r.i.\ space $X$ is completely determined by the behaviour of the finite Hilbert transform relative to $X$. \begin{proposition}\label{cor-8} Let $X$ be a r.i.\ space satisfying either $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ or $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$. Let $f\in L^1$. The following conditions are equivalent. \begin{itemize} \item[(a)] $f\in X$. \item[(b)] $T(f\chi_A)\in X$ for every $A\in\mathcal{B}$. \item[(c)] $T(f\theta)\in X$ for every $\theta\in L^\infty$ with $|\theta|=1$ a.e. \item[(d)] $T(h)\in X$ for every $h\in L^0$ with $|h|\le |f|$ a.e. \end{itemize} \end{proposition} \begin{proof} The three conditions (b), (c) and (d) are equivalent by Proposition \ref{prop-7}. (a)$\Rightarrow$(b). Clear as $T\colon X\to X$ is bounded. (b)$\Rightarrow$(a). By Proposition \ref{prop-7} we have $fT_{X'}(g)\in L^1$ for every $g\in X'$, which we shall use to obtain (a). Assume that $1/2<\underline{\alpha}_X\le\overline{\alpha}_X<1$, in which case $0<\underline{\alpha}_{X'}\le\overline{\alpha}_{X'}<1/2$. This enables us to apply Theorem \ref{theo-4}(c), with $X'$ in place of $X$, to the operator $T_{X'}$. So, for any $\psi\in X'$, it follows that $\psi=T_{X'}(\check{T}_{X'}(\psi))+c\mathbf{1}$ with $c:=(1/\pi)\int_{-1}^1(\psi(x)/\sqrt{1-x^2})\,dx$. Define $g:=\check{T}_{X'}(\psi)\in X'$. Then $fT_{X'}(\check{T}_{X'}(\psi))\in L^1$ and hence, $f\psi-cf=fT_{X'}(\check{T}_{X'}(\psi))$ belongs to $L^1$. But, $cf\in L^1$ as $f\in L^1$ by assumption. So, $f\psi\in L^1$, from which it follows that $f\in X''=X$ as $\psi\in X'$ is arbitrary. Thus (a) holds.
Consider the remaining case when $0<\underline{\alpha}_X\le\overline{\alpha}_X<1/2$. Then $1/2<\underline{\alpha}_{X'}\le\overline{\alpha}_{X'}<1$. We apply Theorem \ref{theo-3}(c) with $X'$ in place of $X$, to conclude that $T_{X'}\colon X'\to X'$ is surjective. So, given any $\psi\in X'$ there exists $g\in X'$ with $\psi=T_{X'}(g)$. It follows that $f\psi=fT_{X'}(g)\in L^1$. Since $\psi\in X'$ is arbitrary we may conclude that $f\in X''=X$. Hence, (a) again holds. \end{proof} Even though $T_X$ is not an isomorphism, Theorems \ref{theo-3} and \ref{theo-4} imply the impossibility of extending (continuously) the finite Hilbert transform $T_X\colon X\to X$ to any genuinely larger domain space within $L^1$ while still maintaining its values in $X$; see Theorem \ref{theo-10} below. This is in contrast to the situation for the Fourier transform operator acting in the spaces $L^p(\mathbb{T})$, $1<p<2$; see the Introduction. We first require an important technical construction. Define \begin{equation*}\label{tx} [T,X]:=\big\{f\in L^1: T(h)\in X, \;\forall |h|\le|f|\big\}. \end{equation*} If $f\in[T,X]$, then $f\in L^1$ and $T(h)\in X$ for every $h\in L^0$ with $|h|\le|f|$. Hence, Proposition \ref{prop-7} implies that \begin{equation}\label{TX-norm} \|f\|_{[T,X]}:=\sup_{|h|\le|f|} \|T(h)\|_X<\infty,\quad f\in[T,X]. \end{equation} The properties of $[T,X]$ are established via a series of steps, with the aim of showing that it is a B.f.s. First, the functional $f\mapsto \|f\|_{[T,X]}$ is compatible with the lattice structure in the following sense: if $f_1, f_2\in [T,X]$ satisfy $|f_1|\le|f_2|$, then $ \|f_1\|_{[T,X]}\le \|f_2\|_{[T,X]}$. This is because $\{h:|h|\le|f_1|\}\subseteq \{h:|h|\le|f_2|\}$. The same argument shows that $[T,X]$ is an ideal in $L^1$. In particular, $X\subseteq[T,X]$. It is routine to verify that if $\alpha\in\mathbb{C}$ and $f\in[T,X]$, then $\alpha f\in[T,X]$ and $\|\alpha f\|_{[T,X]}=|\alpha |\cdot \|f\|_{[T,X]}$. To verify the subadditivity of $\|\cdot\|_{[T,X]}$ we use the following Freudenthal type decomposition: if $h, f_1, f_2\in L^1$ with $|h|\le |f_1+f_2|$, then there exist $h_1, h_2$ such that $h=h_1+h_2$ and $|h_1|\le |f_1|, |h_2|\le |f_2|$; this follows from \cite[Theorem 91.3]{zaanen} applied in $L^1$ (an explicit choice of $h_1, h_2$ is recorded at the end of this discussion). Using this fact, given $f_1,f_2\in [T,X]$, it follows that $f_1+f_2\in[T,X]$ and \begin{align*} \|f_1+f_2\|_{[T,X]} &= \sup\Big\{\|T(h)\|_X:|h|\le|f_1+f_2|\Big\} \\ &= \sup\Big\{\|T(h_1)+T(h_2)\|_X:|h|\le| f_1+f_2|, h=h_1+h_2, |h_i|\le|f_i|\Big\} \\ &\le \sup\Big\{\|T(h_1)\|_X:|h_1|\le|f_1|\Big\} + \sup\Big\{\|T(h_2)\|_X:|h_2|\le|f_2|\Big\} \\ &= \| f_1\|_{[T,X]}+ \| f_2\|_{[T,X]}. \end{align*} So, $[T,X]$ is a vector space and $\|\cdot\|_{[T,X]}$ is a lattice seminorm on $[T,X]$. Let $\|f\|_{[T,X]}=0$. Then $T(h)=0$ in $X$ for every $h\in L^0$ with $|h|\le|f|$. Suppose that $f\not=0$. Then there exists $A\in\mathcal{B}$ with $|A|>0$ such that $f\chi_A\in L^\infty$ and $f(x)\chi_A(x)\not=0$ for every $x\in A$. Choose two disjoint sets $A_1, A_2\in\mathcal{B}\cap A$ with $|A_j|>0$, $j=1,2$, and define $h_j:=f\chi_{A_j}$, $j=1,2$. Then $h_j\in L^\infty\subseteq X$ satisfies $|h_j|\le|f|$ and so $T_X(h_j)=T(h_j)=0$ for $j=1,2$. That is, $h_1,h_2\in \text{Ker}(T_X)$. Since $h_1, h_2$ are linearly independent elements in $X$, it follows that $\text{dim}( \text{Ker}(T_X))\ge2$. But, this contradicts the fact that $T_X$ is either injective or its kernel is 1-dimensional; see the discussion after \eqref{0}. Hence, $f=0$.
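Concerning the Freudenthal type decomposition invoked above, one explicit choice of $h_1, h_2$ (a standard construction, recorded here merely as a convenient sketch) is
\begin{equation*}
h_1:=h\,\frac{|f_1|}{|f_1|+|f_2|}\,\chi_{\{|f_1|+|f_2|>0\}} \quad\text{and}\quad h_2:=h-h_1.
\end{equation*}
Indeed, the inequality $|h|\le|f_1+f_2|\le|f_1|+|f_2|$ forces $h=0$ a.e.\ on the set $\{|f_1|+|f_2|=0\}$, while on $\{|f_1|+|f_2|>0\}$ we have $|h_1|\le|f_1|$ and $|h_2|=|h|\,|f_2|/(|f_1|+|f_2|)\le|f_2|$.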
So, we have shown that $[T,X]$ is a normed function space. The following result is a Parseval type formula that will be needed in the sequel. \begin{lemma}\label{NEW-4.3} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. Then $$ \int_{-1}^1fT_{X'}(g)=-\int_{-1}^1gT(f),\quad f\in[T,X],\;\; g\in X'. $$ \end{lemma} \begin{proof} Given $f\in[T,X]\subseteq L^1$, it follows from the definition of $[T,X]$ and Proposition \ref{prop-7} that $fT_{X'}(g)\in L^1$ for every $g\in X'$. The desired formula is then immediate from Proposition \ref{prop-2}(a). \end{proof} \begin{lemma}\label{NEW-4.4} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. Then the normed function space $[T,X]$ is complete. \end{lemma} \begin{proof} Let $f_n\in[T,X]$, for $n\in\mathbb N$, satisfy $$ \sum_{n=1}^\infty \|f_n\|_{[T,X]}<\infty. $$ This implies, for every choice of $h_n$ with $|h_n|\le|f_n|$, that \begin{equation}\label{th} \sum_{n=1}^\infty \|T(h_n)\|_X<\infty. \end{equation} (A) Let $h\in [T,X]\subseteq L^1$. As $|h|\chi_{(-1,0)}\le|h|$ we have that $T(|h|\chi_{(-1,0)})\in X$. If $0<t<1$, then $$ \left|T\left(|h|\chi_{(-1,0)}\right)(t)\right| =\frac{1}{\pi}\int_{-1}^0\frac{|h(x)|}{|x-t|}dx \ge \frac{1}{2\pi} \int_{-1}^0|h(x)|\,dx , $$ since for $-1<x<0$ and $0<t<1$ we have $|x-t|\le2$. Consequently, \begin{align*} \|T\left(|h|\chi_{(-1,0)}\right)\|_X& \ge \|T\left(|h|\chi_{(-1,0)}\right)\chi_{(0,1)}\|_X \ge \left(\frac{1}{2\pi}\int_{-1}^0|h(x)|\,dx\right) \|\chi_{(0,1)}\|_X . \end{align*} In a similar way, as $|h|\chi_{(0,1)}\le|h|$, we have that $T(|h|\chi_{(0,1)})\in X$. If $-1<t<0$, then $$ T\left(|h|\chi_{(0,1)}\right)(t) =\frac{1}{\pi}\int_{0}^1\frac{|h(x)|}{x-t}dx \ge \frac{1}{2\pi}\int_{0}^1|h(x)|\,dx , $$ since for $-1<t<0$ and $0<x<1$ we have $0\le x-t\le2$. Consequently, \begin{align*} \|T\left(|h|\chi_{(0,1)}\right)\|_X &\ge \|T\left(|h|\chi_{(0,1)}\right)\chi_{(-1,0)}\|_X \ge \left(\frac{1}{2\pi}\int_{0}^1|h(x)|\,dx\right) \|\chi_{(-1,0)}\|_X. \end{align*} Applying \eqref{th} with $h_n:=|f_n|\chi_{(-1,0)}$ and $h_n:=|f_n|\chi_{(0,1)}$ it follows, from the previous bounds for $h=f_n$, that \begin{align*} \sum_{n=1}^\infty \|f_n\|_{L^1}&=\sum_{n=1}^\infty \left( \int_{-1}^0|f_n(x)|\,dx + \int_{0}^1|f_n(x)|\,dx\right) \\ &\le \sum_{n=1}^\infty C\left(\|T\left(|f_n|\chi_{(-1,0)}\right)\|_X+ \|T\left(|f_n|\chi_{(0,1)}\right)\|_X\right) \\ &\le 2C \sum_{n=1}^\infty \|f_n\|_{[T,X]}<\infty, \end{align*} with $C:=(2\pi)/\varphi_X(1)$, since $\|\chi_{(0,1)}\|_X=\|\chi_{(-1,0)}\|_X=\varphi_X(1)$. Hence, we have \begin{equation}\label{L1} \sum_{n=1}^\infty f_n=:f\in L^1 \end{equation} with absolute convergence in $L^1$ and hence, also pointwise a.e. (B) We now show that $f\in[T,X]$. Select $h\in L^0$ satisfying $|h|\le|f|$. We need to prove that $T(h)\in X$. To this end, let $\varphi\in L^0$ satisfy $|\varphi|\le1$ and $h=\varphi f$. Then $$ h=\varphi f=\sum_{n=1}^\infty \varphi f_n, \quad \mathrm{a.e.} $$ The functions $h_n:=\varphi f_n\in[T,X]$, for $n\in\mathbb{N}$, satisfy $$ \sum_{n=1}^\infty\|h_n\|_{[T,X]}\le \sum_{n=1}^\infty\|f_n\|_{[T,X]}<\infty $$ due to the ideal property of $[T,X]$. We can apply the arguments in (A) to deduce that the series $\sum_{n=1}^\infty h_n$ converges (absolutely) in $L^1$ to $h$. Kolmogorov's Theorem yields that the series $\sum_{n=1}^\infty T(h_n)$ converges to $T(h)$ in $L^{1,\infty}$.
On the other hand, since the series $\sum_{n=1}^\infty T(h_n)$ converges absolutely in $X$ (see \eqref{th}), it is convergent, say to $g=\sum_{n=1}^\infty T(h_n)$ in $X$ and hence, also in $L^{1,\infty}$. Accordingly, $T(h)=g$ and so $T(h)\in X$. This establishes that $f\in[T,X]$. (C) It remains to show that $\sum_{n=1}^\infty f_n$ converges to $f$ in the topology of $[T,X]$, that is, $\|f-\sum_{n=1}^N f_n\|_{[T,X]}\to0$ as $N\to\infty$. Fix $N\in\mathbb{N}$. Let $h\in L^0$ satisfy $$ |h|\le \left| f-\sum_{n=1}^N f_n\right| =\left|\sum_{n=N+1}^\infty f_n\right|\le \sum_{n=N+1}^\infty |f_n|. $$ We can reproduce the argument used in (B) to deduce that $$ h=\sum_{n=N+1}^\infty h_n,\quad |h_n|\le|f_n|,\quad n\ge N+1. $$ Then $$ \|T(h)\|_X\le\sum_{n=N+1}^\infty \|T(h_n)\|_X\le \sum_{n=N+1}^\infty \|f_n\|_{[T,X]}. $$ That is, for each $N\in\mathbb{N}$, we have $$ \|f-\sum_{n=1}^N f_n\|_{[T,X]} =\sup_{ |h|\le| f-\sum_{n=1}^N f_n|} \|T(h)\|_X\le \sum_{n=N+1}^\infty \|f_n\|_{[T,X]}\to0, $$ which establishes the completeness of $[T,X]$. \end{proof} We will require an alternate description of the norm $\|\cdot\|_{[T,X]}$ to that given in \eqref{TX-norm}, namely \begin{equation}\label{III} \|f\|_{[T,X]}=\sup_{\|g\|_{X'}\le1}\|fT_{X'}(g)\|_{L^1},\quad f\in[T,X]. \end{equation} To verify this fix $f\in[T,X]$. Given $\varphi\in L^0$ with $|\varphi|\le1$, the function $\varphi f\in[T,X]$ as $|\varphi f|\le|f|$. It follows from Lemma \ref{NEW-4.3} (see also its proof) with $\varphi f$ in place of $f$, that $\varphi fT_{X'}(g)\in L^1$ for all $g\in X'$ (in particular, also $fT_{X'}(g)\in L^1$) and $$ \int_{-1}^1(\varphi f)T_{X'}(g)=-\int_{-1}^1gT(\varphi f),\quad g\in X'. $$ Since $\{\varphi f: \varphi\in L^0, |\varphi|\le1\}=\{h\in L^0:|h|\le|f|\}$, the previous formula yields \eqref{III} because \eqref{TX-norm} implies that \begin{align*} \|f\|_{[T,X]}&= \sup_{|\varphi|\le1}\|T(\varphi f)\|_X = \sup_{|\varphi|\le1} \sup_{\|g\|_{X'}\le1}\Big|\int_{-1}^1 gT(\varphi f)\Big| \\ &= \sup_{\|g\|_{X'}\le1}\sup_{|\varphi|\le1} \Big|\int_{-1}^1 (\varphi f)T_{X'}(g)\Big| = \sup_{\|g\|_{X'}\le1}\|fT_{X'}(g)\|_{L^1}. \end{align*} \begin{proposition}\label{prop 4.5} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. Then $[T,X]$ is a B.f.s. \end{proposition} \begin{proof} In view of Lemma \ref{NEW-4.4} it remains to establish that $[T,X]$ possesses the Fatou property. Let $0\le f\in L^0$ and $\{f_n\}_{n=1}^\infty\subseteq [T,X]\subseteq L^1$ be a sequence such that $0\le f_n\le f_{n+1}\uparrow f$ pointwise a.e.\ with $\sup_n\|f_n\|_{[T,X]}<\infty$. In Step A of the proof of Lemma \ref{NEW-4.4} it was shown that $$ \|h\|_{L^1}\le (4\pi/\varphi_X(1))\|h\|_{[T,X]},\quad h\in [T,X], $$ which ensures that also $\sup_n\|f_n\|_{L^1}<\infty$. Hence, via Fatou's lemma, $f\in L^1$. Moreover, the Monotone Convergence Theorem together with \eqref{III} applied to $f_n\in[T,X]$ for each $n\in\mathbb N$ yields \begin{align*} \sup_{\|g\|_{X'}\le1}\int_{-1}^1|fT_{X'}(g)| &= \sup_{\|g\|_{X'}\le1}\sup_n\int_{-1}^1|f_nT_{X'}(g)| \\ &= \sup_n\sup_{\|g\|_{X'}\le1}\int_{-1}^1|f_nT_{X'}(g)| = \sup_n\|f_n\|_{[T,X]}<\infty. \end{align*} In particular, $f\in L^1$ and $fT_{X'}(g)\in L^1$ for every $g\in X'$. According to (c)$\Leftrightarrow$(g) in Proposition \ref{prop-7} we have $f\in[T,X]$ and, via \eqref{III} and the previous identity, that $\|f\|_{[T,X]}=\sup_n\|f_n\|_{[T,X]}$. So, we have established that $[T,X]$ has the Fatou property.
\end{proof} The optimality property of the B.f.s.\ $[T,X]$ relative to $T_X$ can now be formulated. \begin{theorem}\label{theorem 4.6} Let $X$ be a r.i.\ space satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. Then $[T,X]$ is the largest B.f.s.\ containing $X$ to which $T_X\colon X\to X$ has a continuous, linear, $X$-valued extension. \end{theorem} \begin{proof} Let $Z\subseteq L^1$ be any B.f.s.\ with $X\subseteq Z$ such that $T_X$ has a continuous, linear extension $T\colon Z\to X$. Fix $f\in Z$. Then for each $h\in L^0$ with $|h|\le|f|$ we have $h\in Z$ and $$ \|T(h)\|_X\le \|T\|_{op}\|h\|_Z\le\|T\|_{op} \|f\|_Z, $$ where $\|T\|_{op}$ is the operator norm of $T\colon Z\to X$. Then $f\in[T,X]$ and so the space $[T,X]$ contains $Z$ continuously. Due to the boundedness of $T_X\colon X\to X$ we have that $$ \|f\|_{[T,X]}=\sup_{|h|\le|f|}\|T(h)\|_X\le \|T_X\|_{op}\|f\|_X,\quad f\in X, $$ and so $X\subseteq[T,X]$ continuously. By construction, $T$ maps $[T,X]$ into $X$ continuously. Hence, $[T,X]$ is the \textit{largest} B.f.s.\ containing $X$ to which $T_X\colon X\to X$ has a continuous, linear, $X$-valued extension. \end{proof} We can now prove the impossibility of extending $T_X\colon X\to X$. \begin{theorem}\label{theo-10} Let $X$ be a r.i.\ space satisfying either $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ or $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$. Then the finite Hilbert transform $T_X\colon X\to X$ has no $X$-valued, continuous linear extension to any larger B.f.s. \end{theorem} \begin{proof} According to Theorem \ref{theorem 4.6}, whenever $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$, the space $[T,X]$ is the largest B.f.s.\ to which $T_X\colon X\to X$ can be continuously extended with $X\subseteq[T,X]$ continuously. So, it suffices to prove that $[T,X]= X$. But, this corresponds precisely to the equivalence in Proposition \ref{cor-8} between the condition (a), i.e., $f\in X$, and the condition (d), i.e., $T(h)\in X$ for all $h\in L^0$ with $|h|\le|f|$, which is the statement that $f\in[T,X]$. \end{proof} Recall that $T_X$ is not an isomorphism. Nevertheless, Theorems \ref{theo-3} and \ref{theo-4} yield norms, in terms of the finite Hilbert transform, which are equivalent to the given norm in the corresponding r.i.\ space. \begin{corollary}\label{cor-10} Let $X$ be a r.i.\ space satisfying either $1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1$ or $0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2$. Then there exists a constant $C_X>0$ such that \begin{align*} \frac{C_X}{4}\|f\|_X\le \sup_{A\in\mathcal{B}}\big\|T_X(\chi_A f)\big\|_X & \le \sup_{|\theta|=1}\big\|T_X(\theta f)\big\|_X \\ & \le \sup_{|h|\le|f|}\big\|T_X(h)\big\|_X \le \|T_X\| \cdot \|f\|_X , \end{align*} for every $f\in X$. \end{corollary} \begin{proof} The final inequality is clear from $$ \|T_X(h)\|_X\le \|T_X\|\cdot \|h\|_X\le \|T_X\|\cdot \|f\|_X $$ for every $f\in X$ and every $h\in L^0$ with $|h|\le|f|$. It was shown in the proof of Theorem \ref{theo-10} that $[T,X]=X$. Hence, there exists a constant $C_X>0$ such that $$ C_X\|f\|_X\le \sup_{|h|\le|f|} \|T_X(h)\|_X,\quad f\in X. $$ The remaining inequalities now follow from \eqref{norms} which is applicable because if $f\in X$, then condition (c) in Proposition \ref{prop-7} is surely satisfied. \end{proof} \begin{remark} The notion of the optimal domain $[T,X]$ is meaningful for a large family of operators acting on function spaces, as already commented in the Introduction.
Amongst them, in a much simpler situation, are the positive operators. For a thorough study of this topic see, for example, \cite{okada-ricker-sanchez} and the references therein. \end{remark} \section{The finite Hilbert transform on $L^2$} \label{S5} Theorems \ref{theo-3} and \ref{theo-4} are not applicable to $X=L^2$. Moreover, $T_{L^2}$ is not Fredholm and no inversion formula is available. Nevertheless, it turns out that no extension of $T_{L^2}$ is possible. A new approach is needed to establish this. Attempting to use the results and techniques obtained for the cases $p\not=2$ to study the possible extension of $T_{L^2}\colon L^2\to L^2$ is futile, as shown by the following consideration. Let $X=L^p$ for $1<p<2$ and set $T_p:=T_{L^p}$. Since $\underline{\alpha}_X= \overline{\alpha}_X=1/p\in(1/2,1)$, we are in the setting of Theorem \ref{theo-3}. A right-inverse of $T_p$ (cf.\ Theorem \ref{theo-3}(b)) is the operator $\widehat{T}_p:=\widehat T_{L^p}$, defined by \eqref{T-hat}, that is, $$ \widehat{T}_p(f)(x):=\frac{-1}{\sqrt{1-x^2}}\, T_p(\sqrt{1-t^2}f(t))(x), \quad \mathrm{a.e. }\; x\in (-1,1), $$ which maps $L^p$ into $L^p$ and is an isomorphism onto its range. We estimate from below the operator norm of $\widehat{T}_p$. Since $T_p(\sqrt{1-t^2})(x)=-x$, for $f:=\mathbf{1}$ we obtain $$ \|\widehat{T}_p\|\ge \frac{\|x/\sqrt{1-x^2}\|_{L^p}}{\|\mathbf{1}\|_{L^p}} = \left(\frac12\int_{-1}^{1}\frac{|x|^p}{(1-x^2)^{p/2}}\,dx\right)^{1/p} $$ which goes to $\infty$ as $p\to2^-$. We denote by $T_2$ the finite Hilbert transform $T_{L^2}\colon L^2\to L^2$. The norm $\|\cdot\|_{L^2}$ will simply be denoted by $\|\cdot\|_2$. \begin{lemma}\label{lem-11} For every set $A\in\mathcal{B}$ we have $$ \left\|T_2(\chi_A)\right\|_2 \ge \left(\int_0^\infty\frac{4\lambda}{e^{\pi\lambda}+1}d\lambda\right)^{1/2} |A|^{1/2}. $$ \end{lemma} \begin{proof} We rely on a consequence of the Stein-Weiss formula for the distribution function of the Hilbert transform of a characteristic function, due to Laeng, \cite[Theorem 1.2]{laeng}. Namely, for $A\subseteq\mathbb{R}$ with $|A|<\infty$, we have $$ |\{x\in A:\left| H(\chi_A)(x)\right|>\lambda\}| = \frac{2|A|}{e^{\pi\lambda}+1},\quad \lambda>0. $$ For $A\in\mathcal{B}$, it follows from properties of the distribution function for $T_2(\chi_A)$ that \begin{align*} \|T_2(\chi_A)\|^2_2 &= \int_0^\infty 2\lambda \cdot |\{x\in (-1,1):\left| T_2(\chi_A)(x)\right|>\lambda\}|\,d\lambda \\ &\ge \int_0^\infty 2\lambda \cdot |\{x\in A:\left| H(\chi_A)(x)\right|>\lambda\}|\,d\lambda \\ &= |A|\int_0^\infty \frac{4\lambda}{e^{\pi\lambda}+1}\,d\lambda. \end{align*} \end{proof} The approach we use for proving the impossibility of extending $T_2$ is to show that $L^2$ coincides with the B.f.s.\ $[T,L^2]$. For this, we need to compare the norm in $L^2$ with the norm in $[T,L^2]$. \begin{theorem}\label{theo-12} For each function $\phi\in\mathrm{sim }\;\mathcal{B}$ we have \begin{equation*}\label{FHT-inq2} \left(\int_0^\infty\frac{4\lambda}{e^{\pi\lambda}+1}d\lambda\right)^{1/2} \|\phi\|_2 \le \sup_{|\theta|=1}\big\|T_2(\theta \phi)\big\|_2 . \end{equation*} \end{theorem} \begin{proof} In order to prove the claim, fix any simple function $\phi=\sum_{n=1}^N a_n\chi_{A_n},$ with $a_1,\dots,a_N\in\mathbb C$ and pairwise disjoint sets $A_1,\dots, A_N\in\mathcal{B}$ with $N\in\mathbb N$. Let $\tau$ denote the product measure on $\Lambda:=\{-1,1\}^N$ induced by the uniform probability on $\{-1,1\}$. Thus, given $\sigma\in\Lambda$ we have $\sigma=(\sigma_1,\dots,\sigma_N)$ with $\sigma_n=\pm1$ for $n=1,\dots, N$.
Note that the coordinate projections $$ P_n:\sigma\in\Lambda\mapsto \sigma_n\in\{-1,1\}, \quad n=1,\dots,N, $$ form an orthonormal set, i.e., \begin{equation}\label{orto} \int_\Lambda P_jP_k\,d\tau=\int_\Lambda \sigma_j\sigma_k\,d\tau(\sigma) =\delta_{j,k},\quad j,k=1,\dots,N. \end{equation} The function $F\colon\Lambda\to[0,\infty)$ defined by $$ F(\sigma):=\left\|T_2\left(\sum_{n=1}^N \sigma_na_n\chi_{A_n}\right)\right\|_2, \quad \sigma\in\Lambda, $$ is bounded and measurable and so satisfies \begin{equation}\label{2-oo} \left\|F\right\|_{L^2(\tau)}\le \left\|F\right\|_{L^\infty(\tau)}. \end{equation} We now estimate both of the norms in \eqref{2-oo}. Given $\sigma=(\sigma_n)\in\Lambda$, the measurable function defined on $(-1,1)$ by $$ t\mapsto\theta_\sigma(t):=\chi_{(-1,1)\setminus(\cup_{n=1}^NA_n)}(t) + \sum_{n=1}^N\sigma_n\chi_{A_n}(t) $$ satisfies $|\theta_\sigma|=1$ and $$ \theta_\sigma\phi=\sum_{n=1}^N\sigma_na_n\chi_{A_n}. $$ Consequently, $$ T_2\big(\theta_\sigma\phi\big)=T_2\Big(\sum_{n=1}^N\sigma_na_n\chi_{A_n}\Big), $$ from which it is clear that \begin{equation}\label{signs} \left\|F\right\|_{L^\infty(\tau)} =\sup_{\sigma\in\Lambda} \bigg\|T_2\bigg(\sum_{n=1}^N \sigma_na_n\chi_{A_n}\bigg)\bigg\|_{2} \le \sup_{|\theta|=1}\big\|T_2(\theta \phi)\big\|_{2}. \end{equation} Set $\beta:=\big(\int_0^\infty \frac{4\lambda}{e^{\pi\lambda}+1}d\lambda\big)^{1/2}$. By Fubini's theorem, \eqref{orto} and Lemma \ref{lem-11} it follows that \begin{align*} \left\|F\right\|^2_{L^2(\tau)} &= \int_\Lambda \bigg\|T_2\bigg(\sum_{n=1}^N \sigma_na_n\chi_{A_n}\bigg)\bigg\|^2_2 \,d\tau(\sigma) = \int_\Lambda\int_{-1}^{1} \bigg|\sum_{n=1}^N \sigma_na_nT_2(\chi_{A_n})(t)\bigg|^2dt \,d\tau(\sigma) \\ &= \int_{-1}^{1}\int_\Lambda \bigg|\sum_{n=1}^N \sigma_na_nT_2(\chi_{A_n})(t)\bigg|^2\,d\tau(\sigma)\,dt = \int_{-1}^{1}\sum_{n=1}^N \left|a_nT_2(\chi_{A_n})(t)\right|^2\,dt \\ &= \sum_{n=1}^N|a_n|^2\Big\|T_2(\chi_{A_n})\Big\|^2_2 \ge \beta^2\sum_{n=1}^N |a_n|^2|A_n| \\ & = \beta^2\int_{-1}^{1}\bigg|\sum_{n=1}^N a_n\chi_{A_n}(t)\bigg|^2dt \\ & = \beta^2\|\phi\|^2_2. \end{align*} This inequality, together with \eqref{2-oo} and \eqref{signs}, yields $$ \beta \|\phi\|_2 \le \sup_{|\theta|=1}\big\|T_2(\theta \phi)\big\|_2. $$ Since the simple function $\phi$ is arbitrary, this establishes the result. \end{proof} Theorem \ref{theo-12} implies the impossibility of extending $T_2$. Note that this does not follow from Theorem \ref{theo-10} since $L^2$ does not satisfy the restriction on the Boyd indices. \begin{theorem}\label{theo-14} The finite Hilbert transform $T_2\colon L^2\to L^2$ has no continuous, $L^2$-valued extension to any genuinely larger B.f.s. \end{theorem} \begin{proof} We follow the approach used for proving Theorem \ref{theo-10} to show that $$ L^2=[T,L^2]:=\big\{f\in L^1: T(h)\in L^2,\;\forall |h|\le|f|\big\}. $$ First note that \begin{equation}\label{aa} \beta \|\phi\|_2 \le \sup_{|\theta|=1}\big\|T_2(\theta \phi)\big\|_2 \le \sup_{|h|\le|\phi|}\big\| T_2(h)\big\|_2, \quad \phi\in\mathrm{sim }\;\mathcal{B} . \end{equation} The left-hand inequality is Theorem \ref{theo-12}. The right-hand inequality is clear from \eqref{norms}. Let $f\in[T,L^2]$. According to \eqref{aa}, for every $\phi\in\mathrm{sim }\;\mathcal{B}$ satisfying $|\phi|\le |f|$ it follows that $$ \beta\|\phi\|_2 \le \sup_{|h|\le|f|}\big\|T(h)\big\|_2 =\|f\|_{[T,L^2]}. $$ Taking the supremum with respect to all such $\phi$ yields $\beta\|f\|_2 \le \|f\|_{[T,L^2]}$. This implies that $f\in L^2$.
Consequently, $[T,L^2]=L^2$ with equivalent norms. \end{proof} A further consequence of Theorem \ref{theo-12} is a collection of norms, expressed in terms of the operator $T_2$, which are equivalent to the standard norm $\|\cdot\|_2$ in $L^2$. As before, note that this does not follow from Corollary \ref{cor-10} since $L^2$ does not satisfy the restriction on the Boyd indices. Recall that $\beta:=\big(\int_0^\infty \frac{4\lambda}{e^{\pi\lambda}+1}d\lambda\big)^{1/2}$. \begin{corollary}\label{cor-14} For every $f\in L^2$, we have $$ \frac{\beta}{4}\|f\|_2 \le \sup_{A\in\mathcal{B}}\big\|T_2(\chi_A f)\big\|_2 \le \sup_{|\theta|=1}\big\|T_2(\theta f)\big\|_2 \le \sup_{|h|\le|f|}\big\|T_2(h)\big\|_2 \le \|f\|_2 . $$ \end{corollary} \begin{proof} The last inequality follows (since $\|\cdot\|_2$ is a lattice norm and $\|T_2\|=1$, \cite{mclean-elliot}) via $$ \|T_2(h)\|_2\le \|T_2\|\cdot\|h\|_2\le \|f\|_2,\quad |h|\le |f| . $$ If $f\in L^2$, then surely (c) of Proposition \ref{prop-7} is satisfied with $X=L^2$. Hence the second and third inequalities follow from \eqref{norms}. Finally, in order to prove the first inequality, we begin by establishing, for $h,f\in L^2$ satisfying $|h|\le|f|$, that \begin{equation}\label{nueva} \sup_{|\theta|=1}\big\|T(\theta h)\big\|_2 \le \sup_{|\tilde\theta|=1}\big\|T(\tilde\theta f)\big\|_2 . \end{equation} Fix $\theta$ with $|\theta|=1$. Then, via Parseval's formula, for some function $\tilde\theta_{f,g}$ satisfying $|\tilde\theta_{f,g}|=1$, we have \begin{align*} \big\|T_2(\theta h)\big\|_2 &= \sup_{\|g\|_2\le1}\left|\int_{-1}^1T_2(\theta h)(t)\cdot g(t)\,dt\right| = \sup_{\|g\|_2\le1}\left|\int_{-1}^1\theta(t) h(t)\cdot T_2(g)(t)\,dt\right| \\ &\le \sup_{\|g\|_2\le1}\int_{-1}^1|h(t)|\cdot |T_2(g)(t)|\,dt \le \sup_{\|g\|_2\le1}\int_{-1}^1|f(t)| \cdot|T_2(g)(t)|\,dt \\ &= \sup_{\|g\|_2\le1}\int_{-1}^1f(t) \tilde\theta_{f,g} (t)T_2(g)(t)\,dt \le \sup_{\|g\|_2\le1}\left|\int_{-1}^1T_2(f \tilde\theta_{f,g})(t) g(t)\,dt\right| \\ &\le \sup_{\|g\|_2\le1}\|T_2(f \tilde\theta_{f,g})\|_2 \|g\|_2 \\ &\le \sup_{|\tilde\theta|=1}\|T_2(f \tilde\theta)\|_2 . \end{align*} Accordingly, \eqref{nueva} holds. Fix $f\in L^2$. Then Theorem \ref{theo-12}, together with \eqref{norms} and \eqref{nueva} gives, for $\phi\in\mathrm{sim}\;\mathcal{B}$ satisfying $|\phi|\le |f|$, that $$ \beta\|\phi\|_2 \le \sup_{|\theta|=1}\big\|T_2(\theta \phi)\big\|_2 \le \sup_{|\theta|=1}\big\|T_2(\theta f)\big\|_2 \le 4\sup_{A\in\mathcal{B}}\big\|T_2(f\chi_A)\big\|_2. $$ Taking the supremum with respect to all such simple functions $\phi$, we arrive at $$ \beta\|f\|_2 \le 4\sup_{A\in\mathcal{B}}\big\|T_2(f\chi_A)\big\|_2. $$ \end{proof} From Corollary \ref{cor-14} we can deduce conditions, in terms of the finite Hilbert transform, for membership of $L^2$. \begin{corollary}\label{cor-15} Given $f\in L^1$ the following conditions are equivalent. \begin{itemize} \item[(a)] $f\in L^2$. \item[(b)] $T(f\chi_A)\in L^2$ for every $A\in\mathcal{B}$. \item[(c)] $T(f\theta)\in L^2$ for every $\theta\in L^\infty$ with $|\theta|=1$ a.e. \item[(d)] $T(h)\in L^2$ for every $h\in L^0$ with $|h|\le |f|$ a.e. \end{itemize} \end{corollary} \begin{proof} (b)$\Leftrightarrow$(c)$\Leftrightarrow$(d) follow from Proposition \ref{prop-7} with $X=L^2$. (a)$\Rightarrow$(b). Clear as $T_2\colon L^2\to L^2$ is bounded. (b)$\Rightarrow$(a). For $X=L^2$ it follows that condition (b) of Proposition \ref{prop-7} holds, that is, $\gamma:=\sup_{A\in\mathcal{B}}\|T(f\chi_A)\|_2<\infty$.
For each $n\in\mathbb{N}$ define $A_n:=|f|^{-1}([0,n])$ and $f_n:=f\chi_{A_n}$. Then $$ \|T(f_n\chi_A)\|_2=\|T(f\chi_{A\cap A_n})\|_2\le \gamma,\quad A\in\mathcal{B}, n\in\mathbb{N}, $$ which implies, via Corollary \ref{cor-14}, that $$ \|f_n\|_2\le \frac{4\gamma}{\beta},\quad n\in\mathbb{N}. $$ Since $|f_n|^2\uparrow|f|^2$ pointwise a.e.\ on $(-1,1)$, from the Monotone Convergence Theorem it follows that $f\in L^2$. This is condition (a). \end{proof} \begin{remark} As commented in the Introduction the operator $T_2\colon L^2\to L^2$ is injective and has proper dense range. A detailed study of its range is carried out in Sections 3 and 4 of \cite{okada-elliot}. Let us highlight a somewhat unexpected result given there. Namely, for every $-1<a<1$, each function $f_a(x):=\chi_{(a,1)}(x)/\sqrt{1-x^2}$, for $x\in(-1,1)$, which belongs to $L^1$, satisfies $T(f_a)\in L^2$ and $$ \left\|T(f_a)\right\|_2= \left\| T\left(\frac{\chi_{(a,1)}}{\sqrt{1-x^2}}\right) \right\|_2 =\frac1\pi \big(7\zeta(3)\big)^{1/2}, $$ \cite[Lemma 4.3 and Note 4.4]{okada-elliot}. Observe that $f_a\not\in L^2$ for every $-1<a<1$. On the other hand, if $X$ is a r.i.\ space satisfying $1/2<\underline{\alpha}_X\le\overline{\alpha}_X<1$, then $K=\{f_a: -1<a<1\} \subseteq L^{2,\infty}\subseteq X$. Moreover, for every sequence $a_n\uparrow1^-$ the sequence $\{f_{a_n}\}_{n=1}^\infty$ satisfies $0\le f_{a_n}\downarrow0$ pointwise. By the absolute continuity of the norm $\|\cdot\|_X$ it follows that $\lim_nT_X(f_{a_n})=0$ in $X$. \end{remark} \begin{remark}\label{rem-final} For r.i.\ spaces $X$ satisfying the conditions of Theorem \ref{theo-10}, namely \begin{equation}\label{eq-5.6} 0<\underline{\alpha}_X\le \overline{\alpha}_X<1/2\quad \text{or}\quad 1/2<\underline{\alpha}_X\le \overline{\alpha}_X<1, \end{equation} we know that the finite Hilbert transform $T_X\colon X\to X$ cannot be extended to a larger B.f.s. The proof is based on arguments from Fredholm operator theory, a deep factorization result of Talagrand on $L^0$-valued measures and on the construction of the largest domain space $[T,X]$. For r.i.\ spaces $X$ with $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$ not satisfying the conditions \eqref{eq-5.6} it is unknown in general when $T_X$ is Fredholm and when not (for $X=L^2$ it is known that $T_X$ is not Fredholm). So, the arguments used to prove Theorem \ref{theo-10} may apply to some further cases but surely not to all. The proof given in Theorem \ref{theo-14} for $X=L^2$ relies heavily on properties of the $L^2$-setting. Thus, it is difficult to extend to other spaces. The possibility of a related proof, at least for the spaces $L^{2,q}$ with $1\le q\le \infty$ and $q\not=2$, would require carefully looking at the ``measure of level sets''. Many technical difficulties would be expected to arise in such an attempt and still not all cases would be covered. Nevertheless, the class of r.i.\ spaces $X$ having the property \eqref{eq-5.6}, together with $X=L^2$, is rather large and suggests that $[T,X]=X$ should hold for all r.i.\ spaces satisfying $0<\underline{\alpha}_X\le \overline{\alpha}_X<1$. \end{remark}
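For the interested reader, the constants appearing above are easy to check numerically. The following Python sketch (purely illustrative; it is not part of the original argument) evaluates $\beta=\big(\int_0^\infty \frac{4\lambda}{e^{\pi\lambda}+1}\,d\lambda\big)^{1/2}$ by quadrature, where the result agrees numerically with the closed form $3^{-1/2}\approx 0.5774$, together with the value $\frac1\pi\big(7\zeta(3)\big)^{1/2}\approx 0.9233$ from the first remark.
\begin{verbatim}
# Numerical check of the constants beta and (1/pi)*sqrt(7*zeta(3)).
# Illustrative sketch only; requires numpy and scipy.
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# beta^2 = int_0^infty 4*lam / (exp(pi*lam) + 1) d(lam)
beta_sq, _ = quad(lambda lam: 4.0 * lam / (np.exp(np.pi * lam) + 1.0),
                  0.0, np.inf)
print("beta      =", np.sqrt(beta_sq))   # approx 0.57735
print("1/sqrt(3) =", 1.0 / np.sqrt(3.0))

# Constant from the remark on the functions f_a:
print("(1/pi)*sqrt(7*zeta(3)) =", np.sqrt(7.0 * zeta(3)) / np.pi)  # ~0.9233
\end{verbatim}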
{ "timestamp": "2019-01-21T02:13:15", "yymm": "1901", "arxiv_id": "1901.06334", "language": "en", "url": "https://arxiv.org/abs/1901.06334", "abstract": "The principle of optimizing inequalities, or their equivalent operator theoretic formulation, is well established in analysis. For an operator, this corresponds to extending its action to larger domains, hopefully to the largest possible such domain (i.e, its \\textit{optimal domain}). Some classical operators are already optimally defined (e.g., the Hilbert transform in $L^p(\\mathbb{R})$, $1<p<\\infty$) and others are not (e.g., the Hausdorff-Young inequality in $L^p(\\mathbb{T})$, $1<p<2$, or Sobolev's inequality in various spaces). In this paper a detailed investigation is undertaken of the finite Hilbert transform $T$ acting on rearrangement invariant spaces $X$ on $(-1,1)$, an operator whose singular kernel is neither positive nor does it possess any monotonicity properties. For a large class of such spaces $X$ it is shown that $T$ is already optimally defined on $X$ (this is known for $L^p(-1,1)$ for all $1<p<\\infty$, except $p=2$). The case $p=2$ is significantly different because the range of $T$ is a proper dense subspace of $L^2(-1,1)$. Nevertheless, by a completely different approach, it is established that $T$ is also optimally defined on $L^2(-1,1)$. Our methods are also used to show that the solution of the airfoil equation, which is well known for the spaces $L^p(-1,1)$ whenever $p\\not=2$ (due to certain properties of $T$), can also be extended to the class of r.i.\\ spaces $X$ considered in this paper.", "subjects": "Functional Analysis (math.FA)", "title": "Inversion and extension of the finite Hilbert transform on (-1,1)", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692305124305, "lm_q2_score": 0.7248702880639792, "lm_q1q2_score": 0.7079585064647704 }
https://arxiv.org/abs/1804.07940
Viewing Simpson's Paradox
The well-known Simpson's paradox is puzzling and surprising for many, especially for empirical researchers and users of statistics, even though there is no surprise as far as the mathematical details are concerned. Much has been written about the paradox, but most of it is beyond the grasp of such users. This short article explains the phenomenon in an easy-to-grasp way using simple algebra and geometry. The mathematical conditions under which the paradox can occur are made explicit, and a simple geometrical illustration is used to describe it. We consider the reversal of the association between two binary variables, say, $X$ and $Y$, by a third binary variable, say, $Z$. We show that it is always possible to define $Z$ algebraically for a non-extreme dependence between $X$ and $Y$; the occurrence of the paradox therefore depends on identifying $Z$ with a practical meaning in the given context of interest, which is up to the subject domain expert. Finally, we discuss the paradox in predictive contexts, since in the literature it is argued that the paradox is resolved using causal reasoning.
\section{Introduction} \label{sec:intro} Simpson's paradox, which was discussed originally in \cite{YG03} and later in \cite{SE51} but named in \cite{BC72}, is a situation where two random variables are positively (negatively) correlated but at the same time negatively (positively) correlated given each value of a third variable. The paradox is found on many occasions in social science, epidemiological, economics, etc.\ applications. Nevertheless, it is considered a puzzling and surprising phenomenon because of the contradictory conclusions it yields when certain interpretations of the probabilities are used; for example, when causal interpretations are given to observed probabilities. Therefore it has received considerable attention in the philosophical literature (see \cite{OR85}, \cite{BP10} and references therein) and in social science contexts as well. One famous example concerns alleged sex discrimination in graduate admissions at the University of California, Berkeley, discussed in \cite{BH75}, where empirical data show an overall higher rate of admission for the male applicants but, when the rates are considered department-wise, a slight bias in favor of the female applicants. Further investigation reveals that the female applicants tended to apply to more competitive departments with higher rates of rejection. However, if the smallest of the department-wise admission rates for the females had been greater than the largest of the department-wise admission rates for the males, then the paradox could never have happened, whether or not the females tended to apply to more competitive departments. That is, in the observed context, if all departments had been favoring females strongly (in that way), the application pattern of the sex groups could not have produced a conclusion different from that obtained when the two groups are taken together. So, there is a numerically necessary condition for the paradox. The following is another example (data are adjusted and adapted from \cite{PJ09}). Note that here we avoid any small-sample problems; for example, one can think of all the counts as multiples of a large number such as $10\,000$, so that every count is large. \begin{exm} Consider the following table of counts obtained from an observed sample of individuals, both males and females, who either had or had not taken a certain treatment for a certain disease, and for whom recovery from the disease is also reported. \begin{table}[h] \caption{Numeric counts of recovery. \label{tab:ex1}} \begin{center} \begin{tabular}{r|rr|c|rr|c|rr|c} \hline & \multicolumn{2}{ c }{ Male ($Z=1$) } & & \multicolumn{2}{ c }{ Female ($Z=0$) } & & \multicolumn{2}{ c }{ Both } & \\ & \multicolumn{2}{ c }{ Recovery ($X$) } & & \multicolumn{2}{ c }{ Recovery ($X$) } & & \multicolumn{2}{ c }{ Recovery ($X$) } & \\ & Yes($1$) & No($0$) & & Yes($1$) & No($0$) & & Yes($1$) & No($0$) & \\ \hline Treatment ($Y$) $1$ & $7$ & $3$ & & $9$ & $21$ & & $16$ & $24$ & \\ $0$ & $18$ & $12$ & & $2$ & $8$ & & $20$ & $20$ & \\ \hline \end{tabular} \end{center} \end{table} The counts show that the treatment was effective for the males and the females separately, since the recovery probability of the treatment is greater than that of the non-treatment for each group: \begin{eqnarray*} \frac{7}{7+3} =0.7 > \frac{18}{18+12}=0.6 \qquad \textrm{and} \qquad \frac{9}{9+21} =0.3 > \frac{2}{2+8} =0.2.
\end{eqnarray*} However, when we aggregate the data, i.e., pool the counts for the males and the females together, we see that the treatment is no longer effective for the individuals, because the recovery rate of the treatment is smaller than that of the non-treatment, i.e., \begin{eqnarray*} \frac{16}{16+24}=0.4 < \frac{20}{20+20} =0.5. \end{eqnarray*} \end{exm} Clearly, the above interpretation of the relative frequencies (probabilities) asserts an impossibility. Of course, numerically the sum of the recovery probabilities (rates) of the treatment for the males and the females is larger than that of the non-treatment, i.e., $\frac{7}{7+3} + \frac{9}{9+21} > \frac{18}{18+12} +\frac{2}{2+8}$, since the recovery probability of the treatment for the males is greater than that of the non-treatment, and similarly for the females. In fact, these sums have no reasonable interpretation, but one may be tempted to believe that the treatment group should always have a higher chance of recovery collectively, perhaps because it corresponds to the bigger summands and therefore the bigger sum. The recovery probability of the treatment when the males and the females are taken together as a single group, however, is a certain weighted average of the probabilities for the males and the females separately, and similarly for the non-treatment. Write them as \begin{align*} \frac{7+3}{16+24} \cdot \frac{7}{7+3} &+ \frac{9+21}{16+24} \cdot \frac{9}{9+21} =\frac{16}{16+24}=0.4 \\ & < \frac{18+12}{20+20} \cdot \frac{18}{18+12} + \frac{2+8}{20+20} \cdot \frac{2}{2+8} = \frac{20}{20+20} =0.5 \\ u \frac{7}{7+3} + (1-u) \frac{9}{9+21} =0.4 &< v \frac{18}{18+12} + (1-v) \frac{2}{2+8} =0.5 \end{align*} where $u=\frac{7+3}{16+24}=0.25$ and $v=\frac{18+12}{20+20}=0.75$. Even if the sum of two probabilities is greater than the sum of their two counterparts, a certain weighted average of the former two need not be greater than that of the latter two when different weights (summing to $1$ in each case) are used. The key factor is what the two sets of weights are. As we have seen, in this case the two sets of weights are $\{u,1-u \}$ (for the recovery probabilities of the treatment) and $\{v,1-v \}$ (for the recovery probabilities of the non-treatment). For some $u$ and $v$ implied by the counts in Table~\ref{tab:ex1}, the recovery probability of the treatment for the individuals (males and females taken together) can be smaller than that of the non-treatment. Note that it is not required that $u+v=1$; here it is a coincidence. Furthermore, $u$ is the probability of being a male in the treatment group and $v$ is that in the non-treatment group, so together they indicate how the variables sex and treatment are dependent. We see that for the paradox to happen it is necessary that $u$ is `sufficiently' smaller than $v$, written $u \ll v$. It means that the dependence between the two variables is ``strong''. But, as in the previous example, it is necessary that the smallest of the sex-wise recovery probabilities of the treatment ($0.3$) is not greater than the largest of the sex-wise recovery probabilities of the non-treatment ($0.6$); i.e., the real interval created by the recovery probabilities of the treatment for the males and the females (which is $[0.3,0.7]$) and that created by those of the non-treatment (which is $[0.2,0.6]$) should overlap, thus sharing an interval of probability values; otherwise the paradox cannot occur.
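To make the arithmetic above easy to reproduce, the following Python sketch (purely illustrative, not part of the original article) recomputes the group-wise and pooled recovery rates of Table~\ref{tab:ex1}, together with the weights $u$ and $v$, exhibiting the reversal.
\begin{verbatim}
# Recovery counts from Table 1 as (recovered, not recovered).
male   = {"treated": (7, 3),  "untreated": (18, 12)}
female = {"treated": (9, 21), "untreated": (2, 8)}

def rate(yes, no):
    return yes / (yes + no)

for group, tab in [("male", male), ("female", female)]:
    print(group, "treated:", rate(*tab["treated"]),
          "untreated:", rate(*tab["untreated"]))

pooled = {k: (male[k][0] + female[k][0], male[k][1] + female[k][1])
          for k in ("treated", "untreated")}
print("pooled treated:", rate(*pooled["treated"]),    # 0.4
      "untreated:", rate(*pooled["untreated"]))       # 0.5 -> reversal

# Weights: u = P(male | treated), v = P(male | untreated).
u = sum(male["treated"]) / sum(pooled["treated"])
v = sum(male["untreated"]) / sum(pooled["untreated"])
print("u =", u, " v =", v)                            # 0.25 and 0.75
\end{verbatim}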
Whether or not one has incorrect expectations about the rates calculated for the pooled group from those of its constituent groups, i.e., irrespective of the numerical facts, the above interpretation of the probabilities is clearly a paradox. This may be the reason that it is stated in \cite{PJ09} that the paradox can be resolved by considering causality, probabilistic causality to be more precise. Here the females tend to take the treatment more often than the males do. This is taken to be a causal relation. While it is possible to have a positive association between a treatment and its outcome (recovery from a disease) for both men and women, and a reversed association between treatment and recovery when the data are amalgamated, it is not possible for treatments to be causally effective for men and women, but not effective for people. Note that, as one of the anonymous referees pointed out, the Simpson's paradox can be explained by so-called mediant fractions\footnote{for example, see \url{http://www.mathteacherctk.com/blog/2011/02/mediant-fractions-and-simpsons-paradox/}} (see also \cite{RJ2006}). However, we do not wish to discuss the topic using them here. So, let us discuss the paradox seen in the above example a little more formally. We consider the simple case of the paradox with three binary variables: the reversal of the marginal association between $X$ and $Y$ by a third variable $Z$. Let the set of possible values of $X$ be $\{x,x'\}$, where $x$ denotes the success and $x'$ denotes the failure of an event of interest, and similarly for $Y$ and $Z$. One can obtain these results for the more general case where $Z$ is multinary. One instance of the Yule-Simpson's paradox says that the following relationships between conditional probabilities are possible. \begin{eqnarray} p(x \vert y,z) \geq p(x \vert y',z) \label{pxy1z} \\ p(x \vert y,z') \geq p(x \vert y',z') \label{pxy1z1} \end{eqnarray} but at the same time \begin{eqnarray} p(x \vert y) < p(x \vert y') \label{px1y} \end{eqnarray} Equivalently \begin{eqnarray} p(x,y \vert z) \geq p(x \vert z)p(y \vert z) \label{cdxy_z} \\ p(x,y \vert z') \geq p(x \vert z')p(y \vert z') \label{cdxy_z1} \end{eqnarray} but at the same time \begin{eqnarray} p(x,y) < p(x )p(y) \label{pxy_xy} \end{eqnarray} The conditional dependences between $X$ and $Y$, given $Z=z$ and given $Z=z'$, are non-negative, but the marginal dependence between them is negative. Note that the other occurrence of the paradox is similar. It can be obtained by, for example, interchanging the naming of the success and the failure of the event related to the variable $X$. Empirical researchers may find this phenomenon surprising, but the algebra behind the reversal is not. The paradox is often a numerical possibility. The main idea of this short article is to make the mathematical details of the paradox explicit (as agreed by one of the anonymous referees) and to show a useful graphical representation of how the paradox can occur. This explanation of the paradox is particularly useful for empirical researchers for explaining the context and also for students of statistics. When $X$ and $Y$ are dependent through a non-extreme conditional probability (i.e., $0 < P(X \vert Y ) <1$), using the geometric figure it is easy to see that it is always possible to define another third binary variable, say, $Z$, that induces reversed dependences between $X$ and $Y$ at each value of $Z$.
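As an illustration of this algebraic possibility, the following Python sketch (hypothetical code, not from the article) brute-forces a binary $Z$ for the pooled counts of Table~\ref{tab:ex1}: it splits each cell count into two strata and stops at the first split for which both strata show a non-negative treatment-recovery association, the pooled association being negative. The first split found may well be degenerate, which echoes the point that an algebraic $Z$ need not carry any practical meaning.
\begin{verbatim}
# Brute-force search for a binary Z reversing the pooled association.
# Pooled cells: (x=1,y=1), (x=0,y=1), (x=1,y=0), (x=0,y=0).
from itertools import product

N11, N01, N10, N00 = 16, 24, 20, 20
print("pooled:", N11 / (N11 + N01), "<", N10 / (N10 + N00))  # 0.4 < 0.5

def cond_rates(a, b, c, d):
    """p(x=1|y=1) and p(x=1|y=0) for a 2x2 subtable, or None."""
    if a + b == 0 or c + d == 0:
        return None
    return a / (a + b), c / (c + d)

for a, b, c, d in product(range(N11 + 1), range(N01 + 1),
                          range(N10 + 1), range(N00 + 1)):
    r1 = cond_rates(a, b, c, d)                          # stratum Z=1
    r0 = cond_rates(N11 - a, N01 - b, N10 - c, N00 - d)  # stratum Z=0
    if r1 and r0 and r1[0] >= r1[1] and r0[0] >= r0[1]:
        print("Z=1 cells:", (a, b, c, d), "rates:", r1)
        print("Z=0 cells:", (N11 - a, N01 - b, N10 - c, N00 - d),
              "rates:", r0)
        break
\end{verbatim}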
Note that a variable is a criterion that makes a partition of the collection of subjects (the observed sample, in this case). Therefore, there can be many variables that can be defined on a given sample. However, such a $Z$ may be only an algebraic and hypothetical, if not subjective, variable unless it is possible to give a real meaning to it, for example, by finding $Z$ as a hidden variable. So, we argue that the occurrence of the paradox in this case is completely dependent on finding such a $Z$ that has a real subject domain meaning; it is up to the subject domain expert to decide if $Z$ makes sense. Conditional probabilities of occurrence of the paradox are discussed in \cite{PP2009}. However, our argument is that the possibility of the paradox is domain dependent, i.e., it is a matter of finding a meaningful $Z$ in the context. It has no relation whatsoever to the probability $P(X \vert Y)$, which is positive. Note that researchers are often concerned about whether such a $Z$ exists. In fact, algebraically there can be infinitely many $Z$ for a non-extreme $P(X\vert Y)$. On the other hand, when we know the conditional probabilities $P(X \vert Y, Z)$, we can find simple sufficient conditions to avoid the occurrence of the paradox. One can extend the above discussion to the case of a multinary $Z$, where the geometrical illustration is not easy. Though it is reasonable to believe that for the occurrence of the paradox there should be some causal relations between $X$ and $Z$ or $Y$ and $Z$, one may find other situations where there are no such obvious causal relations. We discuss one example of the latter taken from the current philosophical literature, that is, a phenomenon by design. We also discuss another example of the paradox in a predictive context. In some early literature it was discussed how to proceed with a new case in the paradoxical context, i.e., how a new case should be treated: either using the marginal relationship between $X$ and $Y$ or the conditional relationships between $X$ and $Y$ given $Z$, as shown to us in the observed data. Often such discussions on resolving the paradox are carried out using certain criteria and the so-called causal calculus found in graphical modeling theory (see \cite{PJ09}). We avoid those discussions here. \section{Algebraic and Graphical Explanation} \label{sec:alge} Now let us see how the paradox can happen using the algebra of probabilities. Suppose that we have the above instance of the paradox, and let us multiply the inequalities (\ref{cdxy_z}) and (\ref{cdxy_z1}) with the inequalities $ p(z) \geq (p(z))^2$ and $p(z') \geq (p(z'))^2$ respectively; then we get \begin{eqnarray} p(x,y,z) \geq p(x,z)p(y,z) \\ p(x,y,z') \geq p(x,z')p(y,z') \end{eqnarray} Adding them together gives \begin{eqnarray} p(x,y) \geq p(x,z)p(y,z)+ p(x,z')p(y,z') \end{eqnarray} But $p(x)p(y) = (p(x,z)+p(x,z'))(p(y,z)+p(y,z'))$ implies that \begin{eqnarray} p(x,y) \geq p(x)p(y) - (p(x,z)p(y,z')+ p(x,z')p(y,z)) \label{pxy} \end{eqnarray} Since the term $(p(x,z)p(y,z')+ p(x,z')p(y,z))$ is non-negative, the lower bound in expression (\ref{pxy}) can lie below $p(x)p(y)$, so sometimes we get the result $p(x,y) \leq p(x)p(y)$, which is the inequality (\ref{pxy_xy}). So, algebraically the paradox is simple and can occur sometimes. Furthermore, alternatively, when $p(z \vert y) \leq p(z \vert y')$ it follows that $p(z' \vert y) \geq p(z' \vert y')$.
By multiplying the inequalities (\ref{pxy1z}) and (\ref{pxy1z1}) with these restrictions respectively, we get \begin{eqnarray} p(x,z \vert y) \lesseqgtr p(x,z \vert y') \\ p(x,z' \vert y) \geq p(x,z' \vert y') \end{eqnarray} And now, adding them together, we get \begin{eqnarray} p(x \vert y) \lesseqgtr p(x \vert y'). \end{eqnarray} This implies that either inequality can occur; the paradox happens if the result is $p(x \vert y) < p(x \vert y') $. Note that the other case of $p(z \vert y) > p(z \vert y')$ (which implies that $p(z' \vert y) \leq p(z' \vert y')$) results in the same conclusion. So it is clear that for the occurrence of the paradox it is necessary that $Y$ and $Z$ are `sufficiently' dependent. Let us consider the case given by the expressions (\ref{pxy1z}) and (\ref{pxy1z1}), say, Case 1, where the paradox occurs when expression (\ref{px1y}) is true. Then $p(x \vert y) < p(x \vert y') $ gives $ p(z' \vert y)p(x \vert y,z') + p(z \vert y)p(x \vert y,z) < p(z' \vert y')p(x \vert y',z') + p(z \vert y')p(x \vert y',z) $. The left-hand side of the inequality is the weighted average of the conditional probabilities $p(x \vert y,z')$ and $p(x \vert y,z)$, where the weights sum to $1$ (i.e., $p(z' \vert y) + p(z \vert y)=1$), and similarly for the right-hand side. Since the weighted average of two numbers with positive weights is contained in the interval whose end points are the two numbers, it is easy to see that the two intervals corresponding to these four conditional probabilities should overlap for the paradox to be possible. See Figure~\ref{fig:fig1} for this case of the relationships among the conditional probabilities, where the two intervals are marked on two parallel horizontal lines. It is necessary, but not sufficient, that $ \min \{p(x \vert y, z), p(x \vert y, z') \} < \max \{ p(x \vert y', z), p(x \vert y',z')\}$ for the paradox to occur. Therefore a sufficient condition for non-occurrence of the paradox is that $ \min \{p(x \vert y, z), p(x \vert y, z') \} \geq \max \{ p(x \vert y', z), p(x \vert y',z')\}$. However, under the necessary condition for the paradox, not having a certain dependence between $Y$ and $Z$ avoids the paradox (as we have seen in the example). In the following we assume that the necessary condition holds. It is simple yet important to note that the value $p(x \vert y)$ dissects the segment of positive length $p(x \vert y,z)-p(x \vert y,z')$ according to the ratio $p(z \vert y):p(z' \vert y)$; \begin{align*} \{p(z \vert y)+p(z' \vert y)\}p(x \vert y)&=p(z' \vert y)p(x \vert y,z') + p(z \vert y)p(x \vert y,z) \\ p(z \vert y) \{p(x \vert y,z)-p(x \vert y) \} &= p(z' \vert y) \{p(x \vert y)-p(x \vert y,z') \} \\ \frac{p(x \vert y)-p(x \vert y,z')}{p(x \vert y,z)-p(x \vert y)} &=\frac{p(z \vert y)}{p(z' \vert y)} \end{align*} And similarly, the value $p(x \vert y')$ dissects the segment of positive length $p(x \vert y',z)-p(x \vert y',z')$ according to the ratio $p(z \vert y'):p(z' \vert y')$. In Figure~\ref{fig:fig1} those ratios are marked with braces.
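For the numbers of Table~\ref{tab:ex1}, this dissection can be checked directly. The following Python sketch (illustrative only; the variable names are ours) verifies that $p(x\vert y)$ divides the segment from $p(x\vert y,z')$ to $p(x\vert y,z)$ in the ratio $p(z\vert y):p(z'\vert y)$, and similarly for $y'$.
\begin{verbatim}
# Dissection identity with the values from Example 1
# (z = male, z' = female, y = treated, y' = untreated, x = recovery).
p_x_yz, p_x_yz1 = 0.7, 0.3              # p(x|y,z), p(x|y,z')
p_z_y = 0.25                            # p(z|y), so p(z'|y) = 0.75
p_x_y = p_z_y * p_x_yz + (1 - p_z_y) * p_x_yz1
print("p(x|y) =", p_x_y)                # 0.4
print("ratio:", (p_x_y - p_x_yz1) / (p_x_yz - p_x_y),
      "=", p_z_y / (1 - p_z_y))         # both 1/3

p_x_y1z, p_x_y1z1 = 0.6, 0.2            # p(x|y',z), p(x|y',z')
p_z_y1 = 0.75                           # p(z|y')
p_x_y1 = p_z_y1 * p_x_y1z + (1 - p_z_y1) * p_x_y1z1
print("p(x|y') =", p_x_y1)              # 0.5
print("ratio:", (p_x_y1 - p_x_y1z1) / (p_x_y1z - p_x_y1),
      "=", p_z_y1 / (1 - p_z_y1))       # both 3
\end{verbatim}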
From the above expression of the weighted averages: \begin{align*} p(x \vert y,z') & + p(z \vert y) \big\{ p(x \vert y,z) - p(x \vert y,z') \big\} \\ & < p(x \vert y',z') +p(z \vert y') \big\{ p(x \vert y',z) - p(x \vert y',z') \big\} \\ p(x \vert y,z') & + \Bigg\{ \frac{p(z \vert y)}{p(z \vert y)+p(z' \vert y)} \Bigg\} \big\{ p(x \vert y,z) - p(x \vert y,z') \big\} \\ & < p(x \vert y',z') +\Bigg\{\frac{p(z \vert y')}{p(z \vert y')+p(z' \vert y')}\Bigg\} \big\{ p(x \vert y',z) - p(x \vert y',z') \big\} \end{align*} It is clear that the paradox occurs when the fraction $p(z \vert y)$ of the length $p(x \vert y,z) -p(x \vert y,z')$, added to $p(x \vert y,z')$, is smaller than the fraction $p(z \vert y')$ of the length $p(x \vert y',z) -p(x \vert y',z')$, added to $p(x \vert y',z')$. In other words, given that the necessary condition for the paradox is satisfied, the occurrence of the paradox depends on the conditional probability $P(Z \vert Y )$, i.e., the dependence between $Y$ and $Z$ (and equivalently $P(Z \vert X )$). That is, in this case, for the occurrence of the paradox, $p(z \vert y)$ should be sufficiently smaller than $p(z \vert y')$, written $p(z \vert y)\ll p(z \vert y')$ as previously. Note that the other three cases, namely, Case 2: $p(x \vert y,z)< p(x \vert y,z')$ and $p(x \vert y',z)< p(x \vert y',z')$, Case 3: $p(x \vert y,z)< p(x \vert y,z')$ and $p(x \vert y',z')< p(x \vert y',z)$, and Case 4: $p(x \vert y,z')< p(x \vert y,z)$ and $p(x \vert y',z)< p(x \vert y',z')$, can be treated similarly. Furthermore, for the more general case where $Z$ takes more than two values, one can easily obtain all the above algebraic relations, but it may be difficult to mark the corresponding conditional probability ratios in the geometric figure due to overlapping distances. \begin{figure} \begin{center} \caption{An occurrence of the Simpson's paradox: for a probability $p$, the indication $\{p\}$:$\{1-p\}$ means that the lengths of the two line segments on which $\{p\}$ and $\{1-p\}$ appear are in the ratio $p:1-p$. \label{fig:fig1}} \begin{tikzpicture}[scale=10] \draw [thick, gray, -] (0,0) -- (0,0.5); \draw [thick, gray, -] (0,0) -- (1,0); \node [below] at (1,0) {$1$}; \node [below] at (0,0) {$0$}; \draw [thick, gray, -] (0,0.5) -- (1,0.5); \draw [thick, gray, -] (1,0) -- (1,0.5); \draw [thick, gray, -] (0.1,0) -- (0.2,0.5); \node [below] at (0.15,0) {$p(x \vert y',z')$}; \node [below] at (0.75,0) {$p(x \vert y',z)$}; \draw [thick, gray, -] (0.7,0) -- (0.9,0.5); \node [above] at (0.2,0.5) {$p(x \vert y,z')$}; \node [above] at (0.9,0.5) {$p(x \vert y,z)$}; \draw [thick, gray, -] (0.52,0) -- (0.4,0.5); \node [below] at (0.5,0) {$p(x \vert y')$}; \node [above] at (0.4,0.5) {$p(x \vert y)$}; \node [above] at (0.4,0) {$ \{ p(z \vert y') \}: $}; \node [above] at (0.62,0) {$ \{p(z' \vert y') \} $}; \node [below] at (0.3,0.5) {$ \{ p(z \vert y) \}: $}; \node [below] at (0.5,0.5) {$ \{p(z' \vert y) \} $}; \end{tikzpicture} \end{center} \end{figure} \section{Causal Confounder or Reversal Attribute} \label{sec:cau} Now we turn to some of the philosophical literature about the paradox. Recently, in \cite{BP10} it is argued that there can be completely non-causal cases of the paradox, though it is shown in \cite{PJ09} and \cite{AO08} that it can be resolved by considering causality, implying that the solution to the paradox needs a causal explanation.
Furthermore, in \cite{AO08} it is argued that the paradox is a problem of covariate selection and adjustment (when to control for a covariate or not) in the causal analysis of non-experimental data. Note that these types of discussions are important for statisticians too. Following their arguments, one can create a case such as the following. Suppose that we have two packs of cards. Let the variable pack refer to the variable sex ($Z$) in our Example~\ref{tab:ex1}, with the value 'pack 1' corresponding to 'male' and the value 'pack 2' to 'female'. On each card a circle is drawn, either big or small, using one of two colors, red or blue. Let the variable size refer to taking the treatment ($Y$), with 'big' corresponding to 'treatment' and 'small' to 'non-treatment'. And let the variable color refer to recovery ($X$), with 'red' corresponding to 'recover' and 'blue' to 'not recover'. Assume that we randomly draw cards one at a time with replacement from the packs. Then the probability of getting a red circle when it is big is greater than that when it is small for each of the packs separately, i.e., $P$(red $\vert$ big, pack 1) = 0.7 $ > P$(red $\vert$ small, pack 1) = 0.6 and $P$(red $\vert$ big, pack 2) = 0.3 $ > P$(red $\vert$ small, pack 2) = 0.2. But if we pool the cards of the two packs together, then the probability of getting a red circle when it is big is smaller than that when it is small, i.e., $P$(red $\vert$ big) = 0.4 $ < P$(red $\vert$ small) = 0.5, so the paradox seems to occur. Arguably, it is not natural that the size of the circle causes its color, or vice versa, but there is a dependence between the size and the color of the circle in this context, and similarly between pack and color and between pack and size. So, naturally and simply, one can argue against causality in this context. However, causality is a difficult concept to discuss here, and such discussions are often inevitable in observational data analysis. The context can be an entirely predictive one; however, it is so by design. And the paradox may not be resolved as long as it is not given how the card is drawn: whether it is selected randomly from one of the packs or from the pooled set of cards. When it is given how the card is drawn, the paradox is immediately resolved. That is, in the case of missing information on how the card is selected, the paradox seems to exist. But one can ideally calculate the desired probabilities, for example, by assuming some probability for pooling the two card packs, such as $0.5$ (so that not pooling has probability $0.5$), and similarly for selecting a pack when the card packs are not pooled (along with all the other conditional probabilities given). So, the paradox can be resolved in this predictive context by looking for any missing information in the context, i.e., finding it exactly or assuming some probability for it. Though this is not a complete proof of solving the paradox in predictive contexts, we argue that the paradox seems to exist as long as we are faced with missing information. If one is capable of assuming correct probabilities for the missing details of the context, or of finding them exactly, then there is no paradox. For clarity, suppose that it is equally likely that the card is selected from one of the two packs or from the pooled pack, and furthermore, in the former case it is also equally likely that it is selected from either of the two packs.
And let these possibilities be connected with a random variable, say, $T$, where $P(T=1)=0.25$, $P(T=2)=0.25$ and $P(T=3)=0.5$. Then, if the selected card has a big circle on it, the probability that the circle is red is \begin{eqnarray*} p(\textrm{red} \vert \textrm{big}) &=& \sum_t p(\textrm{red}, T=t \vert \textrm{big}) =\sum_t p(\textrm{red} \vert \textrm{big},T=t) p(T=t \vert \textrm{big}) \\ &=& \sum_t p(\textrm{red} \vert \textrm{big},T=t) p(T=t ) \\ &=& p(\textrm{red} \vert \textrm{big}, \textrm{pack 1}) \times 0.25 + p(\textrm{red} \vert \textrm{big}, \textrm{pack 2}) \times 0.25 \\ & & \qquad + p(\textrm{red} \vert \textrm{big}, \textrm{pooled pack} ) \times 0.5 \\ &=& 0.7 \times 0.25 + 0.3 \times 0.25 + 0.4 \times 0.5 = 0.45 \end{eqnarray*} Likewise, any other desired probability can be calculated. Note that here $p(\textrm{red} \vert \textrm{big})$ is calculated for the case where the action of selection has taken place (i.e., considering all possible values of $T$), so $T$ is assumed to be independent of everything else. Furthermore, such predictive contexts have no relevance to causal contexts, where there is the possibility of assigning values of $Y$ to the new subject. Let us consider another predictive context as follows. In a certain newly created city, there are people who are either white or black, coming from either the northern part or the southern part of the country, and speaking one of two languages, say, $A$ and $B$, as their mother tongue. Suppose we have taken a random sample of people from the city population and assume that the corresponding counts are as in Table~\ref{tab:ex1}. Here we have taken skin color as the variable treatment ($Y$), where 'white' corresponds to 'treatment' and 'black' to 'non-treatment'; the mother tongue as the variable recovery ($X$), where 'language A' corresponds to 'recover' and 'language B' to 'not recover'; and the region that people come from as the variable sex ($Z$), where 'male' corresponds to 'north' and 'female' to 'south'. Imagine that all of the city population are new arrivals, and therefore any selected person cannot be intervened upon to have any desired value of any of these three variables. That is, for any given person from the city population we can only make predictions about these three variables, so it is completely a predictive context. Now, for a randomly selected person, say, a white person, whether or not the probability that his or her mother tongue is language A is greater than that of language B depends on how the person is selected: either from all the people who originally come from the same region, or from all the people living in the city irrespective of their region of origin. That is, we need to know the selection mechanism to answer the predictive problem, or otherwise we need to assume a certain probability for the selection mechanism, as in the card example above. Now consider the following predictive case. Suppose someone from the city population comes forward and challenges us to predict his or her mother tongue, perhaps knowing about the paradoxical conclusion in our random sample of data. Then it can be hard for us to learn about the person's self-selection, as well as to assume some probability for its mechanism. However, it is always possible for us to assume a probability as required and make the prediction, as in the case of the card packs.
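The mixture computation above is trivially reproduced in code. The following Python sketch (illustrative only) evaluates $p(\textrm{red}\vert\textrm{big})$ under the assumed distribution of $T$.
\begin{verbatim}
# p(red | big) under the assumed selection mechanism T:
# T=1 (pack 1), T=2 (pack 2), T=3 (pooled pack).
p_T = {1: 0.25, 2: 0.25, 3: 0.5}
p_red_big = {1: 0.7, 2: 0.3, 3: 0.4}  # p(red | big, T=t) from the example
print(sum(p_T[t] * p_red_big[t] for t in p_T))  # 0.45
\end{verbatim}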
In both predictive contexts above, the paradox is resolved immediately when we know the selection mechanism for the new subject, i.e., when there is no missing information. This is similar to knowing whether or not the variable $Z$ causally affects the variable $Y$ in the causal contexts. But the causal contexts have other possibilities even within this case. In the predictive contexts we can make predictions even when it is not known how the selection is done. But in the causal context of the paradox, we cannot perform similar tasks, as we are required to assign a value to $Y$; we cannot intervene on the new subject more than once. For this reason, the paradox has more significant aspects in causal contexts than in predictive contexts. If someone finds the paradox surprising in a predictive context, it is due to difficulties in understanding how the ratios for the pooled group are formed from those of its subgroups. \section{Conclusion} Here we have given an explicit mathematical explanation of the Simpson's paradox using simple algebra and a geometric figure. These details help empirical researchers and students of statistics to understand the nature of the paradox. We have seen that it is always possible to define a third variable, say, $Z$, algebraically for any non-extreme dependence between two other variables, say, $X$ and $Y$ (that is, when $0 < P(X \vert Y) < 1$), so that the paradox occurs. So, for such a given context, the meaningfulness of the paradox depends on identifying $Z$ as a hidden variable with a real practical meaning, which is up to the subject domain expert. On the other hand, it is easy to see the algebraic conditions for avoiding the paradox when obtaining $P(X \vert Y)$ from a known $P(X \vert Y,Z)$. Finally, we have discussed some predictive contexts where the paradox can occur. It occurs when we do not have sufficient information about the context. However, in the literature it is generally accepted that the paradox can be resolved with causal knowledge. Of course, there exist causal contexts of the paradox that require causal knowledge to resolve. Causal contexts are harder to resolve than predictive contexts, where one can assume some probabilities to make predictions so that the overall predictive accuracy is acceptable. \section{Acknowledgments} The author gratefully acknowledges the financial support of the Swedish Research Council through the Swedish Initiative for Microdata Research in the Medical and Social Sciences (SIMSAM) and the Swedish Research Council for Health, Working Life and Welfare (FORTE).
{ "timestamp": "2018-04-24T02:05:31", "yymm": "1804", "arxiv_id": "1804.07940", "language": "en", "url": "https://arxiv.org/abs/1804.07940", "abstract": "Well known Simpson's paradox is puzzling and surprising for many, especially for the empirical researchers and users of statistics. However there is no surprise as far as mathematical details are concerned. A lot more is written about the paradox but most of them are beyond the grasp of such users. This short article is about explaining the phenomenon in an easy way to grasp using simple algebra and geometry. The mathematical conditions under which the paradox can occur are made explicit and a simple geometrical illustrations is used to describe it. We consider the reversal of the association between two binary variables, say, $X$ and $Y$ by a third binary variable, say, $Z$. We show that it is always possible to define $Z$ algebraically for non-extreme dependence between $X$ and $Y$, therefore occurrence of the paradox depends on identifying it with a practical meaning for it in a given context of interest, that is up to the subject domain expert. And finally we discuss the paradox in predictive contexts since in literature it is argued that the paradox is resolved using causal reasoning.", "subjects": "Other Statistics (stat.OT)", "title": "Viewing Simpson's Paradox", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692298333416, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.707958505972519 }
https://arxiv.org/abs/1702.07241
Kalman Filter and its Modern Extensions for the Continuous-time Nonlinear Filtering Problem
This paper is concerned with the filtering problem in continuous-time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter which provides an exact solution for the linear Gaussian problem, (ii) the ensemble Kalman-Bucy filter (EnKBF) which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems, and (iii) the feedback particle filter (FPF) which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction, potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of non-uniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
\section{Introduction} \label{sec:intro} \begin{table*}[t] \centering \begin{tabular}{|c|c|c|} \hline KBF & Kalman-Bucy Filter & Equations~\eqref{eq:mean_KF}-\eqref{eq:var_KF}\\ \hline EKBF & Extended Kalman-Bucy Filter & Equations~\eqref{eq:mean-EKF}-\eqref{eq:var-EKF} \\ \hline \multirow{2}{*}{EnKBF} & (Stochastic) Ensemble Kalman-Bucy Filter & Equation~\eqref{Stochastic EnKBF} \\ & (Deterministic) Ensemble Kalman-Bucy Filter & Equation~\eqref{detEnKBF}\\ \hline FPF & Feedback Particle Filter & Equation~\eqref{eqn:particle_filter_nonlin_intro} \\\hline \end{tabular} \caption{Nomenclature for the continuous-time filtering algorithms} \label{tab:nomen} \end{table*} Since the pioneering work of Kalman in the 1960s, sequential state estimation has been extended to application areas far beyond its original aims, such as numerical weather prediction \cite{sr:kalnay} and oil reservoir exploration (history matching) \cite{sr:Oliver2008}. These developments have been made possible by a clever combination of Monte Carlo techniques with Kalman-like techniques for assimilating observations into the underlying dynamical models. The most prominent of these algorithms are the ensemble Kalman filter (EnKF), the randomized maximum likelihood (RML) method and the unscented Kalman filter (UKF), invented independently by several research groups \cite{sr:kitanidis95,jdw:BurgersLeeuwenEvensen1998,sr:houtekamer01,sr:Julier97anew} in the 1990s. The EnKF in particular can be viewed as a cleverly designed random dynamical system of interacting particles which is able to approximate the exact solution with a relatively small number of particles. This interacting particle perspective has led to many new filter algorithms in recent years which go beyond the inherent Gaussian approximation of an EnKF during the data assimilation (update) step \cite{jdw:ReichCotter2015}. In this paper, we review the interacting particle perspective in the context of continuous-time filtering problems and demonstrate its close relation to Kalman's and Bucy's original feedback control structure of the data assimilation step. More specifically, we highlight the feedback control structure of three classes of algorithms for approximating the posterior distribution: (i) the classical Kalman-Bucy filter which provides an exact solution for the linear Gaussian problem, (ii) the ensemble Kalman-Bucy filter (EnKBF) which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems, and (iii) the feedback particle filter (FPF) which represents an extension of the EnKBF and furthermore provides for a consistent solution of the general nonlinear, non-Gaussian problem. A closely related goal is to provide a comparison between these algorithms. A common feature of the three algorithms is the gain times error formula to implement the update step in the filter. The difference is that while the Kalman-Bucy filter is an exact algorithm, the two particle-based algorithms are approximate, with error decreasing to zero as the number of particles $N$ increases to infinity. Algorithms with this property are said to be consistent. In the class of interacting particle algorithms discussed, the FPF represents the most general solution to the nonlinear non-Gaussian filtering problem. The challenge in implementing the FPF lies in the approximation of the ``gain function'' used in the update step. The gain function equals the Kalman gain in the linear Gaussian setting and must be numerically approximated in the general setting.
One particular closed-form approximation is the constant gain approximation. In this case, the FPF is shown to reduce to the EnKBF algorithm. The EnKBF naturally extends to nonlinear dynamical systems and its discrete-time versions have become very popular in recent years, with applications to, for example, atmosphere-ocean dynamics and oil reservoir exploration. In the discrete-time setting, development and application of closely related particle flow algorithms have also been a subject of recent interest, e.g.,~\cite{daum10,daum2017generalized,ColemanPosterior,MarzoukBayesian,2015arXiv150908787H,yang_discrete}. The outline of the remainder of this paper is as follows: The continuous-time filtering problem and the classic Kalman-Bucy filter are summarized in Sections \ref{sec:problem} and \ref{sec:KBF}, respectively. The Kalman-Bucy filter is then put into the context of interacting particle systems in the form of the EnKBF in Section \ref{sec:EnKF}. Section \ref{sec:FPF} together with Appendices \ref{sec:theory} and \ref{sec:lp_justification} provides a consistent definition of the FPF and a discussion of alternative approximation techniques which lead to consistent approximations to the filtering problem as the number of particles, $N$, goes to infinity. It is shown that the EnKBF can be viewed as an approximation to the FPF. Four algorithmic approaches to gain function approximation are described and their relationship discussed. The performance of the four algorithms is numerically studied and compared for an example problem in Section~\ref{sec:numerics}. The paper concludes with a discussion of some open problems in Section~\ref{sec:conc}. The nomenclature for the filtering algorithms described in this paper appears in Table~\ref{tab:nomen}. \section{Problem Statement} \label{sec:problem} In the continuous-time setting, the model for the nonlinear filtering problem is described by the nonlinear stochastic differential equations (sdes): \begin{subequations} \begin{align} \text{Signal:} \quad \quad \,\mathrm{d} X_t &= a(X_t) \,\mathrm{d} t + \sigma(X_t) \,\mathrm{d} B_t,\quad\quad X_0 \sim p_0^* \label{eqn:Signal_Process} \\ \text{Observation:} \quad \quad \,\mathrm{d} Z_t &= h(X_t)\,\mathrm{d} t + \,\mathrm{d} W_t \label{eqn:Obs_Process} \end{align} \end{subequations} where $X_t\in\mathbb{R}^d$ is the (hidden) state at time $t$, the initial condition $X_0$ is sampled from a given prior density $p_0^*$, $Z_t \in\mathbb{R}^m$ is the observation or the measurement vector, and $\{B_t\}$, $\{W_t\}$ are two mutually independent Wiener processes taking values in $\mathbb{R}^d$ and $\mathbb{R}^m$. The mappings $a(\cdot): \mathbb{R}^d \rightarrow \mathbb{R}^d$, $h(\cdot): \mathbb{R}^d \rightarrow \mathbb{R}^m$ and $\sigma(\cdot):\mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$ are known $C^1$ functions. The covariance matrix of the observation noise $\{W_t\}$ is assumed to be positive definite. The function $h$ is a column vector whose $j$-th coordinate is denoted as $h_j$ (i.e., $h=(h_1,h_2,\hdots,h_m)^{\rm T}$). By scaling, it is assumed without loss of generality that the covariance matrices associated with $\{B_t\}$, $\{W_t\}$ are identity matrices. Unless otherwise noted, the stochastic differential equations (sde) are expressed in It\^{o} form. Table~\ref{tab:symbols-filter} includes a list of symbols used in the continuous-time filtering problem.
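For concreteness, a sample path of the signal-observation model~\eqref{eqn:Signal_Process}-\eqref{eqn:Obs_Process} can be generated with an Euler-Maruyama discretization, as in the following illustrative Python sketch (the drift $a$, observation function $h$, diffusion coefficient and step size are placeholder choices of ours, not prescribed by the text).
\begin{verbatim}
# Euler-Maruyama simulation of dX = a(X)dt + sigma(X)dB, dZ = h(X)dt + dW.
# Scalar example (d = m = 1) with placeholder a, h, sigma.
import numpy as np

rng = np.random.default_rng(0)
a = lambda x: -x            # placeholder drift
h = lambda x: x             # placeholder observation function
sigma = lambda x: 0.5       # placeholder (constant) diffusion coefficient

dt, T = 1e-3, 5.0
n = int(T / dt)
X = np.zeros(n + 1)         # signal path
Z = np.zeros(n + 1)         # integrated observation path
X[0] = rng.normal()         # sample from the prior (here standard normal)
for k in range(n):
    dB = np.sqrt(dt) * rng.normal()
    dW = np.sqrt(dt) * rng.normal()
    X[k + 1] = X[k] + a(X[k]) * dt + sigma(X[k]) * dB
    Z[k + 1] = Z[k] + h(X[k]) * dt + dW
\end{verbatim}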
In applications, the continuous-time filtering models are often expressed as: \begin{subequations} \begin{align} \frac{\,\mathrm{d} X_t}{\,\mathrm{d} t} &= a(X_t) + \sigma(X_t) \dot{B}_t \label{eqn:Signal_Process_1} \\ Y_t & := \frac{\,\mathrm{d} Z_t}{\,\mathrm{d} t} = h(X_t) + \dot{W}_t \label{eqn:Obs_Process_1} \end{align} \end{subequations} where $\dot{B}_t$ and $\dot{W}_t$ are mutually independent white noise processes (Gaussian noise) and $Y_t \in\mathbb{R}^m$ is the vector-valued observation at time $t$. The sde-based model is preferred here because of its mathematical rigor. Any sde involving $Z_t$ is converted into an ode involving $Y_t$ by formally dividing the sde by $\,\mathrm{d} t$ and replacing $\frac{\,\mathrm{d} Z_t}{\,\mathrm{d} t}$ by $Y_t$ (see also Remark~\ref{rem:remStrato}). \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline Variable & Notation & Model\\ \hline State & $X_t$ & Eq.~\eqref{eqn:Signal_Process} \\ Process noise & $B_t$ & Wiener process \\ Measurement & $Z_t$ & Eq.~\eqref{eqn:Obs_Process}\\ Measurement noise & $W_t$ & Wiener process \\\hline \end{tabular} \caption{Symbols for the continuous-time filtering problem} \label{tab:symbols-filter} \end{table} \medskip The objective of filtering is to estimate the posterior distribution of $X_t$ given the time history of observations ${\cal Z}_t := \sigma(Z_s: 0\le s \le t)$. The density of the posterior distribution is denoted by $p^*$, so that for any measurable set $A\subset \mathbb{R}^d$, \begin{equation} \int_{x\in A} p^*(x,t)\, \,\mathrm{d} x = {\sf P}\{ X_t \in A\mid {\cal Z}_t \} \nonumber \end{equation} \medskip One example of particular interest is when the mappings $a(x)$ and $h(x)$ are linear, $\sigma(x)$ is a constant matrix that does not depend upon $x$, and the prior density $p_0^*$ is Gaussian. The associated problem is referred to as the linear Gaussian filtering problem. For this problem, the posterior density is known to be Gaussian. The resulting filter is said to be finite-dimensional because the posterior is completely described by finitely many statistics -- the conditional mean and variance in the linear Gaussian case. \medskip For the general nonlinear non-Gaussian case, however, the filter is infinite-dimensional because it defines the evolution, in the space of probability measures, of $\{p^*(\,\cdot\, ,t) : t\ge 0\}$. The particle filter is a simulation-based algorithm to approximate the posterior. The key step is the construction of $N$ interacting stochastic processes $\{X^i_t : 1\le i \le N\}$, where $X^i_t \in \mathbb{R}^d$ is the state of the $i$-th particle at time $t$. For each time $t$, the empirical distribution formed by the particle population is used to approximate the posterior distribution. It is defined for any measurable set $A\subset\mathbb{R}^d$ by, \begin{equation} \label{eqn:empirical} p^{(N)}(A,t) = \frac{1}{N}\sum_{i=1}^N \text{\rm\large 1}\{ X^i_t\in A\} \,.\nonumber \end{equation} where $\text{\rm\large 1}\{x \in A\}$ is the indicator function (equal to $1$ if $x\in A$ and $0$ otherwise). The first interacting particle representation of the continuous-time filtering problem can be found in \cite{crisan2007,crisan10}. The close connection of such interacting particle formulations to the gain factor and innovation structure of the classic Kalman filter has been made explicit starting with \cite{taoyang_cdc11,taoyang_TAC12} and has led to the FPF formulation considered in this paper.
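As a small illustration (with placeholder particle values, not data from the paper), the empirical distribution $p^{(N)}$ can be evaluated for an interval $A$ as follows.
\begin{verbatim}
# Empirical approximation p^(N)(A,t) = (1/N) sum_i 1{X_t^i in A}
# for an interval A = [a, b]; the ensemble values are placeholders.
import numpy as np

particles = np.random.default_rng(2).normal(size=1000)  # ensemble {X_t^i}
a, b = 0.0, 1.0                                         # the set A = [a, b]
p_emp = np.mean((particles >= a) & (particles <= b))
print("p^(N)(A,t) =", p_emp)
\end{verbatim}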
\medskip \noindent \textbf{Notation:} The density for a Gaussian random variable with mean $m$ and variance $\Sigma$ is denoted as ${\cal N}(m,\Sigma)$. For vectors $x,y\in\mathbb{R}^d$, the dot product is denoted as $x\cdot y$ and $|x|:=\sqrt{x\cdot x}$; $x^{\rm T}$ denotes the transpose of the vector. Similarly, for a matrix ${\sf K}$, ${\sf K}^{\rm T}$ denotes the matrix transpose. For two sequences $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=1}^\infty$, the big $O$ notation $a_n=O(b_n)$ means $\exists \, n_0 \in \mathbb{N}$ and $c>0$ such that $|a_n| \leq c|b_n|$ for $n>n_0$. \section{Kalman-Bucy Filter} \label{sec:KBF} \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline Variable & Notn. \& Defn. & Model\\ \hline Cond. mean & $\hat{X}_t={\sf E}[X_t|{\cal Z}_t]$ & Eq.~\eqref{eq:mean_KF} \\ Cond. var. & $\Sigma_t= {\sf E} [(X_t-\hat{X}_t) (X_t-\hat{X}_t)^T|{\cal Z}_t]$ & Eq.~\eqref{eq:var_KF} \\ Kalman gain & ${\sf K}_t=\Sigma_tH^{\rm T}$ & Eq.~\eqref{eq:Kalman_gain} \\\hline \end{tabular} \caption{Symbols for the Kalman filter} \label{tab:symbols-KF} \end{table} Consider the linear Gaussian problem: The mappings $a(x) = A\,x$ and $h(x) = H\,x$ where $A$ and $H$ are $d \times d$ and $m \times d$ matrices; the process noise covariance $\sigma(x) = \sigma$, a constant $d \times d$ matrix; and the prior density is Gaussian, denoted as ${\cal N}(\hat{X}_0,\Sigma_0)$. For this problem, the posterior density is known to be Gaussian, denoted as ${\cal N}(\hat{X}_t,\Sigma_t)$, where $\hat{X}_t$ and $\Sigma_t$ are the conditional mean and variance, i.e., $\hat{X}_t:= {\sf E} [X_t|{\cal Z}_t]$ and $\Sigma_t:= {\sf E} [(X_t-\hat{X}_t) (X_t-\hat{X}_t)^T|{\cal Z}_t]$. Their evolution is described by the finite-dimensional Kalman-Bucy filter: \begin{subequations} \begin{align} \,\mathrm{d} \hat{X}_t &= A \hat{X}_t \,\mathrm{d} t + {\sf K}_t \Bigl(\,\mathrm{d} Z_t- H \hat{X}_t \,\mathrm{d} t \Bigr) \label{eq:mean_KF} \\[.1cm] \frac{\,\mathrm{d} \Sigma_t}{\,\mathrm{d} t} &= A \Sigma_t + \Sigma_t A^T + \sigma \sigma^T - \Sigma_t H^T H \Sigma_t \label{eq:var_KF} \end{align} \end{subequations} where \begin{equation} {\sf K}_t:=\Sigma_t H^{\rm T} \label{eq:Kalman_gain} \end{equation} is referred to as the Kalman gain, and the filter is initialized with the initial conditions $\hat{X}_0$ and $\Sigma_0$ of the prior density. Table~\ref{tab:symbols-KF} includes a list of symbols used for the Kalman filter. The evolution equation for the mean is an sde because of the presence of the stochastic forcing term $Z_t$ on the right-hand side. The evolution equation for the variance $\Sigma_t$ is an ode that does not depend upon the observation process. The Kalman filter is one of the most widely used algorithms in engineering.
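A minimal Euler discretization of the Kalman-Bucy equations~\eqref{eq:mean_KF}-\eqref{eq:var_KF} for a scalar model might look as follows. This is an illustrative sketch with placeholder model parameters; it is not the authors' code.
\begin{verbatim}
# Kalman-Bucy filter (Euler discretization) for a scalar linear model
# dX = A X dt + sigma dB, dZ = H X dt + dW.  Illustrative sketch.
import numpy as np

rng = np.random.default_rng(1)
A, H, sig = -1.0, 1.0, 0.5
dt, n = 1e-3, 5000
X = rng.normal()                         # true (hidden) state, X_0 ~ N(0,1)
Xhat, Sig = 0.0, 1.0                     # prior mean and variance
for k in range(n):
    dB, dW = np.sqrt(dt) * rng.normal(size=2)
    dZ = H * X * dt + dW                 # observation increment
    X += A * X * dt + sig * dB           # propagate the signal
    K = Sig * H                          # Kalman gain, Eq. (eq:Kalman_gain)
    Xhat += A * Xhat * dt + K * (dZ - H * Xhat * dt)    # Eq. (eq:mean_KF)
    Sig += (2 * A * Sig + sig**2 - Sig**2 * H**2) * dt  # Eq. (eq:var_KF)
print("X =", X, " Xhat =", Xhat, " Sigma =", Sig)
\end{verbatim}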
Although the filter describes the posterior {\em only} in linear Gaussian settings, it is often used as an approximate algorithm even in more general settings, e.g., by defining the matrices $A$ and $H$ according to the Jacobians of the mappings $a$ and $h$: \[ A:= \frac{\partial a}{\partial x} (\hat{X}_t), \quad H:= \frac{\partial h}{\partial x} (\hat{X}_t) \] The resulting algorithm is referred to as the extended Kalman filter: \begin{subequations} \begin{align} \,\mathrm{d} \hat{X}_t &= a(\hat{X}_t) \,\mathrm{d} t + {\sf K}_t \Bigl(\,\mathrm{d} Z_t- h(\hat{X}_t) \,\mathrm{d} t \Bigr) \label{eq:mean-EKF} \\[.1cm] \frac{\,\mathrm{d} \Sigma_t}{\,\mathrm{d} t} &= A \Sigma_t + \Sigma_t A^T + \sigma (\hat{X}_t) \sigma^T(\hat{X}_t) - \Sigma_t H^T H \Sigma_t \label{eq:var-EKF} \end{align} \end{subequations} where ${\sf K}_t=\Sigma_t H^{\rm T}$ is used as the formula for the gain. The Kalman filter and its extensions are recursive algorithms that process measurements in a sequential (online) fashion. At each time $t$, the filter computes an error $\,\mathrm{d} Z_t- H \hat{X}_t \,\mathrm{d} t$ (called the {\em innovation error}) which reflects the new information contained in the most recent measurement. The filter state $\hat{X}_t$ is corrected at each time step via a $(\text{gain} \times \text{error})$ update formula. The error correction feedback structure (see Fig.~\ref{fig:fig_FPF_KF}) is important on account of robustness. A filter is based on an idealized model of an underlying stochastic dynamic process. The self-correcting property of the feedback provides robustness, allowing one to tolerate a degree of uncertainty inherent in any model. The simple intuitive nature of the update formula is invaluable in design, testing and operation of the filter. For example, the Kalman gain is proportional to $H$ which scales with the signal-to-noise ratio of the measurement model. In practice, the gain may be `tuned' to optimize the filter performance. To minimize online computations, an offline solution of the algebraic Riccati equation (obtained after equating the right-hand side of the variance ode~\eqref{eq:var_KF} to zero) may be used to obtain a constant value for the gain. The basic Kalman filter has also been extended to handle filtering problems involving additional uncertainties in the signal model and the observation model. The resulting (approximate) algorithms are referred to as the interacting multiple model (IMM) filter~\cite{Blom_cdc12} and the probabilistic data association (PDA) filter~\cite{Bar-Shalom_IEEE_CSM}, respectively. In the PDA filter, the gain varies based on an estimate of the instantaneous uncertainty in the measurements. In the IMM filter, multiple Kalman filters are run in parallel and their outputs combined to obtain an estimate. One explanation of the feedback control structure of the Kalman filter is based on the duality between estimation and control~\cite{kalman1960}. Although limited to linear Gaussian problems, these considerations also help explain the differential Riccati equation structure for the variance ode~\eqref{eq:var_KF}. \medskip Although widely used, the extended Kalman filter can suffer from stability issues because of the very crude approximation of the nonlinear model.
The observed divergence arises on account of two inter-related reasons: (i) Even with Gaussian process and measurement noise, the nonlinearity of the mappings $a(\cdot)$, $\sigma(\cdot)$ and $h(\cdot)$ can lead to non-Gaussian forms of the posterior density $p^*$; and (ii) the Jacobians $A$ and $H$ used in propagating the covariance can lead to large errors in the approximation of the gain, particularly if the Hessian of these mappings is large. These issues have necessitated the development of the particle-based algorithms described in the following sections. \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline Variable & Notation & Model\\ \hline \multirow{2}{*}{Particle state} & \multirow{2}{*}{$X^i_t$} & Stoch. EnKBF Eq.~\eqref{Stochastic EnKBF} \\ & & Deter. EnKBF Eq.~\eqref{detEnKBF} \\ \hline Empirical variance & $\Sigma_t^{(N)}$ & Eq.~\eqref{empcov} \\ \hline Particle process noise & $B^i_t$ & Wiener process \\\hline Particle meas. noise & $W^i_t$ & Wiener process \\\hline \end{tabular} \caption{Symbols for the ensemble Kalman-Bucy filter} \label{tab:symbols-EnKF} \end{table} \section{Ensemble Kalman-Bucy Filter} \label{sec:EnKF} For pedagogical reasons, the ensemble Kalman-Bucy filter (EnKBF) is best described for the linear Gaussian problem -- also the approach taken in this section. The extension to the nonlinear non-Gaussian problem is then immediate, similar to the extension from the Kalman filter to the extended Kalman filter. Even in linear Gaussian settings, a particle filter may be a computationally efficient option for problems with very large state dimension $d$ (e.g., weather models in meteorology). For large $d$, the computational bottleneck in simulating a Kalman filter arises due to propagation of the covariance matrix according to the differential Riccati equation~\eqref{eq:var_KF}. This computation scales as $O(d^2)$ in memory. In an EnKBF implementation, one replaces the exact propagation of the covariance matrix by an empirical approximation with $N$ particles \begin{align} \qquad \Sigma_t^{(N)} &= \frac{1}{N-1} \sum_{i=1}^N (X_t^i - \hat X_t^{(N)}) (X_t^i- \hat X_t^{(N)})^{\rm T} \label{empcov} \end{align} This computation scales as $O(Nd)$. The same reduction in computational cost can be achieved by a reduced-rank Kalman filter. However, the connection to empirical measures (\ref{eqn:empirical}) is crucial to the application of the EnKBF to nonlinear dynamical systems. The EnKF algorithm was first developed in a discrete-time setting~\cite{jdw:BurgersLeeuwenEvensen1998}. Since then, various formulations of the EnKF have been proposed \cite{jdw:LawStuartZygalakis2015,jdw:ReichCotter2015}. Below we state two continuous-time formulations of the EnKBF. Table~\ref{tab:symbols-EnKF} includes a list of symbols used for these formulations. \subsection{Stochastic EnKBF} The conceptual idea of the stochastic EnKBF algorithm is to introduce a zero-mean perturbation (noise term) in the innovation error to achieve consistency for the variance update.
\subsection{Stochastic EnKBF}
The conceptual idea of the stochastic EnKBF algorithm is to introduce a zero-mean perturbation (noise term) in the innovation error to achieve consistency of the variance update. In the continuous-time stochastic EnKBF algorithm, the particles evolve according to
\begin{equation}
\label{Stochastic EnKBF}
\,\mathrm{d} X_t^i = A X_t^i\,\mathrm{d} t + \sigma \,\mathrm{d} B_t^i + \Sigma^{(N)}_tH^{{\rm T}} \Big( \,\mathrm{d} {Z}_t -HX^i_t\,\mathrm{d} t + \,\mathrm{d} W_t^i\Big)
\end{equation}
for $i=1,\ldots,N$, where $X_t^i\in\mathbb{R}^d$ is the state of the $i^\text{th}$ particle at time $t$, the initial condition $X^i_0\sim p_0^*$, $B^i_t$ is a standard Wiener process, and $W_t^i$ is a standard Wiener process assumed to be independent of $X_0^i,\,B_t^i,\,X_t,\,Z_t$~\cite{jdw:LawStuartZygalakis2015}. The variance $\Sigma^{(N)}_t$ is obtained empirically using~\eqref{empcov}. Note that the $N$ particles interact only through the common covariance matrix $\Sigma_t^{(N)}$. The idea of introducing a noise process first appeared for the discrete-time EnKF. The derivation of the continuous-time stochastic EnKBF can be found in~\cite{jdw:LawStuartZygalakis2015} or~\cite{jdw:Reich2011}. It is based on a limiting argument whereby the discrete-time update step is formally viewed as an Euler-Maruyama discretization of an underlying sde. For the linear Gaussian problem, the stochastic EnKBF algorithm is consistent in the limit as $N\rightarrow\infty$. This means that the conditional distribution of $X_t^i$ is Gaussian, with mean and variance that evolve according to the Kalman filter equations~\eqref{eq:mean_KF} and~\eqref{eq:var_KF}, respectively. The update formula used in~\eqref{Stochastic EnKBF} is not unique. A deterministic analogue is described next.
\subsection{Deterministic EnKBF}
A deterministic variant of the EnKBF (first proposed in \cite{jdw:BergemannReich2012}) is given by:
\begin{equation}
\label{detEnKBF}
\,\mathrm{d} X_t^i = A X_t^i \,\mathrm{d} t + \sigma \,\mathrm{d} B_t^i + \Sigma_t^{(N)}H^{\rm T} \left( \,\mathrm{d} Z_t - \frac{HX_t^i + H\hat X_t^{(N)}}{2} \,\mathrm{d} t \right)
\end{equation}
for $i=1,\ldots,N$. A proof of the consistency of this deterministic variant of the EnKBF for linear systems can be found in \cite{jdw:deWiljesReichStannat2016}. There are close parallels between the deterministic EnKBF and the FPF, which are explored further in Section \ref{sec:FPF}. In the deterministic formulation of the EnKBF, the interaction between the $N$ particles arises through both the covariance matrix $\Sigma^{(N)}_t$ and the mean $\hat X_t^{(N)}$. While the stochastic and the deterministic EnKBF algorithms are consistent for the linear Gaussian problem, both are easily extended to nonlinear non-Gaussian settings; the resulting algorithms, however, will in general not be consistent.
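A minimal Euler--Maruyama sketch of one step of the deterministic EnKBF~\eqref{detEnKBF} for a linear model is given below; the interface (array shapes and function name) is ours.
\begin{verbatim}
import numpy as np

def det_enkbf_step(X, A, H, sigma, dZ, dt, rng):
    """X: (N, d) particles; A: (d, d); H: (m, d); sigma: (d, d);
    dZ: (m,) observation increment over a step of size dt."""
    N, d = X.shape
    x_hat = X.mean(axis=0)
    D = X - x_hat                                # deviations
    dB = rng.normal(scale=np.sqrt(dt), size=(N, d))
    # innovation dZ - H (X^i + x_hat)/2 dt, one row per particle
    innov = dZ - 0.5 * (X + x_hat) @ H.T * dt
    # rows of gain are Sigma^(N) H^T innov^i, without forming Sigma
    gain = (innov @ H) @ (D.T @ D) / (N - 1)
    return X + X @ A.T * dt + dB @ sigma.T + gain
\end{verbatim}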
\subsection{Well-posedness and accuracy of the EnKBF}
Recent research on the EnKF has focused on the long-term behavior and accuracy of filters applicable to nonlinear data assimilation \cite{sr:DelMoral16,jdw:deWiljesReichStannat2016,jdw:KellyLawStuart2014,jdw:KellyStuart2016}. In particular, the mathematical justification of the feasibility of the EnKF and its continuous counterpart for finite ensemble sizes is of interest. These finite-ensemble accuracy studies are of particular importance because a large number of ensemble members is not a computationally viable option in many application areas. The authors of \cite{jdw:KellyLawStuart2014} show mean-squared asymptotic accuracy in the large-time limit when a particular type of variance inflation is deployed for the stochastic EnKBF. Well-posedness of the discrete and continuous formulations of the EnKF is also shown. Similar results concerning the well-posedness and accuracy of the deterministic variant~\eqref{detEnKBF} of the EnKBF are derived in \cite{jdw:deWiljesReichStannat2016}. A fully observed system is assumed in deriving these accuracy and well-posedness results. An investigation of the well-posedness for partially observed systems is particularly relevant, as the update step in such cases can cause a divergence of the estimate in the sense that the signal is lost or the values of the estimate reach machine infinity \cite{jdw:KellyMajdaTong2015}.
\section{Feedback Particle Filter}
\label{sec:FPF}
The FPF is a controlled sde:
\begin{equation}
\begin{aligned}
\,\mathrm{d} X^i_t = &a(X^i_t) \,\mathrm{d} t + \sigma(X_t^i) \,\mathrm{d} B^i_t \\&+ \underbrace{{\sf K}_t(X^i_t) \circ (\,\mathrm{d} Z_t - \frac{h(X^i_t) + \hat{h}_t}{2}\,\mathrm{d} t)}_{\text{update}}, \quad X_0^i \sim p_0^*
\end{aligned}
\label{eqn:particle_filter_nonlin_intro}
\end{equation}
where (similar to the EnKBF) $X_t^i\in\mathbb{R}^d$ is the state of the $i^\text{th}$ particle at time $t$, the initial condition $X^i_0\sim p_0^*$, $B^i_t$ is a standard Wiener process, and $\hat{h}_t := {\sf E}[h(X_t^i)|\mathcal{Z}_t]$. The processes $B^i_t$ and the initial conditions $X^i_0$ are mutually independent and also independent of $X_t,Z_t$. The $\circ$ in the update term indicates that the sde is expressed in its Stratonovich form. The gain function ${\sf K}_t$ is matrix-valued (of dimension $d\times m$) and it needs to be obtained for each fixed time $t$. The gain function is defined as the solution of a pde introduced in the following subsection. For the linear Gaussian problem, ${\sf K}_t$ is the Kalman gain. For the general nonlinear non-Gaussian case, the gain function needs to be numerically approximated; algorithms for this purpose are summarized in the subsequent subsections.
\begin{remark}\label{rem:remStrato}
Given that the Stratonovich form provides a mathematical interpretation of the (formal) ode model \cite[see Section 3.3 of the sde textbook by {\O}ksendal]{oksendal2013}, we also obtain the (formal) ode model of the filter. Denoting $Y_t \doteq \frac{\,\mathrm{d} Z_t}{\,\mathrm{d} t}$ and the white noise process $\dot{B}^i_t \doteq \frac{\,\mathrm{d} B_t^i}{\,\mathrm{d} t}$, the ode model of the filter is given by
\begin{equation}
\frac{\,\mathrm{d} X^i_t}{\,\mathrm{d} t} = a(X^i_t) + \sigma(X_t^i) \dot{B}^i_t + {\sf K}_t(X^i_t) \left( Y_t - \frac{1}{2} (h(X^i_t) + \hat{h}_t)\right), \nonumber
\end{equation}
for $i=1,\ldots,N$. The feedback particle filter thus provides a generalization of the Kalman filter to nonlinear systems, in which the innovation error-based feedback structure of the control is preserved (see Fig.~\ref{fig:fig_FPF_KF}). For the linear Gaussian case, the gain function is the Kalman gain. For the nonlinear case, the Kalman gain is replaced by a nonlinear function of the state (see Fig.~\ref{fig:const-gain-approx}).
\end{remark}
\begin{figure*}
\centering
\includegraphics[scale=0.31]{Figure1.eps}
\vspace{-0.15in}
\caption{Innovation error-based feedback structure for the (a) Kalman filter and (b) nonlinear feedback particle filter. }\vspace{-.1cm}
\label{fig:fig_FPF_KF}
\end{figure*}
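To make the update concrete, the following Python sketch implements one step of the FPF~\eqref{eqn:particle_filter_nonlin_intro} for a scalar-valued observation, given a user-supplied routine \texttt{gain\_fn} that returns the particle gains (any of the approximations of the following subsections). A simple Euler step is used; a faithful treatment of the Stratonovich update term would use, e.g., a Heun-type predictor-corrector rule.
\begin{verbatim}
import numpy as np

def fpf_step(X, a, h, sigma, gain_fn, dZ, dt, rng):
    """X: (N, d) particles; a, h act on a single state;
    sigma: scalar; gain_fn(X, hX) returns (N, d) gains K^i."""
    N, d = X.shape
    hX = np.array([h(x) for x in X])   # h(X^i), scalar observation
    h_hat = hX.mean()                  # empirical \hat h_t
    K = gain_fn(X, hX)
    dB = rng.normal(scale=np.sqrt(dt), size=(N, d))
    drift = np.array([a(x) for x in X]) * dt
    update = K * (dZ - 0.5 * (hX + h_hat) * dt)[:, None]
    return X + drift + sigma * dB + update
\end{verbatim}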
\begin{remark}
It is shown in Appendix \ref{sec:theory} that, under the condition that the gain function can be computed exactly, the FPF is an exact algorithm. That is, if the initial condition $X^i_0$ is sampled from the prior $p_0^*$ then
\[
{\sf P} [X_t \in A\mid {\cal Z}_t ] = {\sf P} [X_t^i \in A\mid {\cal Z}_t ], \quad \forall\;A\subset \mathbb{R}^d,\;\;t>0.
\]
In a numerical implementation, a finite number $N$ of particles is simulated and ${\sf P} [X_t^i \in A\mid {\cal Z}_t ] \approx \frac{1}{N}\sum_{i=1}^N \text{\rm\large 1} [ X^i_t\in A]$ by the Law of Large Numbers (LLN). The considerations in the Appendix are described in a more general setting, e.g., applicable to stochastic processes $X_t$ and $X_t^i$ evolving on manifolds. This also explains why the update formula has a Stratonovich form: for sdes on a manifold, it is well known that the Stratonovich form is invariant to coordinate transformations (i.e., intrinsic) while the It\^{o} form is not. A more in-depth discussion of the FPF for Lie groups appears in~\cite{Chi_ACC16}.
\end{remark}
Table~\ref{tab:symbols-FPF} includes a list of symbols used for the FPF.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|}
\hline
Variable & Notation & Model\\
\hline
Particle state & $X^i_t$ & FPF Eq.~\eqref{eqn:particle_filter_nonlin_intro} \\ \hline
Gain function & ${\sf K}_t(x)=\nabla \phi(x)$ & Poisson Eq.~\eqref{eqn:EL_phi_intro} \\ \hline
\multirow{4}{*}{Particle gain} & \multirow{4}{*}{ ${\sf K}^i={\sf K}(X^i_t)$} & Constant gain~\eqref{eq:const-gain-approx} \\
& & Galerkin~\eqref{eq:galerkin-Ki} \\
& & Kernel-based~\eqref{eq:kernelk}\\
& & Optimal coupling~\eqref{eqn:OT} \\\hline
\end{tabular}
\caption{Symbols for the feedback particle filter}
\label{tab:symbols-FPF}
\end{table}
\subsection{Gain function}
For pedagogical reasons, primarily to do with notational convenience, the gain function is defined here for the case of a scalar-valued observation\footnote{The extension to vector-valued observations is straightforward and appears in~\cite{yang2016}.}. In this case, the gain function ${\sf K}_t$ is defined in terms of the solution of the weighted Poisson equation:
\begin{equation}
\label{eqn:EL_phi_intro}
\begin{aligned}
-\nabla \cdot (\rho(x) \nabla \phi(x)) & = (h (x)-\hat{h}) \rho(x),\quad x\in \mathbb{R}^d \\
\int \phi (x) \rho(x) \,\mathrm{d} x & = 0
\end{aligned}
\end{equation}
where $\hat{h} := \int h(x)\rho(x)\,\mathrm{d} x$, $\nabla$ and $\nabla \cdot$ denote the gradient and the divergence operators, respectively, and, at time $t$, $\rho(x)=p(x,t)$ denotes the density of $X_t^i$\footnote{Although this paper is limited to $\mathbb{R}^d$, the proposed algorithm is applicable to nonlinear filtering problems on differential manifolds, e.g., matrix Lie groups (for an intrinsic form of the Poisson equation, see~\cite{Chi_ACC16}). For domains with boundary, the pde is accompanied by a Neumann boundary condition:
\[
\nabla \phi(x) \cdot {n}(x) = 0
\]
for all $x$ on the boundary of the domain, where ${n}(x)$ is a unit normal vector at the boundary point $x$.}. In terms of the solution $\phi(x)$ of~\eqref{eqn:EL_phi_intro}, the gain function at time $t$ is given by
\begin{align}
{\sf K}_t(x)= \nabla \phi(x)\,.
\label{eqn:gradient_gain_fn_intro}
\end{align}
\begin{remark}
The gain function ${\sf K}_t(x)$ is not uniquely defined through the filtering problem. Formula~\eqref{eqn:gradient_gain_fn_intro} represents one choice of the gain function.
More generally, it is sufficient to require that ${\sf K}_t={\sf K}$ is a solution of
\begin{equation}
-\nabla \cdot (\rho(x) {\sf K}(x)) = (h (x)-\hat{h}) \rho(x),\quad x\in \mathbb{R}^d
\label{eqn:div_eqn}
\end{equation}
with $\rho(x)=p(x,t)$ at time $t$. One justification for choosing the gradient-form solution, as in~\eqref{eqn:gradient_gain_fn_intro}, is its $L^2$ optimality. The general solution of~\eqref{eqn:div_eqn} is given by
\[
{\sf K} = \nabla \phi + v
\]
where $\phi$ is the solution of~\eqref{eqn:EL_phi_intro} and $v$ solves $\nabla \cdot(\rho v)=0$. It follows that
\[
{\sf E}[|{\sf K}|^2] = {\sf E}[|\nabla \phi|^2] + {\sf E}[|v|^2],
\]
since the cross term vanishes upon integration by parts: ${\sf E}[\nabla\phi \cdot v] = \int \nabla \phi(x) \cdot v(x)\, \rho(x) \,\mathrm{d} x = -\int \phi(x)\, \nabla\cdot(\rho(x) v(x)) \,\mathrm{d} x = 0$. Therefore, ${\sf K}=\nabla\phi$ is the minimum $L^2$-norm solution of~\eqref{eqn:div_eqn}. By interpreting the $L^2$ norm as kinetic energy, the gain function ${\sf K}_t = \nabla \phi$, defined through (\ref{eqn:EL_phi_intro}), is seen to be optimal in the sense of optimal transportation \cite{villani2003,evans}. An alternative solution of~\eqref{eqn:div_eqn} is provided through the definition
\[
{\sf K}_t(x) = \frac{1}{\rho(x)}\nabla \tilde \phi(x)
\]
which leads to a standard Poisson equation in the unknown potential $\tilde \phi$, for which the fundamental solution is explicitly known. This fact is exploited in the interacting particle filter representations of \cite{crisan2007,crisan10}.
\label{remark:optimal-K}
\end{remark}
\medskip
There are two special cases of (\ref{eqn:EL_phi_intro}) -- summarized in the following two examples -- where the exact solution can be found.
\begin{example}
In the scalar case (where $d=1$), the Poisson equation is:
\begin{equation*}
-\frac{1}{\rho(x)}\frac{\,\mathrm{d} }{\,\mathrm{d} x}\Bigl(\rho(x) \frac{\,\mathrm{d} \phi}{\,\mathrm{d} x}(x)\Bigr) = h(x)-\hat{h}.
\end{equation*}
Integrating once yields the solution explicitly:
\begin{equation}
\begin{aligned}
{\sf K}(x) = \frac{\,\mathrm{d} \phi}{\,\mathrm{d} x}(x) &= -\frac{1}{\rho(x)}\int_{-\infty}^x \rho(z)(h(z)-\hat{h})\,\mathrm{d} z.
\end{aligned}
\label{eq:scalar}
\end{equation}
For the particular choice of $\rho$ as a mixture of two Gaussians ${\cal N}(-1,\sigma^2)$ and ${\cal N}(+1,\sigma^2)$ with $\sigma^2=0.2$, and $h(x)=x$, the solution obtained using~\eqref{eq:scalar} is depicted in \Fig{fig:truesol}.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Figure2}
\vspace*{-0.2in}
\caption{The exact solution to the Poisson equation using the formula~\eqref{eq:scalar}. The density $\rho$ is a mixture of two Gaussians ${\cal N}(-1,\sigma^2)$ and ${\cal N}(+1,\sigma^2)$, and $h(x)=x$. The density is depicted as the shaded curve in the background.}
\vspace*{-0.1in}
\label{fig:truesol}
\end{figure}
\label{ex:scalar}
\end{example}
\begin{example}
Suppose the density $\rho$ is a Gaussian ${\cal N}(\mu,\Sigma)$ and the observation function is $h(x) = Hx$, where $H \in \mathbb{R}^{1\times d}$. Then $\phi(x) = (x-\mu)^{\rm T}\Sigma H^{\rm T}$ and the gain function ${\sf K} = \Sigma \, H^{\rm T}$ is the Kalman gain.
\end{example}
\medskip
In the general non-Gaussian case, the solution is not known in an explicit form and must be numerically approximated. Note that even in the two exact cases above, one would need to numerically approximate the solution in practice because the density $\rho$ is not available in an explicit form.
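As an illustration, the exact gain~\eqref{eq:scalar} for the bimodal density of Example~\ref{ex:scalar} can be evaluated on a grid; the Python sketch below uses a cumulative trapezoidal rule (the grid limits are arbitrary choices of ours).
\begin{verbatim}
import numpy as np

x = np.linspace(-3.0, 3.0, 2001)
s2 = 0.2
rho = 0.5 * (np.exp(-(x + 1)**2 / (2 * s2))
             + np.exp(-(x - 1)**2 / (2 * s2)))
rho /= np.trapz(rho, x)              # normalize the mixture density
h = x                                # h(x) = x
h_hat = np.trapz(h * rho, x)         # \hat h = \int h rho dx
f = rho * (h - h_hat)
F = np.concatenate(([0.0],
    np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))  # running integral
K = -F / rho                         # exact gain, eq. (eq:scalar)
\end{verbatim}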
The problem statement for numerical approximation is as follows:
\medskip
\noindent \textbf{Problem statement:} Given $N$ samples $\{X^1,\hdots,X^i,\hdots,X^N\}$ drawn i.i.d. from $\rho$, approximate the vector-valued gain function $\{{\sf K}^1,\hdots,{\sf K}^i,\hdots,{\sf K}^N\}$, where ${\sf K}^i:={\sf K}(X^i)=\nabla\phi(X^i)$. The density $\rho$ is not explicitly known.
\medskip
Four numerical algorithms for the approximation of the gain function appear in the following four subsections\footnote{These algorithms are based on the existence-uniqueness theory for the solution $\phi$ of the Poisson equation pde~\eqref{eqn:EL_phi_intro}, as described in~\cite{laugesen15}.}.
\subsection{Constant Gain Approximation}
The constant gain approximation is the best -- in the least-squares sense -- constant approximation of the gain function (see Fig.~\ref{fig:const-gain-approx}). Mathematically, it is obtained by considering the following least-squares optimization problem:
\begin{equation*}
\kappa^\ast = \arg \min_{\kappa \in \mathbb{R}^d} {\sf E} \, [|{\sf K} - \kappa|^2]
\end{equation*}
By a standard sum-of-squares argument, $\kappa^\ast = {\sf E}[{\sf K}]$. By multiplying both sides of the pde~\eqref{eqn:EL_phi_intro} by $x$ and integrating by parts, the expected value is computed explicitly as
\begin{align*}
\kappa^* = {\sf E}[{\sf K}] &=\int_{\mathbb{R}^d} (h(x)-\hat{h})\; x \; \rho(x) \,\mathrm{d} x
\end{align*}
The integral is evaluated empirically to obtain the following approximate formula for the gain:
\begin{equation}
{\sf K}^i \equiv \frac{1}{N}\sum_{j=1}^N\; (h(X^j)-\hat{h}^{(N)}) \; X^j
\label{eq:const-gain-approx}
\end{equation}
where $\hat{h}^{(N)} = N^{-1} \sum_{j=1}^N h(X^j)$. The formula~\eqref{eq:const-gain-approx} is referred to as the {\em constant gain approximation} of the gain function; cf.~\cite{yang2016}. It is a popular choice in applications~\cite{yang2016,stano2014,tilton2013,berntorp2015} and is equivalent to the approximation used in the deterministic and stochastic EnKBF~\cite{jdw:ReichCotter2015,jdw:LawStuartZygalakis2015,jdw:Reich2011,jdw:BergemannReich2012,jdw:deWiljesReichStannat2016}.
\begin{example}
Consider the linear case where $h(x) = Hx$. The constant gain approximation formula equals
\[
{\sf K}^i = \frac{1}{N}\sum_{j=1}^N\; (H X^j -H \hat{X}^{(N)}) \; X^j = \Sigma^{(N)} H^{\rm T},
\]
where $\hat{X}^{(N)}:=\frac{1}{N}\sum_{i=1}^NX^i$ and $\Sigma^{(N)}=\frac{1}{N}\sum_{i=1}^N(X^i-\hat{X}^{(N)})(X^i-\hat{X}^{(N)})^{\rm T}$ are the empirical mean and the empirical variance, respectively. That is, for the linear Gaussian case, the FPF algorithm with the constant gain approximation gives the deterministic EnKBF algorithm.
\end{example}
\medskip
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figure3}
\caption{Constant gain approximation in the feedback particle filter}
\label{fig:const-gain-approx}
\end{figure}
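In code, the constant gain approximation~\eqref{eq:const-gain-approx} is essentially a one-liner; the sketch below (function name ours) returns the same vector for every particle.
\begin{verbatim}
import numpy as np

def constant_gain(X, hX):
    """X: (N, d) particle states; hX: (N,) values h(X^i)."""
    w = hX - hX.mean()                    # h(X^j) - \hat h^(N)
    K_c = (w[:, None] * X).mean(axis=0)   # (1/N) sum_j (h - h_hat) X^j
    return np.tile(K_c, (X.shape[0], 1))  # K^i = K_c for all i
\end{verbatim}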
\subsection{Galerkin Approximation}
\label{sec:Galerkin}
The Galerkin approximation is a generalization of the constant gain approximation in which the gain function ${\sf K}=\nabla\phi$ is approximated in a finite-dimensional subspace $S:=\text{span}\{\psi_1,\ldots,\psi_M\}$\footnote{$S$ is a finite-dimensional subspace of the Sobolev space $H^1_0(\mathbb{R}^d;\rho)$ -- defined as the space of functions $f$ that are square-integrable with respect to the density $\rho$ and whose (weak) derivatives are also square-integrable with respect to $\rho$. $H_0^1$ is the appropriate space for the solution $\phi$ of the Poisson equation~\eqref{eqn:EL_phi_intro}.}. Mathematically, the Galerkin solution $\nabla\phi^{(M)}$ is defined as the optimal least-squares approximation of $\nabla\phi$ in $S$, i.e.,
\begin{equation*}
\phi^{(M)} =\mathop{\text{\rm arg\,min}}_{\psi \in S} ~{\sf E}[|\nabla \phi-\nabla \psi|^2]
\end{equation*}
The least-squares solution is easily obtained by applying the projection theorem, which gives
\begin{equation}
{\sf E}[\nabla \phi^{(M)} \cdot \nabla \psi] = {\sf E}[(h-\hat{h})\,\psi],\quad \forall \; \psi \in S
\label{eq:Poisson-weak-finite}
\end{equation}
By denoting $\psi(x):=(\psi_1(x),\psi_2(x),\hdots,\psi_M(x))$ and expressing $\phi^{(M)}(x) = c \cdot \psi(x)$, the finite-dimensional system~\eqref{eq:Poisson-weak-finite} is expressed as a linear matrix equation
\begin{equation}
Ac=b
\label{eq:Acb}
\end{equation}
where $A$ is an $M\times M$ matrix and $b$ is an $M\times 1$ vector whose entries are given by the respective formulae:
\begin{align*}
[A]_{lk} & = {\sf E}[\nabla\psi_l \cdot \nabla \psi_k]\\
[b]_l & = {\sf E}[(h-\hat{h})\,\psi_l]
\end{align*}
\begin{example}
Two types of approximations follow from two choices of basis functions:
\begin{romannum}
\item The constant gain approximation is obtained by taking the basis functions $\psi_l(x)=x_l$ for $l=1,\ldots,d$. With this choice, $A$ is the identity matrix and the Galerkin gain function is a constant vector:
\begin{equation*}
{\sf K} (x)= \int x\, (h(x)-\hat{h})\, \rho(x)\,\mathrm{d} x
\end{equation*}
Its empirical approximation is
\[
{\sf K}^i \equiv \frac{1}{N}\sum_{j=1}^N\; (h(X^j)-\hat{h}^{(N)}) \; X^j
\]
\item With a single basis function $\psi(x) = h(x)$, the Galerkin solution is
\[
{\sf K}(x) = \frac{\int (h(x) - \hat{h})^2 \rho(x) \,\mathrm{d} x}{\int |\nabla h(x) |^2 \rho(x) \,\mathrm{d} x} \; \nabla h(x)
\]
Its empirical approximation is obtained as
\[
{\sf K}^i =\frac{\sum_{j=1}^N (h(X^j) - \hat{h}^{(N)})^2}{\sum_{j=1}^N |\nabla h(X^j) |^2}\; \nabla h(X^i)
\]
\end{romannum}
\label{ex:basis}
\end{example}
\begin{algorithm}[t]
\caption{Constant gain approximation}
\begin{algorithmic}[1]
\REQUIRE $\{X^i\}_{i=1}^N$, $\{h(X^i)\}_{i=1}^N$
\ENSURE $\{{\sf K}^i\}_{i=1}^N$
\medskip
\STATE Calculate $\hat{h}^{(N)}=\frac{1}{N}\sum_{i=1}^N h(X^i)$\medskip
\STATE Calculate ${\sf K}_c=\frac{1}{N}\sum_{i=1}^N\; (h(X^i)-\hat{h}^{(N)}) \; X^i$
\STATE ${\sf K}^i={\sf K}_c$ for $i=1,\ldots,N$
\end{algorithmic}
\label{alg:const-gain}
\end{algorithm}
\medskip
\begin{algorithm}[t]
\caption{Galerkin approximation of the gain function}
\begin{algorithmic}[1]
\REQUIRE $\{X^i\}_{i=1}^N$, $\{h(X^i)\}_{i=1}^N$, $\{\psi_1,\ldots,\psi_M\}$
\ENSURE $\{{\sf K}^i\}_{i=1}^N$
\medskip
\STATE $\hat{h}^{(N)}=\frac{1}{N}\sum_{i=1}^N h(X^i)$
\STATE $[A^{(N)}]_{lk} :=\frac{1}{N}\sum_{i=1}^N \nabla \psi_l(X^i) \cdot \nabla \psi_k(X^i)$, for $l,k=1,\ldots,M$
\STATE $[b^{(N)}]_k :=\frac{1}{N}\sum_{i=1}^N \psi_k(X^i)(h(X^i)-\hat{h}^{(N)})$, for $k=1,\ldots,M$
\STATE Calculate $c^{(N)}$ by solving $A^{(N)} c^{(N)}= b^{(N)}$
\STATE ${\sf K}^i= \sum_{k=1}^M c^{(N)}_k \nabla \psi_k(X^i)$
\end{algorithmic}
\label{alg:galerkin}
\end{algorithm}
In practice, the matrix $A$ and the vector $b$ are approximated empirically, and the equation~\eqref{eq:Acb} is solved to obtain the empirical approximation of $c$, denoted $c^{(N)}$ (see Algorithm~\ref{alg:galerkin}). In terms of this empirical approximation, the gain function is approximated as
\begin{equation}
{\sf K}^i = \nabla\phi^{(M,N)}(X^i) := c^{(N)} \cdot \nabla \psi(X^i)
\label{eq:galerkin-Ki}
\end{equation}
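A Python sketch of Algorithm~\ref{alg:galerkin} for the one-dimensional polynomial basis $\psi_k(x)=x^k$, $k=1,\ldots,M$ (so that $\psi_k'(x)=kx^{k-1}$), is given below; the basis choice is illustrative.
\begin{verbatim}
import numpy as np

def galerkin_gain(X, hX, M):
    """X: (N,) scalar particles; hX: (N,) values h(X^i)."""
    N = X.shape[0]
    gpsi = np.stack([k * X**(k - 1)
                     for k in range(1, M + 1)])   # psi_k' at X, (M, N)
    psi = np.stack([X**k for k in range(1, M + 1)])  # psi_k at X
    A = gpsi @ gpsi.T / N            # [A]_lk = E[psi_l' psi_k']
    b = psi @ (hX - hX.mean()) / N   # [b]_l = E[(h - h_hat) psi_l]
    c = np.linalg.solve(A, b)        # solve A c = b
    return c @ gpsi                  # K^i = sum_k c_k psi_k'(X^i)
\end{verbatim}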
The choice of basis functions is problem dependent. In Euclidean settings, the linear basis functions are standard and lead to the constant gain approximation, as discussed in Example~\ref{ex:basis}. A straightforward extension is to choose quadratic and higher-order polynomials as basis functions. However, this approach does not scale well with the dimension of the problem: the number of basis functions $M$ scales at least linearly with the dimension, and the Galerkin algorithm involves inverting an $M\times M$ matrix. This motivates nonparametric and data-driven approaches in which (a small number of) basis functions can be selected in an adaptive fashion. One such algorithm, proposed in~\cite{berntorp2016}, is based on the Karhunen-Lo\`{e}ve expansion. In the next two subsections, alternative data-driven algorithms are described. One advantage of these algorithms is that they do not require a selection of basis functions.
\subsection{Kernel-based Approximation}
\label{sec:kernel}
The linear operator $\frac{1}{\rho}\nabla \cdot (\rho \nabla)=:\Delta_{\rho}$ in the pde~\eqref{eqn:EL_phi_intro} is the generator of a Markov semigroup, denoted $e^{\epsilon\Delta_\rho}$ for $\epsilon>0$. It follows that the solution $\phi$ of~\eqref{eqn:EL_phi_intro} is equivalently expressed, for any fixed $\epsilon>0$, as
\begin{equation}
\phi = e^{\epsilon\Delta_\rho} \phi+ \int_0^\epsilon e^{s\Delta_\rho} (h-\hat{h}) \,\mathrm{d} s.
\label{eq:fixed1_n}
\end{equation}
The fixed-point representation is useful because $e^{\epsilon \Delta_\rho}$ can be approximated by a finite-rank operator
\begin{equation*}
{T}^{(N)}_\epsilon f(x) :=\sum_{i=1}^N {k}_{\epsilon}^{(N)}(x,X^i)f(X^i),
\label{eq:TepsN}
\end{equation*}
where the kernel
\begin{equation*}
{k}_{\epsilon}^{(N)} (x,y) = \frac{1}{{n}_{\epsilon}^{(N)}(x)}\frac{{g}_{\epsilon}(x-y)}{\sqrt{\frac{1}{N}\sum_{i=1}^N {g}_{\epsilon}(x-X^i)}\sqrt{\frac{1}{N}\sum_{i=1}^N{g}_{\epsilon}(y-X^i)}}
\end{equation*}
is expressed in terms of the Gaussian kernel ${g}_{\epsilon} (z):={(4\pi\epsilon)}^{-\frac{d}{2}}\exp{(-\frac{|z|^2}{4\epsilon})}$ for $z\in\mathbb{R}^d$, and ${n}_{\epsilon}^{(N)}(x)$ is a normalization factor chosen such that ${T}^{(N)}_\epsilon 1=1$. It is shown in \cite{coifman,hein2006} that $e^{\epsilon\Delta_\rho}\approx {T}^{(N)}_\epsilon$ as $\epsilon \downarrow 0$ and $N \to \infty$. The approximation of the fixed-point problem~\eqref{eq:fixed1_n} is obtained as
\begin{align}
{\phi}^{(N)}_\epsilon = {T}^{(N)}_\epsilon {\phi}^{(N)}_\epsilon + \epsilon(h-\hat{h}),
\label{eq:fixed-point-epsN}
\end{align}
where $\int_0^\epsilon e^{s\Delta_\rho}(h-\hat{h})\,\mathrm{d} s \approx \epsilon(h-\hat{h})$ for small $\epsilon>0$. The method of successive approximation is used to solve the fixed-point equation for ${\phi}^{(N)}_\epsilon$. In a recursive simulation, the algorithm is initialized with the solution from the previous time step. The gain function is obtained by taking the gradient of both sides of~\eqref{eq:fixed-point-epsN}.
For this purpose, it is useful to first define the finite-rank operator
\begin{equation}
\begin{aligned}
\nabla &{T}^{(N)}_\epsilon f(x) := \sum_{i=1}^N\nabla {k}_{\epsilon}^{(N)}(x,X^i)f(X^i) \\
& = \frac{1}{2\epsilon}\left[\sum_{i=1}^N {k}_{\epsilon}^{(N)}(x,X^i)f(X^i)\left(X^i - \sum_{j=1}^N{k}_{\epsilon}^{(N)}(x,X^j)X^j\right) \right]
\label{eq:gradTeps}
\end{aligned}
\end{equation}
In terms of this operator, the gain function is approximated as
\begin{equation}
{\sf K}^i = \nabla {T}^{(N)}_\epsilon {\phi}^{(N)}_\epsilon(X^i) + \epsilon\nabla {T}^{(N)}_\epsilon(h-\hat{h}^{(N)})(X^i)
\label{eq:uepsN}
\end{equation}
where ${\phi}^{(N)}_\epsilon$ on the right-hand side is the solution of~\eqref{eq:fixed-point-epsN}. For $i,j\in\{1,2,\hdots,N\}$, denote
\[
a_{ij} := \frac{1}{2\epsilon}{k}_{\epsilon}^{(N)}(X^i,X^j)\left(r_j -\sum_{l=1}^N {k}_{\epsilon}^{(N)}(X^i,X^l)r_l\right)
\]
where $r_i:={\phi}^{(N)}_\epsilon(X^i) + \epsilon h(X^i) - \epsilon\hat{h}^{(N)}$. Then the formula~\eqref{eq:uepsN} is succinctly expressed as
\begin{equation}
\label{eq:kernelk}
{\sf K}^i = \sum_{j=1}^N a_{ij}X^j
\end{equation}
It is easy to verify that $\sum_{j=1}^N a_{ij} = 0$ and that, as $\epsilon\rightarrow\infty$, $a_{ij} \rightarrow {N}^{-1}(h(X^j) - \hat{h}^{(N)})$. Therefore, as $\epsilon\rightarrow\infty$, ${\sf K}^i$ approaches the constant gain approximation formula~\eqref{eq:const-gain-approx}.
\begin{algorithm}[h]
\caption{Kernel-based approximation of the gain function}
\begin{algorithmic}[1]
\REQUIRE $\{X^i\}_{i=1}^N$, $\{h(X^i)\}_{i=1}^N$, $\epsilon$, $\Phi_{\text{prev}}$, $L$
\ENSURE $\{{\sf K}^i\}_{i=1}^N$
\medskip
\STATE Calculate $g_{ij}:=\exp(-|X^i-X^j|^2/4\epsilon)$ for $i,j=1$ to $N$\medskip
\STATE Calculate $k_{ij}:=\frac{g_{ij}}{\sqrt{\sum_l g_{il}}\sqrt{\sum_l g_{jl}}}$ for $i,j=1$ to $N$
\STATE Calculate $T_{ij}:=\frac{k_{ij}}{\sum_l k_{il}}$ for $i,j=1$ to $N$
\STATE Calculate $\hat{h}^{(N)}=\frac{1}{N}\sum_{i=1}^N h(X^i)$\medskip
\STATE Initialize $\Phi_i=\Phi_{\text{prev},i}$ for $i=1$ to $N$
\medskip
\FOR {$l=1$ to $L$}
\STATE Calculate $\Phi_i= \sum_{j=1}^N T_{ij} \Phi_j + \epsilon (h(X^i)-\hat{h}^{(N)})$\medskip
\STATE Calculate $\Phi_i = \Phi_i - \frac{1}{N}\sum_{j=1}^N \Phi_j$
\ENDFOR
\STATE Calculate $r_i = \Phi_i + \epsilon(h(X^i)-\hat{h}^{(N)})$
\STATE Calculate $a_{ij} = \frac{1}{2\epsilon}T_{ij} \left(r_j -\sum_{l=1}^N T_{il}r_l\right)$
\STATE Calculate ${\sf K}^i = \sum_{j=1}^N a_{ij} X^j$
\end{algorithmic}
\label{alg:kernel}
\end{algorithm}
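The following Python sketch is a direct transcription of Algorithm~\ref{alg:kernel} for scalar particles, with bandwidth \texttt{eps}, warm start \texttt{Phi}, and \texttt{L} fixed-point iterations.
\begin{verbatim}
import numpy as np

def kernel_gain(X, hX, eps, Phi, L=1000):
    """X, hX, Phi: (N,) arrays; returns (N,) gains K^i."""
    g = np.exp(-(X[:, None] - X[None, :])**2 / (4 * eps))
    s = np.sqrt(g.sum(axis=1))
    k = g / s[:, None] / s[None, :]
    T = k / k.sum(axis=1, keepdims=True)   # Markov matrix, T 1 = 1
    dh = hX - hX.mean()
    for _ in range(L):                     # successive approximation
        Phi = T @ Phi + eps * dh
        Phi -= Phi.mean()                  # enforce zero mean
    r = Phi + eps * dh
    a = T * (r[None, :] - (T @ r)[:, None]) / (2 * eps)
    return a @ X                           # K^i = sum_j a_ij X^j
\end{verbatim}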
\subsection{Optimal Coupling-based Approximation}
\label{sec:optimal}
Optimal coupling-based approximation is another non-parametric approach to directly approximate the gain function ${\sf K}^i$ from the ensemble $\{X^i\}_{i=1}^N$. The algorithm is presented here for the first time in the context of the FPF. The approximation is based upon a continuous-time reformulation of the recently developed ensemble transform for optimally transporting (coupling) measures~\cite{jdw:ReichCotter2015}. The relationship to the gain function approximation is as follows: define an $\epsilon$-parametrized family of densities by $ \rho_{\epsilon}(x) := \rho(x) (1 + \epsilon (h(x) - \hat{h})) $ for $\epsilon >0$ sufficiently small, and consider the optimal transport problem
\begin{equation}\label{eq:wass}
\begin{aligned}
\text{Objective:} & \quad \quad \min_{S_\epsilon} \; {\sf E}\,[|S_\epsilon(X)-X|^2]\\
\text{Constraints:} &\quad \quad X \sim \rho,\quad S_{\epsilon}(X)\sim \rho_{\epsilon}
\end{aligned}
\end{equation}
The solution to this problem, denoted $S_\epsilon$, is referred to as the optimal transport map. It is shown in Appendix~\ref{sec:lp_justification} that $\frac{\,\mathrm{d} S_\epsilon}{\,\mathrm{d} \epsilon} \big|_{\epsilon=0}= {\sf K}$. The ensemble transform is a non-parametric algorithm to approximate the solution $S_\epsilon$ of the optimal transport problem given {\em only} $N$ samples $X^i$ drawn from $\rho$. For the problem of gain function approximation, the algorithm involves first solving the linear program:
\begin{equation}\label{eq:lp}
\begin{aligned}
\text{Objective:} & \quad \quad \min_{\{t_{ij}\} } \quad \sum_{i=1}^N \sum_{j=1}^N \; t_{ij} \,|X^i-X^j|^2 \\
\text{Constraints:} & \quad \quad \sum_{j=1}^N t_{ij} = \frac{1}{N}, \quad \sum_{i=1}^N t_{ij} = \frac{1+ \epsilon (h(X^j) - \hat h^{(N)})}{N},\\
&\quad \quad \quad t_{ij} \ge 0.
\end{aligned}
\end{equation}
The solution, denoted $t_{ij}^\ast$, is referred to as the optimal coupling; the coupling constants $t_{ij}^\ast$ have the interpretation of joint probabilities. The two equality constraints arise from the specification of the two marginals $\rho$ and $\rho_{\epsilon}$ in~\eqref{eq:wass}, where it is noted that the particles $X^i$ are sampled i.i.d. from $\rho$. The optimal value of~\eqref{eq:lp} is an approximation of the optimal value of the objective in~\eqref{eq:wass}; the latter is the square of the celebrated Wasserstein distance between $\rho$ and $\rho_{\epsilon}$. In terms of the optimal solution of the linear program~\eqref{eq:lp}, an approximation to the gain function at $X^i$ is obtained as
\begin{equation}\label{eqn:OT}
{\sf K}^i := \sum_{j=1}^N a_{ij} \,X^j, \qquad a_{ij} = \frac{N t_{ij}^\ast -\delta_{ij}}{\epsilon} \,,
\end{equation}
where $\delta_{ij}$ is the Kronecker delta ($\delta_{ij}=1$ if $i=j$ and $0$ otherwise), and the factor $N$ accounts for the row normalization $\sum_j t_{ij}^\ast = 1/N$ in~\eqref{eq:lp}, so that $N t_{ij}^\ast$ is a transition (Markov) matrix. In practice, a finite $\epsilon>0$ is appropriately chosen. The approximation becomes exact as $\epsilon \downarrow 0$ and $N\to \infty$. The approximation~\eqref{eqn:OT} is structurally similar to the constant gain approximation formula~\eqref{eq:const-gain-approx} and also to the kernel-based approximation formula~\eqref{eq:kernelk}. In all three cases, the gain ${\sf K}^i$ at the $i^{\text{th}}$ particle is approximated as a linear combination of the particle states $\{X^j\}_{j=1}^N$. Such approximations are computationally attractive whenever $N \ll d$, i.e., when the dimension of the state space is high but the dynamics are confined to a low-dimensional subset which, however, is not a priori known.
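A Python sketch of the resulting procedure (Algorithm~\ref{alg:OC} below) for scalar particles follows; the linear program~\eqref{eq:lp} is solved here with scipy's general-purpose \texttt{linprog}, which suffices for small $N$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def coupling_gain(X, hX, eps):
    """X, hX: (N,) arrays; returns (N,) gains K^i from (eqn:OT)."""
    N = X.shape[0]
    cost = ((X[:, None] - X[None, :])**2).ravel()  # |X^i - X^j|^2
    row_sum = np.kron(np.eye(N), np.ones(N))       # encodes sum_j t_ij
    col_sum = np.kron(np.ones(N), np.eye(N))       # encodes sum_i t_ij
    b_eq = np.concatenate([np.full(N, 1.0 / N),
                           (1.0 + eps * (hX - hX.mean())) / N])
    res = linprog(cost, A_eq=np.vstack([row_sum, col_sum]),
                  b_eq=b_eq, bounds=(0, None))
    t = res.x.reshape(N, N)                        # optimal coupling
    a = (N * t - np.eye(N)) / eps                  # a_ij as in (eqn:OT)
    return a @ X
\end{verbatim}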
\begin{algorithm}[t]
\caption{Optimal coupling approximation of the gain function}
\begin{algorithmic}[1]
\REQUIRE $\{X^i\}_{i=1}^N$, $\{h(X^i)\}_{i=1}^N$, $\epsilon$
\ENSURE $\{{\sf K}^i\}_{i=1}^N$
\medskip
\STATE Calculate $d_{ij}:=|X^i-X^j|^2$ for $i,j=1$ to $N$\medskip
\STATE Calculate $\hat{h}^{(N)} = \frac{1}{N}\sum_{i=1}^N h(X^i)$
\STATE Calculate $t_{ij}$ by solving the linear program~\eqref{eq:lp}
\STATE Calculate $a_{ij} = \frac{N t_{ij}-\delta_{ij}}{\epsilon}$ for $i,j=1$ to $N$
\STATE Calculate ${\sf K}^i = \sum_{j=1}^N a_{ij} X^j$
\end{algorithmic}
\label{alg:OC}
\end{algorithm}
\section{Numerics}\label{sec:numerics}
This section contains the results of numerical experiments in which Algorithms~\ref{alg:const-gain}-\ref{alg:OC} are applied to the bimodal distribution problem introduced in Example~\ref{ex:scalar}: the density $\rho$ is a mixture of two Gaussians $\mathcal{N}(-1,\sigma^2)$ and $\mathcal{N}(+1,\sigma^2)$ with $\sigma^2=0.2$, and $h(x)=x$. The exact solution is obtained using the explicit formula~\eqref{eq:scalar} and is depicted in \Fig{fig:truesol}. The following parameters are used in the numerical implementation of the algorithms:
\begin{enumerate}
\item {\bf Galerkin:} Algorithm~\ref{alg:galerkin} with polynomial basis functions $\{x,x^2,\ldots,x^M\}$ for $M=1,3,5$. The case $M=1$ gives the constant gain approximation (Algorithm~\ref{alg:const-gain}).
\item {\bf Kernel:} Algorithm~\ref{alg:kernel} with $\epsilon=0.05,0.1,0.2$ and $L=1000$.
\item {\bf Optimal coupling:} Algorithm~\ref{alg:OC} with $\epsilon=0.05,0.1,0.2$.
\end{enumerate}
In the first numerical experiment, a fixed number of particles $N=100$ is drawn i.i.d. from $\rho$. Figures~\ref{fig:gain-approx-bimodal}~(a)-(c)-(e) depict the approximate gain functions obtained using the three algorithms. For ease of comparison, the exact solution is also depicted. In the second numerical experiment, the empirical error is evaluated as a function of the number of particles $N$. For a single simulation, the error is defined according to
\begin{equation}
\text{Error}:=\frac{1}{N}\sum_{i=1}^N | {\sf K}_{\text{alg}}(X^i) - {\sf K}_{\text{ex}}(X^i)|^2,
\label{eq:error}
\end{equation}
where $\{X^i\}_{i=1}^N$ are the particles, ${\sf K}_{\text{alg}}$ is the output of the algorithm, and ${\sf K}_{\text{ex}}$ is the exact gain. The Monte-Carlo estimate of the error is evaluated by averaging over $1000$ simulations. In each simulation, a new set of particles is sampled, which is then used as a common input for all three algorithms. Figures~\ref{fig:gain-approx-bimodal}~(b)-(d)-(f) depict the Monte-Carlo estimate of the error as a function of the number of particles. In the third numerical experiment, the effect of varying the parameter $\epsilon$ is investigated for the kernel-based and the optimal coupling algorithms. In this experiment, a fixed number of particles $N=200$ is used. Figure~\ref{fig:gain-approx-bimodal-eps} depicts the Monte-Carlo estimate of the error as a function of the parameter $\epsilon$.
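For reference, the error metric~\eqref{eq:error} and its Monte-Carlo estimate amount to the following short Python sketch, where the sampler and gain routines are supplied by the user:
\begin{verbatim}
import numpy as np

def empirical_error(X, gain_alg, gain_exact):
    """Mean-squared error (eq:error) over particle locations X."""
    return np.mean((gain_alg(X) - gain_exact(X))**2)

def monte_carlo_error(sample, gain_alg, gain_exact, runs=1000):
    """sample() draws a fresh particle set for each run."""
    return np.mean([empirical_error(sample(), gain_alg, gain_exact)
                    for _ in range(runs)])
\end{verbatim}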
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\subfigure[Galerkin gain approximation for $N=100$]{
\includegraphics[width=0.8\columnwidth]{Figure4-a.eps}
} &
\subfigure[Galerkin gain approx. error as a function of $N$]{
\includegraphics[width=0.8\columnwidth]{Figure4-b.eps}
} \\
\subfigure[Kernel-based gain approximation for $N=100$]{
\includegraphics[width=0.8\columnwidth]{Figure4-c.eps}
} &
\subfigure[Kernel-based gain approx. error as a function of $N$]{
\includegraphics[width=0.8\columnwidth]{Figure4-d.eps}
} \\
\subfigure[Optimal coupling gain approximation for $N=100$]{
\includegraphics[width=0.8\columnwidth]{Figure4-e.eps}
} &
\subfigure[Optimal coupling gain approx. error as a function of $N$]{
\includegraphics[width=0.8\columnwidth]{Figure4-f.eps}
}
\end{tabular}
\caption{Comparison of the gain function approximations obtained using the Galerkin (part~(a)), kernel-based (part~(c)), and optimal coupling (part~(e)) algorithms. The exact gain function is depicted as a solid line and the density $\rho$ is depicted as a shaded region in the background. Parts~(b)-(d)-(f) depict the Monte-Carlo estimate of the empirical error~\eqref{eq:error} as a function of the number of particles $N$. The Monte-Carlo estimate is obtained by averaging the empirical error over 1000 simulations.}
\label{fig:gain-approx-bimodal}
\end{figure*}
The following observations are made based on the results of these numerical experiments:
\begin{enumerate}
\item (Figure \ref{fig:gain-approx-bimodal}~(a)-(b)) The accuracy of the Galerkin algorithm improves as the number of basis functions increases. For a fixed number of particles, however, the matrix $A$ becomes poorly conditioned as the number of basis functions becomes large. This can lead to numerical instabilities in solving the matrix equation~\eqref{eq:Acb}.
\item (Figure \ref{fig:gain-approx-bimodal}~(a)-(c)-(e)) The kernel-based and optimal coupling algorithms preserve the positivity property of the exact gain. The positivity property of the gain is not necessarily preserved by the Galerkin algorithm. The correct sign of the gain is important in filtering applications, as the gain determines the direction of the drift of the particles. A wrong sign can lead to divergence of the particle trajectories.
\item (Figure \ref{fig:gain-approx-bimodal-eps}) For a fixed number of particles, there is an optimal value of $\epsilon$ that minimizes the error for the kernel-based and the optimal coupling algorithms. For the kernel-based algorithm, it is shown in~\cite{Amir_ACC17} that, for small $\epsilon$ and large $N$, the error scales as $O(\epsilon) + O(\frac{1}{\epsilon^{d/2+1}\sqrt{N}})$. As $\epsilon\rightarrow\infty$, the approximate gain converges to the constant gain approximation. In particular, the error remains bounded even for large values of $\epsilon$.
\item The optimal coupling algorithm leads to spatially more irregular approximations, which nevertheless converge as the number of particles increases. The optimal choice of the parameter $\epsilon$ depends on the number of particles. Finally, a spatially more regular approximation could be obtained using kernel dressing, e.g., by convolving the particle approximation with Gaussian kernels.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figure5.eps}
\caption{Comparison of the Monte-Carlo estimate of the empirical error~\eqref{eq:error} as a function of the parameter $\epsilon$. The number of particles is $N=200$.}
\label{fig:gain-approx-bimodal-eps}
\end{figure}
\section{Conclusion}
\label{sec:conc}
We have summarized an interacting particle representation of the classic Kalman-Bucy filter and its extension to nonlinear systems and non-Gaussian distributions under the general framework of FPFs. This framework is attractive since it maintains key structural elements of the classic Kalman-Bucy filter, namely a gain factor and the innovation error. In particular, the EnKF has become widely used in data assimilation for atmosphere-ocean dynamics and oil reservoir exploration. Robust extensions of the EnKF to non-Gaussian distributions are urgently needed, and FPFs provide a systematic approach for such extensions in the spirit of Kalman's original work.
However, interacting particle representations come at a price: they require approximate solutions of an elliptic pde or, when viewed from a probabilistic perspective, of a coupling problem. Hence, robust and efficient numerical techniques for FPF-based interacting particle systems, together with the study of their long-time behavior, will be a primary focus of future research.
\section{Acknowledgement}
The first author AT was supported in part by the Computational Science and Engineering (CSE) Fellowship at the University of Illinois at Urbana-Champaign (UIUC). The research of AT and PGM at UIUC was additionally supported by the National Science Foundation (NSF) grants 1334987 and 1462773. This support is gratefully acknowledged. The research of JdW and SR has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 ``Scaling Cascades in Complex Systems'', Project (A02) ``Multiscale data and asymptotic model assimilation for atmospheric flows'' and the grant CRC 1294 ``Data Assimilation'', Project (A02) ``Long-time stability and accuracy of ensemble transform filter algorithms''.
\bibliographystyle{asmems4}
{ "timestamp": "2017-12-22T02:08:30", "yymm": "1702", "arxiv_id": "1702.07241", "language": "en", "url": "https://arxiv.org/abs/1702.07241", "abstract": "This paper is concerned with the filtering problem in continuous-time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter which provides an exact solution for the linear Gaussian problem, (ii) the ensemble Kalman-Bucy filter (EnKBF) which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems, and (iii) the feedback particle filter (FPF) which represents an extension of the EnKBF and furthermore provides for an consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of non-uniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.", "subjects": "Optimization and Control (math.OC); Systems and Control (eess.SY)", "title": "Kalman Filter and its Modern Extensions for the Continuous-time Nonlinear Filtering Problem", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692366242304, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.7079585050901236 }
https://arxiv.org/abs/1305.3207
Efficient Density Estimation via Piecewise Polynomial Approximation
We give a highly efficient "semi-agnostic" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let $p$ be an arbitrary distribution over an interval $I$ which is $\tau$-close (in total variation distance) to an unknown probability distribution $q$ that is defined by an unknown partition of $I$ into $t$ intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of the intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\eps^2)$ samples from $p$, runs in time $\poly(t,d,1/\eps)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau)+\eps)$-close (in total variation distance) to $p$. This sample complexity is essentially optimal; we show that even for $\tau=0$, any algorithm that learns an unknown $t$-piecewise degree-$d$ probability distribution over $I$ to accuracy $\eps$ must use $\Omega({\frac {t(d+1)} {\poly(1 + \log(d+1))}} \cdot {\frac 1 {\eps^2}})$ samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming. We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of $t$-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general technique yields computationally efficient algorithms for all these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.
\section{Introduction} \label{sec:intro}
Over the past several decades, many works in computational learning theory have addressed the general problem of learning an unknown Boolean function from labeled examples. A recurring theme that has emerged from this line of work is that state-of-the-art learning results can often be achieved by analyzing \emph{polynomials} that compute or approximate the function to be learned, see e.g. \cite{LMN:93,KushilevitzMansour:93,Jackson:97,KlivansServedio:04jcss, MOS:04,KOS:04}. In the current paper we show that this theme extends to the well-studied unsupervised learning problem of \emph{density estimation}; namely, learning an unknown \emph{probability distribution} given i.i.d. samples drawn from the distribution. We propose a new approach to density estimation based on establishing the existence of \emph{piecewise polynomial density functions} that approximate the distributions to be learned. The key tool that enables this approach is a new and highly efficient general algorithm that we provide for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Combining our general algorithm with structural results showing that probability distributions of interest can be well approximated using piecewise polynomial density functions, we obtain learning algorithms for those distributions. We demonstrate the efficacy of this approach by showing that for many natural and well-studied types of distributions, there do indeed exist piecewise polynomial densities that approximate the distributions to high accuracy. For all of these types of distributions our general approach gives a state-of-the-art computationally efficient learning algorithm with the best known sample complexity (number of samples that are required from the distribution) to date; in many cases the sample complexity of our approach is provably optimal, up to logarithmic factors in the optimal sample complexity.
\subsection{Related work.}
Density estimation is a well-studied topic in probability theory and statistics (see \cite{DG85, Silverman:86,Scott:92,DL:01} for book-length introductions). There are a number of generic techniques for density estimation in the mathematical statistics literature, including histograms, kernels (and variants thereof), nearest neighbor estimators, orthogonal series estimators, maximum likelihood (and variants thereof) and others (see Chapter 2 of~\cite{Silverman:86} for a survey of existing methods). In recent years, theoretical computer science researchers have also studied density estimation problems, with an explicit focus on obtaining \emph{computationally efficient} algorithms (see e.g. \cite{KMR+:94,FreundMansour:99,FOS:05focs,BelkinSinha:10, KMV:10, MoitraValiant:10,DDS:12kmodallearn,DDS12stoc}). We work in a PAC-type model similar to that of \cite{KMR+:94} and to well-studied statistical frameworks for density estimation. The learning algorithm has access to i.i.d.
draws from an unknown probability distribution $p$. It must output a hypothesis distribution $h$ such that with high probability the total variation distance $d_{\mathrm TV}(p,h)$ between $p$ and $h$ is at most $\varepsilon.$ (Recall that the total variation distance between two distributions $p$ and $h$ is ${\frac 1 2} \int |p(x)-h(x)| dx$ for continuous distributions, and is ${\frac 1 2} \sum |p(x)-h(x)|$ for discrete distributions.) We shall be centrally concerned with obtaining learning algorithms that both use few samples and are computationally efficient. The previous work that is most closely related to our current paper is the recent work \cite{CDSS13soda}. (That paper dealt with distributions over the discrete domain ${[n]} = \{1, \dots,n\}$, but since the current work focuses mostly on the continuous domain, in our description of the \cite{CDSS13soda} results below we translate them to the continuous domain. This translation is straightforward.) To describe the main result of \cite{CDSS13soda} we need to introduce the notions of \emph{mixture distributions} and \emph{piecewise constant} distributions. Given distributions $p_1,\dots,p_k$ and non-negative values $\mu_1,\dots,\mu_k$ that sum to 1, we say that $p = \sum_{i=1}^k \mu_i p_i$ is a \emph{$k$-mixture} of \emph{components} $p_1,\dots,p_k$ with \emph{mixing weights} $\mu_1,\dots,\mu_k$. A draw from $p$ is obtained by choosing $i \in [k]$ with probability $\mu_i$ and then making a draw from $p_i$. A distribution $p$ over an interval $I$ is \emph{$(\varepsilon,t)$-piecewise constant} if there is a partition of $I$ into $t$ disjoint intervals $I_1,\dots,I_t$ such that $p$ is $\varepsilon$-close (in total variation distance) to a distribution $q$ with $q(x)=c_j$ for all $x \in I_j$, for some constants $c_j \geq 0$. The main result of \cite{CDSS13soda} is an efficient algorithm for learning any $k$-mixture of $(\varepsilon,t)$-piecewise constant distributions:
\begin{theorem} \label{thm:general-cdss12}
There is an algorithm that learns any $k$-mixture of $(\varepsilon,t)$-piecewise constant distributions over an interval $I$ to accuracy $O(\varepsilon)$, using $O(kt/\varepsilon^3)$ samples and running in $\tilde{O}(k t /\varepsilon^3)$ time.\footnote{Here and throughout the paper we work in a standard unit-cost model of computation, in which a sample from distribution $p$ is obtained in one time step (and is assumed to fit into one register) and basic arithmetic operations are assumed to take unit time.
Our algorithms, like the \cite{CDSS13soda} algorithm, only perform basic arithmetic operations on ``reasonable'' inputs.}
\end{theorem}
\subsection{Our main result.}
As our main algorithmic contribution, we give a significant strengthening and generalization of Theorem~\ref{thm:general-cdss12} above. First, we improve the $\varepsilon$-dependence in the sample complexity of Theorem~\ref{thm:general-cdss12} from $1/\varepsilon^3$ to a near-optimal $\tilde{O}(1/\varepsilon^2).$ \footnote{Recall the well-known fact that $\Omega(1/\varepsilon^2)$ samples are required for essentially every nontrivial distribution learning problem. In particular, any algorithm that distinguishes the uniform distribution over $[-1,1]$ from the piecewise constant distribution with pdf $p(x) = {\frac 1 2}(1-\varepsilon)$ for $-1 \leq x \leq 0$, $p(x)= {\frac 1 2}(1+\varepsilon)$ for $0 < x \leq 1$, must use $\Omega(1/\varepsilon^2)$ samples.} Second, we extend Theorem~\ref{thm:general-cdss12} from piecewise constant distributions to \emph{piecewise polynomial} distributions. More precisely, we say that a distribution $p$ over an interval $I$ is \emph{$(\varepsilon,t)$-piecewise degree-$d$} if there is a partition of $I$ into $t$ disjoint intervals $I_1,\dots,I_t$ such that $p$ is $\varepsilon$-close (in total variation distance) to a distribution $q$ such that $q(x)=q_j(x)$ for all $x \in I_j$, where each of $q_1, \dots,q_t$ is a univariate degree-$d$ polynomial.\footnote{Here and throughout the paper, whenever we refer to a ``degree-$d$ polynomial,'' we mean a polynomial of degree at most $d.$} (Note that being $(\varepsilon,t)$-piecewise constant is the same as being $(\varepsilon,t)$-piecewise degree-0.) We say that such a distribution $q$ is a \emph{$t$-piecewise degree-$d$ distribution.} Our main algorithmic result is the following (see Theorem~\ref{thm:main-detail} for a fully detailed statement of the result):
\begin{theorem} \label{thm:main2} [Informal statement]
There is an algorithm that learns any $k$-mixture of $(\varepsilon,t)$-piecewise degree-$d$ distributions over an interval $I$ to accuracy $O(\varepsilon)$, using $\tilde{O}((d+1)kt/\varepsilon^2)$ samples and running in $\mathrm{poly}((d+1),k,t,1/\varepsilon)$ time.
\end{theorem}
As we describe below, the applications that we give for Theorem~\ref{thm:main2} crucially use both aspects in which it strengthens Theorem~\ref{thm:general-cdss12} (degree $d$ rather than degree 0, and $\tilde{O}(1/\varepsilon^2)$ samples rather than $O(1/\varepsilon^3)$) to obtain near-optimal sample complexities. A different view on our main result, which may also be illuminating, is that it gives a ``semi-agnostic'' algorithm for learning piecewise polynomial densities. (Since any $k$-mixture of $t$-piecewise degree-$d$ distributions is easily seen to be a $kt$-piecewise degree-$d$ distribution, we phrase the discussion below only in terms of $t$-piecewise degree-$d$ distributions rather than mixtures.)
Let $\mathcal{P}_{t, d}(I)$ denote the class of all $t$-piecewise degree-$d$ distributions over an interval $I$. Let $p$ be any distribution over $I$. Our algorithm, given parameters $t,d,\varepsilon$ and $\tilde{O}(t(d+1)/\varepsilon^2)$ samples from $p$, outputs an $O(t)$-piecewise degree-$d$ hypothesis distribution $h$ such that $d_{\mathrm TV}(p,h) \leq 4\,\mathrm{opt}_{t, d}(1+\varepsilon)+ \varepsilon$, where
\[
\mathrm{opt}_{t,d}:= \inf_{r \in \mathcal{P}_{t,d}(I)} d_{\mathrm TV}(p,r).
\]
(See Theorem~\ref{thm:no-wb}.) We prove the following lower bound (see Theorem~\ref{thm:lower-bound-precise} for a precise statement), which shows that the number of samples that our algorithm uses is optimal up to logarithmic factors:
\begin{theorem} \label{thm:lower-bound-informal} [Informal statement]
Any algorithm that learns an unknown $t$-piecewise degree-$d$ distribution $q$ over an interval $I$ to accuracy $\varepsilon$ must use $\Omega({\frac {t(d+1)} {\mathrm{poly}(1+\log(d+1))}} \cdot {\frac 1 {\varepsilon^2}})$ samples.
\end{theorem}
Note that the lower bound holds even when the unknown distribution is exactly a $t$-piecewise degree-$d$ distribution, i.e. $\mathrm{opt}_{t,d}=0$ (in fact, the lower bound still applies even if the $t-1$ ``breakpoints'' defining the $t$ interval boundaries within $I$ are fixed to be evenly spaced across $I$).
{
\subsection{Applications of Theorem~\ref{thm:main2}.}
Using Theorem~\ref{thm:main2} we obtain highly efficient algorithms for a wide range of specific distribution learning problems over both continuous and discrete domains. These include learning mixtures of log-concave distributions; mixtures of $t$-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of $k$-monotone densities. (See Table~1 for a concise summary of these results and a comparison with previous results.) All of our algorithms run in polynomial time in all of the relevant parameters, and for all of the mixture learning problems listed in Table~1, our results improve on previous state-of-the-art results by a polynomial factor. (In some cases, such as $t$-piecewise degree-$d$ polynomial distributions and mixtures of $t$ bounded $k$-monotone distributions, we believe that we give the first nontrivial learning results for the distribution classes in question.) In many cases the sample complexities of our algorithms are provably optimal, up to logarithmic factors in the optimal sample complexity. Detailed descriptions of all of the classes of distributions in the table, and of our results for learning those distributions, are given in Section~\ref{sec:applic}. We note that all the learning results indicated with theorem numbers in Table~1 (i.e. results proved in this paper) are in fact \emph{semi-agnostic} learning results for the given classes as described in the previous subsection; hence all of these results are highly robust even if the target distribution does not exactly belong to the specified class of distributions. More precisely, if the target distribution is $\tau$-close to some member of the specified class of distributions, then the algorithm uses the stated number of samples and outputs a hypothesis that is $(O(\tau)+\varepsilon)$-close to the target distribution.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\bf Class of Distributions & \bf Number of samples & \bf Reference \\\hline\hline
\multicolumn{3}{|c|}{\bf Continuous distributions over an interval $I$}\\\hline
$t$-piecewise constant & $O(t/\epsilon^3)$ & \cite{CDSS13soda} \\ \hline
$t$-piecewise constant & $\tilde{O}(t/\epsilon^2) \ (\dagger)$ & Theorem~\ref{thm:main-detail}\\ \hline
$t$-piecewise degree-$d$ polynomial & $\tilde O(td/\epsilon^2) \ (\dagger)$ & Theorem~\ref{thm:main-detail}, Theorem~\ref{thm:lower-bound-precise} \\\hline
log-concave & $O(1/\varepsilon^{5/2}) \ (\dagger)$ & folklore \cite{DL:01}\\ \hline
mixture of $k$ log-concave distributions & $\tilde O(k/\varepsilon^{5/2}) \ (\dagger)$ & Theorem~\ref{thm:lc}\\ \hline
mixture of $t$ bounded 1-monotone distributions & $\tilde O(t/\epsilon^{3}) \ (\dagger)$ & Theorem~\ref{thm:kmon} \\\hline
mixture of $t$ bounded 2-monotone distributions & $\tilde O(t/\epsilon^{5/2}) \ (\dagger)$ & Theorem~\ref{thm:kmon}\\\hline
mixture of $t$ bounded $k$-monotone distributions & $\tilde O(tk/\epsilon^{2+1/k})$ & Theorem~\ref{thm:kmon}\\\hline
mixture of $k$ Gaussians & $\tilde O(k/\epsilon^{2}) \ (\dagger) $ & Corollary~\ref{cor:mixGauss} \\\hline \hline \hline
\multicolumn{3}{|c|}{\bf Discrete distributions over $\{1,2,\dots,N\}$} \\\hline
$t$-modal & $\tilde O(t \log(N)/\epsilon^3) + \tilde{O}(t^3/\varepsilon^3)$ & \cite{DDS:12kmodallearn} \\\hline
mixture of $k$ $t$-modal distributions & $O(kt \log(N)/\epsilon^4)$ & \cite{CDSS13soda} \\\hline
mixture of $k$ $t$-modal distributions & $\tilde{O}(kt \log(N)/\epsilon^3) \ (\dagger)$ & Theorem~\ref{thm:mix-tmodal}\\\hline
mixture of $k$ monotone hazard rate distributions & $\tilde{O}(k \log(N)/\epsilon^4) $ & \cite{CDSS13soda}\\\hline
mixture of $k$ monotone hazard rate distributions & $\tilde{O}(k \log(N)/\epsilon^3) \ (\dagger)$ & Theorem~\ref{thm:MHR} \\\hline
mixture of $k$ log-concave distributions & $\tilde{O}(k /\epsilon^4) $ & \cite{CDSS13soda} \\\hline
mixture of $k$ log-concave distributions & $\tilde{O}(k /\epsilon^3) $ & Theorem~\ref{thm:logconcave}\\\hline
Poisson Binomial Distribution & $\tilde{O}(1 /\epsilon^3) $ & \cite{DDS12stoc,CDSS13soda}\\\hline
mixture of $k$ Poisson Binomial Distributions & $\tilde{O}(k /\epsilon^4) $ & \cite{CDSS13soda}\\\hline
mixture of $k$ Poisson Binomial Distributions & $\tilde{O}(k /\epsilon^3) $ & Theorem~\ref{thm:logconcave} \\\hline
\end{tabular}
\end{center}
\caption{Known algorithmic results for learning various classes of probability distributions. ``Number of samples'' indicates the number of samples that the algorithm uses to learn to total variation distance $\varepsilon$. Results given in this paper are indicated with a reference to the corresponding theorem. A $(\dagger)$ indicates that the given upper bound on sample complexity is known to be optimal up to at most logarithmic factors (i.e. ``$\tilde{O}(m) \ (\dagger)$'' means that there is a known lower bound of ${\Omega}(m)$). }
\label{tab:results}
\end{table*}
\smallskip
}
\subsection{Our Approach and Techniques.}
{
As stated in~\cite{Silverman:86}, ``the oldest and most widely used density estimator is the histogram'': Given samples from a density $f$, the method partitions the domain into a number of intervals (bins) $I_1, \ldots, I_k$, and outputs the empirical density which is constant within each bin. Note that the number $k$ of bins and the width of each bin are parameters and may depend on the particular class of distributions being learned.
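For concreteness, a minimal Python sketch of this classical estimator over $[-1,1)$ with $k$ equal-width bins is given below (the equal-width choice is one possibility among many, and the function name is ours):
\begin{verbatim}
import numpy as np

def histogram_density(samples, k):
    """Piecewise constant density estimate on [-1, 1) with k bins."""
    edges = np.linspace(-1.0, 1.0, k + 1)
    counts, _ = np.histogram(samples, bins=edges)
    width = 2.0 / k
    heights = counts / (len(samples) * width)  # integrates to 1
    return edges, heights                      # piecewise constant pdf
\end{verbatim}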
Our proposed technique may naturally be viewed as a very broad generalization of the histogram method, where instead of approximating the distribution by a {\em constant} within each bin, we approximate it by a {\em low-degree polynomial}. We believe that such a generalization is very natural; the recent paper~\cite{PA13cgs} also proposes using splines for density estimation. (However, density estimation is not the main focus of~\cite{PA13cgs}, which does not provide or analyze algorithms for it.) Our generalization of the histogram method seems likely to be of wide applicability. Indeed, as we show in this paper, it can be used to obtain many computationally efficient learners for a wide class of concrete learning problems, yielding several new and nearly optimal results. \medskip \noindent {\bf The general algorithm.} At a high level, our algorithm uses a rather subtle dynamic program (roughly, to discover the ``correct'' intervals on each of which the underlying distribution is close to a degree-$d$ polynomial) and linear programming (roughly, to learn a single degree-$d$ sub-distribution on a given interval). We note, however, that many challenges arise in going from this high-level intuition to a working algorithm. Consider first the special case in which there is only a single known interval (see Section~\ref{sec:learn-deg-d-close}). In this special case our problem is somewhat reminiscent of the problem of learning a ``noisy polynomial'' that was studied by Arora and Khot \cite{AK03}. We stress, though, that our setting is considerably more challenging in the following sense: in the \cite{AK03} framework, each data point is a pair $(x,y)$ where $y$ is assumed to be close to the value $p(x)$ of the target polynomial at $x$. In our setting the input data is \emph{unlabeled} -- we only get points $x$ drawn from a \emph{distribution} that is $\tau$-close to some polynomial pdf. However, we are able to leverage some ingredients from \cite{AK03} in our context. We carry out a careful error analysis using probabilistic inequalities (the VC inequality and tail bounds) and ingredients from basic approximation theory to show that $\tilde{O}(d/\varepsilon^2)$ samples suffice for our linear program to achieve an $O(\mathrm{opt}_{1,d}+\varepsilon)$-accurate hypothesis with high probability. Additional challenges arise when we go from a single interval to the general case of $t$-piecewise polynomial densities (see Section~\ref{sec:learn-piecewise-deg-d}). The ``correct'' intervals can of course only be approximated rather than exactly identified, introducing an additional source of error that needs to be carefully managed. We formulate a dynamic program that uses the algorithm from Section~\ref{sec:learn-deg-d-close} as a ``black box'' to achieve our most general learning result.
\medskip \noindent {\bf The applications.} Given our general algorithm, in order to obtain efficient learning algorithms for specific classes of distributions, it is sufficient to establish the existence of piecewise polynomial (or piecewise constant) approximations to the distributions that are to be learned. In some cases such existence results were already known; for example, Birg\'{e} \cite{Birge:87b} provides the necessary existence result that we require for discrete $t$-modal distributions, and classical results in approximation theory \cite{Dudley:74, Novak:88} give the necessary existence results for concave distributions over continuous domains. For log-concave densities over continuous domains, we prove a new structural result on approximation by piecewise linear densities (Lemma~\ref{lem:lc-struct}) which, combined with our general algorithm, leads to an optimal learning algorithm for (mixtures of) such densities. Finally, for \emph{$k$-monotone} distributions we are able to leverage a recent (and quite sophisticated) result from the approximation theory literature \cite{KonL04, KonL07} to obtain the required approximation result. \medskip \noindent {\bf Structure of this paper:} In Section~2 we include some basic preliminaries. In Section~3 we present our main learning result, and in Section~4 we describe our applications. \section{Preliminaries} Throughout the paper, for simplicity, we consider distributions over the interval $[-1,1)$. It is easy to see that the general results given in Section~\ref{sec:main} go through for distributions over an arbitrary interval $I$. (In the applications given in Section~\ref{sec:applic} we explicitly discuss the different domains over which our distributions are defined.) Given a value $\kappa> 0$, we say that a distribution $p$ over $[-1,1)$ is \emph{$\kappa$-well-behaved} if $\sup_{z \in [-1,1)} \Pr_{x \sim p}[x=z] \leq \kappa$, i.e. no individual real value is assigned more than $\kappa$ probability under $p$. Any probability distribution with no atoms (and hence any piecewise polynomial distribution) is $\kappa$-well-behaved for all $\kappa>0$; by contrast, the distribution which outputs the value $0.3$ with probability $1/100$ and otherwise outputs a uniform value in $[-1,1)$ is only $\kappa$-well-behaved for $\kappa\geq 1/100.$ Our results apply for general distributions over $[-1,1)$ which may have an atomic part as well as a non-atomic part. Throughout the paper we assume that the density $p$ is measurable. Note that throughout the paper we only ever work with the probabilities $\Pr_{x \sim p}[x=z]$ of single points and probabilities $\Pr_{x \sim p}[x \in S]$ of sets $S$ that are finite unions of intervals and single points. Given a function $p: I \to \mathbb{R}$ on an interval $I \subseteq [-1,1)$ and a subinterval $J \subseteq I$, we write $p(J)$ to denote $\int_{J} p(x) dx.$ Thus if $p$ is the pdf of a probability distribution over $[-1,1)$, the value $p(J)$ is the probability that distribution $p$ assigns to the subinterval $J$.
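\medskip \noindent As a small illustration of this notation, the following sketch (ours; it assumes $p$ is given explicitly by power-basis coefficients, which is not how the algorithms below access $p$) computes $p(J)$ for a polynomial density.
\begin{verbatim}
# p(J) = \int_J p(x) dx for a polynomial density given by coefficients.
import numpy as np

def poly_mass(coeffs, a, b):
    """Mass assigned to the subinterval J = [a, b) by the polynomial
    density with the given power-basis coefficients."""
    F = np.polynomial.Polynomial(coeffs).integ()  # an antiderivative
    return F(b) - F(a)

# The uniform density on [-1, 1) is the constant 1/2:
assert abs(poly_mass([0.5], -1.0, 1.0) - 1.0) < 1e-12
assert abs(poly_mass([0.5], 0.0, 0.5) - 0.25) < 1e-12
\end{verbatim}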
We sometimes refer to a function $p$ over an interval (which need not necessarily integrate to 1 over the interval) as a ``subdistribution.'' Given $m$ independent samples $s_1,\dots,s_m$ drawn from a distribution $p$ over $[-1,1)$, the {\em empirical distribution} $\widehat{p}_m$ over $[-1,1)$ is the discrete distribution supported on $\{s_1,\dots,s_m\}$ defined as follows: for all $z \in [-1,1)$, $\Pr_{x \sim \widehat{p}_m}[x=z] = |\{j \in [m] \mid s_j=z\}| / m$. \medskip \noindent {\bf Optimal piecewise polynomial approximators.} Fix a distribution $p$ over $[-1,1)$. We write $\mathrm{opt}_{t,d}$ to denote the value \[ \mathrm{opt}_{t,d} := \inf_{r \in {\cal P}_{t,d}([-1,1))} d_{\mathrm TV}(p,r). \] Standard closure arguments can be used to show that the above infimum is attained by some $r \in {\cal P}_{t,d}([-1,1))$; however this is not actually required for our purposes. It is straightforward to verify that any distribution $\tilde{r} \in {\cal P}_{t,d}([-1,1))$ such that $d_{\mathrm TV}(p,\tilde{r})$ is at most (say) $\mathrm{opt}_{t,d} + \varepsilon/100$ is sufficient for all our arguments. \medskip \noindent {\bf Refinements.} Let ${\cal I} = \{I_1,\dots,I_s\}$ be a partition of $[-1,1)$ into $s$ disjoint intervals, and ${\cal J} = \{J_1,\dots,J_t\}$ be a partition of $[-1,1)$ into $t$ disjoint intervals. We say that ${\cal J}$ is a \emph{refinement} of ${\cal I}$ if each interval in ${\cal I}$ is a union of intervals in ${\cal J}$, i.e. for every $a \in [s]$ there is a subset $S_a \subseteq [t]$ such that $I_a = \cup_{b \in S_a} J_b$. For ${\cal I}= \{I_i\}_{i=1}^r$ and ${\cal I}'=\{I'_i\}_{i=1}^s$ two partitions of $[-1,1)$ into $r$ and $s$ intervals respectively, the \emph{common refinement} of ${\cal I}$ and ${\cal I}'$ is the partition ${\cal J}$ of $[-1,1)$ obtained by taking all possible nonempty intervals of the form $I_i \cap I'_j.$ It is clear that ${\cal J}$ is both a refinement of ${\cal I}$ and of ${\cal I}'$ and that ${\cal J}$ contains at most $r+s$ intervals.
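\medskip \noindent The common refinement is easy to compute from breakpoints; the short sketch below (ours) does exactly this and illustrates the bound on the number of intervals.
\begin{verbatim}
# Common refinement of two interval partitions of [-1, 1), each given by
# its sorted list of breakpoints (both endpoints included).
def common_refinement(breaks1, breaks2):
    """Merged breakpoints; consecutive pairs are the refined intervals."""
    pts = sorted(set(breaks1) | set(breaks2))
    return list(zip(pts, pts[1:]))

# A 2-interval and a 3-interval partition refine into at most 2 + 3
# intervals (here: 4).
refined = common_refinement([-1.0, 0.0, 1.0], [-1.0, -0.5, 0.5, 1.0])
assert len(refined) == 4
\end{verbatim}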
\medskip \noindent {\bf Approximation theory.} We will need some basic notation and results from approximation theory. We write $\|p\|_{\infty}$ to denote $\sup_{x \in [-1,1)} |p(x)|.$ We recall the famous inequalities of Markov and Bernstein bounding the derivative of univariate polynomials: \begin{theorem} \label{thm:bern-mark} For any real-valued degree-$d$ polynomial $p$ over $[-1,1)$, we have \begin{itemize} \item (Markov's Inequality) $\|p'\|_\infty \leq \|p\|_\infty \cdot d^2;$ and \item (Bernstein's Inequality) $|p'(x)| \leq {\frac d {\sqrt{1-x^2}}} \cdot \|p\|_\infty$ for all $-1 < x < 1.$ \end{itemize} \end{theorem} \smallskip \noindent {\bf The VC inequality.} Given a family of subsets $\mathcal A$ over $[-1,1)$, define $\norm p_{\mathcal A} = \sup_{A\in \mathcal A} |p(A)|$. The \emph{VC dimension} of $\mathcal A$ is the maximum size of a subset $X\subset [-1,1)$ that is shattered by $\mathcal A$ (a set $X$ is shattered by $\mathcal A$ if for every $Y \subseteq X$, some $A\in\mathcal A$ satisfies $A\cap X = Y$). If there is a shattered subset of size $s$ for every $s$ then we say that the VC dimension of ${\cal A}$ is $\infty$. The well-known \emph{Vapnik--Chervonenkis (VC) inequality} says the following: \begin{theorem}[VC inequality, {\cite[p.31]{DL:01}}] \label{thm:vc-inequality} Let $\widehat{p}_m$ be an empirical distribution of $m$ samples from $p$. Let $\mathcal A$ be a family of subsets of VC dimension $d$. Then $ \mathbb{E}[ \norm{p - \widehat{p}_m}_{\mathcal A}] \leq O(\sqrt{d/m}) .$ \end{theorem} \subsection{Partitioning into intervals of approximately equal mass.} As a basic primitive, we will often need to decompose a $\kappa$-well-behaved distribution $p$ into $\Theta(1/\kappa)$ intervals each of which has probability $\Theta(\kappa)$ under $p$. The following lemma lets us achieve this using $\tilde{O}(1/\kappa)$ samples; the simple proof is given in Appendix~\ref{ap:z}. \begin{lemma} \label{lem:part-approx-unif} Given $0 < \kappa< 1$ and access to samples from a $\kappa/64$-well-behaved distribution $p$ over $[-1,1)$, the procedure {\tt Approximately-Equal-Partition} uses $\tilde{O}(1/\kappa)$ samples from $p$, runs in time $\tilde{O}(1/\kappa)$, and with probability at least $99/100$ outputs a partition of $[-1,1)$ into $\ell=\Theta(1/\kappa)$ intervals $I_1,\dots,I_\ell$ such that $p(I_j) \in [{\frac \kappa 2}, 3\kappa]$ for all $1 \leq j \leq \ell.$ \end{lemma}
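\medskip \noindent The appendix gives the actual procedure and its analysis; purely for intuition, the following sketch (ours, with hypothetical names) realizes the same idea via empirical quantiles.
\begin{verbatim}
# Partition [-1, 1) into roughly 1/kappa intervals of roughly equal
# empirical mass by cutting at empirical quantiles of the sample.
import numpy as np

def approx_equal_partition(samples, kappa):
    """Breakpoints of ~1/kappa intervals, each of empirical mass ~kappa;
    for a well-behaved p and a large enough sample, the true masses are
    Theta(kappa) with high probability."""
    ell = int(np.ceil(1.0 / kappa))
    interior = np.linspace(0.0, 1.0, ell + 1)[1:-1]
    cuts = np.quantile(np.asarray(samples), interior)
    return np.concatenate(([-1.0], cuts, [1.0]))

rng = np.random.default_rng(1)
breaks = approx_equal_partition(rng.uniform(-1, 1, 20_000), kappa=0.1)
# 11 breakpoints delimiting 10 intervals of mass ~0.1 each.
\end{verbatim}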
\section{Main result: Learning mixtures of piecewise polynomial distributions with near-optimal sample complexity} \label{sec:main} In this section we present and analyze our main algorithm for learning mixtures of $(\tau,t)$-piecewise degree-$d$ distributions over $[-1,1)$. We start by giving a simple information-theoretic argument (Proposition~\ref{prop:info}, Section~\ref{sec:ineff}) showing that there is a (computationally inefficient) algorithm to learn any distribution $p$ to accuracy $3 \mathrm{opt}_{t,d} + \varepsilon$ using $O(t(d+1)/\varepsilon^2)$ samples, where $\mathrm{opt}_{t,d}$ is the smallest variation distance between $p$ and any $t$-piecewise degree-$d$ distribution. Next, we contrast this information-theoretic positive result with an information-theoretic lower bound (Theorem~\ref{thm:lower-bound-precise}, Section~\ref{sec:lower-bound}) showing that any algorithm, regardless of its running time, for learning a $t$-piecewise degree-$d$ distribution to accuracy $\varepsilon$ must use $\Omega({\frac {t(d+1)} {\mathrm{poly}(1+\log(d+1))}} \cdot {\frac 1 {\varepsilon^2}})$ samples. We then build up to our main result in stages by giving efficient algorithms for successively more challenging learning problems. In Section~\ref{sec:learn-deg-d-close} we give an efficient ``semi-agnostic'' algorithm for learning a single degree-$d$ pdf. More precisely, the algorithm draws $\tilde{O}((d+1)/\varepsilon^2)$ samples from any well-behaved distribution $p$, and with high probability outputs a degree-$d$ pdf $h$ such that $d_{\mathrm TV}(p,h) \leq 3 \mathrm{opt}_{1,d}(1+\varepsilon)+ \varepsilon$. This algorithm uses ingredients from approximation theory and linear programming. In Section~\ref{sec:learn-piecewise-deg-d} we extend the approach using dynamic programming to obtain an efficient ``semi-agnostic'' algorithm for $t$-piecewise degree-$d$ pdfs. The extended algorithm draws $\tilde{O}(t(d+1)/\varepsilon^2)$ samples from any well-behaved distribution $p$, and with high probability outputs a $(2t-1)$-piecewise degree-$d$ pdf $h$ such that $d_{\mathrm TV}(p,h) \leq 3 \mathrm{opt}_{t,d}(1+\varepsilon)+ \varepsilon$. In Section~\ref{sec:mix} we extend the result to $k$-mixtures of well-behaved distributions. Finally, in Section~\ref{sec:kill-wb} we show how we may get rid of the ``well-behaved'' requirement, and thereby prove Theorem~\ref{thm:main2}. \subsection{An information-theoretic sample complexity upper bound.} \label{sec:ineff} \begin{proposition} \label{prop:info} There is a (computationally inefficient) algorithm that draws $O(t(d+1) /\varepsilon^2)$ samples from any distribution $p$ over $[-1,1)$, and with probability $9/10$ outputs a hypothesis distribution $h$ such that $d_{\mathrm TV}(p,h) \leq 3 \mathrm{opt}_{t,d}+ \varepsilon$. \end{proposition} \begin{proof} The main idea is to use Theorem~\ref{thm:vc-inequality}, the VC inequality. Let $p$ be the target distribution and let $q$ be a $t$-piecewise degree-$d$ distribution such that $d_{\mathrm TV}(p,q) = \mathrm{opt}_{t, d}.$ The algorithm draws $m = O(t(d+1) /\varepsilon^2)$ samples from $p$; let $\widehat{p}_m$ be the resulting empirical distribution of these $m$ samples. We define the family ${\cal A}$ of subsets of $[-1,1)$ to consist of all unions of up to $2t(d+1)$ intervals. Since $d_{\mathrm TV}(p,q) \leq \mathrm{opt}_{t, d}$ we have that $\|p-q\|_{\cal A} \leq \mathrm{opt}_{t, d}$.
Since the VC dimension of ${\cal A}$ is $4t(d+1)$, Theorem~\ref{thm:vc-inequality} implies that $\mathbb{E}[\|p-\widehat{p}_m\|_{{\cal A}}] \leq \varepsilon/40$, and hence by Markov's inequality, with probability at least $19/20$ we have that $\|p-\widehat{p}_m\|_{\cal A} \leq \varepsilon/2.$ By the triangle inequality for $\| \cdot \|_{\cal A}$-distance, this means that $\|q-\widehat{p}_m\|_{\cal A} \leq \mathrm{opt}_{t, d} + \varepsilon/2.$ The algorithm outputs a $t$-piecewise degree-$d$ distribution $h$ that minimizes $\|h-\widehat{p}_m\|_{\cal A}$. Since $q$ is a $t$-piecewise degree-$d$ distribution that satisfies $\|q-\widehat{p}_m\|_{\cal A} \leq \mathrm{opt}_{t, d} + \varepsilon/2$, the distribution $h$ satisfies $\|h - \widehat{p}_m\|_{\cal A} \leq \mathrm{opt}_{t, d} + \varepsilon/2.$ Hence the triangle inequality gives $\|h - q \|_{\cal A} \leq 2 \mathrm{opt}_{t, d} + \varepsilon.$ Now since $h$ and $q$ are both $t$-piecewise degree-$d$ distributions, they must have at most $2t(d+1)$ crossings. (Taking the common refinement of the intervals for $h$ and the intervals for $q$, we get at most $2t$ intervals. Within each such interval both $h$ and $q$ are degree-$d$ polynomials, so there are at most $2t(d+1)$ crossings in total (where the extra $+1$ comes from the endpoints of each of the $2t$ intervals).) Consequently we have that $d_{\mathrm TV}(h,q) = \|h-q\|_{\cal A} \leq 2 \mathrm{opt}_{t, d} + \varepsilon.$ The triangle inequality for variation distance gives that $d_{\mathrm TV}(h,p) \leq 3 \mathrm{opt}_{t, d} + \varepsilon$, and the proof is complete. \end{proof} It is not hard to see that the dependence on each of the parameters $t, d, 1/\varepsilon$ in the above upper bound is information-theoretically optimal. Note that the algorithm described above is not efficient, because it is by no means clear how to construct a $t$-piecewise degree-$d$ distribution $h$ that minimizes $\|h-\widehat{p}_m\|_{\cal A}$ in a computationally efficient way. Indeed, natural approaches to this minimization problem yield running times that grow exponentially in $t$ and $d$. Starting in Section~\ref{sec:learn-deg-d-close}, we give an algorithm that achieves almost the same sample complexity but runs in time $\mathrm{poly}(t,d,1/\varepsilon).$ The main idea is that minimizing $\norm\cdot_{\mathcal A}$ (which involves infinitely many inequalities) can be approximately achieved by enforcing only a small number of inequalities (\cref{def:interval-ineq,lem:ad-dist}), and this can be done with a linear program.
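\medskip \noindent To build some intuition for the $\|\cdot\|_{\cal A}$ distance itself, consider the special case of two distributions that are both constant on a common grid of cells. There the supremum is attained by unions of whole cells, and evaluating the distance reduces to the classical problem of choosing at most $k$ disjoint runs of cells with maximum total signed mass difference, which a textbook dynamic program solves. The sketch below (ours, purely illustrative; it evaluates the distance between two \emph{fixed} piecewise constant distributions, which is much easier than minimizing it over a class of hypotheses) makes this concrete.
\begin{verbatim}
# A_k distance between two densities that are constant on a shared grid:
# sup over unions of at most k intervals of |p(A) - q(A)|.
import numpy as np

def max_k_runs(diff, k):
    """Max total of diff over at most k disjoint runs of consecutive
    cells (O(n*k) dynamic program)."""
    n, NEG = len(diff), float("-inf")
    best = [[0.0] * (n + 1) for _ in range(k + 1)]  # at most j runs
    cur = [[NEG] * (n + 1) for _ in range(k + 1)]   # j-th run ends here
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            cur[j][i] = max(cur[j][i - 1], best[j - 1][i - 1]) + diff[i - 1]
            best[j][i] = max(best[j][i - 1], cur[j][i])
    return best[k][n]

def ak_distance(masses_p, masses_q, k):
    d = np.asarray(masses_p, float) - np.asarray(masses_q, float)
    return max(max_k_runs(d, k), max_k_runs(-d, k))
\end{verbatim}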
\subsection{An information-theoretic sample complexity lower bound.} \label{sec:lower-bound} To complement the information-theoretic upper bound from the previous subsection, in this subsection we prove an information-theoretic lower bound showing that even if $\mathrm{opt}_{t,d}=0$ (i.e. the target distribution $p$ is exactly a $t$-piecewise degree-$d$ distribution), $\tilde{\Omega}(t(d+1)/\varepsilon^2)$ samples are required for any algorithm to learn to accuracy $\varepsilon$: \begin{theorem} \label{thm:lower-bound-precise} Let $p$ be an unknown $t$-piecewise degree-$d$ distribution over $[-1,1)$, where $t\geq 1$ and $d \geq 0$ satisfy $t+d > 1$.\footnote{Note that $t=1$ and $d=0$ is a degenerate case in which the only possible distribution $p$ is the uniform distribution over $[-1,1)$.} Let $L$ be any algorithm which, given as input $t,d,\varepsilon$ and access to independent samples from $p$, outputs a hypothesis distribution $h$ such that $\mathbb{E}[d_{\mathrm TV}(p,h)] \leq \varepsilon$, where the expectation is over the random samples drawn from $p$ and any internal randomness of $L$. Then $L$ must use at least $\Omega({\frac {t(d+1)}{(1+\log (d+1))^2}} \cdot {\frac 1 {\varepsilon^2}})$ samples. \end{theorem} Theorem~\ref{thm:lower-bound-precise} is proved using a well-known lemma of Assouad \cite{Assouad:83}, together with carefully tailored constructions of polynomial probability density functions that meet the conditions of Assouad's lemma. The proof of Theorem~\ref{thm:lower-bound-precise} is deferred to Appendix~\ref{ap:lower}. \subsection{Semi-agnostically learning a degree-$d$ polynomial density with near-optimal sample complexity.} \label{sec:learn-deg-d-close} In this section we prove the following: \begin{theorem} \label{thm:agno-kis1} Let $p$ be an ${\frac \varepsilon {64(d+1)}}$-well-behaved pdf over $[-1,1)$. There is an algorithm {\tt Learn-WB-Single-Poly}$(d,\varepsilon)$ which runs in poly$(d+1,1/\varepsilon)$ time, uses $\tilde{O}((d+1)/\varepsilon^2)$ samples from $p$, and with probability at least $9/10$ outputs a degree-$d$ polynomial $q$ which defines a pdf over $[-1,1)$ such that $d_{\mathrm TV}(p,q) \leq 3 \mathrm{opt}_{1,d} (1+\varepsilon) + O(\varepsilon)$. \end{theorem} Some preliminary definitions will be helpful: \begin{definition}[Uniform partition] Let $p$ be a subdistribution on an interval $I \subseteq [-1,1)$. A partition $\mathcal P = \{I_1, \dots, I_\ell\}$ of $I$ is \emph{$(p,\eta)$-uniform} if $p(I_j) \leq \eta$ for all $1\leq j\leq \ell$. \end{definition} \begin{definition} \label{def:interval-ineq} Let $\mathcal P = \{[i_0, i_1), \dots, [i_{r-1}, i_r)\}$ be a partition of an interval $I \subseteq [-1,1)$. Let $p,q:I\to \mathbb{R}$ be two functions on $I$. We say that $p$ and $q$ satisfy the \emph{$(\mathcal P, \eta,\varepsilon)$-inequalities over $I$} if \[ \abs{ p([i_j,i_\ell)) - q([i_j,i_\ell)) } \leq \sqrt{\varepsilon(\ell-j)}\cdot \eta \] for all $0\leq j < \ell \leq r$. \end{definition} We will also use the following notation. For this subsection, let $I = [-1,1)$ ($I$ will denote a subinterval of $[-1,1)$ when the results are applied in the next subsection). We write $\|f\|^{(I)}_{1}$ to denote $\int_{I} |f(x)| dx$, and we write $d_{\mathrm TV}^{(I)}(p,q)$ to denote $\|p-q\|^{(I)}_1/2$. We write $\mathrm{opt}^{(I)}_{1,d}$ to denote the infimum of the statistical distance $d_{\mathrm TV}^{(I)}(p,g)$ between $p$ and any degree-$d$ subdistribution $g$ on $I$ that satisfies $g(I) = p(I)$.
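\medskip \noindent Definition~\ref{def:interval-ineq} is straightforward to check given the interval masses; the sketch below (ours) is a direct transcription using prefix sums.
\begin{verbatim}
# Check the (P, eta, eps)-inequalities: masses_p[i] and masses_q[i] are
# the masses that p and q assign to the i-th interval of the partition.
import math

def satisfies_interval_inequalities(masses_p, masses_q, eta, eps):
    r = len(masses_p)
    pre_p, pre_q = [0.0], [0.0]   # prefix sums give every block's mass
    for a, b in zip(masses_p, masses_q):
        pre_p.append(pre_p[-1] + a)
        pre_q.append(pre_q[-1] + b)
    return all(
        abs((pre_p[l] - pre_p[j]) - (pre_q[l] - pre_q[j]))
        <= math.sqrt(eps * (l - j)) * eta
        for j in range(r) for l in range(j + 1, r + 1))
\end{verbatim}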
The key step of {\tt Learn-WB-Single-Poly} is Step~3, where it calls the {\tt Find-Single-Polynomial} procedure. In this procedure $T_i(x)$ denotes the degree-$i$ Chebyshev polynomial of the first kind. The function $F$ that {\tt Find-Single-Polynomial} constructs should be thought of as the CDF of a ``quasi-distribution'' $f$; we say that $f=F'$ is a ``quasi-distribution'' and not a bona fide probability distribution because it is not guaranteed to be non-negative everywhere on $[-1,1)$. Step~2 of {\tt Find-Single-Polynomial} processes $f$ slightly to obtain a polynomial $q$ which is an actual distribution over $[-1,1).$ We note that while the {\tt Find-Single-Polynomial} procedure may appear to be more general than is needed for this section, we will exploit its full generality in the next subsection, where it is used as a key subroutine for semi-agnostically learning $t$-piecewise polynomial distributions. \begin{framed} \noindent {\bf Algorithm {\tt Learn-WB-Single-Poly}:} \medskip \noindent {\bf Input:} parameters $d,\varepsilon$ \noindent {\bf Output:} with probability at least $9/10$, a degree-$d$ distribution $q$ such that $d_{\mathrm TV}(p,q) \leq 3 \cdot \mathrm{opt}_{1,d}(1+\varepsilon) + O(\varepsilon)$ \begin{enumerate} \item Run Algorithm~{\tt Approximately-Equal-Partition} on input parameter $\varepsilon/(d+1)$ to partition $[-1,1)$ into $z = \Theta((d+1)/\varepsilon)$ intervals $I_0 = [i_0,i_1)$, $\dots,$ $I_{z-1}=[i_{z-1},i_z)$, where $i_0=-1$ and $i_z=1$, such that for each $j \in \{1,\dots,z\}$ we have $p([i_{j-1},i_j)) = \Theta(\varepsilon/(d+1)).$ \item Draw $m=\tilde{O}((d+1)/\varepsilon^2)$ samples and let $\widehat{p}_m$ be the empirical distribution defined by these samples. \item Call {\tt Find-Single-Polynomial}($d$, $\varepsilon$, $\eta:=\Theta(\varepsilon/(d+1))$, $\{I_0,\dots,I_{z-1}\}$, $\widehat{p}_m)$ and output the hypothesis $q$ that it returns. \end{enumerate} \end{framed} \begin{framed} \noindent {\bf Subroutine {\tt Find-Single-Polynomial}:} \medskip \noindent {\bf Input:} degree parameter $d$; error parameter $\varepsilon$; parameter $\eta$; $(p,\eta)$-uniform partition $\mathcal P_I = \{I_1, \dots, I_{z}\}$ of interval $I = \cup_{i=1}^{z} I_i$ into $z$ intervals such that $\sqrt{ \varepsilon z}\cdot \eta \leq \varepsilon/2$; a subdistribution $\widehat{p}_m$ on $I$ such that $\widehat{p}_m$ and $p$ satisfy the $(\mathcal P,\eta,\varepsilon)$-inequalities over $I$ \noindent \textbf{Output:} a number $\tau$ and a degree-$d$ subdistribution $q$ on $I$ such that $q(I) = \widehat{p}_m(I)$, \[ d_{\mathrm TV}^{(I)}(p,q) \leq 3\mathrm{opt}^{(I)}_{1,d}(1+\varepsilon) + \sqrt{\varepsilon z (d+1)} \cdot \eta + {\rm error}, \] $0\leq \tau \leq \mathrm{opt}^{(I)}_{1,d} (1+\varepsilon)$, and ${\rm error} = O((d+1)\eta)$.
\begin{enumerate} \item Let $\tau$ be the optimal value of the following LP: \[ \text{minimize~}\tau~\text{subject to the following constraints:} \] (Below $F(x) = \sum_{i=0}^{d+1} c_i T_i(x)$ where $T_i(x)$ is the degree-$i$ Chebyshev polynomial of the first kind, and $f(x)=F'(x) = \sum_{i=0}^{d+1} c_i T'_i(x)$.) \begin{enumerate} \item \label{item:total} $F(-1)=0$ and $F(1)=\widehat{p}_m(I)$; \item \label{item:phat} For each $0 \leq j < k \leq z$, \begin{equation} \label{eq:agno-phat} \left| \left(\widehat{p}_m([i_j,i_k)) + \mathop{\textstyle \sum}_{j\leq \ell < k} w_\ell \right) - (F(i_k) - F(i_j)) \right| \leq \sqrt{\varepsilon \cdot (k-j)} \cdot \eta; \end{equation} \item \label{item:robust} \begin{align} \sum_{0\leq \ell < z} w_\ell &= 0, \\ -y_\ell \leq w_\ell &\leq y_\ell \qquad \text{for all $0\leq \ell < z$,} \\ \sum_{0\leq \ell < z} y_\ell &\leq 2\tau(1+\varepsilon); \end{align} \item \label{item:AK} The constraints $|c_i| \leq \sqrt{2}$ for $i=0,\dots,d+1$; \item \label{item:AK2} The constraints \[ 0 \leq F(z') \leq 1 \quad \text{for all~} z' \in J, \] where $J$ is a set of $O((d+1)^6)$ equally spaced points across $[-1,1]$; \item \label{item:nonneg-1} The constraints \[ \sum_{i=0}^{d+1} c_i T'_i(x) \geq 0 \quad \text{for all~}x \in K, \] where $K$ is a set of $O((d+1)^2/\varepsilon)$ equally spaced points across $[-1,1)$. \end{enumerate} \item Define $q(x) = \varepsilon f(I)/\len I + (1-\varepsilon)f(x).$ Output $q$ as the hypothesis pdf. \end{enumerate} \end{framed} The rest of this subsection gives the proof of Theorem~\ref{thm:agno-kis1}. The claimed sample complexity bound is obvious (observe that Steps~1 and~2 of {\tt Learn-WB-Single-Poly} are the only steps that draw samples), as is the claimed running time bound (the computation is dominated by solving the $\mathrm{poly}(d,1/\varepsilon)$-size LP in {\tt Find-Single-Polynomial}), so it suffices to prove correctness. Before launching into the proof we give some intuition for the linear program. Intuitively $F(x)$ represents the cdf of a degree-$d$ polynomial distribution $f$ where $f=F'.$ Constraint 1(a) captures the endpoint constraints that any cdf must obey if it has the same total mass as $\widehat p_m$. Intuitively, constraint 1(b) ensures that for each interval $[i_j,i_k)$, the value $F(i_k)-F(i_j)$ (which we may alternately write as $f([i_j,i_k))$) is close to the mass $\widehat{p}_m([i_j,i_k))$ that the empirical distribution puts on the interval. Recall that by assumption $p$ is $\mathrm{opt}_{1,d}$-close to some degree-$d$ polynomial $r$. Intuitively the variable $w_\ell$ represents $\int_{[i_\ell, i_{\ell+1})} (r-p)$ (note that these values sum to zero by the first constraint of 1(c)), and $y_\ell$ represents the absolute value of $w_\ell$ (see the second constraint of 1(c)). The value $\tau$, for which $2\tau(1+\varepsilon)$ upper-bounds the sum of the $y_\ell$'s by the third constraint of 1(c), represents a lower bound on $\mathrm{opt}_{1,d}.$ (The factor $2$ on the RHS of the third constraint of 1(c) is present because $\| p-r \|_1 = 2d_{\mathrm TV}(p,r)$.) The constraints in 1(d) and 1(e) reflect the fact that as a cdf, $F$ should be bounded between 0 and 1 (more on this below), and the 1(f) constraints reflect the fact that the pdf $f=F'$ should be everywhere nonnegative (again more on this below).
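\medskip \noindent The following sketch (ours) conveys the structure of this linear program in code. It is deliberately simplified: it replaces the $\sqrt{\varepsilon(k-j)}\cdot\eta$ right-hand sides and the $w_\ell$ robustness variables by a single minimized slack variable, and it omits the coefficient bounds, so it is an illustration of the approach rather than the exact program analyzed here.
\begin{verbatim}
# Fit a degree-(d+1) Chebyshev cdf F to empirical interval masses by LP.
import numpy as np
from scipy.optimize import linprog

def fit_degree_d_cdf(breaks, emp_masses, d, n_grid=200):
    """breaks: z+1 endpoints with breaks[0] = -1, breaks[-1] = 1;
    emp_masses[j]: empirical mass of [breaks[j], breaks[j+1]).
    Returns coefficients c with F(x) = sum_i c_i T_i(x)."""
    z, nc = len(emp_masses), d + 2
    V = np.polynomial.chebyshev.chebvander(np.asarray(breaks), d + 1)
    pre = np.concatenate(([0.0], np.cumsum(emp_masses)))
    A_ub, b_ub = [], []
    for j in range(z):                 # |F(i_k) - F(i_j) - mass| <= tau
        for k in range(j + 1, z + 1):
            row, mass = V[k] - V[j], pre[k] - pre[j]
            A_ub.append(np.append(row, -1.0));  b_ub.append(mass)
            A_ub.append(np.append(-row, -1.0)); b_ub.append(-mass)
    K = np.linspace(-1.0, 1.0, n_grid, endpoint=False)
    cols = [np.polynomial.Chebyshev(np.eye(nc)[i]).deriv()(K)
            for i in range(nc)]        # f = F' >= 0 on the grid K
    for row in -np.column_stack(cols):
        A_ub.append(np.append(row, 0.0)); b_ub.append(0.0)
    A_eq = [np.append(V[0], 0.0), np.append(V[-1], 0.0)]
    b_eq = [0.0, pre[-1]]              # F(-1) = 0, F(1) = total mass
    res = linprog(np.append(np.zeros(nc), 1.0), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * nc + [(0, None)])
    return res.x[:nc]                  # check res.status in real use
\end{verbatim}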
\medskip We begin by showing that with high probability {\tt Learn-WB-Single-Poly} calls {\tt Find-Single-Polynomial} with input parameters that satisfy {\tt Find-Single-Polynomial}'s input requirements: \begin{enumerate} \item [(I)] the intervals $I_0,\dots,I_{z-1}$ are $(p,\eta)$-uniform; and \item [(II)] $\widehat{p}_m$ and $p$ satisfy the $(\mathcal P,\eta,\varepsilon)$-inequalities over $[-1,1)$. \end{enumerate} We further show that given that this happens, {\tt Find-Single-Polynomial}'s LP is feasible and has a high-quality optimal solution. \begin{lemma} \label{lem:feasible} Suppose $p$ is an ${\frac \varepsilon {64(d+1)}}$-well-behaved pdf over $[-1,1)$. Then with overall probability at least $37/40$ over the random draws performed in Steps 1 and 2 of {\tt Learn-WB-Single-Poly}, conditions (I) and (II) above hold; the LP defined in Step 1 of {\tt Find-Single-Polynomial} is feasible; and the optimal value $\tau$ is at most $\mathrm{opt}_{1,d} \cdot (1+\varepsilon).$ \end{lemma} \begin{proof} By Lemma~\ref{lem:part-approx-unif}, we have that with probability at least $99/100$, every pair $j<k$ is such that the true probability mass $p([i_j,i_k))$ is $\Theta((k-j)\varepsilon/(d+1)).$ (Note that the assumption that $p$ is ${\frac \varepsilon {64(d+1)}}$-well-behaved was required to apply Lemma~\ref{lem:part-approx-unif}.) This gives (I). The multiplicative Chernoff bound (and a union bound) tells us that for every pair $(j,k)$ with $1 \leq j < k \leq z$, with probability at least $39/40$ we have \begin{equation} \label{eq:phat-mult-good} \widehat{p}_m([i_j,i_k)) \in (1 \pm \delta)\, p([i_j,i_k)) \quad \quad \text{for~}\delta=\sqrt{{\frac \varepsilon {k-j}}}, \end{equation} and hence \begin{equation} \label{eq:phat-add-good} \left| \widehat{p}_m([i_j,i_k)) - p([i_j,i_k)) \right| \leq {\frac12} \sqrt{\varepsilon (k-j)} \cdot {\frac \varepsilon {d+1}}, \end{equation} which implies (II). We assume that all these events hold going forth, and show that then the LP is feasible. As above, let $r$ be a degree-$d$ polynomial pdf such that $\mathrm{opt}_{1,d}= d_{\mathrm TV}(p,r)$ and $r(I) = p(I)$. Let $\overline r$ be $r$ renormalized by the empirical mass $\widehat{p}_m$, so $\overline r = r\cdot \widehat{p}_m(I)/p(I)$. Similarly let $\overline p = p\cdot \widehat{p}_m(I)/p(I)$ be the renormalization of $p$. We exhibit a feasible solution as follows: take $F$ to be the cdf of $\overline r$ (a degree-$(d+1)$ polynomial). Take $w_\ell$ to be $\int_{[i_\ell,i_{\ell+1})} (\overline r-\overline p)$, and take $y_\ell$ to be $|w_\ell|$. Finally, take $\tau$ to be ${\frac 1 2} \sum_{0 \leq \ell < z} y_\ell.$ We first argue feasibility of the above solution, beginning with the easy constraints: since $F$ is the cdf of a subdistribution over $I$ it is clear that constraints 1(a) and 1(e) are satisfied, and since both $r$ and $p$ are pdfs with the same total mass it is clear that the first constraint of 1(c) and constraint 1(f) are both satisfied.
The second and third constraints of 1(c) also hold, because $\frac 12\sum_\ell y_\ell = \frac 12 \norm{r-p}_1 \cdot \widehat{p}_m(I)/p(I) \leq d_{\mathrm TV}(p,r)\cdot (1+\varepsilon)$, where we have used the $(\mathcal P, \eta, \varepsilon)$-inequalities and the assumption $\sqrt{\varepsilon z}\cdot \eta \leq \varepsilon/2$ to show $\widehat{p}_m(I) /p(I)\in [1-\varepsilon/2,1+\varepsilon/2]$. So it remains to argue constraints 1(b) and 1(d). \begin{claim} \label{claim:ineq-feas} If $\widehat{p}_m$ and $p$ satisfy the $(\mathcal P, \eta, \varepsilon/4)$-inequalities on $I\subseteq [-1,1)$, then $\widehat{p}_m + \overline r - p$ and $\overline r$ satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities on $I$. \end{claim} \begin{proof} For an interval $J = [i_j, i_k)$ whose endpoints are endpoints of intervals of $\mathcal P$, the LHS of the $(\mathcal P, \eta, \varepsilon)$-inequalities between $\widehat{p}_m + (\overline r-p)$ and $\overline r$ is \[ \abs{\widehat{p}_m(J)+({\overline r}-p)(J) - {\overline r}(J)} = \abs{\widehat{p}_m(J) - {p}(J)} . \] Therefore it suffices to bound $\abs{\widehat p_m(J) - p(J)}$ and $\abs{\overline r(J) - p(J)}$. We can bound $\abs{\widehat p_m(J) - p(J)}$ by the $(\mathcal P, \eta, \varepsilon/4)$-inequalities between $\widehat{p}_m$ and $p$ in our assumption. We also have \[ \abs{\overline r(J) - p(J)} \leq \frac{\varepsilon}{2}\, p(J) \] because $\widehat{p}_m(I)/p(I)\in [1-\varepsilon/2,1+\varepsilon/2]$. \end{proof} Note that constraint 1(b) is equivalent to $\widehat{p}_m + (\overline r - p)$ and $\overline r$ satisfying the $(\mathcal P, \varepsilon/(d+1), \varepsilon)$-inequalities; therefore this constraint is satisfied by \eqref{eq:phat-add-good} and \cref{claim:ineq-feas}. To see that constraint 1(d) is satisfied we recall some of the analysis of Arora and Khot \cite[Section~3]{AK03}. This analysis shows that since the cdf of $r$ is a function bounded between 0 and 1 on $I$, each of its Chebyshev coefficients is at most $\sqrt{2}$ in magnitude. Therefore $F$ is bounded between 0 and $1+\varepsilon$, and likewise its coefficients are bounded by $\sqrt 2(1+\varepsilon)$. To conclude the proof of the lemma we need to argue that $\tau \leq \mathrm{opt}_{1,d}\cdot (1+\varepsilon)$. Since $w_\ell = \int_{[i_\ell,i_{\ell+1})} (\overline r-\overline p)$ it is easy to see that $2 \tau = \sum_{0 \leq \ell < z} y_\ell = \sum_{0 \leq \ell < z} |w_\ell| \leq \|\overline p-\overline r\|_1$, and hence indeed $\tau \leq d_{\mathrm TV}(p,r)\cdot \widehat{p}_m(I)/p(I) \leq \mathrm{opt}_{1,d}\cdot (1+\varepsilon)$ as required. \end{proof} Having established that with high probability the LP is indeed feasible, henceforth we let $\tau$ denote the optimal value of the LP and $F$, $f$, $w_\ell$, $c_i$, $y_\ell$ denote the values in the optimal solution. A simple argument (see e.g. the proof of \cite[Theorem~8]{AK03}) gives that $\|F\|_\infty \leq 2(1+\varepsilon)$. Given this bound on $\|F\|_\infty$, Markov's inequality (Theorem~\ref{thm:bern-mark}) implies that $\|f\|_\infty = \|F'\|_\infty \leq O((d+1)^2)$. Together with (\ref{item:nonneg-1}) this implies that $f(z) \geq -\varepsilon/2$ for all $z \in [-1,1).$ Consequently $q(z) \geq 0$ for all $z \in [-1,1)$, and \[ \int_{-1}^1 q(x) dx = \varepsilon + (1 - \varepsilon) \int_{-1}^1 f(x)dx = \varepsilon + (1-\varepsilon)(F(1)-F(-1)) = 1. \] So $q(x)$ is indeed a degree-$d$ pdf.
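\medskip \noindent In code, this final step of {\tt Find-Single-Polynomial} is a one-liner; the sketch below (ours, on a grid of function values rather than coefficients, with $f(I)=1$ and $|I|=2$ assumed) shows the mixing step and why the slightly negative $f$ becomes a genuine pdf.
\begin{verbatim}
# q = eps * f(I)/|I| + (1 - eps) * f : mix f with the uniform density on
# I so that a small negative dip of f is absorbed.
import numpy as np

def mix_with_uniform(f_vals, eps, total_mass=1.0, interval_length=2.0):
    return (eps * total_mass / interval_length
            + (1.0 - eps) * np.asarray(f_vals))

q_vals = mix_with_uniform([-0.04, 0.3, 0.9, 0.84], eps=0.1)
assert (q_vals >= 0).all()  # holds whenever f >= -eps/2 pointwise
\end{verbatim}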
To prove Theorem~\ref{thm:agno-kis1} it remains to show that $d_{\mathrm TV}(q,p) \leq 3 \mathrm{opt}_{1,d}(1+\varepsilon) + O(\varepsilon).$ We sketch the argument that we shall use to bound $d_{\mathrm TV}(p,q).$ A key step in achieving this bound is to bound the $\|\cdot\|_{\cal A}$ distance between $f$ and $\widehat{p}_m + w$, where ${\cal A} = \mathcal A_{d+1}$ is the class of all unions of $d+1$ intervals and $w$ is a function based on the $w_\ell$ values (see \eqref{eq:good} below). Similarly to Section~\ref{sec:ineff}, the VC inequality gives us that $\|p - \widehat{p}_m\|_{\cal A} \leq \varepsilon$ with probability at least $39/40$, so if we can bound $\|(\widehat{p}_m +w)- f\|_{\cal A} \leq O(\varepsilon)$ then it will not be difficult to show that $\|r - f\|_{\cal A} \leq 2 \mathrm{opt}_{1,d} + O(\varepsilon).$ Since $r$ and $f$ are both degree-$d$ polynomials we have $d_{\mathrm TV}(r,f) = \|r - f\|_{\cal A} \leq 2 \mathrm{opt}_{1,d} + O(\varepsilon)$, so the triangle inequality (recalling that $d_{\mathrm TV}(p,r) = \mathrm{opt}_{1,d}$) gives $d_{\mathrm TV}(p,f) \leq 3 \mathrm{opt}_{1,d}+O(\varepsilon).$ From this point a simple argument (Proposition~\ref{prop:perturb}) gives that $d_{\mathrm TV}(p,q) \leq d_{\mathrm TV}(p,f) + O(\varepsilon)$, which gives the theorem. We will use the following lemma, which translates the $(\mathcal P, \eta, \varepsilon)$-inequalities into a bound on $\mathcal A_{d+1}$ distance. \begin{lemma} \label{lem:ad-dist} Let $\mathcal P = \{I_0=[i_0, i_1), \dots, I_{z-1}=[i_{z-1}, i_z)\}$ be a $(p,\eta)$-uniform partition of $I$. Let $\widehat{p}_m$ be a subdistribution on $I$ such that $\widehat{p}_m$ and $p$ satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities on $I$. If $h:I\to \mathbb{R}$ and $\widehat{p}_m$ also satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities, then \[ \|\widehat{p}_m - h\|_{\mathcal A_{d+1}}^{(I)} \leq \sqrt{\varepsilon z (d+1)}\cdot \eta + {\rm error}, \] where ${\rm error} = O((d+1)\eta)$. \end{lemma} \begin{proof} To analyze $\|\widehat{p}_m - h\|_{\mathcal A_{d+1}}$, consider any union of $d+1$ disjoint intervals $S = J_1 \cup \dots \cup J_{d+1}$. We will bound $\norm{ \widehat{p}_m - h }_{\mathcal A_{d+1}}$ by bounding $\abs{ \widehat{p}_m(S) - h(S)}$. We lengthen the intervals in $S$ slightly to obtain $T = J'_1 \cup \dots \cup J'_{d+1}$ so that each $J'_j$ is a union of intervals of the form $[i_\ell, i_{\ell+1})$. Formally, if $J_j = [a,b)$, then $J'_j = [a',b')$, where $a' = \max_\ell \{ i_\ell \mid i_\ell \leq a\}$ and $b' = \min_\ell \{ i_\ell\mid i_\ell \geq b \}$. We claim that \begin{equation} \label{eq:lengthen} \abs{ \widehat{p}_m(S) - h(S) } \leq O((d+1)\eta) + \abs{ \widehat{p}_m(T) - h(T) } .
\end{equation} Indeed, consider any interval of the form $J = [i_\ell, i_{\ell+1})$ such that $J \cap S \neq J \cap T$. We have \begin{equation} \label{eq:lengthen-single} \abs{ \widehat{p}_m(J \cap S) - \widehat{p}_m(J \cap T) } \leq \widehat{p}_m(J) \leq O(\eta), \end{equation} where the first inequality uses nonnegativity of $\widehat{p}_m$ and the second inequality follows from the $(\mathcal P, \eta, \varepsilon)$-inequalities (between $\widehat{p}_m$ and $p$) and the bound $p([i_\ell,i_{\ell + 1})) \leq \eta$. The $(\mathcal P, \eta, \varepsilon)$-inequalities (between $h$ and $\widehat{p}_m$) imply that the inequalities in \eqref{eq:lengthen-single} also hold with $h$ in place of $\widehat{p}_m$. Now \eqref{eq:lengthen} follows by adding \eqref{eq:lengthen-single} across all $J = [i_\ell, i_{\ell+1})$ such that $J\cap S\neq J\cap T$ (there are at most $2(d+1)$ such intervals $J$), since each interval $J_j$ in $S$ can change at most two such $J$'s when lengthened. Now rewrite $T$ as a disjoint union of $s \leq d+1$ intervals $[i_{L_1}, i_{R_1}) \cup \dots \cup [i_{L_s}, i_{R_s})$. We have \[ |\widehat{p}_m(T) - h(T)| \leq \sum_{j=1}^s \sqrt{R_j - L_j} \cdot \sqrt{\varepsilon}\,\eta \] by the $(\mathcal P, \eta, \varepsilon)$-inequalities between $\widehat{p}_m$ and $h$. Now observing that $0 \leq L_1 \leq R_1 \leq \cdots \leq L_s \leq R_s \leq z = O((d+1)/\varepsilon)$, we get that the largest possible value of $\sum_{j=1}^s \sqrt{R_j - L_j}$ is $\sqrt{sz} \leq \sqrt{(d+1)z}$, so the RHS of (\ref{eq:lengthen}) is at most $O((d+1)\eta) + \sqrt{ (d+1)z\varepsilon}\,\eta$, as desired. \end{proof} Recall from above that $F$, $f$, $w_\ell$, $c_i$, $y_\ell$, $\tau$ denote the values in the optimal solution. We claim that \begin{equation} \label{eq:good} \| (\widehat{p}_m+ w) - f \|_{\cal A} = O(\varepsilon) , \end{equation} where $w$ is the function which is constant on each $[i_\ell, i_{\ell+1})$ and has mass $w_\ell$ there, so in particular $\| w \|_1 \leq 2\tau \leq 2 \mathrm{opt}_{1,d}(1+\varepsilon)$. Indeed, this equality follows by applying \cref{lem:ad-dist} with $h = f-w$. The lemma requires $h$ and $\widehat{p}_m$ to satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities, which follows from constraint 1(b) (the $(\mathcal P, \eta, \varepsilon)$-inequalities between $\widehat{p}_m+w$ and $f$) and the observation that $(\widehat{p}_m+ w) - f = \widehat{p}_m- (f - w)$. We have also used $\eta = \Theta(\varepsilon/(d+1))$ to bound the {\rm error} term of the lemma by $O(\varepsilon)$. Next, by the triangle inequality we have (writing ${\cal A}$ for ${\cal A}_{d+1}$) \[ \| r - f \|_{\cal A} \leq \| r - (p+w) \|_{\cal A} + \| (p+w) - (\widehat{p}_m+w) \|_{\cal A } + \| (\widehat{p}_m+w) - f \|_{\cal A} . \] The last term on the RHS has just been shown to be $O(\varepsilon)$. The second term equals $\| p - \widehat{p}_m\|_{\cal A}$ and is $O(\varepsilon)$ with probability at least $39/40$ by the VC inequality. The first term is bounded by \[ \| r-(p+w)\|_{\cal A} \leq d_{\mathrm TV}(r, p+w) = \| r-(p+w) \|_1/2 \leq (\| r-p\|_1 + \| w\|_1)/2 \leq 2\mathrm{opt}_{1,d}(1+\varepsilon). \] Altogether, we get that $\| r - f \|_{\cal A} \leq 2\mathrm{opt}_{1,d} (1+\varepsilon)+ O(\varepsilon)$. Since $r$ and $f$ are degree-$d$ polynomials, $d_{\mathrm TV}(r,f) = \| r - f \|_{\cal A} \leq 2\mathrm{opt}_{1,d}(1+\varepsilon) + O(\varepsilon)$.
This implies $d_{\mathrm TV}(p,f) \leq d_{\mathrm TV}(p,r) + d_{\mathrm TV}(r,f) \leq 3\mathrm{opt}_{1,d} (1+\varepsilon)+ O(\varepsilon)$. Finally, we turn our quasidistribution $f$, which has value $\geq -\varepsilon/2$ everywhere, into a distribution $q$ (which is nonnegative) by redistributing the mass. The following simple proposition bounds the error incurred. \begin{proposition} \label{prop:perturb} Let $f$ and $p$ be sub-quasidistributions on $I$. If $q = \varepsilon f(I)/\len I + (1- \varepsilon)f$, then $\norm{q - p}_1 \leq \norm{f - p}_1 + \varepsilon(f(I)+p(I))$. \end{proposition} \begin{proof} We have \[ q - p = \varepsilon(f(I)/\len I - p) + (1-\varepsilon)(f - p). \] Therefore \[ \norm{ q - p }_1 \leq \varepsilon \norm{f(I)/\len I - p}_1 + (1-\varepsilon) \norm{ f - p }_1 \leq \varepsilon(f(I)+p(I)) + \norm{ f - p }_1 . \qedhere \] \end{proof} We have $d_{\mathrm TV}(p,q) \leq d_{\mathrm TV}(p,f) + O(\varepsilon)$ by Proposition~\ref{prop:perturb}, and we are done with the proof of Theorem~\ref{thm:agno-kis1}. \qed \subsection{Efficiently learning $t$-piecewise degree-$d$ distributions.} \label{sec:learn-piecewise-deg-d} In this section we extend the previous result to semi-agnostically learn $t$-piecewise degree-$d$ distributions. We prove the following: \begin{theorem} \label{thm:piece-poly} Let $p$ be an ${\frac \varepsilon {64t(d+1)}}$-well-behaved pdf over $[-1,1)$. There is an algorithm {\tt Learn-WB-Piecewise-Poly}$(t,d,\varepsilon)$ which runs in poly$(t,d+1,1/\varepsilon)$ time, uses $\tilde{O}(t(d+1)/\varepsilon^2)$ samples from $p$, and with probability at least $9/10$ outputs a $(2t-1)$-piecewise degree-$d$ distribution $q$ such that $d_{\mathrm TV}(p, q) \leq 3 \mathrm{opt}_{t,d} (1+\varepsilon) + O(\varepsilon)$. \end{theorem} At a high level, {\tt Learn-WB-Piecewise-Poly}$(t,d,\varepsilon)$ breaks $[-1,1)$ into $s = \Theta(t/\varepsilon)$ subintervals (denoted as the partition $\mathcal P' = \{I'_0,\dots, I'_{s - 1}\}$ in the subsequent discussion; this partition is constructed in step (\ref{item:coarsening})) and calls the subroutine \texttt{Find-Single-Polynomial}$(d,\varepsilon,\eta, \{I'_\ell,\dots,I'_{j-1}\}, \widehat{p}_m)$ on blocks of consecutive intervals from $\mathcal P'$ (see \cref{rem:repres}). As shown in the previous subsection, the subroutine {\tt Find-Single-Polynomial} returns a degree-$d$ polynomial $h$ that is close to the optimal degree-$d$ polynomial over $I'_\ell \cup \cdots \cup I'_{j-1}$. An exhaustive search over all ways of breaking $[-1,1)$ up into $t$ intervals would require running time exponential in $t$; to improve efficiency, dynamic programming is used to combine the different $h$'s obtained as described above and efficiently construct an overall high-accuracy piecewise degree-$d$ hypothesis. \begin{remark} \label{rem:repres} The subroutine \textsc{Find-Single-Polynomial} from the previous section assumes the domain $I$ is $[-1,1)$. The following modification extends the subroutine to an arbitrary domain $I$. Map the interval $I = [a,b)$ to $[-1,1)$ via \[ \phi_I(a+\lambda(b-a)) = -1+2\lambda \quad \forall \lambda\in [0,1). \] We write $\phi = \phi_I$ when $I$ is clear from the context.
Then the transformation $f\mapsto f_\phi$, where \[ f_\phi(x) = \frac{b-a}2 \cdot f(\phi^{-1}(x)) , \] is a linear map taking distributions over $I$ to distributions over $[-1,1)$ (and in fact, a linear isomorphism from $L_1(I)$ to $L_1[-1,1)$). This transformation is also a bijection between degree-$d$ polynomials over $I$ and those over $[-1,1)$. As a result, if we represent $f_\phi$ by \[ f_\phi(x) = \sum_{i=0}^d c_i T_i(x) \quad \forall x\in [-1,1), \] where $T_i:[-1,1)\to \mathbb{R}$ are Chebyshev polynomials of degree $i$, we get a representation of $f:I\to \mathbb{R}$ via \begin{equation} \label{eq:repres} f(y) = {\frac{2}{b-a}} \sum_{i=0}^d c_i T_i(\phi(y)) . \end{equation} Note that if $f$ is bounded on $I$ and $b-a\leq 2$, then the same is true for $f_\phi$ on $[-1,1)$, and \[ \norm{f_\phi}_\infty^{([-1,1))} \leq \norm f_\infty^{(I)} . \] (The same inequality is also true with the RHS multiplied by $(b-a)/2 \leq 1$, but we only need the weaker inequality above.) Further, since $f\mapsto f_\phi$ preserves distances between subdistributions, the assumptions and conclusions in the subroutine remain unchanged. \end{remark} \begin{framed} \noindent Algorithm {\tt Learn-WB-Piecewise-Poly:} \medskip \noindent {\bf Input:} parameters $t,d,\varepsilon$ \noindent {\bf Output:} with probability at least $9/10$, a $(2t-1)$-piecewise degree-$d$ distribution $q$ such that $d_{\mathrm TV}(p,q) \leq 3 \cdot \mathrm{opt}_{t,d} (1+\varepsilon) + O(\varepsilon)$ \begin{enumerate} \item \label{item:uniform-partit} Run Algorithm~{\tt Approximately-Equal-Partition} on input parameter $\varepsilon/(t(d+1))$ to partition $[-1,1)$ into $z = \Theta(t(d+1)/\varepsilon)$ intervals $I_0=[i_0,i_1)$, $\dots,$ $I_{z-1}=[i_{z-1},i_z)$, where $i_0=-1$ and $i_z=1$, such that for each $j \in \{1,\dots,z\}$ we have $p([i_{j-1},i_j)) = \Theta(\varepsilon/(t(d+1))).$ \item \label{item:coarsening} Let $s = z/(d+1) = \Theta(t/\varepsilon)$. Set $i'_j = i_{(d+1)j}$ and define interval $I'_j=[i'_j,i'_{j+1})$ for $0\leq j < s$. \item \label{item:empirical} Draw $m = \tilde O(t(d+1)/\varepsilon^2)$ samples to define an empirical distribution $\widehat{p}_m$ over $[-1,1)$. \item Initialize $T(i,j) = \infty$ for $i\in \{0, \dots, 2t-1\}$, $j\in \{0, \dots, s\}$, except that $T(0,0) = 0$. \item \label{item:dynprog} For $i \in \{1, \dots, 2t-1\}$, $j\in \{1, \dots, s\}$, $\ell\in \{0, \dots, j-1\}$: \begin{enumerate} \item Call subroutine {\tt Find-Single-Polynomial} $(d,$ $\varepsilon,$ $\eta=\Theta(\varepsilon/(t(d+1))),$ $\{I'_\ell,\dots,I'_{j-1}\}$, $\widehat{p}_m)$ \item Let $\tau$ be the optimal LP value found by {\tt Find-Single-Polynomial} and $h$ be the degree-$d$ hypothesis sub-distribution that it returns. \item If $T(i,j) > T(i-1, \ell) + \tau$, then \begin{enumerate} \item Update $T(i,j)$ to $T(i-1, \ell) + \tau$ \item Store the polynomial $h$ in a table $H(i,j)$. \end{enumerate} \end{enumerate} \item Recover a piecewise degree-$d$ distribution $h$ from the table $H(\cdot, \cdot)$. \end{enumerate} \end{framed} Let $\checkmark_1$ be the event that step (\ref{item:uniform-partit}) of \texttt{Learn-WB-Piecewise-Poly} succeeds (i.e.\ the intervals $[i_j, i_{j+1})$ all have mass within a constant factor of $\varepsilon/(t(d+1))$). In step (\ref{item:coarsening}) of \texttt{Learn-WB-Piecewise-Poly}, the algorithm effectively constructs a coarsening $\mathcal P'$ of $\mathcal P$ by merging every $d+1$ consecutive intervals from $\mathcal P$.
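\medskip \noindent These super-intervals feed the dynamic program of step (\ref{item:dynprog}); the skeleton below (ours) shows its shape. Here \texttt{fit\_one(l, j)} stands in for the call to {\tt Find-Single-Polynomial} on $I'_\ell \cup \cdots \cup I'_{j-1}$ and is assumed given, and the backtracking step makes the recovery in the final step of the algorithm explicit.
\begin{verbatim}
# T[i][j]: least achievable sum of per-block errors tau when [i'_0, i'_j)
# is covered by exactly i blocks; H and back record the fitted polynomial
# and the split point so the piecewise hypothesis can be recovered.
import math

def piecewise_dp(s, t, fit_one):
    fits = {(l, j): fit_one(l, j)
            for j in range(1, s + 1) for l in range(j)}
    T = [[math.inf] * (s + 1) for _ in range(2 * t)]
    T[0][0] = 0.0
    back, H = {}, {}
    for i in range(1, 2 * t):
        for j in range(1, s + 1):
            for l in range(j):
                tau, h = fits[(l, j)]
                if T[i - 1][l] + tau < T[i][j]:
                    T[i][j] = T[i - 1][l] + tau
                    back[(i, j)], H[(i, j)] = l, h
    best_i = min(range(1, 2 * t), key=lambda b: T[b][s])
    pieces, i, j = [], best_i, s
    while j > 0:                       # walk the backpointers
        pieces.append((back[(i, j)], j, H[(i, j)]))
        i, j = i - 1, back[(i, j)]
    return T[best_i][s], pieces[::-1]
\end{verbatim}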
The table entry $T(i,j)$ stores the minimum sum of errors $\tau$ (returned by the subroutine \textsc{Find-Single-Polynomial}) when the interval $[i'_0, i'_j)$ is partitioned into $i$ pieces. The dynamic program as described only computes an estimate of $\mathrm{opt}_{t,d}$; one can use standard backtracking techniques to also recover a piecewise degree-$d$ polynomial $q$ close to $p$. For step (\ref{item:empirical}), let $\checkmark_2$ be the event that $p$ and $\widehat{p}_m$ satisfy the $(\mathcal P, \varepsilon/(t(d+1)), \varepsilon/4)$-inequalities. In particular, when $\checkmark_2$ holds we have $\widehat{p}_m(I) = (1 \pm O(\sqrt{\varepsilon}))\, p(I)$ for all $I\in \mathcal P$. By the multiplicative Chernoff bound and a union bound, event $\checkmark_2$ holds with probability at least $19/20$. \begin{proposition} \label{prop:solvable} If $\checkmark_1$ and $\checkmark_2$ hold and $p$ is $\tau$-close to some $t$-piecewise degree-$d$ distribution, then there is a coarsening $\mathcal P^*$ of $\mathcal P'$, with intervals denoted $I^*_i$, and degree-$d$ polynomials $g_i:I^*_i \to \mathbb{R}$ such that $\sum_i d_{\mathrm TV}^{(I^*_i)}(p,g_i) \leq \tau+O(\varepsilon)$. Further, the $g_i$ can be chosen to satisfy constraints \ref{item:total} and \ref{item:AK}--\ref{item:nonneg-1} in the subroutine \textsc{Find-Single-Polynomial}. \end{proposition} \begin{proof} Suppose $p$ is $\tau$-close to a $t$-piecewise degree-$d$ distribution. In other words, there exist a partition $\{J_1, \dots, J_t\}$ of $[-1,1)$ and degree-$d$ polynomials $h_i:J_i\to \mathbb{R}$ such that $\sum_{1\leq i\leq t} d_{\mathrm TV}^{(J_i)}(p, h_i) \leq \tau$. Let $\{ [i'_0, i'_1), \dots, [i'_{s-1}, i'_s) \}$ be $\mathcal P'$. Except in degenerate cases, the coarsening $\mathcal P^*$ contains $2t-1$ intervals, corresponding to the $t$ intervals on which $p$ is close to a polynomial and $t-1$ small intervals containing the ``breakpoints'' between the polynomials. More precisely, denote by $\alpha_0 < \dots < \alpha_t$ the breakpoints of $J_1, \dots, J_t$ (so that $J_j = [\alpha_{j-1}, \alpha_j)$), and define \[ J'_j := \cup\{ [i'_a, i'_b) \mid [i'_a, i'_b) \subseteq J_j \} \] as the maximal subinterval of $J_j$ with endpoints from $\{i'_\ell\}$; then $\mathcal P^*$ is the partition containing all the nonempty $J'_j$'s together with the intervals between consecutive $J'_j$'s. As a result, $\mathcal P^*$ is a partition of $[-1,1)$ into at most $2t-1$ non-empty intervals. For an interval $I^*_i$ not containing any breakpoint, the corresponding polynomial $g_i:I^*_i\to \mathbb{R}$ is simply the corresponding $h_j$ (the one with $I^*_i \subseteq J_j$) rescaled by the empirical mass on $I^*_i$, so \[ g_i(x) = h_j(x) \cdot \frac{\widehat{p}_m(I^*_i)}{h_j(I^*_i)} \quad \text{for $x\in I^*_i\neq \emptyset$}. \] Then $g_i$ clearly satisfies constraints \ref{item:total} and \ref{item:nonneg-1}. Constraints \ref{item:AK} and \ref{item:AK2} are also satisfied: $(h_j)_{\phi_i}$ is a degree-$d$ polynomial on $[-1,1)$ bounded by $1$ in absolute value (here $\phi_i = \phi_{I_i^*}$), and $\widehat{p}_m(I_i^*)/h_j(I_i^*) \leq 2$ when $\checkmark_2$ holds. For an interval $I^*_i$ containing a breakpoint, we simply set $g_i$ to be the constant function with total mass $\widehat{p}_m(I_i^*)$ on $I_i^*$. As before, $g_i$ satisfies constraints \ref{item:AK}--\ref{item:nonneg-1}.
The contribution of such $g_i$'s (there are at most $t-1$ of them) to $\sum_i d_{\mathrm TV}^{(I^*_i)}(p,g_i)$ is at most $(t-1)\cdot 2\varepsilon/t = O(\varepsilon)$, using the fact that $\mathcal P'$ is $(\widehat{p}_m,4\varepsilon/t)$-uniform when $\checkmark_1$ and $\checkmark_2$ hold. \end{proof} When event $\checkmark_2$ holds, $p$ and $\widehat{p}_m$ satisfy the $(\mathcal P_{I^*_i}, \varepsilon/(t(d+1)), \varepsilon/4)$-inequalities. But this is the same as $g_i$ and $\widehat{p}_m+(g_i-p)$ satisfying the $(\mathcal P_{I^*_i}, \varepsilon/(t(d+1)), \varepsilon/4)$-inequalities, because $p-\widehat{p}_m = g_i - (\widehat{p}_m + g_i - p)$. Therefore \cref{claim:ineq-feas} tells us that constraint \ref{item:phat} is satisfied. Constraints \ref{item:robust} are satisfied for similar reasons as in Section~\ref{sec:learn-deg-d-close}. Together with \cref{prop:solvable}, this shows that the LP in the subroutine \texttt{Find-Single-Polynomial} is feasible, provided the partition $\mathcal P^*$ is chosen correctly in the dynamic program. We have the following restatement of \cref{lem:ad-dist}, and a robust version as a corollary (which follows by combining \cref{lem:ad-dist} and the proof of \cref{prop:info}). \begin{lemma}[{\cref{lem:ad-dist}} restated] Let $\mathcal P$ be a $(p,\eta)$-uniform partition of $I \subseteq [-1,1)$ into $r$ intervals. Let $\widehat{p}_m$ be a subdistribution on $I$ such that $\widehat{p}_m$ and $p$ satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities. If $f:I\to \mathbb{R}$ and $\widehat{p}_m$ also satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities, then \[ \norm{\widehat{p}_m-f}_{\mathcal A_{d+1}}^{(I)} \leq \sqrt{\varepsilon r (d+1)}\cdot \eta + {\rm error}, \] where ${\rm error} = O((d+1)\eta)$. \end{lemma} \begin{corollary} \label{cor:stat-dist-bound} Let $p$ be a degree-$d$ subdistribution on $I$. Let $\mathcal P$ be a $(p,\eta)$-uniform partition of $I \subseteq [-1,1)$ into $r$ intervals. Let $\widehat{p}_m$ be a subdistribution on $I$ such that $\widehat{p}_m$ and $p$ satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities. If $h:I\to \mathbb{R}$ and $\widehat{p}_m+ w$ also satisfy the $(\mathcal P, \eta, \varepsilon)$-inequalities, then \[ d_{\mathrm TV}^{(I)}(p,h) \leq 3\tau (1+\varepsilon) + \sqrt{\varepsilon r(d+1)}\cdot \eta + {\rm error}, \] where $2\tau = \norm w_1$ and ${\rm error} = O((d+1)\eta)$. \end{corollary} \begin{proof}[{\bf Proof of \cref{thm:piece-poly}}] Since $p$ is $\tau$-close to a $t$-piecewise degree-$d$ distribution, there are a partition $\{J_1, \dots, J_t\}$ of $[-1,1)$ and degree-$d$ polynomials $g_i:J_i\to \mathbb{R}$ such that $\sum_{1\leq i\leq t} \tau_i \leq \tau$, where $\tau_i = d_{\mathrm TV}^{(J_i)}(p, g_i)$. Let $\mathcal P^* = \{I_1^*, \dots, I_{2t-1}^*\}$ be the coarsening of $\mathcal P'$ as in the proof of \cref{prop:solvable}. When $\checkmark_1$ and $\checkmark_2$ hold, it follows by a simple induction on $i\in \{0, \dots, 2t-1\}$ that the algorithm will output a $(2t-1)$-piecewise degree-$d$ distribution $h$ satisfying \begin{equation} \label{eq:stat-dist-bound-ineq} d_{\mathrm TV}(p,h) \leq \sum_{1\leq i\leq t} \left( 3\tau_i (1+\varepsilon) + \sqrt{\varepsilon r_i (d+1)} \cdot \frac\varepsilon{t(d+1)} + O\left((d+1)\cdot \frac\varepsilon{t(d+1)}\right) \right) + O(\varepsilon) . \end{equation} The terms inside the sum come from \cref{cor:stat-dist-bound} (with $\eta = O(\varepsilon/(t(d+1)))$), and the final $O(\varepsilon)$ term comes from the $t-1$ intervals containing the breakpoints (see the proof of \cref{prop:solvable}).
Here $r_i$ denotes the number of intervals from $\mathcal P$ contained in $I_i^*$. Therefore the RHS of \eqref{eq:stat-dist-bound-ineq} is at most
\[
3\tau (1+\varepsilon) + \sum_{1\leq i\leq t} \sqrt{\varepsilon r_i (d+1)}\cdot \frac\varepsilon{t(d+1)} + O(\varepsilon) .
\]
The second term of this expression is bounded by $\varepsilon$: by Cauchy--Schwarz, $\sum_{1\leq i\leq t} \sqrt{r_i} \leq \sqrt{t \sum_i r_i}$, and since $\mathcal P$ contains $t(d+1)/\varepsilon$ intervals we have $\sum_i r_i \leq t(d+1)/\varepsilon$, so
\[
\sum_{1\leq i\leq t} \sqrt{\varepsilon r_i (d+1)}\cdot \frac\varepsilon{t(d+1)} \;\leq\; \frac{\varepsilon\sqrt{\varepsilon (d+1)}}{t(d+1)} \cdot \sqrt{t \cdot \frac{t(d+1)}{\varepsilon}} \;=\; \varepsilon.
\]
\end{proof}

\subsection{Learning $k$-mixtures of well-behaved $(\tau,t)$-piecewise degree-$d$ distributions.} \label{sec:mix}
In this subsection we prove Theorem~\ref{thm:main2} under the additional restriction that the target distribution $p$ is well-behaved:

\begin{theorem} \label{thm:learn-wb-mix}
Let $p$ be an ${\frac \varepsilon {64 k t(d+1)}}$-well-behaved $k$-mixture of $(\tau,t)$-piecewise degree-$d$ distributions over $[-1,1)$. There is an algorithm that runs in $\mathrm{poly}(k,t,d+1,1/\varepsilon)$ time, uses $\tilde{O}((d+1)kt/\varepsilon^2)$ samples from $p$, and with probability at least $9/10$ outputs a $(2kt-1)$-piecewise degree-$d$ hypothesis $h$ such that $d_{\mathrm TV}(p,h) \leq 3 \mathrm{opt}_{t,d} (1+\varepsilon) + O(\varepsilon).$
\end{theorem}

As we shall see, the algorithm of the previous subsection in fact suffices for this result. The key to extending Theorem~\ref{thm:piece-poly} to yield Theorem~\ref{thm:learn-wb-mix} is the following structural result, which says that any $k$-mixture of $(\tau,t)$-piecewise degree-$d$ distributions must itself be a $(\tau,kt)$-piecewise degree-$d$ distribution.

\begin{lemma} \label{lem:mix}
Let $p_1,\dots,p_k$ each be a $(\tau,t)$-piecewise degree-$d$ distribution over $[-1,1)$ and let $p = \sum_{j=1}^k \mu_j p_j$ be a $k$-mixture of components $p_1,\dots,p_k.$ Then $p$ is a $(\tau,kt)$-piecewise degree-$d$ distribution.
\end{lemma}

The simple proof is essentially the same as the proof of Lemma~3.2 of \cite{CDSS13soda} and is given in Appendix~\ref{ap:z}. We may rephrase Theorem~\ref{thm:piece-poly} as follows:

\medskip
\noindent {\bf Alternate Phrasing of Theorem~\ref{thm:piece-poly}.}
\emph{Let $p$ be an ${\frac \varepsilon {64 t (d+1)}}$-well-behaved $(\tau,t)$-piecewise degree-$d$ pdf over $[-1,1).$ Algorithm {\tt Learn-WB-Piecewise-Poly}$(t,d,\varepsilon)$ runs in $\mathrm{poly}(t,d+1,1/\varepsilon)$ time, uses $\tilde{O}(t(d+1)/\varepsilon^2)$ samples from $p$, and with probability at least $9/10$ outputs a $(2t-1)$-piecewise degree-$d$ distribution $q$ such that $d_{\mathrm TV}(p,q) \leq 3 \tau (1+\varepsilon)+ O(\varepsilon).$ }
\medskip

Theorem~\ref{thm:learn-wb-mix} follows immediately from Theorem~\ref{thm:piece-poly} and Lemma~\ref{lem:mix}.

\subsection{Proof of Theorem~\ref{thm:main2}.} \label{sec:kill-wb}
In this subsection we show how to remove the well-behavedness assumption from Theorem~\ref{thm:learn-wb-mix} and thus prove Theorem~\ref{thm:main2}. More precisely, we prove the following theorem, which is a more detailed version of Theorem~\ref{thm:main2}:

\begin{theorem} \label{thm:main-detail}
Let $p$ be any $k$-mixture of $(\tau,t)$-piecewise degree-$d$ distributions over $[-1,1)$.
There is an algorithm that runs in $\mathrm{poly}(k,t,d+1,1/\varepsilon)$ time, uses $\tilde{O}((d+1)kt/\varepsilon^2)$ samples from $p$, and with probability at least $9/10$ outputs a $(2kt-1)$-piecewise degree-$d$ hypothesis $h$ such that $d_{\mathrm TV}(p,h) \leq 4 \mathrm{opt}_{t,d} (1+\varepsilon)+ O(\varepsilon).$
\end{theorem}

To prove Theorem~\ref{thm:main-detail} we will need the following simple procedure, which (approximately) outputs all the points in $[-1,1)$ that are $\gamma$-heavy under a distribution $p$:

\begin{framed}
\noindent {\bf Algorithm {\tt Find-Heavy}:}
\medskip

\noindent {\bf Input:} parameter $\gamma>0$, sample access to distribution $p$ over $[-1,1)$

\noindent {\bf Output:} With probability at least $99/100$, a set $S \subset [-1,1)$ such that for all $x \in [-1,1)$,
\begin{enumerate}
\item if $\Pr_{x \sim p}[x] \geq 2 \gamma$ then $x \in S$;
\item if $\Pr_{x \sim p}[x] < \gamma/2$ then $x \notin S$.
\end{enumerate}

\noindent Draw $m = \tilde{O}(1/\gamma)$ samples from $p$. For each $x \in [-1,1)$ let $\widehat{p}(x)$ equal $1/m$ times the number of occurrences of $x$ in these $m$ draws. Return the set $S$ which contains all $x$ such that $\widehat{p}(x) \geq \gamma.$
\end{framed}

It is clear that the set $S$ returned by {\tt Find-Heavy}$(\gamma)$ has $|S| \leq 1 /\gamma$. We now prove that {\tt Find-Heavy} performs as claimed:

\begin{lemma} \label{lem:FH}
With probability at least $99/100$, {\tt Find-Heavy}$(\gamma)$ returns a set $S$ satisfying conditions (1) and (2) in the ``Output'' description.
\end{lemma}

We give the straightforward proof in Appendix~\ref{ap:z}. To prove Theorem~\ref{thm:main-detail} it suffices to prove the following result (which is an extension of Theorem~\ref{thm:piece-poly} that does not require the well-behavedness condition on $p$):

\begin{theorem} \label{thm:no-wb}
Let $p$ be a pdf over $[-1,1)$. There is an algorithm {\tt Learn-Piecewise-Poly}$(t,d,\varepsilon)$ which runs in $\mathrm{poly}(t,d+1,1/\varepsilon)$ time, uses $\tilde{O}(t(d+1)/\varepsilon^2)$ samples from $p$, and with probability at least $9/10$ outputs a $(2t-1)$-piecewise degree-$d$ distribution $q$ such that $d_{\mathrm TV}(p, q) \leq 4 \mathrm{opt}_{t,d}(1+\varepsilon) + O(\varepsilon)$, where $\mathrm{opt}_{t,d}$ is the smallest variation distance between $p$ and any $t$-piecewise degree-$d$ distribution.
\end{theorem}

Using the arguments of Section~\ref{sec:mix}, Theorem~\ref{thm:main-detail} follows from Theorem~\ref{thm:no-wb} exactly as Theorem~\ref{thm:learn-wb-mix} follows from Theorem~\ref{thm:piece-poly}.

\medskip
\noindent {\bf Proof of Theorem~\ref{thm:no-wb}.}
The algorithm {\tt Learn-Piecewise-Poly}$(t,d,\varepsilon)$ works as follows: it first runs {\tt Find-Heavy}$(\gamma)$ where $\gamma = O({\frac \varepsilon {t(d+1)}})$ to obtain a set $S \subset [-1,1).$ It then runs {\tt Learn-WB-Piecewise-Poly}$(t,d,\varepsilon)$ but using the distribution $p_{[-1,1)\setminus S}$ (i.e., $p$ conditioned on $[-1,1) \setminus S$) in place of $p$ throughout the algorithm. Each time a draw from $p_{[-1,1)\setminus S}$ is required, it simply draws repeatedly from $p$ until a point outside of $S$ is obtained.
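For concreteness, the following is a minimal Python sketch of {\tt Find-Heavy}. The explicit sample size (the constants inside the logarithm) and the function names are our own illustrative choices standing in for the $\tilde{O}(1/\gamma)$ bound above, not part of the formal algorithm.
\begin{verbatim}
import math
from collections import Counter

def find_heavy(draw_sample, gamma):
    # Draw m = O((1/gamma) log(1/gamma)) samples; the hidden constant
    # here is an illustrative choice for the tilde-O(1/gamma) above.
    m = math.ceil((8.0 / gamma) * math.log(2.0 / gamma))
    counts = Counter(draw_sample() for _ in range(m))
    # Keep every point whose empirical frequency p_hat(x) is >= gamma.
    return {x for x, c in counts.items() if c / m >= gamma}
\end{verbatim}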
Let $p$ be any distribution over $[-1,1).$ Since the conclusion of the theorem is trivial if $\mathrm{opt}_{t,d} \geq 1/4$, we may assume that $\mathrm{opt}_{t,d}< 1/4.$ Consider an execution of {\tt Learn-Piecewise-Poly}$(t,d,\varepsilon)$. We assume that conditions (1) and (2) of {\tt Find-Heavy} indeed hold for the set $S$ that it constructs.

Let $S' \supseteq S$ be defined as $S' = \{x \in [-1,1): \Pr_{x \sim p}[x] \geq \gamma/2\}.$ Since every $t$-piecewise degree-$d$ distribution $q$ has $d_{\mathrm TV}(p,q) \geq \Pr_{x \sim p}[x \in S']$ (because $p$ assigns probability $\Pr_{x \sim p}[x \in S']$ to $S'$ whereas $q$ assigns probability 0 to this finite set of points), it must be the case that $\Pr_{x \sim p}[x \in S] \leq \Pr_{x \sim p}[x \in S'] \leq \mathrm{opt}_{t,d}.$ Hence a single draw from $p$ yields a valid draw from $p_{[-1,1) \setminus S}$ except with failure probability at most $\mathrm{opt}_{t,d}< 1/4.$ It follows easily from this and the sample complexity bound of Theorem~\ref{thm:piece-poly} that the sample complexity of algorithm {\tt Learn-Piecewise-Poly}$(t,d,\varepsilon)$ is as claimed.

Verifying correctness is also straightforward. Recall that $\mathrm{opt}_{t,d}$ denotes the infimum of $d_{\mathrm TV}(p,q)$ where $q$ is any $t$-piecewise degree-$d$ distribution. Fix a $q$ which achieves $d_{\mathrm TV}(p,q)=\mathrm{opt}_{t,d}$; we claim that this $q$ also satisfies $d_{\mathrm TV}(p_{[-1,1)\setminus S},q) \leq \mathrm{opt}_{t,d}.$ (To see this, note that we may write $d_{\mathrm TV}(p,q)$ as $A + B$ where $A$ is the contribution from points in $[-1,1)\setminus S$ and $B$ is the contribution from $S$. Since $\Pr_{x \sim q}[x \in S]$ is zero, it must be the case that $B = {\frac 1 2} \Pr_{x \sim p}[S]$, where the ``${\frac 1 2}$'' is the factor relating $L_1$ norm and total variation distance. Now write $d_{\mathrm TV}(p_{[-1,1)\setminus S},q)$ as $A' + B'$ where $A'$ is the contribution from points in $[-1,1)\setminus S$ and $B'$ is the contribution from $S$. Clearly $B'$ is now 0, and $A'$ can be at most $B={\frac 1 2} \Pr_{x \sim p}[S]$ larger than $A$.) By Lemma~\ref{lem:FH} we have that $p_{[-1,1)\setminus S}$ is $O({\frac \varepsilon {t(d+1)}})$-well-behaved. Hence by Theorem~\ref{thm:piece-poly}, when {\tt Learn-WB-Piecewise-Poly}$(t,d,\varepsilon)$ is run on $p_{[-1,1) \setminus S}$ it succeeds with high probability to give a hypothesis $h$ such that $d_{\mathrm TV}(h,p_{[-1,1) \setminus S}) \leq 3\mathrm{opt}_{t,d}(1+\varepsilon)+ O(\varepsilon)$. Since $d_{\mathrm TV}(p,p_{[-1,1) \setminus S}) \leq \mathrm{opt}_{t,d}$, the triangle inequality gives $d_{\mathrm TV}(h,p) \leq 4 \mathrm{opt}_{t,d}(1+\varepsilon)+ O(\varepsilon)$, and Theorem~\ref{thm:no-wb} is proved. \qed
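Putting the pieces together, the algorithm {\tt Learn-Piecewise-Poly} described in the proof above can be sketched as follows; {\tt learn\_wb} stands in for an implementation of {\tt Learn-WB-Piecewise-Poly}, and the constant in $\gamma$ is again an illustrative choice.
\begin{verbatim}
def draw_conditioned(draw_sample, S, max_tries=1000):
    # Rejection sampling from p conditioned on [-1,1) \ S: each attempt
    # fails with probability p(S) <= opt_{t,d} < 1/4.
    for _ in range(max_tries):
        x = draw_sample()
        if x not in S:
            return x
    raise RuntimeError("could not draw a point outside S")

def learn_piecewise_poly(draw_sample, t, d, eps, learn_wb):
    gamma = eps / (t * (d + 1))          # illustrative constant
    S = find_heavy(draw_sample, gamma)   # see the sketch above
    # Run the well-behaved learner on p conditioned on the complement of S.
    return learn_wb(lambda: draw_conditioned(draw_sample, S), t, d, eps)
\end{verbatim}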
\bigskip

\section{Applications} \label{sec:applic}
In this section we use Theorem~\ref{thm:main-detail} to obtain a wide range of concrete learning results for natural and well-studied classes of distributions over both continuous and discrete domains. Throughout this section we do not aim to exhaustively cover all possible applications of Theorem~\ref{thm:main-detail}, but rather to give some selected applications that are indicative of the generality and power of our methods.

We first (Section~\ref{sec:continuous}) give a range of applications of Theorem~\ref{thm:main-detail} to semi-agnostically learn various natural classes of continuous distributions. These include non-parametric classes such as concave, log-concave, and $k$-monotone densities, mixtures of these densities, and parametric classes such as mixtures of univariate Gaussians. Next, turning to discrete distributions, we first show (Section~\ref{sec:discrete}) how the $d=0$ case of Theorem~\ref{thm:main-detail} can be easily adapted to learn \emph{discrete} distributions that are well-approximated by piecewise flat distributions. Using this general result, we improve prior results on learning mixtures of discrete $t$-modal distributions, mixtures of discrete monotone hazard rate (MHR) distributions, and mixtures of discrete log-concave distributions (including mixtures of Poisson Binomial Distributions), in most cases giving essentially optimal results in terms of sample complexity.

While we have not pursued this direction in the current paper, which focuses chiefly on continuous distributions, we suspect that with additional work Theorem~\ref{thm:main-detail} can be adapted to discrete domains in its full generality (of polynomials of degree $d$ for arbitrary $d$). We conjecture that such an adaptation may give essentially optimal sample complexity bounds for all of the classes of discrete distributions that we discuss in this paper.

\subsection{Applications to Distributions over Continuous Domains.} \label{sec:continuous}
In this section we apply our general approach to obtain efficient learning algorithms for mixtures of many different types of continuous probability distributions. We focus chiefly on distributions that are defined by various kinds of ``shape restrictions'' on the pdf. Nonparametric density estimation for shape restricted classes has been a subject of study in statistics since the 1950s (see \cite{BBBB:72} for an early book on the topic), and has applications to a range of areas including reliability theory (see~\cite{Reb05aos} and references therein). The shape restrictions that have been studied in this area include monotonicity and concavity of pdfs~\cite{Grenander:56, Brunk:58, PrakasaRao:69, Wegman:70, HansonP:76, Groeneboom:85, Birge:87, Birge:87b}. More recently, motivated by statistical applications (see e.g. Walther's recent survey~\cite{Walther09}), researchers in this area have considered other types of shape restrictions including log-concavity and $k$-monotonicity \cite{BW07aos, DumbgenRufibach:09, BRW:09aos, GW09sc, BW10sn, KoenkerM:10aos}.

As we will see, our general method provides a single unified approach that gives a highly-efficient algorithm (both in terms of sample complexity and computational complexity) for all the aforementioned shape restricted densities (and mixtures thereof). In most cases the sample complexities of our efficient algorithms are optimal up to log factors.
\subsubsection{Concave and Log-concave Densities.} \label{ssec:logconcave}
Let $I \subseteq \mathbb{R} $ be a (not necessarily finite) interval. Recall that a function $g: I \to \mathbb{R}$ is called {\em concave} if for any $x, y \in I$ and $ \lambda \in [0,1]$ it holds that $g\left( \lambda x + (1-\lambda)y \right) \ge \lambda g(x)+(1-\lambda) g(y).$ A function $h: I \to \mathbb{R}_+$ is called {\em log-concave} if $h(x) = \exp\left( g(x) \right)$, where $g:I \to \mathbb{R}$ is concave.

In this section we show that our general technique yields nearly-optimal efficient algorithms to learn (mixtures of) concave and (more generally) log-concave densities. (Because the $\log$ function is concave and non-decreasing, it is easy to see that every positive and concave function is log-concave.) In particular, we show the following:

\begin{theorem} \label{thm:lc}
Let $f: I \to \mathbb{R}_{+}$ be any $k$-mixture of log-concave densities, where $I = [a, b]$ is an arbitrary (not necessarily finite) interval. There is an algorithm that runs in $\mathrm{poly}(k,1/\varepsilon)$ time, draws $\tilde{O}(k / \varepsilon^{5/2})$ samples from $f$, and with probability at least $9/10$ outputs a hypothesis distribution $h$ such that $d_{\mathrm TV}(f, h) \le \varepsilon$.
\end{theorem}

We note that the above sample complexity is information-theoretically optimal (up to logarithmic factors). In particular, it is known (see e.g. Chapter 15 of~\cite{DL:01}) that learning a single concave density (recall that a concave density is necessarily log-concave) over $[0,1]$ requires $\Omega(\varepsilon^{-5/2})$ samples. This lower bound can be easily generalized to show that learning a $k$-mixture of log-concave distributions over $[0,1]$ requires $\Omega(k/\varepsilon^{5/2})$ samples. As far as we know, ours is the first computationally efficient algorithm with essentially optimal sample complexity for this problem.

\medskip
To prove our result we proceed as follows: We show that any log-concave density $f: I \to \mathbb{R}_{+}$ has an $(\varepsilon,t)$-piecewise linear (degree-1) decomposition for $t = \tilde{O}(1/\sqrt{\varepsilon})$. A continuous version of the argument in Theorem~4.1 of~\cite{CDSS13soda} can be used to show the existence of an $(\varepsilon, t)$-piecewise {\em constant} (degree-0) decomposition with $t = \tilde{O}(1/\varepsilon)$. Unfortunately, the latter bound is essentially tight, hence it cannot lead to an algorithm with sample complexity better than $\Omega(\varepsilon^{-3}).$

Classical approximation results (see e.g.~\cite{Dudley:74, Novak:88}) provide optimal piecewise linear decompositions of concave functions. While these results have a dependence on the domain size of the function, they can rather easily be adapted to establish the existence of $(\varepsilon, t)$-piecewise linear decompositions for concave densities with $t = O(1/\sqrt{\varepsilon})$. However, we are not aware of prior work establishing the existence of piecewise linear decompositions for \emph{log-concave} densities. We give such a result by proving the following structural lemma:
\begin{lemma} \label{lem:lc-struct}
Let $f: I \to \mathbb{R}_{+}$ be any log-concave density, where $I = [a, b]$ is an arbitrary (not necessarily finite) interval. There exists an $(\varepsilon, t)$-piecewise linear decomposition of $f$ for $t = \tilde{O}(1/\sqrt{\varepsilon})$.
\end{lemma}

We note that our proof of Lemma~\ref{lem:lc-struct} is significantly different from the aforementioned known arguments establishing the existence of piecewise linear approximations for concave functions. In particular, these proofs critically exploit concavity, namely the fact that for a concave function $f$ the line segment joining $(x, f(x))$ and $(y, f(y))$ lies below the graph of the function.

Before giving the proof of our lemma, we note that the $\tilde{O}(1/\sqrt{\varepsilon})$ bound is best possible (up to log factors) even for concave densities. This can be verified by considering the concave density over $[0,1]$ whose graph is given by the upper half of a circle. We further note that the \cite{DL:01} $\Omega(1/\varepsilon^{5/2})$ lower bound implies that no significant strengthening can be achieved by using our general results for learning piecewise degree-$d$ polynomials for $d>1$.

\medskip
\noindent Theorem~\ref{thm:lc} follows as a direct corollary of Lemma~\ref{lem:lc-struct} and Theorem~\ref{thm:main2}.

\medskip
\noindent {\bf Proof of Lemma~\ref{lem:lc-struct}:} We begin by recalling the following fact, which is a basic property (in fact an alternate characterization) of log-concave densities:

\begin{fact}(\cite{An:95}, Lemma~1) \label{fact:lc}
Let $f: \mathbb{R} \to \mathbb{R}_{+}$ be log-concave. Suppose that $\{x \mid f(x) >0 \} = (a, b).$ Then, for all $x_1, x_2 \in (a, b)$ with $x_1< x_2$ and all $\delta \ge 0$ such that $x_1+\delta, x_2+\delta \in (a, b)$ we have
\[ \frac{f(x_1+\delta)}{f(x_1)} \ge \frac{f(x_2+\delta)}{f(x_2)}.\]
\end{fact}

Let $f$ be an arbitrary log-concave density over $\mathbb{R}$. Well-known concentration bounds for log-concave densities (see \cite{An:95}) imply that a $1-\varepsilon$ fraction of the total probability mass lies in a {\em finite} interval $[a, b]$. Let $m \in [a, b]$ be a mode of $f$ so that $f$ is non-decreasing in $[a, m]$ and non-increasing in $[m, b]$. (Recall the well-known fact \cite{An:95} that every log-concave density is unimodal, so such a mode must exist.) It suffices to analyze the second portion of the density, i.e., a non-increasing log-concave (sub)-distribution over $[m, b]$. We may further assume without loss of generality that $[m, b] = [0,1]$. (It will be clear that in what follows nothing changes in the calculations as a result of this assumption -- the length of the interval is irrelevant.)

So let $f: [0,1] \to \mathbb{R}_{+}$ be a non-increasing log-concave density and let $c = f(0) = \max_{x \in [0,1]} f(x).$ Since $\log f$ is concave, $f$ is continuous in the interior of its support. We assume without loss of generality that $f$ is strictly decreasing in this domain.
(It follows from Fact~\ref{fact:lc} that for any non-increasing log-concave density over $[0, 1]$ there exists $x_* \in [0,1]$ such that $f$ is constant in $[0, x_*]$ and strictly decreasing in $[x_*, 1]$.) We proceed to construct the desired piecewise-linear approximation in two stages:
\begin{enumerate}
\item[(a)] Let $r, s \in \mathbb{Z}_{+}$ with $r=\Theta((1/\varepsilon) \log(1/\varepsilon))$ and $s = \lceil \log_{1/(1-\varepsilon)} \frac{f(0)}{f(1)}\rceil = \lceil \log_{1-\varepsilon} \frac{f(1)}{f(0)}\rceil$. We divide the domain $[0,1]$ into $t' \stackrel{{\mathrm {\footnotesize def}}}{=} \min \{ r, s \} = O((1/\varepsilon) \log(1/\varepsilon))$ intervals (disjoint except at the endpoints) $\mathcal{I} = \{I_i\}_{i=1}^{t'}$, where $I_i = [x_{i-1}, x_i]$, $i \in [t']$, and $x_0 = 0$. The point $x_i \in [0,1]$ is the point that satisfies
\begin{equation} \label{eqn:geom}
f(x_i) = \max\{ f(x_0) (1-\varepsilon)^i , f(1)\}.
\end{equation}
Since the function is strictly decreasing and continuous, such a point exists and is unique. Note that the definition with the ``max'' above addresses the case that $s \le r$. In this case, we will have that $x_{t'} = x_s = 1.$ If $s>r$, then we will have that $f(x_i) = f(x_0) (1-\varepsilon)^i$ for $i \in [t']$ and $x_{t'} < 1.$

We now proceed to establish a couple of useful properties of this decomposition. The first property is that the length of the intervals $I_i$ is non-increasing as a function of $i$ for $i \in [t']$.

\begin{claim} \label{claim:ni}
For all $i \in [t'-1]$ we have that $|I_i| \ge |I_{i+1}|$.
\end{claim}

\begin{proof}
Consider two consecutive intervals $I_i = [x_{i-1}, x_i]$ and $I_{i+1} = [x_{i}, x_{i+1}]$, $i \in [t'-1]$. It is easy to see that by the definition of the intervals we have that
\[ \frac{f(x_{i+1})}{f(x_i)} \ge \frac{f(x_{i})}{f(x_{i-1})} \]
or equivalently
\[ \frac{f(x_i + |I_{i+1}| )}{f(x_i)} \ge \frac{f(x_{i-1} + |I_{i}|) }{f(x_{i-1})} .\]
Since $x_{i-1} < x_i$, by Fact~\ref{fact:lc} we have
\[ \frac{f(x_{i-1} + |I_{i}| )}{f(x_{i-1})} \ge \frac{f(x_{i} + |I_{i}|) }{f(x_{i})} .\]
Combining the above two inequalities yields that $f(x_i + |I_{i+1}| ) \ge f(x_{i} + |I_{i}|)$. Since $f$ is non-increasing we conclude that $x_i + |I_{i+1}| \le x_{i} + |I_{i}|$ and the proof is complete.
\end{proof}

The second property is that the probability mass that $f$ puts in the interval $[x_{t'}, 1]$ is bounded by $\varepsilon$.

\begin{claim} \label{claim:tail-pc}
We have that $f([x_{t'}, 1]) \le \varepsilon$.
\end{claim}

\begin{proof}
We consider two cases. If $t' = s$, then $x_{t'} = 1$ and the desired probability is zero. It thus suffices to analyze the case $t' = r$. In this case $x_{t'} < 1$ and for all $i \in [t']$ it holds that $f(x_i) = f(x_0) (1-\varepsilon)^i$. Note that $f(x_{t'}) = f(0) (1-\varepsilon)^{t'} \le f(0) \varepsilon/2 = c\varepsilon/2.$ For the purposes of the analysis, suppose we decompose $[x_{t'}, 1]$ into a sequence of intervals $\{I_i\}_{i > t'}$, where $I_{i} = [x_{i-1}, x_i]$ and point $x_i$ is defined by (\ref{eqn:geom}). That is, we have a total of $s$ intervals $I_1, \ldots, I_s$ partitioning $[0,1]$ where by Claim~\ref{claim:ni} $|I_1| \ge |I_2| \ge \ldots \ge | I_s|.$ Clearly, $\mathop{\textstyle \sum}_{i=1}^s f(I_i) = 1$ and since $f$ is non-increasing
\begin{equation} \label{eqn:ineq}
c (1-\varepsilon)^i |I_i| \le f(x_i) |I_i| \le f(I_i) \le f(x_{i-1}) |I_i| = c (1-\varepsilon)^{i-1} |I_i|.
\end{equation}
Combining the above yields
\begin{equation} \label{eqn:ub}
c \cdot \mathop{\textstyle \sum}_{i=1}^s (1-\varepsilon)^i |I_i| \le 1.
\end{equation}
We want to show that $f([x_{t'}, 1]) = \mathop{\textstyle \sum}_{i=t'+1}^s f(I_i) \le \varepsilon.$ Indeed, we have
\begin{equation} \label{eq:handy}
\mathop{\textstyle \sum}_{i=t'+1}^s f(I_i) \le \mathop{\textstyle \sum}_{i=t'+1}^s c (1-\varepsilon)^{i-1} |I_i| \le \frac{c\varepsilon}{2(1-\varepsilon)} \cdot \mathop{\textstyle \sum}_{i=1}^{s-t'} (1-\varepsilon)^i |I_{i+t'}|
\end{equation}
where the first inequality uses (\ref{eqn:ineq}) and the second uses the fact that $(1-\varepsilon)^{t'} \le \varepsilon/2$. By Claim~\ref{claim:ni} it follows that $|I_{i+t'}| \le |I_{i}|$, which yields
\[ \mathop{\textstyle \sum}_{i=t'+1}^s f(I_i) \le \frac{c\varepsilon}{2(1-\varepsilon)} \cdot \mathop{\textstyle \sum}_{i=1}^{s-t'} (1-\varepsilon)^i |I_{i}| \le \frac{c\varepsilon}{2(1-\varepsilon)} \cdot \mathop{\textstyle \sum}_{i=1}^{s} (1-\varepsilon)^i |I_{i}| \le \varepsilon \]
where the last inequality follows from (\ref{eqn:ub}) for $\varepsilon \le 1/2$.
\end{proof}

In fact, it is now easy to show that $\mathcal{I}$ is an $(O(\varepsilon), t')$-flat decomposition of $f$, but we will not make direct use of this in the subsequent analysis.

\item[(b)] In the second step, we group consecutive intervals of $\mathcal{I}$ (in increasing order of $i$) to obtain an $(O(\varepsilon), t)$ piecewise linear decomposition $\mathcal{J} = \{J_{\ell}\}_{\ell = 1}^t$ of $f$, where $t = \tilde{O}(\varepsilon^{-1/2}).$ Suppose that we have constructed the super-intervals $J_1, \ldots, J_{\ell-1}$ and that $\cup_{u=1}^{\ell-1} J_u = \cup_{k=1}^{i} I_k = [x_0, x_{i}].$ If $i=t'$ then $t$ is set to $\ell-1$, and if $i < t'$ then the super-interval $J_{\ell}$ contains the intervals $I_{i+1}, \ldots, I_j$, where $j \in \mathbb{Z}_{+}$ is the maximum value with $j \leq t'$ that satisfies:
\begin{enumerate}
\item[(1)] $f(x_j) \ge f(x_i) (1-\varepsilon)^{1/\sqrt{\varepsilon}}$, and
\item[(2)] $|I_j| \ge (1-\sqrt{\varepsilon}) |I_{i+1}| $.
\end{enumerate}
Within each super-interval $J_{\ell} = \cup_{k=i+1}^j{I_k} = [x_i, x_j]$ we approximate $f$ by the linear function $\tilde{f}$ satisfying $\tilde{f}(x_i) = f(x_i)$ and $\tilde{f}(x_j) = f(x_j)$. This completes the description of the construction. We proceed to show correctness.

Our first claim is that it is sufficient, in the construction described in (b) above, to take only $t = \tilde{O}( \varepsilon^{-1/2})$ super-intervals, because the probability mass under $f$ that lies to the right of the rightmost of these super-intervals is at most $\varepsilon$:

\begin{claim} \label{claim:tail-pl}
Suppose that $t = \Omega(\varepsilon^{-1/2} \log(1/\varepsilon) )$ and $J_t = [x_u, x_v]$ is the rightmost super-interval. Then, $f([x_v, 1]) \le \varepsilon$.
\end{claim}

\begin{proof}
Consider a generic super-interval $J_{\ell} = \cup_{k=i+1}^{j} I_k$. Since $j$ is the maximum value that satisfies both (1) and (2), we conclude that either
\begin{equation} \label{eqn:cond1}
j+1 - i > 1/\sqrt{\varepsilon}
\end{equation}
(this inequality follows from the negation of (1) and the definition of $f(x_i)$, $f(x_j)$) or
\begin{equation} \label{eqn:cond2}
|I_{j+1}| < (1-\sqrt{\varepsilon}) |I_{i+1}| .
\end{equation}
Suppose we have $t = \Omega ( \varepsilon^{-1/2} \log(1/\varepsilon) )$ super-intervals.
Then, either (\ref{eqn:cond1}) is satisfied for at least $t/2$ super-intervals or (\ref{eqn:cond2}) is satisfied for at least $t/2$ super-intervals. Denote the rightmost super-interval by $J_t = [x_u, x_v]$. In the first case, for an appropriate constant in the big-Omega, we have $v = t'$ and the desired result follows from Claim~\ref{claim:tail-pc}. In the second case, for an appropriate constant in the big-Omega we will have $|I_v| \le \varepsilon^{3} |I_1|$. To show that $f([x_v, 1]) \le \varepsilon$ in this case, we consider further partitioning the interval $[x_v, 1]$ into a sequence of intervals $\{I_i\}_{i > v}$, where $I_{i} = [x_{i-1}, x_i]$ and point $x_i$ is defined by (\ref{eqn:geom}). By Claim~\ref{claim:ni} we will have that $|I_i| \le |I_v|$ for $i>v$. We can therefore bound the desired quantity by
\[ \mathop{\textstyle \sum}_{i=v+1}^s f(I_i) \leq \mathop{\textstyle \sum}_{i=v+1}^s c (1-\varepsilon)^{i-1} |I_i| \le \mathop{\textstyle \sum}_{i=v+1}^s c (1-\varepsilon)^{i-1} \varepsilon^{3} |I_{1}| \le \varepsilon^{3} c |I_1| \mathop{\textstyle \sum}_{i=1}^{\infty} (1-\varepsilon)^{i-1} \le \frac{\varepsilon^{3}}{(1-\varepsilon)^2} \cdot \frac{1-\varepsilon}{\varepsilon} \leq \varepsilon, \]
where the first inequality used the first inequality of (\ref{eq:handy}) and the penultimate inequality uses the fact that $c (1-\varepsilon) |I_1| \le f(I_1) \le 1.$ This completes the proof of the claim.
\end{proof}

The main claim we are going to establish for the piecewise-linear approximation $\mathcal{J}$ is the following:

\begin{claim} \label{claim:pl}
For any super-interval $J_{\ell} = \cup_{k=i+1}^{j} I_k$ and any $i \le m \le j$ we have that
\[ | \tilde{f}(x_m) - f(x_m) | = O(\varepsilon) f(x_m).\]
\end{claim}

Assuming the above claim it is easy to argue that $\mathcal{J}$ is indeed an $(O(\varepsilon), t)$ piecewise linear approximation to $f$. Let $\tilde{f}$ be the piecewise linear function over $[0,1]$ which is linear over each $J_{\ell}$ (as described above) and identically zero in the interval $[x_v, 1]$. Indeed, we have that
\begin{eqnarray*}
\| \tilde{f} - f \|_1 &\le& \sum_{\ell=1}^t \int_{J_{\ell}} | \tilde{f}(y) - f(y)| dy + f([x_v, 1]) \\
&\le& \sum_{i=1}^v \int_{y=x_{i-1}}^{x_i} | \tilde{f}(y) - f(y)| dy + \varepsilon \\
&\le& \sum_{i=1}^v O(\varepsilon) f(x_{i-1}) |I_{i}| + \varepsilon \\
&=& O(\varepsilon)
\end{eqnarray*}
where the second inequality used Claim~\ref{claim:tail-pl}, the third inequality used Claim~\ref{claim:pl} together with the monotonicity of $f$ and $\tilde f$ on each $I_i$, and the final equality used the fact that
\[ \sum_{i=1}^v f(x_{i-1}) |I_{i}| \le {\frac 1 {1-\varepsilon}}\sum_{i=1}^v f(x_i) |I_{i}| \le {\frac 1 {1-\varepsilon}}\sum_{i=1}^v f(I_{i}) \le 1/(1-\varepsilon), \]
which follows by the definition of the $f(x_i)$'s.

We are now ready to give the proof of the claim.

\noindent
\begin{proof}[Proof of Claim~\ref{claim:pl}]
If $\tilde{f}$ is the approximating line between $x_i$ and $x_j$ we can write
\[ \tilde{f}(x_m) = f(x_i) + (f(x_j) - f(x_i)) \cdot \frac{\mathop{\textstyle \sum}_{k=i+1}^m |I_k| }{\mathop{\textstyle \sum}_{k=i+1}^j |I_k| }. \]
Note that $f(x_j) - f(x_i) = f(x_i) \left( (1-\varepsilon)^{j-i}-1 \right)$.
We also recall that
\[ (1-\varepsilon)^{j-i} = 1- \varepsilon(j-i) + \varepsilon^2 (j-i)^2/2 + O( \varepsilon^3 (j-i)^3).\]
Since $i, j$ are in the same super-interval, we have that $j-i \le 1/\sqrt{\varepsilon}$, which implies that the above error term is $O(\varepsilon^{3/2})$. We will use this approximation henceforth; it is also valid for any $m \in [i,j]$. Also, by condition (2) defining the lengths of the intervals in the same super-interval and the monotonicity of the lengths themselves, we obtain
\[ \frac{m-i}{j-i} \cdot (1-\sqrt{\varepsilon}) \le \frac{\mathop{\textstyle \sum}_{k=i+1}^m |I_k| }{\mathop{\textstyle \sum}_{k=i+1}^j |I_k| } \le \frac{m-i}{j-i} \cdot \frac{1}{1-\sqrt{\varepsilon}}. \]
By carefully combining the above inequalities we obtain the desired result. In particular, we have that
\[ \tilde{f}(x_m) \le f(x_i) \left[ 1- \varepsilon \big(1+O(\sqrt{\varepsilon})\big)(m-i) +(\varepsilon^2/2)(j-i)(m-i)\big(1+O(\sqrt{\varepsilon})\big) +O(\varepsilon^{3/2}) \right]. \]
Also
\[ f(x_m) = f(x_i) \left[ 1-\varepsilon(m-i) +(\varepsilon^2/2)(m-i)^2 + O(\varepsilon^{3/2}) \right]. \]
Therefore, using the fact that $j-i,m-i \leq 1/\sqrt{\varepsilon}$, we get that
\[ \tilde{f}(x_m) - f(x_m) \le O(\varepsilon) f(x_i). \]
In an analogous manner we obtain that
\[ f(x_m) - \tilde{f}(x_m) \le O(\varepsilon) f(x_i). \]
By the definition of a super-interval, the maximum and minimum values of $f$ within the super-interval are within a $1+o(1)$ factor of each other; in particular $f(x_i) = O(f(x_m))$, and the claimed bound follows. This completes the proof of Claim~\ref{claim:pl}.
\end{proof}
\end{enumerate}
This completes the proof of Lemma~\ref{lem:lc-struct}. \qed

\subsubsection{$k$-monotone Densities.}
Let $I = [a, b] \subseteq \mathbb{R}$ be a (not necessarily finite) interval. A function $f: I \to \mathbb{R}_{+}$ is said to be \emph{$1$-monotone} if it is non-increasing. It is \emph{$2$-monotone} if it is non-increasing and convex, and \emph{$k$-monotone} for $k \ge 3$ if $(-1)^j f^{(j)}$ is non-negative, non-increasing and convex for $j=0, \ldots, k-2.$

The problem of density estimation for $k$-monotone densities has been extensively investigated in the mathematical statistics community during the past few years (see~\cite{BW07aos, GW09sc, BW10sn, S10ams} and references therein) due to its significance in both theory and applications~\cite{BW07aos}. For example, as pointed out in~\cite{BW07aos}, the problem of learning an unknown $k$-monotone density arises in a generalization of Hampel's bird-watching problem~\cite{Hampel87}. The aforementioned papers from the statistics community focus on analyzing the rate of convergence of the Maximum Likelihood Estimator (MLE) under various metrics.

In this section we show that our approach yields an efficient algorithm to learn bounded $k$-monotone densities over $[0, 1]$ (i.e., $k$-monotone densities $p$ such that $\sup_{x \in [0,1]} p(x) = O(1)$), and mixtures thereof, with sample complexity $\tilde{O}(k/\varepsilon^{2+1/k})$. This bound is provably optimal (up to log factors) for $k=1$ by \cite{Birge:87} and for $k=2$ (see e.g. Chapter 15 of \cite{DL:01}), and we conjecture that it is similarly tight for all values of $k$.
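For a concrete illustration of the definition (this example is ours and is not needed for the results below), consider the density $f(x) = k(1-x)^{k-1}$ on $[0,1]$: for $j = 0, \ldots, k-2$ one computes
\[
(-1)^{j} f^{(j)}(x) \;=\; \frac{k!}{(k-1-j)!}\,(1-x)^{k-1-j},
\]
which is non-negative, non-increasing, and convex on $[0,1]$, so $f$ is a $k$-monotone density (bounded by $\sup_x f(x) = k$).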
Our main algorithmic result for $k$-monotone densities is the following:

\begin{theorem} \label{thm:kmon}
Let $k \in \mathbb{Z}_{+}$ and $f:[0,1] \to \mathbb{R}_{+}$ be a $t$-mixture of bounded $k$-monotone densities. There is an algorithm that runs in $\mathrm{poly}(k, t, 1/\varepsilon)$ time, uses $\tilde{O}(tk/\varepsilon^{2+1/k})$ samples, and outputs a hypothesis distribution $h$ such that $d_{\mathrm TV}(h, f) \le \varepsilon$.
\end{theorem}

\noindent The above theorem follows as a corollary of Theorem~\ref{thm:main2} and the following structural result:

\begin{lemma}[{Implicit in }\cite{KonL04, KonL07}] \label{lem:kmon-struct}
Let $f:[0,1] \to \mathbb{R}_{+}$ be a $k$-monotone density such that $\sup_{x} |f(x)| = O(1)$. There exists an $(\varepsilon, t)$-piecewise degree-$(k-1)$ approximation of $f$ with $t = O((1/\varepsilon)^{1/k})$.
\end{lemma}

\noindent As we now explain, the above lemma can be deduced from recent work in approximation theory~\cite{KonL04, KonL07}. To state the relevant theorem we need some terminology: Let $s \in \mathbb{Z}_+$, and for a real function $f$ over interval $I$, let $\Delta^s_{\tau}f(t) = \mathop{\textstyle \sum}_{i=0}^s (-1)^{s-i}\binom{s}{i}f(t+i\tau)$ be the $s$th difference of the function $f$ with step $\tau>0$, where $[t, t+s\tau] \subseteq I.$ For $r \in \mathbb{Z}_{+}^{\ast}$, let $W^r_1 (I)$ be the set of real functions $f$ over $I$ that are absolutely continuous in every compact subinterval of $I$ and satisfy $\|f^{(r)}\|_1 = O(1).$ We denote by $\Delta^s_+ W^r_1 (I)$ the subset of functions $f$ in $W^r_1(I)$ that satisfy $\Delta^s_{\tau}f(t) \ge 0$ for all $\tau>0$ such that $[t, t+s\tau] \subseteq I.$ (Note that if $f$ is $s$-times differentiable the latter condition is tantamount to saying that $f^{(s)} \ge 0$.) We have the following:

\begin{theorem}[Theorem 1 in~\cite{KonL07}] \label{thm:kl}
Let $s \in \mathbb{Z}_{+}$, $r, \nu, n \in \mathbb{Z}_{+}^{\ast}$ such that $\nu \ge \max\{ r,s \}$. For any $f \in \Delta^s_+ W^r_1(I) $ there exists a piecewise degree-$(\nu-1)$ polynomial approximation $h$ to $f$ with $n$ pieces such that $\|h-f\|_1 = O(n^{-{\max\{r, s\}}}).$
\end{theorem}

\noindent (In fact, it is shown in~\cite{KonL07} that the above bound is quantitatively optimal up to constant factors.)
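As a quick numerical sanity check of the $s$th-difference condition (our own illustration, not part of the argument), one can verify the identity $\Delta^s_{\tau} e^{-t} = e^{-t}(e^{-\tau}-1)^s$ for the completely monotone density $f(x) = e^{-x}/(1-e^{-1})$ on $[0,1]$, whose $s$th differences therefore have sign $(-1)^s$:
\begin{verbatim}
from math import comb, exp

def sth_difference(f, t, tau, s):
    # Delta^s_tau f(t) = sum_{i=0}^{s} (-1)^(s-i) C(s,i) f(t + i*tau)
    return sum((-1) ** (s - i) * comb(s, i) * f(t + i * tau)
               for i in range(s + 1))

f = lambda x: exp(-x) / (1.0 - exp(-1.0))   # k-monotone for every k

for s in range(1, 6):
    d = sth_difference(f, t=0.1, tau=0.05, s=s)
    assert (d >= 0) == (s % 2 == 0)          # sign is (-1)^s
\end{verbatim}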
Let $f: [0, 1] \to \mathbb{R}_{+}$ be a $k$-monotone density such that $\sup |f| = O(1).$ It is easy to see that Lemma~\ref{lem:kmon-struct} follows from Theorem~\ref{thm:kl} for the following setting of parameters: $s=k$, $r=1$ and $\nu = \max \{r, s\} = k.$ Indeed, since $(-1)^{k-2} f^{(k-2)}$ is convex, it follows that $\Delta^k_{\tau}f(t)$ is nonnegative for even $k$ and nonpositive for odd $k$. Since $f$ is a non-increasing bounded density, it is clear that $\|f'\|_1 = |\int_{0}^1 f'(t)dt| = f(0) - f(1) = O(1).$ Hence, for even $k$ Theorem~\ref{thm:kl} is applicable to $f$ and yields Lemma~\ref{lem:kmon-struct}. For odd $k$, Lemma~\ref{lem:kmon-struct} follows by applying Theorem~\ref{thm:kl} to the function $-f$.

\subsubsection{Mixtures of Univariate Gaussians.} \label{sec:mix-gauss}
As a final example illustrating the power and generality of Theorem~\ref{thm:main2}, we now show how it very easily yields a computationally efficient algorithm, with essentially optimal (up to logarithmic factors) sample complexity, for learning mixtures of $k$ univariate Gaussians. As will be evident from the proof, similar results could be obtained via our techniques for a wide range of mixture distribution learning problems for different types of parametric univariate distributions beyond Gaussians.

\begin{lemma} \label{lem:mixGauss}
Let $p=N(\mu,\sigma^2)$ be a univariate Gaussian. Then $p$ is an $(\varepsilon,3)$-piecewise degree-$d$ distribution for $d = O(\log(1/\varepsilon)).$
\end{lemma}

Since Theorem~\ref{thm:main-detail} is easily seen to extend to semi-agnostic learning of $k$-mixtures of $t$-piecewise degree-$d$ distributions, Lemma~\ref{lem:mixGauss} immediately gives the following semi-agnostic learning result for mixtures of $k$ one-dimensional Gaussians:
\begin{theorem} \label{cor:mixGauss}
Let $p$ be any distribution that has $d_{\mathrm TV}(p,q) \leq \varepsilon$ for some one-dimensional mixture $q$ of $k$ Gaussians. There is a $\mathrm{poly}(k,1/\varepsilon)$-time algorithm that uses $\tilde{O}(k/\varepsilon^2)$ samples and with high probability outputs a hypothesis $h$ such that $d_{\mathrm TV}(h,p) \leq O(\varepsilon).$
\end{theorem}

\noindent It is straightforward to show that $\Omega(k/\varepsilon^2)$ samples are information-theoretically necessary for learning a mixture of $k$ Gaussians, and thus our sample complexity is optimal up to logarithmic factors.

\medskip
\noindent {\bf Discussion.} Moitra and Valiant \cite{MoitraValiant:10} recently gave an algorithm for \emph{parameter estimation} (a stronger requirement than the density estimation guarantees that we provide) of any mixture of $k$ \emph{$n$-dimensional} Gaussians. Their algorithm has sample complexity that is exponential in $k$, and indeed they prove that any algorithm that does parameter estimation even for a mixture of $k$ one-dimensional Gaussians must use $2^{\Omega(k)}$ samples. In contrast, our result shows that it is possible to perform \emph{density estimation} for any mixture of $k$ one-dimensional Gaussians with a computationally efficient algorithm that uses \emph{exponentially fewer} (linear in $k$) samples than are required for parameter estimation. Moreover, unlike the parameter estimation results of \cite{MoitraValiant:10}, our density estimation algorithm is semi-agnostic: it succeeds even if the target distribution is $\varepsilon$-far from a mixture of Gaussians.

\medskip
\noindent {\bf Proof of Lemma~\ref{lem:mixGauss}:} Without loss of generality we may take $p$ to be the standard Gaussian $N(0,1),$ which has pdf $p(x) = {\frac 1 {\sqrt{2 \pi}}} e^{-x^2/2}.$ Let $I_1 = (-\infty,-C \sqrt{\log(1/\varepsilon)}),$ $I_2 = [-C \sqrt{\log(1/\varepsilon)},C\sqrt{\log(1/\varepsilon)})$ and $I_3 = [C \sqrt{\log(1/\varepsilon)},\infty)$, where $C>0$ is an absolute constant. We define the distribution $q$ as follows: $q(x)=0$ for all $x \in I_1 \cup I_3$, and $q(x)$ is given by the degree-$d$ Taylor expansion of $p(x)$ about 0 for $x \in I_2$, where $d=O(\log(1/\varepsilon)).$ Clearly $q$ is a 3-piecewise degree-$d$ polynomial. To see that $d_{\mathrm TV}(p,q) \leq \varepsilon$, we first observe that by a standard Gaussian tail bound the regions $I_1$ and $I_3$ contribute at most $\varepsilon/2$ to $d_{\mathrm TV}(p,q)$, so it suffices to argue that
\begin{equation} \label{eq:int}
\int_{I_2} |p(x)-q(x)| dx \leq \varepsilon/2.
\end{equation}
Fix any $x \in I_2$.
Taylor's theorem gives that $|p(x)-q(x)| \leq |p^{(d+1)}(x')|\,|x|^{d+1}/(d+1)!$ for some $x'$ between $0$ and $x.$ Recalling that the $(d+1)$-st derivative $p^{(d+1)}(x')$ of the pdf of the standard Gaussian equals $(-1)^{d+1}H_{d+1}(x')p(x')$, where $H_{d+1}$ is the Hermite polynomial of degree $d+1$, standard bounds on the Hermite polynomials together with the fact that $|x| \leq C \sqrt{\log(1/\varepsilon)}$ give that for $d = O(\log {\frac 1 \varepsilon})$ we have $|p(x) - q(x)| \leq \varepsilon^2$ for all $x \in I_2$. Since $|I_2| = O(\sqrt{\log(1/\varepsilon)})$, this establishes \eqref{eq:int} and gives the lemma. \qed

\subsection{Learning discrete distributions.} \label{sec:discrete}
For convenience in this subsection we consider discrete distributions over the $2N$-point finite domain
\[ D := \left\{ - {\frac N N}, - {\frac {N-1} N}, \dots, - {\frac 1 N}, 0, {\frac 1 N}, \dots, {\frac {N-1} N}\right\}. \]
We say that a discrete distribution $q$ over domain $D$ is \emph{$t$-flat} if there exists a partition of $D$ into $t$ intervals $I_1,\dots,I_t$ such that $q(i)=q(j)$ for all $i,j \in I_\ell$ for all $\ell=1,\dots,t.$ We say that a distribution $p$ over $D$ is \emph{$(\varepsilon,t)$-flat} if $d_{\mathrm TV}(p,q) \leq \varepsilon$ for some distribution $q$ over $D$ that is $t$-flat.

We begin by giving a simple reduction from learning $(\varepsilon,t)$-flat distributions over $D$ to learning $(\varepsilon,t)$-piecewise degree-0 distributions over $[-1,1).$ Together with Theorem~\ref{thm:main-detail} this reduction gives us an essentially optimal algorithm for learning discrete $(\varepsilon,t)$-flat distributions (see Theorem~\ref{thm:opt-discrete}). We then apply Theorem~\ref{thm:opt-discrete} to obtain highly efficient algorithms (in most cases with provably near-optimal sample complexity) for various specific classes of discrete distributions, essentially resolving a number of open problems from previous works.

\subsubsection{A reduction from discrete to continuous.} \label{sec:disc-cont-red}
Given a discrete distribution $p$ over $D$, we define $\tilde{p}$ to be the distribution over $[-1,1)$ defined as follows: a draw from $\tilde{p}$ is obtained by drawing a value $i/N$ from $p$ and then outputting $(i + x)/N$, where $x$ is distributed uniformly over $[0,1).$ It is easy to see that if distribution $p$ (over domain $D$) is $t$-flat, then the distribution $\tilde{p}$ (over domain $[-1,1)$) is $t$-piecewise degree-0. Moreover, if $p$ is $\tau$-close to some $t$-flat distribution $q$ over $D$, then $\tilde{p}$ is $\tau$-close to $\tilde{q}$.

In the opposite direction, for $p$ a distribution over $[-1,1)$ we define $p^\ast$ to be the following distribution supported on $D$: a draw from $p^\ast$ is obtained by sampling $x$ from $p$ and then outputting the value obtained by rounding $x$ down to the next integer multiple of $1/N.$ Since $p^\ast$ and $q^\ast$ are obtained from $p$ and $q$ by the same deterministic coarsening, for any distributions $p,q$ over $[-1,1)$ we have $d_{\mathrm TV}(p^\ast,q^\ast) \leq d_{\mathrm TV}(p,q).$ It is also clear that for $p$ a distribution over $D$ we have $(\tilde{p})^\ast = p.$ (A minimal code sketch of the two maps $p \mapsto \tilde p$ and $p \mapsto p^\ast$ is given in Section~\ref{sec:disc-ap} below.)

With these relationships in hand, we may learn a $(\tau,t)$-flat distribution $p$ over $D$ as follows: run Algorithm {\tt Learn-Piecewise-Poly}$(t,d=0,\varepsilon)$ on the distribution $\tilde{p}$. Since $p$ is $(\tau,t)$-flat, $\tilde{p}$ is $\tau$-close to some $t$-piecewise degree-0 distribution $q$ over $[-1,1)$, so the algorithm with high probability constructs a hypothesis $h$ over $[-1,1)$ such that $d_{\mathrm TV}(h,\tilde{p}) \leq O(\tau + \varepsilon)$.
The final hypothesis is $h^\ast$; for this hypothesis we have
\[ d_{\mathrm TV}(h^\ast,p) = d_{\mathrm TV}(h^\ast,(\tilde{p})^\ast) \leq d_{\mathrm TV}(h,\tilde{p}) \leq O(\tau + \varepsilon) \]
as desired. The above discussion and Theorem~\ref{thm:main-detail} together give the following:

\begin{theorem} \label{thm:opt-discrete}
Let $p$ be a mixture of $k$ $(\tau,t)$-flat discrete distributions over $D$. There is an algorithm which uses $\tilde{O}(kt/\varepsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(k, t, 1/\varepsilon)$, and with probability at least $9/10$ outputs a hypothesis distribution $h$ over $D$ such that $d_{\mathrm TV}(p,h) \leq O(\varepsilon + \tau).$
\end{theorem}

We note that this is essentially a stronger version of Corollary~3.1 (the main technical result) of \cite{CDSS13soda}, which gave a similar guarantee but with an algorithm that required $O(kt/\varepsilon^3)$ samples. We also remark that $\Omega(kt/\varepsilon^2)$ samples are information-theoretically required to learn an arbitrary $k$-mixture of $t$-flat distributions. Hence, our sample complexity is optimal up to logarithmic factors (even for the case $\tau = 0$). We would also like to mention the relation of the above theorem to a recent work by Indyk, Levi and Rubinfeld~\cite{ILR12}. Motivated by a database application, \cite{ILR12} consider the problem of learning a $k$-flat distribution over $[n]$ {\em under the $L_2$ norm} and give an efficient algorithm that uses $O(k^2 \log (n) / \varepsilon^4)$ samples. Since the total variation distance is a stronger metric, Theorem~\ref{thm:opt-discrete} immediately implies an improved sample bound of $\tilde{O}(k/\varepsilon^2)$ for their problem.

\subsubsection{Learning specific classes of discrete distributions.} \label{sec:disc-ap}

\medskip
\noindent {\bf Mixtures of $t$-modal discrete distributions.} Recall that a distribution over an interval $I = [a,b] \cap D$ is said to be \emph{unimodal} if there is a value $y \in I$ such that its pdf is monotone non-decreasing on $I \cap [-1,y]$ and monotone non-increasing on $I \cap (y,1)$. For $t>1$, a distribution $p$ over $D$ is $t$-modal if there is a partition of $D$ into $t$ intervals $I_1,\dots,I_t$ such that the conditional distributions $p_{I_1},\dots,p_{I_t}$ are each unimodal. In \cite{CDSS13soda,DDSVV13soda} (building on \cite{Birge:87b}) it is shown that every $t$-modal distribution over $D$ is $(\varepsilon,t \log(N)/\varepsilon)$-flat. By using this fact together with Theorem~\ref{thm:opt-discrete} in place of Corollary~3.1 of \cite{CDSS13soda}, we improve the sample complexity of the \cite{CDSS13soda} algorithm for learning mixtures of $t$-modal distributions and obtain the following:

\begin{theorem} \label{thm:mix-tmodal}
For any $t \geq 1$, let $p$ be any $k$-mixture of $t$-modal distributions over $D$. There is an algorithm that runs in time $\mathrm{poly}(k,t,\log N, 1/\varepsilon)$, draws $\tilde{O}(kt \log(N)/\varepsilon^3)$ samples from $p$, and with probability at least $9/10$ outputs a hypothesis distribution $h$ such that $d_{\mathrm TV}(p,h) \leq \varepsilon$.
\end{theorem}

We note that an easy adaptation of Birg\'{e}'s lower bound \cite{Birge:87} for learning monotone distributions (see the discussion at the end of Section~5 of \cite{CDSS13soda}) gives that any algorithm for learning a $k$-mixture of $t$-modal distributions over $D$ must use $\Omega(k t \log(N/(kt))/\varepsilon^3)$ samples, and hence the sample complexity bound of Theorem~\ref{thm:mix-tmodal} is optimal up to logarithmic factors. We further note that even the $t=1$ case of this result compares favorably with the main result of \cite{DDS:12kmodallearn}, which gave an algorithm for learning $t$-modal distributions over $D$ that uses $O(t \log(N)/\varepsilon^3) + \tilde{O}(t^3/\varepsilon^3)$ samples. The \cite{DDS:12kmodallearn} result gave an optimal bound only for small settings of $t$, specifically $t = \tilde{O}((\log N)^{1/3})$, and gave a much weaker bound as $t$ grows large; for example, at $t=(\log N)^2$ the optimal bound would be $O((\log N)^3/\varepsilon^3)$ but the \cite{DDS:12kmodallearn} result only gives $\tilde{O}((\log N)^6/\varepsilon^3).$ In contrast, our new result gives an essentially optimal bound (up to log factors in the optimal sample complexity) for \emph{all} settings of $t$.

\medskip
\noindent {\bf Mixtures of monotone hazard rate distributions.} Let $p$ be a distribution supported on $D$. The \emph{hazard rate} of $p$ is the function $H(i) \stackrel{{\mathrm {\footnotesize def}}}{=} {\frac {p(i)}{\mathop{\textstyle \sum}_{j \geq i} p(j)}}$; if $\mathop{\textstyle \sum}_{j \geq i} p(j) = 0$ then we say $H(i) = +\infty.$ We say that $p$ has \emph{monotone hazard rate} (MHR) if $H(i)$ is a non-decreasing function over $D.$ \cite{CDSS13soda} showed that every MHR distribution over $D$ is $(\varepsilon,O(\log(N/\varepsilon)/\varepsilon))$-flat. Theorem~\ref{thm:opt-discrete} thus gives us the following:

\begin{theorem} \label{thm:MHR}
Let $p$ be any $k$-mixture of MHR distributions over $D$. There is an algorithm that runs in time $\mathrm{poly}(k,\log N, 1/\varepsilon)$, draws $\tilde{O}(k \log(N)/\varepsilon^3)$ samples from $p$, and with probability at least $9/10$ outputs a hypothesis distribution $h$ such that $d_{\mathrm TV}(p,h) \leq \varepsilon$.
\end{theorem}

In \cite{CDSS13soda} it is shown that any algorithm to learn $k$-mixtures of MHR distributions over $D$ must use $\Omega(k \log(N/k)/\varepsilon^3)$ samples, so Theorem~\ref{thm:MHR} is essentially optimal in its sample complexity.

\medskip
\noindent {\bf Mixtures of discrete log-concave distributions.} A probability distribution $p$ over $D$ is said to be \emph{log-concave} if it satisfies the following conditions: (i) if $u < v < w \in D$ are such that $p(u)p(w)>0$ then $p(v) > 0$; and (ii) $p(k/N)^2 \geq p((k-1)/N)p((k+1)/N)$ for all $k \in \{-N+1,\dots,-1,0,1,\dots,N-2\}.$ In \cite{CDSS13soda} it is shown that every log-concave distribution over $D$ is $(\varepsilon,O(\log(1/\varepsilon)/\varepsilon))$-flat. Hence Theorem~\ref{thm:opt-discrete} gives:

\begin{theorem} \label{thm:logconcave}
Let $p$ be any $k$-mixture of log-concave distributions over $D$. There is an algorithm that runs in time $\mathrm{poly}(k,1/\varepsilon)$, draws $\tilde{O}(k /\varepsilon^3)$ samples from $p$, and with probability at least $9/10$ outputs a hypothesis distribution $h$ such that $d_{\mathrm TV}(p,h) \leq \varepsilon$.
\end{theorem}

As in the previous examples, this improves the \cite{CDSS13soda} sample complexity by essentially a factor of $1/\varepsilon$.
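To make the reduction of Section~\ref{sec:disc-cont-red} concrete, here is a minimal Python sketch of the two maps $p \mapsto \tilde p$ and $p \mapsto p^\ast$ used above (the sampler interface and function names are our own illustrative choices):
\begin{verbatim}
import math
import random

def smear(draw_discrete, N):
    # p -> p~: given a sampler for p over D = {i/N : -N <= i < N},
    # spread each atom i/N uniformly over the cell [i/N, (i+1)/N).
    def draw():
        v = draw_discrete()              # a value i/N drawn from p
        return v + random.random() / N   # output (i + x)/N with x ~ U[0,1)
    return draw

def round_down(x, N):
    # p -> p*: round x in [-1,1) down to the next multiple of 1/N.
    return math.floor(x * N) / N
\end{verbatim}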
We note that as a special case of Theorem~\ref{thm:logconcave} we get an efficient $\tilde{O}(k/\varepsilon^3)$-sample algorithm for learning any mixture of $k$ \emph{Poisson Binomial Distributions}. (A Poisson Binomial Distribution, or PBD, is the distribution of a random variable of the form $X_1 + \cdots + X_N$ where the $X_i$'s are independent 0/1 random variables that may have arbitrary and non-identical means.) The main result of \cite{DDS12stoc} gave an efficient $\tilde{O}(1/\varepsilon^3)$-sample algorithm for learning a single PBD; here we achieve the same sample complexity, with an efficient algorithm, for learning any mixture of any constant number of PBDs.

\bigskip
\noindent {\bf Acknowledgements.} We would like to thank Dany Leviatan for useful correspondence regarding his recent works~\cite{KonL04, KonL07}.

\bibliographystyle{alpha}
{ "timestamp": "2013-05-15T02:02:25", "yymm": "1305", "arxiv_id": "1305.3207", "language": "en", "url": "https://arxiv.org/abs/1305.3207", "abstract": "We give a highly efficient \"semi-agnostic\" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let $p$ be an arbitrary distribution over an interval $I$ which is $\\tau$-close (in total variation distance) to an unknown probability distribution $q$ that is defined by an unknown partition of $I$ into $t$ intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of the intervals. We give an algorithm that draws $\\tilde{O}(t\\new{(d+1)}/\\eps^2)$ samples from $p$, runs in time $\\poly(t,d,1/\\eps)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\\tau)+\\eps)$-close (in total variation distance) to $p$. This sample complexity is essentially optimal; we show that even for $\\tau=0$, any algorithm that learns an unknown $t$-piecewise degree-$d$ probability distribution over $I$ to accuracy $\\eps$ must use $\\Omega({\\frac {t(d+1)} {\\poly(1 + \\log(d+1))}} \\cdot {\\frac 1 {\\eps^2}})$ samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming.We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of $t$-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general technique yields computationally efficient algorithms for all these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.", "subjects": "Machine Learning (cs.LG); Data Structures and Algorithms (cs.DS); Machine Learning (stat.ML)", "title": "Efficient Density Estimation via Piecewise Polynomial Approximation", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692359451418, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.7079585045978725 }
https://arxiv.org/abs/1611.05561
A Fast and Provable Method for Estimating Clique Counts Using Turán's Theorem
Clique counts reveal important properties about the structure of massive graphs, especially social networks. The simple setting of just 3-cliques (triangles) has received much attention from the research community. For larger cliques (even, say, 6-cliques) the problem quickly becomes intractable because of combinatorial explosion. Most methods used for triangle counting do not scale for large cliques, and existing algorithms require massive parallelism to be feasible. We present a new randomized algorithm that provably approximates the number of k-cliques, for any constant k. The key insight is the use of (strengthenings of) the classic Turán's theorem: this claims that if the edge density of a graph is sufficiently high, the k-clique density must be non-trivial. We define a combinatorial structure called a Turán shadow, the construction of which leads to fast algorithms for clique counting. We design a practical heuristic, called TURÁN-SHADOW, based on this theoretical algorithm, and test it on a large class of test graphs. In all cases, TURÁN-SHADOW has less than 2% error, in a fraction of the time used by well-tuned exact algorithms. We do detailed comparisons with a range of other sampling algorithms, and find that TURÁN-SHADOW is generally much faster and more accurate. For example, TURÁN-SHADOW estimates all clique counts up to size 10 in a social network with over a hundred million edges. This is done in less than three hours on a single commodity machine.
\section{Introduction}\label{sec:intro} Pattern counting is an important graph analysis tool in many domains: anomaly detection, social network analysis, bioinformatics among others~\cite{HoLe70,Milo2002,Burt04,PrzuljCJ04,HoBe+07,Fa10}. Many real world graphs show significantly higher counts of certain patterns than one would expect in a random graph~\cite{HoLe70,WaSt98,Milo2002}. This technique has been referred to with a variety of names: subgraph analysis, motif counting, graphlet analysis, etc. But the fundamental task is to count the occurrence of a small pattern graph in a large input graph. In all such applications, it is essential to have fast algorithms for pattern counting. It is well-known that certain patterns capture specific semantic relationships, and thus the social dynamics are reflected in these graph structures. The most famous such pattern is the \emph{triangle}, which consists of three vertices connected to each other. Triangle counting has a rich history in the social sciences and network science~\cite{HoLe70,WaSt98,Burt04,FoDeCo10}. We focus on the more general problem of \emph{clique counting}. A \emph{$k$-clique} is a set of $k$ vertices that are all connected to each other; thus, a triangle is a $3$-clique. Cliques are extremely significant in social network analysis (Chap. 11 of~\cite{HR05} and Chap. 2 of~\cite{J10}). They are the archetypal example of a dense subgraph, and a number of recent results use cliques to find large, dense subregions of a network~\cite{SaSePi14,Ts15,MiPaPe+15,TsPaMi16}. \subsection{Problem Statement} \label{sec:problem} Given an undirected graph $G = (V,E)$, a $k$-clique is a set $S$ of $k$ vertices in $V$ with all pairs in $S$ connected by an edge. The problem is to count the number of $k$-cliques, for varying values of $k$. Our aim is to get all clique counts for $k \leq 10$. The primary challenge is \emph{combinatorial explosion}. An autonomous system network with ten million edges has more than a \emph{trillion} $10$-cliques. Any enumeration procedure is doomed to failure. Under complexity theoretical assumptions, clique counting is believed to be exponential in the size $k$~\cite{CHKX04}, and we cannot hope to get a good worst-case algorithm. Our aim is to employ \emph{randomized sampling} methods for clique counting, which have seen some success in counting triangles and small patterns~\cite{TsKaMiFa09,SePiKo13,JhSePi15}. We stress that we make no distributional assumption on the graph. All probabilities are over the internal randomness of the algorithm itself (which is independent of the instance). \subsection{Main contributions} \label{sec:contri} Our main theoretical result is a randomized algorithm {\sc Tur\'{a}n-shadow}{} that approximates the $k$-clique count, for any constant $k$. We implement this algorithm \emph{on a commodity machine} and get $k$-clique counts (for all $k \leq 10$) on a variety of data sets, the largest of which has 100M edges. The main features of our work follow. \begin{asparaitem} \item [\textbf{Extremal combinatorics meets sampling.}] Our novelty is in the algorithmic use of classic extremal combinatorics results on clique densities. Seminal results of Tur\'{a}n~\cite{TURAN41} and Erd\H{o}s~\cite{E69} provide bounds on the number of cliques in a sufficiently dense graph. {\sc Tur\'{a}n-shadow}{} tries to cover $G$ by a carefully chosen collection of dense subgraphs that contains all cliques, called a \emph{Tur\'{a}n-shadow}. 
It then uses standard techniques to design an unbiased estimator for the clique count. Crucially, the result of Erd\H{o}s~\cite{E69} (a quantitative version of Tur\'{a}n's theorem) is used to bound the variance of the estimator. We provide a detailed theoretical analysis of {\sc Tur\'{a}n-shadow}, proving correctness and analyzing its time complexity. The running time of our algorithm is bounded by the time to construct the Tur\'{a}n-shadow, which as we shall see, is quite feasible in all the experiments we run. \item[\textbf{Extremely fast.}] In the worst case, we cannot expect the Tur\'{a}n-shadow to be small, as that would imply new theoretical bounds for clique counting. But in practice on a wide variety of real graphs, we observe it to be much smaller than the worst-case bound. Thus, {\sc Tur\'{a}n-shadow}{} can be made into a \emph{practical} algorithm, which also has provable bounds. We implement {\sc Tur\'{a}n-shadow}{} and run it on a commodity machine. \Fig{timings} shows the time required for {\sc Tur\'{a}n-shadow}{} to obtain estimates for $k=7$ and $k=10$ in seconds. The {\tt as-skitter} graph is processed in less than 3 minutes, despite there being billions of $7$-cliques and trillions of $10$-cliques. All graphs are processed in minutes, except for an Orkut social network with more than 100M edges ({\sc Tur\'{a}n-shadow}{} handles this graph within 2.5 hours). \emph{To the best of our knowledge, there is no existing work that gets comparable results.} An algorithm of Finocchi {\em et al.} also computes clique counts, but employs MapReduce on the same datasets~\cite{FFF15}. We only require a single machine to get a good approximation. We tested {\sc Tur\'{a}n-shadow}{} against a number of state of the art algorithmic techniques (color coding~\cite{AlYuZw94}, edge sampling~\cite{TsKaMiFa09}, GRAFT~\cite{RaBhHa14}). For 10-clique counting, none of these algorithms terminate for all instances even in 7 hours; {\sc Tur\'{a}n-shadow}{} runs in minutes on all but one instance (where it takes less than 2.5 hours). For 7-clique counting, {\sc Tur\'{a}n-shadow}{} is typically 10-100 times faster than competing algorithms. (A notable exception is {\tt com-orkut}, where an edge sampling algorithm runs much faster than {\sc Tur\'{a}n-shadow}.) \item[\textbf{Excellent accuracy.}] {\sc Tur\'{a}n-shadow}{} has extremely small variance, and computes accurate results (in all instances we could verify). We compute exact results for $7$-clique numbers, and compare with the output of {\sc Tur\'{a}n-shadow}. In~\Fig{acc}, we see that the accuracy is well within 2\% (relative error) of the true answer for all datasets. We do detailed experiments to measure variance, and in all cases, {\sc Tur\'{a}n-shadow}{} is accurate. The efficiency and accuracy of {\sc Tur\'{a}n-shadow}{} allows us to get clique counts for a variety of graphs, and track how the counts change as $k$ increases. We seem to get two categories of graphs: those where the count increases (exponentially) with $k$, and those where it decreases with $k$, see \Fig{trends}. This provides a new lens to view social networks, and we hope {\sc Tur\'{a}n-shadow}{} can become a new tool for pattern analysis. \end{asparaitem} \subsection{Related Work} \label{sec:related} The importance of pattern counts gained attention in bioinformatics with a seminal paper of Milo {\em et al.}~\cite{Milo2002}, though it has been studied for many decades in the social sciences~\cite{HoLe70}. 
Triangle counting has an incredibly rich history, and is used in applications as diverse as spam detection~\cite{BeBoCaGi08}, graph modeling~\cite{SeKoPi11}, and role detection~\cite{Burt04}. Counting 4-cliques is mostly feasible using some recent developments in sampling and exact algorithms~\cite{JhSePi15,AhNe+15}. Clique counts are an important part of recent dense subgraph discovery algorithms~\cite{SaSePi14,Ts15}. Cliques also play an important role in understanding the dynamics of social capital~\cite{JRT12}, and their importance in the social sciences is well documented~\cite{HR05,J10}. In topological approaches to network analysis, cliques are the fundamental building blocks used to construct simplicial structures~\cite{SGB16}. From an algorithmic perspective, clique counting has received much attention from the theoretical computer science community~\cite{ChNi85,AlYuZw94,CHKX04,V09}. Maximal clique enumeration has been an important topic~\cite{A73,Tomita04,ES11} since the seminal algorithm of Bron-Kerbosch~\cite{BronKerb73}. Practical algorithms for finding the maximum clique were given by Rossi {\em et al.} using branch and bound methods~\cite{RossiG15}. Most relevant to our work is a classic algorithm of Chiba and Nishizeki~\cite{ChNi85}. This work introduces graph orientations to reduce the search time and provides a theoretical connection to graph arboricity. We also apply this technique in {\sc Tur\'{a}n-shadow}. The closest result to our work is a recent MapReduce algorithm of Finocchi {\em et al.} for clique counting~\cite{FFF15}. This result applies the orientation technique of~\cite{ChNi85}, and creates a large set of small (directed) egonets. Clique counting overall reduces to clique counting in each of these egonets, and this can be parallelized using MapReduce. We experiment on the same graphs used in~\cite{FFF15} (particularly, some of the largest ones) and get accurate results on a single, commodity machine (as opposed to using a cluster). Alternate MapReduce methods using multi-way joins have been proposed, though this is theoretical and not tested on real data~\cite{AFRATI13}. A number of randomized techniques have been proposed for pattern counting, and can be used to design algorithms for clique counting. Most prominent are color coding~\cite{AlYuZw94,HoBe+07,BetzlerBFKN11,ZhWaBu+12} and edge sampling methods~\cite{TsKaMiFa09,TsKoMi11,RaBhHa14}. (MCMC methods~\cite{BhRaRa+12} typically do not scale for graphs with millions of vertices~\cite{JhSePi15}.) We perform detailed comparisons with these methods, and conclude that they do not scale for counting larger cliques. \section{Main Ideas} \label{sec:ideas} The starting point for our result is a seminal theorem of Tur\'{a}n~\cite{TURAN41}: if the edge density of a graph is more than $1-\frac{1}{k-1}$, then it must contain a $k$-clique. (The density bound is often called the Tur\'{a}n density for $k$.) Erd\H{o}s proved a stronger version~\cite{E69}. Suppose the graph has $n$ vertices. Then in this case, it contains $\Omega(n^{k-2})$ $k$-cliques! Consider the trivial randomized algorithm to estimate the number of $k$-cliques. Simply sample a uniform random set of $k$ vertices and check if they form a clique. Denote the number of $k$-cliques by $C$; then the success probability is $C/{n\choose k}$. Thus, we can estimate this probability using ${n\choose k}/C$ samples. By Erd\H{o}s' bound, $C = \Omega(n^{k-2})$.
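For concreteness, here is a minimal sketch of this trivial estimator (the warm-up just described, not the paper's algorithm). We assume the graph is given as a dict mapping each vertex to the set of its neighbors; the function name is ours.

```python
import random
from itertools import combinations
from math import comb

def naive_kclique_estimate(adj, k, samples):
    """Sample uniform k-subsets of vertices and scale the hit rate by C(n, k).

    The estimate is unbiased, but roughly C(n, k)/C samples are needed to
    see even one success, where C is the true k-clique count."""
    nodes = list(adj)
    hits = 0
    for _ in range(samples):
        S = random.sample(nodes, k)
        if all(v in adj[u] for u, v in combinations(S, 2)):
            hits += 1
    return hits / samples * comb(len(nodes), k)
```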
Thus, if the density of a graph (with $n$ vertices) is above the Tur\'{a}n density, one can estimate the number of $k$-cliques using $O(n^2)$ samples. Of course, the input graph $G$ is unlikely to have such a high density, and $O(n^2)$ is a large bound. We try to cover all $k$-cliques in $G$ using a collection of dense subgraphs. This collection is called a \emph{Tur\'{a}n shadow}. We employ orientation techniques from Chiba-Nishizeki to recursively construct a shadow~\cite{ChNi85}. We take the degeneracy ($k$-core) ordering in $G$~\cite{SEIDMAN83}. It is well-known that outdegrees are typically small in this ordering. To count $k$-cliques in $G$, it suffices to count $(k-1)$-cliques in every outneighborhood. (This is the main idea in the MapReduce algorithms of Finocchi et al.~\cite{FFF15}.) If an outneighborhood has density higher than the Tur\'{a}n density for $(k-1)$, we add this set/induced subgraph to the Tur\'{a}n shadow. If not, we recursively employ this scheme to find denser sets. When the process terminates, we have a collection of sets (or induced subgraphs) such that each has density above the Tur\'{a}n threshold (for some appropriate $k'$ for each set). Furthermore, the sum, over these sets, of their $k'$-clique counts (each with its own $k'$) is exactly the number of $k$-cliques in $G$. Now, we can hope to use the randomized procedure to estimate the number of $k'$-cliques in each set of the Tur\'{a}n shadow. By a theorem of Chiba-Nishizeki~\cite{ChNi85}, we can argue that the number of vertices in any set of the Tur\'{a}n shadow is at most $\sqrt{2m}$ (where $m$ is the number of edges in $G$). Thus, $O(m)$ samples suffice to estimate the clique count of any single set in the Tur\'{a}n shadow. But the Tur\'{a}n shadow has many sets, and it is infeasible to spend $O(m)$ samples for each set. We employ a randomized trick. We only need to approximate the sum of clique counts over the shadow, and can use random sampling for that purpose. Working through the math, we effectively set up a distribution over the sets in the Tur\'{a}n shadow. We pick a set from this distribution, pick a random subset of its vertices, and check if they form a clique. The probability of this event can be related to the number of $k$-cliques in $G$. Furthermore, we can prove that $O(m)$ samples suffice to estimate this probability. All in all, after constructing the Tur\'{a}n shadow, $k$-clique counting can be done in $O(m)$ time. \subsection{Main theorem and significance} \label{sec:thm} The formal version of the main theorem is \Thm{main-full}. It requires a fair bit of terminology to state. So we state an informal version that maintains the spirit of our main result. This should provide the reader with a sense of what we can hope to prove. We will define the Tur\'{a}n shadow formally in later sections; for now, it refers to the construct described above. \begin{theorem} \label{thm:main} [Informal] Consider graph $G=(V,E)$ with $n$ vertices, $m$ edges, and maximum core number $\alpha$. Let $\boldsymbol{S}$ be the Tur\'{a}n $k$-clique shadow of $G$, and let $|\boldsymbol{S}|$ be the number of sets in $\boldsymbol{S}$. Given any $\delta > 0, \varepsilon > 0, k$, with probability at least $1-\delta$, the procedure {\sc Tur\'{a}n-shadow}{} outputs a $(1+\varepsilon)$-multiplicative approximation to the number of $k$-cliques in $G$. The running time is linear in $|\boldsymbol{S}|$ and $m \alpha \log(1/\delta)/\varepsilon^2$. The storage is linear in $|\boldsymbol{S}|$.
\end{theorem} Observe that the size of the shadow is critical to the procedure's efficiency. As long as the number of sets in the Tur\'{a}n shadow is small, the extra running time overhead is only linear in $m$. And in practice, we observe that the Tur\'{a}n shadow scales linearly with graph size, leading to a practically viable algorithm. \medskip \textbf{Outline:} In \Sec{prelim}, we formally describe Tur\'{a}n's theorem and set some terminology. \Sec{shadows} defines (saturated) shadows, and shows how to construct efficient sampling algorithms for clique counting from a shadow. \Sec{const} describes the recursive construction of the Tur\'{a}n shadow. In \Sec{together}, we describe the final procedure {\sc Tur\'{a}n-shadow}{}, and prove (the formal version of) \Thm{main}. Finally, in \Sec{results}, we detail our empirical study of {\sc Tur\'{a}n-shadow}{} and comparison with the state of the art. \section{Tur\'{a}n's Theorem} \label{sec:prelim} For an arbitrary graph $H = (V(H), E(H))$, let $C_i(H)$ denote the set of $i$-cliques in $H$, and let $\rho_i(H) := |C_i(H)|/{|V(H)| \choose i}$ denote the $i$-clique density. Note that $\rho_2(H)$ is the standard notion of edge density. The following theorem of Tur\'{a}n is one of the most important results in extremal graph theory. \begin{theorem} \label{thm:turan} (Tur\'an~\cite{TURAN41}) For any graph $H$, if $\rho_2(H) > 1-\frac{1}{k-1}$, then $H$ contains a $k$-clique. \end{theorem} This is tight, as evidenced by the complete $(k-1)$-partite graph $T_{n,k-1}$ (also called the Tur\'{a}n graph). In a remarkable generalization, Erd\H{o}s proved that if an $n$-vertex graph has even \emph{one more edge} than $T_{n,k-1}$, it must contain many $k$-cliques. One can think of this theorem as a quantified version of Tur\'{a}n's theorem. \begin{theorem} \label{thm:erdos} (Erd\H{o}s~\cite{E69}) For any graph $H$ over $n$ vertices, if $\rho_2(H)>1-\frac{1}{k-1}$, then $H$ contains at least $(n/(k-1))^{k-2}$ $k$-cliques. \end{theorem} It will be convenient to express this result in terms of $k$-clique densities. We introduce some notation: let $f(k) = k^{k-2}/k!$. By Stirling's approximation, $f(k)$ is well approximated by $e^k/\sqrt{2\pi k^{5}}$. Note that $f(k)$ is some fixed constant, for constant $k$. This corollary will be critical to our analysis. \begin{corollary} \label{cor:erdos} For any graph $H$ over $n$ vertices, if $\rho_2(H)>1-\frac{1}{k-1}$, then $\rho_k(H) \geq 1/(f(k)n^2)$. \end{corollary} \begin{proof} By \Thm{erdos}, $H$ has at least $(\frac{n}{k-1})^{k-2}$ $k$-cliques. Using ${n \choose k} \leq n^k/k!$ and $(k-1)^{k-2} \leq k^{k-2}$, \begin{equation*} \rho_k(H) \geq \frac{\left(\frac{n}{k-1}\right)^{k-2}}{{n \choose k}} \geq \frac{n^{k-2}}{n^k} \cdot \frac{k!}{(k-1)^{k-2}} \geq \frac{1}{f(k) n^2}. \end{equation*} \end{proof} \section{Clique shadows} \label{sec:shadows} A key concept in our algorithm is that of \emph{clique shadows}. Consider graph $G = (V,E)$. For any set $S \subseteq V$, we let $C_\ell(S)$ denote the set of $\ell$-cliques contained in $S$. \begin{definition} \label{def:shadow} A $k$-clique shadow $\boldsymbol{S}$ for graph $G$ is a multiset of tuples $\{(S_i, \ell_i)\}$ where $S_i \subseteq V$ and $\ell_i \in \mathbb{N}$ such that: there is a bijection between $C_k(G)$ and $\bigcup_{(S,\ell) \in \boldsymbol{S}} C_\ell(S)$. Furthermore, a $k$-clique shadow $\boldsymbol{S}$ is $\gamma$-saturated if $\forall (S,\ell) \in \boldsymbol{S}$, $\rho_\ell(S) \geq \gamma$. \end{definition} Intuitively, it is a collection of subgraphs, such that the sum of clique counts within them is the total clique count of $G$.
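Each set placed in a shadow will be certified dense via \Cor{erdos}; to get a feel for the quantities involved, the following throwaway sketch (function names ours) evaluates the Tur\'{a}n threshold and the guarantees of \Thm{erdos} and \Cor{erdos}.

```python
from math import factorial

def turan_threshold(k):
    # Edge density above which Turán's theorem guarantees a k-clique.
    return 1 - 1 / (k - 1)

def erdos_clique_lower_bound(n, k):
    # Minimum number of k-cliques once the edge density exceeds the threshold.
    return (n / (k - 1)) ** (k - 2)

def erdos_density_lower_bound(n, k):
    # The corresponding k-clique density bound 1/(f(k) n^2), f(k) = k^(k-2)/k!.
    return factorial(k) / (k ** (k - 2) * n * n)

# A 100-vertex graph with edge density above 0.75 has at least 15625 5-cliques.
print(turan_threshold(5), erdos_clique_lower_bound(100, 5))  # 0.75 15625.0
```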
Note that the clique size $\ell$ associated with a set $S$ in the shadow may differ from one set to another. Observe that $\{(V,k)\}$ is trivially a clique shadow. But it is highly unlikely to be saturated. It is important to define the \emph{size} of $\boldsymbol{S}$, which is really the storage required to represent it. \begin{definition} The representation size of $\boldsymbol{S}$ is denoted $\sz{\boldsymbol{S}}$, and is $\sum_{(S,\ell) \in \boldsymbol{S}} |S|$. \end{definition} \begin{algorithm} \caption{{\tt sample}$(\boldsymbol{S},\gamma,k,\varepsilon,\delta)$ \newline $\boldsymbol{S}$ is a $\gamma$-saturated $k$-clique shadow\newline $\varepsilon, \delta$ are error parameters } For each $(S,\ell) \in \boldsymbol{S}$, set $w(S) = {|S|\choose \ell}$ \; Set probability distribution $\mathcal{D}$ over $\boldsymbol{S}$ where $p(S) = w(S)/\sum_{(S,\ell) \in \boldsymbol{S}} w(S)$ \; \label{step:sample} For $r \in 1, 2, \ldots, t = \frac{20}{\gamma\varepsilon^2} \log(1/\delta)$\; \ \ \ \ Independently sample $(S,\ell)$ from $\mathcal{D}$\; \ \ \ \ Choose a uniformly random $\ell$-subset $A$ of $S$\; \ \ \ \ \label{step:X} If $A$ forms an $\ell$-clique, set indicator $X_r = 1$. Else, $X_r = 0$ \; Output $\frac{\sum_r X_r}{t} \sum_{(S,\ell) \in \boldsymbol{S}} {|S| \choose \ell}$ as estimate for $|C_k(G)|$\; \end{algorithm} When a $k$-clique shadow $\boldsymbol{S}$ is $\gamma$-saturated, each $(S,\ell) \in \boldsymbol{S}$ has many $\ell$-cliques. Thus, one can employ random sampling within each $S$ to estimate $|C_\ell(S)|$, and thereby estimate $C_k(G)$. We use a sampling trick to show that we do not need to estimate all $|C_\ell(S)|$; instead we only need $O(1/\gamma)$ samples in total. \begin{theorem} \label{thm:sample} Suppose $\boldsymbol{S}$ is a $\gamma$-saturated $k$-clique shadow for $G$. The procedure {\tt sample}$(\boldsymbol{S})$ outputs an estimate $\hat{C}$ such that $|\hat{C} - |C_k(G)|| \leq \varepsilon |C_k(G)|$ with probability $> 1-\delta$. The running time of {\tt sample}$(\boldsymbol{S})$ is $O(\sz{\boldsymbol{S}} + \frac{1}{\gamma\varepsilon^2}\log(1/\delta))$. \end{theorem} \begin{proof} We remind the reader that $w(S) = {|S|\choose \ell}$. Set $\alpha = |C_k(G)|/\sum_{S \in \boldsymbol{S}} w(S)$. Observe that \begin{eqnarray*} \Pr[X_r = 1] & = & \sum_{(S,\ell) \in \boldsymbol{S}} \Pr[\textrm{$(S,\ell)$ is chosen}] \\ & \times & \Pr[\textrm{$\ell$-clique chosen in $S$} | \textrm{$(S,\ell)$ is chosen}] \end{eqnarray*} The former probability is exactly $w(S)/\sum_{S \in \boldsymbol{S}} w(S)$, and the latter is exactly $|C_\ell(S)|/{|S| \choose \ell}$ $=|C_\ell(S)|/w(S)$. So, \begin{equation*} \Pr[X_r = 1] = \sum_{(S,\ell) \in \boldsymbol{S}} |C_\ell(S)|/\sum_{S \in \boldsymbol{S}} w(S) \end{equation*} Since $\boldsymbol{S}$ is a $k$-clique shadow, $\sum_{(S,\ell) \in \boldsymbol{S}} |C_\ell(S)| = |C_k(G)|$. Thus, $\Pr[X_r = 1] = \alpha$. By the saturation property, $\rho_\ell(S) \geq \gamma$, which is equivalent to $|C_\ell(S)| \geq \gamma w(S)$. So $\sum_{S \in \boldsymbol{S}} |C_\ell(S)| \geq \gamma \sum_{S \in \boldsymbol{S}} w(S)$. That implies that $\alpha \geq \gamma$. By linearity of expectation, $\hbox{\bf E}[\sum_{r \leq t} X_r] = \sum_{r\leq t} \hbox{\bf E}[X_r] \geq \gamma t$. Note that all the $X_r$s come from independent trials. (The graph structure plays no role, since the distribution of each $X_r$ does not change upon conditioning on the other $X_r$s.)
By a multiplicative Chernoff bound (Thm 1.1 of~\cite{DuPa09}), \begin{eqnarray*} & & \Pr[\sum_r X_r/t \leq \alpha(1-\varepsilon)] \leq \exp(-\varepsilon^2 \hbox{\bf E}[\sum_r X_r]/3) \\ & \leq & \exp(-\varepsilon^2 \gamma t/3) \leq \exp(-5\log(1/\delta)) \leq \delta/5. \end{eqnarray*} By an analogous upper tail bound, $\Pr[\sum_r X_r/t \geq \alpha(1+\varepsilon)] \leq \delta/5$. By the union bound, with probability at least $1-2\delta/5$, $\alpha(1-\varepsilon) \leq \sum_r X_r/t \leq \alpha(1+\varepsilon)$. Note that the output $\hat{C} = (\sum_r X_r/t) \sum_{S \in \boldsymbol{S}} w(S)$. We multiply the bound above on $\sum_r X_r/t$ by $\sum_{S \in \boldsymbol{S}} w(S)$, and note that $\alpha \sum_{S \in \boldsymbol{S}} w(S) = |C_k(G)|$ to complete the proof. \end{proof} We stress the significance of \Thm{sample}. Once we get a $\gamma$-saturated clique shadow $\boldsymbol{S}$, $|C_k(G)|$ can be approximated in time \emph{linear} in $\sz{\boldsymbol{S}}$. The number of samples chosen only depends on $\gamma$ and the approximation parameters, not on the graph size. But how to actually generate a saturated clique shadow? Saturation appears to be extremely difficult to enforce. This is where the theorem of Erd\H{o}s (\Thm{erdos}) saves the day. It merely suffices to make the edge density of each set in the clique shadow high enough. The $k$-clique density \emph{automatically} becomes large enough. \begin{theorem} \label{thm:sat-shadow} Consider a $k$-clique shadow $\boldsymbol{S}$ such that $\forall (S,\ell) \in \boldsymbol{S}$, $\rho_2(S) > 1-\frac{1}{\ell-1}$. Let $\gamma = 1/\max_{(S,\ell)\in \boldsymbol{S}} \left(f(\ell)|S|^2\right)$. Then, $\boldsymbol{S}$ is $\gamma$-saturated. \end{theorem} \begin{proof} By \Cor{erdos}, for every $(S,\ell) \in \boldsymbol{S}$, $\rho_\ell(S) \geq 1/(f(\ell) |S|^2)$. Taking $\gamma$ to be the minimum of these guaranteed densities over all $(S,\ell) \in \boldsymbol{S}$ completes the proof. \end{proof} \section{Constructing saturated clique shadows} \label{sec:const} We use a refinement process to construct saturated clique shadows. We start with the trivial shadow $\boldsymbol{S} = \{(V,k)\}$ and iteratively ``refine'' it until the saturation property is satisfied. By \Thm{sat-shadow}, we just have to ensure that the edge density of each set is sufficiently large. For any set $S \subset V$, let $G|_S$ be the subgraph of $G$ induced by $S$. Given an unsaturated $k$-clique shadow $\boldsymbol{S}$, we find some $(S,\ell) \in \boldsymbol{S}$ such that $\rho_2(S) \leq 1-\frac{1}{\ell-1}$. By iterating over the vertices, we replace $(S,\ell)$ by various neighborhoods in $G|_S$ to get a new shadow. We would like the edge densities of these neighborhoods to increase, in the hope of crossing the threshold given in \Thm{sat-shadow}. The key insight is to use the \emph{degeneracy ordering} to construct specific neighborhoods of high density that also yield a valid shadow. This is basically the classic graph-theoretic technique of computing core decompositions, which is widely used in large-graph analysis~\cite{SEIDMAN83, GiChMa14}. As mentioned earlier, this idea is used for fast clique counting as well~\cite{ChNi85,FFF15}. \begin{definition} \label{def:degen} For a (labeled) graph $G = (V,E)$, a \emph{degeneracy ordering} is a permutation of $V$ given as $v_1, v_2, \ldots, v_n$ such that: for each $i \leq n$, $v_i$ is the minimum-degree vertex in the subgraph induced by $v_i, v_{i+1}, \ldots, v_n$. (As defined, this ordering is not unique, but we can enforce uniqueness by breaking ties by vertex id.)
The degree of $v_i$ in $G|_{\{v_i,\ldots,v_n\}}$ is the core number of $v_i$. The largest core number is called the \emph{degeneracy} of $G$, denoted $\alpha(G)$. The \emph{degeneracy DAG} of $G$, denoted $D(G)$, is obtained by orienting edges in degeneracy order. In other words, every edge $(u,v) \in G$ is directed from lower to higher in the degeneracy ordering. \end{definition} The degeneracy ordering is given by the deletion order of the standard linear-time procedure that computes the degeneracy~\cite{MB83}. It is convenient for us to think of the degeneracy in terms of graph orientations. As defined above, any permutation on $V$ can be used to make a DAG out of $G$. We use this idea for generating saturated clique shadows. Essentially, while $G$ may be sparse, \emph{out-neighborhoods} in $G$ are typically dense. (This has been observed in numerous results on dense subgraph discovery~\cite{AnCh09,Tsourakakis13,SaSePi14}.) We now define the procedure {\tt Shadow-Finder}$(G,k)$, which works by simple, iterative refinement. Think of $\boldsymbol{T}$ as the current working set, and $\boldsymbol{S}$ as the final output. We take a set $(S,\ell)$ in $\boldsymbol{T}$, and construct all outneighborhoods in the degeneracy DAG. Any such set whose density is above the Tur\'{a}n threshold goes to $\boldsymbol{S}$ (the output); otherwise, it goes to $\boldsymbol{T}$ (back to the working set). \begin{algorithm} \caption{{\tt Shadow-Finder}$(G,k)$} Initialize $\boldsymbol{T} = \{(V,k)\}$ and $\boldsymbol{S} = \emptyset$\; While $\exists (S,\ell) \in \boldsymbol{T}$ such that $\rho_2(S) \leq 1 - \frac{1}{\ell-1}$\; \ \ \ \ Construct the degeneracy DAG $D(G|_S)$\; \ \ \ \ Let $N^+_s$ denote the outneighborhood (within $D(G|_S)$) of $s \in S$\; \ \ \ \ Delete $(S,\ell)$ from $\boldsymbol{T}$\; \ \ \ \ For each $s \in S$\; \ \ \ \ \ \ \ If $\ell \leq 2$ or $\rho_2(N^+_s) > 1 - \frac{1}{\ell-2}$\; \ \ \ \ \ \ \ \ \ \label{step:add} Add $(N^+_s,\ell-1)$ to $\boldsymbol{S}$\; \ \ \ \ \ \ \ \label{step:move} Else, add $(N^+_s,\ell-1)$ to $\boldsymbol{T}$\; Output $\boldsymbol{S}$\; \end{algorithm} It is useful to define the \emph{recursion tree} ${\cal T}$ of this process as follows. Every pair $(S,\ell)$ that is ever part of $\boldsymbol{T}$ or $\boldsymbol{S}$ is a node in ${\cal T}$. The children of $(S,\ell)$ are precisely the pairs $(N^+_s,\ell-1)$ added in \Step{add} or \Step{move}. (At that point, $(S,\ell)$ is deleted from $\boldsymbol{T}$, and all the $(N^+_s,\ell-1)$ are added.) Observe that the root of ${\cal T}$ is $(V,k)$, and the leaves are precisely the final output $\boldsymbol{S}$. \begin{theorem} \label{thm:shadow-output} The output $\boldsymbol{S}$ of {\tt Shadow-Finder}$(G,k)$ is a $\gamma$-saturated $k$-clique shadow, where $\gamma = 1/\max_{(S,\ell) \in \boldsymbol{S}} (f(\ell) |S|^2)$. \end{theorem} \begin{proof} We first prove by induction the following loop invariant for {\tt Shadow-Finder}: $\boldsymbol{T} \cup \boldsymbol{S}$ is always a $k$-clique shadow. For the base case, note that at the beginning, $\boldsymbol{T} = \{(V,k)\}$ and $\boldsymbol{S} = \emptyset$. For the induction step, assume that $\boldsymbol{T} \cup \boldsymbol{S}$ is a $k$-clique shadow at the beginning of some iteration. The element $(S,\ell)$ is deleted from $\boldsymbol{T}$. Each $(N^+_s,\ell-1)$ is added to $\boldsymbol{S}$ or to $\boldsymbol{T}$. Thus, it suffices to prove that there is a bijection between $C_\ell(S)$ and $\bigcup_{s \in S} C_{\ell-1}(N^+_s)$.
(By the induction hypothesis, we can then construct a bijection between $C_k(G)$ and the appropriate cliques in $\boldsymbol{T} \cup \boldsymbol{S}$.) Consider an $\ell$-clique $K$ in $S$. Set $s$ to be the minimum vertex according to the degeneracy ordering in $D(G|_S)$. Observe that the remaining vertices form an $(\ell-1)$-clique in $N^+_s$, to which we map $K$. This is a bijection, because every clique $K$ can be mapped to a (unique) $(\ell-1)$-clique, and furthermore, every $(\ell-1)$-clique in $\bigcup_{s \in S} C_{\ell-1}(N^+_s)$ is in the image of this mapping. Thus, when {\tt Shadow-Finder}{} terminates, $\boldsymbol{T} \cup \boldsymbol{S}$ is a $k$-clique shadow. Since $\boldsymbol{T}$ must be empty, $\boldsymbol{S}$ is a $k$-clique shadow. Furthermore, a pair $(S,\ell)$ is in $\boldsymbol{S}$ iff $\rho_2(S) > 1 - \frac{1}{\ell-1}$. By \Thm{sat-shadow}, $\boldsymbol{S}$ is $1/\max_{(S,\ell) \in \boldsymbol{S}} (f(\ell)|S|^2)$-saturated. \end{proof} We have a simple but important claim that bounds the size of any set in the shadow by the degeneracy. \begin{claim} \label{clm:degen} Consider non-root $(S,\ell) \in {\cal T}$. Then $|S| \leq \alpha(G)$. \end{claim} \begin{proof} Suppose the parent of $(S,\ell)$ is $(P,\ell+1)$. Observe that $S$ is the outneighborhood of some node $p$ in the DAG $D(G|_P)$. Thus, $|S| \leq \alpha(G|_P)$. The degeneracy can never be larger in a subgraph. (This is apparent by an alternate definition of degeneracy, the maximum smallest degree of an induced subgraph~\cite{MB83}.) Hence, $\alpha(G|_P) \leq \alpha(G)$. \end{proof} \begin{theorem} \label{thm:shadow-time} The running time of {\tt Shadow-Finder}$(G,k)$ is $O(\alpha(G)\sz{\boldsymbol{S}}+m+n)$. The total storage is $O(\sz{\boldsymbol{S}}+m+n)$. \end{theorem} \begin{proof} Every time we add $(N^+_s,\ell-1)$ to $\boldsymbol{S}$ or $\boldsymbol{T}$ (\Step{add} or \Step{move}), we explicitly construct the graph $G|_{N^+_s}$. Thus, we can guarantee that for every $(S,\ell)$ present in $\boldsymbol{T}$ or $\boldsymbol{S}$, we can make queries in the graph $G|_S$. This construction takes $O(|S|^2)$ time, to query every pair in $S$. (This is \emph{not} required when $S = V$, since $G|_V = G$.) Furthermore, this construction is done for every $(S,\ell) \in {\cal T}$, except for the root node in ${\cal T}$. Once we have $G|_S$, the degeneracy order can be computed in time linear in the number of edges in $G|_S$~\cite{MB83}. Thus, the running time can be bounded by $O(\sum_{(S,\ell) \in {\cal T}: S \neq V} |S|^2 + m + n)$. By \Clm{degen}, we can bound $\sum_{(S,\ell) \in {\cal T}: S \neq V} |S|^2 = O(\alpha(G) \sum_{(S,\ell) \in {\cal T}} |S|)$. We split the sum over leaves and non-leaves. The sum over leaves is precisely a sum over the sets in $\boldsymbol{S}$, so that yields $O(\alpha(G)\sz{\boldsymbol{S}})$. It suffices to prove that $\sum_{(S,\ell) \in {\cal T}:\ (S,\ell)\ \textrm{non-leaf}} |S| = O(\sz{\boldsymbol{S}})$, which we show next. Observe that a non-leaf node $(S,\ell)$ in ${\cal T}$ has exactly $|S|$ children, one for each vertex $s \in S$. Thus, \begin{eqnarray*} \sum_{(S,\ell) \in {\cal T}:\ (S,\ell)\ \textrm{non-leaf}} |S| & = & \sum_{(S,\ell) \in {\cal T}} \textrm{\# children of $(S,\ell)$} \\ & = & \textrm{\# edges in ${\cal T}$} \end{eqnarray*} All internal nodes in ${\cal T}$ have at least $2$ children, so the number of edges in ${\cal T}$ is at most twice the number of leaves in ${\cal T}$. But the number of leaves is exactly the number of sets in the output $\boldsymbol{S}$, which is at most $\sz{\boldsymbol{S}}$.
The total storage is $O(\sum_{(S,\ell) \in {\cal T}} |S| + m + n)$, which is $O(\sz{\boldsymbol{S}} + m + n)$ by the above arguments. \end{proof} We now formally define the Tur\'{a}n shadow to be the output of this procedure. \begin{definition} \label{def:turan-shadow} The $k$-clique Tur\'{a}n shadow of $G$ is the output of {\tt Shadow-Finder}$(G,k)$. \end{definition} \subsection{Putting it all together} \label{sec:together} \begin{algorithm} \caption{{\sc Tur\'{a}n-shadow}$(G,k,\varepsilon,\delta)$} Compute $\boldsymbol{S} = $ {\tt Shadow-Finder}$(G,k)$\; \label{step:gamma} Set $\gamma = 1/\max_{(S,\ell) \in \boldsymbol{S}} (f(\ell) |S|^2)$\; Output $\hat{C}_k = {\tt sample}(\boldsymbol{S},\gamma,k,\varepsilon,\delta)$\; \end{algorithm} \begin{theorem} \label{thm:main-full} Consider graph $G=(V,E)$ with $m$ edges, $n$ vertices, and degeneracy $\alpha(G)$. Assume $m \leq n^2/4$. Let $\boldsymbol{S}$ be the Tur\'{a}n $k$-clique shadow of $G$. With probability at least $1-\delta$ (this probability is over the randomness of {\sc Tur\'{a}n-shadow}; there is no stochastic assumption on $G$), $|\hat{C}_k - |C_k(G)|| \leq \varepsilon |C_k(G)|$. The running time of {\sc Tur\'{a}n-shadow}{} is $O(\alpha(G)\sz{\boldsymbol{S}} + f(k)m\log(1/\delta)/\varepsilon^2 + n)$ and the total storage is $O(\sz{\boldsymbol{S}} + m + n)$. \end{theorem} \begin{proof} By \Thm{shadow-output}, $\boldsymbol{S}$ is $\gamma$-saturated, for $\gamma = 1/\max_{(S,\ell) \in \boldsymbol{S}} (f(\ell)|S|^2)$. Since $m \leq n^2/4$, the procedure {\tt Shadow-Finder}$(G,k)$ cannot just output $\{(V,k)\}$, so all leaves in the recursion tree are non-root nodes; by \Clm{degen}, for all $(S,\ell) \in \boldsymbol{S}$, $|S| \leq \alpha(G)$. A classic bound on the degeneracy asserts that $\alpha(G) \leq \sqrt{2m}$ (Lemma 1 of~\cite{ChNi85}). Since $f(\ell)$ is increasing in $\ell$, $\max_{(S,\ell) \in \boldsymbol{S}} f(\ell) |S|^2 \leq 2f(k)m$. Thus, $\gamma = \Omega(1/(f(k)m))$. By \Thm{sample}, the running time of {\tt sample}{} is $O(\sz{\boldsymbol{S}} + \log(1/\delta)/(\gamma\varepsilon^2))$, which is $O(\sz{\boldsymbol{S}} + f(k)m\log(1/\delta)/\varepsilon^2)$. \Thm{sample} also asserts the accuracy of the output. Adding the bounds of \Thm{shadow-time}, we prove the running time and storage bounds. \end{proof} \subsection{The shadow size} \label{sec:shadow} The practicality of {\sc Tur\'{a}n-shadow}{} hinges on $\sz{\boldsymbol{S}}$ being small. It is not hard to prove a worst-case bound, using the degeneracy. \begin{claim} \label{clm:size} $\sz{\boldsymbol{S}} = O(n\alpha(G)^{k-2})$. \end{claim} \begin{proof} By arguments in the proof of \Thm{shadow-time}, we can show that $\sz{\boldsymbol{S}}$ is at most the number of edges in ${\cal T}$. In ${\cal T}$, the degree of the root is $n$, and by \Clm{degen}, the degree of all other nodes is at most $\alpha(G)$. The depth of the tree is at most $k-1$, since the value of $\ell$ decreases every step down the tree. That proves the $n\alpha^{k-2}$ bound. \end{proof} This bound is not that interesting, and the Chiba-Nishizeki algorithm for exact clique enumeration matches this bound~\cite{ChNi85}. Indeed, we can design instances where \Clm{size} is tight (a disjoint union of $n/\alpha$ Erd\H{o}s-R\'{e}nyi graphs $G_{\alpha,1/3}$). In any case, beating an exponential dependence on $k$ for any algorithm is unlikely~\cite{CHKX04}.
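Before the empirical study, it may help to see the whole pipeline in one place. The following is a compact Python sketch of {\tt Shadow-Finder} plus {\tt sample} (ours; the implementation evaluated in the paper is tuned {\tt C++}). It is written for clarity rather than for the bounds of \Thm{shadow-time}: densities are recomputed naively, vertex labels are assumed hashable and sortable, and a set that is already dense when examined goes straight to the output, which also covers the corner case where $(V,k)$ itself is dense.

```python
import random
from itertools import combinations
from math import comb

def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex (quadratic, fine for a sketch)."""
    deg = {v: len(adj[v]) for v in adj}
    removed, order = set(), []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=deg.get)
        order.append(v)
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return order

def shadow_finder(adj, k):
    """Refine {(V, k)} until every set exceeds the Turán density for its ell."""
    def density(S):
        if len(S) < 2:
            return 1.0
        edges = sum(1 for u, v in combinations(S, 2) if v in adj[u])
        return edges / comb(len(S), 2)

    work, shadow = [(frozenset(adj), k)], []
    while work:
        S, ell = work.pop()
        if ell <= 1 or density(S) > 1 - 1 / (ell - 1):
            shadow.append((S, ell))
            continue
        # Degeneracy DAG of G|_S: orient each edge from earlier to later in the order.
        pos = {v: i for i, v in enumerate(degeneracy_order({v: adj[v] & S for v in S}))}
        for s in S:
            out = frozenset(w for w in adj[s] & S if pos[w] > pos[s])
            work.append((out, ell - 1))
    return shadow

def sample_estimate(adj, shadow, samples=50_000):
    """Procedure sample: pick sets with probability proportional to C(|S|, ell)."""
    weights = [comb(len(S), ell) for S, ell in shadow]
    total = sum(weights)
    if total == 0:
        return 0.0
    hits = 0
    for _ in range(samples):
        S, ell = random.choices(shadow, weights=weights)[0]
        A = random.sample(sorted(S), ell)
        hits += all(v in adj[u] for u, v in combinations(A, 2))
    return hits / samples * total
```

On small graphs, {\tt sample\_estimate(adj, shadow\_finder(adj, k))} can be checked directly against exhaustive enumeration of all $k$-subsets.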
\emph{The key empirical insight of this paper is that Tur\'{a}n clique shadows are small for real-world graphs.} We explain in more detail in the next section; \Fig{shadow_size} shows that the shadow sizes are typically less than $m$, and never more than $10m$. \section{Experimental results}\label{sec:results} \textbf{Preliminaries:} We implemented our algorithms in {\tt C++} and ran our experiments on a commodity machine equipped with a 3.00GHz Intel Core i7 processor with 8~cores and 256KB L2 cache (per core), 20MB L3 cache, and 128GB memory. We performed our experiments on a collection of graphs from SNAP~\cite{SNAP}, the largest with more than 100M edges. The collection includes social networks, web networks, and infrastructure networks. Each graph is made simple by ignoring direction. Basic properties of these graphs are presented in \Tab{estimates_TS}. In the implementation of {\sc Tur\'{a}n-shadow}, there is just one parameter to choose: the number of samples chosen in \Step{sample} in {\tt sample}. Theoretically, it is set to $(20/\gamma\varepsilon^2) \log(1/\delta)$; in practice, we just set it to 50K for all our runs. Note that $\gamma$ is not a free parameter and is automatically set in \Step{gamma} of {\sc Tur\'{a}n-shadow}. We focus on counting $k$-cliques for $k$ ranging from $5$ to $10$. We ignore $k=3,4$, since there is much existing (scalable) work for this setting~\cite{SePiKo13,JhSePi15,AhNe+15}. For the sake of presentation, we showcase results for $k=7,10$. We focus on $k=10$ since no existing algorithm produces results for 10-cliques in reasonable time. We also show specifics for $k=7$, to contrast with $k=10$. \begin{table*}[] \centering \caption{Graph properties} \label{tab:estimates_TS} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{|l|l|l|r|r|l|r|r|l|r|r|l|r|r|} \hline \multicolumn{5}{|c|}{} & \multicolumn{3}{c|}{\textbf{k=5}} & \multicolumn{3}{c|}{\textbf{k=7}} & \multicolumn{3}{c|}{\textbf{k=10}} \\ \cline{7-7} \hline \textbf{graph} & \textbf{vertices} & \textbf{edges} & \textbf{degen} & \textbf{max degree} & \textbf{estimate} & \textbf{\% error} & \textbf{time} & \textbf{estimate} & \textbf{\% error} & \textbf{time} & \textbf{estimate} & \textbf{\% error} & \textbf{time} \\ \cline{7-7} \hline loc-gowalla & 1.97E+05 & 9.50E+05 & 51 & 14730 & 1.46E+07 & 0.20 & 2 & 4.78E+07 & 0.36 & 2 & 1.08E+08 & 1.63 & 3 \\ web-Stanford & 2.82E+05 & 1.99E+06 & 71 & 38625 & 6.21E+08 & 0.00 & 20 & 3.47E+10 & 0.13 & 43 & 5.82E+12 & - & 52 \\ amazon0601 & 4.03E+05 & 4.89E+06 & 10 & 2752 & 3.64E+06 & 0.93 & 1 & 9.98E+05 & 0.95 & 1 & 9.77E+03 & 0.01 & 1 \\ com-youtube & 1.13E+06 & 2.99E+06 & 51 & 28754 & 7.29E+06 & 1.08 & 7 & 7.85E+06 & 1.38 & 8 & 1.83E+06 & 0.20 & 8 \\ web-Google & 8.76E+05 & 4.32E+06 & 44 & 6332 & 1.05E+08 & 0.10 & 2 & 6.06E+08 & 0.09 & 2 & 1.29E+10 & 0.82 & 2 \\ web-BerkStan & 6.85E+05 & 6.65E+06 & 201 & 84230 & 2.19E+10 & 0.00 & 101 & 9.30E+12 & 1.05 & 214 & 5.79E+16 & - & 262 \\ as-skitter & 1.70E+06 & 1.11E+07 & 111 & 35455 & 1.17E+09 & 0.01 & 153 & 7.30E+10 & 0.23 & 164 & 1.42E+13 & - & 180 \\ cit-Patents & 3.77E+06 & 1.65E+07 & 64 & 793 & 3.05E+06 & 0.34 & 10 & 1.89E+06 & 0.83 & 9 & 2.55E+03 & 4.46 & 9 \\ soc-pokec & 1.63E+06 & 2.23E+07 & 47 & 14854 & 5.29E+07 & 0.13 & 42 & 8.43E+07 & 0.48 & 45 & 1.98E+08 & 0.01 & 45 \\ com-lj & 4.00E+06 & 3.47E+07 & 360 & 14815 & 2.46E+11 & - & 106 & 4.48E+14 & - & 153 & 1.47E+19 & - & 252 \\ com-orkut & 3.07E+06 & 1.17E+08 & 253 & 33313 & 1.57E+10 & 0.00 & 3119 & 3.61E+11 & 1.97 & 5587 & 3.03E+13 & - & 9298 \\ 
\hline \end{tabular} \end{adjustbox} \caption{Table shows the sizes, degeneracy, maximum degree of the graphs, the counts of 5, 7 and 10 cliques obtained using {\sc Tur\'{a}n-shadow}{}, the percent relative error in the estimates, and time in seconds required to get the estimates. Some of the exact counts were obtained from ~\cite{FFF15} (where available). This is the first such algorithm that obtains these counts with $<2\%$ error without using any specialized hardware.} \end{table*} \textbf{Convergence of {\sc Tur\'{a}n-shadow}:} We picked two smaller graphs {\tt amazon0601} and {\tt web-Google} for which the exact $k$-clique count is known (for all $k \in [5,10]$). We choose both $k=7, 10$. For each graph, for sample size in [10K,50K,100K,500K,1M], we perform 100 runs of the algorithm. We plot the spread of the output of {\sc Tur\'{a}n-shadow}{}, over all these runs. The results are shown in~\Fig{convergence}. The red line denotes the true answer, and there is a point for the output of every single run. Even for 10-clique counting, the spread of 100 runs is absolutely minimal. For 50K samples, the range of values is within 2\% of the true answer. This was consistent with all our runs. \textbf{Accuracy of {\sc Tur\'{a}n-shadow}:} For many graphs (and values of $k$), it was not feasible to get an exact algorithm to run in reasonable time. The run time of exact procedures can vary wildly, so we have exact numbers for some larger graphs but could not generate numbers for smaller graphs. We collected as many exact results as possible to validate {\sc Tur\'{a}n-shadow}. For the sake of presentation, we only show a snapshot of these results here. For $k=7$, we collected exact results for a collection of graphs, and for each graph, compared the output of a single run of {\sc Tur\'{a}n-shadow}{} (with 50K samples) with the true answer. We compute \emph{relative error}: |true - estimate|/true. These results are presented in~\Fig{acc}. Note that the errors are within 2\% in all cases, again consistent with all our runs. In \Tab{estimates_TS}, we present the output of our algorithm for a single run on all instances and $k=5,7,10$. For every graph where we know the true value, we present the relative error. Barring one example ({\tt cit-Patents} for $k=10$), all errors are less than 2\%. Even in the worst case, the error is at most 5\%. \begin{figure*} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{Figures/amazon0601_7_convergence.pdf} \label{fig:p1} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{Figures/amazon0601_10_convergence.pdf} \label{fig:p2} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{Figures/web-Google_7_convergence.pdf} \label{fig:p3} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{Figures/web-Google_10_convergence.pdf} \label{fig:p4} \end{subfigure} \hfill \caption{\footnotesize{ Figure shows convergence over 100 runs of {\sc Tur\'{a}n-shadow}{} using 10K, 50K, 100K, 500K and 1M samples each. {\sc Tur\'{a}n-shadow}{} has an extremely low spread and consistently gives very accurate results.}} \label{fig:convergence} \end{figure*} \textbf{Running time:} All runtimes are presented in \Tab{estimates_TS}. (We show the time for a single run, since there was little variance for different runs on the same graph.) 
In all instances except {\tt com-orkut}, the runtime was a few minutes, even for graphs with tens of millions of edges. We stress that these are all on a single machine. For {\tt com-orkut}, the runtime is at most 2.5 hours. Previously, such graphs were processed with MapReduce on clusters~\cite{FFF15}. \begin{figure}[t] \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{Figures/shadow_size_vs_edges_k=7.pdf} \label{fig:shadow7} \end{subfigure} \hfill \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{Figures/shadow_size_vs_edges_k=10.pdf} \label{fig:shadow10} \end{subfigure} \caption{\footnotesize{ Figures show the sizes of the Tur\'{a}n shadows generated for k=7 and k=10 in all the graphs. The runtime of the algorithm is proportional to the size of the shadow and, crucially, the sizes scale only linearly with the number of edges.}} \label{fig:shadow_size} \end{figure} \begin{figure}[t] \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{Figures/succ_ratio_vs_graphs_log_k=7.pdf} \label{fig:p5} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{Figures/succ_ratio_vs_graphs_log_k=10.pdf} \label{fig:succ10} \end{subfigure} \caption{\footnotesize{ Figures show the success ratio (probability of finding a clique) obtained in the sampling experiments in all the graphs. }} \label{fig:success_ratios} \end{figure} \begin{figure}[t] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/est_num_cliques_log.pdf} \end{subfigure} \caption{\footnotesize{ Figures show the trends in clique counts of some graphs. While cit-Patents, com-youtube and amazon0601 show a decreasing trend, all other graphs show an exponential increase in the number of cliques with clique size. }} \label{fig:trends} \end{figure} \subsection{Comparison with other algorithms} Our exact brute-force procedure is a well-tuned algorithm that uses the degeneracy ordering and exhaustively searches outneighborhoods for cliques. This is basically the procedure of Finocchi {\em et al.}~\cite{FFF15}, inspired by the algorithm of Chiba-Nishizeki~\cite{ChNi85}. We compare with the following algorithms. \begin{asparaitem} \item Color coding: This is a classic algorithmic technique~\cite{AlYuZw94}. For counting $k$-cliques, the algorithm randomly colors vertices with one of $k$ colors. Then, the algorithm uses a brute-force procedure to count polychromatic $k$-cliques (where each vertex has a different color). This number is scaled to give an unbiased estimate, and the coloring helps cut down the search time of the brute-force procedure. This method has been applied in practice for numerous pattern counting problems~\cite{HoBe+07,BetzlerBFKN11,ZhWaBu+12}. \item Edge sampling: Edge sampling was discussed by Tsourakakis {\em et al.} in the context of triangle counting~\cite{TsKaMiFa09,TsDrMi09,TsKoMi11}, though the idea is flexible and can be used for large patterns~\cite{ElShBo16}. The idea here is to sample each edge independently with some probability $p$, and then count $k$-cliques in the down-sampled graph. This number is scaled to give an unbiased estimate for the number of $k$-cliques. For clique counting, we observe that minor differences in $p$ (by $0.1$) have huge effects on runtime and accuracy. To do a fair comparison, we run multiple experiments with varying $p$ (increments of $0.1$), until we reach the smallest $p$ that consistently yields less than 5\% error.
(Note that the error of {\sc Tur\'{a}n-shadow}{} is significantly smaller than this.) Timing comparisons are done with runs for that value of $p$. \item GRAFT~\cite{RaBhHa14}: Rahman {\em et al.} give a variant of edge sampling with better performance for large pattern counts~\cite{RaBhHa14}. The idea is to sample some set of edges, and exactly count the number of $k$-cliques on each of these edges. This can be scaled into an unbiased estimate for the total number of $k$-cliques. As with edge sampling, we increase the number of edge samples until we get consistently within 5\% error. Timing comparisons are done with this setting. Typical settings seem to be in the range of 100K to 1M samples. Beyond that, GRAFT is infeasible, even for graphs with 10M edges. \end{asparaitem} We focus on $k=7,10$ for clarity. In all cases, we simply terminate the algorithm if it takes more than the minimum of 7 hours and 100 times the time required by {\sc Tur\'{a}n-shadow}{}. We present the speedup of {\sc Tur\'{a}n-shadow}{} with respect to all these algorithms in \Fig{7-speedup} for k=7. For k=10, for most instances, no competing algorithm terminated. \begin{asparaitem} \item $k=7$ (\Fig{7-speedup}): {\sc Tur\'{a}n-shadow}{} outperformed Color Coding and GRAFT across all instances. Color Coding never gave good accuracy, so we ignore it in our speedup plots. We do note that Edge Sampling gives extremely good performance in some instances, but can be very slow in others. For {\tt amazon0601}, {\tt com-youtube}, {\tt cit-Patents}, and {\tt soc-pokec}, Edge Sampling is faster than {\sc Tur\'{a}n-shadow}. But {\sc Tur\'{a}n-shadow}{} handles all these graphs within a minute. The only exception is {\tt com-orkut}, where GRAFT is much faster than {\sc Tur\'{a}n-shadow}. We note that all other algorithms can perform extremely poorly on fairly small graphs: Edge Sampling is 10-100 times slower on a number of graphs, which have only millions of edges. On the other hand, {\sc Tur\'{a}n-shadow}{} always runs in minutes for these graphs. \item $k=10$: \emph{No competing algorithm} is able to handle 10-cliques for all datasets, even in 7 hours (giving {\sc Tur\'{a}n-shadow}{} a speedup of anywhere between 3x and 100x). They all generally fail for at least half of the instances. {\sc Tur\'{a}n-shadow}{} gets an answer for {\tt com-orkut} within 2.5 hours, and handles all other graphs in minutes. \end{asparaitem} \subsection{Details about {\sc Tur\'{a}n-shadow}} \textbf{Shadow size:} In \Fig{shadow_size}, we plot the size of the $k$-clique Tur\'{a}n shadow with respect to the number of edges in each instance. This is done for $k=7,10$. (The line $y=x$ is drawn as well.) As seen from \Thm{main-full}, the size of the shadow controls the storage and runtime of {\sc Tur\'{a}n-shadow}. We see that in almost all instances, the shadow size is around the number of edges. This empirically explains the efficiency of {\sc Tur\'{a}n-shadow}. The worst case is {\tt com-orkut}, where the shadow size is at most ten times the number of edges. \textbf{Success probability:} The final estimate of {\sc Tur\'{a}n-shadow}{} is generated through {\tt sample}. We asserted (theoretically) that $O(m)$ samples suffice, and in practice, we use 50K samples. In \Fig{success_ratios}, we plot (for $k=7,10$) the empirical probability of finding a clique in \Step{X} of {\tt sample}. The higher this is, the fewer samples we require and the more confidence we have in the statistical validity of our estimate.
Almost all (empirical) probabilities are more than $0.1$, and 50K samples are more than enough for convergence. \textbf{Trends in clique numbers:} \Fig{trends} plots the number of $k$-cliques (as computed by {\sc Tur\'{a}n-shadow}) versus $k$. (We do not consider all graphs for the sake of clarity.) Interestingly, there are some graphs where the number of cliques grows exponentially. This is probably because of a large clique/dense-subgraph, and it would be interesting to verify this. For another class of graphs, the clique counts are consistently decreasing. This seems to classify graphs into one of two types. We feel further analysis of these trends would be interesting, and {\sc Tur\'{a}n-shadow}{} can be a useful tool for network analysis. \clearpage \scriptsize \bibliographystyle{abbrv}
{ "timestamp": "2018-08-29T02:06:20", "yymm": "1611", "arxiv_id": "1611.05561", "language": "en", "url": "https://arxiv.org/abs/1611.05561", "abstract": "Clique counts reveal important properties about the structure of massive graphs, especially social networks. The simple setting of just 3-cliques (triangles) has received much attention from the research community. For larger cliques (even, say 6-cliques) the problem quickly becomes intractable because of combinatorial explosion. Most methods used for triangle counting do not scale for large cliques, and existing algorithms require massive parallelism to be feasible.We present a new randomized algorithm that provably approximates the number of k-cliques, for any constant k. The key insight is the use of (strengthenings of) the classic Turán's theorem: this claims that if the edge density of a graph is sufficiently high, the k-clique density must be non-trivial. We define a combinatorial structure called a Turán shadow, the construction of which leads to fast algorithms for clique counting.We design a practical heuristic, called TURÁN-SHADOW, based on this theoretical algorithm, and test it on a large class of test graphs. In all cases,TURÁN-SHADOW has less than 2% error, in a fraction of the time used by well-tuned exact algorithms. We do detailed comparisons with a range of other sampling algorithms, and find that TURÁN-SHADOW is generally much faster and more accurate. For example, TURÁN-SHADOW estimates all cliques numbers up to size 10 in social network with over a hundred million edges. This is done in less than three hours on a single commodity machine.", "subjects": "Social and Information Networks (cs.SI); Data Structures and Algorithms (cs.DS)", "title": "A Fast and Provable Method for Estimating Clique Counts Using Turán's Theorem", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692352660529, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.7079585041056211 }
https://arxiv.org/abs/2012.13795
On the Möbius function of permutations under the pattern containment order
We study several aspects of the Möbius function, $\mu[\sigma,\pi]$, on the poset of permutations under the pattern containment order. First, we consider cases where the lower bound of the poset is indecomposable. We show that $\mu[\sigma,\pi]$ can be computed by considering just the indecomposable permutations contained in the upper bound. We apply this to the case where the upper bound is an increasing oscillation, and give a method for computing the value of the Möbius function that only involves evaluating simple inequalities. We then consider conditions on an interval which guarantee that the value of the Möbius function is zero. In particular, we show that if a permutation $\pi$ contains two intervals of length 2, which are not order-isomorphic to one another, then $\mu[1,\pi] = 0$. This allows us to prove that the proportion of permutations of length $n$ with principal Möbius function equal to zero is asymptotically bounded below by $(1-1/e)^2 \ge 0.3995$. This is the first result determining the value of $\mu[1,\pi]$ for an asymptotically positive proportion of permutations $\pi$. Following this, we use "2413-balloon" permutations to show that the growth of the principal Möbius function on the permutation poset is exponential. This improves on previous work, which has shown that the growth is at least polynomial. We then generalise 2413-balloon permutations, and find a recursion for the value of the principal Möbius function of these generalisations.
\chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} We study several aspects of the M\"{o}bius function, $\mobfn{\sigma}{\pi}$, on the poset of permutations under the pattern containment order. First, we consider cases where the lower bound of the poset is indecomposable. We show that $\mobfn{\sigma}{\pi}$ can be computed by considering just the indecomposable permutations contained in the upper bound. We apply this to the case where the upper bound is an increasing oscillation, and give a method for computing the value of the M\"{o}bius function that only involves evaluating simple inequalities. We then consider conditions on an interval which guarantee that the value of the M\"{o}bius function is zero. In particular, we show that if a permutation $\pi$ contains two intervals of length 2, which are not order-isomorphic to one another, then $\mobfn{1}{\pi} = 0$. This allows us to prove that the proportion of permutations of length $n$ with principal M\"{o}bius function equal to zero is asymptotically bounded below by $(1-1/e)^2\ge0.3995$. This is the first result determining the value of $\mobfn{1}{\pi}$ for an asymptotically positive proportion of permutations~$\pi$. Following this, we use ``2413-balloon'' permutations to show that the growth of the principal M\"{o}bius function on the permutation poset is exponential. This improves on previous work, which has shown that the growth is at least polynomial. We then generalise 2413-balloon permutations, and find a recursion for the value of the principal M\"{o}bius function of these generalisations. Finally, we look back at the results found, and discuss ways to relate the results from each chapter. We then consider further research avenues. \chapter*{Acknowledgements} \addcontentsline{toc}{chapter}{Acknowledgements} I would like to thank my supervisor, Robert Brignall, for introducing me to the M\"{o}bius function on the permutation pattern poset, and for offering me the chance to study under his supervision. Robert has always been patient and forbearing in our interactions, and has allowed me the freedom to find my own mathematical ``voice''. As a part-time student, living some distance from The Open University campus, my opportunities to meet with other PhD students have been somewhat limited. Where these opportunities have arisen, I have been warmly welcomed. I would especially like to thank Grahame Erskine, Jakub Slia\v{c}an, James Tuite, Margaret Stanier, Olivia Jeans, and Rob Lewis for their support and discussions. Part-time students are not entitled to any funding support from the School of Mathematics and Statistics at The Open University. I am therefore most grateful to the School for providing funding for conferences and visits over the last four years. I am also grateful for the support received from the National Science Foundation for contributions towards travel and accommodation costs for attending conferences, and for the support from Charles University in Prague for my visit in 2019. The Permutation Patterns community is small and vibrant. I was made to feel welcome by everyone I encountered. I would particularly like to thank Einar Steingr\'imsson, V{\'i}t Jel{\'i}nek, and Jan Kyn{\v c}l for their support and encouragement. I would also like to thank the anonymous referees of the papers which underpin this thesis for all their work. 
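For small permutations, the values tabulated in the appendices that follow can be reproduced by brute force directly from the definition of the M\"{o}bius function. The following Python sketch (ours; it enumerates the full interval $[1,\pi]$ by exhaustive pattern containment, so it scales only to short permutations) computes the principal M\"{o}bius function $\mobp{\pi}$ via the defining recursion $\mobfn{x}{y} = -\sum_{x \le z < y} \mobfn{x}{z}$.

```python
from itertools import combinations

def standardize(seq):
    """Relabel distinct values to the order-isomorphic pattern on {1, ..., len(seq)}."""
    ranks = {v: r + 1 for r, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

def patterns(pi):
    """All patterns contained in pi, i.e., the interval [1, pi], pi included."""
    found = set()
    for k in range(1, len(pi) + 1):
        for idx in combinations(range(len(pi)), k):
            found.add(standardize([pi[i] for i in idx]))
    return found

def principal_mobius(pi):
    """mu[1, pi], computed bottom-up in order of increasing pattern length."""
    mu = {(1,): 1}  # mu[1, 1] = 1
    for sigma in sorted(patterns(tuple(pi)), key=len):
        if sigma not in mu:
            mu[sigma] = -sum(mu[tau] for tau in patterns(sigma) if tau != sigma)
    return mu[tuple(pi)]

# Sanity checks: [1, 12] is a 2-element chain and [1, 123] a 3-element chain.
assert principal_mobius((1, 2)) == -1
assert principal_mobius((1, 2, 3)) == 0
```

Given the definitions of the families $W_1(n)$, $W_2(n)$, $E_1(2n,k)$, $E_2(2n)$ and $O(2n+1,k)$ from the main text, the smallest entries in the tables below can be verified this way.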
\chapter{\texorpdfstring{Values of $\mobp{\pi}$ where $\pi$ is a specific simple permutation} {Values of mu(pi) where pi is a specific simple permutation}} \label{chapter_appendix-a} \section{\texorpdfstring {Values of $\mobp{W_1(n)}$} {Values of the principal M\"{o}bius function of W1(n)}} \begin{table}[ht!] \begin{center} \begin{tabular}{ccccc} \begin{tabular}{lr} \toprule $n$ & $\mobp{W_1(n)}$ \\ \midrule 4 & -3 \\ 5 & 6 \\ 6 & -9 \\ 7 & 12 \\ 8 & -15 \\ 9 & 18 \\ 10 & -21 \\ 11 & 24 \\ 12 & -27 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{lr} \toprule $n$ & $\mobp{W_1(n)}$ \\ \midrule 13 & 30 \\ 14 & -33 \\ 15 & 36 \\ 16 & -39 \\ 17 & 42 \\ 18 & -45 \\ 19 & 48 \\ 20 & -51 \\ 21 & 54 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{lr} \toprule $n$ & $\mobp{W_1(n)}$ \\ \midrule 22 & -57 \\ 23 & 60 \\ 24 & -63 \\ 25 & 66 \\ 26 & -69 \\ 27 & 72 \\ 28 & -75 \\ 29 & 78 \\ 30 & -81 \\ \bottomrule \end{tabular} \end{tabular} \caption{Values of $\mobp{W_1(n)}$ for $n = 4, \ldots, 30$.} \label{table-values-of-w1} \end{center} \end{table} \section{\texorpdfstring {Values of $\mobp{W_2(n)}$} {Values of the principal M\"{o}bius function of W2(n)}} \begin{table}[ht!] \begin{center} \begin{tabular}{ccccc} \begin{tabular}{lr} \toprule $n$ & $\mobp{W_2(n)}$ \\ \midrule 4 & -3 \\ 5 & 4 \\ 6 & -5 \\ 7 & 6 \\ 8 & -7 \\ 9 & 8 \\ 10 & -9 \\ 11 & 10 \\ 12 & -11 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{lr} \toprule $n$ & $\mobp{W_2(n)}$ \\ \midrule 13 & 12 \\ 14 & -13 \\ 15 & 14 \\ 16 & -15 \\ 17 & 16 \\ 18 & -17 \\ 19 & 18 \\ 20 & -19 \\ 21 & 20 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{lr} \toprule $n$ & $\mobp{W_2(n)}$ \\ \midrule 22 & -21 \\ 23 & 22 \\ 24 & -23 \\ 25 & 24 \\ 26 & -25 \\ 27 & 26 \\ 28 & -27 \\ 29 & 28 \\ 30 & -29 \\ \bottomrule \end{tabular} \end{tabular} \caption{Values of $\mobp{W_2(n)}$ for $n = 4, \ldots, 30$.} \label{table-values-of-w2} \end{center} \end{table} \section{\texorpdfstring {Values of $\mobp{E_1(2n,k)}$} {Values of the principal M\"{o}bius function of E1(2n,k)}} \begin{table}[ht!] 
\ContinuedFloat* \begin{center} \begin{tabular}{ccc} \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{E_1(2n,k)}$ \\ \midrule 3 & 1 & -5 \\ & & \\ 4 & 1 & -7 \\ 4 & 2 & -8 \\ & & \\ 5 & 1 & -9 \\ 5 & 2 & -10 \\ 5 & 3 & -12 \\ & & \\ 6 & 1 & -11 \\ 6 & 2 & -12 \\ 6 & 3 & -14 \\ 6 & 4 & -17 \\ & & \\ 7 & 1 & -13 \\ 7 & 2 & -14 \\ 7 & 3 & -16 \\ 7 & 4 & -19 \\ 7 & 5 & -23 \\ & & \\ & & \\ & & \\ & & \\ \bottomrule \end{tabular} & \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{E_1(2n,k)}$ \\ \midrule 8 & 1 & -15 \\ 8 & 2 & -16 \\ 8 & 3 & -18 \\ 8 & 4 & -21 \\ 8 & 5 & -25 \\ 8 & 6 & -30 \\ & & \\ 9 & 1 & -17 \\ 9 & 2 & -18 \\ 9 & 3 & -20 \\ 9 & 4 & -23 \\ 9 & 5 & -27 \\ 9 & 6 & -32 \\ 9 & 7 & -38 \\ & & \\ 10 & 1 & -19 \\ 10 & 2 & -20 \\ 10 & 3 & -22 \\ 10 & 4 & -25 \\ 10 & 5 & -29 \\ 10 & 6 & -34 \\ 10 & 7 & -40 \\ 10 & 8 & -47 \\ \bottomrule \end{tabular} & \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{E_1(2n,k)}$ \\ \midrule 11 & 1 & -21 \\ 11 & 2 & -22 \\ 11 & 3 & -24 \\ 11 & 4 & -27 \\ 11 & 5 & -31 \\ 11 & 6 & -36 \\ 11 & 7 & -42 \\ 11 & 8 & -49 \\ 11 & 9 & -57 \\ & & \\ 12 & 1 & -23 \\ 12 & 2 & -24 \\ 12 & 3 & -26 \\ 12 & 4 & -29 \\ 12 & 5 & -33 \\ 12 & 6 & -38 \\ 12 & 7 & -44 \\ 12 & 8 & -51 \\ 12 & 9 & -59 \\ 12 & 10 & -68 \\ & & \\ & & \\ & & \\ \bottomrule \end{tabular} \end{tabular}
\caption{Values of $\mobp{E_1(2n,k)}$ for $n = 3, \ldots, 12$, and all valid values of $k$.} \label{table-values-of-e1-3-12} \end{center} \end{table}
\begin{table}[ht!] \ContinuedFloat \begin{center} \begin{tabular}{ccc} \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{E_1(2n,k)}$ \\ \midrule 13 & 1 & -25 \\ 13 & 2 & -26 \\ 13 & 3 & -28 \\ 13 & 4 & -31 \\ 13 & 5 & -35 \\ 13 & 6 & -40 \\ 13 & 7 & -46 \\ 13 & 8 & -53 \\ 13 & 9 & -61 \\ 13 & 10 & -70 \\ 13 & 11 & -80 \\ & & \\ & & \\ \bottomrule \end{tabular} & \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{E_1(2n,k)}$ \\ \midrule 14 & 1 & -27 \\ 14 & 2 & -28 \\ 14 & 3 & -30 \\ 14 & 4 & -33 \\ 14 & 5 & -37 \\ 14 & 6 & -42 \\ 14 & 7 & -48 \\ 14 & 8 & -55 \\ 14 & 9 & -63 \\ 14 & 10 & -72 \\ 14 & 11 & -82 \\ 14 & 12 & -93 \\ & & \\ \bottomrule \end{tabular} & \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{E_1(2n,k)}$ \\ \midrule 15 & 1 & -29 \\ 15 & 2 & -30 \\ 15 & 3 & -32 \\ 15 & 4 & -35 \\ 15 & 5 & -39 \\ 15 & 6 & -44 \\ 15 & 7 & -50 \\ 15 & 8 & -57 \\ 15 & 9 & -65 \\ 15 & 10 & -74 \\ 15 & 11 & -84 \\ 15 & 12 & -95 \\ 15 & 13 &-107 \\ \bottomrule \end{tabular} \end{tabular}
\caption{Values of $\mobp{E_1(2n,k)}$ for $n = 13, 14, 15$, and all valid values of $k$.} \end{center} \end{table}
\section{\texorpdfstring {Values of $\mobp{E_2(2n)}$} {Values of the principal M\"{o}bius function of E2(2n)}}
\begin{table}[ht!] \begin{center} \begin{tabular}{ccccc} \begin{tabular}{lr} \toprule $n$ & $\mobp{E_2(2n)}$ \\ \midrule 3 & -5 \\ 4 & -8 \\ 5 & -12 \\ 6 & -17 \\ 7 & -23 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{lr} \toprule $n$ & $\mobp{E_2(2n)}$ \\ \midrule 8 & -30 \\ 9 & -38 \\ 10 & -47 \\ 11 & -57 \\ 12 & -68 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{lr} \toprule $n$ & $\mobp{E_2(2n)}$ \\ \midrule 13 & -80 \\ 14 & -93 \\ 15 &-107 \\ & \\ & \\ \bottomrule \end{tabular} \end{tabular}
\caption{Values of $\mobp{E_2(2n)}$ for $n = 3, \ldots, 15$.} \label{table-values-of-e2} \end{center} \end{table}
\section{\texorpdfstring {Values of $\mobp{O(2n+1,k)}$} {Values of the principal M\"{o}bius function of O(2n+1,k)}}
\begin{table}[ht!]
\begin{center} \begin{tabular}{ccc} \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{O(2n+1,k)}$ \\ \midrule 3 & $1, 2$ & 6 \\ 4 & $1,2,3$ & 8 \\ 5 & $1, \ldots, 4$ & 10 \\ 6 & $1, \ldots, 5$ & 12 \\ 7 & $1, \ldots, 6$ & 14 \\ 8 & $1, \ldots, 7$ & 16 \\ 9 & $1, \ldots, 8$ & 18 \\ \bottomrule \end{tabular} & \phantom{xxx} & \begin{tabular}{llr} \toprule $n$ & $k$ & $\mobp{O(2n+1,k)}$ \\ \midrule 10 & $1, \ldots, 9$ & 20 \\ 11 & $1, \ldots, 10$ & 22 \\ 12 & $1, \ldots, 11$ & 24 \\ 13 & $1, \ldots, 12$ & 26 \\ 14 & $1, \ldots, 13$ & 28 \\ 15 & $1, \ldots, 14$ & 30 \\ & & \\ \bottomrule \end{tabular} \end{tabular}
\caption{Values of $\mobp{O(2n+1,k)}$ for $n = 3, \ldots, 15$, and all valid values of $k$.} \label{table-values-of-o} \end{center} \end{table}
\chapter{\texorpdfstring {Canonical permutations that achieve $\truemobmax(n)$ or $\truemobmin(n)$} {Canonical permutations that achieve MaxMu(n) or MinMu(n)}} \label{chapter_appendix-b}
\begin{table}[ht!] \[ \begin{array}{lrr} \toprule n & \mobp{\pi} = \truemobmin(n) & \mobp{\pi} = \truemobmax(n) \\ \midrule 1 & \mathbf{1} & \mathbf{1} \\ 2 & \mathbf{12} & \mathbf{12} \\ 3 & 123 & 132 \\ 4 & \mathbf{2413} & 1234; \;1243; \;1432 \\ \multirow{3}{*}{5 \Bigg\{ } & 12345; \;12354; \;12435; \;12453; & \\ & 12543; \;13452; \;14325; \;14532; & \mathbf{24153} \\ & 15432; \;21354; \;21453; \;21543 \phantom{;} & \\ 6 & \mathbf{351624} & 231564 \\ 7 & 2547163; \;3416725 & \mathbf{2461735} \\ 8 & \mathbf{35172846} & \mathbf{36184725} \\ 9 & \mathbf{472951836} & \mathbf{357182946} \\ 10 & \mathbf{4{,}6{,}8{,}1{,}9{,}2{,}10{,}3{,}5{,}7} & \mathbf{4{,}7{,}9{,}1{,}10{,}6{,}2{,}8{,}3{,}5} \\ 11 & \mathbf{3{,}5{,}8{,}10{,}1{,}7{,}11{,}2{,}9{,}4{,}6} & \mathbf{3{,}6{,}1{,}9{,}4{,}11{,}7{,}2{,}10{,}5{,}8} \\ 12 & \mathbf{4{,}7{,}2{,}10{,}5{,}1{,}12{,}8{,}3{,}11{,}6{,}9} & \mathbf{5{,}10{,}2{,}7{,}12{,}4{,}9{,}1{,}6{,}11{,}3{,}8} \\ \multirow{3}{*}{13 \Bigg\{ } && 1{,}5{,}8{,}3{,}11{,}6{,}2{,}13{,}9{,}4{,}12{,}7{,}10 ; \\ & \mathbf{6{,}2{,}9{,}4{,}11{,}1{,}7{,}13{,}3{,}10{,}5{,}12{,}8} & 1{,}8{,}11{,}5{,}13{,}9{,}3{,}12{,}6{,}2{,}10{,}4{,}7 ; \\ && \mathbf{4{,}7{,}2{,}10{,}5{,}13{,}1{,}12{,}8{,}3{,}11{,}6{,}9} \\ \bottomrule \end{array} \]
\caption{Canonical permutations of length $n$ where the principal M\"{o}bius function has a minimum / maximum value. Simple permutations are highlighted.} \label{table_canonical_min_max} \end{table}
\chapter{Background and history} \label{chapter_background_and_history}
\section{Permutations}
The first, albeit implicit, reference to permutations in the literature appears to be due to Euler in~\cite{Euler1755}, where he describes polynomials which essentially define what are now known as the Eulerian numbers $A_{n,m}$. In permutational terms, $A_{n,m}$ counts the number of permutations of length $n$ that have $m$ descents. The next significant set of results comes some 150 years later, when MacMahon~\cite{MacMahon1915} proved a result that, interpreted in permutational terms, shows that $\Av(123)$ is counted by the Catalan numbers. The Erd\H{o}s--Szekeres theorem~\cite{Erdos1935} can be interpreted as saying that a permutation of length $(a-1)(b-1)+1$ must contain either an increasing sequence of length $a$ or a decreasing sequence of length $b$. The study of pattern avoidance in permutations can be said to have started with exercise 2.2.1(5) in Knuth~\cite{Knuth1968}, where readers are essentially asked to show that a permutation that can be stack-sorted must avoid 231.
This work was further developed in the 1970s and 1980s in papers by Knuth~\cite{Knuth1970}, Rogers~\cite{Rogers1978}, Rotem~\cite{Rotem1981}, and Simion and Schmidt~\cite{Simion1985}. This initial development then turned into a veritable explosion of papers, most of which are too specific to relate to this general background. A good summary of the way in which the field has developed can be found in the book by Kitaev~\cite{Kitaev2011}, the book by Bona~\cite{Bona2016Book}, the survey article by Steingr{\'{i}}msson~\cite{Steingrimsson2013}, and the chapter by Vatter on permutation classes in~\cite{Handbook2015}.
\section{The M\"{o}bius function}
The M\"{o}bius function was first defined in the context of number theory by August M\"{o}bius in 1832 in~\cite{Mobius1832}. In that paper, M\"{o}bius defines $\mu \colon \bbN \to \{ -1, 0, 1 \}$ as 0 if $n$ has a repeated prime factor, and as $(-1)^k$ if $n$ is the product of $k$ distinct prime factors. If we say that a positive integer $a$ is contained in a positive integer $b$ if $a$ divides $b$, then the integers under this relationship form a poset, and $\mu(n) = \mobfn{1}{n}$. The number-theoretic M\"{o}bius function has been extensively studied since its definition. The combinatorial M\"{o}bius function does not seem to have any significant presence in the literature until a seminal paper by Rota in 1964~\cite{Rota1964a}, which made an explicit link between the principle of inclusion--exclusion and the combinatorial M\"{o}bius function. While there are many papers that have results relating to the M\"{o}bius function on a variety of posets, we refer the reader to Cameron~\cite{Cameron1994} or Stanley~\cite{Stanley2012} for a general background to the area. The classic definition of the M\"{o}bius function, as given in Equation~\ref{equation_mobius_function}, is, essentially, a recursive sum over the elements of the poset. This thesis, in general, restricts itself to this view. A simple consequence of the fundamental definition is Hall's Theorem~\cite[Proposition 3.8.5]{Stanley2012}, which expresses the M\"{o}bius function as a sum over the chains in the poset. There are, however, other ways in which we can understand the M\"{o}bius function, and in order to provide a broad background, we briefly describe two of them here.
\subsection{Simplicial complexes}
Given a set of vertices $V$, a \emph{simplicial complex}\extindex{simplicial complex} $\Delta$ is a non-empty set of subsets of $V$ such that if $v \in V$, then $\{ v \} \in \Delta$; and if $G \in \Delta$, and $F \subset G$, then $F \in \Delta$. If $F \in \Delta$, then we say that $F$ has dimension $\order{F} - 1$. We then refer to $F$ as a \emph{face}\extindex[simplicial complex]{face} of $\Delta$. Note that the empty subset $\emptyset$ is a face of $\Delta$. If we have two elements of a poset $\sigma$ and $\pi$, with $\sigma < \pi$, and $(\sigma, \pi)$ is non-empty, then we can take $V$ to be the set of elements of the open interval $(\sigma, \pi)$, and the chains in $(\sigma, \pi)$ are then the faces of a simplicial complex $\Delta$. Given a simplicial complex $\Delta$, the reduced Euler characteristic of $\Delta$, $\chi (\Delta)$, is defined as \[ \chi (\Delta) = \sum_{k=-1}^{\Dim \Delta} (-1)^k f_k (\Delta), \] where $f_k (\Delta)$ is the number of faces of dimension $k$.
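As a small worked example (ours, for illustration): take the poset of subsets of $\{1, 2\}$ ordered by inclusion, with $\sigma = \emptyset$ and $\pi = \{1, 2\}$. The open interval $(\sigma, \pi)$ contains just the two incomparable elements $\{1\}$ and $\{2\}$, so $\Delta$ consists of the empty face together with the two dimension-0 faces $\{\{1\}\}$ and $\{\{2\}\}$, and there are no faces of dimension 1. Hence
\[
\chi (\Delta) = (-1)^{-1} \cdot 1 + (-1)^{0} \cdot 2 = 1,
\]
which, as we note next, is exactly the value of the M\"{o}bius function of this interval.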
If we have a poset $P$ with unique minimal and maximal elements $\hat{0}$ and $\hat{1}$ respectively, and set $\Delta$ to be the simplicial complex whose faces are the chains in the open interval $(\hat{0}, \hat{1})$, then Hall's Theorem~(see, for example, Stanley~\cite[Proposition 3.8.5]{Stanley2012} or Wachs~\cite[Proposition 1.2.6]{Wachs2006}) gives us that $ \mobfn{\hat{0}}{\hat{1}} = \chi (\Delta) $. Since a simplicial complex is a topological entity, in addition to the possibility of using the M\"{o}bius function to determine the value of the reduced Euler characteristic, it is possible to pose questions about the topology of the poset. While this approach has been taken in some papers (discussed in Section~\ref{section_background_permutation_poset_and_mobius} below), this thesis does not use this approach or provide any topological results. The interested reader is referred to the material in~\cite{Wachs2006} for further details.
\subsection{Incidence algebras and incidence matrices}
A poset $P$ is \emph{locally finite}\extindex[poset]{locally finite} if, for every $\sigma, \pi \in P$, the interval $[\sigma, \pi]$ has a finite number of elements. Following Rota~\cite{Rota1964a}, we define an \emph{incidence algebra}\extindex{incidence algebra} by first taking a locally finite partially ordered set $P$, and considering the set of all real-valued functions $f(x,y)$, where $x, y \in P$ and $f(x,y) = 0$ if $x \not\leq y$. We then define the incidence algebra of $P$ by convolution: \[ (f * g)(x,y) = \sum_{x \leq z \leq y} f(x,z) g(z,y). \] This algebra has an identity element, normally written as $\delta(x,y)$, which is defined as \[ \delta(x,y) = \begin{cases} 1 & \text{If } x = y \\ 0 & \text{Otherwise}. \end{cases} \] The \emph{zeta function}\extindex[incidence algebra]{zeta function} is defined as \[ \zeta(x,y) = \begin{cases} 1 & \text{If } x \leq y \\ 0 & \text{Otherwise}. \end{cases} \] With these definitions, it can be shown that the M\"{o}bius function is the convolutional inverse of $\zeta$, so \[ (\zeta * \mu) (x,y) = (\mu * \zeta) (x,y) = \delta(x,y). \] Let $Z$ be a square matrix, with rows and columns indexed by the elements of a finite poset $P$, and with $Z_{x,y} = \zeta(x,y)$. We call this the \emph{zeta matrix}\extindex[incidence algebra]{zeta matrix} of $P$. It is now possible to show that if $x$ and $y$ are elements of the poset, then \[ \mobfn{x}{y} = (Z^{-1})_{x,y}. \] We remark here that the result above implies that we can determine the value of the M\"{o}bius function for every interval in a poset by (simply) calculating the inverse of the zeta matrix. We have some computational evidence that using the ``matrix inverse'' method to determine the value of the M\"{o}bius function for a significant number of intervals in a poset is computationally more efficient than using the fundamental definition given in Equation~\ref{equation_mobius_function}. On the other hand, if we want to determine the value of the M\"{o}bius function for a single interval, then the fundamental definition seems to be significantly faster than the matrix inverse method. Of course, our ideal is to find methods that can determine the value of the M\"{o}bius function faster than either the matrix inverse method, or using the fundamental definition.
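To make the preceding remarks concrete, the following short Python sketch (our illustration; the surrounding text describes the method, but the code is not drawn from any of the cited works) builds the zeta matrix of the divisibility poset on $\{1, \ldots, 12\}$ and inverts it, recovering the classical number-theoretic values $\mu(n) = \mobfn{1}{n}$.
\begin{verbatim}
# A minimal sketch (ours): compute Mobius values by inverting the zeta
# matrix of the divisibility poset on {1, ..., 12}.
N = 12
elems = list(range(1, N + 1))    # 1, 2, ..., N is a linear extension

# Zeta matrix: Z[i][j] = 1 iff elems[i] divides elems[j].
Z = [[1 if elems[j] % elems[i] == 0 else 0 for j in range(N)]
     for i in range(N)]

# Z is unit upper-triangular in this ordering, so its inverse M is an
# integer matrix, computable by forward substitution.  The recurrence
# below is exactly the recursive definition of the Mobius function.
M = [[0] * N for _ in range(N)]
for i in range(N):
    M[i][i] = 1
    for j in range(i + 1, N):
        M[i][j] = -sum(M[i][k] * Z[k][j] for k in range(i, j))

print({y: M[0][y - 1] for y in elems})
# {1: 1, 2: -1, 3: -1, 4: 0, 5: -1, 6: 1, 7: -1, 8: 0, 9: 0,
#  10: 1, 11: -1, 12: 0}
\end{verbatim}
Note that, once the elements are listed in a linear extension, $Z$ is unit upper-triangular, and the forward substitution used to invert it is precisely the recursive definition of the M\"{o}bius function in matrix form.
\section{The M\"{o}bius function for general posets}
\label{section_background_mobius_various_posets}
Before we move on to consider the M\"{o}bius function of the permutation poset under classic pattern containment, we divert slightly to review some results relating to the M\"{o}bius function on other posets.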
We start by remarking that, for a general poset, using the recursive definition of the M\"{o}bius function is computationally hard. Our purpose in this section is to establish that determining the M\"{o}bius function need not be computationally hard in some cases. We refer the reader to~\cite{Kitaev2011} for a good overview of most of the containment types discussed in this section. There are some well-known cases where an explicit formula exists for the M\"{o}bius function. For example, the M\"{o}bius function on a Boolean algebra is given by $\mobfn{R}{S} = (-1)^{\#(S-R)}$ (see, for instance, Example 3.8.3 in~\cite{Stanley2012}). A slightly more complex example is given by the poset of subspaces of a vector space $V \subseteq GF(q)^n$. If we have $U \subseteq W \subseteq V$, then \[ \mobfn{U}{W} = (-1)^{k} q^{\binom{k}{2}}, \text{ where } k = \dim(W) - \dim(U). \] This result is attributed to Hall in Rota~\cite{Rota1964a}. The poset of subspaces of a vector space is an example of a lattice, and the proof given in Rota utilises this fact. The M\"{o}bius function for general lattices is also well-known (see, for instance, Section 3.9 in~\cite{Stanley2012}). We now turn to posets that are defined by free monoids over alphabets, or by permutations using a containment other than classic pattern containment. Bj\"orner completely determined the M\"{o}bius function of subword order in~\cite{BjornerSubword}, and later did the same for factor order in~\cite{Bjorner1993}. Sagan and Vatter, in~\cite{Sagan2006}, considered ordered partitions (compositions) of an integer, with a partial order given by subwords, and completely determined the M\"{o}bius function on this poset. This paper also contains the first result for the permutation poset under classic pattern containment, which we discuss in Section~\ref{section_background_permutation_poset_and_mobius}. Bernini, Ferrari and Steingr\'{\i}msson, in~\cite{Bernini2011}, considered permutations using consecutive pattern containment. For most intervals $[\sigma, \pi]$ they give a set of explicit formulae for $\mobfn{\sigma}{\pi}$, based mainly on how many times $\sigma$ occurs in $\pi$. For intervals not covered by their formulae, they provide a polynomial-time algorithm to calculate the M\"{o}bius function. We consider that the M\"{o}bius function of permutations under consecutive pattern containment is, therefore, completely known. Sagan and Willenbring, in~\cite{Sagan2012}, reproduced this result using a technique known as discrete Morse theory. Bernini and Ferrari, in~\cite{Bernini2017}, introduced the quasi-consecutive pattern poset of permutations, where $\sigma$ is contained in $\pi$ if $\pi$ contains an occurrence of $\sigma$ where all entries are adjacent, except possibly the first and second. They completely determine the M\"{o}bius function for any interval $[\sigma, \pi]$ where $\sigma$ occurs exactly once in $\pi$. A recent preprint by Bernini, Cervetti, Ferrari and Steingr\'{\i}msson~\cite{Bernini2019} considers the poset of Dyck paths, where we say that a path $P$ contains a path $Q$ if the steps in $Q$ are a subsequence of the steps in $P$. The preprint includes expressions for the M\"{o}bius function of some specific intervals in this poset.
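Before moving on, we pause for a small computational aside (a sketch of our own; none of the works above contain this code), checking the Boolean-algebra formula quoted at the start of this section against the recursive definition of the M\"{o}bius function:
\begin{verbatim}
# A minimal sketch (ours): check the Boolean-algebra formula
# mu[R, S] = (-1)^{|S - R|} against the recursive definition, on the
# lattice of subsets of {0, 1, 2, 3} ordered by inclusion.
from itertools import combinations
from functools import lru_cache

ground = range(4)
subsets = [frozenset(c) for r in range(5)
           for c in combinations(ground, r)]

@lru_cache(maxsize=None)
def mob(R, S):
    # mu[R, S] = 1 if R = S, else -(sum of mu[R, T] over R <= T < S).
    if R == S:
        return 1
    return -sum(mob(R, T) for T in subsets if R <= T < S)

assert all(mob(R, S) == (-1) ** len(S - R)
           for R in subsets for S in subsets if R <= S)
\end{verbatim}
The same pattern, with the comparison adjusted, can be used to test small cases of any of the explicit formulae above.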
The posets discussed so far are, in some way, simpler than the permutation poset under classic pattern containment, and we have seen that the M\"{o}bius function has either been completely determined, or, as in the last two examples, has been determined for a particular subset of intervals in the poset. We now consider posets that, in some sense, generalize classic pattern containment. Mesh patterns are a generalization of classic pattern containment on permutations, and the poset of mesh patterns contains the poset of permutations as an induced subposet. We refer the reader to~\cite{Branden2011} for a formal definition of mesh patterns. In~\cite{Smith2018a}, Smith and Ulfarsson present some initial results on the M\"{o}bius function of the mesh pattern poset, and show that as $n \to \infty$, the proportion of mesh patterns $p$ of length $n$ with $\mobfn{1^\emptyset}{p} = 0$ approaches 1, where $1^\emptyset$ is the unshaded singleton mesh pattern. The mesh pattern $1^\emptyset$ corresponds to the permutation $1$ in the induced poset of permutations. In the permutation pattern poset, we know~\cite{Albert2003} that the number of simple permutations of length $n$ is, asymptotically, $\frac{n!}{\mathrm{e}^2}$, and it is generally believed that for most simple permutations the value of the principal M\"{o}bius function is non-zero; thus, in the permutation pattern poset, we do not expect the proportion of permutations $\pi$ of length $n$ with $\mobp{\pi} = 0$ to approach 1. Smith generalised pattern containment in~\cite{Smith2019}, and found some explicit formulae for the M\"{o}bius function. These formulae have the general form \[ \mobfn{\sigma}{\pi} = (-1)^{\order{\pi} - \order{\sigma}} E(\sigma, \pi) + \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \mu[ \hat{P}(\lambda, \pi) ], \] where $E(\sigma, \pi)$ counts specific types of embeddings of $\sigma$ into $\pi$, and $\hat{P}(\lambda, \pi)$ is a poset derived from the interval $[\lambda, \pi]$. Although this last example does give a complete characterisation of the M\"{o}bius function on all intervals of the poset, from a computational perspective the result is only useful when we can show that the second term is zero. This result is a generalisation of some of the results given by Smith in~\cite{Smith2016a}, which we discuss in the following section.
\section{The M\"{o}bius function of the permutation poset under pattern containment}
\label{section_background_permutation_poset_and_mobius}
The study of the M\"{o}bius function in the (classic) permutation poset was introduced by Wilf~\cite{Wilf2002}, who wrote \begin{quote} We can partially order the set of all permutations of all numbers of letters by declaring that $\sigma \leq \tau$ if $\sigma$ is contained as a pattern in $\tau$. It would be interesting to study this as a poset. For example, what can be said about its M\"{o}bius function? \end{quote} The first result in this area was by Sagan and Vatter~\cite{Sagan2006}. Their paper primarily concerns itself with the M\"{o}bius function of a composition poset. An integer composition can be thought of as an ordered list of positive integers, and a layered permutation can be completely specified by such a list, so there is a bijection between integer compositions and layered permutations. From this it follows that there is a bijection between the poset of compositions of integers and the poset of layered permutations.
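For example (our illustration), the composition $(2, 1, 3)$ of $6$ corresponds to the layered permutation $21 \oplus 1 \oplus 321 = 213654$, whose descending layers have sizes $2$, $1$ and $3$.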
In the final section of their paper, they use this bijection to essentially give an expression for the M\"{o}bius function on intervals in the poset of layered permutations under classic pattern containment. Steingr\'{\i}msson and Tenner~\cite{Steingrimsson2010} found a large class of pairs of permutations $(\sigma, \pi)$ where $\mobfn{\sigma}{\pi} = 0$. They show that the (poset) interval $[\sigma, \pi]$ has $\mobfn{\sigma}{\pi} = 0$ if $\pi$ contains a non-trivial interval where none of the elements of the (permutation) interval are part of an embedding of $\sigma$ into $\pi$. They also show that if there is exactly one embedding of $\sigma$ into $\pi$, and the complement of the embedding satisfies certain conditions, then $\mobfn{\sigma}{\pi} \in \{0, \pm 1\}$. In a seminal paper, Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011} found a recursion for the M\"{o}bius function for sum/skew decomposable permutations in terms of the sum/skew indecomposable permutations in the lower and upper bounds. They also found a method to determine the M\"{o}bius function for separable permutations by counting embeddings. The recursions for decomposable permutations are used extensively in this thesis. McNamara and Steingr{\'{i}}msson~\cite{McNamara2015} investigated the topology of intervals in the permutation poset, and in doing so found a single recurrence equivalent to the recursions in~\cite{Burstein2011}. We now compare the recursions from Burstein et al~\cite{Burstein2011} with those from McNamara and Steingr{\'{i}}msson~\cite{McNamara2015}. The recursions from Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011} can be written as follows. \begin{quotation} \begin{proposition}[{% McNamara and Steingr{\'{i}}msson \cite[Proposition 8.3]{McNamara2015}, following Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson \cite[Proposition 1]{Burstein2011}% }] \label{BJJS-proposition-1-as-in-mcnamara} Let $\sigma$ and $\pi$ be non-empty permutations with finest decompositions $\sigma= \sigma_1 \oplus \ldots \oplus \sigma_s$ and $\pi = \pi_1 \oplus \ldots \oplus \pi_t$, where $t \geq 2$. Suppose that $\pi_1 = 1$. Let $k \geq 1$ be the largest integer such that all the components $\pi_1, \ldots, \pi_k$ are equal to 1, and let $\ell \geq 0$ be the largest integer such that all the components $\sigma_1, \ldots, \sigma_\ell$ are equal to 1. Then \[ \mobfn{\sigma}{\pi} = \begin{cases} 0 & \text{if $k-1 > \ell$,} \\ -\mobfn{\sigma_{> k-1}}{\pi_{> k}} & \text{if $k-1 = \ell$,} \\ \mobfn{\sigma_{> k}}{\pi_{> k}} - \mobfn{\sigma_{> k-1}}{\pi_{> k}} & \text{if $k-1 < \ell$.} \end{cases} \] \end{proposition} The remaining case is $\pi_1 > 1$, and is covered by the next proposition. \begin{proposition}[{% McNamara and Steingr{\'{i}}msson \cite[Proposition 8.4]{McNamara2015}, following Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson \cite[Proposition 2]{Burstein2011}% }] Let $\sigma$ and $\pi$ be non-empty permutations with finest decompositions $\sigma= \sigma_1 \oplus \ldots \oplus \sigma_s$ and $\pi = \pi_1 \oplus \ldots \oplus \pi_t$, where $t \geq 2$. Suppose that $\pi_1 > 1$. Let $k \geq 1$ be the largest integer such that all the $\pi_1, \ldots, \pi_k$ are equal to $\pi_1$. Then \[ \mobfn{\sigma}{\pi} = \sum_{i=1}^s \sum_{j=1}^k \mobfn{\sigma_{\leq i}}{\pi_{1}} \mobfn{\sigma_{> i}}{\pi_{> j}}. 
\] \end{proposition} \end{quotation} The recursion found by McNamara and Steingr{\'{i}}msson can be written as follows. \begin{quotation} \begin{proposition}[{% McNamara and Steingr{\'{i}}msson \cite[Proposition 8.1]{McNamara2015}% }] Consider permutations $\sigma$ and $\pi$ and let $\pi = \pi_1 \oplus \ldots \oplus \pi_t$ be the finest decomposition of $\pi$. Then \[ \mobfn{\sigma}{\pi} = \sum_{\sigma = \varsigma_1 \oplus \ldots \oplus \varsigma_t} \prod_{1 \leq m \leq t} \begin{cases} \mobfn{\varsigma_m}{\pi_m} + 1 & \text{If $\varsigma_m = \epsilon$ and $\pi_{m-1} = \pi_m$,} \\ \mobfn{\varsigma_m}{\pi_m} & \text{otherwise,} \end{cases} \] where the sum is over all direct sums $\sigma = \varsigma_1 \oplus \ldots \oplus \varsigma_t$, such that $\epsilon \leq \varsigma_m \leq \pi_m$ for all $1 \leq m \leq t$. \end{proposition} \end{quotation} We remark here that the recursion from McNamara and Steingr{\'{i}}msson is, in some sense, a nicer recursion than that found by Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson. Despite this, in this thesis we use the Burstein et al recursions as these are easier to work with in the context of our results. Smith~\cite{Smith2013} found an explicit formula for the M\"{o}bius function on the interval $[1, \pi]$ for all permutations $\pi$ with a single descent. Smith~\cite{Smith2016} has explicit expressions for the M\"{o}bius function $\mobfn{\sigma}{\pi}$ when $\sigma$ and $\pi$ have the same number of descents. In~\cite{Smith2016a}, Smith found an expression that determines the M\"{o}bius function for all intervals in the poset. The main result is \begin{align} \label{equation_smith_all_intervals} \mobfn{\sigma}{\pi} &= (-1)^{\order{\pi} - \order{\sigma}} \NE(\sigma, \pi) + \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \sum_{S \in EZ^{\lambda,\pi}} (-1)^{\order{S}} \end{align} where $\NE(\sigma, \pi)$ is the number of normal embeddings of $\sigma$ into $\pi$, and $EZ^{\lambda,\pi}$ is a set of sets of embeddings of $\lambda$ into $\pi$ that satisfy a particular condition. One view of this result is that it tells us that the value of the M\"{o}bius function on an interval $[\sigma, \pi]$ is given by the number of normal embeddings of $\sigma$ into $\pi$, plus a correction factor. Smith notes~\cite[Remark 22]{Smith2016a} that 95\% of intervals with $\order{\pi} \leq 8$ have $\mobfn{\sigma}{\pi} = (-1)^{\order{\pi} - \order{\sigma}} \NE(\sigma, \pi)$, so in these cases the correction factor is zero. Smith remarks~\cite[Remark 23]{Smith2016a} that where we can show that \[ \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \sum_{S \in EZ^{\lambda,\pi}} (-1)^{\order{S}} = 0, \] the normal embedding approach can determine the value of the M\"{o}bius function in polynomial time, whereas using the recursive formula of Equation~\ref{equation_mobius_function} has exponential complexity. One approach to showing that $ \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \sum_{S \in EZ^{\lambda,\pi}} (-1)^{\order{S}} = 0 $ would be to find permutations $\sigma$ and $\pi$ such that for any $\lambda \in [\sigma, \pi)$, $EZ^{\lambda,\pi}$ is empty, as this would force the second term to be zero.
Some small-scale experiments by the author suggest that it is significantly more likely that some of the sets $EZ^{\lambda,\pi}$ are non-empty, and therefore when $ \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \sum_{S \in EZ^{\lambda,\pi}} (-1)^{\order{S}} = 0 $ it is likely to be because, taken across every $\lambda \in [\sigma, \pi)$, the number of sets in $EZ^{\lambda,\pi}$ with even order is the same as the number of sets with odd order. Brignall and Marchant~\cite{Brignall2017a} showed that if the lower bound of an interval is indecomposable, then the M\"{o}bius function depends only on the indecomposable permutations contained in the upper bound. They then used this result to find a fast polynomial algorithm for computing $\mobp{\pi}$ where $\pi$ is an increasing oscillation. This paper forms the basis of Chapter~\ref{chapter_incosc_paper} of this thesis. Brignall, Jel{\'{i}}nek, Kyn{\v{c}}l and Marchant~\cite{Brignall2020} prove that if a permutation $\pi$ contains opposing adjacencies, then $\mobp{\pi} = 0$. They then use this to show that the proportion of permutations of length $n$ with principal M\"{o}bius function equal to zero is asymptotically bounded below by $(1-1/e)^2 \ge 0.3995$. This paper forms the basis of Chapter~\ref{chapter_oppadj_paper} of this thesis. Jel{\'{i}}nek, Kantor, Kyn{\v{c}}l and Tancer~\cite{Jelinek2020} show how to construct a sequence of permutations $\pi_n$ with length $2n + 2$, and they show that for $n \geq 2$, \[ \mobp{\pi_n} = -\binom{n+2}{7} -\binom{n+1}{7} + 2\binom{n+2}{5} -\binom{n+2}{3} -\binom{n}{2} -2n, \] and thus demonstrate that the absolute value of the M\"{o}bius function grows according to the seventh power of the length. In their paper they also show that if $f:[\sigma, \pi] \to \mathbb{R}$ is any function satisfying $f(\pi) = 1$, then \[ \mobfn{\sigma}{\pi} = f(\sigma) - \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \sum_{\tau \in [\lambda, \pi]} f(\tau) \] and as a corollary, they then show that \[ \mobfn{\sigma}{\pi} = (-1)^{\order{\pi} - \order{\sigma}} \E(\sigma, \pi) - \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} \sum_{\tau \in [\lambda, \pi]} (-1)^{\order{\pi} - \order{\tau}} \E(\tau, \pi), \] where $\E(\alpha, \beta)$ is the number of embeddings of $\alpha$ into $\beta$. We note that the formula in the corollary has a similar structure to Equation~\ref{equation_smith_all_intervals} described above, although in general it suffers from the same restrictions as Smith's equation. Finally, Marchant~\cite{Marchant2020} showed how to construct a sequence of permutations $\pi_1$, $\pi_2$, $\pi_3, \ldots$ with lengths $n, n+4, n+8, \ldots$ such that $\mobfn{1}{\pi_{i+1}} = 2 \mobfn{1}{\pi_{i}}$, and this gives us that the growth of the principal M\"{o}bius function on the permutation poset is exponential. This paper forms the basis of Chapter~\ref{chapter_2413_balloon_paper} of this thesis.
\section{Motivation}
It seems reasonably clear that the M\"{o}bius function of the permutation poset under classic pattern containment is a non-trivial problem. This contrasts with some of the posets described in Section~\ref{section_background_mobius_various_posets}, where the M\"{o}bius function is completely determined. As we have described, the study of the permutation poset under classic pattern containment was initiated by Wilf in 2002 in~\cite{Wilf2002}. Anecdotally, Wilf is believed to have later said that the M\"{o}bius function on the permutation pattern poset was \textit{``A mess. Don't touch it''}.
We state here that we think that Wilf's reported view is somewhat pessimistic. While we think that it is unlikely that there is a polynomial-time procedure for computing the value of the M\"{o}bius function on an arbitrary interval of the permutation poset, we believe, and we hope to show in this thesis, that there is considerable scope for further research in this area. The permutation pattern poset is the subject of considerable research activity outside of the M\"{o}bius function, and we claim that the permutation pattern poset is the underlying object for many studies related to patterns in permutations. This then means that research into the M\"{o}bius function on the permutation pattern poset may lead to a better understanding of this poset, and hence to results in other areas related to permutation patterns. We can summarise our motivation for research in this area by saying that the M\"{o}bius function of the permutation poset under classic pattern containment is not well-understood, and indeed up until recently the proportion of permutations where we had a (computationally) simple way to determine the value of the principal M\"{o}bius function was, asymptotically, zero. The paper which forms the basis of Chapter~\ref{chapter_oppadj_paper} shows that, asymptotically, the proportion of permutations where the principal M\"{o}bius function is zero is at least 0.3995. The corollary to this result, however, is that we do not yet have an effective means to compute the principal M\"{o}bius function for around 60\% of all permutations. Further, research into the M\"{o}bius function on the permutation pattern poset may lead to a better understanding of the intrinsic properties of the poset, which in turn may lead to results in related areas.
\chapter{2413-balloons and the growth of the M\"{o}bius function}
\label{chapter_2413_balloon_paper}
\section{Preamble}
This chapter is based on a published paper~\cite{Marchant2020}, which is the sole work of the author. In this chapter we show that the growth of the principal M\"{o}bius function on the permutation poset is exponential. This improves on previous work, which has shown that the growth is at least polynomial. We define a method of constructing a permutation from a smaller permutation, which we call ``ballooning''. We show that if $\beta$ is a 2413-balloon, and $\pi$ is the 2413-balloon of $\beta$, then $\mobfn{1}{\pi} = 2 \mobfn{1}{\beta}$. This allows us to construct a sequence of permutations $\pi_1, \pi_2, \pi_3, \ldots$, with lengths $n, n+4, n+8, \ldots$, such that $\mobfn{1}{\pi_{i+1}} = 2 \mobfn{1}{\pi_{i}}$, and this gives us exponential growth of the principal M\"{o}bius function. Further, our construction method gives permutations that lie within a hereditary class with finitely many simple permutations. We also find an expression for the value of $\mobfn{1}{\pi}$, where $\pi$ is a 2413-balloon, with no restriction on the permutation being ballooned.
\section{Introduction}
In the concluding remarks to their seminal paper, Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011} ask whether the principal M\"{o}bius function is unbounded, which is the first reference to the growth of the M\"{o}bius function in the literature. They show that if $\pi$ is a separable permutation, then $\mobp{\pi} \in \{0, \pm 1 \}$, and thus is bounded. The separable permutations lie in a hereditary class which only contains the simple permutations $1$, $12$ and $21$. They ask (Question 27) for which classes $\mobp{\pi}$ is bounded.
Smith~\cite{Smith2013} found an explicit case-wise formula for the principal M\"{o}bius function for all permutations with a single descent. For certain sets of permutations with a single descent, the associated formula is, up to a sign, \[ \mobp{\pi} = \binom{k}{2}, \] where $k$ is a linear expression in the length of $\pi$. This shows that the growth of the M\"{o}bius function is at least quadratic. Jel{\'{i}}nek, Kantor, Kyn{\v{c}}l and Tancer~\cite{Jelinek2020} show how to construct a sequence of permutations where the absolute value of the M\"{o}bius function grows according to the seventh power of the length. We show that, given some permutation $\beta$, we can construct a permutation that we call the ``2413-balloon'' of $\beta$. This permutation will have four more points than $\beta$. We then show that if $\pi$ is a 2413-balloon of $\beta$, and $\beta$ is itself a 2413-balloon, then $\mobp{\pi} = 2 \mobp{\beta}$. From this we deduce that the growth of the principal M\"{o}bius function is exponential. If $\beta = 25314$ (which is a 2413-balloon), then we can construct a hereditary class that contains only the simple permutations $\{ 1,12,21,2413,25314\}$, where the growth of the principal M\"{o}bius function is exponential, answering questions in Burstein et al~\cite{Burstein2011} and Jel{\'{i}}nek et al~\cite{Jelinek2020}. We start by recalling some essential definitions and notation in Section~\ref{section-definitions-and-notation}, where we also provide some extensions of existing results. We formally define a 2413-balloon in Section~\ref{section-define-2413-balloon}, and we provide some results which will be used in the remainder of this chapter. In Section~\ref{section-2413-double-balloons}, we derive an expression for the value of $\mobp{\pi}$ when $\pi$ is a double 2413-balloon, and following this we show that the growth of the M\"{o}bius function is exponential in Section~\ref{section-growth-rate-of-mu}. We return to the topic of 2413-balloons in Section~\ref{section-2413-balloons}, and derive an expression for the value of $\mobp{\pi}$ when $\pi$ is any 2413-balloon. Finally, we discuss the generalization of the balloon operator in Section~\ref{section-concluding-remarks}. We also ask some questions regarding the growth of the M\"{o}bius function. \section{Essential definitions, notation, and results} \label{section-definitions-and-notation} In this section we recall some standard definitions and notation that we will use, and add some simple definitions and consequences of known results. If $\pi$ is a permutation with length $n$, then the number of \emph{corners}\extindex[permutation]{corners} of $\pi$ is the number of points of $\pi$ that are extremal in both position and value, that is, $\pi_1 \in \{1, n\}$ or $\pi_n \in \{1, n\}$. It is easy to see that any permutation with length 2 or more can have at most two corners. We adopt the convention that the permutation $1$ has one corner. Recall that if a permutation $\pi$ can be written as $\oneplus\oneplus\tau$, $\oneminus\oneminus\tau$, $\tau\oplus 1\plusone$, or $\tau\ominus 1\minusone$, where $\tau$ is non-empty (so $\order{\pi} \geq 3$), then we say that $\pi$ has a \emph{long corner}. We now have \begin{lemma} \label{lemma-oneplus-oneplus} If $\pi$ has a long corner, then $\mobp{\pi} = 0$. 
\end{lemma} \begin{lemma} \label{lemma-oneplus} If $\pi$ can be written as $\pi = \oneplus\tau$, $\pi = \tau\oplus 1$, $\pi = \oneminus\tau$, or $\pi = \tau\ominus 1$, and does not have a long corner, then $\mobp{\pi} = - \mobp{\tau}$. \end{lemma} These are well-known consequences of Propositions~1~and~2 of Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}, and we refrain from providing proofs here. The reader is directed to Lemma~\ref{lemma_mobius_function_is_zero} on page~\pageref{lemma_mobius_function_is_zero} in Chapter~\ref{chapter_incosc_paper} for a proof of Lemma~\ref{lemma-oneplus-oneplus}. Lemma~\ref{lemma-oneplus} is a trivial extension of Corollary 3 in~\cite{Burstein2011}. Recall that a triple adjacency is a monotonic interval of length 3. Smith shows that \begin{lemma}[{% Smith~\cite[Lemma 1]{Smith2013}}] \label{lemma-triple-adjacencies} If a permutation $\pi$ contains a triple adjacency, then $\mobp{\pi} = 0$. \end{lemma} A trivial corollary to Lemma~\ref{lemma-triple-adjacencies} is \begin{corollary} \label{corollary-monotonic-interval} If a permutation $\pi$ contains a monotonic interval of length 3 or more, then $\mobp{\pi} = 0$. \end{corollary} Recall that Hall's Theorem~\cite[Proposition 3.8.5]{Stanley2012} says that \[ \mobfn{\sigma}{\pi} = \sum_{c \in \chain{C}(\sigma, \pi)} (-1)^{\order{c}} = \sum_{i=1}^{\order{\pi} - 1} (-1)^i K_i \] where $\chain{C}(\sigma, \pi)$ is the set of chains in the poset interval $[\sigma, \pi]$ which contain both $\sigma$ and $\pi$, and $K_i$ is the number of chains of length $i$. We can also use Hall's Theorem if we have a subset of chains that meets a specific criterion: \begin{lemma} \label{lemma-hall-sum-second-element-psi} Let $\pi$ be any permutation with length three or more. Let $\psi$ be a permutation with $1 < \psi < \pi$. Let $\chain{C}$ be the subset of chains in the poset interval $[1, \pi]$ where the second-highest element is $\psi$. Then \[\sum\limits_{c \in \chain{C}} (-1)^{\order{c}} = - \mobp{\psi}.\] \end{lemma} \begin{proof} If we remove $\pi$ from the chains in $\chain{C}$, then we have all of the chains in the poset interval $[1, \psi]$, and the Hall sum of these chains is, by definition, $\mobp{\psi}$. It follows that the Hall sum of the chains in $\chain{C}$ is $ - \mobp{\psi}$. \end{proof} \begin{corollary} \label{corollary-hall-sum-second-highest-set} Let $\pi$ be a permutation, and let $S$ be a set of permutations where every $\sigma \in S$ satisfies $1 < \sigma < \pi$. If $\chain{C}$ is the set of chains in the poset interval $[1, \pi]$ where the second-highest element is in $S$, then the Hall sum of $\chain{C}$ is $- \sum_{\sigma \in S} \mobp{\sigma}$. \end{corollary} \begin{proof} First, partition $\chain{C}$ based on the second-highest element, and then apply Lemma~\ref{lemma-hall-sum-second-element-psi} to each part. \end{proof}
\section{2413-Balloons}
\label{section-define-2413-balloon}
In this section we define the vocabulary and notation specific to this chapter. We also present some general results which will be used in later sections. Given a non-empty permutation $\beta$, the \emph{2413-balloon}\extindex{2413-balloon} of $\beta$ is the permutation formed by inserting $\beta$ into the centre of $2413$, which we write as $\ball{2413}{\beta}$.
Formally, we have \begin{align*} (\ball{2413}{\beta})_i & = \begin{cases} 2 & \text{if $i = 1$}\\ \order{\beta} + 4 & \text{if $i = 2$}\\ \beta_{i-2} + 2 & \text{if $i > 2$ and $i \leq \order{\beta} + 2$ }\\ 1 & \text{if $i = \order{\beta} + 3$}\\ \order{\beta} + 3 & \text{if $i = \order{\beta} + 4$}\\ \end{cases} \end{align*} Figure~\ref{figure-2413-balloons-a} shows $\ball{2413}{\beta}$. \begin{figure} \begin{center} \begin{subfigure}[t]{0.35\textwidth} \begin{center} \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,2,5,6,7}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {7.5}); }; \foreach \i in {0,1,2,5,6,7}{ \draw [color=lightgray] (0.5, {\i+0.5})--({7.5}, {\i+0.5}); }; \normaldot{(1,2)}; \normaldot{(2,7)}; \scell{4}{4}{$\beta$}; \normaldot{(6,1)}; \normaldot{(7,6)}; \end{tikzpicture} \end{center} \caption{} \label{figure-2413-balloons-a} \end{subfigure} \begin{subfigure}[t]{0.55\textwidth} \begin{center} \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,2,3,4,7,8,9,10,11}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {11.5}); }; \foreach \i in {0,1,2,3,4,7,8,9,10,11}{ \draw [color=lightgray] (0.5, {\i+0.5})--({11.5}, {\i+0.5}); }; \normaldot{(1,2)}; \normaldot{(2,11)}; \normaldot{(3,4)}; \normaldot{(4,9)}; \scell{6}{6}{$\gamma$}; \normaldot{(8,3)}; \normaldot{(9,8)}; \normaldot{(10,1)}; \normaldot{(11,10)}; \end{tikzpicture} \end{center} \caption{} \label{figure-2413-balloons-b} \end{subfigure} \end{center} \caption{ (a) The 2413-balloon $\ball{2413}{\beta}$ and (b) the double 2413-balloon $\ball{2413}{\ball{2413}{\gamma}}$.} \label{figure-2413-balloons} \end{figure} Throughout this chapter we will be discussing permutations that contain an interval copy of a smaller permutation. Examples of this smaller permutation are $\beta$ in $\ball{2413}{\beta}$, and $\gamma$ in $\ball{2413}{\ball{2413}{\gamma}}$, as shown in Figure~\ref{figure-2413-balloons}. In figures where this is the case, the permutation plot scale will be non-linear so that the cell containing the interval copy ($\beta$ and $\gamma$ in our examples) is larger than the other cells. The balloon operation as defined has to be right-associative and the definition given does not support overriding right-associativity. In other words, $\ball{2413}{\ball{2413}{\beta}}$ must be $\ball{2413}{(\ball{2413}{\beta})}$, and $\ball{(\ball{2413}{2413})}{\beta}$ is not defined. In Section~\ref{section-concluding-remarks} we suggest how the balloon operation could be generalized. Given some $\pi = \ball{2413}{\beta}$, if $\beta$ is itself a 2413-balloon, so $\pi = \ball{2413}{\ball{2413}{\gamma}}$, then we say that $\pi$ is a \emph{double 2413-balloon}\extindex{double 2413-balloon}. Figure~\ref{figure-2413-balloons-b} shows a double 2413-balloon. \begin{remark} We can write $\ball{2413}{\beta}$ as the inflation $25314 [1,1,\beta,1,1]$. We refer the reader to Albert and Atkinson~\cite{Albert2005} for further details of inflations. In this chapter we use balloon notation, as we feel that this leads to a simpler exposition. \end{remark} If we have $\pi = \ball{2413}{\beta}$, and we have some $\sigma$ with $\beta \leq \sigma < \pi$, we will frequently want to represent $\sigma$ in terms of sub-permutations of $2413$ and the permutation $\beta$. We start by colouring the extremal points of $\pi$ red, and all remaining points black. Note that the red points are a 2413 permutation, and the black points are $\beta$. Now consider a specific embedding of $\sigma$ into $\pi$, where we use all of the black points ($\beta$). 
If the embedding is monochromatic ($\sigma = \beta$) then we require no special notation. If the embedding is not monochromatic, then it must be the case that only some of the red points are used. We take $2413$, and mark the red points that are unused with an overline, and then write $\sigma$ using our balloon notation. As an example of this, if $\pi = \ball{2413}{21} = 264315$, and $\sigma = 213$, then we could represent $\sigma$ as $\ball{\ex{2}\ex{4}\ex{1}3}{21}$. This example is shown in Figure~\ref{figure-overbar-notation}. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.3] \plotpermgrid{2,6,4,3,1,5} \opendot{(1,2)}; \opendot{(2,6)}; \opendot{(5,1)}; \end{tikzpicture} \end{center} \caption{An embedding of $213 = \ball{\ex{2}\ex{4}\ex{1}3}{21}$ in $264315 = \ball{2413}{21}$.} \label{figure-overbar-notation} \end{figure} We can see that if $\beta \leq \sigma < \ball{2413}{\beta}$, and $\beta$ is not monotonic (i.e., not the identity permutation or its reverse), then there is a unique way to represent $\sigma$ using this notation. If we have $\pi = \ball{2413}{\beta}$, and $\sigma$ is a permutation such that $\beta \leq \sigma < \pi$, then we say that $\sigma$ is a \emph{reduction}\extindex{reduction (of a 2413 balloon)} of $\pi$. If $\sigma$ is a reduction of $\pi = \ball{2413}{\beta}$, and there is no $\eta$ with $\order{\eta} <\order{\beta}$ such that either $\sigma$ is equal to $\ball{2413}{\eta}$, or $\sigma$ is a reduction of $\ball{2413}{\eta}$, then we say that $\sigma$ is a \emph{proper reduction}\extindof{2413-balloon}{proper reduction}{(of a 2413-balloon)} of $\pi$. A reduction of $\pi$ that is not a proper reduction is an \emph{improper reduction}\extindex{improper reduction (of a 2413 balloon)}. The following case-by-case analysis shows the improper reductions (of $\pi$) based on the form of $\beta$. \begin{itemize} \item If $\beta$ is a 2413-balloon, then $\beta$ is the only improper reduction of $\pi$. \item If $\beta$ is not a 2413-balloon, and $\beta$ has no corners, then there are no improper reductions of $\pi$. \item If $\beta$ has one corner, then there are four improper reductions of $\pi$. As an example, if $\beta = 1 \oplus \gamma$, then the improper reductions of $\pi$ are $\ball{\ex{2}\ex{4}13}{\beta}$, % $\ball{\ex{2}\ex{4}1\ex{3}}{\beta}$, $\ball{\ex{2}\ex{4}\ex{1}3}{\beta}$, and $\beta$. \item If $\beta$ has two corners, then there are seven improper reductions of $\pi$. As an example, if $\beta = 1 \oplus \gamma \oplus 1$, then the improper reductions are $\ball{\ex{2}\ex{4}13}{\beta}$, $\ball{24\ex{1}\ex{3}}{\beta}$, % $\ball{2\ex{4}\ex{1}\ex{3}}{\beta}$, $\ball{\ex{2}4\ex{1}\ex{3}}{\beta}$, $\ball{\ex{2}\ex{4}1\ex{3}}{\beta}$, $\ball{\ex{2}\ex{4}\ex{1}3}{\beta}$, and $\beta$. \end{itemize} The set of permutations that are proper reductions of $\pi$ is written as $\redx{\pi}$. Figure~\ref{figure-2413-reductions} shows all the reductions (proper and improper) of $\pi = \ball{2413}{\beta}$. 
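As a concrete companion to the formal definition given at the start of this section, the following short Python sketch (ours; the published paper contains no code) constructs $\ball{2413}{\beta}$ directly from the case definition, and checks it against the examples $\ball{2413}{21} = 264315$ and $\ball{2413}{1} = 25314$ that appear in this chapter.
\begin{verbatim}
# A sketch (ours) of the 2413-balloon construction, following the case
# definition given at the start of this section.
def balloon_2413(beta):
    """Return the 2413-balloon of beta, given as a list, e.g. [2, 1]."""
    n = len(beta)
    return ([2, n + 4]                # the '2' and '4' of 2413
            + [b + 2 for b in beta]   # beta, shifted into the centre
            + [1, n + 3])             # the '1' and '3' of 2413

assert balloon_2413([2, 1]) == [2, 6, 4, 3, 1, 5]   # 264315, as above
assert balloon_2413([1]) == [2, 5, 3, 1, 4]         # 25314
\end{verbatim}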
\begin{figure} \begin{center} \begin{subfigure}[t]{0.2\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4,5,6}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {6.5}); }; \foreach \i in {0,1,4,5,6}{ \draw [color=lightgray] (0.5, {\i+0.5})--({6.5}, {\i+0.5}); }; \normaldot{(1,6)}; \scell{3}{3}{$\beta$}; \normaldot{(5,1)}; \normaldot{(6,5)}; \end{tikzpicture} \caption*{$\ball{\ex{2}413}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4,5,6}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {6.5}); }; \foreach \i in {0,1,2,5,6}{ \draw [color=lightgray] (0.5, {\i+0.5})--({6.5}, {\i+0.5}); }; \normaldot{(1,2)}; \scell{3}{4}{$\beta$}; \normaldot{(5,1)}; \normaldot{(6,6)}; \end{tikzpicture} \caption*{$\ball{2\ex{4}13}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,2,5,6}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {6.5}); }; \foreach \i in {0,1,4,5,6}{ \draw [color=lightgray] (0.5, {\i+0.5})--({6.5}, {\i+0.5}); }; \normaldot{(1,1)}; \normaldot{(2,6)}; \scell{4}{3}{$\beta$}; \normaldot{(6,5)}; \end{tikzpicture} \caption*{$\ball{24\ex{1}3}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,2,5,6}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {6.5}); }; \foreach \i in {0,1,2,5,6}{ \draw [color=lightgray] (0.5, {\i+0.5})--({6.5}, {\i+0.5}); }; \normaldot{(1,2)}; \normaldot{(2,6)}; \scell{4}{4}{$\beta$}; \normaldot{(6,1)}; \end{tikzpicture} \caption*{$\ball{241\ex{3}}{\beta}$} \end{subfigure} \vspace{1\baselineskip} \end{center} \begin{center} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,3,4,5}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {5.5}); }; \foreach \i in {0,1,4,5}{ \draw [color=lightgray] (0.5, {\i+0.5})--({5.5}, {\i+0.5}); }; \scell{2}{3}{$\beta$}; \normaldot{(4,1)}; \normaldot{(5,5)}; \end{tikzpicture} \caption*{$\ball{\ex{2}\ex{4}13}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4,5}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {5.5}); }; \foreach \i in {0,3,4,5}{ \draw [color=lightgray] (0.5, {\i+0.5})--({5.5}, {\i+0.5}); }; \scell{3}{2}{$\beta$}; \normaldot{(1,5)}; \normaldot{(5,4)}; \hiddendot{(1,1)}; \end{tikzpicture} \caption*{$\ball{\ex{2}4\ex{1}3}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4,5}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {5.5}); }; \foreach \i in {0,1,4,5}{ \draw [color=lightgray] (0.5, {\i+0.5})--({5.5}, {\i+0.5}); }; \normaldot{(1,5)}; \scell{3}{3}{$\beta$}; \normaldot{(5,1)}; \end{tikzpicture} \caption*{$\ball{\ex{2}41\ex{3}}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4,5}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {5.5}); }; \foreach \i in {0,1,4,5}{ \draw [color=lightgray] (0.5, {\i+0.5})--({5.5}, {\i+0.5}); }; \normaldot{(1,1)}; \scell{3}{3}{$\beta$}; \normaldot{(5,5)}; \end{tikzpicture} \caption*{$\ball{2\ex{4}\ex{1}3}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4,5}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {5.5}); }; \foreach \i in {0,1,2,5}{ \draw [color=lightgray] (0.5, 
{\i+0.5})--({5.5}, {\i+0.5}); }; \normaldot{(1,2)}; \scell{3}{4}{$\beta$}; \normaldot{(5,1)}; \end{tikzpicture} \caption*{$\ball{2\ex{4}1\ex{3}}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,2,5}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {5.5}); }; \foreach \i in {0,1,4,5}{ \draw [color=lightgray] (0.5, {\i+0.5})--({5.5}, {\i+0.5}); }; \normaldot{(1,1)}; \normaldot{(2,5)}; \scell{4}{3}{$\beta$}; \end{tikzpicture} \caption*{$\ball{24\ex{1}\ex{3}}{\beta}$} \end{subfigure} \vspace{1\baselineskip} \end{center} \begin{center} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {4.5}); }; \foreach \i in {0,1,4}{ \draw [color=lightgray] (0.5, {\i+0.5})--({4.5}, {\i+0.5}); }; \normaldot{(1,1)}; \scell{3}{3}{$\beta$}; \end{tikzpicture} \caption*{$\ball{2\ex{4}\ex{1}\ex{3}}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,1,4}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {4.5}); }; \foreach \i in {0,3,4}{ \draw [color=lightgray] (0.5, {\i+0.5})--({4.5}, {\i+0.5}); }; \normaldot{(1,4)}; \scell{3}{2}{$\beta$}; \hiddendot{(1,1)}; \end{tikzpicture} \caption*{$\ball{\ex{2}4\ex{1}\ex{3}}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,3,4}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {4.5}); }; \foreach \i in {0,1,4}{ \draw [color=lightgray] (0.5, {\i+0.5})--({4.5}, {\i+0.5}); }; \scell{2}{3}{$\beta$}; \normaldot{(4,1)}; \end{tikzpicture} \caption*{$\ball{\ex{2}\ex{4}1\ex{3}}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,3,4}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {4.5}); }; \foreach \i in {0,3,4}{ \draw [color=lightgray] (0.5, {\i+0.5})--({4.5}, {\i+0.5}); }; \scell{2}{2}{$\beta$}; \normaldot{(4,4)}; \hiddendot{(1,1)}; \end{tikzpicture} \caption*{$\ball{\ex{2}\ex{4}\ex{1}3}{\beta}$} \end{subfigure} \begin{subfigure}[t]{0.15\textwidth} \centering \begin{tikzpicture}[scale=0.3] \foreach \i in {0,3}{ \draw [color=lightgray] ({\i+0.5}, 0.5)--({\i+0.5}, {3.5}); }; \foreach \i in {0,3}{ \draw [color=lightgray] (0.5, {\i+0.5})--({3.5}, {\i+0.5}); }; \scell{2}{2}{$\beta$}; \hiddendot{(1,1)}; \end{tikzpicture} \caption*{$\beta$} \end{subfigure} \end{center} \caption{Reductions of $\pi = \ball{2413}{\beta}$. Some may not be proper reductions, depending on $\beta$.} \label{figure-2413-reductions} \end{figure} The strategy that we will use in Sections~\ref{section-2413-double-balloons} and~\ref{section-2413-balloons} is to partition the chains in the poset interval $[1,\pi]$ into three sets, $\chain{R}$, $\chain{G}$, and $\chain{B}$. We then show that there are parity-reversing involutions on the sets $\chain{G}$ and $\chain{B}$, and therefore, by Corollary~\ref{corollary-halls-corollary}, the Hall sum for each of these sets is zero, and so $\mobp{\pi}$ is given by the Hall sum of the set $\chain{R}$. Finally, we show that the Hall sum of $\chain{R}$ can be written in terms of $\mobp{\beta}$. The chains in $\chain{R}$ are those chains where the second-highest element is a proper reduction of $\pi$, so if $\kappa_c$ is the second-highest element of a chain $c$, then $c \in \chain{R}$ if and only if $\kappa_c \in \redx{\pi}$. 
Note that, as mentioned earlier, the members of $\redx{\pi}$, and hence the chains in $\chain{R}$, depend on the form of $\pi$. It is easy to see that for any permutation $\sigma \in \redx{\pi}$ we must have $\order{\sigma} \geq \order{\beta}$. We have some results that are independent of $\redx{\pi}$, and, once we have given some further definitions, we present these in the current section to avoid repetition. Let $\pi$ be a 2413-balloon, and let $c$ be any chain in the poset interval $[1, \pi]$. Since the top of the chain is, by definition, a 2413-balloon, it follows that $c$ has a unique maximal segment that includes the element $\pi$, where every element in the segment is a 2413-balloon. We call the smallest element in this segment the \emph{least 2413-balloon}\extindex{least 2413-balloon}\footnote{The name should really be ``least 2413-balloon in the chain that has only 2413-balloons above it''.}. Further, since the permutation 1 is not a 2413-balloon, it follows that $c$ has an element that is immediately below the least 2413-balloon in the chain, and we call this element the \emph{pivot}\extindof{2413-balloon}{pivot}{(in a 2413-balloon)}. We define $\phi_c$ to be the least 2413-balloon in $c$, $\psi_c$ to be the pivot in $c$, $\tau_c$ to be the permutation that satisfies $\ball{2413}{\tau_c} = \phi_c$, and $\kappa_c$ to be the second-highest element of $c$. Note that $\phi_c$ and $\psi_c$ must be distinct, but we can have $\tau_c = \psi_c$. Further, $\kappa_c$ is independent, and may be the same as $\phi_c$, $\psi_c$ or $\tau_c$. Figure~\ref{figure-example-chains} shows some example chains, highlighting these elements. \begin{figure} \begin{center} \begin{tikzpicture}[] \draw [] (0,5) -- (0,4); \draw [dotted] (0,4) -- (0,3); \draw [] (0,3) -- (0,2); \draw [dotted] (0,2) -- (0,1); \spoint{0}{5}; \spoint{0}{4}; \spoint{0}{3}; \spoint{0}{2}; \node [right] at (0.1, 5.0) {$\pi = \ball{2413}{\beta}$}; \node [right] at (0.1, 4.0) {$\kappa_c$}; \node [right] at (0.1, 3.0) {$\phi_c= \ball{2413}{\tau_c}$}; \node [right] at (0.1, 2.0) {$\psi_c$}; % \draw [] (4,5) -- (4,4); \draw [] (4,4) -- (4,3); \draw [dotted] (4,3) -- (4,2); \spoint{4}{5}; \spoint{4}{4}; \spoint{4}{3}; \node [right] at (4.1, 5.0) {$\pi = \ball{2413}{\beta}$}; \node [right] at (4.1, 4.0) {$\kappa_c = \phi_c$}; \node [right] at (4.1, 3.6) {$\phantom{\kappa_c} = \ball{2413}{\tau_c}$}; \node [right] at (4.1, 3.0) {$\psi_c$}; % \draw [] (8,5) -- (8,3); \draw [dotted] (8,3) -- (8,2); \spoint{8}{5}; \spoint{8}{3}; \node [right] at (8.1, 5.0) {$\pi = \ball{2413}{\beta}$}; \node [right] at (8.1, 4.6) {$\phantom{\pi} = \phi_c$}; \node [right] at (8.1, 4.2) {$\phantom{\pi} = \ball{2413}{\tau_c}$}; \node [right] at (8.1, 3.0) {$\kappa_c = \psi_c$}; \end{tikzpicture} \end{center} \caption{Examples of chains, showing some possible relationships between $\pi$, $\kappa_c$, $\phi_c$, and $\psi_c$.} \label{figure-example-chains} \end{figure} We are now in a position to give a definition of the sets $\chain{R}$, $\chain{G}$, and $\chain{B}$. This definition depends on the set of proper reductions of $\pi$, $\redx{\pi}$, which, as stated earlier, depends on the form of $\beta$. Let $\chain{C}$ be the set of chains in the poset interval $[1, \pi]$. 
We define subsets of $\chain{C}$ as follows: \begin{align*} \chain{R} &= \{ c : c \in \chain{C} \text{ and } \kappa_c \in \redx{\pi} \}, \\ \chain{G} &= \{ c : c \in \chain{C} \setminus \chain{R} \text{ and } \psi_c \leq 2413 \}, \\ \chain{B} &= \{ c : c \in \chain{C} \setminus (\chain{R} \cup \chain{G}) \}. \end{align*} Clearly, every chain in $[1, \pi]$ is included in exactly one of these subsets, and so these sets are a partition of the chains. Given a pivot $\psi_c$, there is a unique permutation $\eta_c$ which we call the \emph{core}\extindof{2413-balloon}{core}{(2413-balloon)} of $\psi_c$. In essence, $\eta_c$ is the smallest permutation such that $\psi_c < \ball{2413}{\eta_c}$. To determine the core, we use the following algorithm: \begin{align*} \text{If $\psi_c$ can be written as~~} & 1 \ominus ( ( \eta \ominus 1 ) \oplus 1) \text{~~or~~} ( ( 1 \oplus \eta ) \ominus 1 ) \oplus 1 \\ \text{or~~} & 1 \oplus ( 1 \ominus ( \eta \oplus 1 ) ) \text{~~or~~} ( 1 \oplus ( 1 \ominus \eta ) ) \ominus 1, \\ \text{then set~~} & \eta_c = \eta.\\ \text{Otherwise, if $\psi_c$ can be written as~~} & ( \eta \ominus 1 ) \oplus 1 \text{~~or~~} 1 \ominus ( \eta \oplus 1 ) \\ \text{~~or~~} &1 \ominus \eta \ominus 1 \text{~~or~~} 1 \oplus \eta \oplus 1 \\ \text{~~or~~} & ( 1 \oplus \eta ) \ominus 1 \text{~~or~~} 1 \oplus ( 1 \ominus \eta ), \\ \text{then set~~} & \eta_c = \eta. \\ \text{Otherwise, if $\psi_c$ can be written as~~} & 1 \oplus \eta \text{~~or~~} 1 \ominus \eta \text{~~or~~} \eta \ominus 1 \text{~~or~~} \eta \oplus 1, \\ \text{then set~~} & \eta_c = \eta.\\ \text{Otherwise, set~~} & \eta_c = \psi_c.\\ \end{align*} Since we have $\psi_c < \phi_c = \ball{2413}{\tau_c}$, it is easy to see that $\eta_c \leq \tau_c$. Note that $\ball{2413}{\eta_c}$ is the smallest 2413-balloon that contains $\psi_c$. We now define two functions, one for each of $\chain{G}$ and $\chain{B}$, which will give us parity-reversing involutions. \begin{align*} \Phi_{\chain{G}}(c) & = \begin{cases} c \setminus \{2413\} & \text{If $\psi_c = 2413$} \\ c \cup \{2413\} & \text{If $\psi_c < 2413$} \\ \end{cases} \\ \Phi_{\chain{B}}(c) & = \begin{cases} c \setminus \{\ball{2413}{\eta_c}\} & \text{If $\eta_c = \tau_c$} \\ c \cup \{\ball{2413}{\eta_c}\} & \text{If $\eta_c < \tau_c$} \\ \end{cases} \\ \end{align*} \begin{remark} If we were to allow the ballooning of the empty permutation $\epsilon$, and then treat $2413$ as $\ball{2413}{\epsilon}$ then $\Phi_{\chain{G}}(c)$ is subsumed by $\Phi_{\chain{B}}(c)$. Doing this, however, introduces additional complications in later proofs, and so we prefer two involutions. \end{remark} For $\Phi_{\chain{G}}(c)$ to be a parity-reversing involution on $\chain{G}$, we need to show that if $c \in \chain{G}$, then $\Phi_{\chain{G}}(c)$ is a chain, that $\Phi(c) \in \chain{G}$, and that $c$ and $\Phi(c)$ have different parities. It is easy to see that this last condition is true. A similar comment applies to $\Phi_{\chain{B}}(c)$ and $\chain{B}$. For $\Phi_{\chain{G}}(c)$ we can show that all the conditions hold for any $\redx{\pi}$, regardless of the form of $\beta$. For $\Phi_{\chain{B}}(c)$ we show that some weaker conditions hold for an arbitrary subset of the reductions of $\pi$, and then, when we have an explicit set of proper reductions, we show that all conditions hold. 
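Before proceeding, we give a small worked example of the core (our own illustration). Take $\psi_c = 4213$. We can write $4213 = 1 \ominus ((\eta \ominus 1) \oplus 1)$ with $\eta = 1$, so the first rule of the algorithm applies and $\eta_c = 1$. Correspondingly, $\ball{2413}{\eta_c} = \ball{2413}{1} = 25314$ is the smallest 2413-balloon that contains $4213$: deleting the first point of $25314$ leaves $4213$, and there is no shorter 2413-balloon, since the shortest permutation that can be ballooned is the permutation $1$.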
The following Lemma gives us a result that applies to $\Phi_{\chain{G}}(c)$ and $\Phi_{\chain{B}}(c)$ for any $\redx{\pi}$, and we will use this result in both Section~\ref{section-2413-double-balloons} and Section~\ref{section-2413-balloons}.
\begin{lemma}
\label{lemma-2413-balloon-phi-gb}
Let $\pi = \ball{2413}{\beta}$, with $\order{\beta} > 4$, and let $\chain{R}$, $\chain{G}$, and $\chain{B}$ be as defined above.
\begin{enumerate}[label=(\alph*)]
\item \label{enum-lemma-balloon-g} If $c \in \chain{G}$, then $\Phi_{\chain{G}}(c) \in \chain{G}$.
\item \label{enum-lemma-balloon-b-eq} If $c \in \chain{B}$, with $\eta_c = \tau_c$, and $\Phi_{\chain{B}}(c)$ is a chain, then $\Phi_{\chain{B}}(c) \in \chain{B} \cup \chain{R}$.
\item \label{enum-lemma-balloon-b-lt} If $c \in \chain{B}$, with $\eta_c < \tau_c$, then $\Phi_{\chain{B}}(c) \in \chain{B} \cup \chain{R}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\textbf{Case~\ref{enum-lemma-balloon-g}.}
First, assume that $c \in \chain{G}$ with $\psi_c = 2413$. Then $c$ contains a segment $2413 < \ball{2413}{\tau_c}$, and $c^\prime = \Phi_{\chain{G}}(c) = c \setminus \{ 2413 \}$. We can see that $c^\prime$ is a chain, as 2413 is neither the smallest nor the largest entry in $c^\prime$. Further, $\psi_{c^\prime} < 2413$. Since $\order{\beta} > 4$ and $\order{\psi_{c^\prime}} < 4$, we must have $c^\prime \not\in \chain{R}$, and therefore $c^\prime \in \chain{G}$.
Now assume that $c \in \chain{G}$ with $\psi_c < 2413$. Then $c$ contains a segment $\psi_c < \ball{2413}{\tau_c}$, and $c^\prime = \Phi_{\chain{G}}(c) = c \cup \{ 2413 \}$. We can see that $c^\prime$ is a chain, since $\psi_c < 2413 < \ball{2413}{\tau_c}$, and further, $\psi_{c^\prime} = 2413$. Since $\order{\beta} > 4$ and $\order{\psi_{c^\prime}} = 4$, we must have $c^\prime \not\in \chain{R}$, and therefore $c^\prime \in \chain{G}$.
\textbf{Case~\ref{enum-lemma-balloon-b-eq}.}
Let $c$ be a chain in $\chain{B}$, with $\eta_c = \tau_c$. Then $c$ contains a segment $\psi_c < \ball{2413}{\tau_c}$, and $c^\prime = \Phi_{\chain{B}}(c) = c \setminus \{ \ball{2413}{\tau_c} \}$. If $\tau_c = \beta$, then $c^\prime$ is not a chain, so we must have $\tau_c < \beta$, and therefore $c^\prime$ is a chain that contains a segment $\psi_c < \ball{2413}{\gamma}$, with $\tau_c < \gamma$. Now, $\psi_c$ is the pivot of $c^\prime$, so we cannot have $c^\prime \in \chain{G}$ as this would imply that $c \in \chain{G}$, which is a contradiction. Thus either $c^\prime \in \chain{R}$ or $c^\prime \in \chain{B}$.
\textbf{Case~\ref{enum-lemma-balloon-b-lt}.}
Let $c$ be a chain in $\chain{B}$, with $\eta_c < \tau_c$. Then $c$ contains a segment $\psi_c < \ball{2413}{\tau_c}$, and $c^\prime = \Phi_{\chain{B}}(c) = c \cup \{ \ball{2413}{\eta_c} \}$. We can see that $c^\prime$ is a chain since $\psi_c < \ball{2413}{\eta_c} < \ball{2413}{\tau_c}$. Now, $\psi_c$ is the pivot of $c^\prime$, so we cannot have $c^\prime \in \chain{G}$ as this would imply that $c \in \chain{G}$, which is a contradiction. So either $c^\prime \in \chain{R}$ or $c^\prime \in \chain{B}$.
\end{proof}
We now have
\begin{observation}
\label{observation-all-we-have-to-do}
If $\pi = \ball{2413}{\beta}$, with $\order{\beta} > 4$, then to show that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$ it is sufficient to show that:
\begin{enumerate}[label=(\alph*)]
\item \label{enum-observation-all-we-have-to-do-b-eq} If $c \in \chain{B}$ and $\eta_c = \tau_c$, then $\Phi_{\chain{B}}(c)$ is a chain, and $\Phi_{\chain{B}}(c) \not\in \chain{R}$.
\item \label{enum-observation-all-we-have-to-do-b-chain} If $c \in \chain{B}$, and $\eta_c < \tau_c$, then $\Phi_{\chain{B}}(c) \not\in \chain{R}$.
\end{enumerate}
Further, if $\Phi_{\chain{B}}$ is a parity-reversing involution on the set $\chain{B}$, then $\mobp{\pi} = - \sum_{\sigma \in \redx{\pi}} \mobp{\sigma}$.
\end{observation}
\begin{proof}
Combining~\ref{enum-observation-all-we-have-to-do-b-eq} and~\ref{enum-observation-all-we-have-to-do-b-chain} above with cases~\ref{enum-lemma-balloon-b-eq} and~\ref{enum-lemma-balloon-b-lt} of Lemma~\ref{lemma-2413-balloon-phi-gb} gives us that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$. This now gives us that $\sum_{c \in \chain{B}} (-1)^{\order{c}} = 0$. By Lemma~\ref{lemma-2413-balloon-phi-gb}, we know that $\sum_{c \in \chain{G}} (-1)^{\order{c}} = 0$, so we must have $\mobp{\pi} = \sum_{c \in \chain{R}} (-1)^{\order{c}}$. Since the chains in $\chain{R}$ are defined by the second-highest element ($\kappa_c$) being in $\redx{\pi}$, the final part of the observation follows by applying Corollary~\ref{corollary-hall-sum-second-highest-set}.
\end{proof}
\section[The principal M\"{o}bius function of double 2413-balloons]{The principal M\"{o}bius function of double\\ 2413-balloons}
\label{section-2413-double-balloons}
We are now able to state and prove our first major result.
\begin{theorem}
\label{theorem-2413-balloon-beta-is-a-balloon}
Let $\pi = \ball{2413}{\beta}$, where $\beta$ is a 2413-balloon. Then $\mobp{\pi} = 2 \mobp{\beta}$.
\end{theorem}
\begin{proof}
Note that $\beta \not\in \redx{\pi}$, and further that $\order{\beta} > 4$, since $\beta$ is a 2413-balloon. Using Observation~\ref{observation-all-we-have-to-do}, we will show that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$. Once we have shown that we have parity-reversing involutions, we will then show how to express the Hall sum of $\chain{R}$ in terms of $\mobp{\beta}$.
\begin{subproof}[Proof that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$.]
Let $c$ be a chain in $\chain{B}$.
First, assume that $\eta_c = \tau_c$. If $\tau_c = \beta$, then either $\psi_c$ is a proper reduction of $\pi$, or $\psi_c = \beta$. In the first case, $c \in \chain{R}$, and in the second case $\psi_c$ is a 2413-balloon, and these are both contradictions. Thus we must have $\tau_c < \beta$, and so there is at least one permutation in $c$ greater than $\phi_c$. It follows that $c^\prime = \Phi_{\chain{B}}(c) = c \setminus \{ \ball{2413}{\tau_c} \}$ is a chain.
We now show that $c^\prime \not\in \chain{R}$. Assume, to the contrary, that $c^\prime \in \chain{R}$, which implies that $\psi_c$ is a proper reduction of $\pi$. But then we would have $\eta_c = \beta$, contradicting $\eta_c = \tau_c < \beta$; so $\psi_c$ is not a proper reduction of $\pi$, and therefore $c^\prime \not\in \chain{R}$.
Now assume that $\eta_c < \tau_c$. Let $c^\prime = \Phi_{\chain{B}}(c) = c \cup \{ \ball{2413}{\eta_c} \}$. We know by Lemma~\ref{lemma-2413-balloon-phi-gb} that this is a chain. Either $\kappa_c = \kappa_{c^\prime}$, or $\kappa_{c^\prime}$ is a 2413-balloon.
If $\kappa_c = \kappa_{c^\prime}$, then $c^\prime \not\in \chain{R}$. If $\kappa_{c^\prime}$ is a 2413-balloon, then $\kappa_{c^\prime} \not\in \redx{\pi}$, so $c^\prime \not\in \chain{R}$. Thus we must have $c^\prime \not\in \chain{R}$.
So now we have that if $c \in \chain{B}$ and $\eta_c = \tau_c$, then $\Phi_{\chain{B}}(c)$ is a chain; and that for any $c \in \chain{B}$, $\Phi_{\chain{B}}(c) \in \chain{B}$. It follows that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$.
\end{subproof}
We now have that $\Phi_{\chain{G}}$ and $\Phi_{\chain{B}}$ are parity-reversing involutions on $\chain{G}$ and $\chain{B}$ respectively. It follows from Observation~\ref{observation-all-we-have-to-do} that
$ \mobp{\pi} = - \sum_{\sigma \in \redx{\pi}} \mobp{\sigma}. $
We now show how to express $\mobp{\sigma}$, where $\sigma \in \redx{\pi}$, in terms of $\mobp{\beta}$. We start by noting that since $\beta$ is a 2413-balloon, $\beta$ has no corners. Now, take the case where $\sigma = \ball{\ex{2}413}{\beta}$, which is the first permutation in Figure~\ref{figure-2413-reductions}. Note that we can write $\sigma = \oneminus ((\beta \ominus 1) \oplus 1)$. Applying Lemma~\ref{lemma-oneplus} to the outermost three points in $\sigma$ (those from the $\ex{2}413$), we find that $\mobp{\sigma} = -\mobp{\beta}$.
The other cases are similar, and this gives us:\footnote{This table is slightly redundant, as the entries are determined by the parity of the number of red points. We include it as later results have similar tables where some values of $\mobp{\sigma}$ are zero, and this gives a consistent presentation.}
\begin{center}
$
\begin{array}{ccccc}
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}413}{\beta} & - \mobp{\beta} \\ \ball{2\ex{4}13}{\beta} & - \mobp{\beta} \\ \ball{24\ex{1}3}{\beta} & - \mobp{\beta} \\ \ball{241\ex{3}}{\beta} & - \mobp{\beta} \\ \phantom{x} & \phantom{x} \\ \phantom{x} & \phantom{x} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}\ex{4}13}{\beta} & \mobp{\beta} \\ \ball{\ex{2}4\ex{1}3}{\beta} & \mobp{\beta} \\ \ball{\ex{2}41\ex{3}}{\beta} & \mobp{\beta} \\ \ball{2\ex{4}\ex{1}3}{\beta} & \mobp{\beta} \\ \ball{2\ex{4}1\ex{3}}{\beta} & \mobp{\beta} \\ \ball{24\ex{1}\ex{3}}{\beta} & \mobp{\beta} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{2\ex{4}\ex{1}\ex{3}}{\beta} & - \mobp{\beta} \\ \ball{\ex{2}4\ex{1}\ex{3}}{\beta} & - \mobp{\beta} \\ \ball{\ex{2}\ex{4}1\ex{3}}{\beta} & - \mobp{\beta} \\ \ball{\ex{2}\ex{4}\ex{1}3}{\beta} & - \mobp{\beta} \\ \phantom{x} & \phantom{x} \\ \phantom{x} & \phantom{x} \\ \end{array} \\
\end{array}
$
\end{center}
It is now easy to see that
\[ \sum_{\sigma \in \redx{\pi}} \mobp{\sigma} = - 2 \mobp{\beta} \]
and the result follows directly.
\end{proof}
\section{The growth of the M\"{o}bius function}
\label{section-growth-rate-of-mu}
We define $\mobmaxx{n} = \max \{ \order{\mobp{\pi}} : \order{\pi} = n \}$. Previous work in~\cite{Jelinek2020} and~\cite{Smith2013} has shown that the growth of $\mobmaxx{n}$ is at least polynomial. We will show that the growth is at least exponential. We have
\begin{theorem}
\label{theorem-growth-of-mobius-function}
For all $n$, $\mobmaxx{n} \geq 2^{\lfloor n/4 \rfloor - 1 }$.
\end{theorem}
\begin{proof}
We start by defining a function to construct a permutation of length $n$.
\[
\pi^{(n)} =
\begin{cases}
1 & \text{If $n = 1$} \\
12 & \text{If $n = 2$} \\
132 & \text{If $n = 3$} \\
2413 & \text{If $n = 4$} \\
\ball{2413}{\pi^{(n-4)}} & \text{Otherwise}
\end{cases}
\]
Note that for $n > 8$, $\pi^{(n)}$ is a double 2413-balloon. It is simple to calculate $\mobp{\pi^{(n)}}$ for $n=1, \ldots, 8$, and these values are given below.
\begin{center}
\begin{tabular}{lcl}
$\mobp{\pi^{(1)}} = \mobp{1} = 1$, & \phantom{xxxxx} & $\mobp{\pi^{(5)}} = \mobp{25314} = 4$, \\
$\mobp{\pi^{(2)}} = \mobp{12} = -1$, && $\mobp{\pi^{(6)}} = \mobp{263415} = -1$, \\
$\mobp{\pi^{(3)}} = \mobp{132} = 1$, && $\mobp{\pi^{(7)}} = \mobp{2735416} = 1$, \\
$\mobp{\pi^{(4)}} = \mobp{2413} = -3$, && $\mobp{\pi^{(8)}} = \mobp{28463517} = -6$.
\end{tabular}
\end{center}
These values satisfy the bound in Theorem~\ref{theorem-growth-of-mobius-function}, and so the theorem holds for $n \leq 8$. For $n > 8$, $\mobp{\pi^{(n)}} = 2 \mobp{\pi^{(n-4)}}$ by Theorem~\ref{theorem-2413-balloon-beta-is-a-balloon}, and the result follows immediately.
\end{proof}
\begin{remark}
\label{remark-simples-in-2413-balloons}
It is easy to see that, with the definitions above, the only simple permutations that can be contained in $\pi^{(n)}$ are $1$, $12$, $21$, $2413$, and $25314$. This answers Problem 4.4 in~\cite{Jelinek2020}, which asks whether $\mobp{\pi}$ is bounded on a hereditary class which contains only finitely many simple permutations: by Theorem~\ref{theorem-growth-of-mobius-function} we have unbounded growth, but only finitely many simple permutations.
\end{remark}
If we repeat the ballooning process, as we do in $\pi^{(n)}$, then the permutation plot is rather striking. We illustrate this in Figure~\ref{figure-multiple-2413-ballons}, which shows $\pi^{(21)}$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{02,21,04,19,06,17,08,15,10,13,11,09,12,07,14,05,16,03,18,01,20}
\end{tikzpicture}
\caption{A permutation plot showing $\pi^{(21)}$.}
\label{figure-multiple-2413-ballons}
\end{center}
\end{figure}
\section{The principal M\"{o}bius function of 2413-balloons}
\label{section-2413-balloons}
Theorem~\ref{theorem-2413-balloon-beta-is-a-balloon} gives us an expression for the value of the M\"{o}bius function $\mobp{\pi}$ when $\pi$ is a double 2413-balloon. We expand on this to find an expression for the M\"{o}bius function $\mobp{\pi}$ when $\pi$ is any 2413-balloon. We start with a Lemma that handles the case where $\beta$ is not a 2413-balloon, and has more than four points. The structure of our proof is similar to that of Theorem~\ref{theorem-2413-balloon-beta-is-a-balloon}, but we present a complete argument to aid readability. We will show
\begin{lemma}
\label{lemma-2413-balloon-beta-not-a-balloon}
Let $\pi = \ball{2413}{\beta}$, where $\beta$ is not a 2413-balloon, and $\order{\beta} > 4$. Then $\mobp{\pi} = \mobp{\beta}$.
\end{lemma}
\begin{proof}
First note that if $\beta$ is monotonic, then by Corollary~\ref{corollary-monotonic-interval} we have $\mobp{\beta} = 0 = \mobp{\pi}$. For the remainder of this proof, we assume that $\beta$ is not monotonic. If $\beta$ has one corner, then without loss of generality, we can assume by symmetry that $\beta = \oneplus \gamma$. Similarly, if $\beta$ has two corners, then we can assume that $\beta = \oneplus \gamma \oplus 1$. As before, we will use Observation~\ref{observation-all-we-have-to-do}. We will show that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$.
Once we have shown that we have parity-reversing involutions, we will then show how to express the Hall sum of $\chain{R}$ in terms of $\mobp{\beta}$. The proper reductions of $\pi$ depend on the number of corners of $\beta$. Below we list the improper reductions of $\pi$ for each case.
\begin{center}
\begin{tabular}{lr}
\toprule
Corners in $\beta$ & Improper reductions of $\pi$ \\
\midrule
No corners & None. \\
One corner ($\beta = 1 \oplus \gamma$) & $\ball{\ex{2}\ex{4}13}{\beta}$, $\ball{\ex{2}\ex{4}1\ex{3}}{\beta}$, $\ball{\ex{2}\ex{4}\ex{1}3}{\beta}$, and $\beta$. \\
Two corners ($\beta = 1 \oplus \gamma \oplus 1$) & $\ball{\ex{2}\ex{4}13}{\beta}$, $\ball{24\ex{1}\ex{3}}{\beta}$, \\
& $\ball{2\ex{4}\ex{1}\ex{3}}{\beta}$, $\ball{\ex{2}4\ex{1}\ex{3}}{\beta}$, $\ball{\ex{2}\ex{4}1\ex{3}}{\beta}$, $\ball{\ex{2}\ex{4}\ex{1}3}{\beta}$, \\
& and $\beta$. \\
\bottomrule
\end{tabular}
\end{center}
\begin{subproof}[Proof that $\Phi_{\chain{B}}$ is a parity-reversing involution on $\chain{B}$.]
Let $c$ be a chain in $\chain{B}$. First, assume that $\eta_c = \tau_c$, so $c^\prime = c \setminus \{ \ball{2413}{\tau_c} \}$. We start by showing that $c^\prime$ is a valid chain. Assume otherwise, which implies $\tau_c = \beta$. If $\beta$ has no corners, then $\psi_c \in \redx{\pi}$, so $c \in \chain{R}$, which is a contradiction. If $\beta$ has one corner, then either $\psi_c \in \redx{\pi}$, which is a contradiction, or $\psi_c \not\in \redx{\pi}$. In the latter case, assume, without loss of generality, that $\beta = 1 \oplus \gamma$. Then $\ball{\ex{2}\ex{4}13}{\beta} = \ball{2\ex{4}13}{\gamma}$, $\ball{\ex{2}\ex{4}1\ex{3}}{\beta} = \ball{2\ex{4}1\ex{3}}{\gamma}$, $\ball{\ex{2}\ex{4}\ex{1}3}{\beta} = \ball{2\ex{4}\ex{1}3}{\gamma}$, and $\beta = \ball{2\ex{4}\ex{1}\ex{3}}{\gamma}$. Thus in all cases where $\psi_c \not\in \redx{\pi}$, we have that $\eta_c$ is not minimal, which is a contradiction. Finally, if $\beta$ has two corners, then either $\psi_c \in \redx{\pi}$, which is a contradiction, or $\psi_c \not\in \redx{\pi}$. The latter case implies that $\psi_c = \beta$, and then we have that either $\psi_c = 1 \oplus \gamma \oplus 1 = \ball{2\ex{4}\ex{1}3}{\gamma}$ or $\psi_c = 1 \ominus \gamma \ominus 1 = \ball{\ex{2}41\ex{3}}{\gamma}$, so $\eta_c$ is not minimal, which is a contradiction. Thus we have that $c^\prime$ must be a chain, and, moreover, $\tau_c \neq \beta$.
We now show that $c^\prime \not\in \chain{R}$. Assume, to the contrary, that $c^\prime \in \chain{R}$, which implies that $\psi_c$ is a proper reduction of $\pi$. But then $\eta_c = \beta$, which would give $\tau_c = \beta$, a contradiction; therefore $c^\prime \not\in \chain{R}$.
Now assume that $\eta_c < \tau_c$. Let $c^\prime = \Phi_{\chain{B}}(c) = c \cup \{ \ball{2413}{\eta_c} \}$, and we know from Lemma~\ref{lemma-2413-balloon-phi-gb} that $c^\prime$ is a chain. Now either $\kappa_c = \kappa_{c^\prime}$, or $\kappa_{c^\prime} = \ball{2413}{\eta_c}$ is a 2413-balloon. In either case we have $c^\prime \not\in \chain{R}$.
So if $c \in \chain{B}$, then $\Phi_{\chain{B}}(c)$ is a chain in $\chain{B}$, and thus $\Phi_{\chain{B}}$ is a parity-reversing involution.
\end{subproof}
We have shown that $\Phi_{\chain{G}}$ and $\Phi_{\chain{B}}$ are parity-reversing involutions on $\chain{G}$ and $\chain{B}$ respectively. It follows from Observation~\ref{observation-all-we-have-to-do} that
$ \mobp{\pi} = - \sum_{\sigma \in \redx{\pi}} \mobp{\sigma}.
$
We now show how to express $\mobp{\sigma}$, where $\sigma \in \redx{\pi}$, in terms of $\mobp{\beta}$. We use a similar mechanism to that used in Theorem~\ref{theorem-2413-balloon-beta-is-a-balloon}. There are some additional considerations where $\beta$ has one or two corners. As an example, take the case where $\sigma = \ball{2\ex{4}13}{\beta}$, and $\beta$ has one corner, and so, by our assumption, can be written as $\oneplus \gamma$. We can write $\sigma = ((\oneplus \beta) \ominus 1) \oplus 1$, and expanding $\beta$ we have $\sigma = ((\oneplus \oneplus \gamma) \ominus 1) \oplus 1$. Applying Lemma~\ref{lemma-oneplus} to the outermost two points in $\sigma$, we find that $\mobp{\sigma} = \mobp{\oneplus \oneplus \gamma}$, and by Lemma~\ref{lemma-oneplus-oneplus} we now have $\mobp{\sigma} = 0$. Because of this, our analysis depends on the number of corners of $\beta$, and we consider each case separately below.
If $\beta$ has no corners, then we have
\begin{center}
$
\begin{array}{ccccc}
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}413}{\beta} & - \mobp{\beta} \\ \ball{2\ex{4}13}{\beta} & - \mobp{\beta} \\ \ball{24\ex{1}3}{\beta} & - \mobp{\beta} \\ \ball{241\ex{3}}{\beta} & - \mobp{\beta} \\ \phantom{x} & \phantom{x} \\ \phantom{x} & \phantom{x} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}\ex{4}13}{\beta} & \mobp{\beta} \\ \ball{\ex{2}4\ex{1}3}{\beta} & \mobp{\beta} \\ \ball{\ex{2}41\ex{3}}{\beta} & \mobp{\beta} \\ \ball{2\ex{4}\ex{1}3}{\beta} & \mobp{\beta} \\ \ball{2\ex{4}1\ex{3}}{\beta} & \mobp{\beta} \\ \ball{24\ex{1}\ex{3}}{\beta} & \mobp{\beta} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{2\ex{4}\ex{1}\ex{3}}{\beta} & - \mobp{\beta} \\ \ball{\ex{2}4\ex{1}\ex{3}}{\beta} & - \mobp{\beta} \\ \ball{\ex{2}\ex{4}1\ex{3}}{\beta} & - \mobp{\beta} \\ \ball{\ex{2}\ex{4}\ex{1}3}{\beta} & - \mobp{\beta} \\ \phantom{x} & \phantom{x} \\ \beta & \mobp{\beta} \\ \end{array} \\
\end{array}
$
\end{center}
If $\beta$ has one corner, under our assumption that $\beta = \oneplus \gamma$, we have
\begin{center}
$
\begin{array}{ccccc}
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}413}{\beta} & - \mobp{\beta} \\ \ball{2\ex{4}13}{\beta} & 0 \\ \ball{24\ex{1}3}{\beta} & - \mobp{\beta} \\ \ball{241\ex{3}}{\beta} & - \mobp{\beta} \\ \phantom{x} & \phantom{x} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}4\ex{1}3}{\beta} & \mobp{\beta} \\ \ball{\ex{2}41\ex{3}}{\beta} & \mobp{\beta} \\ \ball{2\ex{4}\ex{1}3}{\beta} & 0 \\ \ball{2\ex{4}1\ex{3}}{\beta} & 0 \\ \ball{24\ex{1}\ex{3}}{\beta} & \mobp{\beta} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{2\ex{4}\ex{1}\ex{3}}{\beta} & 0 \\ \ball{\ex{2}4\ex{1}\ex{3}}{\beta} & - \mobp{\beta} \\ \phantom{x} & \phantom{x} \\ \phantom{x} & \phantom{x} \\ \phantom{x} & \phantom{x} \\ \end{array} \\
\end{array}
$
\end{center}
Finally, if $\beta$ has two corners, under our assumption that $\beta = \oneplus \gamma \oplus 1$, we have
\begin{center}
$
\begin{array}{ccc}
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}413}{\beta} & - \mobp{\beta} \\ \ball{2\ex{4}13}{\beta} & 0 \\ \ball{24\ex{1}3}{\beta} & 0 \\ \ball{241\ex{3}}{\beta} & - \mobp{\beta} \\ \end{array}
& \phantom{xxx} &
\begin{array}{lr} \sigma & \mobp{\sigma} \\ \midrule \ball{\ex{2}4\ex{1}3}{\beta} & 0 \\ \ball{\ex{2}41\ex{3}}{\beta} & \mobp{\beta} \\
\ball{2\ex{4}\ex{1}3}{\beta} & 0 \\ \ball{2\ex{4}1\ex{3}}{\beta} & 0 \\ \end{array}
\end{array}
$
\end{center}
In all three cases we have
\[ \sum_{\sigma \in \redx{\pi}} \mobp{\sigma} = - \mobp{\beta} \]
and the result follows directly.
\end{proof}
We are now in a position to state and prove the main Theorem for this section.
\begin{theorem}
\label{theorem-2413-balloons}
Let $\pi = \ball{2413}{\beta}$. Then
\begin{align*}
\mobp{\pi} =
\begin{cases}
4 & \text{If $\beta = 1$} \\
-6 & \text{If $\beta = 2413$} \\
2 \mobp{\beta} & \text{If $\beta$ is a 2413-balloon} \\
\mobp{\beta} & \text{Otherwise}.
\end{cases}
\end{align*}
\end{theorem}
\begin{proof}
The values of $\mobp{\ball{2413}{\beta}}$ for the symmetry classes of $\beta$ with $\order{\beta} \leq 4$ are shown below.
\begin{center}
$
\begin{array}{ccc}
\begin{array}{lrr} \beta & \mobp{\beta} &\mobp{\ball{2413}{\beta}} \\ \midrule 1 & 1 & 4 \\ 12 & -1 & -1 \\ 123 & 0 & 0 \\ 132 & 1 & 1 \\ 1234 & 0 & 0 \\ 1243 & 0 & 0 \\ \end{array}
& \phantom{xxx} &
\begin{array}{lrr} \beta & \mobp{\beta} &\mobp{\ball{2413}{\beta}} \\ \midrule 1324 & -1 & -1 \\ 1342 & -1 & -1 \\ 1432 & 0 & 0 \\ 2143 & -1 & -1 \\ 2413 & -3 & -6 \\ \phantom{x} & \phantom{x} \\ \end{array} \\
\end{array}
$
\end{center}
It is easy to see that these values agree with Theorem~\ref{theorem-2413-balloons}. We now combine Theorem~\ref{theorem-2413-balloon-beta-is-a-balloon} and Lemma~\ref{lemma-2413-balloon-beta-not-a-balloon} to complete the proof.
\end{proof}
\section{Concluding remarks}
\label{section-concluding-remarks}
\subsection{Generalising the balloon operator}
Given two permutations $\alpha$ and $\beta$, with lengths $a$ and $b$ respectively, and two integers $i,j$ which satisfy $0 \leq i,j \leq a$, the \emph{$i,j$-balloon}\extindex{$i,j$-balloon} of $\beta$ by $\alpha$, written as $\ballgen{i,j}{\alpha}{\beta}$, is the permutation formed by inserting the permutation $\beta$ into $\alpha$ between the $i$-th and $(i+1)$-th columns of $\alpha$, and between the $j$-th and $(j+1)$-th rows of $\alpha$. The integers $i$ and $j$ are, collectively, the \emph{indexes}\extindex[balloon]{indexes} of the balloon. Formally, we have
\begin{align*}
(\ballgen{i,j}{\alpha}{\beta})_x & =
\begin{cases}
\alpha_x & \text{if $x \leq i$ and $\alpha_x \leq j $}\\
\alpha_x + \order{\beta} & \text{if $x \leq i$ and $\alpha_x > j $}\\
\beta_{x-i} + j & \text{if $x > i $ and $x \leq i+\order{\beta}$ }\\
\alpha_{x-\order{\beta}} & \text{if $x > i+\order{\beta}$ and $\alpha_{x-\order{\beta}} \leq j $}\\
\alpha_{x-\order{\beta}} + \order{\beta} & \text{if $x > i+\order{\beta}$ and $\alpha_{x-\order{\beta}} > j $}\\
\end{cases}
\end{align*}
As before, the balloon notation is not associative. Unlike 2413-balloons, which have to be interpreted as right-associative, generalised balloons use brackets to make the grouping explicit. Note that the 2413-balloon defined in Section~\ref{section-definitions-and-notation} is written as $\ballgen{2,2}{2413}{\beta}$ in our generalised notation. We remark that for any $\alpha$ and any $\beta$, we have $\ballgen{0,0}{\alpha}{\beta} = \beta \oplus \alpha$, and we can easily determine $\mobp{\beta \oplus \alpha}$ using results from Propositions~1~and~2 of Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}.
\subsection{Generalised 2413-balloons}
If we restrict $\alpha$ to 2413, then, up to symmetry, there are seven possible values for the indexes: $(0,0)$, $(0,1)$, $(0,2)$, $(1,0)$, $(1,1)$, $(1,2)$, and $(2,2)$.
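To make the case analysis in the definition of $\ballgen{i,j}{\alpha}{\beta}$ concrete, the following short Python sketch (our own illustration, not part of the formal development) implements it directly, with permutations in one-line notation as lists. Running it reproduces, for example, $\ballgen{2,2}{2413}{1} = 25314$, the one-line forms of $\pi^{(5)}, \ldots, \pi^{(8)}$ from Section~\ref{section-growth-rate-of-mu}, and the identity $\ballgen{0,0}{\alpha}{\beta} = \beta \oplus \alpha$ remarked on above, together with its companion $\ballgen{\order{\alpha},\order{\alpha}}{\alpha}{\beta} = \alpha \oplus \beta$.
\begin{verbatim}
def balloon(alpha, beta, i, j):
    # (i,j)-balloon of beta by alpha; permutations are lists of
    # the values 1..n in one-line notation.
    b = len(beta)
    out = [a if a <= j else a + b for a in alpha[:i]]   # left part of alpha
    out += [v + j for v in beta]                        # beta, shifted up by j
    out += [a if a <= j else a + b for a in alpha[i:]]  # right part of alpha
    return out

def pi_n(n):
    # The construction pi^(n) from the growth-rate section.
    base = {1: [1], 2: [1, 2], 3: [1, 3, 2], 4: [2, 4, 1, 3]}
    return base[n] if n <= 4 else balloon([2, 4, 1, 3], pi_n(n - 4), 2, 2)

assert pi_n(5) == [2, 5, 3, 1, 4]            # 25314
assert pi_n(6) == [2, 6, 3, 4, 1, 5]         # 263415
assert pi_n(7) == [2, 7, 3, 5, 4, 1, 6]      # 2735416
assert pi_n(8) == [2, 8, 4, 6, 3, 5, 1, 7]   # 28463517

a, b = [2, 1, 3], [1, 2]
assert balloon(a, b, 3, 3) == [2, 1, 3, 4, 5]  # alpha direct-sum beta
assert balloon(a, b, 0, 0) == [1, 2, 4, 3, 5]  # beta direct-sum alpha
\end{verbatim}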
Theorem~\ref{theorem-2413-balloons} handles the case where the indexes are $(2,2)$, and~\cite{Burstein2011} handles the case where the indexes are $(0,0)$. For the other indexes, we have
\begin{conjecture}
\label{conjecture-most-2413-balloons}
Let $\pi = \ballgen{i,j}{2413}{\beta}$, where $(i,j) \in \{ (0,1), (0,2), (1,1), (1,2) \}$. Then
\[
\mobp{\pi} =
\begin{cases}
0 & \text{If $(i,j) = (0,1)$ and $\beta = \tau \oplus 1$} \\
0 & \text{If $(i,j) = (0,2)$ and $\beta = \tau \ominus 1$} \\
0 & \text{If $(i,j) = (1,1)$ and $\beta = \oneminus \tau$ or $\beta = 12$} \\
0 & \text{If $(i,j) = (1,2)$ and $\beta = \oneplus \tau$} \\
\mobp{\beta} & \text{Otherwise.}
\end{cases}
\]
\end{conjecture}
and
\begin{conjecture}
\label{conjecture-1-0-2413-balloons}
Let $\pi = \ballgen{1,0}{2413}{\beta}$. Then
\[
\mobp{\pi} =
\begin{cases}
6 & \text{If $\beta = 1$} \\
-2 & \text{If $\beta = 21$} \\
0 & \text{If $\beta = 312$} \\
2 \mobp{\beta} & \text{If $\beta = \ballgen{1,0}{2413}{\gamma}$} \\
\mobp{\beta} & \text{Otherwise.}
\end{cases}
\]
\end{conjecture}
Lemma~\ref{lemma-wedge-is-sum-of-reductions} in Chapter~\ref{chapter_balloon_permutations_preprint} can be applied to the cases where the indexes are $(0,1)$, $(0,2)$, or $(1,0)$. Using this lemma, together with some of the techniques used earlier in this chapter, it is easy to show that Conjecture~\ref{conjecture-most-2413-balloons} is true in these cases. For brevity, we do not provide proofs here.
\subsection[Bounding the M\"{o}bius function on hereditary classes]{Bounding the M\"{o}bius function on hereditary\\ classes}
Corollary 24 in Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011} gives us that if $\pi$ is separable, then $\mobp{\pi} \in \{ 0, \pm1 \}$. The simple permutations in the hereditary class of separable permutations are $1$, $12$, and $21$. In Remark~\ref{remark-simples-in-2413-balloons} we have unbounded growth where the simple permutations in the hereditary class are just $1$, $12$, $21$, $2413$, and $25314$, so adding $2413$ and $25314$ to the simple permutations moves us from bounded growth to unbounded growth. This then leads to:
\begin{question}
If $C$ is a hereditary class containing just the simple permutations $1$, $12$, $21$ and $2413$, and $\pi \in C$, then is $\mobp{\pi}$ bounded? Further, if $D$ is a hereditary class containing just the simple permutations $1$, $12$, $21$, $2413$, and $3142$, and $\pi \in D$, then is $\mobp{\pi}$ bounded?
\end{question}
\section{Chapter summary}
The main result from this paper is a proof that the growth of $\mobmaxx{n} = \max \{ \order{\mobp{\pi}} : \order{\pi} = n \}$ is at least exponential. This is proved by finding explicit recursions for the principal M\"{o}bius function of double 2413-balloons.
This result is not unexpected. Indeed, while the main result from Jel{\'{i}}nek, Kantor, Kyn{\v{c}}l and Tancer~\cite{Jelinek2020} is that the growth of $\mobmaxx{n}$ is bounded below by an order-7 polynomial, in the final section of their paper they define a set of permutations $\kappa_n$ as
\[
\kappa_n = n + 1, n + 3, \ldots, 3n - 1, 1, 3n + 1, 2, 3n + 2, \ldots, n, 4n, n + 2, n + 4, \ldots, 3n;
\]
and then they conjecture that \emph{``the absolute value of $\mobp{\kappa_n}$ is exponential in $n$''}. The author computed the value of the principal M\"{o}bius function for $\kappa_1, \ldots, \kappa_7$, and the results are shown in Table~\ref{table_kappa_n_values}.
\begin{table}
\[
\begin{array}{lrr}
\toprule
n & \mobp{\kappa_n} & \mobp{\kappa_{n}} / \mobp{\kappa_{n-1}} \\
\midrule
1 & -1 & \text{--} \\
2 & -27 & 27 \\
3 & -117 & 4.333 \\
4 & -509 & 4.350 \\
5 & -2389 & 4.694 \\
6 & -10946 & 4.582 \\
7 & -51210 & 4.678 \\
\bottomrule
\end{array}
\]
\caption{Values of the principal M\"{o}bius function of $\kappa_1, \ldots, \kappa_7$.}
\label{table_kappa_n_values}
\end{table}
Examining these values we can see two things:
\begin{itemize}
\item The ratio $\mobp{\kappa_{n}} / \mobp{\kappa_{n-1}}$ is not, in general, an integer. In contrast, double 2413-balloons grow by a factor of 2 at each iteration.
\item The absolute value of $\mobp{\kappa_n}$ appears to grow at more than double the rate of that of a double 2413-balloon.
\end{itemize}
The author feels that it is very unlikely that $\mobmaxx{n}$ is bounded below by something that grows faster than an exponential function. The only mechanism currently available to determine $\mobmaxx{n}$ is essentially to calculate the value of the principal M\"{o}bius function for every permutation of length $n$. While this has been done for lengths $1, \ldots, 13$, the computational effort required to determine the value of $\mobmaxx{14}$ is very large\footnote{The author estimates the effort on his HEDT PC would be around half a million CPU hours.} and it is therefore unreasonable to expect this data to become available in the near future.
The technique used to construct 2413-balloon permutations can, as mentioned earlier, be thought of in terms of inflations of $25314$. We commented earlier that we used balloon notation, as it is our belief that this gave us a simpler exposition. In Section~\ref{section-concluding-remarks} we generalised the ballooning process, and defined $i,j$-balloons. Although $\ballgen{i,j}{\alpha}{\beta}$ can be considered as an inflation of a permutation that is $\alpha$ plus a single new point, we think that viewing permutations through the ``balloon'' lens gives a sufficiently different view that this technique could well have a wider application. Initial investigations by the author and others seem to indicate that most $i,j$-balloons behave ``regularly''. Typically we find that $\mobp{\ballgen{i,j}{\alpha}{\beta}}$ is a multiple of $\mobp{\beta}$.
\chapter{The principal M\"{o}bius function of balloon permutations}
\label{chapter_balloon_permutations_preprint}
\section{Preamble}
This chapter is based on independent research by the author that is, at the time of writing, still in progress. The author intends that this chapter will form the basis of work that will be submitted for publication, and at present this will be a single-author paper.
Generalised balloon permutations were introduced in Section~\ref{section-concluding-remarks} of Chapter~\ref{chapter_2413_balloon_paper}. In this chapter we drop the ``generalised'' qualifier. A balloon permutation is formed from the merge of two permutations, $\alpha$ and $\beta$, and has the property that $\beta$ occurs as an interval copy in the balloon permutation.
In this chapter we find an expression for the principal M\"{o}bius function of balloon permutations in terms of a sum over a set of permutations, plus a correction factor. We then show that for certain types of balloon permutation (``wedge permutations'') the correction factor is always zero. Further, we show that the principal M\"{o}bius function of a wedge permutation is always a multiple of the principal M\"{o}bius function of $\beta$.
\section{Introduction}
In this chapter we recall the definition of a balloon permutation from Chapter~\ref{chapter_2413_balloon_paper}, and provide a second, equivalent, definition. We show how, given two permutations $\alpha$ and $\beta$, we can construct a balloon permutation $\ballij{\alpha}{\beta}$. We then provide some examples of balloon permutations, which include direct sums and skew sums, and introduce ``block diagrams'', which show how $\alpha$ and $\beta$ are related.
We discuss how to represent permutations contained in a balloon permutation in terms of $\alpha$ and $\beta$. This includes discussing how to resolve ambiguities that arise when a permutation contained in a balloon permutation has multiple embeddings.
We show that the principal M\"{o}bius function of $\ballij{\alpha}{\beta}$ can be expressed as a sum of the principal M\"{o}bius function over a certain subset of permutations contained in $\ballij{\alpha}{\beta}$, plus a correction factor that is calculated from a specific set of chains. The subset of permutations has the property that they all contain $\beta$ as an interval copy, although we note that not every permutation that contains $\beta$ as an interval copy is included in the set of permutations.
One of the types of balloon permutation is a wedge permutation. We show that the correction factor for a wedge permutation is always zero. We further show that the value of the principal M\"{o}bius function of a wedge permutation is a multiple of the principal M\"{o}bius function of $\beta$.
We conclude this chapter with a brief summary of the results found. We briefly discuss certain phenomena which have been observed when the wedge operation is iterated.
\section{Definitions, examples and notation}
\subsection{Balloon permutations}
Generalised balloon permutations were introduced in Chapter~\ref{chapter_2413_balloon_paper}, and we recall that they are defined as
\begin{align}
\label{equation-recall-balloon-definition}
(\ballgen{i,j}{\alpha}{\beta})_x & =
\begin{cases}
\alpha_x & \text{if $x \leq i$ and $\alpha_x \leq j $}\\
\alpha_x + \order{\beta} & \text{if $x \leq i$ and $\alpha_x > j $}\\
\beta_{x-i} + j & \text{if $x > i $ and $x \leq i+\order{\beta}$ }\\
\alpha_{x-\order{\beta}} & \text{if $x > i+\order{\beta}$ and $\alpha_{x-\order{\beta}} \leq j $}\\
\alpha_{x-\order{\beta}} + \order{\beta} & \text{if $x > i+\order{\beta}$ and $\alpha_{x-\order{\beta}} > j $}\\
\end{cases}
\end{align}
with $0 \leq i,j \leq \order{\alpha}$.
If $\alpha = 1$, then $\ballgen{i,j}{\alpha}{\beta}$ is one of $1 \oplus \beta$, $1 \ominus \beta$, $\beta \oplus 1$, or $\beta \ominus 1$. In these cases it is trivial to find $\mobp{\ballgen{i,j}{\alpha}{\beta}}$ using results from Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}, and we exclude these cases from further consideration in this chapter by requiring that $\order{\alpha} > 1$.
Given two permutations, $\alpha$ and $\beta$, we say that some permutation $\pi$ is a \emph{merge}\extindex[permutation]{merge} of $\alpha$ and $\beta$ if the points of $\pi$ can be coloured red or blue so that the red points are order-isomorphic to $\alpha$, and the blue points are order-isomorphic to $\beta$.
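For example (our own illustration), the permutation $3142$ is a merge of $\alpha = 21$ and $\beta = 12$: colouring the points with values $3$ and $2$ red, and the points with values $1$ and $4$ blue, gives red points order-isomorphic to $21$ and blue points order-isomorphic to $12$. Note that in this colouring the blue points do not form an interval copy of $12$; ruling out such colourings is exactly the extra requirement in the definition of a balloon permutation that follows.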
We can alternatively define a \emph{balloon}\extindex[permutation]{balloon} permutation as the merge of two non-empty permutations $\alpha$ and $\beta$, which we write as $\ballij{\alpha}{\beta}$, with the additional requirement that the blue points (order-isomorphic to $\beta$) must be an interval copy of $\beta$. One consequence of this last requirement is that \[ \ballij{\alpha}{\eta} < \ballij{\alpha}{\zeta} \;\;\text{~if and only if~}\;\; \eta < \zeta. \] We call this the \emph{nesting}\extindex{nesting condition} condition. Our alternative definition of a balloon permutation is somewhat abstract, and, as we will see, certain values of $i$ and/or $j$ make a significant difference to our results. Before we continue with our discussion of notation, we first present several varieties of balloon permutations, together with a diagrammatic way of understanding how balloon permutations are structured. \subsection{Types of balloon permutations} When discussing types of balloon permutations $\ballij{\alpha}{\beta}$, we will generally give a description of how $\alpha$ and $\beta$ are merged. We will also provide what we term a \emph{block diagram}\extindex{block diagram}. A block diagram is used to give a visual representation of the merge, showing the relative locations of $\alpha$ and $\beta$ in $\ballij{\alpha}{\beta}$. In all the examples we show, there are parts of the permutation plot of $\ballij{\alpha}{\beta}$ that are guaranteed to be empty. In a block diagram, we highlight these empty regions by shading them. \subsubsection{Balloon permutations} Figure~\ref{figure_generalised-balloon} shows the block diagram for any balloon permutation. \begin{figure}[!h] \begin{center} \begin{tikzpicture}[scale=1] \fill [color=lightgray] (0.5,1.5) rectangle (1.5,2.5); \fill [color=lightgray] (1.5,0.5) rectangle (2.5,1.5); \fill [color=lightgray] (1.5,2.5) rectangle (2.5,3.5); \fill [color=lightgray] (2.5,1.5) rectangle (3.5,2.5); \plotgrid{3}{3} \scell{1}{1}{$\alpha_{BL}$}; \scell{1}{3}{$\alpha_{TL}$}; \scell{2}{2}{$\beta$}; \scell{3}{1}{$\alpha_{BR}$}; \scell{3}{3}{$\alpha_{TR}$}; \end{tikzpicture} \end{center} \caption{Block diagram for the generalised balloon permutation $\ballgen{i,j}{\alpha}{\beta}$.} \label{figure_generalised-balloon} \end{figure} Note that, by design, a block diagram does not include the indices used (the $i$ and $j$ in $\ballgen{i,j}{\alpha}{\beta}$), as the purpose of a block diagram is to provide a high-level view of the construction. There are two sub-types of balloon permutations, which we describe now. \subsubsection{Direct sums and skew sums} The simplest examples of balloon permutations, which have already appeared extensively in the literature, are the direct sum of two permutations, $\alpha \oplus \beta$, and the skew sum, $\alpha \ominus \beta$. Figure~\ref{figure-sum-and-skew-as-blocks} shows the block diagram for direct sums and skew sums. Direct sums occur when we have $\ballgen{0,0}{\alpha}{\beta}$ or $\ballgen{\order{\alpha},\order{\alpha}}{\alpha}{\beta}$. Skew sums occur when we have $\ballgen{0,\order{\alpha}}{\alpha}{\beta}$ or $\ballgen{\order{\alpha},0}{\alpha}{\beta}$. 
\begin{figure}[!h]
\begin{center}
\begin{subfigure}[t]{0.45\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1]
\fill [color=lightgray] (0.5,1.5) rectangle (1.5,2.5);
\fill [color=lightgray] (1.5,0.5) rectangle (2.5,1.5);
\plotgrid{2}{2}
\scell{1}{1}{$\alpha$};
\scell{2}{2}{$\beta$};
\end{tikzpicture}
\end{center}
\caption{$\alpha \oplus \beta = \ballgen{\order{\alpha},\order{\alpha}}{\alpha}{\beta}$.}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\begin{center}
\begin{tikzpicture}[scale=1]
\fill [color=lightgray] (0.5,0.5) rectangle (1.5,1.5);
\fill [color=lightgray] (1.5,1.5) rectangle (2.5,2.5);
\plotgrid{2}{2}
\scell{1}{2}{$\alpha$};
\scell{2}{1}{$\beta$};
\end{tikzpicture}
\end{center}
\caption{$\alpha \ominus \beta = \ballgen{\order{\alpha},0}{\alpha}{\beta}$.}
\end{subfigure}
\end{center}
\caption{Block diagrams for direct and skew sums.}
\label{figure-sum-and-skew-as-blocks}
\end{figure}
\subsubsection{Wedge permutations}
\label{subsection-wedge-permutations}
Our second type of balloon permutation is the \emph{wedge permutation}\extindex[permutation]{wedge}. Wedge permutations occur when we have $\ballgen{i,0}{\alpha}{\beta}$ or $\ballgen{i, \order{\alpha}}{\alpha}{\beta}$ or $\ballgen{0,j}{\alpha}{\beta}$ or $\ballgen{\order{\alpha},j}{\alpha}{\beta}$. There are thus four (symmetric) ways in which we can construct a wedge permutation. For our purposes, we need only consider one symmetry, and we choose the version defined by $\ballgen{i, \order{\alpha}}{\alpha}{\beta}$. We write this as $\ballwedgex{i}{\alpha}{\beta}$. Henceforth, we will refer to this as a wedge permutation without qualification.
Note that if $i \in \{ 0 , \order{\alpha} \}$, then $\ballwedgex{i}{\alpha}{\beta}$ is a direct or skew sum of $\alpha$ and $\beta$. We will occasionally want to refer to wedge permutations that are not direct or skew sums, and we call these \emph{proper wedge permutations}\extindex[permutation]{proper wedge}.
Another way to understand the construction of wedge permutations is to define $\ballwedgek{\alpha}{\beta}$ as the permutation formed by taking the first $k$ points of $\alpha$, then appending all of the points from $\beta$, with their values increased by $\order{\alpha}$, and finally appending the remaining points of $\alpha$. Figure~\ref{figure_example_wedge_generic} shows the block diagram for a wedge permutation, where $\alpha_L$ represents the first $k$ points of $\alpha$, and $\alpha_R$ represents the remaining points of $\alpha$.
\begin{figure}[!h]
\begin{center}
\begin{tikzpicture}[scale=1]
\fill [color=lightgray] (0.5,1.5) rectangle (1.5,2.5);
\fill [color=lightgray] (1.5,0.5) rectangle (2.5,1.5);
\fill [color=lightgray] (2.5,1.5) rectangle (3.5,2.5);
\plotgrid{3}{2}
\scell{1}{1}{$\alpha_{L}$};
\scell{2}{2}{$\beta$};
\scell{3}{1}{$\alpha_{R}$};
\end{tikzpicture}
\end{center}
\caption{Block diagram for the wedge permutation $\ballwedgek{\alpha}{\beta}$.}
\label{figure_example_wedge_generic}
\end{figure}
\subsection{Notation}
We have said that a balloon permutation will be written as $\ballij{\alpha}{\beta}$. We now consider how we will represent some permutation $\sigma$, where $\sigma \leq \ballij{\alpha}{\beta}$. Our aim is to define a notation where most permutations contained in $\ballij{\alpha}{\beta}$ have a unique representation.
Since $\ballij{\alpha}{\beta}$ is a merge of $\alpha$ and $\beta$, we can colour the points of $\alpha$ red, and the points of $\beta$ blue. We note that for a merge in general there may be several possible colourings.
By contrast, there is only one possible colouring for a balloon permutation. This is because $\beta$ is an interval at a fixed position within $\ballij{\alpha}{\beta}$, and so we know that the first point of $\beta$ is in column $i+1$, and the lowest point of $\beta$ is in row $j+1$. Since $\beta$ is an interval, this then means that there is only one possible choice for the blue points, and so we have a unique colouring. When we discuss permutations contained in $\ballij{\alpha}{\beta}$, we will sometimes want to discuss how these permutations can be found as embeddings. Recall that if $\sigma$ is contained in $\pi$, then an embedding of $\sigma$ in $\pi$ is a set of points of $\pi$, with cardinality $\order{\sigma}$, that is order-isomorphic to $\sigma$. Further, note that an embedding is not necessarily unique. As a (trivial) example, if $\order{\pi} = n$, then there are $n$ distinct embeddings of the permutation 1 in $\pi$. Now let $\pi = \ballij{\alpha}{\beta}$, and assume that we have a permutation $\sigma < \pi$, and we want to describe an embedding $\omega$ of $\sigma$ in $\pi$ in terms of the points of $\alpha$ and $\beta$. The points of $\omega$ in $\ballij{\alpha}{\beta}$ can be partitioned into two sets -- those that are red, and those that are blue. The blue points used in $\omega$ will form a permutation $\eta$, where we allow $\eta$ to be the empty permutation $\epsilon$. In general, we will not be concerned with exactly which red points are used. We know that the red points are a (possibly improper or empty) subset of the points of $\alpha$. We represent the embedding as \[ \omega = \ballij{\ex{\alpha}}{\eta}. \] This representation must be thought of as first finding the complete permutation $\ballij{\alpha}{\eta}$, where $\eta$ uses the blue points chosen, and then removing some or possibly all of the red points. With this notation, we can clearly understand the permutation formed by the blue points, as this is $\eta$, but we do not know which red points have been used. This notation does not distinguish between two embeddings where the blue points of each embedding are equivalent to the same permutation. This is deliberate, as when we consider embeddings, we will only be concerned with the permutation formed by the blue points, not exactly which blue points are used, and we think that the ambiguity in notation is more than offset by the increase in clarity in the discourse. We now partition the permutations contained in $\pi = \ballij{\alpha}{\beta}$ into four subsets. \subsubsection{Complete permutations} Our first set of permutations are those where there is an embedding that uses all of the red points. If $\sigma$ is such a permutation, then it is possible to write $\sigma = \ballij{\alpha}{\eta}$ for some (possibly empty) permutation $\eta$. The permutation $\eta$ is unique, since if we had $\sigma = \ballij{\alpha}{\eta} = \ballij{\alpha}{\zeta}$, then by the nesting condition we must have $\eta = \zeta$. We call these permutations \emph{complete}\extindex[permutation]{complete}, as they have an embedding which uses the complete set of red points. We will always write them in the form $\ballij{\alpha}{\eta}$. The permutation $\pi = \ballij{\alpha}{\beta}$ is, of course, complete. \subsubsection{Proper reductions} Our second set of permutations are those where every embedding uses all of the blue points, excluding the permutation $\pi$, which, as noted above, is complete. 
For these permutations, we will not be interested in understanding exactly which red points are used in any embedding. We write these permutations in the form $\ballij{\ex{\alpha}}{\beta}$. This representation must be thought of as first finding the complete permutation $\ballij{\alpha}{\beta}$, and then removing some or possibly all of the red points. We call these permutations \emph{proper reductions}\extindof{balloon}{proper reduction}{(of a balloon)}. Given $\pi = \ballij{\alpha}{\beta}$, we denote the set of permutations that are proper reductions as $\redx{\pi}$. Note that no proper reduction can be complete.
\subsubsection{Matryoshka permutations}
Our third set of permutations are a subset of the permutations that are neither complete nor proper reductions, and satisfy a specific condition (the ``matryoshka'' condition).
Given any two permutations $\sigma$ and $\pi$, there is a set of all possible embeddings of $\sigma$ into $\pi$, which we write $\sigma(\pi)$. Since we are only interested in cases where $\pi = \ballij{\alpha}{\beta}$, and where $\sigma$ is neither complete nor a proper reduction, we can extend our notation to write
\[
\sigma(\pi) = \{ \ballij{\ex{\alpha}^1}{\eta^1}, \ldots, \ballij{\ex{\alpha}^n}{\eta^n} \}.
\]
Note that there may be cases where there are two embeddings $\ballij{\ex{\alpha}^k}{\eta^k}$, and $\ballij{\ex{\alpha}^\ell}{\eta^\ell}$, with $k \neq \ell$, where the permutations $\eta^k$ and $\eta^\ell$ are the same.
We are interested in understanding which permutations occur as $\eta$ in $\ballij{\ex{\alpha}}{\eta}$ in the set of embeddings $\sigma(\pi)$, so our next step is to form the set of permutations that occur as some $\eta$ in $\sigma(\pi)$. We define
\[
E_{\sigma(\pi)} = \{ \zeta : \sigma(\pi) \text{ contains an embedding } \ballij{\ex{\alpha}}{\zeta} \},
\]
and, given some $E_{\sigma(\pi)}$, we label the elements as $E_{\sigma(\pi)} = \{ \zeta_1, \ldots, \zeta_m \}$. Note that $E_{\sigma(\pi)}$ is a set of permutations, and that this set can include the empty permutation $\epsilon$.
Now, if there is an integer $k$, with $1 \leq k \leq m$, such that for all $\ell = 1, \ldots, m$ with $\ell \neq k$ we have $\zeta_k < \zeta_\ell$, then we say that the permutation $\sigma$ is \emph{matryoshka}\extindex[permutation]{matryoshka}, and we will write these permutations in the form $\ballij{\ex{\alpha}}{\zeta}$, where $\zeta = \zeta_k$. As before, this representation must be thought of as first finding the complete permutation $\ballij{\alpha}{\zeta}$, and then removing some or possibly all of the red points. The set of matryoshka permutations forms our third set.
If a permutation $\sigma$ is not complete, and is not a proper reduction, and has an embedding $\ballij{\ex{\alpha}}{\eta}$, where $\eta \in \{ \epsilon, 1\}$, then it is easy to see that $\sigma$ must be matryoshka.
\subsubsection{Defective permutations}
The remaining permutations are not complete permutations, proper reductions, or matryoshka. We say that these permutations are \emph{defective}\extindex[permutation]{defective}. We will not need to concern ourselves with a unique representation of a defective permutation.
\subsubsection{Notation for elements of a chain}
Our main arguments will be based on partitioning the chains in the poset, and then applying Corollary~\ref{corollary-halls-corollary}. Given $\pi = \ballij{\alpha}{\beta}$, we now introduce the terminology and notation we will use in handling the chains in the poset $[1, \pi]$. Let $c$ be a chain in the interval $[1, \pi]$.
We start by noting that the top element of every chain in the interval $[1, \pi]$ is $\pi = \ballij{\alpha}{\beta}$, and the bottom element is the permutation $1$. Recall that we have $\order{\alpha} > 1$, so the permutation $1$ cannot be written as $\ballij{\alpha}{\eta}$ for any $\eta$. It follows that every chain contains a largest permutation that cannot be written as $\ballij{\alpha}{\eta}$. Note that since we cannot write this permutation as $\ballij{\alpha}{\eta}$ for any $\eta$, this permutation is not complete. As in Chapter~\ref{chapter_2413_balloon_paper}, we call this permutation the \emph{pivot}\extindof{balloon}{pivot}{(in a balloon)}, and denote it by $\psi_c$.
Since $\psi_c$ is not complete, and the highest element of the chain, $\pi$, is complete, there must be a permutation above $\psi_c$ in the chain. We call the permutation immediately above $\psi_c$ in the chain $\phi_c$. Finally, since every chain has at least two elements, there is always a second-highest permutation in the chain, and we call this $\kappa_c$. We remark that $\psi_c$ cannot be a complete permutation, and that $\phi_c$, and every permutation above $\phi_c$ in the chain, must be a complete permutation.
We will partition the chains in the poset into three sets. The first set, $\chain{R}$, consists of those chains where $\kappa_c$ is a proper reduction. The second set, $\chain{M}$, comprises chains $c$ that are not in $\chain{R}$, where $\psi_c$ is matryoshka. Our final set $\chain{G}$ contains the remaining chains. Formally, we define
\begin{align}
\label{defn_block_chains_rmg}
\begin{split}
\chain{C} & = \text{The set of all chains in $[1, \pi]$} \\
\chain{R} & = \{ c : c \in \chain{C}; \kappa_c \text{~is a proper reduction} \} \\
\chain{M} & = \{ c : c \in \chain{C} \setminus \chain{R}; \psi_c \text{~is matryoshka} \} \\
\chain{G} & = \{ c : c \in \chain{C} \setminus (\chain{R} \cup \chain{M}) \}.
\end{split}
\end{align}
We now have all the terminology and notation to prove our first result in this chapter.
\section{The principal M\"{o}bius function of balloon permutations}
Let $\pi = \ballij{\alpha}{\beta}$. Our aim in this section is to derive an expression for $\mobp{\pi}$ as follows.
\begin{theorem}
\label{theorem-pmf-balloon-permutations}
If $\pi = \ballij{\alpha}{\beta}$, and $\chain{G}$ is as defined in Equation~\ref{defn_block_chains_rmg}, then
\[
\mobp{\pi} = - \sum_{\lambda \in \redx{\pi}} \mobp{\lambda} + \sum_{c \in \chain{G}} (-1)^{\order{c}}.
\]
\end{theorem}
\begin{proof}
Since the sets $\chain{R}$, $\chain{M}$, and $\chain{G}$ partition the chains in the poset, we can write
\[
\mobp{\pi} = \sum_{c \in \chain{R}} (-1)^{\order{c}} + \sum_{c \in \chain{M}} (-1)^{\order{c}} + \sum_{c \in \chain{G}} (-1)^{\order{c}}.
\]
We start by showing that the Hall sum for the set $\chain{M}$ is zero. Let $c$ be a chain in $\chain{M}$, $\psi_c = \ballij{\ex{\alpha}}{\eta}$, and $\phi_c = \ballij{\alpha}{\tau}$. Define a function $\Phi$ as follows:
\[
\Phi(c) =
\begin{cases}
c \setminus \{ \ballij{\alpha}{\eta} \} & \text{if $\eta = \tau$,} \\
c \cup \{ \ballij{\alpha}{\eta} \} & \text{otherwise.}
\end{cases}
\]
We have two cases to consider. Either $\eta = \tau$, or $\eta \neq \tau$.
Case 1: $\eta = \tau$. The chain $c$ has a segment $\ballij{\ex{\alpha}}{\eta} < \ballij{\alpha}{\eta}$. If $\eta = \beta$, then $\psi_c = \kappa_c$. Further, $\kappa_c$ is a proper reduction, since $\eta$ is minimal, so $c \in \chain{R}$, which is a contradiction. Thus we must have $\eta < \beta$, and so $\Phi(c) = c^\prime$ is a chain.
Now, since $\eta \neq \beta$, it follows that $\Phi(c)$ contains a segment $\ballij{\ex{\alpha}}{\eta} < \ballij{\alpha}{\zeta}$ for some $\zeta \leq \beta$. Since $\eta < \beta$, $\ballij{\ex{\alpha}}{\eta}$ is not a proper reduction. If $\psi_{c^\prime} = \kappa_{c^\prime}$, then $c^\prime \not\in \chain{R}$. If $\psi_{c^\prime} \neq \kappa_{c^\prime}$, then we must have $\kappa_{c^\prime} = \kappa_c$, and so again $c^\prime \not\in \chain{R}$. Since we have $\psi_{c^\prime} = \psi_c$, it then follows that $c^\prime \in \chain{M}$.
Case 2: $\eta \neq \tau$. The chain $c$ has a segment $\ballij{\ex{\alpha}}{\eta} < \ballij{\alpha}{\tau}$. Clearly, $\ballij{\ex{\alpha}}{\eta} < \ballij{\alpha}{\eta}$, so $c^\prime = \Phi(c)$ can fail to be a chain if and only if $\ballij{\alpha}{\eta} \not< \ballij{\alpha}{\tau}$ or, equivalently, from the nesting condition, $\eta \not< \tau$. We show that $\eta < \tau$ by assuming otherwise, and showing that this leads to a contradiction.
\begin{figure}
\begin{center}
\begin{subfigure}{0.22\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\fill [color=lightgray] (0.5,2.5) rectangle (3.5,6.5);
\fill [color=lightgray] (3.5,0.5) rectangle (7.5,2.5);
\fill [color=lightgray] (3.5,6.5) rectangle (7.5,9.5);
\fill [color=lightgray] (7.5,2.5) rectangle (9.5,6.5);
\plotgrid{9}{9};
\embedast{(1,2)}{blue}; \embedast{(2,8)}{blue}; \embeddot{(3,9)}{black}; \embedast{(4,5)}{blue}; \embedast{(5,3)}{blue}; \embedast{(6,6)}{blue}; \embeddot{(7,4)}{black}; \embeddot{(8,1)}{black}; \embeddot{(9,7)}{black};
\end{tikzpicture}
\caption{}
\label{figure-block-case-2-1}
$\ballij{\ex{\alpha}^1}{\eta} < \ballij{\alpha}{\tau}$.
\end{subfigure}
\phantom{x}
\begin{subfigure}{0.22\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\fill [color=lightgray] (0.5,2.5) rectangle (3.5,6.5);
\fill [color=lightgray] (3.5,0.5) rectangle (7.5,2.5);
\fill [color=lightgray] (3.5,6.5) rectangle (7.5,9.5);
\fill [color=lightgray] (7.5,2.5) rectangle (9.5,6.5);
\plotgrid{9}{9};
\embedcir{(1,2)}{black}; \embedcir{(2,8)}{black}; \embeddot{(3,9)}{black}; \embedast{(4,5)}{blue}; \embedast{(5,3)}{blue}; \embedast{(6,6)}{blue}; \embeddot{(7,4)}{black}; \embeddot{(8,1)}{black}; \embeddot{(9,7)}{black};
\end{tikzpicture}
\caption{}
\label{figure-block-case-2-2}
$\eta < \ballij{\ex{\alpha}^2}{\tau}$.
\end{subfigure}
\phantom{x}
\begin{subfigure}{0.22\textwidth}
\begin{tikzpicture}[scale=0.3]
\fill [color=lightgray] (0.5,2.5) rectangle (3.5,6.5);
\fill [color=lightgray] (3.5,0.5) rectangle (7.5,2.5);
\fill [color=lightgray] (3.5,6.5) rectangle (7.5,9.5);
\fill [color=lightgray] (7.5,2.5) rectangle (9.5,6.5);
\plotgrid{9}{9};
\embedcir{(1,2)}{black}; \embedcir{(2,8)}{black}; \embeddot{(3,9)}{black}; \embedast{(4,5)}{blue}; \embedast{(5,3)}{blue}; \embeddot{(6,6)}{black}; \embeddot{(7,4)}{black}; \embeddot{(8,1)}{black}; \embedast{(9,7)}{blue};
\end{tikzpicture}
\caption{}
\label{figure-block-case-2-3}
$\eta < \ballij{\ex{\alpha}^3}{\tau}$.
\end{subfigure}
\phantom{x}
\begin{subfigure}{0.22\textwidth}
\begin{tikzpicture}[scale=0.3]
\fill [color=lightgray] (0.5,2.5) rectangle (3.5,6.5);
\fill [color=lightgray] (3.5,0.5) rectangle (7.5,2.5);
\fill [color=lightgray] (3.5,6.5) rectangle (7.5,9.5);
\fill [color=lightgray] (7.5,2.5) rectangle (9.5,6.5);
\plotgrid{9}{9};
\embedast{(1,2)}{blue};
\embedast{(2,8)}{blue};
\embeddot{(3,9)}{black};
\embedast{(4,5)}{blue};
\embedast{(5,3)}{blue};
\embeddot{(6,6)}{black};
\embeddot{(7,4)}{black};
\embeddot{(8,1)}{black};
\embedast{(9,7)}{blue};
\end{tikzpicture}
\caption{}
\label{figure-block-case-2-4}
$\ballij{\ex{\alpha}^4}{\eta^\prime} < \ballij{\alpha}{\tau}$.
\end{subfigure}
\end{center}
\caption{Examples to illustrate the steps in case 2. The left-hand side of the inequality is shown as {$\textcolor{blue}\ast$}.}
\end{figure}

To aid the reader, we provide a running example using a generalised balloon, where $\alpha = 24513$, $\tau = 3142$, $\psi_c = 15324$, and $\psi_c$ is matryoshka, with representation $\ballij{\ex{\alpha}}{213}$.
%
Of course, in this example the representation of $\psi_c$ is not minimal, and in addition $\eta < \tau$, but despite this we feel that the diagrams are a helpful aid in understanding the steps in our argument.

The chain $c$ contains a segment $\ballij{\ex{\alpha}^1}{\eta} < \ballij{\alpha}{\tau}$. Figure~\ref{figure-block-case-2-1} shows how $\psi_c = \ballij{\ex{\alpha}^1}{\eta}$ might be embedded in $\phi_c = \ballij{\alpha}{\tau}$.

Since $\ballij{\ex{\alpha}^1}{\eta} < \ballij{\alpha}{\tau}$, the points of $\,\ex{\alpha}^1$ in $\,\ballij{\ex{\alpha}^1}{\eta}$ are a proper subset of the points of $\alpha$ in $\ballij{\alpha}{\tau}$. We can remove the points of $\,\ex{\alpha}^1$ from both sides of the inequality and we obtain $\eta < \ballij{\ex{\alpha}^2}{\tau}$. This is shown in Figure~\ref{figure-block-case-2-2}.

Since, by assumption, $\eta \not< \tau$, it follows that we must be able to find an embedding of $\eta$ in $\ballij{\ex{\alpha}^2}{\tau}$ that uses at least one point that is not in $\tau$, and so we can write $\eta < \ballij{\ex{\alpha}^3}{\tau}$, where the points in $\tau$ that are used are a proper subset of the points of $\tau$ that are used in $\eta < \ballij{\ex{\alpha}^2}{\tau}$. An example is shown in Figure~\ref{figure-block-case-2-3}.

Now note that the points from $\alpha$ in this new embedding of $\eta$, $\ex{\alpha}^3$, must be disjoint from the points from $\alpha$ in the original embedding, $\ex{\alpha}^1$. It follows that we can write $\ballij{\ex{\alpha}^1}{\eta}$ as $\ballij{\ex{\alpha}^4}{\eta^\prime}$, where the points in $\ex{\alpha}^4$ are the union of the points in $\ex{\alpha}^1$ and $\ex{\alpha}^3$. Further, we must have $\order{\eta^\prime} < \order{\eta}$. This is shown in Figure~\ref{figure-block-case-2-4}.

We now have $\psi_c = \ballij{\ex{\alpha}^4}{\eta^\prime}$, where $\order{\eta^\prime} < \order{\eta}$, but this is a contradiction, since $\ballij{\ex{\alpha}^1}{\eta}$ is matryoshka, and so $\eta$ is minimal.
%
Therefore our assumption must be wrong, and so we have $\eta < \tau$, and thus $c^\prime = \Phi(c)$ is a chain.

It remains to show that $c^\prime \in \chain{M}$. Now, $c^\prime$ has a segment $\ballij{\ex{\alpha}}{\eta} < \ballij{\alpha}{\eta} < \ballij{\alpha}{\tau}$. Since $\psi_c = \psi_{c^\prime}$, and $\psi_c$ is matryoshka, the only way for $c^\prime \not\in \chain{M}$ is if $\kappa_{c^\prime}$ is a proper reduction.
Since $\psi_{c^\prime}$ is the pivot of $c^\prime$, and there are at least two permutations above $\psi_{c^\prime}$ in $c^\prime$, it follows that $\kappa_{c^\prime}$ is a complete permutation, and such a permutation cannot be a proper reduction. It follows, therefore, that $c^\prime \not\in \chain{R}$. Since $\psi_{c^\prime} = \psi_c$, this then means that $c^\prime \in \chain{M}$.

We now have that $\Phi$ is a parity-reversing involution on $\chain{M}$, and so, by Corollary~\ref{corollary-halls-corollary}, we have $\sum_{c \in \chain{M}} (-1)^{\order{c}} = 0$. We now have
\[
\mobp{\pi} = \sum_{c \in \chain{R}} (-1)^{\order{c}} + \sum_{c \in \chain{G}} (-1)^{\order{c}}.
\]
The chains in $\chain{R}$ are characterised by the second-highest element being an element of $\redx{\pi}$. Using Corollary~\ref{corollary-hall-sum-second-highest-set} from Chapter~\ref{chapter_2413_balloon_paper} on the sum over chains in $\chain{R}$ completes the proof.
\end{proof}

Theorem~\ref{theorem-pmf-balloon-permutations} is hard to use in practice, because of the second term, $\sum_{c \in \chain{G}} (-1)^{\order{c}}$. There is some numerical evidence, based on analysing permutations with length 12 or less, that this second term is, in fact, zero in many cases.

Clearly, one way to handle the difficulties of this second term would be to ensure that the sum was zero. The easiest case is where $\chain{G} = \emptyset$. This occurs when every permutation in $[1, \pi)$ is matryoshka. We state this formally as
\begin{corollary}[to Theorem~\ref{theorem-pmf-balloon-permutations}]
\label{corollary-balloon-is-matryoshka}
If $\pi = \ballij{\alpha}{\beta}$, and every permutation in $[1, \pi)$ is matryoshka, then
\[
\mobp{\pi} = - \sum_{\lambda \in \redx{\pi}} \mobp{\lambda}.
\]
\end{corollary}
\begin{proof}
If every permutation in $[1, \pi)$ is matryoshka, then $\chain{G} = \emptyset$, and the result follows immediately.
\end{proof}

It is easy to see that if $\pi$ is a direct or skew sum, then every permutation contained in $\pi$ is matryoshka. In the following section we show that if $\pi$ is a wedge permutation, then every permutation contained in $\pi$ is matryoshka.

\section{The principal M\"{o}bius function of wedge permutations}
\label{section-pmf-wedge-permutations}

We will start by showing:
\begin{theorem}
\label{theorem-block-wedges-are-matryoshka}
If $\pi$ is a wedge permutation, as defined in Sub-section~\ref{subsection-wedge-permutations}, then every permutation in $[1, \pi)$ is matryoshka.
\end{theorem}
\begin{proof}
To prove Theorem~\ref{theorem-block-wedges-are-matryoshka}, it is sufficient to show that an arbitrary permutation contained in a wedge permutation is matryoshka.

Let $\pi = \ballwedgek{\alpha}{\beta}$, and let $\sigma < \pi$. Consider any embedding of $\sigma$ in $\pi$. Then if there are $n$ blue points in the embedding, these, from the construction method, will represent the top $n$ points of the permutation $\sigma$.

Now assume we have two embeddings of $\sigma$, say $\ballij{\ex{\alpha}^1}{\eta}$ and $\ballij{\ex{\alpha}^2}{\zeta}$. If $\order{\eta} = \order{\zeta}$, then we have $\eta = \zeta$. Assume now, without loss of generality, that $\order{\eta} < \order{\zeta}$. Then $\eta$ is contained in $\zeta$, as the top $\order{\eta}$ points of $\zeta$ are order-isomorphic to $\eta$.

For any $\sigma$ there will be some $\eta$ that is minimal, noting that this may mean that $\eta = \epsilon$. This then means that any permutation contained in a wedge permutation is matryoshka.
\end{proof}

We now have
\begin{lemma}
\label{lemma-wedge-is-sum-of-reductions}
If $\pi = \ballwedgek{\alpha}{\beta}$, then
\[
\mobp{\pi} = - \sum_{\lambda \in \redx{\pi}} \mobp{\lambda}.
\]
\end{lemma}
\begin{proof}
By Theorem~\ref{theorem-block-wedges-are-matryoshka}, every permutation contained in a wedge permutation is matryoshka. Applying Corollary~\ref{corollary-balloon-is-matryoshka} then gives us the result.
\end{proof}

We have shown that every permutation in a wedge permutation is matryoshka. This is not the case for balloon permutations in general. If
\begin{align*}
\alpha & = 4,6,3,5,8,9,2,12,10,13,11,7,1 \\
\beta & = 2,4,1,3,7,5,8,6 \\
\pi & = \ballgen{5,8}{\alpha}{\beta} \\
& = 4,6,3,5,8,10,12,9,11,15,13,16,14,17,2,20,18,21,19,7,1
\intertext{and}
\sigma & = (2413 \oplus 1 \oplus 3142) \ominus 21 \\
& = 4,6,3,5,7,10,8,11,9,2,1
\end{align*}
as shown in Figure~\ref{figure-balloon-not-matryoshka}, then $\sigma$ is not complete, as $\alpha$ has 13 points, but $\sigma$ only has 11. Further, $\sigma$ is not a proper reduction, as it does not contain an interval copy of $\beta$. Finally, $ E_{\sigma(\pi)} = \{ 2413, 3142, 24135, 13524 \} $ contains two permutations of length 4, $2413$ and $3142$, and no permutations with length less than 4. It follows that $\sigma$ is not matryoshka, and therefore $\sigma$ must be defective.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3]
\fill [color=lightgray] (0.5,8.5) rectangle (5.5,16.5);
\fill [color=lightgray] (5.5,0.5) rectangle (13.5,8.5);
\fill [color=lightgray] (5.5,16.5) rectangle (13.5,21.5);
\fill [color=lightgray] (13.5,8.5) rectangle (21.5,16.5);
\plotpermgrid{4,6,3,5,8,10,12,9,11,15,13,16,14,17,2,20,18,21,19,7,1}
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{4,6,3,5,7,10,8,11,9,2,1}
\end{tikzpicture}
\caption{The permutations $\pi$ and $\sigma$ that demonstrate that $\sigma$ is defective in the interval $[1, \pi]$.}
\label{figure-balloon-not-matryoshka}
\end{figure}

We do not claim that $\pi$ and $\sigma$ above are minimal, although a (fairly restricted) computer search failed to find any smaller counter-examples.

We now show that
\begin{theorem}
\label{theorem-wedge-permutations-multiple-of-beta}
If $\pi = \ballwedgek{\alpha}{\beta}$, then $\mobp{\pi} = c \cdot \mobp{\beta}$, where $c$ is an integer.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{theorem-wedge-permutations-multiple-of-beta}]
Our proof will use Lemmas~\ref{lemma-oneplus-oneplus} and~\ref{lemma-oneplus} from Chapter~\ref{chapter_2413_balloon_paper}, which we restate here for convenience.
\begin{lemma}[%
Lemma~\ref{lemma-oneplus-oneplus} from Chapter~\ref{chapter_2413_balloon_paper}]
\label{lemma-oneplus-oneplus-balloon}
If $\pi$ has a long corner, then $\mobp{\pi} = 0$.
\end{lemma}
\begin{lemma}[%
Lemma~\ref{lemma-oneplus} from Chapter~\ref{chapter_2413_balloon_paper}]
\label{lemma-oneplus-balloon}
If $\pi$ can be written as $\pi = \oneplus\tau$, or $\pi = \tau\oplus 1$ or $\pi = \oneminus\tau$ or $\pi = \tau\ominus 1$, and does not have a long corner, then $\mobp{\pi} = - \mobp{\tau}$.
\end{lemma}
Let $\pi = \ballwedgek{\alpha}{\beta}$. First, consider the case when $\order{\alpha} = 1$, so either $\pi = 1 \oplus \beta$ or $\pi = \beta \ominus 1$.
%
In the first sub-case, if $\beta$ begins with $1$, then $\pi$ has a long corner, and so by Lemma~\ref{lemma-oneplus-oneplus-balloon} $\mobp{\pi} = 0$. If $\beta$ does not begin with $1$, then by Lemma~\ref{lemma-oneplus-balloon} $\mobp{\pi} = - \mobp{\beta}$. The argument for the second sub-case is similar.
%
Thus we have that Theorem~\ref{theorem-wedge-permutations-multiple-of-beta} is true if $\order{\alpha} = 1$.

Now assume that Theorem~\ref{theorem-wedge-permutations-multiple-of-beta} is true for all $\alpha$ with $\order{\alpha} < m$, for some $m \geq 2$, and assume that $\order{\alpha} = m$.
%
Let $\rho$ be a reduction of $\pi$. From the definition of a reduction we have that $\rho = \ballwedgex{\ell}{\alpha^\prime}{\beta}$ or $\rho = \beta$, where $\order{\alpha^\prime} < \order{\alpha}$. In the first case, by the inductive hypothesis, we have that $\mobp{\rho}$ is a multiple of $\mobp{\beta}$. In the second case, trivially, $\mobp{\rho} = \mobp{\beta}$. By Lemma~\ref{lemma-wedge-is-sum-of-reductions}, $\mobp{\pi}$ is the negated sum of $\mobp{\rho}$ over all reductions $\rho$ of $\pi$, and since every summand is an integer multiple of $\mobp{\beta}$, we can see that $\mobp{\pi} = c \cdot \mobp{\beta}$ as required.
\end{proof}

\section{Chapter summary}

The main results from this chapter are expressions for the principal M\"{o}bius function of balloon permutations, and a somewhat simpler expression for the principal M\"{o}bius function of wedge permutations. We also have that if $\pi$ is a wedge permutation that can be written as $\ballwedgek{\alpha}{\beta}$, then $\mobp{\pi}$ is a multiple of $\mobp{\beta}$.

We have observed that if we repeatedly iterate the wedge construction, that is, we define
\begin{align*}
W_{k,\alpha, \beta, 0} & = \beta, \\
W_{k,\alpha, \beta, 1} & = \ballwedgek{\alpha}{\beta}, \\
W_{k,\alpha, \beta, n} & = \ballwedgek{\alpha}{(W_{k,\alpha, \beta, n-1})} \; \text{for $n > 1$,}
\end{align*}
then the sequence given by
\[
\mobp{W_{k,\alpha, \beta, 0}}, \;
\mobp{W_{k,\alpha, \beta, 1}}, \;
\mobp{W_{k,\alpha, \beta, 2}}, \;
\mobp{W_{k,\alpha, \beta, 3}}, \;
\ldots
\]
follows one of the following patterns:
\begin{align*}
&0,0,0,0,0, \ldots & &c,0,0,0,0, \ldots & &c,c,0,0,0, \ldots \\
&c, -c, 0, 0, 0, \ldots & &c, 2c, 4c, 8c, 16c, \ldots & &c, c, 2c, 4c, 8c, \ldots \\
&c, -c, c, -c, c, \ldots & &c, c, c, c, c, \ldots & &c, d, d, d, d, \ldots
\end{align*}
where $c = \mobp{\beta}$, and, where appropriate, $d = \mobp{\ballwedgek{\alpha}{\beta}}$.

For some examples, we have outline proofs that these patterns will continue, based on a specific analysis of the proper reductions. However, we do not yet have a way to characterise wedge permutations in general, so that we can predict their behaviour without conducting a complete analysis of the proper reductions. We remark that this is one of the aspects that we are still researching.

\chapter{Common definitions}
\label{chapter_common_definitions}

A \emph{permutation}\extindex{permutation} of length $n$ is an ordering of the natural numbers $1, \ldots, n$. For short (length less than 10) permutations, we write the permutation without delimiters, so $2413$ represents a permutation of length 4, where the first element has value $2$. For longer permutations, we use commas to delimit values, so, for example, we would write $1,7,4,3,10,9,2,5,8,6$. We may occasionally use commas as delimiters in short permutations, where this will aid the reader.

We use $\pi_i$ to refer to the $i$-th element of the permutation $\pi$, so, for example, if $\pi = 2413$, then $\pi_1 = 2$. We let $\epsilon$ denote the unique permutation of length~$0$. We write the length of a permutation $\pi$ as $\order{\pi}$.

If $L$ is a list of distinct integers, then we can treat $L$ as a permutation by replacing the $i$-th smallest entry with $i$. As an example, if $L = 4,9,2,6$, then $L$ represents the permutation 2413.
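This convention is easy to mechanise. The following is a minimal Python sketch, not taken from any of the papers cited in this thesis (the function name \texttt{standardize} is our own), of the replacement just described:
\begin{verbatim}
def standardize(L):
    # Replace the i-th smallest entry of L with i.
    ranks = {v: i + 1 for i, v in enumerate(sorted(L))}
    return tuple(ranks[v] for v in L)

# Example from the text: the list 4,9,2,6 represents 2413.
assert standardize([4, 9, 2, 6]) == (2, 4, 1, 3)
\end{verbatim}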
A permutation $\pi$ can be represented graphically by plotting the points $(i, \pi_i)$, with $1 \leq i \leq \order{\pi}$, as shown in Figure~\ref{figure_example_permutation_plot}. Throughout we treat a permutation and its plot interchangeably.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{1,7,4,3,10,9,2,5,8,6}
\opendot{(3,4)}
\opendot{(6,9)}
\opendot{(7,2)}
\opendot{(10,6)}
\end{tikzpicture}
\caption{The permutation $1,7,4,3,10,9,2,5,8,6$, highlighting one of the copies of $2413$ that it contains.}
\label{figure_example_permutation_plot}
\end{figure}

The set of all permutations of length $n$ is written $\cS_n$.

A sequence of numbers $a_1,a_2,\dots,a_n$ is \emph{order-isomorphic}\extindex{order-isomorphic} to a sequence $b_1,b_2,\allowbreak\dots,b_n$ if for every $i,j \in [1,n]$ we have $a_i<a_j \Leftrightarrow b_i<b_j$. A permutation $\pi \in\cS_n$ \emph{contains}\extindex[permutation]{contains} a permutation $\sigma\in\cS_k$ as a \emph{pattern}\extindex[permutation]{pattern} if $\pi$ has a subsequence of length $k$ order-isomorphic to~$\sigma$. We say that $\pi$ \emph{avoids}\extindex[permutation]{avoids} $\sigma$ if $\pi$ does not contain $\sigma$.

There are other ways to define pattern containment, some of which are more specific, and others more general. We discuss some of these in Chapter~\ref{chapter_background_and_history}, and in that chapter we refer to containment as defined above as \emph{classic pattern containment}\extindex{classic pattern containment}. Throughout the rest of this document we omit the qualifier ``classic''.

As an example of containment, the permutation $2413$ is contained in the permutation $1,7,4,3,10,9,2,5,8,6$, as shown in Figure~\ref{figure_example_permutation_plot}.

If $\sigma$ is contained in $\pi$, then there will be at least one set of points of $\pi$, with cardinality $\order{\sigma}$, such that the set of points is order-isomorphic to $\sigma$. We call such a set of points an \emph{embedding}\extindex[permutation]{embedding} of $\sigma$ into $\pi$. The points highlighted in Figure~\ref{figure_example_permutation_plot}, (4,9,2,6), represent one possible embedding of $2413$ in $1,7,4,3,10,9,2,5,8,6$.

Typically, where embeddings are used, the arguments require that only some of the embeddings are counted, and these are generally referred to as \emph{normal embeddings}\extindex{normal embedding}. It is notable that the precise definition of a normal embedding varies between papers, and indeed, in Brignall et al.~\cite{Brignall2020}, several different definitions of normal embedding are used.

We note here that one problem with embeddings arises in cases such as $\mobfn{1}{24153}$. Here there are plainly only five ways to embed the permutation $1$ into $24153$; however, $\mobfn{1}{24153} = 6$, and thus the embedding approach is not sufficient. One possible solution to this issue is to count the normal embeddings and then add a correction factor.

The set of all permutations, ordered by pattern containment, is a \emph{poset}\extindex{poset} (partially ordered set). If we have two permutations $\sigma$ and $\pi$ such that $\sigma$ is not contained in $\pi$, and $\pi$ is not contained in $\sigma$, then we say that $\sigma$ and $\pi$ are \emph{incomparable}\extindex[permutation]{incomparable}.

\extindex[poset]{interval}
A closed interval $[\sigma, \pi]$ in a poset is the set defined as $\{ \tau : \sigma \leq \tau \leq \pi \}$.
A half-open interval $[\sigma, \pi)$ is the set $\{ \tau : \sigma \leq \tau < \pi \}$, and the open interval $(\sigma, \pi)$ is the set $\{ \tau : \sigma < \tau < \pi \}$.

The \emph{M\"{o}bius function}\extindex[M\"{o}bius function]{$\mobfn{\sigma}{\pi}$}, $\mobfn{\sigma}{\pi}$, is defined for an ordered pair of elements $(\sigma, \pi)$ from any poset. If $\sigma \not\leq \pi$, then $\mobfn{\sigma}{\pi} = 0$, and if $\sigma = \pi$, then $\mobfn{\sigma}{\pi} = 1$. The remaining possibility is that $\sigma < \pi$, and in this case we have
\begin{align}
\mobfn{\sigma}{\pi} & = - \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda}.
\label{equation_mobius_function}
\end{align}
If we have $\sigma < \pi$, then from the definition above we also have
\begin{align*}
\sum_{\lambda \in [\sigma, \pi]} \mobfn{\sigma}{\lambda} & = 0.
\end{align*}

For posets that possess a unique smallest element $\hat{0}$, we define the \emph{principal M\"{o}bius function}\extindex[principal M\"{o}bius function]{$\mobp{\pi}$}, $\mobp{\pi} = \mobfn{\hat{0}}{\pi}$.

We will occasionally want to discuss the M\"{o}bius function of a poset $P$ with a unique minimal element $\hat{0}$, and unique maximal element $\hat{1}$. Here, we define $ \mobp{P} = \mobfn{\hat{0}}{\hat{1}} = - \sum_{\hat{0} \leq x < \hat{1}} \mobfn{\hat{0}}{x} $.

The \emph{Hasse diagram}\extindex{Hasse diagram} of a poset $P$ is a directed graph, where two vertices $u$ and $w$ are connected from $u$ to $w$ by a directed arc if and only if $w < u$, and there is no $v$ such that $w < v < u$. The Hasse diagram of the permutation poset $[1, 13524]$ is shown in Figure~\ref{figure_example_hasse_diagram}. Note that we sometimes omit the arrowheads for clarity, as Hasse diagrams, by convention, are always drawn with the arc direction downwards.
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1,yscale=2]
\pnode{13524}{0}{5};
\pnode{1243}{-4}{4};
\pnode{1423}{-2}{4};
\pnode{1324}{ 0}{4};
\pnode{1342}{ 2}{4};
\pnode{2413}{ 4}{4};
\pnode{123}{-4}{3};
\pnode{132}{-2}{3};
\pnode{312}{ 0}{3};
\pnode{213}{ 2}{3};
\pnode{231}{ 4}{3};
\pnode{12}{-2}{2};
\pnode{21}{ 2}{2};
\pnode{1}{0}{1};
\ddline{13524}{1243,1423,1324,1342,2413};
\ddline{1243}{123,132}
\ddline{1423}{123,132,312}
\ddline{1324}{123,132,213}
\ddline{1342}{123,132,231}
\ddline{2413}{132,213,231,312}
\ddline{123}{12}
\ddline{132}{12,21}
\ddline{213}{12,21}
\ddline{312}{12,21}
\ddline{231}{12,21}
\ddline{12}{1}
\ddline{21}{1}
\end{tikzpicture}
\caption{The Hasse diagram of the poset interval $[1, 13524]$.}
\label{figure_example_hasse_diagram}
\end{figure}

A \emph{chain}\extindex{chain} in a poset interval $[\sigma, \pi]$ is, for our purposes, a subset of the elements in the interval $[\sigma, \pi]$, where the subset includes the elements $\sigma$ and $\pi$, and every distinct pair of elements of the subset are comparable. This last clause means that the subset has a total order. If a chain $c$ has $k$ elements, then we say that the length of $c$, written $\order{c}$, is $k - 1$.

One way to visualise a chain is to first choose a path in the Hasse diagram from the highest entry to the lowest, and then a chain is found by choosing a (possibly improper) subset of the elements on the path, ensuring that the first and last elements (the upper and lower bounds of the poset) are included in the subset.
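To make the chain terminology concrete, here is a hedged Python sketch (all names are ours, and the encoding of the poset is an assumption made purely for illustration) that enumerates the chains of a small interval; the signed count it produces is exactly the quantity that appears in Hall's Theorem, stated next. We use the interval $[1, 321]$, whose elements $1 < 21 < 321$ form a total order.
\begin{verbatim}
from itertools import combinations

def chains(interval, leq, bottom, top):
    # All chains in [bottom, top]: subsets that contain both
    # bounds and in which every pair of elements is comparable.
    middle = [x for x in interval if x not in (bottom, top)]
    found = []
    for r in range(len(middle) + 1):
        for extra in combinations(middle, r):
            c = set(extra) | {bottom, top}
            if all(leq(a, b) or leq(b, a)
                   for a, b in combinations(c, 2)):
                found.append(c)
    return found

# The interval [1, 321]: elements 1 < 21 < 321.
rank = {"1": 0, "21": 1, "321": 2}
cs = chains(rank, lambda a, b: rank[a] <= rank[b], "1", "321")
# Two chains: {1, 321} of length 1 and {1, 21, 321} of length 2,
# so the signed count is (-1)^1 + (-1)^2 = 0 = mu(1, 321).
assert sorted(len(c) - 1 for c in cs) == [1, 2]
\end{verbatim}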
Chains in a poset interval are related to the M\"{o}bius function by Hall's Theorem~\cite[Proposition 3.8.5]{Stanley2012}, which says that
\begin{align*}
\mobfn{\sigma}{\pi} = \sum_{c \in \chain{C}(\sigma, \pi)} (-1)^{\order{c}} = \sum_{i=1}^{\order{\pi} - \order{\sigma}} (-1)^i K_i
\end{align*}
where $\chain{C}(\sigma, \pi)$ is the set of chains in the poset interval $[\sigma, \pi]$, and $K_i$ is the number of chains of length $i$.

If $\chain{C}$ is a subset of the chains in some poset interval $[\sigma, \pi]$, then the \emph{Hall sum}\extindex[chain]{Hall sum} of $\chain{C}$ is $\sum_{c \in \chain{C}} (-1)^{\order{c}}$.

A \emph{parity-reversing involution}\extindex{parity-reversing involution}, $\Phi: \chain{C} \to \chain{C}$, is an involution such that for any $c \in \chain{C}$, the parities of $\order{c}$ and $\order{\Phi(c)}$ are different. A simple corollary to Hall's Theorem is
\begin{corollary}
\label{corollary-halls-corollary}
If we can find a set of chains $\chain{C}$ with a parity-reversing involution, then the Hall sum of $\chain{C}$ is zero.
\end{corollary}
\begin{proof}
Because there is a parity-reversing involution, the number of chains in $\chain{C}$ with odd length is equal to the number of chains with even length, so $\sum_{c \in \chain{C}} (-1)^{\order{c}} = 0$.
\end{proof}

In Chapters~\ref{chapter_2413_balloon_paper} and~\ref{chapter_balloon_permutations_preprint} we will want to show that there is a parity-reversing involution on a set of chains $\chain{C}$. Our basic methodology, given a set of chains $\chain{C}$, and a chain $c \in \chain{C}$, will be to construct a chain $c^\prime$ by using a parity-reversing involution $\Phi$. Strictly speaking, $\Phi$ is a function that maps a set of permutations (which is a chain) to a set of permutations (which may not be a chain). As examples, if $\Phi$ removes the largest or smallest element of $c$, or adds an element so that the resulting set does not have a total order, then $\Phi(c)$ is not a chain. To show that $\Phi$ is a parity-reversing involution we will need to show that $\Phi(c)$ is a chain in $\chain{C}$, and that $c$ and $\Phi(c)$ have opposite parities. In our discussions, we will typically set $c^\prime = \Phi(c)$, and then show that the set of permutations $c^\prime$ is a chain. We will then, without further comment, treat $c^\prime$ as a chain.

When discussing chains, in general we will only be interested in a small subset of the chain containing two or three elements. We say that a \emph{segment}\extindex[chain]{segment} of some chain $c$ is a non-empty subset of the elements in $c$ with the property that any element not in the segment is either less than every element in the segment, or is greater than every element in the segment.

A \emph{direct sum}\extindex[permutation]{direct sum} of two permutations $\alpha$ and $\beta$ of lengths $m$ and $n$ respectively is the permutation $ \alpha_1, \ldots, \alpha_m, \beta_1 + m, \ldots, \beta_n + m $. We write a direct sum as $\alpha \oplus \beta$. A \emph{skew sum}\extindex[permutation]{skew sum}, $\alpha \ominus \beta$, is the permutation $ \alpha_1 + n, \ldots, \alpha_m + n, \beta_1, \ldots, \beta_n $. As examples, $321 \oplus 213 = 321546$, and $321 \ominus 213 = 654213$; these are shown in Figure~\ref{figure_examples-of-sums}.

A \emph{sum-indecomposable}\extindex[permutation]{sum-indecomposable} (resp. \emph{skew-indecomposable}\extindex[permutation]{skew-indecomposable}) permutation is a permutation that cannot be written as the direct sum (resp. skew sum) of two smaller permutations.
If a permutation is not sum-indecomposable, then it is \emph{sum-decomposable}\extindex[permutation]{sum-decomposable}, and if a permutation is not skew-indecomposable, then it is \emph{skew-decomposable}\extindex[permutation]{skew-decomposable}.
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{3,2,1,5,4,6}
\end{tikzpicture}
\caption*{$321 \oplus 213$}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{6,5,4,2,1,3}
\end{tikzpicture}
\caption*{$321 \ominus 213$}
\end{subfigure}
\end{center}
\caption{Examples of direct and skew sums.}
\label{figure_examples-of-sums}
\end{figure}

Given a permutation $\pi$, the \emph{finest}\extindex{finest sum decomposition} sum decomposition (resp. skew decomposition) of $\pi$ is a decomposition into the maximum number of sum-indecomposable (resp. skew-indecomposable) permutations. As examples, using Figure~\ref{figure_examples-of-sums}, the finest sum decomposition of $321546$ is $321 \oplus 21 \oplus 1$, and the finest skew-decomposition of $654213$ is $1 \ominus 1 \ominus 1 \ominus 213$.

Let $\alpha$ be a permutation, and $r$ a positive integer. Then $\nsums{r}{\alpha}$\extindex{$\nsums{r}{\alpha}$} is $\alpha \oplus \alpha \oplus \ldots \oplus \alpha \oplus \alpha$, with $r$ occurrences of $\alpha$. If $S$ is a set of permutations, then $\nsums{r}{S} = \cup_{\lambda \in S} \{ \nsums{r}{\lambda} \}$.

A \emph{layered}\extindex[permutation]{layered} permutation is a permutation that can be written as the direct sum of one or more decreasing permutations. Egge and Mansour, in~\cite{Egge2004}, show that layered permutations can also be defined as permutations that avoid the permutations $231$ and $312$. The first example in Figure~\ref{figure_examples-of-sums} is a layered permutation.

If a permutation $\pi$ can be written as $\oneplus\oneplus\tau$, $\oneminus\oneminus\tau$, $\tau\oplus 1\plusone$, or $\tau\ominus 1\minusone$, where $\tau$ is non-empty (so $\order{\pi} \geq 3$), then we say that $\pi$ has a \emph{long corner}\extindex{long corner}.

We will occasionally want to discuss situations where some permutation $\pi$ is known to have a sum-decomposable (resp. skew-decomposable) decomposition, but we do not know exactly which permutations form the decomposition. In such cases we will write $\pi = \alpha_1 \oplus \ldots \oplus \alpha_n$ or $\pi = \alpha_1 \ominus \ldots \ominus \alpha_n$. Similarly, if we want to discuss an arbitrary set of permutations then we will write $\{ \alpha_1, \ldots, \alpha_n \}$. It will always be clear from the context whether $\alpha_i$ refers to the $i$-th element of the permutation $\alpha$, the $i$-th permutation in a sum, or the $i$-th permutation in a set of permutations.

An \emph{interval}\extindex[permutation]{interval} in a permutation $\pi$ is a non-empty contiguous set of indexes $i, i+1, \ldots, j$ such that the set of values $\{ \pi_i, \pi_{i+1}, \ldots, \pi_j \}$ is also contiguous. Every permutation $\pi$ has intervals of length 1 and of length $\order{\pi}$, which we call \emph{trivial intervals}\extindex{trivial interval}. A \emph{simple}\extindex[permutation]{simple} permutation is a permutation that only has trivial intervals. As examples, $1324$ is not simple, as, for example, the second and third points $(32)$ form a non-trivial interval, whereas $2413$ is simple.
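The defining condition for an interval is easy to test directly: a window of positions $i, \ldots, j$ has a contiguous set of values exactly when the maximum minus the minimum of those values equals $j - i$. The following hedged Python sketch (the function name is ours) uses this to check simplicity by brute force:
\begin{verbatim}
def is_simple(pi):
    # pi is a tuple containing the values 1..n. A window of
    # positions i..j is an interval exactly when its values
    # satisfy max - min == j - i; a simple permutation has no
    # such window of length strictly between 1 and n.
    n = len(pi)
    for i in range(n):
        for j in range(i + 1, n):
            if j - i + 1 < n and \
               max(pi[i:j + 1]) - min(pi[i:j + 1]) == j - i:
                return False
    return True

# Examples from the text: 1324 is not simple; 2413 is simple.
assert not is_simple((1, 3, 2, 4))
assert is_simple((2, 4, 1, 3))
\end{verbatim}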
We say that $\pi$ has an \emph{interval copy}\extindex[permutation]{interval copy} of a permutation $\alpha$ if it contains an interval of length $\order{\alpha}$ whose elements form a subsequence order-isomorphic to~$\alpha$.

We note here that the term ``interval'' is used in relation to both posets and permutations. This is standard terminology in the field, and when we use the term ``interval'' it will be clear from the context whether we are referring to a poset interval or an interval in a permutation.

An interval of length 2 is termed an \emph{adjacency}\extindex{adjacency} in this thesis. An adjacency is clearly order-isomorphic to either $12$, an \emph{up-adjacency}\extindex{up-adjacency}, or to $21$, a \emph{down-adjacency}\extindex{down-adjacency}. If a permutation contains at least one up-adjacency and at least one down-adjacency, then we say that the permutation has \emph{opposing adjacencies}\extindex{opposing adjacencies}. An interval of length 3 that is monotonic, that is, order-isomorphic to either 123 or 321, is a \emph{triple-adjacency}\extindex{triple-adjacency}. We note here that some sources use ``adjacency'' to refer to a non-trivial interval of any length that is monotonic.

A \emph{descent}\extindex{descent} in a permutation $\pi$ is a position $i$ such that $\pi_i > \pi_{i+1}$. Similarly, an \emph{ascent}\extindex{ascent} in a permutation $\pi$ is a position $i$ such that $\pi_i < \pi_{i+1}$.

A \emph{permutation class}\extindex{permutation class} is a set of permutations $C$ with the property that if $c \in C$, and $d < c$, then $d \in C$. Every permutation class can be defined by the minimal set of permutations that are not contained in the class, and this minimal set is referred to as the \emph{basis}\extindex[permutation class]{basis}. If a permutation class $C$ has basis $B$, then we write $C = \Av (B)$. Where we want to discuss a permutation class $C$ that contains a specific set of (normally simple) permutations, we refer to $C$ as a \emph{hereditary class}\extindex[permutation class]{hereditary class}. There is no difference between a permutation class and a hereditary class; the distinction is simply used to draw attention to the properties of the class that we are discussing.

Permutations can be represented by plotting points in a square grid, as described earlier. It is clear that any symmetry of the square, if applied to a permutation plot, will result in another permutation plot. If $\alpha$ is a permutation, then a reflection in a vertical bisector of the plot is called a \emph{reversal}\extindex[permutation]{reversal}, written $\alpha^R$, and a reflection in a horizontal bisector of the plot is called a \emph{complement}\extindex[permutation]{complement}, written $\alpha^C$. The inverse of a permutation, written $\alpha^{-1}$, is also a symmetry. These three operations are the generating set of the group of symmetries of permutations. We can now see that for any permutations $\sigma$ and $\pi$, and any symmetry $S$,
\[
\mobfn{\sigma}{\pi} = \mobfn{\sigma^S}{\pi^S}.
\]

We will occasionally want to discuss permutations where we want a unique representative from the symmetries. We say that such a representative is the \emph{canonical}\extindex[permutation]{canonical} form of the permutation, and for our purposes we choose the symmetry which is smallest under the lexicographic order. As an example, 2413 and 3142 are symmetries of one another, and the canonical representation is 2413.
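The canonical form can be computed by closing a permutation's orbit under the three generating operations and taking the lexicographic minimum; the following hedged Python sketch (all names are ours) does exactly this:
\begin{verbatim}
def reverse(p):
    return p[::-1]

def complement(p):
    n = len(p)
    return tuple(n + 1 - x for x in p)

def inverse(p):
    q = [0] * len(p)
    for i, x in enumerate(p):
        q[x - 1] = i + 1
    return tuple(q)

def canonical(p):
    # Close the orbit of p under the generators of the symmetry
    # group, then take the lexicographically smallest image.
    orbit, frontier = set(), {tuple(p)}
    while frontier:
        orbit |= frontier
        frontier = {f(q) for q in frontier
                    for f in (reverse, complement, inverse)} - orbit
    return min(orbit)

# Example from the text: 2413 and 3142 are symmetries of one
# another, and the canonical representation is 2413.
assert canonical((3, 1, 4, 2)) == (2, 4, 1, 3)
\end{verbatim}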
\chapter{Conclusion}
\label{chapter_conclusion}

\section{A review of our results}

Our journey through the M\"{o}bius function on the permutation pattern poset started, in Chapter~\ref{chapter_incosc_paper}, by looking at ways in which we could calculate the value of the M\"{o}bius function in a more efficient way than using the recursive definition of Equation~\ref{equation_mobius_function}, and here we found that we could slightly reduce the number of permutations that needed to be considered in the general case, and very significantly reduce it in the case of increasing oscillations.

Chapter~\ref{chapter_oppadj_paper} continues this theme by showing that if a permutation has opposing adjacencies, then the value of the principal M\"{o}bius function is zero. We also describe other cases where, if $\sigma$ meets certain criteria, then any permutation $\pi$ that contains $\sigma$ as an interval has $\mobp{\pi} = 0$. The main result from this chapter is, however, not the results that provide a way to determine the value of the M\"{o}bius function, but the result that, asymptotically, 39.95\%~of permutations are M\"{o}bius zeros. This represents a move away from finding ways to determine the value of the M\"{o}bius function towards ways to better understand the permutation pattern poset.

Chapter~\ref{chapter_2413_balloon_paper} finds a recursion for the value of the principal M\"{o}bius function of 2413-balloons, and uses this to show that $\mobmaxx{n}$ grows at least exponentially. In this chapter the main result is the exponential growth, and to a large extent the results for the values of the principal M\"{o}bius function of 2413-balloons are the mechanism we use to prove it.

Our results from Chapter~\ref{chapter_balloon_permutations_preprint} show that for balloon permutations generally the value of the principal M\"{o}bius function is, essentially, related to the permutation $\beta$ plus a correction factor. For wedge permutations, this correction factor is guaranteed to be zero.

If we consider the results directly relating to the M\"{o}bius function from Chapter~\ref{chapter_oppadj_paper}, one aspect that could be used to distinguish them is that they are all related to finding a set of permutations $S$ with the property that if a larger permutation contains each permutation in $S$ as an interval copy, then the value of the M\"{o}bius function is zero.

Now compare this with Chapter~\ref{chapter_2413_balloon_paper}. Given some permutation $\pi = \ball{2413}{\beta}$, first note that $\beta$ is an interval in $\pi$, and so we can (somewhat trivially) claim that $\pi$ contains an interval copy of $\beta$. Our results, with some small exceptions, all give the value of $\mobp{\pi}$ as a multiple of $\mobp{\beta}$. We also note that Conjectures~\ref{conjecture-most-2413-balloons} and~\ref{conjecture-1-0-2413-balloons} relate to permutations $\pi = \ballij{2413}{\beta}$, and again here we see that $\beta$ occurs as an interval copy in $\pi$.

Finally, if we look at the results from Chapter~\ref{chapter_balloon_permutations_preprint}, again we can see that $\beta$ occurs as an interval copy in $\ballij{\alpha}{\beta}$ and, trivially, in $\ballwedgek{\alpha}{\beta}$.

Our suggestion here is that, in some ill-defined sense, intervals in permutations have a marked effect on the value of the principal M\"{o}bius function. In some cases, the presence of an interval copy or copies guarantees that the value of the principal M\"{o}bius function is zero.
In other cases, where we have an interval copy of $\beta$, we see that the value of the principal M\"{o}bius function can be expressed in terms of $\mobp{\beta}$, with, possibly, some correction factor, although for 2413-balloons and wedge permutations, the correction factor is zero for all but trivial cases.

\section{Further research into the M\"{o}bius function}

In the chapter summaries we have already mentioned several possible avenues for future research. We now consider more general avenues for further research, and we divide these into two main areas.

The first area is research that will give expressions or recursions for the value of the M\"{o}bius function on some interval. Current results, and indeed our active research, tend to examine permutations that have some ``structure''. We do not intend to try to formally define structure; rather we claim that it is a property that is generally recognisable when it is seen. The classic example of a set of permutations with structure is, we suggest, the decomposable and separable permutations, studied by Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}, as, in some ill-defined sense, these permutations have a lot of structure. Our own research into permutations with opposing adjacencies, and into permutations that are 2413-balloons, again looks at permutations with structure.

The second area is research into what we call ``global'' properties of the permutation pattern poset. The result from Chapter~\ref{chapter_oppadj_paper} that 39.95\%~of permutations are M\"{o}bius zeros is one example, as is the exponential growth rate of the principal M\"{o}bius function proved in Chapter~\ref{chapter_2413_balloon_paper}.

\subsection{The M\"{o}bius function of simple permutations}

Recall that the simple permutations are permutations that only contain trivial intervals. In other areas of permutation patterns, such as the enumeration of permutation classes, the simple permutations underpin many results. For further details, we refer the reader to the survey article by Brignall~\cite{Brignall2010}. By contrast, little is known about the principal M\"{o}bius function of simple permutations.

The first reference to this area occurs in the concluding remarks of Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}. Here (using the terminology of this thesis) they give a sequence of values of $\mobmaxx{n}$ for $n = 1, \ldots, 11$, and they note that there is, up to symmetry, a unique permutation $\pi_n$ of length $n$ such that $\order{\mobp{\pi_n}} = \mobmaxx{n}$. They further note that $\pi_n$ is simple, except for the case $n=3$ (there are no simple permutations of length 3). The author has confirmed that this is also the case for permutations of length $12$ and $13$ (see Table~\ref{table_values_of_true_mob_min_max} on page~\pageref{table_values_of_true_mob_min_max}).

To understand the principal M\"{o}bius function of simple permutations, we begin by considering some examples that are easy to describe, as they have recognisable structure.

A simple parallel alternation is a permutation $\pi$ with even length $2 n$, where $ \pi = 2, 4, \ldots, 2n, 1, 3, \ldots, 2n-1, $ or any symmetry of this sequence. We show an example of a simple parallel alternation in Figure~\ref{figure_examples_simple_parallel_alternation}.
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.30\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{2,4,6,8,10,1,3,5,7,9}
\end{tikzpicture}
\end{subfigure}
\end{center}
\caption{A simple parallel alternation.}
\label{figure_examples_simple_parallel_alternation}
\end{figure}

Smith's paper on permutations with one descent~\cite{Smith2013} covers simple parallel alternations, and so we have an explicit expression for their principal M\"{o}bius function value. If $\pi$ is a simple parallel alternation of length $n$, then
\[
\mobp{\pi} = -\dbinom{\frac{n}{2} + 1}{2}.
\]

Increasing oscillations are also simple permutations. Our paper on permutations with an indecomposable lower bound~\cite{Brignall2017a}, on which Chapter~\ref{chapter_incosc_paper} is based, includes a recursion for the principal M\"{o}bius function of increasing oscillations. We are unaware of any other published results for families of simple permutations.

We now provide several examples of simple permutations with a recognisable structure, and give conjectures for the value of the principal M\"{o}bius function.

We have already described simple parallel alternations. We extend our vocabulary to define an \emph{alternation}\extindex[permutation]{alternation} to be a permutation where every odd entry is to the right of every even entry, or a symmetry of such a permutation. A \emph{wedge alternation}\extindex[permutation]{wedge alternation} is then an alternation where the two sets of entries point in opposite directions, and an example is shown in Figure~\ref{figure_examples_of_type_1_2_wedge_simples}.

Wedge alternations are not simple, but a single point can be added in one of two ways to form a simple permutation. These are called type 1 and type 2 wedge simples, written $W_1(n)$ and $W_2(n)$, where we require $n > 3$. The wedge simples appear to have been introduced in~\cite{Brignall2008a}, where we have
\begin{theorem}[{%
Brignall, Huczynska and Vatter \cite[Theorem 3]{Brignall2008a}}]
For any fixed $k$, every sufficiently long simple permutation contains either a proper pin sequence of length at least $k$, a parallel alternation of length at least $k$, or a wedge simple permutation of length at least $k$.
\end{theorem}
We refer the interested reader to~\cite{Brignall2008a} for a definition of ``proper pin sequence''.

The type 1 and type 2 wedge simples have the form
\begin{align*}
W_1(n) & =
\begin{cases}
3, 5, \ldots, n-1, 1, n, n-2, \ldots , 2 & \text{if $n$ is even,} \\
3, 5, \ldots, n, 1, n-1, n-3, \ldots, 2 & \text{if $n$ is odd,} \\
\end{cases}
\intertext{and}
W_2(n) & =
\begin{cases}
2, 4, \ldots, n-2, n, n-3, n-5, \ldots, 1, n-1 & \text{if $n$ is even,} \\
2, 4, \ldots, n-3, n, n-2, n-4, \ldots, 1, n-1 & \text{if $n$ is odd,} \\
\end{cases}
\end{align*}
and every symmetry of these permutations. Examples of type 1 and type 2 wedge simples are shown in Figure~\ref{figure_examples_of_type_1_2_wedge_simples}.
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.30\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{2,4,6,8,10,9,7,5,3,1}
\end{tikzpicture}
\caption*{A wedge alternation.}
\end{subfigure}
\begin{subfigure}[t]{0.30\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{3,5,7,9,1,10,8,6,4,2}
\end{tikzpicture}
\caption*{A type 1 wedge simple}
\end{subfigure}
\begin{subfigure}[t]{0.30\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{2,4,6,8,10,7,5,3,1,9}
\end{tikzpicture}
\caption*{A type 2 wedge simple}
\end{subfigure}
\end{center}
\caption{Examples of a wedge alternation, and type 1 and type 2 wedge simple permutations.}
\label{figure_examples_of_type_1_2_wedge_simples}
\end{figure}

We calculated the value of the principal M\"{o}bius function of $W_1(n)$ and $W_2(n)$ for $n = 4, \dots, 30$, and the results are shown in Tables~\ref{table-values-of-w1} and~\ref{table-values-of-w2} in Appendix~\ref{chapter_appendix-a}. These values suggest the following conjecture:
\begin{conjecture}
For all $n > 3$,
\begin{align*}
\mobp{W_1(n)} & = (-1)^{n} 3(3-n) \\
\mobp{W_2(n)} & = (-1)^{n} (1-n).
\end{align*}
\end{conjecture}

Schmerl and Trotter~\cite{Schmerl1993} show that every simple permutation of length $n$ contains a simple permutation of length $n-1$ or $n-2$. Further, they show that the only \emph{exceptional}\extindex[permutation]{exceptional} simple permutations, which are those permutations of length $n$ that do not contain a simple permutation of length $n-1$, are the parallel alternations, which we discussed above.

The \emph{nearly-exceptional}\extindex[permutation]{nearly-exceptional} simple permutations are permutations which are not exceptional, but where there is only a single point which, when deleted, results in a smaller simple permutation; deleting any other point results in a non-simple permutation. The nearly-exceptional permutations have not, as far as we are aware, appeared in any publication. They were described in a talk given by Robert Brignall at Permutation Patterns 2010~\cite{Brignall2010a}.

There are three types of nearly-exceptional simple permutations, which we refer to as $E_1 (2n, k)$, $E_2 (2n)$ and $O (2n+1, k)$. The first parameter gives the length of the permutation, which, for simplicity, we require to be greater than 5. For $E_{1} (2n, k)$, the second parameter, $k$, must satisfy $1 \leq k \leq n-2$, and for $O (2n+1, k)$, $k$ must satisfy $1 \leq k \leq n-1$. Formally, up to symmetry, we have
\begin{align*}
E_{1}(2n,k) ={ } & n+1, 1, n+2, 2, \ldots, \\& k, n+k+1, n, n+k+2, n-1, \ldots, \\& 2n, k+1;
\\
\intertext{~}
E_2 (2n) ={ } & n, 1, n+1, 2, \ldots, 2n-2, 2n, n-1, 2n-1;
\\
\intertext{and}
O (2n+1, k) ={ } & n-k+1, 2n+1, n-k+2, 2n, \ldots, \\& 2n+2 - k, n + 1, n-k, n + 2, n-k-1, \ldots, \\& 1, 2n-k+1.
\end{align*}
Examples of $E_1 (2n, k)$, $E_2 (2n)$ and $O (2n+1, k)$ are shown in Figure~\ref{figure_examples_of_nearly_exceptional_simples}.
\begin{figure} \begin{center} \begin{subfigure}[t]{0.30\textwidth} \centering \begin{tikzpicture}[scale=0.3] \plotpermgrid{6,1,7,2,8,5,9,4,10,3} \end{tikzpicture} \caption*{$E_{1}(10,2)$} \end{subfigure} \begin{subfigure}[t]{0.30\textwidth} \centering \begin{tikzpicture}[scale=0.3] \plotpermgrid{5,1,6,2,7,3,8,10,4,9} \end{tikzpicture} \caption*{$E_{2} (10)$} \end{subfigure} \begin{subfigure}[t]{0.30\textwidth} \centering \begin{tikzpicture}[scale=0.3] \plotpermgrid{4,11,5,10,6,3,7,2,8,1,9} \end{tikzpicture} \caption*{$O(11,2)$} \end{subfigure} \end{center} \caption{Examples of nearly exceptional simple permutations.} \label{figure_examples_of_nearly_exceptional_simples} \end{figure} We calculated the value of the principal M\"{o}bius function of $E_1(2n,k)$, $E_2(2n)$, and $O(2n+1,k)$, for $n = 3, \ldots, 15$, and all valid values of $k$, and the results are shown in Tables~\ref{table-values-of-e1-3-12}, \ref{table-values-of-e2}, and~\ref{table-values-of-o} in Appendix~\ref{chapter_appendix-a}. These values suggest the following conjectures: \begin{conjecture} For all $n \geq 5$ and for all $k$ with $1 \leq k \leq n-2$, \[ \mobp{E_{1} (2n, k)} = - \dfrac{(4n - 2) + k^2 - k}{2}. \] \end{conjecture} \begin{conjecture} For all $n \geq 5$, \[ \mobp{E_2 (2n)} = \dfrac{n - n^2 - 4}{2}. \] \end{conjecture} \begin{conjecture} For all $n \geq 5$ and for all $k$ with $1 \leq k \leq n-1$, \[ \mobp{O (2n+1, k)} = 2n. \] \end{conjecture} We remark that these permutations do exhibit a significant amount of structure, and we suggest that finding an expression for the principal M\"{o}bius function of other, less-structured, simple permutations will be difficult. \section{Further research into global properties of the poset} The second area for future research is to examine global attributes of the permutation pattern poset. Our results for the proportion of permutations that are M\"{o}bius zeros in Chapter~\ref{chapter_oppadj_paper} based on~\cite{Brignall2020}, and our result on the growth of $\mobmaxx{n}$ in Chapter~\ref{chapter_2413_balloon_paper} based on~\cite{Marchant2020} are examples of existing results. \subsection{\texorpdfstring{% Extremal values of $\mobp{\pi}$ as a function of the length of the permutations}{% Extremal values of the principal M\"{o}bius function as a function of the length of the permutations}} We think that further examination of the behaviour of $\mobmaxx{n}$ is one interesting area for research. We could also define $\truemobmax(n) = \max \{ \mobp{\pi} : \order{\pi} = n \}$, and $\truemobmin(n) = \min \{ \mobp{\pi} : \order{\pi} = n \}$, and then ask how these two functions behave as functions of $n$. Trivially, we have that, for $n > 4$, $\truemobmax(n+1) \geq - \truemobmin(n)$ and $\truemobmin(n+1) \leq - \truemobmax(n)$. Table~\ref{table_values_of_true_mob_min_max} shows the first thirteen values of these functions. 
\begin{table} \[ \begin{array}{lrr} \toprule n & \truemobmin(n) & \truemobmax(n) \\ \midrule 1 & 1 & 1 \\ 2 & -1 & -1 \\ 3 & 0 & 1 \\ 4 & -3 & 0 \\ 5 & 0 & 6 \\ 6 & -11 & 1 \\ 7 & -2 & 15 \\ 8 & -27 & 14 \\ 9 & -50 & 39 \\ 10 & -58 & 55 \\ 11 & -81 & 143 \\ 12 & -261 & 183 \\ 13 & -330 & 261 \\ \bottomrule \end{array} \] \caption{Values of $\truemobmax(n)$ and $\truemobmin(n)$ for $n=1, \ldots, 13$.} \label{table_values_of_true_mob_min_max} \end{table} \subsection{\texorpdfstring{% Characterising permutations where $\mobp{\pi}$ achieves extreme values}{ Characterising permutations where the M\"{o}bius function achieves extreme values}} The functions discussed above, $\mobmaxx{n}$, $\truemobmax(n)$, and $\truemobmin(n)$, concern themselves with the minimum or maximum values of the principal M\"{o}bius function. We could also ask for the characteristics of the permutations that achieve the minimum or maximum values. This question was, of course, first asked implicitly in Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}, where there is a remark that up to length 11, if $\pi$ is a permutation where $\order{\mobp{\pi}} = \mobmaxx{n}$, then $\pi$ is simple, and, up to symmetry, unique. We remark here that our own intuition is that if $\order{\mobp{\pi}} = \mobmaxx{n}$, then $\pi$ is likely to be simple, but we suspect, for sufficiently large lengths, there may be more than one canonical permutation that meets the equality. A slightly different approach would be to consider the set of canonical permutations that achieve $\truemobmax(n)$ or $\truemobmin(n)$. We have results up to length 13, and these are shown in Table~\ref{table_canonical_min_max} in Appendix~\ref{chapter_appendix-b}. We remark that, as with many aspects of the permutation pattern poset, results for small permutations are atypical, and, in our opinion, the entries in the table for $n \leq 7$ certainly fall into the atypical category. We can see that $\truemobmax(13)$ is attained by three permutations. If we set \[\theta = 4,7,2,10,5,1,12,8,3,11,6,9\] then the first two permutations listed achieving $\truemobmax(13)$ are $1 \oplus \theta$, and $1 \oplus ((\theta)^R)^{-1}$ respectively. With this exception, all of the permutations that achieve $\truemobmin(n)$ or $\truemobmax(n)$ for $n > 7$ are simple. We suggest that, for lengths greater than 7, the permutations that achieve $\truemobmax(n)$ are either simple, or they are a sum (either direct or skew) of 1 and a permutation that achieves $\truemobmin(n-1)$. Conversely, the permutations that achieve $\truemobmin(n)$ are either simple, or they are a sum of 1 and a permutation that achieves $\truemobmax(n-1)$. \chapter*{Declarations} \addcontentsline{toc}{chapter}{Declarations} Some chapters of this thesis are based on work that has been published. The relevant chapters are as follows: \begin{enumerate} \item Chapter~\ref{chapter_incosc_paper} is based on published joint work with Robert Brignall. The paper~\cite{Brignall2017a} was published in \emph{Discrete Mathematics}. % \item Chapter~\ref{chapter_oppadj_paper} is based on published joint work with Robert Brignall, V{\'i}t Jel{\'i}nek and Jan Kyn{\v c}l. The paper~\cite{Brignall2020} was published in \emph{Mathematika}. I thank the London Mathematical Society for granting permission to include edited extracts from the published article in this thesis. % \item Chapter~\ref{chapter_2413_balloon_paper} is based on published sole work by the author. 
The paper~\cite{Marchant2020} was published in \emph{The Electronic Journal of Combinatorics}.
\end{enumerate}

None of the results appear in any other thesis, and all co-authors have agreed with the inclusion of joint work in this thesis. Where work has been published, the publishers have given permission for edited extracts of the published article to be included in this thesis.

This thesis is approximately 49,000 words.

\chapter*{Dedication}
\addcontentsline{toc}{chapter}{Dedication}

This thesis is dedicated to Jo. Five years ago she agreed that I could stop work in order to study for a PhD -- which has been the hardest I have ever worked. There is no way for me to adequately express my gratitude for her support, her tolerance for my mathematical adventures, or the way that she makes me complete.

\begin{center}
\cjRL{\Large 'a:niy l:dwodiy w:dwodiy liy} \\[2pt]
\cjRL{\normalsize +sir ha+s*iyriym w;g}
\end{center}

\chapter{The M\"{o}bius function of permutations with an indecomposable lower bound}
\chaptermark{Permutations with an indecomposable lower bound}
\label{chapter_incosc_paper}

\section{Preamble}

This chapter is based on a published paper~\cite{Brignall2017a}, which is joint work with Robert Brignall.

In this chapter, we show that the M\"{o}bius function of an interval in a permutation poset where the lower bound is sum (resp. skew) indecomposable depends solely on the sum (resp. skew) indecomposable permutations contained in the upper bound, and that this can simplify the calculation of the M\"{o}bius sum. For increasing oscillations, we give a recursion for the M\"{o}bius sum which only involves evaluating simple inequalities.

\section{Introduction}

Recall that the M\"{o}bius function on a poset interval $[\sigma, \pi]$ is defined by
\begin{align}
\label{incosc_equation_mobius_function}
\mobfn{\sigma}{\pi} & =
\begin{cases}
1 & \text{if $\sigma = \pi$,} \\
- \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda} &\text{otherwise.}
\end{cases}
\end{align}

Our motivation for this chapter is to find a \emph{contributing set}\extindex{contributing set} $\contrib{\sigma}{\pi}$ that is significantly smaller than the poset interval $[\sigma, \pi)$, and a $\{0, \pm1 \}$ weighting function $\weightgen{\sigma}{\alpha}{\pi}$ such that
\begin{align}
\label{equation_contributing_sum}
\mobfn{\sigma}{\pi} & = - \sum_{\alpha \in \contrib{\sigma}{\pi}} \mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi}.
\end{align}
Plainly, in Equation~\ref{equation_contributing_sum}, we could set $\contrib{\sigma}{\pi} = [\sigma, \pi)$, and $\weightgen{\sigma}{\alpha}{\pi} = 1$, which is equivalent to Equation~\ref{incosc_equation_mobius_function}.

One approach here would be to take a permutation $\beta$ such that $\sigma < \beta < \pi$. We could then set $\contrib{\sigma}{\pi} = \{ \lambda : \lambda \in [\sigma, \pi) \text{ and } \lambda \not\in [\sigma, \beta] \}$, and $\weightgen{\sigma}{\alpha}{\pi} = 1$, since, from Equation~\ref{incosc_equation_mobius_function}, $\sum_{\lambda \in [\sigma, \beta]} \mobfn{\sigma}{\lambda} = 0$. This approach was used in Smith~\cite{Smith2013}, who determined the M\"{o}bius function on the interval $[1, \pi]$ for all permutations $\pi$ with a single descent. Smith's paper is unusual in that it provides an explicit formula for the value of the M\"{o}bius function.

Our approach is different. We identify individual elements, say $\lambda$, of the poset that have $\mobfn{\sigma}{\lambda} = 0$.
We also show that there are pairs of elements, $\lambda$ and $\lambda^\prime$, where $\mobfn{\sigma}{\lambda} = - \mobfn{\sigma}{\lambda^\prime}$, and so we can exclude these pairs of elements. Finally, we show that there are quartets of permutations $\lambda_1, \ldots, \lambda_4$ where $\sum_{i=1}^{4} \mobfn{\sigma}{\lambda_i} = 0$, and that we can systematically identify these quartets. By excluding these permutations from $\contrib{\sigma}{\pi}$ we can significantly reduce the number of elements in $\contrib{\sigma}{\pi}$ compared to the number of elements in the interval $[\sigma, \pi)$. This approach allows us to compute $\mobfn{\sigma}{\pi}$, where $\sigma$ is indecomposable, much faster than evaluating Equation~\ref{incosc_equation_mobius_function}.

For increasing oscillations, we will show that the elements of $\contrib{\sigma}{\pi}$ can be determined using simple inequalities, and that as a consequence $\mobfn{\sigma}{\pi}$ can be determined using inequalities. With this approach, we have computed $\mobfn{1}{\pi}$, where $\pi$ is an increasing oscillation, up to $\order{\pi} = \text{2,000,000}$.

Our main tool in the first part of this chapter comes from the results of Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}. They found a recursion for the M\"{o}bius function for sum/skew decomposable permutations in terms of the sum/skew indecomposable permutations in the lower and upper bounds. They also found a method to determine the M\"{o}bius function for separable permutations by counting embeddings. We use the recursions for decomposable permutations to underpin the first part of this chapter.

In this chapter we show that the M\"{o}bius function on intervals with a sum indecomposable lower bound depends only on the sum indecomposable permutations contained in the upper bound. We provide a weighting function that determines which sum indecomposable permutations contribute to the M\"{o}bius sum.

We then consider increasing oscillations. For these permutations, we show how we can find all of the permutations that contribute to the M\"{o}bius sum by applying simple numeric inequalities, which leads to a fast polynomial-time algorithm for determining the M\"{o}bius function.

We start with some essential definitions and notation relevant to this chapter in Section~\ref{incosc_section-definitions-and-notation}, then in Section~\ref{section-preliminary-lemmas} we provide a number of preliminary lemmas. We conclude that section with a theorem that gives $\mobfn{\sigma}{\pi}$, where $\sigma$ is a sum indecomposable permutation, for all $\pi$. In Section~\ref{section-increasing-oscillations} we consider $\mobfn{\sigma}{\pi}$ where $\sigma$ is a sum indecomposable permutation, and $\pi$ is an increasing oscillation. We finish with some concluding remarks in Section~\ref{incosc_section-concluding-remarks}.

\section{Definitions and notation}\label{incosc_section-definitions-and-notation}

When discussing the M\"{o}bius function, $\mobfn{\sigma}{\pi} = - \sum_{\lambda \in [\sigma, \pi)} \mobfn{\sigma}{\lambda}$, we will frequently be examining the value of $\mobfn{\sigma}{\lambda}$ for a specific permutation $\lambda$. We say that this is the \emph{contribution}\extindex{contribution} that $\lambda$ makes to the sum. If we have a set of permutations $S \subseteq [\sigma, \pi)$ such that $\sum_{\lambda \in S} \mobfn{\sigma}{\lambda} = 0$, then we say that the set $S$ makes \emph{no net contribution}\extindex{net contribution} to the sum.
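To make the notion of a contribution concrete, here is a hedged brute-force Python sketch (all names are ours; the computation is exponential, so it is only usable for short permutations) that evaluates Equation~\ref{incosc_equation_mobius_function} directly, summing the contribution $\mobfn{\sigma}{\lambda}$ of every $\lambda$ in the half-open interval:
\begin{verbatim}
from itertools import combinations
from functools import lru_cache

def standardize(vals):
    ranks = {v: i + 1 for i, v in enumerate(sorted(vals))}
    return tuple(ranks[v] for v in vals)

def patterns(pi, k):
    # All distinct patterns of length k contained in pi.
    return {standardize([pi[i] for i in idx])
            for idx in combinations(range(len(pi)), k)}

def contains(pi, sigma):
    return sigma in patterns(pi, len(sigma))

@lru_cache(maxsize=None)
def mobius(sigma, pi):
    # Direct evaluation of the recursive definition.
    if sigma == pi:
        return 1
    if not contains(pi, sigma):
        return 0
    half_open = {lam for k in range(len(sigma), len(pi))
                 for lam in patterns(pi, k) if contains(lam, sigma)}
    return -sum(mobius(sigma, lam) for lam in half_open)

# The principal Mobius function of 2413 is -3.
assert mobius((1,), (2, 4, 1, 3)) == -3
\end{verbatim}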
The \emph{interleave}\extindex[permutation]{interleave} of two permutations $\alpha$ and $\beta$ is formed by taking the sum $\alpha \oplus \beta$, and then exchanging the value of the largest point from $\alpha$ with the value of the smallest point from $\beta$. We can also view this as increasing the largest point from $\alpha$ by 1, and simultaneously decreasing the smallest point from $\beta$ by 1. We write an interleave as $\alpha \interleave \beta$. For example, $321 \interleave 213 = 421536$ (see Figure~\ref{figure_examples-of-sums-and-interleaves}).
For completeness, we also define a \emph{skew interleave}\extindex[permutation]{skew interleave}, $\alpha \oslash \beta$, which is formed by taking the skew sum $\alpha \ominus \beta$, and then exchanging the smallest point from $\alpha$ with the largest point from $\beta$. As an example, $321 \oslash 213 = 653214$, as shown in Figure~\ref{figure_examples-of-sums-and-interleaves}.
The interleave operations, $\interleave$ and $\oslash$, are not associative, as $1 \interleave 1 \interleave 1$ could represent $231$ or $312$. To avoid this ambiguity, we require that the permutation $1$ can either be interleaved to the left or to the right, but not both. It is easy to see that this restriction establishes associativity. We note here that, with this restriction, an expression involving $\oplus$ and $\interleave$ represents a unique permutation regardless of the order in which the operations are applied.
Let $\alpha$ be a permutation with length greater than 1. We will frequently want to refer to permutations that have the form $\alpha \interleave \alpha \interleave \ldots \interleave \alpha \interleave \alpha$. If there are $n$ copies of $\alpha$ being interleaved, then we will write this as $\nils{n}{\alpha}$\extindex{$\nils{n}{\alpha}$}, so, for example, we have $\nils{3}{21} = 21 \interleave 21 \interleave 21 = 315264$.
\begin{figure}
\begin{footnotesize}
\begin{center}
\begin{subfigure}[t]{0.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{3,2,1,5,4,6}
\end{tikzpicture}
\end{center}
\caption*{$321 \oplus 213$}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{6,5,4,2,1,3}
\end{tikzpicture}
\end{center}
\caption*{$321 \ominus 213$}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{4,2,1,5,3,6}
\end{tikzpicture}
\end{center}
\caption*{$321 \interleave 213$}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotpermgrid{6,5,3,2,1,4}
\end{tikzpicture}
\end{center}
\caption*{$321 \oslash 213$}
\end{subfigure}
\end{center}
\end{footnotesize}
\caption{Examples of direct and skew sums and interleaves.}
\label{figure_examples-of-sums-and-interleaves}
\end{figure}
For the remainder of this chapter, by symmetry it suffices to discuss permutations in relation to sums and interleaves only. For the same reason, references to (in)decomposable permutations may omit the ``sum'' qualifier.
The \emph{increasing oscillating sequence}\extindex{increasing oscillating sequence} is the sequence
\[ 4, 1, 6, 3, 8, 5, 10, 7, \ldots, 2k+2, 2k-1, \ldots. \]
The start of the sequence is depicted in Figure~\ref{figure_increasing_oscillating_sequence}.
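Before moving on, we give a small Python sketch of the sum and interleave operations defined above, acting on permutations written in one-line notation as tuples. This is an illustrative sketch only; the helper names (\texttt{osum}, \texttt{interleave}, \texttt{nils}) are our own choice, not code from the published paper.
\begin{verbatim}
def osum(alpha, beta):
    """Direct sum alpha + beta of one-line permutations (as tuples)."""
    return alpha + tuple(b + len(alpha) for b in beta)

def interleave(alpha, beta):
    """Interleave of alpha and beta: form the direct sum, then swap the
    value of the largest point of alpha with the value of the smallest
    point of beta."""
    s = list(osum(alpha, beta))
    hi = s.index(len(alpha))        # position of the largest point of alpha
    lo = s.index(len(alpha) + 1)    # position of the smallest point of beta
    s[hi], s[lo] = s[lo], s[hi]
    return tuple(s)

def nils(n, alpha):
    """n interleaved copies of alpha, associated to the left."""
    out = alpha
    for _ in range(n - 1):
        out = interleave(out, alpha)
    return out

assert interleave((3, 2, 1), (2, 1, 3)) == (4, 2, 1, 5, 3, 6)  # 421536
assert nils(3, (2, 1)) == (3, 1, 5, 2, 6, 4)                   # 315264
\end{verbatim}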
\begin{figure}
\centering
\begin{subfigure}[c]{0.4\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotopengrid{12}{14}
\plotperm{4, 1, 6, 3, 8, 5}
\smalldot{(7,10)} \smalldot{(8,7)}
\smallerdot{(9, 12)} \smallerdot{(10, 9)}
\evensmallerdot{(11, 14)} \evensmallerdot{(12, 11)}
\tinydot{(13, 16)} \tinydot{(14, 13)}
\end{tikzpicture}
\end{subfigure}
\caption{A depiction of the start of the increasing oscillating sequence.}
\label{figure_increasing_oscillating_sequence}
\end{figure}
An \emph{increasing oscillation}\extindex[permutation]{increasing oscillation} is a simple permutation contained in the increasing oscillating sequence. For lengths greater than three, there are exactly two increasing oscillations of each length. Let $W_n$ be the increasing oscillation with $n$ elements which starts with a descent, and let $M_n$ be the increasing oscillation with $n$ elements which starts with an ascent. Then
\begin{align*}
W_{2n} & = \nils{n}{21}, & M_{2n} & = 1 \interleave \left( \nils{n-1}{21} \right) \interleave 1, \\
W_{2n-1} & = \left( \nils{n-1}{21} \right) \interleave 1, \qquad \text{and} & M_{2n-1} & = 1 \interleave \left( \nils{n-1}{21} \right).
\end{align*}
Note that $W_n = M_n^{-1}$.
There are instances where, for some permutation $\alpha$, we are interested in the set of permutations $\{\alpha, \; \oneplus \alpha, \; \alpha \oplus 1, \; \oneplus \alpha \oplus 1 \}$. Given a permutation $\alpha$, we refer to this set as $\familysum{\alpha}$, and we say that this set is the \emph{family}\extindex{family} of $\alpha$. If $S$ is a set of permutations, then $\familysum{S} = \bigcup_{\alpha \in S} \familysum{\alpha}$.
There are also some instances where we are interested in the set of permutations $\familyil{\alpha} = \{\alpha, \; 1 \interleave \alpha, \; \alpha \interleave 1, \; 1 \interleave \alpha \interleave 1 \}$. Note that every increasing oscillation is an element of $\familyil{\nils{k}{\dtwo}}$ for some $k \geq 1$.

\section{Preliminary lemmas and main theorem}\label{section-preliminary-lemmas}
In this section our aim is to show that if $\sigma$ is indecomposable, then for any $\pi \geq \sigma$ there is a $\{0, \pm 1\}$ weighting function $\weightgen{\sigma}{\alpha}{\pi}$ and a set of permutations $\contrib{\sigma}{\pi}$, such that
\[ \mobfn{\sigma}{\pi} = - \sum_{\alpha \in \contrib{\sigma}{\pi}} \mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi}. \]
If $\pi$ is the identity permutation $1, 2, \ldots, n$ or its reverse, then $\mobfn{\sigma}{\pi}$ is trivial for any $\sigma$, and we exclude the identity and its reverse from being the upper bound of any interval under consideration.
As noted earlier, our approach is to show that there are permutations, pairs of permutations, and quartets of permutations in $[\sigma, \pi)$ that make no net contribution to the sum. We use Propositions 1 and 2, and Corollary 3, from Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011}. Note that we have already introduced these propositions on page~\pageref{BJJS-proposition-1-as-in-mcnamara}, but we repeat them here for ease of use.
We start with some required notation. If $\pi$ is a non-empty permutation with decomposition $\pi_1 \oplus \ldots \oplus \pi_n$, then for any integer $i$ with $0 \leq i \leq n$, $\pi_{\leq i}$ is the permutation $\pi_1 \oplus \ldots \oplus \pi_i$, and $\pi_{> i}$ is the permutation $\pi_{i+1} \oplus \ldots \oplus \pi_n$.
An empty sum of permutations is defined as $\varepsilon$, and in particular $\pi_{\leq 0} = \pi_{> n} = \varepsilon$. We can see that $\mobfn{\varepsilon}{\varepsilon} = 1$, $\mobfn{\varepsilon}{1} = -1$ and $\mobfn{\varepsilon}{\tau} = 0$ for any $\tau > 1$.
We now recall the results from Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson:
\begin{proposition}[{Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson \cite[Proposition 1]{Burstein2011}}]
\label{BJJS_proposition_1}
Let $\sigma$ and $\pi$ be non-empty permutations with decompositions $\sigma = \sigma_1 \oplus \ldots \oplus \sigma_m$ and $\pi = \pi_1 \oplus \ldots \oplus \pi_n$, with $n \geq 2$. Assume that $\pi_1 = 1$, and let $k$ be the largest integer such that $\pi_1, \pi_2, \ldots, \pi_k$ are all equal to $1$. Let $l \geq 0$ be the largest integer such that $\sigma_1, \sigma_2, \ldots, \sigma_l$ are all equal to $1$. Then
\begin{align*}
\mobfn{\sigma}{\pi} & =
\begin{cases*}
0 & \text{if $k-1 > l$,} \\
-\mobfn{\sigma_{> k-1}}{\pi_{> k}} & \text{if $k-1 = l$,} \\
\mobfn{\sigma_{> k}}{\pi_{> k}} - \mobfn{\sigma_{> k-1}}{\pi_{> k}} & \text{if $k-1 < l$.}
\end{cases*}
\end{align*}
\end{proposition}
\begin{proposition}[{\cite[Proposition 2]{Burstein2011}}]
\label{BJJS_proposition_2}
Let $\sigma$ and $\pi$ be non-empty permutations with decompositions $\sigma = \sigma_1 \oplus \ldots \oplus \sigma_m$ and $\pi = \pi_1 \oplus \ldots \oplus \pi_n$, with $n \geq 2$. Assume that $\pi_1 \neq 1$, and let $k$ be the largest integer such that $\pi_1, \pi_2, \ldots, \pi_k$ are all equal to $\pi_1$. Then
\begin{align*}
\mobfn{\sigma}{\pi} & = \sum_{i=1}^m \sum_{j=1}^k \mobfn{\sigma_{\leq i}}{\pi_{1}} \mobfn{\sigma_{> i}}{\pi_{> j}}.
\end{align*}
\end{proposition}
\begin{corollary}[{\cite[Corollary 3]{Burstein2011}}]
\label{BJJS_corollary_3}
Let $\sigma$ and $\pi$ be as in Proposition~\ref{BJJS_proposition_2}. Suppose that $\sigma$ is sum indecomposable, so $m = 1$. Then
\begin{align*}
\mobfn{\sigma}{\pi} & =
\begin{cases*}
\mobfn{\sigma}{\pi_1} & \text{if $\pi = \nsums{k}{\pi_1}$,} \\
-\mobfn{\sigma}{\pi_1} & \text{if $\pi = \left(\nsums{k}{\pi_1} \right) \oplus 1$,} \\
0 & \text{otherwise.}
\end{cases*}
\end{align*}
\end{corollary}
A simple consequence of Propositions~\ref{BJJS_proposition_1} and~\ref{BJJS_proposition_2} is the identification of some intervals of permutations where the value of the M\"{o}bius function is zero.
\begin{lemma}
\label{lemma_mobius_function_is_zero}
Let $\pi \in \{ \oneplus \oneplus \tau, \; \tau \oplus 1 \oplus 1 \} \cup \familysum{\left(\nsums{r}{\alpha}\right) \oplus \tau^\prime}$, where $\tau$ is any permutation, $r$ is maximal, $\alpha$ is sum indecomposable, and $\tau^\prime$ is any permutation greater than $1$. Let $\sigma$ be a sum indecomposable permutation. Then $\mobfn{\sigma}{\pi} = 0$.
\end{lemma}
\begin{proof}
Consider $\pi = \oneplus \oneplus \tau$. We use Proposition~\ref{BJJS_proposition_1}. If $\tau_1 = 1$, then $k \geq 3$, and $l \leq 1$, and the result follows immediately. Now assume that $\tau_1 \neq 1$. Then $k=2$. If $\sigma > 1$, then again the result follows immediately. If $\sigma=1$, then we have $\mobfn{\sigma}{\pi} = - \mobfn{\sigma_{> k-1}}{\pi_{>k}} = - \mobfn{\varepsilon}{\tau} = 0$. The case for $\pi = \tau \oplus 1 \oplus 1$ follows by symmetry.
Now consider $\pi \in \familysum{\left(\nsums{r}{\alpha}\right) \oplus \tau^\prime}$.
If $\pi = \left(\nsums{r}{\alpha}\right) \oplus \tau^\prime$, or $\pi = \left(\nsums{r}{\alpha}\right) \oplus \tau^\prime \oplus 1$, then we use Proposition~\ref{BJJS_proposition_2}. In that context we have $m = 1$ and $k = r$, and so
\[ \mobfn{\sigma}{\pi} = \sum_{j=1}^r \mobfn{\sigma}{\pi_1} \mobfn{\varepsilon}{\pi_{> j}}. \]
For every value of $j$, $\pi_{> j}$ is non-empty and greater than $1$, and so $\mobfn{\varepsilon}{\pi_{> j}} = 0$ for all $j$, and hence every term in the sum is zero.
If $\pi = \oneplus \left(\nsums{r}{\alpha}\right) \oplus \tau^\prime$ or $\pi = \oneplus \left(\nsums{r}{\alpha}\right) \oplus \tau^\prime \oplus 1$, then we use Proposition~\ref{BJJS_proposition_1}, which reduces to one of the previous cases.
\end{proof}
We now turn to identifying pairs and quartets of permutations that make no net contribution to the M\"{o}bius sum. We start by showing that if $\sigma$ and $\alpha$ are indecomposable, $r \geq 1$, and $\pi \in \familysum{\nsums{r}{\alpha}}$, then $\mobfn{\sigma}{\pi}$ and $\mobfn{\sigma}{\alpha}$ have the same magnitude.
\begin{lemma}
\label{lemma_pi_has_single_block}
Let $\pi \in \familysum{\nsums{r}{\alpha}}$, where $r \geq 1$ and $\alpha > 1$ is sum indecomposable. Let $\sigma$ be a sum indecomposable permutation. Then
\[ \mobfn{\sigma}{\pi} =
\begin{cases*}
\mobfn{\sigma}{\alpha} & \text{if $\pi = \nsums{r}{\alpha}$\; or\; $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1$}, \\
-\mobfn{\sigma}{\alpha} & \text{if $\pi = \oneplus \left(\nsums{r}{\alpha}\right)$\; or\; $\left(\nsums{r}{\alpha}\right) \oplus 1$}.
\end{cases*} \]
As a consequence, if $\familysum{\nsums{r}{\alpha}} \subseteq [\sigma,\pi)$, then $\familysum{\nsums{r}{\alpha}}$ makes no net contribution to $\mobfn{\sigma}{\pi}$.
\end{lemma}
\begin{proof}
If $\pi = \nsums{r}{\alpha}$ or $\pi = \left(\nsums{r}{\alpha}\right) \oplus 1$, then this is immediate from Corollary~\ref{BJJS_corollary_3}.
If $\pi = \oneplus \left(\nsums{r}{\alpha}\right)$ or $\pi = \oneplus \left(\nsums{r}{\alpha}\right) \oplus 1$, then we use Proposition~\ref{BJJS_proposition_1}.
Summing over the four members of the family then gives $\sum_{\lambda \in \familysum{\nsums{r}{\alpha}}} \mobfn{\sigma}{\lambda} = 0$, which establishes the claim about the net contribution.
\end{proof}
We now have a lemma that adds a further restriction to the permutations that have a non-zero contribution to the M\"{o}bius sum.
\begin{lemma}
\label{lemma_only_r_and_rplusone_count}
If $\sigma \leq \pi$, and $\alpha \in [\sigma, \pi]$ is sum indecomposable, and $r$ is the smallest integer such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not\leq \pi$, then $\familysum{\nsums{k}{\alpha}} \subseteq [\sigma, \pi)$ for all $k \in [1, r)$.
\end{lemma}
\begin{proof}
For any $k < r$, $\sigma \leq \nsums{k}{\alpha} < \oneplus \left(\nsums{k}{\alpha}\right) \oplus 1 \leq \pi$. Note that by Lemma~\ref{lemma_pi_has_single_block} the net contribution of the family $\familysum{\nsums{k}{\alpha}}$ to $\mobfn{\sigma}{\pi}$ is zero.
\end{proof}
\begin{observation}
\label{observation_only_r_and_rplusone_count}
Using the same terminology as Lemma~\ref{lemma_only_r_and_rplusone_count}, if $k > r+1$ then we must have $\nsums{k}{\alpha} \not\leq \pi$. As a consequence, for each indecomposable $\alpha \in [\sigma, \pi]$, the only families of $\alpha$ that can have a non-zero net contribution to $\mobfn{\sigma}{\pi}$ are $\familysum{\nsums{r}{\alpha}}$ and $\familysum{\nsums{r+1}{\alpha}}$.
\end{observation}
We now eliminate two specific permutations from the M\"{o}bius sum.
\begin{lemma}
\label{lemma_order_greater_than_three_ignore_o_opo}
If $\pi$ is any permutation with $\order{\pi} > 3$ apart from the identity permutation and its reverse, and $\sigma$ is sum indecomposable, then the permutations $1$ and $1 \oplus 1$ make no net contribution to the M\"{o}bius sum $\mobfn{\sigma}{\pi}$.
\end{lemma}
\begin{proof}
If $\sigma = 1$, then the interval contains both $1$ and $1 \oplus 1$. Since $\mobfn{1}{1} = 1$ and $\mobfn{1}{12} = -1$, there is no net contribution to $\mobfn{\sigma}{\pi}$. If $\sigma > 1$, then, since $12$ is decomposable, $\sigma \neq 12$, and so neither $1$ nor $12$ is in the interval.
\end{proof}
Before we present the main theorem for this section, we formally define the weight function and the contributing set. Let $\alpha$ be a sum indecomposable permutation. The weight function, $\weightgen{\sigma}{\alpha}{\pi}$, is defined as
\begin{align}
\label{equation_general_weight_function}
\weightgen{\sigma}{\alpha}{\pi} & =
\begin{cases*}
1 & if $ \left\lbrace \begin{array}{l} \sigma \leq \nsums{r}{\alpha} \leq \pi \text{ and } \\ \oneplus \left(\nsums{r}{\alpha}\right) \not \leq \pi \text{ and } \\ \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi, \end{array} \right. $ \\
-1 & if $ \left\lbrace \begin{array}{l} \sigma \leq \nsums{r}{\alpha} \leq \pi \text{ and } \\ \oneplus \left(\nsums{r}{\alpha}\right) \leq \pi \text{ and } \\ \left(\nsums{r}{\alpha}\right) \oplus 1 \leq \pi \text{ and } \\ \nsums{r+1}{\alpha} \not\leq \pi, \end{array} \right. $ \\
0 & otherwise,
\end{cases*}
\end{align}
where $r$ is the smallest integer such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$. The contributing set $\contrib{\sigma}{\pi}$ is defined as
\begin{align*}
\contrib{\sigma}{\pi} & = \left\lbrace \alpha :
\begin{array}{l}
\alpha \in [\sigma, \pi), \\
\alpha \text{ is sum indecomposable, and } \\
\weightgen{\sigma}{\alpha}{\pi} \neq 0
\end{array} \right\rbrace.
\end{align*}
We have one last lemma before we move on to the main theorem.
\begin{lemma}
\label{lemma_weight_of_alpha}
If $\sigma$ and $\alpha$ are sum indecomposable, then for any permutation $\pi$, $\mobfn{\sigma}{\alpha}\weightgen{\sigma}{\alpha}{\pi}$ gives the total contribution to the M\"{o}bius sum of the families $\familysum{\nsums{r}{\alpha}}$, taken over all positive integers $r$.
\end{lemma}
\begin{proof}
By Observation~\ref{observation_only_r_and_rplusone_count}, we only need consider the contribution made by $\nsums{r}{\alpha}$ and $\nsums{r+1}{\alpha}$, where $r$ is the smallest integer such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$. If $\sigma \not \leq \nsums{r}{\alpha}$, or $\nsums{r}{\alpha} \not \leq \pi$, then $\familysum{\nsums{r}{\alpha}}$ makes no net contribution to the M\"{o}bius sum. Now assume that $\sigma \leq \nsums{r}{\alpha} \leq \pi$.
First, we can see that if $\oneplus \left(\nsums{r}{\alpha}\right) \not \leq \pi$, or $\left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$, then $\nsums{r+1}{\alpha} \not \leq \pi$. We can also see that if $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$ then $\oneplus \left(\nsums{r+1}{\alpha}\right) \not \leq \pi$ and $\nsums{r+1}{\alpha} \not \leq \pi$.
The possibilities remaining are itemised in Table~\ref{table_family_members_in_an_interval}, where the M\"{o}bius contribution is determined by applying Lemma~\ref{lemma_pi_has_single_block}.
\begin{table}[btp]
\centering
\begin{tabular}{cccc}
\toprule
$\oneplus \left(\nsums{r}{\alpha}\right)$ & $\left(\nsums{r}{\alpha}\right) \oplus 1$ & $\nsums{r+1}{\alpha}$ & M\"{o}bius contribution \\
\midrule
$\leq \pi$ & $\leq \pi$ & $\leq \pi$ & $0$ \\
$\leq \pi$ & $\leq \pi$ & $\not \leq \pi$ & $- \mobfn{\sigma}{\alpha}$ \\
$\leq \pi$ & $\not \leq \pi$ & $\not \leq \pi$ & $0$ \\
$\not \leq \pi$ & $\leq \pi$ & $\not \leq \pi$ & $0$ \\
$\not \leq \pi$ & $\not \leq \pi$ & $\not \leq \pi$ & $\mobfn{\sigma}{\alpha}$ \\
\bottomrule
\end{tabular}
\caption{M\"{o}bius contribution from family members.}
\label{table_family_members_in_an_interval}
\end{table}
We can see that in every case $\weightgen{\sigma}{\alpha}{\pi}$ provides the correct weight for the M\"{o}bius function $\mobfn{\sigma}{\alpha}$.
\end{proof}
We are now in a position to present the main theorem for this section.
\begin{theorem}
\label{theorem_mobius_sum_bottom_level_indecomposable}
If $\sigma$ is a sum indecomposable permutation, and $\order{\pi} > 3$, then
\[ \mobfn{\sigma}{\pi} = - \sum_{\alpha \in \contrib{\sigma}{\pi}} \mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi}. \]
\end{theorem}
\begin{proof}
Let $\alpha \leq \pi$ be an indecomposable permutation. Using Lemmas~\ref{lemma_mobius_function_is_zero} and~\ref{lemma_order_greater_than_three_ignore_o_opo} we can see that any permutation not in a set of the form $\familysum{\nsums{r}{\alpha}}$ can be excluded from $\contrib{\sigma}{\pi}$, as these permutations make no net contribution to the M\"{o}bius sum. For every $\alpha$, by Lemma~\ref{lemma_weight_of_alpha}, $\mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi}$ provides the contribution to the M\"{o}bius sum of all families $\familysum{\nsums{r}{\alpha}}$, where $r$ is a positive integer.
\end{proof}
Theorem~\ref{theorem_mobius_sum_bottom_level_indecomposable} reduces the number of permutations that need to be considered as part of the M\"{o}bius sum. We can see that the largest permutation in $\contrib{\sigma}{\pi}$ must have length less than $\order{\pi}$, and so we can apply Theorem~\ref{theorem_mobius_sum_bottom_level_indecomposable} recursively to the permutations in $\contrib{\sigma}{\pi}$ to determine their M\"{o}bius values. In this recursion, if we are attempting to determine $\mobfn{\sigma}{\lambda}$, we can stop if $\order{\sigma} = \order{\lambda}$ or $\order{\sigma} = \order{\lambda} - 1$, as in these cases $\mobfn{\sigma}{\lambda}$ is $+1$ and $-1$ respectively.

\section{Increasing oscillations}\label{section-increasing-oscillations}
We now move on to increasing oscillations. Given an indecomposable permutation $\sigma$, and an increasing oscillation $\pi$, our aim in this section is to describe $\contrib{\sigma}{\pi}$ in precise terms. We will find a sum for the M\"{o}bius function, $\mobfn{\sigma}{\pi}$, which only requires the evaluation of simple inequalities.
If $\pi$ is an increasing oscillation with length less than 4, then $\mobfn{\sigma}{\pi}$ is trivial to determine for any $\sigma$. For the remainder of this section we assume that $\pi$ has length at least 4.
We partition the set of increasing oscillations with length greater than 1 into five disjoint subsets.
These subsets are $\{ 21 \},\;$ $\{\nils{{k+1}}{\dtwo} \},\;$ $\{1 \interleave \left(\nils{k}{\dtwo}\right)\},\;$ $\{\left(\nils{k}{\dtwo}\right) \interleave 1\},\;$ and $\{1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1 \}$, where $k$ is a positive integer. If two increasing oscillations are in the same subset, then we say that they have the same \emph{shape}\extindex{shape}.
We now determine which permutations contained in an increasing oscillation have a non-zero contribution to the M\"{o}bius sum.
\begin{lemma}
\label{lemma_form_of_permutations_in_increasing_oscillations}
Let $\pi$ be an increasing oscillation, and let $\sigma \leq \pi$ be sum indecomposable. Let $S$ be the subset of the permutations in the interval $[\sigma, \pi)$ that lie in $\familysum{\nsums{r}{\beta}}$ for some $\beta \in \familyil{\nils{k}{21}}$ and some $k, r \geq 1$. If $\lambda \in [\sigma, \pi)$, and $\lambda \not \in S$, then $\mobfn{\sigma}{\lambda} = 0$.
\end{lemma}
We note here that $\familyil{\nils{k}{21}}$ is a set containing only increasing oscillations.
\begin{proof}
We start by showing that if $\pi$ is an increasing oscillation, and $\lambda = \lambda_1 \oplus \ldots \oplus \lambda_m \leq \pi$, where each $\lambda_i$ is sum indecomposable, then every $\lambda_i$ is an increasing oscillation.
This is trivially true if $\lambda$ is itself an increasing oscillation, so it is sufficient to show that if $\lambda$ is an increasing oscillation, then deleting a single point results in either an increasing oscillation, or a permutation that is the sum of two increasing oscillations.
If $k = 1$ (so $\lambda$ has at most four points), then we can see that deleting a single point results in a permutation with the required characteristic. Now assume that $k > 1$.
Let $\lambda = 1 \interleave \left(\nils{k}{\dtwo}\right)$. Deleting the leftmost point gives $\nils{k}{\dtwo}$, and deleting the rightmost point gives $1 \interleave \left( \nils{k-1}{21} \right) \oplus 1$. Deleting the second point gives $21 \oplus \left( \nils{k-1}{21} \right)$, and deleting the last-but-one point gives $1 \interleave \left( \nils{k-1}{21} \right) \interleave 1$. Deleting any even point $2t$ except the second or last-but-one results in $\left( 1 \interleave \left( \nils{t-1}{21} \right) \interleave 1 \right) \oplus \left( \nils{k-t}{21} \right)$. Finally, deleting any odd point $2t+1$ apart from the first or last results in $\left( 1 \interleave \left( \nils{t-1}{21} \right) \right) \oplus \left( 1 \interleave \left( \nils{k-t}{21} \right) \right)$.
Thus if $\lambda = 1 \interleave \left(\nils{k}{\dtwo}\right)$, then deleting a single point from $\lambda$ results in either an increasing oscillation, or a permutation that is the sum of two increasing oscillations. A similar argument applies to the other three cases, which we omit for brevity.
To complete the proof, we now see that by Lemma~\ref{lemma_order_greater_than_three_ignore_o_opo}, we can ignore $\lambda = 1$ and $\lambda = 1 \oplus 1$. If $\lambda = \lambda_1 \oplus \lambda_2 \oplus \ldots \oplus \lambda_m \leq \pi$, then by the argument above, every $\lambda_i$ is an increasing oscillation. Applying Lemma~\ref{lemma_mobius_function_is_zero} completes the proof.
\end{proof}
Following Observation~\ref{observation_only_r_and_rplusone_count}, it is clear that, if $\alpha \in \familyil{\nils{k}{\dtwo}}$, then for any family $\familysum{\nsums{r}{\alpha}}$, we only need consider the cases $\nsums{r}{\alpha}$ and $\nsums{r+1}{\alpha}$ where $r$ is the smallest integer such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$.
Given some $\pi = W_n$ or $M_n$, we will find inequalities relating $n$, $r$, $k$ and the shape of $\alpha$ that allow us to find the values that contribute to the M\"{o}bius sum. We know from Lemma~\ref{lemma_form_of_permutations_in_increasing_oscillations} the shape of the permutations that contribute to the M\"{o}bius sum. For each of the four types of increasing oscillation ($W_{2n}$, $W_{2n-1}$, $M_{2n}$ and $M_{2n-1}$), we can examine how each shape can be embedded so that the unused points at the start of the increasing oscillation are minimised. Figure~\ref{figure_unused_points_at_start_w2n} shows examples of embeddings into $W_{2n}$. This gives us an inequality relating to the start of the embedding. Similarly, we can find inequalities for the end of the embedding. We can also find inequalities that relate to the interior (when $r > 1$), and Figures~\ref{figure_unused_points_interior_not_21} and~\ref{figure_unused_points_interior_21} show examples of this.
We can use these inequalities to determine which values of $k$ will allow the shape to be embedded. For each allowable value of $k$, we can then determine the minimum value of $r$ such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not\leq \pi$. This then means that, by evaluating inequalities alone, we can identify the specific permutations that could contribute to the M\"{o}bius sum.
We first have two lemmas that examine inequalities at the start and end of an embedding.
\begin{lemma}
\label{lemma_unused_points_at_start}
If $\pi$ is an increasing oscillation, and $\alpha \leq \pi$ is sum indecomposable, then in any embedding of an element $\lambda$ of $\familysum{\nsums{r}{\alpha}}$ into $\pi$, the minimum number of unused points at the start of $\pi$ depends on the start of $\lambda$, and on $\pi$, and is as shown below:
\centering
\begin{tabular}{lcccc}
\toprule
Start of $\lambda$ & $\pi=W_{2n}$ & $\pi=W_{2n-1}$ & $\pi=M_{2n}$ & $\pi=M_{2n-1}$ \\
\midrule
$21 \ldots$ & $0$ & $0$ & $0$ & $0$ \\
$\nils{{k+1}}{\dtwo} \ldots$ & $0$ & $0$ & $1$ & $1$ \\
$1 \interleave \left( \nils{k}{\dtwo} \right) \ldots$ & $1$ & $1$ & $0$ & $0$ \\
\midrule
$\oneplus 21 \ldots$ & $1$ & $1$ & $1$ & $1$ \\
$\oneplus \left( \nils{{k+1}}{\dtwo} \right) \ldots$ & $1$ & $1$ & $2$ & $2$ \\
$\oneplus 1 \interleave \left( \nils{k}{\dtwo} \right) \ldots$ & $2$ & $2$ & $1$ & $1$ \\
\bottomrule
\end{tabular}
\end{lemma}
\begin{proof}
It is clear that if we minimise the number of unused points at the start of an embedding, then the number of unused points depends on $\pi$ and the start of $\alpha$. The values in Lemma~\ref{lemma_unused_points_at_start} are found by considering each of the possibilities. We illustrate some of these cases in Figure~\ref{figure_unused_points_at_start_w2n}.
\begin{figure}
\centering
\begin{subfigure}[c]{0.2\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{8}{9}
\plotperm{3, 1, 5, 2, 7, 4, 9, 6}
\end{tikzpicture}
\caption*{$\nils{4}{21}$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.2\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{8}{9}
\plotperm{3, 1, 5, 2, 7, 4, 9, 6}
\opendot{(2,1)}
\end{tikzpicture}
\caption*{$1 \interleave \left( \nils{3}{21} \right)$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.2\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{8}{9}
\plotperm{3, 1, 5, 2, 7, 4, 9, 6}
\opendot{(1,3)}
\end{tikzpicture}
\caption*{$\oneplus \left( \nils{3}{21} \right)$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.2\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{8}{9}
\plotperm{3, 1, 5, 2, 7, 4, 9, 6}
\opendot{(2,1)}
\opendot{(4,2)}
\end{tikzpicture}
\caption*{$\oneplus 1 \interleave \left( \nils{2}{21} \right)$}
\end{subfigure}
\caption{Embedding the start of $\alpha$ in $W_{2n}$.}
\label{figure_unused_points_at_start_w2n}
\end{figure}
\end{proof}
\begin{lemma}
\label{lemma_unused_points_at_end}
If $\pi$ is an increasing oscillation, and $\alpha \leq \pi$ is sum indecomposable, then in any embedding of an element $\lambda$ of $\familysum{\nsums{r}{\alpha}}$ into $\pi$, the minimum number of unused points at the end of $\pi$ depends on the end of $\lambda$, and on $\pi$, and is as shown below:
\centering
\begin{tabular}{lcccc}
\toprule
End of $\lambda$ & $\pi=W_{2n}$ & $\pi=W_{2n-1}$ & $\pi=M_{2n}$ & $\pi=M_{2n-1}$ \\
\midrule
$\ldots 21$ & $0$ & $0$ & $0$ & $0$ \\
$\ldots \nils{{k+1}}{\dtwo}$ & $0$ & $1$ & $1$ & $0$ \\
$\ldots \left(\nils{k}{\dtwo}\right) \interleave 1$ & $1$ & $0$ & $0$ & $1$ \\
\midrule
$\ldots 21 \oplus 1$ & $1$ & $1$ & $1$ & $1$ \\
$\ldots \left(\nils{{k+1}}{\dtwo}\right) \oplus 1$ & $1$ & $2$ & $2$ & $1$ \\
$\ldots \left(\nils{k}{\dtwo}\right) \interleave 1 \oplus 1$ & $2$ & $1$ & $1$ & $2$ \\
\bottomrule
\end{tabular}
\end{lemma}
\begin{proof}
We examine all the possibilities as we did in Lemma~\ref{lemma_unused_points_at_start}.
\end{proof}
We now consider how closely copies of some sum indecomposable $\alpha$ can be embedded into $\pi$. This leads to two inequalities that relate $\alpha$, $\pi$ and the maximum number of copies of $\alpha$ that can be embedded in $\pi$. Where $\alpha \neq 21$, the shape of $\alpha$ fixes the way in which successive copies can be embedded in an increasing oscillation. If $\alpha = 21$, then we will see that there are choices for the embedding.
\begin{lemma}
\label{lemma_unused_points_interior}
If $\pi$ is an increasing oscillation, and $\alpha \neq 21$, and $\alpha \leq \pi$ is sum indecomposable, then in any embedding of $\nsums{r}{\alpha}$ into $\pi$, the minimum number of points between the start and end of $\nsums{r}{\alpha}$ depends on $\alpha$, and is as shown below:
\centering
\begin{tabular}{lccc}
\toprule
Shape of $\alpha$ & Points in $\nsums{r}{\alpha}$ & Unused points & Minimum points \\
\midrule
$\nils{{k+1}}{\dtwo}$ & $2kr$ & $2r-2$ & $2kr+2r-2$ \\
$1 \interleave \left(\nils{k}{\dtwo}\right)$ & $2kr+r$ & $r-1$ & $2kr+2r-1$ \\
$\left(\nils{k}{\dtwo}\right) \interleave 1$ & $2kr+r$ & $r-1$ & $2kr+2r-1$ \\
$1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1$ & $2kr+2r$ & $2r-2$ & $2kr+4r-2$ \\
\bottomrule
\end{tabular}
\end{lemma}
\begin{proof}
If $r = 1$, then there are no unused points, and so the minimum number of points depends solely on the points in $\alpha$, and the table reflects this. Assume now that $r > 1$. If $\alpha \neq 21$, then we can see that the interleave fixes the layout of each copy of $\alpha$, so we simply pack each copy as close as possible. This packing clearly depends on the start and end of $\alpha$, and it is simple to examine the four possibilities. Examples are shown in Figure~\ref{figure_unused_points_interior_not_21}.
\begin{figure}
\centering
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[c]{0.22\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{10}{12}
\plotperm{4,1,6,3,8,5,10,7,12,9}
\opendot{(5,8)}
\opendot{(6,5)}
\darkhline{6}{10}
\darkvline{5}{12}
\end{tikzpicture}
\caption*{$21 \interleave 21 \oplus$\\$21 \interleave 21$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{10}{12}
\plotperm{4,1,6,3,8,5,10,7,12,9}
\opendot{(6,5)}
\darkhline{6}{10}
\darkvline{4}{12}
\end{tikzpicture}
\caption*{$21 \interleave 21 \oplus$\\$1 \interleave 21 \interleave 21$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\centering
\begin{tikzpicture}[scale=0.3]
\plotgrid{10}{12}
\plotperm{4,1,6,3,8,5,10,7,12,9}
\opendot{(5,8)}
\darkhline{6}{10}
\darkvline{6}{12}
\end{tikzpicture}
\caption*{$21 \interleave 21 \interleave 1 \oplus$\\$21 \interleave 21$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\begin{tikzpicture}[scale=0.3]
\plotgrid{10}{12}
\plotperm{4,1,6,3,8,5,10,7,12,9}
\opendot{(5,8)}
\opendot{(8,7)}
\darkhline{7}{10}
\darkvline{6}{12}
\end{tikzpicture}
\caption*{$21 \interleave 21 \interleave 1 \oplus$\\$1 \interleave 21$}
\end{subfigure}
\caption{Packing $\alpha$ as close as possible when $\alpha \neq 21$.}
\label{figure_unused_points_interior_not_21}
\end{figure}
\end{proof}
We now turn to the case where $\alpha = 21$. This is more complex than the previous cases. We can see that there must be at least one point between each copy of $\alpha$. We can insert each copy of $21$ in two ways, one where the points are horizontally adjacent, and one where the points are vertically adjacent. These alternatives can be seen in Figure~\ref{figure_unused_points_interior_21}. Alternating these means that there will be exactly one point between each copy of $\alpha$, so this embedding minimises the number of points between the start and end of $\nsums{r}{\alpha}$.
The complication in this case relates to how we start and end the embedding. We illustrate this by showing, in Figure~\ref{figure_unused_points_interior_21}, maximal embeddings where we are embedding into $W_{8}$, $W_{10}$ and $W_{12}$.
\begin{figure}
\begin{center}
\begin{subfigure}[c]{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotgrid{8}{8}
\plotperm{3, 1, 5, 2, 7, 4, 8, 6}
\opendot{(4,2)}
\opendot{(5,7)}
\darkhline{3}{8}
\darkhline{5}{8}
\darkvline{2}{8}
\darkvline{6}{8}
\end{tikzpicture}
\caption*{$W_{8}$}
\end{center}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotgrid{10}{10}
\plotperm{3, 1, 5, 2, 7, 4, 9, 6, 10, 8}
\opendot{(4,2)}
\opendot{(5,7)}
\opendot{(9,10)}
\opendot{(10,8)}
\darkhline{3}{10}
\darkhline{5}{10}
\darkvline{2}{10}
\darkvline{6}{10}
\end{tikzpicture}
\caption*{$W_{10}$}
\end{center}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.3\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.3]
\plotgrid{12}{12}
\plotperm{3, 1, 5, 2, 7, 4, 9, 6, 11, 8, 12, 10}
\opendot{(4,2)}
\opendot{(5,7)}
\opendot{(10,8)}
\opendot{(11,12)}
\darkhline{3}{12}
\darkhline{5}{12}
\darkhline{9}{12}
\darkvline{2}{12}
\darkvline{6}{12}
\darkvline{8}{12}
\end{tikzpicture}
\caption*{$W_{12}$}
\end{center}
\end{subfigure}
\caption{Examples of unused points when embedding $\nsums{r}{21}$.}
\label{figure_unused_points_interior_21}
\end{center}
\end{figure}
A detailed examination of each possible case gives us our second inequality.
\begin{lemma}
\label{lemma_unused_points_21}
If $\pi$ is an increasing oscillation, and $\alpha = 21$, then for $\nsums{r}{\alpha}$ to be contained in $\pi$ we must have $3r - 1 \leq 2n$ for $\pi \in \{W_{2n}, M_{2n} \}$, and $3r \leq 2n$ for $\pi \in \{W_{2n-1}, M_{2n-1} \}$.
\end{lemma}
\begin{proof}
In every case we start by embedding the first $21$ into the first two elements of the permutation. Thereafter, we embed each successive $21$ as close as possible to the preceding $21$. The minimum number of elements to embed $r$ copies of $21$ will be $2r$ elements to hold the points of the $21$s, and $r-1$ intermediate empty elements. For $W_{2n}$ and $M_{2n}$, this then gives $3r-1 \leq 2n$, and for $W_{2n-1}$ and $M_{2n-1}$ we obtain $3r-1 \leq 2n-1$, that is, $3r \leq 2n$.
\end{proof}
We now have a complete understanding of the number of points required to embed any permutation that contributes to the M\"{o}bius sum into an increasing oscillation. The following lemma summarises the situation.
\begin{lemma}
\label{lemma_inequalities_for_pi}
If $\pi$ is an increasing oscillation, and $\alpha \in \familyil{\nils{k}{\dtwo}}$ with $\alpha \leq \pi$ (so $\alpha$ is sum indecomposable), then for $\nsums{r}{\alpha}$ to be contained in $\pi$, the inequality in the table below must be satisfied, where $k \geq 1$.
\centering
\begin{tabular}{l@{\phantom{xxxxxxx}}c@{\phantom{xxxxxxx}}r}
\toprule
$\pi$ & Shape of $\alpha$ & Inequality \\
\midrule
$W_{2n}, M_{2n}$ & $21$ & $3r - 1 \leq 2n$ \\
$W_{2n-1}, M_{2n-1}$ & $21$ & $3r \leq 2n$ \\
$W_{2n}$ & $\nils{{k+1}}{\dtwo}$ & $2kr+2r-2 \leq 2n$ \\
$W_{2n-1}$ & $1 \interleave \left(\nils{k}{\dtwo}\right)$ & $2kr+2r+2 \leq 2n$ \\
$M_{2n-1}$ & $\left(\nils{k}{\dtwo}\right) \interleave 1$ & $2kr+2r+2 \leq 2n$ \\
$M_{2n}$ & $1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1$ & $2kr+4r-2 \leq 2n$ \\
$W_{2n}, W_{2n-1}, M_{2n-1}$ & $1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1$ & $2kr+4r \leq 2n$ \\
\multicolumn{2}{c}{All other cases} & $2kr+2r \leq 2n$ \\
\bottomrule
\end{tabular}
\end{lemma}
\begin{proof}
We apply Lemmas~\ref{lemma_unused_points_at_start}, \ref{lemma_unused_points_at_end}, \ref{lemma_unused_points_interior} and~\ref{lemma_unused_points_21} to the possibilities for $\pi$ and $\alpha$.
\end{proof}
As a consequence of Lemmas~\ref{lemma_unused_points_at_start} and~\ref{lemma_unused_points_at_end} we can define a relationship between the minimum number of points required to embed some $\nsums{r}{\alpha}$, and the minimum number of points required to embed $\oneplus \left(\nsums{r}{\alpha}\right)$, $\nsums{r}{\alpha} \oplus 1$ and $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1$.
\begin{corollary}
\label{corollary_add_2_for_op_or_po}
If $\pi$ is an increasing oscillation, $\alpha \leq \pi$ is sum indecomposable, and the minimum number of points required to embed $\nsums{r}{\alpha}$ into $\pi$ is $C$, then the minimum number of points required to embed $\oneplus \left(\nsums{r}{\alpha}\right)$ or $\left(\nsums{r}{\alpha}\right) \oplus 1$ into $\pi$ is $C+2$, and the minimum number of points required to embed $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1$ into $\pi$ is $C+4$.
\end{corollary}
\begin{proof}
We can see from Lemmas~\ref{lemma_unused_points_at_start} and~\ref{lemma_unused_points_at_end} that adding $\oneplus {}$ at the start of a permutation increases the number of points required by two -- one for the new point, and one that is unused. Similarly, adding ${} \oplus 1$ at the end increases the points required by two.
\end{proof}
Lemma~\ref{lemma_inequalities_for_pi} gives us inequalities that any $\nsums{r}{\alpha}$ must satisfy to ensure that $\nsums{r}{\alpha} \leq \pi$. Further, Corollary~\ref{corollary_add_2_for_op_or_po} gives us inequalities that, for a given $\nsums{r}{\alpha}$, allow us to determine if $\oneplus \left(\nsums{r}{\alpha}\right) \leq \pi$, $\left(\nsums{r}{\alpha}\right) \oplus 1 \leq \pi$ and $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \leq \pi$. We can therefore determine which values of $r$ and $k$ will result in $\familysum{\nsums{r}{\alpha}}$ contributing to the M\"{o}bius function.
We now consider inequalities that relate $\sigma$ and $\alpha$, so that we can determine if $\sigma \leq \alpha$ using an inequality.
\begin{lemma}
\label{lemma_inequalities_for_sigma}
If $\sigma > 1$ is an increasing oscillation, and $\alpha \in \familyil{\nils{k}{\dtwo}}$ for some $k$, then for $\sigma$ to be contained in $\alpha$ the inequality in the table below must be satisfied, where $k \geq 1$.
\centering
\begin{tabular}{l@{\phantom{xxxxxxx}}c@{\phantom{xxxxxxx}}r}
\toprule
$\sigma$ & Shape of $\alpha$ & Inequality \\
\midrule
$W_{2n-1}, M_{2n}, M_{2n-1}$ & $21$ & False \\
$W_{2n-1}$ & $\left(\nils{k}{\dtwo}\right) \interleave 1$ & $k \geq n-1$ \\
$M_{2n-1}$ & $1 \interleave \left(\nils{k}{\dtwo}\right)$ & $k \geq n-1$ \\
$W_{2n-1}, M_{2n}, M_{2n-1}$ & $1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1$ & $k \geq n-1$ \\
$M_{2n}$ & $\nils{{k+1}}{\dtwo}$ & $k \geq n+1$ \\
\multicolumn{2}{c}{All other cases} & $k \geq n$ \\
\bottomrule
\end{tabular}
\end{lemma}
\begin{proof}
We examine all possible cases.
\end{proof}
We are now nearly ready to present the main theorem for this section. Informally, for each possible shape of permutation $\alpha$, we will first find the minimum and maximum values of $k$ such that $\sigma \leq \alpha \leq \pi$, as any other values of $k$ result in $\alpha$ being outside the interval. For each $\alpha$ and each $k$, we then determine the minimum value of $r$ such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not\leq \pi$. We can then use this value of $r$ (assuming it is non-zero) to determine the weight to be applied to $\mobfn{\sigma}{\alpha}$. The set of permutations $\alpha$ with a non-zero weight is a contributing set $\contrib{\sigma}{\pi}$. At this point we can substitute a value for any $\mobfn{\sigma}{\alpha}$ where $\order{\sigma} \geq \order{\alpha} - 1$. We then use the same process recursively to determine the contributing set for the remaining elements of $\contrib{\sigma}{\pi}$.
We first define some supporting functions. Let $\rawmink(\sigma, \alpha)$ be the minimum value of $k$ that satisfies the inequality in Lemma~\ref{lemma_inequalities_for_sigma}. For the first inequality, which is always false, we set $k=\order{\pi}$, as this will force the sum, defined later in Theorem~\ref{theorem_mobius_sum_increasing_oscillations}, to be empty. Let $\mink(\sigma, \alpha)$ be defined as
\begin{align*}
\mink(\sigma, \alpha) & =
\begin{cases*}
1 & if $\sigma = 1$ and $\alpha \neq \nils{{k+1}}{\dtwo}$, \\
2 & if $\sigma = 1$ and $\alpha = \nils{{k+1}}{\dtwo}$, \\
\rawmink(\sigma,\alpha) & otherwise.
\end{cases*}
\end{align*}
Observe that for any $k < \mink(\sigma, \alpha)$, we have $\sigma \not\leq \alpha$, and so $\familysum{\nsums{r}{\alpha}}$ makes no net contribution to the M\"{o}bius sum for any $r$.
Let $\maxk(\alpha, \pi)$ be defined as the maximum value of $k$ that satisfies the inequality in Lemma~\ref{lemma_inequalities_for_pi}, if the shape of $\alpha$ and the shape of $\pi$ are different; and one less than the maximum value of $k$ that satisfies the inequality if the shape of $\alpha$ and the shape of $\pi$ are the same. For the first two inequalities, which do not involve $k$, we set $\maxk(\alpha, \pi)=1$ if the inequality is satisfied, and $\maxk(\alpha, \pi)=0$ if not. Observe here that for any $k > \maxk(\alpha, \pi)$ we have $\alpha \not< \pi$, and so $\familysum{\nsums{r}{\alpha}}$ makes no contribution to the M\"{o}bius sum for any $r$.
We define the weight function for increasing oscillations, $\weightosc{\sigma}{\alpha}{\pi}$, as
\begin{align*}
\weightosc{\sigma}{\alpha}{\pi} & =
\begin{cases*}
1 & if $\left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$, \\
-1 & if $\left(\nsums{r}{\alpha}\right) \oplus 1 \leq \pi$ and $\nsums{r+1}{\alpha} \not\leq \pi$, \\
0 & otherwise,
\end{cases*}
\end{align*}
where $r$ is the smallest integer such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$.
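As an informal cross-check of these definitions, the following sketch evaluates the weight of a given $\alpha$ by brute-force containment rather than via the inequality tables. It reuses the \texttt{contains}, \texttt{osum} and \texttt{nils} helpers from the earlier sketches, and the helper \texttt{nsums} below plays the role of $\nsums{r}{\alpha}$; as before, this is our own illustrative code, not part of the published method.
\begin{verbatim}
def nsums(r, alpha):
    """Direct sum of r copies of alpha."""
    out = alpha
    for _ in range(r - 1):
        out = osum(out, alpha)
    return out

def weight_osc(alpha, pi):
    """The increasing-oscillation weight of alpha in pi, checked by
    brute-force containment instead of the inequality tables."""
    one = (1,)
    r = 1
    # smallest r with 1 + (r copies of alpha) + 1 not contained in pi
    while contains(pi, osum(one, osum(nsums(r, alpha), one))):
        r += 1
    if not contains(pi, osum(nsums(r, alpha), one)):
        return 1
    if not contains(pi, nsums(r + 1, alpha)):
        return -1
    return 0

# alpha = 3142 and pi = W_9 = 315274968: here r = 2 and the weight is +1,
# in agreement with the worked example later in this section.
assert weight_osc(nils(2, (2, 1)), (3, 1, 5, 2, 7, 4, 9, 6, 8)) == 1
\end{verbatim}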
The conditions in $\weightosc{\sigma}{\alpha}{\pi}$ are simpler than those given in the weight function~(\ref{equation_general_weight_function}) for Theorem~\ref{theorem_mobius_sum_bottom_level_indecomposable} as, by Corollary~\ref{corollary_add_2_for_op_or_po}, if $\left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$ then $\oneplus \left(\nsums{r}{\alpha}\right) \not \leq \pi$ and vice versa. Furthermore, we will see that this weight function is only used when $\sigma \leq \nsums{r}{\alpha} \leq \pi$.
We are now in a position to state our main theorem for this section. In this theorem, we consider the contribution to the M\"{o}bius sum of each possible shape of some sum indecomposable $\alpha$. There are five possible shapes, and, given that the expression for each shape is identical, we abuse notation slightly by writing our theorem as a sum over the shapes; thus the first sum in Theorem~\ref{theorem_mobius_sum_increasing_oscillations} is over the possible shapes of $\alpha$, where four of the shapes have a parameter $k$. For each shape, the limits on the interior sum determine the minimum and maximum values of $k$, using the summation variable $v$. We use the notation $\alpha_v$ to represent the actual permutation that has the shape $\alpha$, where the parameter $k$ has been set to the value of $v$. As an example, if $\alpha = 1 \interleave \left(\nils{k}{\dtwo}\right)$, and $v = 2$, then $\alpha_v = 1 \interleave \left( \nils{2}{21} \right) = 24153$.
\begin{theorem}
\label{theorem_mobius_sum_increasing_oscillations}
Let $\pi$ be an increasing oscillation, and let $\sigma \leq \pi$ be sum indecomposable. Then
\begin{align*}
\mobfn{\sigma}{\pi} & = \sum_{\alpha \in \mathcal{S}} \ \sum_{v=\mink(\sigma,\alpha)}^{\maxk(\alpha,\pi)} \mobfn{\sigma}{\alpha_{v}} \weightosc{\sigma}{\alpha_{v}}{\pi}
\end{align*}
where the first sum is over the possible shapes of a sum indecomposable permutation contained in an increasing oscillation, so $\mathcal{S} = \{ 21,\; \nils{{k+1}}{\dtwo},\; 1 \interleave \left(\nils{k}{\dtwo}\right),\; \left(\nils{k}{\dtwo}\right) \interleave 1,\; 1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1 \}$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma_form_of_permutations_in_increasing_oscillations}, the only permutations contained in an increasing oscillation that can make a non-zero contribution to the M\"{o}bius sum are those in $\familysum{\nsums{r}{\alpha}}$, where $\alpha \in \mathcal{S}$.
If we set $r=1$, then for each $\alpha$ in $\mathcal{S}$, Lemma~\ref{lemma_inequalities_for_sigma} provides the smallest value of $k$ such that $\sigma \leq \alpha$. If there is no such value of $k$, then we use $\order{\pi}$, as the maximum value of $k$ must be smaller than this, and so the sum is empty.
Again setting $r=1$, for each $\alpha$ in $\mathcal{S}$, Lemma~\ref{lemma_inequalities_for_pi} provides the maximum value of $k$ such that $\alpha \leq \pi$. If there is no value of $k$ that satisfies the inequality, then we set $\maxk(\alpha,\pi) = 0$, thus forcing the sum to be empty.
Thus the permutations $\alpha_v$ in the sum
\[ \sum_{\alpha \in \mathcal{S}} \ \sum_{v=\mink(\sigma,\alpha)}^{\maxk(\alpha,\pi)} \]
are those that could contribute to the M\"{o}bius sum, and for any $\alpha_v$ not included in the sum, $\familysum{\nsums{r}{\alpha_v}}$ has a zero contribution to the M\"{o}bius sum for any $r$.
Further, we can see from the construction method that any $\alpha_v$ included in the sum has $\sigma \leq \nsums{r}{\alpha_v} \leq \pi$ for at least one value of $r$, as if this were not the case, then we would have $\mink(\sigma,\alpha) > \maxk(\alpha,\pi)$, and so the sum would be empty.
We have therefore shown that the permutations $\alpha_v$ included in the sum form a contributing set, and we could set $\contrib{\sigma}{\pi}$ to be those permutations and use Theorem~\ref{theorem_mobius_sum_bottom_level_indecomposable}.
We now show that the increasing oscillation weight function $\weightosc{\sigma}{\alpha}{\pi}$ is equivalent to $\weightgen{\sigma}{\alpha}{\pi}$ as defined in the general case. By Corollary~\ref{corollary_add_2_for_op_or_po}, if $\left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$ then $\oneplus \left(\nsums{r}{\alpha}\right) \not \leq \pi$ and vice versa, and so the condition for $\left(\nsums{r}{\alpha}\right) \oplus 1$ also covers $\oneplus \left(\nsums{r}{\alpha}\right)$. As discussed above, we know that there is at least one value of $r$ such that $\sigma \leq \nsums{r}{\alpha} \leq \pi$, and so $\weightosc{\sigma}{\alpha}{\pi}$ does not need to include this condition. Thus the increasing oscillation weight function $\weightosc{\sigma}{\alpha}{\pi}$ is equivalent to $\weightgen{\sigma}{\alpha}{\pi}$ as defined in the general case.
\end{proof}

\subsection{Example of Theorem~\ref{theorem_mobius_sum_increasing_oscillations}}
As an example of Theorem~\ref{theorem_mobius_sum_increasing_oscillations} in action, we show how to determine
\[ \mobfn{3142}{315274968} = \mobfn{\nils{2}{21}}{\nils{4}{21} \interleave 1}. \]
We start by considering each possible shape of $\alpha$, setting $r=1$, and then using the inequalities in Lemmas~\ref{lemma_inequalities_for_pi} and~\ref{lemma_inequalities_for_sigma} to determine the minimum and maximum values of $k$. This gives us
\begin{center}
\begin{tabular}{ccc}
\toprule
Shape of $\alpha$ & Minimum $k$ & Maximum $k$ \\
\midrule
$21$ & 1 & 1 \\
$\nils{k}{\dtwo}$ & 2 & 4 \\
$1 \interleave \left(\nils{k}{\dtwo}\right)$ & 2 & 3 \\
$\left(\nils{k}{\dtwo}\right) \interleave 1$ & 2 & 3 \\
$1 \interleave \left(\nils{k}{\dtwo}\right) \interleave 1$ & 2 & 3 \\
\bottomrule
\end{tabular}
\end{center}
For each shape of $\alpha$, and each value of $k$, we then use the inequalities in Lemma~\ref{lemma_inequalities_for_pi} to determine the minimum value of $r$ such that $\oneplus \left(\nsums{r}{\alpha}\right) \oplus 1 \not \leq \pi$, and we then calculate the weight using this value of $r$.
This gives
\begin{center}
\begin{tabular}{ccr}
\toprule
$\alpha$ & $r$ & Weight \\
\midrule
$21$ & \multicolumn{2}{c}{No possibilities} \\
$\nils{2}{21}$ & $2$ & $1$ \\
$\nils{3}{21}$ & $1$ & $-1$ \\
$\nils{4}{21}$ & $1$ & $-1$ \\
$1 \interleave \left( \nils{2}{21} \right)$ & $1$ & $-1$ \\
$1 \interleave \left( \nils{3}{21} \right)$ & $1$ & $-1$ \\
$\left( \nils{2}{21} \right) \interleave 1$ & $2$ & $1$ \\
$\left( \nils{3}{21} \right) \interleave 1$ & $1$ & $-1$ \\
$1 \interleave \left( \nils{2}{21} \right) \interleave 1$ & $1$ & $-1$ \\
$1 \interleave \left( \nils{3}{21} \right) \interleave 1$ & $1$ & $-1$ \\
\bottomrule
\end{tabular}
\end{center}
This leads to the following initial expression:
\begin{align*}
\mobfn{\nils{2}{21}}{\nils{4}{21} \interleave 1} = & \mobfn{\nils{2}{21}}{\nils{2}{21}} - \mobfn{\nils{2}{21}}{\nils{3}{21}} - \mobfn{\nils{2}{21}}{\nils{4}{21}} \\
& - \mobfn{\nils{2}{21}}{1 \interleave \left( \nils{2}{21} \right)} - \mobfn{\nils{2}{21}}{1 \interleave \left( \nils{3}{21} \right)} \\
& + \mobfn{\nils{2}{21}}{\nils{2}{21} \interleave 1} - \mobfn{\nils{2}{21}}{\nils{3}{21} \interleave 1} \\
& - \mobfn{\nils{2}{21}}{1 \interleave \left( \nils{2}{21} \right) \interleave 1} - \mobfn{\nils{2}{21}}{1 \interleave \left( \nils{3}{21} \right) \interleave 1}
\end{align*}
We know that $\mobfn{\nils{2}{21}}{\nils{2}{21}} = 1$, and that
\[ \mobfn{\nils{2}{21}}{1 \interleave \left( \nils{2}{21} \right)} = \mobfn{\nils{2}{21}}{\nils{2}{21} \interleave 1} = -1. \]
Applying Theorem~\ref{theorem_mobius_sum_increasing_oscillations} recursively to the other intervals eventually yields
\[ \mobfn{\nils{2}{21}}{\nils{4}{21} \interleave 1} = -6. \]

\section{Concluding remarks}
\label{incosc_section-concluding-remarks}
The results in~\cite{Burstein2011} provide two recurrences to handle the case where $\pi$ is decomposable. This work handles the case where $\sigma$ is indecomposable. It overlaps with~\cite{Burstein2011} when $\sigma$ is indecomposable and $\pi$ is decomposable. This leaves the case where $\sigma$ is decomposable and $\pi$ is indecomposable for further investigation.
We can see that by symmetry $\mobfn{\sigma}{W_{n}} = \mobfn{\sigma^{-1}}{M_{n}}$. If we consider the value of the principal M\"{o}bius function, $\mobfn{1}{\pi}$, where $\pi$ is either $W_n$ or $M_n$, then it is simple to show that the absolute value of the principal M\"{o}bius function is bounded above by $2^n$. The weight function for increasing oscillations can be $\pm 1$, and we can see no obvious reason why there should not be two distinct values, $i$ and $j$, with the same parity, such that the signs of $\mobfn{1}{W_i}$ and $\mobfn{1}{W_j}$ are different. We have experimental evidence, based on the values of $\mobfn{1}{W_n}$ and $\mobfn{1}{M_n}$ for $n = 1, \ldots, \text{2,000,000}$, that suggests that $\mobfn{1}{W_{2n}} < 0$, and that $\mobfn{1}{W_{2n-1}} > 0$. Figure~\ref{figure_increasing_oscillation_values} is a log-log plot of the values of $- \mobfn{1}{W_{2n}}$ from $n=\text{8,000}$ to $n=\text{10,000}$. As can be seen, there seems to be some evidence that the values fall into distinct bands, and we have confirmed that this pattern continues up to $n=\text{1,000,000}$.
Examination of the values of $\mobfn{1}{W_{2n-1}}$ reveals the same patterns.
\begin{figure}
\centering
\input{incosc_loglogplotofwn.tex}
\caption{Log-log plot of the values of $-\mobfn{1}{W_{2n}}$.}
\label{figure_increasing_oscillation_values}
\end{figure}
Following discussions at Permutation Patterns 2017, V{\'{i}}t Jel{\'{i}}nek~\cite{Jelinek2017a} provided the following conjecture (rephrased to reflect our notation).
\begin{conjecture}[{Jel{\'{i}}nek~\cite{Jelinek2017a}}]
\label{incosc_conjecture_vit}
Let $M(n)$ denote the absolute value of the M\"{o}bius function $\mobfn{1}{W_n} = \mobfn{1}{M_n}$. Then for $n > 50$ we have
\begin{align*}
M(2n) & = n^2 \Longleftrightarrow \text{$n+1$ is prime and $n \equiv 0 \mymod 6$} \\
M(2n) & = n^2 - 1 \Longleftrightarrow \text{$n+1$ is prime and $n \equiv 4 \mymod 6$} \\
M(2n+1) & = n^2 - n \Longleftrightarrow \text{$n+1$ is prime and $n \equiv 0 \mymod 6$} \\
M(2n+1) & = n^2 - n - 1 \Longleftrightarrow \text{$n+1$ is prime and $n \equiv 4 \mymod 6$}
\end{align*}
Further, Jel{\'{i}}nek notes that there does not seem to be any other small constant $k$ such that $M(n) = (n^2-k)/4$ infinitely often.
\end{conjecture}
We also have the following conjecture relating to the banding of the values.
\begin{conjecture}
\label{incosc_conjecture_dwm}
Let $M(n)$ denote the absolute value of the M\"{o}bius function $\mobfn{1}{W_n} = \mobfn{1}{M_n}$. Let $E(n) = M(n) / (n^2)$, and let $O(n) = M(n)/(n^2 + n)$. Then, with $n \geq 1$, there exist constants $0 < a < b < c < d < e < f < g < 1$ such that
\begin{align*}
E(12n + 10) & \in [ a , b ] & O(12n + 11) & \in [ a , b ] \\
E(12n + 2) & \in [ c , d ] & O(12n + 3) & \in [ c , d ] \\
E(12n + 6) & \in [ c , d ] & O(12n + 7) & \in [ c , d ] \\
E(12n + 4) & \in [ e , f ] & O(12n + 5) & \in [ e , f ] \\
E(12n + 8) & \in [ g , 1 ] & O(12n + 9) & \in [ g , 1 ] \\
E(12n) & \in [ g , 1 ] & O(12n + 1) & \in [ g , 1 ]
\end{align*}
Examining the first 2,000,000 values of $\mobfn{1}{W_n}$ gives the following estimates for the constants.
\[ \begin{array}{ccccccc}
a & b & c & d & e & f & g \\
0.615 & 0.680 & 0.692 & 0.760 & 0.821 & 0.896 & 0.923
\end{array} \]
\end{conjecture}
The \emph{complete nearly-layered}\extindex[permutation]{complete nearly-layered} permutations are formed by interleaving descending permutations. Formally, a complete nearly-layered permutation has the form
\[ \alpha_1 \interleave \alpha_2 \interleave \ldots \interleave \alpha_{k-1} \interleave \alpha_k \]
where each $\alpha_i$ is a descending permutation, with $\alpha_i > 1$ for $i = 2, \ldots, k-1$. If we set $\alpha_i = 21$ for $i = 2, \ldots, k-1$, and $\alpha_1, \alpha_k \in \{1, 21\}$, then we obtain the increasing oscillations. The computational approach taken for increasing oscillations could, we think, be adapted to complete nearly-layered permutations. It is clear that the equivalent of the inequalities in Lemmas~\ref{lemma_inequalities_for_pi} and~\ref{lemma_inequalities_for_sigma} would be somewhat more complex than those found here, but we believe that it should be possible to define an algorithm that could determine the M\"{o}bius function for complete nearly-layered permutations where the lower bound is sum indecomposable.
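To illustrate the construction, here is a final short sketch, reusing the \texttt{interleave} and \texttt{nils} helpers from the earlier sketches (and, as before, with names of our own choosing): a complete nearly-layered permutation can be assembled directly from its list of block sizes.
\begin{verbatim}
def descending(m):
    """The descending permutation m, m-1, ..., 1."""
    return tuple(range(m, 0, -1))

def nearly_layered(block_sizes):
    """Complete nearly-layered permutation built from descending blocks."""
    out = descending(block_sizes[0])
    for m in block_sizes[1:]:
        out = interleave(out, descending(m))
    return out

# With every block equal to 21 we recover an increasing oscillation:
assert nearly_layered([2, 2, 2]) == nils(3, (2, 1))   # W_6 = 315264
\end{verbatim}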
\section{Chapter summary}
We started this chapter by saying that our motivation was to find a contributing set $\contrib{\sigma}{\pi}$ that is significantly smaller than the poset interval $[\sigma, \pi)$, and a $\{0, \pm1 \}$ weighting function $\weightgen{\sigma}{\alpha}{\pi}$ such that
\begin{align*}
\mobfn{\sigma}{\pi} & = - \sum_{\alpha \in \contrib{\sigma}{\pi}} \mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi}.
\end{align*}
Our results are, essentially, computational. By this we mean that if $\sigma$ is indecomposable, then Theorem~\ref{theorem_mobius_sum_bottom_level_indecomposable} can be used to compute the value of the M\"{o}bius function on an interval $[\sigma, \pi]$ with fewer computational resources than are required by the standard recursion of Equation~\ref{equation_mobius_function}.
During the preparation of this thesis the author generated 1000 random permutations $\pi$ of length 14, determined the poset $[1, \pi)$ defined by each permutation, and then counted the permutations in the poset, the sum-indecomposable permutations in the poset, and the skew-indecomposable permutations in the poset. The details are summarised in Table~\ref{table_indecomposable_improvement}.
\begin{table}
\centering
\begin{tabular}{lr}
\toprule
Average number of \ldots & Value \\
\midrule
Permutations & 3373 \\
Sum-indecomposable permutations & 2492 \\
Skew-indecomposable permutations & 2445 \\
\bottomrule
\end{tabular}
\caption{Statistics for 1000 posets $[1, \pi)$ defined by a random permutation of length 14, with figures rounded to the nearest integer.}
\label{table_indecomposable_improvement}
\end{table}
These statistics indicate that the improvement, at least for small intervals, is not as significant as the author hoped. With hindsight this is not unexpected. The proportion of permutations that are sum or skew decomposable tends to zero as the length of the permutation increases, from which one can readily deduce that the proportion of strongly indecomposable permutations must tend to 1 as the length of the permutation increases. This essentially means that the size of the contributing set $\contrib{\sigma}{\pi}$ is likely to be only slightly smaller than the size of the overall poset.
The author wrote a computer program, Permutation WorkShop (PWS)~\cite{Marchant2020a}, which can be used to investigate the M\"{o}bius function on the permutation poset. The author found that the overhead of identifying which permutations were in the contributing set $\contrib{\sigma}{\pi}$, and the overhead of calculating $\weightgen{\sigma}{\alpha}{\pi}$, meant that, in general, calculations for permutations with an indecomposable lower bound took longer than using the standard recursive definition in Equation~\ref{equation_mobius_function}. This observation is, however, limited to the way in which PWS operates, and, indeed, the hardware it runs on. We suspect, however, that this observation is likely to be applicable to other routines that calculate the M\"{o}bius function on the permutation poset.
Despite the comments above, we still feel that the overall approach of finding a contributing set and a weighting function so that we can write
\[ \mobfn{\sigma}{\pi} = - \sum_{\alpha \in \contrib{\sigma}{\pi}} \mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi} \]
is valid, and indeed, although it is not phrased in these terms, the results in Chapter~\ref{chapter_2413_balloon_paper} use this method successfully.
By contrast with the statistics in Table~\ref{table_indecomposable_improvement}, when we consider intervals $[\sigma, \pi]$ where $\pi$ is an increasing oscillation, we find that the number of indecomposable permutations contained in $W_n$ or $M_n$ is, apart from trivial values of $\pi$, no greater than $2n - 4$, and this upper bound is only achieved when $\sigma = 1$. This is because any indecomposable permutation contained in an increasing oscillation must itself be a (smaller) increasing oscillation, and there are only two increasing oscillations of each length. It is well-known that, for most intervals $[1, \pi]$, the number of permutations in the poset grows exponentially as $\order{\pi}$ increases, and so we believe that in this specific case we have indeed found a contributing set that is significantly smaller than the poset.

As supporting evidence for this claim, we note that we were easily able to calculate $\mobp{\pi}$ where $\pi$ was an increasing oscillation with $\text{2,000,000}$ elements. This gave us the raw data to notice the banding shown in Figure~\ref{figure_increasing_oscillation_values}, and led to Conjecture~\ref{incosc_conjecture_dwm}. We are not aware of any other set of permutations with a simple length-based construction where the values of the M\"{o}bius function fall into bands as the length of the permutations increases. The M\"{o}bius function on the permutation pattern poset is, however, notoriously hard to compute in general, so it is quite possible that such sets do exist, but we do not have the understanding or the technology to be able to calculate values that would exhibit banding.

We are not the only researchers to have considered the behaviour of $\mobp{W_n}$, and Conjecture~\ref{incosc_conjecture_vit} comes from a personal communication with V{\'i}t Jel{\'i}nek~\cite{Jelinek2017a}. We have used our computations of $\mobp{W_n}$ to confirm that this conjecture holds for $50 < n \leq \text{2,000,000}$. We suspect that the banding behaviour in Conjecture~\ref{incosc_conjecture_dwm} is a consequence of the link with the prime numbers in Conjecture~\ref{incosc_conjecture_vit}.

While there are results that give expressions for the value of the principal M\"{o}bius function, we are not aware of any result, concerning either the value or the growth of the principal M\"{o}bius function, that has a link to the prime numbers. This suggests to us that one possible area for future research would be to develop a better understanding of the behaviour of the principal M\"{o}bius function of increasing oscillations. We would hope that if we could find a relationship that accounted for the apparent link with prime numbers, then we would also have a better understanding of the permutation pattern poset.

\chapter{Overview}
\label{chapter_overview}

This thesis is primarily concerned with the M\"{o}bius function on the poset of permutations ordered by classic pattern containment. It consists of eight chapters.

\section{Introductory material}
This chapter (Chapter~\ref{chapter_overview}) is an overview of the thesis. Following this overview, we have Chapter~\ref{chapter_common_definitions}, which defines the terminology and notation for the subject area as a whole. Subsequent chapters will also include definitions of terminology and notation that are only used in those chapters.
This is then followed, in Chapter~\ref{chapter_background_and_history}, by a brief overview of the history of the subject area, and a description of the motivation for the work described in this thesis.

\section{Chapters based on published material}
Chapters~\ref{chapter_incosc_paper}, \ref{chapter_oppadj_paper}, and \ref{chapter_2413_balloon_paper} are based on material that has been published in peer-reviewed journals. Chapter~\ref{chapter_balloon_permutations_preprint} is based on material currently being prepared for publication.

These chapters start with a section (``Preamble'') that introduces the subject matter. This is based on the abstract of the published paper, but may include additional material to help place the subject into the context of this thesis. This is then followed by sections that are based on the published material. We then conclude each chapter with a section (``Chapter summary'') that summarises the impact of the results, discusses how they relate to this thesis, and considers possible avenues for further research following on from the results described in the chapter.

\section{Conclusion}
Chapter~\ref{chapter_conclusion} looks back at the results from Chapters~\ref{chapter_incosc_paper}, \ref{chapter_oppadj_paper}, \ref{chapter_2413_balloon_paper}, and~\ref{chapter_balloon_permutations_preprint}. Here we summarise the results that we presented in the preceding four chapters, and discuss whether it is possible to find a common theme (beyond the obvious ``related to the M\"{o}bius function'') in the work presented. We then consider possible avenues for future research.

\section{Details of chapters based on published material}
We now provide a more detailed description of the chapters based on published material, or on material being prepared for publication. In the description that follows, we may use terminology that is in common use in the field, but which will not be formally defined until Chapter~\ref{chapter_common_definitions}.

\subsection{The M\"{o}bius function of permutations with an indecomposable lower bound}
Chapter~\ref{chapter_incosc_paper} is based on a published paper, ``The M\"{o}bius function of permutations with an indecomposable lower bound''~\cite{Brignall2017a}, which is joint work with Robert Brignall. In this paper we show that, given some interval $[\sigma, \pi]$ in the permutation poset, if $\sigma$ is sum (resp. skew) indecomposable, then the value of the M\"{o}bius function $\mobfn{\sigma}{\pi}$ depends solely on the sum (resp. skew) indecomposable permutations contained in the upper bound $\pi$.

The basic methodology is to first use existing results to show that certain permutations that are contained in the interval do not contribute to the value of the M\"{o}bius function. We then show that the permutations that remain can be partitioned into families, each defined by a single sum (resp. skew) indecomposable permutation $\alpha$, and that the net contribution of a family will be in $\{ \pm \mobfn{\sigma}{\alpha}, 0 \}$. We derive a $\{ \pm 1, 0\}$ weighting function $\weightgen{\sigma}{\alpha}{\pi}$, and using this, we then show that $\mobfn{\sigma}{\pi}$ can be calculated by summing the value of $\mobfn{\sigma}{\alpha} \weightgen{\sigma}{\alpha}{\pi}$ over all permutations $\alpha$ that are sum (resp. skew) indecomposable and contained in the interval.

We then set $\pi$ to be an increasing oscillation. This allows us to define a revised weighting function specific to these intervals, which can be computed using simple inequalities.
This then leads to a fast algorithm for calculating $\mobfn{\sigma}{\pi}$, where $\pi$ is an increasing oscillation. We then present some conjectures relating to the long-term behaviour of the absolute value of $\mobfn{1}{\pi}$ for such $\pi$. The chapter concludes by summarising the impact of the results, particularly from a computational perspective.

\subsection{Zeros of the M\"{o}bius function of permutations}
Chapter~\ref{chapter_oppadj_paper} is based on a published paper, ``Zeros of the M\"{o}bius function of permutations''~\cite{Brignall2020}, which is joint work with Robert Brignall, V{\'i}t Jel{\'i}nek and Jan Kyn{\v c}l. In this paper we show that if a permutation $\pi$ contains two opposing adjacencies, then $\mobfn{1}{\pi} = 0$. We then use this result to show that the proportion of permutations of length $n$ with principal M\"{o}bius function equal to zero is, asymptotically, bounded below by 0.3995.

We start by showing that if a poset $P$ has a particular structure, then $\mobp{P} = 0$. We then show that if a permutation $\pi$ contains two opposing adjacencies, then the poset interval $[1, \pi]$ has the required structure, and it follows that $\mobfn{1}{\pi} = 0$. We then provide a second proof of the same result based on normal embeddings. The techniques used in both proofs are used in later, more complicated, settings.

We then show that if $\sigma$ is any permutation, and $\phi$ meets certain requirements, then any permutation $\pi$ that contains an interval order-isomorphic to $\phi$ has $\mobfn{\sigma}{\pi} = 0$. We use this result to show that if $\sigma$ meets a particular condition, and $\pi$ contains an interval copy of the form $\alpha \oplus 1 \oplus \beta$, then $\mobfn{\sigma}{\pi} = 0$. We then show that if $\sigma = 1$, then $\sigma$ meets the condition required, and thus we prove that if a permutation $\pi$ contains an interval copy of the form $\alpha \oplus 1 \oplus \beta$, then $\mobfn{1}{\pi} = 0$.

In the next part of this chapter, we show that, asymptotically, the proportion of permutations of length $n$ that have a principal M\"{o}bius function value of zero, $d_n$, is bounded below by $\left( 1 - \dfrac{1}{\mathrm{e}} \right)^2 \geq 0.3995$. We then use the techniques already introduced to show that there are pairs of permutations, $\alpha, \beta$, such that if $\pi$ contains interval copies of $\alpha$ and $\beta$, then $\mobfn{1}{\pi} = 0$. We further show that there are individual permutations $\alpha$ with the property that if $\pi$ contains an interval copy of $\alpha$, then $\mobfn{1}{\pi} = 0$. We then discuss further ways in which we could find a permutation $\pi$ where the presence of a specific interval or intervals in $\pi$ would guarantee $\mobfn{1}{\pi} = 0$. We discuss $d_n$, including a conjecture on an upper bound for $d_n$.

The chapter concludes by summarising the impact of the results. We show that there is some numerical evidence that a large proportion of permutations with multiple non-opposing adjacencies have a principal M\"{o}bius function value of zero, and show that if we could prove this for a positive proportion of these permutations, then we could improve the lower bound on $d_n$. We discuss extending the opposing adjacency result to more general poset intervals and show that this is not possible in all cases. We then present two minor results. The first shows that certain intervals $[\sigma, \pi]$ have $\mobfn{\sigma}{\pi} = 0$.
The second result shows that if $\sigma$ is adjacency-free, and $\pi$ is an inflation of $\sigma$, and $\mobfn{\sigma}{\pi} = 0$, then we have some information about the permutations used in the inflation.

\subsection{2413-balloons and the growth of the M\"{o}bius function}
Chapter~\ref{chapter_2413_balloon_paper} is based on a published paper, ``2413-balloons and the growth of the M\"{o}bius function''~\cite{Marchant2020}, which is solely the work of the author. In this paper we show that the growth of the principal M\"{o}bius function on the permutation poset is exponential.

We start by defining the ``2413-balloon'' of some permutation $\beta$. The resulting permutation has extremal points that are order-isomorphic to 2413, and the non-extremal points are an interval copy of $\beta$. A double 2413-balloon is the result of ballooning a permutation that is already a 2413-balloon.

We take a poset where the upper bound is a double 2413-balloon, and the lower bound is 1, and we show how we can partition the chains in the poset into three sets. We then show that two of these sets contribute zero to the value of the M\"{o}bius function. The remaining set of chains has the property that the second-highest element of every chain is in a particular set of permutations. We show that the Hall sum over the remaining chains is equivalent to summing the M\"{o}bius function over this set of permutations. The permutations in this set have the property that they can all be formed from $\beta$ and either one, two or three copies of the permutation $1$, combined using direct and skew sums. For example, one of these permutations is $1 \oplus \beta$, and another is $1 \ominus ((\beta \ominus 1) \oplus 1)$. If $\rho$ is one of the permutations in this set, then a well-known result shows that $\mobfn{1}{\rho} = \pm \mobfn{1}{\beta}$. We use this to show that if $\beta$ is a 2413-balloon, and $\pi$ is the 2413-balloon of $\beta$, then $\mobfn{1}{\pi} = 2 \mobfn{1}{\beta}$, and this in turn allows us to show that the growth of the principal M\"{o}bius function on the permutation poset is exponential.

We then consider 2413-balloons where the permutation being ballooned is not itself a 2413-balloon. Using a similar argument to that used for double 2413-balloons, we derive an expression for $\mobfn{1}{\pi}$, where $\pi$ is the 2413-balloon of some permutation $\beta$, and $\beta$ is not a 2413-balloon. For all but trivial cases, we prove that $\mobfn{1}{\pi} = \mobfn{1}{\beta}$.

We discuss generalising the ``balloon'' operation. We provide two conjectures which, up to symmetry, cover all generalised 2413-balloons. The chapter concludes with a brief discussion of a set of permutations where it is believed that the growth of the principal M\"{o}bius function is also exponential, but the ``growth rate'' is faster than that found for double 2413-balloons. We also discuss generalised balloons.

\subsection{The principal M\"{o}bius function of balloon permutations}
Chapter~\ref{chapter_balloon_permutations_preprint} is based on an unpublished paper, which is being prepared for submission in parallel with this thesis, and which is solely the work of the author. In this paper we generalise the 2413-balloon permutations used in Chapter~\ref{chapter_2413_balloon_paper}, and derive an expression for the value of the principal M\"{o}bius function of these permutations.

We start by defining a method of constructing a permutation from two smaller permutations $\alpha$ and $\beta$.
This construction method requires that the constructed permutation contains $\beta$ as an interval copy, and that the remaining points are order-isomorphic to $\alpha$. We call such a permutation a ``balloon'' permutation, which we write as $\ballij{\alpha}{\beta}$. We describe several sub-types of balloon permutation, including one that we call a ``wedge'' permutation.

We show that the chains in the poset interval $[1, \ballij{\alpha}{\beta}]$ can be partitioned into three sets. We further show that one of these sets contributes zero to the value of the M\"{o}bius function. We then prove that the contribution of a second set can be written as a sum of principal M\"{o}bius function values over a set of permutations, all of which contain $\beta$ as an interval copy. This leads to an expression for $\mobfn{1}{\ballij{\alpha}{\beta}}$. This expression includes a ``correction factor'', expressed as a sum over a particular set of (hard to handle) chains.

We then consider wedge permutations, and we show that the correction factor is always zero, thus leading to a simplified expression for the principal M\"{o}bius function of a wedge permutation. We further show that the principal M\"{o}bius function of a wedge permutation is always a multiple of the principal M\"{o}bius function of $\beta$. We discuss some of the problems that need to be overcome in order to extend our result to any interval where the upper bound is a balloon permutation.

\chapter{Zeros of the M\"{o}bius function of permutations}
\label{chapter_oppadj_paper}

\section{Preamble}
This chapter is based on a published paper~\cite{Brignall2020}, which is joint work with Robert Brignall, V{\'i}t Jel{\'i}nek and Jan Kyn{\v c}l.

In this chapter we show that if a permutation $\pi$ contains two intervals of length 2, where one interval is an ascent and the other a descent, then the M\"{o}bius function $\mobfn{1}{\pi}$ of the interval $[1,\pi]$ is zero. As a consequence, we prove that the proportion of permutations of length $n$ with principal M\"{o}bius function equal to zero is asymptotically bounded below by $(1-1/e)^2\ge0.3995$. This is the first result determining the value of $\mobfn{1}{\pi}$ for an asymptotically positive proportion of permutations~$\pi$. We further establish other general conditions on a permutation $\pi$ that ensure $\mobfn{1}{\pi}=0$, including the occurrence in $\pi$ of any interval of the form $\alpha\oplus 1 \oplus\beta$.

\section{Introduction}
In this section we describe our principal results, and give an overview of the previous work in this area. Formal definitions are given in the next section.

In this chapter, we are mainly concerned with the principal M\"{o}bius function. We focus on the zeros of the principal M\"{o}bius function, that is, on the permutations $\pi$ for which $\mobp{\pi}=0$. We show that we can often determine that a permutation $\pi$ is a M\"{o}bius zero by examining small localities of~$\pi$. We formalize this idea using the notion of an ``annihilator''. Informally, an annihilator is a permutation $\alpha$ such that any permutation $\pi$ containing an interval copy of $\alpha$ is a M\"{o}bius zero. We will describe an infinite family of annihilators. We will also prove that any permutation containing an increasing as well as a decreasing interval of size 2 is a M\"{o}bius zero. Based on this result, we show that the asymptotic proportion of M\"{o}bius zeros among the permutations of a given length is at least $(1-1/e)^2\ge 0.3995$.
This is the first known result determining the values of the principal M\"{o}bius function for an asymptotically positive fraction of permutations. We will also demonstrate how our results on the principal M\"{o}bius function can be extended to intervals whose lower bound is not~$1$.

Burstein, Jel{\'{i}}nek, Jel{\'{i}}nkov{\'{a}} and Steingr{\'{i}}msson~\cite{Burstein2011} found a recursion for the M\"{o}bius function for sum and skew decomposable permutations. They used this to determine the M\"{o}bius function for separable permutations. Their results for sum and skew decomposable permutations implicitly include a result that only concerns small localities, which is that, up to symmetry, if a permutation $\pi$ of length greater than two begins with $12$, then $\mobp{\pi} = 0$.

Smith~\cite{Smith2013} found an explicit formula for the M\"{o}bius function on the interval $[1, \pi]$ for all permutations $\pi$ with a single descent. Smith's paper includes a lemma stating that if a permutation $\pi$ contains an interval order-isomorphic to $123$, then $\mobp{\pi}=0$. While the result in~\cite{Burstein2011} requires that the permutation starts with a particular sequence, Smith's result is, in some sense, more general, as the critical interval ($123$) can occur in any position. Smith's lemma may be viewed as the first instance of an annihilator result. Our results on annihilators provide a common generalization of Smith's lemma and the above mentioned result of Burstein et al.~\cite{Burstein2011}.

\section{Definitions and notation}
\label{sect-definitions-and-notation}

Recall that an \emph{adjacency} in a permutation is an interval of length two. If a permutation contains a monotonic interval of length three or more, then each subinterval of length two is an adjacency. As examples, $367249815$ has two adjacencies, $67$ and $98$; and $1432$ also has two adjacencies, $43$ and $32$. If an adjacency is ascending, then it is an \emph{up-adjacency}; otherwise it is a \emph{down-adjacency}. If a permutation $\pi$ contains at least one up-adjacency, and at least one down-adjacency, then we say that $\pi$ has \emph{opposing adjacencies}. An example of a permutation with opposing adjacencies is $367249815$, which is shown in Figure~\ref{figure-example-oppadj}.

\begin{figure}[!ht]
\begin{center}
\begin{tikzpicture}[scale=0.25]
\plotpermgrid{3,6,7,2,4,9,8,1,5};
\draw [color=blue, very thick] (1.5, 5.5) rectangle (3.5, 7.5);
\draw [color=blue, very thick] (5.5, 7.5) rectangle (7.5, 9.5);
\end{tikzpicture}
\end{center}%
\caption{A permutation with opposing adjacencies.}
\label{figure-example-oppadj}
\end{figure}

A permutation that does not contain any adjacencies is \emph{adjacency-free}\extindex{adjacency-free}. Some early papers use the term ``strongly irreducible'' for what we call adjacency-free permutations. See, for example, Atkinson and Stitt~\cite{Atkinson2002}.

Given a permutation $\sigma$ of length $n$, and permutations $\alpha_1, \ldots, \alpha_n$, not all of them equal to the empty permutation $\epsilon$, the \emph{inflation}\extindex{inflation} of $\sigma$ by $\alpha_1, \ldots, \alpha_n$, written as $\inflateall{\sigma}{\alpha_1, \ldots, \alpha_n}$, is the permutation obtained by removing the element $\sigma_i$ if $\alpha_i = \epsilon$, and replacing $\sigma_i$ with an interval isomorphic to $\alpha_i$ otherwise. Note that this is slightly different to the standard definition of inflation, originally given in Albert and Atkinson~\cite{Albert2005}, which does not allow inflation by the empty permutation.
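Both the adjacency test and the inflation operation are easy to express computationally. The following Python sketch is ours (permutations are represented as tuples, and the function names are not taken from any existing library); it can be used to check the examples that follow.

\begin{verbatim}
def has_opposing_adjacencies(pi):
    """True if pi has at least one up-adjacency and at least one
    down-adjacency (consecutive positions holding consecutive
    values)."""
    ups = any(pi[i] + 1 == pi[i + 1] for i in range(len(pi) - 1))
    downs = any(pi[i] - 1 == pi[i + 1] for i in range(len(pi) - 1))
    return ups and downs

def inflate(sigma, alphas):
    """The inflation of sigma by alphas[0], ..., alphas[n-1];
    an empty alpha deletes the corresponding element of sigma."""
    assert len(alphas) == len(sigma)
    # Assign each element of sigma a contiguous band of values,
    # working through the elements in increasing value order.
    start, next_value = {}, 1
    for value in sorted(sigma):
        i = sigma.index(value)
        start[i] = next_value
        next_value += len(alphas[i])
    result = []
    for i, alpha in enumerate(alphas):
        result.extend(start[i] + a - 1 for a in alpha)
    return tuple(result)
\end{verbatim}

Here \texttt{inflate((3,6,2,4,7,1,5), ((1,), (1,2), (1,), (1,), (2,1), (1,), (1,)))} evaluates to \texttt{(3,6,7,2,4,9,8,1,5)}, matching the first example below, and \texttt{has\_opposing\_adjacencies} returns \texttt{True} on the result.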
As examples, $\inflateall{3624715}{1,12,1,1,21,1,1}=367249815$, and $\inflateall{3624715}{\epsilon,1,1,\epsilon,1,\epsilon,1}=3142$. In many cases we will be interested in permutations where most positions are inflated by the singleton permutation $1$. If $\sigma = 3624715$, then we will write $\inflateall{\sigma}{1,12,1,1,21,1,1} = 367249815$ as $\inflatesome{\sigma}{2,5}{12,21}$. Formally, $\inflatesome{\sigma}{i_1, \ldots, i_k}{\alpha_1, \ldots, \alpha_k}$ is the inflation of $\sigma$ where $\sigma_{i_j}$ is inflated by $\alpha_{j}$ for $j = 1, \ldots , k$, and all other positions of $\sigma$ are inflated by $1$. When using this notation, we always assume that the indices $i_1,\dotsc,i_k$ are distinct; however, we make no assumption about their relative order. Our aim is to study the M\"{o}bius function of the permutation poset, that is, the poset of finite permutations ordered by containment. We are interested in describing general examples of intervals $[\sigma,\pi]$ such that $\mobfn{\sigma}{\pi}=0$, with particular emphasis on the case $\sigma=1$. We say that $\pi$ is a \emph{M\"{o}bius zero}\extindex{M\"{o}bius zero} (or just \emph{zero}) if $\mobp{\pi}=0$, and we say that $\pi$ is a \emph{$\sigma$-zero}\extindex{$\sigma$-zero} if $\mobfn{\sigma}{\pi}=0$. It turns out that many sufficient conditions for $\pi$ to be a M\"{o}bius zero can be stated in terms of inflations. We say that a permutation $\phi$ is an \emph{annihilator}\extindex{annihilator} if every permutation that has an interval copy of $\phi$ is a M\"{o}bius zero; in other words, for every $\tau$ and every $i\le|\tau|$ the permutation $\tau_i[\phi]$ is a M\"{o}bius zero. More generally, we say that $\phi$ is a \emph{$\sigma$-annihilator}\extindex{$\sigma$-annihilator} if every permutation with an interval copy of $\phi$ is a $\sigma$-zero. We say that a pair of permutations $\phi$, $\psi$ is an \emph{annihilator pair}\extindex{annihilator pair} if for every permutation $\tau$ and every pair of distinct indices $i,j\le |\tau|$, the permutation $\inflatesome{\tau}{i,j}{\phi,\psi}$ is a M\"{o}bius zero. Observe that for an annihilator $\phi$, any permutation containing an interval copy of $\phi$ is also an annihilator. Likewise, if $\phi$ and $\psi$ form an annihilator pair then any permutation containing disjoint interval copies of $\phi$ and $\psi$ is an annihilator. As our first main result, presented in Section~\ref{sec-opposing}, we show that the two permutations $12$ and $21$ are an annihilator pair, or equivalently, any permutation with opposing adjacencies is a M\"{o}bius zero. Later, in Section~\ref{section-bounds-for-zn}, we use this result to prove that M\"{o}bius zeros have asymptotic density at least $(1-1/e)^2$. We also prove that for any two non-empty permutations $\alpha$ and $\beta$, the permutation $\alpha\oplus1\oplus\beta=\inflateall{123}{\alpha,1,\beta}$ is an annihilator, and generalize this result to a construction of $\sigma$-annihilators for general~$\sigma$. These results are presented in Section~\ref{sec-annihilator}. Finally, in Section~\ref{sec-special}, we give several examples of annihilators and annihilator pairs that do not directly follow from the results in the previous sections. \subsection{Intervals with vanishing M\"{o}bius function} We will now present several basic facts about the M\"{o}bius function, which are valid in an arbitrary finite poset. The first fact is a simple observation following directly from the definition of the M\"{o}bius function, and we present it without proof. 
\begin{fact}\label{fac-del} Let $P$ be a finite poset with M\"{o}bius function $\mu_P$, and let $x$ and $y$ be two elements of $P$ satisfying $\mobxfn{P}{x}{y}=0$. Let $Q$ be the poset obtained from $P$ by deleting the element $y$, and let $\mu_Q$ be its M\"{o}bius function. Then for every $z\in Q$, we have $\mobxfn{Q}{x}{z}=\mobxfn{P}{x}{z}$. \end{fact} Next, we introduce two types of intervals whose specific structure ensures that their M\"{o}bius function is zero. Let $[x,y]$ be a finite interval in a poset $P$. We say that $[x,y]$ is \emph{narrow-tipped}\extindex[poset]{narrow-tipped} if it contains an element $z$ different from $x$ such that $[x,y)=[x,z]$. The element $z$ is then called the \emph{core}\extindof{poset}{core}{(poset)} of $[x,y]$. We say that the interval $[x,y]$ is \emph{diamond-tipped}\extindex[poset]{diamond-tipped} if there are three elements $z$, $z'$ and $w$, all different from $x$, and such that \begin{enumerate} \item $[x,y)=[x,z]\cup[x,z']$ and \item $[x,z]\cap[x,z']=[x,w]$. \end{enumerate} Condition 2 is equivalent to $w$ being the greatest lower bound of $z$ and $z'$ in the interval $[x, y]$. The triple of elements $(z,z',w)$ is again called the \emph{core} of $[x,y]$. Figure~\ref{fig-example-diamond-tipped-poset} shows examples of narrow-tipped and diamond-tipped posets. \begin{figure} \begin{center} \begin{tikzpicture}[xscale=1,yscale=1] \tnode{5}{0}{5}{$y$}; \tnode{4}{0}{4}{$z$}; \dnode{31}{-1.5}{3}; \dnode{32}{-0.5}{3}; \dnode{33}{0.5}{3}; \dnode{34}{1.5}{3}; \dnode{21}{-1.5}{2}; \dnode{22}{-0.5}{2}; \dnode{23}{0.5}{2}; \dnode{24}{1.5}{2}; \dnode{11}{-0.75}{1}; \dnode{12}{0.75}{1}; \tnode{0}{0}{0}{$x$}; \dline{5}{4}; \dline{4}{31,32,33,34}; \dline{31}{21,22}; \dline{32}{22,23}; \dline{33}{21,22,23,24}; \dline{34}{22,24}; \dline{21}{11,12}; \dline{22}{11,12}; \dline{23}{11,12}; \dline{24}{11,12}; \dline{12}{0}; \dline{21}{0}; \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[xscale=1,yscale=1] \tnode{1}{0}{0}{$x$}; \dnode{12}{-1}{1}; \dnode{21}{1}{1}; \dnode{231}{-1.5}{2}; \tnode{132}{0}{2}{$w$}; \dnode{213}{1.5}{2}; \dnode{2431}{-2}{3}; \dnode{1342}{0}{3}; \tnode{2143}{2}{3}{$z^\prime$}; \tnode{13542}{-1.5}{4}{$z$}; \tnode{214653}{0}{5}{$y$}; \dline{1}{12}; \dline{1}{21}; \dline{12}{231}; \dline{12}{132}; \dline{12}{213}; \dline{21}{231}; \dline{21}{132}; \dline{21}{213}; \dline{231}{2431}; \dline{231}{1342}; \dline{132}{2431}; \dline{132}{1342}; \dline{132}{2143}; \dline{213}{2143}; \dline{2431}{13542}; \dline{1342}{13542}; \dline{2143}{214653}; \dline{13542}{214653}; \draw [thick] plot [smooth cycle] coordinates { (-1.5, 4.2) (-2.2, 3.0) (-1.2, 1.0) ( 0.0, -0.2) ( 1.2, 1.0) ( 0.2, 3.0) }; \draw [thick] plot [smooth cycle] coordinates { (2.2, 3.2) ( -0.2, 2.2) (-1.3, 1.0) ( 0.0, -0.3) ( 1.4, 1.0) }; \end{tikzpicture} \end{center} \caption{Examples of narrow-tipped (left) and diamond-tipped (right) posets.} \label{fig-example-diamond-tipped-poset} \end{figure} \begin{fact}\label{fac-nd} Let $P$ be a poset with M\"{o}bius function $\mu_P$, and let $[x,y]$ be a finite interval in~$P$. If $[x,y]$ is narrow-tipped or diamond-tipped, then $\mobxfn{P}{x}{y}=0$. \end{fact} \begin{proof} If $[x,y]$ is narrow-tipped with core $z$, then \begin{align*} \mobxfn{P}{x}{y} = -\sum_{v\in[x,y)} \mobxfn{P}{x}{v} = -\sum_{v\in[x,z]} \mobxfn{P}{x}{v} = 0. 
\end{align*} If $[x,y]$ is diamond-tipped with core $(z,z',w)$ then \begin{align*} \mobxfn{P}{x}{y} &= -\sum_{v\in[x,y)} \mobxfn{P}{x}{v} \\ &= -\sum_{v\in[x,z]\cup[x,z']} \mobxfn{P}{x}{v} \\ &= -\sum_{v\in[x,z]} \mobxfn{P}{x}{v} - \sum_{v\in[x,z']} \mobxfn{P}{x}{v} + \sum_{v\in[x,z]\cap[x,z']} \mobxfn{P}{x}{v} \\ &= -\sum_{v\in[x,z]} \mobxfn{P}{x}{v} - \sum_{v\in[x,z']} \mobxfn{P}{x}{v} + \sum_{v\in[x,w]} \mobxfn{P}{x}{v} \\ & =0.\qedhere \end{align*} \end{proof} \subsection{Embeddings} \label{subsect-embeddings} Recall that an \emph{embedding} of a permutation $\sigma\in\cS_k$ into a permutation $\pi\in\cS_n$ is a function $f\colon [k]\to[n]$ with the following properties: \begin{itemize} \item $1\le f(1)<f(2)<\dotsb<f(k)\le n$. \item For any $i,j\in[k]$, we have $\sigma_i<\sigma_j$ if and only if $\pi_{f(i)}<\pi_{f(j)}$. \end{itemize} We let $\sE(\sigma,\pi)$ denote the set of embeddings of $\sigma$ into $\pi$, and $E(\sigma,\pi)$ denote the cardinality of $\sE(\sigma,\pi)$. For an embedding $f$ of $\sigma$ into $\pi$, the \emph{image}\extindex[embedding]{image} of $f$, denoted $\Img(f)$, is the set $\{f(i);\;i\in[k]\}$. In particular, $|\Img(f)|=|\sigma|$. The permutation $\sigma$ is the \emph{source}\extindex[embedding]{source} of the embedding $f$, denoted $\src_\pi(f)$. When $\pi$ is clear from the context (as it usually will be) we write $\src(f)$ instead of $\src_\pi(f)$. Note that for a fixed $\pi$, the set $\Img(f)$ determines both $f$ and $\src_\pi(f)$ uniquely. We say that an embedding $f$ is \emph{even}\extindex[embedding]{even} if the cardinality of $\Img(f)$ is even, otherwise $f$ is \emph{odd}\extindex[embedding]{odd}. In our arguments, we will frequently consider \emph{sign-reversing}\extindex[embedding]{sign-reversing} mappings on sets of embeddings (with different sources), which are mappings that map an odd embedding to an even one and vice versa. A typical example of a sign-reversing mapping is the so-called $i$-switch, which we now define. For a permutation $\pi\in\cS_n$, let $\sE(*,\pi)$ be the set $\bigcup_{\sigma\le\pi}\sE(\sigma,\pi)$. For an index $i\in[n]$, the \emph{$i$-switch}\extindex[embedding]{$i$-switch} of an embedding $f\in\sE(*,\pi)$, denoted $\Delta_i(f)$, is the embedding $g\in\sE(*,\pi)$ uniquely determined by the following properties: \begin{align*} \Img(g)&= \Img(f)\cup\{i\} \text{ if } i\not\in\Img(f)\text{, and}\\ \Img(g)&= \Img(f)\setminus\{i\} \text{ if } i\in\Img(f). \end{align*} For example, consider the permutations $\sigma=132$ and $\pi=41253$, and the embedding $f\in\sE(\sigma,\pi)$ satisfying $f(1)=2$, $f(2)=4$, and $f(3)=5$. We then have $\Img(f)=\{2,4,5\}$. Defining $g=\Delta_3(f)$, we see that $\Img(g)=\{2,3,4,5\}$, and $\src(g)$ is the permutation $1243$. Similarly, for $h=\Delta_5(g)$, we have $\Img(h)=\{2,3,4\}$ and $\src(h)=123$. Note that for any $\pi\in\cS_n$ and any $i\in[n]$, the function $\Delta_i$ is a sign-reversing involution on the set $\sE(*,\pi)$. Consider, for a given $\pi\in\cS_n$, two embeddings $f,g\in\sE(*,\pi)$. We say that $f$ \emph{is contained in}\extindex[embedding]{contained in} $g$ if $\Img(f)\subseteq \Img(g)$. Note that if $f$ is contained in $g$, then the permutation $\src(f)$ is contained in~$\src(g)$, and if a permutation $\lambda$ is contained in a permutation $\tau$, then any embedding from $\sE(\tau,\pi)$ contains at least one embedding from $\sE(\lambda,\pi)$. 
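The definitions above are straightforward to experiment with. In the following Python sketch (ours; we exploit the fact, noted above, that for a fixed $\pi$ the image determines both the embedding and its source, so an embedding is represented simply by its image, a sorted tuple of 1-based positions), we compute $\src(f)$ and the $i$-switch $\Delta_i$, and replay the example with $\sigma=132$ and $\pi=41253$.

\begin{verbatim}
def src(pi, image):
    """The source of the embedding into pi with the given image:
    the pattern of the entries of pi at those (1-based) positions."""
    values = [pi[i - 1] for i in sorted(image)]
    ranks = {v: r for r, v in enumerate(sorted(values), start=1)}
    return tuple(ranks[v] for v in values)

def switch(image, i):
    """The i-switch Delta_i: toggle position i in the image."""
    return tuple(sorted(set(image) ^ {i}))

pi = (4, 1, 2, 5, 3)
f = (2, 4, 5)               # an embedding of 132 into pi
g = switch(f, 3)            # image (2, 3, 4, 5)
h = switch(g, 5)            # image (2, 3, 4)
assert src(pi, f) == (1, 3, 2)
assert src(pi, g) == (1, 2, 4, 3)    # src(g) = 1243
assert src(pi, h) == (1, 2, 3)       # src(h) = 123
\end{verbatim}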
It follows, in particular, that the mapping $f\mapsto \src(f)$ is a poset homomorphism from the set $\sE(*,\pi)$, ordered by containment, onto the interval $[\epsilon,\pi]$ in the permutation pattern poset.

\subsection{M\"{o}bius function via normal embeddings}\label{sec-form}
We will now derive a general formula, which will be useful in several subsequent arguments. The formula can be seen as a direct consequence of the well-known M\"{o}bius inversion formula. The following form of the M\"{o}bius inversion formula can be deduced, for example, from Proposition 3.7.2 in Stanley~\cite{Stanley2012}. Recall that a poset is \emph{locally finite} if each of its intervals is finite.

\begin{fact}[M\"obius inversion formula]
\label{fac-mif}
Let $P$ be a locally finite poset with maximum element $y$, let $\mu$ be the M\"obius function of $P$, and let $F\colon P\to\bbR$ be a function. If a function $G\colon P\to\bbR$ is defined by
\[
G(x)=\sum_{z\in[x,y]} F(z),
\]
then for every $x\in P$, we have
\[
F(x)=\sum_{z\in[x,y]} \mobfn{x}{z}G(z).
\]
\end{fact}

As a consequence, we obtain the following result.

\begin{proposition}
\label{pro-form}
Let $\sigma$ and $\pi$ be arbitrary permutations, and let $F\colon [\sigma,\pi]\to\bbR$ be a function satisfying $F(\pi)=1$. We then have
\begin{equation}
\mobfn{\sigma}{\pi}= F(\sigma) - \sum_{\lambda\in [\sigma,\pi)} \mobfn{\sigma}{\lambda}\sum_{\tau\in[\lambda,\pi]} F(\tau).
\label{eq-form}
\end{equation}
\end{proposition}

\begin{proof}
Fix $\sigma$, $\pi$ and $F$. For $\lambda\in[\sigma,\pi]$, define $G(\lambda)=\sum_{\tau\in[\lambda,\pi]} F(\tau)$. Using Fact~\ref{fac-mif} for the poset $P=[\sigma,\pi]$, we obtain
\[
F(\sigma)=\sum_{\lambda\in[\sigma,\pi]} \mobfn{\sigma}{\lambda}G(\lambda).
\]
Substituting the definition of $G(\lambda)$ into the above identity and noting that $F(\pi)=1$, we get
\begin{align*}
F(\sigma) & =\sum_{\lambda\in[\sigma,\pi]}\mobfn{\sigma}{\lambda} \sum_{\tau\in[\lambda,\pi]} F(\tau)\\
&=\mobfn{\sigma}{\pi} + \sum_{\lambda\in[\sigma,\pi)}\mobfn{\sigma}{\lambda} \sum_{\tau\in[\lambda,\pi]} F(\tau),
\end{align*}
from which the proposition follows.
\end{proof}

In our applications, the function $F(\tau)$ will usually be defined in terms of the number of embeddings of $\tau$ into $\pi$ satisfying certain additional conditions. We call embeddings that satisfy these conditions \emph{normal embeddings}. We briefly discussed normal embeddings in Chapter~\ref{chapter_common_definitions}. We extend that discussion here, as, in this thesis, normal embeddings are only used in the current chapter.

The notion of normal embedding seems to originate from the work of Bj\"orner~\cite{BjornerSubword}, who defined normal embeddings between words, and showed that in the subword order of words over a finite alphabet, the M\"obius function of any interval $[x,y]$ is equal in absolute value to the number of normal embeddings of $x$ into~$y$. Bj\"orner's approach was later extended to the computation of the M\"obius function in the composition poset~\cite{Sagan2006}, the poset of separable permutations~\cite{Burstein2011}, and the poset of permutations with a fixed number of descents~\cite{Smith2016}. In all these cases, the authors define a notion of ``normal'' embeddings tailored for their poset, and then express the M\"obius function of an interval $[x,y]$ as the sum of weights of the ``normal'' embeddings of $x$ into $y$, where each normal embedding has weight $1$ or~$-1$.
For general permutations, this simple approach fails, since the M\"obius function $\mobfn{\sigma}{\pi}$ is sometimes larger than the number of all embeddings of $\sigma$ into~$\pi$. However, Smith~\cite{Smith2016a} introduced a notion of normal embedding applicable to arbitrary permutations, and proved a formula expressing $\mobfn{\sigma}{\pi}$ as a summation over certain sets of normal embeddings. For consistency, we adopt the term ``normal embedding'' in this chapter, although in our proofs, we will need to introduce several notions of normality, which are different from each other and from the notions of normality introduced by previous authors. We will always use $\sNE(\tau,\pi)$ to denote the set of embeddings of $\tau$ into $\pi$ satisfying the definition of normality used in the given context, and we let $\NE(\tau,\pi)$ be the cardinality of $\sNE(\tau,\pi)$. The next proposition provides a general basis for all our subsequent applications of normal embeddings. \begin{proposition}\label{pro-normal} Let $\sigma$ and $\pi$ be permutations. Suppose that for each $\tau\in[\sigma,\pi]$ we fix a subset $\sNE(\tau,\pi)$ of $\sE(\tau,\pi)$, with the elements of $\sNE(\tau,\pi)$ being referred to as \emph{normal embeddings} of $\tau$ into~$\pi$. Assume that $\sNE(\pi,\pi)=\sE(\pi,\pi)$, that is, the unique embedding of $\pi$ into $\pi$ is normal. For each $\lambda\in[\sigma,\pi)$, define the two sets of embeddings \begin{align*} \sNE_\lambda(\mathrm{odd},\pi)&=\bigcup_{\substack{\tau\in[\lambda,\pi]\\ |\tau| \text{ odd}}}\sNE(\tau,\pi)\quad\text{and}\\ \sNE_\lambda(\mathrm{even},\pi)&=\bigcup_{\substack{\tau\in[\lambda,\pi]\\ |\tau| \text{ even}}}\sNE(\tau,\pi). \end{align*} If for every $\lambda\in[\sigma,\pi)$ such that $\mobfn{\sigma}{\lambda}\neq 0$, we have the identity \begin{equation} \left|\sNE_\lambda(\mathrm{odd},\pi)\right|= \left|\sNE_\lambda(\mathrm{even},\pi)\right|, \label{eq-cancel} \end{equation} then $\mobfn{\sigma}{\pi} = (-1)^{|\pi|-|\sigma|}\NE(\sigma,\pi)$. \end{proposition} \begin{proof} The trick is to define the function $F(\tau)=(-1)^{|\pi|-|\tau|}\NE(\tau,\pi)$ and apply Proposition~\ref{pro-form}. This yields \begin{align*} \mobfn{\sigma}{\pi}&= F(\sigma) - \sum_{\lambda\in [\sigma,\pi)} \mobfn{\sigma}{\lambda}\sum_{\tau\in[\lambda,\pi]} F(\tau)\\ &=F(\sigma) - \sum_{\lambda\in [\sigma,\pi)} \mobfn{\sigma}{\lambda}\sum_{\tau\in[\lambda,\pi]} (-1)^{|\pi|-|\tau|}\NE(\tau,\pi)\\ &=F(\sigma) -\sum_{\lambda\in [\sigma,\pi)} \mobfn{\sigma}{\lambda}(-1)^{|\pi|}\bigl( \left|\sNE_\lambda(\mathrm{even},\pi)\right|-\left|\sNE_\lambda(\mathrm{odd},\pi)\right|\bigr)\\ &=F(\sigma)\\ &=(-1)^{|\pi|-|\sigma|}\NE(\sigma,\pi), \end{align*} as claimed. \end{proof} We remark that the general formula of Proposition~\ref{pro-form} can be useful even in situations where the more restrictive assumptions of Proposition~\ref{pro-normal} fail. An example of such an application of Proposition~\ref{pro-form} appears in Jel{\'{i}}nek, Kantor, Kyn{\v{c}}l and Tancer~\cite{Jelinek2020}. \section{Permutations with opposing adjacencies}\label{sec-opposing} In this section, we show that if a permutation has opposing adjacencies, then the value of the principal M\"{o}bius function is zero. \begin{theorem} \label{theorem-PMF-opposing-adjacencies} If $\pi$ has opposing adjacencies, then $\mobp{\pi} = 0$. \end{theorem} For this theorem, we are able to give two proofs. One of them is based on the notion of diamond-tipped intervals, and the other uses the approach of normal embeddings. 
As both these approaches will later be adapted to more complicated settings, we find it instructive to include both proofs here. \begin{proof}[Proof via diamond-tipped posets] For contradiction, suppose that the theorem fails, and let $\pi$ be a shortest permutation with opposing adjacencies such that $\mobp{\pi}\neq0$. Since $\pi$ has opposing adjacencies, there is a permutation $\tau$ and indices $i,j\le|\tau|$ such that $\pi=\tau_{i,j}[12,21]$. Define $\phi=\tau_{i,j}[1,21]$ and $\phi'=\tau_{i,j}[12,1]$. We claim that the interval $[1,\pi]$ can be transformed into a diamond-tipped interval with core $(\phi,\phi',\tau)$ by deleting a set of M\"{o}bius zeros from the interior of $[1,\pi]$. Since by Fact~\ref{fac-del}, the deletion of M\"{o}bius zeros does not affect the value of $\mobfn{1}{\pi}$, and since diamond-tipped intervals have zero M\"{o}bius function by Fact~\ref{fac-nd}, this claim will imply that $\mobfn{1}{\pi}=0$, a contradiction. To prove the claim, note first that any permutation $\lambda\in[1,\pi)$ with opposing adjacencies is a M\"{o}bius zero, since $\pi$ is a minimal counterexample to the theorem. Choose any $\lambda\in[1,\pi)$. Observe that if $\lambda$ has no up-adjacency, then $\lambda\le\phi$, and symmetrically, if $\lambda$ has no down-adjacency, then $\lambda\le\phi'$. Thus, any $\lambda\in[1,\pi)$ not belonging to $[1,\phi]\cup[1,\phi']$ has opposing adjacencies and can be deleted from $[1,\pi]$. Next, suppose that a permutation $\lambda$ is in $[1,\phi]\cap[1,\phi']$ but not in $[1,\tau]$. Observe that any permutation in $[1,\phi]\setminus[1,\tau]$ has a down-adjacency, while any permutation in $[1,\phi']\setminus[1,\tau]$ has an up-adjacency. It follows that $\lambda$ has opposing adjacencies and can again be deleted from $[1,\pi]$. After these deletions, the remaining poset is diamond-tipped with core $(\phi,\phi',\tau)$ as claimed, hence $\mobfn{1}{\pi}=0$, a contradiction. \end{proof} \begin{proof}[Proof via normal embeddings] Suppose again that $\pi\in\cS_n$ is a shortest counterexample. Suppose that $\pi$ has an up-adjacency at positions $i$, $i+1$, and a down-adjacency at positions $j$, $j+1$. Note that the positions $i$, $i+1$, $j$ and $j+1$ are all distinct, and in particular $n\ge 4$. We will say that an embedding $f\in\sE(*,\pi)$ is \emph{normal} if $\Img(f)$ is a superset of $[n]\setminus\{i,j\}$. In other words, $\Img(f)$ contains all positions of $\pi$ with the possible exception of $i$ and~$j$. Thus, there are four normal embeddings. We will use Proposition~\ref{pro-normal} with the above notion of normal embeddings and with $\sigma=1$. Clearly, we have $\sE(\pi,\pi)=\sNE(\pi,\pi)$. The main task is to verify equation \eqref{eq-cancel}, that is, to show that for every $\lambda\in[1,\pi)$ such that $\mobp{\lambda}\neq0$ we have $|\sNE_\lambda(\mathrm{odd},\pi)|=|\sNE_\lambda(\mathrm{even},\pi)|$. To prove this identity, we let $\sNE_\lambda(*,\pi)$ denote the set $\sNE_\lambda(\mathrm{odd},\pi)\cup\sNE_\lambda(\mathrm{even},\pi)$, and we will provide a sign-reversing involution on~$\sNE_\lambda(*,\pi)$. Choose a $\lambda\in[1,\pi)$ with $\mobp{\lambda}\neq0$. It follows that $\lambda$ does not have opposing adjacencies, otherwise it would be a counterexample shorter than~$\pi$. Without loss of generality, assume that $\lambda$ has no up-adjacency. We will prove that the $i$-switch operation $\Delta_i$ is a sign-reversing involution on~$\sNE_\lambda(*,\pi)$. It is clear that $\Delta_i$ is sign-reversing. 
We need to demonstrate that for every $f\in\sNE_\lambda(*,\pi)$, the embedding $g=\Delta_i(f)$ is again in $\sNE_\lambda(*,\pi)$. It is clear that $g$ is normal. It remains to argue that $\src(g)$ contains $\lambda$, or in other words, that there is an embedding of $\lambda$ into $\pi$ contained in~$g$. Let $h$ be a (not necessarily normal) embedding of $\lambda$ into $\pi$ contained in~$f$. If $i$ is not in $\Img(h)$, then $h$ is also contained in $g$, and we are done. Suppose now that $i\in\Img(h)$. Then $i+1\not\in\Img(h)$, because $i$ and $i+1$ form an up-adjacency in $\pi$ while $\lambda$ has no up-adjacency. We modify the embedding $h$ so that the element mapped to $i$ will be mapped to $i+1$ instead, and the mapping of the remaining elements is unchanged; let $h'$ be the resulting embedding (formally, we have $\Delta_i(\Delta_{i+1}(h))=h'$). Since $i$ and $i+1$ form an adjacency in $\pi$, we have $\src(h')=\src(h)=\lambda$. Since $i+1$ is in the image of all normal embeddings, we see that $h'$ is contained in $g$, and so $g\in\sNE_\lambda(*,\pi)$. This shows that $\Delta_i$ is the required sign-reversing involution on $\sNE_\lambda(*,\pi)$, verifying the assumptions of Proposition~\ref{pro-normal}. Proposition~\ref{pro-normal} then gives us that $\mobfn{1}{\pi}=(-1)^{n-1}\NE(1,\pi)$. Since every normal embedding into $\pi$ contains both $i+1$ and $j+1$ in its image, there is clearly no normal embedding of 1 into $\pi$ and therefore we get $\mobfn{1}{\pi}=0$. \end{proof} \section{\texorpdfstring{A general construction of $\sigma$-annihilators}% {A general construction of sigma-annihilators}} \label{sec-annihilator} Let $\sigma$ be a fixed non-empty lower bound permutation (the case $\sigma=1$ being the most interesting). Recall that a permutation $\phi$ is a \emph{$\sigma$-zero} if $\mobfn{\sigma}{\phi}=0$, and $\phi$ is a \emph{$\sigma$-annihilator} if every permutation with an interval copy of $\phi$ is a $\sigma$-zero. Clearly, any $\sigma$-annihilator is also a $\sigma$-zero. Our goal in this section is to present a general construction of an infinite family of $\sigma$-annihilators. A permutation $\phi$ is \emph{$\sigma$-narrow}\extindex[permutation]{$\sigma$-narrow} if $\phi$ contains a permutation $\phi^-$ of size $|\phi|-1$ such that every permutation in the set $[1,\phi)\setminus [1,\phi^-]$ is a $\sigma$-annihilator. In this situation, we call $\phi^-$ a \emph{$\sigma$-core of $\phi$}\extindex{$\sigma$-core of $\phi$}. Note that if $\phi$ is $\sigma$-narrow with $\sigma$-core $\phi^-$, then the interval $[1,\phi]$ can be transformed into a narrow-tipped interval by a deletion of $\sigma$-annihilators. Our first goal is to show that, with a few exceptions, all $\sigma$-narrow permutations are $\sigma$-annihilators. \begin{proposition}\label{pro-narrow} If a permutation $\phi$ is $\sigma$-narrow with a $\sigma$-core $\phi^-$, and if $\sigma$ has no interval copy of $\phi$ or of $\phi^-$, then $\phi$ is a $\sigma$-annihilator. \end{proposition} \begin{proof} Let $\phi$ be $\sigma$-narrow with a $\sigma$-core~$\phi^-$. Let $\pi$ be a permutation with an interval copy of~$\phi$, that is, $\pi=\tau_i[\phi]$ for some $\tau$ and~$i$. We show that $\mobfn{\sigma}{\pi}=0$. We may assume that $\sigma\le\pi$, otherwise $\mobfn{\sigma}{\pi}=0$ trivially. Let $\pi^-$ be the permutation $\tau_i[\phi^-]$. Note that $\sigma\neq\pi$ and $\sigma\neq\pi^-$, since $\sigma$ has no interval copy of~$\phi$ or of~$\phi^-$. 
The key step of the proof is to show that any permutation in $[\sigma,\pi)\setminus [\sigma,\pi^-]$ is a $\sigma$-zero. After we have proved this, we may use Fact~\ref{fac-del} to remove all such $\sigma$-zeros from the interval $[\sigma,\pi]$ without affecting the value of $\mobfn{\sigma}{\pi}$; note that $\sigma$ itself is clearly not a $\sigma$-zero, so it will not be removed, implying that $\sigma<\pi^-$. After the removal of $[\sigma,\pi)\setminus [\sigma,\pi^-]$, the remainder of the interval $[\sigma,\pi]$ is a narrow-tipped poset with core $\pi^-$, yielding $\mobfn{\sigma}{\pi}=0$ by Fact~\ref{fac-nd}. Therefore, to prove that $\mobfn{\sigma}{\pi}=0$ for a particular $\pi=\tau_i[\phi]$, it is enough to show that all the permutations in $[\sigma,\pi)\setminus [\sigma,\pi^-]$ are $\sigma$-zeros. We prove this by induction on~$|\tau|$. If $|\tau|=1$, we have $\pi=\phi$ and $\pi^-=\phi^-$. Then all the permutations in $[1,\pi)\setminus[1,\pi^-]$ are $\sigma$-annihilators (and therefore $\sigma$-zeros) by definition of $\sigma$-narrowness, and in particular, restricting our attention to permutations containing $\sigma$, we see that all the permutations in $[\sigma,\pi)\setminus[\sigma,\pi^-]$ are $\sigma$-zeros, as claimed. Suppose that $|\tau|>1$. Consider a permutation $\gamma \in [\sigma,\pi)\setminus [\sigma,\pi^-]$. Since $\gamma$ is contained in $\pi=\tau_i[\phi]$, it can be expressed as $\gamma=\tau^*_j[{\phi^*}]$ for some $\epsilon\le{\phi^*}\le \phi$ and $1\le\tau^*\le\tau$, where $\tau^*$ has an embedding into $\tau$ which maps $j$ to $i$. Note that ${\phi^*}$ cannot be contained in~$\phi^-$, because in such case we would have $\gamma\le \pi^-$. Moreover, if ${\phi^*}=\phi$, then necessarily $\tau^*<\tau$, and by induction $\gamma$ is a $\sigma$-zero. Finally, if ${\phi^*}$ is in $[1,\phi)\setminus [1,\phi^-]$, then ${\phi^*}$ is a $\sigma$-annihilator by the $\sigma$-narrowness of $\phi$, and hence $\gamma$ is a $\sigma$-zero. \end{proof} With the help of Proposition~\ref{pro-narrow}, we can now provide an explicit general construction of $\sigma$-annihilators. \begin{proposition}\label{pro-sum} Let $\alpha$ and $\beta$ be non-empty permutations. Assume that $\sigma$ does not contain any interval copy of a permutation of the form $\alpha'\oplus\beta'$ with $1\le\alpha'\le \alpha$ and $1\le\beta'\le\beta$ (in particular, $\sigma$ has no up-adjacency). Then $\alpha\oplus1\oplus\beta$ is $\sigma$-narrow with $\sigma$-core $\alpha\oplus\beta$, and $\alpha\oplus1\oplus\beta$ is a $\sigma$-annihilator. \end{proposition} \begin{proof} We proceed by induction on $|\alpha|+|\beta|$. Suppose first that $\alpha=\beta=1$. Then trivially $\alpha\oplus1\oplus\beta=123$ is $\sigma$-narrow with $\sigma$-core $\alpha\oplus\beta=12$, since the set $[1,123)\setminus[1,12]$ is empty. Moreover, by assumption, $\sigma$ has no interval copy of $12$, and therefore also no interval copy of $123$, hence $123$ is a $\sigma$-annihilator by Proposition~\ref{pro-narrow}. Suppose now that $|\alpha|+|\beta|>2$. Define $\phi=\alpha\oplus1\oplus\beta$ and $\phi^-=\alpha\oplus\beta$. To prove that $\phi$ is $\sigma$-narrow with $\sigma$-core $\phi^-$, we will show that any permutation $\gamma\in[1,\phi)\setminus[1,\phi^-]$ is a $\sigma$-annihilator. 
Such a $\gamma$ has the form $\alpha'\oplus 1\oplus\beta'$ for some $1\le\alpha'\le\alpha$ and $1\le\beta'\le\beta$, with $|\alpha'|+|\beta'|<|\alpha|+|\beta|$; note that we here exclude the cases $\alpha'=\epsilon$ and $\beta'=\epsilon$, because in these cases $\gamma$ would be contained in~$\phi^-$. By induction, $\gamma$ is $\sigma$-narrow, with $\sigma$-core $\gamma^-=\alpha'\oplus\beta'$. Moreover, $\sigma$ has no interval isomorphic to $\gamma$ or $\gamma^-$: observe that if $\sigma$ had an interval isomorphic to $\gamma$, it would also have an interval isomorphic to $\alpha'\oplus1$, which is forbidden by our assumptions on~$\sigma$. Thus, we may apply Proposition~\ref{pro-narrow} to conclude that $\gamma$ is a $\sigma$-annihilator, and in particular $\phi$ is $\sigma$-narrow with $\sigma$-core $\phi^-$, as claimed. Proposition~\ref{pro-narrow} then gives us that $\phi$ is a $\sigma$-annihilator. \end{proof} Focusing on the special case $\sigma=1$, which satisfies the assumptions of Proposition~\ref{pro-sum} trivially, we obtain the following result. \begin{corollary}\label{cor-sum} For any non-empty permutations $\alpha$ and $\beta$, the permutation $\alpha\oplus1\oplus\beta$ is an annihilator. \end{corollary} \section{The density of zeros} \label{section-bounds-for-zn} Our goal is to find an asymptotic positive lower bound on the proportion of permutations of length $n$ whose principal M\"{o}bius function is zero. The key step is the following lemma. \begin{lemma}\label{lem-incdec} Let $s_n$ be the number of permutations of size $n$ with opposing adjacencies. Then \[ \frac{s_n}{n!}=\left(1-\frac{1}{e}\right)^2+\cO\left(\frac{1}{n}\right). \] \end{lemma} \begin{proof} Let $a_n$ be the number of permutations of size $n$ that have no up-adjacency, and let $b_n$ be the number of permutations of size $n$ that have neither an up-adjacency nor a down-adjacency. The numbers $a_n$ (sequence A000255 in the OEIS~\cite{sloane}) have already been studied by Euler~\cite{Euler}, and it is known~\cite{RumneyPrimrose} that they satisfy $a_n/n! =e^{-1}+\cO(n^{-1})$. The numbers $b_n$ (sequence A002464 in the OEIS~\cite{sloane}) satisfy the asymptotics $b_n/n!= e^{-2} + \cO(n^{-1})$, which follows from the results of Kaplansky~\cite{Kaplansky1945} (see also~Albert et al.~\cite{Albert2003}). We may now express the number $s_n$ of permutations with opposing adjacencies by inclusion-exclusion as follows: we subtract from $n!$ the number of permutations having no up-adjacency and the number of permutations having no down-adjacency, and then we add back the number of permutations having no adjacency at all. This yields $s_n=n!-2a_n +b_n$, from which the lemma follows by the above-mentioned asymptotics of $a_n$ and~$b_n$. \end{proof} Combining Theorem~\ref{theorem-PMF-opposing-adjacencies} with Lemma~\ref{lem-incdec} we obtain the following consequence, which is the main result of this section. \begin{corollary}\label{cor-incdec} For a given $n$ and for $\pi$ a uniformly random permutation of length $n$, the probability that $\mobp{\pi}=0$ is at least \[ \left(1-\frac{1}{e}\right)^2-\cO\left(\frac{1}{n}\right). \] \end{corollary} \section{More complicated examples}\label{sec-special} We will now construct several specific examples of annihilators and annihilator pairs, which are not covered by the general results obtained in the previous sections. We begin with a construction of four new annihilator pairs, which we will later use to construct new annihilators. 
\begin{theorem}\label{thm-pair} The two permutations $213$ and $2431$ form an annihilator pair. \end{theorem} \begin{proof} Our proof is based on the concept of normal embeddings and follows a similar structure as the normal embedding proof of Theorem~\ref{theorem-PMF-opposing-adjacencies}. Suppose for contradiction that there is a permutation $\pi$ that contains an interval isomorphic to $213$ as well as an interval isomorphic to~$2431$, and that $\mobp{\pi}\neq0$. Fix a smallest possible $\pi$, and let $n$ be its length. Note that an interval isomorphic to $213$ is necessarily disjoint from an interval isomorphic to~$2431$, and in particular, $n\ge 7$. Let $i$, $i+1$ and $i+2$ be three positions of $\pi$ containing an interval copy of $213$, and let $j$, $j+1$, $j+2$ and $j+3$ be four positions containing an interval copy of~$2431$. We will apply the approach of Proposition~\ref{pro-normal}, with $\sigma=1$. We will say that an embedding $f\in\sE(*,\pi)$ is \emph{normal} if $\Img(f)$ is a superset of $[n]\setminus\{i+2, j+2, j+3\}$. Informally, the image of a normal embedding contains all the positions of $\pi$, except possibly some of the three positions that correspond to the value $3$ of $213$ or the values $3$ and $1$ of~$2431$ in the chosen interval copies of $213$ and $2431$, as shown in Figure~\ref{figure-intervals-in-213-2431}. In particular, there are eight normal embeddings. \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=0.3] \sqat{3}{0}{0}; \normaldot{(1,2)}; \normaldot{(2,1)}; \opendot{(3,3)}; \sqat{4}{5}{5}; \normaldot{(6,7)}; \normaldot{(7,9)}; \opendot{(8,8)}; \opendot{(9,6)}; \path [draw=gray,wavy] (-1,4.5) -- (10,4.5); \path [draw=gray,wavy] (4.5,-1) -- (4.5,10); \end{tikzpicture} \end{center} \caption{The intervals $213$ and $2431$ in Theorem~\ref{thm-pair}. Normal embeddings may omit some of the hollow points.} \label{figure-intervals-in-213-2431} \end{figure} We now verify the assumptions of Proposition~\ref{pro-normal}. We obviously have $\sNE(\pi,\pi)=\sE(\pi,\pi)$. The main task is to verify, for a given $\lambda\in[1,\pi)$ with $\mobp{\lambda}\neq0$, the identity~\eqref{eq-cancel} of Proposition~\ref{pro-normal}, that is, the identity $\left|\sNE_\lambda(\mathrm{odd},\pi)\right|= \left|\sNE_\lambda(\mathrm{even},\pi)\right|$. Fix a $\lambda\in[1,\pi)$ such that $\mobp{\lambda}\neq0$, and let $\sNE_\lambda(*,\pi)$ be the set $\sNE_\lambda(\mathrm{odd},\pi)\cup \sNE_\lambda(\mathrm{even},\pi)$. We will describe a sign-reversing involution $\Phi_\lambda$ on $\sNE_\lambda(*,\pi)$. The involution $\Phi_\lambda$ will always be equal to a switch operation $\Delta_k$, where the choice of $k$ will depend on~$\lambda$. Suppose first that $\lambda$ does not contain any down-adjacency. We claim that $\Delta_{j+2}$ is an involution on the set $\sNE_\lambda(*,\pi)$. To see this, choose $f\in\sNE_\lambda(*,\pi)$ and define $g=\Delta_{j+2}(f)$. It is clear that $g$ is a normal embedding. To prove that $g$ belongs to $\sNE_\lambda(*,\pi)$, it remains to show that $\src(g)$ contains~$\lambda$, or equivalently, that there is an embedding of $\lambda$ into $\pi$ that is contained in~$g$. Let $h$ be an embedding of $\lambda$ into $\pi$ which is contained in~$f$. If $j+2\not\in\Img(h)$, then $h$ is also contained in $g$ and we are done. Suppose then that $j+2\in\Img(h)$. This means that $j+1$ is not in $\Img(h)$, because $\pi$ has a down-adjacency at positions $j+1$ and $j+2$, while $\lambda$ has no down-adjacency. 
We now modify $h$ in such a way that the element previously mapped to $j+2$ will be mapped to $j+1$, while the mapping of the remaining elements remains unchanged. Let $h'$ be the embedding obtained from $h$ by this modification; formally, we have $h'=\Delta_{j+1}(\Delta_{j+2}(h))$. Since the two elements $\pi_{j+1}$ and $\pi_{j+2}$ form an adjacency, we have $\src(h')=\src(h)=\lambda$. Moreover, $h'$ is contained in $g$ (recall that $g$ is normal, and therefore $\Img(g)$ contains $j+1$). Consequently, $g$ is in $\sNE_\lambda(*,\pi)$, as claimed.

We now deal with the case when $\lambda$ contains a down-adjacency. Since $\mobp{\lambda}\neq0$, it follows by Theorem~\ref{theorem-PMF-opposing-adjacencies} that $\lambda$ has no up-adjacency. We distinguish two subcases, depending on whether $\lambda$ contains an interval copy of~$2431$.

Suppose that $\lambda$ contains an interval copy of~$2431$. We will prove that in this case, $\Delta_{i+2}$ is a sign-reversing involution on~$\sNE_\lambda(*,\pi)$. We begin by observing that $\lambda$ has no interval copy of $213$, otherwise $\lambda$ would be a counterexample to Theorem~\ref{thm-pair}, contradicting the minimality of~$\pi$. Fix again an embedding $f\in\sNE_\lambda(*,\pi)$, and define $g=\Delta_{i+2}(f)$. As in the previous case, $g$ is clearly normal, and we only need to show that there is an embedding of $\lambda$ into $\pi$ contained in~$g$. Let $h$ be an embedding of $\lambda$ into $\pi$ contained in $f$. If $i+2\not\in\Img(h)$, then $h$ is contained in $g$ and we are done, so suppose $i+2\in\Img(h)$. If at least one of the two positions $i$ and $i+1$ belongs to $\Img(h)$, then $\lambda$ contains an up-adjacency or an interval copy of $213$, contradicting our assumptions. Therefore, we can modify $h$ so that the element mapped to $i+2$ is mapped to $i$ instead, obtaining an embedding of $\lambda$ contained in $g$ and showing that~$g\in\sNE_\lambda(*,\pi)$.

Finally, suppose that $\lambda$ has no interval copy of $2431$. In this case, we prove that $\Delta_{j+3}$ is the required involution on $\sNE_\lambda(*,\pi)$. As in the previous cases, we fix $f\in\sNE_\lambda(*,\pi)$, define $g=\Delta_{j+3}(f)$, and let $h$ be an embedding of $\lambda$ contained in~$f$; we again want to modify $h$ into an embedding of $\lambda$ contained in~$g$. Let $\alpha$ be the subpermutation of $\lambda$ formed by those positions that are mapped into the set $J=\{j,j+1,j+2,j+3\}$ by~$h$. Recall that the positions in $J$ induce an interval copy of $2431$ in~$\pi$. In particular, $\alpha\le 2431$, and $\lambda$ has an interval copy of~$\alpha$. We know that $\alpha\neq 2431$, since we assume that $\lambda$ has no interval copy of $2431$. Also, $\alpha\neq 321$, since $321$ is an annihilator by Corollary~\ref{cor-sum} (applied up to symmetry, $321$ being the reverse of $123=1\oplus1\oplus1$), while $\mobp{\lambda}\neq0$. Finally, $\alpha \neq 231$, since $\lambda$ has no up-adjacency. This implies that $\alpha\le 132$, and we can modify $h$ so that all the positions originally mapped into $J$ will get mapped into $J\setminus\{j+3\}$, obtaining an embedding of $\lambda$ into $\pi$ contained in $g$.

Having thus verified the assumptions of Proposition~\ref{pro-normal}, we can conclude that $\mobp{\pi}=(-1)^{|\pi|-1}\NE(1,\pi)=0$, a contradiction.
\end{proof}

The following three results are proved using similar methods to those used in the proof of Theorem~\ref{thm-pair}.

\begin{theorem}\label{thm-pair2}
The permutations $2143$ and $2431$ form an annihilator pair.
\end{theorem} \begin{theorem}\label{thm-pair3} The permutations $312$ and $23514$ form an annihilator pair. \end{theorem} \begin{theorem}\label{thm-pair4} The permutations $25134$ and $23514$ form an annihilator pair. \end{theorem} We omit the proofs here, as they were not included in the published paper~\cite{Brignall2020} on which this chapter is based. They can be found in \url{https://arxiv.org/abs/1810.05449v1}~\cite{OppAdjPreviousVersion}. \begin{theorem}\label{thm-annihil} % % Each of the three permutations $215463$, $236145$ and $214653$ is a M\"{o}bius annihilator. \end{theorem} \begin{proof} We first present the proof for the permutation $215463$. Let $\alpha=215463$, $\beta=\alpha_1[\epsilon]=14352$, $\beta'=\alpha_6[\epsilon]=21435$ and $\gamma=\alpha_{1,6}[\epsilon,\epsilon]=1324$. From Figure~\ref{fig-annihil} (left) we see that, after the removal of the annihilators $\alpha_3[\epsilon], \alpha_4[\epsilon]$ and $\alpha_5[\epsilon]$, the interval $[1,\alpha]$ becomes diamond-tipped with core $(\beta,\beta',\gamma)$. Hence by Facts~\ref{fac-del} and~\ref{fac-nd} we have $\mu[1,\alpha]=0$. Let $\pi$ be a permutation of the form $\tau_i[\alpha]$ for some $\tau$ and $i\le|\tau|$. We will show, by induction on $|\tau|$, that $\pi$ is a zero. The case $|\tau|=1$ has been proved in the previous paragraph. Assume that $|\tau|>1$. We will demonstrate that we can remove some zeros from the interval $[1,\pi]$ to end up with a diamond-tipped interval with core $(\tau_i[\beta],\tau_i[\beta'],\tau_i[\gamma])$. Choose a $\lambda\in[1,\pi)$. We can then write $\lambda$ as $\lambda=\tau^*_j[\alpha^*]$ for some $\tau^*\le \tau$ and some (possibly empty) $\alpha^*\le\alpha$, where $\tau^*$ has an embedding into $\tau$ mapping $j$ to~$i$. If $\alpha^*$ is an annihilator, then $\lambda$ is a zero and can be removed. If $\alpha^*=\alpha$, then $|\tau^*|<|\tau|$, and by induction, $\lambda$ is a zero and can be removed. In all the other cases, we have $\alpha^*\le \beta$ or $\alpha^*\le \beta'$, and in particular, $\lambda$ belongs to $[1,\tau_i[\beta]]\cup[1,\tau_i[\beta']]$. Suppose now that $\lambda$ is in $[1,\tau_i[\beta]]\cap[1,\tau_i[\beta']]$ but not in $[1,\tau_i[\gamma]]$. Since $\lambda\le\tau_i[\beta]$, we can write it as $\lambda=\tau_j^L[\beta^L]$, for some $\tau^L\le\tau$ and $\beta^L\le\beta$, where $\tau^L$ has an embedding into $\tau$ mapping $j$ to~$i$. Since $\lambda\not\le\tau_i[\gamma]$, we know that $\beta^L\not\le\gamma$. This means that $\beta^L\in[1,\beta]\setminus[1,\gamma]=\{14352,\allowbreak3241,1342,231\}$. Similarly, $\lambda\in[1,\tau_i[\beta']]\setminus[1,\tau_i[\gamma]]$ means that $\lambda$ can be written as $\lambda=\tau_k^R[\beta^R]$, with $\beta^R\in\{21435,2143\}$. Since $\lambda$ has an interval copy of $\beta^L$ as well as an interval copy of $\beta^R$, Theorem~\ref{theorem-PMF-opposing-adjacencies} shows that $\lambda$ is a zero if $\beta^L \in \{1342,231\}$, and Theorem~\ref{thm-pair2} shows that $\lambda$ is a zero if $\beta^L \in \{14352,3241\}$ (using that $3241$ is a diagonal reflection of $2431$). Therefore $\lambda$ can be removed. After the removal described above, $[1,\pi]$ is transformed into a diamond-tipped interval, showing that $\pi$ is a zero. The arguments for the other two permutations are completely analogous. For $236145$ we have $\alpha=236145$, $\beta=25134$, $\beta'=23514$, $\gamma=2413$, $\beta^L\in \{25134,1423\}$ and $\beta^R\in\{23514,2314\}$, and use Theorems~\ref{thm-pair},~\ref{thm-pair3} and~\ref{thm-pair4}. 
For $214653$ we have $\alpha=214653$, $\beta=13542$, $\beta'=2143$, $\gamma=132$, $\beta^L\in \{13542, 2431,\allowbreak 1342, 231\}$ and $\beta^R\in\{2143,213\}$, and use Theorems~\ref{theorem-PMF-opposing-adjacencies}, \ref{thm-pair} and~\ref{thm-pair2}. \end{proof} \begin{figure}[!ht] % % % \centering \begin{subfigure}[t]{0.32\textwidth} \begin{tikzpicture}[scale=0.15] \tikzmath{\y=0;}; \permnode{ 0}{\y}{1}; % \tikzmath{\y=8;}; \permnode{-4}{\y}{12}; \permnode{ 4}{\y}{21}; % \tikzmath{\y=\y+9;}; \permnode{-6}{\y}{231}; \permnode{ 0}{\y}{213}; \permnode{ 6}{\y}{132}; % \tikzmath{\y=\y+10;}; \permnode{-10.5}{\y}{3241}; \permnode{ -3.5}{\y}{1342}; \permnode{ 3.5}{\y}{1324}; \permnode{ 10.5}{\y}{2143}; % \tikzmath{\y=\y+11;}; \permnode{ -6}{\y}{14352}; \permnode{ 6}{\y}{21435}; % \tikzmath{\y=\y+12;}; \permnode{ 0}{\y}{215463}; % \link{12}{1}; \link{21}{1}; % \link{231}{12}; \link{231}{21}; \link{213}{12}; \link{213}{21}; \link{132}{12}; \link{132}{21}; % \link{3241}{231}; \link{3241}{213}; \link{1342}{231}; \link{1342}{132}; \link{1324}{213}; \link{1324}{132}; \link{2143}{213}; \link{2143}{132}; % \link{14352}{3241}; \link{14352}{1342}; \link{14352}{1324}; \link{21435}{1324}; \link{21435}{2143}; % \link{215463}{14352} \link{215463}{21435} \end{tikzpicture} \end{subfigure} \quad \begin{subfigure}[t]{0.26\textwidth} \begin{tikzpicture}[scale=0.15] \tikzmath{\y=0;}; \permnode{ 0}{\y}{1}; % \tikzmath{\y=8;}; \permnode{-4}{\y}{12}; \permnode{ 4}{\y}{21}; % \tikzmath{\y=\y+9;}; \permnode{-9}{\y}{132}; \permnode{-3}{\y}{312}; \permnode{ 3}{\y}{213}; \permnode{ 9}{\y}{231}; % \tikzmath{\y=\y+10;}; \permnode{ -8}{\y}{1423}; \permnode{ 0}{\y}{2413}; \permnode{ 8}{\y}{2314}; % \tikzmath{\y=\y+11;}; \permnode{ -6}{\y}{25134}; \permnode{ 6}{\y}{23514}; % \tikzmath{\y=\y+12;}; \permnode{ 0}{\y}{236145}; % \link{12}{1}; \link{21}{1}; % \link{132}{12}; \link{132}{21}; \link{231}{12}; \link{231}{21}; \link{213}{12}; \link{213}{21}; \link{312}{12}; \link{312}{21}; % \link{1423}{132}; \link{1423}{312}; \link{2413}{132}; \link{2413}{231}; \link{2413}{213}; \link{2413}{312}; \link{2314}{213}; \link{2314}{231}; % \link{25134}{1423}; \link{25134}{2413}; \link{23514}{2413}; \link{23514}{2314}; % \link{236145}{25134} \link{236145}{23514} \end{tikzpicture} \end{subfigure} \quad \begin{subfigure}[t]{0.26\textwidth} \begin{tikzpicture}[scale=0.15] \tikzmath{\y=0;}; \permnode{ 0}{\y}{1}; % \tikzmath{\y=8;}; \permnode{-4}{\y}{12}; \permnode{ 4}{\y}{21}; % \tikzmath{\y=\y+9;}; \permnode{-6}{\y}{231}; \permnode{ 0}{\y}{132}; \permnode{ 6}{\y}{213}; % \tikzmath{\y=\y+10;}; \permnode{-8}{\y}{2431}; \permnode{ 0}{\y}{1342}; \permnode{ 8}{\y}{2143}; % \tikzmath{\y=\y+11;}; \permnode{-6}{\y}{13542}; % \tikzmath{\y=\y+12;}; \permnode{ 0}{\y}{214653}; % \link{12}{1}; \link{21}{1}; % \link{231}{12}; \link{231}{21}; \link{213}{12}; \link{213}{21}; \link{132}{12}; \link{132}{21}; % \link{2431}{231}; \link{2431}{132}; \link{1342}{231}; \link{1342}{132}; \link{2143}{132}; \link{2143}{213}; % \link{13542}{2431}; \link{13542}{1342}; % \link{214653}{13542} \link{214653}{2143} \end{tikzpicture} \end{subfigure} \caption{The three annihilators from Theorem~\ref{thm-annihil}, and the posets of their subpermutations. The figures omit the permutations with opposing adjacencies, as well as the permutations with an interval copy of a permutation of the form $\alpha\oplus1\oplus\beta$.} \label{fig-annihil} \end{figure} The annihilator $215463$ of Theorem~\ref{thm-annihil} can be written as a sum of two intervals, namely $215463=21\oplus 3241$. 
One might wonder whether the two summands are in fact an annihilator pair. This, however, is not the case, as evidenced by the permutation $32417685=3241\oplus3241$, which is not a M\"{o}bius zero. An analogous example applies to $214653=21\oplus 2431$. In the proof of Theorem~\ref{thm-annihil}, it was crucial that for each $\alpha\in\{215463,\allowbreak236145,\allowbreak214653\}$, the interval $[1,\alpha]$ becomes diamond-tipped after the removal of some annihilators. However, this property alone is not sufficient to make a permutation $\alpha$ an annihilator. Consider, for instance, the permutation $\alpha=214635$. We may routinely check that by removing some annihilators, the interval $[1,\alpha]$ can be made diamond-tipped with core $(\beta=13524,\beta'=21435,\gamma=1324)$. This implies that $\alpha$ is a M\"{o}bius zero by Facts~\ref{fac-del} and~\ref{fac-nd}; however, it does not imply that $\alpha$ is an annihilator. In fact, $\alpha$ is not an annihilator, as demonstrated by the permutation \begin{align*} \pi&=582741936_{2,4,5}[\beta,\alpha,\beta']\\ &=9,17,19,21,18,20,2,12,11,14,16,13,15,5,4,7,6,8,1,22,3,10, \end{align*} whose principal M\"{o}bius function is 1, not 0. This example also shows that not all M\"{o}bius zeros are annihilators. In fact, among permutations of size at most 6, there are up to symmetry four M\"{o}bius zeros that are not annihilators. Apart from the permutation $214635$ pointed out above, there are three more examples: $235614$, $254613$ and $465213$. To see that these three permutations are not annihilators, it suffices to check that for any $\alpha\in\{235614,254613,465213\}$, the permutation $24153_{2}[\alpha]$ has non-zero principal M\"{o}bius function. We verified, with the help of a computer, that all the M\"{o}bius zeros of size at most 6 that are not symmetries of the four examples above can be shown to be annihilators by our results. This data is available at \url{https://iuuk.mff.cuni.cz/~jelinek/mf/zeros.txt}. \section{Concluding remarks} Given Theorem~\ref{theorem-PMF-opposing-adjacencies}, it is natural to wonder if we can find a similar result that applies to a permutation with multiple adjacencies, but no opposing adjacencies. One difficulty here is that there are permutations that have multiple adjacencies, and do not have opposing adjacencies, where the principal M\"{o}bius function value is non-zero. As an example, any permutation $\pi = 2,1,4,3,\dots, 2k,2k-1 = \nsums{k}{21}$ has $\mobp{\pi} = -1$ by the results of Burstein et al.~\cite[Corollary 3]{Burstein2011}. Let $d_n$ be the ``density of zeros'' of the M\"{o}bius function, that is, the probability that $\mobp{\pi}=0$ for a uniformly random permutation $\pi$ of size~$n$. The asymptotic behaviour of~$d_n$ is still elusive. \begin{problem} Does the limit $\lim_{n\to\infty} d_n$ exist? And if it does, what is its value? \end{problem} Corollary~\ref{cor-incdec} implies that $\liminf_{n\to\infty} d_n \ge (1-1/e)^2\ge 0.3995$. We have no upper bound on $d_n$ apart from the trivial bound $d_n\le 1$, but computational data suggest that simple permutations very often (though not always) have non-zero principal M\"{o}bius function. Since a random permutation is simple with probability approaching $1/e^2$~\cite{Albert2003}, this would suggest that $\limsup_{n\to\infty} d_n$ is at most~$1-1/e^2\approx 0.8647$. 
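Checks of this kind are easy to reproduce for short permutations. The following brute-force sketch (ours, in Python; it is not part of the text above, and the helper names are our own) computes $\mobp{\pi}$ directly from the recursion $\mobp{1}=1$ and $\mobp{\pi}=-\sum_{\sigma<\pi}\mobp{\sigma}$, where $\sigma$ ranges over the proper patterns of~$\pi$:
\begin{verbatim}
from functools import lru_cache
from itertools import combinations

def standardize(seq):
    # Relabel the entries of seq by relative order, e.g. (5,2,9) -> (2,1,3).
    ranks = {v: r + 1 for r, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

def patterns(pi):
    # All distinct classical patterns contained in pi (including pi itself).
    return {standardize(s)
            for k in range(1, len(pi) + 1)
            for s in combinations(pi, k)}

@lru_cache(maxsize=None)
def principal_mobius(pi):
    # mu[1, pi]: equals 1 for pi = 1, and the Mobius values over the full
    # interval [1, pi] sum to zero, which gives the recursion below.
    if len(pi) == 1:
        return 1
    return -sum(principal_mobius(s) for s in patterns(pi) if s != pi)

# 213 (+) 2431 contains disjoint interval copies of 213 and 2431, so
# Theorem thm-pair forces a Mobius zero:
assert principal_mobius((2, 1, 3, 5, 7, 6, 4)) == 0
# 3241 (+) 3241 = 32417685 is claimed above not to be a Mobius zero:
print(principal_mobius((3, 2, 4, 1, 7, 6, 8, 5)))
\end{verbatim}
This naive recursion is exponential, but with memoization it comfortably handles the short permutations discussed above.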
\begin{table}[!ht] \[ \begin{array}{lcr} \begin{array}{lr} n & d_n \\ \midrule 1 & 0.0000 \\ 2 & 0.0000 \\ 3 & 0.3333 \\ 4 & 0.4167 \\ 5 & 0.4833 \\ 6 & 0.5361 \\ 7 & 0.5742 \\ \end{array} & \phantom{xxx} & \begin{array}{lr} n & d_n \\ \midrule 8 & 0.5942 \\ 9 & 0.6019 \\ 10 & 0.6040 \\ 11 & 0.6034 \\ 12 & 0.6021 \\ 13 & 0.6006 \end{array} \end{array} \] \caption{The density of M\"{o}bius zeros among permutations of length $n$, with $n = 1, \ldots, 13$.} \label{table-zn-one-to-thirteen} \end{table} Table~\ref{table-zn-one-to-thirteen} lists the values of $d_n$ for $n=1, \ldots, 13$. The values are based on data supplied by Smith~\cite{Smith2018} for $1 \leq n \leq 9$, and calculations performed by the author of this thesis. Data files with the values of the principal M\"{o}bius function for all permutations of length twelve or less are available from \url{https://doi.org/10.21954/ou.rd.7171997.v2}. Based on this somewhat limited numeric evidence, we make the following conjecture: \begin{conjecture} \label{conjecture-PMF-zero-61} The values $d_n$ are bounded from above by 0.6040. \end{conjecture} It is natural to look for further ways to identify M\"{o}bius zeros and M\"{o}bius annihilators. Characterizing all the M\"{o}bius zeros would be an ambitious goal, since $\mobp{\pi}$ might be zero as a result of ``accidental'' cancellations with no deeper structural significance for~$\pi$. An \emph{annihilator multiset}\extindex{annihilator multiset} is a multiset of permutations $A = \{ \alpha_1, \ldots, \alpha_n \}$ such that any permutation $\pi$ that contains disjoint interval copies of the permutations $\alpha_1, \ldots, \alpha_n$ has $\mobp{\pi} = 0$. If $A = \{ \alpha_1, \ldots, \alpha_n \}$ and $B = \{ \beta_1, \ldots, \beta_m \}$ are annihilator multisets, then we say that $A$ \emph{contains} $B$ if $A \neq B$ and we can find the elements of $B$ as disjoint interval copies in the elements of $A$. An annihilator multiset $A$ is \emph{minimal}\extindex{minimal annihilator multiset} if there is no annihilator multiset contained in $A$. Using Corollary 3 of~\cite{Burstein2011}, which implies $\mobp{\pi} = \mobp{\pi \oplus \pi}$ for $\pi \neq 1$, it is simple to show that the permutations in a minimal annihilator multiset are, in fact, all distinct, and so we can refer to \emph{minimal annihilator sets}\extindex{minimal annihilator sets} of permutations. \begin{problem} Which permutations are M\"{o}bius annihilators? Are there infinitely many minimal annihilator sets that contain just one element, and are not of the form $\alpha\oplus1\oplus\beta$? \end{problem} It seems likely to us that the proofs of Theorems~\ref{thm-pair}--\ref{thm-pair4} might be extended to give several more annihilator pairs, such as $(312,235614)$. However, we do not see any general pattern in these examples yet. \begin{problem} Are there infinitely many minimal annihilator sets with two elements? \end{problem} \begin{problem} Are there any minimal annihilator sets with more than two elements? \end{problem} \section{Chapter summary} Prior to the publication of the paper on which this chapter is based, the proportion of permutations where we had a (computationally) simple way to determine the value of the principal M\"{o}bius function was, asymptotically, zero. This chapter presents two main results. The first, Theorem~\ref{theorem-PMF-opposing-adjacencies}, tells us that if a permutation has opposing adjacencies, then the value of the principal M\"{o}bius function is zero. 
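Since an adjacency is simply a pair of neighbouring positions carrying consecutive values, opposing adjacencies can be detected with a single pass over the permutation. A minimal sketch (ours, not part of the original text):
\begin{verbatim}
def has_opposing_adjacencies(pi):
    # True iff pi has both an up-adjacency (pi[i+1] == pi[i] + 1)
    # and a down-adjacency (pi[i+1] == pi[i] - 1): one O(n) scan.
    up = down = False
    for a, b in zip(pi, pi[1:]):
        if b == a + 1:
            up = True
        elif b == a - 1:
            down = True
    return up and down

assert has_opposing_adjacencies((4, 5, 1, 3, 2))         # "45" up, "32" down
assert not has_opposing_adjacencies((2, 1, 4, 3, 6, 5))  # only down-adjacencies
\end{verbatim}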
Such a scan runs in time proportional to the length of the permutation, so the test is simple to implement. This is potentially useful to anyone wanting to compute values of the principal M\"{o}bius function. The second main result comes from Corollary~\ref{cor-incdec}. In essence, it shows that the proportion of permutations whose principal M\"{o}bius function is zero is, asymptotically, at least 0.3995. From a computational perspective, this implies that there are significant benefits from using Theorem~\ref{theorem-PMF-opposing-adjacencies} when determining the value of the principal M\"{o}bius function, as we have a linear time algorithm which, asymptotically, gives a positive result for nearly 40\% of permutations. \subsection{\texorpdfstring{Improving the lower bound for the density of zeros, $d_n$} {Improving the lower bound for the density of zeros}} Conjecture~\ref{conjecture-PMF-zero-61} suggests, based on some rather limited numerical evidence, that the density of zeros, $d_n$, of the principal M\"{o}bius function is bounded above by $0.6040$. We know, from Corollary~\ref{cor-incdec}, that asymptotically $d_n$ is bounded below by 0.3995. One area for further research would be to consider whether we can find better bounds on the behaviour of $d_n$. One difficulty is that we would need a result covering a number of permutations proportional to $n!$, where $n$ is the length of the permutation. Any count that grows more slowly than factorially (for example, exponentially or polynomially) would mean that the proportion of permutations covered is, asymptotically, zero. In the concluding remarks above, we noted that it is natural to wonder if we can find a similar result that applies where a permutation has multiple adjacencies, but no opposing adjacencies. Such a result would have to account for the permutations that have multiple non-opposing adjacencies where the principal M\"{o}bius function value is non-zero. Table~\ref{table-count-of-non-opp-adj-permutations} shows, for lengths $4, \ldots, 12$, the number of permutations with multiple non-opposing adjacencies broken down by whether the value of the principal M\"{o}bius function is zero or not. \begin{table} \[ \begin{array}{lrr} \toprule \text{Length} & \mobp{\pi} = 0 & \mobp{\pi} \neq 0 \\ \midrule 4 & 6 & 2 \\ 5 & 30 & 8 \\ 6 & 170 & 38 \\ 7 & 1154 & 212 \\ 8 & 8954 & 1502 \\ 9 & 78006 & 13088 \\ 10 & 757966 & 130066 \\ 11 & 8132206 & 1436296 \\ 12 & 95463532 & 17403612 \\ \bottomrule \end{array} \] \caption{Number of permutations with multiple non-opposing adjacencies, classified by the value of the principal M\"{o}bius function.} \label{table-count-of-non-opp-adj-permutations} \end{table} This suggests that it might be possible to find a result similar to Theorem~\ref{theorem-PMF-opposing-adjacencies} for some or all of these cases, although, as noted, any such result will clearly need additional criteria to exclude permutations that have a non-zero principal M\"{o}bius function value. We remark that the non-opposing adjacency case may be important because the proportion of permutations of length $n$ that have non-opposing adjacencies is, asymptotically, non-zero, as we show now. \begin{theorem} The proportion of permutations that have non-opposing adjacencies is, asymptotically, bounded below by 0.1944. \end{theorem} \begin{proof} We find a lower bound by counting permutations that have non-opposing adjacencies.
We will need the following result. \begin{theorem}[{Albert, Atkinson and Klazar~\cite[Theorem 5]{Albert2003}}] \label{AAK-number-of-simple-permutations} The number of simple permutations of length $n$, $S(n)$, is given by \[ S(n) = \dfrac{n!}{\mathrm{e}^2} \left( 1 - \dfrac{1}{n} + \dfrac{2}{n(n-1)} +O(n^{-3}) \right). \] \end{theorem} Let $Z^\prime(n)$ be the proportion of permutations of length $n$ that have non-opposing adjacencies. Let $n \geq 6$ be an integer, and let $k$ be an integer in the range $2, \ldots, \lfloor n/2 \rfloor$. Let $\sigma$ be a simple permutation with length $n-k$. We will count the number of ways we can inflate $\sigma$ with $k$ adjacencies to obtain a permutation with length $n$ that has non-opposing adjacencies. We can choose the positions to inflate in $\binom{n-k}{k}$ ways. Since the resulting adjacencies must not be opposing, all the chosen positions must be inflated by the same pattern, either all by $12$ or all by $21$, so there are just $2$ distinct inflations by adjacencies. It follows that the number of ways to inflate $\sigma$ that result in a permutation with non-opposing adjacencies is given by \[ 2 \binom{n-k}{k}. \] Since we are inflating simple permutations, it follows from Albert and Atkinson~\cite[Proposition 2]{Albert2005} that the inflations are unique. For an inflation to contain non-opposing adjacencies, we need to inflate at least two points. Further, to obtain a permutation of length $n$ by inflating with adjacencies we can, at most, inflate $\lfloor n/2 \rfloor$ positions. Now using Theorem~\ref{AAK-number-of-simple-permutations} we can say that \[ Z^\prime(n) \geq \dfrac{1}{n!} \sum_{k=2}^{\lfloor n/2 \rfloor} S(n-k) \, 2 \binom{n-k}{k}. \] Since we are only interested in the asymptotic behaviour of $Z^\prime(n)$, we can assume that $n > 20$, so that $\lfloor n/2 \rfloor \ge 10$, and write \begin{align*} \lim_{n \to \infty} Z^\prime(n) & \geq \lim_{n \to \infty} \dfrac{1}{n!} \sum_{k=2}^{\lfloor n/2 \rfloor} S(n-k) \, 2 \binom{n-k}{k} \\ & \geq \lim_{n \to \infty} \dfrac{1}{n!} \sum_{k=2}^{10} S(n-k) \, 2 \binom{n-k}{k} \\ & \geq 0.1944, \end{align*} where the last inequality holds because, by Theorem~\ref{AAK-number-of-simple-permutations}, each summand $\frac{2}{n!} S(n-k) \binom{n-k}{k}$ tends to $\frac{2}{\mathrm{e}^2 k!}$ as $n \to \infty$, and $\sum_{k=2}^{10} \frac{2}{\mathrm{e}^2 k!} \approx 0.1944$. \end{proof} Thus, if we could show that, asymptotically, some fixed proportion of the permutations of length $n$ with non-opposing adjacencies are M\"{o}bius zeros, then we could improve the bound given by Corollary~\ref{cor-incdec}. \subsection{Extending the ``opposing adjacencies'' theorem} It is natural to ask if we can extend Theorem~\ref{theorem-PMF-opposing-adjacencies} to handle cases where the lower bound of the interval is not $1$. This is not possible in general: if we take any permutation $\sigma \neq 1$ and inflate any two distinct points in positions $\ell$ and $r$ by $12$ and $21$ respectively, then $\pi = \inflatesome{\sigma}{\ell,r}{12,21}$ has opposing adjacencies, but $\mobfn{\sigma}{\pi} = 1$, as can be deduced from Figure~\ref{figure-extending-PFM-oppadj-fails}.
\begin{figure} \begin{center} \begin{tikzpicture}[xscale=1,yscale=1] \node (n12-21) at ( 0, 3) {$\inflatesome{\sigma}{\ell,r}{12,21}$}; \node (n12-1) at (-2, 2) {$\inflatesome{\sigma}{\ell,r}{12,1}$}; \node (n1-21) at ( 2, 2) {$\inflatesome{\sigma}{\ell,r}{1,21}$}; \node (n1-1) at ( 0, 1) {$\inflatesome{\sigma}{\ell,r}{1,1}$}; \draw (n12-21) -- (n1-21); \draw (n12-21) -- (n12-1); \draw (n12-1) -- (n1-1); \draw (n1-21) -- (n1-1); \end{tikzpicture} \qquad \begin{tikzpicture}[xscale=1,yscale=1] \node (n12-21) at ( 0, 3) {$1$}; \node (n12-1) at (-2, 2) {$-1$}; \node (n1-21) at ( 2, 2) {$-1$}; \node (n1-1) at ( 0, 1) {$1$}; \draw (n12-21) -- (n1-21); \draw (n12-21) -- (n12-1); \draw (n12-1) -- (n1-1); \draw (n1-21) -- (n1-1); \end{tikzpicture} \end{center} \caption{The Hasse diagram of the interval $[\sigma, \inflatesome{\sigma}{\ell,r}{12,21}]$, and the corresponding values of the M\"{o}bius function.} \label{figure-extending-PFM-oppadj-fails} \end{figure} Although we do not have a general extension of Theorem~\ref{theorem-PMF-opposing-adjacencies}, we can show that: \begin{theorem} If $\sigma$ is adjacency-free, and $\pi$ contains an interval order-isomorphic to a symmetry of $1243$, then $\mobfn{\sigma}{\pi} = 0$. \end{theorem} \begin{proof} First note that if $\sigma \not\leq \pi$, then $\mobfn{\sigma}{\pi} = 0$ from the definition of the M\"{o}bius function. Further, since $\sigma$ is adjacency-free, we cannot have $\sigma = \pi$. We can now assume that $\sigma < \pi$. Without loss of generality we can also assume, by symmetry, that the interval in $\pi$ is order-isomorphic to $1243$. We start by claiming that, for any permutation $\sigma$ which is adjacency-free, and any $c$ with $1 \leq c \leq \order{\sigma}$, we have $\mobfn{\sigma}{\inflatesome{\sigma}{c}{1243}} = 0$. The Hasse diagram of the interval $[\sigma, \inflatesome{\sigma}{c}{1243}]$ is shown in Figure~\ref{figure-hasse-interval-inflate-1243}. \begin{figure} \begin{center} \begin{tikzpicture}[xscale=1,yscale=1.1] \node (n1243) at ( 0, 3) {$\inflatesome{\sigma}{c}{1243}$}; \node (n123) at (-2, 2) {$\inflatesome{\sigma}{c}{123}$}; \node (n132) at ( 2, 2) {$\inflatesome{\sigma}{c}{132}$}; \node (n12) at (-2, 1) {$\inflatesome{\sigma}{c}{12}$}; \node (n21) at ( 2, 1) {$\inflatesome{\sigma}{c}{21}$}; \node (n1) at ( 0, 0) {$\inflatesome{\sigma}{c}{1}$}; \draw (n1243) -- (n123); \draw (n1243) -- (n132); \draw (n123) -- (n12); \draw (n132) -- (n12); \draw (n132) -- (n21); \draw (n12) -- (n1); \draw (n21) -- (n1); \end{tikzpicture} \end{center} \caption{The Hasse diagram of the interval $[\sigma, \inflatesome{\sigma}{c}{1243}]$.} \label{figure-hasse-interval-inflate-1243} \end{figure} From the definition of the M\"{o}bius function, we have $\mobfn{\sigma}{\inflatesome{\sigma}{c}{1}} = 1$, $\mobfn{\sigma}{\inflatesome{\sigma}{c}{12}} = -1$, $\mobfn{\sigma}{\inflatesome{\sigma}{c}{21}} = -1$, $\mobfn{\sigma}{\inflatesome{\sigma}{c}{123}} = 0$, and $\mobfn{\sigma}{\inflatesome{\sigma}{c}{132}} = 1$, and so $\mobfn{\sigma}{\inflatesome{\sigma}{c}{1243}} = 0$, and thus our claim is true. Our argument now follows a similar pattern to that used in the proof of Theorem~\ref{theorem-PMF-opposing-adjacencies}, and we restrict ourselves to highlighting the differences. Assume that $\pi$ is a proper inflation of $\sigma$, with length greater than $\order{\sigma} + 4$, and $\pi$ contains an interval order-isomorphic to $1243$.
Let $\gamma$ be the permutation formed by replacing an occurrence of $1243$ in $\pi$ by $12$, so if $\ell$ is the position of the first point of the selected $1243$, then $\inflatesome{\gamma}{\ell,\ell+1}{12,21} = \pi$. Let $\lambda = \inflatesome{\gamma}{\ell,\ell+1}{12,1}$, and let $\rho = \inflatesome{\gamma}{\ell,\ell+1}{1,21}$. Define the sets $L = [\sigma, \lambda]$, $R = [\sigma, \rho]$, $G_\gamma = [\sigma, \gamma]$, $G_x = (L \cap R) \setminus G_\gamma$, and $T = [\sigma, \pi) \setminus (L \cup R)$. Similarly to the proof of Theorem~\ref{theorem-PMF-opposing-adjacencies}, we have \[ \mobfn{\sigma}{\pi} = - \sum_{\tau \in L} \mobfn{\sigma}{\tau} - \sum_{\tau \in R} \mobfn{\sigma}{\tau} - \sum_{\tau \in T} \mobfn{\sigma}{\tau} + \sum_{\tau \in G_\gamma} \mobfn{\sigma}{\tau} + \sum_{\tau \in G_x} \mobfn{\sigma}{\tau}, \] and the sums over the sets $L$, $R$ and $G_\gamma$ are obviously zero. Using arguments similar to those in the proof of Theorem~\ref{theorem-PMF-opposing-adjacencies}, we can see that every permutation $\tau$ in $T$ or $G_x$ contains an interval order-isomorphic to $1243$, and so by the inductive hypothesis, has $\mobfn{\sigma}{\tau} = 0$, and thus we have $\mobfn{\sigma}{\pi} = 0$. \end{proof} Although we cannot find a general extension to Theorem~\ref{theorem-PMF-opposing-adjacencies}, we can find a necessary condition for a proper inflation of certain permutations to have a M\"{o}bius function value of zero. This is the content of the following lemma. \begin{lemma} If $\sigma$ is adjacency-free, and $\pi = \inflateall{\sigma}{\alpha_1, \ldots, \alpha_n}$ is a proper inflation of $\sigma$, then $\mobfn{\sigma}{\pi} = 0$ implies that at least one $\alpha_i \not \in \{1, 12, 21\}$. \end{lemma} \begin{proof} Assume that every $\alpha_i \in \{1, 12, 21\}$. Let $k$ be the number of indices $i$ with $\alpha_i \neq 1$, and let $j_1, \ldots, j_k$ be these indices, so $\pi = \inflatesome{\sigma}{j_1, \ldots, j_k} {\alpha_{j_1}, \ldots, \alpha_{j_k}}$. Then every permutation in the interval $[\sigma, \pi]$ has a unique representation as $\inflatesome{\sigma}{j_1, \ldots, j_k}{\upsilon_1, \ldots , \upsilon_k}$, where \begin{align*} \upsilon_i & \in \begin{cases} \{ 1, 12 \} & \text{if } \alpha_{j_i} = 12, \\ \{ 1, 21 \} & \text{if } \alpha_{j_i} = 21. \\ \end{cases} \end{align*} So each position $j_i$ can be inflated by one of two permutations, and thus there is an obvious isomorphism between permutations in the interval $[\sigma, \pi]$ and binary strings of $k$ bits. It follows that the interval is isomorphic to a Boolean lattice, and so by a well-known result (see, for instance, Example 3.8.3 in Stanley~\cite{Stanley2012}), $\mobfn{\sigma}{\pi} = (-1)^{\order{\pi} - \order{\sigma}}$. Thus, if $\mobfn{\sigma}{\pi} = 0$, then at least one $\alpha_i \not \in \{1, 12, 21\}$. \end{proof}
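As a closing remark, the defining recursion for $\mobfn{\sigma}{\pi}$ can be run directly on small intervals to sanity-check results such as the lemma above. A brute-force sketch (ours, not part of the original text; in Python, feasible only for short permutations), using the inflation $2413_{1,3}[12,21] = 346215$ as the test case:
\begin{verbatim}
from functools import lru_cache
from itertools import combinations

def standardize(seq):
    ranks = {v: r + 1 for r, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

def patterns(pi):
    # All distinct patterns of pi, including pi itself.
    return {standardize(s)
            for k in range(1, len(pi) + 1)
            for s in combinations(pi, k)}

@lru_cache(maxsize=None)
def mobius(sigma, pi):
    # mu[sigma, pi] from the defining recursion: mu[x, x] = 1, and the
    # values over a full interval [sigma, pi] with sigma < pi sum to zero.
    if sigma == pi:
        return 1
    if sigma not in patterns(pi):
        return 0
    return -sum(mobius(sigma, tau)
                for tau in patterns(pi)
                if tau != pi and sigma in patterns(tau))

# sigma = 2413 is adjacency-free; inflating its 1st and 3rd points by 12
# and 21 gives pi = 346215, so the proof of the lemma above predicts
# mu[sigma, pi] = (-1)^(6-4) = 1.
assert mobius((2, 4, 1, 3), (3, 4, 6, 2, 1, 5)) == 1
\end{verbatim}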
{ "timestamp": "2020-12-29T02:14:17", "yymm": "2012", "arxiv_id": "2012.13795", "language": "en", "url": "https://arxiv.org/abs/2012.13795", "abstract": "We study several aspects of the Möbius function, $\\mu[\\sigma,\\pi]$, on the poset of permutations under the pattern containment order.First, we consider cases where the lower bound of the poset is indecomposable. We show that $\\mu[\\sigma,\\pi]$ can be computed by considering just the indecomposable permutations contained in the upper bound. We apply this to the case where the upper bound is an increasing oscillation, and give a method for computing the value of the Möbius function that only involves evaluating simple inequalities.We then consider conditions on an interval which guarantee that the value of the Möbius function is zero. In particular, we show that if a permutation $\\pi$ contains two intervals of length 2, which are not order-isomorphic to one another, then $\\mu[1,\\pi] = 0$. This allows us to prove that the proportion of permutations of length $n$ with principal Möbius function equal to zero is asymptotically bounded below by $(1-1/e)^2 \\ge 0.3995$. This is the first result determining the value of $\\mu[1,\\pi]$ for an asymptotically positive proportion of permutations $\\pi$.Following this, we use ''2413-balloon'' permutations to show that the growth of the principal Möbius function on the permutation poset is exponential. This improves on previous work, which has shown that the growth is at least polynomial.We then generalise 2413-balloon permutations, and find a recursion for the value of the principal Möbius function of these generalisations.", "subjects": "Combinatorics (math.CO)", "title": "On the Möbius function of permutations under the pattern containment order", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692257588073, "lm_q2_score": 0.7248702880639792, "lm_q1q2_score": 0.7079585030190102 }
https://arxiv.org/abs/1805.01434
Regularity of powers of edge ideals: from local properties to global bounds
Let $I = I(G)$ be the edge ideal of a graph $G$. We give various general upper bounds for the regularity function $\text{reg} I^s$, for $s \ge 1$, addressing a conjecture made by the authors and Alilooee. When $G$ is a gap-free graph and locally of regularity 2, we show that $\text{reg} I^s = 2s$ for all $s \ge 2$. This is a slightly weaker version of a conjecture of Nevo and Peeva. Our method is to investigate the regularity function $\text{reg}I^s$, for $s \ge 1$, via local information of $I$.
\section{Introduction} During the last few decades, studying the regularity of powers of homogeneous ideals has evolved to be a central research topic in algebraic geometry and commutative algebra. This research program began with a celebrated theorem, proved independently by Cutkosky-Herzog-Trung \cite{CHT} and Kodiyalam \cite{Ko}, which stated that for a homogeneous ideal $I$ in a standard graded algebra over a field, the regularity function $\reg I^s$ is asymptotically a linear function (see also \cite{BCH, TW}). Despite much effort from many researchers, this asymptotic linear function is far from being well understood. In this paper, we investigate this regularity function for edge ideals of graphs. We shall explore several classes of graphs for which this regularity function can be explicitly described or bounded in terms of combinatorial data of the graphs. This problem has been studied recently by many authors (cf. \cite{AB, ABS, Ba, BBH, BHT, Erey1, Erey2, JNS, JS, JS3, MFY, NFY}). Our initial motivation for this paper is the general belief that global conclusions can often be derived from local information. Particularly, local conditions on an edge ideal $I$ (i.e., conditions on $\reg (I:x)$, for $x \in V(G)$) should give a global understanding of the function $\reg I^s$, for $s \ge 1$. Our motivation furthermore comes from the following conjectures (see \cite{BBH, Nevo, NP}), which provide a general upper bound for the regularity function of edge ideals, and describe a special class of edge ideals whose powers $I^s$, for $s \ge 2$, all have linear resolutions. \medskip \noindent{\bf Conjecture A} (Alilooee-Banerjee-Beyarslan-H\`a). Let $G$ be a simple graph with edge ideal $I = I(G)$. For any $s \ge 1$, we have $$\reg I^s \le 2s+\reg I - 2.$$ \noindent{\bf Conjecture B} (Nevo-Peeva). Let $G$ be a simple graph with edge ideal $I = I(G)$. Suppose that $G$ is gap-free and $\reg I = 3$. Then, for all $s \ge 2$, we have $$\reg I^s = 2s.$$ Our aim is to investigate Conjectures A and B using the local-global principle. Finding general upper bounds for $\reg I(G)^s$ has received special interest and generated a large number of papers during the last few years. This is partly thanks to a general lower bound for $\reg I(G)^s$ given in \cite{BHT}; particularly, if $\nu(G)$ denotes the induced matching number of $G$ then, for any $s \ge 1$, we have \begin{align} \reg I(G)^s \ge 2s + \nu(G)-1. \label{eq.genlowerbound} \end{align} Our first main result gives a weaker general upper bound for $\reg I(G)^s$ than that of Conjecture A. The motivation for this result comes from an upper bound for the regularity of $I(G)$ given by Adam Van Tuyl and the last author, namely $\reg I(G) \le \beta(G) + 1$, where $\beta(G)$ denotes the matching number of $G$ (see \cite{HVT2008}). We prove the following theorem. \noindent{\bf Theorem \ref{thm.matching}.} Let $G$ be a graph with edge ideal $I = I(G)$, and let $\beta(G)$ be its matching number. Then, for all $s \ge 1$, we have $$\reg I^s \le 2s+\beta(G)-1.$$ As a consequence of Theorem \ref{thm.matching}, for the class of Cameron-Walker graphs, where $\nu(G) = \beta(G)$, we have $$\reg I^s = 2s+\nu(G)-1 \text{ for all } s \ge 1.$$ A graph $G$ is said to be \emph{locally of regularity at most $r-1$} if $\reg (I(G):x) \le r-1$ for every vertex $x$ of $G$. Note that, by \cite[Proposition 4.9]{CHHKTT}, if $G$ is locally of regularity at most $r-1$ then $\reg I(G) \le r$. In the local-global spirit, we reformulate Conjecture A as the following slightly weaker conjecture.
\medskip \noindent{\bf Conjecture $\textbf{A}'$.} Let $G$ be a simple graph with edge ideal $I = I(G)$. Suppose that $G$ is locally of regularity at most $r-1$, for some $r \ge 2$. Then, for any $s \ge 1$, we have $$\reg I^s \le 2s+r-2.$$ Our next main result proves Conjecture $\text{A}'$ for gap-free graphs. \noindent{\bf Theorem \ref{thm.gaplocal}.} Let $G$ be a simple graph with edge ideal $I = I(G)$. Suppose that $G$ is gap-free and locally of regularity at most $r-1$, for some $r \ge 2$. Then, for any $s \ge 1$, we have $$\reg I^s \le 2s+r-2.$$ It is an easy observation that if $I(G)^s$ has a linear resolution for some $s \ge 1$ then $G$ must be gap-free. Conjecture B serves as a converse statement to this observation, and has remained intractable. By applying the local-global principle, we prove a weaker statement, in which the condition $\reg I = 3$ is replaced by the condition that $G$ is locally linear (i.e., locally of regularity at most 2). Our main result toward Conjecture B is stated as follows. \noindent{\bf Theorem \ref{thm.gaplocallin}.} Let $G$ be a simple graph with edge ideal $I = I(G)$. Suppose that $G$ is gap-free and locally linear. Then for all $s \ge 2$, we have $$\reg I^s = 2s.$$ As a consequence of Theorem \ref{thm.gaplocallin}, we quickly recover a result of Banerjee, who showed that if $G$ is gap-free and cricket-free then $I(G)^s$ has a linear resolution for all $s \ge 2$ (see Corollary \ref{cor.Ba}). We end the paper by exhibiting evidence for Conjecture $\text{A}'$ at the first nontrivial value of $s$, i.e., $s = 2$, for all graphs. \noindent{\bf Theorem \ref{thm.square}.} Let $G$ be a graph with edge ideal $I = I(G)$. Suppose that $G$ is locally of regularity at most $r-1$. Then, for any edge $e \in E(G)$, $\reg (I^2:e) \le r.$ Particularly, this implies that $\reg (I^2) \le r+2.$ Our paper is structured as follows. In the next section we give necessary notation and terminology. The reader who is familiar with previous work in this research area may want to proceed directly to Section \ref{chapter3}. In Section \ref{chapter3}, we discuss general upper bounds for the regularity function, aiming toward Conjecture A. Theorem \ref{thm.matching} is proved in this section. In Section \ref{chapter4}, we focus further on gap-free graphs, investigating both Conjectures $\text{A}'$ and B using the local-global principle. Theorems \ref{thm.gaplocal} and \ref{thm.gaplocallin} are proved in this section. We end the paper with Section \ref{chapter5}, proving Theorem \ref{thm.square} and discussing briefly how an effective bound on the regularity of $I(G)^2$ may give us information on the regularity of the second symbolic power $I(G)^{(2)}$. This gives a glimpse into future work on the regularity function of symbolic powers of edge ideals. \begin{acknowledgement} Part of this work was done while the first named and the last named authors were visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM). We would like to express our gratitude toward VIASM for its support and hospitality. The last named author is partially supported by Simons Foundation (grant \#279786) and Louisiana Board of Regents (grant \#LEQSF(2017-19)-ENH-TR-25). The authors thank Thanh Vu for pointing out a mistake in our first version of the paper. \end{acknowledgement} \section{Preliminaries}\label{chapter2} In this section, we collect notation and terminology used in the paper. For unexplained notions, we refer the reader to standard texts \cite{BrHe, E, HH, MS, Stanley, V}.
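The combinatorial invariants entering our bounds, namely the matching number $\beta(G)$, the induced matching number $\nu(G)$, and gap-freeness, are all recalled in the definitions below, and for small examples they can be checked by exhaustive search. The following sketch (ours, in Python, and not part of the paper; a graph is represented as a list of edges) may help the reader experiment:
\begin{verbatim}
from itertools import combinations

def is_matching(edges):
    # A matching is a set of pairwise disjoint edges.
    verts = [v for e in edges for v in e]
    return len(verts) == len(set(verts))

def is_induced(edges, graph):
    # An induced matching: the graph has no further edge among its vertices.
    verts = {v for e in edges for v in e}
    chosen = {frozenset(e) for e in edges}
    return all(frozenset(e) in chosen for e in graph if set(e) <= verts)

def matching_number(graph):          # beta(G)
    return max(k for k in range(len(graph) + 1)
               if any(is_matching(c) for c in combinations(graph, k)))

def induced_matching_number(graph):  # nu(G)
    return max(k for k in range(len(graph) + 1)
               if any(is_matching(c) and is_induced(c, graph)
                      for c in combinations(graph, k)))

def is_gap_free(graph):
    # A graph is gap-free exactly when it has no induced matching of size 2.
    return induced_matching_number(graph) <= 1

# The 5-cycle has beta = 2 and nu = 1, so it is gap-free; the bounds above
# then give 2s <= reg I(C_5)^s <= 2s + 1 for all s >= 1.
C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
assert (matching_number(C5), induced_matching_number(C5), is_gap_free(C5)) \
       == (2, 1, True)
\end{verbatim}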
\noindent{\bf Graph Theory.} Throughout the paper, $G$ shall denote a finite simple graph with vertex set $V(G)$ and edge set $E(G)$. A subgraph $G'$ of $G$ is called \emph{induced} if for any two vertices $u,v$ in $G'$, $uv \in E(G') \Leftrightarrow uv \in E(G)$. For a subset $W \subseteq V(G)$, we shall denote by $G_W$ the induced subgraph of $G$ over the vertices in $W$, and denote by $G-W$ the induced subgraph of $G$ on $V(G) \setminus W$. When $W = \{w\}$ consists of a single vertex, we also write $G-w$ for $G-\{w\}$. The \emph{complement} of a graph $G$, denoted by $G^c$, is the graph on the same vertex set $V(G)$ in which $uv \in E(G^c) \Leftrightarrow uv \not\in E(G)$. \begin{definition} Let $G$ be a graph. \begin{enumerate} \item A \emph{walk} in $G$ is a sequence of (not necessarily distinct) vertices $x_1,x_2, \dots, x_n$ such that $x_ix_{i+1}$ is an edge for all $i=1,2,\dots,n-1.$ A \emph{circuit} is a \emph{closed} walk (i.e., when $x_1 \equiv x_n$). \item A \emph{path} in $G$ is a walk whose vertices are distinct (except possibly the first and the last vertices). \item A \emph{cycle} in $G$ is a closed path. A cycle consisting of $n$ distinct vertices is called an $n$-cycle and often denoted by $C_n.$ \item An \emph{anticycle} is the complement of a cycle. \end{enumerate} \end{definition} A graph in which there is no induced cycle of length greater than 3 is called a \emph{chordal} graph. A graph whose complement is chordal is called a \emph{co-chordal} graph. \begin{definition} Let $G$ be a graph. \begin{enumerate} \item A \emph{matching} in $G$ is a collection of pairwise disjoint edges. The \emph{matching number} of $G$, denoted by $\beta(G)$, is the maximum size of a matching in $G$. \item An \emph{induced matching} in $G$ is a matching $C$ such that the induced subgraph of $G$ over the vertices in $C$ does not contain any edge other than those already in $C$. The \emph{induced matching number} of $G$, denoted by $\nu(G)$, is the maximum size of an induced matching in $G$. \end{enumerate} \end{definition} \begin{definition} Let $G$ be a graph. \begin{enumerate} \item Two disjoint edges $uv$ and $xy$ are said to form a \emph{gap} in $G$ if $G$ does not have an edge with one endpoint in $\{u,v\}$ and the other in $\{x,y\}$. \item If $G$ has no gaps then $G$ is called \emph{gap-free}. Equivalently, $G$ is gap-free if and only if $\nu(G) = 1$ (i.e., $G^c$ contains no induced $C_4$). \end{enumerate} \end{definition} For any integer $n$, $K_n$ denotes the \emph{complete} graph over $n$ vertices (i.e., there is an edge connecting any pair of vertices). For any pair of integers $m$ and $n$, $K_{m,n}$ denotes the \emph{complete bipartite} graph; that is, a graph with a bipartition $(U,V)$ of the vertices such that $|U| = m, |V| = n$ and $E(K_{m,n}) = \{uv ~|~ u \in U, v \in V\}.$ \begin{definition} \quad \begin{enumerate} \item A graph isomorphic to $K_{1,3}$ is called a \emph{claw}. A graph without any induced claw is called a \emph{claw-free} graph. \item A graph isomorphic to the graph with vertex set $\{w_1,w_2,w_3,w_4,w_5\}$ and edge set $\{w_1w_3,w_2w_3,w_3w_4,w_3w_5,w_4w_5\}$ is called a \emph{cricket}. A graph without any induced cricket is called a \emph{cricket-free} graph. \end{enumerate} \end{definition} \begin{observation} A claw-free graph is cricket-free. \end{observation} \begin{notation} Let $G$ be a graph, let $u,v \in V(G)$, and let $e = xy \in E(G)$. \begin{enumerate} \item The set of vertices adjacent to $u$, the \emph{neighborhood} of $u$, is denoted by $N_G(u)$.
Set $N_G[u] = N_G(u) \cup \{u\}$. \item The set of vertices adjacent to an endpoint of $e$, the \emph{neighborhood} of $e$, is denoted by $N_G(e)$. Set $N_G[e] = N_G(e) \cup \{x,y\}.$ \item The \emph{degree} of $u$ is $\deg_G(u) = \big|N_G(u)\big|$. An edge is called a \emph{leaf} or a \emph{whisker} if one of its endpoints has degree exactly 1. \item The \emph{distance} between $u$ and $v$, denoted by $d(u,v)$, is the minimum number of edges that must be traversed to travel from $u$ to $v$ in $G$. \end{enumerate} \end{notation} We can naturally extend these notions to get $N_G(W)$, $N_G[W]$, $N_G({\mathcal E})$ and $N_G[{\mathcal E}]$ for a subset of the vertices $W \subseteq V(G)$ or a subset of the edges ${\mathcal E} \subseteq E(G)$. \begin{definition} Let $G$ be a graph. \begin{enumerate} \item A collection $W$ of the vertices in $G$ is called an \emph{independent set} if there is no edge connecting two vertices in $W$. \item The \emph{independence complex} of $G$, denoted by $\Delta(G)$, is the simplicial complex whose faces are independent sets of $G$. \end{enumerate} \end{definition} \noindent{\bf Commutative Algebra.} Let $G$ be a simple graph over the vertices $V(G) = \{x_1, \dots, x_n\}$. By abusing notation, we shall identify the vertices of $G$ with the variables in a polynomial ring $S = k[x_1, \dots, x_n]$, where $k$ is any infinite field. Particularly, we shall use $uv$ to denote both the edge $uv$ in $G$ and the monomial $uv$ in $S$ (the meaning will be clear from the context). \begin{definition} Let $G$ be a graph over the vertices $V(G) = \{x_1, \dots, x_n\}$. The \emph{edge ideal} of $G$ is defined to be $$I(G) = \langle xy ~|~ xy \in E(G)\rangle \subseteq S.$$ \end{definition} Castelnuovo-Mumford regularity is \emph{the} invariant being investigated in this paper. We shall give a definition most suitable for our context. \begin{definition} Let $S$ be a standard graded polynomial ring over a field $k$. The \emph{regularity} of a finitely generated graded $S$-module $M$, written as $\reg M$, is given by $$\reg(M):= \max \{j-i ~|~ \Tor_{i} (M,k)_j \neq 0 \}.$$ \end{definition} For a graph $G$, we shall use $\reg I(G)$ and $\reg G$ interchangeably. The following simple bound is often used without reference. \begin{lemma}[\protect{See \cite[Lemma 3.1]{HaSurvey}}] \label{lem.indsubgraph} Let $G$ be a simple graph and let $H$ be an induced subgraph of $G$. Then $$\reg I(H) \le \reg I(G).$$ Particularly, for any vertex $v \in V(G)$, we have that $\reg I(G - v) \le \reg I(G).$ \end{lemma} A standard use of short exact sequences yields the following result, which we shall also often use. \begin{lemma}\label{exact} Let $I \subseteq S$ be a monomial ideal, and let $m$ be a monomial of degree $d$. Then $$\reg I \le \max\{ \reg (I : m) + d, \reg (I,m)\}.$$ Moreover, if $m$ is a variable appearing in $I$, then $\reg I$ is \emph{equal} to one of the right-hand-side terms. \end{lemma} \begin{definition} Let $r \in {\mathbb N}$. A graph $G$ is said to be \emph{locally of regularity $\le r$} if for every vertex $x \in V(G)$, we have $\reg (I(G):x) \le r$. A graph which is locally of regularity $\le 2$ is called \emph{locally linear}. \end{definition} \noindent{\bf Auxiliary Results.} We next recall a few results that are useful for our purpose. We shall make use of the following characterization of edge ideals of graphs with linear resolutions.
This characterization was first given in topological language by Wegner \cite{Wegner} and later, independently, by Lyubeznik \cite{L} and Fr\"oberg \cite{Fr} in the language of monomial ideals. \begin{theorem}[\protect{See \cite[Theorem 1]{Fr}}] \label{thm.regtwo} Let $G$ be a simple graph. Then $\reg I(G) = 2$ if and only if $G$ is a co-chordal graph. \end{theorem} In the study of powers of edge ideals, Banerjee developed the notion of even-connection and gave an important inductive inequality in \cite{Ba}. This inductive method has proved to be quite powerful, and we shall make use of it often. \begin{theorem}\label{thm.inductive} For any finite simple graph $G$ and any $s\geq 1$, let $\{m_1,\dots,m_k\}$ be the set of minimal monomial generators of $I(G)^s$. Then $$\reg I(G)^{s+1} \leq \max \big\{ \reg I(G)^s,\ \max_{1\leq l \leq k} \big( \reg (I(G)^{s+1} : m_l)+2s \big) \big\}.$$ \end{theorem} The ideal $(I(G)^{s+1}: m)$ in Theorem \ref{thm.inductive} and its generators are understood via the following notion of even-connection. \begin{definition}\label{even_connected} Let $G=(V,E)$ be a graph. Two vertices $u$ and $v$ ($u$ may be the same as $v$) are said to be even-connected with respect to an $s$-fold product $e_1\cdots e_s$, where the $e_i$'s are edges of $G$, not necessarily distinct, if there is a path $p_0p_1\cdots p_{2k+1}$, $k\geq 1$, in $G$ such that: \begin{enumerate} \item $p_0=u,p_{2k+1}=v.$ \item For all $0 \leq l \leq k-1,$ $p_{2l+1}p_{2l+2}=e_i$ for some $i$. \item For all $i$, $\big|\{l \geq 0 \mid p_{2l+1}p_{2l+2}=e_i \}\big| \le \big|\{j \mid e_j=e_i\}\big|$. \item For all $0 \leq r \leq 2k$, $p_rp_{r+1}$ is an edge in $G$. \end{enumerate} \end{definition} It turns out that $(I(G)^{s+1} : m)$ is generated by monomials in degree 2. \begin{theorem}[\protect{\cite[Theorem 6.1 and Theorem 6.7]{Ba}}] \label{even_connec_equivalent} Let $G$ be a graph with edge ideal $I = I(G)$, and let $s \geq 1$ be an integer. Let $m$ be a minimal generator of $I^s$. Then $(I^{s+1} : m)$ is minimally generated by monomials of degree 2, and $uv$ ($u$ and $v$ may be the same) is a minimal generator of $(I^{s+1} : m )$ if and only if either $\{u, v\} \in E(G) $ or $u$ and $v$ are even-connected with respect to $m$. \end{theorem} \section{General Upper Bounds for the Regularity Function} \label{chapter3} The aim of this section is to give a weaker general upper bound for $\reg I(G)^s$ than that of Conjecture A. The heart of many studies on the regularity of powers of edge ideals is to understand the colon ideal $J = I(G)^s : e_1 \dots e_{s-1}$ when making use of Banerjee's inductive method, Theorem \ref{thm.inductive}. We start by examining a local property of $J$. \begin{lemma} \label{lem.induction} Let $G$ be a simple graph with edge ideal $I = I(G)$ and let $s \in {\mathbb N}$. Let $e_1, \dots, e_{s-1} \in E(G)$, $J = I^s : e_1 \dots e_{s-1}$, and let $G'$ be the graph associated to the polarization of $J$. Let $w \in V(G)$. \begin{enumerate} \item If $e_1$ is a leaf of $G$ then $J = I^{s-1} : e_2 \dots e_{s-1}$. \item Suppose that $w \not\in N_G[\{e_1, \dots, e_{s-1}\}]$. Then $$J:w = I(G - N_G[w])^s : e_1 \dots e_{s-1} + (u ~\big|~ u \in N_G[w]).$$ \item Suppose that $w \in N_G[e_1]$. Then $$J:w = (I(G - N_{G'}[w])^t : f_1 \dots f_{t-1}) + (u ~\big|~ u \in N_{G'}(w))$$ for some $t \le s$, and a subcollection $\{f_1, \dots, f_{t-1}\}$ of $\{e_2, \dots, e_{s-1}\}$.
Moreover, in this case, the graph associated to the polarization of $I(G-N_{G'}[w])^t : f_1 \dots f_{t-1}$ is an induced subgraph of that associated to the polarization of $I(G-N_G[w])^t : f_1 \dots f_{t-1}$. \end{enumerate} \end{lemma} \begin{proof} (1) It follows from Theorem \ref{even_connec_equivalent} that $J$ is obtained by adding to $I$ quadratic generators $uv$, where $u$ and $v$ are even-connected in $G$ with respect to $e_1 \dots e_{s-1}$. If $e_1$ is an isolated edge then clearly, by definition, the even-connected path between $u$ and $v$ does not contain $e_1$. Thus, $uv \in I^{s-1} : e_2 \dots e_{s-1}$ and (1) is proved. (2) It can be seen that if $w \not\in N_G[\{e_1, \dots, e_{s-1}\}]$ then $w$ is not in any even-connected path with respect to $e_1 \dots e_{s-1}$. Thus, even-connected paths with respect to $e_1 \dots e_{s-1}$ between two vertices that are not in $N_G[w]$ are even-connected paths with respect to $e_1 \dots e_{s-1}$ in $G - N_G[w]$. Furthermore, any edge $uv \in J$, for which $u \in N_G[w]$ (similarly if $v \in N_G[w]$), would be divisible by $u \in (J:w)$ and, thus, subsumed into the ideal $(u ~\big|~ u \in N_G[w])$. Therefore, (2) follows. \begin{figure}[h] \includegraphics[width=0.9\linewidth]{even1.pdf} \caption{When $w \in e_1$} \label{fig:even1} \end{figure} \begin{figure}[h] \includegraphics[width=0.9\linewidth]{even2.pdf} \caption{When $w \in N_G(e_1)$} \label{fig:even2} \end{figure} (3) We first observe that for any subcollection $\{f_1, \dots, f_{t-1}\}$ of $\{e_1, \dots, e_{s-1}\}$ (for some $t \le s$), by the definition of even-connection, we have $$I(G - N_{G'}[w])^t : f_1 \dots f_{t-1} \subseteq J \subseteq (J:w).$$ Moreover, for any $u \in N_{G'}(w)$, either $uw \in E(G)$ or $u$ and $w$ are even-connected with respect to $e_1 \dots e_{s-1}$; in both cases $uw \in J$, i.e., $u \in (J : w)$. Thus, we have the inclusion $$(I(G - N_{G'}[w])^t : f_1 \dots f_{t-1}) + (u ~\big|~ u \in N_{G'}(w)) \subseteq (J:w).$$ To prove the other inclusion, let us analyse the minimal generators of $(J:w)$ more closely. Consider any $uv \in J$, where $u$ and $v$ are even-connected with respect to $e_1 \dots e_{s-1}$. If $v \equiv w$ (similarly if $u \equiv w$) then $u \in N_{G'}(w)$. If $u,v \not\equiv w$, but $v \in N_{G'}(w)$ (similarly if $u \in N_{G'}(w)$), then $uv$ is subsumed in the ideal $(u ~\big|~ u \in N_{G'}(w))$. Suppose now that $u,v \not\in N_{G'}[w]$. Then $u,v \in G - N_{G'}[w]$, which are even-connected with respect to $e_1 \dots e_{s-1}$. Observe that if the even-connected path between $u$ and $v$ contains $e_1$ then, by considering a subpath of this path, either $u$ and $w$ or $v$ and $w$ are even-connected with respect to $e_1 \dots e_{s-1}$ (see Figures \ref{fig:even1} and \ref{fig:even2}). That is, either $u$ or $v$ is in $N_{G'}(w)$, and so $uv$ is again subsumed in the ideal $(u ~\big|~ u \in N_{G'}(w))$. Therefore, we may assume that $u$ and $v$ are even-connected with respect to a subcollection $\{f_1, \dots, f_{t-1}\}$ of $\{e_2, \dots, e_{s-1}\}$. That is, $uv \in I(G - N_{G'}[w])^t : f_1 \dots f_{t-1}$. \begin{figure}[h] \includegraphics[width=0.7\linewidth]{evenwsub.pdf} \caption{When an even-connected path $u$ --- $v$ contains $w' \in N_{G'}[w]$} \label{fig:evenwsub} \end{figure} To establish the last statement, consider any two vertices $u$ and $v$ which are even-connected in $G-N_G[w]$ with respect to $f_1 \dots f_{t-1}$.
If the even-connected path between $u$ and $v$ does not contain any vertex in $N_{G'}[w] \setminus N_G[w]$ then $u$ and $v$ are even-connected in $G-N_{G'}[w]$. If the even-connected path between $u$ and $v$ contains a vertex $w' \in N_{G'}[w] \setminus N_G[w]$ (see Figure \ref{fig:evenwsub}) then, by combining with the even-connected path from $w$ to $w'$, either $u$ and $w$ or $v$ and $w$ are even-connected in $G'$. That is, either $u$ or $v$ is already in $N_{G'}[w]$ (or equivalently, not in $G-N_{G'}[w]$). Hence, the graph associated to the polarization of $I(G-N_{G'}[w])^t : f_1 \dots f_{t-1}$ is an induced subgraph of that associated to the polarization of $I(G-N_G[w])^t : f_1 \dots f_{t-1}$. \end{proof} By understanding local properties of $J$ in Lemma \ref{lem.induction}, we are able to give a general upper bound for the regularity function based on a well-chosen numerical function on families of graphs. Specific interesting general bounds can be obtained by picking these numerical functions suitably. \begin{definition} A collection ${\mathcal F}$ of simple graphs is a \emph{hierarchy} if for any nonempty graph $G \in {\mathcal F}$, both $G - u$ and $G - N_G[u]$ are in ${\mathcal F}$ for any vertex $u \in V(G)$. \end{definition} \begin{theorem} \label{thm.hierarchy} Let ${\mathcal F}$ be a hierarchy of simple graphs. Let $f : {\mathcal F} \longrightarrow {\mathbb N}$ be a function satisfying the following properties: \begin{enumerate} \item for any $G \in {\mathcal F}$, $\reg I(G) \le f(G)$; and \item for any nonempty graph $G \in {\mathcal F}$ and each non-isolated vertex $w \in V(G)$, $$f(G-w) \le f(G) \text{ and } f(G - N_G[w]) \le \max\{f(G)-1, 2\}.$$ \end{enumerate} Then, for any $G \in {\mathcal F}$ and any $s \ge 1$, we have $$\reg I(G)^s \le 2s + f(G) - 2.$$ \end{theorem} \begin{proof} Fix a graph $G \in {\mathcal F}$ and let $I = I(G)$. If $f(G) \le 2$ then the result is immediate from \cite{HHZ} (in this case $I(G)$ has a linear resolution, and hence so does every power of $I(G)$). Assume that $f(G) \ge 3$. Then the condition on $f(G - N_G[w])$ reads $f(G-N_G[w]) \le f(G)-1.$ By Theorem \ref{thm.inductive} and the hypothesis that $\reg I(G) \le f(G)$, it suffices to show that for any collection of edges $e_1, \dots, e_{s-1}$ in $G$ (not necessarily distinct), we have \begin{align} \reg (I^s : e_1 \dots e_{s-1}) \le f(G). \label{eq.bounds} \end{align} We shall prove (\ref{eq.bounds}) by induction on $s$ and on the size of the graph $G$. Let $J = I^s : e_1 \dots e_{s-1}$. The statement is trivial if $s = 1$ (whence, $J = I$) or if $G$ is the empty graph (whence, $J = (0)$). Suppose that $s \ge 2$ and $G$ is not the empty graph. Let $w \in V(G)$ be any vertex in $G$. It follows from Lemma \ref{lem.induction} that $\reg (J:w)$ is equal to either $\reg (I(G - N_G[w])^s : e_1 \dots e_{s-1})$ or $\reg (I(G - N_{G'}[w])^t : f_1 \dots f_{t-1})$, where the graph associated to the polarization of $I(G-N_{G'}[w])^t : f_1 \dots f_{t-1}$ is an induced subgraph of that associated to the polarization of $I(G-N_G[w])^t : f_1 \dots f_{t-1}$. If the latter is the case, then by Lemma \ref{lem.indsubgraph} and the fact that polarization does not change the regularity, we have $$\reg (J:w) \le \reg (I(G-N_G[w])^t : f_1 \dots f_{t-1}).$$ Thus, since $G-N_G[w] \in {\mathcal F}$, by induction on the size of the graphs and our assumption, we have \begin{eqnarray}\label{eq:colon} \reg (J:w) \le f(G-N_G[w]) \le f(G)-1 \text{ for any vertex } w \in V(G).
\end{eqnarray} By taking, for example, a vertex cover of the graph associated to the polarization of $J$, we may assume that we have a collection of distinct vertices $w_1, \dots, w_l$ of $G$ such that $(J,w_1, \dots, w_l) = (w_1, \dots, w_l)$. Observe that for each $i = 1, \dots, l-1$, we have $$(J,w_1, \dots, w_i):w_{i+1} = (J:w_{i+1}) + (w_1, \dots, w_{i}).$$ Thus, by \cite[Corollary 3.2]{Herzog} and (\ref{eq:colon}), we get $$\reg [(J,w_1, \dots, w_i):w_{i+1}] \le \reg (J:w_{i+1}) \le f(G)-1.$$ This, by successively applying Lemma \ref{exact} with $(J, w_1, \dots, w_i)$ and $w_{i+1}$, implies that $$\reg (J,w_1) \le f(G).$$ The assertion now follows by utilizing Lemma \ref{exact} with $J$ and $w_1$. \end{proof} Based on the known upper bound for $\reg I(G)$, given in \cite{HVT2008}, one can take $f(G)$ in Theorem \ref{thm.hierarchy} to be $\beta(G)+1$, where $\beta(G)$ is the matching number of the graph, and obtain the following interesting bound for the regularity function. \begin{theorem} \label{thm.matching} Let $G$ be a simple graph with edge ideal $I = I(G)$. Let $\beta(G)$ denote the matching number of $G$. Then, for all $s \ge 1$, we have $$\reg I^s \le 2s+\beta(G)-1.$$ \end{theorem} \begin{proof} Let ${\mathcal F}$ be the family of all simple graphs. Then ${\mathcal F}$ clearly is a hierarchy. Let $f(G) = \beta(G)+1$ for all $G \in {\mathcal F}$. It is easy to see that: \begin{enumerate} \item $\reg I(G) \le f(G)$ by \cite{HVT2008}; and \item For any non-isolated vertex $w$ in $G,$ clearly $\beta(G-w) \le \beta(G)$, and we can always add an edge incident to $w$ to any matching of $G - N_G[w]$ to get a bigger matching, and so $f(G-N_G[w]) \le f(G)-1.$ \end{enumerate} Hence, the statement follows from Theorem \ref{thm.hierarchy}. \end{proof} A particularly interesting application of Theorem \ref{thm.matching} is for the class of Cameron-Walker graphs introduced in \cite{CW}. These are graphs for which $\nu(G) = \beta(G)$. See \cite{HHKO} for a further classification of Cameron-Walker graphs. \begin{corollary} \label{cor.CW} Let $G$ be a Cameron-Walker graph and let $I = I(G)$ be its edge ideal. Then, for all $s \ge 1$, we have $$\reg I^s = 2s + \nu(G)-1.$$ \end{corollary} \begin{proof} The conclusion is an immediate consequence of Theorem \ref{thm.matching} and the lower bound (\ref{eq.genlowerbound}), noting that $\nu(G) = \beta(G)$ if $G$ is a Cameron-Walker graph. \end{proof} It is known, by the main theorem of \cite{HHZ}, that if $I(G)$ has a linear resolution then so does $I(G)^s$ for any $s \in {\mathbb N}$. Thus, the first nontrivial case of Conjecture $\text{A}$ is for those graphs $G$ such that $G$ is locally linear and $\reg I(G) > 2$. Recall that by \cite[Proposition 4.9]{CHHKTT}, in this case, we necessarily have $\reg I(G) = 3$. Theorem \ref{thm.hierarchy} allows us to settle Conjecture $\text{A}$ for this class of graphs. \begin{theorem} \label{thm.conj3} Let $G$ be a graph with edge ideal $I = I(G)$. Suppose that $G$ is locally linear. Then for all $s \ge 1$, we have $$\reg I^s \le 2s+\reg I-2 \le 2s+1.$$ \end{theorem} \begin{proof} Let ${\mathcal F}$ be the family of locally linear graphs (including those whose edge ideals have linear resolutions). Define $f : {\mathcal F} \longrightarrow {\mathbb N}$ by $f(G) = \reg I(G)$ for all $G \in {\mathcal F}$. By the definition of local linearity, for every vertex $w$ of a graph $G \in {\mathcal F}$, the edge ideal of $G - N_G[w]$ has a linear resolution; moreover, by Lemma \ref{lem.indsubgraph}, every induced subgraph of a locally linear graph is again locally linear. Thus, ${\mathcal F}$ is a hierarchy, and since $f(G-w) \le f(G)$ and $f(G-N_G[w]) \le 2 \le \max\{f(G)-1,2\}$, the function $f$ satisfies the conditions of Theorem \ref{thm.hierarchy}.
The conclusion now follows from that of Theorem \ref{thm.hierarchy}. \end{proof} \begin{example} Let $G$ be a graph such that $G^c$ is triangle-free (see, for example, Figure \ref{fig:trianglefree}). It can be seen that for any $x \in V(G)$, $G - N_G[x]$ is a complete graph (and, thus, is of regularity 2). Therefore, $G$ is a locally linear graph. \begin{figure}[h] \includegraphics[width=0.5\linewidth]{trianglefreereg3.pdf} \caption{A graph whose complement is triangle-free} \label{fig:trianglefree} \end{figure} \end{example} \section{Regularity Function of Gap-free Graphs} \label{chapter4} In this section, we focus on gap-free graphs, investigating both Conjectures $\text{A}'$ and B. We start with a stronger version of \cite[Lemma 6.18]{Ba}. The proof is almost the same as that given in \cite[Lemma 6.18]{Ba}. \begin{lemma}\label{lem.gaplocal} Let $G$ be a gap-free graph with edge ideal $I = I(G)$. Let $e_1, \dots, e_{s-1}$ be a collection of edges, let $J = I^s : e_1 \dots e_{s-1}$, and let $G'$ be the graph associated to the polarization of $J$. Let $W \subseteq V(G)$. Suppose that $u=p_0, \dots, p_{2k+1}=v$ is an even-connected path in $G$ with respect to $e_1 \dots e_{s-1}$ satisfying: \begin{enumerate} \item $u,v \not\in W$; and \item this path is of the longest possible length with respect to condition (1). \end{enumerate} Then $G' - W - N_{G'}[u]$ is obtained by adding isolated vertices to an induced subgraph of $G - N_G[u]$. \end{lemma} \begin{proof} By Theorem \ref{even_connec_equivalent}, $uv \in G' - W$. Consider any other edge $u'v' \in G' \setminus G$ with $u',v' \not\in W$. Then, there is an even-connected path $u' = q_0, \dots, q_{2l+1}=v'$ in $G$ with respect to $e_1 \dots e_{s-1}$ for some $1 \le l \le k$. If there exist $i$ and $j$ such that $p_{2i+1}p_{2i+2}$ and $q_{2j+1}q_{2j+2}$ are the same edge in $G$ then by combining these two even-connected paths, either $u'$ or $v'$ will be even-connected to $u$. That is, either $u'$ or $v'$ will become an isolated vertex in $G'-W-N_{G'}[u]$. We may assume that the two even-connected paths between $u,v$ and $u',v'$ do not share any edge. Consider $p_1p_2$ and $q_1q_2$. Since these two edges do not form a gap in $G$, they must be connected. Let us now explore different possibilities for this connection. If $p_1 \equiv q_1$ then $u$ and $v'$ are even-connected with respect to $e_1 \dots e_{s-1}$, and so $v'$ becomes an isolated vertex in $G'-W-N_{G'}[u]$. If $p_1 \equiv q_2$ (similarly for the case that $p_2 \equiv q_1$) then $u$ and $u'$ are even-connected with respect to $e_1 \dots e_{s-1}$, and so $u'$ becomes an isolated vertex in $G'-W-N_{G'}[u]$. If $p_1q_1 \in E(G)$ then combining the two even-connected paths between $u,v$ and $u',v'$ and the edge $p_1q_1$, we get an even-connected path between $v$ and $v'$ that is of length $>k$, a contradiction. If $p_1q_2 \in E(G)$ (similarly for the case that $p_2q_1 \in E(G)$) then by combining the two even-connected paths between $u,v$ and $u',v'$ and the edge $p_1q_2$, we have an even-connected path between $u'$ and $v$ that is of length $> k$, a contradiction. Thus, in any case, either $u'$ or $v'$ will become an isolated vertex in $G'-W-N_{G'}[u]$. That is, any edge in $G' \setminus G$ will reduce to an isolated vertex in $G'-W-N_{G'}[u]$. The statement is proved. \end{proof} Our next main result establishes Conjecture $\text{A}'$ for gap-free graphs. \begin{theorem}\label{thm.gaplocal} Let $G$ be a graph with edge ideal $I = I(G)$ and let $r \ge 3$ be an integer.
Assume that $G$ is gap-free and locally of regularity $\le r-1$. Then, for all $s \in {\mathbb N}$, we have $$\reg I^s \le 2s + r-2.$$ \end{theorem} \begin{proof} By \cite[Proposition 4.9]{CHHKTT}, we have $\reg I \le r$. By Theorem \ref{thm.inductive}, it suffices to show that for any collection of edges $e_1, \dots, e_{s-1}$ (not necessarily distinct) in $G$, we have $$\reg (I^s : e_1 \dots e_{s-1}) \le r.$$ Let $G'$ be the graph associated to the polarization of $J = I^s : e_1 \dots e_{s-1}$. It follows from Lemma \ref{exact} that, for any vertex $x \in G'$, \begin{align} \reg G' \leq \max \{\reg(G'-N_{G'}[x])+1, \reg(G'-x)\}. \label{eq.sesG'} \end{align} Thus, we shall show that $\reg (G'-x) \le r$ and $\reg (G'-N_{G'}[x]) \le r-1.$ Let $u$ and $v$ be even-connected in $G$ with respect to $e_1 \dots e_{s-1}$ such that the even-connected path $u=p_0,\dots,p_{2k_1+1}=v$ is of maximum possible length. By Lemma \ref{lem.gaplocal}, $G'-N_{G'}[u]$ is obtained by adding isolated vertices to an induced subgraph of $G - N_G[u]$. Thus, by Lemma \ref{lem.indsubgraph}, we have $\reg (G'-N_{G'}[u]) \le \reg (G - N_G[u]) \le r-1.$ It remains to consider $\reg (G'-u)$. Let $u'$ and $v'$ be even-connected in $G$ with respect to $e_1 \dots e_{s-1}$ such that $u',v' \in G'-u$ and there is an even-connected path $u' = q_0, \dots, q_{2l+1}=v'$ in $G$ with respect to $e_1 \dots e_{s-1}$ with $l$ as large as possible. By using Lemma \ref{lem.gaplocal} again, we can deduce that $\reg (G'-u-N_{G'}[u']) \le \reg (G-N_G[u']) \le r-1$. Thus, by applying (\ref{eq.sesG'}) to the graph $G'-u$, it suffices to show that $\reg (G' - \{u,u'\}) \le r$. We can continue in this fashion until all edges in $G' \setminus G$ are examined, i.e., we obtain a collection $W \subseteq V(G)$ such that $G'-W = G-W$, and reduce the problem to showing that $\reg (G' - W) = \reg (G - W) \le r$. This is obviously true by Lemma \ref{lem.indsubgraph} and the fact that $\reg G \le r$. The theorem is proved. \end{proof} We shall now shift our attention to Conjecture B. We begin with an improved statement of \cite[Corollary 6.5]{CHHKTT}. \begin{lemma}\label{lem.locallinear} Let $G$ be a gap-free and cricket-free graph. Then $G$ is locally linear. \end{lemma} \begin{proof} We may assume that $G$ contains no isolated vertices. By Theorem \ref{thm.regtwo}, it suffices to show that $(G \setminus N_G[x])^c$ is chordal for any vertex $x$ in $G$. Note that since $G \setminus N_G[x]$ is an induced subgraph of $G$, it is gap-free and cannot have any induced anticycle of length 4. Suppose that $W = \{w_1,w_2,\dots,w_n\}$ is such that $G[W]$ is an anticycle of length $n \ge 5$ in $G \setminus N_G[x].$ Clearly, $W \cap N_G[x] = \emptyset$. Let $y$ be a neighbor of $x$. Since $G$ is gap-free, $\{x,y\}$ and $\{w_1,w_3\}$ cannot form a gap. Thus, these edges must be connected in $G$. That is, either $\{y,w_1\}$ or $\{y,w_3\}$ (or both) must be an edge in $G$. Suppose that $\{y,w_1\}$ and $\{y,w_3\}$ are both edges in $G.$ Then, by considering the edges $\{x,y\}$ and $\{w_2,w_n\}$ in $G,$ either $\{y,w_2\}$ or $\{y,w_n\}$ must be an edge in $G.$ If $\{y,w_2\}$ is an edge, then the induced subgraph on $\{x,y,w_1,w_2,w_3\}$ is a cricket in $G,$ a contradiction. Otherwise, $\{y,w_n\} \in E(G)$. Since $\{x,y\}$ and $\{w_2,w_{n-1}\}$ cannot form a gap in $G,$ we must have $\{y,w_{n-1}\} \in E(G)$. Thus, the induced subgraph on $\{x,y,w_1,w_{n-1},w_n\}$ is a cricket in $G,$ a contradiction.
If $\{y,w_1\} \in E(G)$ and $\{y,w_3\} \not\in E(G)$ (similarly for the case $\{y,w_1\} \not\in E(G)$ and $\{y,w_3\} \in E(G)$), then $\{y,w_n\}$ must be an edge in $G$; otherwise, $\{x,y\}$ and $\{w_3,w_n\}$ form a gap in $G.$ By considering $\{x,y\}$ and $\{w_2,w_{n-1}\}$, either $\{y,w_2\}$ or $\{y,w_{n-1}\}$ must be an edge in $G.$ If $\{y,w_2\} \in E(G)$, then the induced subgraph on $\{x,y,w_1,w_2,w_n\}$ is a cricket in $G,$ a contradiction. Otherwise, $\{y,w_{n-1}\} \in E(G)$, and the induced subgraph on $\{x,y,w_1,w_{n-1},w_n\}$ is a cricket in $G,$ a contradiction. \end{proof} \begin{example} There are examples of locally linear gap-free graphs whose regularity is either 2 or 3 (see Figure \ref{fig:gapfree}). \begin{figure}[h] \includegraphics[width=0.7\linewidth]{gap-free-reg-2-3.pdf} \caption{Locally linear gap-free graphs with regularity 2 and 3 (respectively)} \label{fig:gapfree} \end{figure} On the other hand, note that if $G$ is not gap-free, then $\nu(G)\geq 2$, which implies $\reg I(G) \geq 3.$ Thus, if, in addition, $G$ is locally linear, then we have $\reg I(G)= 3$ by \cite[Proposition 4.9]{CHHKTT}. Figure \ref{fig:notgapfree} depicts such a graph. \begin{figure}[h] \includegraphics[width=0.5\linewidth]{notgapfreereg3.pdf} \caption{A graph that is not gap-free but locally linear with regularity 3} \label{fig:notgapfree} \end{figure} \end{example} We are now ready to state our main result toward Conjecture B. In this result, we establish the conclusion of Conjecture B, replacing the condition that $\reg I(G) = 3$ by the condition that $G$ is locally linear. \begin{theorem}\label{thm.gaplocallin} Let $G$ be a graph with edge ideal $I = I(G)$. Suppose that $G$ is gap-free and locally linear. Then, for all $s \ge 2$, we have $$\reg I^s = 2s.$$ \end{theorem} \begin{proof} Again, by Theorem \ref{thm.inductive}, it suffices to show that for any collection of edges $e_1, \dots, e_{s-1}$ (not necessarily distinct), we have $$\reg (I^s : e_1 \dots e_{s-1}) \le 2.$$ That is, the graph $G'$ associated to the ideal $J = I^s : e_1 \dots e_{s-1}$ is a co-chordal graph. By \cite[Lemma 6.14]{Ba}, $G'$ is also gap-free, and so $G'$ does not contain an anticycle of length 4. Suppose that $W = \{w_1, \ldots, w_n\}$, for $n \ge 5$, is such that $G'[W]$ is an induced anticycle of $G'$. It follows from \cite[Lemma 6.15]{Ba} that $G[W]$ is an induced anticycle of $G$. Let $e_1 = ab$. We shall consider different possibilities for the relative position of $a$ and $b$ with respect to $W$. If $a, b \in W$, say $a \equiv w_1$ and $b \equiv w_i$ (for $i \not= 1$), then since $\{w_1,w_2\}, \{w_1,w_n\} \not\in E(G')$, $b \not= w_2, w_n$. Consider the edges $\{a,b\}$ and $\{w_2,w_n\}$. These do not form a gap (and $a$ is connected to neither $w_2$ nor $w_n$), and so either $\{b,w_2\} \in E(G)$ or $\{b,w_n\} \in E(G)$. If $\{b,w_2\} \in E(G)$ then $w_2$ and $w_3$ are even-connected with respect to $e_1 = ab$, which implies that $\{w_2,w_3\} \in E(G')$, a contradiction. If $\{b,w_n\} \in E(G)$ then $w_{n-1}$ and $w_n$ are even-connected with respect to $e_1 = ab$, which implies that $\{w_{n-1},w_n\} \in E(G')$, also a contradiction. If $a \in W$, say $a = w_1$, and $b \not\in W$ (similar to the case where $a \not\in W$ and $b \in W$), then by considering the edges $\{a,b\}$ and $\{w_2,w_n\}$ again, the same arguments as above would lead to a contradiction.
If $a,b \not\in W$ and either $a$ or $b$ is not connected to any vertices in $W$, then $G'[W]$ (being also an anticycle in $G$) is an anticycle in either $G-N_{G}[a]$ or $G-N_{G}[b]$, which is a contradiction to the local linearity of $G$. It remains to consider the case that $a,b \not\in W$, and both $a$ and $b$ are connected to $W$. Assume that $aw_1 \in E(G)$. Consider the pair of edges $\{a,b\}$ and $\{w_2,w_n\}$. If either $\{b,w_2\} \in E(G)$ or $\{b,w_n\} \in E(G)$ then, as before, we would have either $\{w_2,w_3\} \in E(G')$ or $\{w_{n-1},w_n\} \in E(G')$, which is a contradiction. Thus, we must have either $\{a,w_2\} \in E(G)$ or $\{a,w_n\} \in E(G)$. Without loss of generality, we may assume that $\{a,w_2\} \in E(G)$. We continue by considering the pair of edges $\{a,b\}$ and $\{w_3, w_n\}$. A similar argument shows that $\{a,w_3\} \in E(G)$. We can keep going in this fashion to get $\{a,w_i\} \in E(G)$ for all $i = 1, \dots, n-2$. Now, it can be seen that $b$ cannot be connected to any of the $w_i$ without creating an even-connection that gives $\{w_i,w_{i+1}\} \in E(G')$, for some $i$, which is a contradiction. We have shown that such a collection of vertices $W$ cannot exist. That is, $G'$ is a co-chordal graph. The theorem is proved. \end{proof} Theorem \ref{thm.gaplocallin} immediately recovers the following result of Banerjee \cite{Ba}. \begin{corollary}[\protect{\cite[Theorem 6.7]{Ba}}] \label{cor.Ba} Let $G$ be a gap-free and cricket-free graph. Then, for any $s \ge 2$, we have $$\reg I(G)^s = 2s.$$ \end{corollary} \begin{proof} The conclusion follows from Lemma \ref{lem.locallinear} and Theorem \ref{thm.gaplocallin}. \end{proof} \begin{example} Let $2K_2$ denote a gap and let $K_6$ denote the complete graph on 6 vertices. Let $G = 2K_2 + K_6$ be the \emph{join} of these two graphs (the join of two graphs $H$ and $K$ is obtained by taking the disjoint union of $H$ and $K$ and connecting each vertex in $H$ with every vertex in $K$). Then, it can be seen that $G$ is locally linear but not gap-free. In particular, it follows that $\reg I(G)^s \not= 2s$ for all $s \in {\mathbb N}$; that is, $G$ is a locally linear graph for which the conclusion of Theorem \ref{thm.gaplocallin} fails, so the gap-free hypothesis there cannot simply be dropped. \end{example} \section{Regularity of Second Powers of Edge Ideals} \label{chapter5} We end the paper with a first look at Conjecture $\text{A}'$ in the case $s = 2$. We also take a look at the symbolic square of edge ideals. \begin{theorem} \label{thm.square} Let $G$ be a graph with edge ideal $I = I(G)$. Suppose that $G$ is locally of regularity at most $r-1$. Then, for any edge $e \in E(G)$, $\reg (I^2:e) \le r.$ In particular, this implies that $\reg (I^2) \le r+2.$ \end{theorem} \begin{proof} The second statement follows from the first statement and Theorem \ref{thm.inductive}. To prove the first statement, we shall use induction on $|V(G)|$. Let $J = I^2 : e$ and let $G'$ be the graph associated to $J$. If there are no even-connected vertices in $G$ with respect to $e$, then $I^2 : e = I$, and the conclusion follows from \cite[Proposition 4.9]{CHHKTT}. If there are edges in $G'$ which are not initially in $G$, then these edges are of the form $xy$, where $x \in N(a), y \in N(b)$, or $xx'$, where $x \in N(a) \cap N(b)$ and $x'$ is a new whisker vertex. Suppose that there exists at least one new edge of the form $xy$ for $x \not= y$.
Observe that $J:x= I:x+(u ~|~ u \in N(b)).$ Thus $\reg (J:x) \leq \reg (I:x) \leq r-1.$ Furthermore, $(J,x)= I(G \setminus x)^2:e.$ Therefore, by induction on $|V(G)|,$ we have $\reg (J,x) \leq r.$ Hence, by Lemma \ref{exact}, we have $\reg J \leq r$. Suppose that the only new edges are of the form $xx'$, where $x'$ is a new whisker vertex. Observe that, in this case, \[ J:x= I:x +( u ~|~ u \in N(a)\cup N(b))+(u' ~|~ u' \text{ is a whisker vertex in one of the new edges}) \] and \[(J,x)= I(G \setminus x)^2:e.\] Thus, we also have $\reg (J:x) \leq \reg (I:x) \leq r-1$ and $\reg (J,x)\leq r$ by induction. Hence, by Lemma \ref{exact} again, we have $\reg J \leq r.$ This completes the proof. \end{proof} Symbolic powers in general are much harder to handle than ordinary powers. The symbolic square of an edge ideal appears to be more tractable. We recall and rephrase a result from \cite{Su}. \begin{theorem}[\protect{\cite[Corollary 3.12]{Su}}] \label{SecondSymb} For any graph $G,$ \[ I(G)^{(2)}= I(G)^2+(x_ix_jx_k ~|~ \{x_i,x_j,x_k\} \text{ forms a triangle in } G). \] \end{theorem} The last result of our paper is stated as follows. \begin{theorem} Let $G$ be a graph with edge ideal $I = I(G)$. Suppose that $G$ is locally of regularity at most $r-1$. Then $\reg (I^{(2)}) \le r+2.$ \end{theorem} \begin{proof} We first note that, by Theorem \ref{SecondSymb}, $I^{(2)} \subseteq I$. Let $E(G)=\{e_1,\ldots, e_l\}$. For $0 \le i \le l$, define $K_i=(I^{(2)}, e_1, \dots, e_i)$ and, for $0 \le i \le l-1$, define $J_i=K_i:(e_{i+1})$. Observe that $K_l = I$, and for all $0 \le i \le l-1$ we have the following short exact sequence. \begin{eqnarray} 0 \longrightarrow \frac{R}{J_i} (-2) \longrightarrow \frac{R}{K_i} \longrightarrow \frac{R}{K_{i+1}} \longrightarrow 0 \end{eqnarray} This, in particular, implies that $\displaystyle \reg (I^{(2)}) \leq \max_{0\leq i \leq l-1 }\{\reg (J_i)+2, \reg I\}.$ It follows from Theorem \ref{SecondSymb} that $$J_i= I^2:e_{i+1}+ (x_ix_jx_k:e_{i+1} ~|~ \{x_i,x_j,x_k\} \text{ forms a triangle in } G).$$ Note that if $e$ is an edge in the triangle $\{x_i,x_j,x_k\},$ then $(x_ix_jx_k:e)$ is a variable. If $e$ shares a vertex with the triangle, then the colon ideal is generated by an edge, and $(x_ix_jx_k:e) \in I.$ If $e$ and $\{x_i,x_j,x_k\}$ have no common vertices, then $(x_ix_jx_k:e) = x_ix_jx_k \in I.$ Since $I \subseteq I^2:e_{i+1}$, we thus have, by Theorem \ref{even_connec_equivalent}, $J_i= I^2:e_{i+1}+ (\text{variables})$ and, hence, $\reg J_i \le \reg (I^2:e_{i+1})$. The conclusion now follows from Theorem \ref{thm.square} and the use of \cite[Proposition 4.9]{CHHKTT}. \end{proof}
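We end with a minimal instance of Theorem \ref{SecondSymb}; this illustration is ours and is not taken from \cite{Su}. \begin{example} Let $G = K_3$ be the triangle with edge ideal $I = (x_1x_2, x_1x_3, x_2x_3)$. Theorem \ref{SecondSymb} gives $$I^{(2)} = I^2 + (x_1x_2x_3).$$ Note that $x_1x_2x_3 \not\in I^2$, since every generator of $I^2$ has degree $4$, so the symbolic square is strictly larger than the ordinary square in this case. \end{example}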
{ "timestamp": "2019-04-16T02:19:12", "yymm": "1805", "arxiv_id": "1805.01434", "language": "en", "url": "https://arxiv.org/abs/1805.01434", "abstract": "Let $I = I(G)$ be the edge ideal of a graph $G$. We give various general upper bounds for the regularity function $\\text{reg} I^s$, for $s \\ge 1$, addressing a conjecture made by the authors and Alilooee. When $G$ is a gap-free graph and locally of regularity 2, we show that $\\text{reg} I^s = 2s$ for all $s \\ge 2$. This is a slightly weaker version of a conjecture of Nevo and Peeva. Our method is to investigate the regularity function $\\text{reg}I^s$, for $s \\ge 1$, via local information of $I$.", "subjects": "Commutative Algebra (math.AC); Combinatorics (math.CO)", "title": "Regularity of powers of edge ideals: from local properties to global bounds", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692311915195, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.7079585011521129 }
https://arxiv.org/abs/2212.03325
Proposal of a Score Based Approach to Sampling Using Monte Carlo Estimation of Score and Oracle Access to Target Density
Score based approaches to sampling have shown much success as a generative algorithm to produce new samples from a target density given a pool of initial samples. In this work, we consider the setting in which we have no initial samples from the target density, but rather $0^{th}$ and $1^{st}$ order oracle access to the log likelihood. Such problems may arise in Bayesian posterior sampling, or in approximate minimization of non-convex functions. Using this knowledge alone, we propose a Monte Carlo method to estimate the score empirically as a particular expectation of a random variable. Using this estimator, we can then run a discrete version of the backward flow SDE to produce samples from the target density. This approach has the benefit of not relying on a pool of initial samples from the target density, and it does not rely on a neural network or other black box model to estimate the score.
\section{Introduction} \let\thefootnote\relax\footnotetext{Code available at https://github.com/CMcDonald-1/Score\_Modeling\_Monte\_Carlo} Producing samples from a known probability measure is a core problem in statistics and many machine learning applications. Computing complicated integrals, determining the volume of a high dimensional set, or performing Bayesian inference on a posterior are all problems which require generating samples from the probability measure in question. In this work, we consider sampling from a probability measure $P_{0}$ over $\theta \in \mathbb{R}^{d}$, absolutely continuous with respect to Lebesgue measure and admitting a probability density $p_{0}(\theta) \propto e^{f(\theta) - \frac{1}{2}\theta^{T}\theta}$ for some function $f(\theta)$. The form of the density is known and we are able to query $f(\theta)$ and $\nabla f(\theta)$ at will. When $f(\theta)$ is concave, such a density is called log concave and can easily be sampled by known methods such as Langevin diffusion or other classical Markov Chain Monte Carlo (MCMC) methods. However, we are particularly interested in the case where $f(\theta)$ is non-concave, having multiple separated maximizers. Traditional MCMC methods for such a density have no guarantee of convergence to invariance in finite time. Two examples illustrate this setting. First, $f(\theta)$ may represent the log likelihood of a mixture model (e.g., a Gaussian mixture model) and $p_{0}(\theta)$ the posterior under a normal prior; such a posterior has multiple modes of equal likelihood which are difficult for traditional methods to sample. Second, $f(\theta)$ may represent a non-concave objective we want to maximize under an $L^{2}$ penalty, $\theta^{*}= \text{argmax}_{\theta} f(\theta) - \frac{1}{2}\theta^{T}\theta$; we may then settle not for finding $\theta^{*}$, but rather for sampling $\theta \sim p_{0}(\theta)$ and anticipating that $f(\theta^{*}) \approx f(\theta)$, that is, our sampled point, while not the global maximizer, produces a function value close to the optimal one. Thus, there remain non-log concave sampling problems yet to be sufficiently addressed by existing MCMC algorithms. In recent years, score based methods have shown remarkable success as a generative model on complex multi-modal densities in high dimensions. In this paper, we would like to transfer this success at generative modelling into an effective algorithm for sampling a known density function. The main difference between a generative modelling problem and the sampling problem studied here is what ``information'' about the target density $p_{0}(\theta)$ the algorithm may access. In the current applications of score based methods as a generative algorithm, one starts with an initial pool of samples drawn from the target measure, $\theta_{0}, \cdots, \theta_{N} \sim P_{0}$, but one does not know the functional form of the target density, nor can one query $f(\theta)$ and its gradient $\nabla f(\theta)$ directly. In this work, we have no initial samples from the target density, but do have direct access to the density function $p_{0}(\theta) \propto e^{f(\theta) - \frac{1}{2}\theta^{T}\theta}$ as well as access to $f(\theta)$ and $\nabla f(\theta)$. What connects both problems is the core theory of score based sampling. Underlying all score based approaches to sampling is the dual relationship between the ``forward flow'' and ``backward flow'' stochastic differential equations (SDE).
Consider the forward flow SDE \begin{align} d\theta_{t}&= d(\theta_{t}, t)dt+g(t)dW_{t}, \theta_{0} \sim P_{0}, \end{align} where the marginal distribution of $\theta_{t}$ at time $t$ is denoted $P_{t}$ with density $p_{t}$. Paired with this forward flow SDE is a backward flow SDE \cite{anderson1982reverse} \begin{align} d\theta_{t}&= [d(\theta_{t}, t)-g(t)^{2}\nabla \log p_{t}(\theta_{t})]dt+g(t)dW_{t}, \end{align} where we run time in reverse from $t = T$ to $t = 0$. The forward flow SDE takes $\theta_{0} \sim P_{0}$ to $\theta_{T} \sim P_{T}$, while the backward flow SDE takes $\theta_{T} \sim P_{T}$ to $\theta_{0} \sim P_{0}$. If we can sample $P_{T}$ directly and implement the backward flow SDE, then we can produce samples from $P_{0}$. Crucial to the backward flow SDE is the score function, $\nabla \log p_{t}(\theta)$, which is not known and must be estimated. Popular generative approaches have a large pool of initial samples drawn from $P_{0}$. They then train a model such as a neural network to approximate the score at different time scales $t$ by minimizing an objective function using the initial samples and noisy perturbations as training data \cite{song2019generative, song2020score}. Various related methods such as \cite{de2021diffusion}, \cite{dockhorn2021critically} propose improved ways of estimating the score, but are also ultimately generative and thus rely on initial samples. \cite{doucet2022score} applies score based diffusions as an approach to improve annealed importance sampling. It is not a generative approach, as it does not rely on initial samples from the target density, but it ultimately trains a neural network to estimate the score by minimizing relative entropy. In this paper, we desire a method to estimate the score without any initial samples from $P_{0}$ and without appealing to a complex secondary model such as a neural network. As a summary of what is to come, with knowledge of the functional form of the density $p_{0}$ and an application of stochastic calculus, we can analytically express how $p_{t}$ changes over time and express $p_{t}$ as an expectation of a known function over a known density. With some further analysis, the score $\nabla \log p_{t}(\theta)$ can also be expressed as the expectation of a function over a density, and thus the score can be computed by estimating this integral via a subsidiary Monte Carlo (MC) sampling problem. This replaces the need to estimate the score via a neural network with estimating the score via an MC average. Specifically, we will consider the forward flow SDE as the Ornstein–Uhlenbeck process with drift $d(\theta,t)=-\theta$ and constant diffusion $g(t) = \sqrt{2}$ \begin{align} d\theta_{t}&= -\theta_{t}dt+\sqrt{2}dW_{t}.
\end{align} The invariant measure is standard normal, and the solution conditional on $\theta_{0}$, whose law is $p_{t}(\theta_{t}|\theta_{0})$, can be expressed as \begin{align} \theta_{t}&= e^{-t}\theta_{0}+\sqrt{1-e^{-2t}}Z, \quad Z \sim N(0, I).\label{OU_soln} \end{align} Thus the density $p_{t}(\theta_{t})$ up to normalization can be computed by integrating $p_{0}(\theta_{0})p_{t}(\theta_{t}|\theta_{0})$ over $\theta_{0}$, and the score, which removes any dependence on normalizing constants, can be expressed as \begin{align} \nabla_{\theta_{t}} \log p_{t}(\theta_{t}) &= \frac{\int p_{0}(\theta_{0})\nabla_{\theta_{t}}p_{t}(\theta_{t}|\theta_{0})d\theta_{0}}{\int p_{0}(\theta_{0})p_{t}(\theta_{t}|\theta_{0})d\theta_{0}}.\label{score_ratio} \end{align} With some manipulations, we can express the numerator and denominator as expectations over normal random variables, and thus can approximate each via averaging the integrand at normally drawn points, i.e., an MC estimator. \section{Results} Simplifying equation (\ref{score_ratio}) (see the appendix for the full derivation), we can express the score as a ratio of two expectations, \begin{align} \nabla \log p_{t}(\theta)&=- \theta+\frac{e^{-t}}{\sqrt{1-e^{-2t}}}\frac{E[Ue^{f(\sqrt{1-e^{-2t}}U+e^{-t}\theta)}]}{E[e^{f(\sqrt{1-e^{-2t}}U+e^{-t}\theta)}]}\quad U \sim N(0,I).\label{expression_1} \end{align} Applying integration by parts (see the appendix), this expression can equivalently be written as \begin{align} \nabla \log p_{t}(\theta)&= -\theta+e^{-t}\frac{E[\nabla f(\sqrt{1-e^{-2t}}U+e^{-t}\theta)e^{f(\sqrt{1-e^{-2t}}U+e^{-t}\theta)}]}{E[e^{f(\sqrt{1-e^{-2t}}U+e^{-t}\theta)}]}\quad U \sim N(0, I).\label{expression_2} \end{align} Computing expressions (\ref{expression_1}) and (\ref{expression_2}) then amounts to computing expectations over normal random variables, and we will appeal to an MC estimator. Say we draw $K$ points $U_{1}, \cdots, U_{K}\sim N(0,I)$ from a standard normal and approximate the score in one of two ways. Define weights $w_{k}$, summing to 1, \begin{align} w_{k}&= \frac{e^{f(\sqrt{1-e^{-2t}}U_{k}+e^{-t}\theta)}}{\sum_{j=1}^{K} e^{f(\sqrt{1-e^{-2t}}U_{j}+e^{-t}\theta)}}, \end{align} as the relative heights of the exponential of $f$ evaluated at the sampled points. We can then approximate (\ref{expression_1}) and (\ref{expression_2}) as weighted averages using these weightings \begin{align} \hat{s}_{1}(\theta, t)&= -\theta + \frac{e^{-t}}{\sqrt{1-e^{-2t}}} \sum_{k=1}^{K}U_{k} w_{k}\\ \hat{s}_{2}(\theta, t)&= -\theta+e^{-t} \sum_{k=1}^{K} \nabla f(\sqrt{1-e^{-2t}}U_{k}+e^{-t}\theta)w_{k}.\label{s_2} \end{align} There are two perspectives on how to estimate the ratio of expectations that define (\ref{expression_1}) and (\ref{expression_2}). One could use a separate pool of samples to evaluate the numerator and denominator expectations and then take the ratio of these two estimates. However, this way we may be dividing by a potentially small number and have high variance as a result. The authors here propose using the same pool of samples to estimate both the numerator and denominator, resulting in the weights $w_{k}$, where the normalizing constant of the weights represents the denominator in (\ref{expression_1}) and (\ref{expression_2}). This has the advantage that the estimates will never become unbounded, since all estimates are weighted averages of well behaved values; however, proper analysis must be conducted on how such a sampling procedure affects the independence, bias, and variance of the estimators.
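To make the estimators concrete, the following sketch implements $\hat{s}_{1}$ and $\hat{s}_{2}$ in Python/NumPy. It is our illustration, not the reference implementation from the repository cited in the introduction; the subtraction of the maximum in the computation of the weights is a standard numerically stabilized softmax, which cancels in the normalization so that the weights agree exactly with $w_{k}$ as defined above, and all function names are our own choices.
\begin{verbatim}
import numpy as np

def mc_score_estimates(theta, t, f, grad_f, K, rng):
    # Monte Carlo estimates s_hat_1 and s_hat_2 of the score of p_t at theta,
    # using oracle access to f and grad_f.
    a = np.exp(-t)
    b = np.sqrt(1.0 - np.exp(-2.0 * t))
    U = rng.standard_normal((K, theta.size))   # U_k ~ N(0, I)
    pts = b * U + a * theta                    # sqrt(1-e^{-2t}) U_k + e^{-t} theta
    logits = np.array([f(p) for p in pts])     # f at the K perturbed points
    w = np.exp(logits - logits.max())          # stabilized weights w_k ...
    w /= w.sum()                               # ... normalized to sum to 1
    s1 = -theta + (a / b) * (w @ U)            # estimator s_hat_1
    grads = np.array([grad_f(p) for p in pts])
    s2 = -theta + a * (w @ grads)              # estimator s_hat_2
    return s1, s2
\end{verbatim}
Note that, as proposed above, the same pool $U_{1}, \dots, U_{K}$ is reused for both the numerator and the denominator of each ratio.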
For small $t$, say $t< 0.1$, we conjecture that $\hat{s}_{2}(\theta, t)$ is the better estimator, as it will have lower variance, since $\frac{e^{-t}}{\sqrt{1-e^{-2t}}}$ can grow large as $t \to 0$. Additionally, as $t \to 0$, $\nabla f(\sqrt{1-e^{-2t}}U+e^{-t}\theta) \to \nabla f(\theta)$, and the estimate $\hat{s}_{2}(\theta, t)$ approaches $-\theta+\nabla f(\theta)$ for small $t$, that is, the score of the target density. For larger $t$, it would seem $\hat{s}_{1}(\theta, t)$ will have lower variance, since $U_{k}$ is less variable than the gradient of the target $\nabla f(U_{k})$. Additionally, as $t \to \infty$ the weights $w_{k}$ approach a $\theta$ independent value due to the $e^{-t}\theta$ term, so the expectation is some finite vector while $\frac{e^{-t}}{\sqrt{1-e^{-2t}}} \to 0$; thus the score approaches $-\theta$, the score of a standard normal. The algorithm to generate a sample from $P_{0}$ is as follows (a code sketch of this loop is given below). Pick a terminal time $T>0$, step size $\delta>0$, and MC sample size $K \in \mathbb{N}$. Initialize $\theta_{T} \sim N(0,I)$ and $t = T$. While $t>0$, in each iteration draw $K$ values $U_{1},\cdots, U_{K} \sim N(0,I)$ and compute the weights $w_{k}$. For large times $t>0.1$, estimate the score as $\hat{s}(\theta_{t}, t)=\hat{s}_{1}(\theta_{t}, t)$; otherwise set $\hat{s}(\theta_{t}, t)=\hat{s}_{2}(\theta_{t}, t)$. Update the point as $ \theta_{t-\delta}=\theta_{t}-\delta(-\theta_{t}-2\hat{s}(\theta_{t}, t))+\sqrt{2\delta}Z,\quad Z \sim N(0, I)$ and decrease $t = t-\delta$. Finally, return $\theta_{0}$. One could propose more complex methods such as Stein's method \cite{chen2019stein},\cite{liu2016stein} or multi-modal sampling methods such as Simulated Tempering \cite{ge2018simulated} to estimate the quantities in (\ref{expression_1}) and (\ref{expression_2}), as these represent expectations over un-normalized densities (the denominator is the normalizing constant). However, estimating these values is itself a sub problem of sampling the un-normalized density $p_{0}$ in the first place; thus, if we employ sufficiently complex methods on the sub problems, we may as well have employed said methods on the original problem. Additionally, for large $t>>0$ and small $t \approx 0$ the simple estimators will have small variance; thus it is the middle region of $t$ where more involved estimation of the quantities (\ref{expression_1}) and (\ref{expression_2}) may be required to keep the estimates accurate. \section{Example} Here we demonstrate the performance of the algorithm on a series of example problems. These examples, while low dimensional, represent multi-modal targets; future work will consider higher dimensional implementations. First, we sample a one dimensional multi-modal target density with asymmetric modes, using the log likelihood function \begin{align*} f(\theta)&= 100\sum_{i=1}^{4}(\tanh(\theta+0.05-\mu_{i})-\tanh(\theta-0.05-\mu_{i})),\mu_{1}=-5, \mu_{2}=-1, \mu_{3}=3, \mu_{4}=4. \end{align*} In Figure \ref{fig:one_dim} we see the histogram of $n = 1000$ points run in parallel with terminal time $T = 2$, $K = 3000$ samples per estimate, and step size $\delta = 0.01$. This requires $K*(T/\delta) = 600,000$ evaluations of $f$ and $\nabla f$ per point, a rather large number of evaluations per sample, though the polynomial scaling may compare favorably with existing methods in high dimensions.
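Continuing the sketch above (again our illustration, with default parameters chosen to match the one dimensional experiment), the sampling loop described at the beginning of this section can be written as follows; for efficiency one would of course evaluate $\nabla f$ only when the estimator $\hat{s}_{2}$ is actually used.
\begin{verbatim}
def sample_backward(f, grad_f, d, T=2.0, delta=0.01, K=3000, seed=0):
    # One approximate sample from p_0 via the discretized backward flow SDE.
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(d)        # theta_T ~ N(0, I)
    t = T
    while t > 0:
        s1, s2 = mc_score_estimates(theta, t, f, grad_f, K, rng)
        s_hat = s1 if t > 0.1 else s2     # switch estimators as in the text
        Z = rng.standard_normal(d)
        theta = theta - delta * (-theta - 2.0 * s_hat) \
                + np.sqrt(2.0 * delta) * Z
        t -= delta
    return theta
\end{verbatim}
Each of the $T/\delta$ iterations queries the oracles at $K$ points, which reproduces the count of $K*(T/\delta)$ evaluations per sample reported above.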
Another typical problem to test multi-modal sampling is to sample the Himmelblau function \cite{feroz2013exploring} in two dimensions \begin{align*} f(\theta_{1}, \theta_{2})&= -(\theta_{1}^{2}+\theta_{2}-11)^{2}-(\theta_{1}+\theta_{2}^{2}-7)^{2} \end{align*} which has modes at $(3,2), (-2.81, 3.13), (-3.78, -3.28), (3.58, -1.85)$, all separated by areas of low probability and asymmetric in their relative heights. Sampling $n = 2000$ points in parallel with $T = 3$, $K = 1000$, $\delta = 0.01$, this algorithm requires $K*(T/\delta) = 300,000$ evaluations of $f$ and $\nabla f$ per point. To test whether each mode is sampled in the correct proportion, we compute the proportion of points within $\pm 0.5$ in each coordinate around each mode and normalize to sum to 1. We then evaluate the target pdf at $10,000$ points sampled uniformly around each mode as an estimate of the probability of the region around each mode and normalize to sum to 1. If each mode is sampled in the correct proportion, then these two vectors should be the same. The probability estimates of each mode's immediate vicinity are (0.831, 0.009, 0.002, 0.158) while the proportion of points near each mode is (0.741, 0.053, 0.005, 0.201). We note that the modes are not sampled in exactly the correct proportions, but the proportions are reasonably close for each mode, and each mode is visited; so, as an approximate sampling algorithm, this may suffice in certain situations. \begin{figure} \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width = \textwidth]{basic_one_dim} \caption{One Dimensional Sampling Histogram} \label{fig:one_dim} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width = \textwidth]{himmelbau_heatmap} \caption{Himmelblau Sampled Points} \label{fig:himmelblau} \end{subfigure} \caption{Samples produced for 1 and 2 dimensional sampling problems.} \end{figure} \section{Concluding remarks} This paper presents a score based sampling approach using Monte Carlo estimation of the score and knowledge of the functional form of the target density. This method does not rely on any initial samples from the target density, nor on a model such as a neural network to estimate the score function. Much work remains to be done on the analysis of such a method, specifically on high dimensional target densities of interest. The MC estimators may be of high variance, and theoretical guarantees on this variance must be provided. Furthermore, the estimates of the score will never be perfect; they will have some standard error around the true value. Thus, analysis of the backward flow SDE under discretization and approximate drift, such as the analysis in \cite{lee2022convergence}, will also be required to guarantee convergence to the correct target measure. \bibliographystyle{plain}
{ "timestamp": "2022-12-08T02:02:29", "yymm": "2212", "arxiv_id": "2212.03325", "language": "en", "url": "https://arxiv.org/abs/2212.03325", "abstract": "Score based approaches to sampling have shown much success as a generative algorithm to produce new samples from a target density given a pool of initial samples. In this work, we consider if we have no initial samples from the target density, but rather $0^{th}$ and $1^{st}$ order oracle access to the log likelihood. Such problems may arise in Bayesian posterior sampling, or in approximate minimization of non-convex functions. Using this knowledge alone, we propose a Monte Carlo method to estimate the score empirically as a particular expectation of a random variable. Using this estimator, we can then run a discrete version of the backward flow SDE to produce samples from the target density. This approach has the benefit of not relying on a pool of initial samples from the target density, and it does not rely on a neural network or other black box model to estimate the score.", "subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG); Methodology (stat.ME)", "title": "Proposal of a Score Based Approach to Sampling Using Monte Carlo Estimation of Score and Oracle Access to Target Density", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692305124306, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.7079585006598615 }
https://arxiv.org/abs/2004.12965
Some application examples of minimization based formulations of inverse problems and their regularization
In this paper we extend a recent idea of formulating and regularizing inverse problems as minimization problems, that is, without using a forward operator, thus avoiding explicit evaluation of a parameter-to-state map. We do so by rephrasing three application examples in this minimization form, namely (a) electrical impedance tomography with the complete electrode model, (b) identification of a nonlinear magnetic permeability from magnetic flux measurements, and (c) localization of sound sources from microphone array measurements. To establish convergence of the proposed regularization approach for these problems, we first of all extend the existing theory. In particular, we take advantage of the fact that observations are finite dimensional here, so that inversion of the noisy data can to some extent be done separately, using a right inverse of the observation operator. This new approach is actually applicable to a wide range of real world problems.
\section{Introduction} An inverse problem of recovering some quantity $x$ from data $y$ can be expressed as an operator equation $F(x)=y$ with a forward operator $F$. In practice, the quantity $x$ is usually contained in a mathematical model, e.g., a PDE or an ODE, involving also a state $u$, abstractly written as \begin{equation} \label{eq:model} E(x,u)=0, \end{equation} and the data $y$ is collected from observations of the state $u$ \begin{equation} \label{eq:observation} C(u)=y, \end{equation} where $E$ and $C$ are mappings acting on function spaces $$E: \widetilde{\mathcal{D}} \times V \to W, \qquad C: V \to Y$$ where $\widetilde{\mathcal{D}} \subseteq X$ and $X,Y,V,W$ are Banach spaces. In this setting, $F=C \circ S$ is a composite function that concatenates the operator $C$ with the parameter-to-state map $S:\mathcal{D} \to V$ defined by $E(x,S(x))=0, \forall x \in \mathcal{D}$. However, in order for $S$ to be well-defined, often restrictive assumptions on the parameter need to be made, i.e., the domain $\mathcal{D}$ of $S$ and $F$ will typically only be a strict subset of $\widetilde{\mathcal{D}}$. Thus our aim is to completely avoid the appearance of $S$, as has already been done by means of all-at-once formulations, i.e., by considering \eqref{eq:model} and \eqref{eq:observation} as a system of equations for $x$ and $u$, see, e.g., \cite{KKV14,Kal16}. An even more general approach to do so is to rewrite (\ref{eq:model}), (\ref{eq:observation}) as an equivalent minimization problem \begin{equation} \label{eq:minIP} (x, u) \in \argmin \{\mathcal{J} (x, u; y): (x, u) \in M_{\mathrm{ad}}(y)\} \end{equation} for some cost function $\mathcal{J}$ and some admissible set $M_{\mathrm{ad}}(y)$, see \cite{Kal18}. Since we are interested in ill-posed problems and only a noisy version $y^{\delta}$ of the exact data $y$ is available, we regularize this by considering \begin{equation}\label{eq:minIP_noise} \begin{split} (x_\alpha^\delta, u_\alpha^\delta) \in \argmin \{ T_\alpha(x, u; y^\delta) = \mathcal{J} (x, u; y^\delta) + \alpha \cdot \mathcal{R}(x, u): \qquad \qquad \\ (x, u)\in M_{\mathrm{ad}}^\delta(y^\delta) \}. \end{split} \end{equation} The regularization parameter $\alpha \in \mathbb{R}^m_+$ will be chosen according to the noise level $\delta$, the mapping $\mathcal{R}: X \times V \to \overline{\mathbb{R}}^m_+$ corresponds to regularization terms, and the set $M_{\mathrm{ad}}^\delta (y^\delta) \subset X \times V$ may contain additional constraints that can be used for stabilizing the problem in the sense of Ivanov regularization, see, e.g., \cite{KK18, IVT02, KRR16, LW13, NR14}. \medskip The aim of this paper is to apply this approach to three exemplary practical problems, namely \begin{itemize} \item electrical impedance tomography using the complete electrode model; \item determination of the nonlinear magnetic permeability from measurements of the magnetic flux; \item localization of sound sources from microphone array measurements. \end{itemize} What these and many other real world problems have in common is -- among other things -- finite dimensionality of the observation space. Therefore, inverting the observation operator (using a right inverse) is stable and to some extent allows one to uncouple data inversion from the actual reconstruction process.
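As a minimal numerical illustration of this data inversion idea (our sketch, anticipating the right inverse $C^{\mathrm{ri}}$ of Section \ref{sec:Preliminaries}; the dimensions are hypothetical): if $Y = \mathbb{R}^L$ and the observation operator is represented by a matrix $C$ of full row rank, then $C^T(CC^T)^{-1}$ is a bounded right inverse of $C$, and the state splits into a stably computable data dependent part plus an element of the kernel of $C$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, n = 4, 50                          # few observations of a state in R^n
C = rng.standard_normal((L, n))       # full row rank almost surely
C_ri = C.T @ np.linalg.inv(C @ C.T)   # right inverse: C @ C_ri = id on R^L
y = rng.standard_normal(L)            # (noisy) data
u_data = C_ri @ y                     # data dependent part of the state
assert np.allclose(C @ u_data, y)     # it reproduces the data exactly
# any state fitting the data is u = u_data + u_hat with C @ u_hat = 0
\end{verbatim}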
This approach, which we think is of interest also on its own and applicable to a wide range of other problems, is described in Section \ref{sec:Preliminaries}, where more concrete choices of the cost function $\mathcal{J}$ and the admissible set $M_{\mathrm{ad}}^\delta$ will be given. To do so, we require a somewhat extended version of the abstract background from \cite[Section 3.1]{Kal18}, which we therefore provide in Section \ref{sec:abstractanalysis}. The main Section \ref{sec:ex} of this paper deals with the three above mentioned application examples in the respective subsections \ref{sec:CEM-EIT}, \ref{sec:MPP}, \ref{sec:SoundSources}. \section{Convergence analysis} \subsection{The abstract convergence theory revisited}\label{sec:abstractanalysis} Like in \cite{Kal18}, we start with a completely general setting, which we slightly extend for our purpose (cf. Remark \ref{rem:adjust_ass:Kal18}). To this end, we first of all very briefly recall and summarize the assumptions and results from \cite[Section 3.1]{Kal18} on existence of minimizers $(x^\delta, u^\delta) := \left( x^\delta_{\alpha (\delta, y^\delta)}, u^\delta_{\alpha (\delta, y^\delta)} \right)$ of (\ref{eq:minIP_noise}) as well as their stability and convergence to a minimizer $(x^\dagger, u^\dagger)$ of (\ref{eq:minIP}). \begin{assumption} \label{ass:Kal18} {(\cite[Assumption 3.7]{Kal18})} Let a topology $\mathcal{T}$ and a norm $\|\cdot\|_B$ on $X \times V$, as well as $\bar \delta >0$, exist such that for the family of noisy data $(y^\delta)_{\delta \in (0,\bar \delta]}$ and any sequence $(y_n)_{n \in \mathbb{N}} \subset Y$ with $y_n \to y^\delta$ in $Y$ and for all $$ (\tilde x^\delta, \tilde u^\delta) \in \argmin \left\{ \mathcal{J} (x, u; y^\delta) + \alpha (\delta, y^\delta) \cdot \mathcal{R}(x, u): (x, u) \in M_{\mathrm{ad}}^\delta(y^\delta) \right\} $$ (part of the assumptions below indeed guarantees that this set of minimizers is nonempty) we have \begin{enumerate}[label= (\alph*)] \item $\forall \delta \in (0, \bar \delta], \exists n_0 \in \mathbb{N}, \forall n \ge n_0: (x^\dagger, u^\dagger) \in M_{\mathrm{ad}}^\delta (y^\delta) \cap M_{\mathrm{ad}}^\delta (y_n)$; \item $\forall j \in \{ 1, \dots, m\}: \left( \mathcal{R}_j (x^\dagger, u^\dagger) < \infty \text{\; and \;} \exists \underline{r} \in \mathbb{R}: \mathcal{R}_j \ge \underline{r} \right)$; \item $\forall c \in \mathbb{R}, \forall \alpha \in \mathbb{R}_+^m$, the sets $\{ (x, u) \in \bigcup_{\delta \in (0, \bar \delta]} M_{\mathrm{ad}}^\delta (y^\delta): T_\alpha (x, u; y) \le c \}$ and $\{ (x, u) \in M_{\mathrm{ad}}^\delta (y^\delta): T_\alpha (x, u; y^\delta) \le c \}$ are $\mathcal{T}$ relatively compact for all $\delta \in (0, \bar \delta]$; \item $\forall \alpha \in \mathbb{R}_+^m, \forall \delta \in (0, \bar \delta], \exists C_\alpha \ge T_\alpha (x^\dagger, u^\dagger; y^\delta), \exists \tilde{C}_\alpha > 0, \forall (x_n, u_n)_{n \in \mathbb{N}} \subset X \times V, (x_n, u_n) \in M_{\mathrm{ad}}^\delta (y_n):$ $$ \left( \forall n \in \mathbb{N}: T_\alpha (x_n, u_n; y^\delta) \le C_\alpha \right) \Rightarrow \left( \forall n \in \mathbb{N}: \| (x_n, u_n) \|_B \le \tilde{C}_\alpha \right); $$ \item $\forall \delta \in (0, \bar \delta]: M_{\mathrm{ad}}^\delta (y^\delta)$ is $\mathcal{T}$ closed; \item $\forall (\delta_n)_{n \in \mathbb{N}}, (z_n)_{n \in \mathbb{N}} \subset Y, (x_n, u_n)_{n \in \mathbb{N}} \subset X \times V$ with $(x_n, u_n) \in M_{\mathrm{ad}}^{\delta_n} (z_n)$: $$\left( \delta_n \to 0, \; z_n \to y, \; (x_n, u_n) \xrightarrow{\mathcal{T}} (x_0,
u_0) \right) \Rightarrow \Big( (x_0, u_0) \in M_{\mathrm{ad}} (y) \Big);$$ \item $\forall j \in \{1,\dots,m\}: \mathcal{R}_j$ is $\mathcal{T}$ lower semicontinuous; \item $\forall \delta \in (0, \bar \delta], \mathcal{J} (\cdot, \cdot; y^\delta) $ and $\mathcal{J} (\cdot, \cdot; y)$ are $\mathcal{T}$ lower semicontinuous; \item $\forall \delta \in (0, \bar \delta]: \sup_{(x, u) \in \cup_{m\in\mathbb{N}} M_{\mathrm{ad}}^\delta (y_m)} \left| \mathcal{J} (x, u; y_n) - \mathcal{J} (x, u; y^\delta) \right| \to 0$ as $n \to \infty$; \item $\limsup_{\delta \to 0} \sup \left\{ \mathcal{J} (x, u; y) - \mathcal{J} (x, u; y^\delta): (x, u) \in \bigcup_{d \in (0, \bar \delta]} M_{\mathrm{ad}}^d (y^d) \right\} \le 0$ if $y^\delta \to y$ in $Y$ as $\delta \to 0$; \item If $y^\delta \to y$ in $Y$ as $\delta \to 0$ then $\forall j \in \{1,\dots,m\}:$\\ $\limsup_{\delta \to 0} \frac{1}{\alpha_j (\delta, y^\delta)} \left( \mathcal{J} (x^\dagger, u^\dagger; y^\delta) - \mathcal{J} (x^\dagger, u^\dagger; y) \right) < \infty$, \\ $\limsup_{\delta \to 0} \frac{1}{\alpha_j (\delta, y^\delta)} \left( \mathcal{J} (x^\dagger, u^\dagger; y) - \mathcal{J} (\tilde x^\delta, \tilde u^\delta; y^\delta) \right) < \infty$, \\ and $\alpha (\delta, y^\delta) \to 0$ as $\delta \to 0$. \end{enumerate} \end{assumption} \begin{remark} \label{rem:adjust_ass:Kal18} A careful check of the proofs of Proposition 3.4 and Theorem 3.6 in \cite{Kal18} shows that the sets over which the suprema in Assumption \ref{ass:Kal18}(i),(j) are taken can be shrunk under the conditions \begin{equation} \label{eq:adjust_ass:Kal18_J(x,u,y)_bounded} \mathcal{J} (x^\dagger, u^\dagger; y) < \infty \end{equation} and \begin{equation} \label{eq:adjust_ass:Kal18_J(x,u,.)_continuous} \mathcal{J} (x^\dagger, u^\dagger; \cdot) \text{\; is continuous on a suitable subset of \;} Y. \end{equation} Assuming without loss of generality that $\max\{\alpha_j: j=1,\dots,m\} \le 1$, we replace those sets as follows. \begin{itemize} \item For Proposition 3.4 in \cite{Kal18}, the minimizers $(x^\delta_{\alpha n}, u^\delta_{\alpha n}) \in \argmin \{ \mathcal{J} (x, u; y_n) + \alpha (\delta, y_n) \cdot \mathcal{R}(x, u): (x,u) \in M_{\mathrm{ad}}^\delta (y_n)\}$ satisfy \begin{equation*} \begin{split} \mathcal{J} (x^\delta_{\alpha n}, u^\delta_{\alpha n}; y_n) &\le \mathcal{J} (x^\dagger, u^\dagger; y_n) + \alpha \cdot \mathcal{R} (x^\dagger, u^\dagger) - \alpha \cdot \mathcal{R} (x^\delta_{\alpha n}, u^\delta_{\alpha n}) \\ &\le \mathcal{J} (x^\dagger, u^\dagger; y_n) - \mathcal{J} (x^\dagger, u^\dagger; y^\delta) + \mathcal{J} (x^\dagger, u^\dagger; y^\delta) \\ & \qquad + \alpha \cdot \mathcal{R} (x^\dagger, u^\dagger) - \alpha \cdot \mathcal{R} (x^\delta_{\alpha n}, u^\delta_{\alpha n}) \\ & \le 1 + \mathcal{J} (x^\dagger, u^\dagger; y^\delta) + \sum_{j: \mathcal{R}_j (x^\dagger, u^\dagger) \ge 0} \mathcal{R}_j (x^\dagger, u^\dagger) + m \max \{ 0, -\underline{r} \} \\ &=: c (x^\dagger, u^\dagger; y^\delta), \end{split} \end{equation*} where $\mathcal{J} (x^\dagger, u^\dagger; y_n) - \mathcal{J} (x^\dagger, u^\dagger; y^\delta) \le 1$ for $n$ large enough due to the continuity of $\mathcal{J} (x^\dagger, u^\dagger; \cdot)$ at $y^\delta$. So the set $\cup_{m\in\mathbb{N}} M_{\mathrm{ad}}^\delta (y_m)$ in Assumption \ref{ass:Kal18}(i) can be replaced by \begin{equation} \label{eq:adjust_ass:Ka18_set(i)} \bigcup_{m\in\mathbb{N}} \left( M_{\mathrm{ad}}^\delta (y_m) \cap \{ (x,u): \mathcal{J} (x,u;y_m) \le c (x^\dagger, u^\dagger; y^\delta) \} \right).
\end{equation} \item Likewise, for Theorem 3.6 in \cite{Kal18}, the minimizers $(\tilde x^\delta, \tilde u^\delta) \in \argmin \{ \mathcal{J} (x, u; y^\delta) + \alpha (\delta, y^\delta) \cdot \mathcal{R}(x, u): (x, u) \in M_{\mathrm{ad}}^\delta(y^\delta) \}$ satisfy \begin{equation*} \begin{split} \mathcal{J} (\tilde x^\delta, \tilde u^\delta; y^\delta) \le c (x^\dagger, u^\dagger; y), \end{split} \end{equation*} since $\mathcal{J} (x^\dagger, u^\dagger; y^\delta) - \mathcal{J} (x^\dagger, u^\dagger; y) \le 1$ for $\delta$ small enough due to the continuity of $\mathcal{J} (x^\dagger, u^\dagger; \cdot)$ at $y$. So the set $\cup_{d \in (0, \bar \delta]} M_{\mathrm{ad}}^d (y^d)$ in Assumption \ref{ass:Kal18}(j) can be replaced by \begin{equation} \label{eq:adjust_ass:Ka18_set(j)} \bigcup_{d \in (0, \bar \delta]} \left( M_{\mathrm{ad}}^d (y^d) \cap \{ (x,u): \mathcal{J} (x,u;y^d) \le c (x^\dagger, u^\dagger; y) \} \right). \end{equation} \end{itemize} \end{remark} \subsection{A minimization based approach using data inversion} \label{sec:Preliminaries} In this section, we will introduce an approach to partially uncouple data dependence from reconstruction within the minimization form (\ref{eq:minIP}), (\ref{eq:minIP_noise}) by using a right inverse operator $C^{\mathrm{ri}}$ of $C$ from (\ref{eq:observation}). To this end, we assume that the observation operator $C$ is \emph{linear}, as is the case in many practical problems. The linear operator $C^{\mathrm{ri}}$ is supposed to be a right inverse of $C$ in the sense that \begin{equation} \label{eq:condition_Cri} (CC^{\mathrm{ri}})|_{{\mathrm{Im}} (C)} = {\mathrm{id}}_{{\mathrm{Im}} (C)} \end{equation} where ${\mathrm{Im}}(C) := \{C(u): u \in V\}$. If $V$, $Y$ are Hilbert spaces, one of the possible options to define $C^{\mathrm{ri}}$ is via the Moore-Penrose inverse operator $C^\dagger$, see \cite{EHN96}, with the domain $\mathcal{D} (C^\dagger) = {\mathrm{Im}} (C) \oplus {\mathrm{Im}} (C)^\bot \subset Y$. It is known that $C^\dagger$ is bounded iff ${\mathrm{Im}} (C)$ is closed, which is also equivalent to $\mathcal{D} (C^\dagger) = Y$. Thus this approach is always applicable in case of a finite dimensional observation space $Y$. Now, assuming that $C^{\mathrm{ri}}$ is well-defined and bounded on the entire space $Y$, by writing $u = C^{\mathrm{ri}} (y) + \hat u$ with $C(\hat u) = 0$, $y \in {\mathrm{Im}}(C)$, we rephrase the problem (\ref{eq:model}), (\ref{eq:observation}) as \begin{equation} \label{eq:model_observation_mixed} \left\{ \begin{array}{l} E(x, \hat u + C^{\mathrm{ri}} (y)) = 0, \\ (x, \hat u) \in X \times {\mathrm{Ker}}(C), \end{array} \right. \end{equation} with ${\mathrm{Ker}}(C) := \{ \hat u \in V: C(\hat u) = 0\}$. Thus we have decomposed the state into \begin{itemize} \item[(a)] a part $C^{\mathrm{ri}} (y)$, which depends on the data and is therefore subject to noise, which, however, propagates into this part in a stable way; \item[(b)] a part $\hat u \in {\mathrm{Ker}}(C)$, which is data independent and to which -- along with $x$ -- minimization will be applied in order to enforce the model equation to hold (approximately).
\end{itemize} Assuming that the a priori information $\widetilde{\mathcal{R}} (x^\dagger, \hat u^\dagger) \le \rho$ is known, with some radius $\rho>0$ and some functional $\widetilde{\mathcal{R}}: X \times V \to \overline{\mathbb{R}}$, we consider \begin{equation} \label{eq:M_ad} M_{\mathrm{ad}} (y) = \{ (x, \hat u) \in X \times {\mathrm{Ker}}(C): \widetilde{\mathcal{R}} (x, \hat u) \le \rho \} \end{equation} and \begin{equation} \label{eq:J} \mathcal{J} (x, \hat u; y) = \mathcal{Q}_E (x, \hat u; y), \end{equation} where $\mathcal{Q}_E: X \times V \times Y \to \overline{\mathbb{R}}$ satisfies \begin{equation} \label{eq:conditionQ} \begin{split} &\forall (x, \hat u, y) \in X \times V \times Y:\\ &\qquad \qquad \mathcal{Q}_E (x, \hat u; y) \ge 0 \text{\quad and \quad} \\ &\qquad \qquad \left( \forall \hat{u}\in {\mathrm{Ker}}(C)\, : E(x, \hat u + C^{\mathrm{ri}} (y)) = 0 \Leftrightarrow \mathcal{Q}_E (x, \hat u; y) = 0 \right). \end{split} \end{equation} With noisy data $y^\delta$ being available, we use the admissible set \begin{equation} \label{eq:M_ad^delta} \begin{split} M_{\mathrm{ad}}^\delta (y^\delta) &= \left\{ (x, \hat u) \in X \times V: \; \mathcal{S} (C(\hat u+C^{\mathrm{ri}}(y^\delta)), y^\delta) \le \tau \delta, \; \widetilde{\mathcal{R}} (x, \hat u) \le \rho \right\} \end{split} \end{equation} with some $\tau>1$ and a discrepancy measure $\mathcal{S}: Y \times Y \to \overline{\mathbb{R}}$ such that \begin{equation} \label{eq:conditionS_definiteness} \forall y_1, y_2 \in Y: \qquad \mathcal{S} (y_1, y_2) \ge 0 \text{\quad and \quad} (\mathcal{S} (y_1, y_2) = 0 \Leftrightarrow y_1=y_2), \end{equation} \begin{equation} \label{eq:conditionS_delta} \mathcal{S} (y, y^\delta) \le \delta, \end{equation} and \begin{equation} \label{eq:conditionS_delta2} \mathcal{S} (CC^{\mathrm{ri}} (y^\delta), y^\delta) < \tau \delta. \end{equation} This framework also includes discrepancy measures that are not necessarily defined by a metric or norm, for example, the Kullback-Leibler divergence or the Bregman distance with respect to some proper convex functional. In the special case of $\mathcal{S}$ being translation invariant, i.e., \[ \forall y_1,y_2,y_3\in Y: \ \mathcal{S}(y_1,y_2)=\mathcal{S}(y_1+y_3,y_2+y_3) \] and if ${\mathrm{Im}}(C) \equiv Y$, i.e., $CC^{\mathrm{ri}} \equiv {\mathrm{id}}_Y$, the admissible set simplifies to \begin{equation} \label{eq:M_ad^delta_reduced} \begin{split} M_{\mathrm{ad}}^\delta &= \left\{ (x, \hat u) \in X \times V: \; \mathcal{S} (C(\hat u), 0) \le \tau \delta, \; \widetilde{\mathcal{R}} (x, \hat u) \le \rho \right\}. \end{split} \end{equation} To guarantee well-definedness, stability and convergence of the regularized minimization problems defined by \eqref{eq:minIP_noise} with \eqref{eq:J}, \eqref{eq:M_ad^delta}, we make some assumptions that have largely been used also in previous publications, see, e.g., \cite{Kal18, HKPS07, HW13, SGG+09}.
\begin{assumption} \label{ass:Maao} Let a topology $\mathcal{T}$ and a norm $\| \cdot \|_B$ on $X \times V$ exist such that \begin{enumerate}[label= (\roman*)] \item $\widetilde{\mathcal{R}} (x^\dagger, \hat u^\dagger) \le \rho$; \item $\mathcal{R}_j (x^\dagger, \hat u^\dagger) < \infty \text{\; and \;} \exists \underline{r} \in \mathbb{R}: \mathcal{R}_j \ge \underline{r}$ \; for all $j \in \{1, \dots, m\}$; \item for all $z_1, z_2 \in Y$ and $c>0$, the sublevel set \begin{equation*} \begin{split} &L_c = \Big\{ (x, \hat u) \in X \times V: \\ & \quad \max \{ \mathcal{Q}_E (x, \hat u; z_1), \mathcal{R}_1 (x, \hat u), \dots, \mathcal{R}_m (x, \hat u), \widetilde{\mathcal{R}} (x, \hat u), \mathcal{S} (C(\hat u+C^{\mathrm{ri}} (z_2)), z_2) \} \le c \Big\} \end{split} \end{equation*} is $\mathcal{T}$ compact and $\| \cdot \|_B$ bounded; \item for all $z \in Y$, the maps $(x, \hat u) \mapsto \mathcal{Q}_E (x, \hat u; z)$, $(x, \hat u) \mapsto \mathcal{S} (C(\hat u+C^{\mathrm{ri}} (z)), z)$, $\mathcal{R}$ and $\widetilde{\mathcal{R}}$ are $\mathcal{T}$ lower semicontinuous; \item the family of mappings $\big( \zeta \mapsto \mathcal{S} (z+CC^{\mathrm{ri}} (\zeta), \zeta) \big)_{z \in Z}$ is uniformly continuous on $Z = \{ C(\hat u): \exists x \in X: \widetilde{\mathcal{R}} (x, \hat u) \le \rho \}$, i.e., $$\lim_{\zeta \to \zeta_0} \sup_{z \in Z} |\mathcal{S} (z+CC^{\mathrm{ri}} (\zeta), \zeta) - \mathcal{S} (z+CC^{\mathrm{ri}} (\zeta_0), \zeta_0)| = 0, \quad \forall \zeta_0 \in Y.$$ \end{enumerate} \end{assumption} \begin{remark} \label{rem:ass:Maao(v)} If the admissible set (\ref{eq:M_ad^delta_reduced}) is used, regardless of whether or not ${\mathrm{Im}}(C) \equiv Y$ holds, condition (\ref{eq:conditionS_delta2}) and Assumption \ref{ass:Maao}(v) can be dropped. \end{remark} In contrast to \cite{Kal18}, we here have to deal with a model misfit functional $\mathcal{Q}_E$ that depends on the data, and we therefore make some continuity assumptions concerning this dependence.
\begin{assumption}\label{ass:Maao_newcondition} For every solution $(x^\dagger, \hat u^\dagger)$ of (\ref{eq:model_observation_mixed}) with the exact data $y$, \begin{enumerate} [label = (\roman*)] \item there exists a constant $\bar \delta >0$ such that $$\lim_{z_2 \to z_1} \sup_{(x, \hat u) \in M} \left| \mathcal{Q}_E (x, \hat u; z_2) - \mathcal{Q}_E (x, \hat u; z_1) \right| = 0, \quad \forall z_1 \in Z,$$ where \begin{itemize} \item $Z$ is an open set that contains $\{ z \in Y: \mathcal{S} (y,z) \le \bar \delta \}$, for example, $Z = \cup_{\{ z \in Y: \mathcal{S} (y,z) \le \bar \delta \}} B_{\|\cdot\|_{Y}} (z,1)$, \item $M = \cup_{z \in Z } (M_{\mathrm{ad}}^{\bar \delta} (z) \cap M_z)$ with $M_z = \{ (x, \hat u): \mathcal{Q}_E (x, \hat u; z) \le c(x^\dagger, \hat u^\dagger; y) \}$ and $c(x^\dagger, \hat u^\dagger; y)$ as in Remark \ref{rem:adjust_ass:Kal18}; \end{itemize} \item there exists a nondecreasing function $\gamma: [0,\infty] \to [0,\infty]$ such that $$\mathcal{Q}_E (x^\dagger, \hat u^\dagger; z) - \mathcal{Q}_E (x^\dagger, \hat u^\dagger; y) \le \gamma (\mathcal{S} (y, z)), \quad \forall z \in Y.$$ \end{enumerate} \end{assumption} \begin{remark}\label{rem:Ass3} In the case that $\mathcal{Q}_E (x, \hat u; z)$ and $\mathcal{S}$ are defined by norms, \[ \mathcal{Q}_E(x, \hat u; z)=\sum_{i=1}^I \mathcal{Q}_i(x, \hat u; z)=\frac12\sum_{i=1}^I\|D_i(x, \hat u)C^{\mathrm{ri}}z+b_i(x, \hat u)\|_W^2, \] with linear operators $D_i(x, \hat u):V\to W$, as well as $\mathcal{S}(z_1,z_2)=\|z_1-z_2\|_Y$, as in the three examples of Section \ref{sec:ex}, we can easily verify Assumption \ref{ass:Maao_newcondition} with $\gamma(t)=c\cdot(1+t)\cdot t$ for some constant $c$, by using the estimate \[ \begin{split} &|\mathcal{Q}_E(x, \hat u; z_1)-\mathcal{Q}_E(x, \hat u; z_2)|\\ &\quad = \left|\sum_{i=1}^I (\sqrt{\mathcal{Q}_i(x, \hat u; z_1)}+\sqrt{\mathcal{Q}_i(x, \hat u; z_2)}) (\sqrt{\mathcal{Q}_i(x, \hat u; z_1)}-\sqrt{\mathcal{Q}_i(x, \hat u; z_2)})\right| \\ &\quad\leq\tfrac{1}{\sqrt{2}}\sum_{i=1}^I (2\sqrt{\mathcal{Q}_E(x, \hat u; z_2)}+\tfrac{1}{\sqrt{2}}\|D_i(x, \hat u)C^{\mathrm{ri}}(z_1-z_2)\|_W) \|D_i(x, \hat u)C^{\mathrm{ri}}(z_1-z_2)\|_W\\ &\quad\leq\sum_{i=1}^I (\sqrt{2}\sqrt{\mathcal{Q}_E(x, \hat u; z_2)}+\tfrac12 \|D_i(x, \hat u)C^{\mathrm{ri}}\|_{Y\to W} \mathcal{S} (z_1, z_2))\|D_i(x, \hat u)C^{\mathrm{ri}}\|_{Y\to W} \mathcal{S} (z_1, z_2) \,. \end{split} \] \end{remark} Under these assumptions we obtain the following result. \begin{theorem} \label{the:main_theorem} If Assumption \ref{ass:Maao}, Assumption \ref{ass:Maao_newcondition} and the conditions (\ref{eq:conditionQ}), (\ref{eq:conditionS_definiteness}), (\ref{eq:conditionS_delta}), (\ref{eq:conditionS_delta2}) are satisfied, then we achieve well-definedness, stability and convergence as follows. For any family of noisy data $(y^\delta)_{\delta \in (0,\bar \delta]}$, \begin{enumerate}[label = (\alph*)] \item $\forall \delta \in (0, \bar \delta]$, $\forall \alpha \in \mathbb{R}_+^m$, a minimizer of (\ref{eq:minIP_noise}) with (\ref{eq:J}), (\ref{eq:M_ad^delta}) exists; \item for each $\delta \in (0, \bar \delta]$, for any sequence $(y_n)_{n \in \mathbb{N}} \subset Y$ with $y_n \to y^\delta$ in $Y$ as $n\to \infty$, the sequence of corresponding minimizers is $\| \cdot \|_B$ bounded.
\item If, additionally, the regularization parameter choice satisfies $$ \alpha (\delta, y^\delta) \to 0 \text{\quad and \quad} \frac{\gamma(\delta)}{\alpha_j (\delta, y^\delta)} \le c, \forall j \in \{1,\dots,m\} \text{\qquad as \;} \delta \to 0,$$ for some $c \in \mathbb{R}$, then, as $\delta \to 0$, $y^\delta \to y$, the family of minimizers $(x^\delta_{\alpha (\delta, y^\delta)}, \hat u^\delta_{\alpha (\delta, y^\delta)})$ of (\ref{eq:minIP_noise}) with (\ref{eq:J}), (\ref{eq:M_ad^delta}) converges $\mathcal{T}$ subsequentially to a solution of the inverse problem with exact data, that is, it has a $\mathcal{T}$ convergent subsequence and the limit $(x^\dagger, \hat u^\dagger)$ forms a solution $(x^\dagger, \hat u^\dagger+C^{\mathrm{ri}}y)$ of (\ref{eq:model}), (\ref{eq:observation}). \end{enumerate} \end{theorem} \begin{remark}\label{rem:uniqueness} In Theorem \ref{the:main_theorem}, if the solution $(x^\dagger, \hat u^\dagger)$ is unique then $(x^\delta_{\alpha (\delta, y^\delta)}, \hat u^\delta_{\alpha (\delta, y^\delta)}) \xrightarrow{\mathcal{T}} (x^\dagger, \hat u^\dagger)$. However, uniqueness is unlikely to hold with finite dimensional data. \end{remark} \begin{proof} By (\ref{eq:conditionQ}), we see that $(x^\dagger, \hat u^\dagger)$ is a solution to (\ref{eq:model_observation_mixed}) if and only if it is a solution to (\ref{eq:minIP}). The rest of the proof consists of checking the items of Assumption \ref{ass:Kal18} where $u$ is replaced by $\hat u$. \begin{itemize} \item Assumption \ref{ass:Kal18}(a) follows from Assumption \ref{ass:Maao}(i) and (\ref{eq:conditionS_delta2}) which, due to the fact that $C\hat u^\dagger=0$, implies \[ \mathcal{S} (C(\hat u^\dagger+C^{\mathrm{ri}} (y^\delta)), y^\delta) =\mathcal{S} (CC^{\mathrm{ri}} (y^\delta), y^\delta)<\tau\delta \] and therefore \[ \begin{split} &\mathcal{S} (C(\hat u^\dagger+C^{\mathrm{ri}} (y_n)), y_n)\\ &\quad= \mathcal{S} (C(\hat u^\dagger+C^{\mathrm{ri}} (y^\delta)), y^\delta) +\Bigl[\mathcal{S} (C(\hat u^\dagger+C^{\mathrm{ri}} (y_n)), y_n)-\mathcal{S} (C(\hat u^\dagger+C^{\mathrm{ri}} (y^\delta)), y^\delta)\Bigr]\\ &\quad\leq\tau\delta \end{split} \] for all $n$ large enough, since the first term is strictly smaller than $\tau\delta$ by \eqref{eq:conditionS_delta2} and the term in brackets tends to zero by Assumption \ref{ass:Maao}(v). \item Assumption \ref{ass:Kal18}(b) is exactly Assumption \ref{ass:Maao}(ii). \item Assumption \ref{ass:Kal18}(c),(d) follow from the $\mathcal{T}$ compactness and $\| \cdot \|_B$ boundedness of $L_c$ as in Assumption \ref{ass:Maao}(iii). \item Assumption \ref{ass:Kal18}(e) follows from the $\mathcal{T}$ lower semicontinuity of $\widetilde{\mathcal{R}}$ and of $(x, \hat u) \mapsto \mathcal{S} (C(\hat u+C^{\mathrm{ri}}(z)), z), \forall z \in Y$ according to Assumption \ref{ass:Maao}(iv).
\item Assumption \ref{ass:Kal18}(f) can be obtained by using the fact that $y\in {\mathrm{Im}}(C)$ and $(x_n,\hat{u}_n)\in M_{\mathrm{ad}}^{\delta_n} (z_n)$ defined by \eqref{eq:M_ad^delta} with $z_n$ in place of $y^\delta$ and $\delta_n$ in place of $\delta$, as well as Assumption \ref{ass:Maao}(iv),(v) to get \begin{equation*} \begin{split} \mathcal{S} (C(\hat u_0)+y, y) &= \mathcal{S} (C(\hat u_0+C^{\mathrm{ri}}(y)), y) \le \liminf_{n \to \infty} \mathcal{S} (C(\hat u_n+C^{\mathrm{ri}}(y)), y ) \\ &= \liminf_{n \to \infty} \big( \mathcal{S} (C(\hat u_n+C^{\mathrm{ri}}(z_n)), z_n) \\ & \qquad \qquad \quad + \mathcal{S} (C(\hat u_n+C^{\mathrm{ri}}(y)), y) - \mathcal{S} (C(\hat u_n+C^{\mathrm{ri}}(z_n)), z_n) \big) \\ & \le \limsup_{n \to \infty} \big( \tau \delta_n \ + \mathcal{S} (C(\hat u_n+C^{\mathrm{ri}}(y)), y ) - \mathcal{S} (C(\hat u_n+C^{\mathrm{ri}}(z_n)), z_n) \big) \\ & \le 0\,, \end{split} \end{equation*} which implies $C(\hat u_0) =0$ by definiteness of $\mathcal{S}$ according to (\ref{eq:conditionS_definiteness}). \item Assumption \ref{ass:Kal18}(g),(h) follow directly from Assumption \ref{ass:Maao}(iv). \item Assumption \ref{ass:Kal18}(i),(j) adjusted by Remark \ref{rem:adjust_ass:Kal18} follow from Assumption \ref{ass:Maao_newcondition}(i), noting that $\mathcal{Q}_E (x^\dagger, \hat u^\dagger; y) = 0$, $ (x^\dagger, \hat u^\dagger) \in M$ and ``the suitable subset of $Y$'' in (\ref{eq:adjust_ass:Kal18_J(x,u,.)_continuous}) is $Z$. \item The last one, Assumption \ref{ass:Kal18}(k), is verified by Assumption \ref{ass:Maao_newcondition}(ii) and (\ref{eq:conditionS_definiteness}), (\ref{eq:conditionS_delta}) as follows: \begin{equation*} \begin{split} \frac{\mathcal{J} (x^\dagger, \hat u^\dagger; y^\delta) - \mathcal{J} (x^\dagger, \hat u^\dagger; y)}{\alpha_j (\delta, y^\delta)} &= \frac{\mathcal{Q} (x^\dagger, \hat u^\dagger; y^\delta) - \mathcal{Q} (x^\dagger, \hat u^\dagger; y)}{\alpha_j (\delta, y^\delta)} \\ & \le \frac{\gamma(\mathcal{S} (y, y^\delta))}{\alpha_j (\delta, y^\delta)} \le \frac{\gamma(\delta)}{\alpha_j (\delta, y^\delta)} \le c, \end{split} \end{equation*} and \begin{equation*} \begin{split} \frac{ \mathcal{J} (x^\dagger, \hat u^\dagger; y) - \mathcal{J} (x^\delta, \hat u^\delta; y^\delta)}{\alpha_j (\delta, y^\delta)} & = \frac{0 - \mathcal{Q} (x^\delta, \hat u^\delta; y^\delta)}{\alpha_j (\delta, y^\delta)} \le 0. \end{split} \end{equation*} \end{itemize} \end{proof} \section{Three application examples}\label{sec:ex} Following up on a few briefly sketched examples in \cite{Kal18}, we here provide further evidence of the usefulness of minimization based formulation and regularization for real world problems. To enable easy applicability of iterative minimization methods, we aim at working in Hilbert spaces $X$, $V$ for the design variables $x,u$. Also differentiability of $\mathcal{J}^\delta$ is an asset in this sense and holds for the examples below. On the constraints side, we employ pointwise bounds, which on the one hand can be implemented very efficiently (see, e.g., \cite{comp_minIP}) and on the other hand are practically relevant in view of known a priori bounds on the searched-for quantities. Moreover, in the spirit of the Kohn-Vogelius functional, we strive for first order least squares formulations of the PDE models. Another common feature of these examples, which is actually very characteristic of many real world applications, is the finite dimensionality of the data space, which enables application of the data inversion strategy from Section \ref{sec:Preliminaries}.
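Before discussing the examples in detail, we record a concrete admissible parameter choice as an illustration. In all three examples below, $\mathcal{Q}_E$ and $\mathcal{S}$ are of the norm-based form of Remark \ref{rem:Ass3}, so that Assumption \ref{ass:Maao_newcondition} can be verified with $\gamma(t)=c\cdot(1+t)\cdot t$. A simple choice fulfilling the requirements of Theorem \ref{the:main_theorem}(c) is then, for instance,
\[
\alpha_j(\delta, y^\delta) = \delta, \quad j \in \{1,\dots,m\}, \qquad \text{since} \qquad \frac{\gamma(\delta)}{\alpha_j(\delta, y^\delta)} = c\cdot(1+\delta) \le c\cdot(1+\bar\delta) \quad \text{and} \quad \alpha(\delta, y^\delta) \to 0 \text{\; as \;} \delta \to 0.
\]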
While in the examples of Sections \ref{sec:CEM-EIT}, \ref{sec:MPP}, the same PDE model comes with several different excitations, each of which leads to one of $I$ measurements (or measurement sets), there is only a single experiment carried out in the example of Section \ref{sec:SoundSources}, and the measurement vector consists of observations at $L$ spatial points. \subsection{Electrical impedance tomography with the complete electrode model} \label{sec:CEM-EIT} Electrical impedance tomography (EIT) is a by now well-established and well-researched imaging technology (see, e.g., the review \cite{Borcea2002} and the references therein), which seeks to recover the spatially varying electrical conductivity in the interior of an inhomogeneous object by means of low-frequency voltage-current measurements on its surface. A minimization based formulation of this problem has already been devised in \cite{Knowles1998,KohnVogelius87,KM90} for the idealized measurement model that considers both voltages and currents as arbitrary (up to smoothness assumptions) functions on the boundary. A more realistic model of the electrodes has been developed in \cite{SCI92}, see also \cite{JXZ16}. Our aim here is to extend the variational formulation from \cite{Knowles1998,KohnVogelius87,KM90} to incorporate the complete electrode model (CEM) from \cite{SCI92,JXZ16}. \subsubsection{The minimization form of the problem} Let $\Omega$ be a domain in $\mathbb{R}^2$ whose boundary $\partial \Omega$ is a simple closed curve. Electrodes are placed on $\partial \Omega$ and are numbered in counterclockwise order from $e_1$ to $e_L$ with $\overline{e_{\ell_1}} \cap \overline{e_{\ell_2}} = \emptyset$ if $\ell_1 \ne \ell_2$. The starting point and the end point of the $\ell$th electrode are denoted by $e_\ell^a$ and $e_\ell^b$, and the gap between $e_\ell$ and $e_{\ell+1}$ is denoted by $g_\ell$ (see Figure \ref{pic:CEM}). For convenience, we use the identities $e_{L+1} \equiv e_1$, $e_0 \equiv e_L$, $g_{L+1} \equiv g_1$, $g_0 \equiv g_L$. We also denote by $j_{i,\ell}$ the current applied on $e_\ell$ and by $v_{i,\ell}$ the voltage at the $\ell$th electrode in the $i$th measurement, $i\in\{1,\ldots,I\}$. By the law of charge conservation, $j_i=(j_{i,1}, \dots, j_{i,L})$ is an element of $$\mathbb{R}^L_{\diamond}:= \left\{ (x_1,\dots,x_L) \in \mathbb{R}^L: \sum_{\ell=1}^{L} x_\ell =0 \right\}.$$ In addition, we can also normalize $v_i=(v_{i,1},\dots,v_{i,L})$ such that $v_i \in \mathbb{R}^L_{\diamond}$, $i\in\{1,\ldots,I\}$.
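For illustration only (this is a common choice in EIT practice, not one prescribed by the model considered here), with $L=4$ electrodes as in Figure \ref{pic:CEM} below, the $i$th experiment could impress the adjacent-pair current pattern
\[
j_i = (1,-1,0,0) \in \mathbb{R}^4_{\diamond},
\]
which evidently satisfies the charge conservation constraint $\sum_{\ell=1}^{4} j_{i,\ell} = 0$.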
\begin{figure}[!h] \begin{center} \begin{tikzpicture} \draw (0,0) circle (3 and 2); \coordinate (P) at ($(0, 0) + (30:3cm and 2cm)$); \draw[black, line width=1pt] (P) arc (30:70:3cm and 2cm) node[pos=0.4, above]{$g_1$}; \coordinate (P) at ($(0, 0) + (110:3cm and 2cm)$); \draw[black, line width=1pt] (P) arc (110:150:3cm and 2cm) node[pos=0.6, above]{$g_2$}; \coordinate (P) at ($(0, 0) + (200:3cm and 2cm)$); \draw[black, line width=1pt] (P) arc (200:270:3cm and 2cm) node[pos=0.5, below]{$g_3$}; \coordinate (P) at ($(0, 0) + (305:3cm and 2cm)$); \draw[black, line width=1pt] (P) arc (305:360:3cm and 2cm) node[pos=0.5, right]{$g_4$}; \draw[red, line width=2pt] (3,0) circle(1pt) node[blue, right] {$e_1^a$} arc (0:30:3cm and 2cm) circle(1pt) node[blue, right] {$e_1^b$} node[pos=0.5, right]{$e_1$}; \coordinate (P) at ($(0, 0) + (70:3cm and 2cm)$); \draw[red, line width=2pt] (P) circle (1pt) node[above, blue] {$e_2^a$} arc (70:110:3cm and 2cm) circle (1pt) node[above, blue] {$e_2^b$} node[pos=0.5, above]{$e_2$}; \coordinate (P) at ($(0, 0) + (150:3cm and 2cm)$); \draw[red, line width=2pt] (P) circle (1pt) node[left, blue] {$e_3^a$} arc (150:200:3cm and 2cm) circle (1pt) node[left, blue] {$e_3^b$} node[pos=0.5, left]{$e_3$}; \coordinate (P) at ($(0, 0) + (270:3cm and 2cm)$); \draw[red, line width=2pt] (P) circle (1pt) node[below, blue] {$e_4^a$} arc (270:305:3cm and 2cm) circle (1pt) node[below, blue] {$e_4^b$} node[pos=0.5, below]{$e_4$}; \end{tikzpicture} \end{center} \caption{Electrodes (in red) on the boundary with $L=4$. \label{pic:CEM}} \end{figure} The EIT-CEM problem is to find the conductivity $\sigma: \Omega \to \mathbb{R}$, $0 < \underline{\sigma} \le \sigma \le \overline{\sigma}, \text{\; a.e. in \;} \Omega$, satisfying $$ \nabla \cdot J_i = 0, \quad \nabla^\bot \cdot E_i = 0, \quad J_i = \sigma E_i, \quad \text{in\;} \Omega, \quad i=1,2,\dots, I$$ where $J_i:\Omega \to \mathbb{R}^2$ is the current density, $E_i:\Omega \to \mathbb{R}^2$ is the electric field and $\nabla^\bot$ is the 2-d rotation operator $\nabla^\bot = \left( - \frac{\partial}{\partial x_2},\frac{\partial}{\partial x_1} \right)$. In a similar way to \cite{Kal18,KM90,JXZ16}, by using potentials $\phi_i$ and $\psi_i$ for $J_i$ and $E_i$, $$J_i = - \nabla^\bot \psi_i, \quad E_i = -\nabla \phi_i,$$ we write the problem in the form \begin{subequations} \label{eq:EIT_CEM} \begin{alignat}{2} & \sqrt{\sigma}\nabla \phi_i - \frac{1}{\sqrt{\sigma}} \nabla^\bot \psi_i = 0 &&\quad \text{in \;} \Omega, \label{eq:EIT_CEM_eq} \\ & \phi_i + z_\ell \nabla^\bot \psi_i \cdot \nu = v_{i,\ell} &&\quad \text{on \;} e_\ell, \ \ell=1,2,\dots,L, \label{eq:EIT_CEM_constrain_el1} \\ & \int_{e_\ell} \nabla^\bot \psi_i \cdot \nu {~\mathrm{d}} s = j_{i,\ell} &&\quad \text{for \;} \ell=1,2,\dots,L, \label{eq:EIT_CEM_constrain_el2} \\ & \nabla^\bot \psi_i \cdot \nu = 0 &&\quad \text{on \;} \partial \Omega \backslash \cup_{\ell=1}^L e_\ell, \label{eq:EIT_CEM_constrain_outside_el} \end{alignat} \end{subequations} for $i= 1, 2, \dots, I$, where $\{z_\ell\}_{\ell=1}^{L}$ is the set of (known) positive contact impedances. Here $I$ is the number of current patterns impressed via the $L$ electrodes. Note that equations (\ref{eq:EIT_CEM}) contain only $\nabla^\bot \psi_i$, not $\psi_i$ itself. So adding or subtracting a constant from $\psi_i$ will have no effect on the problem and we can assume $\psi_i(e_1^a) = 0$. 
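For the reader's convenience, we note how \eqref{eq:EIT_CEM_eq} arises from the potential ansatz: the constraints $\nabla \cdot J_i = -\nabla\cdot\nabla^\bot \psi_i = 0$ and $\nabla^\bot \cdot E_i = -\nabla^\bot\cdot\nabla \phi_i = 0$ hold automatically, and the remaining material law is symmetrized by dividing by $\sqrt{\sigma}$,
\[
J_i = \sigma E_i \;\Longleftrightarrow\; \nabla^\bot \psi_i = \sigma \nabla \phi_i \;\Longleftrightarrow\; \sqrt{\sigma}\,\nabla \phi_i - \frac{1}{\sqrt{\sigma}}\,\nabla^\bot \psi_i = 0 \quad \text{in \;} \Omega,
\]
which puts $\phi_i$ and $\psi_i$ on an equal footing in the residual.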
By using (\ref{eq:EIT_CEM_constrain_el2}) and (\ref{eq:EIT_CEM_constrain_outside_el}) we see that $\psi_i(e_\ell^b) - \psi_i(e_\ell^a) = -j_{i,\ell}$ for $\ell \in\{1,\dots,L\}$, and $\psi_i(x) - \psi_i(e_{\ell}^b)=0$ for $x \in g_\ell, \ell \in \{1,\dots,L\}$. It follows that $\psi_i(e_{\ell+1}^a) = \psi_i(e_{\ell}^b) = \psi_i(e_{\ell}^a) - j_{i,\ell}$ for $\ell \in \{1,\dots,L\}$ and then $\psi_i(e_{\ell+1}^a) = \psi_i(e_{\ell}^b) = \psi_i(e_1^a) - \sum_{k=1}^{\ell} j_{i,k} = -\sum_{k=1}^{\ell} j_{i,k}$ for $\ell \in \{1,\dots,L\}$. Therefore, the value of $\psi_i$ outside the electrodes is determined by $\psi_i(x) = -\sum_{k=1}^{\ell} j_{i,k}$ for $x \in g_\ell, \ell \in \{1,\dots,L\}$, that is, \begin{equation} \label{eq:condition_gl} \psi_i|_{g_{\ell}} = -\sum_{k=1}^{\ell} j_{i,k}, \quad \mbox{ for all } \ell \in \{1,\dots,L\}. \end{equation} Next, by taking the integral from $e_\ell^a$ to $x$ on $e_\ell$ on both sides of (\ref{eq:EIT_CEM_constrain_el1}), noting that $\psi_i(e_\ell^a) = \psi_i|_{g_{\ell-1}}$, we get \begin{equation} \label{eq:condition_el} \int_{e_\ell^a}^x \phi_i {~\mathrm{d}} s - z_\ell (\psi_i(x) - \psi_i|_{g_{\ell-1}}) = v_{i,\ell} d_{\partial \Omega} (e_\ell^a, x), \quad \mbox{ for all }x \in e_\ell, \ \ell \in \{1,\dots,L\}, \end{equation} where $d_{\partial \Omega} (x_1, x_2)$ is the length of $\partial \Omega$ from $x_1$ to $x_2$. In view of (\ref{eq:condition_gl}), (\ref{eq:condition_el}), the function $\psi_i$ is constant on each $g_\ell$ and each of the functions $C_\ell (\phi_i, \psi_i): e_\ell \to \mathbb{R}$, \begin{equation} \label{eq:setting_Cl} C_\ell(\phi_i, \psi_i)(x) := \frac{1}{d_{\partial \Omega} (e_\ell^a, x)} \left( \int_{e_\ell^a}^x \phi_i {~\mathrm{d}} s - z_\ell (\psi_i(x) - \psi_i|_{g_{\ell-1}}) \right), \quad \mbox{ for all } x \in e_\ell, \end{equation} is constant on $e_\ell$, so we can choose the spaces containing $\sigma$, $\overrightarrow{\phi} = (\phi_1, \dots, \phi_I)$, $ \overrightarrow{\psi} = (\psi_1, \dots, \psi_I)$ as \begin{equation} \label{eq:setting_xu} \begin{split} & \sigma \in X := L^2(\Omega), \\ & (\overrightarrow{\phi}, \overrightarrow{\psi}) \in V := \Big\{ (\overrightarrow{\phi}, \overrightarrow{\psi}) \in H^1 (\Omega)^{2I}: \forall i \in \{1, \dots, I\}, \; \forall \ell \in \{1, \dots, L\}, \\ & \qquad \qquad \qquad \qquad \psi_i (e_1^a) = 0, \qquad \psi_i|_{g_\ell}, C_\ell(\phi_i, \psi_i) \text{\; are constant,} \\ & \qquad \qquad \qquad \qquad \text{and \quad } \sum_{\ell=1}^{L} C_\ell (\phi_i, \psi_i) = 0\Big\}. \end{split} \end{equation} We define the observation operator $C: V \to Y$ by \begin{equation} \label{eq:setting_C} C (\overrightarrow{\phi}, \overrightarrow{\psi}) = \left( \Big( \psi_i|_{g_{\ell-1}} - \psi_i|_{g_\ell} \Big)_{\ell=1}^L, \Big( C_\ell(\phi_i, \psi_i) \Big)_{\ell=1}^L \right)_{i=1}^I, \end{equation} where \begin{equation} \label{eq:setting_Y} Y := \Big( \mathbb{R}_{\diamond}^L \times \mathbb{R}_{\diamond}^L \Big)^{I} \end{equation} and obviously $E (\sigma, \overrightarrow{\phi}, \overrightarrow{\psi}) = \Bigl(E_i(\sigma,\phi_i,\psi_i)\Bigr)_{i=1}^I \in L^2 (\Omega)^{2I}$ with \begin{equation} \label{eq:setting_P} E_i(\sigma,\phi_i,\psi_i)=\sqrt{\sigma}\nabla \phi_i - \frac{1}{\sqrt{\sigma}} \nabla^\bot \psi_i .
\end{equation} As a next step, we will determine the operator $C^{\mathrm{ri}}: Y \to V$ such that $C(C^{\mathrm{ri}} (y)) = y,$ for all \begin{equation}\label{yetaxi} y = \left( (\eta_{i,\ell})_{\ell=1}^L, (\xi_{i,\ell})_{\ell=1}^L \right)_{i=1}^I \in Y, \end{equation} that is, \begin{equation} \label{eq:formof_Cri} C^{\mathrm{ri}} (y) = (C^{\mathrm{ri}}_{\overrightarrow{\phi}} (y), C^{\mathrm{ri}}_{\overrightarrow{\psi}} (y)) = (C^{\mathrm{ri}}_{\phi,1} (y), \dots, C^{\mathrm{ri}}_{\phi,I} (y), C^{\mathrm{ri}}_{\psi,1} (y), \dots, C^{\mathrm{ri}}_{\psi,I} (y)) \end{equation} satisfying (\ref{eq:condition_gl}), (\ref{eq:condition_el}) where $(C^{\mathrm{ri}}_{\phi,i} (y), C^{\mathrm{ri}}_{\psi,i} (y))$ replaces $(\phi_i, \psi_i)$ and $(\eta_{i,\ell}, \xi_{i,\ell})$ replaces $(j_{i,\ell}, v_{i,\ell})$. Because of $C^{\mathrm{ri}}_{\psi,i} (y)|_{g_\ell} = -\sum_{k=1}^{\ell} \eta_{i,k}$, we choose $C^{\mathrm{ri}}_{\psi,i} (y)$ in the form \begin{equation} \label{eq:formof_psi0} C^{\mathrm{ri}}_{\psi,i} (y) = \sum_{\ell=1}^L \left( -\sum_{k=1}^{\ell} \eta_{i,k} \right) \psi_{0,\ell} \end{equation} where $\psi_{0,\ell}(x) = 1$ if $x \in g_\ell$ and $\psi_{0,\ell} (x) = 0$ if $x \in g_k, k \ne \ell$; specifically, we choose $\psi_{0,\ell} \in H^1(\Omega)$ such that it satisfies the boundary conditions \begin{equation} \label{eq:formof_psi0l} \psi_{0,\ell} (x) = \left\{ \begin{array}{ll} 1, & \text{if \;} x \in g_\ell, \\ 1 - d_{\partial \Omega} (e^b_\ell,x)/|e_\ell|, & \text{if \;} x \in e_\ell, \\ 1 - d_{\partial \Omega} (e^a_{\ell+1},x)/|e_{\ell+1}|, & \text{if \;} x \in e_{\ell+1}, \\ 0, & \text{if \;} x \in \partial \Omega \backslash (g_\ell \cup e_\ell \cup e_{\ell+1}), \end{array} \right. \end{equation} where $|e_\ell|$ is the length of $e_\ell$. This is possible due to the fact that the function on the right hand side of (\ref{eq:formof_psi0l}) is in $H^{1/2} (\partial \Omega)$, together with the (inverse) Trace Theorem and the assumption that $\partial \Omega$ is Lipschitz. By some direct calculations on $C^{\mathrm{ri}}_{\psi,i} (y)$, it is easy to see that (recalling \eqref{yetaxi}) \begin{equation*} \begin{split} C^{\mathrm{ri}}_{\psi,i} (y) (x) &= -\sum_{k=1}^{\ell-1} \eta_{i,k} - \frac{\eta_{i,\ell}}{|e_\ell|} d_{\partial \Omega} (e^a_\ell, x) \\ &= C^{\mathrm{ri}}_{\psi,i} (y)|_{g_{\ell-1}} - \frac{\eta_{i,\ell}}{|e_\ell|} d_{\partial \Omega} (e^a_\ell, x), \quad \mbox{ for all } x \in e_\ell \end{split} \end{equation*} and we get (by using (\ref{eq:condition_el})) \begin{equation*} \begin{split} \int_{e_\ell^a}^x C^{\mathrm{ri}}_{\phi,i} (y) {~\mathrm{d}} s - z_\ell \left( C^{\mathrm{ri}}_{\psi,i} (y) (x) - C^{\mathrm{ri}}_{\psi,i} (y)|_{g_{\ell-1}} \right) &= \xi_{i,\ell} d_{\partial \Omega} (e_\ell^a, x), \quad \mbox{ for all } x \in e_\ell, \ \ell \in \{1,\dots,L\} \end{split} \end{equation*} thus \begin{equation*} \begin{split} \int_{e_\ell^a}^x C^{\mathrm{ri}}_{\phi,i} (y) {~\mathrm{d}} s &= \left( \xi_{i,\ell} - \frac{z_\ell}{|e_\ell|} \eta_{i,\ell} \right) d_{\partial \Omega} (e_\ell^a, x) \\ & = \left( \xi_{i,\ell} - \frac{z_\ell}{|e_\ell|} \eta_{i,\ell} \right) \int_{e_\ell^a}^x {~\mathrm{d}} s, \quad \mbox{ for all } x \in e_\ell, \ \ell \in \{1,\dots,L\}.
\end{split} \end{equation*} So $C^{\mathrm{ri}}_{\phi,i} (y)$ for $y$ according to \eqref{yetaxi} is defined as \begin{equation} \label{eq:formof_phi0} C^{\mathrm{ri}}_{\phi,i} (y) = \sum_{\ell=1}^L \left( \xi_{i,\ell} - \frac{z_\ell}{|e_\ell|} \eta_{i,\ell} \right) \phi_{0,\ell} \end{equation} where $\phi_{0,\ell}(x) = 1$ if $x \in e_\ell$ and $\phi_{0,\ell} (x) = 0$ if $x \in e_k, k \ne \ell$; specifically, we choose $\phi_{0,\ell} \in H^1(\Omega)$ satisfying the boundary conditions \begin{equation} \label{eq:formof_phi0l} \phi_{0,\ell} (x) = \left\{ \begin{array}{ll} 1, & \text{if \;} x \in e_\ell, \\ 1 - d_{\partial \Omega} (e_\ell^b, x)/|g_\ell|, & \text{if \;} x \in g_\ell, \\ 1 - d_{\partial \Omega} (e_\ell^a, x)/|g_{\ell-1}|, & \text{if \;} x \in g_{\ell-1}, \\ 0, & \text{if \;} x \in \partial \Omega \backslash (e_\ell \cup g_\ell \cup g_{\ell-1}), \end{array} \right. \end{equation} where $|g_\ell|$ is the length of $g_\ell$. This choice is possible for the same reason as for the existence of $\psi_{0,\ell}$ in (\ref{eq:formof_psi0l}). The operator $C^{\mathrm{ri}}$ defined in this way is linear and continuous as a mapping $C^{\mathrm{ri}}:Y\to V$. \medskip Now, the problem (\ref{eq:EIT_CEM}) is rewritten in the form (\ref{eq:model_observation_mixed}) as \begin{equation} \label{eq:model_observation_mixed_EIT_CEM} \left\{ \begin{array}{l} \sqrt{\sigma} \nabla (\hat\phi_i + C^{\mathrm{ri}}_{\phi,i} (y)) - \frac{1}{\sqrt{\sigma}} \nabla^\bot (\hat\psi_i + C^{\mathrm{ri}}_{\psi,i} (y)) = 0 \quad \mbox{ for all } i = 1,2,\dots,I, \\ (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \in L^\infty (\Omega) \times {\mathrm{Ker}}(C), \end{array} \right. \end{equation} where $(C^{\mathrm{ri}}_{\overrightarrow{\phi}} (y), C^{\mathrm{ri}}_{\overrightarrow{\psi}} (y))$ satisfy (\ref{eq:formof_psi0}), (\ref{eq:formof_psi0l}), (\ref{eq:formof_phi0}), (\ref{eq:formof_phi0l}) and \begin{equation} \label{eq:KerC} \begin{split} {\mathrm{Ker}}(C) &= \big\{ (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) = (\hat \phi_1, \dots, \hat \phi_I, \hat \psi_1, \dots, \hat \psi_I) \in V: \hat \psi_i|_{g_{\ell-1}} - \hat \psi_i|_{g_\ell} = 0, \\ &\qquad \qquad \text{\; and \;} C_\ell (\hat \phi_i, \hat \psi_i) = 0, \quad \forall \ell \in \{1, \dots, L\}, \forall i \in \{1, \dots, I\} \big\}.
\end{split} \end{equation} To regularize this problem using the theory from Section \ref{sec:Preliminaries}, we consider the cost function $\mathcal{J} (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}; y) = \mathcal{Q}_E (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}; y)$, \[ \mathcal{Q}_E (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}; y) = \sum_{i=1}^I \mathcal{Q}_i(\sigma,\hat\phi_i, \hat\psi_i;y) \] where \begin{equation} \label{eq:Q_P} \begin{split} &\mathcal{Q}_i(\sigma,\hat\phi_i, \hat\psi_i;y):=\begin{cases}\frac12\|q_i\|_{L^2(\Omega)^2}^2 \mbox{ if } q_i\in L^2(\Omega)^2\\ +\infty\mbox{ else} \end{cases}\\ \mbox{with } &q_i:= E_i(\sigma,\hat\phi_i + C^{\mathrm{ri}}_{\phi,i} (y), \hat\psi_i + C^{\mathrm{ri}}_{\psi,i} (y))\\ &\hspace*{0.5cm}= \sqrt{\sigma} \nabla (\hat \phi_i + C^{\mathrm{ri}}_{\phi,i} (y)) - \frac{1}{\sqrt{\sigma}} \nabla^\bot (\hat \psi_i + C^{\mathrm{ri}}_{\psi,i} (y)) \end{split} \end{equation} the regularization function \begin{equation} \label{eq:R} \mathcal{R} (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) = \frac{1}{2} \| (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \|^2_{H^{s} (\Omega)^{2I}} \end{equation} for some $s>1$, the discrepancy measure \begin{equation} \label{eq:S} \mathcal{S} (y_1, y_2) = \| y_2 -y_1 \|_{\infty,\mathbb{R}^{2LI}}=\max\{|y_{2,j}-y_{1,j}|\, : \, j\in\{1,\ldots,2LI\}\}, \end{equation} the function $\widetilde{\mathcal{R}}: L^2(\Omega) \to \overline{\mathbb{R}}$, \begin{equation} \label{eq:R_tilde} \widetilde{\mathcal{R}} (\sigma) = \left\| \sigma - \frac{\underline{\sigma} + \overline{\sigma}}{2} \right\|_{L^\infty (\Omega)}, \end{equation} and the constant $\rho =\frac{\overline{\sigma} - \underline{\sigma}}{2}$. Note that therewith the constraint $\widetilde{\mathcal{R}}(\sigma) \le \rho$ is equivalent to $\underline{\sigma} \le \sigma \le \overline{\sigma}$ a.e. in $\Omega$. The regularization term according to \eqref{eq:R} is only needed in order to prove existence of a minimizer. In particular, it makes it possible to establish $\mathcal{T}$ lower semicontinuity of the regularized cost function with respect to an appropriate topology $\mathcal{T}$ (note that for $s>1$ the embedding $H^{s} (\Omega) \hookrightarrow H^1 (\Omega)$ is compact), which would not be the case without this term.
If the accuracy of the current and voltage measurements is $\delta^{j}$ and $\delta^v$, respectively, i.e., $|j_{i,\ell}^\delta - j_{i,\ell}| \le \delta^j$ and $|v_{i,\ell}^\delta - v_{i,\ell}| \le \delta^v$, then the discrepancy measure between the perturbed data $y^\delta = \left( (j_{i,\ell}^\delta)_{\ell=1}^L, (v_{i,\ell}^\delta)_{\ell=1}^L \right)_{i=1}^I$ and the exact data $y = \left( (j_{i,\ell})_{\ell=1}^L, (v_{i,\ell})_{\ell=1}^L \right)_{i=1}^I$ satisfies \begin{equation}\label{deltaEIT} \mathcal{S} (y, y^\delta) = \| y^\delta - y \|_{\infty,\mathbb{R}^{2LI}} \le \max \{\delta^j, \delta^v\} =: \delta, \end{equation} where $\| \cdot \|_{\infty,\mathbb{R}^{2LI}}$ is the maximum norm, and the regularized minimization problem can be summarized as \begin{equation} \label{eq:minIP_CEM-EIT_noise} \begin{split} &\begin{split}\min \Bigg\{ \sum_{i=1}^I \frac{1}{2} \int_\Omega \left| \sqrt{\sigma} \nabla (\hat \phi_i + C^{\mathrm{ri}}_{\phi,i} (y^\delta)) - \frac{1}{\sqrt{\sigma}} \nabla^\bot (\hat \psi_i + C^{\mathrm{ri}}_{\psi,i} (y^\delta)) \right|^2 {~\mathrm{d}} x \\ + \frac{\alpha}{2} \| (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \|^2_{H^{s} (\Omega) ^{2I}}:\end{split} \\ & \qquad (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \in X \times V, \; \| C(\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \|_{\infty,\mathbb{R}^{2LI}} \le \tau \delta, \; \underline{\sigma} \le \sigma \le \overline{\sigma} \text{\; a.e. in \;} \Omega \Bigg\}, \end{split} \end{equation} where $X,V$ are defined in (\ref{eq:setting_xu}), $C$ in (\ref{eq:setting_C}), and $C^{\mathrm{ri}}$ in \eqref{eq:formof_psi0}--\eqref{eq:formof_phi0l}. \begin{remark} \label{rem:CEM-EIT_choosing_M_ad^delta} Since ${\mathrm{Im}}(C) = Y$, the admissible sets in (\ref{eq:M_ad^delta}) and (\ref{eq:M_ad^delta_reduced}) are equal. \end{remark} \subsubsection{Convergence} To obtain a convergence result, we define the topology $\mathcal{T}$ on $X \times V$ by \begin{equation} \label{eq:topologyT} (\sigma_n, \overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n) \xrightarrow{\mathcal{T}} (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \Leftrightarrow \left \{ \begin{split} \sigma_n \xrightharpoonup{*} \sigma \text{\; and \;} \frac{1}{\sigma_n} \xrightharpoonup{*} \frac{1}{\sigma} & \text{\; in \;} L^\infty (\Omega), \\ (\overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n) \to (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) & \text{\; in \;} H^1 (\Omega)^{2I}, \\ (\overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n) \rightharpoonup (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) & \text{\; in \;} H^{s} (\Omega)^{2I}, \end{split} \right. \end{equation} for $s>1$ as in \eqref{eq:R}, and the norm $\|\cdot\|_B$ is given by \begin{equation} \label{eq:normB} \| (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \|_B := \| \sigma \|_{L^\infty (\Omega)} + \| (\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \|_{H^{s} (\Omega)^{2I}} + \| C(\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \|_{\mathbb{R}^{2LI}}. \end{equation} In addition, we also impose some a priori conditions on $\sigma^\dagger$ and $(\overrightarrow{\hat\phi}^\dagger, \overrightarrow{\hat\psi}^\dagger)$ as follows \begin{equation} \label{eq:priori_information_sigma} \underline{\sigma} \le \sigma^\dagger \le \overline{\sigma}, \text{\quad a.e.
in \;} \Omega \end{equation} and \begin{equation} \label{eq:priori_information_phi_psi} (\overrightarrow{\hat\phi}^\dagger, \overrightarrow{\hat\psi}^\dagger) \in H^{s} (\Omega)^{2I}. \end{equation} \begin{corollary} \label{cor:CEM-EIT} Let (\ref{eq:priori_information_sigma}), (\ref{eq:priori_information_phi_psi}) hold. Then \begin{enumerate} [label = (\roman*)] \item (Existence of minimizers) For any $\alpha>0$, a minimizer of (\ref{eq:minIP_CEM-EIT_noise}) exists; \item (Boundedness) For any sequence $(y_n)_{n \in \mathbb{N}} \subset Y$ with $y_n \to y^\delta$ in $Y$, the sequence of corresponding regularized minimizers is $\|\cdot\|_B$ bounded for $\|\cdot\|_B$ as in (\ref{eq:normB}); \item (Convergence) If additionally the choice of regularization parameter satisfies $$\alpha(\delta, y^\delta) \to 0 \text{\quad and \quad} \frac{\delta^2}{\alpha(\delta, y^\delta)} \le c_0, \text{\qquad as \;} \delta \to 0, $$ then as $\delta \to 0$ in \eqref{deltaEIT}, $y^\delta \to y$, the family of minimizers $\left( \sigma_{\alpha(\delta, y^\delta)}^\delta, \overrightarrow{\hat\phi}_{\alpha(\delta, y^\delta)}^\delta, \overrightarrow{\hat\psi}_{\alpha(\delta, y^\delta)}^\delta \right)$ converges $\mathcal{T}$ subsequentially to a solution $(\sigma^\dagger, \overrightarrow{\hat\phi}^\dagger, \overrightarrow{\hat\psi}^\dagger)$ of the inverse problem (\ref{eq:model_observation_mixed_EIT_CEM}) with exact data $y$. \end{enumerate} \end{corollary} \begin{proof} The proof proceeds by verification of Assumptions \ref{ass:Maao} and \ref{ass:Maao_newcondition}. \begin{itemize} \item Assumptions \ref{ass:Maao}(i),(ii) follow from (\ref{eq:priori_information_sigma}), (\ref{eq:priori_information_phi_psi}). \item The set $L_c$ in Assumption \ref{ass:Maao}(iii) is clearly $\|\cdot\|_B$ bounded by definition of $\mathcal{R}$, $\widetilde{\mathcal{R}}$, $\mathcal{S}$. Besides, for every sequence $(\sigma_n, \overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n)$ in $L_c$, \begin{itemize} \item $(\overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n)$, is bounded and thus has a weakly convergent subsequence in $H^{s} (\Omega)^{2I}$, as well as a strongly convergent subsequence in $H^1 (\Omega)^{2I}$ by the compactness of the embedding operator from $H^{s} (\Omega)^{2I}$ to $H^1 (\Omega)^{2I}$; \item the sequences $(\sigma_n)$, $\left(\frac{1}{\sigma_n}\right)$, which are bounded in $L^\infty (\Omega)$ due to boundedness of $\widetilde{\mathcal{R}} (\sigma_n)$, have weak-star convergent subsequences in $L^\infty (\Omega)$. \end{itemize} Thus we can extract a subsequence of $(\sigma_n, \overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n)$ that is $\mathcal{T}$ convergent in $X \times V$. So Assumption \ref{ass:Maao}(iii) is verified. \item The maps $\mathcal{R}$, $\widetilde{\mathcal{R}}$, $(\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \mapsto \mathcal{S} (C(\overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}), z)$ are obviously $\mathcal{T}$ lower semicontinuous by their definition. To verify Assumption \ref{ass:Maao}(iv), we only need to show $\mathcal{T}$ lower semicontinuity of $(\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}) \mapsto \mathcal{Q}_E (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}; z)$ for all $z \in Y$. Indeed, we have $\mathcal{Q}_E (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi}; z) = \sum_{i=1}^I \mathcal{Q}_i (\sigma, \hat \phi_i, \hat \psi_i; z),$ where, cf. 
\eqref{eq:Q_P}, \begin{equation*} \begin{split} \mathcal{Q}_i (\sigma, \hat \phi_i, \hat \psi_i; z) &= \frac{1}{2} \int_\Omega \left( \sigma |\nabla \hat \phi_i|^2 + \frac{1}{\sigma} |\nabla^\bot \hat \psi_i|^2 - 2 \nabla \hat \phi_i \cdot \nabla^\bot \hat \psi_i \right) {~\mathrm{d}} x \\ & \quad + \int_\Omega \sigma \left( \nabla \hat \phi_i - \frac{1}{\sigma} \nabla^\bot \hat \psi_i \right) \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right) {~\mathrm{d}} x \\ & \quad + \frac{1}{2} \int_\Omega \left( \sigma |\nabla C^{\mathrm{ri}}_{\phi,i} (z)|^2 + \frac{1}{\sigma} |\nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z)|^2 - 2 \nabla C^{\mathrm{ri}}_{\phi,i} (z) \cdot \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right) {~\mathrm{d}} x \end{split} \end{equation*} and for all $(\sigma_n, \overrightarrow{\hat\phi}_n, \overrightarrow{\hat\psi}_n) \xrightarrow{\mathcal{T}} (\sigma, \overrightarrow{\hat\phi}, \overrightarrow{\hat\psi})$, \begin{equation*} \begin{split} & |\mathcal{Q}_i (\sigma_n, \hat \phi_{n,i}, \hat \psi_{n,i}; z) - \mathcal{Q}_i (\sigma, \hat \phi_i, \hat \psi_i; z)| \\ & \quad \le \frac{1}{2} \left| \int_\Omega \left( \sigma_n |\nabla \hat \phi_{n,i}|^2 - \sigma |\nabla \hat \phi_i|^2 \right) {~\mathrm{d}} x \right| + \frac{1}{2} \left| \int_\Omega \left( \frac{1}{\sigma_n} |\nabla^\bot \hat \psi_{n,i}|^2 - \frac{1}{\sigma} |\nabla^\bot \hat \psi_i|^2 \right) {~\mathrm{d}} x \right| \\ & \quad \quad + \left| \int_\Omega \left( \nabla \hat \phi_{n,i} \cdot \nabla^\bot \hat \psi_{n,i} - \nabla \hat \phi_{i} \cdot \nabla^\bot \hat \psi_{i} \right) {~\mathrm{d}} x \right| \\ & \quad \quad + \left| \int_\Omega \left( \sigma_n \nabla \hat \phi_{n,i} - \sigma \nabla \hat \phi_i \right) \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right){~\mathrm{d}} x \right| \\ & \quad \quad + \left| \int_\Omega \left( \frac{1}{\sigma_n} \nabla^\bot \hat \psi_{n,i} - \frac{1}{\sigma} \nabla^\bot \hat \psi_i \right) \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right){~\mathrm{d}} x \right| \\ & \to 0, \end{split} \end{equation*} because each term in the right hand side tends to $0$ as $n \to \infty$: \\ the first term \begin{equation*} \begin{split} &\left| \int_\Omega \left( \sigma_n |\nabla \hat \phi_{n,i}|^2 - \sigma |\nabla \hat \phi_i|^2 \right) {~\mathrm{d}} x \right| \\ & \quad = \left| \int_\Omega \sigma_n \left( | \nabla \hat \phi_{n,i}|^2 - |\nabla \hat \phi_i|^2 \right) {~\mathrm{d}} x + \int_\Omega (\sigma_n - \sigma) |\nabla \hat \phi_i|^2 {~\mathrm{d}} x \right| \\ & \quad \le \overline{\sigma} \int_\Omega \left| \left( \nabla \hat \phi_{n,i} + \nabla \hat \phi_i \right) \cdot \left( \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right) \right| {~\mathrm{d}} x + \left| \int_\Omega (\sigma_n - \sigma) |\nabla \hat \phi_i|^2 {~\mathrm{d}} x \right| \\ & \quad \le \overline{\sigma} \left\| \nabla \hat \phi_{n,i} + \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2} \left\| \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2} + \left| \int_\Omega (\sigma_n - \sigma) |\nabla \hat \phi_i|^2 {~\mathrm{d}} x \right| \end{split} \end{equation*} tends to $0$, since $\left\| \nabla \hat \phi_{n,i} + \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2}$ is bounded, $\left\| \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2} \to 0$ as $n \to \infty$ and $(\sigma_n -\sigma) \xrightharpoonup{*} 0$ in 
$L^\infty (\Omega)$, $|\nabla \hat \phi_i|^2 \in L^1 (\Omega)$; note that at this point we need the strength of $\mathcal{T}$ inducing strong $H^1$ norm convergence of $\hat \phi_{n,i}$, as enabled by the regularization term \eqref{eq:R};\\ the second term can be estimated analogously;\\ the third term \begin{equation*} \begin{split} & \left| \int_\Omega \left( \nabla \hat \phi_{n,i} \cdot \nabla^\bot \hat \psi_{n,i} - \nabla \hat \phi_{i} \cdot \nabla^\bot \hat \psi_{i} \right) {~\mathrm{d}} x \right| \\ & \quad \le \left| \int_\Omega \nabla \hat \phi_{n,i} \cdot \left( \nabla^\bot \hat \psi_{n,i} - \nabla^\bot \hat \psi_i \right) {~\mathrm{d}} x \right| + \left| \int_\Omega \left( \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right) \cdot \nabla^\bot \hat \psi_i {~\mathrm{d}} x \right| \\ & \quad \le \left\| \nabla \hat \phi_{n,i} \right\|_{L^2 (\Omega)^2} \left\| \nabla^\bot \hat \psi_{n,i} - \nabla^\bot \hat \psi_i \right\|_{L^2 (\Omega)^2} + \left\| \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2} \left\| \nabla^\bot \hat \psi_i \right\|_{L^2 (\Omega)^2} \end{split} \end{equation*} tends to $0$, since $\nabla^\bot \hat \psi_{n,i} \to \nabla^\bot \hat \psi_i$ and $\nabla \hat \phi_{n,i} \to \nabla \hat \phi_i$ in $L^2 (\Omega)^2$; \\ the fourth term \begin{equation*} \begin{split} & \left| \int_\Omega \left( \sigma_n \nabla \hat \phi_{n,i} - \sigma \nabla \hat \phi_i \right) \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right){~\mathrm{d}} x \right| \\ & \quad \le \left| \int_\Omega \left( \left( \sigma_n - \sigma \right) \nabla \hat \phi_{n,i} + \sigma \left( \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right) \right) \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right){~\mathrm{d}} x \right| \\ & \quad \le \left| \int_\Omega \left( \sigma_n - \sigma \right) \nabla \hat \phi_{n,i} \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right) {~\mathrm{d}} x \right| \\ & \quad \qquad + \overline{\sigma} \left\| \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2} \left\| \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right\|_{L^2 (\Omega)^2} \end{split} \end{equation*} tends to $0$, since $(\sigma_n -\sigma) \xrightharpoonup{*} 0$ in $L^\infty (\Omega)$, $\nabla \hat \phi_{n,i} \cdot \left( \nabla C^{\mathrm{ri}}_{\phi,i} (z) - \frac{1}{\sigma} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (z) \right) \in L^1 (\Omega)$ and $\left\| \nabla \hat \phi_{n,i} - \nabla \hat \phi_i \right\|_{L^2 (\Omega)^2} \to 0$;\\ the fifth term again works analogously. \item Assumption \ref{ass:Maao}(v) is verified by Remark \ref{rem:ass:Maao(v)}. \item For Assumption \ref{ass:Maao_newcondition}, we employ Remark \ref{rem:Ass3} with $W=L^2(\Omega)^2$, $D_i(\sigma, \phi_i,\psi_i)C^{\mathrm{ri}}(\eta,\xi)=\sqrt{\sigma}\nabla C^{\mathrm{ri}}_{\phi,i} (\eta,\xi)- \frac{1}{\sqrt{\sigma}} \nabla^\bot C^{\mathrm{ri}}_{\psi,i} (\eta,\xi)$, cf.
\eqref{eq:formof_psi0}, \eqref{eq:formof_phi0}, \eqref{eq:Q_P}, and the estimate \[ \begin{split} &\|D_i(\sigma, \phi_i,\psi_i)C^{\mathrm{ri}}(\eta,\xi)\|_{L^2(\Omega)}\\ &\leq \sqrt{\overline{\sigma}}\sum_{\ell=1}^L\left(|\xi_{i,\ell}|+\frac{|z_\ell|}{|e_\ell|}|\eta_{i,\ell}|\right)\|\nabla\phi_{0,\ell}\|_{L^2(\Omega)} +\frac{1}{\sqrt{\underline{\sigma}}} \sum_{\ell=1}^L\sum_{k=1}^\ell |\eta_{i,k}| \|\nabla^\bot\psi_{0,\ell}\|_{L^2(\Omega)}\,. \end{split} \] \end{itemize} \end{proof} \subsection{Identification of a nonlinear magnetic permeability from magnetic flux measurements} \label{sec:MPP} The magnetic permeability, i.e., the factor relating the magnetic field strength $H$ to the magnetic flux density $B$, often exhibits a nonlinear behaviour. This is the case in particular in the presence of large field strengths as typical for actuator applications. To determine this nonlinear relation from magnetic flux measurements, either a very specific experimental geometry allowing for model simplifications needs to be employed or the full PDE model incorporating Maxwell's equations has to be taken into account, cf. \cite{KKR03} and the references therein. We here follow the latter approach and derive a minimization based formulation for the corresponding inverse problem. \subsubsection{The minimization form of the problem} The magnetic field $H$ and the magnetic flux density $B$ are related by $B =\mu H$, where $\mu$ is the magnetic permeability. In the presence of large field strengths, it exhibits a dependence on the magnetic field strength $|H|$, so $\mu = \mu(|H|)$ and $B =\mu(|H|) H$. The typical experimental setup for determining the $B-H$ curve (see Figure \ref{fig:magn} left) or equivalently the permeability curve, is depicted in Figure \ref{fig:magn} right. \begin{figure}[ht] \begin{center} \includegraphics[width=0.39\textwidth]{HBtyp.eps} \hspace*{0.05\textwidth} \includegraphics[width=0.54\textwidth]{BH_setup_neu.eps} \caption{Typical $B$--$H$-curve shape (left) and measurement setup (right)} \label{fig:magn} \end{center} \end{figure} The current $J_i^{\rm imp}$ impressed by the excitation coil generates a magnetic field $H_i$ and a magnetic flux density $B_i$. They must satisfy Maxwell's equations $$\nabla \cdot B_i = 0, \quad \nabla \times H_i = J_i^{{\rm imp}} \quad \text{ in} \; \Omega\subset \mathbb{R}^3, \quad i=1,\cdots,I$$ and the relation $$B_i = \mu(|H_i|) H_i.$$ For a given sequence of impressed currents $(J_i^{\mathrm{imp}})_{i=1,\dots,I}$, we can measure the magnetic fluxes $\overrightarrow{\Phi} = (\Phi_i)_{i=1,\dots,I}$, with $$\Phi_i = \frac{1}{h} \int_{\Omega_c} B_i \cdot \nu {~\mathrm{d}} x \quad i=1,\dots,I,$$ where $\Omega_c := \Gamma_c \times [0,h]$ is the region covered by the excitation coil, $\Gamma_c$ is its cross section, $h$ is the coil height and $\nu$ is the unit vector parallel to the coil axis, see \cite{KKR03} for a more detailed description of the measurement setup. 
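The equivalence of the two curves mentioned above is immediate from the material law: taking absolute values in $B=\mu(|H|)H$ gives
\[
|B| = \mu(|H|)\,|H|, \qquad\text{hence}\qquad \mu(h) = \frac{B(h)}{h} \quad \text{for \;} h=|H|>0,
\]
where $h \mapsto B(h)$ denotes the $B$--$H$ curve depicted in Figure \ref{fig:magn}.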
By using the vector and scalar potentials $\overrightarrow{A} = (A_i)_{i=1,\dots,I}$, $\overrightarrow{\psi} = (\psi_i)_{i=1,\dots,I}$, with $$B_i = \nabla \times A_i, \quad H_i = \nabla \psi_i + A_i^J, \quad i=1,\dots,I$$ (where $\nabla \times A^J_i = J_i^{{\rm imp}}, i=1,\dots,I$), the problem can be rewritten as \begin{equation} \label{eq:permeability_problem} \left\{ \begin{split} &\sqrt{\mu_i} (\nabla \psi_i + A_i^J) - \frac{1}{\sqrt{\mu_i}} \nabla \times A_i =0 \text{\quad in~} \Omega, \\ &\psi_i|_{\partial \Omega}=0, \quad \nu \times A_i|_{\partial \Omega}=0, \quad C(\overrightarrow{A},\overrightarrow{\psi}) = \overrightarrow{\Phi}, \quad i=1,\dots,I, \end{split} \right. \end{equation} where we abbreviate \begin{equation} \label{eq:mu} \mu_i := \mu (|H_i|) = \mu (|\nabla \psi_i + A_i^J|), \end{equation} the spaces containing $\mu$, $\overrightarrow A$, $\overrightarrow \psi$ are defined by \begin{equation} \label{eq:permeability_mu_A_phi} \begin{split} \mu \in X &:= L^2 ([0,\infty)), \\ (\overrightarrow A, \overrightarrow \psi) \in V &:= H_0 (\curl, \Omega)^I \times H_0^1 (\Omega)^I, \end{split} \end{equation} and $C: V \to Y$ (where $Y := \mathbb{R}^{I}$ is equipped with the maximum norm) is the observation operator defined by \begin{equation} \label{eq:permeability_C} C(\overrightarrow{A}, \overrightarrow{\psi}):= \left( \frac{1}{h} \int_{\Omega_c} B_i \cdot \nu {~\mathrm{d}} x \right)_{i=1,\dots,I} = \left( \frac{1}{h} \int_{\Omega_c} (\nabla \times A_i) \cdot \nu {~\mathrm{d}} x \right)_{i=1,\dots,I}. \end{equation} To apply the theory from Section \ref{sec:Preliminaries}, we define a right inverse $C^{\mathrm{ri}}$ of $C$ in the form \begin{equation} \label{eq:permeability_Cri} C^{\mathrm{ri}} (\overrightarrow \Phi) = (C^{\mathrm{ri}}_A (\overrightarrow \Phi), C^{\mathrm{ri}}_\psi (\overrightarrow \Phi)) = \left( (C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi))_{i=1}^I, (C^{\mathrm{ri}}_{\psi_i} (\overrightarrow \Phi))_{i=1}^I \right) \in V, \end{equation} for all $\overrightarrow \Phi \in \mathbb{R}^I$. Because of the absence of the component $\overrightarrow \psi$ in the right hand side of (\ref{eq:permeability_C}), we can choose \begin{equation} \label{eq:permeability_Cri_psi} C^{\mathrm{ri}}_{\psi_i} (\overrightarrow \Phi)=0, \quad \forall i \in \{1, \dots, I\}. \end{equation} Without loss of generality, we can assume that $\nu = (0,0,1)$ and choose the component $C^{\mathrm{ri}}_A (\overrightarrow \Phi) = (C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi))_{i=1}^I$ in (\ref{eq:permeability_Cri}) as follows \begin{equation} \label{eq:permeability_Cri_A} C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi) (x_1, x_2, x_3) = \left( 0, \frac{\Phi_i}{|\Gamma_c|} x_1, 0 \right)^T, \quad \forall i \in \{1, \dots, I\}, \end{equation} where $|\Gamma_c|$ is the area of $\Gamma_c$. Indeed, $\nabla \times C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi) = \left( 0, 0, \frac{\Phi_i}{|\Gamma_c|} \right)^T$, so that, due to $|\Omega_c| = |\Gamma_c|\, h$, $$C(C^{\mathrm{ri}} (\overrightarrow \Phi))_i = \frac{1}{h} \int_{\Omega_c} \frac{\Phi_i}{|\Gamma_c|} {~\mathrm{d}} x = \frac{1}{h}\, \frac{\Phi_i}{|\Gamma_c|}\, |\Gamma_c|\, h = \Phi_i, \quad i \in \{1, \dots, I\},$$ i.e., $C(C^{\mathrm{ri}}(\overrightarrow \Phi)) = \overrightarrow \Phi$ for all $\overrightarrow \Phi \in \mathbb{R}^I$. \begin{remark} \label{re:permeability_choiceOfCri} It is evident that there are many ways to choose $C^{\mathrm{ri}}$ instead of (\ref{eq:permeability_Cri}), (\ref{eq:permeability_Cri_psi}), (\ref{eq:permeability_Cri_A}). For example, we can choose $\widetilde{C^{\mathrm{ri}}} (\overrightarrow{ \Phi}) = C^{\mathrm{ri}} (\overrightarrow{ \Phi}) + u$ with some $u \in {\mathrm{Ker}} (C)$.
In this paper, we only focus on demonstrating the applicability of our new approach from Section \ref{sec:Preliminaries}, so the question of how to optimally choose $C^{\mathrm{ri}}$ is not considered in more detail here. This remark also applies to the choice of $C^{\mathrm{ri}}$ in (\ref{eq:formof_Cri}), (\ref{eq:formof_psi0}), (\ref{eq:formof_phi0}) in the previous section. \end{remark} Now we rewrite the problem (\ref{eq:permeability_problem}) in the form of (\ref{eq:model_observation_mixed}) as \begin{equation} \label{eq:permeability_problem_mixed} \left\{ \begin{split} &E_i(\mu, \hat A_i, \hat \psi_i) = \sqrt{\mu_i} (\nabla \hat \psi_i + A_i^J) - \frac{1}{\sqrt{\mu_i}} \nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)) = 0 \text{\quad in~} \Omega, \\ &(\mu, \overrightarrow{\hat A}, \overrightarrow{\hat \psi}) \in L^\infty ([0,\infty)) \times {\mathrm{Ker}}(C) = L^\infty ([0,\infty)) \times \{ (\overrightarrow{\hat A}, \overrightarrow{\hat \psi}) \in V: C(\overrightarrow{\hat A}, \overrightarrow{\hat \psi}) = 0 \}, \end{split} \right. \end{equation} where $\overrightarrow{\hat A} = (\hat A_i)_{i=1}^I$, $\overrightarrow{\hat \psi} = (\hat \psi_i)_{i=1}^I$, and $\mu_i$ is defined by \eqref{eq:mu}. Additionally, we consider the cost function $\mathcal{J} (\mu, \overrightarrow{\hat A}, \overrightarrow{\hat \psi}; \overrightarrow \Phi) = \mathcal{Q}_E(\mu, \overrightarrow{\hat A}, \overrightarrow{\hat \psi}; \overrightarrow \Phi)$, \begin{equation} \label{eq:permeability_setting_Q} \mathcal{Q}_E(\mu, \overrightarrow{\hat A}, \overrightarrow{\hat \psi}; \overrightarrow \Phi) = \sum_{i=1}^I \mathcal{Q}_i(\mu, {\hat A}_i, {\hat \psi}_i; \overrightarrow \Phi) \end{equation} where \begin{equation} \label{eq:permeability_setting_Q_i} \begin{split} \mathcal{Q}_i(\mu, {\hat A}_i, {\hat \psi}_i; \overrightarrow \Phi)=\begin{cases}\frac12\|q_i\|_{L^2(\Omega)^3}^2 \mbox{ if } q_i\in L^2(\Omega)^3\\ +\infty\mbox{ else} \end{cases}\\ \mbox{with }q_i= \sqrt{\mu_i} (\nabla \hat \psi_i + A_i^J) - \frac{1}{\sqrt{\mu_i}} \nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)), \end{split} \end{equation} the discrepancy measure \begin{equation} \label{eq:permeability_setting_S} \mathcal{S} (y_1, y_2) = \| y_2 -y_1 \|_{\infty,\mathbb{R}^{I}}, \end{equation} the regularization functionals \begin{equation} \label{eq:permeability_setting_R} \mathcal{R}: X \times V \to \overline{\mathbb{R}}, \qquad \mathcal{R} (\mu, \overrightarrow{\hat A}, \overrightarrow{\hat \psi}) = \frac{1}{2} \left( \| \overrightarrow{\hat A} \|_{H^s(\Omega)^{3I}}^2 + \|\overrightarrow{\hat \psi} \|_{H^s(\Omega)^{I}}^2 \right), \end{equation} where $s>1$ is given, and \begin{equation} \label{eq:permeability_setting_R_tilde} \widetilde{\mathcal{R}}: X \times V \to \overline{\mathbb{R}}, \qquad \widetilde{\mathcal{R}}(\mu,\overrightarrow{\hat A}, \overrightarrow{\hat \psi}) = \max\limits_{i=1,\dots,I}\left\| \mu_i - \frac{1}{2}(\overline{\mu}+\underline{\mu}) \right\|_{L^\infty(\Omega)}, \end{equation} where $\overline{\mu}, \underline{\mu}$ are upper and lower bounds for the permeability ($0<\underline{\mu}<\overline{\mu}$). If the measurement data $\overrightarrow \Phi^\delta = (\Phi_i^\delta)_{i=1}^I$ is obtained with accuracy $\delta$, i.e.
$$\mathcal{S} (\overrightarrow \Phi, \overrightarrow \Phi^\delta) = \| \overrightarrow \Phi^\delta - \overrightarrow \Phi \|_{\infty,\mathbb{R}^I} = \max \{ |\Phi_i^\delta - \Phi_i |: i\in \{1,\dots,I\} \} \le \delta,$$ then the problem (\ref{eq:minIP_noise}) becomes \begin{equation} \label{eq:permeability_minIP_noise} \begin{split} &\begin{split}\min \Bigg\{ \frac{1}{2} \sum_{i=1}^I \int_\Omega \left| \sqrt{\mu_i} (\nabla \hat \psi_i + A_i^J) - \frac{1}{\sqrt{\mu_i}} \nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi^\delta)) \right|^2 {~\mathrm{d}} x \\ + \frac{\alpha}{2} \left( \| \overrightarrow{\hat A} \|_{H^s(\Omega)^{3I}}^2 + \|\overrightarrow{\hat \psi} \|_{H^s(\Omega)^{I}}^2 \right):\end{split} \\ & \qquad (\mu, \overrightarrow{\hat A}, \overrightarrow{\hat \psi}) \in X \times V, \; \| C(\overrightarrow{\hat A}, \overrightarrow{\hat \psi}) \|_{\infty,\mathbb{R}^{I}} \le \tau \delta, \; \underline{\mu} \le \mu_i \le \overline{\mu} \text{\; a.e. in \;} \Omega, \; \forall i \Bigg\} \end{split} \end{equation} with $X,V$ as in \eqref{eq:permeability_mu_A_phi} and $\mu_i$ as in \eqref{eq:mu}. In the next subsection, we will prove well-definedness, stability, and convergence of the minimizers of problem (\ref{eq:permeability_minIP_noise}). \subsubsection{Convergence} The topology $\mathcal{T}$ in Assumption \ref{ass:Kal18} can be chosen as \begin{equation} \label{eq:permeability_topology_T} (\mu^n, \overrightarrow{A}^n, \overrightarrow{\psi}^n) \xrightarrow{\mathcal{T}} (\mu, \overrightarrow{A}, \overrightarrow{\psi}) \Leftrightarrow \left\{ \begin{split} & \begin{split} \mu^n(|H^n_i|) \xrightharpoonup{*} \mu(|H_i|) \text{\; and \;} \frac{1}{\mu^n(|H^n_i|)} \xrightharpoonup{*} \frac{1}{\mu(|H_i|)}\\ \text{\; in\;} L^{\infty}(\Omega), \quad i=1,\dots,I,\end{split} \\ & \overrightarrow{A}^n \to \overrightarrow{A} \text{\; in \;} H(\curl; \Omega)^I, \quad \overrightarrow{\psi}^n \to \overrightarrow{\psi} \text{\; in \;} H^1(\Omega)^I, \\ & \overrightarrow{A}^n \rightharpoonup \overrightarrow{A} \text{\; in \;} H^s(\Omega)^{3I}, \quad \overrightarrow{\psi}^n \rightharpoonup \overrightarrow{\psi} \text{\; in \;} H^s(\Omega)^I, \\ \end{split} \right. \end{equation} for $s>1$ as in \eqref{eq:permeability_setting_R}, where $H^n_i:=\nabla \psi^n_i + A^{J}_i, \; H_i:=\nabla \psi_i + A^J_i, \;i=1,\dots,I$, and the norm $\|\cdot\|_B$ is defined by \begin{equation} \label{eq:permeability_norm_B} \| (\mu, \overrightarrow{A}, \overrightarrow{\psi}) \|_B := \max \limits_{i=1,\dots,I} \| \mu(|H_i|) \|_{L^{\infty}(\Omega)} + \| \overrightarrow{A} \|_{H^s(\Omega)^{3I}} + \| \overrightarrow{\psi} \|_{H^s(\Omega)^I} + \| C(\overrightarrow{A},\overrightarrow{\psi})\|_{\mathbb{R}^I}. \end{equation} Note that convergence in the combined topology \eqref{eq:permeability_topology_T} is different from weak* $L^\infty([0,\infty))$ convergence of $\mu^n$, see the appendix for more details. As a matter of fact, we can only expect $\mu$ to be determined on the union of the ranges of the magnetic fields corresponding to the exact solution, i.e., on $\bigcup_{i=1}^I\{|\nabla\psi_i^\dagger(x)+A^{J}_i(x)|\, : \, x\in\Omega\}=:\mathcal{D}(\overrightarrow{\psi^\dagger})\subseteq[0,\infty)$. This domain is a priori unknown, though, so we consider $\mu$ as a function on $[0,\infty)$ and, according to the first line in \eqref{eq:permeability_topology_T}, trust its reconstruction only on $\mathcal{D}(\overrightarrow{\psi})\subseteq[0,\infty)$, where $\overrightarrow{\psi}$ is the set of reconstructed scalar potentials.
Similarly to conditions (\ref{eq:priori_information_sigma}), (\ref{eq:priori_information_phi_psi}), we also need \begin{equation} \label{eq:permeability_priori_information_mu} \underline{\mu} \le \mu^\dagger(|H^\dagger_i|) \le \overline{\mu}, \quad \text{\; a.e. in \;} \Omega, \quad i\in\{1,\ldots,I\} \end{equation} and \begin{equation} \label{eq:permeability_priori_information_A_psi} \left( \overrightarrow{\hat A}^\dagger,\overrightarrow{\hat \psi}^\dagger \right) \in H^s (\Omega)^{3I} \times H^s (\Omega)^I \end{equation} to verify Assumptions \ref{ass:Maao}, \ref{ass:Maao_newcondition} and obtain Corollary \ref{cor:permeability} as follows. \begin{corollary} \label{cor:permeability} Let (\ref{eq:permeability_priori_information_mu}), (\ref{eq:permeability_priori_information_A_psi}) hold. \begin{itemize} \item[(i)] (Existence of minimizers) For any $\alpha>0$, a minimizer of (\ref{eq:permeability_minIP_noise}) exists. \item[(ii)] (Boundedness) For any sequence $(\overrightarrow \Phi^n)_{n \in \mathbb{N}} \subset \mathbb{R}^I$ with $\overrightarrow \Phi^n \to \overrightarrow \Phi^\delta$ in $\mathbb{R}^I$, the sequence of corresponding regularized minimizers is $\|\cdot\|_B$ bounded for $\|\cdot\|_B$ as in (\ref{eq:permeability_norm_B}). \item[(iii)] (Convergence) Assume additionally that the choice of $\alpha$ satisfies \begin{equation*} \alpha(\delta, \overrightarrow \Phi^\delta) \to 0 \quad \text{\; and \;} \quad \frac{\delta^2}{\alpha(\delta, \overrightarrow \Phi^\delta)} \le c_0, \qquad \text{\; as \;} \delta \to 0. \end{equation*} Then as $\delta \to 0$, $\overrightarrow \Phi^\delta \to \overrightarrow \Phi$, the family of minimizers $\left( \mu^\delta_{\alpha(\delta, \overrightarrow \Phi^\delta)}, \overrightarrow{\hat A}^\delta_{\alpha(\delta, \overrightarrow \Phi^\delta)}, \overrightarrow{\hat \psi}^\delta_{\alpha(\delta, \overrightarrow \Phi^\delta)} \right)$ converges $\mathcal{T}$ subsequentially to a solution $(\mu^\dagger, \overrightarrow{\hat A}^\dagger,\overrightarrow{\hat \psi}^\dagger)$ of the problem (\ref{eq:permeability_problem_mixed}) with exact data $\overrightarrow \Phi$. \end{itemize} \end{corollary} \begin{proof} The proof is very similar to the proof of Corollary \ref{cor:CEM-EIT}. Here we only verify the $\mathcal{T}$ lower semicontinuity of $\mathcal{Q}_E$ in Assumption \ref{ass:Maao}(iv) and Assumption \ref{ass:Maao_newcondition}. \begin{itemize} \item The $\mathcal{T}$ lower semicontinuity of $\mathcal{Q}_E$ is verified by the continuity of $Q_i$ according to \eqref{eq:permeability_setting_Q_i}.
Indeed, we have \begin{equation*} \begin{split} & |Q_i (\mu^n, \hat A^n_i, \hat \psi^n_i; \overrightarrow \Phi) - Q_i (\mu, \hat A_i, \hat \psi_i; \overrightarrow \Phi)| \\ & \; = \Bigg| \frac{1}{2} \underbrace{\int_\Omega \left( \mu^n_i |\nabla \hat \psi^n_i + A^J_i|^2 - \mu_i |\nabla \hat \psi_i + A_i^J|^2 \right){~\mathrm{d}} x}_{I_1} \\ & \quad + \frac{1}{2} \underbrace{\int_\Omega \left( \frac{1}{\mu^n_i} |\nabla \times (\hat A^n_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi))|^2 - \frac{1}{\mu_i} |\nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi))|^2 \right) {~\mathrm{d}} x}_{I_2} \\ & \quad - \underbrace{\int_\Omega \left( (\nabla \hat \psi^n_i + A^J_i) \cdot \nabla \times (\hat A^n_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)) - (\nabla \hat \psi_i + A^J_i) \cdot \nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)) \right) {~\mathrm{d}} x}_{I_3} \Bigg|, \end{split} \end{equation*} where \begin{equation*} \begin{split} |I_1| & = \Big| \int_\Omega (\mu^n_i - \mu_i) |\nabla \hat \psi_i + A^J_i|^2 {~\mathrm{d}} x + \int_\Omega \mu^n_i \left( |\nabla \hat \psi^n_i + A^J_i|^2 - |\nabla \hat \psi_i + A^J_i|^2 \right) {~\mathrm{d}} x \Big| \\ & \le \Big| \int_\Omega (\mu^n_i - \mu_i) |\nabla \hat \psi_i + A^J_i|^2 {~\mathrm{d}} x \Big| \\ & \qquad \qquad + \overline{\mu} \Big| \int_\Omega ( \nabla \hat \psi^n_i + \nabla \hat \psi_i + 2 A^J_i) \cdot (\nabla \hat \psi^n_i - \nabla \hat \psi_i) {~\mathrm{d}} x \Big| \\ & \le \Big| \int_\Omega (\mu^n_i - \mu_i) |\nabla \hat \psi_i + A^J_i|^2 {~\mathrm{d}} x \Big| \\ & \qquad \qquad + \overline{\mu} \| \nabla \hat \psi^n_i + \nabla \hat \psi_i + 2 A^J_i \|_{L^2(\Omega)^3} \| \nabla \hat \psi^n_i - \nabla \hat \psi_i \|_{L^2(\Omega)^3} \\ & \to 0, \text{\; as \;} n \to \infty, \end{split} \end{equation*} since $\mu^n_i \xrightharpoonup{*} \mu_i$ in $L^\infty (\Omega)$, $|\nabla \hat \psi_i + A^J_i|^2 \in L^1 (\Omega)$, and $\nabla \hat \psi^n_i \to \nabla \hat \psi_i$ in $L^2(\Omega)^3$; $I_2 \to 0$ in a similar way; and the absolute value of the last integral \begin{equation*} \begin{split} |I_3| & \le \Bigg| \int_\Omega (\nabla \hat \psi^n_i - \nabla \hat \psi_i) \cdot \nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)) {~\mathrm{d}} x \Bigg| \\ & \quad \qquad + \Bigg| \int_\Omega (\nabla \hat \psi^n_i + A^J_i) \cdot \left( \nabla \times (\hat A^n_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)) - \nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi)) \right) {~\mathrm{d}} x \Bigg| \\ & \le \| \nabla \hat \psi^n_i - \nabla \hat \psi_i \|_{L^2(\Omega)^3} \|\nabla \times (\hat A_i + C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi))\|_{L^2(\Omega)^3} \\ & \quad \qquad + \|\nabla \hat \psi^n_i + A^J_i\|_{L^2(\Omega)^3} \| \nabla \times \hat A^n_i - \nabla \times \hat A_i\|_{L^2(\Omega)^3} \\ & \to 0, \text{\; as \;} n \to \infty, \end{split} \end{equation*} since $\nabla \hat \psi^n_i \to \nabla \hat \psi_i$ in $L^2(\Omega)^3$, and $\nabla \times \hat A^n_i \to \nabla \times \hat A_i$ in $L^2(\Omega)^3$. \item Assumption \ref{ass:Maao_newcondition} is verified by Remark \ref{rem:Ass3} with $W=L^2(\Omega)^3$, \[ D_i(\mu, \hat A_i, \hat \psi_i) C^{\mathrm{ri}}(\overrightarrow \Phi) = - \frac{1}{\sqrt{\mu_i}} \nabla \times C^{\mathrm{ri}}_{A_i} (\overrightarrow \Phi) = - \frac{1}{\sqrt{\mu_i}|\Gamma_c|} (0,0,1)^T \Phi_i \] cf. \eqref{eq:permeability_Cri_A}. 
\end{itemize} \end{proof} \subsection{Localization of sound sources from microphone array measurements} \label{sec:SoundSources} The problem of localizing sound sources from remote measurements of the sound pressure arises in a multitude of applications, such as failure diagnosis and monitoring, as well as sound design or noise reduction tasks. Under the simplifying assumption of unperturbed sound propagation in free space, it basically reduces to a signal processing problem (more precisely, deconvolution with respect to the free space Green's function for the Helmholtz equation) and can be solved by so-called beamforming methods and refined variants thereof, see, e.g., \cite{Mueller2002,Brooks2004,Sijtsma2009}. In realistic experimental scenarios, more complicated geometries, in particular bounded domains with combinations of reflecting and partially absorbing wall parts, need to be taken into account by considering the wave or Helmholtz equation with appropriate boundary conditions as a model, cf., e.g., \cite{KKG18,Schumacher03}. \subsubsection{The minimization form of the problem} Acoustic wave propagation in a linear low amplitude regime is governed by the well-known wave equation \begin{equation}\label{wave} \frac{1}{c_0^2}p_{tt}-\Delta p = \sigma \end{equation} where $c_0$ is the speed of sound, $p$ is the acoustic pressure and $\sigma=\sigma(x)$ represents the distributed sound sources. Here, as in the previous sections, we aim at formulating the problem as a system of first order PDEs and therefore go back to the (linearized versions of the) fundamental physical laws governing acoustic wave propagation, \[ \left\{ \begin{split} &\mbox{linearized conservation of momentum: }\ \varrho_0 v_t + \nabla p_\sim = f, \\ &\mbox{linearized conservation of mass: }\ \varrho_{\sim t} + \varrho_0 \nabla \cdot v = g, \\ &\mbox{linearized equation of state: }\ \varrho_\sim = \tfrac{1}{c_0^2} p_\sim, \end{split} \right. \] which can be rephrased as \begin{equation} \label{eq:ss_linear_acoustic} \left\{ \begin{split} & \varrho_0 v_t + \nabla p_\sim = f, \\ & \tfrac{1}{c_0^2} p_{\sim t} + \varrho_0 \nabla \cdot v = g, \end{split} \right. \end{equation} where (with the subscripts $_\sim$ and $_0$ denoting the fluctuating part and the constant mean value, respectively) \begin{itemize} \item $\varrho = \varrho_0 + \varrho_\sim$ is the mass density, \item $v$ is the acoustic particle velocity, \item $p = p_0 + p_\sim$ is the pressure, \item $c_0$ is the speed of sound. \end{itemize} Note that the second order wave equation \eqref{wave} can be derived from this by subtracting the divergence of the first line from the time derivative of the second line, thus eliminating the velocity. Via the identity $\sigma = g_t-\nabla\cdot f$, the functions $g$ and $f$ represent the searched-for sound sources. As a matter of fact, in the identification process below, one may skip one of the two functions $f$ or $g$ -- preferably the latter, since $\sigma= -\nabla\cdot f$ allows one to represent nonsmooth sources while still dealing with an $L^p$ function $f$, and also since it appears physically more meaningful to regard a sound source as giving rise to a momentum rather than to a change of mass density.
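For the reader's convenience, the derivation of \eqref{wave} just mentioned reads, explicitly,
\[
\partial_t \Big( \tfrac{1}{c_0^2} p_{\sim t} + \varrho_0 \nabla \cdot v \Big) - \nabla \cdot \big( \varrho_0 v_t + \nabla p_\sim \big) = \tfrac{1}{c_0^2} p_{\sim tt} - \Delta p_\sim = g_t - \nabla \cdot f = \sigma,
\]
where the velocity terms $\varrho_0 \nabla \cdot v_t$ cancel.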
Imposing the above equations \eqref{eq:ss_linear_acoustic} on a domain $\Omega\subseteq\mathbb{R}^d$ with $d\in\{2,3\}$ and taking the Fourier transform with respect to time $t$, we get \begin{equation} \label{eq:ss_linear_acoustic_ft} \left\{ \begin{split} \varrho_0 {\imath} \omega v^{\mathrm{ft}} + \nabla p^{\mathrm{ft}} = f^{\mathrm{ft}} & \text{\; in \;} \Omega, \\ \tfrac{1}{c_0^2} {\imath} \omega p^{\mathrm{ft}} + \varrho_0 \nabla \cdot v^{\mathrm{ft}} = g^{\mathrm{ft}} & \text{\; in \;} \Omega, \end{split} \right. \end{equation} with $v^{\mathrm{ft}} := \mathcal{F}^t v$, $p^{\mathrm{ft}} := \mathcal{F}^t p_\sim$, $f^{\mathrm{ft}} := \mathcal{F}^t f$, $g^{\mathrm{ft}} := \mathcal{F}^t g$. The boundary of $\Omega$ is assumed to consist of two parts, $\partial \Omega = \Gamma_r \cup \Gamma_a$, where $\Gamma_r$ is the sound hard part of the boundary, and $\Gamma_a = \partial \Omega \backslash \Gamma_r$ either is the set of absorbing walls, see \cite{PTTW18}, or represents a nonreflecting boundary that is used to enable truncation of the computational domain, see, e.g., the classical reference \cite{EngquistMajda1977} and the citing literature. Correspondingly, we impose the boundary conditions \begin{equation} \label{eq:ss_linear_acoustic_boundaryCondition} \left\{ \begin{split} \varrho_0 v^{\mathrm{ft}} \cdot \nu + \kappa p^{\mathrm{ft}} = 0 & \text{\; on \;} \Gamma_a, \\ v^{\mathrm{ft}} \cdot \nu = 0 & \text{\; on \;} \Gamma_r, \end{split} \right. \end{equation} where $\kappa \in \mathbb{R}$ is a positive constant depending on the properties of the walls on $\Gamma_a$; typically $\kappa=1/c_0$ in case of computational absorbing boundary conditions (ABC). By separating the real and imaginary parts of (\ref{eq:ss_linear_acoustic_ft}) with \begin{equation*} f^{\mathrm{ft}} = f_\Re + {\imath} f_\Im, \qquad g^{\mathrm{ft}} = g_\Re + {\imath} g_\Im, \qquad p^{\mathrm{ft}} = p_\Re + {\imath} p_\Im, \qquad v^{\mathrm{ft}} = v_\Re + {\imath} v_\Im, \end{equation*} we can see that the model for this problem is $E \left( f_\Re, f_\Im, g_\Re, g_\Im, p_\Re, p_\Im, v_\Re, v_\Im \right) = 0$ with \begin{equation} \label{eq:ss_model} E \left( f_\Re, f_\Im, g_\Re, g_\Im, p_\Re, p_\Im, v_\Re, v_\Im \right) = \begin{pmatrix} - \varrho_0 \omega v_\Im + \nabla p_\Re - f_\Re \\ \varrho_0 \omega v_\Re + \nabla p_\Im - f_\Im \\ - \frac{1}{c_0^2} \omega p_\Im + \varrho_0 \nabla \cdot v_\Re - g_\Re \\ \frac{1}{c_0^2} \omega p_\Re + \varrho_0 \nabla \cdot v_\Im - g_\Im \end{pmatrix}. \end{equation} This allows us to work exclusively in spaces of real valued functions and thus to avoid the potential trouble arising from nondifferentiability of the squared absolute value function $z\mapsto|z|^2$ in $\mathbb{C}$. Taking again the $L^2(\Omega)$ norm of the model residual to define the cost function (see \eqref{eq:ss_QP}, \eqref{Q1Q2Q3Q4} below) suggests the function space setting $(p_\Re, p_\Im) \in H^1(\Omega)^2$ and $(v_\Re, v_\Im) \in H(\dive, \Omega)^2$.
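To illustrate the splitting into real and imaginary parts, the first row of \eqref{eq:ss_model} is simply the real part of the first equation in \eqref{eq:ss_linear_acoustic_ft}:
\[
\Re\big( \varrho_0 {\imath} \omega (v_\Re + {\imath} v_\Im) + \nabla (p_\Re + {\imath} p_\Im) - (f_\Re + {\imath} f_\Im) \big) = - \varrho_0 \omega v_\Im + \nabla p_\Re - f_\Re;
\]
the remaining rows arise analogously from the corresponding imaginary parts and from the second equation.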
Combining with the boundary conditions (\ref{eq:ss_linear_acoustic_boundaryCondition}) and noting that $v_\Re \cdot \nu, v_\Im \cdot \nu \in H^{-1/2} (\partial \Omega)$ (see \cite[Theorem 3.24]{Mon03}), we get the definition of the space $V$ containing the state $(p_\Re, p_\Im, v_\Re, v_\Im)$ as \begin{equation} \label{eq:ss_V0} \begin{split} V &\subseteq V_0:=\Big\{ (p_\Re, p_\Im, v_\Re, v_\Im) \in H^1 (\Omega)^2 \times H (\dive,\Omega)^2: \\ & \qquad \qquad \varrho_0 v_\Re \cdot \nu + \kappa p_\Re = 0, \quad \varrho_0 v_\Im \cdot \nu + \kappa p_\Im = 0 \text{\; in \;} H^{-1/2}(\Gamma_a), \\ & \qquad \qquad v_\Re \cdot \nu = 0, \quad v_\Im \cdot \nu = 0 \text{\; in \;} H^{-1/2} (\Gamma_r) \Big\}. \end{split} \end{equation} Microphone array measurements of the acoustic pressure are modelled by point values $p(x_\ell)$, $\ell\in\{1,\ldots,L\}$, where $x_\ell\in\Omega$ denotes the (known) location of the $\ell$-th microphone. Thus, the inverse problem under consideration is to find $(f_\Re, f_\Im, g_\Re, g_\Im) \in X$ with \begin{equation} \label{eq:ss_X0} X \subseteq X_0:=L^2 (\Omega)^d \times L^2 (\Omega)^d \times L^2 (\Omega) \times L^2 (\Omega), \end{equation} from the data \begin{equation} \label{eq:ss_C} y=C(p_\Re, p_\Im, v_\Re, v_\Im):= (p_\Re (x_\ell),p_\Im (x_\ell))_{\ell=1}^L \in \mathbb{R}^{2L}. \end{equation} To guarantee sufficient regularity of $\hat p_\Re, \hat p_\Im$ at the measurement points so that the observation operator $C:V\to \mathbb{R}^{2L}$ is bounded -- note that $H^1$ functions in general do not admit point evaluation -- we assume that the support of the sources is separated from the measurement domain, i.e., \begin{eqnarray} &&\Omega_{ms}\subseteq\Omega\,,\ \Omega_{ms}\mbox{ open } \label{eq:ss_X}\\ && X=\{(f_\Re, f_\Im, g_\Re, g_\Im)\in X_0\ : \ \mbox{suppess}(h)\,\subseteq\Omega\setminus\Omega_{ms}\,, \ h\in\{f_\Re,f_\Im,g_\Re,g_\Im\}\,\}\,, \nonumber\\ && V=\{(p_\Re, p_\Im, v_\Re, v_\Im)\in V_0\ : \ p_\Re\vert_{\Omega_{ms}},\, p_\Im\vert_{\Omega_{ms}} \,\in C(\Omega_{ms})\}\,. \label{eq:ss_V} \end{eqnarray} Indeed, for Fourier transformed solutions $p$ of \eqref{wave}, $H^2$ smoothness, and therefore -- via Sobolev's embedding -- continuity, follows immediately from interior regularity results for the homogeneous Helmholtz equation on $\Omega_{ms}$; hence the exact solution of the inverse problem is indeed contained in $V$. To solve the problem using the theory from Section \ref{sec:Preliminaries}, we choose a right inverse $C^{\mathrm{ri}}$ of $C$ as \begin{equation} \label{eq:ss_Cri} C^{\mathrm{ri}}: \mathbb{R}^{2L} \to V, \qquad C^{\mathrm{ri}} (y) = \left( \sum_{\ell=1}^L y_{\Re,\ell} p_{0\Re,\ell}, \sum_{\ell=1}^L y_{\Im,\ell} p_{0\Im,\ell}, 0, 0 \right), \end{equation} where $y=(y_{\Re,1}, y_{\Im,1}, \dots, y_{\Re,L}, y_{\Im,L})\in\mathbb{R}^{2L}$ and the functions $p_{0\Re,\ell} \in H^1 (\Omega)$, $p_{0\Im,\ell} \in H^1 (\Omega)$ are chosen such that $(p_{0\Re,\ell}, p_{0\Im,\ell}, 0, 0) \in V$, and $p_{0\Re,\ell} (x_j) = p_{0\Im,\ell} (x_j) = \delta_{\ell j}$ for $\ell,j\in\{1,\ldots,L\}$.
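One way to realize such interpolating functions numerically is via radial-basis cardinal functions. The following minimal sketch rests on illustrative assumptions only (a one-dimensional slice of the domain, Gaussian bumps, hypothetical microphone positions); it is not the construction of the paper, but shows how the interpolation property $p_{0,\ell}(x_j)=\delta_{\ell j}$ can be enforced by solving a small linear system:
\begin{verbatim}
import numpy as np

# Hypothetical microphone positions x_1, ..., x_L (illustrative values).
x_mic = np.array([0.2, 0.45, 0.7])
L = len(x_mic)

def bump(x, c, w=0.05):
    # Smooth Gaussian bump centred at c; the width w is a free choice.
    return np.exp(-((x - c) / w) ** 2)

# A[j, k] = bump_k(x_j); the columns of A^{-1} give cardinal coefficients.
A = bump(x_mic[:, None], x_mic[None, :])
coef = np.linalg.solve(A, np.eye(L))

def p0(x, l):
    # Cardinal function number l: equals 1 at x_l, 0 at the other x_j.
    return bump(np.asarray(x, float)[:, None], x_mic[None, :]) @ coef[:, l]

print(np.round(p0(x_mic, 0), 12))   # ~ [1, 0, 0]
\end{verbatim}
In the actual function space setting, the $p_{0\Re,\ell}, p_{0\Im,\ell}$ must in addition satisfy $(p_{0\Re,\ell}, p_{0\Im,\ell},0,0)\in V$, i.e. be compatible with the boundary conditions, which the sketch above does not enforce.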
Now we rewrite our problem in the form of (\ref{eq:model_observation_mixed}) as \begin{equation} \label{eq:ss_model_mixed} \left\{ \begin{split} - \varrho_0 \omega v_\Im + \nabla \left( \hat p_\Re + \sum_{\ell=1}^L y_{\Re,\ell} p_{0\Re,\ell} \right) - f_\Re = 0 \\ \varrho_0 \omega v_\Re + \nabla \left( \hat p_\Im + \sum_{\ell=1}^L y_{\Im,\ell} p_{0\Im,\ell} \right) - f_\Im = 0 \\ - \frac{1}{c_0^2} \omega \left( \hat p_\Im + \sum_{\ell=1}^L y_{\Im,\ell} p_{0\Im,\ell} \right) + \varrho_0 \nabla \cdot v_\Re - g_\Re = 0 \\ \frac{1}{c_0^2} \omega \left( \hat p_\Re + \sum_{\ell=1}^L y_{\Re,\ell} p_{0\Re,\ell} \right) + \varrho_0 \nabla \cdot v_\Im - g_\Im = 0 \\ (f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im) \in X \times {\mathrm{Im}} (C), \end{split} \right. \end{equation} where ${\mathrm{Im}} (C) = \{ (\hat p_\Re, \hat p_\Im, v_\Re, v_\Im) \in V: \hat p_{\Re} (x_\ell) = \hat p_{\Im} (x_\ell) = 0, \forall \ell\in \{1,\dots, L \} \}$. The cost function $\mathcal{J} = \mathcal{Q}_E$ is chosen as \begin{equation} \label{eq:ss_QP} \begin{split} & \mathcal{Q}_E \left( f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im; y \right) \\ & \qquad = \mathcal{Q}_1 (f_\Re, \hat p_\Re, v_\Im, y_\Re) + \mathcal{Q}_2 (f_\Im, \hat p_\Im, v_\Re, y_\Im) \\ & \qquad + \mathcal{Q}_3 (g_\Re, \hat p_\Im, v_\Re, y_\Im) + \mathcal{Q}_4 (g_\Im, \hat p_\Re, v_\Im, y_\Re) \end{split} \end{equation} where $y_\Re = (y_{\Re,\ell})_{\ell=1}^L$, $y_\Im = (y_{\Im,\ell})_{\ell=1}^L$ and \begin{equation}\label{Q1Q2Q3Q4} \begin{split} &\mathcal{Q}_{1/2} (f_{\Re/\Im}, \hat p_{\Re/\Im}, v_{\Im/\Re}, y_{\Re/\Im}) :=\begin{cases}\frac12\|q_{1/2}\|_{L^2(\Omega)^d}^2 \mbox{ if } q_{1/2}\in L^2(\Omega)^d\\ +\infty\mbox{ else} \end{cases} \\ &\mathcal{Q}_{3/4} (g_{\Re/\Im}, \hat p_{\Im/\Re}, v_{\Re/\Im}, y_{\Im/\Re}) :=\begin{cases}\frac12\|q_{3/4}\|_{L^2(\Omega)}^2 \mbox{ if } q_{3/4}\in L^2(\Omega)\\ +\infty\mbox{ else} \end{cases} \\ \mbox{with } &q_1:= - \varrho_0 \omega v_\Im + \nabla \left( \hat p_\Re + \sum_{\ell=1}^L y_{\Re,\ell} p_{0\Re,\ell} \right) - f_\Re\\ &q_2:=\varrho_0 \omega v_\Re + \nabla \left( \hat p_\Im + \sum_{\ell=1}^L y_{\Im,\ell} p_{0\Im,\ell} \right) - f_\Im\\ &q_3:= - \frac{1}{c_0^2} \omega \left( \hat p_\Im + \sum_{\ell=1}^L y_{\Im,\ell} p_{0\Im,\ell} \right) + \varrho_0 \nabla \cdot v_\Re - g_\Re\\ &q_4:= \frac{1}{c_0^2} \omega \left( \hat p_\Re + \sum_{\ell=1}^L y_{\Re,\ell} p_{0\Re,\ell} \right) + \varrho_0 \nabla \cdot v_\Im - g_\Im. \end{split} \end{equation} Thus, differently from the previous two examples, the number of measurements is not reflected in the number of terms in the cost functional. The measurements are instead imposed via the requirement that $\hat{p}_\Re$ and $\hat{p}_\Im$ vanish (or are smaller than $\tau\delta$ in absolute value) at the measurement points $x_\ell$.
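To make the structure of the cost function concrete, the residuals $q_1,\dots,q_4$ and the value of $\mathcal{Q}_E$ can be assembled on a grid. The following sketch uses a one-dimensional toy reduction (so that both $\nabla$ and $\nabla\cdot$ become $\partial_x$), placeholder fields and arbitrary parameter values; all of these are our illustrative assumptions, not data from the paper:
\begin{verbatim}
import numpy as np

n = 200
x, h = np.linspace(0.0, 1.0, n, retstep=True)
rho0, c0, omega = 1.2, 343.0, 2 * np.pi * 500.0

def dx(u):
    # d/dx plays the role of both grad and div in this 1-D reduction.
    return np.gradient(u, h)

# Placeholder total pressures (hat p plus the measurement lift lumped
# together), velocities and sources; in practice these are FEM fields.
pR, pI = np.sin(np.pi * x), np.zeros(n)
vR, vI = np.zeros(n), np.cos(np.pi * x)
fR = fI = gR = gI = np.zeros(n)

q1 = -rho0 * omega * vI + dx(pR) - fR
q2 =  rho0 * omega * vR + dx(pI) - fI
q3 = -(omega / c0 ** 2) * pI + rho0 * dx(vR) - gR
q4 =  (omega / c0 ** 2) * pR + rho0 * dx(vI) - gI

# Q_E: half the sum of the squared L^2 norms of the four residuals.
QE = 0.5 * h * sum(np.sum(q * q) for q in (q1, q2, q3, q4))
print(QE)
\end{verbatim}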
The choices for the regularization functional $\mathcal{R}$ and the discrepancy $\mathcal{S}$ are \begin{equation} \label{eq:ss_S} \mathcal{S} (y,z) = \| z - y \|_{\infty,\mathbb{R}^{2L}}, \end{equation} \begin{equation} \label{eq:ss_R} \begin{split} & \mathcal{R} \left( f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im \right) \\ & \qquad = \frac{1}{2} \| f_\Re \|^2_{L^2(\Omega)^d} + \frac{1}{2} \| f_\Im \|^2_{L^2(\Omega)^d} + \frac{1}{2} \| g_\Re \|^2_{L^2(\Omega)} + \frac{1}{2} \| g_\Im \|^2_{L^2(\Omega)} \\ & \qquad \qquad + \frac{1}{2} \| \hat p_\Re \|^2_{L^2(\Omega)} + \frac{1}{2} \| \hat p_\Im \|^2_{L^2(\Omega)} + \frac{1}{2} \| v_\Re \|^2_{L^2(\Omega)^d} + \frac{1}{2} \| v_\Im \|^2_{L^2(\Omega)^d}. \end{split} \end{equation} As a matter of fact, boundedness of the $L^2$ norms of $\hat p_\Re$, $\hat p_\Im$, $v_\Re$, $v_\Im$, combined with boundedness of $\mathcal{Q}_E$, will allow us to bound higher order norms of these states. Also note that here the problem of needing the regularization term in order to guarantee existence of a minimizer does not arise, as opposed to the two previous examples. This is essentially due to the linearity of the inverse problem under consideration here. Quite often in practice, sources are supported on a set of points or along lines in two or three dimensional space. In order to account for this we promote sparsity of the recovered sources by using the functional \begin{equation} \label{eq:ss_Rtilde} \begin{split} &\widetilde{\mathcal{R}} (f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im) \\ &= \| \nabla \cdot f_\Re \|_{\mathcal{M}} + \| \nabla \cdot f_\Im \|_{\mathcal{M}} + \| \nabla g_\Re \|_{\mathcal{M}} + \| \nabla g_\Im \|_{\mathcal{M}}, \end{split}\end{equation} where $\mathcal{M} = C_b(\Omega)^*$ is the space of Radon measures on $\Omega$, cf. \cite{BrediesPikkarainen13,CasasClasonKunisch12,PTTW18}. Again, instead of the exact data $y=(y_{\Re,1}, y_{\Im,1}, \dots, y_{\Re,L}, y_{\Im,L})$, we only have a noisy version $y^\delta = (y^\delta_{\Re,1}, y^\delta_{\Im,1}, \dots, y^\delta_{\Re,L}, y^\delta_{\Im,L})$ with accuracy $\delta$ in the sense of $$\mathcal{S} (y, y^\delta) = \| y^\delta - y \|_{\infty,\mathbb{R}^{2L}} \le \delta$$ and the regularized problem is to find \begin{equation} \label{eq:ss_model_mixed_noisy} \begin{split} \argmin & \Big\{ \mathcal{Q}_E \left( f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im; y^\delta \right) + \alpha \mathcal{R} \left( f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im \right) \Big \},\\ & \text{\; s.t. \;} \left\{ \begin{array} {l} \left( f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im \right) \in X \times V, \\ \max_{\ell\in \{1, \dots, L\}} \{ |\hat p_\Re (x_\ell)|, |\hat p_\Im (x_\ell)| \} \le \tau \delta, \\ \widetilde{\mathcal{R}} (f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im) \le \rho, \end{array} \right. \end{split} \end{equation} where $\rho>0$ is a given constant (not to be confused with the mass density $\varrho$ here) such that the exact solution satisfies \begin{equation} \label{eq:ss_priori_information_fg} \| \nabla \cdot f_\Re^\dagger \|_{\mathcal{M}} + \| \nabla \cdot f_\Im^\dagger \|_{\mathcal{M}} + \| \nabla g_\Re^\dagger \|_{\mathcal{M}} + \| \nabla g_\Im^\dagger \|_{\mathcal{M}} \le \rho \end{equation} and $\tau > 1$ is fixed.
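For intuition on the sparsity constraint: whenever $\nabla\cdot f$ happens to be an integrable function, its Radon norm coincides with its $L^1$ norm, so a simple grid surrogate for one of the terms in $\widetilde{\mathcal{R}}$ can be sketched as follows (toy data and a finite-difference divergence; one possible discretization among many, chosen here only for illustration):
\begin{verbatim}
import numpy as np

n = 64
x, h = np.linspace(0.0, 1.0, n, retstep=True)
X, Y = np.meshgrid(x, x, indexing="ij")
f1 = np.exp(-50 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))  # toy component f_x
f2 = np.zeros_like(f1)                                # toy component f_y

# Discrete divergence and its L^1 norm as a surrogate for ||div f||_M.
div_f = np.gradient(f1, h, axis=0) + np.gradient(f2, h, axis=1)
print(np.abs(div_f).sum() * h * h)
\end{verbatim}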
\subsubsection{Convergence} To achieve convergence results, we use the a priori information \eqref{eq:ss_priori_information_fg} and choose the topology $\mathcal{T}$ and the norm $\| \cdot \|_B$ as \begin{equation} \label{eq:ss_topologyT} \begin{split} & (f_\Re^n, f_\Im^n, g_\Re^n, g_\Im^n, \hat p_\Re^n, \hat p_\Im^n, v_\Re^n, v_\Im^n) \xrightarrow{\mathcal{T}} (f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im) \\ & \qquad \Leftrightarrow \left\{ \begin{array} {l} f_\Re^n \rightharpoonup f_\Re, \quad f_\Im^n \rightharpoonup f_\Im \quad \text{\; in \;} L^2(\Omega)^d, \\ g_\Re^n \rightharpoonup g_\Re, \quad g_\Im^n \rightharpoonup g_\Im \quad \text{\; in \;} L^2(\Omega), \\ \hat p_\Re^n \rightharpoonup \hat p_\Re, \quad \hat p_\Im^n \rightharpoonup \hat p_\Im \quad \text{\; in \;} H^1(\Omega), \\ v_\Re^n \rightharpoonup v_\Re, \quad v_\Im^n \rightharpoonup v_\Im \quad \text{\; in \;} H(\dive, \Omega), \\ \nabla \cdot f_\Re^n \xrightharpoonup{*} \nabla \cdot f_\Re, \quad \nabla \cdot f_\Im^n \xrightharpoonup{*} \nabla \cdot f_\Im \quad \text{\; in \;} \mathcal{M}, \\ \nabla g_\Re^n \xrightharpoonup{*} \nabla g_\Re, \quad \nabla g_\Im^n \xrightharpoonup{*} \nabla g_\Im \quad \text{\; in \;} \mathcal{M}, \\ \left( \hat p_\Re^n (x_\ell), \hat p_\Im^n (x_\ell) \right)_{\ell=1}^L \to \left( \hat p_\Re (x_\ell), \hat p_\Im (x_\ell) \right)_{\ell=1}^L\quad \text{\; in \;} \mathbb{R}^{2L}, \end{array} \right. \end{split} \end{equation} \begin{equation} \label{eq:ss_normB} \begin{split} & \| (f_\Re, f_\Im, g_\Re, g_\Im, \hat p_\Re, \hat p_\Im, v_\Re, v_\Im) \|_B \\ & \qquad = \| f_\Re \|_{L^2(\Omega)^d} + \| f_\Im \|_{L^2(\Omega)^d} + \| g_\Re \|_{L^2(\Omega)} + \| g_\Im \|_{L^2(\Omega)} \\ & \qquad \quad + \| \nabla \cdot f_\Re \|_{\mathcal{M}} + \| \nabla \cdot f_\Im \|_{\mathcal{M}} + \| \nabla g_\Re \|_{\mathcal{M}} + \| \nabla g_\Im \|_{\mathcal{M}} \\ & \qquad \quad + \| \hat p_\Re \|_{H^1(\Omega)} + \| \hat p_\Im \|_{H^1(\Omega)} + \| v_\Re \|_{H(\dive, \Omega)} + \| v_\Im \|_{H(\dive,\Omega)} \\ & \qquad \quad + \| \left( \hat p_\Re (x_\ell), \hat p_\Im (x_\ell) \right)_{\ell=1}^L \|_{\mathbb{R}^{2L}}. \end{split} \end{equation} \begin{corollary} \label{cor:ss} Let (\ref{eq:ss_priori_information_fg}) hold. \begin{itemize} \item[(i)] (Existence of minimizers) For any $\alpha>0$, a minimizer of (\ref{eq:ss_model_mixed_noisy}) exists. \item[(ii)] (Boundedness) For any sequence $(y^n)_{n \in \mathbb{N}} \subset \mathbb{R}^{2L}$ with $y^n \to y^\delta$ in $\mathbb{R}^{2L}$, the sequence of corresponding regularized minimizers is bounded with respect to $\|\cdot\|_B$. \item[(iii)] (Convergence) Assume additionally that the choice of $\alpha$ satisfies \begin{equation*} \alpha(\delta, y^\delta) \to 0 \quad \text{\; and \;} \quad \frac{\delta^2}{\alpha(\delta, y^\delta)} \le c_0, \qquad \text{\; as \;} \delta \to 0. \end{equation*} Then as $\delta \to 0$, $y^\delta \to y$, the family of minimizers of (\ref{eq:ss_model_mixed_noisy}) converges $\mathcal{T}$-subsequentially to a solution of problem (\ref{eq:ss_model_mixed}) with the exact data $y$. \end{itemize} \end{corollary} \begin{proof} We check each item of Assumptions \ref{ass:Maao} and \ref{ass:Maao_newcondition} below. \begin{itemize} \item Assumption \ref{ass:Maao} (i) follows from (\ref{eq:ss_priori_information_fg}) and Assumption \ref{ass:Maao} (ii) from the definition of $\mathcal{R}$ (\ref{eq:ss_R}).
\item The $\|\cdot\|_B$ boundedness of $L_c$ follows from the definitions of $\mathcal{Q}_E$, $\mathcal{R}$, $\widetilde{\mathcal{R}}$, $\mathcal{S}$. The $\mathcal{T}$ compactness of $L_c$ follows from weak compactness of bounded sets in Hilbert spaces and weak* compactness of bounded subsets of $\mathcal{M}$, since the latter is the dual of a separable space. Hence, Assumption \ref{ass:Maao} (iii) is verified. \\ Note that here, in order to bound the $H^1 (\Omega)^2 \times H (\dive,\Omega)^2$ norm of the state, we use the triangle inequality together with boundedness of $\mathcal{Q}_E$ and $\mathcal{R}$, e.g. \[ \begin{split} \|\nabla \hat{p}_\Re\|_{L^2(\Omega)^d}\leq& \sqrt{2\mathcal{Q}_1(f_\Re, \hat p_\Re, v_\Im, z_\Re)} +\varrho_0\omega \|v_\Im\|_{L^2(\Omega)^d}\\ &+\|z_\Re\|_{\infty,\mathbb{R}^{2L}}\sum_{\ell=1}^L\|\nabla p_{0\Re,\ell}\|_{L^2(\Omega)^d} +\|f_\Re\|_{L^2(\Omega)^d}\,. \end{split} \] Doing so without including the $L^2$ norms of the states into $\mathcal{R}$, so just by using boundedness of $\mathcal{Q}_E$, of the $L^2$ norms of the sources, and continuity of the embeddings $H^1 (\Omega)\to L^2(\Omega)$ and $H (\dive,\Omega)\to L^2(\Omega)^d$ together with some elimination strategy, would not work, since the zero order terms of the states in $\mathcal{Q}_E$ come with frequency dependent factors that will typically be larger than the embedding constants of $H^1(\Omega)\to L^2(\Omega)$ and $H (\dive,\Omega)\to L^2(\Omega)^d$. \item Assumption \ref{ass:Maao} (iv) follows from linearity of the operators inside the norms and weak lower semicontinuity of the norms. \item Assumption \ref{ass:Maao} (v) is obvious by Remark \ref{rem:ass:Maao(v)}. \item Assumption \ref{ass:Maao_newcondition} is verified by Remark \ref{rem:Ass3} with $W=L^2(\Omega)^d$, \[ \begin{split} &D_1(f_\Re, \hat p_\Re, v_\Im) C^{\mathrm{ri}}(z)=\sum_{\ell=1}^L z_{\Re,\ell}\nabla p_{0\Re,\ell}, \ D_3(g_\Re, \hat p_\Im, v_\Re) C^{\mathrm{ri}}(z)=-\frac{\omega}{c_0^2}\sum_{\ell=1}^L z_{\Im,\ell} p_{0\Im,\ell}\,,\\ &D_2(f_\Im, \hat p_\Im, v_\Re) C^{\mathrm{ri}}(z)=\sum_{\ell=1}^L z_{\Im,\ell}\nabla p_{0\Im,\ell}, \ D_4(g_\Im, \hat p_\Re, v_\Im) C^{\mathrm{ri}}(z)=\frac{\omega}{c_0^2}\sum_{\ell=1}^L z_{\Re,\ell} p_{0\Re,\ell}\,, \end{split} \] cf. \eqref{eq:ss_Cri}, \eqref{eq:ss_QP}, \eqref{Q1Q2Q3Q4}. \end{itemize} \end{proof} \section{Conclusions and outlook} In this paper we have contributed some further examples of variational formulations of inverse problems, cf. \cite{AndrieuxBarangerBenAbda2006,BrownJaisKnowles2005,BrownJais2011,Knowles1998,KnowlesWallace1996,KohnVogelius87,KM90,Ron07}, and additionally provided a regularization framework for these formulations, following up on \cite{Kal18} and augmenting it by the idea of data inversion using a right inverse $C^{\mathrm{ri}}$ of the observation operator. Future research in this context will be on the question of optimally selecting this right inverse operator, as well as on further applications. While the examples in this paper are from electromagnetics and acoustics and there are certainly many other relevant inverse problems in this physical context, we are also interested in extending the scope to problems from elasticity as arising in medical (e.g., elastography) and engineering (e.g., nondestructive testing) applications. Time dependent problems will also be considered in a next step -- the time domain version of the problem from Section \ref{sec:SoundSources} is already one of them.
Other wave models (electromagnetics, elasticity) also allow for such a first order system formulation \cite{KirschRieder2016} and have many important real world applications. \section*{Appendix} In this section we state a few relations between boundedness of the sequence $(\mu^n)_{n\in\mathbb{N}}$ in $L^\infty([0,\infty))$ and boundedness of $(\mu^n(h^n))_{n\in\mathbb{N}}$ in $L^\infty(\Omega)$ for $h^n,h\in C^{k,\beta}(\Omega)$ with $h^n\to h$ in $C^{\ell,\gamma}(\Omega)$. First of all, for Lipschitz continuous functions $h^n:\Omega\to\mathbb{R}$ the inequality \begin{equation}\label{munHnLinfty} \|\mu^n(h^n)\|_{L^\infty(\Omega)} \geq \|\mu^n\|_{L^\infty(h^n(\Omega))}, \end{equation} holds, which can be seen as follows. Using the definition of the $L^\infty$ norm on $\Omega$ with the Borel sigma algebra and the Lebesgue measure $\lambda^d$, \[ \|\mu^n(h^n)\|_{L^\infty(\Omega)} = \inf R^n \] where \begin{eqnarray*} R^n&=&\{c\geq0\, : \, \exists N\subseteq\Omega, \,\lambda^d(N)=0\, \forall x\in \Omega\setminus N\, : \mu^n(h^n(x))\leq c\}\\ &=&\{c\geq0\, : \, \exists N\subseteq\Omega, \,\lambda^d(N)=0\, \forall z\in h^n(\Omega\setminus N)\, : \mu^n(z)\leq c\} \end{eqnarray*} and \[ h^n(\Omega)\setminus h^n(N)\subset h^n(\Omega\setminus N)\subset h^n(\Omega)\setminus h^n(\emptyset) \] we see that \[ c\in R^n\ \Rightarrow \ \exists \tilde{N}\subseteq h^n(\Omega), \,\lambda^1(\tilde{N})=0 \, \forall z\in h^n(\Omega)\setminus \tilde{N}\, : \mu^n(z)\leq c\,. \] Here we have made use of the fact that a Lipschitz continuous function maps sets of measure zero into sets of measure zero. Note that this is in general not true for H\"older continuous functions, the Cantor function being a well-known counterexample. This proves \eqref{munHnLinfty}. Now if $h^n$ converges to $h$ in $C(\Omega)$, then \[ \forall V\subset h(\Omega), \,\overline{V}\subset h(\Omega)^o\, \exists n_V\in\mathbb{N}\, \forall n\geq n_V: V\subseteq h^n(\Omega). \] Thus, altogether setting $C_V:= C+\max\{\|\mu^j\|_{L^\infty(V)}\, : \,j\in\{1,\ldots,n_V-1\}\}$, we have proven the following relation. \begin{lemma} Let, for all $n\in\mathbb{N}$, $\mu^n\in L^\infty(h(\Omega))$ and $h^n:\Omega\to\mathbb{R}$ be Lipschitz continuous and converge to $h$ in $C(\Omega)$, and assume that there exists $C>0$ such that \[ \forall n\in\mathbb{N} \, : \|\mu^n(h^n)\|_{L^\infty(\Omega)}\leq C. \] Then for any $V\subset h(\Omega), \,\overline{V}\subset h(\Omega)^o$ there exists $C_V>0$ such that \[ \forall n\in\mathbb{N} \, : \|\mu^n\|_{L^\infty(V)}\leq C_V\,. \] \end{lemma} Lipschitz continuity of $h^n$ and its convergence in $C(\Omega)$ along a subsequence can be achieved by choosing $s$ in the definition \eqref{eq:permeability_setting_R} of the regularization function $\mathcal{R}$ sufficiently large ($s>\frac{d}{2}+1$) and using Sobolev's embedding. Note, however, that this is not required for obtaining the well-definedness, boundedness and convergence results of Corollary \ref{cor:permeability}. \medskip On the other hand, $L^\infty([0,\infty))$ boundedness of the sequence $(\mu^n)_{n\in\mathbb{N}}$ clearly implies $L^\infty(\Omega)$ boundedness of $(\mu^n(h^n))_{n\in\mathbb{N}}$. However, the latter cannot be concluded from $L^\infty(h(\Omega))$ boundedness of $(\mu^n)_{n\in\mathbb{N}}$, even if $h^n\to h$ in $C^\infty(\Omega)$, as the simple counterexample $\Omega=(0,1)$, $h(x)=x$, $h^n(x)=x+\frac{1}{n}$, $\mu^n(z)=\begin{cases}0\mbox{ for }z\in(0,1)\\ \exp(3n-\frac{1}{z-1})\mbox{ for }z\geq1\end{cases}$ shows.
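The blow-up mechanism of this counterexample is also easy to observe numerically; the following grid-based check (an illustration, not a proof) prints $\sup_\Omega \mu^n(h^n)\approx e^{2n}$ next to $\sup_{h(\Omega)}\mu^n=0$:
\begin{verbatim}
import numpy as np

def mu(z, n):
    # mu^n(z) = 0 on (0,1), exp(3n - 1/(z-1)) for z >= 1 (0 at z = 1).
    return np.where(z >= 1.0,
                    np.exp(3 * n - 1.0 / np.maximum(z - 1.0, 1e-300)),
                    0.0)

x = np.linspace(1e-4, 1.0 - 1e-4, 10000)   # grid in Omega = (0,1)
for n in (1, 2, 4, 8):
    hn = x + 1.0 / n                       # h^n(x) = x + 1/n
    print(n, mu(hn, n).max(), mu(x, n).max())
\end{verbatim}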
\section*{Acknowledgment} This work was supported by the Austrian Science Fund FWF under the grant P30054.
{ "timestamp": "2020-04-28T02:34:11", "yymm": "2004", "arxiv_id": "2004.12965", "language": "en", "url": "https://arxiv.org/abs/2004.12965", "abstract": "In this paper we extend a recent idea of formulating and regularizing inverse problems as minimization problems, so without using a forward operator, thus avoiding explicit evaluation of a parameter-to-state map. We do so by rephrasing three application examples in this minimization form, namely (a) electrical impedance tomography with the complete electrode model (b) identification of a nonlinear magnetic permeability from magnetic flux measurements (c) localization of sound sources from microphone array measurements. To establish convergence of the proposed regularization approach for these problems, we first of all extend the existing theory. In particular, we take advantage of the fact that observations are finite dimensional here, so that inversion of the noisy data can to some extent be done separately, using a right inverse of the observation operator. This new approach is actually applicable to a wide range of real world problems.", "subjects": "Numerical Analysis (math.NA)", "title": "Some application examples of minimization based formulations of inverse problems and their regularization", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692291542526, "lm_q2_score": 0.7248702821204019, "lm_q1q2_score": 0.7079584996753585 }
https://arxiv.org/abs/1703.02908
Asymptotic behaviour of solutions to fractional diffusion-convection equations
We consider a convection-diffusion model with linear fractional diffusion in the sub-critical range. We prove that the large time asymptotic behavior of the solution is given by the unique entropy solution of the convective part of the equation. The proof is based on suitable a priori estimates, among which an Oleinik type inequality plays a key role.
\section{Introduction and main results} We consider the convection diffusion equation \[ u_{t}(t,x) + (-\Delta)^{\alpha/2}u(t,x)+(f(u(t,x)))_x=0 \quad \text{for }t>0 \text{ and } x \in \mathbb{R}, \tag{CD}\label{CD} \] where $u: (0,\infty)\times \R \to \R$, $(-\Delta)^{\alpha/2}$ is the fractional Laplacian of order $\alpha \in (0,2)$ and $f(\cdot)$ is a locally Lipschitz function whose prototype is $f(s)=|s|^{q-1}s/q$ with $q>1$. This model has received considerable attention since the 1990s due to the interesting phenomena that appear: there is a competition between the effects of the diffusion and convection terms. Depending on the parameters $\alpha$ and $q$, the asymptotic behaviour is given by either the solution of the diffusion equation: \[ u_{t}(t,x) + (-\Delta)^{\alpha/2}u(t,x)=0 \quad \text{for }t>0 \text{ and } x \in \mathbb{R}, \tag{D}\label{D} \] or the convective one \[ u_{t}(t,x) +(f(u(t,x)))_x=0 \quad \text{for }t>0 \text{ and } x \in \mathbb{R}, \tag{C}\label{C} \] or by a self-similar solution of (CD) in a critical case. The classical case $\alpha=2$ has been analysed for all $q>1$ in the quoted papers of Escobedo, V\'azquez and Zuazua \cite{EVZArma,EVZIndiana,EZ}. In the last twenty years there has been a great interest in models with nonlocal diffusion, especially fractional diffusion, since the fractional Laplacian $(-\Delta)^{\alpha/2}$ is the infinitesimal generator of a stable L\'evy process. There are many applications in physical sciences where models with anomalous diffusion are needed, see the survey \cite{WoyczynskiLevyProc2001} for a description of possible applications, and the lecture notes \cite{VazCIME} for a presentation of recent models involving nonlocal diffusion. We are interested in the large time asymptotic behavior of solutions to the initial value problem \begin{equation}\label{Problem1} \left\{ \begin{array}{ll} u_{t}(t,x) + (-\Delta)^{\alpha/2}u(t,x)+(f(u(t,x)))_x=0 &\text{for }t>0 \text{ and } x \in \mathbb{R}, \\[10pt] u(0,x) =u_0(x) &\text{for } x \in \mathbb{R}. \end{array} \right. \end{equation} The critical case $q=\alpha$ makes the difference in the asymptotic behavior, since equation \eqref{CD} is then invariant under the scaling $u_\lambda(t,x)=\lambda u(\lambda^\alpha t,\lambda x)$ and admits self-similar solutions. In this case the asymptotic behavior of the solutions is given by the self-similar solution with the same mass as the initial datum $u_0$ (see \cite{BilerFunaki}). In the supercritical range $\alpha \in (1,2)$, $q> \max\{1,\alpha\}$ the asymptotic behaviour is given by the fundamental solution of the diffusion model (D) multiplied by the mass of the initial datum (see \cite{BilerKarchWoyczAsymp} for $\alpha\in(1,2)$). We will provide more details in the next section. In this paper we consider the case $\alpha \in (1,2)$ and the nonlinearity $f(u)=|u|^{q-1}u/q$ in the subcritical range $1<q<\alpha$, which has been an open issue so far. The main result of this paper is the following theorem. \begin{theorem}\label{ThmAsympBehav} For any $1<q<\alpha<2$, $f(u)=|u|^{q-1}u/q$ and $u_0\in L^1(\R)\cap L^\infty(\R)$ nonnegative there exists a unique mild solution $u \in C([0,\infty), L^1(\R))\cap L^\infty((0,\infty)\times\R)$ of system \eqref{Problem1}.
Moreover, for any $1\leq p<\infty$, the solution $u$ satisfies \begin{equation} \label{limit.asymp} \lim _{t\rightarrow \infty} t^{\frac 1q(1-\frac 1p)}\|u(t)-U_M(t)\|_{L^p(\R)}=0, \end{equation} where $M$ is the mass of the initial datum $u_0$ and $U_M$ is the unique entropy solution of the equation \begin{equation}\label{limit.problem} \left\{ \begin{array}{ll} u_{t}+(f(u))_x=0 &\text{for }t>0 \text{ and } x \in \mathbb{R}, \\[10pt] u(0) =M\delta_0. & \end{array} \right. \end{equation} \end{theorem} \begin{remark} We believe that the $L^\infty$-assumption on the initial data can be dropped. Throughout the paper we will consider nonnegative solutions. The general case of changing sign solutions can be analysed following the same arguments as in \cite[Section 6]{CazacuIgnatPazoto}. We emphasise that, since the nonlinearity has to be locally Lipschitz, we must impose $q>1$. Since we are interested in the subcritical case, where the convection is dominant, we have to impose $\alpha>q$, and hence $\alpha$ has to belong to the interval $(1,2)$. \end{remark} An interesting phenomenon occurs: for $\alpha>1$ the diffusion term is of higher order than the convection term, so it has a regularizing effect on the solution. However, when $1<q<\alpha$, in the asymptotic limit as time $t$ goes to infinity the solution approaches the unique entropy solution of the purely convective equation, which is discontinuous and develops shocks. This phenomenon has been established for the local case $\alpha=2$ by Escobedo, V\'azquez and Zuazua in \cite{EVZArma}. In this paper we prove that this behavior holds as long as $1<q<\alpha<2$. This is done using both parabolic and hyperbolic arguments and dealing with the difficulties created by the nonlocal operator and the nonlinearity of the convective term. The organization of the paper is as follows. In Section \ref{SectPrelim} we give a panorama of previous results on the model, both in the local and nonlocal cases. We also provide a reminder on the diffusion equation, which will be useful throughout the paper. In Section \ref{SectionExist} we are concerned with the existence and main properties of solutions. Entropy and mild solutions are introduced. The key estimate is given in Proposition \ref{prop.oleinik}, where we show that for any $\alpha,q\in (1,2]$ and any initial data uniformly bounded above and below by two positive constants, the solution of our problem satisfies an Oleinik type inequality, $(u^{q-1})_x\leq 1/t$. We emphasize that this estimate does not require $q<\alpha$. In Section \ref{SectAsymp} we prove the asymptotic behavior of solutions stated in Theorem \ref{ThmAsympBehav}. \section{Preliminaries}\label{SectPrelim} \subsection{Panorama: from local to nonlocal diffusion} We describe some of the results known so far for this convection-diffusion model. We try to cover all ranges of parameters in order to better place our contribution in this field. The general model is \begin{equation}\label{GenEq} \left\{ \begin{array}{ll} u_t(t,x) + \mathcal{L}[u](t,x)+\overline{b}\cdot \nabla (f(u(t,x)))=0 &\text{for }t>0 \text{ and } x \in \RN, \\[10pt] u(0,x) =u_0(x) &\text{for } x \in \RN, \end{array} \right. \end{equation} where $\mathcal{L}$ is a L\'evy type operator, $\widehat{\mathcal{L}v}(\xi)=a(\xi)\hat v(\xi)$, whose symbol $a$ is written in the form \[ a(\xi)=i k \xi +\mu(\xi)+\int _{\rr^N}\big(1-e^{-i\eta \xi}-i\eta \xi \mathbf{1}_{|\eta|<1}\big)\Pi (d\eta).
\] Usually $k\in \rr^N$, $\mu$ is a positive semi-definite quadratic form on $\rr^N$ and $\Pi$ is a positive Radon measure satisfying \[ \int_{\rr^N} \min\{|z|^2, 1\} \Pi(dz)<\infty. \] Two particular cases are the Laplacian $\mathcal{L}=-\Delta$ and the fractional Laplacian $\mathcal{L}=(-\Delta)^{\alpha/2}$, corresponding to $k=0$, $\mu(\xi)=|\xi|^2$, $\Pi =0$ and to $k=0$, $\mu(\xi)=0$, $\Pi(dz)=|z|^{-N-\alpha}dz$, respectively. \medskip \noindent\textbf{Local Diffusion.} The local diffusion case, i.e. $\mathcal{L}=-\Delta$, has been intensively studied for linear diffusion $u_t - \Delta u + \overline{b}\cdot \nabla (|u|^{q-1}u)=0$, see \cite{EZ} for the supercritical and critical cases ($q\geq 1+1/N$ in $\rr^N$) and \cite{EVZArma} for the subcritical case $1<q<2$ in dimension $N=1$. The subcritical case $q<1+1/N$ in any dimension $N\geq 1$ has been analysed in \cite{EVZIndiana} for nonnegative solutions and for changing sign solutions in \cite{Carpio1996}. \medskip \noindent\textbf{Nonlocal Diffusion.} There is always a competition between the diffusion term, which is of order $\alpha$, and the convection term, carrying one derivative. This implies the consideration of certain classes of solutions: entropy solutions, weak solutions, mild solutions. The study takes into consideration the fractional order $\alpha$, the nonlinearity $f(u)$, the dimension $N$ and the regularity of the initial data $u_0$. \noindent\textbf{Existence of solutions.} For all ranges of parameters $\alpha \in (0,2)$, $q>1$, the model admits a unique entropy solution. More precisely, for $\alpha \in (1,2)$ and $f$ locally Lipschitz, the existence and uniqueness of entropy solutions were proved by Droniou \cite{DroniouVanishing2003}. Then Alibaud \cite{AlibaudEntropy} proved the same for $\alpha \in (0,2)$. Cifani and Jakobsen \cite{CifaniJakobsenEntropySol} proved the existence of entropy solutions for the degenerate nonlinear nonlocal integral equation $u_t+(-\Delta)^{\alpha/2}A(u)+(f(u))_x=0$ with $\alpha \in (0,2) $ and developed a numerical scheme that gives an idea of the asymptotic behavior of the solution. The existence of entropy solutions for \eqref{GenEq} with merely bounded (possibly non-integrable) data has been proved by Endal and Jakobsen \cite{EndalJakobsen}. If moreover $f\in C^\infty$, $\alpha \in (1,2)$ and $q>1$ then there exists a unique mild solution with good regularity properties, see Droniou, Gallouet, Vovelle \cite{Droniou}. When the diffusion is weaker, i.e. $\alpha\in (0,1]$, regularity is lost, since the convection leads to shock formation. There is non-uniqueness of weak solutions, as proved by Alibaud and Andreianov \cite{AlibaudAndreianov}. However, uniqueness holds in the class of entropy solutions. \smallskip \noindent\textbf{Asymptotic Behaviour.} Concerning the asymptotic behavior of solutions there are previous works in some ranges of exponents. \emph{(i) Integrable data.} When the data is $u_0 \in L^1(\RN)$ there are previous works in the critical and supercritical cases. The critical case corresponds to $q=1+\frac{\alpha-1}{N}$, for which the equation \eqref{CD} admits a unique self-similar solution $U(t,x)=t^{-N/\alpha}U(1,xt^{-1/\alpha})$ with data $U(0,x)=M\delta(x).$ For $\alpha\in (1,2)$ the critical case has been analyzed by Biler, Karch and Woyczy{\'n}ski \cite{BilerKarchWoycz2001}, who proved that the asymptotic profile as $t\to \infty$ is given by the self-similar solution $ U(t,x)$ described above.
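The criticality of the exponent $q=1+(\alpha-1)/N$ can be checked by a direct computation: the mass-preserving scaling $u_\lambda(t,x)=\lambda^N u(\lambda^\alpha t,\lambda x)$ satisfies
\[
\partial_t u_\lambda = \lambda^{N+\alpha}\,u_t, \qquad
(-\Delta)^{\alpha/2}u_\lambda = \lambda^{N+\alpha}\,(-\Delta)^{\alpha/2}u, \qquad
\overline{b}\cdot\nabla\big(|u_\lambda|^{q-1}u_\lambda\big) = \lambda^{Nq+1}\,\overline{b}\cdot\nabla\big(|u|^{q-1}u\big)
\]
(with all right-hand sides evaluated at $(\lambda^\alpha t,\lambda x)$), so the three terms scale alike exactly when $N+\alpha=Nq+1$, that is $q=1+(\alpha-1)/N$; for $N=1$ this is the critical case $q=\alpha$ from the introduction.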
When $\alpha\in (0,1)$ the critical exponent $q$ is less than one and the nonlinearity would not be Lipschitz, which is outside the scope of this analysis. In the supercritical case $q>1+(\alpha-1)/{N}$, $\alpha\in (1,2)$, the diffusion is dominant and then the asymptotic behavior of solutions to \eqref{Problem1} with $u_0\in L^1(\rr)\cap L^\infty(\rr)$ is given by $e^{-t(-\Delta)^{\alpha/2}}u_0$, the solution of the linear diffusion problem $U_t + (-\Delta)^{\alpha/2} U=0$ with data $U(0,x)=u_0(x)$ (see Biler, Karch and Woyczy{\'n}ski \cite[Th.~4.1, Lemma~4.1]{BilerKarchWoyczAsymp}). Some results in the one dimensional case were obtained by Biler, Funaki and Woyczy{\'n}ski \cite{BilerFunaki}. The analysis of the linear semigroup generated by \eqref{D} shows that the first term in the asymptotic behaviour may be chosen as $M K^\alpha_t$ where $K^\alpha_t$ is the fundamental solution of problem \eqref{D}. See for instance \cite[Theorem 6.3]{BSVfracHE}. In Section \ref{reminder} we present more details about the linear model \eqref{D} and its properties. When $\alpha\in (0,1)$ all the nonlinearities considered here are super-critical since $q>1>1+(\alpha-1)/N$. The asymptotic behavior is given again by the linear semigroup. We state in the following theorem the result in the one-dimensional case. \begin{theorem} For any $\alpha \in (0,1)$, $q>1$, $f(u)=|u|^{q-1}u/q$ and $u_0\in L^1(\R)\cap L^\infty(\R)$ there exists a unique entropy solution $u$ of system \eqref{Problem1}. Moreover, for any $1\leq p<\infty$, the solution $u$ satisfies \begin{equation*} \lim _{t\rightarrow \infty} t^{\frac 1 \alpha(1-\frac 1p)}\|u(t)-U(t)\|_{L^p(\R)}=0, \end{equation*} where $U$ is the unique weak solution of the equation \begin{equation*} \left\{ \begin{array}{ll} U_{t}(t,x)+(-\Delta)^{\alpha/2}U(t,x)=0 &\text{for }t>0 \text{ and } x \in \mathbb{R}, \\[10pt] U(0,x) =u_0(x) & \text{for } x \in \mathbb{R}. \end{array} \right. \end{equation*} \end{theorem} \begin{proof} The proof should follow as in \cite[Th.~1.1, Th.~3.5]{AlibaudImbertKarch} by using the technique of approximation with a vanishing viscosity term: $$ (u_\epsilon)_t +(-\Delta)^{\alpha/2} u_\epsilon+(f(u_\epsilon))_x= \epsilon \Delta u_\epsilon. $$ The asymptotic behavior is proved first for this approximating problem and then, by letting $\epsilon\rightarrow 0$, for the initial problem. We could also work directly with entropy solutions as in the present paper, but one should consider a parabolic scaling $u_\lambda(t,x)=\lambda^2 u(\lambda^2 t,\lambda x)$ instead of the one used in Section \ref{SectAsymp}. A detailed proof of these facts would not bring great novelty and we consider it beyond the scope of this paper. \end{proof} In this work we take a step further by describing the asymptotic behavior of mild solutions in the subcritical case $1<q<1+(\alpha-1)/{N}$ and dimension one, that is $1<q<\alpha<2$, for bounded integrable data. \medskip \emph{(ii) Step-like data.} There is an interesting phenomenon when $f(u)=u^2/2$ is supplemented by a step-like initial datum approaching the constants $u_{\pm}$, $u_-<u_+$, as $x\rightarrow \pm \infty$, respectively. For $\alpha\in (1,2)$ in \cite{KarchMiaoXu} the authors study the one dimensional case and they prove that the limit profile is given by a rarefaction wave, that is the unique entropy solution of the Riemann problem \[ w_t+ww_x=0, \quad w(0,x)= \left\{ \begin{array}{ll} u_-, &x<0, \\ u_+,&x>0. \end{array} \right.
\] When $\alpha \in (0,1)$ the convection is negligible and the asymptotic behavior is given by the solution of the diffusion problem \eqref{D} with the same initial data $w(0,x)$ as above. This is proved in \cite{AlibaudImbertKarch} in dimension one. The two-dimensional case of the above results has been analysed by Karch, Pudelko and Xu \cite{KarchPudelkoXu}. The characterization depends on the fractional order $\alpha$ and on the direction $\overline{b}$ of the convective nonlinearity in \eqref{GenEq}. \medskip \noindent\textbf{Remarks. } (i) There is a connection with Hamilton-Jacobi equations. By considering the integrated solution $v(t,x)=\int_{-\infty}^x u(t,y)dy$, it follows that $v(t,x)$ solves the equation $v_t + \La[v] + \frac{1}{q}(v_x)^q=0$, which is a type of Hamilton-Jacobi equation with fractional diffusion. The problem admits classical solutions when $\alpha \in (1,2)$ (\cite{DroniouImbertFractal,Imbert2005218}). For $\alpha=1$ this is related to drift-diffusion equations (\cite{Silvestre20112020}). (ii) There is considerable interest in nonlocal equations with zero-order operators $\mathcal{L}[u]= J \star u - u$, where $J$ is a non-singular, integrable kernel with mass one. This is a quite different topic, since the nonlocal operator does not provide any regularity for the solution, in contrast to the fractional Laplacian case, and other techniques must be used. When $q<2$, the first author considers the model $u_t = J \star u - u - (f (u))_x$ in \cite{CazacuIgnatPazoto}. The asymptotic behavior is given by the solution of \eqref{limit.problem}. The case $q=2$ has been analyzed in \cite{MR2138795} and $q>2$ in \cite{MR3190994}. There are situations when the convection is also nonlocal, $u_t = J \star u - u + G\ast f(u)-f(u)$. We refer to \cite{MR2356418} for the supercritical case $q>1+1/N$ and \cite{MR3328145} for the critical case $q=1+1/N$. However, for the subcritical case, i.e. $q<1+1/N$, there are no results on the long time behavior of the solutions. (iii) The case of nonlinear local diffusion also brings considerable difficulties; for instance, for porous-medium type diffusion and convection the model becomes $u_t=\Delta u^m- (u^q)_x$. The third parameter $m$ of the nonlinearity changes the behaviour of the solution. For slow diffusion and slow convection we refer to Lauren\c{c}ot and Simondon \cite{LaurencotSimondon}. See \cite{LaurencotFast} for fast convection $0<q<1$ and slow diffusion $m>1$. The asymptotics for combined fractional and nonlinear diffusion, $(-\Delta)^{\alpha/2}(u^m)$, plus convection have not been considered, as far as we know. \subsection{Reminder on linear fractional diffusion}\label{reminder} We recall some useful results concerning the associated diffusion problem \eqref{D}, that is the \emph{Fractional Heat Equation} for $0<\alpha<2$. We consider the initial value problem \begin{equation}\label{FHE} \left\{ \begin{array}{ll} U_t (t,x) + (-\Delta)^{\alpha/2} U(t,x)=0 \quad \text{for } x \in \mathbb{R} \text{ and }t>0,\\ U(0,x)=U_0(x) \quad \text{for } x \in \mathbb{R}. \end{array} \right. \end{equation} This problem has been widely studied and many results are known (see \cite{Applebaum,Bertoin,BlumenthalGetoor} for the probabilistic point of view, \cite{Valdinoc} for a nice motivation of the model and the recent survey \cite{BSVfracHE} for a complete characterization). Some useful properties are proved in \cite[Section 2]{Droniou}.
For initial data $U_0 \in L^1(\R)$ the solution of Problem \eqref{FHE} has the integral representation \begin{equation*} U(t,x)=(K^{\alpha}_t(\cdot) \star U_0 ) (x)= \int_{\R}K_t^{\alpha}(x-z)U_0(z)dz\,, \end{equation*} where the kernel $K_t^\alpha$ has Fourier transform $\widehat{K}_t^{\alpha}(\xi)=e^{-|\xi|^{\alpha}t}.$ If $\alpha=2$, the function $K^2_t$ is the Gaussian heat kernel. We recall some detailed information on the behaviour of the kernel $K_t^{\alpha}(x)$ for $0<\alpha<2$. In the particular case $\alpha=1$, the kernel is explicit, given by the formula $$ K^{1}_t(x)=C t (|x|^2+t^2)^{-1}. $$ The kernel $K_t^{\alpha}$ is the fundamental solution of Problem \eqref{FHE}, that is, it solves the problem with the Dirac delta $\delta_0$ as initial datum. It is known \cite{BlumenthalGetoor} that the kernel $K_t^{\alpha}$ has the self-similar form $$K_t^{\alpha}(x)=t^{-1/\alpha}F_\alpha(|x|t^{-1/\alpha}),$$ for some profile function $F_\alpha$. For any $\alpha\in (0,2)$ the profile $F_\alpha$ is $C^\infty(\rr)$, positive and decreasing on $(0,\infty)$, and behaves at infinity like $F_\alpha(r) \sim r^{-(1+\alpha)}$. Moreover, the solution of Problem \eqref{FHE} behaves, as $t\to \infty$, like $M K_t^{\alpha}$, where $M=\int_{\R}U_0(x) dx$ is the total mass: \begin{equation*} t^{ \frac{1}{\alpha}\left( 1- \frac{1}{ p} \right)} \|U(t,\cdot) -M K_t^{\alpha}(\cdot)\|_{L^{p}(\rr)} \to 0 \quad\text{as}\quad t\to \infty. \end{equation*} See for instance \cite[Theorem 6.3]{BSVfracHE}. Throughout the paper we will need the following time decay estimates on the fractional derivatives of the kernel. \begin{lemma} \label{decay.nucleu} For any $\alpha\in (0,2)$, $s\geq 0$ and $1\leq p\leq \infty$ the kernel $K_t^\alpha$ satisfies the following estimates for any positive $t$: \begin{eqnarray} \| K_t^{\alpha} \|_{L^{p}(\R)} &\simeq & \mathcal{K} t^{-\frac{1}{\alpha}(1-\frac 1p)}, \label{EstFHE}\\ \||D|^s K^\alpha_t\|_{L^p(\R)}& \lesssim & t^{-\frac {1}\alpha (1-\frac 1p)-\frac s\alpha},\label{EstFHE2}\\ \||D|^s \partial_x K^\alpha_t\|_{L^p(\R)}& \lesssim & t^{-\frac {1}\alpha (1-\frac 1p)-\frac {s+1}\alpha}\label{EstFHE3}. \end{eqnarray} \end{lemma} \noindent We used the notation $|D|^s:=(-\Delta)^{s/2}$. The proof of these estimates is given in the Appendix. \section{Existence of solutions and main properties}\label{SectionExist} \subsection{Concept of solution: entropy and mild solutions} We now recall some classical results for systems \eqref{Problem1} and \eqref{limit.problem}. In the case of the conservation law \eqref{limit.problem} the entropy formulation is as follows. \begin{definition}\label{entropysol} (\cite{MR735207}) By an entropy solution of system \eqref{limit.problem} we mean a function \[ w\in L^\infty((0,\infty),L^1(\R))\cap L^\infty((\tau,\infty)\times \R), \ \forall \tau\in (0,\infty) \] such that: C1) For every constant $k\in \R$ and $\varphi\in C_c^\infty((0,\infty)\times \R)$, $\varphi\geq 0$, the following inequality holds \[ \int_0^\infty \int_{\R} \Big(|w-k|\frac{\partial \varphi}{\partial t} +\,{\rm sgn}(w-k)(f(w)-f(k))\frac{\partial \varphi}{\partial x}\Big) dxdt\geq 0. \] C2) For any bounded continuous function $\psi$ \begin{equation*} \limess_{t\downarrow 0} \int _\R w(t,x)\psi(x)dx=M\psi(0). \end{equation*} \end{definition} The existence of a unique entropy solution of system \eqref{limit.problem}, as well as its properties, were thoroughly analysed in \cite{MR735207}.
For $f(u)=|u|^{q-1}u/q$ system \eqref{limit.problem} has a unique entropy solution $U_M$, see \cite[Section 2]{MR735207}, which is given by the $N$-wave profile \begin{equation*} U_M(t,x)=\left\{ \begin{array}{ll} (x/t)^{\frac 1{q-1}}, & 0<x<r(t), \\[10pt] 0, & \text{otherwise}, \end{array} \right. \end{equation*} with $r(t)=(\frac q{q-1})^{\frac{q-1}q}M^{(q-1)/q}t^{1/q}$. Let us first recall the representation of the fractional Laplacian given in \cite{DroniouImbertFractal}: for any $\alpha\in (0,2)$ there exists a positive constant $c(\alpha)$ such that for all $\varphi\in \mathcal{C}^2_b(\R)$, all $r>0$ and all $x\in \R$ the following holds \begin{equation} \label{frac.lap} [(-\Delta)^{\alpha/2}\varphi ](x) =-c(\alpha)\int _{|z|\geq r}\frac{\varphi(x+z)-\varphi(x)}{|z|^{1+\alpha}}dz-c(\alpha)\int _{|z|\leq r} \frac{\varphi(x+z)-\varphi(x)- \varphi'(x)z}{|z|^{1+\alpha}}dz. \end{equation} Using this representation, we introduce, according to \cite{AlibaudEntropy}, the following definition of an entropy solution for system \eqref{Problem1}. \begin{definition}\label{DefnEntropySol} (\cite{AlibaudEntropy}) Let $u_0\in L^\infty(\R)$. We define an entropy solution of Problem \eqref{Problem1} as a function $u\in L^\infty((0,\infty)\times \R)$ such that for all $r>0$, all non-negative $\varphi\in C_c^\infty ([0,\infty)\times \R)$, all smooth convex functions $\eta:\R\rightarrow\R $ and all $\phi$ such that $\phi'=\eta' f'$, $f(s)=|s|^{q-1}s/q$, \begin{align*} \int _0^\infty\int_{\R} &(\eta(u)\partial_t \varphi +\phi(u)\partial _x\varphi )dx dt\\ \nonumber &+c(\alpha) \int _0^\infty\int_{\R} \int _{|z|\geq r}\eta '(u(t,x)) \frac{u(t,x+z)-u(t,x)}{|z|^{1+\alpha}} \varphi(t,x) dzdxdt\\ \nonumber &+c(\alpha) \int _0^\infty\int_{\R} \int _{|z|\leq r} \eta (u(t,x)) \frac{\varphi(t,x+z)-\varphi(t,x)- \partial_x\varphi(t,x)z}{|z|^{1+\alpha}}dzdxdt\\ \nonumber &+ \int_{\R} \eta(u_0)\varphi (0,x)dx\geq 0. \end{align*} \end{definition} \begin{remark} In the above definition it is sufficient to consider the particular entropy-flux pairs $\eta _k(s)=|s-k|$, $\varphi_k(s)=\,{\rm sgn}(s-k)(f(s)-f(k))$, for any real number $k$. \end{remark} For any $u_0\in L^\infty(\R)$ and $f:\rr\rightarrow\rr$ locally Lipschitz there exists a unique entropy solution of Problem \eqref{Problem1}. Entropy solutions belong to $C([0,\infty),L^1_{loc}(\R))$. If $u_0\in L^1(\R)\cap L^\infty(\R)$, then so does $u(t)$, for all $t>0$, and moreover $u\in C([0,\infty),L^1(\R))$. All these properties have been proved in \cite{Droniou,AlibaudEntropy}. In the above papers the authors introduce a time-splitting approximation in order to prove the existence of an entropy solution. In fact, for any $\delta>0$ they define the approximation $u^\delta$ in the following way: let $u^\delta(0,\cdot)=u_0$; for all $n\geq 0$, on the time interval $(2n\delta,(2n+1)\delta]$, $u^\delta$ is the solution of $\partial_tu^\delta +2(-\Delta)^{\alpha/2}u^\delta=0$ with initial condition $u^\delta(2n\delta,\cdot)$, and on the time interval $((2n+1)\delta,2(n+1)\delta]$, $u^\delta$ is the entropy solution of $\partial_tu^\delta +2\partial_x(f(u^\delta))=0$ with initial condition $u^\delta((2n+1)\delta,\cdot)$. For any initial data in $L^\infty(\rr)$ the approximation $u^\delta$ converges in $C([0,T),L^1_{loc}(\rr))$, $T>0$, to the entropy solution of Problem \eqref{Problem1}.
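To illustrate the splitting scheme just described, here is a minimal numerical sketch; the discretization choices (periodic boundary conditions, exact spectral solution of the fractional heat flow, first-order upwind steps with CFL substepping for the conservation law, and all parameter values) are ours, made purely for illustration:
\begin{verbatim}
import numpy as np

n, Lx, a, q, delta = 256, 20.0, 1.5, 1.2, 0.05
x = np.linspace(-Lx / 2, Lx / 2, n, endpoint=False)
h = Lx / n
xi = 2 * np.pi * np.fft.fftfreq(n, d=h)

def heat_step(u, dt):
    # u_t + 2(-Delta)^{a/2} u = 0, solved exactly in Fourier space.
    return np.real(np.fft.ifft(np.exp(-2 * np.abs(xi) ** a * dt)
                               * np.fft.fft(u)))

def conv_step(u, dt):
    # u_t + 2 (u^q/q)_x = 0 by upwinding (u >= 0, so the flux increases).
    u = np.maximum(u, 0.0)            # clip spectral round-off negatives
    m = max(int(np.ceil(2 * u.max() ** (q - 1) * dt / (0.9 * h))), 1)
    for _ in range(m):                # CFL-limited substeps of size dt/m
        F = 2 * u ** q / q
        u = u - (dt / m) / h * (F - np.roll(F, 1))
    return u

u = np.exp(-x ** 2)                   # nonnegative initial datum
for _ in range(40):                   # 40 splitting cycles of length 2*delta
    u = conv_step(heat_step(u, delta), delta)
print(u.sum() * h)                    # mass is conserved up to round-off
\end{verbatim}
Refining $\delta$ and the grid, one expects to observe convergence towards the entropy solution, in accordance with the convergence result quoted above.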
In \cite{Droniou}, for $\alpha\in (1,2)$, and in \cite{AlibaudEntropy}, for $0<\alpha<1$, the authors prove that the entropy solutions in the sense of Definition \ref{DefnEntropySol} are solutions in the sense of distributions. Moreover, when $\alpha\in (1,2)$, Droniou \cite{Droniou} proved that this distributional solution is the unique mild solution in the sense of Definition \ref{mild} below. \begin{definition}\label{mild}Let $u_0\in L^\infty(\rr)$ and $T>0$ or $T=\infty$. We say that a \emph{mild solution} of Problem \eqref{Problem1} is a function $u\in L^\infty((0,T)\times \rr)$ which satisfies, for a.e. $(t,x)\in (0,T)\times \rr$, \begin{equation} \label{mild.form} u(t,x)=(K_t^{\alpha} \star u_0)(x) - \int_0^t \big((K_{t-\sigma}^{\alpha})_x \star f(u(\sigma))\big)(x)\, d\sigma. \end{equation} \end{definition} The existence and regularity of the mild solution are given in the following proposition. \begin{proposition} \label{PropExistence} For any $u_0\in L^{\infty}(\R)$ there exists a unique global mild solution $u$ of Problem \eqref{Problem1}. Moreover $u$ satisfies: (i) $\operatorname*{ess\,inf} u_0 \le u(t,x) \le \operatorname*{ess\,sup} u_0$. If $u_0 \in L^1(\R) \cap L^{\infty}(\R)$ then (ii) $u \in C([0,+\infty), L^1(\R))\cap C((0,\infty), L^\infty(\R))$. Moreover, $\|u(t)\|_{L^1(\R)} \le \|u_0\|_{L^1(\R)}$. (iii) for any $s<\alpha+\min\{\alpha,q\}-1$ and $1<p<\infty$ the solution $u$ satisfies $u_t \in C((0,\infty) ,L^p( \R))$ and $u \in C((0,\infty),H^{s,p}(\R))$. \end{proposition} \begin{remark} Since $\alpha+\min\{\alpha,q\}-1>1$ we have for any $t>0$ that $u_x(t)\in L^p(\rr)$ for any $1<p<\infty$. Moreover, for any $t>0$, the map $x\mapsto u(t,x)$ is continuous. The last property also guarantees that various integrations by parts used in the paper are allowed. \end{remark} \begin{proof}The global existence, uniqueness and the first two properties are proved in \cite{Droniou}. We now prove property (iii). Its proof relies on a classical bootstrap argument: one starts with some regularity of $u$ in the right-hand side and obtains that this right-hand side term is slightly better than the hypothesis. For a nice review of the method we refer to \cite[Ch.~1.3, p.~20]{MR2233925}. Let us fix $T>0$. We first remark that since $u\in C([0,T],L^1(\R))\cap L^\infty((0,\infty)\times \R)$ we have that $f(u)=|u|^{q-1}u/q$ belongs to the same space. Moreover, it is sufficient to prove that for any $t>0$, $u(t)\in H^{s,p}(\rr)$ with a norm that is bounded in any interval $[\tau,T]$ with $\tau>0$. The main steps of the proof are as follows: we first prove that for $u\in L^\infty((0,T),L^1(\rr)\cap L^\infty(\rr))$ the right hand side in \eqref{mild.form} belongs to $H^{s,p}(\rr)$ for any $0<s<\alpha-1$, $1<p<\infty$. The next step is to use this new regularity to prove the same for $0<s<\alpha$. The last step, the most technical one, is to extend the regularity up to $s<\alpha+\min\{\alpha,q\}-1$. \textit{Step I.} We first prove that we gain some regularity for $u$, $u\in C((0,T),H^{s,p}(\R))$ for any $0<s<\alpha-1$ and $1<p<\infty$. Let $0<s<\alpha-1$. We have \begin{equation}\label{identity.s} |D|^su(t)=|D|^sK^\alpha_t\ast u_0 -\int _0^t |D|^s\partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma.
\end{equation} Using the decay of the fractional derivatives of $K_t^\alpha$ in \eqref{EstFHE2}, \eqref{EstFHE3} and that $0<s<\alpha-1$ we find that for any $1<p<\infty$ the following holds for any $t\in (0,T]$: \begin{align*} \| |D|^su(t) \|_{L^p(\R)}&\leq \| |D|^sK^\alpha_t\|_{L^1(\R)}\| u_0 \|_{L^p(\R)}+\int _0^t \||D|^{s} \partial_xK^\alpha_{t-\sigma}\|_{L^1(\R)} \|f(u(\sigma))\|_{L^p(\R)}d\sigma\\ &\lesssim t^{-\frac s\alpha}+\int _0^t (t-\sigma)^{-\frac{s+1}\alpha}d\sigma= t^{-\frac s\alpha}(1+t^{1-\frac 1\alpha})\lesssim t^{-\frac s\alpha}. \end{align*} Let us now explain why identity \eqref{identity.s} holds. We know that $u_0\in L^1 (\rr)\cap L^\infty(\rr)$ and by Lemma \ref{decay.nucleu} the kernel $K^\alpha_t$ satisfies $|D|^sK^\alpha_t\in L^p(\rr)$ for any $1\leq p\leq \infty$. Hence $|D|^s (K^\alpha_t\ast u_0)=(|D|^sK^\alpha_t)\ast u_0$. Let us now prove that for a.e. $x\in \rr$ the following holds \begin{equation}\label{change.diff.int} |D|^s \int _0^t \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma= \int _0^t |D|^s\partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma. \end{equation} For any $\rho>0$, the Tonelli-Fubini theorem can be applied to obtain that \begin{equation}\label{eq.221} |D|^s \int _0^{t-\rho} \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma= \int _0^{t-\rho} |D|^s\partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma. \end{equation} Indeed, \eqref{eq.221} is true since we avoid the singularity of $K^\alpha_{t}$ at $t=0$. Moreover, as $\rho\rightarrow 0$, $0<s<\alpha-1$, using \eqref{EstFHE3} we obtain that for any $1\leq p\leq \infty$ the following holds \begin{align*} \int_{t-\rho}^t &\| |D|^s \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))\|_{L^p(\rr)} d\sigma \le \int_{t-\rho}^t \||D|^s \partial _xK^\alpha_{t-\sigma}\|_{L^1(\R)} \| f(u(\sigma))\|_ {L^p(\R) } d \sigma \\ &\lesssim \int _{t-\rho}^t (t-\sigma)^{-\frac{s+1}\alpha}d\sigma= \rho^{1-\frac{s+1}\alpha} \to 0. \end{align*} Similarly, using \eqref{EstFHE2} it follows that $$\int_{t-\rho}^t \| \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))\|_{L^p(\rr)} d\sigma \to 0 \quad \text{as } \rho \to 0.$$ Therefore we obtain that \begin{equation*} \int _0^{t-\rho} \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma\rightarrow \int _0^{t} \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma \end{equation*} and \begin{equation}\label{eq.222} \int _0^{t-\rho} |D|^s\partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma\rightarrow \int _0^{t} |D|^s\partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma \end{equation} in any $L^p(\rr)$, $1\leq p\leq \infty$. In view of \eqref{eq.221} and \eqref{eq.222} we obtain that $|D|^s\int _0^t \partial _xK^\alpha_{t-\sigma}\ast f(u(\sigma))d\sigma$ belongs to $L^p(\rr)$ for any $1\leq p\leq\infty$ and moreover \eqref{change.diff.int} holds in $L^p(\rr)$, $1\leq p\leq\infty$, hence for a.e. $x\in \rr$. This type of argument also applies in the rest of the paper, whenever one needs to commute $|D|^s=(-\Delta)^{s/2}$ with the integral $\int_0^t$. \medskip \textit{Step II.} In order to extend the range of $s$ we first recall the chain rule for fractional derivatives (see \cite[Prop. 5 (a)]{MR1878630}, \cite[Prop. 3.1]{MR1124294}). For any $0<s<1$ and $F\in C^1(\R)$ the following inequality holds \begin{equation} \label{frac.chain} \||D|^s F(u)\|_{L^p(\R)}\lesssim \|F'(u)\|_{L^{p_1}(\R)} \| |D|^s u\|_{L^{p_2}(\R)}, \end{equation} where $1< p, p_2<\infty$, $1<p_1 \leq \infty$ and $\frac 1p=\frac 1{p_1}+\frac 1{p_2}$.
Let us now choose two positive numbers $s_1$ and $s_2$ such that $s_1<\alpha-1$, $s_2<1$ and denote $s=s_1+s_2$. Applying estimate \eqref{frac.chain} to $F(u)=|u|^{q-1}u\in C^1(\rr)$ with $p_1=\infty$, $p_2=p$, we obtain \begin{align*} \| |D|^su(t) \|_{L^p(\R)}&\leq \| |D|^sK^\alpha_t\|_{L^1(\R)}\| u_0 \|_{L^p(\R)}+\int _0^t \||D|^{s_1}\partial _x K^\alpha_{t-\sigma}\|_{L^1(\R)} \| |D|^{s_2}f(u(\sigma))\|_{L^p(\R)}d\sigma\\ &\lesssim t^{-\frac s\alpha}+ \int _0^t (t-\sigma)^{-\frac{s_1+1}\alpha} \| |D|^{s_2} u(\sigma)\|_{L^p(\R)}d\sigma. \end{align*} Assuming that $\| |D|^{s_2} u(t)\|_{L^p(\R)}\lesssim t^{-\frac {s_2}\alpha}$ for all $t\in (0,T)$ we obtain that for any $s<\alpha-1+s_2$ we have \[ \| |D|^su(t) \|_{L^p(\R)}\lesssim t^{-\frac s\alpha} + \int _0^t (t-\sigma)^{-\frac{s_1+1}\alpha} \sigma ^{-\frac {s_2}\alpha}d\sigma\lesssim t^{-\frac s\alpha}, \quad\ \forall t\in (0,T). \] This means that we can always gain up to $\alpha-1$ derivatives with respect to the initial assumption. Repeating the above argument and using Step I we obtain that for any $s\in (0,\alpha)$ and any $p\in (1,\infty)$ we have $u(t)\in H^{s,p}(\R)$ for all $t\in (0,T)$ and \[ \|| D|^s u(t)\|_{L^p(\R)}\lesssim t^{-\frac s{\alpha}}, \quad \forall \ t\in (0,T). \] Moreover, using the properties of the Hilbert transform we also obtain for any $s\in [0,\alpha-1)$ and any $p\in (1,\infty)$ \[ \|| D|^{s} u_x(t)\|_{L^p(\R)}\lesssim t^{-\frac s{\alpha}}, \quad \forall \ t\in (0,T). \] \textit{Step III.} Let us now consider the case $s\geq \alpha$. We write the equation for $u_x$: \[ u_x(t)=\partial_x K^\alpha_t\ast u_0-\int _0^t \partial_x(K^\alpha_{t-\sigma})\ast (f'(u)u_x)(\sigma)d\sigma. \] Let us consider $s=s_1+s_2$ with $0<s_1<\alpha-1$ and $0<s_2<\min\{\alpha,q\}-1$. Thus \begin{align*} \| |D|^{s_1+s_2}u_x(t) \|_{L^p(\R)}&\leq \| |D|^{s_1+s_2}\partial_x K_t\|_{L^1(\R)}\|u_0\|_{L^p(\rr)}\\ &\quad + \int _0^t \| |D|^{s_1} \partial_x K_{t-\sigma}\|_{L^1(\rr)} \| |D|^{s_2} (f'(u)u_x)\|_{L^p(\rr)}d\sigma\\ &\lesssim t^{-\frac {s+1}\alpha} +\int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha} \| |D|^{s_2} (f'(u)u_x)\|_{L^p(\rr)}d\sigma. \end{align*} Leibniz's rule (\cite[Th. 3]{MR1878630}, \cite[Prop. 3.3]{MR1124294}) gives us that \[ \| |D|^{s_2} (f'(u)u_x)\|_{p}\lesssim \| |D|^{s_2} f'(u)\|_{p_1}\|u_x\|_{p_2}+ \||D|^{s_2}u_x\|_{q_1} \|f'(u)\|_{q_2} \] where $\frac 1p=\frac 1{p_1}+\frac 1{p_2}=\frac 1{q_1}+\frac 1{q_2}$ and $1<p_1,q_1<\infty$, $1<p_2,q_2\leq \infty$ (Th. 3 in \cite{MR1878630} allows the case $p_2=q_2=\infty$). Choosing $q_1=p$, $q_2=\infty$ we obtain \begin{align*} \| |D|^{s_1+s_2}u_x(t) \|_{L^p(\R)}&\lesssim t^{-\frac {s+1}\alpha} +I_1+I_2, \end{align*} where \[ I_1=\int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha}\| |D|^{s_2} f'(u(\sigma))\|_{p_1}\|u_x(\sigma)\|_{p_2}d\sigma \] and \[ I_2=\int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha}\||D|^{s_2}u_x(\sigma)\|_{p} \|f'(u(\sigma))\|_{\infty}d\sigma. \] For $s_2<\alpha-1$, using Step II, we have for any $t\in (0,T)$ \[ I_2\lesssim \int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha} \sigma^{-\frac{1+s_2}\alpha}d\sigma\simeq t^{1-\frac 1\alpha -\frac{s+1}\alpha}\lesssim t^{ -\frac{s+1}\alpha}. \] It remains to estimate the first term. For $u_x$ we use the estimates from the previous step, which apply since $\alpha>1$, to obtain that $\|u_x(\sigma)\|_{p_2}\lesssim \sigma^{-\frac 1{\alpha}}$.
For the term $|D|^{s_2} f'(u)$ we use the fact that $f'(u)=|u|^{q-1}$ is H\"older continuous of order $q-1$, so for $s_2,\beta$ satisfying \[0<s_2<q-1<1, \quad 0<\frac{s_2}{q-1}<\beta <1,\] we have \cite[Proposition A.1]{MR2318286} \[ \| |D|^{s_2}|u|^{q-1}\|_{p_1}\leq \| |D|^\beta u\|_{r_2}^{s_2/\beta} \| |u|^{q-1-\frac {s_2}\beta} \|_{r_3} \] where \[ \frac 1{p_1}=\frac{s_2}{ r_2\beta}+\frac 1{r_3},\quad r_3\Big(1-\frac{s_2}{(q-1)\beta}\Big)>1. \] Choosing $r_3$ large enough such that \[ r_3\Big( (q-1)-\frac{s_2}\beta \Big)\geq 1 \] the last condition is satisfied and moreover the term $\| |u|^{q-1-\frac {s_2}\beta} \|_{r_3}$ belongs to $L^\infty((0,T))$ since $u \in L^\infty((0,T),L^1(\rr)\cap L^\infty(\rr))$. On the other hand, for $\beta <1$ we have estimates on the term $|D|^\beta u$ in the $L^{r_2}(\rr)$-norm, $r_2>1$, obtained previously. This gives us that \begin{align*} I_1& \lesssim \int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha} \| |D|^\beta u(\sigma)\|_{r_2}^{s_2/\beta} \|u_x(\sigma)\|_{p_2}d\sigma\\ & \lesssim \int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha} \sigma ^{-\frac \beta \alpha \frac {s_2}\beta} \sigma ^{-\frac{1}\alpha}d\sigma = \int _0^t (t-\sigma)^{-\frac{1+s_1}\alpha} \sigma ^{-\frac{s_2+1}\alpha}d\sigma<\infty \end{align*} since $s_1<\alpha-1$ and $s_2<\min\{\alpha,q\}-1\leq q-1$. To do that we have to check that for any fixed $p\in (1,\infty)$, $s_2\in (0,q-1)$ and $q\in (1,2)$ the following system has a solution $(p_1,\beta,r_2,r_3)$ \[ p\leq p_1<\infty, \, \frac 1{p_1} =\frac{s_2}{\beta r_2}+\frac {1}{r_3}, \, \frac{s_2}{q-1}<\beta <1, \, (q-1)-\frac{s_2}\beta \geq \frac 1{r_3}, \, r_2>1. \] In order to show the existence of $\beta, r_2,r_3,p_1$ which solve the above system we proceed as follows: given $s_2\in (0,q-1)$ let us choose $\beta$ such that \[ \frac{s_2}{q-1}<\beta<1. \] We now choose $r_2\geq 2p$ and $r_3$ such that \[ r_3\geq \max\left\{2p,\frac{1}{q-1-\frac{s_2}\beta}\right\}. \] Then we choose $p_1$ such that \[ \frac 1{p_1} =\frac{s_2}{\beta r_2}+\frac {1}{r_3}< \frac{q-1}{r_2}+\frac 1{r_3}\leq \frac 1{r_2}+\frac 1{r_3}\leq \frac 1p. \] The choice of $r_2$ and $r_3$ guarantees that $p_1\geq p$. As a consequence of the above estimates, for any $s_2<\min\{q,\alpha\}-1 $ we can always make such a choice. Then we obtain that $u\in H^{s,p}$ for any $s<1+\alpha-1+\min\{q,\alpha\}-1=\alpha+\min\{q,\alpha\}-1$ and $1<p<\infty$. \end{proof} \begin{proposition} \label{PropExistEps} If the initial datum is positive and bounded, $u_0 \geq \epsilon>0$, then the unique mild solution of Problem \eqref{Problem1} satisfies \\ (i) $u(t,x)$ is also positive and bounded, with $\epsilon \le u(t,x) \le \|u_0\|_{L^\infty(\R)}$ for all $t>0$, $x \in \R$.\\ (ii) $u\in C_{b}^\infty ((0,\infty)\times \R)$. \end{proposition} \begin{proof} Using the maximum principle in Proposition \ref{PropExistence} we have that $\epsilon\leq u(t)\leq \|u_0\|_{L^\infty(\rr)}$ for all $t>0$. This gives us that the nonlinearity $f(s)=s^q/q$ belongs to $C^\infty((\epsilon, \|u_0\|_{L^\infty(\rr)}))$ and then the results of \cite[Proposition 5.1, Theorem 5.2]{Droniou} guarantee that $u\in C_{b}^\infty((0,\infty)\times \rr)$. \end{proof} \subsection{Smooth approximate solutions} Some of the estimates we need to prove in this paper require positive solutions. This is why we proceed by approximating the problem with positive data, which, thanks to the maximum principle, also yields positive solutions. We will prove the necessary estimates for the approximating problem and then pass to the limit.
Let $u_0\in L^\infty(\rr)$ be nonnegative initial data for Problem \eqref{Problem1}. We consider the following approximating problem
\[
\left\{ \begin{array}{ll}
(u_\epsilon)_{t}(t,x) + (-\Delta)^{\alpha/2} u_\epsilon(t,x)+|u_\epsilon|^{q-1}(u_\epsilon)_x=0 &\text{for }t>0 \text{ and } x \in \mathbb{R}, \\[10pt]
u_\epsilon(0,x) =u_{0,\epsilon}(x) &\text{for } x \in \mathbb{R},
\end{array} \right. \tag{$P_\epsilon$} \label{Problem1eps}
\]
where $u_{0,\epsilon}$ is an approximation of $u_0$.
\begin{lemma}\label{lemma_ueps_u}
Let $u$ be the solution of Problem \eqref{Problem1} with initial data $u_0 \ge 0$ and let $ u_\epsilon $ be the solution of Problem \eqref{Problem1eps} with initial data $u_{0,\epsilon}=u_0 + \epsilon$. Then for every $T>0$ we have
\begin{equation}\label{Conv_ueps_u}
\max _{t\in [0,T]}\|u_\epsilon (t) - u(t)\|_{L^\infty(\R)} \to 0 \quad \text{as} \quad \epsilon \to 0.
\end{equation}
\end{lemma}
\begin{proof}
Proposition \ref{PropExistEps} shows that there exists a unique mild solution of Problem \eqref{Problem1eps} with $u_\epsilon\in C_{b}^\infty ((0,\infty)\times \R)$ and $\epsilon \le u_\epsilon (t,x) \le \|u_0\|_{L^\infty(\rr)}+\epsilon$ for all $x\in \R$, $t\ge 0.$ For $u_0\geq 0$ the maximum principle in Proposition \ref{PropExistence} guarantees that the solution $u$ of Problem \eqref{Problem1} is also nonnegative. Let us choose $\epsilon\leq \|u_0\|_{L^\infty(\rr)}$ and $A=2\|u_0\|_{L^\infty(\rr)}$. The result follows from the fact that $f(s)=s^{q}/q$ is Lipschitz on $[0,A]$ together with the Fractional Gronwall Lemma \cite[Lemma 2.4]{Bouharguane2013}. Indeed, using the mild formulation we find that
\[
u(t)-u_\epsilon(t)=K_{t}^\alpha\ast (u_0-u_{0,\epsilon})+\int _0^t (K^{\alpha}_{t-s})_x \ast (f(u(s))-f(u_\epsilon(s)))ds.
\]
Then
\begin{align*}
\| u(t)-u_\epsilon(t)\|_{L^\infty(\rr)}&\leq \|K_{t}^\alpha\|_{L^1(\rr)} \|u_0-u_{0,\epsilon}\|_{L^\infty(\rr)}\\
&\quad+\int _0^t \|(K_{t-s}^\alpha)_x\|_{L^1(\rr)} \| f(u(s))-f(u_\epsilon(s)) \|_{L^\infty(\rr)}ds\\
&\leq \epsilon + C A^{q-1} \int _0^t (t-s)^{-\frac 1\alpha} \| u(s)-u_\epsilon(s)\|_{L^\infty(\rr)}ds.
\end{align*}
Since $\alpha>1$ we can apply the Fractional Gronwall Lemma \cite[Lemma 2.4]{Bouharguane2013} to obtain that for any $T>0$ there exists a positive constant $C(T)$ such that
\[
\| u(t)-u_\epsilon(t)\|_{L^\infty(\rr)}\leq \epsilon\, C(T), \quad \forall \ t\in [0,T].
\]
This finishes the proof.
\end{proof}
\subsection{Hyperbolic estimates for \eqref{Problem1eps}}
For any $\epsilon>0$ we now take as initial data in Problem \eqref{Problem1eps} a function $u_{0,\epsilon}$ such that $\epsilon\leq u_{0,\epsilon}\leq m$, and let $u_\epsilon$ be the solution of Problem \eqref{Problem1eps}. The following is the key estimate towards the proof of the asymptotic result.
\begin{proposition}\label{prop.oleinik}
Let $1<q,\alpha\leq 2$. For any $\epsilon>0$ the solution $u_\epsilon$ of Problem \eqref{Problem1eps} satisfies the \emph{Oleinik type estimate}:
\begin{equation}\label{OleinikEps}
(u_\epsilon ^{q-1})_ x (t,x)\le \frac{1}{t}, \quad \forall t>0, \, x\in \R.
\end{equation}
\end{proposition}
\begin{remark}We emphasize here that the result holds for all $q,\alpha\in (1,2]$ without the assumption $q<\alpha$. When $\alpha=2$ this estimate has been obtained in \cite{EVZArma}. A similar result has been proved in \cite{AlibaudAndreianov} when $\alpha\in (0,1)$ and $q=2$ for the regularised equation
\[
u_{t} + (-\Delta)^{\alpha/2} u+|u|^{q-1}u_x-\epsilon u_{xx}=0.
\]
We are not able to use the barrier method as in \cite{AlibaudAndreianov}. The difficulty comes from the fact that one should prove that, for a suitable function, e.g.~$\Phi(x)=(1+x^{2})^{\gamma}$, the term
\[
A(w,z)=- (2-q)w \La [z^{\beta+1}] + z\La [ z^{\beta}w]
\]
satisfies $z^{-(\beta+1)}(t,x)A(\Phi(x),z(t,x))\geq -C_{z} $ for all $x\in \rr$ and $t>0$, where $z$ is a $C_b^{\infty}((0,\infty)\times \rr)$ function and $\beta=\frac{2-q}{q-1}>0$. Observe that in the case $q=2$ we have $\beta=0$ and $z^{-(\beta+1)}A(w,z)=\La [w]$, so the required estimate holds by choosing $\gamma$ suitably.
\end{remark}
\begin{proof}We consider $\alpha\in (1,2)$ since the case $\alpha=2$ has been treated in \cite{EVZArma}. Let $z(t,x)=(u_\epsilon)^{q-1}(t,x)$. For simplicity we will not make explicit the dependence on $\epsilon$. Then $z\in C_b^{\infty}((0,\infty)\times \rr)$ and
$$
z_t + (q-1)z^{1- \frac{1}{q-1}} \La [z^{\frac{1}{q-1}}] + z z_x=0.
$$
Let $w(t,x)=z_x(t,x)$. Then $w\in C_b^{\infty}((0,\infty)\times \rr)$ and it verifies
\begin{equation*}
w_t + w^2 + zw_x +z^{-\beta -1} A(w,z)=0.
\end{equation*}
We continue as in \cite{CazacuIgnatPazoto}, following some ideas from \cite{Droniou,KarchQualitProp2009}. Let us denote $W(t)=\sup _{x\in \rr} w(t,x)$. Since $z\in C^\infty_b((0,\infty)\times \rr)$, using the same arguments as in \cite[Th. 1.18]{KarchQualitProp2009} we have that $W$ is locally Lipschitz. In particular $W$ is locally absolutely continuous and hence differentiable almost everywhere. We now differentiate $W(t)$ for $t>0$ and obtain the equation it satisfies. Let us choose $0< s<t$. We use Taylor's expansion in the time variable $t$:
\[
w( t,x)\leq w(t-s,x)+ s w_t( t,x) + C s^2\leq W(t-s)+s w_t ( t,x) + C s^2.
\]
It follows that
\begin{equation} \label{eq.w.1}
w(t,x)+s \Big(w^2(t,x)+ z w_x(t,x) +z^{-\beta-1}(t,x) A(w(t,x),z(t,x))\Big)\leq W(t-s)+Cs^2.
\end{equation}
Let us fix $t>0$ and consider points $x_n$ such that $w(t,x_n)=W(t)-1/n$. Following \cite[Lemma~1.17]{KarchQualitProp2009} we have
\begin{equation*}
\lim _{n\rightarrow\infty} w_x(t,x_n)= 0.
\end{equation*}
Moreover, since the sequence $(z(t,x_n))_{n\geq 1}$ is bounded we can assume that, up to a subsequence, $z(t,x_n)\rightarrow p(t)$ for some $p(t)\in [\epsilon,m]$. Now we evaluate \eqref{eq.w.1} at the point $x=x_n$. Letting $n\rightarrow\infty$ we can easily see that, up to a subsequence,
\[
w(t,x_n)+s( w^2(t,x_n)+z(t,x_n)w_x(t,x_n))\rightarrow W(t)+ s W^2(t).
\]
We claim that, up to a subsequence,
\begin{equation} \label{below.A}
A(w(t,x_n),z(t,x_n))\geq W(t) I_n(t)-o(1)
\end{equation}
for some bounded non-negative sequence $I_n(t)$. Since $I_n(t)$ is bounded and non-negative, up to a further subsequence $I_n(t)\rightarrow q(t)$ for some $q(t)\geq 0$, and inequality \eqref{eq.w.1} becomes
\[
W(t)+s\big(W^2(t) +p^{-\beta-1}(t) q(t) W(t)\big)\leq W(t-s)+Cs^2.
\]
Letting $s\rightarrow 0$ we obtain that for a.e. $t>0$, $W$ satisfies
\[
W'(t)+W^2(t)+p^{-\beta-1}(t) q(t) W(t)\leq 0.
\]
Now it follows using classical ODE arguments (see for example \cite[p. 3136]{CazacuIgnatPazoto}) that $W$ satisfies
\[
\max\{W(t),0\}\leq \frac 1t, \quad \forall \ t>0.
\]
To finish the proof it remains to prove claim \eqref{below.A}. To do that, we use representation \eqref{frac.lap} with a suitable $r=r_n$ depending on $x_n$ that will be specified later.
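For later use we record the elementary identities, immediate from the definition of $\beta$:
\[
\beta=\frac{2-q}{q-1}\ \Longrightarrow\ \beta+1=\frac{1}{q-1} \quad\text{and}\quad \frac{\beta}{\beta+1}=2-q\, .
\]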
Using that $\beta/(\beta+1)=2-q$ we write $A(w,z)$ as follows:
\begin{align*}
A&(w(x),z(x))/c(\alpha)\\
&=-z(x)\int _{|y|>r}\frac{z^\beta w(x+y)-z^\beta w(x)}{|y|^{\alpha+1}}dy -z(x)\int _{|y|<r}\frac{z^\beta w(x+y)-z^\beta w(x) - y (z^\beta w)_x(x)}{|y|^{\alpha+1}}dy\\
&+\frac{\beta}{\beta+1} w(x)\int _{|y|>r}\frac{z^{\beta+1} (x+y)-z^{\beta+1}(x)}{|y|^{\alpha+1}}dy \\
& + \frac{\beta}{\beta+1} w(x)\int _{|y|<r}\frac{z^{\beta+1}(x+y)-z^{\beta+1}(x) - y (z^{\beta+1})_x(x)}{|y|^{\alpha+1}}dy\\
&=\int _{|y|>r}\Big[ w(x)\big( \frac{z^{\beta+1}(x)}{\beta +1}+\frac{\beta z^{\beta+1}(x+y)}{\beta +1}\big) -w(x+y)z^{\beta}(x+y)z(x)\Big]\frac{dy}{|y|^{\alpha+1}}+R(r,w,z),
\end{align*}
where we have collected the integrals over the ball of radius $r$ in the remainder term $R$. It is easy to evaluate each integral in $R$ and prove that
\[
|R(r,w,z)|\lesssim r^{2-\alpha} (\|z\|_{\infty} \| (z^\beta w)_{xx}\|_{\infty}+\|w\|_{\infty} \| (z^{\beta+1})_{xx}\|_\infty )\leq C(\epsilon,\beta, \|z(t)\|_{C^3_b(\rr)}) r^{2-\alpha}.
\]
Let us evaluate $A(w,z)$ at the point $x=x_n$. Using that $w(t,x_n)=W(t)-1/n$ we obtain
\begin{align*}
A&(w(t,x_n),z(t,x_n) )\\
&\geq \int _{|y|>r}\Big[ w(t,x_n)\big( \frac{z^{\beta+1}(t,x_n)}{\beta +1}+\frac{\beta z^{\beta+1}(t,x_n+y)}{\beta +1}\big) \\
&\qquad\qquad\qquad\qquad\qquad\qquad -w(t,x_n+y)z^{\beta}(t,x_n+y)z(t,x_n)\Big]\frac{dy}{|y|^{\alpha+1}} -Cr^{2-\alpha}\\
&=\int _{|y|>r}\Big[ W(t)\big( \frac{z^{\beta+1}(t,x_n)}{\beta +1}+\frac{\beta z^{\beta+1}(t,x_n+y)}{\beta +1}\big) -w(t,x_n+y)z^{\beta}(t,x_n+y)z(t,x_n)\Big]\frac{dy}{|y|^{\alpha+1}}\\
&\quad -\frac 1n \int _{|y|>r} \big( \frac{z^{\beta+1}(x_n)}{\beta +1} +\frac{\beta z^{\beta+1}(x_n+y)}{\beta +1}\big)\frac{dy}{|y|^{\alpha+1}}-Cr^{2-\alpha}\\
&\geq W(t) \int _{|y|>r}\Big[ \big( \frac{z^{\beta+1}(t,x_n)}{\beta +1}+\frac{\beta z^{\beta+1}(t,x_n+y)}{\beta +1}\big) -z^{\beta}(t,x_n+y)z(t,x_n)\Big]\frac{dy}{|y|^{\alpha+1}}\\
&\quad-Cr^{2-\alpha}-\frac{\|z(t)\|_\infty^{\beta+1}}{nr^\alpha}\\
&:=W(t)\,I(z(t),r,x_n) -Cr^{2-\alpha}-\frac{C}{nr^\alpha}.
\end{align*}
Let us now choose $r=r_n$ such that $r_n\rightarrow 0$ and $nr_n^\alpha\rightarrow\infty$ as $n\rightarrow\infty$. Lemma \ref{positive.term} below shows that $I_n(t)=I(z(t),r_n,x_n)$ is well defined and uniformly bounded. Moreover, $I_n(t)\geq 0$: indeed this follows by applying Young's inequality $ \frac{a^p}{p}+\frac{b^q}{q}\ge ab$, $\frac{1}{p}+\frac{1}{q}=1$, with $a=z(t,x_n)$, $b=z^{\beta}(t,x_n+y)$, $p=\beta+1$, $q=\frac{\beta+1}{\beta}$. Hence
\[
A(w(t,x_n),z(t,x_n))\geq W(t)I_n(t)-o(1)
\]
and claim \eqref{below.A} is proved. The proof is now complete.
\end{proof}
\begin{lemma} \label{positive.term}
Let $z\in C^1_b(\rr)$ be such that $0<\epsilon\leq z\leq m$, and let $\alpha\in (0,2]$, $\beta>0$. The function
\[
I(z,r,x)= \int _{|y|>r}\left(\frac 1{\beta+1} z^{\beta+1}(x)+\frac{\beta}{1+\beta}z^{\beta+1}(x+y) -z(x)z^{\beta}(x+y)\right)\frac{dy}{|y|^{1+\alpha}},
\]
defined for $r>0, \ x\in \rr,$ satisfies
\[
|I(z,r,x)|\leq C(\beta,\epsilon,m) \|z\|_{C^1_b(\rr)}^2.
\]
\end{lemma}
\begin{proof}
Observe that for any $\beta>0$ we have $\beta t^{\beta+1} +1-(\beta+1)t^{\beta}\sim (t-1)^2$ as $t\rightarrow 1$. Then the following inequality holds:
\[
|\beta t^{\beta+1} +1-(\beta+1)t^{\beta}|\leq C(\beta) \max\{1,t^{\beta-1}\} |t-1|^2, \quad \forall t>0.
\]
Applying this with $t=z(x+y)/z(x)$ and integrating in $y$ we obtain that
\begin{align*}
\int _{|y|>r}& \left(\frac 1{\beta+1} z^{\beta+1}(x)+\frac{\beta}{1+\beta}z^{\beta+1}(x+y) -z(x)z^{\beta}(x+y)\right)\frac{dy}{|y|^{1+\alpha}} \\
&\leq C(\beta, \epsilon,m) \int _{|y|>r} \frac{|z(x+y)-z(x)|^2}{|y|^{1+\alpha}}dy\\
&\leq C(\beta, \epsilon,m) \Big(\|z_x\|_{L^\infty(\rr)}^2\int _{|y|<1}\frac 1{|y|^{\alpha-1}}dy+\|z\|_{L^\infty(\rr)}^2 \int _{|y|>1}\frac 1{|y|^{\alpha+1}}dy\Big).
\end{align*}
The proof is now complete.
\end{proof}
\subsection{Estimates for the solution of Problem \eqref{Problem1} }
We will prove various estimates for the mild solution $u$ of Problem \eqref{Problem1}, using the estimate in Proposition \ref{prop.oleinik} as the starting point. We recall that $u \in C((0,\infty), H^{s,p}(\R))$ for any $s<\alpha+q-1$ and $1<p<\infty$, according to Proposition \ref{PropExistence}. Remark that \eqref{Conv_ueps_u} and the regularity of $u$ imply that $u_\epsilon (t,x) \to u(t,x)$ for all $t>0$, $x\in \R$, where $u_\epsilon$ is the solution of Problem \eqref{Problem1eps} with initial data $u_{0,\epsilon}=u_0+\epsilon$.
\begin{lemma}\label{est.for.u}
Let $u$ be the solution of Problem \eqref{Problem1} with nonnegative initial data $u_0\in L^1(\R)\cap L^\infty(\rr)$. Then the following estimates hold:
\begin{enumerate}
\item Mass conservation: $\int_{\R}u(t,x)dx= M, \quad \forall t\ge 0.$
\item Hyperbolic estimate: $\displaystyle (u^{q-1})_ x (t,x)\le \frac{1}{t}$ for all $ t>0 $ in $\mathcal{D'}(\rr)$.
\item\label{Linf}Upper bound: $\displaystyle 0\leq u(t,x) \le \left(\frac{q}{q-1}M \right)^{1/q} t^{-1/q} $ for all $t>0,\ x\in \R.$
\item Decay of the $L^p$-norm, $1\leq p\leq \infty$:
\[\displaystyle \|u(t,\cdot)\|_{L^p(\R)}\le \left(\frac{q}{q-1}\right)^\frac{p-1}{pq}M^{\frac{p-1}{pq}+\frac{1}{p}}\, t^{-\frac{1}{q}\left(1-\frac{1}{p}\right)},\ \forall \ t>0.
\]
\item Decay of the spatial derivative: $ \displaystyle u_x(t,x) \le C(q) M ^{\frac{2-q}{q}} t^{-\frac{2}{q}}$ for all $ t>0 $, a.e. $x\in \rr$.
\item $W^{1,1}_{\text{loc}}(\R)$ estimate:
\[
\int_{|x|\leq R} |u_x(t,x)|dx \le 2R\, C(q) M ^{\frac{2-q}{q}} t^{-\frac{2}{q}} + 2\left(\frac{q}{q-1}M \right)^{1/q} t^{-1/q},\quad \forall t>0.
\]
\item Energy estimate: for every $0<\tau<T$,
\begin{equation*}
\displaystyle \int_\tau^T \int_\R |(-\Delta)^{\alpha/4} u(t,x)|^2 dx dt \le \frac{1}{2}\int_{\R}u^2(\tau,x)dx \le \frac{1}{2}\left(\frac{q}{q-1} \right)^{1/q} \tau^{-1/q} M^{\frac{q+1}{q}}.
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
Using the regularity obtained in Proposition \ref{PropExistence} ii), we can integrate the integral representation \eqref{mild.form} with respect to the $x$ variable. Using Fubini's theorem, we obtain the mass conservation property. Alternatively, mass conservation also follows from the distributional formulation. In fact, a classical approximation argument allows us to write for any $\psi\in C_c^2(\rr)$ the following identity:
\[
\int _{\rr} u (t,x)\psi (x)dx-\int _{\rr}u (0,x)\psi(x)dx=\int _0^t \int _{\rr} f(u )\psi_x-\int _0^t \int _{\rr} u (-\Delta)^{\alpha/2}\psi.
\]
We choose as test function $\psi_R(x)=\psi(x/R)$ where $\psi\in C_c^2(\rr)$, $0\leq \psi\leq 1$, $\psi(x)\equiv 1$ for $|x|\leq 1 $ and $\psi(x)\equiv 0$ for $|x|\geq 2$. Then $(\psi_R)_x=O(R^{-1})$ and $ (-\Delta)^{\alpha/2}\psi_R =O (R^{-\alpha}).$ Letting $R\rightarrow \infty$ gives us the conservation of the mass.
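To make the last step quantitative (a sketch, using only the $L^\infty(\rr)$ bound $\|u(s)\|_{L^\infty(\rr)}\leq \|u_0\|_{L^\infty(\rr)}$ and the finiteness of the mass), both error terms vanish as $R\rightarrow\infty$:
\[
\Big|\int _0^t \int _{\rr} f(u)(\psi_R)_x\Big|\lesssim \frac{t}{R}\,\|u_0\|_{L^\infty(\rr)}^{q-1}M \quad\text{and}\quad \Big|\int _0^t \int _{\rr} u\,(-\Delta)^{\alpha/2}\psi_R\Big|\lesssim \frac{tM}{R^{\alpha}},
\]
since $|f(u)|\leq \|u\|_{L^\infty(\rr)}^{q-1}|u|/q$.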
For the second property we consider $u_\epsilon $ the solution of Problem \eqref{Problem1eps} with $u_{0,\epsilon}=u_0+\epsilon$. Then by Lemma \ref{lemma_ueps_u} we have that $u_\epsilon(t) \to u(t)$ in $ L^\infty(\R)$. In this way we can pass to the limit in estimate \eqref{OleinikEps} in the sense of distributions. The regularity results obtained in Proposition \ref{PropExistence} show that $u(t)$ is a continuous function for any $t>0$. Using estimate \eqref{OleinikEps} for $u_\epsilon$ and letting $\epsilon\rightarrow 0$, we obtain
\begin{equation}\label{ineq1}
u^{q-1}(t,x)-u^{q-1}(t,y)\leq \frac{x-y}t, \quad \forall \, y<x, \quad \forall \, t>0.
\end{equation}
The proof of the third estimate follows from \eqref{ineq1}: we fix $x\in \R$ and we integrate in $y$ on the interval $I=\{y \in \R: y<x \text{ and } u^{q-1}(t,x)-\frac{x-y}t \ge 0\}$. Thus
\begin{align*}
M&=\int_{\R}u(t,y)dy \ge \int_{I} \left(u^{q-1}(t,x)-\frac{x-y}t \right)^{1/(q-1)}dy =\frac{1}{t^{1/(q-1)}}\int_0^{t u^{q-1}(t,x)} z^{1/(q-1)} dz \\
&=\frac{q-1}{q} t u^{q}(t,x).
\end{align*}
The fourth estimate is a consequence of mass conservation and the previous bound. Using the intermediate value theorem we obtain that
\[
u(t,x)-u(t,y)= \left( u^{q-1}(t,x)-u^{q-1}(t,y) \right) \frac{1}{q-1}\xi^{2-q} ,
\]
for some $\xi$ between $u(t,x)$ and $u(t,y)$. Then according to \eqref{ineq1}, for any $y<x$ the following holds:
\[
u(t,x)-u(t,y)\le \frac{1}{q-1}\|u(t)\|_{L^\infty}^{2-q} \, \frac{x-y}{t}.
\]
Then using the upper bound from point \eqref{Linf} we get
\[
\frac{ u(t,x)-u(t,y)}{x-y}\le C(q) M^\frac{2-q}{q}t^{-\frac{2}{q}}.
\]
Since $u$ is differentiable a.e., letting $y\to x$ we obtain the desired upper bound for $u_x$.

Denoting $B_R=(-R,R)$ and using that $u\in W^{1,1}_{loc}(\rr)$ we have
\begin{align*}
\int_{B_R}|u_x(t,x)| dx &= \int_{B_R \cap \{u_x>0\}} u_x dx + \int_{B_R \cap \{u_x<0\}} (-u_x) dx \\
&=2 \int_{B_R \cap \{u_x>0\}} u_x dx + u(-R) -u(R) \\
&\le 2R\, C(q) M ^{\frac{2-q}{q}} t^{-\frac{2}{q}} + 2\left(\frac{q}{q-1}M \right)^{1/q} t^{-1/q}.
\end{align*}
Multiplying equation \eqref{Problem1} by $u$ and integrating by parts we get
$$\frac{1}{2}\frac{d}{dt}\int_{\R}{u^2} dx + \int_{\R} |(-\Delta)^{\alpha/4}u|^2dx =0.$$
The decay of the $L^2(\rr)$-norm gives that
\begin{align*}
\int_{\tau}^T\int_{\R} |(-\Delta)^{\alpha/4}u|^2dx dt &\le \frac{1}{2} \int_{\R}{u^2(\tau)} dx \leq \frac{1}{2}\left(\frac{q}{q-1} \right)^{1/q} \tau^{-1/q} M^{\frac{q+1}{q}}.
\end{align*}
The proof is now finished.
\end{proof}
\section{Asymptotic behaviour}\label{SectAsymp}
Let $u$ be the unique mild solution to Problem \eqref{Problem1} with nonnegative data $u_0\in L^1(\R) \cap L^\infty(\R)$ obtained in Proposition \ref{PropExistence}. In order to prove the asymptotic behaviour we follow the method developed by Kamin and V\'{a}zquez in \cite{KamVaz88}. For every $\lambda>0$, we define the rescaled function
\begin{equation}\label{u_lambda}
u_\lambda(t,x) := \lambda u(\lambda^{q} t, \lambda x).
\end{equation}
It follows that $\ul$ is a solution of the problem
\[
\left\{ \begin{array}{ll}
(\ul)_t + \lambda^{q-\alpha}(-\Delta)^{\alpha/2} [\ul] +(\ul)^{q-1}(\ul)_x=0, & x\in \rr, \, t>0, \\[10pt]
u_\lambda(0,x)=\lambda u_0(\lambda x), &x\in \rr.
\end{array} \right. \tag{$P_\lambda$}\label{Plambda}
\]
Using the properties obtained in Lemma \ref{est.for.u} and the definition of $u_\lambda$ we obtain the following estimates for $u_\lambda$, uniform in $\lambda$.
\begin{lemma}\label{LemmaEstul}
Let $\ul$ be the rescaled function defined by \eqref{u_lambda}.
Then the following a priori estimates hold:
\begin{enumerate}
\item Mass conservation: $\int_{\R}u_\lambda(t,x)dx= M, \quad \forall t\ge 0, \, \forall \lambda>0.$
\item Decay of the $L^p$-norm: \\ $\displaystyle \|u_{\lambda}(t,\cdot)\|_{L^p(\R)}\le \left(\frac{q}{q-1}\right)^\frac{p-1}{pq}M^{\frac{p-1}{pq}+\frac{1}{p}}\, t^{-\frac{1}{q}\left(1-\frac{1}{p}\right)}$, $\forall \lambda>0,$\, $\forall p \ge 1$.
\item\label{W11loc} $W^{1,1}_{loc}(\R)$ estimate: for $R>0$ we have\\
$ \displaystyle \int_{B_R} | (\partial_x\ul)(t,x)|dx \le 2R\, C(q) M ^{\frac{2-q}{q}} t^{-\frac{2}{q}} + 2\left(\frac{q}{q-1}M \right)^{1/q} t^{-\frac{1}{q}},\quad \forall\ t>0.$
\item Energy estimate: for every $0<\tau<T$ and $\lambda>0$,
\begin{equation*}
\displaystyle \lambda^{q-\alpha}\int_\tau^T \int_\R |(-\Delta)^{\alpha/4} \ul(t,x)|^2 dx dt \le \frac{1}{2}\int_{\R}\ul^2(\tau,x)dx \le \frac{1}{2}\left(\frac{q}{q-1} \right)^{1/q} \tau^{-1/q} M^{\frac{q+1}{q}}.
\end{equation*}
\end{enumerate}
\end{lemma}
In what follows we establish the results stated in Theorem \ref{ThmAsympBehav} by rewriting the asymptotic behaviour \eqref{limit.asymp} in an equivalent form. For $1\leq p<\infty$ and $t>0$ we will prove that
\begin{equation} \label{equiv.limit}
\| u_\lambda(t,x) - U_M(t,x) \|_{L^p(\R)}\to 0 \quad \text{as}\quad \lambda \to \infty,
\end{equation}
where $U_M(t,x)$ is the solution to the purely convective equation \eqref{limit.problem}. We emphasize that it is enough to prove \eqref{equiv.limit} only for some $t=t_0>0$.
\begin{proof}[Proof of Theorem \ref{ThmAsympBehav}]
For the reader's convenience we divide the proof according to the four-step method developed in \cite{KamVaz88}. Moreover, for completeness we recall the following classical compactness result due to Aubin, Lions and Simon.
\begin{theorem}[\cite{Simon}, Th.~5] \label{3spaces}
Let us consider three Banach spaces $X\hookrightarrow B \hookrightarrow Y$ where the embedding $X\hookrightarrow B$ is compact. Assume $1\leq p\leq \infty$ and \\
i) $\mathcal{F}$ is bounded in $L^p((0,T),X)$,\\
ii) $\|\tau _h f-f\|_{L^p((0,T-h),Y)}\rightarrow 0$ as $h\rightarrow 0$, uniformly for $f\in\mathcal{F}$.
Then $\mathcal{F} $ is relatively compact in $L^p((0,T),B)$ (and in $C([0,T],B)$ if $p=\infty$).
\end{theorem}
Let us consider $0<t_1<t_2<\infty$.
\medskip

\noindent\textbf{Step I. Compactness of the family $(u_\lambda)_{\lambda>0}$ in $C([t_1,t_2],L^2_{loc}(\rr))$}. Let $B_R =(-R,R)$. We apply the Aubin-Lions-Simon compactness argument in Theorem \ref{3spaces} to the triple $W^{1,1}(B_R) \hookrightarrow L^2(B_R)\hookrightarrow H^{-1}(B_R)$. Estimate \ref{W11loc} in Lemma \ref{LemmaEstul} and the mass conservation give us that $(u_\lambda)_{\lambda>0} $ is uniformly bounded in $L^{\infty}((t_1,t_2),W^{1,1}(B_R))$. Moreover, we can prove that $(\partial _t u_{\lambda})_{\lambda>1}$ is uniformly bounded in $L^2 ((t_1,t_2),H^{-1}(B_R))$. Indeed, let us choose $\varphi \in C_c^\infty((0,\infty)\times B_R)$, extended by zero outside $B_R$.
For such $\varphi$ and $\lambda>1$ we have
\begin{align*}
& \left| \int_{t_1}^{t_2}\int_{\R} (\partial_t u_{\lambda}) \varphi \right| \le \left|\int_{t_1}^{t_2}\int_{\R}(u_{\lambda}^q)_x \varphi \right| + \lambda^{q-\alpha} \left| \int_{t_1}^{t_2}\int_{\R} \La[\ul] \varphi \right| \\
&= \left|\int_{t_1}^{t_2}\int_{\R}u_{\lambda}^q \varphi_x \right| + \lambda^{q-\alpha} \left| \int_{t_1}^{t_2} \int_{\R}(-\Delta)^{\alpha/4}[\ul] (-\Delta)^{\alpha/4} \varphi \right| \\
&\le \|\ul^q \|_{L^2((t_1,t_2),L^2(\R))} \cdot \|\varphi \|_{L^2((t_1,t_2),H^1(\R))} + \\
& \quad+ \lambda^{q-\alpha} \left(\int_{t_1}^{t_2} \int_{\R}|(-\Delta)^{\alpha/4}\ul |^2 \right)^{1/2} \left(\int_{t_1}^{t_2} \int_{\R}|(-\Delta)^{\alpha/4}\varphi |^2 \right)^{1/2} \\
&= \|\ul^q \|_{L^2((t_1,t_2),L^2(\R))} \cdot \|\varphi \|_{L^2((t_1,t_2),H^1(\R))} + \\
& \quad + \lambda^{\frac{q-\alpha}{2}} \left(\lambda^{q-\alpha} \int_{t_1}^{t_2} \int_{\R}|(-\Delta)^{\alpha/4}\ul|^2 \right)^{1/2} \left(\int_{t_1}^{t_2} \int_{\R}|(-\Delta)^{\alpha/4}\varphi |^2 dx dt\right)^{1/2} \\
&\le \|\ul^q \|_{L^2((t_1,t_2),L^2(\R))} \cdot \|\varphi \|_{L^2((t_1,t_2),H^1(\R))} + \lambda^{\frac{q-\alpha}{2}} C(M,q,t_1) \|\varphi \|_{L^2((t_1,t_2),H^{\alpha/2}(\R))} \\
&\le C(M,q,t_1) \|\varphi \|_{L^2((t_1,t_2),H^1(\R))}.
\end{align*}
This gives us that
$$\|(u_{\lambda})_t \|_{L^2 ((t_1,t_2),H^{-1}(B_R))} \le C(M,q,t_1), \quad \forall \lambda \ge 1.$$
Using the classical compactness argument in Theorem \ref{3spaces}, we deduce that $(\ul)_{\lambda>1}$ is relatively compact in $C([t_1,t_2],L^2(B_R))$. Therefore, up to a subsequence (not relabeled), there exists $U \in C([t_1,t_2],L^2(B_R))$ such that $\ul \to U$ in $ C([t_1,t_2],L^2(B_R)).$ By a diagonal argument we get that $U \in C([t_1,t_2], L^2_{loc}(\R))$ and
\begin{equation}\label{conv:ul:Uloc}
\ul \to U \quad \text{in} \quad C([t_1,t_2],L^2_{loc}(\R)) \quad \text{as}\quad \lambda \to \infty.
\end{equation}
\medskip

\noindent \textbf{Step II. Tail control and convergence in $C([t_1,t_2],L^1(\rr))$.} In view of \eqref{conv:ul:Uloc} we obtain that $\ul \to U$ in $C([t_1,t_2],L^1_{loc}(\R)) $. In order to prove the convergence in $C([t_1,t_2],L^1(\rr))$ we will prove a uniform tail control of the functions $(u_\lambda)_{\lambda>1 }$. More precisely, we prove that there exists a constant $C(M)$ such that
\begin{equation}\label{tail}
\int_{|x|>2 R} \ul(t,x) dx \leq \int_{|x|>R} u_0(x) dx + C( M)\left( \frac{t\lambda^{q-\alpha}}{R^\alpha}+ \frac{t^{1/q}}{R}\right),\quad \forall t>0.
\end{equation}
In view of this estimate, classical arguments give us that
\begin{equation*}
\ul \to U \quad \text{in} \quad C([t_1,t_2],L^1(\R)) \quad \text{as}\quad \lambda \to \infty.
\end{equation*}
Let us now prove estimate \eqref{tail}. Let $\varphi\in C^2(\rr)$ be such that $0\le \varphi\le 1$, $\varphi\equiv 1$ for $|x|\ge 2$ and $\varphi \equiv 0$ for $|x|\le 1$, and set $\varphi_R(x)=\varphi(x/R)$. Multiplying equation \eqref{Plambda} by $\varphi_R$ and integrating by parts we obtain
\begin{align*}
\int_{\R}\ul (t) \varphi_R dx &=\int_{\R}\ul(0) \varphi_R dx -\lambda^{q-\alpha} \int_0^t\int_{\R} \ul (\tau,x) (-\Delta)^{\alpha/2} \varphi_R dx d\tau \\
&\quad +\int_0^t\int_{\R} \ul ^q(\tau,x) (\varphi_R)_x dx d\tau\\
&=I+II+III.
\end{align*}
For $\lambda >1$ the first term satisfies
$$
I \le \int_{|x|\ge R}\ul(0,x) dx =\int_{|x|>\lambda R} u_0(x) dx \le \int_{|x|> R} u_0(x) dx .
$$
Using that $\varphi\in C^2_b(\rr)$ and the homogeneity of $(-\Delta)^{\alpha/2} $ we obtain that
$$ |((-\Delta)^{\alpha/2} \varphi_R) (x)| = \frac{1}{R^{\alpha}}|( (-\Delta)^{\alpha/2} \varphi) (x/R) | \le \frac{C}{R^{\alpha}}.$$
Thus the second term satisfies
\begin{align*}
II&\le \lambda^{q-\alpha} \| (-\Delta)^{\alpha/2} \varphi_R \|_{L^\infty(\R)}\int_0^t\int_{\R} \ul (\tau,x) dx d\tau \le \lambda^{q-\alpha} \frac{C}{R^{\alpha}} t M.
\end{align*}
The third term is bounded as follows:
\begin{align*}
III&\le \|(\varphi_R)' \|_{L^\infty(\R)} \int_0^t \| \ul (\tau)\|_{L^q(\R)}^q d\tau\le C(M)\frac{t^{1/q}}{R}.
\end{align*}
Using the fact that $\varphi_R$ is identically one outside the ball of radius $2R$ we obtain the desired estimate \eqref{tail}.
\medskip

\noindent \textbf{Step III. Identifying the limit.} We now prove that the function $U\in C ((0,\infty), L^1(\rr))$ obtained above is an entropy solution of system \eqref{limit.problem}. First, by construction in \cite{AlibaudEntropy,Droniou}, $u$ is an entropy solution of Problem \eqref{FHE} and this implies that $\ul$ is an entropy solution of Problem \eqref{Plambda}. In view of Definition \ref{DefnEntropySol} with the particular choice $\eta_{k}(s)=|s-k|$ and $\phi_{k}(s)=\,{\rm sgn}(s-k){(f(s)-f(k))}$, the function $u_\lambda$ satisfies, for any $\varphi\in C_{c}^{\infty}( (0,\infty)\times \R)$, the following inequality:
\begin{align*}
\int _0^\infty\int_{\R} &(|u_{\lambda}-k|\partial_t \varphi +\,{\rm sgn}(u_{\lambda}-k)(f(u_{\lambda})-f(k))\partial _x\varphi)dx dt \\ \nonumber
&+c(\alpha) \lambda^{q-\alpha} \int _0^\infty\int_{\R} \,{\rm sgn} (u_{\lambda }(t,x)-k) \int _{|z|> r} \frac{ u_\lambda(t,x+z)-u_\lambda(t,x)}{|z|^{1+\alpha}} \varphi(t,x) dzdxdt+\\ \nonumber
&+c(\alpha) \lambda^{q-\alpha} \int _0^\infty\int_{\R} \int _{|z|\leq r} |u_{\lambda }(t,x)-k| \frac{\varphi(t,x+z)-\varphi(t,x)- \varphi_x(t,x)z}{|z|^{1+\alpha}}dzdxdt\geq 0.
\end{align*}
We prove that the last two terms, denoted by $I_{1}$ and $I_{2}$, tend to zero as $\lambda\rightarrow \infty$. Assume that $\varphi$ is supported in $(0,T)\times (-R,R)$ for some positive $T$ and $R$. The first term satisfies
\begin{align*}
|I_1|&\leq 2c(\alpha) \lambda^{q-\alpha} \|\varphi\|_{L^\infty(\R)} \int_0^T \int _{\R} |u_\lambda(t,x)|\int _{|z|>r} \frac1{|z|^{1+\alpha}}dz\, dx\, dt\\
&\leq C(\alpha,r,\varphi) T M \lambda^{q-\alpha}\rightarrow 0, \quad \lambda\rightarrow\infty.
\end{align*}
In the case of the second term we have
\begin{align*}
|I_{2}|\leq c(\alpha)\lambda^{q-\alpha} \|\varphi''\|_{L^\infty(\R)} \int _0^T \int _{|x|\leq R+r} |u_\lambda(t,x)-k| \int_{|z|\leq r} \frac1{|z|^{\alpha-1}}dz\, dx\, dt \lesssim \lambda^{q-\alpha}\rightarrow 0, \quad \lambda\rightarrow\infty.
\end{align*}
Since $u_\lambda\rightarrow U$ in $C((0,\infty),L^1(\rr))$ and $\varphi\in C_c^\infty((0,\infty)\times \rr)$ we obtain
\[
\int _0^\infty\int_{\R} |u_{\lambda}(t,x)-k|\partial_t \varphi \,dxdt\rightarrow \int _0^\infty\int_{\R} |U(t,x)-k|\partial_t \varphi \,dxdt.
\]
Observe that since $u_\lambda\rightarrow U$ in $C((0,\infty),L^1(\rr))$, the function $U$ satisfies
\[
\int_{\rr}U(t,x)dx=M.
\]
Moreover $u_\lambda \rightarrow U$ a.e. in $(0,\infty)\times \rr$. This shows that the $L^\infty(\rr)$ bound on $u_\lambda$ transfers to $U$:
\[
\| U(t)\|_{L^\infty(\rr)}\leq C(M) t^{-1/q}.
\]
This shows that $f(u_\lambda)\rightarrow f(U)$ in $C((0,\infty),L^1(\rr))$ and
\[
\int _0^\infty\int_{\R} \,{\rm sgn}(u_{\lambda}-k)(f(u_{\lambda})-f(k))\partial _x\varphi dxdt\rightarrow \int _0^\infty\int_{\R} \,{\rm sgn}(U-k)(f(U)-f(k))\partial _x\varphi dxdt .
\]
In view of the fact that $I_1$ and $I_2$ tend to zero as $\lambda\rightarrow\infty$, we obtain that $U$ satisfies condition C1) in Definition \ref{entropysol}.

We now identify the initial data taken by $U$ at $t=0$ by proving condition C2) in Definition \ref{entropysol}. Multiplying \eqref{Plambda} by $\psi\in C_c^2(\rr)$ and integrating, we find that the solution $u_\lambda$ satisfies
\[
\int _{\rr} u_\lambda(t,x)\psi (x)dx-\int _{\rr}u_{\lambda}(0,x)\psi(x)dx=\int _0^t \int _{\rr} f(u_\lambda)\psi_x-\lambda^{q-\alpha}\int _0^t \int _{\rr} u_\lambda (-\Delta)^{\alpha/2}\psi.
\]
This implies that
\begin{align*}
\Big |\int _\rr u_\lambda(t,x)\psi (x)dx&- \int _{\rr}u_0(x)\psi \big(\frac x\lambda\big)\Big |\leq \|\psi_x\|_{L^\infty(\rr)} \int _0^t \int_{\rr }u_\lambda^q dxds + t \lambda^{q-\alpha} M\|\psi\|_{H^\alpha(\rr)} \\
&\leq C(M) t^{1/q} \|\psi_x\|_{L^\infty(\rr)}+ t \lambda^{q-\alpha} M\|\psi\|_{H^\alpha(\rr)} .
\end{align*}
Passing to the limit $\lambda\rightarrow\infty$ and using that $q<\alpha$ and $\alpha<2$, we get that for any $\psi\in C_c^2(\rr)$ we have
\begin{equation}\label{initial.data.h2}
\Big|\int _\rr U(t,x)\psi (x)dx-M\psi(0)\Big |\leq C(M)\, t^{1/q}\|\psi\|_{H^2(\rr)} .
\end{equation}
By density this estimate also holds for any $\psi\in H^2(\rr)$. We now claim that for any $\psi \in BC(\rr)$ the following holds:
\begin{equation} \label{initial.data}
\lim _{t\rightarrow 0} \int _\rr U(t,x)\psi (x)dx=M\psi(0) .
\end{equation}
This shows that $U$ is the unique entropy solution of system \eqref{limit.problem}; since \eqref{limit.problem} has a unique solution, $U_M$, the whole sequence $(u_\lambda)_{\lambda>0}$, and not only a subsequence, converges to $U=U_M$.

We now prove that an approximation argument and the tail control of $u_\lambda$ (and hence of $U$) give \eqref{initial.data} for any $\psi\in BC(\rr)$. Although this procedure is standard, we include it here for completeness. Let us choose a sequence of mollifiers $\{\rho_n\}_{n\geq 1}$ as in \cite[Ch.~4.4, p.~108]{MR2759829} and set $\psi_n=\rho_n\ast \psi$. It follows that $\|\psi_n\|_{L^\infty(\rr)}\leq \|\psi\|_{L^\infty(\rr)}$ and $\psi_n\rightarrow\psi$ uniformly on compact sets of $\rr$ (cf. \cite[Prop.~4.2.1, Ch.~4, p.~108]{MR2759829}). Moreover, $\|\psi_n\|_{H^2(\rr)}\leq C(n,\rho)\|\psi\|_{L^\infty(\rr)}$. Applying \eqref{initial.data.h2} to $\psi_n\in H^2(\rr)$ we obtain
\[
\Big|\int _\rr U(t,x)\psi_n (x)dx-M\psi_n(0)\Big |\leq C(M)\, t^{1/q}\|\psi_n\|_{H^2(\rr)} .
\]
We write
\begin{align*}
\int _\rr U(t,x)\psi (x)dx-M\psi(0)=&\int _{|x|>2R} U(t,x)(\psi(x)-\psi_n (x))dx\\
& +\int _{|x|<2R}U(t,x)(\psi(x)-\psi_n(x))dx+M(\psi(0)-\psi_n(0))\\
&+ \int _\rr U(t,x)\psi_n (x)dx - M\psi_n(0)\\
=&\;I+II+III,
\end{align*}
where $II$ denotes the sum of the two terms on the second line. The uniform tail control in \eqref{tail} and the fact that for any $t>0$, $u_\lambda(t)\rightarrow U(t)$ in $L^1(\rr)$ give us, letting $\lambda\rightarrow\infty$, that $U$ satisfies an analogue of \eqref{tail}:
\[
\int_{|x|>2 R} U(t,x) dx \leq \int_{|x|>R} u_0(x) dx + C( M)\frac{t^{1/q}}{R},\quad \forall t>0.
\]
Hence
\begin{align*}
\Big| \int _{|x|>2R} U(t,x)(\psi(x)-\psi_n (x))dx\Big|&\leq 2\|\psi\|_{L^\infty(\rr)}\int _{|x|>2R}U(t,x)dx\\
&\leq 2\|\psi\|_{L^\infty(\rr)}\Big(\int_{|x|>R} u_0(x) dx + C( M)\frac{t^{1/q}}{R}\Big)< \epsilon,
\end{align*}
provided that $0<t<1$ and $R>R(\epsilon)$. Let us fix such an $R$. We now analyze the second term $II$. We have
\begin{align*}
\Big|\int _{|x|<2R}U(t,x)(\psi(x)-\psi_n(x))dx\Big|&\leq \|\psi-\psi_n\|_{L^\infty(|x|<2R)}\int _{\rr}U(t,x)dx\\
&=M \|\psi-\psi_n\|_{L^\infty(|x|<2R)}.
\end{align*}
Thus
\[
|II|\leq 2M \|\psi-\psi_n\|_{L^\infty(|x|<2R)}\leq \epsilon
\]
provided that $n$ is large enough. We now apply estimate \eqref{initial.data.h2} to $\psi_n$ to obtain
\[
| III |\leq C(M)\, t^{1/q}\|\psi_n\|_{H^2(\rr)}<\epsilon,
\]
provided $t$ is small enough. Hence $|I+II+III|\leq 3\epsilon$ for $t$ small enough, which finishes the proof of \eqref{initial.data}.
\medskip

\noindent \textbf{Step IV. Conclusion.} When $p=1$ we have proved that for any $t>0$, $u_\lambda(t)\rightarrow U_M(t)$ in $L^1(\rr)$. For $p>1$ we use interpolation, the fact that $(u_\lambda(t))_{\lambda>0}$ is uniformly bounded in $L^{2p}(\rr)$ and that $U(t)\in L^{2p}(\rr)$. Indeed, we have
\[
\|u_\lambda(t)-U_M(t)\|_{L^p(\rr)}\leq \|u_\lambda(t)-U_M(t)\|_{L^1(\rr)}^{1/(2p-1)} (\|u_\lambda(t)\|_{L^{2p}(\rr)} +\|U_M(t)\|_{L^{2p}(\rr)})^{2(p-1)/(2p-1)} ,
\]
since $\frac{1}{p}=\frac{1-\theta}{1}+\frac{\theta}{2p}$ with $\theta=\frac{2(p-1)}{2p-1}.$ This proves the result for any $1\leq p<\infty$ and the proof is finished.
\end{proof}
\section{Appendix}
We now give the proof of Lemma \ref{decay.nucleu}. We mention that these estimates were obtained in \cite{VazClassicalSolJEMS} for dimensions $N\ge 2$ and in the particular case $s=\alpha$, using some technical results of \cite{Pruitt}. We provide here the proof for all $s\in (0,2)$ and $\alpha\in (0,2)$ in the one-dimensional case. This requires a more careful proof since the results of \cite{Pruitt} apply only to Bessel functions of nonnegative index. Using the homogeneity of the Fourier transform of $K_t^\alpha$, the proof is easily reduced to the case $t=1$. To simplify the presentation we will denote by $K^\alpha$ the kernel $K_t^\alpha$ at time $t=1$.

In the first case we know (see \cite{BlumenthalGetoor}) that $K^\alpha$ satisfies
\[
|K^\alpha(x)|\lesssim \frac{1}{|x|^{1+\alpha}}, \quad |x|\gg 1.
\]
The estimates on the $L^p(\rr)$ norm of $K^\alpha$ immediately follow.

We now want to estimate $(-\Delta)^\frac{s}{2} K^{\alpha}$. Using the Fourier transform we have
\[
(-\Delta)^\frac{s}{2} K^\alpha(x) =\frac 1{2\pi}\int_{-\infty}^{+\infty} e^{i x \xi}e^{-|\xi|^\alpha} |\xi|^s d\xi =\frac 1{\pi}\int_{0}^{+\infty} \cos(x \xi) e^{-|\xi|^\alpha} \xi^s d\xi
\]
and
\[
(-\Delta)^\frac{s}{2} \partial_x K^\alpha(x) =\frac 1{2\pi}\int_{-\infty}^{+\infty} e^{i x \xi}e^{-|\xi|^\alpha} |\xi|^s (i\xi)d\xi =-\frac 1{\pi}\int_{0}^{+\infty} \sin(x \xi) e^{-|\xi|^\alpha} \xi^{s+1} d\xi.
\]
We consider the case when $x$ is positive (the case $x<0$ is analogous) and then
\[
(-\Delta)^\frac{s}{2} K^\alpha(x) =\sqrt{\frac x{2\pi}} \int_{0}^{+\infty} e^{-|\xi|^\alpha} \xi^{s+{1/2}}J_{-1/2} (x\xi)d\xi
\]
and
\[
(-\Delta)^\frac{s}{2} \partial_x K^\alpha(x) =-\sqrt{\frac x{2\pi}} \int_{0}^{+\infty} e^{-|\xi|^\alpha} \xi^{s+{3/2}}J_{1/2} (x\xi)d\xi,
\]
where $J_\nu$ is the Bessel function of the first kind with index $\nu$. We now use Lemma 1 in \cite{Pruitt}, but we need to involve Bessel functions with nonnegative index $J_\nu$, $\nu \geq 0$. In the second case, applying this lemma we obtain that for $|x|$ large the following holds:
\[
| (-\Delta)^\frac{s}{2} \partial_x K^\alpha(x)|\lesssim \frac 1{|x|^{s+2}}.
\]
This shows that $(-\Delta)^\frac{s}{2} \partial_x K^\alpha$ belongs to $L^p(\rr)$ for any $1\leq p\leq \infty$. In the first case we perform an integration by parts to obtain that
\[
(-\Delta)^\frac{s}{2} K^\alpha(x)=-\frac{1}{\sqrt{2\pi x}} \int _0^\infty e^{-|\xi|^\alpha}J_{1/2}(x\xi) \left( s \xi^{s-1/2} - \alpha \xi^{s+\alpha-1/2} \right) d\xi.
\] Applying again Lemma 1 in \cite{Pruitt} we obtain that for $|x|$ large \[ | (-\Delta)^\frac{s}{2} K^\alpha(x)|\lesssim \frac 1{|x|^{s+1}} \] and then $(-\Delta)^\frac{s}{2} K^\alpha$ belongs to $L^p(\rr)$ for any $1\leq p\leq \infty$. \bibliographystyle{siam}
{ "timestamp": "2018-02-14T02:06:56", "yymm": "1703", "arxiv_id": "1703.02908", "language": "en", "url": "https://arxiv.org/abs/1703.02908", "abstract": "We consider a convection-diffusion model with linear fractional diffusion in the sub-critical range. We prove that the large time asymptotic behavior of the solution is given by the unique entropy solution of the convective part of the equation. The proof is based on suitable a-priori estimates, among which proving an Oleinik type inequality plays a key role.", "subjects": "Analysis of PDEs (math.AP)", "title": "Asymptotic behaviour of solutions to fractional diffusion-convection equations", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692366242306, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7079584992852146 }
https://arxiv.org/abs/1302.1929
ZL-amenability constants of finite groups with two character degrees
We calculate the exact amenability constant of the centre of $\ell^1(G)$ when $G$ is one of the following classes of finite group: dihedral; extraspecial; or Frobenius with abelian complement and kernel. This is done using a formula which applies to all finite groups with two character degrees. In passing, we answer in the negative a question raised in work of the third author with Azimifard and Spronk (J. Funct. Anal. 2009).
\subsection{Motivation: lower bounds on ZL-amenability constants} As mentioned in the introduction, it is observed in \cite[\S1.5]{AzSaSp} that \begin{equation}\label{eq:AMZL-gap} \inf \{ \AMZL{G} \colon \text{$G$ finite and non-abelian} \} > 1. \end{equation} (See the proof of Theorem 1.10, {\it op.~cit.}\/) The proof relies on the following hard result of D. Rider. \begin{thm}[Rider; {see \cite[Lemma 5.2]{ri}}]\label{t:rider} Let $G$ be a compact group, $\lambda$ a Haar measure on it, and $\psi$ a finite linear combination of irreducible group characters on $G$. Suppose that $\psi*\psi=\psi$ as elements of $L^1(G,\lambda)$ and that $\int_G \abs{\psi(x)}\,d\lambda(x) > 1$. Then $\int_G \abs{\psi(x)}\,d\lambda(x) \geq 301/300$. \end{thm} \begin{rem} Rider's result is stated for the case where $\lambda(G)=1$. However, if we let $\mu = \lambda(G)^{-1}\lambda$, then $\psi*\psi=\psi$ in $L^1(G,\lambda)$ if and only if $\lambda(G)\psi * \lambda(G)\psi = \lambda(G)\psi$ in $L^1(G,\mu)$. So by rescaling, our formulation reduces to the one given by Rider. \end{rem} \begin{qu} Can we get an improved bound\footnotemark\label{note-in-proof} on the infimum on the left hand side of \eqref{eq:AMZL-gap}, beyond the lower bound $301/300$ provided by Rider's theorem? \end{qu} \footnotetext{{\bf Note added in proof:} after this work was accepted for publication, the second author (YC) was able to show that $\AMZL{G}\geq 7/4$ for every finite non-abelian group. Details will appear in a forthcoming paper.} \begin{rem} To put this question in context, we note that the smallest explicitly known value of $\AMZL{G}$ for a non-abelian group $G$ is $7/4$ (see Remark~\ref{r:lowest-AMZL-so-far} in the next section). Rider remarks that his estimates are not intended to be best possible, but it seems unlikely that his techniques can get near $11/10$, let alone $7/4$, without substantial new input. Of course, his results concern much more general central idempotents, whereas our concern is with the very particular idempotent described in~\eqref{eq:diagonal}. \end{rem} It seems difficult to attack this problem directly using \eqref{eq:AMZL-formula}. One might hope that for groups with two character degrees, one can use \eqref{eq:new-AMZL-formula} to obtain a lower bound on the ZL-amenability constants which is strictly greater than $1$. While we were unable to do this in full generality, we can do better for particular classes of groups; these calculations are the topic of the next section. Another question raised in \cite[\S1.5]{AzSaSp} is the following: \begin{quote} {\it given a finite non-abelian group $G$, can we get a lower bound on $\AMZL{G}$ in terms of $\max_{\pi\in\widehat{G}} d_\pi$}\/? \end{quote} If this were the case, one could obtain further results on (non-)amen\-ability of the centre of $L^1(G)$ for certain profinite groups~$G$. Unfortunately, as we shall see below (Remark~\ref{r:pain-in-ASS}), there exists a sequence of finite groups $(G_i)$ such that $\sup_i \max_{\pi\in\hat{G_i}} d_\pi = + \infty$ yet $\sup_i \AMZL{G_i} = 5$. Therefore this question has a negative answer. \end{section} \begin{section}{ZL-amenability constants of particular groups} \label{s:AMZL_examples} Using Theorem \ref{t:amzl-2cd}, we can find the ZL-amenability constants for several well-known families of finite groups. \begin{subsection}{Dihedral groups} Let us fix some notation: $D_n$ denotes the \dt{dihedral group of order $2n$}, whose standard presentation~is \[ D_n=\langle r,t \mid r^n=t^2=1, tr=r^{-1}t\rangle. 
\]
The character table of $D_n$ is well known and can be found in standard sources: for instance, see \cite[pp.~182--183]{JamLie}. We note, nevertheless, that we only need to know the number of linear characters and the cardinalities of the conjugacy classes, both of which can be determined by straightforward {\it ad hoc} arguments that we leave to the reader.

As usual, we must treat the cases of odd and even $n$ separately.
\paragraph{The case of even $n$.}
Suppose $n=2\nu$ for some integer $\nu\geq 2$. Then $D_n$ has four linear characters (so that its derived subgroup has order $\nu$), and all other characters have degree~$2$. Also, $D_n$ has two conjugacy classes of size $1$ (namely $\{1\}$ and $\{r^\nu\}$), two of size $\nu$ (namely $[t]$ and $[rt]$), and $\nu-1$ of size~$2$ (the remaining rotations, paired~up). Thus
\[
\sum_{C\in\operatorname{Conj}(D_{2\nu})} |C|^2 = 2\cdot 1^2 + 2\left(\frac{n}{2}\right)^2 + \left(\frac{n}{2}-1\right)\cdot 2^2 = \frac{1}{2}(n^2 + 4n -4) \,;
\]
and so, by our general formula \eqref{eq:new-AMZL-formula},
\begin{equation} \label{eq:AMZL-of-dihedral-even}
\begin{aligned}
\AMZL{D_{2\nu}} = 1 + 2(2^2-1) \left( 1- \frac{n^2+4n-4}{2n^2}\right) & = 1 + 6 \frac{n^2-4n+4}{2n^2} \\
& = 1 + 3\left(1 -\frac{2}{n}\right)^2\,.
\end{aligned}
\end{equation}
\paragraph{The case of odd $n$.}
Suppose $n=2\nu+1$ where $\nu$ is an integer $\geq 1$. Then $D_n$ has two linear characters (so that its derived subgroup has order $n$), and all other characters have degree~$2$. Also, its conjugacy classes are as follows: the trivial conjugacy class of the identity; the conjugacy class consisting of all involutions, which has size $n$; and $\nu$ conjugacy classes of size~$2$ (each consisting of a rotation and its inverse). Thus
\[
\sum_{C\in\operatorname{Conj}(D_{2\nu+1})} |C|^2 = 1^2 + n^2 + \frac{n-1}{2}\cdot 2^2 = n^2 +2n -1 \,;
\]
and so, by our general formula \eqref{eq:new-AMZL-formula},
\begin{equation} \label{eq:AMZL-of-dihedral-odd}
\begin{aligned}
\AMZL{D_{2\nu+1}} = 1 + 2(2^2-1)\left( 1- \frac{n^2+2n-1}{2n^2}\right) & = 1 + 6 \frac{n^2-2n+1}{2n^2} \\
& = 1 + 3\left(1-\frac{1}{n}\right)^2 \,.
\end{aligned}
\end{equation}
In fact, when $n$ is odd, $D_n$ fits into a family of more general examples, for which one can simplify \eqref{eq:new-AMZL-formula} even further. These groups are the topic of the next subsection.
\end{subsection}
\begin{subsection}{Frobenius groups with abelian complement and kernel}
\label{ss:frobenius_ACAK}
Frobenius groups admit various characterizations or equivalent definitions. The following one is convenient for our purposes.
\begin{dfn}[cf.~{\cite[Theorem~8.2]{Passman_PGbook}}]
\label{d:frobenius-group}
A finite group $G$ is a \dt{Frobenius group} if it has a proper, non-trivial subgroup $H$ which is \dt{malnormal}, i.e.~which satisfies $H \cap gHg^{-1} = \{e\}$ for all $g \in G \setminus H$. We say that $H$ is a \dt{Frobenius complement} in~$G$.
\end{dfn}
Given a Frobenius complement $H < G$, let $K \defeq \left( G \setminus \bigcup_{g\in G} gHg^{-1} \right) \cup \{e\}$. Clearly $K$ is a conjugation-invariant subset of $G$: by a deep result of Frobenius, $K$ is actually a subgroup of $G$, called the \dt{Frobenius kernel} of $G$, and $G$ is the semidirect product $K \rtimes H$. (See Passman's book, in particular the proof of \cite[Theorem 17.1]{Passman_PGbook}, for further details.)
\begin{rem}
{\it A~priori}, $K$ depends on the particular choice of Frobenius complement~$H$.
However, it turns out that if $G$ has a Frobenius complement $H$ and $K$ is the corresponding Frobenius kernel, then $K$ is equal to the Fitting subgroup of~$G$; moreover, all proper, non-trivial, malnormal subgroups of $G$ are conjugate in~$G$ (\cite[Cor\-oll\-ary~17.5]{Passman_PGbook}). These highly non-obvious results are sometimes summarized in the slogan ``a finite group can be Frobenius in at most one way''.
\end{rem}
For the sake of brevity, we write ``let $G=K\rtimes H$ be Frobenius'' as an abbreviation for ``let $G$ be a finite Frobenius group, with Frobenius complement $H$ and Frobenius kernel $K$.''
\begin{prop}\label{p:frob_facts}
Let $G=K\rtimes H$ be Frobenius. Suppose $H$ is an abelian group of order~$h$, and $K$ is an abelian group of order~$k$. Then $h$ divides $k-1$. Moreover:
\begin{numlist}
\item\label{li:conjugacy-in-frob} $G$ has trivial centre, $(k-1)/h$ conjugacy classes of size $h$, and $h-1$ conjugacy classes of size~$k$.
\item\label{li:characters-of-frob} $G$ has exactly $h$ linear characters; the remaining characters each have degree~$h$.
\end{numlist}
\end{prop}
The proposition is an assembly of several standard facts about Frobenius groups. However, as it is difficult to locate a reference that states concisely what we need, we give a proof in Appendix~\ref{app:frobenius_facts}.
\begin{thm}\label{t:AMZL-of-Frobenius}
Let $G$ be a Frobenius group whose complement and kernel are both abelian; let $h$ and $k$ be the orders of the complement and kernel, respectively. Then
\begin{equation}\label{eq:AMZL-of-Frobenius}
\AMZL{G} = 1 + 2\cdot \frac{h^2-1}{h} \left(1 - \frac{h-1}{k}\right)\left(1- \frac{1}{k}\right).
\end{equation}
\end{thm}
\begin{proof}
By Proposition~\ref{p:frob_facts},
\[
\sum_{C\in\operatorname{Conj}(G)} |C|^2 = 1 + \frac{k-1}{h}\ h^2 + (h-1)\ k^2 = 1 + h(k-1) + (h-1)k^2 \,;
\]
and substituting the remaining information from Proposition~\ref{p:frob_facts} into the general formula \eqref{eq:new-AMZL-formula} yields
\[
\begin{aligned}
\frac{\AMZL{G} -1}{2} & = (h^2-1) \left( 1- \frac{ 1 + h(k-1) + (h-1)k^2}{hk^2} \right) \\
& = \frac{h^2-1}{h} \cdot \frac{ -1 - hk+h + k^2}{k^2} \\
& = \frac{h^2-1}{h} \left( 1-\frac{h}{k} + \frac{h-1}{k^2}\right)\,;
\end{aligned}
\]
factorizing and rearranging this gives the formula \eqref{eq:AMZL-of-Frobenius}, as required.
\end{proof}
\begin{eg}[Dihedral groups of odd order, revisited]
\label{eg:AMZL-of-dihedral-odd}
Let $n$ be an odd integer with $n\geq 3$. Using the standard presentation of $D_n$ as given earlier, we see that the subgroup generated by the `reflection'~$t$ is malnormal, while the Frobenius kernel turns out to be the subgroup generated by the `rotation'~$r$. Putting $h=2$ and $k=n$ in \eqref{eq:AMZL-of-Frobenius} gives
\[
\AMZL{D_n} = 1 + 3 \left(1- \frac{1}{n}\right)^2 \,,
\]
just as we had before.
\end{eg}
\begin{eg}[Affine groups of finite fields]
\label{eg:AMZL-of-affine}
Let ${\mathbb F}_q$ be a finite field of order $q$, where $q$ is a prime power $\geq 3$. The \dt{affine group of ${\mathbb F}_q$}, which we shall denote by $\Aff({\mathbb F}_q)$, is the set
\[
\left\{ \affine{a}{b} \colon a \in{\mathbb F}_q^\times, b\in {\mathbb F}_q \right\}
\]
equipped with the group structure it inherits from the usual matrix product and inversion. It is a metabelian group; more precisely, it is isomorphic to the semidirect product ${\mathbb F}_q \rtimes {\mathbb F}_q^\times$\/.
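Concretely (a routine check of the matrix product), writing $(b,a)$ for the matrix $\affine{a}{b}$, the group law reads
\[
(b,a)\cdot (b',a') = (b+ab',\, aa'), \qquad a,a'\in{\mathbb F}_q^\times,\ b,b'\in {\mathbb F}_q,
\]
which exhibits the semidirect product structure, with ${\mathbb F}_q^\times$ acting on ${\mathbb F}_q$ by multiplication.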
It is straightforward to check that the subgroup of $\Aff({\mathbb F}_q)$ corresponding to the multiplicative group of ${\mathbb F}_q$ is a proper, non-trivial, malnormal subgroup; the Frobenius kernel turns out to be the normal subgroup of $\Aff({\mathbb F}_q)$ corresponding to the additive group of~${\mathbb F}_q$. Both are abelian, so we can apply Theorem~\ref{t:AMZL-of-Frobenius}, which yields
\[
\begin{aligned}
\AMZL{\Aff({\mathbb F}_q)} & = 1 + 2\cdot\frac{(q-1)^2-1}{q-1}\left(1 - \frac{q-2}{q}\right) \left( 1 -\frac{1}{q}\right) \\
& = 1 + 2\cdot\frac{q^2-2q}{q-1}\cdot\frac{2}{q}\cdot \frac{q-1}{q} \\
& = 1 + 4\cdot \frac{q-2}{q} \\
& = 5 - \frac{8}{q} \,.
\end{aligned}
\]
\end{eg}
\begin{rem}
One can also compute $\AMZL{\Aff({\mathbb F}_q)}$ more directly from the character table of $\Aff({\mathbb F}_q)$, which is simple and well-known, and can be found in standard sources. In fact, the exact computation for these examples, which arose in other work of the present authors related to \cite{Stegmeir}, provided some of the motivation for Theorem~\ref{t:amzl-2cd}.
\end{rem}
\begin{rem}\label{r:pain-in-ASS}
For all odd primes $p$, $2\leq \AMZL{\Aff({\mathbb F}_p)} \leq 5$, while $\Aff({\mathbb F}_p)$ has an irreducible representation of dimension $p-1$. This shows that the amenability constant of $\operatorname{Z\ell}^1(G)$ cannot be bounded from below by an increasing function of $\max\{ d_\chi : \chi \in \Irr(G)\}$. (For context, see the remarks made at the end of Section~\ref{s:AMZL-formulas}.)
\end{rem}
\begin{eg}[$a^2x+b$ groups]
\label{eg:AMZL-of-subgroup}
Let $q$ be an odd prime power $\geq 5$, and let $d=(q-1)/2$. Consider the following subgroup of $\Aff({\mathbb F}_q)$, sometimes referred to as the ``$a^2x+b$ group over~${\mathbb F}_q$''\/:
\[
G_q \defeq \left\{ \left( \begin{matrix} a^2 & b \\ 0 & 1 \end{matrix} \right) \colon a\in {\mathbb F}_q^\times, b\in {\mathbb F}_q \right\}.
\]
Recalling that ${\mathbb F}_q^\times$ is cyclic, pick a generator $z$, and let $H$ be the subgroup of $\Aff({\mathbb F}_q)$ generated by $\left(\begin{matrix} z^2 & 0 \\ 0 & 1 \end{matrix} \right)$. One can check that $H$~is malnormal, and so $G_q$ is Frobenius; the Frobenius kernel $K$ turns out to be the normal subgroup corresponding to the additive group of ${\mathbb F}_q$. So both $K$ and $H$ are abelian; the former has order $q$ while the latter has order $d$, so using \eqref{eq:AMZL-of-Frobenius} we get
\[
\begin{aligned}
\AMZL{G_q} & = 1 + 2\cdot\frac{d^2-1}{d}\left(1- \frac{d-1}{q}\right)\left(1-\frac{1}{q}\right) \\
& = 1 + 2\cdot\frac{q^2-2q-3}{2(q-1)}\ \frac{q+3}{2q} \ \frac{q-1}{q}
\end{aligned}
\]
which simplifies to
\begin{equation}\label{eq:AMZL-of-subgroup}
\AMZL{G_q} = 1 + \frac{q+1}{2}\ \left( 1 - \frac{9}{q^2} \right)
\end{equation}
\end{eg}
As a consistency check: when $q=5$, Equation~\eqref{eq:AMZL-of-subgroup} gives $\AMZL{G_5} = 73/25$. On the other hand, it is straightforward to check that $G_5$ is isomorphic to the dihedral group of order $10$, and using our earlier formulas we have $\AMZL{D_5} = 73/25$.
\begin{rem}
Even though $G_q$ is an index-$2$ subgroup of $\Aff({\mathbb F}_q)$, it may have a larger ZL-amenability constant.
Indeed, it is clear from the formulas obtained in Examples \ref{eg:AMZL-of-affine} and \ref{eg:AMZL-of-subgroup} that
\[
\lim_{q\to\infty} \AMZL{\Aff({\mathbb F}_q)} = 5 \quad\text{while}\quad \lim_{q\to\infty} q^{-1}\AMZL{G_q} = \frac{1}{2}\,.
\]
\end{rem}
Example~\ref{eg:AMZL-of-subgroup} shows that within the class of groups with two character degrees, we can obtain arbitrarily large ZL-amenability constants. It is natural to ask how small such constants can be. For Frobenius groups with abelian complement and kernel, we can obtain a complete answer.
\begin{thm}\label{t:lower-bound-AMZL-Frob-AKAC}
Let $G$ be a Frobenius group with abelian complement and kernel. Then $\AMZL{G} \geq 7/3$, with equality if and only if $G$ is the dihedral group of order $6$.
\end{thm}
\begin{proof}
Let $h$ be the order of the Frobenius complement of $G$, and $k$ the order of its Frobenius kernel. Note that $G$ is isomorphic to $D_3$ if and only if $h=2$ and $k=3$. To reduce notational clutter, let $F(k,h)$ denote $\frac{1}{2}(\AMZL{G}-1)$. By Theorem~\ref{t:AMZL-of-Frobenius},
\begin{equation}\label{eq:DAFFY}
F(k,h) = \frac{h^2-1}{h} \left(1 - \frac{h-1}{k}\right)\left(1- \frac{1}{k}\right) \tag{$\clubsuit$}
\end{equation}
and it suffices to prove that $F(k,h)\geq 2/3$, with equality if and only if $(h,k)=(2,3)$ (subject to $h$ and $k$ arising from a Frobenius group of the specified form). Note that for fixed $h$, $F(\cdot, h)$ is a strictly increasing function of $k$. As observed above, $h$ divides $k-1$, so in particular $k\geq h+1$; hence, $F(k,h) \geq F(h+1,h)$, with equality if and only if $k=h+1$. Direct calculation gives
\[
\begin{aligned}
F(h+1,h) = \frac{h^2-1}{h} \cdot \frac{2}{h+1} \cdot \frac{h}{h+1} = \frac{2(h-1)}{h+1} = 2\left(1-\frac{2}{h+1}\right),
\end{aligned}
\]
and so $F(h+1,h) \geq 2/3$, with equality if and only if $h=2$. This completes the proof.
\end{proof}
If we consider more general groups with two character degrees, then there is an infinite family of such groups whose ZL-amenability constants are less than~$2$. This will be seen in the next and final subsection of the paper.
\end{subsection}
\begin{subsection}{Extraspecial $p$-groups.}
\begin{dfn} \label{d:extraspecial}
Fix a prime $p$. A finite group $G$ is \dt{$p$-extraspecial} if it has order $p^{2n+1}$ for some integer $n\geq 1$ and has the following properties:
\begin{numlist}
\item The centre $Z(G)$ and the derived subgroup $\der{G}$ both have order $p$.
\item The quotient $G/Z(G)$ is abelian, and each non-identity element in the quotient has order~$p$.
\end{numlist}
\end{dfn}
Such groups do exist (for instance, the dihedral group of order $8$ is $2$-extraspecial), and their character tables and conjugacy classes turn out to be uniquely determined by these conditions. In particular, each non-linear irreducible group character of $G$ is supported on $Z(G)$ and has degree~$p^n$. This follows from, e.g. \cite[Chapter 5, Theorem 5.5]{Gor_FGbook_ed2}, as pointed out to the second author by D.~F. Holt. Alternatively, a short argument using some basic character theory is described by I.~M. Isaacs in the appendix to \cite{Dia_threads}.
\begin{eg}\label{eg:AMZL-of-extraspecial}
Let $G$ be an extraspecial group of order $p^{2n+1}$, where $p$ is a prime. We know every non-linear character has degree $p^n$, and we know $\der{G}$ has order~$p$. To apply Theorem~\ref{t:amzl-2cd}, we also need to know the sizes of the conjugacy classes.
By some elementary group theory (see e.g.~the appendix of \cite{Dia_threads}), the conjugacy classes of $G$ are either singletons containing central elements or non-trivial cosets of the derived subgroup. Thus there are $p$ conjugacy classes of size $1$ and $p^{2n}-1$ conjugacy classes of size~$p$, and no others. Therefore
\[
\sum_{C\in\operatorname{Conj}(G)} |C|^2 = p\cdot 1^2 + (p^{2n}-1)\cdot p^2 = p^{2n+2}-p^2+p
\]
and so Theorem~\ref{t:amzl-2cd} gives
\begin{equation} \label{eq:AMZL-of-extraspecial}
\begin{aligned}
\AMZL{G} & = 1 + 2(p^{2n}-1)\left( 1 - \frac{p^{2n+2}-p^2+p}{ p^{2n+2} } \right) \\
& = 1 + 2\left(1 - \frac{1}{p^{2n}} \right)\ \left(1 - \frac{1}{p} \right) \\
\end{aligned}
\end{equation}
\end{eg}
\begin{rem}
If $G$ is an extraspecial group of order $2^{2n+1}$, then $\AMZL{G} = 2 - 2^{-2n}$\/. Thus we have an infinite family of finite groups $G$ for which $1 < \AMZL{G} < 2$.
\end{rem}
\begin{rem}\label{r:lowest-AMZL-so-far}
Within the class of extraspecial $p$-groups, the ZL-amenability constant is minimized when we take $p=2$ and $n=1$. This example is nothing but the dihedral group of order $8$, whose amenability constant is $7/4$. This is the smallest ZL-amenability constant we have found for any non-abelian group.
\end{rem}
\begin{qu}
Does there exist a non-abelian, finite group whose ZL-amenability constant is $<7/4$? Can such a group have two character degrees?
\end{qu}
\end{subsection}
\subsection{Summary information}
We summarize the findings of this section in Figure~\ref{fig:table}.
\begin{figure}[hpt]
\begin{tabular}{l|c|c|c|c|c|r}
Ref. & $G$ & $|G|$ & $|{\mathfrak L}|$ & c.d. & $\AMZL{G}-1$ & min. \\ \hline
Ex.~\ref{eg:AMZL-of-affine} & $\Aff({\mathbb F}_q)$, & $q(q-1)$ & $q-1$ & $q-1$ & $4(1-2q^{-1})$ & $4/3$ \\
 & $q\geq 3$ & & & & \\
Ex.~\ref{eg:AMZL-of-subgroup} & $a^2x+b$ over ${\mathbb F}_q$, & ${\displaystyle\frac{q(q-1)}{2}}$ & $\frac{1}{2}(q-1)$ & $q-1$ & $\frac{1}{2}(q+1) (1 - 9q^{-2})$ & $48/25$ \\
 & $q\geq 5$ & & & & \\
Eq.~\eqref{eq:AMZL-of-dihedral-odd} & $D_n$, & $2n$ & $2$ & $2$ & $3(1-n^{-1})^2$ & $4/3$ \\
 & $n$ odd $\geq 3$ & & & & \\
Eq.~\eqref{eq:AMZL-of-dihedral-even} & $D_n$, & $2n$ & $4$ & $2$ & $3(1-2n^{-1})^2$ & $3/4$ \\
 & $n$ even $\geq 4$ & & & & \\
Ex.~\ref{eg:AMZL-of-extraspecial} & $p$-extraspecial & $p^{2n+1}$ & $p^{2n}$ & $p^n$ & $2(1- p^{-2n}) (1-p^{-1})$ & $3/4$
\end{tabular}
\caption{Summary table for some groups with two character degrees}
\label{fig:table}
\end{figure}
\begin{itemize}
\item ``Ref.'' gives the number of the relevant theorem, example or equation.
\item ${\mathfrak L}$ is the set of linear characters.
\item ``c.d.'' stands for the character degree of the non-linear characters.
\item ``min.'' denotes the minimum value of $\AMZL{G}-1$ within the specified family of groups.
\end{itemize}
\end{section}
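As a computational aside, the closed forms collected in Figure~\ref{fig:table} can be double-checked mechanically from the class data. The short script below is only a sketch: it assumes that the two-character-degree formula \eqref{eq:new-AMZL-formula} takes the shape
\[
\AMZL{G} = 1+2(d^2-1)\Big(1-|{\mathfrak L}|\sum_{C\in\operatorname{Conj}(G)} |C|^2\big/|G|^2\Big),
\]
where $d$ is the common degree of the non-linear characters; this is the shape used in all the worked examples above, but readers should consult the statement of Theorem~\ref{t:amzl-2cd} itself. All function names are ours.
\begin{verbatim}
from fractions import Fraction as F

def amzl(d, nlin, class_sizes):
    # AMZL(G) = 1 + 2(d^2-1)(1 - |L| * sum |C|^2 / |G|^2) for a group with
    # two character degrees 1 and d, nlin linear characters, and the given
    # conjugacy class sizes (exact rational arithmetic throughout).
    n = sum(class_sizes)                      # |G|
    s = sum(c * c for c in class_sizes)       # sum of |C|^2
    return 1 + 2 * (d * d - 1) * (1 - F(nlin * s, n * n))

def dihedral_odd(n):
    # D_n, n odd: {1}, the class of n involutions, (n-1)/2 rotation pairs.
    return amzl(2, 2, [1, n] + [2] * ((n - 1) // 2))

def frobenius(h, k):
    # Frobenius group with abelian complement (order h) and kernel (order k).
    return amzl(h, h, [1] + [h] * ((k - 1) // h) + [k] * (h - 1))

assert dihedral_odd(5) == frobenius(2, 5) == F(73, 25)  # G_5 = D_5
assert frobenius(2, 3) == F(7, 3)                       # the minimum, D_3
assert frobenius(4, 5) == 5 - F(8, 5)                   # Aff(F_5): 5 - 8/q
p, m = 2, 1   # extraspecial of order p^(2m+1) = 8, i.e. D_4
assert amzl(p**m, p**(2*m), [1]*p + [p]*(p**(2*m) - 1)) == F(7, 4)
print("all checks passed")
\end{verbatim}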
{ "timestamp": "2013-08-16T02:06:30", "yymm": "1302", "arxiv_id": "1302.1929", "language": "en", "url": "https://arxiv.org/abs/1302.1929", "abstract": "We calculate the exact amenability constant of the centre of $\\ell^1(G)$ when $G$ is one of the following classes of finite group: dihedral; extraspecial; or Frobenius with abelian complement and kernel. This is done using a formula which applies to all finite groups with two character degrees. In passing, we answer in the negative a question raised in work of the third author with Azimifard and Spronk (J. Funct. Anal. 2009).", "subjects": "Functional Analysis (math.FA); Group Theory (math.GR)", "title": "ZL-amenability constants of finite groups with two character degrees", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692359451418, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7079584987929634 }
https://arxiv.org/abs/1908.08509
Navigation of a Quadratic Potential with Ellipsoidal Obstacles
Given a convex quadratic potential whose minimum is the agent's goal, and a Euclidean space populated with ellipsoidal obstacles, one can construct a Rimon-Koditschek (RK) artificial potential to navigate. Its negative gradient attracts the agent toward the goal and repels the agent away from the boundary of the obstacles. This is a popular approach to navigation problems since it can be implemented with local spatial information that is acquired during operation time. However, navigation is only successful in situations where the obstacles are not too eccentric (flat). This paper proposes a modification to gradient dynamics that allows successful navigation of an environment with a quadratic cost and ellipsoidal obstacles regardless of their eccentricity. This is accomplished by altering gradient dynamics with a Hessian correction that is intended to imitate worlds with spherical obstacles, in which RK potentials are known to work. The resulting dynamics simplify thanks to the quadratic form of the obstacles. Convergence to the goal and obstacle avoidance is established from almost every initial position (up to a set of measure zero) in the free space, with mild conditions on the location of the target. Results are corroborated empirically with numerical simulations.
\section{Conclusions} \label{conclusion} We considered the problem of a point agent navigating to a target with a finite number of ellipsoidal obstacles in the way. In particular, we proposed dynamics that do not use second order information in order to remove the geometric constraint that the navigation function approach places on the eccentricity of the obstacles. We guaranteed convergence to the target from any initial position by showing that the agent always moves toward the target except when it is close to an obstacle, that it visits only a finite number of obstacles, and that it almost surely escapes each of them. For future work, this approach may be extended to general convex obstacles as well as to non-convex star-shaped obstacles. \subsection{Defining Graphs on Configurations} \label{sec:graphs} In this section, we show that the agent will only visit a finite number of obstacles. We do so by defining a graph on the configuration in which nodes represent obstacles and edges represent the ability to move from being $\varepsilon$-close to one obstacle to being $\varepsilon$-close to another. This graph encodes an ordering on the obstacles that the agent can visit. Before defining the graph on the configuration we require the following lemma, stating that the space can be divided by hyperplanes which the agent can only traverse in one direction. \begin{lemma} \label{lem:boundary} Let ${f_0}(x)$ be a quadratic potential as in \eqref{eqn_goal_def} and ${\mathcal H}$ be a hyperplane that does not intersect any obstacle and separates the space ${\mathcal X}$ into two halfspaces ${\mathcal H}^1$ and ${\mathcal H}^2$. Without loss of generality assume $x^* \in {\mathcal H}^1$. Let $n\in \reals^n$ be a unit vector normal to the hyperplane pointing toward ${\mathcal H}^2$. Then there exists $K_{1,2}$ such that for all $k > K_{1,2}$ and all $x \in {\mathcal H}$, $n^\top \dot x < 0$. \end{lemma} \begin{proof} See Appendix-\ref{proof_lem3} \end{proof} The previous lemma is interpreted as follows. Once the agent transitions from ${\mathcal H}^2$ to ${\mathcal H}^1$, it will never be able to go back into ${\mathcal H}^2$. As such, we construct a graph on the configuration which describes the order in which an agent can visit the obstacles; in particular, it shows that once the agent visits an obstacle it never returns to the same obstacle. First, to formalize notation, consider a hyperplane ${\mathcal H}_{i,j}$ between obstacles ${\mathcal O}_i$ and ${\mathcal O}_j$. Because the obstacles are strictly convex, we know this strictly separating hyperplane exists \cite{boyd2004convex}. Let ${\mathcal H}^i$ denote the halfspace containing ${\mathcal O}_i$ and ${\mathcal H}^j$ denote the halfspace containing ${\mathcal O}_j$. The agent's target $x^*$ is either in ${\mathcal H}^i$ or in ${\mathcal H}^j$. Without loss of generality, let $x^* \in {\mathcal H}^j$. Now we can define a graph $G = (\ccalN,\ccalE)$, where the obstacles are nodes $\ccalN = \{1, \dots, m\}$, and the edges in $\ccalE$ represent the ability to move from one obstacle to the other.
To place an edge between any pair of obstacles ${\mathcal O}_i, {\mathcal O}_j$, we check two conditions independently of each other: \begin{condition} \label{cond_1} There is a hyperplane ${\mathcal H}_{i,j}$ such that $x^* \in {\mathcal H}^j.$ \end{condition} \begin{condition} \label{cond_2} There is a hyperplane ${\mathcal H}_{j,i}$ such that $x^* \in {\mathcal H}^i.$\end{condition} Because there is always a hyperplane between two disjoint convex sets, at least one condition is satisfied for every pair of obstacles. We then define the graph by \begin{equation} \label{equ:graph_def} (i,j) \in \ccalE ~\textrm{if Condition \ref{cond_1} holds and Condition \ref{cond_2} does not}. \end{equation} A computational sketch of this edge rule is given at the end of this subsection. We illustrate the rule with an example. Consider the environment in Figure \ref{EdgeEx} and let us decide whether or not to place an edge between nodes two and three of the corresponding graph. We first consider the hyperplane ${\mathcal H}_{2,3}$ with $x^*\in {\mathcal H}^{3}$; in the figure, this is the dashed hyperplane. Because we are also able to draw a hyperplane ${\mathcal H}_{3,2}$ between the obstacles such that $x^* \in {\mathcal H}^2$, we do not put an edge between nodes two and three of the graph. By Lemma \ref{lem:boundary}, for large enough $k$ the agent can cross each of these hyperplanes only toward the side containing the target; hence, after hitting one of ${\mathcal O}_2$ or ${\mathcal O}_3$ it can never hit the other, where by hit we mean be $\varepsilon$-close. The graph corresponding to the configuration is shown in Figure \ref{EdgeEx_graph}. We interpret the graph as follows. Suppose the point agent is placed randomly in the free space. Then, following \eqref{our_dynamics}, it may eventually hit one of the obstacles represented by the nodes of the graph. If it hits a node at the end of the directed acyclic graph (i.e., node two, three, four, or eight), it will not hit any other obstacle before reaching its target. If the agent hits any other node, then it can either go directly to the target or visit an obstacle indicated by an outgoing edge. For example, if the agent is near obstacle one, it will either go to the target or hit obstacle eight; it is impossible for the agent to visit any other obstacle. \begin{figure}[!t] \centering \includegraphics[trim = {2.5cm, 0, 2.5cm, 0},width=.7\linewidth]{lemma2ex1c.png} \caption{We consider placing an edge between obstacles ${\mathcal O}_2$ and ${\mathcal O}_3$. The dashed line shows that there exists a hyperplane ${\mathcal H}_{2,3}$ with $x^*\in {\mathcal H}^3$ and the solid line shows there exists a hyperplane ${\mathcal H}_{3,2}$ with $x^*\in {\mathcal H}^2$. As such, there will be no edge placed between nodes 2 and 3 of the corresponding graph.} \label{EdgeEx} \end{figure} \begin{figure}[t!] \centering \includegraphics[trim = {2cm, 0, 2cm, 0},width=.4\linewidth]{lemma2ex2.pdf} \caption{The graph $G$ corresponding to the configuration in Figure \ref{EdgeEx}.} \label{EdgeEx_graph} \end{figure} When the graph defined on the configuration does not have any cycles, the agent will only visit a finite number of obstacles, which completes the argument. When the configuration has a cycle, we need an additional argument to show that the agent still only visits a finite number of obstacles. For this, we recall Lemma \ref{lem:break_cycles}, which confirms that when the agent is close to the border of an obstacle, it always moves away from the center of that obstacle. For cycles to exist, it must be that the obstacles are equidistant from the objective.
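The edge rule \eqref{equ:graph_def} is easy to test computationally. The sketch below (our illustration; the helper functions are hypothetical and the obstacles are taken to be discs in the plane) uses the fact that a hyperplane separating ${\mathcal O}_i$ from ${\mathcal O}_j$ with $x^*$ on the side of ${\mathcal O}_j$ exists exactly when ${\mathcal O}_i$ can be strictly separated from the convex hull of ${\mathcal O}_j \cup \{x^*\}$, and approximates the distance to that hull by a grid search.
\begin{verbatim}
import numpy as np

def dist_to_hull(c, cj, rj, xstar, grid=200):
    # Distance from point c to conv(B(cj, rj) U {xstar}).  Hull points are
    # t*xstar + (1-t)*y with y in B(cj, rj), i.e., balls of radius (1-t)*rj
    # centered on the segment from cj to xstar.
    best = np.inf
    for t in np.linspace(0.0, 1.0, grid):
        p = t * xstar + (1.0 - t) * cj
        best = min(best, max(np.linalg.norm(c - p) - (1.0 - t) * rj, 0.0))
    return best

def separates_with_target(ci, ri, cj, rj, xstar):
    # Condition 1 for the ordered pair (i, j): a separating hyperplane
    # exists with xstar on the same side as obstacle j.
    return dist_to_hull(ci, cj, rj, xstar) > ri

def has_edge(ci, ri, cj, rj, xstar):
    # Edge rule: Condition 1 holds and Condition 2 does not.
    return separates_with_target(ci, ri, cj, rj, xstar) and \
        not separates_with_target(cj, rj, ci, ri, xstar)

# Obstacle j sits between obstacle i and the target.
xstar = np.array([0.0, 0.0])
ci, cj = np.array([-6.0, 0.0]), np.array([-3.0, 0.0])
print(has_edge(ci, 1.0, cj, 1.0, xstar))  # True: edge from i to j
print(has_edge(cj, 1.0, ci, 1.0, xstar))  # False: no edge from j to i
\end{verbatim}
In the printed example the edge points from the far obstacle to the one sitting between it and the target, matching the interpretation of the graph given above.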
Returning to cycles: for an agent to visit an infinite number of obstacles in a cycle, it must go around each obstacle on the side opposite the target, which requires it to cross the ray extending from the target through the center of the obstacle. Lemma \ref{lem:break_cycles} indicates that the agent moves away from the center of the obstacle, thereby guaranteeing that the agent escapes the cycle. To conclude the proof of Theorem \ref{thm:main_result}, set $K$ to be the largest of $K_i$ [c.f. Lemma \ref{lem:rep_zone}], $K^\varepsilon$, $K^\varepsilon_2$, $K^\varepsilon_3$, and $K_{i,j}$ [c.f. Lemma \ref{lem:boundary}] over all obstacles and pairs of obstacles. \begin{remark} \label{remark:cycles} The convergence of the second order correction dynamics \cite{arXiv_version} holds only for configurations without cycles. In Figure \ref{EdgeEx}, consider constructing a graph for the configuration where the objective lies inside the cluster of obstacles ${\mathcal O}_4$, ${\mathcal O}_5$, ${\mathcal O}_6$ and ${\mathcal O}_7$. Such a graph would contain a cycle, and there are no convergence guarantees for an agent following \eqref{equ:cdc_dynamics}. Lemma \ref{lem:break_cycles} guarantees that an agent following \eqref{our_dynamics} navigates through the gap between adjacent obstacles in the cycle. \end{remark} \section{Proof of Theorem \ref{thm:main_result}}\label{proof_of_theorem} Let us define a global Lyapunov function candidate \begin{equation}\label{eqn_lyap_function} V(x) = \frac{1}{2}{(x-x^*)}^\top {(x-x^*)}. \end{equation} By definition, $V$ is always positive, and equal to zero only when $x = x^*$. To prove convergence to the target, we must show that $\dot V < 0$. That is, we require \begin{equation} \dot V = {(x-x^*)}^\top \dot x < 0. \end{equation} Substituting \eqref{our_dynamics} for $\dot x$, this requires \begin{equation} \label{equ:dot_v} {(x-x^*)}^\top \left(-\beta(x){(x-x^*)} + \frac{{f_0}(x)}{k} \sum_{i = 1}^m {\bar \beta_i(x)} {(x-x_i)} \right) < 0. \end{equation} Simplifying the expression \eqref{equ:dot_v}, we have that $\dot V < 0$ for all $x\in {\mathcal F}$ except when \begin{equation} \label{equ:global_violation} \beta(x) \leq \frac{{f_0}(x) \sum_{i = 1}^m \bar\beta_i(x) {(x-x^*)}^\top{(x-x_i)}}{k {(x-x^*)}^\top {(x-x^*)}}. \end{equation} For any $\delta > 0$, let us define the following neighborhood of the target \begin{equation} \label{equ:Bdest} \ccalB_{x^*}(\delta) := \left\{x \in {\mathcal F} \vert {(x-x^*)}^\top {(x-x^*)} \leq \delta \right\}. \end{equation} We then define the set $\ccalV_k(\delta)$ to be all $x \in \ccalB^c_{x^*}(\delta)$ such that \eqref{equ:global_violation} holds. Formally, let \begin{equation}\label{equ:def_ccalV} \begin{split} \ccalV_k(\delta) :=& \Bigg\{ x\in \ccalB_{x^*}^c(\delta) \vert\\ & \beta(x) \leq \frac{{f_0}(x)\sum_{i = 1}^m \bar \beta_i(x) {(x-x^*)}^\top{(x-x_i)} }{k {(x-x^*)}^\top {(x-x^*)}} \Bigg\}. \end{split} \end{equation} As such, we know that $\dot V < 0$ for all $x \notin \ccalV_k(\delta) \cup \ccalB_{x^*}(\delta)$. From Assumption \ref{assumption:interior}, we know that $x^*$ lies in the interior of the free space. Therefore, there exists a $\delta > 0$ such that $\ccalB_{x^*}(\delta)$ does not intersect any obstacle or the boundary. Since the free space is compact and both ${f_0}$ and $\beta$ are continuously differentiable, it follows that the numerator on the right hand side of \eqref{equ:global_violation} is bounded.
Likewise, since for any $x\in \ccalB_{x^*}^c(\delta)$ we have that $(x-x^*)^\top (x - x^*)>\delta$, it follows that by increasing $k$ the area of $\ccalV_k(\delta)$ decreases. Therefore, we can choose $K^\varepsilon$ large enough such that for all $k> K^\varepsilon$, the region where the global Lyapunov function candidate is increasing is contained within the $\varepsilon$-balls of the obstacles (Figure \ref{fig:eps_ball}). That is, $\ccalV_k(\delta) \subset \cup_{i = 1}^m \ccalB^i_\varepsilon$, where \begin{equation} \label{equ:def_B_varepsilon} \ccalB^i_\varepsilon := \{x | \min_{x'\in {\mathcal O}_i} \|x - x'\| < \varepsilon\}. \end{equation} We focus now on the region $\ccalB_{x^*}(\delta)$ and notice that we can upper bound $\dot V$ in \eqref{equ:dot_v} using the Cauchy-Schwarz inequality by % \begin{equation} \label{eqn_dotVbound} \begin{split} \dot V(x) \leq &-\beta(x)(x - x^*)^\top (x - x^*)\\ &+\frac{{f_0}(x)}{k}\sum_{i = 1}^m \bar\beta_i(x) \left\|x - x^*\right\| \left\|x-x_i\right\|. \end{split} \end{equation} % Because ${f_0}(x)$ is quadratic, the second term is of order $\left\|x-x^*\right\|^3 =\delta^{3/2}$ while the first term is of order $\delta$. Thus, there exists $\delta_0>0$ such that for all $\delta< \delta_0$ the bound \eqref{eqn_dotVbound} is negative. Now that we have established that $V$ defined in \eqref{eqn_lyap_function} is strictly decreasing for all $x \notin \ccalV_{K^\varepsilon}(\delta)$ with $\delta<\delta_0$, we will show that the agent locally escapes the $\varepsilon$-ball of each obstacle. \subsection{Locally Escaping Obstacles} \label{sec:local_escape} \begin{figure}[t!] \centering \includegraphics[trim = {10cm, 0, 10cm, 0}, width=.4\linewidth]{zone_visualization3.png} \caption{Illustration of the different regions defined in Section \ref{sec:local_escape}. The solid line shows the separation of the border of the $\varepsilon$-ball: the agent can only enter through the top half and exit through the bottom half due to the global Lyapunov function candidate. The repulsion zone \eqref{equ:repul_zone_def2}, shown in yellow, is the region where the agent is moving away from the obstacle.} \label{fig:eps_ball} \end{figure} Because the obstacles are convex and non-intersecting, we know that there exists an $\varepsilon > 0$ such that the $\varepsilon$-balls of the obstacles are non-intersecting. Therefore, in this section we show that if an agent enters the $\varepsilon$-ball of an obstacle ${\mathcal O}_i$, it will escape it. We first recall that when $k > K^\varepsilon$, the region $\ccalV_k(\delta)$ where $\dot{V}>0$ is a proper subset of the interior of the $\varepsilon$-ball regions of the obstacles. On the border of the $\varepsilon$-ball, $\beta(x)$ is bounded away from zero. Therefore, for all $k$ larger than some $K^\varepsilon_2$, the dynamics \eqref{our_dynamics} on this border are nearly collinear with $-(x-x^*)$. Consider any ray originating from the destination and intersecting the $\varepsilon$-ball region. The ray intersects the border of the $\varepsilon$-ball at two points. At both of these points, with $k > K^\varepsilon_2$, the agent is moving nearly in the direction of the target, creating an entry point and an exit point. As such, we can divide the boundary of the $\varepsilon$-ball into two regions: the first region is the set of entry points, where the agent enters the $\varepsilon$-ball, and is located on the opposite side of the obstacle with respect to $x^*$.
The second region is the set of exit points, where the agent leaves the $\varepsilon$-ball, and is located on the same side of the obstacle as $x^*$. This is illustrated in Figure \ref{fig:eps_ball}. So we have established that if the agent enters the $\varepsilon$-ball and navigates around the obstacle, it will exit closer to the objective. We now show that the agent will indeed exit the $\varepsilon$-ball region. We accomplish this by defining two local Lyapunov function candidates, the second of which requires the definition of the first, \begin{equation} \label{equ:local_lyap} V_i(x) = \frac{1}{2} {(x-x^*)}^\top \nabla^2 \beta_i(x) {(x-x^*)}. \end{equation} We call this Lyapunov function candidate \emph{local} because it is only proper within the $\varepsilon$-ball of the local obstacle ${\mathcal O}_i$. The idea is that although the global Lyapunov function candidate may not be decreasing in the $\varepsilon$-ball region around the obstacle, the local Lyapunov function candidate $V_i(x)$ guarantees that the agent moves around the obstacle toward the target $x^*$. To do so, we require $\dot V_i(x) = {(x-x^*)}^\top \nabla^2 \beta_i(x)\dot x < 0$, or equivalently \begin{equation}\label{equ:global_cond} \begin{split} {(x-x^*)}^\top &\nabla^2 \beta_i(x) \Big( -\beta(x) {(x-x^*)}\\ &+ \frac{{f_0}(x)}{k}\sum_{l = 1}^m\bar\beta_l(x)(x-x_l) \Big) <0. \end{split} \end{equation} Similar to our definition of $\ccalV_k(\delta)$, we define \begin{equation} \label{equ:repul_zone_def} \begin{split} \ccalB^i_k&(\delta) := \Bigg\{ x\in \ccalB_{x^*}^c(\delta) \vert \\ & \beta(x) \leq \frac{{f_0}(x) \sum_{l =1}^m \bar \beta_l(x){(x-x^*)}^\top \nabla^2 \beta_i(x) (x-x_l) }{k {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x^*)}}\Bigg\}, \end{split} \end{equation} to be the region where the \emph{local} Lyapunov function candidate is increasing. Unlike for the region $\ccalV_k(\delta)$, we can guarantee that an agent which starts outside this region never enters it, and an agent which starts inside the region always exits it. This is the subject of Lemma \ref{lem:local_lyap_2}. For this reason, we refer to $\ccalB^i_k(\delta)$ as the \emph{repulsion zone}. Intuitively, the repulsion zone is the area where the agent is very close to the obstacle, so the $(x-x^*)$ term in the dynamics, which pulls the agent toward the target, becomes negligible compared to the $(x-x_i)$ term, which pushes the agent away from the obstacle. By substituting $\beta(x) = \beta_i(x) \bar \beta_i(x)$, we get an equivalent, obstacle-specific expression for the repulsion zone \begin{equation} \label{equ:repul_zone_def2} \begin{split} \ccalB^i_k&(\delta) := \Bigg\{ x\in \ccalB_{x^*}^c(\delta) \vert \\ & \beta_i(x) \leq \frac{{f_0}(x) \sum_{l =1}^m \bar \beta_l(x){(x-x^*)}^\top \nabla^2 \beta_i(x) (x-x_l) }{k \bar \beta_i(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x^*)}}\Bigg\}. \end{split} \end{equation} Because the free space is compact and we remain bounded away from the target, the numerator on the right hand side of \eqref{equ:repul_zone_def2} is bounded while the denominator grows linearly in $k$; hence, within the repulsion zone, $\beta_i(x)$ decreases at the rate $O(1/k)$ as $k$ increases. Because $\nabla^2 \beta_i(x)$ is positive definite, we know that the denominator $k{(x-x^*)}^\top \nabla^2\beta_i(x){(x-x^*)}$ is positive. Therefore, similar to $\ccalV_k(\delta)$ [c.f. \eqref{equ:def_ccalV}], as $k$ increases the area of $\ccalB^i_k(\delta)$ decreases (Figure \ref{fig:eps_ball}). Let us define $K^\varepsilon_3$ to be such that for all $k > K^\varepsilon_3$, we have $\cup_{i=1}^m \ccalB^i_k(\delta) \subset \cup_{i = 1}^m \ccalB^i_\varepsilon$ [c.f. \eqref{equ:def_B_varepsilon}].
Focusing on obstacle ${\mathcal O}_i$, we let $\ccalB_k^i$ denote the intersection $\ccalB^i_k(\delta) \cap \ccalB_\varepsilon^i$ for ease of notation. By definition, $V_i$ is decreasing for all $x$ inside the $\varepsilon$-ball and outside the repulsion zone. This guarantees that the agent escapes the $\varepsilon$-ball region around each obstacle so long as it does not enter the repulsion zone. On the border of the repulsion zone $\ccalB_k^i$, the local Lyapunov function is constant since, by definition, $\dot V_i(x)\vert_{x\in \partial \ccalB_k^i} = 0$. We now define the second \emph{local} Lyapunov function candidate, which will both elucidate the behavior of the agent on the border of the repulsion zone and validate that the agent is moving away from the obstacle in this region. Let \begin{equation} \begin{split} \tilde V_i(x) =& \max_{y\in\ccalB_k^i}\left\{ (y-x^*)^\top \nabla^2\beta_i(y) {(y-x^*)} \right\}\\ & - {(x-x^*)}^\top \nabla^2\beta_i(x) {(x-x^*)}. \end{split} \end{equation} The following lemma establishes that the \emph{repulsion zone} acts like a barrier: if the agent starts outside the repulsion zone, it will not enter the zone, and if the agent starts inside the repulsion zone, it will exit it. \begin{lemma}\label{lem:local_lyap_2} Given the dynamics \eqref{our_dynamics}, the function $\tilde V_i(x)$ is strictly decreasing on $\emph{\textbf{int}}\ccalB_k^i$ and constant on the border $\partial \ccalB_k^i$. \end{lemma} \begin{proof} See Appendix-\ref{proof_local_lyap_2} \end{proof} Lemma \ref{lem:local_lyap_2} confirms that by following the flow prescribed in \eqref{our_dynamics}, the agent never collides with the obstacle. In fact, if the agent starts outside the repulsion zone, it will not enter the repulsion zone, as doing so would require $\tilde V_i(x)$ to increase. Lemma \ref{lem:local_lyap_2} also confirms that an agent which starts inside the repulsion zone reaches its border. We have shown that the agent moves toward $x^*$ for all $x$ in the $\varepsilon$-ball of obstacle ${\mathcal O}_i$ except for the repulsion zone. Further, we have shown that it is impossible to enter the repulsion zone and that, if the agent starts inside the repulsion zone, it will reach the border of the repulsion zone. In fact, the border of the repulsion zone is the region where both local Lyapunov function candidates $V_i$ and $\tilde V_i$ are constant. What is left to show is that the agent will exit the border of the repulsion zone. We accomplish this with the following two lemmas. The first confirms that the agent always moves in the direction away from the center when it is on the border of the repulsion zone. The second asserts that there is only one critical point on the border of the repulsion zone, and that this critical point is an unstable equilibrium. \begin{lemma}\label{lem:break_cycles} Consider any point $x$ on the border of the repulsion zone of ${\mathcal O}_i$ such that $(x-x_i)$ is not aligned with $(x-x^*)$. Let $n$ be a unit vector in a plane containing the points $x$, $x^*$, and $x_i$ such that $n^\top(x-x^*) = 0$ and $n^\top(x-x_i) = \delta(x) > 0$.
Then there exists a large enough $K(x)$ such that for all $k > K(x)$, $n^\top\dot x > 0$. \end{lemma} \begin{proof} See Appendix-\ref{proof_break_cycles} \end{proof} \begin{lemma} \label{lem:rep_zone} There exists a $K_i$ such that for all $k > K_i$, there is only one critical point on $\partial \ccalB_k^i$, and that critical point is an unstable equilibrium. \end{lemma} \begin{proof} See Appendix-\ref{proof_rep_zone} \end{proof} Lemmas \ref{lem:break_cycles} and \ref{lem:rep_zone} guarantee that the agent exits the repulsion zone for almost every initial position. As soon as the agent exits the border of the repulsion zone, it is in the region where the local Lyapunov function candidate $V_i$ is decreasing, meaning that returning to the repulsion zone is impossible. We have therefore shown that if the agent enters an $\varepsilon$-region of an obstacle ${\mathcal O}_i$, it will exit that region. This result holds for any bounded free space, and therefore we have shown that even when the obstacle is almost flat, the dynamics allow the agent to maneuver around the obstacle. What is left to show is that the agent can only visit a finite number of obstacles. This, combined with locally escaping obstacles and the global Lyapunov function candidate, completes the proof of Theorem \ref{thm:main_result}. \subsection{Proof of Lemma \ref{lem:local_lyap_2}} \label{proof_local_lyap_2} For all $x\in \mathcal{B}_k^i$, $\tilde V_i(x) \geq 0$, with equality holding only when $x$ attains the maximum in the definition of $\tilde V_i$. Now consider \begin{equation} \dot{\tilde V}_i(x) = -2{(x-x^*)}^\top \nabla^2 \beta_i(x) \dot x, \end{equation} where we used that $\nabla^2\beta_i$ is constant because $\beta_i$ is quadratic. The factor of two does not affect the sign; dropping it and substituting \eqref{our_dynamics} for $\dot x$, we get \begin{equation} \begin{split} \dot{\tilde V}_i =& \beta(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x^*)} \\ & - \frac{{f_0}(x)}{k}\bar\beta_i(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x_i)}\\ &-\frac{{f_0}(x)}{k} \sum_{j \neq i } \bar \beta_j(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x_j)}. \end{split} \end{equation} We can bound $\dot{\tilde V}_i(x)$ by substituting in the value of $\beta(x)$ on the border of the repulsion zone [c.f. \eqref{equ:repul_zone_def}], \begin{equation} \begin{split} \dot{\tilde V}_i \leq &\frac{{f_0}(x){(x-x^*)}^\top \nabla^2 \beta_i(x){(x-x^*)} }{k{(x-x^*)}^\top\nabla^2 \beta_i(x){(x-x^*)}}\cdot \\ &\Big( \bar \beta_i(x){(x-x^*)}^\top \nabla^2\beta_i(x){(x-x_i)} \\ &+ \sum_{j \neq i} \bar\beta_j(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x_j)}\Big)\\ &- \frac{{f_0}(x)}{k}\bar\beta_i(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x_i)}\\ &-\frac{{f_0}(x)}{k} \sum_{j \neq i } \bar \beta_j(x) {(x-x^*)}^\top \nabla^2\beta_i(x){(x-x_j)}\\ & = 0. \end{split} \end{equation} We have shown that $\dot{\tilde V}_i$ is strictly negative in the interior of the repulsion zone and zero on its border. This completes the proof. \subsection{Proof of Lemma \ref{lem:break_cycles}} \label{proof_break_cycles} Evaluate $n^\top \dot x$ with the dynamics \eqref{our_dynamics}, \begin{equation} n^\top \dot x = n^\top \left( -\beta(x){(x-x^*)} + \frac{{f_0}(x)}{k}\sum_{i = 1}^m{\bar \beta_i(x)} {(x-x_i)} \right). \end{equation} The first term $-\beta(x) n^\top {(x-x^*)} = 0$ by our definition of $n$. Therefore we have \begin{equation} n^\top \dot x = \frac{{f_0}(x)}{k}\left( {\bar \beta_i(x)} n^\top {(x-x_i)} + \sum_{j \neq i}{\bar \beta_j(x)} n^\top {(x-x_j)} \right). \end{equation} The first term is bounded away from zero since $n^\top {(x-x_i)} = \delta(x) > 0$.
Further, each $\bar\beta_j(x)$ in the second term contains a common factor of $\beta_i(x)$, which on the repulsion zone decreases at a rate proportional to $1/k$. As such, there exists a $K(x)$ large enough such that the first term is larger in magnitude than the second, thereby making $n^\top \dot x > 0$. This concludes the proof. \subsection{Proof of Lemma \ref{lem:rep_zone}} \label{proof_rep_zone} First note that \eqref{equ:dot_v} holds trivially when $(x-x^*)^\top(x-x_i) < 0$, that is, when the agent and the objective are on the same side of the obstacle. We conjecture that the saddle point on the border of the repulsion zone is close to the point $x_s \in \partial \ccalB_k^i$ where \begin{equation}\label{equ:aligned} (x_s-x^*) = a(x_s-x_i), \end{equation} for some $a>0$. Consider grouping the dynamics \eqref{our_dynamics} as \begin{equation} \dot x = \bar\beta_i(x) \left(g_1(x) + g_2(x) \right), \end{equation} where \begin{equation} g_1(x) = -\beta_i(x) {(x-x^*)} + \frac{{f_0}(x)}{k}{(x-x_i)}, \end{equation} and \begin{equation} g_2(x) = \frac{{f_0}(x)}{k\bar \beta_i(x)} \left( \sum_{j \neq i} \bar \beta_j(x) {(x-x_j)} \right). \end{equation} Every term in $g_2$ has a common factor of $\beta_i(x)$, which decreases at a rate of $O(1/k)$ on the repulsion zone; the remaining factors in $g_2$ are bounded by our assumptions. As such, any critical points induced by $g_2$ are dominated by the critical points of $g_1$ for all $k$ larger than some $K'_i$. To show that $x_s$ is unstable, we show that for some $v$, $v^\top J_{g_1}(x_s) v > 0$, where $J_{g_1}$ is the Jacobian of $g_1$. We compute the Jacobian \begin{equation} \begin{split} J_{g_1}(x) =& -{(x-x^*)} \nabla \beta_i(x)^\top - \beta_i(x) I\\ &+\frac{{f_0}(x)}{k}I + \frac{1}{k} {(x-x_i)} \nabla {f_0}(x)^\top. \end{split} \end{equation} Let $v$ be a unit vector such that $v^\top (x_s-x^*) = 0$ and $v^\top (x_s-x_i) = 0$, and consider evaluating $v^\top J_{g_1}(x_s)v$, \begin{equation} \begin{split} v^\top J_{g_1}(x_s)v =& -v^\top (x_s-x^*)\nabla \beta_i(x_s)^\top v - \beta_i(x_s)\\ &+\frac{{f_0}(x_s)}{k} + \frac{1}{k} v^\top(x_s - x_i) \nabla {f_0}(x_s)^\top v. \end{split} \end{equation} Note that by the definition of $v$, the first and the fourth terms are equal to zero. As such, we have \begin{equation} v^\top J_{g_1}(x_s)v = -\beta_i(x_s) + \frac{{f_0}(x_s)}{k}. \end{equation} Substituting the expression for $\beta_i(x_s)$ [c.f. \eqref{equ:repul_zone_def2}], \begin{equation} \begin{split} v^\top &J_{g_1}(x_s)v = -\frac{{f_0}(x_s) \bar \beta_i(x_s)(x_s -x^*)^\top \nabla^2 \beta_i(x_s) (x_s-x_i) }{k \bar \beta_i(x_s) (x_s-x^*)^\top \nabla^2\beta_i(x_s)(x_s-x^*)} \\ & - \frac{{f_0}(x_s) \sum_{j \neq i}^m \bar \beta_j(x_s)(x_s-x^*)^\top \nabla^2 \beta_i(x_s) (x_s-x_j) }{k \bar \beta_i(x_s) (x_s-x^*)^\top \nabla^2\beta_i(x_s)(x_s-x^*)} \\ & + \frac{{f_0}(x_s)}{k}. \end{split} \end{equation} Applying \eqref{equ:aligned}, the first term simplifies to $-{f_0}(x_s)/(ak)$. For the second term, we again factor $\beta_i(x_s)$ out of each ${\bar \beta_j(x_s)}$, which makes the second term decrease at the rate $O(1/k)$ relative to the last. As such, we obtain \begin{equation} v^\top J_{g_1}(x_s)v = \frac{{f_0}(x_s)}{k} \left( 1 - \frac{1}{a} + O\left(\frac{1}{k}\right) \right). \end{equation} We know that $a > 1$, so there exists a $\hat K_i$ large enough that for every $k > \hat K_i$, the point $x_s$ is unstable. What is left to show is that there is no other critical point on the border of the repulsion zone.
Because we have shown that $x_s$ is an unstable equilibrium point, we know that there exists a $\gamma > 0$ such that the $\gamma$-region around $x_s$ contains no other critical point. At this point, we use the result from Lemma \ref{lem:break_cycles} to show that the dynamics are not trivial. Because we are at least $\gamma$ away from the point where $(x-x_i)$ is aligned with ${(x-x^*)}$, we know there is a $K_i(x)$ large enough such that the agent is moving away from the center of the obstacle. Let $K^\gamma_i$ be the maximum of $K_i(x)$ over the boundary of the $\gamma$-region, and choose $K_i = \max\{K'_i,\hat K_i, K^\gamma_i\}$ to conclude the proof. \subsection{Proof of Lemma \ref{lem:boundary}} \label{proof_lem3} Let us start by writing the product $n^\top \dot{x}$, substituting for $\dot{x}$ the dynamics \eqref{our_dynamics}, \begin{equation} \label{equ:lem3} n^\top \dot x = n^\top \left(-\beta(x){(x-x^*)}+ \frac{{f_0}(x)}{k}\sum_{i = 1}^m \bar\beta_i(x) {(x-x_i)}\right). \end{equation} Since ${\mathcal H}$ does not intersect any obstacle, there exists $\varepsilon>0$ such that for all $x \in {\mathcal H}$ we have that \begin{equation}\label{before_bound} \beta(x) > \varepsilon. \end{equation} Combine \eqref{before_bound} with the fact that $n^\top (x-x^*)=\delta>0$, which holds since $x^*\in{\mathcal H}^1$, to obtain the bound \begin{equation} \label{beta_bound1} \beta(x)n^\top (x-x^*) \geq \delta_\varepsilon, \end{equation} for some $\delta_\varepsilon>0$. Further, because all obstacles are contained in $\{x | \beta_0(x) > 0\}$, $n$ is a unit vector, and the functions ${f_0}(x)$, $B(x)$, and $\beta(x)$ are continuous, we can bound \begin{equation} \label{beta_bound2} {f_0}(x)\sum_{i = 1}^m {\bar \beta_i(x)}\, n^\top {(x-x_i)} \leq C. \end{equation} Use \eqref{beta_bound1} and \eqref{beta_bound2} to bound \eqref{equ:lem3}, \begin{equation} n^\top \dot x \leq - \delta_\varepsilon + \frac{C}{k}. \end{equation} Therefore, let $K_{1,2} = C/\delta_\varepsilon$ so that when $k > K_{1,2}$ we have $n^\top \dot x < 0$. This completes the proof. \section{Numerical Results} \label{num_results} In this section, we compare the performance of our proposed dynamics $g_\textrm{new}$ [c.f. \eqref{our_dynamics}] to the performance of the navigation function dynamics $g_\textrm{nav}$ [c.f. \eqref{old_dynamics}] and the second order correction dynamics $g_\textrm{old}$ [c.f. \eqref{equ:cdc_dynamics}]. We consider a discrete approximation of the flow $\dot x = g(x)$. In practice, the norm of the dynamics is generally very small. This may cause numerical problems when computing the direction, as well as cause the agent to take a long time to reach its target \cite{whitcomb1992toward}. Hence, what is often done in practice is to normalize the field by dividing it by $\epsilon + \|g(x)\|$, where $\epsilon > 0$ \cite{whitcomb1991automatic}. As such, the dynamics will be \begin{equation} \label{discrete_new} x_{t+1} = x_t + \eta \frac{g(x_t)}{\|g(x_t)\| + \epsilon}, \end{equation} where $\eta$ is a constant step size. We set $\epsilon = 10^{-4}$ and $\eta = 0.01$ in \eqref{discrete_new} for all simulations.
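To make the simulation loop concrete, the following sketch (our own illustrative implementation in Python, with hypothetical world parameters; it is not the code used for the reported experiments) evaluates the proposed field $g_\textrm{new}$ of \eqref{our_dynamics} and runs the normalized update \eqref{discrete_new}.
\begin{verbatim}
import numpy as np

# Hypothetical 2-D world: quadratic potential f0(x) = (x-x*)^T Q (x-x*) and
# ellipsoidal obstacles beta_i(x) = 0.5 (x-xi)^T Ai (x-xi) - 0.5 ri^2.
Q = np.eye(2)
x_star = np.array([0.0, 0.0])
obstacles = [  # (center xi, matrix Ai, radius ri)
    (np.array([3.0, 0.5]), np.diag([1.0, 8.0]), 1.0),
    (np.array([-2.0, 2.0]), np.diag([5.0, 1.0]), 1.0),
]

def f0(x):
    return (x - x_star) @ Q @ (x - x_star)

def beta(x, i):
    xi, Ai, ri = obstacles[i]
    return 0.5 * (x - xi) @ Ai @ (x - xi) - 0.5 * ri**2

def g_new(x, k):
    # Proposed field: -beta(x)(x - x*) + (f0(x)/k) sum_i barbeta_i(x)(x - xi).
    b = np.array([beta(x, i) for i in range(len(obstacles))])
    attract = -np.prod(b) * (x - x_star)
    repel = np.zeros_like(x)
    for i, (xi, _, _) in enumerate(obstacles):
        bar_i = np.prod(np.delete(b, i))  # product of beta_j over j != i
        repel += bar_i * (x - xi)
    return attract + (f0(x) / k) * repel

def navigate(x0, k=15.0, eta=0.01, eps=1e-4, max_steps=50000):
    # Normalized discrete update x_{t+1} = x_t + eta * g/(||g|| + eps).
    x = x0.copy()
    for _ in range(max_steps):
        if np.linalg.norm(x - x_star) < eta:  # success criterion from the text
            break
        g = g_new(x, k)
        x = x + eta * g / (np.linalg.norm(g) + eps)
    return x

print(navigate(np.array([5.0, 1.0])))
\end{verbatim}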
First, we consider a world with eight ellipsoidal obstacles and compare the trajectories from several different initial positions under the different dynamics. Next, we consider a world in three dimensions to illustrate that our convergence result holds for obstacles in higher dimensions. Finally, we explore the effect of increasing the number of obstacles in randomly generated ellipsoidal worlds. The obstacles are generated such that condition \eqref{equ:condition} might fail. Therefore, increasing the number of obstacles results in fewer trajectories of the uncorrected dynamics successfully reaching the target. In contrast, the corrected dynamics, both with and without second order information, perform well even when the number of obstacles is large. \subsection{Correcting the Field} \label{sec_traj_plots} \begin{figure*}[!t] \centering \begin{tabular}{ccc} \includegraphics[trim = {1.5cm, 1cm, 1.5cm, 0},width=.33\linewidth]{plot6.pdf} & \includegraphics[trim = {1.5cm, 1cm, 1.5cm, 0},width=.33\linewidth]{plot9.pdf} & \includegraphics[trim = {1.5cm, 1cm, 1.5cm, 0},width=.33\linewidth]{plot5.pdf} \\ \small (a) & \small (b) & \small (c) \end{tabular} \caption{(a) Trajectories generated by following the negative gradient of the RK potential -- that is, $\dot x = g_\textrm{nav}$ [c.f. \eqref{old_dynamics}] -- which is not a navigation function since condition \eqref{equ:condition} is violated. (b) Trajectories generated by the dynamics with second order information, that is, $\dot x = g_\textrm{old}$ [c.f. \eqref{equ:cdc_dynamics}]. (c) Trajectories generated by following our proposed dynamics, that is, $\dot x = g_\textrm{new}$ [c.f. \eqref{our_dynamics}]. Trajectories which converge to a local minimum of $\varphi(x)$ end in a red square. We set $k = 15$.} \label{mult_old_new} \end{figure*} In this section, we show an ellipsoidal world with eight obstacles and several different initialization points. We designed the world such that condition \eqref{equ:condition} does not hold, thereby eliminating the guarantee that the Rimon-Koditschek potential is a navigation function. The ratios of the eigenvalues $\mu^i_\textrm{max}/\mu^i_\textrm{min}$ of the eight obstacles range between 2 and 50. The radius and center of each obstacle are chosen such that Assumptions \ref{assumption:interior} and \ref{as:1} hold, with the radius of the outer boundary $\beta_0$ equal to 20. The objective function is chosen to be ${f_0}(x) = \|x\|^2$. Figure \ref{mult_old_new} (a) shows the vector field and some trajectories for the navigation function dynamics $\dot x = g_\textrm{nav}$. Indeed, condition \eqref{equ:condition} is violated; as such, four of the trajectories converge to local minima appearing behind the obstacles which violate the condition, instead of to the target. We selected $k = 15$ because this was the maximum value of $k$ considered in the analysis for worlds which violate the condition \cite{paternain2018navigation}. We compare the trajectories of the navigation function dynamics, where the condition is violated, to the corrected dynamics with and without second order information. Figure \ref{mult_old_new} (b) shows the agent following $\dot x = g_\textrm{old}$ -- the corrected dynamics with second order information -- while Figure \ref{mult_old_new} (c) shows the agent following $\dot x = g_\textrm{new}$ -- the proposed dynamics without second order information. With the same value of $k = 15$, all of the trajectories converge to the agent's target.
The vector field plots show that there is only one stable point, the target. This is consistent with Theorem \ref{thm:main_result}. Not surprisingly, the trajectories following $g_\textrm{old}$ and $g_\textrm{new}$ are almost identical. The corrected dynamics move toward the target except when the agent is close to an obstacle, at which point the agent veers away from the center of the obstacle, as predicted by Lemma \ref{lem:break_cycles}. We emphasize that the trajectories following $g_\textrm{old}$ require second order information whereas those following $g_\textrm{new}$ do not. Our proposed dynamics achieve the same performance with less information. \subsection{Obstacles in $\mathbb{R}^3$} \begin{figure}[t] \centering \includegraphics[trim = {2cm, .4cm, 2cm, .6cm},clip,angle = 90,width=.9\linewidth]{example3d.png} \caption{We set $k = 40$. Trajectories following the uncorrected dynamics $g_\textrm{nav}$ are shown in red, the corrected second order dynamics $g_\textrm{old}$ in dashed magenta, and the proposed dynamics $g_\textrm{new}$ in green.} \label{mult_3d} \end{figure} Theorem \ref{thm:main_result} holds in general in $n$ dimensions. Up until this point, all of the examples shown have been two dimensional. Figure \ref{mult_3d} shows an example of the dynamics applied to an ellipsoidal world in three dimensions. Here, again, we choose the objective function to be $f_0(x) = \|x\|^2$ with the outer boundary having radius equal to 20. The ratios of the eigenvalues $\mu^i_\textrm{max}/\mu^i_\textrm{min}$ range between 5 and 10. The radii are chosen to be either 1 or 2. The centers are then chosen such that condition \eqref{equ:condition} fails. The trajectory following the navigation function dynamics does not converge to the target, whereas the corrected dynamics, both with and without second order information, converge. Interestingly, for the same value of $k$, we observe that the proposed correction dynamics without second order information take a shorter path than the second order correction dynamics. This suggests that not only do the dynamics \eqref{our_dynamics} require less information, they also result in a shorter path for the same value of $k$. \subsection{Obstacles in ${\mathbb R}^2$} \begin{figure*}[!t] \centering \begin{tabular}{ccc} \includegraphics[width=.33\linewidth]{increase_m_old.pdf} & \includegraphics[width=.33\linewidth]{increase_m_new.pdf} & \includegraphics[width=.33\linewidth]{increase_m_newest.pdf} \\ \small (a) & \small (b) & \small (c) \end{tabular} \caption{(a) Uncorrected dynamics: regardless of the value of $k$, the ratio of successful simulations decreases as the number of obstacles increases. (b) Corrected dynamics (second order information): by increasing $k$, the ratio of successful simulations remains high regardless of the number of obstacles. (c) Proposed dynamics: by increasing $k$, the ratio of successful simulations remains high regardless of the number of obstacles.} \label{m_trend} \end{figure*} In this section, we explore the effect of increasing the number of obstacles on the percentage of successful trajectories. We define the external shell to be a sphere with center $(0,0)$ and radius $r_0$. The center of each ellipsoid is drawn uniformly from $[-r_0/2,r_0/2]^2$. The maximum semiaxis $r_i$ is drawn uniformly from $[r_0/10,r_0/5]$. The positive definite matrices $A_i$ have minimum eigenvalue $1$ and maximum eigenvalue $\mu_\textrm{max}^i$, where $\mu_\textrm{max}^i$ is drawn uniformly from $[1,r_0/2]$. The obstacles are then rotated by an angle $\theta_i$ drawn uniformly from $[-\pi/2,\pi/2]$, and are redrawn if Assumption \ref{as:1} is violated. For the objective function, we consider a quadratic cost given by \begin{equation} {f_0}(x) = (x-x^*)^\top Q (x-x^*), \end{equation} where $Q\in \ccalM^{2 \times 2}$ is a diagonal matrix with eigenvalues $\eig(Q) = \{1,\lambda\}$, with $\lambda$ drawn from $[0,r_0]$. The minimizer of the objective function $x^*$ is drawn uniformly from $[-r_0/2, r_0/2]^2$ and is redrawn if it violates Assumption~\ref{assumption:interior}. Finally, the initial position is drawn uniformly from $[-r_0,r_0]^2$ and is redrawn if it is not in the interior of the free space. A sketch of this sampling procedure is given below.
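The following sketch illustrates the obstacle sampling just described (our illustration only; for brevity, the disjointness test of Assumption \ref{as:1} is replaced by a conservative bounding-circle proxy, and the rejection steps for $x^*$ and the initial position are omitted).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r0 = 20.0

def random_obstacle():
    # Draw one ellipsoid (center, matrix A, radius) as described above.
    center = rng.uniform(-r0/2, r0/2, size=2)
    r = rng.uniform(r0/10, r0/5)             # maximum semiaxis
    mu_max = rng.uniform(1.0, r0/2)          # eigenvalues of A are {1, mu_max}
    theta = rng.uniform(-np.pi/2, np.pi/2)   # rotation angle
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    A = R @ np.diag([1.0, mu_max]) @ R.T
    return center, A, r

def disjoint(o1, o2):
    # Conservative proxy for Assumption 1: since the minimum eigenvalue is 1,
    # each ellipsoid fits inside a circle of radius r around its center, so
    # non-overlapping bounding circles imply disjoint ellipsoids.
    (c1, _, r1), (c2, _, r2) = o1, o2
    return np.linalg.norm(c1 - c2) > r1 + r2

def random_world(m):
    obstacles = []
    while len(obstacles) < m:   # rejection sampling of obstacles
        cand = random_obstacle()
        if all(disjoint(cand, o) for o in obstacles):
            obstacles.append(cand)
    return obstacles
\end{verbatim}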
For our experiments, we set $r_0 = 20$. We then vary the number of obstacles $m$ from two to seven. For each tuning parameter $k \in \{20, 40, 60\}$, we run 100 simulations for each $m \in \{2, \dots , 7\}$. Each simulation is terminated successfully when the norm of the difference between $x_t$ and $x^*$ is less than the step size $\eta = 0.01$. A simulation is terminated unsuccessfully if the agent collides with an obstacle -- including the outer boundary -- or if the number of steps reaches $5\times 10^4$. Figure \ref{m_trend} (a) shows the results of the simulations for the uncorrected dynamics. For all values of $k$, the ratio of successful trajectories decreases as the number of obstacles increases. This is due to the fact that an increased number of obstacles increases the probability that some obstacle violates condition \eqref{equ:condition}. In contrast, Figures \ref{m_trend} (b) and \ref{m_trend} (c) show that as $k$ increases, the ratio of successful trials increases. For $k = 40$, the success percentage is always above $85\%$; for $k = 60$, it is always above $95\%$. Again, we reiterate that the proposed dynamics achieve the same success ratios using less information than the second order corrected dynamics. The poor performance of $k = 20$ in the corrected dynamics is due to the fact that we do not consider the outer boundary $\beta_0$ in the dynamics -- see Remark \ref{beta_0remark}. Because $g_\textrm{nav}$ includes $\beta_0$ as part of the dynamics, the agent is repelled away from the boundary, thereby avoiding collision. In contrast, the correction dynamics avoid this collision by assuming that $k$ is large enough that the agent is always moving inward when it is close to the outer boundary. As expected, the performance improves significantly with larger values of $k$. The similar trends between the dynamics with and without second order correction are due to the fact that the dynamics \eqref{equ:cdc_dynamics} are almost identical to \eqref{our_dynamics}, which is consistent with Section \ref{sec_traj_plots}. \section{Potential, Obstacles, and Navigation} \label{problem_form} We consider the problem of a point agent navigating a quadratic potential in a space with ellipsoidal punctures. Formally, let $\mathcal{X} \subset {\mathbb R}^n$ be a nonempty compact convex set that we call the workspace and let ${f_0}: {\mathcal X} \to {\mathbb R}_+$ be a strictly convex quadratic function that we call the potential. A point agent is interested in reaching the target destination $x^* \in \ccalX$, which is defined as the minimizer of the potential. Thus, for some positive definite matrix $Q$ we can write \begin{equation}\label{eqn_goal_def} x^* = \argmin_{x \in {\mathcal X}} {f_0}(x) = \argmin_{x \in {\mathcal X}} (x-x^*)^\top Q (x-x^*).
\end{equation} In most navigation problems the potential is just the Euclidean distance to the goal \cite{arslan2016sensor, filippidis2011adjustable}, i.e., $Q$ is the identity, but arbitrary quadratic functions are of interest in some situations \cite{paternain2018navigation}. For future reference, denote the minimum and maximum eigenvalues of $Q$ by $0<\lambda_\textrm{min}\leq \lambda_\textrm{max}$. The workspace $\ccalX$ is populated by $m$ ellipsoidal obstacles ${\mathcal O}_i \subset {\mathcal X}$ for $i = 1, \dots, m$, which are closed and have nonempty interior. Each of these obstacles is represented as a sublevel set of a proper convex quadratic function $\beta_i: {\mathbb R}^n \to {\mathbb R}$. These functions are defined by positive definite matrices $A_i$, whose minimum and maximum eigenvalues we denote by $0 < \mu^i_\textrm{min} \leq \mu^i_\textrm{max}$. Further, we introduce ellipsoid centers $x_i$ and ellipsoid radii $r_i$ to define \begin{equation} \label{equ:beta1_def} \beta_i(x) = \frac{1}{2}(x - x_i)^\top A_i (x - x_i) - \frac{1}{2}r_i^2 . \end{equation} The obstacle ${\mathcal O}_i$ is now defined as the zero sublevel set of the quadratic function $\beta_i(x)$, \begin{equation} \label{equ:obs_def_beta} {\mathcal O}_i = \left\{x \in {\mathcal X} \given \beta_i(x) \leq 0 \right\}. \end{equation} From \eqref{equ:beta1_def} and \eqref{equ:obs_def_beta} it follows that ${\mathcal O}_i$ is an ellipsoid centered at $x_i$ with axes given by the eigenvectors of $A_i$. The semiaxis along the eigenvector associated with the eigenvalue $\mu^i_k$ has length $r_i/\sqrt{\mu^i_k}$; in particular, the minor semiaxis has length $r_i/\sqrt{\mu^i_\textrm{max}}$ and the major semiaxis has length $r_i/\sqrt{\mu^i_\textrm{min}}$. We further introduce a concave quadratic function $\beta_0: {\mathbb R}^n \to {\mathbb R}$ so that we can write the workspace as a superlevel set, \begin{equation}\label{eqn_workspace} {\mathcal X} = \left\{x \in {\mathbb R}^n \given \beta_0(x) \geq 0 \right\}. \end{equation} The latter is possible since superlevel sets of concave functions are convex sets \cite{boyd2004convex}. The navigation problem we want to solve is one in which the agent stays in the interior of the workspace at all times, does not collide with any obstacle, and approaches the goal at least asymptotically. For a formal specification we define the free space as the complement of the obstacle set relative to the workspace, \begin{equation}\label{eqn_freespace} {\mathcal F} := {\mathcal X} \ \backslash\ \Big(\, {\textstyle \bigcup\limits_{i = 1}^m}{\mathcal O}_i \, \Big), \end{equation} so that we can specify the agent's goal as that of finding a trajectory $x(t)$ such that \begin{equation}\label{eqn_problem_interest} x(t) \in {\mathcal F} ~\forall ~t\geq 0, ~\textrm{and} ~\lim_{t \to \infty} x(t) = x^*. \end{equation} A common approach to solving \eqref{eqn_problem_interest} is to construct a navigation function \cite{koditschek1990robot}. We explain this in the following section after introducing two assumptions that will be used in forthcoming sections. \begin{assumption} \label{assumption:interior} {\bf Target and initial point in free space.} The target $x^*$ defined in \eqref{eqn_goal_def} and the initial condition $x(0)$ lie in the free space defined in \eqref{eqn_freespace}, i.e., $ x^*,x(0) \in {\mathcal F}$. \end{assumption} \begin{assumption} \label{as:1} {\bf Obstacles do not intersect.} The intersection of any two obstacles is empty, namely, ${\mathcal O}_i \cap {\mathcal O}_j = \emptyset$ for all $i\neq j$ with $i,j=1,\ldots,m$.
\end{assumption} \medskip \begin{figure}[!t] \centering \includegraphics[width=.8\linewidth]{slide3.png} \caption{An ellipsoidal fit of the obstacles satisfies Assumption \ref{as:1}, while a spherical fit does not. The target of the potential is shown by the star.} \label{ellipsoid} \end{figure} Both of these assumptions are minimal restrictions. Assumption \ref{assumption:interior} requires the target and the initial position to be in the free space. This assumption is required for the problem \eqref{eqn_problem_interest} to be feasible. Assumption \ref{as:1} states that obstacles are disjoint. This assumption is usual in the navigation function framework \cite{filippidis2012navigation, paternain2018navigation, rimon1991construction}. Notice that these assumptions indirectly motivate the use of ellipsoidal obstacles, since approximating arbitrary obstacles by ellipsoids results in a better fit than spheres, allowing the representation of a larger class of worlds, as illustrated in Figure \ref{ellipsoid}. In the example depicted in Figure \ref{ellipsoid}, a spherical fit of the obstacles results in an environment that violates Assumptions \ref{assumption:interior} and \ref{as:1}, which prevents the use of navigation functions to solve the problem. Thanks to the results in \cite{li2004least} and \cite{fitzgibbon1999direct}, ellipsoid fitting has been shown to be implementable, which makes ellipsoidal worlds practical. \subsection{Navigation Functions} A navigation function is a twice continuously differentiable function defined on the free space that satisfies three properties: (i) it has a unique minimum at $x^*$; (ii) all of its critical points in the free space are nondegenerate; (iii) its maximum is attained at the boundary of the free space. These three properties guarantee that if an agent follows the negative gradient of the navigation function, it converges to the minimum of the navigation function without running into the boundary of the free space for almost every initial condition \cite{koditschek1990robot}. Thus, it is possible to recast \eqref{eqn_problem_interest} as the problem of finding a navigation function whose minimum is at the goal destination $x^*$. This is always possible since for any manifold with boundary such a function is guaranteed to exist \cite{rimon1992exact}. In general, navigation functions are constructed differently depending on the geometry of the free space. For instance, in spherical worlds Rimon-Koditschek artificial potentials can be used \cite{rimon1992exact}, while in topologically complex ones navigation functions based on harmonic potentials are preferred \cite{loizou2011navigation, loizou2012navigation}. The family of Rimon-Koditschek potentials has been extended to navigate convex potentials such as ${f_0}$ in a space of nonintersecting convex obstacles such as the ${\mathcal O}_i$ we consider here \cite{filippidis2012navigation, paternain2018navigation, rimon1991construction}. However, some geometric conditions restrict its direct application. To explain the construction of these potentials, we use the definitions of the obstacles and the workspace provided in \eqref{equ:obs_def_beta} and \eqref{eqn_workspace} to write an analytical expression for the free space. To that end, define the function $\beta:{\mathbb R}^n \to {\mathbb R}$ to be the product of all the obstacle functions, \begin{equation} \label{beta_prod} \beta(x) = \prod_{i = 1}^m \beta_i(x).
\end{equation} By Assumption \ref{as:1}, only the function $\beta_i$ can be negative inside of obstacle ${\mathcal O}_i$. It follows that $\beta(x)$ is negative if and only if the argument $x$ is inside some obstacle. We can therefore define the free space as the set of points where the product of $\beta(x)$ and $\beta_0(x)$ is positive, \begin{equation}\label{eqn_new_freespace} {\mathcal F} = \left\{ x\in {\mathbb R}^n \given \beta_0(x) \beta(x) > 0 \right\}. \end{equation} We can now think of the product $\beta_0(x) \beta(x)$ as a barrier function that must stay positive during navigation. The Rimon-Koditschek potential enforces this by introducing a parameter $k> 0$ and defining the function $\varphi_k: {\mathcal F} \to {\mathbb R}_+$ as \begin{equation} \label{equ:nav_fun} \varphi_k(x) = \frac{{f_0}(x)} { \left({f_0}^k(x) + \beta_0(x)\beta(x) \right)^{1/k}}. \end{equation} For all $x\in \ccalF$, the Rimon-Koditschek potential satisfies $\varphi_k(x) < 1$. On the boundary of the free space, $\beta_0(x)\beta(x) = 0$, making the potential exactly equal to one. Because the target $x^*$ cannot lie on the boundary (Assumption \ref{assumption:interior}), the denominator is always positive. Since $\varphi_k(x)$ is non-negative for all $x\in\ccalF$ and attains the value zero only at $x^*$, this guarantees that $x^*$ is the minimum of the function. This analysis shows that the aforementioned conditions (i) and (iii) are satisfied. The design parameter $k$ controls the importance of the barrier $\beta_0(x)\beta(x)$ relative to the potential ${f_0}(x)$, thereby repelling the agent and preventing it from crossing into the obstacle space -- see also \eqref{equ:nav_fun_grad}. This interplay is key in establishing that all other critical points are non-degenerate saddle points, and it has been established that $\varphi_k(x)$ is a navigation function when $k$ is sufficiently large, under some restrictions on the shape of the obstacles, the potential function, and the position of the goal \cite{filippidis2012navigation, paternain2018navigation}. The operative phrase in the previous sentence is ``under some restrictions.'' The potential $\varphi_k(x)$ in \eqref{equ:nav_fun} is not always a valid navigation function because for some geometries it can have several local minima as critical points. For the case of a quadratic potential and ellipsoidal obstacles that we consider here, $\varphi_k(x)$ is known to be a valid navigation function when \cite[Theorem 3]{paternain2018navigation} \begin{equation} \label{equ:condition} \frac{\lambda_\textrm{max}}{\lambda_\textrm{min}} \times \frac{\mu^i _\textrm{max}}{\mu^i _\textrm{min}} < 1 + \frac{d_i}{r_i\mu^i_\textrm{max}}, \end{equation} where $d_i = \|x_i - x^* \|$ is the distance from the center of the ellipsoid to the goal and, we recall, $0<\lambda_\textrm{min}\leq \lambda_\textrm{max}$ are the potential eigenvalues, $0 < \mu^i_\textrm{min} \leq \mu^i_\textrm{max}$ are the obstacle eigenvalues, and $r_i$ is the ellipsoid radius. When \eqref{equ:condition} fails, $\varphi_k$ might fail to be a navigation function because it may present a local minimum on the side of the obstacle opposite the target. The important consequence of \eqref{equ:condition} is that the potential $\varphi_k(x)$ in \eqref{equ:nav_fun} may fail to solve the navigation problem specified in \eqref{eqn_problem_interest}. Indeed, it will fail whenever the obstacles are too flat (eccentric) with respect to the potential level sets.
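The condition \eqref{equ:condition} is straightforward to evaluate numerically. The sketch below (our illustration, with made-up numbers) checks it for a rotationally symmetric potential and obstacles of increasing eccentricity, showing how sufficiently flat obstacles violate it.
\begin{verbatim}
def rk_condition_holds(lam_min, lam_max, mu_min, mu_max, d_i, r_i):
    # Test: (lam_max/lam_min)*(mu_max/mu_min) < 1 + d_i/(r_i*mu_max).
    return (lam_max / lam_min) * (mu_max / mu_min) < 1.0 + d_i / (r_i * mu_max)

# Spherical potential (lam_min = lam_max) and one obstacle at distance 5
# from the goal: the condition fails once the obstacle is eccentric enough.
for mu_max in [1.0, 2.0, 10.0, 50.0]:
    ok = rk_condition_holds(1.0, 1.0, 1.0, mu_max, d_i=5.0, r_i=1.0)
    print(f"mu_max = {mu_max:5.1f}: condition holds -> {ok}")
\end{verbatim}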
On the other hand, notice that when the attractive potential is rotationally symmetric and the obstacles are spherical, the left hand side of \eqref{equ:condition} equals one, and thus the condition is always satisfied. The main contribution of this paper is to leverage this observation to introduce a correction in the gradient field arising from a navigation function that allows us to solve \eqref{eqn_problem_interest} in {\it all} environments with quadratic potentials and ellipsoidal obstacles, \emph{without} using second order information about either the potential or the obstacles. \section{Curvature Corrected Navigation Flows} \label{sec_correction} As previously stated, an agent that follows the negative gradient of a navigation function converges to the goal while avoiding the obstacles. To gain some intuition about these dynamics in the case of Rimon-Koditschek potentials, we write them explicitly as \begin{equation} \label{equ:nav_fun_grad} \begin{split} \dot x =& -\nabla \varphi_k(x) = -\big( f_0^k(x) + \beta(x)\beta_0(x)\big)^{-1 - 1/k}\\ &\left( \beta(x)\beta_0(x)\nabla {f_0}(x) - \frac{{f_0}(x)\nabla (\beta(x) \beta_0(x))}{k}\right). \end{split} \end{equation} In practice, the dynamics are typically normalized since the norm of the gradient is generally small \cite{whitcomb1991automatic,whitcomb1992toward}. As such, we omit the positive scaling $\big( f_0^k(x) + \beta(x)\beta_0(x)\big)^{-1 - 1/k}$. We also omit $\beta_0(x)$ for simplicity, a minor modification which we explain in Remark \ref{beta_0remark}. This yields \begin{align} \label{old_dynamics} \dot x = g_\textrm{nav}(x) := -\beta(x) \nabla {f_0}(x) + \frac{{f_0}(x)}{k}\nabla\beta(x). \end{align} The first term in this dynamical system, $-\beta(x) \nabla {f_0}(x)$, is an attractive potential toward the goal, and the second term, $({f_0}(x)/k) \nabla\beta(x)$, is a repulsive field pushing the agent away from the obstacles. When the agent is close to the obstacle ${\mathcal O}_i$, the product function $\beta(x)$ takes a value close to zero, thereby suppressing the first summand in \eqref{old_dynamics} and prompting the agent's velocity to be almost collinear with the vector $\nabla\beta(x)$. In turn, this makes the time derivative $\dot{\beta}(x)$ positive, thus preventing $\beta(x)$ from becoming negative. This guarantees that the agent remains in the free space [cf. \eqref{eqn_new_freespace}]. When the agent is away from the obstacles, the term that dominates is the negative gradient of $f_0(x)$, which pushes the agent towards the goal $x^*$. The parameter $k$ balances the relative strengths of these two potentials. At points where the attractive and repulsive potentials cancel we find critical points. These points can be made saddles when the condition in \eqref{equ:condition} holds. An important observation here is that the condition is always satisfied when the potential and the obstacles are spherical, because in that case the left hand side is $(\lambda_\textrm{max}/\lambda_\textrm{min})\times(\mu^i_\textrm{max}/\mu^i_\textrm{min})=1$. This motivates an approach in which we implement a change of coordinates to render the geometry spherical. The challenge is that the change of coordinates that would render the obstacles spherical is not the same change of coordinates that would render the potential spherical. Still, this idea motivates the curvature corrected dynamics that we present in this section. To simplify the presentation, consider first the case of a single obstacle and postpone the general case to Section \ref{before_proof}.
In this case, the function in \eqref{beta_prod} reduces to $\beta(x) = \beta_1(x)$. Similar to how Newton's method uses second order information to render the level sets of the objective function spherical so as to obtain a faster rate of convergence \cite[Ch. 9.5]{boyd2004convex}, pre-multiplying each gradient in the original flow \eqref{old_dynamics} by the Hessian inverse of the corresponding function \emph{corrects} the dynamics so that the world appears spherical to the agent. This results in the dynamical system \begin{equation}\label{one_dynamics} \dot x = -\beta(x) \nabla^2 {f_0}(x)^{-1}\nabla {f_0}(x) + \frac{{f_0}(x)}{k} \nabla^2 \beta(x)^{-1}\nabla \beta(x). \end{equation} We see that the corrected dynamics \eqref{one_dynamics} differ from the navigation function dynamics \eqref{old_dynamics} in that $\nabla {f_0}(x)$ in the first term is premultiplied by the Hessian inverse $\nabla^2 {f_0}(x)^{-1}$ and in that $\nabla \beta(x)$ is premultiplied by the Hessian inverse $\nabla^2\beta(x)^{-1}$. Observe that, since we assume the objective to be a quadratic function, the Hessian inverse gradient product in \eqref{one_dynamics} is simply given by the position of the agent relative to the goal, \begin{equation}\label{eqn_hessian_gradient_product_1} \nabla^2 {f_0}(x)^{-1}\nabla {f_0}(x) = x - x^*. \end{equation} Likewise, recalling our definition of the ellipsoids in \eqref{equ:beta1_def}, the corresponding Hessian inverse gradient product is the position of the agent relative to the center of the ellipsoid, \begin{equation}\label{eqn_hessian_gradient_product_2} \nabla^2 \beta(x)^{-1}\nabla \beta(x) = x - x_1. \end{equation} Substituting \eqref{eqn_hessian_gradient_product_1} and \eqref{eqn_hessian_gradient_product_2} into our proposed dynamical system in \eqref{one_dynamics} yields \begin{equation}\label{one_dynamics_simplified} \dot x = -\beta(x)(x - x^*) + \frac{{f_0}(x)}{k} (x - x_1). \end{equation} These dynamics are almost equivalent to those obtained by particularizing the navigation function gradient \eqref{old_dynamics} to an environment with a spherical objective function and a spherical obstacle. Since such a spherical environment satisfies the geometric condition in \eqref{equ:condition}, it is reasonable to expect that a point agent following \eqref{one_dynamics}, which reduces to \eqref{one_dynamics_simplified}, converges to the goal $x^*$ in all environments. It is important to note that the correction is not equivalent to treating everything like spheres, because the functions $\beta$ and ${f_0}$ retain their original quadratic form. An interesting side effect of the simplified expression in \eqref{one_dynamics_simplified} is that the implementation of \eqref{one_dynamics} is simpler than it appears: the first term pushes in the direction of the goal and the second term pushes away from the center of the obstacle. Thus, the algorithm can be made to work if we just estimate these two quantities; curvature estimates are not needed for implementation. The dynamics \eqref{one_dynamics} only consider the case where there is one obstacle. In the following section, we generalize the dynamics so that the agent can navigate around an arbitrary number of obstacles. \begin{remark} \label{beta_0remark} \normalfont On the boundary of the workspace ${\mathcal X}$ [c.f. \eqref{eqn_workspace}], for large enough $k$, the dynamics \eqref{old_dynamics} are almost exactly collinear with $-\nabla {f_0}(x)$.
Since the workspace is convex, $-\nabla {f_0}(x)$ points towards the interior of the workspace. Therefore, by selecting a large enough $k$, we know that the agent will not collide with the external boundary of the free space, which is why we omit it from \eqref{old_dynamics}. \end{remark} \subsection{Extension to Multiple Obstacles} \label{before_proof} In this section we extend our proposed dynamics to the case where the workspace is populated with a finite number of non-intersecting obstacles. Let us start by reviewing the extension to multiple obstacles using second order information \cite{arXiv_version}. Using the product rule, we write \begin{equation} \label{equ:nabbeta} \nabla \beta(x) = \sum_{i = 1}^m \left(\prod_{j \neq i} \beta_j(x)\right) \nabla \beta_i(x). \end{equation} For ease of notation, we define the omitted product function $\bar \beta_i(x) = \prod_{j \neq i} \beta_j(x)$, so \eqref{equ:nabbeta} becomes \begin{equation} \nabla \beta(x) = \sum_{i = 1}^m \bar \beta_i(x) \nabla \beta_i(x). \end{equation} When the agent is close to the boundary of an obstacle ${\mathcal O}_i$, the term $\bar\beta_i(x)\nabla\beta_i(x)$ dominates the sum, so that the gradient $\nabla \beta(x)$ is nearly collinear with $\nabla \beta_i(x)$. Observing this, we correct the dynamics by premultiplying the obstacle gradient by $B(x)^{-1}$, where $B(x)$ is a linear combination of the Hessians of the obstacles. The weight of obstacle ${\mathcal O}_i$ is defined by the \emph{soft switch} $\alpha_i(x)$, which is a continuous function equal to one on the border of the obstacle and negligible away from the obstacle. As such, we write the correction term \begin{equation} B(x) = \sum_{i = 1}^m \alpha_i(x) \nabla^2 \beta_i(x), \end{equation} so that the proposed dynamics \cite{arXiv_version} are \begin{equation} \label{equ:cdc_dynamics} \dot x = g_\textrm{old}(x) := -\beta(x) (x-x^*) + \frac{{f_0}(x)}{k} B(x)^{-1}\nabla \beta(x). \end{equation} Similar to how $\nabla \beta(x)$ is approximately collinear with $\nabla \beta_i(x)$ close to the border $\partial {\mathcal O}_i$, the correction term $B(x)$ is approximately equal to the Hessian $\nabla^2\beta_i(x)$ due to the defining properties of the continuous switch functions $\alpha_i(x)$. Naturally, close to the boundary of obstacle ${\mathcal O}_i$ this makes the inverse of the correction term $B(x)^{-1}$ approximately equal to $\nabla^2\beta_i(x)^{-1}$, so that the correction renders the obstacle gradient component approximately proportional to $(x-x_i)$. This is reminiscent of the one obstacle dynamics \eqref{one_dynamics_simplified}. As such, under the assumption that the agent starts $\varepsilon$-away from any obstacle, for most geometries -- see Remark \ref{remark:cycles} -- asymptotic convergence to the target is guaranteed. The issue with these dynamics is that they require the agent to estimate the curvature of the obstacles and store this information to compute the dynamics. We can overcome this issue by applying separate correction terms to the gradient of each obstacle function $\beta_i$ instead of applying the correction $B(x)$ collectively to the gradient $\nabla \beta(x)$. The dynamics with second order correction then become \begin{equation} \dot x = -\beta(x) (x-x^*) + \frac{{f_0}(x)}{k} \sum_{i = 1}^m \bar \beta_i(x) \nabla^2\beta_i(x)^{-1}\nabla \beta_i(x), \end{equation} and the simplification \eqref{one_dynamics_simplified} applies to each obstacle separately, as the computation below shows.
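Indeed, if for concreteness we write the ellipsoids in \eqref{equ:beta1_def} as $\beta_i(x) = (x-x_i)^T Q_i (x-x_i) - 1$ with $Q_i$ positive definite (a notational assumption on our part, matching the single obstacle computation in \eqref{eqn_hessian_gradient_product_2}), then $\nabla \beta_i(x) = 2Q_i(x-x_i)$ and $\nabla^2\beta_i(x) = 2Q_i$, so that
\begin{equation*}
\nabla^2\beta_i(x)^{-1}\nabla\beta_i(x) = (2Q_i)^{-1}\, 2Q_i\, (x-x_i) = x - x_i .
\end{equation*}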
The resulting dynamics for the multiple obstacle case without second order information become \begin{equation} \dot x = g_\textrm{new}(x) := -\beta(x) (x-x^*) + \frac{{f_0}(x)}{k}\sum_{i = 1}^m \bar \beta_i(x) (x-x_i). \label{our_dynamics} \end{equation} The advantages of this approach are threefold: (i) an estimate of the Hessian does not need to be computed, (ii) the dynamics are simpler and easier to implement, and (iii) the convergence proof covers every starting position in the free space. We now present our main result, which guarantees convergence to the target in \emph{all} environments with ellipsoidal obstacles. \begin{theorem} \label{thm:main_result} Let ${f_0}(x)$ be a quadratic potential as in \eqref{eqn_goal_def} and let each $\beta_i(x)$ define an ellipsoidal obstacle as in \eqref{equ:beta1_def} for all $i = 1, \dots, m$. Further let $x$ be the solution of the dynamical system \eqref{our_dynamics} with initial condition $x_0$. Then, there exists a $K$ such that when $k > K$, $x(t) \in {\mathcal F}$ for all $t \geq 0$ and $\lim_{t\to \infty} x(t) = x^*$. \end{theorem} The complete proof is presented in Section \ref{proof_of_theorem}; here we present a sketch. First, we define a global Lyapunov function which shows that the target $x^*$ is asymptotically stable for all but a few points close to the borders of the obstacles. We then choose $K$ large enough so that these regions are contained inside an $\varepsilon$-ball of each obstacle. Within this $\varepsilon$-ball, we show that the agent navigates around the obstacle. We then show that the agent can only visit a finite number of obstacles, which concludes the proof. The result of Theorem \ref{thm:main_result} differs from the guarantee for the second order dynamics in \eqref{equ:cdc_dynamics} \cite[Theorem 2]{arXiv_version}. For the second order dynamics, the value of $k$ was dependent on the initial position $x_0$ of the agent, and the starting point was assumed to be some $\varepsilon$ away from the border of the obstacles. Our result says that there exists a finite $K$ \emph{independent} of the starting position $x_0$ such that when $k > K$, \emph{any} starting position leads to convergence to the target. Further, the analysis for the second order dynamics does not rule out that the agent cycles indefinitely around a group of obstacles tightly surrounding the objective, failing to converge \textemdash see Remark \ref{remark:cycles}. Lemma \ref{lem:break_cycles} provides this guarantee for our proposed dynamics. \section{Introduction} \label{sec_intro} Applied to solve a variety of problems such as surveillance \cite{bhattacharya2007motion,rybski2000team} or search and rescue tasks \cite{murphy2008search, kantor2003distributed}, motion planning considers the problem of defining a path from the robot's current location to its goal such that it is able to avoid colliding with obstacles which might be present in the environment \cite{lavalle2006planning}. Due to the large scope of the motion planning problem, many different approaches exist. For example, bug algorithms are a class of solutions where the agent moves toward the goal until it hits an obstacle, then moves around the border of the obstacle to avoid collision \cite{taylor2009bug,ng2007performance}. Although bug algorithms are simple to understand and can handle environments with non-convex obstacles, they result in non-smooth trajectories which are infeasible for robots actuated with bounded torques.
Furthermore, graph based techniques such as A$^*$ \cite{hart1968formal} and random trees \cite{lavalle1998rapidly} offer solutions which look at the reachability of states from the current state to plan the trajectory. Many of these solutions enjoy optimality guarantees along with convergence to the target \cite{duchovn2014path,koenig2005fast}. Generally formulated for grid worlds, these graph based techniques are computationally expensive as the number of states increases \cite{martelli1977complexity}, making them ill suited for the continuous space environments we consider, such as the hill climbing problem. In addition, they require that a map of the environment be available to the agent beforehand. Artificial potentials offer a relatively fast and effective way to navigate a robot actuated with bounded torques around obstacles toward a target \cite{warren1989global}. By combining an attractive potential of the target with repulsive potentials of the obstacles, artificial potentials offer a continuous space solution to the motion planning problem when the agent follows the negated gradient vector field of the potential function. Simply combining these potentials is not enough to guarantee that the robot will reach its target, as the resulting artificial potential might have several local minima. A \emph{navigation function} is a specific type of artificial potential which guarantees convergence to the target at least asymptotically \cite{koditschek1990robot}. The defining properties of a navigation function ensure that its negative gradient repels the agent away from obstacles and attracts the agent toward the target in such a way that the potential has no local minima other than the desired destination \cite{rimon1992exact}. Navigation functions have been adapted so that they work in finite time \cite{loizou2017navigation, vrohidis2018prescribed}, and they have been extended to solve problems with multiple agents and non-stationary targets \cite{tanner2005formation,ghaffarkhah2009communication}. Although it has been shown that navigation functions exist in any manifold with boundaries \cite{koditschek1990robot}, their specific construction depends on the geometry of the environment. For example, a Rimon-Koditschek potential can be tuned into a navigation function for sufficiently curved worlds \cite{filippidis2012navigation}, of which spherical obstacles \cite{koditschek1990robot} and some worlds with strongly convex obstacles are particular cases \cite{paternain2018navigation}. With global information, even the more general class of star worlds can be transformed into a sphere world so that the navigation function approach can be used \cite{rimon1991construction,loizou2011navigation}. Likewise, navigation functions based on harmonic potentials can be used to navigate topologically complex 3D worlds \cite{loizou2012navigation}. An important advantage of navigation functions is that they can operate without global information as long as their gradients can be measured locally \cite{lionis2007locally,lionis2008towards}. Without going into technical details, approximations of the Rimon-Koditschek potentials can be constructed if we can measure the distances to nearby obstacles and locally estimate their curvature, as well as measure the gradient of the natural potential we intend to minimize \cite{paternain2016stochastic}.
Even in the presence of measurement noise, the estimate of the gradient of the navigation function can be used to achieve convergence to the desired destination with probability one while avoiding running into the obstacles \cite{paternain2016stochastic}. However, the limitation of the Rimon-Koditschek potentials \textemdash regardless of whether their construction is local or global \textemdash is that they fail in situations where the obstacles are flat with respect to the objective potential \cite{filippidis2012navigation,paternain2018navigation}. In these cases, the agent may converge to local minima of the artificial potential which appear near the boundaries of these wide obstacles. This becomes limiting as the number of obstacles increases, as there is a higher chance that such local minima exist (see Figure \ref{m_trend}). Similar conditions regarding the navigability of an environment have been established even for alternate solutions that abandon the smooth vector field approach \cite{arslan2016exact, arslan2016sensor}. The latter suggests that this limitation is intrinsic to the problem and that additional information is required to solve navigation problems with generic convex obstacles. In that sense, we propose to modify the gradient based dynamics by introducing a correction that takes into account the curvature of the obstacles. Since Rimon-Koditschek potentials are guaranteed to converge to the desired goal in spherical worlds, our approach \textemdash reminiscent of Newton's method \cite[Ch. 9]{boyd2004convex} \textemdash uses the Hessian as a local change of coordinates to imitate such worlds (Section \ref{sec_correction}). It is important to point out that the correction introduced is such that, for practical implementation, an estimate of the curvature is not needed; the agent is only required to estimate the distance to the objective and to the obstacles, in addition to their relative positions. This results in a fundamental difference from \cite{arXiv_version}. In Section \ref{proof_of_theorem} we show that the proposed dynamics are such that the robot is able to navigate towards the goal while avoiding the obstacles, regardless of the eccentricity of the obstacles and of the natural potential, from all initial positions in the free space. This result is also stronger than the one in \cite{arXiv_version} since it does not require the initial configuration to be bounded away from the obstacles. Apart from concluding remarks, the paper finishes with numerical results that showcase success in spaces where navigation functions based on Rimon-Koditschek potentials fail (Section \ref{num_results}). \section{Practical considerations}\label{sec_navigation_function_unknown} The gradient controller in \eqref{eqn_gradient_flow} utilizing the navigation function $\varphi=\varphi_k$ in \eqref{eqn_navigation_function} succeeds in reaching a point arbitrarily close to the minimum $x^*$ under the conditions of Theorem \ref{theo_general} or Theorem \ref{theo_ellipses}. However, the controller is not strictly local because constructing $\varphi_k$ requires knowledge of all the obstacles. This limitation can be remedied by noting that the obstacles are encoded through the function $\beta(x)$, which is defined by the product of the functions $\beta_i(x)$ [cf. \eqref{eqn_obstacle_function}]. We can then modify $\beta(x)$ to include only the obstacles that have already been visited.
Let $c>0$ be a constant defining the range of the sensor that detects the obstacles and define the $c$-neighborhood of obstacle $\ccalO_i$ as the set of points with $\beta_i(x) \leq c$. For a given time $t$, we define the set of obstacles of which the agent is aware as the set of obstacles whose $c$-neighborhoods the agent has visited at some time $s\in [0,t]$, \begin{equation}\label{eqn_awareness_set} \ccalA_c(t) \triangleq \Big\{ i : \beta_i(x(s)) \leq c, \text{\ for some\ } s\in[0,t] \Big\}. \end{equation} The above set can be used to construct a modified version of $\beta(x)$ that includes only the obstacles visited by the agent, \begin{equation}\label{eqn_partial_obstacle_function} \beta_{\ccalA_c(t)}(x) \triangleq \beta_0(x) \prod_{i \in \ccalA_c(t)} \beta_i(x). \end{equation} Observe that the above function depends on time through the set $\ccalA_c(t)$; however, this dependence is not explicit as the set is only modified when the agent reaches the neighborhood of a new obstacle. In that sense $\ccalA_c(t)$ behaves as a switch depending only on the position of the agent. Proceeding by analogy to \eqref{eqn_navigation_function}, we use the function $\beta_{\ccalA_c(t)}(x)$ in \eqref{eqn_partial_obstacle_function} to define the switched potential $\varphi_{k,\ccalA_c(t)}(x) : \ccalF_{\ccalA_c(t)} \to \mathbb{R}$ taking values \begin{equation}\label{eqn_partial_navigation_function} \varphi_{k,\ccalA_c(t)}(x) \triangleq \frac{f_0(x)}{\left(f_0^k(x)+\beta_{\ccalA_c(t)}(x) \right)^{1/k}}. \end{equation} The free space $\ccalF_{\ccalA_c(t)}$ is defined as in \eqref{eqn_free_space_set}, with the difference that we remove only those obstacles for which $i \in \ccalA_c(t)$. Observe that $\ccalF_{\ccalA_c(t)} \subseteq \ccalF_{\ccalA_c(s)}$ if $t>s$. We use this potential to navigate the free space $\ccalF$ according to the switched controller \begin{equation}\label{eqn_partial_gradient_flow} \dot{x} = - \nabla \varphi_{k,\ccalA_c(t)}(x) . \end{equation} Given that $\varphi_{k,\ccalA_c(t)}(x)$ is a switched potential, it has points of discontinuity. The switched gradient controller in \eqref{eqn_partial_gradient_flow} is interpreted as following the left limit at the discontinuities. The solution of system \eqref{eqn_partial_gradient_flow} converges to the minimum of $f_0(x)$ while avoiding the obstacles for a set of initial conditions whose measure is one, as we formally state next. \begin{theorem}\label{theo_switched} Let $\mathcal{F}$ be the free space defined in \eqref{eqn_free_space} verifying Assumption \ref{assum_obstacles} and let $\ccalA_c(t)$ for any $c>0$ be the set defined in \eqref{eqn_awareness_set}. Consider the switched navigation function $\varphi_{k,\ccalA_c(t)}: \ccalF_{\ccalA_c(t)} \rightarrow [0,1]$ to be the function defined in \eqref{eqn_partial_navigation_function}. Further let condition \eqref{eqn_general_condition} hold for all $i=1,\ldots, m$ and for all $x_s$ on the boundary of $\mathcal{O}_i$. Then, for any $\varepsilon>0$ there exists a constant $K(\varepsilon)>0$ such that if $k>K(\varepsilon)$, for a set of initial conditions of measure one, the solution of the dynamical system \eqref{eqn_partial_gradient_flow} verifies that $x(t) \in \mathcal{F}$ for all $t\in[0,\infty)$ and its limit is $\bar{x}$, where $\left\|\bar{x}-x^*\right\|<\varepsilon$. Furthermore, if $f_0(x^*) =0$ or $\nabla \beta(x^*) = 0$, then $\bar{x}=x^*$. \end{theorem} \begin{proof} See Appendix \ref{ap_theo_switched_proof}.
\end{proof} Theorem \ref{theo_switched} shows that it is possible to navigate the free space $\ccalF$ and converge asymptotically to the minimum of the objective function $f_0(x)$ by implementing the switched dynamical system \eqref{eqn_partial_gradient_flow}. This dynamical system only uses information about the obstacles that the agent has already visited. Therefore, the controller in \eqref{eqn_partial_gradient_flow} is a spatially local algorithm because the free space is not known a priori but is observed as the agent navigates. Notice, however, that the observation of the obstacles is not entirely local because their complete shape is assumed to become known when the agent visits their respective $c$-neighborhoods. Incremental discovery of obstacles is also considered in \cite{filippidis2011adjustable} for the case of spherical worlds, and the proof is similar to that of Theorem \ref{theo_switched}. We also point out that a minor modification of \eqref{eqn_partial_gradient_flow} can be used for systems with dynamics, as we formalize in the next corollary. \begin{corollary} Consider the system given by \eqref{eqn_robot_model}. Let $\varphi_{k,\ccalA_c(t)}(x)$ be the function given by \eqref{eqn_partial_navigation_function} and let $d(x,\dot{x})$ be a dissipative field. Then, by selecting the torque input \begin{equation}\label{eqn_proposition} \tau(x,\dot{x}) = -\nabla \varphi_{k,\ccalA_c(t)}(x) + d(x,\dot{x}), \end{equation} the behavior of the agent converges asymptotically to solutions of the gradient dynamical system \eqref{eqn_partial_gradient_flow}. \end{corollary} \begin{proof} From the proof of Theorem \ref{theo_switched} it follows that there exists a finite time $T>0$ such that $\ccalA_c(t)$ is constant for any $t\geq T$ [cf. \eqref{eqn_finite_time}]. Then for any $t \geq T$ the dynamical system given by \eqref{eqn_robot_model} with the torque input \eqref{eqn_proposition} is equivalent to the system discussed in Remark \ref{rmk_dynamics} and the proof of \cite{koditschek1991control} follows. \end{proof} The above corollary shows that the goal in \eqref{eqn_navigation_goal} is achieved for a system with nontrivial dynamics when the obstacles are observed in real time. \begin{remark}[\bf Selection of navigation function order $k$]\label{remark_varying_k} Theorems \ref{theo_general}--\ref{theo_switched} give conditions for the existence of a constant $K$ such that for all $k\geq K$ the function $\varphi_k$ in \eqref{eqn_navigation_function} enables successful navigation to the minimum of the potential function $f_0$. The value of $k$ is, however, limited by implementation considerations. E.g., as $k$ grows, the weight of $ \nabla \beta$ relative to $\nabla f_0$ diminishes [cf. \eqref{eqn_nabla_phi}], pushing trajectories closer to the obstacles. This is unsafe because noise in sensor inputs and actuation might result in collisions. A pre-design solution is to experiment on the type of environment in which the agent is to be deployed and select a $k$ that works in most configurations (Section \ref{sec_numerical_examples}). With this implementation restriction, Theorems \ref{theo_general}--\ref{theo_switched} cannot guarantee {\it absence} of local minima but rather assure that it is possible to select a $k$ that will make them {\it rare} for a given family of spatial geometries -- indeed, they vanish as $k$ grows. Alternatively, and given that using a $k$ that is as small as possible is beneficial, algorithms to adapt $k$ can be used.
For a certain maximum allowable value of $k$, Theorems \ref{theo_general}--\ref{theo_switched} do not guarantee absence of local minima, but they indicate that local minima are rare. In either case, the agent may get stuck in a local minimum of the artificial potential $\varphi_k$ -- this may happen because $k$ is not large enough or because the geometry of the problem is unworkable for any $k$. Practical deployments must be combined with a decision making module to dislodge the agent from a local minimum when one is encountered. One possible approach to identifying local minima is to verify that the navigation gradient is $\nabla\varphi_k(x)\approx0$ but the potential gradient is $\nabla f_0(x)\not\approx0$. \end{remark} \begin{comment} \subsection{Adjustable $k$}\label{section_varying_k} In this section we present an algorithm that allows the agent to converge to the minimum of the artificial potential in finite time if condition \eqref{eqn_general_condition} is satisfied. In cases where the condition is not satisfied, the robot increases the tuning parameter $k$ until it reaches the maximum value that it can handle. In that sense, the algorithm in this section provides a way of checking the condition at operation time. Let $f(x)$ be a vector field and define its normalization as \begin{equation} \overline{f(x)} = \left\{\begin{array}{c} \frac{f(x)}{\|f(x)\|} \quad \mbox{if} \quad f(x) \neq 0 \\ 0 \quad \mbox{if} \quad f(x) =0 \end{array}\right. \end{equation} The main reason for considering normalized gradient flows is that the convergence speed is not affected by an increase of the parameter $k$, as happens when considering gradient descent. Furthermore, convergence to critical points happens in finite time (cf. Corollary 10 in \cite{cortes2006finite}). The latter is key to ensure convergence to the minimum of the navigation function by Algorithm \ref{alg_adjustable_k} in finite time, or to reach the maximum value $K_{max}$ that the robot can handle. We formalize this result next. {\small \begin{algorithm}[t] \caption{Adjustable $k$} {\small \label{alg_adjustable_k} \begin{algorithmic}[1] \Require initialization $x_{0}$ and $k_{0}$ \State $k\gets k_0$ \State Normalized Gradient descent $\dot{x} = -\overline{\nabla \varphi_k(x)}.$ \State $x_0 \gets x(t_{f})$ \While {$x_0 \neq \argmin \varphi_k(x)$ and $k<K_{max}$} \State $k\gets k+1$ \State Draw random point $x_{rand}$ in neighborhood of $x_0$ \State Move towards $x_{rand}$ by doing $\dot{x} = -\overline{(x-x_{rand})}.$ \State $x_0 \gets x_{rand}$ \State Normalized Gradient descent $\dot{x} = -\overline{\nabla \varphi_k(x)}.$ \EndWhile \end{algorithmic}} \end{algorithm}} \begin{theorem} Let $\mathcal{F}$ be the free space defined in \eqref{eqn_free_space} satisfying Assumption \ref{assum_obstacles} and let $\varphi_k: \mathcal{F} \rightarrow [0,1]$ be the function defined in \eqref{eqn_navigation_function}. Let $\lambda_{\max}$, $\lambda_{\min}$ and $\mu^i_{\min}$ be the bounds in Assumption \ref{assum_objective_function}. Further let \eqref{eqn_general_condition} hold for all $i=1,\ldots, m$ and for all $x_s$ on the boundary of $\mathcal{O}_i$. Then Algorithm \ref{alg_adjustable_k} finishes in finite time and either $k=K_{max}$ or the final position $x_f$ is such that $x_f= \argmin \varphi_k(x)$. \end{theorem} \begin{proof} Let us first consider the case where the maximum value $K_{max}$ that the robot can handle is larger than the $K$ of Theorem \ref{theo_general}.
Notice that the normalized gradient flow converges to a critical point of the artificial potential $\varphi_k(x)$ (cf. Corollary 10 in \cite{cortes2006finite}). If the initial parameter $k_0$ is large enough, then this implies convergence to the minimum and the algorithm finishes in finite time. If not, then the robot is in a local minimum. The convergence to the randomly selected point also happens in finite time because of the result in \cite{cortes2006finite}. Therefore, in finite time it will be the case that $k>K$. This implies that $\varphi_k(x)$ is a navigation function and therefore the critical points next to the obstacles are saddles. Also, with probability one the point $x_{rand}$ will be in the unstable manifold of the saddle, which ensures that after one more normalized gradient descent the agent converges to the minimum of the navigation function. On the other hand, if $K>K_{max}$ then the agent does not converge to the minimum of the navigation function, but the algorithm terminates in finite time due to the fact that the gradient flows converge in finite time. \end{proof} The previous theorem shows that the adaptive strategy for selecting the parameter $k$ finishes in finite time, either by converging to the minimum of the navigation function or by reaching the maximum $K$ that the agent can handle. If the second happens, then it must be the case that the problem the agent is trying to solve violates condition \eqref{eqn_general_condition}, and in that sense Algorithm \ref{alg_adjustable_k} allows us to identify when the condition does not hold. Observe that, to avoid jittering around the critical points, it is possible to stop the gradient flows when the norm of the gradient is smaller than a given tolerance. \end{comment}
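To make the proposed controller concrete, here is a minimal numerical sketch of the corrected dynamics \eqref{our_dynamics}. The quadratic potential, the ellipsoidal obstacles and all identifiers below are illustrative assumptions of ours, not code or data from this paper; by Theorem \ref{thm:main_result}, for large enough $k$ the trajectory should approach $x^*$.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the corrected flow (our_dynamics):
#   xdot = -beta(x)(x - x_star) + (f0(x)/k) sum_i barbeta_i(x) (x - x_i)
# with f0(x) = (x - x_star)' A (x - x_star) and ellipsoids
# beta_i(x) = (x - x_i)' Q_i (x - x_i) - 1. All data below are made up.
x_star = np.zeros(2)
A = np.diag([1.0, 5.0])                              # eccentric potential
centers = [np.array([3.0, 0.3]), np.array([6.0, -0.6])]
Qs = [np.diag([0.2, 4.0]), np.diag([3.0, 0.3])]      # eccentric obstacles
k, m = 20.0, len(centers)

def f0(x):
    return (x - x_star) @ A @ (x - x_star)

def beta_i(x, i):
    return (x - centers[i]) @ Qs[i] @ (x - centers[i]) - 1.0

def g_new(x):
    # only relative positions and scalar function values: no Hessians
    b = [beta_i(x, i) for i in range(m)]
    rep = sum(np.prod([b[j] for j in range(m) if j != i]) * (x - centers[i])
              for i in range(m))
    return -np.prod(b) * (x - x_star) + (f0(x) / k) * rep

x, dt = np.array([9.0, 0.2]), 1e-3
for _ in range(300_000):
    v = g_new(x)
    x = x + dt * v / max(np.linalg.norm(v), 1e-9)    # normalized step
print(x)   # should end near x_star for large enough k
\end{verbatim}
Consistently with the discussion after \eqref{our_dynamics}, the sketch only uses positions relative to the goal and to the obstacle centers, together with the scalar values $f_0(x)$ and $\beta_i(x)$; no curvature information appears anywhere.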
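In the same spirit, the switched construction of Section \ref{sec_navigation_function_unknown} is easy to realize online, since the awareness set $\ccalA_c(t)$ in \eqref{eqn_awareness_set} only grows along the trajectory. The sketch below, again with made-up obstacle data and omitting the workspace term $\beta_0$, maintains the set and evaluates the switched product $\beta_{\ccalA_c(t)}$ of \eqref{eqn_partial_obstacle_function}.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the awareness set A_c(t) of (eqn_awareness_set):
# an obstacle index enters the set the first time beta_i(x) <= c along
# the path, and beta_{A_c(t)}(x) multiplies only the discovered obstacles.
centers = [np.array([3.0, 0.3]), np.array([6.0, -0.6])]
Qs = [np.diag([0.2, 4.0]), np.diag([3.0, 0.3])]
c = 0.5                    # sensing range parameter
aware = set()              # A_c(t): grows monotonically, acts as a switch

def beta_i(x, i):
    return (x - centers[i]) @ Qs[i] @ (x - centers[i]) - 1.0

def beta_partial(x):
    for i in range(len(centers)):
        if beta_i(x, i) <= c:
            aware.add(i)   # the switch fires at most once per obstacle
    prod = 1.0
    for i in aware:
        prod *= beta_i(x, i)
    return prod
\end{verbatim}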
https://arxiv.org/abs/1709.07345
On the multi-dimensional elephant random walk
The purpose of this paper is to investigate the asymptotic behavior of the multi-dimensional elephant random walk (MERW). It is a non-Markovian random walk which has a complete memory of its entire history. A wide range of literature is available on the one-dimensional ERW. Surprisingly, no references are available on the MERW. The goal of this paper is to fill the gap by extending the results on the one-dimensional ERW to the MERW. In the diffusive and critical regimes, we establish the almost sure convergence, the law of iterated logarithm and the quadratic strong law for the MERW. The asymptotic normality of the MERW, properly normalized, is also provided. In the superdiffusive regime, we prove the almost sure convergence as well as the mean square convergence of the MERW. All our analysis relies on asymptotic results for multi-dimensional martingales.
\section{Introduction} \label{S-I} The elephant random walk (ERW) is a fascinating discrete-time random process arising from mathematical physics. It is a non-Markovian random walk on $\mathbb{Z}$ which has a complete memory of its entire history. This anomalous random walk was introduced by Sch\"utz and Trimper \cite{Schutz04}, in order to investigate how long-range memory affects the random walk and induces a crossover from a diffusive to superdiffusive behavior. It was referred to as the ERW in allusion to the traditional saying that elephants can always remember where they have been. The ERW shows three different regimes depending on the location of its memory parameter $p$ which lies between zero and one. \vspace{1ex}\\ Over the last decade, the ERW has received considerable attention in the mathematical physics literature in the diffusive regime $p< 3/4$ and the critical regime $p=3/4$, see e.g.\! \cite{Baur16},\cite{Boyer14},\cite{Cressoni13},\cite{Cressoni07},\cite{Da13},\cite{Kumar10},\cite{Kursten16},\cite{Par06} and the references therein. Quite recently, Baur and Bertoin \cite{Baur16} and independently Coletti, Gava and Sch\"utz \cite{Col17} have proven the asymptotic normality of the ERW, properly normalized, with an explicit asymptotic variance. \vspace{1ex}\\ The superdiffusive regime $p>3/4$ is much harder to handle. Initially, it was suggested by Sch\"utz and Trimper \cite{Schutz04} that, even in the superdiffusive regime, the ERW has a Gaussian limiting distribution. However, it turns out \cite{Bercu17} that this limiting distribution is not Gaussian, as was already predicted in \cite{Da13}, see also \cite{Col17},\cite{Par06}. \vspace{1ex}\\ Surprisingly, to the best of our knowledge, no references are available on the multi-dimensional elephant random walk (MERW) on $\mathbb{Z}^d$, except \cite{Cressoni13},\cite{Lyu17} in the special case $d=2$. The goal of this paper is to fill the gap by extending the results on the one-dimensional ERW to the MERW. To be more precise, we shall study the influence of the memory parameter $p$ on the MERW and we will show that the critical value is given by $$ p_d=\frac{2d+1}{4d}. $$ In the diffusive and critical regimes $p\leq p_d$, the reader will find the natural extension to higher dimension of the results recently established in \cite{Baur16},\cite{Bercu17},\cite{Col17},\cite{ColN17} on the almost sure asymptotic behavior of the ERW as well as on its asymptotic normality. One can notice that, unlike for the classic random walk, the asymptotic normality of the MERW holds in any dimension $d \geq 1$. In the superdiffusive regime $p>p_d$, we will also prove some extensions of the results in \cite{Bercu17},\cite{Cressoni13},\cite{Lyu17}. \vspace{1ex}\\ Our strategy is to make extensive use of the theory of martingales \cite{Duflo97},\cite{HallHeyde80}, in particular the strong law of large numbers and the central limit theorem for multi-dimensional martingales \cite{Duflo97}, as well as the law of iterated logarithm \cite{Stout70},\cite{Stout74}. We strongly believe that our approach could be successfully extended to the MERW with stops \cite{Cressoni13},\cite{Harbola14}, to the amnesiac MERW \cite{Cressoni07}, as well as to the MERW with reinforced memory \cite{Baur16},\cite{Harris15}. \vspace{1ex}\\ The paper is organized as follows. In Section \ref{S-MERW}, we give the exact definition of the MERW and introduce the multi-dimensional martingale we will extensively make use of. The main results of the paper are given in Section \ref{S-MR}.
As usual, we first investigate the diffusive regime $p<p_d$ and we establish the almost sure convergence, the law of iterated logarithm and the quadratic strong law for the MERW. The asymptotic normality of the MERW, properly normalized, is also provided. Next, we prove similar results in the critical regime $p=p_d$. At last, we study the superdiffusive regime $p>p_d$ and we prove the almost sure convergence as well as the mean square convergence of the MERW to a non-degenerate random vector. Our martingale approach is described in Appendix A, while all technical proofs are postponed to Appendices B and C. \ \vspace{-2ex} \\ \section{The multi-dimensional elephant random walk} \label{S-MERW} First of all, let us introduce the MERW. It is the natural extension to higher dimension of the one-dimensional ERW defined in the pioneering work of Sch\"utz and Trimper \cite{Schutz04}. For a given dimension $d \geq 1$, let $(S_n)$ be a random walk on $\mathbb{Z}^d$, starting at the origin at time zero, $S_0 = 0$. At time $n = 1$, the elephant moves in one of the $2d$ directions with the same probability $1/2d$. Afterwards, at time $n \geq 1$, the elephant chooses uniformly at random an integer $k$ among the previous times $1,\ldots,n$. Then, he moves exactly in the same direction as that of time $k$ with probability $p$ or in one of the $2d-1$ remaining directions with the same probability $(1-p)/(2d-1)$, where the parameter $p$ stands for the memory parameter of the MERW. From a mathematical point of view, the step of the elephant at time $n+1$ is given by \begin{equation} \label{INCMERW} X_{n+1}=A_n X_k \end{equation} where $k$ is the time chosen uniformly at random among $1,\ldots,n$ and \begin{equation*} A_{n} = \left \{ \begin{array}{ccc} +I_d &\text{ with probability } & p \vspace{1ex}\\ -I_d &\text{ with probability } & \frac{1-p}{2d-1} \vspace{1ex}\\ +J_d &\text{ with probability } & \frac{1-p}{2d-1} \vspace{1ex}\\ -J_d &\text{ with probability } & \frac{1-p}{2d-1} \vspace{1ex}\\ & \vdots & \vspace{1ex}\\ +J_d^{d-1} &\text{ with probability } & \frac{1-p}{2d-1} \vspace{1ex}\\ -J_d^{d-1} &\text{ with probability } & \frac{1-p}{2d-1} \end{array} \nonumber \right. \vspace{2ex} \end{equation*} with \begin{equation*} I_d = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots &\vdots \\ 0 & \cdots& 0 & 1 \end{pmatrix} \hspace{1cm} \text{and} \hspace{1cm} J_d = \begin{pmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 \end{pmatrix}. \end{equation*} One can observe that the cyclic permutation matrix $J_d$ satisfies $J_d^d=I_d$. Therefore, the position of the elephant at time $n+1$ is given by \begin{equation} \label{POSMERW} S_{n+1} = S_n + X_{n+1}. \end{equation} It follows from our very definition of the MERW that at any $n \geq 1$, $X_{n+1} = A_n X_{b_n}$ where $A_n$ is the random matrix described before while $b_n$ is a random variable uniformly distributed on $\{1, ... , n\}$. Moreover, as $A_n$ and $b_n$ are conditionally independent, we clearly have $\mathbb{E}\left[X_{n+1}|\mathcal{F}_n \right] = \mathbb{E}\left[A_n\right] \mathbb{E}\left[X_{b_n}|\mathcal{F}_n\right]$ where $\mathcal{F}_n$ stands for the $\sigma$-algebra $\mathcal{F}_n=\sigma(X_1, \ldots,X_n)$.
Hence, we can deduce from the law of total probability that at any time $n \geq 1$, \begin{equation} \label{CEX} \mathbb{E}\left[X_{n+1}|\mathcal{F}_n \right] = \frac{1}{n}\Bigl(\frac{2dp-1}{2d-1}\Bigr) S_n = \frac{a}{n} S_n \hspace{1cm}\text{a.s.} \end{equation} where $a$ is the fundamental parameter of the MERW, \begin{equation} \label{DEFA} a=\frac{2dp-1}{2d-1}. \end{equation} Consequently, we immediately obtain from \eqref{POSMERW} and \eqref{CEX} that for any $n \geq 1$, \begin{equation} \label{CES1} \mathbb{E}\left[S_{n+1}|\mathcal{F}_n\right] = \gamma_n S_n \hspace{1cm} \text{where} \hspace{1cm} \gamma_n = 1+\frac{a}{n}. \end{equation} Furthermore, \begin{equation*}\prod_{k=1}^{n} \gamma_k=\frac{\Gamma(a+1+n)}{\Gamma(a+1)\Gamma(n+1)} \end{equation*} where $\Gamma$ is the standard Euler Gamma function. The critical value associated with the memory parameter $p$ of the MERW is \begin{equation} \label{DEFPD} p_d=\frac{2d+1}{4d}. \end{equation} As a matter of fact, $$ a<\frac{1}{2} \Longleftrightarrow p< p_d, \hspace{1cm} a=\frac{1}{2} \Longleftrightarrow p= p_d, \hspace{1cm} a>\frac{1}{2} \Longleftrightarrow p> p_d. $$ \begin{defi} \label{DEF3R} The MERW $(S_n)$ is said to be diffusive if $p<p_d$, critical if $p=p_d$, and superdiffusive if $p>p_d$. \end{defi} All our investigation in the three regimes relies on a martingale approach. To be more precise, the asymptotic behavior of $(S_n)$ is closely related to the one of the sequence $(M_n)$ defined, for all $n \geq 0$, by $M_n = a_nS_n$ where $a_0=1$, $a_1 = 1$ and, for all $n \geq 2$, \begin{equation} \label{DEFAN} a_n=\prod_{k=1}^{n-1}\gamma_k^{-1} = \frac{\Gamma(a+1)\Gamma(n)}{\Gamma(n+a)}. \end{equation} It follows from a well-known property of the Euler Gamma function that \begin{equation} \label{CVGAMMAN} \lim_{n \rightarrow \infty} \frac{\Gamma(n+a)}{\Gamma(n) n^a}= 1. \end{equation} Hence, we obtain from \eqref{DEFAN} and \eqref{CVGAMMAN} that \begin{equation} \label{CVGAN} \lim_{n \rightarrow \infty} n^a a_n= \Gamma(a+1). \end{equation} Furthermore, since $a_n = \gamma_na_{n+1}$, we can deduce from \eqref{CES1} that for all $n \geq 1$, $$\mathbb{E}\left[M_{n+1}|\mathcal{F}_n\right] = M_n \hspace{1cm} \text{a.s.}$$ It means that $(M_n)$ is a multi-dimensional martingale. Our goal is to extend the results recently established in \cite{Bercu17} to the MERW. One can observe that our approach is more involved than that of \cite{Bercu17} as it requires studying the asymptotic behavior of the multi-dimensional martingale $(M_n)$. \section{Main results} \label{S-MR} \subsection{The diffusive regime} Our first result deals with the strong law of large numbers for the MERW in the diffusive regime where $0 \leq p < p_d$. \begin{thm} \label{T-ASCVG-DR} We have the almost sure convergence \begin{equation} \label{T-ASCVG-DR1} \lim_{n \to \infty} \frac{1}{n}S_n = 0 \hspace{1cm}\text{a.s.} \end{equation} \end{thm} \noindent Some refinements on the almost sure rates of convergence for the MERW are as follows.
\begin{thm} \label{T-ASCVGRATES-DR} We have the quadratic strong law \begin{equation} \label{T-ASCVG-DR2} \lim_{n \to \infty} \frac{1}{\log n}\sum_{k=1}^n \frac{1}{k^2}S_kS_k^T=\frac{1}{d(1-2a)}I_d \hspace{1cm} \text{a.s.} \end{equation} In particular, \begin{equation} \label{T-ASCVG-DR3} \lim_{n \to \infty} \frac{1}{\log n}\sum_{k=1}^n \frac{\|S_k\|^2}{k^2}=\frac{1}{(1-2a)} \hspace{1cm} \text{a.s.} \end{equation} Moreover, we also have the law of iterated logarithm \begin{equation} \label{T-ASCVG-DR4} \limsup_{n \rightarrow \infty} \frac{\|S_n\|^2}{2 n \log \log n} = \frac{1}{(1-2a)} \hspace{1cm} \text{a.s.} \end{equation} \end{thm} \noindent Our next result is devoted to the asymptotic normality of the MERW in the diffusive regime $0 \leq p <p_d$. \begin{thm} \label{T-AN-DR} We have the asymptotic normality \begin{equation} \label{T-CLT-DR} \frac{1}{\sqrt{n}} S_n \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N} \Bigl(0, \frac{1}{(1-2a)d} I_d\Bigr). \end{equation} \end{thm} \begin{rem} We clearly have from \eqref{DEFA} that $$ \frac{1}{1-2a}=\frac{2d-1}{2d(1-2p)+1}. $$ Hence, in the special case $d=1$, the critical value is $p_d=3/4$ and the asymptotic variance is $$ \frac{1}{1-2a}=\frac{1}{3-4p}. $$ Consequently, we recover the asymptotic normality for the one-dimensional ERW in the diffusive regime $0 \leq p <3/4$ recently established in \cite{Baur16},\cite{Bercu17},\cite{Col17}. \end{rem} \subsection{The critical regime} We now focus our attention on the critical regime where the memory parameter $p = p_d$. \begin{thm} \label{T-ASCVG-CR} We have the almost sure convergence \begin{equation} \label{T-ASCVG-CR1} \lim_{n \to \infty} \frac{1}{\sqrt{n}\log n}S_n = 0 \hspace{1cm} \text{a.s.} \end{equation} \end{thm} \noindent We continue with some refinements on the almost sure rates of convergence for the MERW. \begin{thm} \label{T-ASCVGRATES-CR} We have the quadratic strong law \begin{equation} \label{T-ASCVG-CR2} \lim_{n \rightarrow \infty} \frac{1}{\log \log n}\sum_{k=2}^n\frac{1}{(k \log k)^2}S_kS_k^T=\frac{1}{d}I_d \hspace{1cm} \text{a.s.} \end{equation} In particular, \begin{equation} \label{T-ASCVG-CR3} \lim_{n \to \infty} \frac{1}{\log \log n}\sum_{k=2}^n\frac{\|S_k\|^2}{(k \log k)^2}=1 \hspace{1cm} \text{a.s.} \end{equation} Moreover, we also have the law of iterated logarithm \begin{equation} \label{T-ASCVG-CR4} \limsup_{n \rightarrow \infty} \frac{\|S_n\|^2}{2 n\log n\log \log \log n}=1 \hspace{1cm} \text{a.s.} \end{equation} \end{thm} \noindent Our next result concerns the asymptotic normality of the MERW in the critical regime $p=p_d$. \begin{thm} \label{T-AN-CR} We have the asymptotic normality \begin{equation} \label{T-AN-CR1} \frac{1}{\sqrt{n\log n}} S_n\build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N} \Bigl(0, \frac{1}{d} I_d\Bigr). \end{equation} \end{thm} \begin{rem} As before, in the special case $d=1$, we recover the asymptotic normality for the one-dimensional ERW established in \cite{Baur16},\cite{Bercu17},\cite{Col17}, $$ \frac{S_n}{\sqrt{n\log n}} \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N} (0, 1). $$ \end{rem} \subsection{The superdiffusive regime} Finally, we get a handle on the more arduous superdiffusive regime where $p_d < p \leq 1$. \begin{thm} \label{T-ASCVG-SR} We have the almost sure convergence \begin{equation} \label{T-ASCVG-SR1} \lim_{n \to \infty} \frac{1}{n^a} S_n = L \hspace{1cm} \text{a.s.} \end{equation} where the limiting value $L$ is a non-degenerate random vector.
Moreover, we also have the mean square convergence \begin{equation} \label{T-ASCVG-SR2} \lim_{n \to \infty} \mathbb{E}\Bigl[\Bigl\|\frac{1}{n^a}S_n-L\Bigr\|^2\Bigr]=0. \end{equation} \end{thm} \begin{thm} \label{T-MOM-SR} The expected value of $L$ is $\mathbb{E}[L]=0$, while its covariance matrix is given by \begin{equation} \label{COVL} \mathbb{E}\left[LL^T\right]=\frac{1}{d(2a -1)\Gamma(2a)} I_d. \end{equation} In particular, \begin{equation} \label{TRCOVL} \mathbb{E}\left[\|L\|^2\right]=\frac{1}{(2a-1)\Gamma(2a)}. \end{equation} \end{thm} \begin{rem} Another possibility for the MERW is that, at time $n = 1$, the elephant moves in one direction, say the first direction $e_1$ of the standard basis $(e_1, \ldots, e_d)$ of $\mathbb{R}^d$, with probability $q$ or in one of the $2d-1$ remaining directions with the same probability $(1-q)/(2d-1)$, where the parameter $q$ lies in the interval $[0,1]$. Afterwards, at any time $n \geq 2$, the elephant moves exactly as before, which means that his steps are given by \eqref{INCMERW}. Then, the results of Section \ref{S-MR} hold true except Theorem \ref{T-MOM-SR} where $$ \mathbb{E}[L]=\frac{1}{\Gamma(a+1)}\Bigl( \frac{2dq -1}{2d-1} \Bigr) e_1 $$ and $$ \mathbb{E}[LL^T]= \frac{1}{\Gamma(2a+1)} \Bigl( \frac{2dq -1}{2d-1} \Bigr)\Bigl(e_1e_1^T - \frac{1}{d}I_d\Bigr) + \frac{1}{d(2a -1)\Gamma(2a)} I_d, $$ which also leads to $$ \mathbb{E}\left[\|L\|^2\right]=\frac{1}{(2a-1)\Gamma(2a)}. $$ \end{rem} \section*{Appendix A \\ A multi-dimensional martingale approach} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{section}{1} \setcounter{equation}{0} We clearly obtain from \eqref{INCMERW} that for any time $n\geq 1$, $\|X_{n} \| = 1$. Consequently, it follows from \eqref{POSMERW} that $\|S_n\| \leq n$. Therefore, the sequence $(M_n)$ given, for all $n \geq 0$, by $M_n = a_nS_n$, is a locally square-integrable multi-dimensional martingale. It can be rewritten in the additive form \begin{equation} \label{DEFMN} M_n = \sum_{k=1}^{n}a_k \varepsilon_k \end{equation} since its increments $\Delta M_n = M_n - M_{n-1}$ satisfy $\Delta M_n = a_nS_n - a_{n-1}S_{n-1} = a_n \varepsilon_n$ where $\varepsilon_n = S_n - \gamma_{n-1}S_{n-1}$. The predictable quadratic variation associated with $(M_n)$ is the random square matrix of order $d$ given, for all $n \geq 1$, by \begin{equation} \label{IPMN} \langle M \rangle_n = \sum_{k=1}^{n} \mathbb{E} \left[\Delta M_k (\Delta M_k)^T | \mathcal{F}_{k-1}\right]. \end{equation} We already saw from \eqref{CES1} that $\mathbb{E}\left[\varepsilon_{n+1}|\mathcal{F}_n\right] = 0$. Moreover, we deduce from \eqref{POSMERW} together with \eqref{CEX} that \begin{eqnarray} \mathbb{E}\left[S_{n+1} S_{n+1}^T|\mathcal{F}_n\right] & = & S_nS_n^T+\frac{2a}{n}S_nS_n^T + \mathbb{E}\left[X_{n+1}X_{n+1}^T|\mathcal{F}_n\right] \notag\\ & = & \left(1+ \frac{2a}{n}\right)S_nS_n^T + \mathbb{E}\left[X_{n+1}X_{n+1}^T|\mathcal{F}_n\right]\hspace{1cm} \text{a.s.} \label{CES2} \end{eqnarray} In order to calculate the right-hand side of \eqref{CES2}, one can notice that for any $n \geq 1$, $$X_{n}X_{n}^T = \sum_{i=1}^d \mathrm{I}_{X^i_{n} \neq 0}e_ie_i^{T} $$ where $(e_1, \ldots, e_d)$ stands for the standard basis of the Euclidean space $\mathbb{R}^d$ and $X^i_{n}$ is the $i$-th coordinate of the random vector $X_n$.
Moreover, it follows from \eqref{INCMERW} together with the law of total probability that at any time $n \geq 1$ and for any $1\leq i \leq d$, \begin{eqnarray*} \mathbb{P}(X_{n+1}^i\neq 0 | \mathcal{F}_n)& = &\frac{1}{n}\sum _{k=1}^n \mathbb{P}((A_nX_{k})^i\neq0 | \mathcal{F}_n) \\ & = &\frac{1}{n}\sum _{k=1}^n \mathrm{I}_{X_k^i \neq 0} \mathbb{P}(A_n = \pm I_d)+ \frac{1}{n}\sum _{k=1}^n (1-\mathrm{I}_{X_k^i \neq 0}) \mathbb{P}(A_n = \pm J_d) \\ & = &\frac{N_n^{X}(i)}{n} \Bigl(\mathbb{P}(A_n=I_d)-\mathbb{P}(A_n=J_d)\Bigr)+ 2\mathbb{P}(A_n=J_d) \end{eqnarray*} which implies that for any $1\leq i \leq d$, \begin{equation} \label{EXNNEQ} \mathbb{E}\bigl[\mathrm{I}_{X_{n+1}^i \neq 0}|\mathcal{F}_n\bigr] = \frac{a}{n} N_n^{X}(i) + \frac{(1-a)}{d} \hspace{1cm} \text{a.s.} \end{equation} where $$ N_n^{X}(i) = \sum_{k=1}^n \mathrm{I}_{X_k^i \neq 0} $$ and the parameter $a$ is given by \eqref{DEFA}. Hence, we infer from the above decomposition of $X_{n+1}X_{n+1}^T$ together with \eqref{EXNNEQ} that \begin{equation} \label{CEX2} \mathbb{E}\left[X_{n+1}X_{n+1}^T|\mathcal{F}_n\right] = \frac{a}{n}\Sigma_n + \frac{(1-a)}{d}I_d \hspace{1cm} \text{a.s.} \end{equation} where \begin{equation} \label{DEFSIGMAN} \Sigma_n = \sum_{i=1}^d N_n^{X}(i) e_ie_i^{T}. \end{equation} One can observe the elementary fact that for all $n\geq 1$, $\text{Tr}(\Sigma_n)=n$ where $\text{Tr}(\Sigma_n)$ stands for the trace of the positive semi-definite matrix $\Sigma_n$. Therefore, we obtain from \eqref{CES2} together with \eqref{CEX2} that \begin{eqnarray} \mathbb{E}\left[\varepsilon_{n+1}\varepsilon_{n+1}^T|\mathcal{F}_n\right] & = & \mathbb{E}\left[S_{n+1}S_{n+1}^T|\mathcal{F}_n\right]-\gamma_n^2S_nS_n^T \notag \\ & = & \Bigl(1+ \frac{2a}{n}\Bigr)S_nS_n^T + \frac{a}{n}\Sigma_n + \frac{(1-a)}{d}I_d -\gamma_n^2S_nS_n^T \notag \\ & = & \frac{a}{n}\Sigma_n + \frac{(1-a)}{d}I_d - \left( \frac{a}{n}\right) ^2 S_nS_n^T\hspace{1cm} \text{a.s.} \label{CEMEPS2} \end{eqnarray} which ensures that \begin{eqnarray} \label{CEEPS2} \mathbb{E}\left[\|\varepsilon_{n+1}\|^2|\mathcal{F}_n\right] & = &\frac{a}{n}\text{Tr}(\Sigma_n) + \frac{1-a}{d}\text{Tr}(I_d) - \left( \frac{a}{n}\right) ^2 \|S_n\|^2 \notag \\ & = & 1 - ( \gamma_n -1 ) ^2 \|S_n\|^2 \hspace{1cm}\text{a.s.} \end{eqnarray} By the same token, $$ \mathbb{E}\left[\|\varepsilon_{n+1}\|^4|\mathcal{F}_n\right] = 1 - 3(\gamma_n -1)^4\|S_n\|^4 -2 (\gamma_n-1)^2\|S_n\|^2 + 4(\gamma_n-1)^2\xi_n $$ where, thanks to \eqref{CEX2}, $$ \xi_n = \mathbb{E}\left[\langle S_n , X_{n+1}\rangle ^2 | \mathcal{F}_n\right] =\frac{a}{n}S_n^T\Sigma_nS_n + \frac{(1-a)}{d}\|S_n\|^2.
$$ It leads to \begin{eqnarray} \mathbb{E}\left[\|\varepsilon_{n+1}\|^4|\mathcal{F}_n\right] & = & 1 - 3(\gamma_n -1)^4\|S_n\|^4 - 2 \Bigl(1-\frac{2(1-a)}{d}\Bigr)(\gamma_n-1)^2\|S_n\|^2 \notag \\ & & \hspace{5ex} +\frac{4a}{n}(\gamma_n-1)^2S_n^T\Sigma_nS_n \hspace{1cm} \text{a.s.} \label{CEEPS4} \end{eqnarray} Therefore, as $\Sigma_n \leq nI_d$ for the usual order of positive definite matrices, we clearly obtain from \eqref{CEEPS4} that \begin{eqnarray} \label{MAJCEEPS4} \mathbb{E}\left[\|\varepsilon_{n+1}\|^4|\mathcal{F}_n\right] & \leq & 1 - 3(\gamma_n -1)^4\|S_n\|^4 \notag \\ & &\hspace{4ex} +\frac{2}{d} (\gamma_n-1)^2 \Bigl(2a(d-1) +2-d\Bigr)\|S_n\|^2\hspace{0.5cm} \text{a.s.} \end{eqnarray} Consequently, we obtain from \eqref{CEEPS2} and \eqref{MAJCEEPS4} the almost sure upper bounds \begin{equation} \label{MAJMOMEPS} \sup_{n\geq0} \mathbb{E}\left[\|\varepsilon_{n+1}\|^2|\mathcal{F}_n\right] \leq 1 \hspace{1cm} \text{and} \hspace{1cm} \sup_{n\geq0} \mathbb{E}\left[\|\varepsilon_{n+1}\|^4|\mathcal{F}_n\right] \leq \frac{4}{3} \hspace{1cm} \text{a.s.} \end{equation} Hereafter, we deduce from \eqref{IPMN} and \eqref{CEMEPS2} that \begin{eqnarray} \langle M \rangle_n & = & a_1^2 \mathbb{E}[\varepsilon_1\varepsilon_1^T] + \sum_{k=1}^{n-1} a_{k+1}^2 \mathbb{E}\left[\varepsilon_{k+1}\varepsilon_{k+1}^T|\mathcal{F}_k\right] \notag \\ & = & \frac{1}{d}I_d \sum_{k=1}^{n}a_k^2 + a \sum_{k=1}^{n-1}a_{k+1}^2 \Bigl( \frac{1}{k}\Sigma_k-\frac{1}{d}I_d\Bigr) - \zeta_n \label{CALIPMN} \end{eqnarray} where $$ \zeta_n=a^2 \sum_{k=1}^{n-1}\Bigl(\frac{a_{k+1}}{k}\Bigr)^2 S_kS_k^T. $$ Hence, by taking the trace on both sides of \eqref{CALIPMN}, we find that \begin{equation} \label{TRIPMN} \text{Tr}\langle M \rangle_n =\sum_{k=1}^{n}a_k^2 - a ^2 \sum_{k=1}^{n-1} \Bigl(\frac{a_{k+1}}{k}\Bigr)^2 \| S_k \|^2. \end{equation} The asymptotic behavior of the multi-dimensional martingale $(M_n)$ is closely related to the one of \begin{equation*} v_n=\sum_{k=1}^na_k^2=\sum_{k=1}^n\Bigl(\frac{\Gamma(a+1)\Gamma(k)}{\Gamma(a+k)}\Bigr)^2. \end{equation*} One can observe that we always have $\text{Tr} \langle M \rangle_n \leq v_n$. In accordance with Definition \ref{DEF3R}, we have three regimes. In the diffusive regime where $a<1/2$, \begin{equation} \label{VNDIFF} \lim_{n \to \infty} \frac{v_n}{n^{1-2a}}= \ell \hspace{1cm} \text{where} \hspace{1cm} \ell=\frac{(\Gamma(a+1))^2}{1-2a}. \end{equation} In the critical regime where $a=1/2$, \begin{equation} \label{VNCRIT} \lim_{n\to \infty} \frac{v_n}{\log n} = (\Gamma(a+1))^2=\frac{\pi}{4}. \end{equation} Finally, in the superdiffusive regime where $a>1/2$, $v_n$ converges to the finite value \begin{eqnarray} \label{VNSUPER} \lim_{n\rightarrow \infty} v_n & = &\sum_{k=0}^{\infty} \Bigl( \frac{\Gamma(a+1) \Gamma(k+1) }{\Gamma(a+k+1)} \Bigr)^2 = \sum_{k=0}^{\infty} \frac{(1)_k\,(1)_k\,(1)_k} {(a+1)_k\, (a+1)_k\,k!} \notag\\ & = & {}_{3}F_2\Bigl( \begin{matrix} {1,1,1}\\ {a+1,a+1}\end{matrix} \Bigl| 1\Bigr) \end{eqnarray} where, for any $\alpha\in \mathbb{R}$, $(\alpha)_k=\alpha(\alpha+1)\cdots(\alpha+k-1)$ for $k\geq 1$, $(\alpha)_0=1$ stands for the Pochhammer symbol and ${}_{3}F_2$ is the generalized hypergeometric function defined by \begin{eqnarray*} {}_{3}F_2 \Bigl( \begin{matrix} {a,b,c}\\ {d,e}\end{matrix} \Bigl| {\displaystyle z}\Bigr) =\sum_{k=0}^{\infty} \frac{(a)_k\,(b)_k\,(c)_k} {(d)_k\,(e)_k\, k!} z^k. 
\end{eqnarray*} \section*{Appendix B \\ Proofs of the almost sure convergence results} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{section}{2} \setcounter{equation}{0} \vspace{-3ex} \subsection*{} \begin{center} {\bf B.1. The diffusive regime.} \end{center} \ \vspace{-4ex}\\ \noindent{\bf Proof of Theorem \ref{T-ASCVG-DR}.} First of all, we focus our attention on the proof of the almost sure convergence \eqref{T-ASCVG-DR1}. We already saw from \eqref{TRIPMN} that $\text{Tr} \langle M \rangle_n \leq v_n$. Moreover, we obtain from \eqref{VNDIFF} that, in the diffusive regime where $0<a<1/2$, $v_n$ increases to infinity with the speed $n^{1-2a}$. It follows from the strong law of large numbers for multi-dimensional martingales given e.g. by the last part of Theorem 4.3.15 in \cite{Duflo97} that for any $\gamma>0$, \begin{equation} \label{SLLNMN} \frac{\|M_n\|^2}{\lambda_{max} \langle M \rangle_n }=o\Bigl( \Bigl(\log \text{Tr} \langle M \rangle_n \Bigr)^{1+\gamma}\Bigr) \hspace{1cm} \text{a.s.} \end{equation} where $\lambda_{max} \langle M \rangle_n $ stands for the maximal eigenvalue of the random square matrix $\langle M \rangle_n$. However, as $\langle M \rangle_n$ is a positive definite matrix and $\text{Tr} \langle M \rangle_n \leq v_n$, we clearly have $\lambda_{max} \langle M \rangle_n \leq \text{Tr} \langle M \rangle_n \leq v_n$. Consequently, we obtain from \eqref{SLLNMN} that \begin{equation*} \|M_n\|^2=o\bigl( v_n (\log v_n )^{1+\gamma}\bigr) \hspace{1cm} \text{a.s.} \end{equation*} which implies that \begin{equation} \label{CVGMNDIFF} \|M_n\|^2 =o\bigl( n^{1-2a}(\log n)^{1+\gamma} \bigr) \hspace{1cm} \text{a.s.} \end{equation} Hence, as $M_n = a_nS_n$, it follows from \eqref{CVGAN} and \eqref{CVGMNDIFF} that for any $\gamma>0$, \begin{equation*} \|S_n\|^2 =o\bigl(n (\log n)^{1+\gamma}\bigr) \hspace{1cm} \text{a.s.} \end{equation*} which completes the proof of Theorem \ref{T-ASCVG-DR}. \hfill $\videbox$\\ \noindent{\bf Proof of Theorem \ref{T-ASCVGRATES-DR}.} We shall now proceed to the proof of the almost sure rates of convergence given in Theorem \ref{T-ASCVGRATES-DR}. First of all, we claim that \begin{equation} \label{CVGSIGMAN} \lim_{n\to\infty}\frac{1}{n}\Sigma_n = \frac{1}{d}I_d \hspace{1cm} \text {a.s.} \end{equation} where $\Sigma_n$ is the random square matrix of order $d$ given by \eqref{DEFSIGMAN}. As a matter of fact, in order to prove \eqref{CVGSIGMAN} it is only necessary to show that for any $1 \leq i \leq d$, \begin{equation} \label{CVGNXNI} \lim_{n\to\infty}\frac{N_n^{X}(i)}{n} = \frac{1}{d} \hspace{1cm} \text {a.s.} \end{equation} For any $1 \leq i \leq d$, denote $$\Lambda_n(i) = \frac{N_n^{X}(i)}{n}.$$ One can observe that $$\Lambda_{n+1}(i)=\frac{n}{n+1}\Lambda_n(i)+ \frac{1}{n+1}\mathrm{I}_{X^i_{n+1} \neq 0}$$ which leads, via \eqref{EXNNEQ}, to the recurrence relation \begin{equation} \label{RECLNI} \Lambda_{n+1}(i) = \frac{n}{n+1}\gamma_n \Lambda_n(i) + \frac{(1-a)}{d(n+1)} +\frac{1}{n+1}\delta_{n+1}(i) \end{equation} where $\delta_{n+1}(i)=\mathrm{I}_{X^i_{n+1} \neq 0}-\mathbb{E}[\mathrm{I}_{X^i_{n+1}\neq 0}|\mathcal{F}_n]$. After straightforward calculations, the solution of this recurrence relation is given by \begin{equation} \label{SOLLNI} \Lambda_n(i) =\frac{1}{na_n}\Bigl(\Lambda_1(i) + \frac{(1-a)}{d} \sum_{k=2}^na_k + L_n(i) \Bigr) \end{equation} where $$ L_n(i) = \sum_{k=2}^na_k \delta_k(i).
$$ However, $(L_n(i))$ is a square-integrable real martingale with predictable quadratic variation $\langle L(i) \rangle_n$ satisfying $\langle L(i) \rangle_n \leq v_n$ a.s. Then, it follows from the standard strong law of large numbers for martingales given by Theorem 1.3.24 in \cite{Duflo97} that $(L_n(i))^2=O(v_n \log v_n)$ a.s. Consequently, as $na_n^2$ is equivalent to $(1-2a) v_n$, we obtain that for any $1 \leq i \leq d$, \begin{equation} \label{CVGLNI} \lim_{n\to\infty}\frac{1}{na_n}L_n(i) = 0 \hspace{1cm} \text {a.s.} \end{equation} Furthermore, one can easily check from \eqref{CVGAN} that \begin{equation} \label{CVGSUMAN} \lim_{n\to\infty}\frac{1}{na_n}\sum_{k=1}^na_k = \frac{1}{1-a}. \end{equation} Therefore, we find from \eqref{SOLLNI} together with \eqref{CVGLNI} and \eqref{CVGSUMAN} that for any $1 \leq i \leq d$, \begin{equation} \label{CVGLAMBDANI} \lim_{n\to\infty}\Lambda_n(i) = \frac{1}{d} \hspace{1cm} \text {a.s.} \end{equation} which immediately leads to \eqref{CVGNXNI}. Hereafter, it follows from the conjunction of \eqref{T-ASCVG-DR1}, \eqref{CEMEPS2} and \eqref{CVGNXNI} that \begin{equation} \label{CVGEPS2} \lim_{n\to\infty} \mathbb{E}\left[\varepsilon_{n+1}\varepsilon_{n+1}^T|\mathcal{F}_n\right]= \frac{1}{d}I_d \hspace{1cm} \text {a.s.} \end{equation} By the same token, we also obtain from \eqref{CALIPMN} and the Toeplitz lemma that \begin{equation} \label{CVGIPMN} \lim_{n\to\infty} \frac{1}{v_n} \langle M \rangle_n= \frac{1}{d}I_d \hspace{1cm} \text {a.s.} \end{equation} We are now in a position to prove the quadratic strong law \eqref{T-ASCVG-DR2}. For any vector $u$ of $\mathbb{R}^d$, denote $M_n(u)=\langle u, M_n \rangle$ and $\varepsilon_n(u)=\langle u, \varepsilon_n \rangle$. We clearly have from \eqref{DEFMN} \begin{equation*} M_n(u) = \sum_{k=1}^{n}a_k \varepsilon_k(u). \end{equation*} Consequently, $(M_n(u))$ is a square-integrable real martingale. Moreover, it follows from \eqref{CVGEPS2} that \begin{equation*} \lim_{n\to\infty} \mathbb{E}\left[|\varepsilon_{n+1}(u)|^2|\mathcal{F}_n\right]= \frac{1}{d}\|u\|^2 \hspace{1cm} \text {a.s.} \end{equation*} In addition, we can deduce from \eqref{MAJMOMEPS} and the Cauchy-Schwarz inequality that \begin{equation*} \sup_{n\geq 0} \mathbb{E}\left[|\varepsilon_{n+1}(u)|^4|\mathcal{F}_n\right] \leq \frac{4}{3}\|u\|^4 \hspace{1cm} \text{a.s.} \end{equation*} Furthermore, we clearly have from \eqref{CVGAN} and \eqref{VNDIFF} that $$ \lim_{n \rightarrow \infty} n f_n= 1-2a \hspace{1cm} \text{where} \hspace{1cm} f_n=\frac{a_n^2}{v_n}, $$ which of course implies that $f_n$ converges to zero. Therefore, it follows from the quadratic strong law for real martingales given e.g.
in Theorem 3 of \cite{Bercu04} that for any vector $u$ of $\mathbb{R}^d$, \begin{equation} \label{LFQ-DR1} \lim_{n\rightarrow \infty} \frac{1}{\log v_n} \sum_{k=1}^{n} f_{k} \Bigl( \frac{M_{k}^2(u)}{v_k} \Bigr) = \frac{1}{d}\|u\|^2 \hspace{1cm} \text{a.s.} \end{equation} Consequently, we find from \eqref{VNDIFF} and \eqref{LFQ-DR1} that \begin{equation} \label{LFQ-DR2} \lim_{n\rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{a_k^2}{v_k^2} M_{k}^2(u) = \frac{(1-2a)}{d}\|u\|^2 \hspace{1cm} \text{a.s.} \end{equation} Next, as $M_n=a_nS_n$ and $n^2 a_n^4$ is equivalent to $(1-2a)^2v_n^2$, we obtain from \eqref{LFQ-DR2} that for any vector $u$ of $\mathbb{R}^d$, \begin{equation} \label{LFQ-DR3} \lim_{n\rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \frac{1}{k^2}u^TS_kS_k^T u=\frac{1}{d(1-2a)}\|u\|^2 \hspace{1cm} \text{a.s.} \end{equation} By virtue of the second part of Proposition 4.2.8 in \cite{Duflo97}, we can conclude from \eqref{LFQ-DR3} that \begin{equation} \label{LFQ-DR4} \lim_{n\rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \frac{1}{k^2}S_kS_k^T =\frac{1}{d(1-2a)}I_d \hspace{1cm} \text{a.s.} \end{equation} which completes the proof of \eqref{T-ASCVG-DR2}. By taking the trace on both sides of \eqref{LFQ-DR4}, we immediately obtain \eqref{T-ASCVG-DR3}. Finally, we shall proceed to the proof of the law of iterated logarithm given by \eqref{T-ASCVG-DR4}. We already saw that $a_n^4v_n^{-2}$ is equivalent to $(1-2a)^2n^{-2}$. This ensures that \begin{equation} \label{CONDLIL} \sum_{n=1}^{+\infty} \frac{a_{n}^4}{v_n^2} < +\infty. \end{equation} Hence, it follows from the law of iterated logarithm for real martingales due to Stout \cite{Stout70,Stout74}; see also Corollary 6.4.25 in \cite{Duflo97}, that for any vector $u$ of $\mathbb{R}^d$, \begin{eqnarray} \limsup_{n \rightarrow \infty} \Bigl(\frac{1}{2 v_n \log \log v_n}\Bigr)^{1/2} M_n(u) & = & -\liminf_{n \rightarrow \infty} \Bigl(\frac{1}{2 v_n \log \log v_n}\Bigr)^{1/2} M_n(u) \nonumber \\ & = & \frac{1}{\sqrt{d}}\|u\| \hspace{1cm} \text{a.s.} \label{LIL-MG-DR} \end{eqnarray} Consequently, as $M_n(u)=a_n\langle u, S_n\rangle$, we obtain from \eqref{VNDIFF} together with \eqref{LIL-MG-DR} that \begin{eqnarray*} \limsup_{n \rightarrow \infty} \Bigl(\frac{1}{2 n \log \log n}\Bigr)^{1/2} \langle u, S_n\rangle & = & -\liminf_{n \rightarrow \infty} \Bigl(\frac{1}{2 n \log \log n}\Bigr)^{1/2} \langle u, S_n\rangle \nonumber \\ & = & \frac{1}{\sqrt{d(1-2a)}}\|u\| \hspace{1cm} \text{a.s.} \end{eqnarray*} In particular, for any vector $u$ of $\mathbb{R}^d$, \begin{equation} \label{LIL-SNU-DR} \limsup_{n \rightarrow \infty} \frac{1}{2 n \log \log n}\langle u, S_n\rangle^2 = \frac{1}{d(1-2a)}\|u\|^2 \hspace{1cm} \text{a.s.} \end{equation} Moreover, $$ \| S_n \|^2=\sum_{i=1}^d \langle e_i, S_n\rangle^2 $$ where $(e_1, \ldots, e_d)$ is the standard basis of $\mathbb{R}^d$. Finally, we deduce from \eqref{LIL-SNU-DR} that \begin{equation*} \limsup_{n \rightarrow \infty} \frac{\|S_n\|^2}{2 n \log \log n} = \frac{1}{(1-2a)} \hspace{1cm} \text{a.s.} \end{equation*} which completes the proof of Theorem \ref{T-ASCVGRATES-DR}. \hfill $\videbox$\\ \vspace{-2ex} \subsection*{} \begin{center} {\bf B.2. The critical regime.} \end{center} \ \vspace{-4ex}\\ \noindent{\bf Proof of Theorem \ref{T-ASCVG-CR}.} We already saw from \eqref{VNCRIT} that in the critical regime where $a=1/2$, $v_n$ increases slowly to infinity at the logarithmic speed $\log n$.
We obtain once again from the last part of Theorem 4.3.15 in \cite{Duflo97} that for any $\gamma>0$, \begin{equation*} \|M_n\|^2=o\bigl( v_n (\log v_n )^{1+\gamma}\bigr) \hspace{1cm} \text{a.s.} \end{equation*} which leads to \begin{equation} \label{CVGMNCRIT} \|M_n\|^2 =o\bigl( \log n (\log\log n)^{1+\gamma} \bigr) \hspace{1cm} \text{a.s.} \end{equation} Moreover, we clearly have from \eqref{CVGAN} with $a=1/2$ that \begin{equation} \label{CVGANCRIT} \lim_{n\to \infty}na_n^2=\frac{\pi}{4}. \end{equation} Consequently, as $M_n = a_nS_n$, we deduce from \eqref{CVGMNCRIT} and \eqref{CVGANCRIT} that for any $\gamma>0$, \begin{equation*} \|S_n\|^2 =o\bigl(n \log n (\log \log n)^{1+\gamma}\bigr) \hspace{1cm} \text{a.s.} \end{equation*} which completes the proof of Theorem \ref{T-ASCVG-CR}. \hfill $\videbox$\\ \vspace{-2ex} \noindent{\bf Proof of Theorem \ref{T-ASCVGRATES-CR}.} The proof of Theorem \ref{T-ASCVGRATES-CR} is left to the reader as it follows the same lines as that of Theorem \ref{T-ASCVGRATES-DR}. \hfill $\videbox$\\ \vspace{-2ex} \subsection*{} \begin{center} {\bf B.3. The superdiffusive regime.} \end{center} \ \vspace{-4ex} \\ \noindent{\bf Proof of Theorem \ref{T-ASCVG-SR}.} We already saw from \eqref{VNSUPER} that in the superdiffusive regime where $1/2<a \leq 1$, $v_n$ converges to a finite value. As previously seen, $\text{Tr} \langle M \rangle_n \leq v_n$. Hence, we clearly have $$ \lim_{n \rightarrow \infty} \text{Tr} \langle M \rangle_n < \infty \hspace{1cm} \text{a.s.} $$ Therefore, setting \begin{equation} \label{DEFLNSUPER} L_n=\frac{M_n}{\Gamma(a+1)}, \end{equation} we can deduce from the second part of Theorem 4.3.15 in \cite{Duflo97} that \begin{equation} \label{CVGMNSUPER} \lim_{n \to \infty} M_n = M \hspace{1cm} \text{and} \hspace{1cm} \lim_{n \to \infty} L_n = L \hspace{1cm} \text{a.s.} \end{equation} where the limiting values $M$ and $L$ are the random vectors of $\mathbb{R}^d$ given by $$ M=\sum_{k=1}^{\infty}a_k\varepsilon_k \hspace{1cm} \text{and} \hspace{1cm} L=\frac{1}{\Gamma(a+1)}\sum_{k=1}^{\infty}a_k\varepsilon_k. $$ Consequently, as $M_n = a_nS_n$, \eqref{T-ASCVG-SR1} clearly follows from \eqref{CVGAN} and \eqref{CVGMNSUPER}. We now focus our attention on the mean square convergence \eqref{T-ASCVG-SR2}. As $M_0=0$, we have from \eqref{DEFMN} and \eqref{IPMN} that for all $n \geq 1$, $$ \mathbb{E}[\|M_n\|^2]= \sum_{k=1}^n \mathbb{E}[\|\Delta M_{k}\|^2]=\mathbb{E}[\text{Tr} \langle M \rangle_n] \leq v_n. $$ Hence, we obtain from \eqref{VNSUPER} that \begin{equation*} \sup_{n \geq 1} \mathbb{E}\left[\|M_n\|^2\right] \leq {}_{3}F_2\Bigl( \begin{matrix} {1,1,1}\\ {a+1,a+1}\end{matrix} \Bigl| 1\Bigr)< \infty, \end{equation*} which means that the martingale $(M_n)$ is bounded in $\mathbb{L}^2$. Therefore, we have the mean square convergence \begin{equation*} \lim_{n \to \infty} \mathbb{E}\bigl[\|M_n-M\|^2\bigr]=0, \end{equation*} which clearly leads to \eqref{T-ASCVG-SR2}. \hfill $\videbox$\\ \noindent{\bf Proof of Theorem \ref{T-MOM-SR}.} First of all, we clearly have for all $n \geq 1$, $\mathbb{E}[M_n]=0$, which implies that $\mathbb{E}[M]=0$ and thus $\mathbb{E}[L]=0$.
Moreover, taking expectation on both sides of \eqref{CES2} and \eqref{CEX2}, we obtain that for all $n \geq 1$, \begin{eqnarray} \mathbb{E}\left[S_{n+1}S_{n+1}^T\right] & = & \Bigl( 1+\frac{2a}{n}\Bigr)\mathbb{E}\left[S_{n}S_n^T\right] + \mathbb{E} \left[X_{n+1}X_{n+1}^T\right] \notag \\ & = & \Bigl( 1+\frac{2a}{n}\Bigr)\mathbb{E}\left[S_{n}S_n^T\right] +\frac{a}{n}\mathbb{E}\left[\Sigma_n\right]+\frac{(1-a)}{d}I_d. \label{COVSN-1} \end{eqnarray} Next, we claim that \begin{equation} \label{ESIGMAN} \mathbb{E}\left[\Sigma_n\right] = \frac{n}{d}I_d. \end{equation} As a matter of fact, taking expectation on both sides of \eqref{SOLLNI}, we find that for any $1 \leq i \leq d$, \begin{equation} \label{ELNI} \mathbb{E}[\Lambda_n(i)] =\frac{1}{na_n}\Bigl(\mathbb{E}[\Lambda_1(i)] + \frac{(1-a)}{d} \sum_{k=2}^na_k \Bigr). \end{equation} On the one hand, we clearly have $$\mathbb{E}[\Lambda_1(i)]=\frac{1}{d}.$$ On the other hand, it follows from Lemma B.1 in \cite{Bercu17} that \begin{eqnarray} \label{SUMAN} \sum_{k=2}^na_k & = & \sum_{k=2}^n \frac{\Gamma(a+1)\Gamma(k)}{\Gamma(k+a)} = \sum_{k=1}^{n-1} \frac{\Gamma(a+1)\Gamma(k+1)}{\Gamma(k+a+1)} \notag \\ & = & \frac{1}{(a-1)} \left(1 - \frac{\Gamma(a+1)\Gamma(n+1)}{\Gamma(a+n)}\right) = \frac{(1 - na_n)}{(a-1)}. \end{eqnarray} Consequently, we can deduce from \eqref{ELNI} and \eqref{SUMAN} that for any $1 \leq i \leq d$, \begin{equation} \label{ELNIFIN} \mathbb{E}[\Lambda_n(i)] =\frac{1}{na_n}\Bigl(\frac{1}{d} - \frac{(1-na_n)}{d} \Bigr)=\frac{1}{d}. \end{equation} Therefore, we get from \eqref{DEFSIGMAN} and \eqref{ELNIFIN} that \begin{equation*} \mathbb{E}[\Sigma_n] = n \sum_{i=1}^d \mathbb{E}[\Lambda_n(i)] e_ie_i^{T}= \frac{n}{d} \sum_{i=1}^d e_ie_i^{T}=\frac{n}{d} I_d. \end{equation*} Combining \eqref{COVSN-1} and \eqref{ESIGMAN}, we obtain that \begin{equation} \mathbb{E}\left[S_{n+1}S_{n+1}^T\right] =\Bigl( 1+\frac{2a}{n}\Bigr)\mathbb{E}\left[S_{n}S_n^T\right]+ \frac{1}{d} I_d. \label{COVSN-2} \end{equation} It is not hard to see that the solution of this recurrence relation is given by \begin{eqnarray} \label{COVSN-3} \mathbb{E}\left[S_{n}S_n^T\right] & = & \frac{\Gamma(n+2a)}{\Gamma(2a+1)\Gamma(n)} \left( \mathbb{E}[S_1S_1^T] + \frac{1}{d} \sum_{k=1}^{n-1} \frac{\Gamma(2a+1)\Gamma(k+1)}{\Gamma(k+2a+1)} I_d \right) \notag \\ & = & \frac{\Gamma(n+2a)}{\Gamma(n)} \left( \sum_{k=1}^{n} \frac{\Gamma(k)}{\Gamma(k+2a)} \right) \frac{1}{d}I_d \end{eqnarray} since $$ \mathbb{E}[S_1S_1^T] = \frac{1}{d}I_d. $$ Therefore, it follows once again from Lemma B.1 in \cite{Bercu17} that \begin{equation} \label{COVSN-4} \mathbb{E}\left[S_{n}S_n^T\right]=\frac{n}{(2a -1)}\left( \frac{\Gamma(n+2a)}{\Gamma(n+1)\Gamma(2a)} -1\right) \frac{1}{d}I_d. \end{equation} Hence, we obtain from \eqref{DEFLNSUPER} together with \eqref{COVSN-4} that \begin{eqnarray} \label{COVLN} \mathbb{E}[L_n L_n^T] & = & \frac{na_n^2}{(2a -1)(\Gamma(a+1))^2}\left( \frac{\Gamma(n+2a)}{\Gamma(n+1)\Gamma(2a)} -1\right) \frac{1}{d}I_d \notag \\ & = & \frac{n}{(2a -1)} \left( \frac{\Gamma(n)}{\Gamma(n+a)}\right)^2 \left( \frac{\Gamma(n+2a)}{\Gamma(n+1)\Gamma(2a)} -1\right) \frac{1}{d}I_d. \end{eqnarray} Finally, we find from \eqref{T-ASCVG-SR2} and \eqref{COVLN} that $$ \lim_{n\rightarrow \infty} \mathbb{E}[L_n L_n^T]=\mathbb{E}[L L^T]= \frac{1}{d(2a -1)\Gamma(2a)} I_d $$ which completes the proof of Theorem \ref{T-MOM-SR}.
\hfill $\videbox$\\ \section*{Appendix C \\ Proofs of the asymptotic normality results} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{section}{3} \setcounter{equation}{0} \vspace{-2ex} \subsection*{} \begin{center} {\bf C.1. The diffusive regime.} \end{center} \ \vspace{-4ex}\\ \noindent{\bf Proof of Theorem \ref{T-AN-DR}.} In order to establish the asymptotic normality \eqref{T-CLT-DR}, we shall make use of the central limit theorem for multi-dimensional martingales given e.g. by Corollary 2.1.10 of \cite{Duflo97}. First of all, we already saw from \eqref{CVGIPMN} that \begin{equation} \label{RCVGIPMN} \lim_{n\to\infty} \frac{1}{v_n} \langle M \rangle_n= \frac{1}{d}I_d \hspace{1cm} \text{a.s.} \end{equation} Consequently, it only remains to show that $(M_n)$ satisfies Lindeberg's condition, in other words, for all $\varepsilon > 0$, \begin{equation*} \frac{1}{v_n}\sum_{k=1}^n \mathbb{E}\left[\| \Delta M_k \|^2 \mathrm{I}_{\{\| \Delta M_k \| \geq \varepsilon \sqrt{v_n}\}}| \mathcal{F}_{k-1}\right] \build{\longrightarrow}_{}^{\dP} 0. \end{equation*} We have from \eqref{MAJMOMEPS} that for all $\varepsilon > 0$ \begin{eqnarray*} \frac{1}{v_n}\sum_{k=1}^n \mathbb{E}\left[\| \Delta M_k \|^2 \mathrm{I}_{\{\| \Delta M_k \| \geq \varepsilon \sqrt{v_n}\}}| \mathcal{F}_{k-1}\right] & \leq &\frac{1}{\varepsilon^2 v_n^2}\sum_{k=1}^n \mathbb{E}\left[\| \Delta M_k \|^4 | \mathcal{F}_{k-1}\right] \\ & \leq & \sup_{1 \leq k \leq n} \mathbb{E}\left[\|\varepsilon_k\|^4|\mathcal{F}_{k-1}\right]\frac{1}{\varepsilon^2 v_n^2} \sum_{k=1}^na_k^4 \\ & \leq & \frac{4}{3\varepsilon^2 v_n^2}\sum_{k=1}^na_k^4. \end{eqnarray*} However, we already saw from \eqref{CONDLIL} that \begin{equation*} \sum_{n=1}^{+\infty} \frac{a_{n}^4}{v_n^2} < +\infty. \end{equation*} Hence, it follows from Kronecker's lemma that $$\lim_{n\to \infty}\frac{1}{v_n^2}\sum_{k=1}^na_k^4 =0,$$ which ensures that Lindeberg's condition is satisfied. Therefore, we can conclude from the central limit theorem for martingales that \begin{equation} \label{CLTMN-DR} \frac{1}{\sqrt{v_n}}M_n\build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0,\frac{1}{d}I_d\Bigr). \end{equation} As $M_n = a_nS_n$ and $\sqrt{n}a_n$ is equivalent to $\sqrt{v_n(1-2a)}$, we find from \eqref{CLTMN-DR} that $$\frac{1}{\sqrt{n}} S_n\build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0,\frac{1}{d(1-2a)}I_d\Bigr),$$ which completes the proof of Theorem \ref{T-AN-DR}. \hfill $\videbox$\\ \vspace{-5ex} \subsection*{} \begin{center} {\bf C.2. The critical regime.} \end{center} \ \vspace{-4ex}\\ \noindent{\bf Proof of Theorem \ref{T-AN-CR}.} Along the same lines as in the proof of \eqref{CVGIPMN}, we can deduce from \eqref{T-ASCVG-CR1}, \eqref{TRIPMN} and \eqref{VNCRIT} that in the critical regime \begin{equation} \lim_{n \to \infty} \frac{1}{v_n} \langle M\rangle_n=\frac{1}{d}I_d \hspace{1cm} \text{a.s.} \end{equation} Moreover, it follows from \eqref{VNCRIT} and \eqref{CVGANCRIT} that $a_n^2v_n^{-1}$ is equivalent to $(n\log n)^{-1}$. This implies that \begin{equation} \label{CONDLILCR} \sum_{n=1}^{\infty}\frac{a_n^4}{v_n^2} < + \infty. \end{equation} As previously seen, we infer from \eqref{CONDLILCR} that $(M_n)$ satisfies Lindeberg's condition. Therefore, we can conclude from the central limit theorem for martingales that \begin{equation} \label{CLTMN-CR} \frac{1}{\sqrt{v_n}} M_n\build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0,\frac{1}{d}I_d\Bigr).
\end{equation} Finally, as $M_n = a_nS_n$ and $a_n\sqrt{n \log n}$ is equivalent to $\sqrt{v_n}$, we obtain from \eqref{CLTMN-CR} that \begin{equation*} \frac{1}{\sqrt{n \log n}} S_n \build{\longrightarrow}_{}^{{\mbox{\calcal L}}} \mathcal{N}\Bigl(0,\frac{1}{d}I_d\Bigr), \end{equation*} which completes the proof of Theorem \ref{T-AN-CR}. \hfill $\videbox$\\ \vspace{-2ex} \bibliographystyle{acm}
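
As a quick empirical sanity check of the normalizations appearing in the proofs above, the following minimal Python sketch (not part of the original paper) simulates the MERW and compares the empirical covariance of $S_n/\sqrt{n}$ in the diffusive regime with the limiting covariance $\frac{1}{d(1-2a)}I_d$ of Theorem \ref{T-AN-DR}. Since the definition of the model lies outside this excerpt, the sketch assumes the usual MERW dynamics: a past step is chosen uniformly at random and repeated with probability $p$, or otherwise replaced by one of the $2d-1$ remaining unit moves chosen uniformly, so that the memory parameter is $a=(2pd-1)/(2d-1)$; the distribution of the first step is an additional assumption that does not affect the asymptotics.

\begin{verbatim}
import numpy as np

# Monte Carlo sketch of a multi-dimensional elephant random walk (MERW).
# Assumed dynamics (see the lead-in above): the first step is uniform over
# the 2d unit moves; afterwards a uniformly chosen past step is repeated
# with probability p, otherwise replaced by one of the 2d - 1 other moves.

def merw_endpoint(n, d, p, rng):
    moves = np.vstack([np.eye(d), -np.eye(d)])   # the 2d unit moves +/- e_i
    steps = np.zeros((n, d))
    steps[0] = moves[rng.integers(2 * d)]
    for k in range(1, n):
        past = steps[rng.integers(k)]            # uniformly chosen past step
        if rng.random() < p:
            steps[k] = past                      # repeat it with probability p
        else:                                    # pick among the 2d - 1 others
            others = moves[~np.all(moves == past, axis=1)]
            steps[k] = others[rng.integers(2 * d - 1)]
    return steps.sum(axis=0)                     # the position S_n

rng = np.random.default_rng(0)
d, p, n, trials = 2, 0.55, 2000, 300
a = (2 * d * p - 1) / (2 * d - 1)                # here a = 0.4 < 1/2: diffusive
ends = np.array([merw_endpoint(n, d, p, rng) for _ in range(trials)])
print("empirical covariance of S_n/sqrt(n):")
print(np.cov(ends.T) / n)
print("predicted diagonal:", 1.0 / (d * (1 - 2 * a)))
\end{verbatim}

With these (hypothetical) parameters the predicted diagonal entries equal $2.5$; a finite-sample simulation should of course only be expected to match this value approximately.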
{ "timestamp": "2017-09-22T02:08:14", "yymm": "1709", "arxiv_id": "1709.07345", "language": "en", "url": "https://arxiv.org/abs/1709.07345", "abstract": "The purpose of this paper is to investigate the asymptotic behavior of the multi-dimensional elephant random walk (MERW). It is a non-Markovian random walk which has a complete memory of its entire history. A wide range of literature is available on the one-dimensional ERW. Surprisingly, no references are available on the MERW. The goal of this paper is to fill the gap by extending the results on the one-dimensional ERW to the MERW. In the diffusive and critical regimes, we establish the almost sure convergence, the law of iterated logarithm and the quadratic strong law for the MERW. The asymptotic normality of the MERW, properly normalized, is also provided. In the superdiffusive regime, we prove the almost sure convergence as well as the mean square convergence of the MERW. All our analysis relies on asymptotic results for multi-dimensional martingales.", "subjects": "Probability (math.PR); Statistics Theory (math.ST)", "title": "On the multi-dimensional elephant random walk", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.976669234586964, "lm_q2_score": 0.7248702761768249, "lm_q1q2_score": 0.7079584978084608 }
https://arxiv.org/abs/2003.08093
Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method
Min-max saddle point games appear in a wide range of applications in machine learning and signal processing. Despite their wide applicability, theoretical studies are mostly limited to the special convex-concave structure. While some recent works generalized these results to special smooth non-convex cases, our understanding of non-smooth scenarios is still limited. In this work, we study a special form of non-smooth min-max games in which the objective function is (strongly) convex with respect to one of the players' decision variables. We show that a simple multi-step proximal gradient descent-ascent algorithm converges to an $\epsilon$-first-order Nash equilibrium of the min-max game with the number of gradient evaluations being polynomial in $1/\epsilon$. We also show that our notion of stationarity is stronger than existing ones in the literature. Finally, we evaluate the performance of the proposed algorithm through an adversarial attack on a LASSO estimator.
\section{Introduction} Non-convex min-max saddle point games appear in a wide range of applications such as training Generative Adversarial Networks~\cite{goodfellow2014generative,gulrajani2017improved,sanjabi2018convergence, barazandeh2019training}, fair statistical inference \cite{xu2018fairgan, madras2018learning, baharlouei2019r}, and training robust neural networks and systems \cite{madry2017towards, berger2013statistical,barazandeh2018behavior}. In such a game, the goal is to solve the optimization problem of the form \begin{align}\label{sec:intro} \min_{\boldsymbol{\theta} \in \Theta}\; \max_{\boldsymbol{\alpha} \in \mathcal{A}} \;\;f(\boldsymbol{\theta},\boldsymbol{\alpha}), \end{align} which can be considered as a two-player game where one player aims at increasing the objective, while the other tries to minimize it. From a game-theoretic point of view, we may aim at finding Nash equilibria~\cite{nash1950equilibrium}, in which no player can be better off by unilaterally changing its strategy. Unfortunately, finding/checking such Nash equilibria is hard in general~\cite{murty1987some} for non-convex objective functions. Moreover, such Nash equilibria might not even exist. Therefore, many works focus on special cases such as convex-concave problems, where $f(\boldsymbol{\theta},.)$ is concave for any given $\boldsymbol{\theta}$ and $f(.,\boldsymbol{\alpha})$ is convex for any given $\boldsymbol{\alpha}$. Under this assumption, different algorithms such as optimistic mirror descent \cite{rakhlin2013optimization, mertikopoulos2019optimistic, daskalakis2018last,mokhtari2019unified}, the Frank-Wolfe algorithm \cite{gidel2016frank,abernethy2017frank} and the Primal-Dual method~\cite{hamedani2018iteration} have been studied. \vspace{0.2cm} In the general non-convex setting, \cite{rafique2018non} considers the weakly convex-concave case and proposes a primal-dual based approach for finding approximate stationary solutions. More recently, the research works~\cite{lu2019block, lu2019hybrid, nouiehed2019solving, ostrovskii2020efficient} examine the min-max problem in non-convex-(strongly)-concave cases and propose first-order algorithms for solving them. Some of these results have been accelerated in the ``Moreau envelope regime'' by the recent interesting work~\cite{thekumparampil2019efficient}. That work starts by studying the problem in smooth strongly convex-concave and convex-concave settings, and proposes an algorithm based on the combination of the Mirror-Prox \cite{juditsky2011solving} and Nesterov's accelerated gradient descent \cite{nesterov1998introductory} methods. The algorithm is then extended to the smooth non-convex-concave scenario. Some of the aforementioned results are extended to zeroth-order methods for solving non-convex-concave min-max optimization problems~\cite{liu2019min,wang2020zeroth}. As a first step toward solving non-convex non-concave min-max problems, \cite{nouiehed2019solving} studies a class of games in which one of the players satisfies the Polyak-\L{}ojasiewicz (PL) condition and the other player has a general non-convex structure. More recently, the work~\cite{yang2020global} studied two-sided PL min-max games and proposed a variance reduced strategy for solving these games. \vspace{0.2cm} While almost all existing efforts focus on smooth min-max problems, in this work, we study non-differentiable, non-convex-strongly-concave and non-convex-concave games and propose an algorithm for computing their first-order Nash equilibria.
\section{Problem Definition} Consider the min-max zero-sum game \begin{align} \min_{\boldsymbol{\theta} \in \Theta}\; \max_{\boldsymbol{\alpha} \in \mathcal{A}} \;\;(f(\boldsymbol{\theta},\boldsymbol{\alpha}) \triangleq h(\boldsymbol{\theta},\boldsymbol{\alpha}) -p(\boldsymbol{\alpha}) + q(\boldsymbol{\theta})),\label{eq: game1-cons} \end{align} where we assume that the constraint sets and the objective function satisfy the following assumptions throughout the paper. \begin{asu} \label{assumption:Concavity} The sets $\Theta \subseteq \mathbb{R}^{d_\theta}$ and $\mathcal{A} \subseteq \mathbb{R}^{d_\alpha}$ are convex and compact. Moreover, there exist two separate balls of radius $R$ that contain the feasible sets~$\mathcal{A}$ and~$\Theta$. \end{asu} \begin{asu} \label{assumption:objective} The function $h(\boldsymbol{\theta},\boldsymbol{\alpha})$ is continuously differentiable, $p(\cdot)$ and $q(\cdot)$ are convex and (potentially) non-differentiable, $p(\cdot)$ is $L_{p}$-Lipschitz continuous and $q(\cdot)$ is continuous. \end{asu} \begin{asu} \label{assumption: LipSmooth-uncons} The function~$h(\boldsymbol{\theta}, \boldsymbol{\alpha})$ is continuously differentiable in both $\boldsymbol{\theta}$ and $\boldsymbol{\alpha}$ and there exist constants $L_{11}$, $L_{22}$ and $L_{12}$ such that for every $\boldsymbol{\alpha},\boldsymbol{\alpha}_1,\boldsymbol{\alpha}_{2}\in \mathcal{A}$, and $\boldsymbol{\theta},\boldsymbol{\theta}_1,\boldsymbol{\theta}_2 \in \Theta$, we have \[ \begin{array}{ll} &\|\nabla_{\boldsymbol{\theta}} h(\boldsymbol{\theta}_1,\boldsymbol{\alpha})-\nabla_{\boldsymbol{\theta}} h(\boldsymbol{\theta}_2,\boldsymbol{\alpha})\|\leq L_{11}\|\boldsymbol{\theta}_1-\boldsymbol{\theta}_2\|,\nonumber\\ & \|\nabla_{\boldsymbol{\alpha}} h(\boldsymbol{\theta},\boldsymbol{\alpha}_1)-\nabla_{\boldsymbol{\alpha}} h(\boldsymbol{\theta},\boldsymbol{\alpha}_{2})\|\leq L_{22}\|\boldsymbol{\alpha}_1-\boldsymbol{\alpha}_{2}\|,\\ &\|\nabla_{\boldsymbol{\alpha}} h(\boldsymbol{\theta}_1,\boldsymbol{\alpha})-\nabla_{\boldsymbol{\alpha}} h(\boldsymbol{\theta}_2,\boldsymbol{\alpha})\|\leq L_{12}\|\boldsymbol{\theta}_1-\boldsymbol{\theta}_2\|,\nonumber\\ & \|\nabla_{\boldsymbol{\theta}} h(\boldsymbol{\theta},\boldsymbol{\alpha}_1)-\nabla_{\boldsymbol{\theta}} h(\boldsymbol{\theta},\boldsymbol{\alpha}_{2})\|\leq L_{12}\|\boldsymbol{\alpha}_1-\boldsymbol{\alpha}_{2}\|.\\ \end{array} \] \normalsize \end{asu} \vspace{0.2cm} To proceed, let us first define some preliminary concepts: \begin{restatable}{defi}{df}{\normalfont (Directional Derivative)}\label{def.dd} Let $\psi: \mathbb{R}^n \rightarrow \mathbb{R}$ and $\bar{{\mathbf{x}}} \in \mathrm{dom}(\psi)$. The directional derivative of $\psi$ at the point $\bar{{\mathbf{x}}}$ along the direction ${\mathbf{d}}$ is defined as \begin{align*} \psi^{'}({\bar{{\mathbf{x}}}};{\mathbf{d}}) = \lim\limits_{\tau \downarrow 0} \frac{\psi({\bar{{\mathbf{x}}}} + \tau {\mathbf{d}}) - \psi({\bar{{\mathbf{x}}}})}{\tau}. \end{align*} We say that $\psi$ is directionally differentiable at $\bar{{\mathbf{x}}}$ if the above limit exists for all ${\mathbf{d}} \in \mathbb{R}^n$. It can be shown that any convex function is directionally differentiable.
\end{restatable} \begin{restatable}{defi}{dff}{\normalfont (FNE)} \label{def.FNE} A point $(\boldsymbol{\theta}^*,\boldsymbol{\alpha}^*)\in \Theta\times \mathcal{A}$ is a \textit{first-order Nash equilibrium} (FNE) of the game~\eqref{eq: game1-cons} if \begin{align*} &f'_{\boldsymbol{\theta}}(\boldsymbol{\theta}^*, \boldsymbol{\alpha}^*; \boldsymbol{\theta} - \boldsymbol{\theta}^*)\geq 0 \quad \forall \boldsymbol{\theta} \in \Theta, \\ & f'_{\boldsymbol{\alpha}}(\boldsymbol{\theta}^*, \boldsymbol{\alpha}^*; \boldsymbol{\alpha} - \boldsymbol{\alpha}^*)\leq 0 \quad \forall \boldsymbol{\alpha} \in \mathcal{A}; \end{align*} or equivalently if \small \begin{align}\small &\langle \nabla_{\boldsymbol{\theta}} h(\boldsymbol{\theta}^*, \boldsymbol{\alpha}^*), \boldsymbol{\theta} -\boldsymbol{\theta}^* \rangle + q(\boldsymbol{\theta}) - q(\boldsymbol{\theta}^*)+ \frac{M}{2}||\boldsymbol{\theta} - \boldsymbol{\theta}^*||^2 \geq 0,\nonumber\\ &\langle \nabla_{\boldsymbol{\alpha}} h(\boldsymbol{\theta}^*, \boldsymbol{\alpha}^*), \boldsymbol{\alpha} -\boldsymbol{\alpha}^* \rangle - p(\boldsymbol{\alpha}) + p(\boldsymbol{\alpha}^*) - \frac{M}{2} ||\boldsymbol{\alpha} - \boldsymbol{\alpha}^*||^2 \leq 0, \nonumber \end{align}\normalsize for all $ \boldsymbol{\theta} \in \Theta$ and $\boldsymbol{\alpha} \in \mathcal{A}$, and all $M>0$. \end{restatable} \normalsize This definition implies that, at a first-order Nash equilibrium point, each player satisfies the first-order necessary optimality condition of its own objective when the other player's strategy is fixed. This is also equivalent to saying that we have found a solution of the corresponding variational inequality \cite{harker1990finite}. Moreover, in the unconstrained smooth case where $\Theta = \mathbb{R}^{d_\theta}$, $\mathcal{A} =\mathbb{R}^{d_\alpha}$, and $p\equiv q \equiv 0$, this definition reduces to the standard widely used conditions~$\nabla_{\boldsymbol{\alpha}} h(\boldsymbol{\theta}^*,\boldsymbol{\alpha}^*) = 0$ and $\nabla_{\boldsymbol{\theta}} h(\boldsymbol{\theta}^*,\boldsymbol{\alpha}^*) = 0$. \vspace{0.2cm} In practice, we use iterative methods for solving such games, and it is natural to evaluate the performance of the algorithms based on their efficiency in finding an approximate-FNE point.
To this end, let us define the concept of an approximate-FNE point: \begin{restatable}{defi}{dfff}(Approximate-FNE)\label{def:cons-approx-stationarity} A point $(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}})$ is said to be an $\epsilon$--first-order Nash equilibrium ($\epsilon$--FNE) of the game~\eqref{eq: game1-cons} if \[ {\cal X}(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}}) \leq \epsilon^2 \quad \mbox{and} \quad {\cal Y}(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}}) \leq \epsilon^2, \] where \begin{align*} {\cal X}(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}}) \triangleq - 2 L_{11}\min_{ \boldsymbol{\theta} \in \Theta}\,\,\Big[ & \langle \nabla_{\boldsymbol{\theta}} h(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}}), \boldsymbol{\theta} -\bar{\boldsymbol{\theta}} \rangle + q(\boldsymbol{\theta}) - q(\bar{\boldsymbol{\theta}}) + \frac{L_{11}}{2} ||\boldsymbol{\theta} - \bar{\boldsymbol{\theta}}||^2 \Big], \end{align*}\label{eq:X_k2} and \begin{align*}\label{eq:Y_k2} {\cal Y}(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}}) \triangleq 2 L_{22}\max_{\boldsymbol{\alpha} \in \mathcal{A}}\,\, \Big[ & \langle \nabla_{\boldsymbol{\alpha}} h(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}}), \boldsymbol{\alpha} -\bar{\boldsymbol{\alpha}} \rangle - p(\boldsymbol{\alpha}) + p(\bar{\boldsymbol{\alpha}}) - \frac{L_{22}}{2} ||\boldsymbol{\alpha} - \bar{\boldsymbol{\alpha}}||^2 \Big]. \end{align*} \end{restatable} \vspace{0.2cm} In the unconstrained and smooth scenario where $\Theta = \mathbb{R}^{d_\theta}$, $\mathcal{A} =\mathbb{R}^{d_\alpha}$, and $p\equiv q \equiv 0$, the above $\epsilon$-FNE definition reduces to $\|{\nabla}_{\boldsymbol{\alpha}} h(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}})\| \leq \epsilon$ and $\|{\nabla}_{\boldsymbol{\theta}} h(\bar{\boldsymbol{\theta}}, \bar{\boldsymbol{\alpha}})\| \leq \epsilon$. \vspace{0.2cm} \begin{rmk}\label{optimilaity_condtion} The above definition of $\epsilon$--FNE is stronger than the $\epsilon$-stationarity concept defined based on the proximal gradient norm in the literature (see, e.g.,~\cite{lin2019gradient}). The details of this remark are discussed in the Appendix. \end{rmk} \begin{rmk} \label{remark:1}(Rephrased from Proposition 4.2 in \cite{pang2016unified}) For the min-max game~\eqref{eq: game1-cons}, under assumptions \ref{assumption:Concavity}, \ref{assumption:objective} and \ref{assumption: LipSmooth-uncons}, an FNE always exists. Moreover, it is easy to show that ${\cal X}(\cdot,\cdot)$ and ${\cal Y}(\cdot,\cdot)$ are continuous in their arguments. Hence, an $\epsilon$--FNE exists for every $\epsilon \geq 0$. \end{rmk} \vspace{0.2cm} In what follows, we consider two different scenarios for finding $\epsilon$-FNE points. In the first scenario, we assume that~$h(\boldsymbol{\theta},\boldsymbol{\alpha})$ is strongly concave in $\boldsymbol{\alpha}$ for every given $\boldsymbol{\theta}$ and develop a first-order algorithm for finding an $\epsilon$-FNE. Then, in the second scenario, we extend our result to the case where $h(\boldsymbol{\theta},\boldsymbol{\alpha})$ is concave (but not strongly concave) in $\boldsymbol{\alpha}$ for every given $\boldsymbol{\theta}$. \section{Non-Convex Strongly-Concave Games} In this section, we study the zero-sum game \eqref{eq: game1-cons} in the case that the function $h(\boldsymbol{\theta},\boldsymbol{\alpha})$ is $\sigma$-strongly concave in $\boldsymbol{\alpha}$ for every given value of~$\boldsymbol{\theta}$.
To understand the idea behind the algorithm, let us define the auxiliary function \[ g(\boldsymbol{\theta}) \triangleq \max\limits_{\boldsymbol{\alpha} \in \mathcal{A}} h(\boldsymbol{\theta}, \boldsymbol{\alpha}) - p(\boldsymbol{\alpha}). \] A ``conceptual'' algorithm for solving the min-max optimization problem~\eqref{eq: game1-cons} is to minimize the function $g(\boldsymbol{\theta}) + q(\boldsymbol{\theta})$ using iterative descent procedures. First, notice that, based on the following lemma, the strong concavity assumption implies the differentiability of~$g(\boldsymbol{\theta})$. \begin{restatable}{lem}{lmggsmoot}\label{lm:gg-smooth} Let $g(\boldsymbol{\theta}) = \max\limits_{\boldsymbol{\alpha} \in \mathcal{A}} h(\boldsymbol{\theta}, \boldsymbol{\alpha}) - p(\boldsymbol{\alpha})$ in which the function $h(\boldsymbol{\theta}, \boldsymbol{\alpha})$ is $\sigma$-strongly concave in $\boldsymbol{\alpha}$ for any given~$\boldsymbol{\theta}$. Then, under Assumption~\ref{assumption: LipSmooth-uncons}, the function $g(\boldsymbol{\theta})$ is differentiable. Moreover, its gradient is $L_g$-Lipschitz continuous, i.e., \[ \|\nabla g(\boldsymbol{\theta}_1) - \nabla g(\boldsymbol{\theta}_2)\| \leq L_g\| \boldsymbol{\theta}_1 - \boldsymbol{\theta}_2 \|, \] where $L_g = L_{11} + \dfrac{L_{12}^2}{\sigma}$. \end{restatable} The smoothness of the function~$g(\boldsymbol{\theta})$ suggests the natural multi-step proximal method in Algorithm~\ref{alg: alg_grad} for solving the min-max optimization problem~\eqref{eq: game1-cons}. This algorithm performs two major steps in each iteration: the first major step, which is marked as ``Accelerated Proximal Gradient Ascent'', runs multiple iterations of the accelerated proximal gradient ascent to estimate the solution of the inner maximization problem. In other words, this step finds a point $\boldsymbol{\alpha}_{t+1}$ such that \vspace{-.2cm} \[ \boldsymbol{\alpha}_{t+1}\approx \arg\max_{\boldsymbol{\alpha} \in \mathcal{A}} f(\boldsymbol{\theta}_t,\boldsymbol{\alpha}). \] \normalsize The output of this step will then be used to compute the approximate proximal gradient of the function $g(\boldsymbol{\theta})$ in the second step, based on the classical Danskin theorem~\cite{bernhard1995theorem, danskin1967theory}, which is restated below: \vspace{0.2cm} \begin{thm}[Rephrased from \cite{bernhard1995theorem, danskin1967theory}] \label{thm:danskin} Let $V\subset \mathbb{R}^m$ be a compact set and $J(\mathbf{u},\bm{\nu}): \mathbb{R}^n \times V \mapsto \mathbb{R}$ be differentiable with respect to $\textbf{u}$. Let $\bar{J}(\textbf{u}) = \max\limits_{\bm{\nu} \in V}\; J(\textbf{u},\bm{\nu})$ and assume $\hat{V}(\textbf{u}) = \{\bm{\nu} \in V\;|\;J(\textbf{u},\bm{\nu}) = \bar{J}(\textbf{u})\}$ is a singleton for any given $\mathbf{u}$. Then, $\bar{J}(\textbf{u})$ is differentiable and $\nabla_{\textbf{u}} \bar{J}(\textbf{u}) = \nabla_{\textbf{u}} J(\textbf{u},\hat{\bm{\nu}})$ with $\hat{\bm{\nu}} \in \hat{V}(\mathbf{u})$. \end{thm} \vspace{0.2cm} According to the above theorem, the proximal gradient descent update rule on $g(\boldsymbol{\theta})$ will be given by \begin{align*} \boldsymbol{\theta}_{t+1} = \arg\min\limits_{\boldsymbol{\theta} \in \Theta} \Big[& q(\boldsymbol{\theta})+ \langle \nabla_{\boldsymbol{\theta}}h(\boldsymbol{\theta}_{t}, \boldsymbol{\alpha}_{t+1}), \boldsymbol{\theta} - \boldsymbol{\theta}_{t}\rangle + \frac{L_g}{2} \|\boldsymbol{\theta} - \boldsymbol{\theta}_{t}\|^2\Big].
\end{align*} The two main proximal gradient update operators used in Algorithm~\ref{alg: alg_grad} are defined as \[ \rho_{\boldsymbol{\alpha}}(\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\alpha}}, \gamma_1) = \arg\max\limits_{\boldsymbol{\alpha} \in \mathcal{A}} \;\; \langle \nabla_{\boldsymbol{\alpha}} h(\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\alpha}}), \boldsymbol{\alpha} -\tilde{\boldsymbol{\alpha}}\rangle -\frac{\gamma_1}{2}\|\boldsymbol{\alpha} -\tilde{\boldsymbol{\alpha}}\|^2 -p(\boldsymbol{\alpha}) \] and \[\rho_{\boldsymbol{\theta}}(\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\alpha}}, \gamma_2) = \arg\min\limits_{\boldsymbol{\theta} \in \Theta} \;\; \langle \nabla_{\boldsymbol{\theta}}h(\tilde{\boldsymbol{\theta}}, \tilde{\boldsymbol{\alpha}}), \boldsymbol{\theta} - \tilde{\boldsymbol{\theta}}\rangle + \frac{\gamma_2}{2} \|\boldsymbol{\theta} - \tilde{\boldsymbol{\theta}}\|^2 + q(\boldsymbol{\theta}). \] \normalsize \vspace{0.2cm} The following theorem establishes the rate of convergence of Algorithm~\ref{alg: alg_grad} to an $\epsilon$-FNE. A more detailed statement of the theorem (including the constants) is presented in the Appendix. \vspace{0.2cm} \begin{thm}[Informal Statement]\label{thm:main} Consider the min-max zero-sum game \begin{align*} \min_{\boldsymbol{\theta} \in \Theta}\; \max_{\boldsymbol{\alpha} \in \mathcal{A}} \;\;\bigg(f(\boldsymbol{\theta},\boldsymbol{\alpha}) = h(\boldsymbol{\theta},\boldsymbol{\alpha}) -p(\boldsymbol{\alpha}) + q(\boldsymbol{\theta})\bigg), \end{align*} where the function $h(\boldsymbol{\theta},\boldsymbol{\alpha})$ is $\sigma$-strongly concave in $\boldsymbol{\alpha}$ for any given $\boldsymbol{\theta}$. In Algorithm \ref{alg: alg_grad}, if we choose $\eta_1 = \frac{1}{L_{22}}, \eta_2 = \frac{1}{L_g}$, $N = \sqrt{8L_{22}/\sigma}-1$; and $K$ and $T$ large enough such that \[T \geq N_T(\epsilon) \triangleq {\cal O}(\epsilon^{-2}) \quad {\rm and} \quad K \geq N_K(\epsilon) \triangleq {\cal O}\big(\log(\epsilon^{-1})\big),\] then there exists an iterate $t\in \{0,\cdots, T-1\}$ such that $(\boldsymbol{\theta}_t,\boldsymbol{\alpha}_{t+1})$ is an $\epsilon$--FNE of \eqref{eq: game1-cons}. \end{thm} \vspace{0.2cm} \begin{center} \begin{minipage}{0.85\textwidth} \begin{algorithm}[H] \caption{Multi-step Accelerated Proximal Gradient Descent-Ascent} \label{alg: alg_grad} \begin{algorithmic}[1] \State \textbf{Input}: $K$, $T$, $N$, $\eta_1$, $\eta_2$, $\boldsymbol{\alpha}_0 \in \mathcal{A}$ and $\boldsymbol{\theta}_0 \in \Theta$.
\For{$t=0, \cdots, T-1$} \vspace{.1cm} \For{$k=0, \cdots, \lfloor K/N \rfloor$} \tikzmark{top} \State Set $\beta_1 = 1$ and ${\mathbf{x}}_0 = \boldsymbol{\alpha}_t$ \If {$ k = 0$} \State ${\mathbf{y}}_1 = {\mathbf{x}}_0$ \Else \State ${\mathbf{y}}_1 = {\mathbf{x}}_N $ \EndIf \vspace{.1cm} \For{$j= 1, 2, \ldots, N$} \State Set ${\mathbf{x}}_{j} = \rho_{\boldsymbol{\alpha}}(\boldsymbol{\theta}_{t},{\mathbf{y}}_j,\eta_1)$ \State Set $\beta_{j+1} =\dfrac{1 + \sqrt{1+4\beta_j^2}}{2}$ \State ${\mathbf{y}}_{j+1} = {\mathbf{x}}_j + \Big(\dfrac{\beta_j -1}{\beta_{j+1}} \Big)({\mathbf{x}}_j - {\mathbf{x}}_{j-1})$ \tikzmark{right} \EndFor \EndFor \tikzmark{bottom} \State $\boldsymbol{\alpha}_{t+1} = {\mathbf{x}}_N$ \State $\boldsymbol{\theta}_{t+1} = \rho_{\boldsymbol{\theta}}(\boldsymbol{\theta}_{t},\boldsymbol{\alpha}_{t+1},\eta_2)$\label{alg1:update-theta} \EndFor \end{algorithmic} \AddNote{top}{bottom}{right}{{\small Accelerated Proximal Gradient Ascent \cite{nesterov1998introductory, beck2009fast}}} \end{algorithm} \end{minipage} \end{center} \vspace{0.2cm} \begin{restatable}{cor}{Corollary} Based on Theorem~\ref{thm:main}, to find an $\epsilon$-FNE of the game~\eqref{eq: game1-cons}, Algorithm~\ref{alg: alg_grad} requires ${\cal O}(\epsilon^{-2}\log(\epsilon^{-1}))$ gradient evaluations of the objective function. \end{restatable} \section{Non-Convex Concave Games} In this section, we consider the min-max problem~$\eqref{eq: game1-cons}$ under the assumption that $h(\boldsymbol{\theta},\boldsymbol{\alpha})$ is concave (but not strongly concave) in $\boldsymbol{\alpha}$ for any given value of~$\boldsymbol{\theta}$. In this case, the direct extension of Algorithm~\ref{alg: alg_grad} will not work since the function $g(\boldsymbol{\theta})$ might be non-differentiable. To overcome this issue, we start by making the function~$f(\boldsymbol{\theta},\boldsymbol{\alpha})$ strongly concave by adding a ``negligible'' regularization. More specifically, we define \begin{align} f_{\lambda}(\boldsymbol{\theta},\boldsymbol{\alpha}) = f(\boldsymbol{\theta},\boldsymbol{\alpha}) -\frac{\lambda}{2} \|\boldsymbol{\alpha} - \hat{\boldsymbol{\alpha}}\|^2, \end{align} for some $\hat{\boldsymbol{\alpha}} \in \mathcal{A}$. We then apply Algorithm~\ref{alg: alg_grad} to the modified non-convex-strongly-concave game \begin{equation}\label{eq:game_reg} \min_{\boldsymbol{\theta}\in\Theta}\max_{\boldsymbol{\alpha} \in \mathcal{A}} \;f_{\lambda}(\boldsymbol{\theta},\boldsymbol{\alpha}). \end{equation} It can be shown that by choosing $\lambda = \frac{\epsilon}{2\sqrt{2}R}$, when we apply Algorithm~\ref{alg: alg_grad} to the modified game~\eqref{eq:game_reg}, we obtain an $\epsilon$-FNE of the original problem~\eqref{eq: game1-cons}. More specifically, with a proper choice of parameters, the following theorem establishes that the proposed method converges to an $\epsilon$-FNE point of the original problem. \begin{restatable}{thm}{Thmmm}\label{thm: FW} [Informal Statement] Set $\eta_1 = 1/(L_{22} + \lambda)$, $\eta_2 = 1/(L_{11} + L_{12}^2/\lambda)$, $\lambda = \dfrac{\epsilon}{2R}$, $N = \sqrt{8L_{22}/\lambda}-1$, and apply Algorithm~\ref{alg: alg_grad} to the regularized min-max problem~\eqref{eq:game_reg}. Choose $K,T$ large enough such that $ T \geq N_T(\epsilon) \triangleq {\cal O} (\epsilon^{-3}), \; {\rm and} \; K \geq N_K(\epsilon) \triangleq {\cal O}\big(\epsilon^{-1/2} \log(\epsilon^{-1})\big).
$ Then, there exists $t \in \{0, \ldots, T-1\}$ in Algorithm~\ref{alg: alg_grad} such that $(\boldsymbol{\theta}_t, \boldsymbol{\alpha}_{t+1})$ is an $\epsilon$-FNE of the original problem~\eqref{eq: game1-cons}. \end{restatable} \begin{restatable}{cor}{Corollaryy} Based on Theorem~\ref{thm: FW}, Algorithm~\ref{alg: alg_grad} requires ${\cal O}(\epsilon^{-3.5}\log(\epsilon^{-1}))$ gradient evaluations in order to find an $\epsilon$-FNE of the game~\eqref{eq: game1-cons}. \end{restatable} \section{Numerical Experiments} In this section, we evaluate the performance of the proposed algorithm for the problem of attacking the LASSO estimator. In other words, our goal is to find a small perturbation of the observation matrix that worsens the performance of the LASSO estimator in the training set. This attack problem can be formulated as \begin{align}\label{eq:experiment} \max\limits_{\textbf{A} \in \mathcal{B}(\hat{\textbf{A}}, \Delta)} \min\limits_{{\mathbf{x}}} \|\textbf{A}{\mathbf{x}} - \textbf{b}\|_2^2 + \xi \|{\mathbf{x}}\|_1, \end{align} where $\mathcal{B}(\hat{\textbf{A}}, \Delta) = \{\textbf{A}\;|\;||\textbf{A} - \hat{\textbf{A}}||^2_{F} \leq \Delta\}$ and the matrix $\textbf{A} \in \mathbb{R}^{m \times n}$. We set $m = 100$, $n = 500$, $\xi = 1$ and $\Delta = 10^{-1}$. In our experiments, first we generate a ``ground-truth'' vector~${\mathbf{x}}^*$ with sparsity level $s = 25$ in which the locations of the non-zero elements are chosen randomly and their values are sampled from a standard Gaussian distribution. Then, we generate the elements of matrix $\mathbf{A}$ using a standard Gaussian distribution. Finally, we set $\mathbf{b} = \mathbf{A} \mathbf{x}^* + \mathbf{e}$, where $\mathbf{e} \sim N(\mathbf{0}, 0.001\mathbf{I})$. We compare the performance of the proposed algorithm with the popular subgradient descent-ascent and proximal gradient descent-ascent algorithms. In the subgradient descent-ascent algorithm, at each iteration, we take one step of subgradient descent with respect to~${\mathbf{x}}$ followed by one step of subgradient ascent with respect to~$\mathbf{A}$. Similarly, each iteration of the proximal gradient descent-ascent algorithm consists of one step of proximal gradient descent with respect to~${\mathbf{x}}$ and one step of proximal gradient ascent with respect to~$\mathbf{A}$. \vspace{0.2cm} To have a fair comparison, all of the studied algorithms have been initialized at the same random points in Fig.~\ref{fig:plot}. \begin{figure}[H] \centering \includegraphics[width=.5\textwidth]{1.png}\includegraphics[width=.5\textwidth]{2.png}\quad \caption{ \small (left): Convergence behavior of different algorithms in terms of the objective value. The objective value at iteration $t$ is defined as $g(\mathbf{A}_t)\triangleq \min_{{\mathbf{x}}} \|\mathbf{A}_t {\mathbf{x}} - \mathbf{b}\|_2^2 + \xi \|{\mathbf{x}}\|_1$, (right): Convergence behavior of different algorithms in terms of the stationarity measures~${\cal X}(\mathbf{A}_{t},{\mathbf{x}}_{t+1})$, ${\cal Y}(\mathbf{A}_{t},{\mathbf{x}}_{t+1})$ (logarithmic scale). The list of the algorithms used in the comparison is as follows: Proposed Algorithm (PA), Subgradient Descent-Ascent (SDA), and Proximal Descent-Ascent algorithm (PDA).} \label{fig:plot} \end{figure} The above figure might not be a fair comparison since each step of the proposed algorithm is computationally more expensive than one step of the two benchmark methods.
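
To make this per-step cost difference concrete, the following Python sketch (not the authors' code; step sizes, the inner iteration count, and the factor-of-two gradient scaling are hypothetical choices) spells out the updates being compared on problem~\eqref{eq:experiment}. The proximal map of $\xi\|\cdot\|_1$ is soft-thresholding and the projection onto $\mathcal{B}(\hat{\mathbf{A}},\Delta)$ is a radial rescaling, so one PDA iteration is cheap, whereas each outer iteration of the proposed algorithm runs an accelerated inner loop in ${\mathbf{x}}$ before its single ascent step in $\mathbf{A}$.

\begin{verbatim}
import numpy as np

# Sketch of the per-iteration updates being compared (illustrative only).

def soft_threshold(z, t):                  # prox of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_ball(A, A_hat, Delta):         # projection onto ||A - A_hat||_F^2 <= Delta
    D = A - A_hat
    r = np.linalg.norm(D)
    return A_hat + D * min(1.0, np.sqrt(Delta) / r) if r > 0 else A.copy()

def pda_step(A, x, b, A_hat, xi, Delta, eta_x, eta_A):
    # Proximal gradient descent-ascent: one prox step in x, one ascent step in A.
    x = soft_threshold(x - eta_x * 2 * A.T @ (A @ x - b), eta_x * xi)
    A = project_ball(A + eta_A * 2 * np.outer(A @ x - b, x), A_hat, Delta)
    return A, x

def pa_step(A, x, b, A_hat, xi, Delta, eta_A, inner=50):
    # Proposed multi-step method: an accelerated (FISTA-style) inner loop first
    # drives the x-minimization close to its solution, then a single projected
    # ascent step is taken in A, so each outer iteration is more expensive but
    # approximately follows the envelope g(A).
    L = 2 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the x-gradient
    y, x_prev, beta = x.copy(), x.copy(), 1.0
    for _ in range(inner):
        x_new = soft_threshold(y - 2 * A.T @ (A @ y - b) / L, xi / L)
        beta_new = (1.0 + np.sqrt(1.0 + 4.0 * beta ** 2)) / 2.0
        y = x_new + ((beta - 1.0) / beta_new) * (x_new - x_prev)
        x_prev, beta = x_new, beta_new
    A = project_ball(A + eta_A * 2 * np.outer(A @ x_prev - b, x_prev), A_hat, Delta)
    return A, x_prev
\end{verbatim}

This cost asymmetry is exactly why wall-clock time, rather than the iteration count of Fig.~\ref{fig:plot}, is the fairer metric reported next.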
To have a better comparison, we evaluate the performance of the algorithms in terms of the required time for convergence. Table~\ref{table:1} summarizes the average time required by the different algorithms to find a point $(\bar{\mathbf{A}},\bar{{\mathbf{x}}})$ satisfying ${\cal X}(\bar{\mathbf{A}},\bar{{\mathbf{x}}})\leq 0.1$ and ${\cal Y}(\bar{\mathbf{A}},\bar{{\mathbf{x}}})\leq 0.1$. The average is taken over 100 different experiments. As can be seen in the table, the proposed method on average converges an order of magnitude faster than the other two algorithms. \begin{table}[H] \begin{center} \begin{tabular}{ |c|c|c|c| } \hline Algorithm & PA & SDA & PDA \\ \hline Average time (seconds) & 0.0268 & 3.5016 & 0.5603 \\ \hline Standard deviation (seconds) & 0.0538 & 7.0137 & 1.1339 \\ \hline \end{tabular} \caption{\footnotesize Average computational time of different algorithms. } \label{table:1} \end{center} \end{table} \vspace{-0.5cm} \section*{Acknowledgement} The authors would like to thank Shaddin Dughmi and Dmitrii M. Ostrovskii for their insightful comments that helped to improve the work. \bibliographystyle{IEEEbib}
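
As a closing illustration, the stopping rule of Table~\ref{table:1} can be evaluated directly from Definition~\ref{def:cons-approx-stationarity}: both inner problems defining ${\cal X}$ and ${\cal Y}$ are exactly solvable here, by soft-thresholding for the ${\mathbf{x}}$-player (whose nonsmooth term is $\xi\|\cdot\|_1$) and by projection onto the Frobenius ball for the $\mathbf{A}$-player (whose $p$ is zero). The Python sketch below is our reading of that computation, not the paper's code; the smoothness constants $L_{11}$, $L_{22}$ are replaced by simple local surrogates, and the argument ordering follows the caption of Fig.~\ref{fig:plot}.

\begin{verbatim}
import numpy as np

# Sketch of the stationarity measures X and Y of Definition 3, specialized to
# the LASSO attack: min player x with q = xi * ||.||_1, max player A with
# p = 0 and the Frobenius-ball constraint.  L11, L22 are local surrogates.

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_ball(A, A_hat, Delta):
    D = A - A_hat
    r = np.linalg.norm(D)
    return A_hat + D * min(1.0, np.sqrt(Delta) / r) if r > 0 else A.copy()

def stationarity_measures(A, x, b, A_hat, xi, Delta):
    L11 = 2 * np.linalg.norm(A, 2) ** 2          # smoothness in x (surrogate)
    L22 = 2 * x @ x + 1e-12                      # smoothness in A (surrogate)
    res = A @ x - b
    gx, gA = 2 * A.T @ res, 2 * np.outer(res, x)
    # X: the inner quadratic-plus-l1 minimization is solved by soft-thresholding.
    xp = soft_threshold(x - gx / L11, xi / L11)
    inner_x = (gx @ (xp - x) + xi * (np.abs(xp).sum() - np.abs(x).sum())
               + 0.5 * L11 * (xp - x) @ (xp - x))
    # Y: the inner quadratic maximization over the ball is solved by projection.
    Ap = project_ball(A + gA / L22, A_hat, Delta)
    inner_A = np.sum(gA * (Ap - A)) - 0.5 * L22 * np.linalg.norm(Ap - A) ** 2
    return -2 * L11 * inner_x, 2 * L22 * inner_A
\end{verbatim}

A point $(\bar{\mathbf{A}},\bar{{\mathbf{x}}})$ is then accepted once both returned values fall below the threshold $0.1$ used in Table~\ref{table:1}.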
{ "timestamp": "2020-03-19T01:07:08", "yymm": "2003", "arxiv_id": "2003.08093", "language": "en", "url": "https://arxiv.org/abs/2003.08093", "abstract": "Min-max saddle point games appear in a wide range of applications in machine leaning and signal processing. Despite their wide applicability, theoretical studies are mostly limited to the special convex-concave structure. While some recent works generalized these results to special smooth non-convex cases, our understanding of non-smooth scenarios is still limited. In this work, we study special form of non-smooth min-max games when the objective function is (strongly) convex with respect to one of the player's decision variable. We show that a simple multi-step proximal gradient descent-ascent algorithm converges to $\\epsilon$-first-order Nash equilibrium of the min-max game with the number of gradient evaluations being polynomial in $1/\\epsilon$. We will also show that our notion of stationarity is stronger than existing ones in the literature. Finally, we evaluate the performance of the proposed algorithm through adversarial attack on a LASSO estimator.", "subjects": "Optimization and Control (math.OC); Computer Science and Game Theory (cs.GT); Machine Learning (cs.LG); Machine Learning (stat.ML)", "title": "Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378963, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.7079584977063528 }
https://arxiv.org/abs/0708.4275
Continuation of solutions of coupled dynamical systems
Recently, the synchronization of coupled dynamical systems has been widely studied. Synchronization is referred to as a process wherein two (or many) dynamical systems are adjusted to a common behavior as time goes to infinity, due to coupling or forcing. Therefore, before discussing synchronization, a basic problem on continuation of the solution must be solved: For given initial conditions, can the solution of coupled dynamical systems be extended to the infinite interval $[0,+\infty)$? In this paper, we propose a general model of coupled dynamical systems, which includes previously studied systems as special cases, and prove that under the assumption of QUAD, the solution of the general model exists on $[0,+\infty)$.
\section{Introduction} In past years, collective behaviors of coupled dynamical systems have been widely studied. In particular, synchronization in networks of coupled dynamical systems, as one of the simplest and most striking behaviors, has attracted increasing attention in the mathematical and physical literature because of its potential applications in various fields, such as communication \cite{VanWiggeren1998}, seismology \cite{Vieira1999}, and neural networks \cite{Hoppensteadt2000}. The word ``synchronization'' comes from a Greek word meaning ``to share time''. Today, in science and technology, it has come to be considered as ``time coherence of different processes''. Since the first observation of the synchronization phenomenon was made by Huygens \cite{Huygens1672} in the 17th century, many different types of synchronization phenomena have been found, e.g., phase synchronization, lag synchronization, full synchronization, partial synchronization, almost synchronization, and so on. In mathematics, synchronization can be defined as a process wherein two (or many) dynamical systems adjust a given property of their motion to a common behavior as time goes to infinity, due to coupling or forcing (see \cite{Boccaletti2002}). For example, full synchronization requires that the difference between any two nodes converges to zero as time goes to infinity. Therefore, before any asymptotic behavior can even be discussed, it is natural to raise the following question: For given initial conditions, can the solution be extended to the infinite interval $[0,+\infty)$? For example, in the paper \cite{Lu2006-2}, the following coupled system with delay is considered: \begin{eqnarray} \dot{x}^{i}(t)=f(x^{i}(t))+c\sum\limits_{j=1,j\ne i}^{m}a_{ij}\Gamma\bigg[x^{j}(t-\tau)-x^{i}(t)\bigg],\nonumber\\ i=1,\ldots,m, \label{LCODECD1} \end{eqnarray} where $x^{i}(t)=[x^{i}_{1}(t),\cdots,x^{i}_{n}(t)]^{\top}\in \mathbb R^{n}$ denotes the $n$-dimensional state variable of the $i$-th node, $i=1,\ldots,m$; $f:\mathbb R^{n}\rightarrow \mathbb R^{n}$ is a differentiable function of the intrinsic system; $c$ is the coupling strength; $\Gamma=\mathrm{diag}\{\gamma_{1},\cdots,\gamma_{n}\}$ is the inner connection diagonal matrix with $\gamma_{i}\ge 0$, $i=1,\ldots,n$; $a_{ij}\ge 0$, for all $i\ne j$, is the coupling coefficient from node $j$ to node $i$; and $\tau\ge 0$ is the coupling delay. It is assumed that $\sum_{j=1,j\ne i}^{m}a_{ij}=1$, $a_{ii}=-1$, for all $i=1,\ldots,m$. The following result was proved. \begin{myprop} Suppose that there are a positive definite diagonal matrix $P=\mathrm{diag}\{p_{1},\cdots,p_{n}\}$ and a diagonal matrix $D=\mathrm{diag}\{d_{1},\cdots,d_{n}\}$, such that \begin{eqnarray*} (x-y)^{\top}P[f(x)-f(y)-Dx+Dy]\le -\alpha (x-y)^{\top}(x-y) \end{eqnarray*} holds for some $\alpha>0$, any $x,y\in \mathbb R^{n}$. Then, for sufficiently large coupling strength $c$ and sufficiently small delay $\tau$, the coupled system (\ref{LCODECD1}) will be globally synchronized. \end{myprop} Here, a prerequisite for discussing synchronization is that the solutions $x^i(t)$, $i=1,\cdots,m$, can be extended to the infinite interval $[0,+\infty)$. However, in most papers on synchronization of coupled systems, such as \cite{Wu1995,Lu2004,Zhou2006,Lu2006-2} and others, it is always assumed that for each initial condition, the coupled system under consideration has a unique solution for all time $t\geq0$, without any theoretical justification.
In this short paper, we address this issue and propose a general model of coupled dynamical systems, which includes previously studied systems as special cases. We prove that under the assumption of QUAD (Assumption (A5) in Section \ref{sec:model}), the solution of the general model exists on $[0,+\infty)$. The QUAD assumption is often imposed when a quadratic Lyapunov function is used to investigate global synchronization (e.g., in Proposition 1, and in \cite{Wu1995,Belykh2006,Lu2006-1}). Therefore, the theorem proved in this paper provides a theoretical basis for the discussion of synchronization of such coupled systems. The rest of the paper is organized as follows: In Section \ref{sec:model}, we propose a general model of coupled dynamical systems. In Section \ref{sec:preliminaries}, we present some fundamental theorems of retarded functional differential equations with infinite delay, which are taken from \cite{Hino1991}. In Section \ref{sec:main-result}, the main theorem is proved. We conclude the paper in Section \ref{sec:conclusions}. \section{Model descriptions} \label{sec:model} In this section, we investigate the coupled dynamical systems described by the following retarded functional integro-differential equations: \begin{eqnarray} \dot{x}^i(t) & = & f(t,x^i(t))\nonumber\\ & & +\sum_{j=1}^{m}a_{ij}(t)\int_0^{\infty}g(t,x^j(t-\tau_{ij}(t)-s)) \mathrm{d}K_{ij}(s),\nonumber\\ & & i=1,2,\ldots,m, \label{general-model} \end{eqnarray} where ``$\dot{\;\;}$'' represents the right-hand derivative, $m$ is the network size, $x^i(t)\in \mathbb{R}^n$ is the state variable of the $i$-th node, $t\in[0,+\infty)$ is a continuous time, $f:[0,+\infty)\times\mathbb{R}^n\rightarrow\mathbb{R}^n$ describes the dynamical behavior of each uncoupled system, $A(t)=(a_{ij}(t))\in\mathbb{R}^{m\times m}$ is the time-varying coupling matrix, which is determined by the topological structure of the network, $g:[0,+\infty)\times\mathbb{R}^n\rightarrow\mathbb{R}^n$ is the output function, $\mathrm{d}K_{ij}(s)$ is a Lebesgue-Stieltjes measure for each $i,j=1,\ldots,m$, and satisfies $\int_0^{\infty}|\mathrm{d}K_{ij}(s)|<+\infty$.
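
To make the setting concrete before stating the standing assumptions, the following Python sketch (ours, not from the paper) integrates, with a plain Euler scheme, the simplest special case of \eqref{general-model}: $\mathrm{d}K_{ij}(s)$ the Dirac measure, a constant zero-row-sum coupling matrix, $g(t,u)=u$, and no delays, so that $\dot{x}^i = f(x^i) + c\sum_j a_{ij}x^j$. The Lorenz vector field is used purely as a familiar example of node dynamics, without any claim here that it satisfies the QUAD condition globally, and the coupling strength is a hypothetical choice.

\begin{verbatim}
import numpy as np

# Euler simulation of the special case of (2) with Dirac measures, constant
# zero-row-sum coupling, g(t,u) = u and no delays:
#     dx^i/dt = f(x^i) + c * sum_j a_ij x^j.
# The Lorenz system below is only an illustrative choice of node dynamics.

def f(u):
    x, y, z = u
    return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z])

m, c, dt, T = 5, 10.0, 1e-3, 20.0
A = -np.eye(m) + np.ones((m, m)) / m    # zero row sums, nonnegative off-diagonal
rng = np.random.default_rng(1)
X = rng.standard_normal((m, 3))         # random initial conditions, m nodes

for _ in range(int(T / dt)):            # explicit Euler step
    X = X + dt * (np.apply_along_axis(f, 1, X) + c * A @ X)

print("max |x^i(T)|   :", np.abs(X).max())                   # no blow-up observed
print("max node spread:", np.abs(X - X.mean(axis=0)).max())  # small if synchronized
\end{verbatim}

Of course, such a finite-horizon simulation proves nothing by itself; it is the main theorem of Section~\ref{sec:main-result} that guarantees the solution actually extends to $[0,+\infty)$.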
In addition, the following assumptions are necessary in the discussion of retarded systems: \begin{list} {{\upshape (A\arabic{mycounter})}\hfill} {\setlength{\topsep}{0ex} \setlength{\parskip}{0ex} \setlength{\itemsep}{0.2ex} \setlength{\parsep}{0ex} \setlength{\leftmargin}{6ex} \setlength{\labelwidth}{4ex} \setlength{\labelsep}{1ex} \setlength{\itemindent}{-1ex} \usecounter{mycounter}} \item $f(t,u)$ is continuous, and locally Lipschitz continuous with respect to $u$, i.e., in each compact subset $W$ of $[0,+\infty)\times\mathbb{R}^n$, there exists a constant $l(W)>0$ such that $\|f(t,u_1)-f(t,u_2)\|\leq l(W)\|u_1-u_2\|$ for any $(t,u_k)\in W$, $k=1,2$; \item $A(t)=(a_{ij}(t))_{i,j=1}^m$ is continuous; \item $g(t,u)$ is continuous, and there exists a continuous function $\kappa(t):[0,+\infty)\rightarrow\mathbb{R}^{+}$, such that $\|g(t,u_1)-g(t,u_2)\|\leq\kappa(t)\|u_1-u_2\|$ for any $t\in[0,+\infty)$ and $u_1,u_2\in\mathbb{R}^n$; \item For each $i,j=1,\ldots,m$, $\tau_{ij}(t)$ is continuous and nonnegative; \item There are a symmetric positive definite matrix $P$ and a diagonal matrix $\Delta=\mathrm{diag}\{\delta_1,\ldots,\delta_n\}$ such that $f(t,u)\in\mathrm{QUAD}(\Delta,P)$, where $\mathrm{QUAD}(\Delta,P)$ denotes a class of continuous functions $h(t,u):[0,+\infty)\times\mathbb{R}^n\rightarrow\mathbb{R}^n$ satisfying \begin{eqnarray} (u_1-u_2)^{\top}P\{[h(t,u_1)-h(t,u_2)]-\Delta[u_1-u_2]\}\nonumber\\ \leq-\epsilon(u_1-u_2)^{\top}(u_1-u_2) \label{QUAD} \end{eqnarray} for some $\epsilon>0$, all $u_1,u_2\in\mathbb{R}^n$ and $t\in[0,+\infty)$. \end{list} Here, $\|\cdot\|$ can be any norm in $\mathbb{R}^n$ (without loss of generality, in this paper we assume that $\|\cdot\|$ is the 2-norm). The model (\ref{general-model}) includes many previously studied systems as special cases. In the following, we present several examples. \begin{example} $\mathrm{d}K_{ij}(s)=\delta(s)$, where $\delta(s)$ is the Dirac-delta function, i.e., $\delta(0)=1$ and $\delta(s)=0$ for $s\neq0$; $A(t)=A$ is a constant matrix with zero-sum rows and nonnegative off-diagonal elements; $g(t,u)=\Gamma u$, where $\Gamma$ is a constant matrix; $\tau_{ij}(t)=0$ for each $i,j=1,\ldots,m$ and all $t\geq0$. Then, (\ref{general-model}) reduces to the system with undelayed, constant and linear coupling discussed in \cite{Wu1995,Lu2006-1}: \begin{eqnarray*} \dot{x}^i(t)=f(t,x^i(t))+\sum_{j=1}^{m}a_{ij}\Gamma x^j(t), \qquad i=1,2,\ldots,m. \end{eqnarray*} \end{example} \begin{example} $\mathrm{d}K_{ij}(s)=\delta(s)$, where $\delta(s)$ is the Dirac-delta function; $A(t)$ is a time-dependent matrix with zero-sum rows and nonnegative off-diagonal elements; $g(t,u)=\Gamma(t)u$, where $\Gamma(t)$ is a time-dependent matrix; $\tau_{ij}(t)=0$ for each $i,j=1,\ldots,m$ and all $t\geq0$. Then, (\ref{general-model}) reduces to the system with undelayed, time-varying and linear coupling discussed in \cite{Wu2003,Wu2005}: \begin{eqnarray*} \dot{x}^i(t)=f(t,x^i(t))+\sum_{j=1}^{m}a_{ij}(t)\Gamma(t)x^j(t),\\ \qquad i=1,2,\ldots,m. \end{eqnarray*} \end{example} \begin{example} $\mathrm{d}K_{ij}(s)=\delta(s)$, where $\delta(s)$ is the Dirac-delta function; $f(t,u)=f(u)$, i.e., $f$ is independent of $t$; $A(t)=A$ is a constant matrix with zero-sum rows and nonnegative off-diagonal elements, and satisfies $a_{ii}=-c$ for $i=1,\ldots,m$; $g(t,u)=\Gamma u$, where $\Gamma$ is a diagonal matrix with nonnegative diagonal elements; $\tau_{ij}(t)=\tau$ for $i\neq j$ and $\tau_{ii}(t)=0$ for $i=1,\ldots,m$.
Then, (\ref{general-model}) reduces to the system with delayed, constant and linear coupling discussed in \cite{Lu2006-2}: \begin{eqnarray*} \dot{x}^i(t)=f(x^i(t))+\sum_{j=1,j\neq i}^{m}a_{ij}\Gamma[x^j(t-\tau)-x^i(t)],\\ \qquad i=1,2,\ldots,m. \end{eqnarray*} \end{example} Besides Examples 1-3, the model (\ref{general-model}) includes coupled dynamical systems with nonlinear coupling, coupling with time-varying delays, coupling with distributed delays, etc. \section{Preliminaries} \label{sec:preliminaries} In this section, we present some fundamental results of retarded functional differential equations with infinite delay, which will be used in the sequel. First, we introduce some notation and definitions. Denote by $BC((-\infty,a],\mathbb{R}^N)$ the family of continuous functions $\phi$ mapping the interval $(-\infty,a]$ into $\mathbb{R}^N$ such that $\|\phi\|:=\sup\{\|\phi(\theta)\|:-\infty<\theta\leq a\}$ is finite. Also, denote $C^{\infty}((-\infty,a],\mathbb{R}^N)=\{\phi\in BC((-\infty,a],\mathbb{R}^N):\lim_{\theta\rightarrow-\infty}\phi(\theta)$ exists in $\mathbb{R}^N\}$. When $a=0$, we generally denote $C^{\infty}=C^{\infty}((-\infty,0],\mathbb{R}^N)$. For $\sigma\in\mathbb{R}$, $B\geq0$, $x\in C^{\infty}((-\infty,\sigma+B],\mathbb{R}^N)$, and $t\in[\sigma,\sigma+B]$, we define $x_t\in C^{\infty}$ as $x_t(\theta)=x(t+\theta)$, $\theta\in(-\infty,0]$. Assume $\Omega$ is an open subset of $\mathbb{R}\times C^{\infty}$, $h: \Omega\rightarrow\mathbb{R}^N$ is a given function, and ``$\dot{\;\;}$'' represents the right-hand derivative; then, we call \begin{eqnarray} \dot{x}(t)=h(t,x_t) \label{RFDE} \end{eqnarray} a retarded functional differential equation with infinite delay on $\Omega$. \begin{definition} A function $x$ is said to be a solution of Equation (\ref{RFDE}) on the interval $I=[\sigma,\sigma+B)$ if there are $\sigma\in\mathbb{R}$ and $B>0$ such that $x\in C^{\infty}((-\infty,\sigma+B),\mathbb{R}^N)$, $(t,x_t)\in\Omega$ and $x(t)$ satisfies Equation (\ref{RFDE}) for $t\in I$. For given $\sigma\in\mathbb{R}$, $\varphi\in C^{\infty}$, if a solution $x$ of Equation (\ref{RFDE}) is defined on an interval $[\sigma,\sigma+B)$, $B>0$, and satisfies $x_{\sigma}=\varphi$, then $x$ is called a solution of Equation (\ref{RFDE}) with initial value $\varphi$ at $\sigma$ or simply a solution through $(\sigma,\varphi)$. \end{definition} \begin{definition} Suppose $x(t)$ and $y(t)$ are solutions of Equation (\ref{RFDE}) with the same initial condition, defined respectively on the intervals $I$ and $J$ whose left end points are $\sigma$. If $I$ is properly contained in $J$ and $x(t)=y(t)$ for $t\in I$, we say $y$ is a continuation of $x$. If $x$ has no continuation, it is called a noncontinuable solution, or a maximal solution. \end{definition} \begin{definition} We say $h(t,\phi)$ is Lipschitz in $\phi$ in a compact subset $W$ of $\mathbb{R}\times C^{\infty}$ if there is a constant $l>0$ such that, for any $(t,\phi_k)\in W$, $k=1,2$, \begin{eqnarray} \|h(t,\phi_1)-h(t,\phi_2)\|\leq l\|\phi_1-\phi_2\|.\label{Lip} \end{eqnarray} \end{definition} The following three lemmas on existence, uniqueness, and continuation of the solution of Equation (\ref{RFDE}) are used in the proof of the main theorem in the next section; the details can be found in \cite{Hino1991}. \begin{lemma} (Existence) Suppose $\Omega$ is an open subset in $\mathbb{R}\times C^{\infty}$ and $h:\Omega\rightarrow\mathbb{R}^N$ is continuous.
Then, for any $(\sigma,\varphi)\in\Omega$, there exists a solution of Equation (\ref{RFDE}) through $(\sigma,\varphi)$. \end{lemma} \begin{lemma} (Uniqueness) Suppose $\Omega$ is an open subset in $\mathbb{R}\times C^{\infty}$ and $h(t,\phi)$ is Lipschitz in $\phi$ in each compact subset of $\Omega$. Then, for any $(\sigma,\varphi)\in\Omega$, there exists at most one noncontinuable solution of Equation (\ref{RFDE}) through $(\sigma,\varphi)$. \end{lemma} \begin{lemma} (Continuation) Suppose $\Omega$ is an open subset in $\mathbb{R}\times C^{\infty}$, $h:\Omega\rightarrow\mathbb{R}^N$ is continuous, and $x$ is a noncontinuable solution of Equation (\ref{RFDE}) defined on $I=[\sigma,\sigma+B)$. Then, for every compact subset $W$ of $\Omega$, there is a $t_W$ in $I$ such that $(t,x_t)\not\in W$ for all $t\in(t_W,\sigma+B)$. \end{lemma} \section{Main result} \label{sec:main-result} In this section, we prove the following theorem. \begin{theorem} Suppose that Assumptions (A1)-(A5) hold. Then, for any $\varphi(\theta)=[\varphi^1(\theta)^{\top},\ldots,\varphi^m(\theta)^{\top}]^{\top}$ with $\varphi^i(\theta)\in C^{\infty}((-\infty,0],\mathbb{R}^n)$, there is a unique noncontinuable solution $x(t)=[x^1(t)^{\top},\ldots,x^m(t)^{\top}]^{\top}$ of Equation (\ref{general-model}) through $(0,\varphi)$. Moreover, the interval of existence of the solution $x$ is $[0,+\infty)$. \end{theorem} {\it Proof :} By Assumptions (A1)-(A4) and Lemmas 1-2, it is clear that the integro-differential system (\ref{general-model}) has a unique noncontinuable solution $x(t)$. In the following, we will prove that the interval of existence of the solution $x(t)$ is $[0,+\infty)$. We argue by contradiction, and suppose that the interval of existence of the noncontinuable solution $x(t)$ is $[0,b)$, where $b$ is a positive constant. Firstly, by Assumptions (A1)-(A4), we can find positive constants $\alpha$, $\beta$ and $\gamma$ such that \begin{eqnarray*} &&\|g(t,u_1)-g(t,u_2)\|\leq\alpha\|u_1-u_2\| \end{eqnarray*} holds for all $u_1,u_2\in\mathbb{R}^n$ and $t\in[0,b)$, and \begin{eqnarray*} &&|a_{ij}(t)|\leq\beta,\\ &&\big\|f(t,x^i(0))+\sum_{j=1}^m a_{ij}(t)g(t,x^j(0))\int_0^{\infty}\mathrm{d}K_{ij}(s)\big\|\leq\gamma \end{eqnarray*} hold for all $i,j=1,\ldots,m$ and $t\in[0,b)$. Now, we will show how the QUAD assumption (Assumption (A5)) plays an important role in the proof. Since $f(t,u)\in\mathrm{QUAD}(\Delta,P)$ (Assumption (A5)), there is a constant $\delta>0$ such that for all $u_1,u_2\in\mathbb{R}^n$ and $t\geq0$, \begin{eqnarray*} (u_1-u_2)^{\top}P[f(t,u_1)-f(t,u_2)]\leq\delta(u_1-u_2)^{\top}(u_1-u_2); \end{eqnarray*} indeed, (\ref{QUAD}) gives $(u_1-u_2)^{\top}P[f(t,u_1)-f(t,u_2)]\leq(u_1-u_2)^{\top}P\Delta(u_1-u_2)-\epsilon(u_1-u_2)^{\top}(u_1-u_2)$, so that one may take $\delta=\|P\Delta\|+1$. Denote \begin{eqnarray*} \eta=\frac{2\delta+2\alpha\beta\big\|P\big\|K}{\lambda^P_{\min}} +\frac{2m\gamma\big\|P\big\|}{\sqrt{\lambda^P_{\min}}}>0\;, \end{eqnarray*} where $m$ is the number of nodes, $\|P\|$ is the 2-norm of the matrix $P$, $\lambda^P_{\min}$ is the minimum eigenvalue of the matrix $P$, and $K=\sum_{i=1}^m\sum_{j=1}^m\int_0^{\infty}|\mathrm{d}K_{ij}(s)|$. Since the matrix $P$ is symmetric positive definite, we can define a norm in $\mathbb{R}^{nm}$: \begin{eqnarray*} \|x(t)\|_P=\Big(\sum_{i=1}^m x^i(t)^{\top}Px^i(t)\Big)^{\frac{1}{2}}; \end{eqnarray*} and two nonnegative functions: \begin{eqnarray*} &&V(t)=\frac{1}{2}\|x(t)-x(0)\|_P^2,\\ &&M(t)=\max\Big[\frac{1}{2}\,,\; \sup_{-\infty<s\leq t}\frac{1}{2}\|x(s)-x(0)\|_P^2\Big],\\ &&t\in[0,b). \end{eqnarray*} Clearly, $V(t)\leq M(t)$. We claim that $M(t)\leq M(0)e^{\eta t}$ for all $t\in[0,b)$.
In fact, at any $t_0\in[0,b)$, there are two possible cases: \noindent{\bf Case 1:} $V(t_0)<M(t_0)$. In this case, by the continuity of $\|x(t)-x(0)\|_P^2$, $M(t)$ is non-increasing at $t_0$. \noindent{\bf Case 2:} $V(t_0)=M(t_0)$. Calculating the right-hand derivative of $V$ with respect to time along the trajectories of (\ref{general-model}), one has \allowdisplaybreaks[3] \begin{eqnarray*} & & \hspace{-2em}\dot{V}(t_0)= \sum_{i=1}^m(x^i(t_0)-x^i(0))^{\top}P\dot{x}^i(t_0)\allowdisplaybreaks[3]\\ & = & \sum_{i=1}^m(x^i(t_0)-x^i(0))^{\top}P\bigg[f(t_0,x^i(t_0))\\ & & +\sum_{j=1}^ma_{ij}(t_0) \int_0^{\infty}g(t_0,x^j(t_0-\tau_{ij}(t_0)-s))\mathrm{d}K_{ij}(s)\bigg]\allowdisplaybreaks[3]\\ & = & \sum_{i=1}^m(x^i(t_0)-x^i(0))^{\top}P\bigg\{\big[f(t_0,x^i(t_0))-f(t_0,x^i(0))\big]\\ & & +\sum_{j=1}^m a_{ij}(t_0)\int_0^{\infty}\big[g(t_0,x^j(t_0-\tau_{ij}(t_0)-s))\\ & & -g(t_0,x^j(0))\big]\mathrm{d}K_{ij}(s)+\Big[f(t_0,x^i(0))\\ & & +\sum_{j=1}^ma_{ij}(t_0)g(t_0,x^j(0))\int_0^{\infty}\mathrm{d}K_{ij}(s)\Big]\bigg\}\allowdisplaybreaks[3]\\ & \leq & \delta\sum_{i=1}^m(x^i(t_0)-x^i(0))^{\top}(x^i(t_0)-x^i(0))\\ & & +\sum_{i=1}^m\sum_{j=1}^m\big|a_{ij}(t_0)\big|\big\|P\big\|\big\|x^i(t_0)-x^i(0)\big\|\\ & & \times\int_0^{\infty}\big\|g(t_0,x^j(t_0-\tau_{ij}(t_0)-s))\\ & & -g(t_0,x^j(0))\big\|\big|\mathrm{d}K_{ij}(s)\big|\\ & & +\sum_{i=1}^m\big\|P\big\|\big\|x^i(t_0)-x^i(0)\big\|\big\|f(t_0,x^i(0))\\ & & +\sum_{j=1}^ma_{ij}(t_0)g(t_0,x^j(0))\int_0^{\infty}\mathrm{d}K_{ij}(s)\big\|\allowdisplaybreaks[3]\\ & \leq & \delta\big\|x(t_0)-x(0)\big\|^2 +\alpha\beta\big\|P\big\|\sum_{i=1}^m\sum_{j=1}^m\big\|x^i(t_0)-x^i(0)\big\|\\ & & \times\int_0^{\infty}\big\|x^j(t_0-\tau_{ij}(t_0)-s)-x^j(0)\big\|\big|\mathrm{d}K_{ij}(s)\big|\\ & & +\gamma\big\|P\big\|\sum_{i=1}^m\big\|x^i(t_0)-x^i(0)\big\|\allowdisplaybreaks[3]\\ & \leq & \delta\big\|x(t_0)-x(0)\big\|^2 +\alpha\beta\big\|P\big\|\sum_{i=1}^m\sum_{j=1}^m\big\|x(t_0)-x(0)\big\|\\ & & \times\int_0^{\infty}\big\|x(t_0-\tau_{ij}(t_0)-s)-x(0)\big\|\big|\mathrm{d}K_{ij}(s)\big|\\ & & +m\gamma\big\|P\big\|\big\|x(t_0)-x(0)\big\|\allowdisplaybreaks[3]\\ & \leq & \frac{\delta}{\lambda^P_{\min}}\big\|x(t_0)-x(0)\big\|_P^2\\ & & +\frac{\alpha\beta\big\|P\big\|}{\lambda^P_{\min}}\sum_{i=1}^m\sum_{j=1}^m\big\|x(t_0)-x(0)\big\|_P\\ & & \times\int_0^{\infty}\big\|x(t_0-\tau_{ij}(t_0)-s)-x(0)\big\|_P\big|\mathrm{d}K_{ij}(s)\big|\\ & & +\frac{m\gamma\big\|P\big\|}{\sqrt{\lambda^P_{\min}}}\big\|x(t_0)-x(0)\big\|_P\cdot 1\allowdisplaybreaks[3]\\ & \leq & \frac{\delta}{\lambda^P_{\min}}2M(t_0)\\ & & +\frac{\alpha\beta\big\|P\big\|}{\lambda^P_{\min}}\sqrt{2M(t_0)}\sqrt{2M(t_0)} \sum_{i=1}^{m}\sum_{j=1}^{m}\int_0^{\infty}\big|\mathrm{d}K_{ij}(s)\big|\\ & & +\frac{m\gamma\big\|P\big\|}{\sqrt{\lambda^P_{\min}}}\sqrt{2M(t_0)}\sqrt{2M(t_0)}\allowdisplaybreaks[3]\\ & = & \bigg\{\frac{2\delta+2\alpha\beta\big\|P\big\|K}{\lambda^P_{\min}} +\frac{2m\gamma\big\|P\big\|}{\sqrt{\lambda^P_{\min}}}\bigg\}M(t_0)\\ & = & \eta M(t_0)=\eta V(t_0). \end{eqnarray*} (In the last steps we used $\|x(s)-x(0)\|_P\leq\sqrt{2M(t_0)}$ for all $s\leq t_0$, as well as $1\leq\sqrt{2M(t_0)}$, the latter because $M(t_0)\geq\frac{1}{2}$.) Thus, in Case 2 the right-hand derivative of $M$ at $t_0$ is at most $\eta M(t_0)$ as well, so that in both cases the right-hand derivative of $e^{-\eta t}M(t)$ is nonpositive. In summary, we conclude that $M(t)\leq M(0)e^{\eta t}$ for all $t\in[0,b)$, which implies $V(t)\leq M(0)e^{\eta t}$ and \begin{eqnarray} \|x(t)\| & \leq & \frac{1}{\sqrt{\lambda^P_{\min}}}\|x(t)\|_P\nonumber\\ & \leq & \frac{1}{\sqrt{\lambda^P_{\min}}}\big(\|x(0)\|_P+\|x(t)-x(0)\|_P\big)\nonumber\\ & = & \frac{1}{\sqrt{\lambda^P_{\min}}}\big(\|x(0)\|_P+\sqrt{2V(t)}\;\big)\nonumber\\ & \leq & \frac{1}{\sqrt{\lambda^P_{\min}}}\big(\|x(0)\|_P+\sqrt{2M(0)e^{\eta t}}\;\big)\nonumber\\ & \leq &
\frac{1}{\sqrt{\lambda^P_{\min}}}\big(\|x(0)\|_P+\sqrt{2M(0)e^{\eta b}}\;\big)\label{boundary} \end{eqnarray} for all $t\in[0,b)$. Now, pick a compact set \begin{eqnarray*} W = \bigg\{(t,\psi)\in\mathbb{R}\times C^{\infty}((-\infty,0], \mathbb{R}^{nm})\Big|0\leq t\leq b,\mbox{ and }\\ \|\psi\|\leq\max\Big[\frac{1}{\sqrt{\lambda^P_{\min}}}\big(\|x(0)\|_P +\sqrt{2M(0)e^{\eta b}}\;\big),\;\|\varphi\|\Big]\bigg\}, \end{eqnarray*} where $\varphi$ is the initial value. By the inequality (\ref{boundary}), we conclude that $(t,x_t)\in W$ for all $t\in[0,b)$, which contradicts Lemma 3. Therefore, the interval of existence of the noncontinuable solution $x$ is $[0,+\infty)$. The proof of the theorem is complete. \section{Conclusions} \label{sec:conclusions} In this paper, we propose a general model of coupled dynamical systems, which includes previously studied systems as special cases, and prove that, under the QUAD assumption, the solution of the general model exists on $[0,+\infty)$.
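As a concluding illustration, the following short script sketches a numerical experiment for a special case of the model: three nodes coupled, with constant delay $\tau$, through a zero-row-sum matrix $A$ with nonnegative off-diagonal entries, and with $f(u)=-3u+\sin u$, which satisfies $\mathrm{QUAD}(0,1)$ with $\epsilon=2$. All concrete parameter values are our own illustrative choices, not taken from the analysis above; the computation merely exhibits, for one instance, the absence of finite-time blow-up that the theorem guarantees in general.
\begin{verbatim}
import numpy as np

# f satisfies QUAD(0,1): (u1-u2)*(f(u1)-f(u2)) <= -2*(u1-u2)^2
f = lambda u: -3.0 * u + np.sin(u)
# zero-row-sum coupling matrix with nonnegative off-diagonal entries
A = 0.5 * np.array([[-2., 1., 1.], [1., -2., 1.], [1., 1., -2.]])
tau, dt, T = 1.0, 0.01, 20.0
lag, steps = int(tau / dt), int(T / dt)

# x[k] approximates the state at time (k - lag)*dt; the first block
# is the (constant) initial history phi on [-tau, 0]
x = np.zeros((lag + steps + 1, 3))
x[: lag + 1] = np.array([1.0, -0.5, 2.0])
for k in range(steps):
    now, delayed = x[lag + k], x[k]      # x(t) and x(t - tau)
    x[lag + k + 1] = now + dt * (f(now) + A @ delayed)

print("max |x| on [0,T]:", np.abs(x).max())   # finite: no blow-up
\end{verbatim}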
{ "timestamp": "2007-08-31T08:32:31", "yymm": "0708", "arxiv_id": "0708.4275", "language": "en", "url": "https://arxiv.org/abs/0708.4275", "abstract": "Recently, the synchronization of coupled dynamical systems has been widely studied. Synchronization is referred to as a process wherein two (or many) dynamical systems are adjusted to a common behavior as time goes to infinity, due to coupling or forcing. Therefore, before discussing synchronization, a basic problem on continuation of the solution must be solved: For given initial conditions, can the solution of coupled dynamical systems be extended to the infinite interval $[0,+\\infty)$? In this paper, we propose a general model of coupled dynamical systems, which includes previously studied systems as special cases, and prove that under the assumption of QUAD, the solution of the general model exists on $[0,+\\infty)$.", "subjects": "Dynamical Systems (math.DS)", "title": "Continuation of solutions of coupled dynamical systems", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692339078752, "lm_q2_score": 0.7248702761768249, "lm_q1q2_score": 0.7079584973162094 }
https://arxiv.org/abs/2207.08248
Using Aichinger's equation to characterize polynomial functions
Aichinger's equation is used to give simple proofs of several well-known characterizations of polynomial functions as solutions of certain functional equations. Concretely, we use that Aichinger's equation characterizes polynomial functions to solve, for arbitrary commutative groups, Ghurye-Olkin's functional equation, Wilson's functional equation, the Kakutani-Nagumo-Walsh functional equation, and a general version of Fréchet's unmixed functional equation.
\section{Introduction} Given $(S,+)$ a commutative semigroup and $(H,+)$ a commutative group, we say that $f:S\to H$ is a polynomial function (also named generalized polynomial) of degree $\leq m$ if $f$ solves Fréchet's mixed differences functional equation: $$ \Delta_{h_1}\Delta_{h_2}\cdots \Delta_{h_{m+1}}f(x)=0\text{ for all } h_1,\cdots, h_{m+1},x\in S. $$ When we evaluate a polynomial function of degree $\leq m$ on a sum $u_1+u_2+\cdots+u_{m+1}$ of $m+1$ variables $u_1,\cdots, u_{m+1}$, we obtain a sum of functions with the property that each one of them depends on at most $m$ variables $u_{i_1},\cdots,u_{i_m}$ ($i_k\in\{1,\cdots,m+1\}$ for all $k$). Indeed, this fact completely characterizes polynomial functions, as was proved by Aichinger and Moosbauer \cite{AM} for functions defined on abelian groups and by Almira \cite{almira} for functions defined on commutative semigroups $S$ which satisfy $S+S=S$ and $0\in S$. Concretely, \cite[Lemma 4.1]{AM} states that, when $(S,+)$ is also a group, the function $f:S\to H$ is a generalized polynomial of degree $\leq m$ if and only if it solves Aichinger's equation: \begin{equation}\label{aichinger} f(x_1+\cdots+x_{m+1})=\sum_{i=1}^{m+1}g_i(x_1,x_2,\cdots, \widehat{x_i},\cdots, x_{m+1}) \end{equation} for certain functions $g_i:S^{m}\to H$, $i=1,2,\cdots, m+1$. Here $\widehat{x_{i}}$ means that $g_i$ does not depend on $x_i$. Moreover, \cite[Theorem 2]{almira} proves that, if $S+S=S$ and $f$ solves \eqref{aichinger}, then $f$ is a polynomial function of degree $\leq m$. Furthermore, if $S$ also satisfies $0\in S$, then all polynomial functions of degree $\leq m$ are solutions of \eqref{aichinger}. The main goal of this note is to use these results to give simple proofs of several well-known (and useful) characterizations of polynomial functions as solutions of certain functional equations. Concretely, we use that Aichinger's equation characterizes polynomial functions to solve Ghurye-Olkin's functional equation (Corollary \ref{GO}), Wilson's functional equation (Corollary \ref{Wilson}), the Kakutani-Nagumo-Walsh functional equation (Corollary \ref{KNW}), and a general version of Fréchet's unmixed functional equation (Corollary \ref{GFFE}). In all cases we give proofs for general commutative groups or semigroups, and we do not need to worry about the regularity properties of polynomial functions. Although the results we prove are well-known, our proofs are surprisingly simpler than the original ones. \section{Characterizations of polynomial functions} We use Aichinger's equation to prove the main result of this note, which is the following theorem: \begin{theorem} \label{main} Assume that $(S,+)$ is a commutative semigroup such that $S+S=S$, $(R,+,\cdot)$ is a commutative ring, $c:S\to S$ is an automorphism, and $f:(S,+)\to (R,+)$ is a map that satisfies \begin{equation}\label{GOcero} f(x+c(y))=\sum_{j=1}^Np_j(x)a_j(y)+\sum_{k=1}^Mq_k(y)b_k(x) \end{equation} where $p_j,q_k:(S,+)\to (R,+)$ are polynomial functions and $\deg(p_j)\leq r$, $\deg(q_k)\leq s$ for all $j,k$. Then $f:(S,+)\to (R,+)$ is a polynomial function and $\deg (f) \leq r+s+1$. \end{theorem} This theorem is a generalized version of \cite[Lemma 2.1]{almirajmaa} and is the key to proving Corollary \ref{GO}, which is a generalization of a result proved by Ghurye and Olkin in \cite[Lemma 3]{Gu_O} (see also \cite[Theorem 1.3]{almirajmaa}), where the equation was used for the characterization of Gaussian probability distributions (see also \cite[Chapter 7]{MaPe}).
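Before turning to the proofs, the following small script (an illustration of ours, not needed in the sequel) verifies symbolically the mechanism behind these statements for $m=2$ and the ordinary polynomial $f(t)=t^2$: the alternating sum $\sum_{I\subseteq\{1,\dots,m+1\}}(-1)^{m+1-|I|}f\big(\sum_{i\in I}x_i\big)$ equals $\Delta_{x_1}\cdots\Delta_{x_{m+1}}f(0)$ and hence vanishes, and solving for the term with $I=\{1,\dots,m+1\}$ expresses $f(x_1+\cdots+x_{m+1})$ as a signed sum of terms each of which omits at least one variable, i.e., exhibits a solution of Aichinger's equation \eqref{aichinger}.
\begin{verbatim}
from itertools import combinations
from sympy import symbols, expand

m = 2
xs = symbols(f'x1:{m + 2}')      # x1, x2, x3
f = lambda t: t**2               # an ordinary polynomial of degree m

total = 0
for r in range(m + 2):
    for I in combinations(xs, r):
        total += (-1)**(m + 1 - r) * f(sum(I))
print(expand(total))             # 0: Frechet's equation evaluated at x = 0
\end{verbatim}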
Our proof is not based on the very technical tools used in \cite{Gu_O}. Moreover, it simplifies the proof given in \cite{almirajmaa}, since it avoids the use of exponential polynomials. In particular, it does not depend on \cite[Lemma 4.3]{Sz1} and/or \cite[Theorem 5.10]{Sz}. \noindent \textbf{Proof of Theorem \ref{main}.} Take a set of $r+s+2$ variables $x_1,\cdots ,x_{r+1},y_1,\cdots,y_{s+1}$; then: \begin{eqnarray*} & \ & f(x_1+\cdots+x_{r+1}+y_1+\cdots+y_{s+1}) \\ &=& f(x_1+\cdots+x_{r+1}+c(c^{-1}(y_1)+\cdots+c^{-1}(y_{s+1})))\\ &=& \sum_{j=1}^Np_j(x_1+\cdots+x_{r+1})a_j(c^{-1}(y_1)+\cdots+c^{-1}(y_{s+1}))\\ &\ & \ +\sum_{k=1}^Mq_k(c^{-1}(y_1)+\cdots+c^{-1}(y_{s+1}))b_k(x_1+\cdots+x_{r+1}) . \end{eqnarray*} Now, each term $p_j(x_1+\cdots+x_{r+1})$ can be decomposed as a sum of terms, each one depending on at most $r$ of the variables $x_1,\cdots,x_{r+1}$; and each term $q_k(c^{-1}(y_1)+\cdots+c^{-1}(y_{s+1}))$ can be decomposed as a sum of terms, each one depending on at most $s$ of the variables $y_1,\cdots,y_{s+1}$. Thus, $f$ satisfies Aichinger's equation of order $r+s+1$, which means that $f$ is a polynomial function of degree $\leq r+s+1$. {\hfill $\Box$} Aichinger's equation can also be used to prove the following \begin{lemma}\label{lemma} Assume that $(S,+)$ is a commutative semigroup such that $0\in S= S+S$, $(H,+)$ is a commutative group, $c:S\to S$ is an automorphism, and $f:(S,+)\to (H,+)$ is a map such that $g(x)=f(c(x))$ is a polynomial function of degree $\leq m$. Then $f$ is also a polynomial function of degree $\leq m$. \end{lemma} \noindent \textbf{Proof. } Given $x_1,\cdots, x_{m+1}$ a set of $m+1$ variables, we have that \begin{eqnarray*} f(x_1+\cdots+x_{m+1})&=& f(c(c^{-1}(x_1)+\cdots+c^{-1}(x_{m+1})))\\ &=&g(c^{-1}(x_1)+\cdots+c^{-1}(x_{m+1}))\\ &=&\sum_{i=1}^{m+1}g_i( c^{-1}(x_1),\cdots,\widehat{c^{-1}(x_i)},\cdots, c^{-1}(x_{m+1}))\\ &=&\sum_{i=1}^{m+1}G_i( x_1,\cdots,\widehat{x_i},\cdots, x_{m+1}), \end{eqnarray*} which means that $f$ satisfies Aichinger's equation of order $m$. {\hfill $\Box$} We are now in a position to study Ghurye-Olkin's functional equation: \begin{corollary}[Ghurye-Olkin's functional equation]\label{GO} With the same notation used in Theorem \ref{main}, assume that $(S,+)$ is a commutative group, $c_i:S\to S$, $i=1,\cdots,n$ are automorphisms and that $c_i-c_j$ is also an automorphism whenever $i\neq j$. Let $f_i:S\to R$, $i=1,\cdots,n$ be such that \begin{equation}\label{eqGO} \sum_{i=1}^nf_i(x+c_i(y))=\sum_{j=1}^Np_j(x)a_j(y)+\sum_{k=1}^Mq_k(y)b_k(x) \end{equation} where $p_j,q_k:(S,+)\to (R,+)$ are polynomial functions and $\deg(p_j)\leq r$, $\deg(q_k)\leq s$ for all $j,k$. Then each function $f_i:(S,+)\to (R,+)$ is a polynomial function and $\deg (f_i) \leq r+s+n$, $i=1,\cdots,n$. Moreover, in the particular case that $p_j=q_k=0$ for all $j,k$ (so that the second member of the equation is $0$), the equation is well defined for functions $f_i$ taking values on a commutative group and each $f_i$ that solves the equation is a polynomial function of degree at most $n-1$. \end{corollary} \noindent \textbf{Proof. } Assume that the second member of \eqref{eqGO} is not $0$. Theorem \ref{main} is the case $n=1$, so it is natural to proceed by induction on $n$. Indeed, we prove by induction on $n$ that $f_n$ is a polynomial function of degree $\leq r+s+n$, and a simple rearrangement of the functions $f_1,\cdots,f_n$ proves the result for all functions $f_i$, $i=1,\cdots,n$. Let us assume that $n>1$ and the result holds for $n-1$.
If we consider both members of the equation as functions $F(x,y)$ defined on $S\times S$, we can apply the difference operator $$\Delta_{(h_1,-c_1^{-1}(h_1))}F(x,y)=F(x+h_1,y-c_1^{-1}(h_1))-F(x,y)$$ to both sides of the equation to get that $$ \sum_{i=2}^ng_i(x+c_i(y))=\sum_{j=1}^Np_j^*(x)a_j^*(y)+\sum_{k=1}^Mq_k^*(y)b_k^*(x), $$ where \begin{eqnarray*} g_i(x+c_i(y)) &=& \Delta_{(h_1,-c_1^{-1}(h_1))}f_i(x+c_i(y)) \\ &=& f_i(x+h_1+c_i(y-c_1^{-1}(h_1))) - f_i(x+c_i(y)) \\ &=& f_i(x+h_1+c_i(y)-c_i(c_1^{-1}(h_1)))-f_i(x+c_i(y)) \\ &=& \Delta_{h_1-c_i(c_1^{-1}(h_1))}f_i(x+c_i(y))\\ &=& \Delta_{(1_d-c_i\circ c_1^{-1})(h_1)}f_i(x+c_i(y)) \end{eqnarray*} and $p_j^*,q_k^*:(S,+)\to (R,+)$ are polynomial functions satisfying $\deg(p_j^*)\leq r$, $\deg(q_k^*)\leq s$ for all $j,k$. Thus, the induction hypothesis implies that all functions $g_i$, $i=2,\cdots, n$ are polynomial functions of degree $\leq r+s+n-1$. In particular, for each $h_1\in S$, $\Delta_{(1_d-c_i\circ c_1^{-1})(h_1)}f_n$ is a polynomial function of degree $\leq r+s+n-1$. Now, $(c_1-c_i)\circ c_1^{-1}= 1_d-c_i\circ c_1^{-1}$ is bijective, so that, for each $h\in S$, $\Delta_{h}f_n$ is a polynomial function of degree $\leq r+s+n-1$. In particular, $f_n$ satisfies Fréchet's mixed functional equation $$ \Delta_{h_1}\Delta_{h_2}\cdots \Delta_{h_{r+s+n}} \Delta_{h}f(x)=0\text{ for all } h_1,\cdots, h_{r+s+n},h,x\in S. $$ Hence $f_n$ is a polynomial function of degree $\leq r+s+n$. Let us now assume that $p_j=q_k=0$ for all $j,k$ and that all functions $f_i$ take values on a commutative group (not necessarily a ring). Then the equation is of the form \begin{equation}\label{eqGOcase0} \sum_{i=1}^nf_i(x+c_i(y))=0. \end{equation} Again, we prove by induction on $n$ that $f_n$ is a polynomial function of degree $\leq n-1$, and a simple rearrangement of the functions $f_1,\cdots,f_n$ proves the result for all functions $f_i$, $i=1,\cdots,n$. For $n=1$, the equation becomes $f_1(x+c_1(y))=0$ and taking $y=0$ we get that $f_1(x)=0$, so $f_1$ is a polynomial function of degree $0$. We assume that the result holds true for $n-1$ and consider the case $n>1$. Applying to both sides of \eqref{eqGOcase0} the operator $\Delta_{(h_1,-c_1^{-1}(h_1))}$ we obtain that \[ \sum_{i=2}^n\Delta_{(1_d-c_i\circ c_1^{-1})(h_1)}f_i(x+c_i(y))=0. \] Thus, the induction hypothesis and the fact that $1_d-c_i\circ c_1^{-1}$ is an automorphism for each $i\geq 2$ imply that $\Delta_{h}f_i$ is a polynomial function of degree at most $n-2$ for each $i=2,\cdots,n$ and for every $h\in S$. In particular, $f_n$ is a polynomial function of degree at most $n-1$. {\hfill $\Box$} \begin{remark} \label{rem} The transformation $\widetilde{f_i}(x)=f_i(\beta_i(x))$ reduces the equation \begin{equation}\label{GOM} \sum_{i=1}^nf_i(\beta_i(x)+\delta_i(y))=\sum_{j=1}^Np_j(x)a_j(y)+\sum_{k=1}^Mq_k(y)b_k(x) \end{equation} to the equation \begin{equation}\label{GOM2} \sum_{i=1}^n\widetilde{f_i}(x+(\beta_i^{-1}\circ \delta_i)(y))=\sum_{j=1}^Np_j(x)a_j(y)+\sum_{k=1}^Mq_k(y)b_k(x), \end{equation} so that, if $\beta_i,\delta_i:S\to S$ are automorphisms such that $\beta_i^{-1}\circ \delta_i-\beta_j^{-1}\circ \delta_j$ is invertible whenever $i\neq j$ and $p_j,q_k:(S,+)\to (R,+)$ are polynomial functions, $\deg(p_j)\leq r$, $\deg(q_k)\leq s$ for all $j,k$, then we can use Corollary \ref{GO} with $c_i=\beta_i^{-1}\circ \delta_i$ and Lemma \ref{lemma} to conclude that the functions $f_i$ that solve equation \eqref{GOM} are polynomial functions of degree at most $r+s+n$.
Moreover, if the second member of \eqref{GOM} is $0$, the functions $f_i$ that solve the equation are polynomial functions of degree at most $n-1$. \end{remark} The following equation was studied one hundred years ago by Wilson \cite{wilson}: \begin{corollary}[Wilson's functional equation]\label{Wilson} Assume that $(S,+)$, $(H,+)$ are commutative groups and $\beta_i,\delta_i:S\to S$ are automorphisms such that $\beta_i^{-1}\circ \delta_i-\beta_j^{-1}\circ \delta_j$ is invertible whenever $i\neq j$. Assume also that the functions $f_i,a,b:S\to H$ ($i=1,\cdots,n$) solve the equation \[ \sum_{i=1}^nf_i(\beta_i(x)+\delta_i(y))=a(x)+b(y). \] Then $a$ and $b$ are polynomial functions of degree at most $n$. \end{corollary} \noindent \textbf{Proof.} A direct application of Remark \ref{rem} with $r=s=0$ shows that all functions $f_i$, $i=1,\cdots,n$ are polynomial functions of degree at most $n$. Hence $a$, $b$ are also polynomial functions of degree at most $n$. {\hfill $\Box$} The following equation was studied by Almira in \cite{almirajmaa, A_JJA}. \begin{corollary}[Generalized Fréchet's unmixed functional equation] \label{GFFE} Let $(S,+)$ and $(H,+)$ be commutative groups and assume that $f:S\to H$ solves the equation \begin{equation} q(\tau_h)(f)=\sum_{k=0}^na_kf(x+kh)=0 \text{ for all } x,h\in S, \end{equation} where $q(z)=a_0+\cdots+a_nz^n$ is a polynomial and $\tau_h:S\to S$ is the translation operator, $\tau_h(x)=x+h$. Then $f$ is a polynomial function of degree at most $s=\#\{k:a_k\neq 0\}-1$. \end{corollary} \noindent \textbf{Proof. } Set $\{k:a_k\neq 0\}=\{k_1,k_2,\cdots,k_{s+1}\}$ and apply Corollary \ref{GO} with $p_j=q_k=0$ for all $j,k$, $f_i=a_{k_i}f$ and $c_i(x)=k_ix:=x+\cdots^{k_i\text{ times }}+x$, $i=1,\cdots,s+1$. {\hfill $\Box$} The following equation was introduced by S. Kakutani and M. Nagumo \cite{kn} and J. L. Walsh \cite{wal} in the 1930s. The equation was extensively studied by S. Haruki \cite{h1,h2,h3} in the 1970s and 1980s. \begin{corollary} \label{KNW} Let $(H,+)$ be a commutative group, let $f:\mathbb{C}\to H$ be a solution of the Kakutani-Nagumo-Walsh functional equation \begin{equation}\label{eqKNW} \frac{1}{N}\sum_{k=0}^{N-1}f(z+w^kh)=f(z) \text{ for all } z,h\in\mathbb{C}, \end{equation} where $w$ is any primitive $N$-th root of $1$. Then $f$ is a polynomial function of degree $\leq N$. \end{corollary} \noindent \textbf{Proof. } Use Corollary \ref{Wilson} with $n=N$, $f_i(z)=\frac{1}{N}f(z)$, $\beta_i(z)=z$, $\delta_i(z)=w^{i-1}z$, $i=1,\cdots,N$, and $a(z)=f(z)$, $b(z)=0$. Obviously the corollary can be used since $\beta_i^{-1}(z)=z$, $\delta_i^{-1}(z)=w^{N+1-i}z$, and \[ (\beta_i^{-1}\circ \delta_i-\beta_j^{-1}\circ \delta_j)(z)=(\delta_i-\delta_j)(z)=(w^{i-1}-w^{j-1})z \] is an automorphism of $\mathbb{C}$ for all $i\neq j$.
{\hfill $\Box$} Another special case of Wilson's equation is \begin{equation}\label{Ski-ordinary} \sum_{i=1}^{m}f_i(b_ix+c_iy)= \sum_{i=1}^m f_i(b_ix) + \sum_{i=1}^m f_i(c_iy), \end{equation} which is a linearized form of the Skitovich-Darmois functional equation \begin{equation*} \prod_{i=1}^m \widehat{\mu_i}(b_ix+c_iy) = \prod_{i=1}^m \widehat{\mu_i}(b_ix) \prod_{i=1}^m \widehat{\mu_i}(c_iy), \end{equation*} an equation which is connected to the characterization problem of Gaussian distributions (see, for example, Linnik \cite{Linn}, Ghurye-Olkin \cite{Gu_O}, and \cite{KLR}): \begin{corollary}[Linearized Skitovich-Darmois functional equation] \label{SD} Assume that $(S,+)$ and $(H,+)$ are commutative groups and let $\beta_i,\delta_i:S\to S$ be automorphisms such that $\beta_i^{-1}\circ \delta_i-\beta_j^{-1}\circ \delta_j$ is invertible whenever $i\neq j$. If the functions $f_i:S\to H$, $i=1,\cdots, n$ solve the functional equation \begin{equation}\label{LSDFE} \sum_{i=1}^nf_i(\beta_i(x)+\delta_i(y))=\sum_{i=1}^nf_i(\beta_i(x))+\sum_{i=1}^nf_i(\delta_i(y)), \end{equation} then $P(x)=\sum_{i=1}^nf_i(\beta_i(x))$ and $Q(y)= \sum_{i=1}^nf_i(\delta_i(y))$ are polynomial functions of degree $\leq n$. \end{corollary} \noindent \textbf{Proof. } Use Corollary \ref{Wilson} with $a(x)=P(x)$ and $b(y)=Q(y)$. {\hfill $\Box$} The equation \eqref{LSDFE} has been studied in great detail by Fel'dman \cite{Feld} for functions defined on locally compact commutative groups. \begin{remark} As a final remark we would like to mention that, although we have formulated all results in this paper for ordinary functions defined on commutative groups or semigroups, the same proofs can be translated to the distributional setting. In particular, ordinary polynomials of degree $\leq m$ are the unique solutions of Aichinger's equation \eqref{aichinger} when $f\in \mathcal{D}(\mathbb{R}^d)'$ (so that $f(x_1+\cdots+x_{m+1}) \in \mathcal{D}(\mathbb{R}^d\times \cdots ^{m+1 \text{times}}\times \mathbb{R}^d)'$) and each $g_i(x_1,\cdots,\widehat{x_i},\cdots,x_{m+1}) \in \mathcal{D}(\mathbb{R}^d\times \cdots ^{m+1 \text{times}}\times \mathbb{R}^d)'$ is a distribution which does not depend on $x_i$. The proof of this result is a direct consequence of the fact that the translation and difference operators are well defined for distributions and inherit in the distributional framework the properties they have when applied to ordinary functions, together with the fact that Fréchet's functional equation also characterizes polynomials in the distributional sense \cite{A_NFAO}. \end{remark}
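As a closing numerical illustration (again ours, and independent of the results above), the Kakutani-Nagumo-Walsh equation can be tested directly: for $N=3$ and the ordinary polynomial $f(z)=z^2$, of degree $<N$, averaging $f$ over the vertices $z+w^kh$ returns $f(z)$ for randomly chosen $z,h\in\mathbb{C}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 3
w = np.exp(2j * np.pi / N)       # a primitive N-th root of unity
f = lambda z: z**2

for _ in range(5):
    z, h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    mean = sum(f(z + w**k * h) for k in range(N)) / N
    assert np.isclose(mean, f(z))
print("KNW identity verified for f(z) = z^2, N = 3")
\end{verbatim}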
{ "timestamp": "2022-07-19T02:22:02", "yymm": "2207", "arxiv_id": "2207.08248", "language": "en", "url": "https://arxiv.org/abs/2207.08248", "abstract": "Aichinger's equation is used to give simple proofs of several well-known characterizations of polynomial functions as solutions of certain functional equations. Concretely, we use that Aichinger's equation characterizes polynomial functions to solve, for arbitrary commutative groups, Ghurye-Olkin's functional equation, Wilson's functional equation, the Kakutani-Nagumo-Walsh functional equation, and a general version of Fréchet's unmixed functional equation.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "Using Aichinger's equation to characterize polynomial functions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692332287863, "lm_q2_score": 0.7248702761768249, "lm_q1q2_score": 0.7079584968239581 }
https://arxiv.org/abs/2110.03078
Singularities of normal quartic surfaces II (char=2)
We show, in this second part, that the maximal number of singular points of a quartic surface $X \subset \mathbb{P}^3_K$ defined over an algebraically closed field $K$ of characteristic 2 is at most 14, and that, if we have 14 singularities, these are nodes and moreover the minimal resolution of $X$ is a supersingular K3 surface. We produce an irreducible component, of dimension 24, of the variety of quartics with 14 nodes. We also exhibit easy examples of quartics with 7 $A_3$-singularities.
\section{Introduction} Once upon a time\footnote{It was in August-September 1976} in Cortona there was a Summer school with wonderful courses held by Herb Clemens and Boris Moishezon. The first author had the privilege of attending the Summer school. On that occasion Herb lectured on several beautiful classical topics, and these lectures formed the basis of a lovely book \cite{clemens}. Even though the course and the book were devoted to complex curves, characteristic $p$ appeared on the stage, and was used by Clemens to explain the `Unity of Mathematics' (section 2.12). In this spirit we are happy to dedicate this `characteristic $2$' paper to Herb. \smallskip These are our main results. They feature the property of the minimal resolution $S$ of being a supersingular K3 surface (i.e.\ with Picard number $\rho=22$).\footnote{For the reader who has never seen such a surface, an easy example is provided in Corollary \ref{A3} in Section \ref{ss:plane}, as the resolution of a quartic surface with $7$ $A_3$ singularities, providing $21$ independent $(-2)$-curves on $S$ which together with the hyperplane section $H$ generate a rank $22$ finite index sublattice of $\Pic(S)$.} The following is our main theorem: \begin{theo}\label{maintheo} \label{theo} A normal quartic surface $X \subset \ensuremath{\mathbb{P}}^3_K$ defined over an algebraically closed field $K$ of characteristic $2$ contains no more than 14 singular points. If the maximum number of 14 singularities is attained, then all singularities are nodes and the minimal resolution is a supersingular K3 surface. The variety of quartics with $14$ nodes contains an irreducible component, of dimension $24$. \end{theo} If the minimal resolution $S$ of a normal quartic $X$ is not a supersingular K3 surface, then Theorem \ref{theo} shows that $X$ has at most $13$ singular points. This bound is not sharp; we have examples with $12$ nodes, and we will show in a forthcoming paper (part III) that if $S$ is a K3 surface which is not supersingular, then $X$ has at most $12$ singular points. The proof uses mostly classical techniques, notably the Gauss map, but there are some ingredients (notably the main claim in Section \ref{s:proof}) which build on the theory of genus one fibrations (see Section \ref{s:g=1}). We emphasize that each ingredient has some special feature in characteristic $2$; for instance, the Gauss map of a normal surface in $\ensuremath{\mathbb{P}}^3$ need not be birational, and double points behave differently (this affects the degree formula, see Section \ref{s:Gauss}). The dual surface can be a plane, as we study in Section \ref{ss:plane}. Elliptic fibrations feature wild ramification (at certain additive fibres), which has surprising consequences for supersingularity (see Section \ref{s:g=1}). The notion of genus one fibration also encompasses quasi-elliptic fibrations whose properties we exploit in Section \ref{s:quasi}, especially with a view to the dual surface. Naturally, Theorem \ref{theo} leads to the question of what is true for other quasi-polarized K3 surfaces in characteristic $2$, which we plan to address in part III as well. \emph{Convention:} We work over an algebraically closed field $K$, mostly of characteristic 2, though many results may also be stated over non-closed fields.
\section{The Gauss map} \label{s:Gauss} We consider in this section a normal quartic surface $$ X = \{F(x)=0\} \subset \ensuremath{\mathbb{P}}^3$$ and summarize and extend some considerations made in the first part \cite{cat21}, in order to gain control over the (number of) singular points of $X$. The Gauss map $\ga : X \dasharrow \sP : = (\ensuremath{\mathbb{P}}^3)^{\vee}$ is the rational map given by $$ \ga(x) : = \nabla F (x), \;\; x \in X^0 : = X \setminus \mathrm{Sing}(X).$$ We let $X^{\vee} : = \overline{\ga(X^0)}$ be the closure of the image of the Gauss map, which is a morphism on $X^0$, and becomes a morphism $\tilde{\ga}$ on a suitable blow up $\tilde{S}$ of the minimal resolution $S$ of $X$. $X^{\vee}$ is called the dual variety of $X$. In order to compute the degree of $X^{\vee}$ (this is defined to be equal to zero if $X^{\vee}$ is a curve), we consider a line $\Lambda \subset \sP$ such that $\Lambda$ is transversal to the map $\tilde{\ga}$; this means: \begin{enumerate} \item[1)] $\Lambda \cap X^{\vee} = \emptyset $ if $X^{\vee}$ is a curve; \item[2)] $\Lambda $ is not tangent to $X^{\vee}$ at any smooth point, and contains neither a singular point of $X^{\vee}$, nor a point $y$ where the dimension of the fibre $\tilde{\ga}^{-1} (y)$ equals $1$, so that \item[3)] $\Lambda \cap X^{\vee} $ is in particular a subscheme consisting of $\deg(X^{\vee})$ distinct points, and its inverse image in $\tilde{S}$ is a finite set. \end{enumerate} By a suitable choice of the coordinates, we may assume that $$\ga^{-1} ( \Lambda ) \subset X \cap \{F_1 = F_2 = 0\}\;\;\;\;\;\; (F_i = \partial F/\partial x_i). $$ The latter is a finite set, hence by B\'ezout's theorem it consists of $4 \cdot 3^2= 36$ points counted with multiplicity, including the singular points of $X$. We have therefore proven the following (probably well known) formula: $$ (DEGREE - FORMULA) \ \ \deg(\ga) \deg(X^{\vee} ) = 36 - \sum_{P \in \mathrm{Sing}(X)} (F,F_1, F_2)_P,$$ where the symbol $(F,F_1, F_2)_P$ denotes the local intersection multiplicity at $P$, defined by $$ (F,F_1, F_2)_P: = \dim_K ( \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^3,P} / (F, F_1, F_2)) = \dim_K ( \ensuremath{\mathcal{O}}_{X,P} / ( F_1, F_2)) .$$ Under the above assumptions this intersection multiplicity is zero unless $P$ is a singular point, and then we have $$ (F,F_1, F_2)_P \geq 2 \;\; \forall P \in \mathrm{Sing}(X) .$$ The integer $(F,F_1, F_2)_P$ shall be called the Gaussian defect. \subsection{Calculation of Gaussian defects} Since quartics with triple points were treated in \cite{cat21}, we will mostly be concerned with double points, but we will cover all types of singularities in Proposition \ref{9}.
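For readers who wish to experiment, the following sympy sketch (ours, and not part of the original argument) checks the node contribution to the Gaussian defect recorded in the classification below: for the standard local model $f=xy+z^2$ of an $A_1$-singularity, the ideal $(f,f_x,f_y)$ over $\mathbb{F}_2$ reduces to $(x,y,z^2)$, whose quotient algebra has basis $\{1,z\}$, so that a node contributes exactly $2$.
\begin{verbatim}
from sympy import symbols, groebner

x, y, z = symbols('x y z')
f = x*y + z**2                    # local model of a node (A_1)
G = groebner([f, f.diff(x), f.diff(y)], x, y, z,
             modulus=2, order='grevlex')
print(G.exprs)                    # [x, y, z**2]: basis {1, z}, length 2
\end{verbatim}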
Double point singularities are divided into three rough types according to the rank of the tangent quadric at $P$: \begin{enumerate} \item nodes: here the quadric is smooth, and we have an $A_1$-singularity, formally equivalent to $xy = z^2$; the nodes give a contribution $(F,F_1, F_2)_P=2$ to the Gaussian defect; \item biplanar double points: here the quadric consists of two planes, and we have an $A_n$-singularity with $n\geq 2$, formally equivalent to $xy = z^{n+1}$ (see for instance \cite{cat21a}); the biplanar double points of type $A_{n}$ give a contribution of $n+1$ to the Gaussian defect; \item uniplanar double points: here the quadric consists of a double plane, and we have several types (see \cite{artin}, \cite{artin2}, \cite{roczen}); the Taylor development is of the form $ x^2 + \psi =0$, where $\psi$ has order $\geq 3$; the uniplanar double points give a contribution of at least $8$ to the Gaussian defect, since $ (F,F_1, F_2)_P = (F, \psi_1, \psi_2)_P \geq 8$. \end{enumerate} \begin{prop}\label{gaussestimate} Let $X$ be a normal quartic surface in $\ensuremath{\mathbb{P}}^3$. (I) If $X$ has $\nu$ singular points of multiplicity $2$, among them $b$ biplanar double points, and $u$ uniplanar double points, then: \begin{eqnarray} \label{eq:deg} 36 - \deg(\ga) \deg(X^{\vee} ) \geq 2 \nu + b + 6 u . \end{eqnarray} (II) If $X$ contains a node, then the exceptional curve $E$ in $S$ resolving the node maps to a line via an inseparable map of degree two. In particular the Gauss map cannot be birational if $X^{\vee} $ is a normal surface. (III) The dual variety $X^{\vee}$ cannot be a line. (IV) For $\nu \geq 13$, the dual variety $X^{\vee}$ is an irreducible surface; in particular $\deg(\ga) \deg(X^{\vee} ) \geq 2$, and, if $\deg(\ga) =1$, $X^\vee$ is non-normal and $ \deg(X^{\vee} ) \geq 3$. (V) For $\nu \geq 14$, if the dual variety $X^{\vee}$ is not a plane, then the singularities of $X$ are all of type $A_n$ ($u=0$). \end{prop} \begin{proof} (I): follows since the nodes give a contribution equal to $2$ to the Gaussian defect, the biplanar double points of type $A_{n}$ give a contribution $n+1 \geq 3$, and the uniplanar double points give a contribution of at least $8$. (II): given a node $P$, i.e., an $A_1$-singularity, the affine Taylor development at $P$ is given by $$F = xy + z^2 + \psi (x,y,z) =0$$ and the Gauss map on the exceptional curve $ E$, given as a conic $E = \{ xy + z^2=0\} \subset \ensuremath{\mathbb{P}}^2$, is given by $(x,y,0,0)$. If $X^{\vee} $ is a normal surface and $\ga$ is birational onto its image, then $$\tilde{\ga} : \tilde{S} \ensuremath{\rightarrow} X^{\vee} $$ is an isomorphism over the complement of a finite number of points of $X^{\vee} $, a contradiction since $E$ maps $2$ to $1$ to a line. (III): if $X^{\vee}$ is a line, then there are projective coordinates in $\ensuremath{\mathbb{P}}^3$ such that the partial derivatives with respect to 2 variables, say $z,w$, are identically zero; hence $$ X = \{ a z^4 + b w^4 + c z^2 w^2 + z^2 D(x,y) + w^2 E(x,y) + f (x,y) = 0 \}.$$ Writing $$ D(x,y) = d_1 x^2 + d_2 y^2 + d xy, E(x,y) = e_1 x^2 + e_2 y^2 + e xy,$$ $$ f(x,y) = q(x,y)^2 + f_1 x^3 y + f_2 x y^3,$$ we see that $$ \mathrm{Sing}(X) = X \cap \{ y M = x M = 0\}, \ M : = d z^2 + e w^2 + f_1 x^2 + f_2 y^2,$$ hence Sing$(X) \supset X \cap \{ M=0\}$ and $X$ is not normal.
(IV): since for $\nu \geq 13$ there must be a node by the degree formula, $X^{\vee}$ contains a line; but $X^\vee$ cannot be a line by (III), hence it is a surface; the rest follows from (II) and from the fact that an irreducible quadric is normal. (V) For $\nu \geq 14$, if the dual variety $X^{\vee}$ is not a plane, then $ \deg (\ga) \geq 2$, or $ \deg (X^{\vee}) \geq 3$ (since if $\ga$ is birational, then $X^{\vee}$ is not normal by (IV)); hence $ 2 \nu + b + 6 u \leq 33$, so that $u=0$ and the singularities of $X$ are all of type $A_n$. \end{proof} \begin{rem} \label{rem:non-RDP} The degree formula can be improved substantially by taking the precise types of singularities into account, as the proof of (I) shows. For instance, the biplanar double points contribute $b$ in \eqref{eq:deg} exactly when they all have type $A_2$. \end{rem} \subsection{Gaussian defect of non-rational double point singularities} In the spirit of Remark \ref{rem:non-RDP}, we take a closer look at those singularities which are not rational double points. This will enable us to strengthen the results of Proposition \ref{gaussestimate}, and to decide when the minimal resolution $S$ of $X$ is a K3 surface (see Proposition \ref{cor:9}). \begin{prop}\label{9} If a given singularity $P$ on $X$ is not a rational double point, we have, for a general (hence any) choice of the affine local coordinates at $P$, \begin{eqnarray} \label{eq:>=10} (F,F_1, F_2)_P \geq 10. \end{eqnarray} \end{prop} \begin{proof} First of all, for a triple point the Gaussian defect is at least 12, since the Gaussian defect is greater than or equal to the product of the respective orders of $F, F_1, F_2$ at $P$, and a double point which is not a rational double point must be a uniplanar double point. We can therefore assume that the affine Taylor development of $F$ at the point $P$ is of the form $$ \ F (x_1, x_2,x_3) = x_1 ^2 + G (x) + B (x),$$ where $G$ is homogeneous of degree $3$ and $B$ of degree $4$. We may take local coordinates such that $ x: = x_1$, and where $y,z$ are generic linear forms vanishing at $P$, hence the Gaussian defect will be the intersection number $ (F, F_y, F_z)$ at the point $P$. This said, we can write $$ F (x,y,z) = x^2 ( 1 + A(x,y,z)) + x g(y,z) + g' (y,z) + B' (x,y,z),$$ and multiplying by $ ( 1 + A(x,y,z))^{-1}$ we get a formal power series equation \begin{eqnarray} \label{eq:f} f = x^2 + x g(y,z) + g' (y,z) + b (x,y,z), \end{eqnarray} where $g$ is a quadratic form, $g'$ is a cubic form, and the power series $b$ has order at least 4. We consider the blow up of the singular point $P \in X$. The equation of $X$ is $ f(x,y,z) = x^2 + x g(y,z) + g'(y,z) + b(x,y,z) = 0$; set now $$ x = t\, \xi, \; \ y = t\, \eta, \; \ z= t\, \zeta$$ (here $t=0$ is the equation of the exceptional divisor, isomorphic to $\ensuremath{\mathbb{P}}^2$, and $(\xi, \eta, \zeta)$ are homogeneous coordinates in $\ensuremath{\mathbb{P}}^2$) so that the equation of the blow up is $$ \xi^2 + t \xi \ g (\eta, \zeta) + t \ g' (\eta, \zeta) + t^2 \ b (\xi, \eta, \zeta)=0.$$ On the exceptional line $\{ t = \xi =0 \}$ the singular points are the roots of $ g' $. Hence either $ g' $ is identically zero, or the blow up is normal. If $ g' $ does not have a multiple root, we get 3 nodes, hence $P$ is a singularity of type $D_4$ and we are done. Therefore we may assume that $g'$ has a multiple root (or is identically zero) and apply a linear transformation such that $ y =0$ is this root.
We want to show that the length of the Artin algebra $$\sA : = \ensuremath{\mathcal{O}}_P / ( f, f_y, f_z) $$ is $\geq 10$. A fortiori it will suffice to replace the algebra $\sA$ by the quotient algebra $$\sA_4 : = \ensuremath{\mathcal{O}}_P / (( f, f_y, f_z) + \mathfrak M_P^4)$$ or by the quotient algebra $$\sA_5 : = \ensuremath{\mathcal{O}}_P / ((f, f_y, f_z) + \mathfrak M_P^5).$$ Inside the algebra $\sB_4 : = \ensuremath{\mathcal{O}}_P / \mathfrak M_P^4$, the ideal $\sI$ generated by $ f, f_y, f_z$ is generated as a vector space by the vectors $$f, f_y, f_z, x f, x f_y, x f_z, y f, y f_y, y f_z, z f, z f_y, zf_z $$ where the first three have order at least $2$, and the remaining ones order at least $3$. Since $\sI \subset \mathfrak M_P^2$, which has colength $4$, it suffices to show that \begin{enumerate} \item[1)] $(\sI + \mathfrak M_P^3)/ \mathfrak M_P^3$ has codimension at least $3$ in $\mathfrak M_P^2/ \mathfrak M_P^3$, which is a 6-dimensional vector space. \item[2)] $((\sI \cap \mathfrak M_P^3) + \mathfrak M_P^4) / \mathfrak M_P^4$ has codimension at least $2$ in $\mathfrak M_P^3/ \mathfrak M_P^4$, which is a 10-dimensional vector space. \item[3)] if in 2) codimension $2$ occurs, then $((\sI \cap \mathfrak M_P^4) + \mathfrak M_P^5)/ \mathfrak M_P^5$ has codimension at least $2$ in $\mathfrak M_P^4/ \mathfrak M_P^5$. \end{enumerate} Writing $g(y,z)=\alpha y^2 + a\, yz + \beta z^2$, we can now write $$ f = x^2 + x g(y,z) + cy^3+dy^2z \;\; (\mathrm{mod} \ \mathfrak M_P^4),$$ $$ \ f_y = a xz + c y^2, \;\;\; \ f_z = a xy + d y^2 \;\; (\mathrm{mod} \ \mathfrak M_P^3), $$ where $a$, $c$, $d$ may be equal to zero. The first assertion is clear, since modulo $\mathfrak M_P^3$ we just have three vectors, $x^2, a xz + c y^2, a xy + d y^2$. In fact, if $a=0$, then the vectors are linearly dependent modulo $\mathfrak M_P^3$, so $(\sI + \mathfrak M_P^3)/ \mathfrak M_P^3$ has codimension at least $4$ in $\mathfrak M_P^2/ \mathfrak M_P^3$. As one easily checks, $((\sI \cap \mathfrak M_P^3) + \mathfrak M_P^4) / \mathfrak M_P^4$ has codimension at least $3$ in $\mathfrak M_P^3/ \mathfrak M_P^4$ in this situation, and it follows that $(F,F_1, F_2)_P \geq 11$, as desired. Hence, in what follows, we will assume $a\neq 0$ and thus normalize $a=1$. For the second assertion, it suffices to show that, modulo $\mathfrak M_P^4$, we get a subspace in degree 3 of dimension at most $8$. From $f$, in degree 3 we get $ x^3, x^2 y, x^2 z,$ and modulo the subspace generated by the above vectors we get $xf_y\equiv cxy^2, xf_z\equiv dxy^2$: these vectors are all contained in the 4-dimensional subspace $V$ generated by $x^3, x^2 y, x^2 z, x y^2.$ Since there are only 4 further generators in degree 3 (given below), this already proves the second assertion. For future use, we further investigate $(\sI + \mathfrak M_P^4) / \mathfrak M_P^4$. Modulo $V$, we get the 4 vectors, $$ yf_y = y (x z + c y^2), \; zf_y = z (x z + c y^2), \; yf_z \equiv d y^3, \; zf_z=z (x y + d y^2).$$ These are linearly independent if and only if $d\neq 0$; in that case, their span is generated by $xyz, xz^2, y^3, y^2z$, in agreement with the second assertion. Note that, if $d=0$, then $(\sI \cap \mathfrak M_P^3) / \mathfrak M_P^4$ has codimension at least $3$ in $\mathfrak M_P^3/ \mathfrak M_P^4$, and the main claim of our proposition follows readily. Hence we will assume $d\neq 0$ in what follows.
In order to prove the third assertion we observe that the ideal $$ \mathcal I':=(x^2, x y^2, x z^2, xyz, y^3, y^2z)$$ arising from the monomials in the above computations contains all monomials of degree 4 except for $z^4$ and $z^3y$. Define $W$ to be the subspace of $\mathfrak M_P^4/ \mathfrak M_P^5$ generated by $\mathcal I'$, i.e.\ by the monomials containing $x$ or divisible by $y^2$. Working in $$ U: = (\mathfrak M_P^4/ \mathfrak M_P^5)/W \cong K z^4 \oplus K z^3y $$ we want to show that $((\sI \cap \mathfrak M_P^4) +\mathfrak M_P^5) / \mathfrak M_P^5$ maps to zero in $U$. Observe that, in degree $3$, $b_y \equiv \la y^2 z + \mu z^3$, $b_z \equiv \la y^3 + \mu y z^2 $ modulo $\mathcal I'$. But in order to get this, we must have some degree $1$ relation between the quadratic parts of $f, f_y, f_z$, which could then give rise in degree 4 to some non-zero vector in $U$. In fact, one relation is obvious, namely \[ ( cy + dz)f+ dxf_y+ cxf_z\in \mathfrak M_P^4, \] but it only produces multiples of $x$ and $y^2$ in degree 4, i.e.\ zero in $U$. Direct calculation shows that the above is the only relation occurring in degree $1$, so we are done. \end{proof} We will use the proposition soon to derive a criterion for the minimal resolution $S$ of $X$ to be a K3 surface (Proposition \ref{cor:9}), but as a preparation we have to discuss the case where the dual surface $X^\vee$ is a plane. \section{When the dual surface is a plane} \label{ss:plane} We continue to consider a normal quartic surface $X\subset \ensuremath{\mathbb{P}}^3$. The term $\deg(\gamma)\deg(X^\vee)$ in \eqref{eq:deg} makes it essential to study the case where the dual surface $X^{\vee}$ is a plane. In this case there are coordinates $(x_1, x_2, x_3, z)$ such that the partial derivative of $F$ with respect to $z$ is identically zero, hence \begin{eqnarray} \label{eq:plane} X = \{ (x,z) | a z^4 + z^2 Q(x) + B(x)=0\}. \end{eqnarray} We are going to show that the general such surface $X$ has $14$ nodes as singularities (Theorem \ref{dual=plane}). This will prove part of Theorem \ref{theo}. There are a few special equations where $X^\vee$ is a plane which require extra treatment. Two of them were contained in part one of this paper \cite{cat21}: \begin{enumerate} \item[(1)] $ X = \{ (x,z) \mid z^2 Q(x) + B(x)=0\}$ ($a=0$) is the case where there is a singular point $P$ ($x=0$) such that projection with centre $P$ is inseparable. \item[(2)] $Q(x)$ is the square of a linear form (see Proposition \ref{special}). \end{enumerate} One more equation will appear in Proposition \ref{insep}. Together with Lemma \ref{dp}, they will suffice to prove the instrumental fact that, with at least 13 singular points, the minimal resolution $S$ of $X$ is a K3 surface (Proposition \ref{cor:9}). \medskip The curve $\{ B(x)=0\}$ obtained from equation \eqref{eq:plane} is a plane quartic curve, and we want now to establish some simple properties of plane quartic curves which will be relevant for our purposes. \subsection{The strange points of a plane curve of even degree in characteristic $2$} We define here, as in \cite{cat21}, the strange points of a plane curve $\{ B(x) = 0\}\subset\ensuremath{\mathbb{P}}^2$ to be the points outside the curve where the gradient $\nabla B$ vanishes.
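As a quick computational illustration of this notion (a self-contained sketch of ours, with hand-rolled arithmetic in $\mathbb{F}_8=\mathbb{F}_2[t]/(t^3+t+1)$), one can enumerate the strange points of the Klein quartic $B_0=x_1^3x_2+x_2^3x_3+x_3^3x_1$ appearing below: in characteristic $2$ its partial derivatives are $B_1=x_1^2x_2+x_3^3$, $B_2=x_1^3+x_2^2x_3$, $B_3=x_2^3+x_3^2x_1$, and all $7$ strange points turn out to be rational over $\mathbb{F}_8$.
\begin{verbatim}
def mul(a, b):            # multiply in F_8 = F_2[t]/(t^3 + t + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011   # reduce modulo t^3 + t + 1
        b >>= 1
    return r

def pw(a, n):             # a**n in F_8
    r = 1
    for _ in range(n):
        r = mul(r, a)
    return r

def grad(x1, x2, x3):     # characteristic-2 gradient of B_0
    B1 = mul(pw(x1, 2), x2) ^ pw(x3, 3)
    B2 = pw(x1, 3) ^ mul(pw(x2, 2), x3)
    B3 = pw(x2, 3) ^ mul(pw(x3, 2), x1)
    return (B1, B2, B3)

# projective representatives over F_8: (1,a,b), (0,1,b), (0,0,1)
pts = [(1, a, b) for a in range(8) for b in range(8)]
pts += [(0, 1, b) for b in range(8)] + [(0, 0, 1)]
strange = [p for p in pts if grad(*p) == (0, 0, 0)]
print(len(strange))       # 7, matching (d-1)(d-2)+1 for d = 4
\end{verbatim}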
We have seen in Part I (\cite{cat21}) for the case of a general plane quartic curve $\{ B(x) = 0\}\subset\ensuremath{\mathbb{P}}^2$: \begin{prop}\label{7} For a homogeneous quartic polynomial $B \in K [x_1,x_2,x_3]_4$ let $\Sigma$ be the critical locus of $B$, the locus where the gradient $\nabla B$ vanishes. If $\Sigma$ is a finite set, then it consists of at most 7 points. For $B$ general, $\Sigma$ consists of exactly 7 reduced points. \end{prop} The above result does indeed nicely extend to the case of a homogeneous polynomial of even degree $B \in K [x_1,x_2,x_3]_{2m}$. As noticed in \cite{cat21}, because of the Euler identity $$x_1 B_1 + x_2 B_2 + x_3 B_3 =0$$ among partial derivatives, taking coordinates such that the line $\{ x_3=0 \}$ does not intersect $\Sigma$, it follows that $$\Sigma = \{ x \mid B_1(x)= B_2(x)=0, x_3 \neq 0\}.$$ For a polynomial of the Klein form $$ B_0 : = x_1^{2m-1} x_2 + x_2^{2m-1} x_3 + x_3^{2m-1} x_1,$$ the critical scheme is defined by $$ x_i^{2m-1} = x_{i+1}^{2m-2} x_{i+2} \;\; \Longrightarrow \;\; x = (\e, 1, \e^{2m-1}), \ \e^{(2m-2)(2m-1) + 1} = 1.$$ Letting $s$ be the general number of strange points ($s: = |\Sigma|$), we have therefore that, setting $d = 2m$, $s$ lies in the interval $$ (d-1)(d-2)+ 1= (d-1)^2 - (d-2) \leq s \leq (d-1)^2 .$$ \begin{prop} The number of strange points $s: = |\Sigma|$ ($\Sigma : = \{ \nabla B(x)=0\}$) of a general homogeneous polynomial $B(x_1, x_2,x_3)$ of even degree $d$ is equal to $ s = (d-1)(d-2)+ 1 $. Whenever the subscheme $\Sigma$ is finite, its length equals $s$. \end{prop} \begin{proof} To show this, two steps suffice: I) if $\Sigma$ is finite, then the scheme $ \{ x \mid B_1(x)= B_2(x)=0\}$ is finite for general choice of coordinates, and $\Sigma \cap \{ x_3=0\}$ is empty in general; II) the subscheme $ \{ x| B_1(x)= B_2(x)= x_3 =0\}$, if we write $$ B = x_3 B' + \be ( x_1, x_2) + q(x_1, x_2)^2, \ \be ( x_1, x_2) = \sum_{n \ \text{odd}} a_n x_1^{n} x_2^{2m-n},$$ equals the subscheme $$ \left\{ x\, \middle\vert \, x_3 = \sum_{n \ \text{odd}} a_n x_1^{n-1} x_2^{2m-n} = \sum_{n \ \text{odd}} a_n x_1^n x_2^{2m-n-1} =0 \right\} = $$ $$ \left\{ x \, \middle\vert \, x_3 = x_2 \left( \sum_{n \ \text{odd}} a_n x_1^{n-1} x_2^{2m-n-1}\right) = x_1 \left( \sum_{n \ \text{odd}} a_{n} x_1^{n-1} x_2^{2m-n-1}\right) =0 \right\}.$$ I) holds since changing variables we get that the new partials are general linear combinations of the old partials: if $\Sigma$ is finite then we can keep $B_1$ fixed and vary $B_2$ so that it has no common factor with $B_1$; hence the result holds for general choice of linear coordinates, and the rest is obvious. Our result follows then from I) and II), since then the scheme $\Sigma$ is disjoint from the length $(d-2)$ scheme $$\left\{ x_3= \sum_{n \ \text{odd}} a_{n} x_1^{n-1} x_2^{2m-n-1} =0 \right\}, $$ and we conclude since their union is the complete intersection subscheme $ \{ x| B_1(x)= B_2(x)=0\}$, which has length $(d-1)^2$. \end{proof} \begin{rem} A more general result (also valid in other characteristics) is contained in Theorem 2.4 of \cite{liedtkecan}, whose formulation, however, mentions neither derivatives nor critical sets. \end{rem} \subsection{Supersingular quartics with $7$ $A_3$-singularities} A first immediate consequence of the previous result is: \begin{cor}\label{A3} For $B$ a homogeneous polynomial $B \in K [x_1,x_2,x_3]_4$, a normal quartic surface of the form: $$ X : = \{ (x,z) \mid z^4 + B(x) = 0 \}$$ has at most $7$ singular points.
If $B$ is general, $X$ has 7 $A_3$-singularities. \end{cor} \begin{proof} The singular points of $X$ are in bijection with the critical set $\Sigma$ of $B$, which consists of $7$ reduced points for $B$ general. Hence at these points there are local coordinates $u,v$ such that $ B = a^4 + uv$; the local equation of $X$ is then $ (z+a)^4 = uv$, and we have an $A_3$-singularity. \end{proof} \subsection{Quartics with $14$ nodes blowing up to $7$ lines in the plane under the Gauss map} A second immediate consequence concerns the quartics with dual surface equal to a plane, and with a singular point such that the second order term $Q$ of the Taylor development (see formula \eqref{eq:plane}) is equal to the square of a linear form. \begin{prop}\label{special} Consider a normal quartic of equation $$ X = \{ (x,z) \mid z^4 + z^2 x_1^2 + B(x)=0\},$$ where $B$ is a homogeneous polynomial of degree $4$. Then $X$ has at most $14$ singular points, inverse image of the (at most $7$) points in the plane where $\nabla B(x)=0$. For general choice of $B$, $X$ has exactly $14$ nodes as singularities. The Gauss map $\ga$ is inseparable: it factors through the projection $ (x,z) \mapsto x$ and a degree two map $\ensuremath{\mathbb{P}}^2 \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$, $ x \mapsto \nabla B(x)$. In particular, if $X$ has $14$ singular points, these are nodes. \end{prop} \begin{proof} The Gauss map is given by $$ \ga(x,z) = (\nabla B(x), 0).$$ The singular points are the inverse image of the critical locus of $B$, $\Sigma = \{ x \mid \nabla B(x)=0\}$. $\Sigma$ consists of at most $7$ points by Proposition \ref{7}. For general $B$ we get $7$ reduced points, and since $z^2$ is a root of a quadratic polynomial with derivative $x_1^2$, if the line $\{ x_1=0\}$ does not meet the locus $\Sigma $, we get $14$ nodes as singularities. Observe finally that $ x \mapsto \nabla B(x)$ has degree $2$ since the base locus consists of the length $7$ subscheme $\Sigma$. The last assertion follows now easily from the fact that the Gauss map has degree $8$, and from the Gauss estimate \eqref{eq:deg} of Proposition \ref{gaussestimate}. \end{proof} \subsection{Quartics with inseparable projection from one node} This is another specialization, corresponding to the case $a=0$ in \eqref{eq:plane}, but where the conic $Q$ is smooth. \begin{prop}\label{insep} Consider a quartic of equation $$ X = \{ (x,z) \mid z^2 (x_1 x_2 + x_3^2 )+ B(x)=0\},$$ where $B$ is a homogeneous polynomial of degree $4$. The Gauss map $\ga$ is inseparable: it factors through the degree two projection $ (x,z) \mapsto x$ and a degree four map $\ensuremath{\mathbb{P}}^2 \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$. $X$ has at most $14$ singularities: the node $P = \{x=0\}$, and the inverse image of a $0$-dimensional subscheme of the plane of length $13$.
If $X$ has $14$ singularities, these are nodes; and, for general choice of $B$, $X$ has $14$ nodes as singularities. \end{prop} \begin{proof} The Gauss map is given by $$ \ga(x,z) = (z^2 x_2 + B_1, z^2 x_1 + B_2, B_3).$$ Multiplying by $ Q = (x_1 x_2 + x_3^2 )$ and using the equation of $X$, we get that $$ \ga(x,z) = \ga'(x) : = ( B x_2 + B_1 Q, B x_1 + B_2 Q, B_3 Q).$$ The base point scheme of $\ga'$ in the plane consists of $\{Q=B=0\}$, which is a length $8$ subscheme and in general consists of $8$ reduced points, and, since outside of this subscheme we may assume that $Q(x)\neq 0$ (since $x_1= x_2 = Q=0$ has no solutions), of the locus $$\ensuremath{\mathcal{S}} : = \{ B x_2 + B_1 Q= B x_1 + B_2 Q= B_3 =0\}.$$ We observe now that every quartic polynomial can be uniquely written as the sum of a square $q^2$ plus a polynomial of the special form below $$ B' : = \sum_{i\neq j} b_{ij}x_i^3 x_j + \sum_i c_i x_i x_1x_2x_3.$$ Then, working modulo $(x_3)$, we get: $$ B' \equiv b_{12} x_1^3 x_2 + b_{21} x_2^3 x_1, \;\;\; B'_2 \equiv b_{12} x_1^3 + b_{21} x_2^2 x_1,$$ $$B'_1 \equiv b_{12} x_1^2 x_2 + b_{21} x_2^3.$$ Hence $ x_1 B_1' \equiv x_2 B_2' \equiv B' \;(\mathrm{mod} \ x_3)$. Consider the subscheme $$\sL : = \{ x_3 = B x_2 + B_1 Q= B x_1 + B_2 Q= 0\}.$$ For $B = B'$ we get $\sL = \{ x_3 =0\}$, because $ Q = x_1 x_2 + x_3^2$. If we now add to $B'$ the square of a quadratic form $q^2$, $\sL$ coincides with $$ \{ x_3 = q^2 x_2 = q^2 x_1 = 0\} = \{ x_3 = q^2=0\}.$$ This is a subscheme of length equal to $4$. Our subscheme $\ensuremath{\mathcal{S}}$ is the residual scheme with respect to the above length $4$ scheme $\sL$ of the scheme $$\sH \sB : = \{ B x_2 + B_1 Q= B x_1 + B_2 Q= x_3 B_3 =0\}.$$ $\sH \sB $ is a Hilbert-Burch Cohen-Macaulay subscheme of codimension $2$, corresponding to the $2 \times 3$ matrix with rows $ ( x_1, x_2, Q)$ and $(B_2, B_1, B)$. Since the ideal $\sI$ of the subscheme has a resolution $$ 0 \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2} (-6) \oplus \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2} (-8) \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2} (-5)^2 \oplus \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2} (-4) \ensuremath{\rightarrow} \sI \ensuremath{\rightarrow} 0,$$ an easy Chern class computation shows that the length of $\sH \sB$ is $17$. Varying $q$, the scheme $\sL$ is, for general $q$, disjoint from $\ensuremath{\mathcal{S}}$; hence we conclude that the length of $\ensuremath{\mathcal{S}}$ is $17-4=13$. The degree of the Gauss map is then $ 2 (25 - 8 - 13)= 8$; hence, by the Gauss estimate \eqref{eq:deg} of Proposition \ref{gaussestimate} we get $ 28 \geq 2 \nu + b + 6u$, hence for $\nu =14$ we obtain $14$ nodes. That the subscheme $\ensuremath{\mathcal{S}}$ consists in general of $13$ distinct points follows from the examples given in \cite{cat21}, step IV of proposition 3. \end{proof} \subsection{A $24$-dimensional family of quartics with $14$ nodes} We pass now to the general case, where $a\neq 0$, and $Q$ is not a double line. \begin{theo}\label{dual=plane} Let $K$ be an algebraically closed field of characteristic $2$, and let $X \subset \ensuremath{\mathbb{P}}^3_K$ be a general quartic hypersurface such that the dual variety is a plane. Then $X$ has $14$ nodes as singularities and is unirational, hence supersingular. These quartic surfaces form an irreducible component, of dimension $24$, of the variety of quartics with $14$ nodes.
\end{theo} \begin{proof} The condition that the dual variety is a plane is equivalent to the existence of coordinates $(x,z)$ ($x= (x_1, x_2,x_3)$) such that $$ X = \{ (x,z) \mid a z^4 + z^2 Q(x) + B(x)=0\}.$$ Since we have already dealt with the special case $a=0$, and with the cases $Q=0$ or the square of a linear form, let us assume that $$a=1, \;\; Q(x) = x_1 x_2 + \la x_3^2.$$ The Gauss map is given by $$ \ga(x,z) = z^2 \nabla Q + \nabla B= ( z^2 x_2 + B_1, z^2 x_1 + B_2, B_3).$$ Hence for the singular points $B_3(x)=0$, which implies $x_1 B_1 + x_2 B_2=0$. Therefore, for the singular points we have $$ z^2 = \frac{B_1}{x_2} = \frac{B_2}{x_1}.$$ More precisely, if we have a point $x \in \ensuremath{\mathbb{P}}^2$ such that $B_3(x)=0$, and $x_2 \neq 0$, necessarily we have $ z^2 = \frac{B_1}{x_2} $ and we have a singular point if the equation of $X$ is satisfied, namely if $$ z^4 + z^2 Q(x) + B(x)=0 \;\; \Longleftrightarrow \;\; B_1^2 + B_1 x_2 Q + B x_2^2=0.$$ An easy calculation shows that $$B_3 = b_{32} x_3^2 x_2 + b_{31} x_3^2 x_1+ b_{13} x_1^3 + b_{23} x_2^3 + c_1 x_1^2 x_2 + c_2 x_2^2 x_1,$$ which is in the ideal $(x_1, x_2)$ but does not, in general, vanish at either of the points $\{x_1=x_3=0\}$, $\{x_2=x_3=0\}$. We look now at the points $x$ where $$B_3=B_1^2 + B_1 x_2 Q + B x_2^2= x_2 = 0 \;\; \Longleftrightarrow \;\; B_3= x_2=B_1= 0 :$$ these are contained in the set $\{x_2= b_{31} x_3^2 x_1+ b_{13} x_1^3 = 0\}$, which consists of the point $P'':=\{x_2=x_1=0\}$, and the point $ P' : = \{x_2= b_{31} x_3^2 + b_{13} x_1^2 = 0\}$. Since $$B_1 = b_{12} x_1^2 x_2 + b_{13} x_1^2 x_3+ b_{31} x_3^3 + b_{21} x_2^3 + c_3 x_3^2 x_2 + c_2 x_2^2 x_3,$$ $B_1$ does not in general vanish at $P''$, but it does vanish at $P'$. At the point $P'$, for general choice of $B$, $x_1\neq 0$, hence if $P'$ were to correspond to a singular point of $X$, we would have $$ z^2 = \frac{B_2}{x_1} \Rightarrow B_2^2 + B_2 x_1 Q + B x_1^2 = 0. $$ But this expression does not in general vanish at $P'$, since $x_1 \neq 0$, and since we can add to $B$ the square of a quadratic form $q(x)$ without affecting the partial derivatives. The number of singular points of $X$ is then equal, by the B\'ezout theorem, to the difference between $18$ and the intersection multiplicity of $B_3$ and $B_1^2 + B_1 x_2 Q + B x_2^2$ at the point $P'$. In the special case $ B = x_1^3 x_2 + x_2^3 x_3 + x_3^3 x_1 + q^2$, we get the point $P'=\{x_2=x_3=0\}$, and $$B_3 = x_3^2 x_1 + x_2^3,\;\; B_1 = x_1^2 x_2 + x_3^3 .$$ The curve $\{ B_3=0\}$ has a cusp with tangent $\{x_3=0\}$, so that $x_3$ has order $3$, $x_2$ has order $2$, hence $B_1^2 + B_1 x_2 Q + B x_2^2$ has order equal to $4$ for general choice of $q$. By semicontinuity the intersection multiplicity is in general at most $4$, hence the `number' of singular points of $X$ is at least $18-4=14$. But in the special case of proposition \ref{special} we have exactly $14$ nodes, so $14$ points counted with multiplicity $1$; hence by semicontinuity in the other direction we have in general exactly $14$ nodes. That $X$ is unirational, hence supersingular by \cite{shiodass}, follows since $X$ is an inseparable double cover of the surface $$ Y = \{ (x,w) \mid a w^2 + w Q(x) + B(x)=0\} \subset \ensuremath{\mathbb{P}}(1,1,1,2).$$ $Y$ has degree $4$, hence $\omega_Y = \ensuremath{\mathcal{O}}_Y (-1)$ and $Y$ is a Del Pezzo surface, hence rational.
The dimensionality assertion follows by a simple parameter counting, $1 + 6 + 15-1=21$ parameters for the above polynomial equations, plus $3$ parameters for the plane $X^{\vee}$, which in the chosen equations is the plane $\{z=0\}$. {Finally, consider the surface $$ X_0 : = \{ (x,z) \mid z^4 + z^2 l(x)^2 + B_0(x)=0\}, \;\; B_0 = x_1^3 x_2 + x_2^3 x_3 + x_3^3 x_1, $$ and consider the deformations obtained by adding to the equation of $X_0$ a polynomial $$ f : = z \sum_{i=1}^7 \la_i G(i)(x) + z^3 \sum_{j=1}^3 \mu_j L_j(x),$$ where $G(1), \dots, G(7)$ are polynomials of degree $3$ such that $G(i)$ is vanishing at exactly all the critical points of $B_0$ except the $i$-th point $P_i$, and the linear forms $L_j(x)$ vanish on the points $P_i, 1 \leq i \leq 3, \ i \neq j.$ The polynomial $f$ belongs to a $10$-dimensional vector subspace, and we shall show now that we get independent smoothings of ten of the nodes: one for each of the pairs of singular points $P_i', P_i''$ lying over $P_i$, for $ i=4,5,6,7$, and two over each $P_i$ for $i=1,2,3$. Then, if we choose one of the two singular points $P_i', P_i''$ lying over $P_i$, for $ i=4,5,6,7$, say $P'_i$, the map to the local deformation space of the singularity is of the form (in local coordinates $u, v, \zeta : = (z + z'_i)$ such that $B = uv + {\rm constant}$) $$uv + (z+z'_i)^2 + \la_i z + z \sum_{j=1}^3 \mu_j L_j(P_i) (z'_i)^2,$$ since $z^3 = (\zeta + z_i')^3 \equiv z (z'_i)^2 \ (\mathrm{mod} \ \zeta^2)$; whereas for $j=1,2,3$ the map is given by $$uv + (z+z'_j)^2 + \la_j z + z \mu_j (z'_j)^2,$$ respectively by $$uv + (z+z''_j)^2 + \la_j z + z \mu_j (z''_j)^2.$$ Observe that, if we have a node of equation $ uv + \zeta^2=0$, the local deformations are of the form $$ uv + \zeta^2 + c_0 + c_1 \zeta=0,$$ and we have a smoothing iff $c_1 \neq 0$. It is easily seen that the deformation yields ten independent smoothings of the ten nodes $P_1', \dots, P_7', P_1'', P_2'', P_3''$, hence it follows that the variety of quartics with $14$ nodes, at the point $X_0$, has Zariski tangent space of codimension at least $10$ in the space of all quartics. Since the space of all quartics has dimension $34$, and our family is irreducible of codimension $10$, it follows that at the point $X_0$ our family coincides with the variety of quartics with $14$ nodes, and our family is a component of this variety. } \end{proof} \begin{rem} Since our family yields a dimension 9 locus in the moduli space, we have found an irreducible component of the moduli space of supersingular K3 surface with a quasi-polarization of degree $4$. This may be compared to Shimada's results on double sextics where there is an irreducible component with 21 nodes \cite{Shimada}. \end{rem} \subsection{When the minimal resolution is a K3 surface} Concerning the degree of the Gauss map, which is in the above situation generally equal to 8, we have a weaker result, which is sufficient, as we will see in Proposition \ref{cor:9}, for the purpose of showing that the minimal resolution of $X$ is always a K3 surface if the number of singular points is at least 13. Equivalently, all singularities are rational double points. \begin{lemma}\label{dp} Assume that the normal quartic $X$ has the following equation $$ X = \{ (x,z) \; | \; z^4 + z^2 Q(x) + B(x)=0\},$$ where the quadratic form $Q$ is not the square of a linear form. Then the degree of the Gauss map is at least 4 or $X$ has at most 12 singular points. 
\end{lemma} \begin{proof} We use again the normal form where $ Q(x) = x_1 x_2 + \la x_3^2, \ \la \in \{0,1\}$. The Gauss map factors through the inseparable double cover (setting $w:= z^2$) of the Del Pezzo surface $Y$ of degree $2$ in $\ensuremath{\mathbb{P}}(1,1,1,2)$, such that $ \omega_Y = \ensuremath{\mathcal{O}}_Y(-1)$. The projection to the $\ensuremath{\mathbb{P}}^2$ with coordinates $x$ and the Gauss map to the plane with coordinates $y$ induce a birational embedding of $Y$ in $\ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^2$, since $y = \ga (x,w) = ( w x_2 + B_1, w x_1 + B_2, B_3)$, hence $$ y_1/y_3 = (w x_2 + B_1)/B_3 \Rightarrow w = (B_3/x_2 ) ( y_1/y_3 + B_1/B_3).$$ The image lands, as it is immediate to verify, in the flag manifold $\ensuremath{\mathbb{F}}$, a smooth divisor of bitype (1,1) $$ \ensuremath{\mathbb{F}} = \left\{ (x,y) \,\middle\vert \, \sum_i x_i y_i =0\right\},$$ and inside $\ensuremath{\mathbb{F}}$ the image $Z$ of $Y$ is a divisor of bitype $(d,2)$ where $2d$ is the degree of the Gauss map. We want to show that $ d >1$. By adjunction the dualizing sheaf $\omega_Z$ of $Z$ is a divisor of bitype $(d-2, 0)$. whereas the canonical system of $Y$ corresponds to a divisor of bitype (-1, 0). The crucial observation is that, if $d=1$, then these two divisors coincide. $Y$ has a rational map to $Z$ and composing with the first projection we get a morphism, while composing with the second projection we get the blow up of some points. Let $Y'$ be the blow up of $Y$, such that $\pi : Y' \ensuremath{\rightarrow} Z$ is a birational morphism. Also the second projection $ p : Z \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$ is a birational morphism, moreover the fibres of $p$ are contained in the fibres of $ p : \ensuremath{\mathbb{F}} \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$, which are isomorphic to $\ensuremath{\mathbb{P}}^1$. We blow up the points of $\ensuremath{\mathbb{P}}^2$ where the fibre of $ p : Z \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$ has dimension 1, obtaining $Z'$. Then we get a factorization $ Z \ensuremath{\rightarrow} Z' \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$. Since $Z'$ is smooth, and $ Z \ensuremath{\rightarrow} Z'$ is finite and birational, follows that $ Z \cong Z'$ and $Z$ is smooth. Now $Z$ and $Y$ are birational normal Del Pezzo surfaces, and for both the anticanonical divisor is the pull back of $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(1)$ under the first projection (to the $\ensuremath{\mathbb{P}}^2$ with coordinates $(x)$). The first projection $\phi : Z \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^2$ has degree two and is either finite, or its fibres are isomorphic to $\ensuremath{\mathbb{P}}^1$. By normality we have a birational morphism $ \psi : Z \ensuremath{\rightarrow} Y$. In the first case $\psi$ is an isomorphism, in the second case it is a minimal resolution of singularites. And since the fibres are smooth rational curves with normal bundle of degree $-2$, then the corresponding singularities of $Y$ are nodes. This shows that $d=1$ is only possible if there are no singular points of $X$ which do not map to singular points of $Y$, and the latter are nodes. Since the singularities of $Y$ correspond to the singularities of $X$ for which $ Q(x)=0$, we see that all the singular points of $X$ satisfy $ Q(x)=0$. 
Since the singular points of $Y$ are defined by $ Q(x)=0$ and by 3 equations of degree $3$, it follows that there is a linear combination $B'(w,x) $ of these 3 equations such that the singular points of $Y$ are contained in the finite set defined by $ Q(x)= B'(w,x)=0$. Since $\ensuremath{\mathcal{O}}_Y(1)$ has self-intersection equal to 2, $Y$ has at most 12 singular points. \end{proof} \begin{prop}\label{13} \label{cor:9} If $2 \nu > 28 - \deg (\ga) \deg (X^{\vee}) $, then all singularities of $X$ are rational double points, and the minimal resolution $S$ is a K3 surface. In particular, this holds for $\nu\geq 13$. \end{prop} \begin{proof} The first statement follows directly from combining Propositions \ref{gaussestimate} and \ref{9}. Let's deal with the second assertion. If $X^\vee$ is not a plane, then, by Proposition \ref{gaussestimate} (IV), $\deg (\ga) \deg (X^{\vee}) \geq 3$ and we are done. Hence we may assume that $X^\vee$ is a plane. By Lemma \ref{dp}, Propositions \ref{special} and \ref{insep}, either the number of singular points is at most 12, or $ \deg(\ga) \deg (X^{\vee}) \geq 4$, or we are in the cases where $$ X = \{ (x,z) | z^2 x_1^2 + B(x)=0\},$$ or $$ X = \{ (x,z) | z^2 x_1x_2 + B(x)=0\}.$$ The former case was dealt in Step I of Proposition 3 of Part I, showing that $X$ has at most 8 singular points, and in this case Example 10 ibidem shows that $ \deg (\ga) \geq 4$. In the latter case Step II of Proposition 3 of Part I shows that $X$ has at most 13 singular points; and that it has exactly 13 points only if it has 12 nodes (corresponding to the points of the plane where $ B_3= B_1 x_1 + B=0$), and a biplanar singular point (at $x=0$): hence also in this case the minimal resolution is a K3 surface. \end{proof} The following result improves upon part (V) of Proposition \ref{gaussestimate}. \begin{cor}\label{14} If $\nu\geq 14$ all the singularities are either nodes or biplanar double points. \end{cor} \begin{proof} Recall the basic inequality $$ 36 - \deg(\ga) \deg(X^{\vee} ) \geq 2 \nu + b + 6 u .$$ We are claiming $u=0$ if $\nu \geq 14$, hence it suffices to recall that we saw in the previous proposition that $ \deg(\ga) \deg(X^{\vee}) \geq 3$. \end{proof} \section{Proof of the main theorem \ref{theo} -- general bound } \label{s:proof} Throughout this section until \ref{ss:aux}, we assume that $X$ is a normal quartic surface with $\nu \geq 15$ singular points in order to establish a contradiction and prove the general bound of Theorem \ref{theo}. We use the following result which will follow from Propositions \ref{prop:>14} and \ref{prop:cusps} (to be proven in Section \ref{s:quasi} using the theory of elliptic and quasi-elliptic fibrations on K3 surfaces). \bigskip \begin{main-claim} \label{main-claim} If $X$ has $\nu \geq 15$ singular points, then, for each pair $P_i, P_j$ of singular points of $X$, the line $L_{ij}^{\vee}$ dual to $L_{ij} : = \overline{P_i P_j}$ is contained in the dual surface $X^{\vee}$. \end{main-claim} \bigskip \subsection{The main claim implies the general bound of Theorem \ref{theo}} \label{ss:pf-thm} \smallskip It will suffice to show that: \begin{claim} \label{claim:skew} In the above setting, $X^{\vee}$ contains two skew lines and $7$ distinct coplanar lines. \end{claim} Indeed the claim implies that $X^{\vee}$ is a surface of degree $\geq 7$, and by the Gauss map estimate \ref{eq:deg} of Proposition \ref{gaussestimate} we have $$ 36 - 7 \deg(\ga) \geq 2 \nu + b + 6 u,$$ hence $\nu \leq 14$ as announced. 
\hspace*{\fill}$q.e.d.$ \subsection{Proof of Claim \ref{claim:skew}} \label{ss:pfofclaim} We observe first that if a line $L_{ij}$ passes through a third singular point $P$ of $X$, then it is contained in $X$, and the planes $H \supset L_{ij}$ cut $X$ in the line $L_{ij}$ plus a cubic $C$ meeting $L_{ij}$ in the three points $P, P_i, P_j$. Hence there cannot be $4$ collinear singular points: because then $C$ would contain $L_{ij}$ and $L_{ij} \subset$ Sing$(X)$, contradicting the normality of $X$. We show now that each plane contains at most $6$ singular points of $X$. In fact, if the plane is the plane $z=0$, and the equation of $X$ is $$B(x) + z G(x) \mod ( z^2 ),$$ the singular points on the plane are the solutions of $$ z = \nabla B(x)= B(x) = G(x)=0.$$ A reduced plane quartic has at most $6$ singular points. If the quartic is non-reduced, and $B(x) = q(x)^2$, then the singular points are the solutions of $ z =q(x) = G(x)=0$ and they are at most $6$ by the theorem of B\'ezout and since $X$ is normal. The case where $\{ x \mid B(x)=0\}$ consists of a double line and a reduced conic leads to at most one singular point outside the line, hence at most $4$ singular points in the plane. Whence, if $\nu \geq 7$, there are $4$ linearly independent singular points of $X$, and we have found two skew lines $L_{ij}, L_{hk}$: likewise the dual lines are skew. Assume now that $\nu \geq 15$ and consider all the lines of the form $L_{1j}$: these are at least $7$, since at most 3 singular points are collinear, and the dual lines are contained in the plane dual to the point $P_1$. \hspace*{\fill}$q.e.d.$ \subsection{Propositions \ref{prop:>14} and \ref{prop:cusps} imply the Main Claim \ref{main-claim}} \label{ss:main-claim} Since we assume $\nu\geq 15$, we can apply Proposition \ref{prop:>14} to show that each pair $(P_i, P_j)$ induces a quasi-elliptic fibration. By the degree estimate in Proposition \ref{gaussestimate}, all singularities are nodes or biplanar double points, so Proposition \ref{prop:cusps} proves that the pencil of planes containing $L_{ij}$ yields a line $L_{ij}^{\vee}$ contained in $X^{\vee}$. \hspace*{\fill}$q.e.d.$ \subsection{Auxiliary results} \label{ss:aux} We establish here, with similar arguments, two easy results for later use. To this end, we distinguish whether two given singular points $P_1, P_2$ are collinear with a third singularity or not (in the latter case we call $P_1, P_2$ companions). Recall that in the first case, the line $L=\overline{P_1P_2}$ is contained in $X$, and each plane containing $L$ contains at most 6 singularities. \begin{lemma} \label{lem:companions} If $\nu\geq 9$, then there is a singular point with two companions. \end{lemma} \begin{proof} Assume to the contrary that each singularity has at most one companion. Take a point $P_1$ and three collinear pairs, say $P_1 , P_2 , P_3 \in L$ $P_1 , P_4 , P_5 \in L_1$, $P_1 , P_7, P_8 \in L'$. Let $H$ be the plane containing $P_1, P_2 , P_3, P_4 , P_5,$ and let $H'$ be the plane containing $P_1, P_2 , P_3, P_7 , P_8$. These are different, since each plane contains at most $6$ singular points. By assumption, we may assume without loss of generality that $P_4$ is not companion of $P_2$, hence there is $P_6$ collinear with $P_2, P_4$, so that $P_2,P_4,P_6 \in L_2\subset X$. At this stage we have obtained $6$ singular points (the maximum) in the plane $H$, and we observe that $P_3$ is not companion of $P_4$ or $P_5$. 
Hence we get 4 lines $$X \cap H = L+L_1+L_2+L_3 ,$$ where $L_3$ must contain the singular points $P_3, P_6$ and thus also $P_5$. Thereby we reach the conclusion that $ P_1$ is companion of $P_6$. We establish now a contradiction as follows. Playing the same game for the other plane $H'$, we find another companion of $P_1$, call it $P_9$. Since $P_9 \in H' $, while $P_6\not\in H' $ (since $ H \cap H' = L $) we have found two different companions for $P_1$, and we have reached a contradiction. \end{proof} \begin{prop} \label{prop:deg8} If $X$ has $\nu =14$ singular points and, for each pair $P_i, P_j$ of singular points of $X$, the pencil of planes containing the line $L_{ij} = \overline{P_i P_j}$ yields a line $L_{ij}^{\vee}$ contained in the dual surface $X^{\vee}$, then the degree of the dual surface is at least $8$. In particular, the singular points are just $14$ nodes. \end{prop} \begin{proof} By the Gauss estimate it suffices to show the first assertion, and since $X^{\vee}$ contains at least two skew lines, it suffices to show that it contains at least $8$ coplanar lines. But this follows from Lemma \ref{lem:companions} as there is a singular point $P_1$ on $X$ collinear with at most 5 pairs of singularities, thus companion to at least 3, yielding a total number of at least 8 coplanar lines on $X^\vee$. \end{proof} We will use Proposition \ref{prop:deg8} later, and we observe that a weaker form suffices, where there is one point $P_1$ with the property of the proposition holding for all lines $\overline{P_1 P_j}$, if $X^\vee$ contains a line skew to (one of) the 8 dual lines from $P_1$. \section{Genus one fibrations} \label{s:g=1} We shall now invoke some results from the theory of genus one fibrations on K3 surfaces in order to achieve the proof of Propositions \ref{prop:>14} and \ref{prop:cusps}. These will also be used for the proof of the other parts of Theorem \ref{theo}. \bigskip Let $X$ be a projective K3 surface. {Let $L \in\Pic(X)$ be a divisor class with $L^2 \geq -2$; then, by Riemann-Roch, $\chi(L) \geq 1$ hence $L$ or $-L$ is effective. Hence let us assume that $ L $ is linearly equivalent to an effective divisor $D$. If $D^2 =0$, then the linear system $|D|$ has dimension $\geq 1$, and we can write $ |D| = |M| + \Psi$, where $\Psi$ is the fixed part. Clearly then $\Psi = \sum_i E_i$ where each $E_i$ is an irreducible curve with $E_i^2=-2$. Since \begin{eqnarray} \label{eq:0=} 0 = D^2 = M^2 + D \Psi + M \Psi , \ M^2 \geq 0, \ M \Psi \geq 0, \end{eqnarray} we have $ D \Psi < 0$, or $\Psi=0$. Because, if $ D \Psi \geq 0$ and $\Psi>0$, then \eqref{eq:0=} implies $M^2 = D \Psi = M \Psi =0$, hence $\Psi^2 =0 \Rightarrow \Psi=0$, the intersection form being negative definite by Zariski's lemma on the divisor $\Psi$: because $\Psi$ is contained in the fibres of the fibration associated to $|M|$, there are no multiple fibres, and $\Psi$ does not contain any full fibre (else, it would not be the fixed part). The conclusion is that either $|D|$ has no fixed part or there is $E_1$ such that $ D E_1 < 0$, hence reflection in the $(-2)$-curve $E_1$ produces a new divisor class $$D'' : = D + (D E_1) E_1$$ such that $(D'')^2=0$. The system $|D''|$ has dimension $\geq 1$, and since the degree of $D''$ is smaller than the degree of $D$, the process terminates producing a base point free system $|D'|$, with $(D')^2=0$, hence $|D'|$ is a pencil of genus $1$ curves. 
We may also assume that $D'$ is primitive, so that $D'$ is indeed a fibre of a fibration $ f : X \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^1$. \medskip If the general fibre is smooth, we call the fibration elliptic and we may further distinguish whether the fibration admits a section or not.} In characteristics $2$ and $3$, however, the general fibre may also be a cuspidal cubic curve whence the fibration is called quasi-elliptic. Examples are given by sparse Weierstrass forms; more precisely, in terms of the general equation \eqref{eq:WF} which shall be recalled later, those forms which do not contain terms linear in $y$ (in characteristic $2$) or all of whose terms have degree 0 or 3 in $x$ (in characteristic $3$). In particular, quasi-elliptic surfaces over $\ensuremath{\mathbb{P}}^1$ are unirational and thus supersingular ($\rho=b_2$) by \cite{shiodass} which makes them quite special (see \cite{rudakov-shafarevich}, for instance). \begin{rem} \label{rem:-2} Any $(-2)$ curve $C$ on $X$ which is perpendicular to $D'$ features as a fibre component of $|D'|$ (but the analogous statement for $(-2)$-curves orthogonal to $D$ is surprisingly subtle in case there is some base locus involved. We will come back to this problem in part III. \end{rem} \begin{rem} \label{rem:pencil} In general, given an effective divisor $D$ with $D^2=0$, $|D|$ need not be a pencil, the easiest example being the one where $D$ consists of a genus $2$ curve and a disjoint $(-2)$-curve, that is, $$ D = M + E, M^2 = 2, E^2=-2, M E=0,$$ here $M-E$ gives the desired pencil. A sufficient condition for $\dim |D|=1$ is that the divisor $D$ is numerically connected, that is, any decomposition $ D = A + B$, where $A,B$ are effective, satisfies $ AB \geq 1$. Because, by the exact sequence $$ 0 \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_X \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_X (D) \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_D(D)\ensuremath{\rightarrow} 0$$ we have $H^1 (\ensuremath{\mathcal{O}}_X (D))=0$ unless $h^1 (\ensuremath{\mathcal{O}}_D (D))\geq 2$. Since $h^1 (\ensuremath{\mathcal{O}}_D (D))= h^0 (\ensuremath{\mathcal{O}}_D),$ and $h^0 (\ensuremath{\mathcal{O}}_D)=1$ if $D$ is numerically connected, \cite {franchetta}, \cite{ram}, our claim follows. In this case, $M^2=0$, and $D$ could, for instance, consist of a fibre plus a $(-2)$-curve which is a section, $$ D = M + E, \;\; M^2 = 0, \;\; E^2=-2, \;\; M E=1.$$ \end{rem} \subsection{ Disjoint smooth rational fibre components} For later use, let us record some rather special features of elliptic fibrations in characteristic 2. \begin{prop} \label{lem:12} In characteristic 2, on an elliptic K3 surface the singular fibres contain at most 12 disjoint $(-2)$-curves. \end{prop} At first, this result may seem rather surprising, since usually, i.e.\ outside characteristic $2$, elliptic fibrations allow for as many as 16 disjoint $(-2)$-curves. This happens in the case of 4 fibres of Kodaira type I$^*_0$, each containing 4 disjoint $(-2)$-curves -- for instance, on the Kummer surface of a product of two elliptic curves. \begin{proof} What prevents the same as above to happen in characteristic $2$ is the fact that all additive fibres, except for Kodaira types IV, IV$^*$, come with wild ramification by \cite{SSc}. More precisely, there still is a representation of the Euler-Poincar\'e characteristic of the elliptic K3 surface $X$ as a sum over the fibres: \[ 24 = e(X) = \sum_v (e(F_v) + \delta_v). 
\] Here $\delta_v$ denotes the index of wild ramification, studied in more generality in \cite{Deligne-wild}. On an elliptic surface, it can be computed as the difference of the Euler number $e(F_v)$ and the local multiplicity of the discriminant which is the equation for the singular fibres and may be computed on the Jacobian by \cite[p.348]{CDL}. The bounds for $\delta_v$ in the next table have been taken from \cite[Prop.~5.1]{SSc}. Note that the number of components $m_v$ is the index of the Dynkin type plus one, while, except in the first case, the Euler number is $m_v+1$. The table also collects the maximal number $N_v$ disjoint (-2)-fibre components, to be computed below. \begin{table}[ht!] \begin{tabular}{c||c|c|c|c|c|c|c|c|c} fibre type & I$_n$ & II & III & IV & I$^*_n (n\neq 1)$ & I$^*_1$ & IV$^*$ & III$^*$ & II$^*$\\ \hline \hline Dynkin type & $A_{n-1}$ & $A_0$ & $A_1$ & $A_2$ & $D_{n+4}$ & $D_5$ & $E_6$ & $E_7$ & $E_8$\\ \hline $m_v$ & $n$ & 1 & 2 & 3 & $n+5$ & 6 & 7 & 8 & 9\\ \hline $\delta_v$ & 0 & $\geq 2$ & $\geq 1$ & 0 & $\geq 2$ & 1 & 0 & $\geq 1$ & $\geq 1$\\ \hline $e(F_v)$ & $n$ & 2 & 3 & 4 & $n+6$ & 7 & 8 & 9 & 10\\ \hline $N_v$ & $\lfloor \frac{n}{2}\rfloor$ & 0 & 1 & 1 & $4 + \lfloor \frac{n}{2}\rfloor$ & 4 & 4 & 5 & 5\\ \end{tabular} \end{table} For the convenience of the reader, we also include the dual graphs of the fibres in terms of the extended Dynkin diagrams $\tilde A_n, \tilde D_k\, (k\geq 4)$. For fibre types IV$^*$, III$^*$, II$^*$, we only give the Dynkin diagram $E_l\, (l=6,7,8)$ for sake of a unified presentation. For these types the fibre is obtained by adding another fibre component $e_0$ adjacent to the vertex $e_1$ in case $E_6$, resp.\ $e_2$ in case $E_7$, resp.\ $e_8$ in case $E_8$. In total, the simple fibre components (i.e.\ those having multiplicity 1 in the fibre) are: \begin{enumerate} \item[$\tilde A_n$] all components, \item[$\tilde D_k$] the exterior components, \item[$\tilde E_l$] $e_0, e_2, e_6$ ($l=6$) resp.\ $e_0, e_7$ ($l=7$), resp.\ $e_0$ ($l=8$). \end{enumerate} \begin{figure}[ht!] \setlength{\unitlength}{.6mm} \begin{picture}(80,20)(5,25) \multiput(3,32)(20,0){5}{\circle*{1.5}} \put(3,32){\line(1,0){43}} \put(83,32){\line(-1,0){23}} \put(2,25){$a_1$} \put(22,25){$a_2$} \put(49,32){$\hdots$} \put(82,25){$a_{n}$} \put(3,32){\line(4,1){40}} \put(83,32){\line(-4,1){40}} \put(43,42){\circle*{1.5}} \put(45,45){$a_0$} \put(-50,35){$(\tilde A_n)$} \end{picture} \end{figure} \begin{figure}[ht!] \setlength{\unitlength}{.6mm} \begin{picture}(100,25)(5,-4.5) \put(-40,7){$(\tilde D_k)$} \multiput(23,8)(20,0){4}{\circle*{1.5}} \put(23,8){\line(1,0){23}} \put(83,8){\line(-1,0){23}} \put(49,8){$\hdots$} \put(83,8){\line(2,1){20}} \put(83,8){\line(2,-1){20}} \put(103,18){\circle*{1.5}} \put(103, -2){\circle*{1.5}} \put(23,8){\line(-2,1){20}} \put(23,8){\line(-2,-1){20}} \put(3,18){\circle*{1.5}} \put(3, -2){\circle*{1.5}} \put(-6,18){$d_0$} \put(-6,-2){$d_1$} \put(22,1){$d_2$} \put(78,1){$d_{k-2}$} \put(106,17){$d_{k-1}$} \put(106, -2){$d_{k}$} \end{picture} \end{figure} \begin{figure}[ht!] 
\setlength{\unitlength}{.6mm} \begin{picture}(120,30)(3,-40) \multiput(3,-32)(20,0){7}{\circle*{1.5}} \put(3,-32){\line(1,0){83}} \put(123,-32){\line(-1,0){23}} \put(2,-39){$e_2$} \put(22,-39){$e_3$} \put(42,-39){$e_4$} \put(62,-39){$e_5$} \put(43,-32){\line(0,1){20}} \put(43,-12){\circle*{1.5}} \put(46,-13){$e_1$} \put(89,-32){$\hdots$} \put(122,-39){$e_l$} \put(-31,-27){$(E_l)$} \end{picture} \end{figure} Case by case, this allows us to compare the maximal number $N_v$ of disjoint (-2)-fibre components with the contribution to the Euler-Poincar\'e characteristic, see the above table. Overall, we find \begin{eqnarray} \label{eq:N_v} N_v \leq \frac 12 \lfloor e(F_v) + \delta_{v}\rfloor \end{eqnarray} and thus \begin{eqnarray} \label{eq:eqeq} \sum_v N_v \leq \sum_v \frac 12 \lfloor e(F_v) + \delta_{v}\rfloor \leq \frac 12 \sum_v (e(F_v) + \delta_{v}) = 12. \end{eqnarray} This yields the desired inequality and proves our assertion. \end{proof} \begin{rem} \label{rem:ineq} If equality holds at each step of the chain of inequalities $$N_v \leq \frac 12 \lfloor e(F_v) + \delta_{v}\rfloor \leq \frac 12 (e(F_v) + \delta_{v}),$$ then $\de_v$ attains its minimum value, and the multiplicity $(e(F_v) + \delta_{v}) $ is an even number, hence we get only the types \[ \mathrm I_{2n} \; (n>0), \;\; \mathrm I_{2n}^*\; (n\geq 0), \;\; \mathrm I_1^*,\;\; \mathrm{IV}^*,\;\;\mathrm{III}^*. \] \end{rem} \begin{cor} \label{cor:12fibres} If the fibres of an elliptic K3 surface in characteristic $2$ contain 12 disjoint $(-2)$-curves, then the only possible singular fibre types are (with minimum possible $\delta_v$ each) \[ \mathrm I_{2n} \; (n>0), \;\; \mathrm I_{2n}^*\; (n\geq 0), \;\; \mathrm I_1^*,\;\; \mathrm{IV}^*,\;\;\mathrm{III}^*. \] \end{cor} \begin{proof} This is a direct consequence of the proof of Proposition \ref{lem:12} since all the inequalities in \ref{eq:eqeq} actually have to be equalities (in particular the same must hold for \ref{eq:N_v} at each $v$, as in Remark \ref{rem:ineq}). \end{proof} A close inspection of the fibres in the proof of Proposition \ref{lem:12} allows us even to rule out higher ADE-types: \begin{cor} \label{cor:ADE} If the fibres of an elliptic K3 surface in characteristic 2 support 12 disjoint ADE-configurations of $(-2)$-curves, then each has type $A_1$. \end{cor} \begin{proof} These 12 disjoint ADE-configurations produce at least 12 disjoint $(-2)$-curves, hence we may apply the previous corollary and check directly. \end{proof} \subsection{ Connection with supersingularity} To relate with Theorem \ref{theo}, especially with the statement about supersingular K3 surfaces, we provide the next result which concerns the case of exact equality in Proposition \ref{lem:12}. \begin{prop} \label{lem:=12} Let $X$ be an elliptic K3 surface such that there are 12 disjoint $(-2)$-curves contained in the fibres. Then $X$ is supersingular or there are two additive fibres. \end{prop} Note that the fibres in Proposition \ref{lem:=12} are the fibres of Corollary \ref{cor:12fibres}: either $\mathrm I_{2n}$, or additive fibres which are non-reduced. This will be of great use in what follows. \begin{rem} \label{rem:occur} (i) It is easy to see that both cases of Proposition \ref{lem:=12} can occur: the first one via inseparable base change from rational elliptic surfaces {(see \cite[p.\ 342]{MWL}, Proposition 12.32)} the other one (as in characteristic zero!) 
by taking the Kummer surface of the product of two elliptic curves (both not supersingular): here there are two singular fibres of Kodaira type $\mathrm{I}_4^*$ by \cite{shioda}. (ii) The second case of Proposition \ref{lem:=12} encompasses the case where there are 12 disjoint $(-2)$-curves contained in the fibres and the j-invariant is constant, since then every reducible fibre is additive, and if there were a single reducible fibre, it would have type $\mathrm I_{16}^*$, which is impossible by \cite{S-max}. \end{rem} \begin{proof}[Proof of Proposition \ref{lem:=12}] If the singular fibres contain 12 disjoint $(-2)$-curves, then by the proof of Proposition \ref{lem:12}, both inequalities in \ref{eq:eqeq} are in fact equalities, with fibre types given in Corollary \ref{cor:12fibres}. Hence $\delta_v$ attains the minimal possible value $ \delta_{v}(min)$ and $e(F_v) + \delta_{v} = e(F_v) + \delta_{v}(min) $ is always even. Since $e(F_v) + \delta_{v}$ is exactly the vanishing order of the discriminant $\Delta$ at $v$ by \cite{Ogg}, we find that $\Delta$ is a square in $k(t)$. \subsubsection{The Jacobian fibration} We now switch to the Jacobian $J$ of $X$ -- another elliptic K3 surface, since it shares the same invariants of $X$ by \cite[Cor.\ 5.3.5]{CD}. Note that $J$ also has the same Picard number as $X$, but, by definition, $J$ has a section while $X$ may not. By \cite[Theorem 5.3.1]{CD} $J$ and $X$ share the same singular fibres (and by \cite[p.348]{CDL} also the same $\Delta$ and $\delta_v$ (minimal!)) since, by virtue of the canonical bundle formula (Theorem 2 of \cite{bm}), there are no multiple fibres. In terms of a minimal Weierstrass equation for $J$, \begin{equation} \label{eq:WF} y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6, \;\;\; a_i\in k[t], \deg(a_i)\leq 2i, \end{equation} there are essentially two options for $a_1$ (since $a_1\equiv 0$ forces all singular fibres to be additive and is thus covered by the second alternative of Proposition \ref{lem:=12}, cf. Remark \ref{rem:occur} (ii)), up to M\"obius transformations: \[ a_1 = t \;\;\; \text{ or } \;\;\; a_1 = t^2. \] In the first case, we can argue directly with the general expression of the discriminant, \begin{eqnarray} \label{eq:Delta} \Delta = a_3^4+a_1^3a_3^3 + a_1^4a_4^2 +a_1^4a_2a_3^2 + a_1^5a_3a_4 +a_1^6a_6. \end{eqnarray} Notably, if $a_1=t$, then this reads modulo $t^4$ \[ \Delta \equiv a_3(0)^4 + a_3(0)^3t^3 \mod t^4, \] so $\Delta$ can only be a square if $a_3(0)=0$ which makes the fibre at $t=0$ singular and in fact additive. By symmetry, the same reasoning applies at $t=\infty$, so there are two additive fibres and we reach the second alternative of this proposition. \subsubsection{ Normal forms for additive fibre types} There remains to study the case $a_1=t^2$. We start arguing with the minimality of $\delta_0$ to reduce to just 3 cases. If there is a singular fibre at $t=0$ (then $a_3$ vanishes at $t=0$ and we have an additive fibre), then we can use Tate's algorithm to develop a normal form for the fibre \cite{Tate}, \cite[IV.9]{Si3}. For fibres of type I$^*_{2n}$, the normal form is \begin{eqnarray} \label{eq:2n^*} y^2 + t^2 xy + t^{n+2}a_3' y & = & x^3 + ta_2' x^2 + t^{n+2}a_{4}'x + t^{2n+4} a_6' \end{eqnarray} with $t\nmid a_2'a_4'$; here we have used Steps 6 and Step 7 in \cite[IV.9]{Si3}, pages 367-368. For $n=0$ we use indeed Step 6, and the fact that the auxiliary polynomial $P(T)$ in loc.\ cit.\ has three distinct roots to infer that $t\nmid a_2'a_4'$ after locating one root at $T=0$. 
For $ n =1$ the assertions are proven in Step 7, page 367; for higher $n$ one proceeds by induction on $n$, see line 8 of page 368 concerning the assertion on the divisibility of $a_3, a_4, a_6$ going up in each induction step. Note that by the argument in loc.\ cit., the divisibility of $a_6$ grows in fact by two in each of our steps. This shows that $t^{n+2}\mid a_3, a_4$ and $t^{2n+3}\mid a_6$ and then a translation in $x$ ensures that indeed $t^{2n+4}\mid a_6$ as claimed. Substituting into \ref{eq:Delta} gives \[ \Delta = t^{4n+8}a_3'^4 + t^{3n+12}a_3'^3 + t^{2n+12} a_4'^2 + h.o.t., \] whence, for the wild ramification $$\de_0 = \ord (\De) - e (F) = \ord(\Delta) - (2n+6) \geq 2n+2, $$ we have $\delta_0\geq 4$ for $n>0$. Since Corollary \ref{cor:12fibres} requires minimal wild ramification $\delta_0=2$, this leaves only fibres of type I$_0^*$ among all fibre types I$_{2m}^*$. For a fibre of type I$^*_{1}$, the normal form arises from an additional vanishing condition at $a_4$ compared to \eqref{eq:2n^*} again by \cite[IV.9, Step 7]{Si3}: \begin{eqnarray*} \label{eq:2n+1^*} y^2 + t^2 xy + t^{2}a_3' y = x^3 + ta_2' x^2 + t^{3}a_{4}'x + t^{4} a_6' \;\;\; \text{ with } \;\; t\nmid a_2'a_3'. \end{eqnarray*} Then fibre type IV$^*$ is given by further imposing $t^2\mid a_2$ by \cite[IV.9, Step 8]{Si3}, still with $t\nmid a_3'$ (in agreement with $\delta_v=0$). Meanwhile a fibre of type III$^*$ imposes additional vanishing conditions $t^3\mid a_3,\; t^5\mid a_6$, but $t^4\nmid a_4$ \cite[IV.9, Step 9]{Si3}. Substituting into \ref{eq:Delta} gives \[ \Delta = t^{12}a_3'^4 + t^{14} a_4'^2 + t^{15}a_3'^3 + h.o.t., \] so in particular $\delta_0\geq 3$, ruling out fibre type III$^*$ by Corollary \ref{cor:12fibres} again. To sum it up, the only additive fibre types remaining from Corollary \ref{cor:12fibres} are I$_0^*$, I$_1^*$ and IV$^*$. In each case, one can easily parametrize all K3 surfaces with such a given fibre and square discriminant, starting from the above normal form. It should be noted that for these types the normal form can be derived by means of a linear transformation \begin{eqnarray} \label{eq:adm} (x,y) \mapsto (x+\alpha_4, y+\alpha_2 x + \alpha_6) \end{eqnarray} with $\alpha_i\in k[t]$ of degree at most $i$; in particular, the degree bounds of \eqref{eq:WF} are preserved. \subsubsection{ Conditions for the discriminant to be a square} For type I$_0^*$, \ref{eq:2n^*} leads to the discriminant \[ \Delta = t^8(a_3'^4+t^4a_3'^3 + t^4a_4'^2 +t^5a_2'a_3'^2 + t^6a_3'a_4' +t^8a_6') \] where, by the minimality of wild ramification, $t\nmid a_3'$. Modulo square summands, this simplifies as \[ \Delta \equiv t^{12}(a_3'^3 +ta_2'a_3'^2 + t^2a_3'a_4' +t^4a_6') \mod k[t]^2. \] Write $a_i' = \sum_j a_{i,j}'t^j$. Then the condition that $\Delta$ is a square, i.e.\ that all odd degree coefficients vanish, determines \begin{itemize} \item the odd degree coefficients of $a_6'$ in terms of the coefficients of the other forms $a'_m$ (looking at the coefficients of $\Delta$ at $t^{17},\hdots,t^{23}$). \item $a_{2,0}' = a_{3,1}'$ (from the $t^{13}$-coefficient); \item $a_{4,1}' = (a_{2,2}'a_{3,0}'^2 + a_{3,0}'^2a_{3,3}' + a_{3,1}'a_{4,0})/a_{3,0}'$ (from the $t^{15}$-coefficient). \end{itemize} In particular, we find that the family of elliptic K3 surfaces with a fibre of type I$_0^*$ with wild ramification of index 2 and all other singular fibres of type I$_{2n}$ (generically 8 I$_2$'s) is irreducible. 
Its moduli dimension, equal to $7$, is obtained by comparing the degrees \[ \deg(a_3')\leq 4, \;\; \deg(a_2')\leq 3,\;\; \deg(a_4')\leq 6, \;\; \deg(a_6')\leq 8 \] (these bounds follow from the degree bounds in \eqref{eq:WF} and from \eqref{eq:2n^*}), against M\"obius transformations $t\mapsto ut/(\varepsilon t +1) \; (u \in k^\times, \varepsilon\in k)$ and the following variable transformations preserving the shape of \ref{eq:2n^*} (since $\Delta$ being a square is automatically preserved): \begin{eqnarray} \label{eq:adm'} (x,y) \mapsto (u^4x+t^2\beta_2, u^6y+t\beta_1 x + t^2\beta_4) \end{eqnarray} where the degree of each polynomial $\beta_i\in k[t]$ is at most $i$. \subsubsection{ Conclusion of proof using number of moduli} Any smooth K3 surface arising from a member of the above family satisfies \[ \rho \geq 2 + 8 + 4 = 14 \] by the Shioda--Tate formula where the first entry comes from the zero section and the fibre, the second from the semi-stable fibres (each of type I$_{2n}$ for some $n\in\ensuremath{\mathbb{N}}$, hence contributing $2n$ to the Euler--Poincar\'e characteristic and $2n-1$ to the Shioda--Tate formula) and the third from the fibre at $t=0$ (contributing $8$ to the Euler--Poincar\'e characteristic, including wild ramification, and $4$ to the Shioda--Tate formula). If a very general member were not supersingular, then it would deform in a $6$-dimensional family as in \cite[Prop.\ 4.1]{LM} (based on \cite{Deligne}) but this is exceeded by our moduli count. Hence the whole family is supersingular as claimed. We pass now to the case of a fibre of type I$_1^*$ or of type IV$^*$. As explained before, the K3 surfaces with a fibre of type I$_1^*$ are contained in the subfamily where $t^3\mid a_4$ (while for I$_0^*$ we simply had $t^2\mid a_4$) and type IV$^*$ additionally requires $t^2\mid a_2$. Each family allows the same transformations, so the moduli dimension is 6, resp.\ 5. But $\rho$ generically goes up by 1 each time (promoting the root lattice at the special fibre from $D_4$ through $D_5$ to $E_6$), so the whole family is supersingular again by \cite[Prop.\ 4.1]{LM}. If there is no additive fibre, then the condition that $\Delta$ is a square gives 9 moduli for $\Delta$: moreover the condition that $ a_1 = t^2$ reduces the number of moduli to 8, and one can show by the same kind of arguments as above that we have an irreducible 8-dimensional family of semi-stable elliptic K3 surfaces with 12 disjoint $A_1$'s embedding into the singular fibres; since $\rho\geq 2+12=14$, again by the formula of \cite[Prop.\ 4.1]{LM} the family is supersingular. \end{proof} \begin{rem} Another possible argument of proof is as follows: in each case we have an irreducible family of a certain dimension $k$, and inside it we can construct a family of the same dimension $k$ of surfaces arising via an inseparable base change from a rational elliptic surface. The surfaces are thus unirational, hence supersingular, and this shows directly that the original family is a family of supersingular surfaces. Indeed, starting from rational elliptic surfaces with singular fibre at $t=0$ of type $\mathrm I_0$ (smooth supersingular), $\mathrm{II}, \mathrm{III}, \mathrm{IV}$, respectively, inseparable base change exactly results in a family of supersingular K3 surfaces of the expected type and dimension. Note that, since the elliptic fibrations admit a 2-torsion section by \cite[p.342]{MWL}, the Artin invariants \cite{artinSS} satisfy $\s \leq 9$. 
\end{rem} \section{Proof of the main claim: there cannot be at least 15 singularities.} \label{s:quasi} In order to bound the number of singularities on a normal quartic $X\subset\ensuremath{\mathbb{P}}^3$, we shall use the theory of genus 1 fibrations laid out in the previous section. By Proposition \ref{cor:9}, if $X$ has at least $13$ singular points ($\nu\geq 13$), then the singularities are rational double points and the minimal resolution $S$ is a K3 surface. $S$ is endowed with the following divisors: the pull-back $H$ of a plane section and, for each pair of singular points, say $P_1, P_2$, the respective fundamental cycles $D_1, D_2$ (see \cite{artin}), consisting of the exceptional curves with suitable multiplicities, and equal to the pull back of the maximal ideal at the singular point. Then, since $D_i^2 = -2$, \[ E:= H-D_1-D_2 \] gives an effective isotropic class in $\Pic(S)$. We have that the linear system $|E|$ is base point free if and only if the line $L=\overline{P_1P_2}$ is not contained in $X$: this is clear for the points of $S$ not lying over $P_1, P_2$; moreover, since for each exceptional curve $C$ the intersection number $D_i C \leq 0$, $E$ has no fixed part (it was observed at the beginning of the previous section that the fixed part $\Psi$ satisfies, if non empty, $ E \Psi < 0$) hence it has no base points since $E^2=0$. If instead the line $L$ is contained in $X$, denote still by $L$ the strict transform of the line and replace $E$ by $E-L$, observing that $E L = -1$, hence $(E-L)^2 =0$, and continue until we get a base point free pencil $|E'|$, which gives a morphism $ S \ensuremath{\rightarrow} \ensuremath{\mathbb{P}}^1$ whose fibres correspond to the planes through $P_1, P_2$. \begin{prop} \label{prop:>14} Let $X\subset\ensuremath{\mathbb{P}}^3$ be a normal quartic with at least 15 singularities. Then every genus one pencil $|E'|$ arising from two singularities on $X$ as above is quasi-elliptic. \end{prop} \begin{proof} Let $\nu\geq 15$ denote the number of singularities, $P_1,\hdots,P_{\nu}$, and let $C_i^{j}, j = 1, \dots, n(i)$ be the irreducible exceptional curves lying above the point $P_i$. Let us first assume that no $P_i \, (i>2)$ lies on $L$, so that each lies on a unique plane through $P_1, P_2$; hence the $C_i^j$'s are components of the corresponding fibre of $|E'|$ (as in Remark \ref{rem:-2}). Then the fibration $|E'|$ has $\nu-2>12$ disjoint smooth rational fibre components (the $C_i^j$). If, on the other hand, there is a third singularity on $L$, say $P_3$, then this implies not only that $L\subset X$, but also that $L$ appears as a multiple component of $X\cap H$ for a unique plane $H$ (just take a point $P \in L$ which is a smooth point of $X$, and let $H$ be the tangent plane to $X$ at $P$: then $ H \cap X \geq 2L$). This implies that $L$ is a component of the fibre corresponding to $H$, and, together with $C_4^1,\hdots,C_{\nu}^1$, we obtain $\nu-2>12$ disjoint smooth rational fibre components as before. In both cases the proposition follows then from Proposition \ref{lem:12}. \end{proof} \begin{prop} \label{prop:cusps} Let $X\subset\ensuremath{\mathbb{P}}^3$ be a normal quartic. Let $L$ be a line through two singular points of $X$ such that $X\cap L$ consists of nodes and biplanar double points (and smooth points if $L\subset X$). If the fibration induced by $L$ is quasi-elliptic, then the line dual to $L$ is contained in the dual surface $X^{\vee}$. 
\end{prop} \begin{proof} We consider the curve $\Sigma_0 \subset S$ ($S$ is the minimal resolution of $X$ as usual) consisting of the horizontal divisorial part of the set of singular points of the fibres, the so-called curve of cusps. The first case is when this curve is not exceptional for the map $$ \Phi: S \ensuremath{\rightarrow} X \subset \ensuremath{\mathbb{P}}^3; $$ then we get a curve on $X$ consisting of singular points of the intersections $H \cap X$, where $H$ is a plane of the pencil through $L=\overline{PP'}$. Therefore the dual line $L^\vee$ is contained in $X^\vee$. \medskip The second case is where $\Sigma_0$ is exceptional: we use for this Proposition 1, page 199 of \cite{bminv}, and denote as in loc.\ cit.\ $f : S \ensuremath{\rightarrow} B$ the quasi-elliptic fibration. At a general point $Q' \in \Sigma_0$, the fibre $ F : = F_{f(Q')}$ has a cusp and, if $t$ is a local parameter for $B$ at $f(Q')$, the map is given by $ t = u (x^2 + y^3)$ where $u$ is a unit in the formal power series ring which is the completion of the local ring $\ensuremath{\mathcal{O}}_{S,Q'}$. Bombieri and Mumford show that there is a local parameter $\s$ such that $\Sigma_0 = \{ \s=0\}$, and that $ (\Sigma_0 \cdot F )_{Q'} = 2$, so that $x,\s$ are local parameters for $S$ at $Q'$, and we can write $ y = \s + \la x $ plus higher order terms. Since we assume that the curve $ \Sigma_0 $ is contracted by the map $\Phi$, it follows that $\Phi$ has a local Taylor development which contains only terms in the ideal generated by $y$. Hence we are left only with monomials $y, y^2, xy, \dots$ whose respective orders on the normalization of the fibre $F$ are: $2,4,5$. We conclude that the image of $F$ under $\Phi$ has a higher order cusp at a singular point $P''$ lying in $L$. By assumption, we can write the equation of $X$ at $P''$ in local affine coordinates as $$ h: = xy + \la z^2 + g (x,y,z) = 0 \;\;\; (\lambda\in K), $$ where $g$ has order $\geq 3$. Since we want that the planes of the pencil cut a cusp at $P''$, the quadratic part of the restriction of the equation $h$ to the planes must be the square of a linear form, hence in the projectivized tangent space we get lines intersecting the exceptional conic $C$ with multiplicity two, hence lines tangent to the conic; from the equation $ xy + \la z^2$ of the quadratic part follows that this pencil is generated by the linear forms $x,y$. We claim now that, as in the first case, the pencil of planes through $L$ yields a line in the dual surface $X^{\vee}$. Because the Gauss map is given by $(y,x, 0,0) + h.o.t $, and the image of the exceptional divisor in the dual surface is the pencil of planes $\mu_0 x + \mu_1 y =0$, which is exactly the pencil of planes containing $L$ by our previous argument. \end{proof} \begin{rem} Both cases from the proof of the proposition actually occur (for the second case, it suffices that $g(x,y,z)$ above has order $ 4$). \end{rem} Note that Propositions \ref{prop:>14} and \ref{prop:cusps} provide the missing ingredients for the proof of the Main Claim \ref{main-claim} in \ref{ss:main-claim}. Thereby the proof of the first statement of Theorem \ref{theo} is now complete. \section{14 singularities are nodes} The aim of this section is to prove the following result which covers the second part of Theorem \ref{theo}: \begin{theo} \label{thm:14nodes} Let $X\subset\ensuremath{\mathbb{P}}^3$ be a normal quartic with 14 singular points. Then all singularities are nodes. 
\end{theo} \begin{proof} The minimal resolution $S$ of $X$ is a K3 surface by Proposition \ref{cor:9}, and the singular points are nodes or biplanar double points ($u=0$) by Corollary \ref{14}. Assume that we have a singular point $P$ which is not of type $A_1$. Just like in the proof of Proposition \ref{prop:>14}, any genus one fibration $S\to \ensuremath{\mathbb{P}}^1$ induced by two singular points admits 12 disjoint smooth rational curves in the fibres. By Proposition \ref{lem:12} this is the maximum possible for an elliptic fibration. If the fibration is not induced by $P$ and another singular point, $P$ lies in exactly one fibre of $X$ and the fundamental cycle is supported on the corresponding fibre of $S$. Hence Corollary \ref{cor:ADE} implies that the fibration is quasi-elliptic. Any singular point $Q\neq P$ thus admits at least 6 quasi-elliptic fibrations induced by a pair of singular points $Q, Q'$ which are nodes or biplanar double points. Hence we infer from Proposition \ref{prop:cusps} and the proof of Proposition \ref{prop:deg8} that $\deg(X^\vee)\geq 6$ and $b\leq 2$. More precisely, by Remark \ref{rem:non-RDP}, $P$ can only have type $A_2$ or $A_3$, and in the former case there may be a second singular point $P'$ of type $A_2$. In fact, we can say more about the configuration of singularities relative to $Q$. Namely $Q$ is collinear with at least 5 pairs of singularities (possibly including $P$), for else it would induces at least 8 quasi-elliptic fibrations, and $\deg(X^\vee)\geq 8$ would give a contradiction using \eqref{eq:deg}. We pick one such pair not involving $P$, say $Q, Q', Q''\in L\subset X$, and consider the induced quasi-elliptic fibration $$\pi: S \to \ensuremath{\mathbb{P}}^1. $$ The fibres are the cubics $C$ residual to $L$ in the respective plane $H$ containing $L$. Except possibly for the cubic containing $L$ as a component, these cubics are all reduced, since they meet $L$ in the three points $Q, Q', Q''$. Recall moreover that the exceptional (-2)-curve resolving a node not on $L$ also appears always with multiplicity 1, hence the only fibres of $\pi$ which may not be reduced are those containing exceptional curves lying above the singular points of type $A_n$ with $ n \geq 2$ and the one containing $L$. Since $b\leq 2$, this makes for at most 3 fibres. Since there are 5 pairs of singular points collinear with $Q$, there has to be a pair of nodes left which lie on a reduced fibre (since no plane can contain more than 6 singular points (cf.\ \ref{ss:pfofclaim}), so all pairs of points $\neq (Q',Q'')$ collinear with $Q$ lie on different fibres). In particular, this reduced fibre has at least 4 components. However, by \cite[Prop.\ 5.5.10]{CD} the possible fibre types of a quasi-elliptic fibration are a priori \begin{eqnarray} \label{eq:fibres} \mathrm{II}, \;\; \mathrm{III}, \;\; \mathrm I_{2n}^*\; (n\geq 0), \;\; \mathrm{III}^*,\;\;\mathrm{II}^*. \end{eqnarray} Of these, only fibres of type $\mathrm{II}, \mathrm{III}$ are reduced, with one or two components. This gives the required contradiction. Hence all singularities of $X$ are nodes. \end{proof} \section{Proof of Theorem \ref{theo}: the non-supersingular case} \label{s:non-ss} To complete the proof of Theorem \ref{theo}, it remains to analyse the non-supersingular case. \begin{prop} \label{prop:nu<14} Let $X\subset \ensuremath{\mathbb{P}}^3$ be a normal quartic such that a minimal resolution is not a supersingular K3 surface. Then $X$ contains at most 13 singular points. 
\end{prop} \begin{proof} By the general part of Theorem \ref{theo}, we only have to rule out: $X$ contains 14 singularities. Assuming this, all singular points are nodes by Theorem \ref{thm:14nodes}, and the minimal resolution $S$ is a K3 surface (non-supersingular by assumption). We continue to study the fibrations $\pi_{i,j}$ induced by pairs of nodes $(P_i, P_j)$. The proof of Theorem \ref{thm:14nodes} shows that the fibres contain 12 disjoint $(-2)$-curves, so by Proposition \ref{lem:=12} there are two additive fibres; by Corollary \ref{cor:12fibres}, the possible types are \begin{eqnarray} \label{eq:types} \mathrm I_{2n}^* (n\geq 0), \;\; \mathrm I_1^*,\;\; \mathrm{IV}^*,\;\; \text{ and } \;\;\mathrm{III}^*. \end{eqnarray} We distinguish three cases: \subsubsection{} \label{ss:3_collinear} If there are 3 collinear nodes, then they give sections of the induced fibration, and the 11 exceptional curves above the other nodes embed into the negative definite root lattices which are the orthogonal complements of the sections. On the multiplicative fibres, this imposes no general restrictions, but additive fibres can, by inspection of the singular fibres, as described in the proof of Proposition \ref{lem:12}, only support one disjoint smooth rational curve less. This is because the sections necessarily intersect only the simple fibre components. Therefore the number of disjoint rational curves not meeting one of the three sections is at most $N_v - 1$, where we recall that \begin{eqnarray} \label{eq:N_v'} N_v \leq \frac 12 \lfloor e(F_v) + \delta_{v}\rfloor. \end{eqnarray} Hence, with two additive fibres, there can only be 10 disjoint $(-2)$-curves supported on the orthogonal complement of the sections, contradiction. \subsubsection{} \label{ss:2_collinear} Thus we may assume that there are no three collinear nodes (i.e., any two nodes are companions). Note that this implies that any 3 nodes lie on a unique plane. If some connecting line is contained in $X$, then the line and the two nodes give sections of the fibration, with 12 disjoint $(-2)$-curves in the fibres. Hence the argument from \ref{ss:3_collinear} applies to establish a contradiction. \subsubsection{} \label{ss:double_conics} We can therefore assume that no line $\overline{P_iP_j}$ is contained in $X$. We continue by restricting the possible additive fibre types. They arise from the quartic curve $X\cap H$ by blowing up the nodes in the plane $H$: two of them give bisections of the fibration while the others give $(-2)$-fibre components, which cannot be such that their multiplicity in the fibre is $\geq 3$. \smallskip Only the additive fibre type I$^*_0$ can be realized of the possible types listed in \eqref{eq:types} (as a double conic with 6 nodes; see part III). The reason is based on the fact that this is the only one with only one component with multiplicity at least 2, while the others have several components appearing with multiplicity at least 2, indeed at least 3 components except for the case of I$^*_1$. Indeed, the blow ups of nodes appear with multiplicity 1, hence the plane section $X \cap H$ must be non reduced. In particular there are at most 2 components appearing with multiplicity at least 2. To exclude the case of I$^*_1$, we need to exclude that $X \cap H$ consists of two double lines. 
In this case there are at most 4 nodes in $H$, since the intersection point of the two double lines cannot be a node, and there are no 3 collinear nodes by assumption, hence the number of irreducible components of the fibre is at most 4, a contradiction. \smallskip Therefore each fibration $\pi_{i,j}$ admits two such fibres. This turns out to be too restrictive: in fact, we have seen that for each pair $\sP$ of nodes, there are exactly two planes $\pi$ containing the pair, each containing six nodes. Consider then the number of pairs as above $(\sP, \pi), \sP \subset \pi$. The number is therefore $(13)\cdot 14$. But since each such plane $\pi$ contains exactly $15$ such pairs $\sP$, we have obtained a contradiction. \end{proof} \subsubsection{Proof of Theorem \ref{theo}} Since the triple point case was covered in \cite{cat21}, we only have to deal with double point singularities. The general statement that a normal quartic in characteristic 2 contains at most 14 singularities was proved in \ref{ss:pf-thm} (using Propositions \ref{prop:>14} and \ref{prop:cusps}). That 14 singular points necessarily form nodes was proved in Theorem \ref{thm:14nodes}. An irreducible component was exhibited in Theorem \ref{dual=plane}. Finally, the result that there are fewer than 14 singularities if the resolution is not a supersingular K3 surface, was proved in Proposition \ref{prop:nu<14}. \hspace*{\fill}$q.e.d.$ \subsection*{Acknowledgement} Thanks to the anonymous referees for their comments which helped us improve the paper. We would also like to thank Stephen Coughlan for an interesting conversation.
{ "timestamp": "2022-05-25T02:16:11", "yymm": "2110", "arxiv_id": "2110.03078", "language": "en", "url": "https://arxiv.org/abs/2110.03078", "abstract": "We show, in this second part, that the maximal number of singular points of a quartic surface $X \\subset \\mathbb{P}^3_K$ defined over an algebraically closed field $K$ of characteristic 2 is at most 14, and that, if we have 14 singularities, these are nodes and moreover the minimal resolution of $X$ is a supersingular K3 surface. We produce an irreducible component, of dimension 24, of the variety of quartics with 14 nodes. We also exhibit easy examples of quartics with 7 $A_3$-singularities.", "subjects": "Algebraic Geometry (math.AG)", "title": "Singularities of normal quartic surfaces II (char=2)", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692325496974, "lm_q2_score": 0.7248702761768249, "lm_q1q2_score": 0.7079584963317067 }
https://arxiv.org/abs/1807.09201
Every square can be tiled with T-tetrominos and no more than 5 monominos
If n is a multiple of 4, then a square of side n can be tiled with T-tetrominos, using a well-known construction. If n is even but not a multiple of four, then there exists an equally well-known construction for tiling a square of side n with T-tetrominos and exactly 4 monominos. On the other hand, it was shown by Walkup that it is not possible to tile the square using only T-tetrominos. Now consider the remaining cases, where n is odd. It was shown by Zhan that it is not possible to tile such a square using only one monomino. Hochberg showed that no more than 9 monominos are ever needed. We give a construction for all odd n which uses exactly 5 monominos, thereby resolving this question.
\section{Introduction} The sequence \cite{oeis} gives the maximal number of T-tetrominos which can be used to tile the $n \times n$ square with t-tetrominos and monominos. Theorem \ref{main} shows that this sequence is trivially given by $\frac{n^2}{4}, \frac{(n^2-1)}{4}-1, \frac{n^2}{4}-1, \frac{(n^2-1)}{4}-1$, depending on the value of $n$ modulo 4. \section{Tiling every square} \begin{theorem}\label{main} Every square can be tiled with T-tetrominos and at most 5 monominos. \end{theorem} This theorem follows immediately from propositions \ref{four}, \ref{even} and \ref{odd}. \begin{proposition}\label{four} Every square of side $n = 4m$ can be tiled with T-tetrominos. \end{proposition} \begin{proposition}\label{even} Every square of side $n = 4m + 2$ can be tiled with T-tetrominos and 4 monominos, and 4 monominos are always needed. \end{proposition} For $n = 2$ this is the same as pointing out that a single T-tetromino will not fit in the $2x2$ square. For $n = 4m + 2$, where $m$ is a positive integer, we can extend the tiling of the $4m$-square without monominos to a tiling of the $4m+2$-square, adding only 4 monominos. The tiling of the the L-shaped strip which extends the $4 \times 4$ square to a $6 \times 6$ square is given in figure \ref{six}. We can increase the length of the arms of the strip, by replacing the two T-tetrominos with a longer sequence taken from the `frieze', or tiling of a strip of width 2. \begin{figure} \input{six_image} \caption{Extending the $4 \times 4$ tiling to $6 \times 6$, adding 4 monominos and 4 T-tetrominos.} \label{six} \end{figure} \begin{proposition}\label{odd} Every square of side $n = 2m + 1$ can be tiled with T-tetrominos and 5 monominos, and 5 monominos are always needed (except for $n = 1$). \end{proposition} Zhan's (\cite{zhan}) Theorem 2 states that it is not possible to tile any rectangle with T-tetrominos and only one monomino. It must therefore be the case that at least 5 are needed. We show that exactly 5 are sufficient. \begin{definition} Call $A_n$ the set of lattice squares given by the square of side $n$, with the lattice squares at $(0, 0), (0,1), (1, 0)$ and $(0, n-1)$ removed. This shape has area $n^2 - 4 = 4(m^2 + m - 1) + 1$. \end{definition} \begin{lemma} For all $m \in \mathbb{N}$, $A_{2m+1}$ can be tiled with $m^2 + m - 1$ T-tetrominos and one monomino. \end{lemma} \begin{figure} \input{cropped_image} \caption{$A_5$, the $5 \times 5$ square with four lattice squares removed, $A_7$ and $A_9$.} \label{cropped} \end{figure} \begin{figure} \input{five_image} \caption{Tiling of $A_5$ with a single monomino.} \label{five} \end{figure} {\sc Proof.} The proof is by induction on $n$. In figure \ref{five} we show how $A_5$ can be tiled by 5 tetrominos and a single monomino. (It is trivial to tile $A_3$ with a single tetromino and a single monomino, but it is slightly clearer to start the induction with $n=5$.) If $A_n$ can be tiled with one monomino, then so can $A_{n+1}$. There are two constructions for the cases $n=4k+1$ and $n=4k+3$. \begin{figure} \input{ones_image} \caption{A tiling of $A_{4k+1}$ can be extended to a tiling of a reflected copy of $A_{4k+3}$.} \label{ones} \end{figure} \begin{figure} \input{threes_image} \caption{A tiling of $A_{4k+3}$ can be extended to a tiling of a reflected copy of $A_{4(k+1)+1}$.} \label{threes} \end{figure}
{ "timestamp": "2018-07-25T02:13:36", "yymm": "1807", "arxiv_id": "1807.09201", "language": "en", "url": "https://arxiv.org/abs/1807.09201", "abstract": "If n is a multiple of 4, then a square of side n can be tiled with T-tetrominos, using a well-known construction. If n is even but not a multiple of four, then there exists an equally well-known construction for tiling a square of side n with T-tetrominos and exactly 4 monominos. On the other hand, it was shown by Walkup that it is not possible to tile the square using only T-tetrominos. Now consider the remaining cases, where n is odd. It was shown by Zhan that it is not possible to tile such a square using only one monomino. Hochberg showed that no more than 9 monominos are ever needed. We give a construction for all odd n which uses exactly 5 monominos, thereby resolving this question.", "subjects": "Combinatorics (math.CO)", "title": "Every square can be tiled with T-tetrominos and no more than 5 monominos", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692311915194, "lm_q2_score": 0.7248702761768249, "lm_q1q2_score": 0.7079584953472039 }
https://arxiv.org/abs/1609.03754
Efficiency of a Stochastic Search with Punctual and Costly Restarts
The mean completion time of a stochastic process may be rendered finite and minimised by a judiciously chosen restart protocol, which may either be stochastic or deterministic. Here we study analytically an arbitrary stochastic search subject to an arbitrary restart protocol, each characterised by a distribution of waiting times. By a direct enumeration of paths we construct the joint distribution of completion time and restart number, in a form amenable to analytical evaluation or quadrature; thereby we optimise the search over both time and potentially costly restart events. Analysing the effect of a punctual, i.e. almost deterministic, restart, we demonstrate that the optimal completion time always increases proportionately with the variance of the restart distribution; the constant of proportionality depends only on the search process. We go on to establish simple bounds on the optimal restart time. Our results are relevant to the analysis and rational design of efficient and optimal restart protocols.
\section{1D Diffusive Search} \indent The first passage time distribution $P(t)$ for a 1D diffusive search is given by the Lévy-Smirnov distribution: \begin{equation} \label{eq:levismirnov} P(t) = \frac{x_0}{\sqrt{4 \pi D \, t^3}} \mathrm{e}^{-\frac{x_0^2}{4Dt}} \end{equation} \noindent where $x_0$ is the distance to the target and $D$ is the diffusion coefficient. We choose as the unit of time $x_0^2/4D$, with which the distribution takes the appealingly simple form $\frac{1}{\sqrt{\pi}\,t^{3/2}} \, \mathrm{e}^{-1/t}$. Note that this is a different choice from the one made by Evans and Majumdar in \cite{EvansMajumdarPRL}. \section{Diffusion with `Disordered' Restarts} \indent In the main text we construct an example in which the means of both the search and restart distributions diverge but the mean completion time remains finite: a 1D diffusive search with Poisson restarts whose rate is drawn from an exponential distribution with mean $1/\eta$. As, to the best of our knowledge, this calculation has not been presented before, we do so here. \indent The restart rate $k$ is drawn anew after each restart from the disorder distribution $P(k) = \eta \, \mathrm{e}^{-\eta \, k}$ (corresponding to a kind of `annealed' disorder). The characteristic time-scale of restarting is then $\eta$, with $1/\eta$ being the characteristic `rate'. Denoting an average over $P(k)$ by an overbar, we find $\overbar{P}_r(t) = \eta/\left( \eta + t \right)^2$ and $\overbar{S}_r(t) = \eta/\left( \eta + t \right)$. Inserting these into the definitions of $G_i(t)$ and $G_f(t)$, we find for the mean completion time: \begin{equation} \langle T \rangle = \frac{\pi \eta \, \text{erfi}\left(\frac{1}{\sqrt{\eta }}\right)-2 \, _2F_2\left(1,1;\frac{3}{2},2;\frac{1}{\eta }\right)}{1-\frac{\sqrt{\pi } e^{\frac{1}{\eta }} \text{erfc}\left(\frac{1}{\sqrt{\eta }}\right)}{\sqrt{\eta }}} \end{equation} \indent We plot this, as a function of $\eta^{-1}$, against the disorder-free case (simple Poisson restarts) in Figure \ref{fig:si1}(b). \section{Gamma and Weibull Distributions} \indent In the text we analyse the properties of a diffusive search subject to restarts according to a Gamma ($P_r(t) = \frac{(k r)^k t^{k-1}}{\Gamma(k)} \mathrm{e}^{-krt}$) or Weibull ($P_r(t) = \frac{k}{\lambda^k} t^{k-1}\exp \left( -\left( \frac{t}{\lambda} \right) ^k \right)$) distribution. Note that the mean of the Gamma distribution is $1/r$, and hence $k$ can be varied while keeping the mean fixed simply by holding $r$ fixed. To facilitate comparison between the distributions, we reparameterise the Weibull distribution by the substitution $\lambda = \frac{1}{r \, \Gamma \left( 1+ 1/k \right)}$, so that the mean of the Weibull distribution is now also $1/r$. \indent The means and standard deviations for each $k$ were then calculated from the definitions of the $G^{(n)}_x$ given in the main text, either analytically (for the Gamma distribution) or by quadrature (for the Weibull distribution). \indent In the main text we remark that the variances of these distributions depend as power laws on the shape factor $k$ as $k \to \infty$. We demonstrate this here. The variance of the Gamma distribution is simply $\sigma_r^2 = 1/(k\,r^2)$, from which the claimed dependence can be immediately seen.
For the Weibull distribution, the variance is: \begin{equation} \label{eq:weibullvariance} \sigma_r^2 = \left( \frac{1}{r \, \Gamma \left( 1+ 1/k \right)} \right)^2 \left[ \Gamma \left( 1+ \frac{2}{k} \right) - \left( \Gamma \left( 1 + \frac{1}{k} \right) \right)^2 \right] \end{equation} \indent This is plotted in Figure \ref{fig:si1}(a) for a fixed $r$. We see that for large $k$ the variance behaves algebraically with exponent $\approx - 1.9$. This is identical to the numerically determined exponent for $\langle T \rangle_{\text{opt}} - \langle T \rangle_{\delta\text{-opt}}$, which we reported in the main text as $\approx -2$ for simplicity. \section{Numerical Procedure for $\langle T \rangle_{\text{opt}} - \langle T\rangle_{\delta\text{-opt}}$ vs. $\sigma_r^2$ Plot} For each pair of search and restart distributions, we used quadrature to find the value of $\langle T \rangle_{\text{opt}}$. This was done by exploiting the two-parameter nature of the restart distributions chosen. The distributions were first reparameterised by the mean $1/r$ and the variance $\sigma_r^2$. Then, for each value of $\sigma_r^2$, quadrature was used to find the mean completion time $\langle T \rangle$ for a given $r$ -- this was then numerically optimised over $r$ to find $\langle T \rangle_{\text{opt}}$. This was repeated for each value of $\sigma_r^2$, the values of which were chosen so as to be uniformly distributed in log-space. \indent The search distributions chosen for this calculation were the Lévy-Smirnov $\frac{1}{\sqrt{\pi}\,t^{3/2}} \, \mathrm{e}^{-1/t}$, the Fréchet $\frac{1}{2\, t^{3/2}} \exp \left( -1/\sqrt{t} \right)$ and the log-normal $\frac{1}{\sqrt{2 \pi}\, t} \exp \left( -(\ln t)^2/2 \right)$. \begin{figure*} \begin{centering} \includegraphics[width=15cm]{Figure_S1.ps} \caption{\label{fig:si1} \textbf{(a)} Variance $\sigma_r^2$ of a Weibull distribution as a function of the shape factor $k$. The dotted line is $\sim k^{-1.9}$. \textbf{(b)} Average completion time $\langle T \rangle$ for Poisson (orange) restarts with rate $k$ and exponentially disordered Poisson restarts (blue) with characteristic restart time scale $\eta$. Inset shows the disorder-averaged restart time distribution $\overbar{P}_r(t)$ -- i.e., the actual waiting time between restart events. Note the algebraic decay $\propto t^{-2}$. \textbf{(c)} Plot of $\langle m \rangle$ for a 1D diffusive search with Poisson (solid) and deterministic (dashed) restarts, as a function of the inverse mean restart time, $r$.} \end{centering} \end{figure*}
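To illustrate this procedure, the following Python sketch (our reconstruction, not the authors' code) computes $\langle T \rangle$ for the Lévy-Smirnov search under Gamma restarts by quadrature, using the elementary renewal identity $\langle T \rangle = \mathrm{E}[\min(T_s,T_r)]/\Pr(T_s<T_r)$ for i.i.d. restart intervals (to which the $G$-function expressions of the main text should reduce for the plain mean completion time), and then optimises over $r$ at fixed shape factor $k$. The search survival used below, $\mathrm{erf}(1/\sqrt{t})$, follows directly from the Lévy-Smirnov form above.
\begin{verbatim}
import numpy as np
from scipy import integrate, optimize
from scipy.special import erf
from scipy.stats import gamma as gamma_dist

def mean_T(r, k):
    # <T> = E[min(T_s, T_r)] / Pr(T_s < T_r) for i.i.d. restarts,
    # with Gamma restart intervals of shape k and mean 1/r.
    f_s = lambda t: np.exp(-1.0 / t) / (np.sqrt(np.pi) * t**1.5)
    S_s = lambda t: erf(1.0 / np.sqrt(t))          # search survival
    S_r = lambda t: gamma_dist.sf(t, a=k, scale=1.0 / (k * r))
    num, _ = integrate.quad(lambda t: S_s(t) * S_r(t), 0, np.inf,
                            limit=200)
    den, _ = integrate.quad(lambda t: f_s(t) * S_r(t), 0, np.inf,
                            limit=200)
    return num / den

k = 4.0   # shape factor; the mean restart time is 1/r
best = optimize.minimize_scalar(lambda r: mean_T(r, k),
                                bounds=(0.01, 10.0), method="bounded")
print("optimal r:", best.x, " <T>_opt:", best.fun)
\end{verbatim}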
{ "timestamp": "2016-09-14T02:03:57", "yymm": "1609", "arxiv_id": "1609.03754", "language": "en", "url": "https://arxiv.org/abs/1609.03754", "abstract": "The mean completion time of a stochastic process may be rendered finite and minimised by a judiciously chosen restart protocol, which may either be stochastic or deterministic. Here we study analytically an arbitrary stochastic search subject to an arbitrary restart protocol, each characterised by a distribution of waiting times. By a direct enumeration of paths we construct the joint distribution of completion time and restart number, in a form amenable to analytical evaluation or quadrature; thereby we optimise the search over both time and potentially costly restart events. Analysing the effect of a punctual, i.e. almost deterministic, restart, we demonstrate that the optimal completion time always increases proportionately with the variance of the restart distribution; the constant of proportionality depends only on the search process. We go on to establish simple bounds on the optimal restart time. Our results are relevant to the analysis and rational design of efficient and optimal restart protocols.", "subjects": "Quantitative Methods (q-bio.QM); Statistical Mechanics (cond-mat.stat-mech)", "title": "Efficiency of a Stochastic Search with Punctual and Costly Restarts", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692311915196, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7079584953472039 }
https://arxiv.org/abs/1807.03386
Data-driven pattern identification and outlier detection in time series
We address the problem of data-driven pattern identification and outlier detection in time series. To this end, we use singular value decomposition (SVD) which is a well-known technique to compute a low-rank approximation for an arbitrary matrix. By recasting the time series as a matrix it becomes possible to use SVD to highlight the underlying patterns and periodicities. This is done without the need for specifying user-defined parameters. From a data mining perspective, this opens up new ways of analyzing time series in a data-driven, bottom-up fashion. However, in order to get correct results, it is important to understand how the SVD-spectrum of a time series is influenced by various characteristics of the underlying signal and noise. In this paper, we have extended the work in earlier papers by initiating a more systematic analysis of these effects. We then illustrate our findings on some real-life data.
\section{Introduction} \label{sct:intro} \subsection{Motivation} Since gathering sensor data has become relatively cheap and straightforward, it is nowadays common to collect detailed information about all sorts of processes and services that take place in factories, infrastructural networks and public spaces. In many of these applications (especially those related to human activities), there is a multitude of time series in which a pronounced but relatively short periodicity (e.g. a daily pattern) is superimposed on a slower, more global trend. If this underlying trend is simple or regular, classic detrending algorithms (e.g.~\cite{wu2007trend,kantelhardt2002multifractal}) can be applied to remove it. However, these techniques fall short if it is difficult to identify clear underlying patterns. In this paper we propose to use singular value decomposition (SVD) as a way to extract regular periodic patterns in a data-driven fashion. The basic idea is fairly straightforward and was first proposed in~\cite{kanjilal1994singular}. Let us suppose that one has a (1-dim) periodic time series $\mathbf{x} = (x_1, x_2,\ldots, x_n)$ that has a known period $p$. We can then reshape this time series into a matrix using the first $p$ observations (i.e. $x_1$ through $x_p$) to construct the first column, the second set of $p$ observations ($x_{p+1}$ through $x_{2p}$) as the second column, and so on. Assuming that the length $n$ of the time series is an integer multiple (say $q$) of $p$ (i.e. $n = pq$), this reshaping results in a $p \times q$ matrix $A$. If the time series is perfectly periodic and noiseless, this matrix $A$ has rank 1, since all the columns are identical. This means that $A$ can be expressed as the product of a single ($p$-dimensional) column $\mathbf{u}$ and ($q$-dimensional) row $\mathbf{v}^T$: \begin{equation} A = \sigma_1 \mathbf{u} \mathbf{v}^T \label{eq:rk_1} \end{equation} where $\sigma_1>0$ is a scaling factor to ensure the normalization $ |\!|\mathbf{u}|\!| = |\!|\mathbf{v}|\!| = 1$. In fact, in the case of identical columns, $\mathbf{v} = \mathbf{1}_q/\sqrt{q} $ (i.e. all $v$-entries are equal), while $\mathbf{u}$ will be equal to the common column scaled to unit norm. Obviously, the above represents an extreme case where all the singular values beyond the first one vanish. If we sprinkle a bit of noise onto the time series, the columns in $A$ will no longer be identical, but still very similar. As a consequence, the expression in~\eqref{eq:rk_1} will still hold to a very good approximation. This observation is the motivation for the introduction of singular value decomposition (SVD) which we will briefly recapitulate below. \subsection{SVD recapitulation and some notation} The basic result that we will use throughout the paper is the following well-known theorem. \noindent \begin{thm} {\bf Singular Value Decomposition (SVD)} Given an arbitrary $p \times q$ matrix $A \in \mathbb{R}^{p \times q}$, there exist matrices $U$ and $V$ (both with orthonormal columns), and nonnegative numbers $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r $ (where $r = \min(p,q)$), such that: \begin{equation} A = \sum\limits_{k = 1}^r \sigma_kU_k V_k^T = USV^T \label{eq:svd} \end{equation} with $U_k$ and $V_k$ denoting the $k^{th}$ column of $U$ and $V$, respectively, and $S$ a $p\times q$ matrix with the numbers $\sigma_k$ (the singular values) placed on its main diagonal. For a proof, see e.g.~\cite{strang1993introduction}.
\end{thm} \noindent In the remainder of this paper, we will assume that the singular values are arranged in descending order: $ \sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r \geq 0. $ For a given matrix $A$ we use the notation $\sigma_i(A)$ or $\lambda_i(A)$ to denote the $i$-th (ordered) singular value or eigenvalue, respectively. If there is no danger of confusion, the explicit reference to the matrix will be suppressed. Recall that there is a useful relationship between the singular values of a matrix $A \in \mathbb{R}^{p\times q}$ and the eigenvalues of the related matrices $AA^T$ and $A^T A$: \begin{equation} \sigma_i(A) = \sqrt{\lambda_i(AA^T)} = \sqrt{\lambda_i(A^T A)} , \label{eq:eigenvals} \end{equation} where $i = 1,\ldots,\min(p,q)$. This connection will be used extensively in the analysis below. \subsection{Applying SVD to time series} In \cite{kanjilal1995multiple}, the authors draw on SVD to address the following problems for time series: \begin{enumerate} \item {\bf Period extraction:} Given a time series $\mathbf{x}=(x_1,x_2,\ldots x_n)$, reshape it as a $p \times q$ matrix $A$ (where $p$ ranges between some judiciously chosen lower and upper value, and $q=\lfloor n/p \rfloor$ is the largest integer not exceeding $n/p$). The authors then introduce the {\it singular value ratio} \begin{equation} SVR(p) = \sigma_1/\sigma_2 \label{eq:svr} \end{equation} to quantify the dominance of the first singular value over the second. High values of the SVR are then considered as an indicator of the existence of a strong underlying periodicity. Plotting $SVR$ as a function of $p$ allows one to spot peaks and identify underlying periodicities. This indicator is not infallible, however, and one must exercise caution when interpreting these graphs, as explained in Section~\ref{sct:mean_shift}. \item {\bf Data-driven time series approximation and decomposition:} If in the expansion~\eqref{eq:svd} all but the first $m$ singular values are negligible, then truncating the expansion after $m$ terms will still result in an excellent approximation of the full matrix $A$ (and corresponding time series). Furthermore, the columns and rows that are retained can often be interpreted as meaningful patterns (see Fig.~\ref{fig:block_signal}). More precisely, for an arbitrary $p\times q$ matrix $A$ (using the notation established above) we know that the ($L_2$) optimal approximation of rank $m < r$ is given by: $$ A_m = \sum_{k=1}^m \sigma_k U_kV_k^T $$ and the Frobenius norm of the residual is given by $$|\!| A-A_m |\!|_F^2 = \sum_{k=m+1}^{r}\sigma_{k}^2 $$ \end{enumerate} The gist of these observations is clearly illustrated in Fig.~\ref{fig:block_signal}. The top panel shows a noisy block signal of length $n = 1000$ with a pronounced period $p=100$ and $q = 10$ full cycles. In addition to the noise, there are three irregularly occurring spikes. After rewriting this time series as a $100 \times 10 $ matrix $A$, we apply the SVD algorithm to obtain $A = USV^T$, where $S$ is a $100 \times 10$ ``rectangular diagonal'' matrix with the 10 singular values on its main diagonal. The middle panel shows those ten singular values, clearly illustrating that all except the first two are negligible, which means that the matrix (and therefore the time series) can be accurately represented by truncating the expansion in~\eqref{eq:svd} after the first two terms, i.e. by a rank-2 approximation (see Fig.~\ref{fig:block_signal_approx}).
Finally, the bottom panel of Fig.~\ref{fig:block_signal} displays the first three columns of $U$ (left) and $V$ (right), respectively. As they correspond to the most significant singular values, they are the most important for the reconstruction of the signal. The $U$-columns cover one cycle and can be interpreted as successive profiles needed to reconstruct a generic cycle. In that sense, they are analogous to the various trigonometric basis functions in Fourier analysis. The $V$-columns, on the other hand, specify the amplitudes with which these basis functions need to be combined in order to reproduce the individual cycles observed in the data. Not surprisingly, the main profile ($U_1$ top left) reflects the step-like behaviour seen during each cycle. As the amplitude of each of these steps is essentially constant, the 10 $V_1$-entries displayed in the top-right panel show little variation. The $U_2$ profile (middle, left) captures the shape of the additional spikes that occur at irregular intervals. The positive values in the corresponding $ V_2$-coefficients (middle, right) clearly indicate in which intervals these spikes occur. Finally, the erratic appearance of both $U_3$ and $V_3$ is a further indication (in line with $\sigma_3 \approx 0$) that all structural information has been extracted from the signal. \begin{figure} \centering \includegraphics[width = 7.4cm,height=4cm]{figs/signal_block_input} \includegraphics[width = 7.4cm,height=4cm]{figs/signal_block_singvals}\\ \includegraphics[width=7.4cm]{figs/signal_block_svd_rk3} \caption{SVD application to pattern extraction in a noisy block signal. {\bf Top: } Original data of the noisy block signal with period 100. In addition to the noise there are three irregularly occurring spikes. {\bf Middle: } The 10 singular values for the SVD with period $p = 100$. Clearly, only the first two are significant, and $\sigma_1 \gg \sigma_2$ confirms that $p=100$ corresponds to a valid periodicity. {\bf Bottom: } The first three columns of $U$ (left) and $V$ (right). } \label{fig:block_signal} \end{figure} \begin{figure} \centering \includegraphics[width=7.4cm]{figs/signal_block_approx_residuals_rk2.png} \caption{Top: Original (blue) and rank-2 approximation (red) of the block signal. Bottom: Residuals with respect to the approximation. } \label{fig:block_signal_approx} \end{figure} \noindent{\bf Contribution of this paper and overview:} The main contribution of this paper is to investigate more systematically the effects of noise (Section~\ref{sct:mean_shift}) and signal levels (Section~\ref{sct:signal_shift}) on the SVD spectrum of a time series with a known period. In Section~\ref{sct:cooler}, we show how one can use this decomposition to detect and interpret outliers. \section{Impact of signal- and noise-levels on singular values } As mentioned before, \eqref{eq:svr} was used to identify underlying periods in~\cite{kanjilal1995multiple}. However, what was apparently not realized is that the singular values are influenced by the relative and absolute levels of noise. Failing to recognize this interplay can result in biased or misleading results. For this reason, we will review and complement some earlier results. \subsection{Singular value spectrum of random matrices } In the introductory sections, we assumed that the matrix $A$ was the superposition of some underlying periodic signal and independent noise. However, to disentangle the impact of signal and noise, we first focus on the effect of pure noise (i.e. random matrices).
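Both the block-signal example above and the pure-noise baseline examined next are straightforward to reproduce. The following Python sketch is ours (spike positions, noise level and matrix sizes are arbitrary choices): it mirrors the construction behind Fig.~\ref{fig:block_signal}, checks the rank-2 residual identity, and then averages the singular values of zero-mean normal and exponential random matrices, anticipating the universality result discussed below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# --- the block-signal example above ---
p, q = 100, 10
t = np.arange(p * q)
x = np.where(t % p < p // 2, 1.0, 0.0)     # block signal, period 100
x[[310, 515, 760]] += 3.0                  # three irregular spikes
x += 0.1 * rng.standard_normal(p * q)      # noise

A = x.reshape(q, p).T                      # column j holds cycle j
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print("singular values:", np.round(s, 1))  # only two are significant

# rank-2 approximation; Frobenius residual = sqrt(sum of s[2:]^2)
A2 = s[0] * np.outer(U[:, 0], Vt[0]) + s[1] * np.outer(U[:, 1], Vt[1])
print(np.linalg.norm(A - A2), np.sqrt(np.sum(s[2:] ** 2)))

# --- pure noise: universality of the singular value spectrum ---
sn = np.zeros(50)
se = np.zeros(50)
for _ in range(200):
    sn += np.linalg.svd(rng.standard_normal((50, 50)),
                        compute_uv=False)
    se += np.linalg.svd(rng.exponential(size=(50, 50)) - 1.0,
                        compute_uv=False)
# averaged spectra for normal vs (zero-mean) exponential entries
print(np.round(sn[:3] / 200, 2), np.round(se[:3] / 200, 2))
\end{verbatim}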
The spectral study of random matrices (i.e. matrices for which the entries are independent, identically distributed (i.i.d.) random variables) has been a very active research domain in recent years and has uncovered a number of key insights (see e.g.~\cite{Tao_VanVu_2009,paul2014random,nguyen2014random}). One of the more striking results is the emergence of {\it universality}, which basically says that as the size of the matrix grows, the distribution of the singular values becomes increasingly independent of the distribution of the individual entries. Put differently, as long as the mean and variance of the noise are kept constant, its actual distribution has very little influence on the distribution of the resulting singular values, assuming the size of the matrix is not too small. This surprising result is illustrated in Fig.~\ref{fig:singvals_universality} where we compare the singular values (averaged over 200 trials) of $50\times 50 $ random matrices for two different distributions of the individual matrix entries: standard normal and exponential (shifted to become zero-mean). The agreement of the singular values is striking. \begin{figure} \centering \includegraphics[width=7.4cm]{figs/svd_randn_exp_comparison_n_50.png} \caption{Singular values (averaged over 200 trials) for $50\times 50$ random matrices generated by drawing i.i.d. entries from the standard normal (red) and (shifted to ensure zero mean and unit variance) exponential (blue) distributions. } \label{fig:singvals_universality} \end{figure} In addition to the above result, we also know that rescaling the entries of a zero-mean random matrix induces a corresponding rescaling of the singular values: $$ \sigma_i(\alpha A) = \alpha \,\sigma_i(A). $$ This follows immediately from the observation that $\alpha A = U(\alpha S)V^T$. In other words, the singular value ratio $SVR = \sigma_1/\sigma_2$ is not affected by a uniform rescaling of the noise amplitude. However, a shift in the mean of the noise does affect the SVR, as will be explained in the section below. \subsection{Impact of entries mean value} \label{sct:mean_shift} In the original papers~\cite{kanjilal1995multiple,ying2004improved}, it was not sufficiently appreciated how a shift in the mean value of the time series (the DC component) impacts the SVR. This is important, as failure to understand this issue introduces a major bias in the test values and could therefore result in erroneous conclusions. To address this issue, we compare the singular values of a zero-mean $p\times q$ random matrix $A_0$ and its mean-shifted version $ A = A_0 + \alpha$, which is shorthand for $ A = A_0 + \alpha \mathbf{1}_{p\times q} = A_0+\alpha \mathbf{1}_p \mathbf{1}_q^T $. Using the connection between singular values and eigenvalues expounded in~\eqref{eq:eigenvals}, we can express any singular value $\sigma(A)$ as: \begin{align*} \sigma^2(A) &= \lambda(AA^T) \\ &= \lambda\big((A_0 + \alpha \mathbf{1}_p \mathbf{1}_q^T) (A_0^T+ \alpha \mathbf{1}_q \mathbf{1}_p^T)\big)\\ &= \lambda \left(A_0A_0^T + \alpha (A_0 \mathbf{1}_q \mathbf{1}_p^T + \mathbf{1}_p \mathbf{1}_q^T A_0^T) + \alpha^2 \mathbf{1}_p \mathbf{1}_q^T \mathbf{1}_q \mathbf{1}_p^T\right) \\ &= \lambda \left(A_0A_0^T + \alpha q(R \mathbf{1}_p^T + \mathbf{1}_p R^T) + \alpha^2 q \mathbf{1}_p \mathbf{1}_p^T \right) \end{align*} where $ R = (1/q)A_0 \mathbf{1}_q $ is a $p\times 1$ column matrix for which each element is the mean of the corresponding row of $A_0$. However, recall that the entries of $A_0$ are independent zero-mean stochastic variables.
Hence, unless the matrix dimensions are very small, it follows that $R \approx 0$ and this term can be neglected. We therefore derive the approximation: \begin{equation} \sigma^2(A) \approx \lambda \left(A_0A_0^T + \alpha^2 q \mathbf{1}_p \mathbf{1}_p^T \right) \label{eq:sigma_approx} \end{equation} Next, we make use of the standard result on Rayleigh quotients for eigenvalues, which states that the dominant eigenvalue of a symmetric, positive definite matrix $M$ is the solution to the maximization problem: $$ \lambda_1 = \max_{\mathbf{x}\neq \mathbf{0}} \left( \frac{\mathbf{x}^T M \mathbf{x}}{ \mathbf{x}^T\mathbf{x}} \right) = \max\limits_{|\!|\mathbf{u}|\!| = 1} \, ( \mathbf{u}^T M \mathbf{u}). $$ Furthermore, if a unit vector $\mathbf{u}_1$ realizes the above maximum, then the second largest eigenvalue is obtained as the solution of the constrained optimization problem: $$ \lambda_2 = \max\limits_{|\!|\mathbf{u}|\!| = 1}\, ( \mathbf{u}^T M \mathbf{u}) \quad\quad s.t. \quad \mathbf{u} \perp \mathbf{u}_1, $$ and so on for the successive eigenvalues. Combining this with the approximation derived in~\eqref{eq:sigma_approx}, we get the following approximation for the first singular value of $A$: \begin{align} \sigma_1^2(A) &\approx \max\limits_{|\!|\mathbf{u}|\!| = 1}\, \mathbf{u}^T\left(A_0A_0^T + \alpha^2 q \mathbf{1}_p \mathbf{1}_p^T \right) \mathbf{u}\nonumber\\ &= \max\limits_{|\!|\mathbf{u}|\!| = 1}\, \left( \mathbf{u}^TA_0A_0^T \mathbf{u} + \alpha^2 q \mathbf{u}^T\mathbf{1}_p \mathbf{1}_p^T \mathbf{u}\right)\nonumber\\ &= \max\limits_{|\!|\mathbf{u}|\!| = 1} \left(\mathbf{u}^TA_0A_0^T \mathbf{u} + \alpha^2q\, \Big(\sum_i u_i\Big)^2\right) \label{eq:sigma_full} \end{align} This derivation shows that \begin{equation} \sigma_1^2(A) \leq \max\limits_{|\!|\mathbf{u}|\!| = 1} \left(\mathbf{u}^TA_0A_0^T \mathbf{u}\right) +\alpha^2 pq \\ = \sigma_1^2(A_0) + \alpha^2 pq, \label{eq:sigma_1_ineq} \end{equation} since, by the Cauchy-Schwarz inequality, $$ \left(\sum_i u_i\right)^2 \leq \left(\sum_{i=1}^p u_i^2\right) \left(\sum_{i=1}^p 1\right) = p \quad \quad \mbox{when}\quad |\!| u|\!| = 1. $$ However, in general the unit vector $\mathbf{u}$ that maximizes the Rayleigh quotient will not necessarily also maximize $(\sum u_i)^2 $. In fact, for higher singular values, the number of orthogonality constraints on $\mathbf{u}$ increases proportionally, suggesting that on average $\sum u_i \approx 0$, and therefore $\sigma_i^2(A) \approx \sigma_i^2(A_0)$. This is indeed exactly what is seen in numerical experiments (Fig.~\ref{fig:singvals_mean_shift}). Notice that the first singular value is very close to the maximal value in~\eqref{eq:sigma_1_ineq}, which would be attained if both terms in~\eqref{eq:sigma_full} could be optimized independently and simultaneously. \begin{figure}[!h] \centering \includegraphics[width=8cm]{figs/singvals_mean_shift} \caption{Comparison of the singular values of a matrix ($10 \times 10$) with zero-mean entries (red) and shifted mean ($\alpha = 5$). The dotted line indicates the (approximate) upper limit based on~\eqref{eq:sigma_1_ineq}. Recall that the entries of the matrix $A_0$ are random numbers, but by shifting the global mean the $SVR = \sigma_1/\sigma_2$ increases, erroneously suggesting that some underlying periodic structure is present. } \label{fig:singvals_mean_shift} \end{figure} Clearly, failing to remove the mean from a noisy time series would inflate the first singular value (and only the first one!)
resulting in an upwardly biased value for the singular value ratio (SVR). This would reduce the power of an SVD-based method in data mining applications such as blind screening. In the next section we will investigate the impact of a genuine underlying periodic signal. \subsection{Impact of the underlying periodic signal} \label{sct:signal_shift} Suppose that we have a noisy but perfectly stationary and periodic time series $\mathbf{x} = (x_1,x_2, \ldots , x_n)$ with period $p$. For the sake of simplicity, we assume that the data cover an integer number $q = n/p $ of periods (cycles). As explained in Section~\ref{sct:intro}, we then use the first $p$ observations to create the first column of the matrix $A$, and the observations $x_{p+1},\ldots, x_{2p}$ to create the second column, and so on, until we end up with a $p\times q$ matrix $A$. If the noise is very small, each column is essentially a copy of the first one and we can write: $$A \approx \mathbf{a}\mathbf{1}_q^T $$ where the $p\times 1$ column $\mathbf{a}$ represents the data for one period. In general, however, the data is noisy, and we model that by adding independent additive noise with variance $\varepsilon^2$: $$ A = \mathbf{a}\mathbf{1}_q^T + \varepsilon N. $$ Here $N$ is a $p \times q$ matrix of independent, identically distributed (i.i.d.) noise variables with zero mean and unit variance. To investigate the behaviour of the singular values we use the fact that \begin{eqnarray*} \sigma^2(A) &= &\lambda(A^TA) \\ &=& \lambda ( (\mathbf{a}\mathbf{1}_q^T + \varepsilon N )^T (\mathbf{a}\mathbf{1}_q^T + \varepsilon N)) \\ &=& \lambda (a^2 \mathbf{1}_q \mathbf{1}_q^T + \varepsilon (N^T \mathbf{a} \mathbf{1}_q^T + \mathbf{1}_q \mathbf{a}^T N) +\varepsilon^2 N^TN ) \end{eqnarray*} where $a^2 = \mathbf{a}^T \mathbf{a} = |\!| \mathbf{a}|\!|^2$. Since the entries of the noise matrix $N$ are independent, zero-mean and unit-variance stochastic variables, we can make the following approximation for the $q \times q $ matrix $N^TN $: $$ (N^T N)_{ij} = \sum_{k=1}^p N_{ki}N_{kj} \approx \left\{ \begin{array}{cc} p & \mbox{if } i=j \\ 0 & \mbox{if } i \neq j \end{array} \right. $$ This approximation is obtained by taking expected values and using the fact that $E(N_{ki}N_{kj}) = 1 $ if $i=j$, and zero otherwise. From this, we conclude that approximately $$ N^TN \approx pI_q. $$ Similarly, the expectation of the cross-term vanishes by the linearity of the expectation operator: $$ E (N^T \mathbf{a} \mathbf{1}_q^T + \mathbf{1}_q \mathbf{a}^T N) = E (N^T) \mathbf{a} \mathbf{1}_q^T + \mathbf{1}_q \mathbf{a}^T E( N) = 0. $$ As a consequence, to a good approximation, the squared singular values of $A$ can be identified with the eigenvalues of the following matrix: $$ \sigma^2(A) \approx \lambda(a^2 \mathbf{1}_q \mathbf{1}_q^T + \varepsilon^2 p I_q) . $$ The structure of the matrix on the RHS allows us to arrive at some conclusions regarding the singular values. Since any vector is an eigenvector of the identity matrix, it suffices to focus on the first term, which is a rank-1 matrix (as the product of a column and a row).
This implies that all but one of its eigenvalues vanish, and since $\mathbf{1}_q$ is obviously an eigenvector $\left( (\mathbf{1}_q \mathbf{1}_q^T ) \mathbf{1}_q = \mathbf{1}_q (\mathbf{1}_q^T \mathbf{1}_q) =q \mathbf{1}_q \right)$, it follows that the maximal eigenvalue (and therefore, singular value) is approximately equal to $$ \sigma_1(A) \approx \sqrt{a^2q+\varepsilon^2p}. $$ The subsequent singular values correspond to the eigenvectors which are mapped to zero by the rank-1 matrix and therefore are not influenced by the $a^2$ term: $$ \sigma_i(A) \approx \sqrt{p}\,\varepsilon \quad\quad (\mbox{for }i \geq 2 ). $$ Put differently, these lower ranked singular values are not influenced by the signal $\mathbf{a}$, just by the noise. Notice also that the gap between the first and the subsequent singular values grows proportionally to $\sqrt{q}$: the more cycles are present in the data, the more pronounced the difference. Furthermore, in many cases the noise level $\varepsilon^2$ can be neglected with respect to the strength of the signal ($a^2$), resulting in a further approximation: $$ \sigma_1(A) \approx \sqrt{q}a. $$ This is illustrated in Fig.~\ref{fig:signal_approx}, where we took a fixed noise level $\varepsilon = 0.2$ and a signal strength $a = k a_0$ which is a multiple of some basic level $a_0 = \sqrt{12.5}$, with $k =0, 1, 2, 3$. The number of full cycles in each case was equal to $q=10$. We therefore expect the first singular value for each of these signal levels to be roughly equal to $\sqrt{q}\, a_0 k \approx 11.2k$. \begin{figure} \centering \includegraphics[width=7.4cm]{figs/svd_singval_1_signal_strength} \caption{The influence of the underlying signal strength on the first singular value. The curve for $k=0$ corresponds to pure noise (no underlying signal). Notice how increasing the signal strength results in corresponding increments in the first singular value. } \label{fig:signal_approx} \end{figure} It is important to realize that this observation is different from the result in Section~\ref{sct:mean_shift}, where the first singular value was affected by a shift in the mean noise level. In this case, the mean $ (1/p) \sum_i a_i$ of the periodic signal $\mathbf{a}$ can still be zero, but it is its $L_2$ norm ($a^2 = |\!|\mathbf{a}|\!|^2 $) that is seen to affect the first singular value. \section{Application: Data-driven outlier identification} \label{sct:cooler} In the preceding sections we have explored how the singular value spectrum can be used to identify a low-rank approximation of a time series and how to avoid misleading biases in the process. These low-rank approximations provide us with a useful tool to identify and interpret outliers. As an illustration, consider the data in the top panel of Fig.~\ref{fig:var7_svd_rk2}, which represents the hourly averaged power consumption of an industrial cooler (installed in business offices) over roughly 6 months (January through early July, or $n=4368$ data points). This cooler works in tandem with two other coolers, which explains the burst-like character of the data. Since the activity of this cooler is linked to human activity, it shows a clear daily periodicity, and we therefore performed an SVD with $p = 24$ and $q=n/p=182$. The plots in the next two rows of Fig.~\ref{fig:var7_svd_rk2} show (left) the first two $U$-columns (24 entries each) and (right) the corresponding $V$-columns of length 182 each.
The two $U$-profiles are plausible: the first captures a (weighted) average of the daily activity and therefore shows some baseline activity during the night, which then ramps up around 8am and returns to the baseline at about 8pm. The additional contribution encoded in the second profile results in a higher activity in the morning, but lower activity in the afternoon. The corresponding $V$-columns on the right specify the appropriate coefficients with which these profiles should be weighted to obtain the approximation (red graph in the top panel of Fig.~\ref{fig:var7_residuals}). The $V_1$ values roughly mirror the raw data, but $V_2$ shows a spike that corresponds to the high value in the 3rd burst, indicating that this high value is partly due to an unusually high value in the morning. However, notice that this spike is well modelled by the first two components of the SVD: as a consequence this high value does not result in a corresponding high value for the residual (see the bottom panel of Fig.~\ref{fig:var7_residuals} and the zoomed-in version in Fig.~\ref{fig:svd_detail_residual}). In fact, the third burst does show a spike in the residuals, but it corresponds to a relatively low value that cannot be adequately captured by a combination of the first two $U$-profiles. So, using this type of analysis, we can easily make the distinction between high values that are the result of unusual but regular activity (encoded in $U$-profiles that correspond to large singular values), and possibly lower values that cannot be adequately approximated by combining such prominent data-driven profiles (i.e. ``real'' outliers). \section{Conclusion} In this paper we have argued that the well-known singular value decomposition (SVD), which is usually applied to matrix problems, can also be successfully applied to identify periodic patterns (profiles) in time series. Furthermore, these profiles are completely defined by the data and do not require the specification of user-defined parameters, apart from the period (which itself can be estimated using this approach). As such, this methodology offers a purely data-driven approach to adaptive signal approximation and, based on that, outlier detection. Moreover, we have shown that a judicious comparison of the $V$-coefficients and residuals allows one to distinguish between different ways in which data-points can be atypical or salient. From a data mining perspective, this opens up new ways of analyzing time series in a data-driven, bottom-up fashion. However, it then becomes essential to thoroughly understand how the SVD spectrum of a time series is influenced by various characteristics of the signal and noise. In this paper, we have extended the work in earlier papers by initiating a more systematic analysis of these effects. \section*{Acknowledgment} The authors would like to acknowledge partial support by the Dutch TTW-project SES-BE.
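As a numerical appendix to Sections~\ref{sct:mean_shift} and~\ref{sct:signal_shift}, the following Python sketch (ours; the sizes, shift and noise levels are arbitrary choices) compares observed singular values against the approximations derived in those sections.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, q = 100, 10

# Mean shift: sigma_1(A0 + alpha) is bounded by
# sqrt(sigma_1(A0)^2 + alpha^2 p q); sigma_2 is hardly affected.
A0 = rng.standard_normal((p, q))
s0 = np.linalg.svd(A0, compute_uv=False)
for alpha in (1.0, 5.0):
    s = np.linalg.svd(A0 + alpha, compute_uv=False)
    bound = np.sqrt(s0[0] ** 2 + alpha ** 2 * p * q)
    print(alpha, s[0], bound, s[1], s0[1])

# Periodic signal plus noise: sigma_1 ~ sqrt(a^2 q + eps^2 p),
# while the lower singular values are of order eps*sqrt(p).
eps = 0.2
a_vec = rng.standard_normal(p)          # one cycle of the signal
A = np.outer(a_vec, np.ones(q)) + eps * rng.standard_normal((p, q))
s = np.linalg.svd(A, compute_uv=False)
a2 = a_vec @ a_vec
print(s[0], np.sqrt(a2 * q + eps ** 2 * p))  # first singular value
print(s[1], eps * np.sqrt(p))                # rough scale of the rest
\end{verbatim}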
{ "timestamp": "2018-07-11T02:02:34", "yymm": "1807", "arxiv_id": "1807.03386", "language": "en", "url": "https://arxiv.org/abs/1807.03386", "abstract": "We address the problem of data-driven pattern identification and outlier detection in time series. To this end, we use singular value decomposition (SVD) which is a well-known technique to compute a low-rank approximation for an arbitrary matrix. By recasting the time series as a matrix it becomes possible to use SVD to highlight the underlying patterns and periodicities. This is done without the need for specifying user-defined parameters. From a data mining perspective, this opens up new ways of analyzing time series in a data-driven, bottom-up fashion. However, in order to get correct results, it is important to understand how the SVD-spectrum of a time series is influenced by various characteristics of the underlying signal and noise. In this paper, we have extended the work in earlier papers by initiating a more systematic analysis of these effects. We then illustrate our findings on some real-life data.", "subjects": "Methodology (stat.ME)", "title": "Data-driven pattern identification and outlier detection in time series", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692291542525, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7079584938704495 }
https://arxiv.org/abs/2301.10547
General Distributions of Number Representation Elements
We provide general expressions for the joint distributions of the $k$ most significant $b$-ary digits and of the $k$ leading continued fraction coefficients of outcomes of an arbitrary continuous random variable. Our analysis highlights the connections between the two problems. In particular, we give the general convergence law of the distribution of the $j$-th significant digit, which is the counterpart of the general convergence law of the distribution of the $j$-th continued fraction coefficient (Gauss-Kuz'min law). We also particularise our general results for Benford and Pareto random variables. The former particularisation allows us to show the central role played by Benford variables in the asymptotics of the general expressions, among other results. The particularisation for Pareto variables -- which include Benford variables as a special case -- is specially relevant in the context of pervasive scale-invariant phenomena, where Pareto variables occur much more frequently than Benford variables. This suggests that the Pareto expressions that we produce have wider applicability than their Benford counterparts in modelling most significant digits and leading continued fraction coefficients of real data. Our results may find practical application in all areas where Benford's law has been previously used.
\section{Introduction} \label{sec:introduction} Real numbers can be represented using positional numeral systems, but also using continued fraction expansions. Our main goal in this paper is to provide a general treatment of the related problems of modelling probabilistically the most significant digits and the leading continued fraction coefficients of outcomes of an arbitrary continuous random variable, and to evince the parallelisms between the two problems. Ever since the observations made by Newcomb~\cite{newcomb81:_note} and Benford~\cite{benford1938}, studies of the distribution of most significant digits have largely focused on Benford variables ---for an overview, see the introduction by Berger and Hill~\cite{berger11:_basic_th_benford} and the book by Miller~\cite{miller15:_benfords_law}. Some generalisations have been pursued by Pietronero et al.~\cite{pietronero01:_explaining} and by Barabesi and Pratelli~\cite{barabesi20:_generalized}, among others, but a truly general approach to modelling significant digits has never been presented. Much less attention has been devoted to modelling continued fraction (CF) coefficients. Except for the well-known Gauss-Kuz'min asymptotic law~\cite{khinchin61:continued} and an approximation due to Blachman~\cite{blachman84}, most of the work in this area has been pioneered by Miller and Takloo-Bighash~\cite{miller06:_invitation}. In any case, all existing non-asymptotic results for CF coefficient models cover solely the particular case in which the fractional part of the data represented by means of continued fractions is uniformly distributed. No general approach has been investigated for this problem either. This paper is organised as follows. In Section~\ref{sec:gen-prob-msd} we give the general expression for the joint distribution of the $k$ most significant $b$-ary digits of outcomes drawn from an arbitrary positive-valued distribution. One application of our analysis is a proof of the general asymptotic distribution of the $j$-th most significant $b$-ary digit, which, as we will discuss, is the near-exact counterpart of the Gauss-Kuz'min law for the general asymptotic distribution of the $j$-th continued fraction coefficient. Our approach to modelling the $k$ most significant $b$-ary digits through a single variable ---rather than through $k$ separate variables as in previous works--- leads to further contributions\footnote{Some preliminary results in this paper previously appeared in~\cite{balado21:_benford}.} in the particularisation of our general results in Section~\ref{sec:particular-cases}. Therein we produce a new closed-form expression for the distribution of the $j$-th significant $b$-ary digit of a Benford variable, and we give a short new proof of the asymptotic sum-invariance property of these variables. We also show that Benford's distribution is just a particular case of a more general distribution based on Pareto variables, which must have wider applicability in the pervasive realm of scale-invariant data ---a fact first pointed out by Pietronero et al.~\cite{pietronero01:_explaining} and then expanded upon by Barabesi and Pratelli~\cite{barabesi20:_generalized}, who however did not give results as complete as ours. In Section~\ref{sec:gen-prob-cf-coeff} we give the general expression for the joint distribution of the $k$ leading continued fraction coefficients of outcomes drawn from an arbitrary distribution.
This is shown to be explicitly analogous to modelling the $k$ most significant $b$-ary digits of the data when the continued fraction represents the logarithm base $b$ of the data. Therefore, modelling leading CF coefficients is a realistic practical alternative to modelling most significant digits. Most of the results in Section~\ref{sec:gen-prob-cf-coeff} are novel, and so are their particularisations in Section~\ref{sec:particular-cases}, except for the special cases previously given by Miller and Takloo-Bighash~\cite{miller06:_invitation}. Additionally, we show in Section~\ref{sec:benf-vari-spec} the central role played by the particular analysis for Benford variables in the asymptotics of the general expressions ---both when modelling significant digits and leading continued fraction coefficients--- and we demonstrate this numerically in the case of Pareto variables in Section~\ref{sec:benf-based-appr}. Finally, we empirically verify all our theoretical results in Section~\ref{sec:empirical-tests}, using both Monte Carlo experiments and real datasets. \textbf{Notation and preliminaries.} Calligraphic letters are sets, and $\vert\mathcal{V}\vert$ is the cardinality of set~$\mathcal{V}$. Boldface Roman letters are row vectors. Random variables (r.v.'s) are denoted by capital Roman letters, or by functions of these. The cumulative distribution function (cdf) of r.v.~$Z$ is $F_Z(z)=\Pr(Z\le z)$, where $z\in\mathbb{R}$. The expectation of $Z$ is denoted by $\mathop{\textrm{E}}(Z)$. If~$Z$ is continuous with support~$\mathcal{Z}$, its probability density function (pdf) is~$f_Z(z)$, where $z\in\mathcal{Z}$. A r.v.~$Z$ which is uniformly distributed between $a$ and $b$ is denoted by~$Z\sim U(a,b)$. The probability mass function (pmf) of a discrete r.v. $Z$ with support $\mathcal{Z}$ is denoted by $\Pr(Z=z)$, where $z\in\mathcal{Z}$. The unit-step function is defined as $u(z)=1$ if $z\ge 0$, and $u(z)=0$ otherwise. The fractional part of $z\in \mathbb{R}$ is $\{z\}=z-\lfloor z\rfloor$. Curly braces are also used to list the elements of a discrete set, and the meaning of $\{z\}$ (i.e. either a fractional part or a one-element set) is clear from the context. We exclude zero from the set of natural numbers $\mathbb{N}$. We use Knuth's notation for the rising factorial powers of $z\in\mathbb{R}$: $z^{\overline{m}}=\prod_{i=0}^{m-1}(z+i)=\Gamma(z+m)/\Gamma(z)$~\cite{graham99:_concrete}. Throughout the manuscript, $X$ denotes a positive continuous~r.v. We also define the associated r.v. \begin{equation*}\label{eq:y} Y=\log_b X \end{equation*} for an arbitrary $b\in\mathbb{N}\backslash\{1\}$. The fractional part of $Y$, i.e. $\{Y\}$, will play a particularly relevant role in our analysis. The cdf of $\{Y\}$ is obtained from the cdf of $Y$ as follows: \begin{equation}\label{eq:cdffrcy} F_{\{Y\}}(y)=\sum_{i\in\mathbb{Z}}\big(F_{Y}(y+i)-F_{Y}(i)\big), \end{equation} for~$y\in[0,1)$. Because $\{Y\}$ is a fractional part, it always holds that $F_{\{Y\}}(y)=0$ for $y\le 0$ and $F_{\{Y\}}(y)=1$ for $y\ge 1$. Also, $F_{\{Y\}}(y)$ is a continuous function of $y$ because $X$ is a continuous r.v. \section{General Probability Distribution of the \lowercase{$k$} Most Significant \lowercase{$b$}-ary Digits}\label{sec:gen-prob-msd} In this section we will obtain the general expression for the joint probability distribution of the~$k$ most significant digits of a positive real number written in a positional base~$b$ numeral system, where $b\in\mathbb{N}\backslash \{1\}$.
Let us first define \begin{equation*} \label{eq:support_set_bary_digits} \mathcal{A}=\{0,1,\ldots,b-1\}. \end{equation*} The $b$-ary representation of $x\in\mathbb{R}^+$ is formed by the unique digits $a_i\in\mathcal{A}$ such that $x=\sum_{i\in \mathbb{Z}} a_i\, b^i$ ---unicity requires ruling out representations where $a_i=b-1$ for all $i<j$, where $j<0$ and $a_j<b-1$ or $j=0$ and $a_j\in\mathcal{A}$. If we now let $n=\lfloor \log_b x\rfloor$, the most significant $b$-ary digit of $x$ is~$a_n$. This is because the definition of~$n$ implies $n\le \log_b x < n+1$, or, equivalently, $b^{n}\le x < b^{n+1}$. Using $n$, the~$k$ most significant $b$-ary digits of~$x$ can be inferred as follows: \begin{equation} a=\lfloor x\,b^{-n+k-1}\rfloor=\lfloor b^{\{\log_b x\}+k-1}\rfloor.\label{eq:a} \end{equation} By using $0\le \{\log_b x\}<1$ in~\eqref{eq:a} we can verify that $a$ belongs to the following set of integers: \begin{equation}\label{eq:support} \mathcal{A}_{(k)}=\{b^{k-1},\ldots, b^{k}-1\}, \end{equation} whose cardinality is $\vert\mathcal{A}_{(k)}\vert=b^{k}-b^{k-1}$. We propose to call $a$ in~\eqref{eq:a} the $k$-th \textit{integer significand} of $x$. We must mention that for some authors the significand of~$x$ is the integer $\lfloor x\,b^{-n}\rfloor=a_n$, or even $\lfloor x\,b^{-n+k-1}\rfloor$ itself~\cite[page 7]{nigrini12:_benford}, but for some others the significand of $x$ is the real $x\,b^{-n}\in [1,b)$~\cite{berger11:_basic_th_benford,miller15:_benfords_law} ---which is sometimes also called the normalised significand. In any case, the advantages of consistently working with the $k$-th integer significand will become clear throughout this paper. To give an example of~\eqref{eq:a} and~\eqref{eq:support}, say that $b=10$ and $x=0.00456678$. In this case $n=\lfloor\log_{10} x\rfloor=-3$, so if we choose for instance $k=2$ then $a=\lfloor x\, 10^{4}\rfloor=\lfloor 10^{1.65961}\rfloor=45\in\mathcal{A}_{(2)}=\{10,11,12,\ldots,98,99\}$. \begin{theorem}[General distribution of the $k$ most significant $b$-ary digits]\label{thm:msd} If~$A_{(k)}$ denotes the discrete r.v. that models the $k$ most significant $b$-ary digits (i.e. the $k$-th integer significand) of a positive continuous r.v. $X$, then \begin{equation}\label{eq:pmfA} \Pr(A_{(k)}=a)=F_{\{Y\}}\big(\log_b(a+1)-k+1\big)-F_{\{Y\}}\big(\log_b a-k+1\big), \rule[-1em]{0pt}{0pt} \end{equation} where $a\in\mathcal{A}_{(k)}$ and $Y=\log_b X$. \end{theorem} \begin{proof} In view of~\eqref{eq:a}, the r.v. we are interested in is defined as \begin{equation} A_{(k)}=\lfloor b^{\{\log_b X\}+k-1}\rfloor.\label{eq:ak_definition} \end{equation} From this definition, $A_{(k)}=a$ when $a\le b^{\{\log_b X\}+k-1}< a+1$, or, equivalently, when \begin{equation} \log_b a-k+1\le \{\log_b X\} < \log_b (a+1)-k+1.\label{eq:ineq_mod} \end{equation} Using~\eqref{eq:ineq_mod} and the cdf of $\{Y\}=\{\log_b X\}$ we get~\eqref{eq:pmfA}. \end{proof} \begin{remark} It is straightforward to verify that the pmf~\eqref{eq:pmfA} adds up to one over its support, as \begin{equation} \label{eq:pmfA_verification} \sum_{a\in\mathcal{A}_{(k)}} \Pr(A_{(k)}=a)=F_{\{Y\}}\big(1\big)-F_{\{Y\}}\big(0\big) \end{equation} due to the cancellation of all consecutive terms in the telescoping sum on the left-hand side of~\eqref{eq:pmfA_verification} except for the two shown on the right-hand side. Because $F_{\{Y\}}(y)$ is the cdf of a r.v. with support~$[0,1)$, the right-hand side of~\eqref{eq:pmfA_verification} must equal one.
\end{remark} \subsection{Distribution of the $j$-th Most Significant $b$-ary Digit} \label{sec:prob-distr-k} Next, let us denote by $A_{[j]}$ the r.v. that models the $j$-th most significant $b$-ary digit of~$X$. This variable can be obtained from the variable $A_{(j)}$ that models the $j$-th integer significand as follows: \begin{equation} A_{[j]}=A_{(j)}\pmod{b}.\label{eq:aj_mod_definition} \end{equation} Obviously, $A_{[1]}=A_{(1)}$. From~\eqref{eq:aj_mod_definition}, the pmf of $A_{[j]}$ for $j\ge 2$ is \begin{align} \label{eq:pmfAj} \Pr(A_{[j]}\!=\!a)&= \sum_{r\in\mathcal{A}_{(j-1)}} \Pr(A_{(j)}=r b+a), \end{align} where $a\in\mathcal{A}$. \begin{remark}\label{rem:kth_significand} Observe that~\eqref{eq:pmfA} is also the joint pmf of $A_{[1]},\dots,A_{[k]}$. To see this we just have to write $a=\sum_{j=1}^{k}a_j b^{-j+k}$, which implies that $\Pr(A_{[1]}=a_1,\dots,A_{[k]}=a_k)=\Pr(A_{(k)}=a)$. Under this view,~\eqref{eq:pmfAj} is simply a marginalisation of~\eqref{eq:pmfA}. However, the derivation of~\eqref{eq:pmfA} is simpler using the $k$-th integer significand variable $A_{(k)}$ than using $A_{[1]},\dots,A_{[k]}$. Further examples of the advantages of working with $k$-th integer significands are the following theorem and the results in Sections~\ref{sec:most-sign-digits} and~\ref{sec:most-sign-digits-1}. \end{remark} \begin{theorem}[General asymptotic distribution of the $j$-th most significant $b$-ary digit]\label{thm:asympt-aj} For any positive continuous random variable $X$, it holds that \begin{equation} \lim_{j\to\infty}\Pr(A_{[j]}=a)= b^{-1}.\label{eq:asympt-aj} \end{equation} \end{theorem} \begin{proof} For any $\epsilon>0$ there exists $j_{\min}$ such that for all $j\ge j_{\min}$ \begin{equation} \label{eq:epsilon} \log_b(rb+a)-\log_b(rb)<\epsilon \end{equation} for all $a\in\mathcal{A}$ and $r\in\mathcal{A}_{(j-1)}$. Specifically, $j_{\min}=\lceil 1+\log_b(\frac{b-1}{b^\epsilon-1}) \rceil$. Therefore, inequality~\eqref{eq:epsilon} and the continuity of $F_{\{Y\}}(y)$ in~\eqref{eq:pmfA} imply that $\lim_{j\to\infty}\big(\Pr(A_{(j)}=r b+a)-\Pr(A_{(j)}=r b)\big)=0$ for all $a\in\mathcal{A}$. Thus, from~\eqref{eq:pmfAj}, we have that~\eqref{eq:asympt-aj} holds. \end{proof} \begin{remark} In general, the larger~$b$ is, the faster the convergence of~$A_{[j]}$ to a uniform discrete r.v. This is because $j_{\min}$ is nonincreasing in $b$ for a given $\epsilon$. Informally, Theorem~\ref{thm:asympt-aj} can be argued as follows: since $rb\ge b^{j-1}$ in~\eqref{eq:pmfAj}, for large~$j$ we have that $rb\gg a$, and therefore $rb+a\approx rb$. Consequently, when $j$ is large $\Pr(A_{(j)}=rb+a)\approx \Pr(A_{(j)}=rb)$ because of the continuity of the cdf of~$\{Y\}$. In such a case~\eqref{eq:pmfAj} is approximately constant over~$a\in\mathcal{A}$, and so we have that~$A_{[j]}$ is asymptotically uniformly distributed. \end{remark} \section{General Probability Distribution of the \lowercase{$k$} Leading Continued Fraction Coefficients} \label{sec:gen-prob-cf-coeff} Continued fraction expansions are an alternative to positional base~$b$ numeral systems for representing real numbers. In this section we will obtain the general expression for the joint probability distribution of the $k$ leading coefficients in the simple CF of a real number. Let $y_0=y\in \mathbb{R}$ and define the recursion $y_{j}=\{y_{j-1}\}^{-1}$ based on the decomposition $y_{j-1}=\lfloor y_{j-1}\rfloor +\{y_{j-1}\}$.
By letting $a_j=\lfloor y_j\rfloor$ we can express $y$ as the following simple~CF: \begin{equation}\label{eq:continued_y0} y=a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cdots}} \end{equation} A CF is termed \textit{simple} (or regular) when all the numerators of the nested fractions equal~$1$. For typographical convenience, we will write the CF representation of $y$ in~\eqref{eq:continued_y0} using the standard notation \begin{equation*}\label{eq:continued_y0_2} y=[a_0;a_1,a_2,\ldots]. \end{equation*} From the construction of the simple CF we have that $a_0\in\mathbb{Z}$, whereas $a_j\in\mathbb{N}$ for $j\ge 1$. The recursion stops if $\{y_j\}=0$ for some $j$, which only occurs when $y\in\mathbb{Q}$; otherwise the CF is infinite (for an in-depth introduction to continued fractions see~\cite{khinchin61:continued}). Our goal in this section is to model probabilistically the~$a_j$ coefficients for $j\ge 1$. To this end we will assume that~$y$ is drawn from a continuous r.v.~$Y$. Because $\mathbb{Q}$ is of measure zero, $\Pr(Y\in\mathbb{Q})=0$, and so we may assume that the CF of~$y$ drawn from $Y$ is almost surely infinite ---and thus that the $a_j$ coefficients are unique. In practical terms, this means that, letting $Y_0=Y$, we can define the continuous r.v.'s \begin{equation*} Y_{j}=\{Y_{j-1}\}^{-1} \end{equation*} with support $(1,\infty)$ for all $j\ge 1$. Therefore, the simple CF coefficients we are interested in are modelled by the discrete r.v.'s \begin{equation} A_j=\lfloor Y_j\rfloor\label{eq:aj_rvs} \end{equation} with support $\mathbb{N}$, for all $j\ge 1$. \textbf{Additional notation.} In order to streamline the presentation in this section we define the $k$-vector \begin{equation*} \mathbf{A}_{k}=[A_1,\ldots,A_k]\label{eq:ak_cf} \end{equation*} comprising the first $k$ r.v.'s defined by~\eqref{eq:aj_rvs}, i.e. the r.v.'s modelling the $k$ leading CF coefficients of $Y$. A realisation of $\mathbf{A}_k$ is $\mathbf{a}_k=[a_1,\ldots,a_k]\in\mathbb{N}^k$. Also, $\mathbf{e}_k$ denotes a unit $k$-vector with a one at the $k$-th position and zeros everywhere else, i.e. $\mathbf{e}_k=[0,\ldots,0,1]$. A~vector symbol placed within square brackets denotes a finite CF; for example, $[\mathbf{a}_k]$ denotes $[a_1;a_2,\ldots,a_k]$. Observe that we can write $[\mathbf{a}_k]^{-1}=[0;\mathbf{a}_k]=[0;a_1,a_2,\ldots,a_k]$. Finally, the subvector of consecutive entries of $\mathbf{a}_k$ between its $m$-th entry and its last entry is denoted by $\mathbf{a}_k^{m:k}=[a_m,\ldots,a_k]$. When $m>1$ the amount $[\mathbf{a}_k^{m:k}]$ is called a remainder of $[\mathbf{a}_k]$~\cite{khinchin61:continued}. As a preliminary step, we will next prove a lemma which we will then use as a stepping stone in the derivation of the joint pmf of~$\mathbf{A}_k$ in Theorem~\ref{thm:jpmfcf}. \begin{lemma}\label{thm:lemma} The following two sets of inequalities hold for the $(j-1)$-th order convergent $[\mathbf{a}_j]=[a_1;a_2,\dots,a_j]$ of the infinite simple continued fraction $[a_1;a_2,a_3,\dots]$ with $a_i\in\mathbb{N}$: \begin{equation} \label{eq:ineq1} (-1)^{j-1}[\mathbf{a}_j]< (-1)^{j-1}[\mathbf{a}_j+\mathbf{e}_j] \end{equation} and \begin{equation} \label{eq:ineq2} (-1)^{j-1}[\mathbf{a}_{j-1}+\mathbf{e}_{j-1}]\le (-1)^{j-1}[\mathbf{a}_j]<(-1)^{j-1}[\mathbf{a}_{j+1}+\mathbf{e}_{j+1}], \rule[-1em]{0pt}{0pt} \end{equation} where the lower bound in~\eqref{eq:ineq2} requires $j>1$. \end{lemma} \begin{proof} Consider the function $\varphi(\mathbf{a}_j)=[\mathbf{a}_j]$.
Taking $a_j$ momentarily to be a continuous variable, we can obtain the partial derivative of $\varphi(\mathbf{a}_j)$ with respect to $a_j$. For $j\ge 2$, applying the chain rule $j-1$ times yields \begin{equation}\label{eq:deriv} \frac{\partial\varphi(\mathbf{a}_j)}{\partial a_j}=(-1)^{j-1}\prod_{r=2}^j[\mathbf{a}_j^{r:j}]^{-2}, \end{equation} whereas when $j=1$ we have that $d\varphi(\mathbf{a}_1)/d a_1=1$. As the product indexed by $r$ is positive, the sign of~\eqref{eq:deriv} only depends on~$(-1)^{j-1}$. Consequently, if $j$ is odd then $\varphi(\mathbf{a}_j)$ is strictly increasing on $a_j$, and if~$j$ is even then $\varphi(\mathbf{a}_j)$ is strictly decreasing on $a_j$. Thus, when $j$ is odd $[\mathbf{a}_j]<[\mathbf{a}_j+\mathbf{e}_j]$ and when $j$ is even $[\mathbf{a}_{j}]>[\mathbf{a}_j+\mathbf{e}_j]$, which proves inequality~\eqref{eq:ineq1}. Let us now prove the two inequalities in~\eqref{eq:ineq2} assuming first that~$j$ is odd. Considering again~\eqref{eq:deriv}, the upper bound can be obtained by seeing that $[\mathbf{a}_j]<[\mathbf{a}_{j-1},a_j+\epsilon]$ for $\epsilon>0$, and then choosing $\epsilon=1/(a_{j+1}+1)$. The lower bound, which requires $j>1$, is due to $[\mathbf{a}_j]\ge [\mathbf{a}_{j-1},1]=[\mathbf{a}_{j-1}+\mathbf{e}_{j-1}]$. To conclude the proof, when $j$ is even the two inequalities we have just discussed are reversed due to the change of sign in~\eqref{eq:deriv}. \end{proof} \begin{remark} Lemma~\ref{thm:lemma} is closely related to the fact that the $j$-th order convergent of a continued fraction is smaller or larger than the continued fraction it approximates depending on the parity of $j$~\cite[Theorem 4]{khinchin61:continued}. \end{remark} \begin{theorem}[General distribution of the $k$ leading continued fraction coefficients]\label{thm:jpmfcf} For any continuous r.v. $Y$ represented by the simple continued fraction $Y=[A_0;A_1,A_2,\dots]$ it holds that \begin{equation}\label{eq:cf_joint_pmf} \Pr(\mathbf{A}_{k}=\mathbf{a}_k)=(-1)^{k}\big(F_{\{Y\}}([0;\mathbf{a}_{k}+\mathbf{e}_k])-F_{\{Y\}}([0;\mathbf{a}_k])\big), \end{equation} where $\mathbf{a}_k\in\mathbb{N}^k$. \end{theorem} \begin{proof} For all $j\ge 2$, if $a_m\le Y_m< a_m+1$ for all $m=1,\dots,j-1$ then we have that $Y_1=[\mathbf{a}_{j-1} , Y_j]$. In these conditions it holds that $\{a_j\le Y_j < a_j+1\}=\{(-1)^{j-1}[\mathbf{a}_{j}]\le (-1)^{j-1} Y_1 <(-1)^{j-1}[\mathbf{a}_{j}+\mathbf{e}_j]\}$, where the reason for the alternating signs is~\eqref{eq:ineq1}. Therefore \begin{align}\label{eq:cf_joint_pmf2} \Pr(\mathbf{A}_{k}=\mathbf{a}_k)&=\Pr(A_1=a_1,\dots,A_k=a_k)\nonumber\\ &=\Pr(\cap_{j=1}^k \{a_j\le Y_j< a_j+1\})\nonumber\\ &=\Pr(\cap_{j=1}^k \{(-1)^{j-1}[\mathbf{a}_{j}]\le (-1)^{j-1} Y_1<(-1)^{j-1}[\mathbf{a}_{j}+\mathbf{e}_j]\}). 
\end{align}
From~\eqref{eq:ineq2} we have that the lower bounds on~$Y_1$ in~\eqref{eq:cf_joint_pmf2} are related as
\begin{equation}\label{eq:shrinklow}
[\mathbf{a}_1]<[\mathbf{a}_2+\mathbf{e}_2]\le [\mathbf{a}_3]<[\mathbf{a}_4+\mathbf{e}_4]\le [\mathbf{a}_5]\cdots
\end{equation}
whereas the upper bounds on $Y_1$ in the same expression are related~as
\begin{equation}\label{eq:shrinkhigh}
[\mathbf{a}_1+\mathbf{e}_1]\ge [\mathbf{a}_2]> [\mathbf{a}_3+\mathbf{e}_3]\ge[\mathbf{a}_4]> [\mathbf{a}_5+\mathbf{e}_5]\cdots
\end{equation}
Hence, except for possible equality constraints (which are anyway immaterial in probability computations with continuous random variables), the intersection of the $k$ events in~\eqref{eq:cf_joint_pmf2} equals the $k$-th event, and thus
\begin{align}\label{eq:cf_joint_pmf3}
\Pr(\mathbf{A}_{k}=\mathbf{a}_k)&=\Pr( (-1)^{k-1}[\mathbf{a}_{k}]< (-1)^{k-1} Y_1<(-1)^{k-1}[\mathbf{a}_{k}+\mathbf{e}_k])\nonumber\\
&=(-1)^{k-1} \big(F_{Y_1}([\mathbf{a}_{k}+\mathbf{e}_k])-F_{Y_1}([\mathbf{a}_k])\big).
\end{align}
Finally, using
\begin{equation*}\label{eq:FY1}
F_{Y_1}(y)=\Pr(Y_1\le y)=\Pr(\{Y\}\ge y^{-1})=1-F_{\{Y\}}(y^{-1})
\end{equation*}
in~\eqref{eq:cf_joint_pmf3} we get~\eqref{eq:cf_joint_pmf}.
\end{proof}
\begin{remark}\label{rem:th2}
Observe that if we choose $Y=\log_b X$, then both~\eqref{eq:cf_joint_pmf} and~\eqref{eq:pmfA} depend solely on the same variable $\{Y\}$, which is the reason why we have used the notation $Y$ rather than~$X$ in this section. With this choice of $Y$, the general expression~\eqref{eq:cf_joint_pmf} models the~$k$ leading CF coefficients of $\log_b X$ (with the exception of $A_0$), and becomes analogous to the general expression~\eqref{eq:pmfA} that models the~$k$ most significant $b$-ary digits of~$X$. We have left $A_0$ out of the joint distribution~\eqref{eq:cf_joint_pmf} because, unlike the remaining variables (i.e. $A_j$ for $j\ge 1$), it cannot be expressed as a function of~$Y_1$ alone. Moreover, it is not possible to model $A_0$ in one important practical scenario ---see Section~\ref{sec:lead-cf-coeff}.

We can also verify that the joint pmf~\eqref{eq:cf_joint_pmf} adds up to one over its support, namely~$\mathbb{N}^k$. Let us first add the joint pmf of $\mathbf{A}_k$ over $a_k\in\mathbb{N}$ assuming $k>1$. As this infinite sum is a telescoping series, in the computation of the partial sum $S_k^{(n)}=\sum_{a_k=1}^n\Pr(\mathbf{A}_{k}=\mathbf{a}_k)$ all but two terms cancel, and so
\begin{align*}
S_k^{(n)}&=(-1)^k\big(F_{\{Y\}}([0;\mathbf{a}_{k-1},n+1])-F_{\{Y\}}([0;\mathbf{a}_{k-1},1])\big).
\end{align*}
Now, as $\lim_{n\to\infty} [0;\mathbf{a}_{k-1},n+1]=[0;\mathbf{a}_{k-1}]$ and $[0;\mathbf{a}_{k-1},1]=[0;\mathbf{a}_{k-1}+\mathbf{e}_{k-1}]$, we have that
\begin{align*}
\lim_{n\to \infty} S_k^{(n)}&=(-1)^{k-1}\big(F_{\{Y\}}([0;\mathbf{a}_{k-1}+\mathbf{e}_{k-1}])-F_{\{Y\}}([0;\mathbf{a}_{k-1}])\big)\\
&=\Pr(\mathbf{A}_{k-1}=\mathbf{a}_{k-1}).
\end{align*}
The continuity of the cdf~$F_{\{Y\}}(y)$ allows writing $\lim_{n\to\infty}F_{\{Y\}}(g(n))=F_{\{Y\}}(\lim_{n\to\infty} g(n))$, which justifies the limit above. In view of this result, it only remains to verify that the pmf of $\mathbf{A}_1=A_1$ adds up to one. The partial sum up to $n$ is
\begin{equation*}
S_1^{(n)}=F_{\{Y\}}(1)- F_{\{Y\}}(1/(n+1)),
\end{equation*}
and therefore $\lim_{n\to\infty} S_1^{(n)}=1$ for the same reason that makes~\eqref{eq:pmfA_verification} equal to one.
Incidentally, observe that it would have been rather more difficult to verify the fact that~\eqref{eq:cf_joint_pmf} adds up to one by summing out the random variables in~$\mathbf{A}_k$ in an order different from the decreasing order $A_k,A_{k-1},\ldots,A_1$ that we have used above.
\end{remark}

\subsection{Distribution of the $j$-th CF Coefficient}
\label{sec:distribution-j-th}
Just as in Section~\ref{sec:prob-distr-k}, we can marginalise the joint pmf of $\mathbf{A}_j$ to obtain the distribution of the $j$-th CF coefficient $A_j$ of $Y$. Although we already know that $A_1=\mathbf{A}_1$, the main obstacle to explicitly obtaining the distribution of $A_j$ for $j>1$ is that in this case marginalisation involves $j-1$ infinite series, rather than a single finite sum as in~\eqref{eq:pmfAj}. In general, it is difficult to carry out the required summations in closed form. Moreover, the order of evaluation of these series may influence the feasibility of the computation, which is connected to the comment in the very last sentence of the previous paragraph.

However, under the sole assumption that~$\{Y\}$ is a continuous r.v. with support $[0,1)$, the Gauss-Kuz'min theorem furnishes the general asymptotic distribution of~$A_j$~\cite[Theorem 34]{khinchin61:continued}:
\begin{equation}
\label{eq:gauss-kuzmin}
\lim_{j\to\infty} \Pr(A_j=a)= \log_2\Big(1+\frac{1}{a(a+2)}\Big).
\end{equation}
\begin{remark}
Observe that Theorem~\ref{thm:asympt-aj}, which gives the general asymptotic behaviour of $A_{[j]}$ (the $j$-th most significant $b$-ary digit), is the near-exact counterpart of the Gauss-Kuz'min theorem~\eqref{eq:gauss-kuzmin}, which gives the general asymptotic behaviour of $A_j$ (the $j$-th continued fraction coefficient). The only essential difference is the requirement that the support of $\{Y\}$ be precisely $[0,1)$ in~\eqref{eq:gauss-kuzmin}~\cite[Theorem 33]{khinchin61:continued}, whereas this condition is not required in~\eqref{eq:asympt-aj} ---i.e. the support of $\{Y\}$ may be a subset of $[0,1)$ in Theorem~\ref{thm:asympt-aj}.
\end{remark}

\section{Particular Cases}
\label{sec:particular-cases}
In this section we will particularise the general expressions in Sections~\ref{sec:gen-prob-msd} and~\ref{sec:gen-prob-cf-coeff} for two especially relevant distributions of $X$. As is clear from~\eqref{eq:pmfA} and~\eqref{eq:cf_joint_pmf}, we just need the cdf $F_{\{Y\}}(y)$ of the r.v. $\{Y\}=\{\log_b X\}$ in order to achieve our goal.

\subsection{Benford Variables}
\label{sec:benford}
We consider in this section a r.v.~$X$ for which $\{Y\}\sim U(0,1)$. We call such a r.v. a \textit{Benford variable}, although we must note that some authors call it a \textit{strong} Benford variable instead. At any rate, this is the archetypal case in which a model of the $k$ most significant $b$-ary digits has been widely used and discussed ---i.e. Benford's law~\cite{benford1938}. The cdf of~$\{Y\}$ for a Benford variable $X$ is simply
\begin{equation}
F_{\{Y\}}(y)=y\label{eq:cdfYbenford}
\end{equation}
for $y\in[0,1)$.

\subsubsection{Most Significant $b$-ary Digits of $X$}
\label{sec:most-sign-digits}
For a Benford variable, applying~\eqref{eq:cdfYbenford} to~\eqref{eq:pmfA} yields
\begin{align}
\Pr(A_{(k)}=a)&=\log_b\left(1+\frac{1}{a}\right),\label{eq:benfordk}
\end{align}
which is the well-known Benford distribution for the $k$ most significant $b$-ary digits.
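As a quick numerical illustration of~\eqref{eq:benfordk} (the snippet below is ours and purely illustrative), the theoretical pmf can be compared with empirical frequencies; the generator $X=b^{U}$ with $U\sim U(0,1)$ is a Benford variable by construction, since $\{\log_b X\}=U$:
\begin{verbatim}
import math, random

b, k, trials = 10, 2, 200000
support = range(b**(k - 1), b**k)   # A_(k) = {b^(k-1), ..., b^k - 1}

# Theoretical pmf: Pr(A_(k) = a) = log_b(1 + 1/a)
pmf = {a: math.log(1 + 1 / a, b) for a in support}
assert abs(sum(pmf.values()) - 1.0) < 1e-9   # the pmf sums to one

# Empirical frequencies for X = b^U; here X lies in [1, b),
# so X is its own significand
counts = {}
for _ in range(trials):
    x = b ** random.random()
    a = math.floor(x * b**(k - 1))   # first k digits as an integer
    counts[a] = counts.get(a, 0) + 1

for a in (10, 50, 99):               # spot checks
    print(a, round(pmf[a], 5), counts.get(a, 0) / trials)
\end{verbatim}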
Distribution~\eqref{eq:benfordk} has almost always been expressed in previous works as the joint pmf of $A_{[1]},\dots,A_{[k]}$ rather than as the pmf of the $k$-th integer significand $A_{(k)}$ (see for example~\cite{berger11:_basic_th_benford}). As evinced in Theorems~\ref{thm:msd} and~\ref{thm:asympt-aj}, and as it will become clear in the remainder of this section, working with the $k$-th integer significand is not just an aesthetic notation choice ---although it does make for simpler expressions.

Let us obtain next the pmf of $A_{[j]}$ when $j\ge 2$ (i.e. the distribution of the $j$-th most significant $b$-ary digit). From~\eqref{eq:pmfAj} and~\eqref{eq:benfordk} we have that
\begin{align}
\Pr(A_{[j]}\!=\!a)&= \sum_{r\in\mathcal{A}_{(j-1)}}\log_b\!\left(1+\frac{1}{rb+a}\right)\label{eq:margbendford}\\
&=\log_b\!\Bigg(\prod_{r\in\mathcal{A}_{(j-1)}}\frac{(a+1)b^{-1}+r}{ab^{-1}+r}\Bigg)\label{eq:rising}\\
&=\log_b\left(\frac{\Gamma\left((a+1)b^{-1}+b^{j-1}\right)\Gamma\left(a b^{-1}+b^{j-2}\right)}{\Gamma\left((a+1)b^{-1}+b^{j-2}\right)\Gamma\left(a b^{-1}+b^{j-1}\right)}\right). \label{eq:benfordk_j}
\end{align}
The last equality is due to the fact that the argument of the logarithm in~\eqref{eq:rising} can be expressed as a fraction whose numerator and denominator are the rising factorial powers $((a+1)b^{-1}+b^{j-2})^{\overline{\vert\mathcal{A}_{(j-1)}\vert}}$ and $(a b^{-1}+b^{j-2})^{\overline{\vert\mathcal{A}_{(j-1)}\vert}}$, respectively.

We can also explicitly restate the general result in Theorem~\ref{thm:asympt-aj} for a Benford variable by relying on~\eqref{eq:benfordk_j}. Invoking the continuity of the logarithm and using $\lim_{z\to\infty}z^{w-v}\Gamma(v+z)/\Gamma(w+z)=1$~\cite{abramowitz72:_handbook} in~\eqref{eq:benfordk_j} twice ---with $z=b^{j-1}$ and $z=b^{j-2}$, respectively--- yields
\begin{equation*}\label{eq:limpaj}
\lim_{j\to\infty}\Pr(A_{[j]}=a)=\log_b\lim_{j\to\infty} \frac{b^{(j-1)b^{-1}}}{b^{(j-2)b^{-1}}} =b^{-1},
\end{equation*}
a fact that was originally pointed out by Benford~\cite{benford1938} through the marginalisation of~\eqref{eq:benfordk}.
\begin{remark}
The closed-form analytic expression~\eqref{eq:benfordk_j} for the pmf of $A_{[j]}$ deserves some comments, as it appears that it was never given in studies of Benford's distribution previous to~\cite{balado21:_benford}: only the equivalent of~\eqref{eq:margbendford} was previously published. This is another sensible reason for working with the pmf of the $j$-th integer significand variable $A_{(j)}$ instead of the joint pmf of $A_{[1]},\dots,A_{[j]}$. The former approach makes obtaining closed-form distributions for $A_{[j]}$ more feasible: if we use $A_{(j)}$ we just have to evaluate one single sum [i.e. \eqref{eq:pmfAj}], whereas if we use $A_{[1]},\dots,A_{[j]}$ we have to evaluate $j-1$ separate sums ---which obscures the result. This appears to be the reason why previous works never produced~\eqref{eq:benfordk_j}.
\end{remark}
\begin{figure}[t!]
\setlength\fwidth{.6\textwidth}
\setlength\fheight{.4\textwidth}
\pgfplotsset{every tick label/.append style={font=\scriptsize}}
\centering
\input{fig2/si.tex}
\caption{Illustration of the asymptotic sum-invariance property of a Benford variable for~$b=10$.}
\label{fig:sum_invariance}
\end{figure}

\textbf{Asymptotic sum-invariance property.} A further advantage of working with the $k$-th integer significand $A_{(k)}$ is that it allows for an uncomplicated statement and proof of the asymptotic sum-invariance property of a Benford variable~\cite{berger11:_basic_th_benford,nigrini12:_benford}. In the literature, this property has simply been called the ``sum-invariance property''. Here we prefer to stress the fact that its validity is only asymptotic when one considers a finite number $k$ of most significant digits of $X$ ---i.e. the $k$-th integer significand $A_{(k)}$---, an observation that originally motivated Nigrini's empirical definition of the sum-invariance property~\cite{nigrini92:_phd}.
\begin{theorem}[Asymptotic sum-invariance property]\label{thm:sum-inv}
If $X$ is a Benford variable, then it holds that
\begin{equation}
\label{eq:sum-invariance-prop}
\lim_{\stackrel{k\to\infty,}{a\in\mathcal{A}_{(k)}}} a\Pr(A_{(k)}=a)=(\ln b)^{-1}.
\end{equation}
\end{theorem}
\begin{proof}
We just need to see that $\lim_{k\to\infty, a\in\mathcal{A}_{(k)}} a\log_b\left(1+\frac{1}{a}\right)=\lim_{v\to\infty}v\log_b\left(1+\frac{1}{v}\right)$ due to~\eqref{eq:support}. The proof is completed by using either L'Hôpital's rule, or the continuity of the logarithmic function and the definition of Euler's number as $\lim_{v\to\infty}(1+1/v)^v$.
\end{proof}
\begin{remark}
Informally, Theorem~\ref{thm:sum-inv} tells us that, in a large set of outcomes from a Benford variable, the sum of the significand values over all those outcomes whose $k$-th integer significand equals a given $a\in\mathcal{A}_{(k)}$ is roughly invariant over $a$ when $k$ is large (i.e. the sum-invariance property as defined by Nigrini). Convergence to the limit~\eqref{eq:sum-invariance-prop} is exponentially fast in~$k$ ---the faster the larger~$b$ is. Figure~\ref{fig:sum_invariance} shows that, for $b=10$, the sum-invariance property already holds approximately when $k=3$. Theorem~\ref{thm:sum-inv} also implies that
\begin{equation*}
\lim_{k\to\infty} \frac{\mathop{\textrm{E}}(A_{(k)})}{\vert\mathcal{A}_{(k)}\vert}=(\ln b)^{-1}.\label{eq:lim_normalised_EAk}
\end{equation*}
The corresponding approximation $\mathop{\textrm{E}}(A_{(k)})\approx (b^k-b^{k-1}) (\ln b)^{-1}$ improves with $k$, but it never achieves strict equality for finite $k$ ---in fact, $\mathop{\textrm{E}}(A_{(k)})<(b^k-b^{k-1}) (\ln b)^{-1}$ for all $k$.
\end{remark}

\subsubsection{Leading CF Coefficients of\hspace{.13cm}$\log_b X$}
\label{sec:lead-cf-coeff}
For a Benford variable the application of~\eqref{eq:cdfYbenford} to~\eqref{eq:cf_joint_pmf} yields
\begin{equation}\label{eq:cf_joint_pmf_benford}
\Pr(\mathbf{A}_{k}=\mathbf{a}_k)=(-1)^{k}\big([0;\mathbf{a}_{k}+\mathbf{e}_k]-[0;\mathbf{a}_k]\big).
\end{equation}
According to our discussion in Remark~\ref{rem:th2}, this distribution of the $k$ leading CF coefficients of $\log_b X$ is the counterpart of Benford's distribution of the $k$ most significant $b$-ary digits of $X$. Therefore~\eqref{eq:cf_joint_pmf_benford} can be seen as Benford's law for continued fractions. In particular, any real dataset that complies with~\eqref{eq:benfordk} will also comply with~\eqref{eq:cf_joint_pmf_benford}.
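To make~\eqref{eq:cf_joint_pmf_benford} concrete, the following illustrative Python sketch (ours; not part of the formal development) evaluates it for $k=2$ and compares it with empirical joint frequencies obtained by running the CF recursion on pseudorandom realisations of $\{Y\}\sim U(0,1)$:
\begin{verbatim}
import math, random

def cf_value(coeffs):
    # Evaluate the finite simple CF [0; a_1, ..., a_k].
    v = 0.0
    for a in reversed(coeffs):
        v = 1.0 / (a + v)
    return v

def benford_cf_pmf(a):
    # Pr(A_k = a_k) = (-1)^k ([0; a_k + e_k] - [0; a_k])
    k = len(a)
    return (-1)**k * (cf_value(a[:-1] + [a[-1] + 1]) - cf_value(a))

trials, counts = 500000, {}
for _ in range(trials):
    y = random.random()            # {Y}, almost surely nonzero
    if y == 0.0:
        continue
    a1 = math.floor(1.0 / y)
    y1 = 1.0 / y - a1              # fractional part of Y_1
    if y1 == 0.0:
        continue
    a2 = math.floor(1.0 / y1)
    counts[(a1, a2)] = counts.get((a1, a2), 0) + 1

for pair in [(1, 1), (1, 2), (2, 1), (3, 3)]:
    print(pair, round(benford_cf_pmf(list(pair)), 5),
          counts.get(pair, 0) / trials)
\end{verbatim}
For instance, $\Pr(\mathbf{A}_2=[1,1])=[0;1,2]-[0;1,1]=2/3-1/2=1/6$.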
By transforming the subtraction of fractions into a single fraction,~\eqref{eq:cf_joint_pmf_benford} can also be written as the inverse of the product of $[\mathbf{a}_k]$, $[\mathbf{a}_k+\mathbf{e}_k]$ and all of their remainders, i.e.
\begin{equation}\label{eq:cf_identity}
\Pr(\mathbf{A}_{k}=\mathbf{a}_k) =\prod_{j=1}^k \big[0;\mathbf{a}_k^{j:k}\big]\big[0;\mathbf{a}_k^{j:k}+\mathbf{e}_{k-j+1}\big],
\end{equation}
which, apart from showing at a glance that~\eqref{eq:cf_joint_pmf_benford} cannot be negative, may be more suitable for log-likelihood computations.

The equivalent of expression~\eqref{eq:cf_identity} was previously given by Miller and Takloo-Bighash~\cite[Lemma 10.1.8]{miller06:_invitation} in their exploration of the distribution of CF coefficients ---called digits by these authors. However, unlike our result above, the version of~\eqref{eq:cf_identity} given by Miller and Takloo-Bighash is not explicit, as it is presented in terms of CF convergents. Therefore, \eqref{eq:cf_identity} and~\eqref{eq:cf_joint_pmf_benford} are clearly more useful when it comes to practical applications ---furthermore, we show in Section~\ref{sec:empirical-tests} the empirical accuracy of our expressions using both synthetic and real data, something that was not attempted by Miller and Takloo-Bighash. Lastly, Blachman also gave the following explicit approximation for uniform $\{Y\}$~\cite[equation (9)]{blachman84}:
\begin{equation}
\label{eq:blachman}
\Pr(\mathbf{A}_{k}=\mathbf{a}_k)\approx \left|\log_2\left(\frac{1+[0; \mathbf{a}_k]}{1+[0; \mathbf{a}_k+\mathbf{e}_k]}\right)\right|.
\end{equation}
The reader should be cautioned that this expression was given in~\cite{blachman84} with an equal sign, even though the author clearly derived it as an approximation. Using $\ln(1+z)\approx z$, which is accurate for $|z|\ll 1$, we can see that~\eqref{eq:blachman} is roughly off by a factor of $(\ln 2)^{-1}$ with respect to the exact expression~\eqref{eq:cf_joint_pmf_benford}.

Let us now look at the marginals, that is to say, the distributions of the individual $A_j$ coefficients. When $k=1$, expression~\eqref{eq:cf_joint_pmf_benford} gives the distribution of~$A_1=\mathbf{A}_1$ straightaway:
\begin{equation}\label{eq:a1_benford}
\Pr(A_1=a)=a^{-1}-(a+1)^{-1}.
\end{equation}
This pmf, also previously given by Miller and Takloo-Bighash~\cite[page 232]{miller06:_invitation}, can be rewritten as $\Pr(A_1=a)=a^{-1}(a+1)^{-1}$, which is the form that~\eqref{eq:cf_identity} takes in this particular case. Incidentally, observe that $\mathop{\textrm{E}}(A_1)=\infty$ because of the divergence of the harmonic series. It is also instructive to particularise Blachman's approximation~\eqref{eq:blachman} for $A_1$~\cite[equation (10)]{blachman84}: this yields the asymptotic Gauss-Kuz'min law~\eqref{eq:gauss-kuzmin} instead of the exact pmf~\eqref{eq:a1_benford}.

Recalling our discussion at the start of Section~\ref{sec:distribution-j-th}, the Benford case is probably unusual in that we can also obtain the distribution of $A_2$ in closed form by marginalising~\eqref{eq:cf_joint_pmf_benford} for $k=2$.
Summing $\Pr(\mathbf{A}_2=\mathbf{a}_2)=1/(a_1+(a_2+1)^{-1})-1/(a_1+a_2^{-1})$ over $a_1\in\mathbb{N}$, and using the digamma function defined as $\psi(1+z)=-\gamma+\sum_{n=1}^\infty z/(n(n+z))$~\cite{abramowitz72:_handbook} ---which is applicable because the range of validity $z\notin \mathbb{Z}^-$ of this definition always holds here--- one finds that
\begin{equation}\label{eq:a2_benford}
\Pr(A_2=a)=\psi\left(1+a^{-1}\right)-\psi\left(1+(1+a)^{-1}\right).
\end{equation}
It does not seem possible to obtain a closed-form exact expression for the distribution of a single CF coefficient~$A_j$ when~$j>2$ in the Benford case. However, it is possible to explicitly produce the Gauss-Kuz'min law~\eqref{eq:gauss-kuzmin} by pursuing an approximation of $\Pr(A_j=a_j)$ for all $j\ge 2$. To see this, consider first the sum
\begin{equation}\label{eq:sum}
\sum_{x=1}^\infty \left(\frac{1}{x+u}-\frac{1}{x+v}\right)=\psi(1+v)-\psi(1+u)
\end{equation}
for some $u,v>0$, which is just a generalisation of~\eqref{eq:a2_benford}, and its integral approximation
\begin{equation}\label{eq:int_approx}
\int_1^\infty \left(\frac{1}{x+u}-\frac{1}{x+v}\right) dx= \ln(1+v)-\ln(1+u),
\end{equation}
which attests to the intimate connection between the digamma function and the natural logarithm~\cite[see Exercise 8.2.20 and equation (8.51)]{arfken05:_mathematical}. Now, in the marginalisation that leads to $\Pr(A_j=a_j)$ the summation on $a_1$ is of the form~\eqref{eq:sum}, and so we may approximate it by the integral~\eqref{eq:int_approx}:
\begin{align}
\Pr(A_{j}=a_j)&=(-1)^j\sum_{a_{j-1}=1}^\infty\dots\sum_{a_{1}=1}^\infty\big([0;\mathbf{a}_{j}+\mathbf{e}_j]-[0;\mathbf{a}_j]\big) \label{eq:aj_marginalisation}\\
&\approx (-1)^{j}\sum_{a_{j-1}=1}^\infty\dots\sum_{a_{2}=1}^\infty\int_{1}^\infty\big([0;\mathbf{a}_{j}+\mathbf{e}_j]-[0;\mathbf{a}_j]\big)\, da_1 \nonumber\\
&= (-1)^{j}\sum_{a_{j-1}=1}^\infty\dots\sum_{a_2=1}^\infty(-1)\big(\ln(1+[0;\mathbf{a}^{2:j}_{j}+\mathbf{e}_{j-1}])-\ln(1+[0;\mathbf{a}^{2:j}_j])\big).\label{eq:ln}
\end{align}
Since $[0;\mathbf{a}^{i:j}_j]\ll 1$ for nearly all $\mathbf{a}^{i:j}_j=[a_i,\ldots,a_j]\in \mathbb{N}^{j-i+1}$, we may use $\ln(1+z)\approx z$ in~\eqref{eq:ln} to obtain
\begin{align}\label{eq:ln1p}
\Pr(A_{j}=a_j)&\approx (-1)^{j+1}\sum_{a_{j-1}=1}^\infty\dots\sum_{a_2=1}^\infty\big([0;\mathbf{a}^{2:j}_{j}+\mathbf{e}_{j-1}]-[0;\mathbf{a}^{2:j}_j]\big).
\end{align}
Remarkably,~\eqref{eq:ln1p} has the exact same form as~\eqref{eq:aj_marginalisation} ---but with one less infinite summation. Therefore we can keep sequentially applying the same approximation procedure described above to the summations on $a_2$, $a_3$,\ldots,$a_{j-1}$. In the final summation on $a_{j-1}$ we do not need the approximation in~\eqref{eq:ln1p} anymore, and thus in the last step we have that
\begin{align}
\Pr(A_{j}=a_j) &\approx (-1)^{2j-2}\int_{1}^\infty\big([0;\mathbf{a}^{j-1:j}_{j}+\mathbf{e}_{2}]-[0;\mathbf{a}^{j-1:j}_j]\big) \,da_{j-1}\nonumber\\
&=(-1)^{2j-2}\int_{1}^\infty \left(\cfrac{1}{a_{j-1}+\cfrac{1}{a_j+1}}-\cfrac{1}{a_{j-1}+\cfrac{1}{a_j}}\right)da_{j-1}\nonumber\\
&=(-1)^{2j-1}\left(\ln\left(1+\frac{1}{a_{j}+1}\right)-\ln\left(1+\frac{1}{a_{j}}\right)\right)\label{eq:gateway2a1}\\
&=\ln\frac{(a_j+1)^2}{a_j(a_j+2)}.\label{eq:unnormalised_gk}
\end{align}
Due to the successive approximations,~\eqref{eq:unnormalised_gk} is not necessarily a pmf, and so we need to normalise it.
The normalisation factor is
\begin{equation*}
\sum_{a_j=1}^\infty \ln\frac{(a_j+1)^2}{a_j(a_j+2)}=\ln\prod_{a_j=1}^\infty\frac{(a_j+1)^2}{a_j(a_j+2)}=\ln\frac{2 \cdot \cancel{2}}{1\cdot \cancel{3}} \cdot\frac{\cancel{3}\cdot \cancel{3}}{\cancel{2}\cdot \cancel{4}}\cdot\frac{\cancel{4}\cdot \cancel{4}}{\cancel{3}\cdot \cancel{5}}\cdots=\ln 2.
\end{equation*}
Normalising~\eqref{eq:unnormalised_gk} by this factor we finally obtain
\begin{equation}\label{eq:approx_aj_gk}
\Pr(A_{j}=a_j)\approx \log_2\frac{(a_j+1)^2}{a_j(a_j+2)}=\log_2\left(1+\frac{1}{a_j(a_j+2)}\right),
\end{equation}
for $j\ge 2$.
\begin{remark}
As in the verification of the general joint pmf~\eqref{eq:cf_joint_pmf} in Remark~\ref{rem:th2}, the right order of evaluation of the marginalisation sums is again key to producing approximation~\eqref{eq:approx_aj_gk}. Also, had we used $\ln (1+z)\approx z$ one last time in~\eqref{eq:gateway2a1} then we would have arrived at~\eqref{eq:a1_benford} instead of at the Gauss-Kuz'min law as the final approximation. This shows that the pmf of the first CF coefficient and the asymptotic law are already close, which was also mentioned by Miller and Takloo-Bighash~\cite[Exercise 10.1.1]{miller06:_invitation}. Since the convergence of the distribution of $A_j$ to the asymptotic distribution is exponentially fast in $j$, it is unsurprising that the pmf~\eqref{eq:a2_benford} of the second CF coefficient turns out to be even closer to~\eqref{eq:gauss-kuzmin}, as suggested by~\eqref{eq:approx_aj_gk} ---see the empirical validation in Section~\ref{sec:empirical-tests}. Although beyond the goals of this paper, it should be possible to refine the approximation procedure that we have used to arrive at~\eqref{eq:unnormalised_gk} in order to obtain explicitly the exponential rate of convergence to the Gauss-Kuz'min law, by exploiting the expansion of the digamma function in terms of the natural logarithm and an error term series~\cite[equation (8.51)]{arfken05:_mathematical}. Finally, observe that although~$A_0$ is not included in the joint pmf~\eqref{eq:cf_joint_pmf_benford}, this variable cannot be modelled anyway when the only information that we have about $X$ is its ``Benfordness''.
\end{remark}

\subsubsection{Benford Variables and the Asymptotics of the General Analysis}
\label{sec:benf-vari-spec}
To conclude Section~\ref{sec:benford} we examine the role played by the particular analysis for a Benford variable [i.e. \eqref{eq:benfordk} and~\eqref{eq:cf_joint_pmf_benford}] in the general analysis [i.e. Theorems~\ref{thm:msd} and~\ref{thm:jpmfcf}] when $k$ is large. Let us start by looking at the asymptotics of~\eqref{eq:pmfA}. For any $\epsilon>0$ there exists~$k_{\min}$ such that $\log_b(1+a^{-1})<\epsilon$ for all $k\ge k_{\min}$ and $a\in\mathcal{A}_{(k)}$. Explicitly, this minimum index is $k_{\min}=\lceil -\log_b(b^{\epsilon}-1)+1\rceil$. This inequality and the continuity of $F_{\{Y\}}(y)$ allow us to approximate~\eqref{eq:pmfA} for large~$k$ using the pdf of $\{Y\}$ as
\begin{equation}
\label{eq:ak_pmf_asympt}
\Pr(A_{(k)}=a)\approx f_{\{Y\}}(\log_b a -k+1)\,\log_b\left(1+\frac{1}{a}\right).
\end{equation}
We now turn our attention to the asymptotics of~\eqref{eq:cf_joint_pmf}. Similarly, for any $\epsilon>0$, \eqref{eq:shrinklow} and~\eqref{eq:shrinkhigh} guarantee that there exists~$k_{\min}$ such that $(-1)^k([0;\mathbf{a}_k+\mathbf{e}_k]-[0;\mathbf{a}_k])<\epsilon$ for all $k\ge k_{\min}$.
Invoking again the continuity of $F_{\{Y\}}(y)$, we can approximate~\eqref{eq:cf_joint_pmf} for large~$k$ using the pdf of $\{Y\}$ as
\begin{equation}
\label{eq:cf_joint_pmf_asympt}
\Pr(\mathbf{A}_{k}=\mathbf{a}_k)\approx f_{\{Y\}}([0;\mathbf{a}_k])\,(-1)^{k}\big([0;\mathbf{a}_{k}+\mathbf{e}_k]-[0;\mathbf{a}_k]\big).
\end{equation}
The key point that we wish to make here is that the Benford expressions~\eqref{eq:benfordk} and~\eqref{eq:cf_joint_pmf_benford} appear as factors in the general asymptotic approximations~\eqref{eq:ak_pmf_asympt} and~\eqref{eq:cf_joint_pmf_asympt}, respectively, which illustrates the special place that Benford variables occupy in the modelling of significant digits and leading continued fraction coefficients. Of course, for Benford~$X$ the pdf of~$\{Y\}$ is $f_{\{Y\}}(y)=1$ for $y\in[0,1)$, and so in this case approximations~\eqref{eq:ak_pmf_asympt} and~\eqref{eq:cf_joint_pmf_asympt} coincide with their exact counterparts.

\subsection{Pareto Variables}
\label{sec:pareto}
In this section we let~$X$ be a Pareto r.v. with minimum value $x_\text{m}$ and shape parameter~$s$, whose pdf is
\begin{equation*}\label{eq:pareto}
f_X(x)= s\, x_\text{m}^s\,x^{-(s+1)},\quad 0<x_\text{m}\le x,\; s>0.
\end{equation*}
The main motivation for considering the Pareto distribution is its pervasiveness in natural phenomena, which is reflected in the fact that Pareto variables are able to model a wealth of scale-invariant datasets. According to Nair et al.~\cite{nair22:_fundam_heavy_tails}, heavy-tailed distributions are just as prominent as the Gaussian distribution, if not more so. This is a consequence of the Central Limit Theorem (CLT) \textit{not} yielding Gaussian distributions ---but heavy-tailed ones--- in common scenarios where the variance of the random variables being added is infinite (or does not exist). Furthermore, heavy-tailed distributions appear when the CLT is applied to the logarithm of variables emerging from multiplicative processes. In this context, the relevance of the Pareto distribution owes to the fact that the tails of many heavy-tailed distributions follow the Pareto law. Additionally, the Pareto distribution is the only one that fulfils exactly the relaxed scale-invariance criterion
\begin{equation}\label{sec:scale_invariance_2}
f_X(x)= \alpha^{s+1} f_X(\alpha\, x)
\end{equation}
for any scaling factor $\alpha>0$, where $s>0$.

Let us first obtain the cdf of $\{Y\}$ in this case. The cdf of a Pareto r.v. $X$ is $F_X(x)=1-x_\text{m}^s x^{-s}$ for $x\ge x_\text{m}$, and thus the cdf of $Y=\log_b X$ is $F_Y(y)=F_X(b^y)=1-x_\text{m}^sb^{-s y}$ for $y\ge \log_b x_\text{m}$. Letting
\begin{equation*}
\rho=\{\log_b x_\text{m}\} \label{eq:frac_logbxm}
\end{equation*}
and using~\eqref{eq:cdffrcy}, we have that the cdf of $\{Y\}$ for a Pareto r.v. $X$ is
\begin{equation}
\label{eq:cdf_frac_logbx}
F_{\{Y\}}(y)= b^{s(\rho-1)}\, \frac{1-b^{-sy}}{1-b^{-s\phantom{y}}}+u\big(y-\rho\big)\left(1-b^{-s(y-\rho)}\right)
\end{equation}
for $y\in[0,1)$, where $u(\cdot)$ is the unit-step function.
\begin{remark}\label{rem:pareto_benford}
By application of L'Hôpital's rule, it can be verified that~\eqref{eq:cdf_frac_logbx} tends to~\eqref{eq:cdfYbenford} as $s\!\downarrow\! 0$, and so a Pareto variable becomes asymptotically Benford as its shape parameter $s$ vanishes ---for any value of $\rho$.
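A short numerical sketch (ours and merely illustrative) of~\eqref{eq:cdf_frac_logbx} makes this limit visible:
\begin{verbatim}
import math

def pareto_frac_log_cdf(y, s, rho, b=10):
    # cdf of {Y} = {log_b X} for Pareto X (eq. cdf_frac_logbx)
    step = (1 - b**(-s * (y - rho))) if y >= rho else 0.0
    return b**(s * (rho - 1)) * (1 - b**(-s * y)) / (1 - b**(-s)) + step

# As s decreases, the cdf approaches the Benford cdf F(y) = y
for s in (2.0, 0.5, 0.01):
    print(s, [round(pareto_frac_log_cdf(y, s, 0.3), 4)
              for y in (0.25, 0.5, 0.75)])
\end{verbatim}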
Because~\eqref{eq:pmfA} and~\eqref{eq:cf_joint_pmf} only depend on $\{Y\}$, the distributions that we will produce in this section generalise their counterparts in the previous section [i.e. \eqref{eq:benfordk},~\eqref{eq:benfordk_j} and~\eqref{eq:cf_joint_pmf_benford} are recovered from~\eqref{eq:paretok_general}, \eqref{eq::aj_pareto_general} and~\eqref{eq:cf_joint_pmf_pareto}, respectively, in the limit $s\!\downarrow\! 0$]. The fact that Benford variables can appear as a limiting case of Pareto variables is a likely reason for the sporadic emergence of Benford's distribution~\eqref{eq:benfordk} in scale-invariant scenarios. Finally, observe that, asymptotically as~$s\!\downarrow\! 0$, the relaxed scale-invariance property~\eqref{sec:scale_invariance_2} becomes \textit{strict}, i.e. $f_X(x)=\alpha\, f_X(\alpha\, x)$. Strict scale invariance is in turn a property that drives the appearance of Benford's distribution~\cite{balado21:_benford}.

An interesting line of research beyond the scope of this paper would entail pursuing analytical insights about the probability distribution of the $s$ parameter itself in scale-invariant scenarios. If this distribution could be found, perhaps under constraints yet to be specified, it would determine the frequency of emergence of Benford variables in those scenarios. In any case, it can be empirically verified that scale-invariant datasets are far more often Paretian than merely Benfordian (see some examples in Figure~\ref{fig:msd_real_datasets}). Thus, the expressions that we will give in this section may have wider practical applicability than the ones in Section~\ref{sec:benford} in the context of scale-invariant datasets ---with the caveat that two parameters ($s$ and $\rho$ or $x_\text{m}$) must be estimated when using the Pareto distribution results.
\end{remark}

\subsubsection{Most Significant $b$-ary Digits of $X$}
\label{sec:most-sign-digits-1}
Combining~\eqref{eq:pmfA} and~\eqref{eq:cdf_frac_logbx}, and letting
\begin{equation*}
\xi=\rho+k-1\label{eq:xi}
\end{equation*}
yields the Paretian generalisation of~\eqref{eq:benfordk}:
\begin{align}
\Pr(A_{(k)}=a)&= \frac{b^{s(\xi-1)}}{1-b^{-s}}\,\big(a^{-s}-(a+1)^{-s}\big)\nonumber\\
&+u\big(a+1-b^{\xi}\big)\big(1-b^{s\,\xi}(a+1)^{-s}\big) -u\big(a-b^{\xi}\big)\big(1-b^{s\,\xi}\,a^{-s}\big). \label{eq:paretok_general}
\end{align}
Next, let us obtain the distribution of $A_{[j]}$ for $j\ge 2$. For this purpose we make two definitions: $\eta_v=\lceil b^{\xi-1}-v b^{-1}\rceil$ and
\begin{equation*}
\label{eq:taus}
\tau_s(v)=\left\{
\begin{array}{l}
-\psi(v),\quad s=1\\
\zeta(s,v),\quad s\neq 1
\end{array}\right.
\end{equation*}
where $\psi(\cdot)$ is again the digamma function and $\zeta(s,v)=\sum_{n=0}^\infty (n+v)^{-s}$ is Hurwitz's zeta function~\cite{apostol76:_intro_analytic_nt}. Now, combining~\eqref{eq:pmfAj} and~\eqref{eq:paretok_general} and using the two previous definitions, it is tedious but straightforward to show that the Paretian generalisation of~\eqref{eq:benfordk_j} is
\begin{align}
\label{eq::aj_pareto_general}
\Pr(A_{[j]}=a) &= \frac{b^{s(\xi-2)}}{1-b^{-s}}\left(\tau_s(ab^{-1}+b^{j-2})-\tau_s((a+1)b^{-1}+b^{j-2})\right)\nonumber\\
&-\frac{b^{s(\xi-1)}}{1-b^{-s}}\left(\tau_s(ab^{-1}+b^{j-1})-\tau_s((a+1)b^{-1}+b^{j-1})\right)\nonumber\\
&+b^{s(\xi-1)}\Big(\tau_s(ab^{-1}+\eta_a)-\tau_s((a+1)b^{-1}+ \eta_{a+1})\Big)\nonumber\\
&+\eta_a-\eta_{a+1}.
\end{align}
\begin{remark}
As in Section~\ref{sec:most-sign-digits}, we have been able to obtain a closed-form expression for the pmf of $A_{[j]}$ thanks to the use of the $j$-th integer significand. Of particular interest is the distribution of $A_{(k)}$~\eqref{eq:paretok_general}, which, prior to our own work~\cite{balado21:_benford}, had been published only for the special case in which the fractional part of the base-$b$ logarithm of the minimum of the Pareto distribution is zero, i.e. $\rho=0$ and thus $\xi=k-1$. In this case~\eqref{eq:paretok_general} becomes
\begin{equation}
\Pr(A_{(k)}=a) =\frac{a^{-s}-(a+1)^{-s}}{b^{-s(k-1)}-b^{-sk}}.\label{eq:paretok_dtp}
\end{equation}
The case $k=1$ of~\eqref{eq:paretok_dtp} was first given by Pietronero et al.~\cite{pietronero01:_explaining} in the course of their investigation on the generalisation of Benford's distribution to scale-invariant phenomena. Barabesi and Pratelli~\cite{barabesi20:_generalized} then extended Pietronero et al.'s result and obtained~\eqref{eq:paretok_dtp} itself. As we will empirically verify in Section~\ref{sec:empirical-tests}, the fact that~\eqref{eq:paretok_general} can handle the general case $\rho> 0$ is not a minor detail, but a major factor in enabling that expression to model real data that cannot be modelled by~\eqref{eq:paretok_dtp} alone.

Interestingly,~\eqref{eq:paretok_dtp} was first identified as a new distribution only a few years ago by Kozubowski et al.~\cite{kozubowski15:_pareto}, who called it the \textit{discrete truncated Pareto} (DTP) distribution. Kozubowski et al. also noticed that the DTP distribution generalises Benford's distribution, but they landed on this fact solely because of the mathematical form of~\eqref{eq:paretok_dtp}. In fact, their practical motivation was far removed from the distribution of most significant digits: it was a biological problem involving the distribution of diet breadth in Lepidoptera. Another striking fact is that Kozubowski et al. arrived at the DTP distribution through the quantisation of a \textit{truncated} Pareto variable, instead of through the discretisation of the fractional part of the logarithm of a \textit{standard} Pareto variable ---i.e. the procedure that we have followed to get to~\eqref{eq:paretok_dtp}, which is the ultimate reason why the DTP distribution is connected with Benford's distribution. A Pareto variable must surely be the only choice for which two such remarkably different procedures yield the very same outcome. The reason for this serendipitous coincidence is that the complementary cdf of the variable to be quantised or discretised, respectively, turns out to be a negative exponential function in both cases. To end this remark, Kozubowski et al. rightly point out that the shape parameter~$s$ in~\eqref{eq:paretok_dtp} can be taken to be negative without compromising its validity as a pmf. However, observe that $s$ must be strictly positive for~\eqref{eq:paretok_dtp} to have physical meaning in terms of modelling a distribution of most significant digits.
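Incidentally, that~\eqref{eq:paretok_dtp} is correctly normalised for any $s\neq 0$ can be checked at a glance by telescoping over the support:
\begin{equation*}
\sum_{a=b^{k-1}}^{b^k-1}\big(a^{-s}-(a+1)^{-s}\big)=b^{-s(k-1)}-b^{-sk},
\end{equation*}
which is exactly the denominator of~\eqref{eq:paretok_dtp}.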
\end{remark} \subsubsection{Leading CF Coefficients of\hspace{.13cm}$\log_b X$} \label{sec:lead-cf-coeff-1} Applying~\eqref{eq:cdf_frac_logbx} to~\eqref{eq:cf_joint_pmf} yields the Paretian generalisation of~\eqref{eq:cf_joint_pmf_benford}: \begin{align}\label{eq:cf_joint_pmf_pareto} \Pr(\mathbf{A}_{k}=\mathbf{a}_k)&=(-1)^{k}\Big(b^{s(\rho-1)}\,\frac{b^{-s[0;\mathbf{a}_k]}-b^{-s[0;\mathbf{a}_k+\mathbf{e}_k]}}{1-b^{-s}}\nonumber\\ &+\,u\big([0;\mathbf{a}_k+\mathbf{e}_k]-\rho\big)\,\big(1- b^{-s([0;\mathbf{a}_k+\mathbf{e}_k]-\rho)}\big)\nonumber\\ &-\,u\big([0;\mathbf{a}_k]-\rho\big)\,\big(1- b^{-s([0;\mathbf{a}_k]-\rho)}\big)\Big). \end{align} The special case of~\eqref{eq:cf_joint_pmf_pareto} for $\rho=0$ yields the counterpart of the DTP distribution~\eqref{eq:paretok_dtp} in the CF setting: \begin{equation} \label{eq:dpt_CF} \Pr(\mathbf{A}_k=\mathbf{a}_k)=(-1)^k\left(\frac{b^{-s[0;\mathbf{a}_k]}-b^{-s[0;\mathbf{a}_k+\mathbf{e}_k]}}{1-b^{-s}}\right). \end{equation} Expression~\eqref{eq:cf_joint_pmf_pareto} is clearly not amenable to analytic marginalisation beyond $A_1=\mathbf{A}_1$. An interesting particular case of $A_1$ is given by specialising~\eqref{eq:dpt_CF} for $k=1$: \begin{equation}\label{eq:a1_pareto_zero_rho} \Pr(A_1=a)=\frac{b^{-\frac{s}{a+1}}-b^{-\frac{s}{a}}}{1-b^{-s}}. \end{equation} Recalling Remark~\ref{rem:pareto_benford},~\eqref{eq:a1_pareto_zero_rho} tends to~\eqref{eq:a1_benford} as $s\downarrow 0$. \begin{figure}[t!] \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering \input{fig2/general_pareto_vs_benford_approx_s1_00_rho0_30_mincha_291122_202751.tex}% \caption{Theoretical distribution of the two most significant decimal digits of Pareto~$X$~\eqref{eq:paretok_general} versus theoretical Benford-based asymptotic approximation~\eqref{eq:ak_pmf_asympt}. The lines join probability mass points for clarity.} \label{fig:bb_asympt_msd_pareto} \end{figure} \begin{figure}[t!] \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering \input{fig2/cf_general_pareto_vs_benford_approx_s1_00_rho0_30_mincha_271122_110309.tex} \caption{Theoretical joint pmf of the first two CF coefficients of $\log_{10}X$ for Pareto~$X$ with $s=1$ and $\rho=0.3$ [solid lines, \eqref{eq:cf_joint_pmf_pareto}] versus theoretical Benford-based asymptotic approximation [dashed lines, \eqref{eq:cf_joint_pmf_asympt}]. The lines join probability mass points corresponding to equal $a_2$ for clarity.} \label{fig:bb_asympt_lcf_pareto} \end{figure} \subsubsection{Comparison with Benford-based Asymptotic Approximations} \label{sec:benf-based-appr} Now that we have particularised~\eqref{eq:pmfA} and~\eqref{eq:cf_joint_pmf} for a non-Benfordian variable, we are in a position to examine how well the Paretian expressions~\eqref{eq:paretok_general} and~\eqref{eq:cf_joint_pmf_pareto} are approximated by the Benford-based asymptotic expressions discussed in Section~\ref{sec:benf-vari-spec}. In order to evaluate approximations~\eqref{eq:ak_pmf_asympt} and~\eqref{eq:cf_joint_pmf_asympt} we need the pdf of $\{Y\}$, $f_{\{Y\}}(y)$. This is obtained by differentiating~\eqref{eq:cdf_frac_logbx}, which yields $f_{\{Y\}}(y)= s (\ln b)\,b^{-s(y-\rho)}\left(b^{-s}/(1-b^{-s\phantom{y}})+u\big(y-\rho\big)\right)$ for $y\in[0,1)$. 
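Before turning to the figures, the following Python sketch (ours and merely illustrative; the parameter values are arbitrary) shows how the forthcoming comparison can be reproduced, evaluating the exact pmf~\eqref{eq:paretok_general} against the approximation~\eqref{eq:ak_pmf_asympt} driven by this pdf:
\begin{verbatim}
import math

b, s, rho, k = 10, 1.0, 0.3, 2
xi = rho + k - 1
u = lambda t: 1.0 if t >= 0 else 0.0           # unit step

def exact_pmf(a):                              # eq. paretok_general
    main = b**(s*(xi - 1)) / (1 - b**(-s)) * (a**(-s) - (a + 1)**(-s))
    return (main + u(a + 1 - b**xi) * (1 - b**(s*xi) * (a + 1)**(-s))
                 - u(a - b**xi) * (1 - b**(s*xi) * a**(-s)))

def frac_log_pdf(y):                           # pdf of {Y}, as above
    return (s * math.log(b) * b**(-s*(y - rho))
            * (b**(-s) / (1 - b**(-s)) + u(y - rho)))

def approx_pmf(a):                             # eq. ak_pmf_asympt
    return frac_log_pdf(math.log(a, b) - k + 1) * math.log(1 + 1/a, b)

support = range(b**(k - 1), b**k)
print(sum(exact_pmf(a) for a in support))      # sums to 1
for a in (10, 20, 50, 99):
    print(a, round(exact_pmf(a), 5), round(approx_pmf(a), 5))
\end{verbatim}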
We now compare in Figures~\ref{fig:bb_asympt_msd_pareto} and~\ref{fig:bb_asympt_lcf_pareto} the exact Paretian expressions and their Benford-based approximations for some arbitrary values of $s$ and $\rho$ and for $k=2$, which we have chosen because the visualisation of a joint distribution is not simple for $k>2$ in the CF case. Even though the Benford-based approximations were obtained assuming $k$ to be large, we can see that they are already close to the exact expressions. This is markedly true for the most significant digits model in Figure~\ref{fig:bb_asympt_msd_pareto}, where the approximation is already accurate for all $a\in\mathcal{A}_{(2)}$ ---in fact, it can be verified that the asymptotic approximation is also acceptable for $k=1$ in this case. Regarding the CF coefficients model, the approximation becomes accurate when $a_1+a_2\gtrapprox 10^2$.

\section{Empirical Tests}
\label{sec:empirical-tests}
\begin{figure}[t!]
\setlength\fwidth{.6\textwidth}
\setlength\fheight{.4\textwidth}
\pgfplotsset{every tick label/.append style={font=\scriptsize}}
\centering
\input{fig2/b-p.tex}
\caption{Distributions of the most significant decimal digit of~$X$. The theoretical pmf's (solid and dashed lines) are \eqref{eq:benfordk} and~\eqref{eq:paretok_general}, and the empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) correspond to $p=10^7$ pseudorandom outcomes in each case.}
\label{fig:pr_ak_1}
\end{figure}
\begin{figure}[t!]
\setlength\fwidth{.6\textwidth}
\setlength\fheight{.4\textwidth}
\pgfplotsset{every tick label/.append style={font=\scriptsize}}
\centering
\input{fig2/b-p-2.tex}
\caption{Distributions of the two most significant decimal digits of~$X$. The theoretical pmf's (solid and dashed lines) are \eqref{eq:benfordk} and~\eqref{eq:paretok_general}, and the empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) correspond to $p=10^7$ pseudorandom outcomes in each case.}
\label{fig:pr_ak_2}
\end{figure}
In this section the theoretical expressions given in Sections~\ref{sec:benford} and~\ref{sec:pareto} are verified. In all plots, solid or dashed lines represent theoretical probabilities (joining discrete probability mass points) whereas square symbols represent empirical frequencies obtained using a dataset~$\{x_1,x_2,\ldots,x_p\}$. For simplicity, we use the maximum likelihood (ML) estimators $\hat{x}_\text{m}=\min_i x_i$ and $\hat{s}=\big(\frac{1}{p}\sum_i \ln(x_i/\hat{x}_\text{m})\big)^{-1}$ to drive the Paretian expressions with real datasets, but be aware that better estimation approaches are possible (see for instance~\cite{nair22:_fundam_heavy_tails}).

We start with distributions of the most significant digits of $X$. Figures~\ref{fig:pr_ak_1} and~\ref{fig:pr_ak_2} present the distributions of the most significant decimal digit $A_{(1)}$ and of the two most significant decimal digits $A_{(2)}$, respectively. The Benford results, which are a limiting case of~\eqref{eq:paretok_general}, are well known. The Paretian cases with $\rho=0$ are covered by~\eqref{eq:paretok_dtp}, as previously shown by Barabesi and Pratelli~\cite{barabesi20:_generalized}. However, it is essential to use the general expression~\eqref{eq:paretok_general} when $\rho>0$. In this case the pmf's of Paretian significant digits are no longer monotonically decreasing (as Benford's pmf is), but rather feature a peak midway along the support of~$A_{(k)}$.
Therefore modelling real Paretian datasets requires being able to take a general value of $\rho$ into account, as there is no special reason why $\rho$ should be zero in practice ---observe some examples in Figure~\ref{fig:msd_real_datasets}. % Next, Figure~\ref{fig:Aj_digits_pareto_theoretical_montecarlo} shows distributions of the $j$-th most significant decimal digit $A_{[j]}$. Again, peaks can be seen in the distributions when $\rho>0$, but in general these are less pronounced than in the distribution of $A_{(k)}$ due to~\eqref{eq:asympt-aj}. An illustration of the asymptotic behaviour proved in Theorem~\ref{thm:asympt-aj} is the fact that~$A_{[4]}$ is nearly uniformly distributed for all three distributions of~$X$ considered in Figure~\ref{fig:Aj_digits_pareto_theoretical_montecarlo}. \begin{figure} \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering % \input{fig2/msd_real_datasets.tex} \caption{Distributions of the most significant decimal digit of~$X$ for real Benfordian and Paretian datasets. The theoretical pmf's (solid lines) are \eqref{eq:benfordk} and~\eqref{eq:paretok_general}, and the empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) are joined by dotted lines for clarity.} \label{fig:msd_real_datasets}\phantom{\cite{fassett11:_mercury}} \end{figure} \begin{figure}[t!] \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering % \input{fig2/paretoj_general.tex} \caption{Distributions of the $j$-th most significant decimal digit of~$X$. The theoretical pmf's (solid and dashed lines) are~\eqref{eq:benfordk_j} and~\eqref{eq::aj_pareto_general}, and the empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) correspond to $p=5\times 10^7$ pseudorandom outcomes in each case.} \label{fig:Aj_digits_pareto_theoretical_montecarlo} \end{figure} \begin{figure}[t!] \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering \input{fig2/cf_general_benford_pseudo_n1e+08_mincha_110522_092109.tex} \caption{Joint distribution of the first two CF coefficients of $\log_{10} X$ for Benford~$X$. The theoretical joint pmf (dashed lines) is~\eqref{eq:cf_joint_pmf_benford} and the empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) correspond to $p=10^8$ pseudorandom outcomes.} \label{fig:cf_joint_A2_benford_theoretical_montecarlo} \end{figure} \begin{figure}[t!] \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering \input{fig2/cf_general_pareto_pseudo_n1e+08_s1_50_rho0_48_mincha_110522_102323.tex} \caption{Joint distribution of the first two CF coefficients of $\log_{10} X$ for Pareto~$X$, $s=1.5$, $\rho=0.48$. The theoretical joint pmf (dashed lines) is~\eqref{eq:cf_joint_pmf_pareto} and the empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) correspond to $p=10^8$ pseudorandom outcomes.} \label{fig:cf_joint_A2_pareto_theoretical_montecarlo} \end{figure} \begin{figure}[t!] \setlength\fwidth{.6\textwidth} \setlength\fheight{.4\textwidth} \pgfplotsset{every tick label/.append style={font=\scriptsize}} \centering \input{fig2/cf_gk_a1_a2_n1e+08_mincha_110522_101623.tex} \caption{Distributions of the $j$-th CF coefficient of $\log_{10} X$. 
The theoretical pmf's (solid lines) are~\eqref{eq:gauss-kuzmin},~\eqref{eq:a1_benford},~\eqref{eq:a2_benford}, and~\eqref{eq:cf_joint_pmf_pareto} [with $k=1$]. The empirical frequencies (\raisebox{.8pt}{{\tiny $\square$}}) correspond to $p=10^8$ pseudorandom outcomes in each case.}
\label{fig:cf_gk_a1_a2}
\end{figure}
\begin{figure}[t!]
\setlength\fwidth{0.6\textwidth}
\setlength\fheight{0.4\textwidth}
\pgfplotsset{every tick label/.append style={font=\scriptsize}}
\centering
\input{fig2/cf_general_benford_US_total_income_per_ZIP_code,_2016_n159924_mincha_110522_092226.tex}
\caption{Joint distribution of the first two CF coefficients of $\log_{10} X$ for a real Benfordian dataset (US total income per ZIP code, National Bureau of Economic Research, 2016, $p=159,928$). The theoretical joint pmf (dashed lines) is~\eqref{eq:cf_joint_pmf_benford}, whereas the symbols (\raisebox{.8pt}{{\tiny $\square$}}) represent empirical frequencies.}
\label{fig:A2_cf_benford_income_dataset}
\end{figure}
\begin{figure}[t!]
\setlength\fwidth{0.6\textwidth}
\setlength\fheight{0.4\textwidth}
\pgfplotsset{every tick label/.append style={font=\scriptsize}}
\centering
\input{fig2/cf_general_pareto_Lunar_Craters_diameter_gt_1km_n1296785_s1_59_rho0_00_mincha_110522_094958.tex}
\caption{Joint distribution of the first two CF coefficients of $\log_{10} X$ for a real Paretian dataset (diameter of Lunar craters $\ge 1$ km, $p=1,296,796$~\cite{robbins18}). The theoretical joint pmf (dashed lines) is~\eqref{eq:cf_joint_pmf_pareto}, driven by $\hat{s}=1.59$ and $\hat{\rho}=0.00$, whereas the symbols (\raisebox{.8pt}{{\tiny $\square$}}) represent empirical frequencies.}
\label{fig:A2_cf_pareto_lunar_dataset}
\end{figure}
Next, we move on to distributions of the leading CF coefficients of $\log_{10} X$. Figures~\ref{fig:cf_joint_A2_benford_theoretical_montecarlo} and~\ref{fig:cf_joint_A2_pareto_theoretical_montecarlo} verify the validity of the joint distributions~\eqref{eq:cf_joint_pmf_benford} and~\eqref{eq:cf_joint_pmf_pareto} of the two leading CF coefficients $\mathbf{A}_2$ when $X$ is Benford and Pareto, respectively. In Figure~\ref{fig:cf_gk_a1_a2} we show the distributions of the marginals $A_1$ and $A_2$ for Benford $X$ [i.e. \eqref{eq:a1_benford} and~\eqref{eq:a2_benford}], and we compare them to the Gauss-Kuz'min law~\eqref{eq:gauss-kuzmin}. As we can see, the distribution of $A_j$ converges remarkably fast to the Gauss-Kuz'min law: the distribution of $A_1$ is already close to it, as remarked by Miller and Takloo-Bighash~\cite{miller06:_invitation}, but that of $A_2$ is even closer, as expected from~\eqref{eq:approx_aj_gk}. Figure~\ref{fig:cf_gk_a1_a2} also depicts the distribution of~$A_1$ for Paretian~$X$, using~\eqref{eq:cf_joint_pmf_pareto}. As we know, for Pareto $X$ the distribution of $A_j$ must also converge exponentially fast to the Gauss-Kuz'min law, although this is not graphically illustrated in Figure~\ref{fig:cf_gk_a1_a2} due to the lack of a theoretical expression for~$A_2$ in this case. Finally, we show in Figures~\ref{fig:A2_cf_benford_income_dataset} and~\ref{fig:A2_cf_pareto_lunar_dataset} how~\eqref{eq:cf_joint_pmf_benford} and~\eqref{eq:cf_joint_pmf_pareto} correctly model real Benfordian and Paretian datasets, respectively.
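For reference, the following compact Python sketch (ours; dataset loading and plotting are omitted, and all names are ad hoc) reproduces the kind of pipeline used above: it generates Pareto pseudorandom outcomes by inversion, computes the ML estimates of the text, and tabulates the empirical first-digit frequencies to be compared with~\eqref{eq:paretok_general}:
\begin{verbatim}
import math, random

b, s_true, xm_true, p = 10, 1.5, 3.0, 100000

# Pareto sampling by inversion of the cdf: X = x_m (1 - U)^(-1/s)
xs = [xm_true * (1.0 - random.random())**(-1.0 / s_true)
      for _ in range(p)]

# ML estimators from the text
xm_hat = min(xs)
s_hat = p / sum(math.log(x / xm_hat) for x in xs)
rho_hat = math.log(xm_hat, b) % 1.0   # rho = {log_b x_m}

# Empirical distribution of the most significant decimal digit
digits = [0] * b
for x in xs:
    significand = x / b**math.floor(math.log(x, b))   # in [1, b)
    digits[math.floor(significand)] += 1

print("s_hat =", round(s_hat, 3), " rho_hat =", round(rho_hat, 3))
for a in range(1, b):
    print(a, digits[a] / p)
\end{verbatim}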
\section{Conclusions}
\label{sec:conclusions}
We have provided a general theoretical analysis of the distributions of the most significant digits and the leading continued fraction coefficients of the outcomes of an arbitrary continuous random variable, which highlights the connections between the two subjects. Empirical verification for two relevant particularisations of our results (Benford and Pareto variables, respectively) further supports their accuracy in practice. Our analysis reveals novel facts ---especially, but not only, concerning the modelling of continued fraction coefficients--- and provides simpler proofs and new closed-form expressions for already known ones. In particular, we have shown that the use of what we propose to call $k$-th integer significands considerably simplifies the modelling of significant digits, allowing for uncomplicated finite and asymptotic analyses. We have also shown the parallelism between the general asymptotics of the probabilistic models for the $j$-th significant $b$-ary digit and for the $j$-th continued fraction coefficient ---i.e. between~\eqref{eq:asympt-aj} and the Gauss-Kuz'min law~\eqref{eq:gauss-kuzmin}---, and the role played by Benford variables in the asymptotics of the general analyses. Applications of our results may be found in the area of forensic examination, where Benford's law has been extensively used ---see~\cite{barabesi21:_tests} for a recent relevant example.
{ "timestamp": "2023-01-26T02:10:10", "yymm": "2301", "arxiv_id": "2301.10547", "language": "en", "url": "https://arxiv.org/abs/2301.10547", "abstract": "We provide general expressions for the joint distributions of the $k$ most significant $b$-ary digits and of the $k$ leading continued fraction coefficients of outcomes of an arbitrary continuous random variable. Our analysis highlights the connections between the two problems. In particular, we give the general convergence law of the distribution of the $j$-th significant digit, which is the counterpart of the general convergence law of the distribution of the $j$-th continued fraction coefficient (Gauss-Kuz'min law). We also particularise our general results for Benford and Pareto random variables. The former particularisation allows us to show the central role played by Benford variables in the asymptotics of the general expressions, among other results. The particularisation for Pareto variables -- which include Benford variables as a special case -- is specially relevant in the context of pervasive scale-invariant phenomena, where Pareto variables occur much more frequently than Benford variables. This suggests that the Pareto expressions that we produce have wider applicability than their Benford counterparts in modelling most significant digits and leading continued fraction coefficients of real data. Our results may find practical application in all areas where Benford's law has been previously used.", "subjects": "Probability (math.PR)", "title": "General Distributions of Number Representation Elements", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692277960746, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7079584928859467 }
https://arxiv.org/abs/1304.4839
Non-commuting graphs of nilpotent groups
Let $G$ be a non-abelian group and $Z(G)$ be the center of $G$. The non-commuting graph $\Gamma_G$ associated to $G$ is the graph whose vertex set is $G\setminus Z(G)$ and two distinct elements $x,y$ are adjacent if and only if $xy\neq yx$. We prove that if $G$ and $H$ are non-abelian nilpotent groups with irregular isomorphic non-commuting graphs, then $|G|=|H|$.
\section{\bf Introduction and results} Let $G$ be a non-abelian group and $Z(G)$ be its center. The non-commuting graph $\Gamma_G$ of $G$ is a graph whose vertex set is $G\setminus Z(G)$ and two vertices $x$ and $y$ are adjacent if and only if $xy\neq yx$. The non-commuting graph of a group was first considered by Paul Erd\H{o}s in 1975 \cite{N}. Many people have studied the non-commuting graph (e.g., \cite{AADS,AAM,D,SW,WS}). In \cite{AAM} the following conjecture was put forward: \begin{conj}[Conjecture 1.1 of \cite{AAM}]\label{con1} Let $G$ and $H$ be two finite non-abelian groups such that $\Gamma_G\cong \Gamma_H$. Then $|G|=|H|$. \end{conj} Conjecture \ref{con1} was refuted by an example due to Isaacs in \cite{M}, however it is valid whenever one of $G$ or $H$ is a non-abelian finite simple group \cite{D} or whenever one of $G$ or $H$ has prime power order \cite{AADS}. The counterexample given in \cite{M} is a pair $(G,H)$ of nilpotent non-abelian groups with regular non-commuting graph; recall that a graph is called regular if the degree of all vertices are the same, otherwise the graph is called irregular. It follows from a result of Ito \cite{Ito} that a finite group with a regular non-commuting graph is a direct product of a non-abelian $p$-group for some prime $p$ and an abelian group.\\ Here we study pairs $(G,H)$ of non-abelian finite groups which provide a counterexample to Conjecture \ref{con1}. It follows from the main result of \cite{AADS}, that if a pair $(G,H)$ provides a counterexample then none of $G$ and $H$ are of prime power order. Here we prove that if a pair of non-abelian finite nilpotent groups provides a counterexample for Conjecture \ref{con1} then their non-commuting graphs must be regular. \begin{thm}\label{thm1} Let $G$ and $H$ be two finite non-abelian nilpotent groups with irregular non-commuting graphs such that $\Gamma_G\cong\Gamma_H$. Then $|G|=|H|$. \end{thm} We conjecture that the word ``nilpotent" in Theorem \ref{thm1} is sufficient for one of the groups $G$ and $H$. \section{\bf Non-commuting graphs of nilpotent groups} A non-abelian group is called an $AC$-group if the centralizer of every non-central element is abelian. For a group $G$ and an element $g\in G$, $g^G$ denotes the conjugacy class of $g$ in $G$. \begin{lem} Let $G$ and $H$ be two finite non-abelian groups. If $\phi:\Gamma_G\rightarrow \Gamma_H$ is a graph isomorphism and $g$ is a non-central element of $G$, then the following hold: \begin{enumerate} \item $|G|-|Z(G)|=|H|-|Z(H)|$. \item $|G|-|C_G(g)|=|H|-|C_H(\phi(g))|$. \item $|C_G(g)|-|Z(C_G(g))|=|H|-|Z(C_H(\phi(g)))|$, where $C_G(g)$ is not abelian. \item If $C_G(g)$ is not abelian, then $\Gamma_{C_G(g)}\cong \Gamma_{C_H(\phi(g))}$. \item Suppose that $C_1=C_G(g_1)$ and $C_i=C_{C_{i-1}}(g_i)$ for $i\geq 2$, where $g_1\in G\setminus Z(G)$ and $g_i\in C_{i-1}\setminus Z(C_{i-1})$. Then there exists $k\in\mathbb{N}$ such that $C_k$ is an $AC$-group. \item $|G|=|H|$ if and only if $|C_G(g)|=|C_H(\phi(g))|$ if and only if $|Z(G)|=|Z(H)|$. \end{enumerate} \end{lem} \begin{proof} It is straightforward. To prove (5), note that if the centralizer $C_i$ is not an $AC$-group, then some proper centralizer in $C_i$ is not abelian guaranteeing the existence of an element $g_{i+1}$. On the other hand, $G$ is finite so the series $C_1>C_2>\cdots>C_i>\cdots$ will eventually terminate in an $AC$-group. \end{proof} \begin{lem}[see e.g. 
Theorem 2.1 of \cite{AADS}] \label{l1} Let $G$ be a finite non-abelian group and $H$ be a group such that $\Gamma_G\cong \Gamma_H$. Then the following hold: \begin{enumerate} \item $|C_H(h)|$ divides $(|g^G| - 1) (|Z(G)| - |Z(H)|)$ for any $g\in G\setminus Z(G)$ and $h=\phi(g)$, where $\phi$ is any graph isomorphism from $\Gamma_G$ to $\Gamma_H$. \item If $|Z(G)|\geq |Z(H)|$ and $G$ contains a non-central element $g$ such that ${|C_G(g)|}^2\geq |G|\cdot|Z(G)|$, then $|G|=|H|$. \end{enumerate} \end{lem} We need the following result concerning a number-theoretic conjecture due to Goormaghtigh. \begin{thm}[see e.g. Theorem 1.3 of \cite{HT}] \label{l3} Let $x,y,m,n$ be integers such that $y>x>1$ and $m,n>1$. Then, for every fixed pair $(x,y)$, the following equation has at most one solution $(m,n)$: $$\frac{y^n-1}{y-1}=\frac{x^m-1}{x-1}.$$ \end{thm} \begin{thm} \label{t1} Let $G$ be a nilpotent group with at least two distinct non-abelian Sylow subgroups. Suppose also that $H$ is any non-abelian group such that $|Z(G)|\geq|Z(H)|$ and $\Gamma_G\cong\Gamma_H$. Then $|G|=|H|$. \end{thm} \begin{proof} Suppose $G=P\times Q\times S$, where $P$ and $Q$ are non-abelian Sylow $p$- and $q$-subgroups of $G$ with $p\neq q$, and $S$ is a subgroup of $G$. If $x\in P\setminus Z(P)$ and $y\in Q\setminus Z(Q)$, then $|Z(G)|< |C_P(x)||C_Q(y)||S|$, since $Z(G)=Z(P)\times Z(Q)\times Z(S)$, $Z(P)\subsetneqq C_P(x)$ and $Z(Q)\subsetneqq C_Q(y)$. Therefore $$|G||Z(G)|< |C_P(x)||C_Q(y)||P||Q||S|^2=|C_G(x)||C_G(y)|.$$ It follows that $$|G||Z(G)|< \max\{|C_G(x)|^2,|C_G(y)|^2\}.$$ Now, Lemma \ref{l1}(2) completes the proof. \end{proof} \begin{cor} \label{cor1} Let $G$ and $H$ be two nilpotent groups each of which has at least two non-abelian Sylow subgroups. If $\Gamma_G\cong\Gamma_H$, then $|G|=|H|$. \end{cor} Both groups $G$ and $H$ in the counterexample to Conjecture \ref{con1} due to Isaacs in \cite{M} have the same shape; that is, they are direct products of a non-abelian group $P$ of prime power order and a non-trivial abelian group $A$ such that $\gcd(|P|,|A|)=1$, and all non-central conjugacy classes of $G$ (respectively of $H$) have the same size. The latter property was first studied by Ito \cite{Ito}, and we shall prove Theorem \ref{thm1} for all nilpotent groups other than those of this shape. \section{\bf Proof of Theorem \ref{thm1}} Now, we prove Theorem \ref{thm1} in four cases. In this section $G$ and $H$ are finite non-abelian nilpotent groups with irregular non-commuting graphs and $\phi:\Gamma_G\rightarrow \Gamma_H$ is a graph isomorphism. By Corollary \ref{cor1}, we may assume that $G$ has exactly one non-abelian Sylow subgroup. If $G$ is of prime power order, the main result of \cite{AADS} implies that $|G|=|H|$. Thus we may assume $G=P\times A$, where $P$ is a non-abelian Sylow $p$-subgroup of $G$ and $A$ is a non-trivial abelian subgroup whose order is prime to $p$. Also, set $|P|=p^n$ and $|Z(P)|=p^r$.\\ {\bf{Case (a): \; $H=P_1\times B$ for some non-abelian Sylow $p$-subgroup $P_1$ of $H$ and for some non-trivial abelian subgroup $B$ of $H$.}}\\ We use the following notation: $|P_1|=p^m$, $|Z(P_1)|=p^s$ and $\phi(g_i)=h_i$, where $g_1,\dots, g_k$ are non-central elements of $G$ chosen from conjugacy classes of $G$ with pairwise distinct sizes such that $|{g_i}^G|=p^{a_i}$ and $|{h_i}^H|=p^{b_i}$, with $a_1<\dots< a_k$ and $b_1<\dots< b_k$.
Notice that $k\geq 2$, since $\Gamma_G$ and $\Gamma_H$ are irregular.\\ Since $\Gamma_G\cong\Gamma_H$, \begin{equation} |A|p^r(p^{n-r}-1)=|B|p^s(p^{m-s}-1)\end{equation} \begin{equation} |A|p^{n-a_i}(p^{a_i}-1)=|B|p^{m-b_i}(p^{b_i}-1)\end{equation} for every $1\leq i\leq k$. Comparing the largest powers of $p$ dividing both sides, equation (1) implies that $r=s$ and equation (2) implies that $n-a_i=m-b_i$. Since $\Gamma_G$ is not regular, the graph isomorphism implies that \begin{equation} |A|(p^{n-a_1}-p^{n-a_2})=|B|(p^{m-b_1}-p^{m-b_2}).\end{equation} Therefore $|A|=|B|$. Now, equation (2) implies that $a_1=b_1$, hence $n=m$ and $|P|=|P_1|$.\\ {\bf{Case (b): \; $H=P_1\times X$, where $P_1$ is a non-abelian Sylow $p$-subgroup of $H$ and $X$ is an arbitrary group such that $\gcd(p,|X|)=1$.}}\\ Suppose $H$ is a minimal counterexample. Also suppose by way of contradiction that $X$ is a non-abelian group. Then $P_1$ and $X$ are $AC$-groups. Let $x\in X\setminus Z(X)$. Then $C_H(x)=P_1\times B$, where $B=C_X(x)$ is an abelian subgroup of $X$. Therefore Case (a) implies that $|C_H(x)|=|C_G(\phi^{-1}(x))|$. Since $\Gamma_G\cong\Gamma_H$, we have $|G|=|H|$. Now, $|G|=|H|$ implies that $|P|=|P_1|$. Set $|C_G(\phi^{-1}(x))|=p^{n-\alpha}|A|$ for some integer $1<\alpha<n$. By the graph isomorphism, we have $$(p^n-p^{n-\alpha})|A|=p^n(|X|-|C_X(x)|).$$ The largest power of $p$ dividing the right-hand side of this equation is at least $p^n$, while the largest power of $p$ dividing the left-hand side is exactly $p^{n-\alpha}$, since $\gcd(p,|A|)=1$. This is a contradiction. Hence $X$ is abelian and Case (a) completes the proof.\\ {\bf{ Case (c): \; $H=Q_1\times X$, where $Q_1$ is a Sylow $q_1$-subgroup of $H$ and $X$ is a non-abelian nilpotent group.}}\\ If $p=q_1$, then Case (b) completes the proof. We claim that the case $p\neq q_1$ cannot occur. Let $H$ be a minimal counterexample. Therefore $Q_1$ and $X$ are $AC$-groups. By the characterization of $AC$-groups \cite{S}, a nilpotent $AC$-group is a direct product of a non-abelian group of prime power order and an abelian group. Therefore $X=Q_2\times B$, where $Q_2$ is a non-abelian $q_2$-group for some prime $q_2$, $B$ is an abelian group and $\gcd(|Q_2|,|B|)=1$. Let $h_i\in Q_i\setminus Z(Q_i)$ for $i\in\{1,2\}$. Also, set $\phi^{-1}(h_i)=g_i$ for $i\in\{1,2\}$ and $|C_G(g_i)|=|A|p^{n-{a_i}}$, where $1<a_i<n$. If $q_2=p$, then again Case (b) implies that $Q_1\times B$ is an abelian group. This is a contradiction. Therefore $p\neq q_1,q_2$. We have $C_H(h_1)=C_{Q_1}(h_1)\times Q_2\times B$ and $C_H(h_2)=Q_1\times C_{Q_2}(h_2)\times B$, and $Z(C_H(h_1))=C_{Q_1}(h_1)\times Z(Q_2)\times B$ and $Z(C_H(h_2))=Z(Q_1)\times C_{Q_2}(h_2)\times B$ (note that $C_{Q_1}(h_1)$ and $C_{Q_2}(h_2)$ are abelian, since $Q_1$ and $Q_2$ are $AC$-groups). So $Z(H)\subsetneqq Z(C_H(h_i))$. Therefore the graph isomorphism implies that $Z(G)\subsetneqq Z(C_G(g_i))$. Set $|Z(C_G(g_i))|=|A|p^{d_i}$ for $i\in\{1,2\}$ and $|Z(G)|=|A|p^r$. It is clear that $d_i>r$. Now, $\Gamma_G\cong\Gamma_H$ and $\Gamma_{C_G(g_i)}\cong\Gamma_{C_H(h_i)}$ for $i\in\{1,2\}$ imply that \begin{equation} |C_H(h_2)|-|Z(C_H(h_2))|=(|Q_1|-|Z(Q_1)|)|C_{Q_2}(h_2)||B|=|A|(p^{n-a_2}-p^{d_2}) \end{equation} \begin{equation} |Z(C_H(h_1))|-|Z(H)|=(|C_{Q_1}(h_1)|-|Z(Q_1)|)|Z(Q_2)||B|=|A|(p^{d_1}-p^r) \end{equation} \begin{equation} |H|-|C_H(h_1)|=(|Q_1|-|C_{Q_1}(h_1)|)|Q_2||B|=|A|(p^n-p^{n-a_1}). \end{equation} Since $|B|(|Q_1|-|Z(Q_1)|)=|B|(|Q_1|-|C_{Q_1}(h_1)|)+|B|(|C_{Q_1}(h_1)|-|Z(Q_1)|)$, by equations (5) and (6) the largest power of $p$ dividing the right-hand side of the latter equation is $p^r$ (note that $r<n-a_1$), while by equation (4) the largest power of $p$ dividing the left-hand side is $p^{d_2}$.
This is a contradiction.\\ {\bf{Case (d): \; $H=Q\times B$, where $Q$ is a non-abelian Sylow $q$-subgroup for some prime $q\neq p$ and $B$ is a non-trivial abelian subgroup.}}\\ Suppose by way of contradiction that $|G|\neq |H|$. Since $\Gamma_G$ is not regular, there exist $g_1,g_2\in G\setminus Z(G)$ such that $|g_1^G|=p^{a_1}\neq p^{a_2}= |g_2^G|$. Set $|Q|=q^m$, $|Z(Q)|=q^s$, $\phi(g_i)=h_i$ for $i\in \{1,2\}$ and $|h_i^H|=q^{b_i}$. Since $\Gamma_G\cong \Gamma_H$, \begin{equation} |A|(p^n-p^r)=|B|(q^m-q^s) \end{equation} \begin{equation} |A|(p^{n-a_i}-p^r)=|B|(q^{m-b_i}-q^s). \end{equation} Let $u=\gcd(a_1,a_2,n-r)$ and $v=\gcd(b_1,b_2,m-s)$, and note that $\gcd(n-a_1-r,\,n-a_2-r,\,n-r)=\gcd(a_1,a_2,n-r)=u$, and similarly for $v$. By considering equations (7) and (8), taking greatest common divisors and using the identity $\gcd(p^a-1,p^b-1)=p^{\gcd(a,b)}-1$, we have \begin{equation} |A|p^r(p^{u}-1)=|B|q^s(q^v-1). \end{equation} Now, by dividing equation (7) by equation (9), we have \begin{equation}\frac{p^{n-r}-1}{p^u-1}=\frac{q^{m-s}-1}{q^v-1}\end{equation} and by dividing equation (8) by equation (9), we have \begin{equation}\frac{p^{n-a_i-r}-1}{p^u-1}=\frac{q^{m-b_i-s}-1}{q^v-1}.\end{equation} Note that it is not possible that $n-a_1-r=u=n-a_2-r$, since $a_1\neq a_2$; hence equations (10) and (11) provide at least two distinct solutions with exponents greater than $1$ of the equation of Theorem \ref{l3} for the fixed pair $(p^u,q^v)$. Now, Theorem \ref{l3} together with equations (10) and (11) yields a contradiction. $\hfill\Box$ \\ \begin{center}{\textbf{Acknowledgments}} \end{center} The authors are grateful to the referee for his/her invaluable comments. The first author was financially supported by the Center of Excellence for Mathematics, University of Isfahan. This research was in part supported by grants IPM (No. 91050219) and IPM (No. 91200045).
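\medskip \noindent {\bf A computational remark.} The identity $\deg_{\Gamma_G}(g)=|G|-|C_G(g)|$ for a non-central element $g$, which underlies parts (1) and (2) of the first lemma above, is easy to check by machine on small groups. The following Python sketch (ours, purely illustrative and not part of the arguments above) verifies it for $G=S_3$ and confirms that $\Gamma_{S_3}$ is irregular, with vertex degrees $3$ and $4$.
\begin{verbatim}
from itertools import permutations

# S_3 as permutations of {0,1,2}; composition: (p*q)(i) = p[q[i]]
G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))

Z = [g for g in G if all(mul(g, h) == mul(h, g) for h in G)]  # center
V = [g for g in G if g not in Z]     # vertex set of Gamma_G

for g in V:
    deg = sum(1 for h in V if mul(g, h) != mul(h, g))   # neighbours of g
    cG = sum(1 for h in G if mul(g, h) == mul(h, g))    # |C_G(g)|
    assert deg == len(G) - cG        # deg(g) = |G| - |C_G(g)|

print(sorted({len(G) - sum(1 for h in G if mul(g, h) == mul(h, g))
              for g in V}))          # [3, 4]: Gamma_{S_3} is irregular
\end{verbatim}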
{ "timestamp": "2013-04-18T02:02:35", "yymm": "1304", "arxiv_id": "1304.4839", "language": "en", "url": "https://arxiv.org/abs/1304.4839", "abstract": "Let $G$ be a non-abelian group and $Z(G)$ be the center of $G$. The non-commuting graph $\\Gamma_G$ associated to $G$ is the graph whose vertex set is $G\\setminus Z(G)$ and two distinct elements $x,y$ are adjacent if and only if $xy\\neq yx$. We prove that if $G$ and $H$ are non-abelian nilpotent groups with irregular isomorphic non-commuting graphs, then $|G|=|H|$.", "subjects": "Group Theory (math.GR)", "title": "Non-commuting graphs of nilpotent groups", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692345869641, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.7079584920035517 }
https://arxiv.org/abs/2101.04590
Complete minors in digraphs with given dichromatic number
The dichromatic number $\vec{\chi}(D)$ of a digraph $D$ is the smallest $k$ for which it admits a $k$-coloring where every color class induces an acyclic subgraph. Inspired by Hadwiger's conjecture for undirected graphs, several groups of authors have recently studied the containment of directed graph minors in digraphs with given dichromatic number. In this short note we improve several of the existing bounds and prove almost linear bounds by reducing the problem to a recent result of Postle on Hadwiger's conjecture.
\section{Introduction} For a given integer $t\geq 1$ let $m_\chi(t)$ be the least integer for which it is true that every graph with chromatic number at least $m_\chi(t)$ contains a $K_t$-minor. Hadwiger's conjecture~\cite{H43}, which is one of the most important open problems in graph theory, states that $m_\chi(t)=t$ for all $t \ge 1$. The conjecture remains unsolved for $t \ge 7$. For many years, the best general upper bound on $m_\chi(t)$ was due to Kostochka~\cite{K82,K84} and Thomason~\cite{T84}, who independently proved that every graph of average degree at least $ct\sqrt{\log t}$, for a suitable absolute constant $c$, contains a $K_t$-minor, implying that $m_\chi(t)=O(t\sqrt{\log t})$. Recently, however, there has been progress. First, Norine, Postle and Song~\cite{NPS19} showed that $m_\chi(t)=O\left(t(\log t)^\beta\right)$ (for any $\beta > \frac{1}{4}$), and then this was further improved by Postle~\cite{P20} to give $m_\chi(t)=O\left(t(\log\log t)^{6}\right)$. For more details about Hadwiger's conjecture the interested reader may consult the recent survey of Seymour~\cite{S16}. This famous conjecture has influenced many researchers and different va\-ri\-a\-tions of it have been studied in various frameworks, one of which is directed graphs. In this case there are multiple ways to define a minor. Here we consider three popular variants: \emph{strong minors}, \emph{butterfly minors} and \emph{topological minors}. The containment of these different minors in dense digraphs as well as their relation to the dichromatic number has already been studied in several previous works, see e.g. \cite{AGSW20, KS15, J96} for strong minors, \cite{G20, KK15,J01,MSW19} for butterfly minors and \cite{A19, GSSz20, G21, M85, M95, M96, S00} for topological minors. \smallskip Given digraphs $D$ and $H$, we say that $D$ is a \emph{strong $H$-minor model} if $V(D)$ can be partitioned into non-empty sets $\{X_v : v\in V(H)\}$ (called \emph{branch sets}) such that the digraph induced by $X_v$ is strongly-connected for all $v\in V(H)$; and for every arc $(u,v)$ in $H$ there is an arc in $D$ from $X_u$ to $X_v$. More generally, we also say that $D$ \emph{contains $H$ as a strong minor} and write $D \succcurlyeq_s H$ if a subdigraph of $D$ is a strong $H$-minor model. Pause to note that strong minor containment defines a transitive relation on digraphs, that is, if $D_1 \succcurlyeq_s D_2$ and $D_2 \succcurlyeq_s D_3$ for digraphs $D_1,D_2,D_3$, then $D_1\succcurlyeq_s D_3$. Given an undirected graph $G$ we denote by $\bivec{G}$ the directed graph with the same vertex set in which, for every edge $uv\in E(G)$, the vertices $u$ and $v$ are connected by an arc in each direction. We will be particularly interested in forcing strong $\bivec{K}_t$-minors, as those also yield a strong $H$-minor for every digraph $H$ on at most $t$ vertices. Analogously to the undirected case, one can ask how large the dichromatic number of a digraph should be to guarantee that it contains a strong $\bivec{K}_t$-minor. More precisely, we consider the function $sm_{\vec{\chi}}(t)$, which is the least integer for which it is true that every digraph $D$ with $\vec{\chi}(D) \ge sm_{\vec{\chi}}(t)$ satisfies $D\succcurlyeq_s \bivec{K}_t$. In a recent work, Axenovich, Gir\~{a}o, Snyder and Weber~\cite{AGSW20} showed that $sm_{\vec{\chi}}(t)$ exists for every $t \ge 1$ and proved the bounds \begin{equation*} t+1\leq sm_{\vec{\chi}}(t)\leq t4^t. \end{equation*} Here we improve their upper bound substantially by reducing the problem to the undirected setting.
\begin{thm}\label{thm:cor} For every $t\geq 1$ we have \begin{equation*} sm_{\vec{\chi}}(t)\le 2m_\chi(t)-1. \end{equation*} \end{thm} By combining \Cref{thm:cor} with the aforementioned result of Postle we get that $sm_{\vec{\chi}}(t)= O\left(t(\log\log t)^{6}\right)$. \smallskip Now let us turn to butterfly minors. Given a digraph $D$ and an arc $(u,v) \in A(D)$, this arc is called \emph{(butterfly-)contractible} if $v$ is the only out-neighbor of $u$ or if $u$ is the only in-neighbor of $v$ in $D$. Given such a contractible arc $e$, the digraph $D/e$ is obtained from $D$ by merging $u$ and $v$ into a common vertex and joining their in- and out-neighborhoods, ignoring parallel arcs. A \emph{butterfly minor} of a digraph $D$ is any digraph that can be obtained from $D$ by repeatedly deleting arcs, deleting vertices or contracting butterfly-contractible arcs. In \cite{MSW19}, inspired by Hadwiger's conjecture, Millani, Steiner and Wiederrecht raised the question of determining, for a given integer $k\geq 1$, the largest butterfly-minor-closed class $\mathcal{D}_k$ of $k$-colorable digraphs, and they gave a precise characterization of $\mathcal{D}_2$ as the class of \emph{non-even digraphs}. The question concerning a characterization of $\mathcal{D}_k$ for $k \ge 3$ is closely related to the question of forcing complete butterfly minors in digraphs. For an integer $t \ge 1$, let us define $bm_{\vec{\chi}}(t)$ as the least integer such that every digraph $D$ with $\vec{\chi}(D) \ge bm_{\vec{\chi}}(t)$ contains $\bivec{K}_t$ as a butterfly minor, and put \begin{equation*} b(x):=\max\left\{t \ge 1\ \mid \ bm_{\vec{\chi}}(t) \le x\right\} \end{equation*} for the integer inverse function of $bm_{\vec{\chi}}(\cdot)$. Let us further denote by $\mathcal{K}_t$ the class of all digraphs with no $\bivec{K}_t$ as a butterfly minor. Then, on the one hand, every digraph excluding $\bivec{K}_{b(k+1)}$ as a butterfly minor is colourable with $bm_{\vec{\chi}}(b(k+1))-1 \le k$ colours. On the other hand, every digraph in $\mathcal{D}_k$ must exclude $\bivec{K}_{k+1}$ as a butterfly minor, since the dichromatic number of $\bivec{K}_{k+1}$ exceeds $k$. Therefore, for every $k$ we have \begin{equation*} \mathcal{K}_{b(k+1)} \subseteq \mathcal{D}_k \subseteq \mathcal{K}_{k+1}. \end{equation*} To see how tight the above inclusions are one needs to obtain good lower bounds on $b(k+1)$, or equivalently good upper bounds on $bm_{\vec{\chi}}(t)$. In this direction, as an app\-li\-cation of \Cref{thm:cor} we prove the following corollary. \begin{corollary}\label{cor:but} For $t \ge 1$ we have $bm_{\vec{\chi}}(t) \le 2m_\chi(2t)-1= O(t(\log\log t)^{6})$. \end{corollary} For the sake of completeness we remark that the lower bound $t+1\le bm_{\vec{\chi}}(t)$ follows by taking $D=\bivec{G}$, where $G$ is the complete graph on $t+2$ vertices with the edges of a $5$-cycle removed. It is a simple exercise to verify that $\vec{\chi}(D)=t$ but $D$ contains no butterfly $\bivec{K}_t$-minor (the first claim can also be checked by the brute-force sketch at the end of this note). \smallskip Finally, we consider topological minors. Given a digraph $H$, a \emph{subdivision of $H$} is any digraph obtained by replacing every arc $(u,v)\in A(H)$ by a directed path from $u$ to $v$, such that subdivision-paths of different arcs are internally vertex-disjoint. Then $H$ is said to be a \emph{topological minor} of some digraph $D$ if $D$ contains a subdivision of $H$ as a subgraph. Aboulker, Cohen, Havet, Lochet, Moura and Thomassé~\cite{A19} initiated the study of the existence of various subdivisions in digraphs of large dichromatic number.
For a digraph $H$ they introduced the parameter $\text{mader}_{\vec{\chi}}(H)$, the \emph{dichromatic Mader number of $H$}, as the least integer such that any digraph $D$ with $\vec{\chi}(D)\geq \text{mader}_{\vec{\chi}}(H)$ contains a subdivision of $H$. In their main result they proved that if $H$ is a digraph with $n$ vertices and $m$ arcs, then \begin{equation*} n\leq \text{mader}_{\vec{\chi}}(H)\leq 4^m(n-1)+1. \end{equation*} Gishboliner, Steiner and Szabó~\cite{GSSz20} conjectured that $\text{mader}_{\vec{\chi}}(\bivec{K}_t)\le Ct^2$ for some absolute constant $C$; however, it seems surprisingly hard to find a polynomial upper bound even for quite simple digraphs $H$. An indication for this increased difficulty compared to the undirected case could be that for digraphs it is not even possible to force a $\bivec{K}_3$-subdivision by means of large minimum out- and in-degree (compare~\cite{M85}). In \cite{GSSz20} the authors still managed to identify a wide class of graphs, called octus graphs\footnote{We note that this class, in particular, includes orientations of cactus graphs (and hence orientations of cycles), as well as bioriented forests.}, for which the lower bound is tight. Their result means that every digraph $D$ with $\vec{\chi}(D)\geq n$ contains a subdivision of every octus graph on at most $n$ vertices. Here, along the same line of thinking, as a corollary of \Cref{thm:cor} we prove a similar result for another class of digraphs. By slightly abusing the terminology, we call a digraph $D$ \emph{subcubic} if $D$ is an orientation of a graph with maximum degree at most three such that the in- and out-degree of any vertex is at most two. \begin{corollary}\label{cor:top} For $n\ge 1$, if $D$ is a digraph with $\vec{\chi}(D) \ge 22n$ then it contains a subdivision of every subcubic digraph on at most $n$ vertices. \end{corollary} \paragraph{Notation.} For a digraph $D$ and a set $S\subseteq V(D)$ we denote by $D[S]$ the subdigraph spanned by the vertices in $S$. The set $S$ is called \emph{acyclic} if $D[S]$ is an acyclic digraph. We call $D$ \emph{strongly-connected} if for every ordered pair $u,v$ of vertices in $D$ there is a directed path in $D$ from $u$ to $v$. An in-/out-arborescence is a rooted directed tree where every arc is directed towards/away from the root. For the starting/ending point of an arc we will also use the names tail/head. A \emph{(proper) coloring} of an undirected graph $G$ with colors in a set $A$ is a map $f:V(G)\rightarrow A$ where neighbouring vertices are mapped to different colors, or equivalently $f^{-1}(a)$ is an independent set for every $a\in A$. If $|A|=k$ then $f$ is called a \emph{$k$-coloring}. Analogously, an \emph{(acyclic) $k$-coloring} of a digraph $D$ is a map $f:V(D)\rightarrow A$ with $|A|=k$ where $f^{-1}(a)$ is an acyclic set for every $a\in A$. The minimum $k$ for which a $k$-coloring exists is the \emph{chromatic} (resp. \emph{dichromatic}) \emph{number} of the undirected graph $G$ (resp. digraph $D$), which we shall denote by $\chi(G)$ (resp. $\vec{\chi}(D)$). \section{Proofs} \subsection{Strong minors} The proof of \Cref{thm:cor} will be based on the following result. \begin{thm}\label{thm:main} For every digraph $D$ there is an undirected graph $G$ such that \begin{enumerate}[(i)] \item $D$ is a strong $\bivec{G}$-minor model, and \item $\vec{\chi}(D)\leq 2\chi(G)$.
\end{enumerate} \end{thm} \begin{proof} To start with, let us first fix a partition $X_1,X_2,\dots, X_m$ of $V(D)$ such that for every $i\in \{1,2,\dots,m\}$ the set $X_i$ is an inclusion-wise maximal subset of $V(D)\setminus \left(X_1\cup \cdots \cup X_{i-1}\right)$ with $D[X_i]$ strongly connected and $\vec{\chi}(D[X_i]) \le 2$. Note that the $X_i$'s are well-defined since the one-vertex digraph is strongly connected and $2$-colorable. Now we define $G$ to be the undirected simple graph with vertex set $\{X_1,\dots,X_m\}$ and $X_iX_j\in E(G)$ if and only if there are arcs in both directions between $X_i$ and $X_j$ in $D$. Then, by definition, $D$ is a strong $\bivec{G}$-minor model, as one can simply take $X_1,X_2,\dots,X_m$ as the branch sets. Therefore, what remains to prove is property (ii). For this let us assume that $\chi(G)=k$ and fix a proper coloring $f_G:V(G)\rightarrow \{c_1,c_2,\dots,c_k\}$ of $G$. Now, for every $i$ take an arbitrary acyclic two-coloring of $D[X_i]$ (which exists by assumption) with colors $\{c_i',c_i''\}$. The rest of the proof is about showing that by putting these colorings together we obtain an acyclic coloring $f_D$ of $D$ with the $2k$ colors $\{c_{1}',c_{1}'',c_{2}',c_{2}'',\dots,c_{k}',c_{k}''\}$. Assume for contradiction that this is not the case, and there is a directed cycle $C$ in $D$ which is monochromatic. We may, without loss of generality, assume that $C$ is a shortest such cycle; in particular, it is an induced cycle. Let $i_0$ be the smallest index for which $C$ contains a vertex from $X_{i_0}$. Note that, in particular, $V(C)\subseteq V(D)\setminus \left(X_1\cup \cdots \cup X_{i_0-1}\right)$ and, as $f_D$ restricts to an acyclic coloring of $D[X_{i_0}]$, the cycle $C$ cannot be fully contained in $X_{i_0}$. Hence, $C$ contains a subsequence $u,w_1,\dots,w_\ell,v$ of consecutive vertices on $C$ with $(u,w_1),(w_1,w_2),\dots,(w_\ell,v) \in A(C)$, such that $u,v\in X_{i_0}$ (possibly $u=v$), $w_1,\ldots,w_\ell \in X_{i_0+1} \cup \cdots \cup X_m$, and $\ell>0$. Let $s \in \{1,\ldots,\ell\}$ be the smallest index such that $w_{s}$ has an out-neighbour in $X_{i_0}$, and denote this out-neighbor by $x \in X_{i_0}$. We claim that $w_{s}$ has no in-neighbor in $D$ that is contained in $X_{i_0}$. Suppose towards a contradiction that there exists $y \in X_{i_0}$ such that $(y,w_{s}) \in A(D)$. Let $j>{i_0}$ be such that $w_s\in X_j$. Then, because of the arcs $(y,w_{s}) , (w_s,x) \in A(D)$, we have $X_{i_0}X_j\in E(G)$ and hence $f_G(X_{i_0})\neq f_G(X_j)$. This in turn implies that $f_D(u)\neq f_D(w_s)$ and $f_D(v)\neq f_D(w_s)$, which contradicts the monochromaticity of $C$. Hence, we may assume that $w_s$ has no in-neighbor contained in $X_{i_0}$. In particular, this implies $s \ge 2$. Let us now consider the set \begin{equation*} X=X_{i_0}\cup\{w_1,\dots,w_{s}\}\subseteq V(D)\setminus \left(X_1\cup \cdots \cup X_{i_0-1}\right). \end{equation*} It is clearly strongly connected, as $X_{i_0}$ is so and $u,w_1,\ldots,w_{s},x$ induce a directed path (or cycle in case $u=x$) starting and ending in $X_{i_0}$. Moreover, any extension of an acyclic $\{1,2\}$-coloring of $D[X_{i_0}]$ to a $\{1,2\}$-coloring of $D[X]$ where $w_1,\ldots,w_{s-1}$ receive color $1$ and $w_{s}$ receives color $2$ is acyclic. Indeed, by the definition of $s$, there are no arcs starting in $\{w_1,\dots, w_{s-1}\}$ and ending in $X_{i_0}$, and by the inducedness of $C$ there are no arcs spanned between non-consecutive vertices inside $\{w_1,\dots, w_{s-1}\}$.
Adding the fact that $w_s$ has no in-neighbours in $X_{i_0}$, these imply that any directed cycle in $D[X]$ is either fully contained in $D[X_{i_0}]$, or contains both $w_s$ and at least one vertex in $\{w_1,\dots, w_{s-1}\}$. In any case, it is not monochromatic. However, the existence of the set $X$ then contradicts the maximality of $X_{i_0}$, which finishes the proof. \end{proof} Now we can easily deduce \Cref{thm:cor} from \Cref{thm:main}. \begin{proof}[Proof of \Cref{thm:cor}.] Let $D$ be a digraph with $\vec{\chi}(D) \ge 2m_\chi(t)-1$. By Theorem~\ref{thm:main} there exists an undirected graph $G$ such that $\vec{\chi}(D) \le 2\chi(G)$ and $D \succcurlyeq_s \bivec{G}$. This implies that $\chi(G) \ge m_\chi(t)$, and hence $G$ contains a $K_t$-minor. Taking the same branch sets in $\bivec{G}$ which give a $K_t$-minor in $G$ shows that $\bivec{G} \succcurlyeq_s \bivec{K}_t$, and by transitivity $D\succcurlyeq_s \bivec{K}_t$. Since $D$ was arbitrarily chosen such that $\vec{\chi}(D) \ge 2m_\chi(t)-1$, this proves that $sm_{\vec{\chi}}(t) \le 2m_\chi(t)-1$, as required. \end{proof} \subsection{Butterfly minors} \Cref{cor:but} follows directly from Theorem~\ref{thm:cor} and the following proposition. \begin{prop} Every strong $\bivec{K}_{2t}$-minor model contains $\bivec{K}_t$ as a butterfly minor. \end{prop} \begin{proof} Let $D$ be a strong $\bivec{K}_{2t}$-minor model and let $\{X_1^+, X_1^-,\ldots,X_t^+,X_t^-\}$ be a corres\-ponding partition of $V(D)$ into $2t$ branch sets. In particular, for every $i \in \{1,\ldots,t\}$ there exist $r_i^+ \in X_i^+$ and $r_i^- \in X_i^-$ such that $(r_i^-,r_i^+) \in A(D)$. Since $D[X_i^-]$ and $D[X_i^+]$ are strongly connected digraphs, there exist\footnote{Such trees can easily be obtained by considering a breadth-first in-search (resp. out-search) starting from $r_i^-$ (resp. $r_i^+$).} oriented spanning trees $T_i^- \subseteq D[X_i^-]$ and $T_i^+ \subseteq D[X_i^+]$ such that $T_i^-$ is an in-arbores\-cence rooted at $r_i^-$ and $T_i^+$ is an out-arborescence rooted at $r_i^+$. Let us consider the spanning subdigraph $D'$ of $D$ consisting of the arcs contained in % \begin{equation*} T:=\bigcup_{i=1}^{t}\Big({\{(r_i^-,r_i^+)\} \cup A(T_i^+) \cup A(T_i^-)}\Big), \end{equation*} % as well as all arcs of $D$ starting in $X_i^+$ and ending in $X_j^-$ for $i \neq j$. Then every arc of $D'$ contained in $T$ is either the unique arc in $D'$ emanating from its tail or the unique arc in $D'$ entering its head. It follows that all arcs in $T$ are butterfly-contractible. Note that the contraction of an arc does not affect the butterfly-contractibility of other arcs, hence the digraph $D'/T$, obtained from $D'$ by successively contracting all arcs in $T$, is a butterfly minor of $D$. The vertices of $D'/T$ can be labelled $v_1,\ldots,v_t$, where $v_i$ denotes the vertex corresponding to the contraction of the (weakly) connected component of $D'$ inside $X_i^+ \cup X_i^-$. As $D$ is a strong $\bivec{K}_{2t}$-minor model, by definition of $D'$ for every $(i,j) \in \{1,\ldots,t\}^2$ with $i \neq j$, there exists an arc in $D'$ starting in $X_i^+$ and ending in $X_j^-$. Therefore, $D'/T$ is a butterfly minor of $D$ isomorphic to $\bivec{K}_t$, concluding the proof. \end{proof} \subsection{Topological minors} Finally, we prove \Cref{cor:top}. \begin{proof}[Proof of \Cref{cor:top}.]
As a first step note that given $n \in \mathbb{N}$, every undirected graph $G$ with minimum degree at least $10.5n>n+6.291\cdot\frac{3}{2}n$ contains every $n$-vertex subcubic graph as a minor. This follows directly from a result of Reed and Wood~\cite{reedwood}, who proved that every graph with average degree at least $n+6.291m$ contains every graph with $n$ vertices and $m$ edges as a minor. Let now $D$ be any digraph with $\vec{\chi}(D) \ge 22n$, $F$ a subcubic digraph on $n \ge 2$ vertices and $H$ its underlying undirected subcubic graph. By Theorem~\ref{thm:main} there exists an undirected graph $G$ such that $D$ is a strong $\bivec{G}$-minor model and $\chi(G) \ge 11n$. In particular, $G$ contains a subgraph of minimum degree at least $11n-1>10.5n$ and hence, by our earlier remark, an $H$-minor. This implies that $\bivec{G}$ contains a strong $\bivec{H}$-minor and hence $D$ does so. However, as $F \subseteq \bivec{H}$, it also follows that $D$ contains a strong $F$-minor, i.e. a subdigraph $D'$ which is a strong $F$-minor model. Let $\{X_f : f \in V(F)\}$ be a branch set partition of $V(D')$ witnessing this. Recall that, by definition, for every arc $e=(u_1,u_2) \in A(F)$ there exist vertices $v(e,u_1) \in X_{u_1}$ and $v(e,u_2) \in X_{u_2}$ such that $\left(v(e,u_1),v(e,u_2)\right) \in A(D')\subseteq A(D)$. Let next $u \in V(F)$ be an arbitrary vertex with total degree $d=d(u)\in \{0,1,2,3\}$ and let us denote the arcs incident to $u$ by $e_1,\dots,e_d$. Furthermore, for $i=1,\ldots,d$ we put $v_i:=v(e_i,u)$. We claim that there exists a vertex $b(u) \in X_u$ and for every $i=1,\ldots,d$ a directed path $P_i^u$ in $D[X_u]$ such that \begin{itemize} \item $P_1^u,\dots,P_d^u$ are internally vertex-disjoint; \item if $u$ is the tail of $e_i$, then $P_i^u$ is a directed path from $b(u)$ to $v_i$; \item if $u$ is the head of $e_i$, then $P_i^u$ is a directed path from $v_i$ to $b(u)$. \end{itemize} This claim holds trivially if $d=0$, and if $d=1$ then we can simply put $b(u)=v_1$ and let $P_1^u$ be the trivial one-vertex path consisting of $v_1$. If $d=2$ then, without loss of generality, by the symmetry of reversing all arcs in $D$ and $F$, we may assume that $u$ is the head of $e_1$. We then can put $b(u):=v_1$, let $P_1^u$ be the trivial one-vertex path consisting of $v_1$, and take $P_2^u$ to be any directed path in $D[X_u]$ from $v_1$ to $v_2$, which exists by strong connectivity. Finally suppose $d=3$. Since $F$ is subcubic, $u$ either has in-degree one and out-degree two, or vice versa. As before, without loss of generality, by symmetry we may assume that the first case occurs, and that it is $e_1$ that enters $u$ while $e_2$ and $e_3$ emanate from it. Take now $P_{12}$ and $P_{13}$ to be directed paths in $D[X_u]$ starting at $v_1$ and ending at $v_2$ and $v_3$, respectively. We now define $b(u)$ as the first vertex in $V(P_{12})$ that we meet when traversing $P_{13}$ backwards (starting at $v_3$); $P_1^u$ as the subpath of $P_{12}$ directed from $v_1$ to $b(u)$; $P_2^u$ as the subpath of $P_{12}$ directed from $b(u)$ to $v_2$; and $P_{3}^u$ as the subpath of $P_{13}$ directed from $b(u)$ to $v_3$. By construction, $P_1^u, P_2^u, P_3^u$ are internally vertex-disjoint, and hence the claim follows.
To finish the proof, let $S \subseteq D$ be a subdigraph with vertex set \begin{equation*} V(S):=\bigcup_{u \in V(F)}\left(\bigcup_{i=1}^{d(u)}V(P_i^u)\right), \end{equation*} and arcs \begin{equation*} A(S):=\left\{\Big(v(e,u_1),v(e,u_2)\Big)\ \Big|\ e=(u_1,u_2) \in A(F)\right\} \cup \left(\bigcup_{u \in V(F)}\left(\bigcup_{i=1}^{d(u)}A(P_i^u)\right)\right). \end{equation*} Then $S$ is a digraph isomorphic to a subdivision of $F$ in which a vertex $u \in V(F)$ is represented by the branch-vertex $b(u)$. This concludes the proof. \end{proof} \section{Concluding remarks} In this note we showed that $sm_{\vec{\chi}}(t) \le 2m_\chi(t)-1$ and $bm_{\vec{\chi}}(t) \le 2m_\chi(2t)-1$ for any $t \ge 1$. As far as lower bounds are concerned, it is not hard to see that $m_\chi(t) \le \min\{sm_{\vec{\chi}}(t),bm_{\vec{\chi}}(t)\}$ for every $t \ge 1$. Indeed, for any graph $G$ with $\chi(G) \ge \min\{sm_{\vec{\chi}}(t),bm_{\vec{\chi}}(t)\}$, as $\vec{\chi}(\bivec{G})=\chi(G)$, by definition $\bivec{G}$ contains $\bivec{K}_t$ either as a strong minor or as a butterfly minor, each of which implies that $G$ contains a $K_t$-minor. Therefore, our results reduce the question about the asymptotics of $sm_{\vec{\chi}}(t)$ and $bm_{\vec{\chi}}(t)$ to the well-studied undirected version of the problem. Also, as Hadwiger's conjecture is known to be true for small values, for $3 \le t \le 6$ we have $$t+1 \le sm_{\vec{\chi}}(t) \le 2t-1\quad\text{and}\quad t+1 \le bm_{\vec{\chi}}(t) \le 4t-1.$$ We believe that the upper bounds should not be tight. To support this intuition, let us mention that a more careful analysis of our proof of Theorem~\ref{thm:cor} yields the stronger statement that any digraph $D$ with $\vec{\chi}(D) \ge 2m_\chi(t)-1$ contains a strong $\bivec{K}_t$-minor model in which between any two branch sets there are at least two arcs spanned in both directions. Under the assumption that Hadwiger's conjecture is true, the bound $2t-1$ for this stronger property would be sharp, as shown by $\bivec{K}_{2t-2}$. This indicates that our proof should not be expected to give a tight bound for the problem of forcing a strong $\bivec{K}_t$-minor. Instead it seems plausible that $sm_{\vec{\chi}}(t)=t+1$ (and maybe $bm_{\vec{\chi}}(t)=t+1$) for any $t \ge 3$. \begin{problem} Does every digraph $D$ with $\vec{\chi}(D) \ge t+1$ contain $\bivec{K}_t$ as a strong minor (butterfly minor)? \end{problem} Already resolving the first open case $t=3$ would be quite interesting. \section*{Acknowledgments} We would like to thank Patrick Morris for fruitful discussions on the topic.
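\medskip \noindent {\bf A computational remark.} The lower-bound example from the introduction (the bidirected complete graph on $t+2$ vertices with the edges of a $5$-cycle removed) can be verified by brute force in its smallest case $t=3$. The following Python sketch (ours, purely illustrative and not part of the proofs above) computes the dichromatic number by enumerating all colorings and checking that every color class induces an acyclic subdigraph; it prints $3$, confirming $\vec{\chi}(D)=t$ for $t=3$.
\begin{verbatim}
from itertools import product

def is_acyclic(vs, arcs):
    # repeatedly delete vertices with no outgoing arc; acyclic iff all go
    vs, changed = set(vs), True
    while vs and changed:
        changed = False
        for v in list(vs):
            if not any(a == v and b in vs for (a, b) in arcs):
                vs.discard(v)
                changed = True
    return not vs

# D = bidirected (K_5 minus a 5-cycle); here t = 3, so t + 2 = 5 vertices
cycle = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
chords = {(i, j) for i in range(5) for j in range(i + 1, 5)} - cycle
arcs = [(i, j) for (i, j) in chords] + [(j, i) for (i, j) in chords]

def dichromatic_number(n, arcs):
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(is_acyclic([v for v in range(n) if col[v] == c],
                              [(a, b) for (a, b) in arcs
                               if col[a] == col[b] == c])
                   for c in range(k)):
                return k

print(dichromatic_number(5, arcs))   # prints 3, i.e. t for t = 3
\end{verbatim}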
{ "timestamp": "2021-01-13T02:22:34", "yymm": "2101", "arxiv_id": "2101.04590", "language": "en", "url": "https://arxiv.org/abs/2101.04590", "abstract": "The dichromatic number $\\vec{\\chi}(D)$ of a digraph $D$ is the smallest $k$ for which it admits a $k$-coloring where every color class induces an acyclic subgraph. Inspired by Hadwiger's conjecture for undirected graphs, several groups of authors have recently studied the containment of directed graph minors in digraphs with given dichromatic number. In this short note we improve several of the existing bounds and prove almost linear bounds by reducing the problem to a recent result of Postle on Hadwiger's conjecture.", "subjects": "Combinatorics (math.CO)", "title": "Complete minors in digraphs with given dichromatic number", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.976669234586964, "lm_q2_score": 0.7248702702332476, "lm_q1q2_score": 0.7079584920035517 }
https://arxiv.org/abs/0712.3911
N-systems, class polynomials for double eta-quotients and singular values of J-invariant function
Enge and Schertz gave the method of using the double eta-quotient for the construction of elliptic curves over finite fields. In their method, it is necessary to count the number of rational points of the elliptic curves corresponding to solutions of the modular equation over a finite field, because we cannot know in advance which solution of the modular equation corresponds to the modular invariant. We give a condition under which the modular invariant is a multiple root of the modular polynomial. Consequently, we give a method to reduce the amount of computation in the process of counting the number of rational points.
\section{Introduction}\label{section1} In elliptic curve cryptosystems, it is important to construct elliptic curves with a required number of points over finite fields. The theory of complex multiplication leads to an approach to the construction of elliptic curves over finite fields. When we construct elliptic curves, we often make use of the class polynomial of the $J$-invariant function. However, this method has the practical drawback that the coefficients of the polynomial grow very large. To overcome this problem, polynomials with small coefficients have been constructed by using Weber functions. Recently, Enge and Schertz \cite{EN1} gave the method of using the double $\eta$-quotient. The double $\eta$-quotient is not $\rm{SL}_{2}(\mathbb{Z})$-invariant but $\Gamma^{0}(N)$-invariant. For this reason, they considered $N$-systems and calculated class polynomials with respect to double $\eta$-quotients. Furthermore, the modular invariants $J(\mathfrak{a})$, for an ideal $\mathfrak{a}$ of an imaginary quadratic field, can be calculated from the class polynomials with respect to double $\eta$-quotients by using the modular polynomial. However, in their method it is necessary to count the number of rational points of the elliptic curves corresponding to solutions of the modular equation over a finite field, because we cannot know in advance which solution of the modular equation corresponds to the modular invariant. In this article, we shall give a method to reduce the amount of computation in the process of counting the number of rational points. Thus, if we are in the situation that the modular invariant is a multiple root of the modular polynomial, we may expect that the amount of computation reduces to at most half of the original one. In Section 2, we give some basic results and definitions. In Section 3, we determine the relations between class polynomials with respect to double $\eta$-quotients and $N$-systems. In Section 4, we give a condition under which the modular invariant is a multiple root of the modular polynomial. In Section 5, we give an example. \section{Some basic results and definitions}\label{section2} For two prime numbers $p_1$ and $p_2$, the double $\eta$-quotient $\mathfrak{w}_{p_{1},p_{2}}(z)$ of level $N=p_1p_2$ is defined by $$ \mathfrak{w}_{p_{1},p_{2}}(z) = \frac{\eta(z/p_1) \eta(z/p_2)}{\eta(z) \eta(z/p_{1}p_{2})}, $$ where $\eta(z)$ denotes the Dedekind $\eta$-function defined by $$ \eta(z) = q^{1/24} \prod_{n=1}^{\infty}(1-q^{n}), \qquad q = q(z) = e^{2\pi iz}. $$ The function $\mathfrak{w}_{p_{1},p_{2}}(z)$ is invariant under the modular group $$\Gamma^{0}(N) = \left\{ \left(\left. \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \in \rm{SL}_{2}(\mathbb{Z})\right | b\equiv 0\pmod N\right\}$$ (see \cite{MN}). For a divisor $Q$ of $N$ such that $(Q,\frac{N}{Q})=1$, set $$ W_Q = \left( \begin{smallmatrix} -Q & N \\ -y & -Qx \\ \end{smallmatrix} \right), $$ where $x,y \in \mathbb{Z}$ and $\det (W_Q)=Q$. Then it is known that $W_Q$ normalizes $\Gamma^{0}(N)$. In particular, the normalizer $\displaystyle W_N=\begin{pmatrix}0&N\\-1&0\end{pmatrix}$ is called the Atkin-Lehner involution of $\Gamma^{0}(N)$. By Theorem 2 of \cite{EN2}, we know \begin{equation}\label{eq1} \mathfrak{w}_{p_{1},p_{2}}(W_N(z)) = \mathfrak{w}_{p_{1},p_{2}}(z). \end{equation} For $\mathfrak{w}_{p_{1},p_{2}}^{s}(z)$ with $s = 24/\gcd(24,(p_1-1)(p_2-1))$, Enge and Schertz \cite{EN1} defined the class polynomials $H_\mathfrak{N}(X)$ associated with $N$-systems $\mathfrak{N}$.
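\medskip \noindent {\bf A numerical remark.} The invariance \eqref{eq1} is easy to check numerically with truncated $q$-products. The following Python sketch (ours, purely illustrative; the sample point and truncation length are arbitrary choices) evaluates $\eta$ and $\mathfrak{w}_{3,13}$ for $N=39$ and checks that $\mathfrak{w}_{3,13}(W_N(z))=\mathfrak{w}_{3,13}(z)$ up to floating-point error, using $W_N(z)=-N/z$.
\begin{verbatim}
import cmath

def eta(z, terms=200):
    # Dedekind eta via the truncated q-product; q = exp(2*pi*i*z)
    q = cmath.exp(2j * cmath.pi * z)
    val = cmath.exp(2j * cmath.pi * z / 24)   # the factor q^{1/24}
    for n in range(1, terms + 1):
        val *= 1 - q ** n
    return val

def w(z, p1=3, p2=13):
    # double eta-quotient of level N = p1 * p2
    return eta(z / p1) * eta(z / p2) / (eta(z) * eta(z / (p1 * p2)))

N, z = 39, 0.3 + 1.1j          # W_N acts on the upper half-plane by z -> -N/z
print(abs(w(-N / z) - w(z)))   # approximately 0, confirming the invariance
\end{verbatim}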
In the following, we shall recall their results. Let $\mathcal{O}_f$ be the order of conductor $f$ in an imaginary quadratic field $K=\mathbb{Q}(\sqrt{m})$. Let $d_K$ be the discriminant of $K$. Then the discriminant $D$ of $\mathcal{O}_f$ is given by $D=f^2d_K$. Let $\mathfrak{H}_f$ be the (proper) ideal class group of $\mathcal{O}_f$. To a proper ideal $\mathfrak{a}=[\beta_1,\beta_2] = \mathbb{Z}\beta_1 + \mathbb{Z}\beta_2$ of $\mathcal{O}_f$, we associate its basis quotient $$\alpha_{\mathfrak{a}}=\displaystyle{\frac{\beta_1}{\beta_2}} \qquad \left(\alpha_{\mathfrak{a}} \in \mathbb{H} = \{ \tau \in \mathbb{C}~|~ Im(\tau) > 0 \}\right),$$ and a quadratic form $\mathfrak q_{\mathfrak{a}}(X,Y)=N_{K/\mathbb Q}(\beta_1X+\beta_2Y)/N_{K/\mathbb Q}(\mathfrak{a})$. It is noted that the basis quotient $\alpha_\mathfrak{a}$ is determined up to $\rm{SL}_{2}(\mathbb{Z})$-equivalence and the form $\mathfrak{q}_{\mathfrak{a}}(X,Y)$ is a primitive quadratic form with integral coefficients of discriminant $D$. The following results are well known (see \cite{AC}). Let $\mathfrak{a}_1$ and $\mathfrak{a}_2$ be proper ideals of $\mathcal{O}_f$. We write $ \mathfrak{a}_1 \sim \mathfrak{a}_2$ if $\mathfrak{a}_1$ and $\mathfrak{a}_2$ are in the same ideal class of $\mathcal{O}_f$. Then \begin{eqnarray} \mathfrak{a}_1 \sim \mathfrak{a}_2 & \Longleftrightarrow & \alpha_{\mathfrak{a}_1} \; \text{is} \;\rm{SL}_{2}(\mathbb{Z})\text{-equivalent} \; \text{to} \; \alpha_{\mathfrak{a}_2} \nonumber\\ & \Longleftrightarrow & \mathfrak q_{\mathfrak{a}_1}(X,Y) \; \text{is properly equivalent to} \; \mathfrak q_{\mathfrak{a}_2}(X,Y).\label{equiv1} \end{eqnarray} Further, the map $\mathfrak{a}\mapsto \mathfrak q_{\mathfrak{a}}(X,Y)$ gives rise to a bijection between $\mathfrak{H}_f$ and the proper equivalence classes of quadratic forms of discriminant $D$. Since for an ideal $\mathfrak{a}$ the value $J(\alpha_\mathfrak{a})$ is independent of the choice of basis quotients $\alpha_\mathfrak{a}$, we shall denote by $J(\mathfrak{a})$ the value $J(\alpha_\mathfrak{a})$. Hereafter, for a triple $(A,B,C)$ of integers such that $A\ne 0$, we shall denote by $[A,B,C]$ a quadratic form $AX^2+BXY+CY^2$. Furthermore, for a quadratic form $\mathfrak q=[A,B,C]$, we put $\mathfrak{a}_\mathfrak{q}=[A,\frac{-B+\sqrt D}2]$ and $\alpha_\mathfrak{q}=\frac{-B+\sqrt D}{2A}$. Let $h(D)$ be the class number of the order $\mathcal{O}_f$. Then we know there exist $h(D)$ isomorphism classes of elliptic curves with complex multiplication by $\mathcal{O}_f$. They are represented by the elliptic curves with $j$-invariants $J(\alpha_{\mathfrak{a}_i})$, where $\mathfrak{a}_i~(i=1,\dots ,h(D))$ are ideals representing all classes of $\mathfrak{H}_f$. Let $K_f$ be the ring class field of $K$ of conductor $f$. Then the theory of complex multiplication shows that $J(\mathfrak{a}_i)$ generates $K_f$ over $K$ for each $i$ and that $J(\mathfrak{a}_i),~(i=1,\dots ,h(D))$ are conjugate to each other over $\mathbb Q$. To calculate the values $J(\mathfrak{a})$, we use the classical class equation for the order $\mathcal{O}_f$ defined by \[H_D[J](X)=\prod_{i=1}^{h(D)} (X-J(\mathfrak{a}_i)).\] For instance, for $D=-4$ we have $h(D)=1$ and $H_{-4}[J](X)=X-1728$, since $J(i)=1728$ with the usual normalization. However, since this polynomial has large integral coefficients, it is hard to compute for discriminants of large absolute value. Enge and Schertz \cite{EN1} devised the method of using the double $\eta$-quotient to obtain a class polynomial with small integral coefficients.
Since the double $\eta$-quotient is $\Gamma^{0}(N)$-invariant but not $\rm{SL}_{2}(\mathbb{Z})$-invariant, the values $\mathfrak{w}_{p_{1},p_{2}}^s(\alpha_\mathfrak{a})$ depend on the choice of ideals $\mathfrak{a}$ in an ideal class. Therefore, to define a class polynomial for $\mathfrak{w}_{p_{1},p_{2}}^s$, we must use the $N$-systems introduced by Schertz \cite{RS}. \begin{definition} Let $\mathfrak{N}$ be a set of $h(D)$ primitive quadratic forms $[A_i,B_i,C_i]$ of discriminant $D$. We call $\mathfrak{N}$ an $N$-system for $D$ if the quadratic forms $[A_i,B_i,C_i]$ satisfy the following conditions: \begin{enumerate} \item $\gcd(A_i,N)=1, B_i\equiv B_j\pmod{2N},~N|C_i$ for every $i,j$, \item the ideals $[A_i, \frac{-B_i+\sqrt{D}}{2}]$ form a representative system of $\mathfrak{H}_f$. \end{enumerate} \end{definition} The following result is basic for $N$-systems. See Theorems 3.1, 3.2 of \cite{EN1}. \begin{theorem}\label{theorem2.3} Assume prime numbers $p_1$ and $p_2$ satisfy the following conditions: \begin{enumerate} \item If $p_1 \neq p_2$, then $\bigl( \frac{D}{p_1} \bigr),\bigl( \frac{D}{p_2} \bigr) \neq -1$, \item if $p_1=p_2=p$, then either $\bigl( \frac{D}{p} \bigr)=1$ or $p|f$. \end{enumerate} Then there exists an $N$-system $\mathfrak{N}=\{\mathfrak{q}_i\}$ for $D$. Further, $\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_{\mathfrak{q}_i}) \in K_f$ for every $i$ and the $\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_{\mathfrak{q}_i})$ are conjugate under $G(K_f/K)$ to each other. \end{theorem} Now the class polynomial of $\mathfrak{w}_{p_{1},p_{2}}^{s}$ associated with an $N$-system $\mathfrak{N}$ is defined by \[ H_\mathfrak{N}(X) = \prod_{i=1}^{h(D)} \left (X-\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_{\mathfrak{q}_i})\right). \] By Theorem~\ref{theorem2.3}, we have $H_\mathfrak{N}(X)\in K[X]$. Further, Corollary 3.1 of \cite{EN1} gives \begin{corollary}\label{Cor2.4} Suppose that the following conditions hold: \begin{enumerate} \item If $p_1 \neq p_2$, then $\bigl( \frac{D}{p_1} \bigr),\bigl( \frac{D}{p_2} \bigr) \neq -1$, and $p_1,p_2 \nmid f$; \item if $p_1=p_2=p \neq 2$, then $\bigl( \frac{D}{p} \bigr)=1$ or $p|f$; \item if $p_1=p_2=2$, then $\bigl( \frac{D}{2} \bigr)=1$, or $2|f$, but $D\not\equiv 4\pmod{32}$. \end{enumerate} Then $H_\mathfrak{N}(X)\in\mathbb{Z}[X]$. \end{corollary} To obtain the modular invariants $J(\mathfrak{a}_{\mathfrak{q}})$ from the singular values $\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_{\mathfrak{q}})$, we use a modular polynomial $\Phi_{p_1,p_2}$ that relates $\mathfrak{w}_{p_{1},p_{2}}$ to $J$, which is defined as follows. \begin{definition} $$ \Phi_{p_1,p_2}(X,J) = \prod_{\sigma \in Iso(\mathbb{C}_{\Gamma^{0}(N)}/\mathbb{C}_{\Gamma})} (X-\sigma(\mathfrak{w}_{p_{1},p_{2}}^{s})), $$ where $\mathbb{C}_{\Gamma^{0}(N)}$ and $\mathbb{C}_{\Gamma}$ denote the modular function fields of $\Gamma^{0}(N)$ and $\Gamma=\rm{SL}_{2}(\mathbb{Z})$, respectively. \end{definition} We know that $\Phi_{p_1,p_2}(X,J) \in \mathbb{Z}[X,J]$ and that $\Phi_{p_1,p_2}(X,J)$ is the minimal polynomial of $\mathfrak{w}_{p_{1},p_{2}}^{s}$ over $\mathbb{C}(J)$ by Theorems 7 and 8 of \cite{EN2}. To obtain elliptic curves with complex multiplication by $\mathcal{O}_f$ over the finite field $\mathbb{F}_q$ of $q$ elements, the polynomials $H_\mathfrak{N}(X)$ and $\Phi_{p_1,p_2}(X,J)$ are used in the following algorithm. Assume that $q$ is a prime number which splits completely in $K_f$. \begin{algorithm} \begin{enumerate} \item Construct an $N$-system $\{\mathfrak{q}_i\}$.
\item Compute $\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_{\mathfrak{q}_i})$ and $H_\mathfrak{N}(X)$. \item Compute a root $\overline{\mathfrak{w}}$ of $H_\mathfrak{N}(X) \bmod q$. \item Compute the $\mathbb{F}_q$-rational roots $\overline{J_{k}}$ of $\Phi_{p_1,p_2}(\overline{\mathfrak{w}},J) \bmod q$. \item Output the desired $J$-invariant among the $\overline{J_{k}}$. \end{enumerate} \end{algorithm} In step 5, it is necessary to count the number of $\mathbb{F}_q$-rational points of each elliptic curve $E_k$ with $j$-invariant $\overline{J_{k}}$ to determine which elliptic curve $E_k$ has complex multiplication by $\mathcal{O}_f$. In step 4, if $\Phi_{p_1,p_2}(\overline{\mathfrak{w}},J) \bmod q$ has degree $2$ in $J$ and a multiple root $\overline{J}$ in $J$, then it is not necessary to count the number of rational points. Accordingly, in Section \ref{section4} we will consider conditions under which the polynomial has a multiple root. \section{$N$-systems and class polynomials}\label{section3} In this section, we study the relation between the class polynomials and $N$-systems. \begin{lemma}\label{lemma1} Let $\{[A_i,B_i,C_i]\}$ and $\{[A_i',B_i',C_i']\}$ be $N$-systems for $D$. Suppose that $[A_i,B_i,C_i]$ and $[A_i',B_i',C_i']$ are properly equivalent. Then $\frac{-B_i+\sqrt{D}}{2A_i}$ is $\Gamma^0(N)$-equivalent to $\frac{-B_i'+\sqrt{D}}{2A_i'}$ if and only if $B_i\equiv B_i'\pmod {2N}$. In particular, if $B_i\equiv B_i'\pmod {2N}$, then $$ \mathfrak{w}_{p_{1},p_{2}}^{s}(\frac{-B_i+\sqrt{D}}{2A_i})=\mathfrak{w}_{p_{1},p_{2}}^{s}(\frac{-B_i'+\sqrt{D}}{2A_i'}). $$ \end{lemma} \begin{proof} Since $[A_i,B_i,C_i]$ and $[A_i',B_i',C_i']$ are properly equivalent, there exists a matrix $ M= \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \in \rm{SL}_{2}(\mathbb{Z}) $ such that \begin{eqnarray} \frac{-B_i+\sqrt{D}}{2A_i} = \frac{a(\frac{-B_i'+\sqrt{D}}{2A_i'})+b}{c(\frac{-B_i'+\sqrt{D}}{2A_i'})+d}. \label{5} \end{eqnarray} We have only to prove that $M \in \Gamma^0(N)$ if and only if $B_i\equiv B_i'\pmod {2N}$. By (\ref{5}), we have \begin{eqnarray} B_iB_i'c-2A_i'B_id+Dc & = &- 2A_iB_i'a+4A_iA_i'b, \label{6}\\ -B_ic-B_i'c+2A_i'd & = & 2A_ia. \label{7} \end{eqnarray} By substituting (\ref{7}) into (\ref{6}), we obtain \begin{eqnarray} A_ia(B_i'-B_i)-2A_iC_ic=2A_iA_i'b. \label{7.1} \end{eqnarray} Since $\gcd(A_iA_i',N)=1$ and $N|C_i$, we have $a(B_i'-B_i)\equiv 2A_i'b\pmod {2N}$. Therefore $M \in \Gamma^0(N)$ if and only if $B_i\equiv B_i'\pmod {2N}$. \end{proof} The following result is deduced from Proposition 3 of \cite{RS}. \begin{proposition}\label{prop_rs} Let $[A,B,C]$ be a primitive quadratic form of discriminant $D$ such that $A>0$, $\gcd(A,N)=1$ and $N|C$. Then there exists an $N$-system $\mathfrak{N}$ for $D$ containing $[A,B,C]$. In particular, for an integer $B$ such that $B^2\equiv D\pmod {4N}$, there exists an $N$-system $\mathfrak{N}$ for $D$ containing $[1,B,(B^2-D)/4]$. \end{proposition} By Proposition~\ref{prop_rs} and Lemma~\ref{lemma1}, we know that the class polynomials of the double $\eta$-quotient associated with $N$-systems depend only on the integers $B$, considered mod $2N$, such that $B^2\equiv D\pmod {4N}$. Thus, hereafter, we shall denote by $H_{B,N}(X)$ the class polynomial $H_\mathfrak{N}(X)$ associated with an $N$-system $\mathfrak{N}$ containing the form $[1,B,(B^2-D)/4]$. In the following, we shall fix an $N$-system containing $[1,B,(B^2-D)/4]$ and shall denote it by $\mathfrak{N}_B$. \begin{lemma}\label{lemma2} Assume that $p_1$ and $p_2$ are odd primes.
Let $N(D)$ be the number of integers $B \bmod 2N$ such that $B^2\equiv D \pmod {4N}$. Then \begin{eqnarray} N(D) = \begin{cases} 4 \quad if \quad {\bigl( \frac{D}{p_1} \bigr)=1 \quad and \quad \bigl( \frac{D}{p_2} \bigr)=1}, \\ 2 \quad if \quad {\bigl( \frac{D}{p_1} \bigr)=1 \quad and \quad \bigl( \frac{D}{p_2} \bigr)=0}. \end{cases} \end{eqnarray} \end{lemma} \begin{proof} Let us consider the case $\bigl( \frac{D}{p_1} \bigr)=1$ and $\bigl( \frac{D}{p_2} \bigr)=1$. Then there exists an integer $\alpha_i$ such that $\alpha_i^2\equiv D \pmod{p_i}$ for $i=1,2$. By the Chinese remainder theorem, we see that $B^2\equiv D \pmod{4N}$ if and only if \begin{eqnarray} B & \equiv & D \pmod{2}, \\ B & \equiv & \pm \alpha_i \pmod{p_i}\quad (i=1,2). \label{1} \end{eqnarray} This shows $N(D) = 4$. The remaining case can be treated similarly. Thus we omit the details. \end{proof} If $N$ is odd, we obtain at most $N(D)$ distinct class polynomials of the double $\eta$-quotient associated with $N$-systems for $D$. \begin{lemma}\label{lemma3} Let $B$ be an integer such that $B^2\equiv D \pmod {4N}$. Then $H_{-B,N}(X)=H_{B,N}(X)$. \end{lemma} \begin{proof} We know that the $q$-expansion of $\mathfrak{w}_{p_{1},p_{2}}^{s}(z)$ is rational, thus $$ \mathfrak{w}_{p_{1},p_{2}}^{s}(z) = \sum a_n q^n \qquad (a_n \in \mathbb{Q}, \quad q=e^{2\pi iz}) $$ (see section 3 of \cite{EN2}). Therefore we have $$ \overline{\mathfrak{w}_{p_{1},p_{2}}^{s}(z)} = \sum a_n \overline{q}^n = \mathfrak{w}_{p_{1},p_{2}}^{s}({\overline{q}})=\mathfrak{w}_{p_{1},p_{2}}^{s}(-\overline z) $$ and \[ \overline{\mathfrak{w}_{p_{1},p_{2}}^{s}\Bigl(\frac{-B+\sqrt{D}}{2}\Bigr)} = \mathfrak{w}_{p_{1},p_{2}}^{s}\Bigl(\frac{B+\sqrt{D}}{2}\Bigr). \] Since Corollary~\ref{Cor2.4} shows that $H_{B,N}(X)\in\mathbb Z[X]$, we have $H_{B,N}(X)=\overline{H_{B,N}(X)}=H_{-B,N}(X)$. \end{proof} \begin{theorem}\label{theorem2} Let $N(H_D)$ be the number of distinct class polynomials $H_{B,N}(X)$ associated with $N$-systems. Then \begin{eqnarray} N(H_D) = \begin{cases} 1,2 \quad &if \quad {\bigl( \frac{D}{p_1} \bigr)=1 \quad and \quad \bigl( \frac{D}{p_2} \bigr)=1}, \\ 1 \quad &if \quad {\bigl( \frac{D}{p_1} \bigr)=1 \quad and \quad \bigl( \frac{D}{p_2} \bigr)=0}. \end{cases} \end{eqnarray} \end{theorem} \begin{proof} Lemmas \ref{lemma2} and \ref{lemma3} imply that $N(H_D) \le N(D)/2$. Thus we have the assertion. \end{proof} In the case $N(H_D)=2$, we have two class polynomials $H_{B,N}(X)$ and $H_{B',N}(X)$, where $B,B'$ are integers such that $B^2\equiv D\pmod{4N},~B'\equiv B\pmod{p_1},~B'\equiv -B\pmod{p_2}$. We shall show that $H_{B',N}(X)$ is obtainable from $H_{B,N}(X)$ by a simple transformation. We shall use the following transformation formula of the Dedekind $\eta$-function (see Theorem 1 of \cite{EN2}). \begin{theorem}\label{EN2_Th1} Let $ M = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \in \rm{SL}_{2}(\mathbb{Z}) $ be normalized such that $c\geq 0$, and $d>0$ if $c=0$. Write $c=\gamma 2^{\lambda}$ with $\gamma$ odd; by convention, $\gamma=\lambda=1$ if $c=0$. Then $$ \eta(Mz) = \epsilon(M)\sqrt{cz+d}\, \eta(z) $$ with $$ \mathfrak{R}(\sqrt{cz+d})>0 , \quad \epsilon(M) = \Bigl( \frac{a}{\gamma} \Bigr) \zeta^{ab+c(d(1-a^2)-a)+3\gamma(a-1)+\frac{3}{2}\lambda(a^2-1)}_{24}. $$ \end{theorem} \begin{lemma}\label{lemma_involution} Let $ W_{p_1} = \left( \begin{array}{cc} -p_1 & N \\ -y & -p_1x \\ \end{array} \right) $ with $y < 0,~y\equiv 1\pmod 2$ and $p_1x+p_2y=1$.
Then $$ \mathfrak{w}_{p_1,p_2}^s(W_{p_1}(z)) = \frac{\Bigl( \frac{p_1}{p_2} \Bigr)^s}{\mathfrak{w}_{p_1,p_2}^{s}(z)}. $$ \end{lemma} \begin{proof} By Theorem \ref{EN2_Th1}, \begin{eqnarray*} \mathfrak{w}_{p_1,p_2}(W_{p_1}(z)) = \mathfrak{w}_{p_1,p_2}(\frac{-p_1z+N}{-yz-p_1x}) = \frac{\eta(\frac{-p_1z+N}{p_1(-yz-p_1x)})\eta(\frac{-p_1z+N}{p_2(-yz-p_1x)})}{\eta(\frac{-p_1z+N}{-yz-p_1x})\eta(\frac{-p_1z+N}{p_1p_2(-yz-p_1x)})} = \frac{\epsilon^{*}}{\mathfrak{w}_{p_1,p_2}(z)}. \end{eqnarray*} Here $$ \epsilon^{*} ={\footnotesize \frac{ \epsilon \left(\Bigl( \begin{array}{cc} -1 & p_2 \\ -y & -p_1x \\ \end{array}\Bigr) \right) \epsilon \left(\Bigl( \begin{array}{cc} -p_1 & 1 \\ -p_2y & -x \\ \end{array}\Bigr) \right) }{ \epsilon \left(\Bigl( \begin{array}{cc} -p_1 & p_2 \\ -y & -x \\ \end{array}\Bigr) \right) \epsilon \left(\Bigl( \begin{array}{cc} -1 & 1 \\ -p_2y & -p_1x \\ \end{array}\Bigr) \right) }} = \frac{\Bigl(\frac{-1}{-y} \Bigr)\zeta^{a}_{24} \Bigl( \frac{-p_1}{-p_2y} \Bigr)\zeta^{b}_{24}} {\Bigl(\frac{-p_1}{-y} \Bigr)\zeta^{c}_{24} \Bigl( \frac{-1}{-p_2y} \Bigr)\zeta^{d}_{24}} = \Bigl(\frac{p_1}{p_2} \Bigr)\zeta^{a+b-c-d}_{24} $$ with \begin{eqnarray*} a & = & -p_2-y+3\gamma(-2), \\ b & = & -p_1 -p_2y(-x(1-p_1^2)+p_1)+3p_2\gamma(-p_1-1)+\frac{3}{2}\lambda(p_1^2-1), \\ c & = & -p_1p_2 -y(-x(1-p_1^2)+p_1)+3\gamma(-p_1-1)+\frac{3}{2}\lambda(p_1^2-1), \\ d & = & -1-p_2y+3p_2\gamma(-2). \end{eqnarray*} Therefore we see $$ a+b-c-d = (1-p_2)(1-p_1)(1-y-xy(p_1+1)-3\gamma). $$ Since $s(p_1-1)(p_2-1) \equiv 0 \pmod{24}$, we have $(\epsilon^{*})^s= \Bigl(\frac{p_1}{p_2} \Bigr)^s$. \end{proof} \begin{proposition}\label{prop5} Suppose that $\bigl( \frac{D}{p_1} \bigr)=\bigl( \frac{D}{p_2} \bigr)=1$. Then $$ H_{B',N}(X)= \frac{X^{h(D)}}{H_{B,N}(0)} H_{B,N}\Bigl( \frac{\Bigl( \frac{p_1}{p_2} \Bigr)^s}{X} \Bigr). $$ \end{proposition} \begin{proof} Let $\mathfrak{q}=[A,B,C]$ be a form of the $N$-system $\mathfrak{N}_B$. Put $\alpha_\mathfrak{q}=\frac{-B+\sqrt D}{2A}$. We can write $W_{p_1}(\alpha_\mathfrak{q})=\frac{-B'+\sqrt D}{2A'}$. We see easily that $B'\equiv B\pmod{p_1}$, $B'\equiv -B\pmod{p_2}$ and $C'=(B'^2-D)/4A'\equiv 0\pmod N$. We shall show that if $\gcd(A',N)>1$, then there exists an element $\gamma \in \Gamma^{0}(N)$ such that the first coefficient $A''$ of the quadratic form $[A'',B'',C'']$ corresponding to $\gamma W_{p_1}(\alpha_\mathfrak{q})$ is prime to $N$. It is noted by the proof of Lemma~\ref{lemma1} that $B'\equiv B''\pmod{2N}$ and $N|C''$. Since $\mathfrak{w}_{p_{1},p_{2}}^{s}$ is $\Gamma^{0}(N)$-invariant, we have $\mathfrak{w}_{p_{1},p_{2}}^{s}(\gamma W_{p_1}(\alpha_\mathfrak{q})) = \mathfrak{w}_{p_{1},p_{2}}^{s}(W_{p_1}(\alpha_\mathfrak{q}))$, and hence \[ H_{B',N}(X)=\prod_{\mathfrak{q}\in\mathfrak{N}_B}(X- \mathfrak{w}_{p_{1},p_{2}}^{s}(W_{p_1}(\alpha_\mathfrak{q}))). \] By Lemma~\ref{lemma_involution}, we have our result. Thus we have only to prove the existence of the above $\gamma$ in the case where $A'$ is not prime to $N$. Set $\gamma = \left( \begin{array}{cc} r & N \\ t & 1 \\ \end{array} \right) $, where $r,t$ are integers with $r-tN=1$. Then \begin{eqnarray*} \gamma W_{p_1}(\alpha_\mathfrak{q}) & = & \frac{r \Bigl(\frac{-B'+\sqrt{D}}{2A'}\Bigr)+N}{t\Bigl( \frac{-B'+\sqrt{D}}{2A'}\Bigr) + 1} = \frac{(-rB'-tB'N+2A'N+2rtC')+\sqrt{D}}{2(-tB'+A'+t^2C')}. \end{eqnarray*} Therefore we have $A''= -tB'+A'+t^2C'$. Since $N|C'$, we know that $\gcd(A'',N)=1$ if and only if $\gcd(-tB'+A',N)=1$. Assume $N|A'$. Then $\gcd(D,N)=1$ implies $\gcd(B',N)=1$. Therefore we can take $r=N+1$, $t=1$. Next assume $p_i|A'$ and $p_j\nmid A'$. Then we have $p_i\nmid B'$. Therefore we can take $r=p_jN+1$, $t=p_j$. This completes our proof.
\end{proof} \section{Multiple roots of the modular equation}\label{section4} In this section, we assume the conditions in Corollary~\ref{Cor2.4}. We shall study a condition under which, for singular values $\alpha$ of double $\eta$-quotients, the polynomial $\Phi_{p_1,p_2}(\alpha,J)$ in $J$ has a multiple root. \begin{proposition}\label{prop1} The polynomial $\Phi_{p_1,p_2}(X,J)$ has degree $\displaystyle{N \prod_{p|N} \bigl(1+\frac{1}{p}\bigr)}$ as a polynomial in $X$ and degree $\displaystyle{\frac{s(p_1-1)(p_2-1)}{12}}$ as a polynomial in $J$. For $\tau\in\mathbb H$, the equation $\Phi_{p_1,p_2}(\mathfrak{w}_{p_1,p_2}^s(\tau),J)=0$ has the roots $J(\tau)$ and $J(W_N (\tau))$. In particular, if $J(\tau)=J(W_N (\tau))$, then the equation has a multiple root. \end{proposition} \begin{proof} The assertion concerning the degrees follows from Theorem 9 of \cite{EN2}. The equation $\Phi_{p_1,p_2}(\mathfrak{w}_{p_1,p_2}^s(\tau),J)=0$ obviously has the root $J(\tau)$. Similarly, $J(W_N(\tau))$ is a root of $\Phi_{p_1,p_2}(\mathfrak{w}_{p_1,p_2}^s(W_N(\tau)),J)=0$. By \eqref{eq1}, we have $\mathfrak{w}_{p_{1},p_{2}}^{s}(W_N(\tau))=\mathfrak{w}_{p_{1},p_{2}}^{s}(\tau)$. Therefore $J(W_N (\tau))$ is a root of $\Phi_{p_1,p_2}(\mathfrak{w}_{p_1,p_2}^s(\tau),J)=0$. \end{proof} Let $\mathfrak{q}=[A,B,C]$ be a form of an $N$-system $\mathfrak{N}_B$. Since \begin{eqnarray*} W_N(\alpha_\mathfrak{q}) & = & \frac{N}{-\frac{-B+\sqrt{D}}{2A}} = \frac{2AN(B+\sqrt{D})}{B^2-D} = \frac{B+\sqrt{D}}{2(\frac{C}{N})}, \end{eqnarray*} the action of $W_N$ on the ideal $\mathfrak{a_\mathfrak{q}}$ is given by $W_N(\mathfrak{a_\mathfrak{q}}) = [\frac{C}{N}, \frac{B+\sqrt{D}}{2}]$. \begin{lemma}\label{lemma_a_b} If we set $\mathfrak{a}_B=[N,\frac{-B+\sqrt{D}}{2}]$, then $W_N(\mathfrak{a_\mathfrak{q}}) \sim \mathfrak{a_\mathfrak{q}}\mathfrak{a}_B$. \end{lemma} \begin{proof} Since $\gcd(A,B,C)=1$, \begin{eqnarray*} \overline{\mathfrak{a_\mathfrak{q}}}W_N(\mathfrak{a_\mathfrak{q}}) & = & [A,\frac{B+\sqrt{D}}{2}][\frac{C}{N},\frac{B+\sqrt{D}}{2}] \\ & = & [\frac{AC}{N},\frac{C}{N}(\frac{B+\sqrt{D}}{2}),A(\frac{B+\sqrt{D}}{2}),B(\frac{B+\sqrt{D}}{2})] \\ & = & [\frac{AC}{N},\frac{B+\sqrt{D}}{2}] \\ & = & \frac{1}{N}(\frac{B+\sqrt{D}}{2})[N,\frac{-B+\sqrt{D}}{2}]. \end{eqnarray*} Thus we have $\overline{\mathfrak{a_\mathfrak{q}}}W_N(\mathfrak{a_\mathfrak{q}})\sim\mathfrak{a}_B$. Since $\overline{\mathfrak{a_\mathfrak{q}}}\mathfrak{a_\mathfrak{q}}\sim 1$, this proves the assertion. \end{proof} \begin{proposition}\label{prop2} Let $\mathfrak{q}=[A,B,C]$ be a form of an $N$-system $\mathfrak{N}_B$. Then $J(W_N(\alpha_\mathfrak{q}))=J(\alpha_\mathfrak{q})$ if and only if there exist $u,v \in \mathbb{Z}$ such that \begin{eqnarray}\label{con1} \begin{cases} u^2 - Dv^2 = 4N \\ u-Bv \equiv 0 \pmod{2N}. \end{cases} \end{eqnarray} \end{proposition} \begin{proof} By \eqref{equiv1} and Lemma~\ref{lemma_a_b}, we have \begin{eqnarray*} J(W_N(\mathfrak{a}_\mathfrak{q}))=J(\mathfrak{a}_\mathfrak{q}) \quad & \Leftrightarrow & \quad W_N(\mathfrak{a_\mathfrak{q}}) \sim \mathfrak{a_\mathfrak{q}} \\ \quad & \Leftrightarrow & \quad \mathfrak{a}_B \sim 1. \end{eqnarray*} Further, we know that the condition $\mathfrak{a}_B \sim 1$ is equivalent to the existence of an element $ \left( \begin{array}{cc} x & y \\ z & w \\ \end{array} \right) \in \rm{SL}_{2}(\mathbb{Z}) $ such that \begin{eqnarray}\label{con2} \frac{-B+\sqrt{D}}{2N} = \frac{x(\frac{-B+\sqrt{D}}{2})+y}{z(\frac{-B+\sqrt{D}}{2})+w}. \end{eqnarray} Let us assume \eqref{con2}.
Then we have
\begin{eqnarray}
zB^2+zD-2wB & = & -2xNB+4Ny, \label{8} \\
w & = & xN+zB. \label{9}
\end{eqnarray}
By substituting (\ref{9}) into (\ref{8}), we have $y = -A\frac{C}{N}z$. Therefore, from $xw - yz = 1$, we obtain $(Bz+2xN)^2-Dz^2=4N$. Now, we put $u=Bz+2Nx$, $v=z$. Then we have $u^2-Dv^2=4N$ and $x=\frac{u-Bv}{2N}$. Further, since $x \in \mathbb{Z}$, we have $u-Bv \equiv 0 \pmod{2N}$.
Conversely, let $u,v$ be integers satisfying \eqref{con1}. Put $x=\frac{u-Bv}{2N}$, $y= -A\frac{C}{N}v$, $z=v$ and $w=xN+zB$. Then we have $xw-yz=1$ and \eqref{con2}.
\end{proof}
Immediately from Proposition~\ref{prop2}, we obtain
\begin{corollary}\label{coro4}
If $J(W_N(\mathfrak{a}_\mathfrak{q}))=J(\mathfrak{a}_\mathfrak{q})$, then $D > -4N$.
\end{corollary}
\begin{proof}
Note first that $v\neq 0$: otherwise the two conditions of \eqref{con1} would give $u^2=4N$ and $2N\mid u$, forcing $N=1$. The condition \eqref{con1} then shows
$
D = \frac{u^2-4N}{v^2} > -4N.
$
\end{proof}
\begin{proposition}\label{prop3}
Assume that there exist integers $u$ and $v$ satisfying \eqref{con1}. Then the equation $\Phi_{p_1,p_2}(\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_\mathfrak{q}),J)=0$ has a multiple root $J(\mathfrak{a}_\mathfrak{q})$.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop2}, we have $J(W_N(\mathfrak{a}_\mathfrak{q}))=J(\mathfrak{a}_\mathfrak{q})$, and the assertion follows from Proposition~\ref{prop1}.
\end{proof}
By an argument similar to that in Proposition~\ref{prop2}, we have:
\begin{proposition}\label{prop4}
Let the notation be as in Proposition~\ref{prop2}. Then ${W_N}^2(\mathfrak{a}_\mathfrak{q}) \sim \mathfrak{a}_\mathfrak{q}$ if and only if there exist integers $X,Y$ such that $Y\neq 0$ and
\begin{eqnarray*}
\begin{cases}
X^2-DY^2=4N^2 \\
X-BY \equiv 0 \pmod{2N} \\
(\frac{X-BY}{2N})^2 \equiv 1 \pmod{Y}.
\end{cases}
\end{eqnarray*}
\end{proposition}
\begin{theorem}\label{theorem4.7}
Let $\mathfrak{a_{\mathfrak{q}_i}}~(i=1,\dots,h(D))$ be the ideals associated with the quadratic forms $\mathfrak{q}_i=[A_i,B_i,C_i]$ of $\mathfrak{N}_B$. Then
\begin{eqnarray*}
J(\mathfrak{a_{\mathfrak{q}_1}})=J(W_N(\mathfrak{a_{\mathfrak{q}_1}})) \quad \Leftrightarrow \quad J(\mathfrak{a_{\mathfrak{q}_i}})=J(W_N(\mathfrak{a_{\mathfrak{q}_i}})) \quad (i=1,\dots,h(D)).
\end{eqnarray*}
\end{theorem}
\begin{proof}
We set $\mathfrak{a}_{B_i}=[N,\frac{-B_i+\sqrt{D}}{2}]$. Since $B_1 \equiv B_i \pmod{2N}$, we know
$$
\mathfrak{a}_{B_i}=[N,\frac{-B_i+\sqrt{D}}{2}]=\mathfrak{a}_{B_1}.
$$
Consequently, by Lemma~\ref{lemma_a_b},
\begin{eqnarray*}
J(\mathfrak{a_{\mathfrak{q}_1}})=J(W_N(\mathfrak{a_{\mathfrak{q}_1}})) \Leftrightarrow \mathfrak{a}_{B_1} \sim (1) \Leftrightarrow \mathfrak{a}_{B_i} \sim (1) \Leftrightarrow J(\mathfrak{a_{\mathfrak{q}_i}})=J(W_N(\mathfrak{a_{\mathfrak{q}_i}})).
\end{eqnarray*}
\end{proof}
\begin{corollary}\label{coro5}
Let $\ell$ be a prime number which splits completely in $K_f$. Let $B$ be an integer such that there exist integers $u$ and $v$ satisfying \eqref{con1}. Further let $\mathfrak{N}_B$ be the $N$-system determined by $B$. Then for any $\mathfrak{q} \in \mathfrak{N}_B$, the polynomial $\Phi_{p_1,p_2}(\overline{\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_\mathfrak{q})},J)$ has a multiple root $\overline{J(\mathfrak{a}_\mathfrak{q})}$ over $\mathbb F_\ell$.
\end{corollary}
\begin{proof}
Our assertion follows from Proposition~\ref{prop2} and Theorem~\ref{theorem4.7}.
\end{proof}
Finally, we give a result for the decomposition of $\Phi_{p_1,p_2}(X,J(\mathfrak{a}))$ for an ideal $\mathfrak{a}$ over finite fields.
\begin{proposition}
Assume that $p_1$ and $p_2$ satisfy condition 1 of Corollary~\ref{Cor2.4}. Let $\ell$ be a prime number which splits completely in $K_f$. Let $\mathfrak{a}$ be an ideal of $\mathcal{O}_f$. Then the polynomial $\Phi_{p_1,p_2}(X,J(\mathfrak{a})) \mod \ell$ has at least four linear factors over $\mathbb F_\ell$.
\end{proposition}
\begin{proof}
By Lemmas~\ref{lemma1} and \ref{lemma2}, we know that the ideal class of $\mathfrak{a}$ contains four forms $\mathfrak{q}_B$ of $N$-systems $\mathfrak{N}_B$, one for each of four distinct values of $B \pmod {2N}$ with $B^2\equiv D\pmod{4N}$. By Theorem~\ref{theorem2.3}, the values $\mathfrak{w}_{p_{1},p_{2}}^{s}(\alpha_{\mathfrak{q}_B})$ are algebraic integers in $K_f$. Since $\ell$ splits completely in $K_f$, they reduce modulo a prime above $\ell$ to elements of $\mathbb F_\ell$, each giving a linear factor of $\Phi_{p_1,p_2}(X,J(\mathfrak{a})) \bmod \ell$. Therefore we have our assertion.
\end{proof}
\section{Example}\label{section5}
We give an example for the result given in Corollary~\ref{coro5}. Let $D=-56$ and $N=39$. The integer $B=10$ satisfies $B^2\equiv D\pmod{4N}$. Consider the $N$-system $\mathfrak{N}_B$. For this integer $B$, there exist $u,v \in \mathbb{Z}$ satisfying \eqref{con1}; for instance, we can take $u=10$, $v=1$. The modular equation $\Phi_{3,13}(X,J)$ and the class polynomial $H_{B,N}(X)$ are given as follows.
{\footnotesize
$
\begin{array}{l}
\Phi_{3,13}(X,J) = X^{56}+(704-J)X^{55}+(168568+39J)X^{54}+(14498520-663J)X^{53} \\
\hspace{62pt} +(187807764+6331J)X^{52}+(744637296-35763J)X^{51} \\
\hspace{62pt} +(-6562036+106392J)X^{50}+(-3840625568-18070J)X^{49} \\
\hspace{62pt} +(1058251610-1082016J)X^{48}+(10302034600+3516903J)X^{47} \\
\hspace{62pt} +(4510900472-1278901J)X^{46}+(-34331690432-18277116J)X^{45} \\
\hspace{62pt} +(-7097865034+40532700J)X^{44}+(84188024320+11574823J)X^{43} \\
\hspace{62pt} +(546780176-161476962J)X^{42}+(-154959173464+168751479J)X^{41} \\
\hspace{62pt} +(-12359340101+230086922J)X^{40}+(327081484064-617987682J)X^{39} \\
\hspace{62pt} +(-49301838300+137626281J)X^{38}+(-576339027576+928366231J)X^{37} \\
\hspace{62pt} +(284363953068-959457720J)X^{36}+(735938431592-477589944J)X^{35} \\
\hspace{62pt} +(-558265224452+1429130144J)X^{34}+(-890017323520-466517064J)X^{33} \\
\hspace{62pt} +(977815434427-963208272J)X^{32}+(966995235128+909996295J)X^{31} \\
\hspace{62pt} +(-1755072840368+158515461J)X^{30}+(-345165085024-607329720J)X^{29} \\
\hspace{62pt} +(2218368968890+197238236J)X^{28}+(-911733108784+179445279J)X^{27} \\
\hspace{62pt} +(-1540031876048-140684622J)X^{26}+(1628026178168-6888479J)X^{25} \\
\end{array}
$
}
{\footnotesize
$
\begin{array}{l}
\hspace{62pt} +(261124933147+37909092J)X^{24}+(-1229692547200-8835450J)X^{23} \\
\hspace{62pt} +(462040501468-4070053J)X^{22}+(441029439032+1885689J)X^{21} \\
\hspace{62pt} +(-422841966612+44928J)X^{20}+(-7261052136-111436J)X^{19} \\
\hspace{62pt} +(163453863300+9516J)X^{18}+(-59787354976+740J)X^{17} \\
\hspace{62pt} +(J^2-1486J-26470898021)X^{16}+(24009911816-49J)X^{15} \\
\hspace{62pt} +(-1731574864+29J)X^{14}+(-3926472080+246J)X^{13} \\
\hspace{62pt} +(1333660406-364J)X^{12}+(158103088-221J)X^{11} \\
\hspace{62pt} +(-172600168+650J)X^{10}+(25597000-221J)X^9 \\
\hspace{62pt} +(5195450-364J)X^8+(-2155088+247J)X^7+(177164+26J)X^6 \\
\hspace{62pt} +(39936-52J)X^5+(-9996+13J)X^4+(600-J)X^3+88X^2-16X+1,
\end{array}
$
}
$
H_{B,N}(X)=X^4-2X^3-X^2+2X-1.
$
Note that the degree of $\Phi_{3,13}(X,J)$ in $J$ is $2$, in accordance with Proposition~\ref{prop1}. We take a prime number $\ell=3593$, which splits completely in $K_1$. Then $H_{B,N}(X)$ decomposes into linear factors over $\mathbb F_\ell$ as follows:
$$
H_{B,N}(X)\equiv (X-607)(X-166)(X-3428)(X-2987) \pmod{3593}.
$$
By substituting the roots of $H_{B,N}(X) \pmod{3593}$ into $\Phi_{3,13}(X,J)$, we have
\begin{eqnarray*}
\Phi_{3,13}(607,J) & \equiv & (J-229)^2 \pmod{3593}, \\
\Phi_{3,13}(166,J) & \equiv & (J-2979)^2 \pmod{3593}, \\
\Phi_{3,13}(3428,J) & \equiv & (J-2874)^2 \pmod{3593}, \\
\Phi_{3,13}(2987,J) & \equiv & (J-2696)^2 \pmod{3593}.
\end{eqnarray*}
Let $D(X)$ be the discriminant of $\Phi_{3,13}(X,J)$ as a polynomial in $J$. Then we see $D(X) \equiv 0 \pmod{H_{B,N}(X)}$; that is, $H_{B,N}(X)$ divides $D(X)$. This means that $\Phi_{3,13}(\mathfrak{w}_{3,13}^{s}(\alpha_{\mathfrak{q}_i}),J)$ still has a multiple root for every form $\mathfrak{q}_i\in\mathfrak{N}_B$. Thus we also have $J(\mathfrak{a_{\mathfrak{q}_i}})=J(W_N(\mathfrak{a_{\mathfrak{q}_i}}))$ for every $i$.
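The arithmetic in this example is small enough to verify by machine. The following short Python script is a minimal sketch (not part of the original computation; it only rechecks the data displayed above): it confirms that $B=10$ and $u=10$, $v=1$ satisfy the congruences of \eqref{con1}, and recovers the roots of $H_{B,N}(X)$ over $\mathbb F_{3593}$ by brute force.
\begin{verbatim}
# Sketch verifying the data of the example (D, N, B, u, v, ell as above).
D, N, B = -56, 39, 10
ell = 3593

assert (B * B - D) % (4 * N) == 0        # B^2 = D (mod 4N)

u, v = 10, 1                             # a solution of (con1)
assert u * u - D * v * v == 4 * N
assert (u - B * v) % (2 * N) == 0

def H(x):                                # H_{B,N}(X) reduced mod ell
    return (x**4 - 2 * x**3 - x**2 + 2 * x - 1) % ell

roots = [x for x in range(ell) if H(x) == 0]
assert sorted(roots) == [166, 607, 2987, 3428]
print(roots)
\end{verbatim}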
{ "timestamp": "2007-12-23T13:15:22", "yymm": "0712", "arxiv_id": "0712.3911", "language": "en", "url": "https://arxiv.org/abs/0712.3911", "abstract": "Enge and Schertz gave the method of using the double eta-quotient for the construction of elliptic curves over finite fields. In their method, it is necessary to count the number of rational points of elliptic curves corresponding to solutions of the modular equation over a finite field, because in advance we can not know which solution of the modular equation is that corresponding to the modular invariant. We give a condition that the modular invariant is a multiple root of the modular polynomial. Consequently, we give a method to reduce the amount of computation in the process of counting the number of rational points.", "subjects": "Number Theory (math.NT)", "title": "N-systems, class polynomials for double eta-quotients and singular values of J-invariant function", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692325496974, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.7079584905267976 }
https://arxiv.org/abs/1406.4230
The Steinberg torus of a Weyl group as a module over the Coxeter complex
Associated to each irreducible crystallographic root system $\Phi$, there is a certain cell complex structure on the torus obtained as the quotient of the ambient space by the coroot lattice of $\Phi$. This is the Steinberg torus. A main goal of this paper is to exhibit a module structure on (the set of faces of) this complex over the (set of faces of the) Coxeter complex of $\Phi$. The latter is a monoid under the Tits product of faces. The module structure is obtained from geometric considerations involving affine hyperplane arrangements. As a consequence, a module structure is obtained on the space spanned by affine descent classes of a Weyl group, over the space spanned by ordinary descent classes. The latter constitute a subalgebra of the group algebra, the classical descent algebra of Solomon. We provide combinatorial models for the module of faces when $\Phi$ is of type $A$ or $C$.
\section*{Introduction} Let $\Phi$ be a root system and $W$ the associated Coxeter group. The \emph{descent set} of an element $w\in W$ keeps track of the simple roots whose image under $w$ is negative. When $\Phi$ is irreducible and crystallographic, a notion of \emph{affine descent} may be defined. This is due to Cellini \cite[Section 2]{Cel:1995}. The affine descent set enlarges the ordinary descent set by recording the behavior of $w$ on the (unique) highest root of $\Phi$. Lumping group elements according to ordinary descent sets leads to \emph{Solomon's descent algebra} (or ring)~\cite{Sol:1976}, denoted in this paper by $\Sol(\Phi)$. It is a subring of the group ring $\Z W$. One goal of this paper is to arrive at a related algebraic structure on the additive subgroup of $\Z W$ spanned by affine descent classes and denoted $\Stb{\Sol}(\Phi)$. We show in Theorem~\ref{thm:main} that the multiplication of $\Z W$ turns $\Stb{\Sol}(\Phi)$ into a left module over $\Sol(\Phi)$. This module structure is part of a family of such structures introduced by Moszkowski~\cite{Mos:1989}; see Remark~\ref{rmk:mosz}. Our main interest lies however in more general geometric considerations. We follow the approach of Tits (in his appendix~\cite{Tit:1976} to Solomon's paper~\cite{Sol:1976}), as developed by Bidigare~\cite{Bid:1997} and Brown~\cite[Section 4.8]{Bro:2000}. These works relate the algebraic structure of $\Sol(\Phi)$ to the geometric structure of the Coxeter complex $\Sigma(\Phi)$. This relation is based on the following key points: \begin{enumerate}[(i)] \item the set $\Sigma(\Phi)$ is a monoid under the Tits product; \item the group $W$ acts on $\Sigma(\Phi)$ and the $W$-orbits are in bijection with descent sets; \item $\Sol(\Phi)$ is (anti-isomorphic to) the subring of $W$-invariants in the monoid ring $\Z\Sigma(\Phi)$. \end{enumerate} Work of Dilks, Petersen, and Stembridge \cite{DPS:2009} uncovered a relation analogous to (ii) between affine descents and the structure of the \emph{Steinberg torus} $\Stb{\Sigma}(\Phi)$. This cell complex was introduced by Steinberg in~\cite{Ste:1968}. It is obtained as the quotient of the affine Coxeter complex $\aff{\Sigma}(\Phi)$ by the coroot lattice $\Z\Phi^\vee$. We provide analogs of the remaining points. We show that $\Sigma(\Phi)$ acts on $\aff{\Sigma}(\Phi)$, and that this action passes through the quotient by $\Z\Phi^\vee$ to an action on the Steinberg torus $\Stb{\Sigma}(\Phi)$. Finally, we show that the elements of the module $\Stb{\Sol}(\Phi)$ arise as the $W$-invariants in the module $\Z\Stb{\Sigma}(\Phi)$. The action on affine faces can be defined in the general context of real affine hyperplane arrangements. Such an arrangement splits the ambient space into a set of affine faces. The hyperplane \emph{at infinity} is similarly decomposed into a set of \emph{celestial} faces. The latter set is a semigroup under the Tits product and the former is a right module over it. In the case of the affine arrangement of $\Phi$, celestial faces constitute the Coxeter complex $\Sigma(\Phi)$ and affine faces constitute $\aff{\Sigma}(\Phi)$. The contents of the paper are as follows. Celestial faces and other geometric aspects are discussed in Section~\ref{sec:hyparr}. Section \ref{sec:products} provides background on both finite and affine Coxeter complexes and arrives at the module structure on the set of faces of the Steinberg torus. Section \ref{sec:modules} relates the geometric actions to the module structures over the descent algebra. 
Section~\ref{sec:models} reviews the known combinatorial models for the Coxeter complex when $\Phi$ is of type $A$ or $C$, and introduces analogous models for the Steinberg torus of such root systems. They involve certain cyclic partitions that we call \emph{spin necklaces} in type $A$, and similar structures in type $C$. The partial order among faces as well as the product of faces are described in these terms.
\section{Hyperplane arrangements. Faces: mundane and celestial}\label{sec:hyparr}
We give some general background on hyperplane arrangements, following \cite[Appendix A]{Bro:2000}. We center around the notion of face of an arrangement and the Tits product of faces. We discuss the notion of celestial face and focus on the construction of a right module structure on the set of mundane faces over the semigroup of celestial faces. The notion of celestial face may not have appeared explicitly in the literature before, but closely related ideas appear in~\cite[Section 11.8]{AbrBro:2008}, in the context of buildings.
\subsection{Hyperplane arrangements and the Tits product}\label{ss:hyparr}
Let $V$ be a finite-dimensional real vector space. A \emph{hyperplane} $H$ is an affine subspace of $V$ of codimension 1. A \emph{hyperplane arrangement} $\Hy$ in $V$ is a collection of hyperplanes in $V$ (not necessarily finite). For each $H\in \Hy$ we choose a nonconstant affine functional $f_H$ so that $H= \{ \lambda \in V \mid f_H(\lambda) = 0\}$. The positive and negative \emph{halfspaces} of $H$ are defined by
\[
H^+:= \{ \lambda \in V \mid f_H(\lambda) > 0\} \quad\text{and}\quad H^-:= \{ \lambda \in V \mid f_H(\lambda) < 0\}.
\]
Also, let $H^0 := H$. The hyperplane arrangement $\Hy$ partitions $V$ into a collection of nonempty disjoint convex polyhedral sets called \emph{faces}, given by intersections of hyperplanes and their halfspaces. A face $F$ is uniquely determined by its \emph{sign vector}:
\[
\sigma(F) = \bigl(\sigma_H(F)\bigr)_{H \in \Hy},
\]
where $\sigma_H(F)\in\{+, -,0\}$ is the sign of $f_H$ on $F$ (the sign of $f_H(\lambda)$ on any point $\lambda \in F$). Indeed,
\[
F = \bigcap_{H \in \Hy} H^{\sigma_H(F)}.
\]
A sign sequence $(\sigma_H)_{H \in \Hy}$ is the sign vector of a face if and only if $\bigcap_{H \in \Hy} H^{\sigma_H}\neq\emptyset$. We denote by $\Sigma(\Hy)$ the collection of faces of $\Hy$.
There is a partial order on faces, given by $F\leq G \Leftrightarrow \overline{F} \subseteq \overline{G}$; that is, if the closure of $F$ is contained in the closure of $G$. In terms of sign vectors, this can be stated as: $F \leq G$ if and only if for each $H \in \Hy$ either $\sigma_H(F) = 0$ or $\sigma_H(F) = \sigma_H(G)$. Maximal faces $C$ are called \emph{chambers}, and they are characterized by the fact that $\sigma_H(C)\neq 0$ for all $H \in \Hy$. Let $\mathcal{C}(\Hy)$ denote the set of chambers.
Let $F$ and $G$ be faces of a hyperplane arrangement $\Hy$, and choose any two points $\lambda \in F$ and $\lambda'\in G$. We say the ordered pair $(F,G)$ is \emph{multiplicable} if there is $\epsilon > 0$ such that the line segment $\{(1-t)\lambda+t\lambda' \mid 0<t<\epsilon\}$ crosses only finitely many hyperplanes in $\Hy$. If $(F,G)$ is a multiplicable pair, their \emph{Tits product} $FG$ is the first face of $\Hy$ entered upon traveling a small positive distance along the line $(1-t)\lambda + t\lambda'$. (That is, there is some $\epsilon>0$ such that for all $0<t < \epsilon$, we have $(1-t)\lambda+t \lambda' \in FG$.)
Both notions are independent of the choice of points $\lambda$ and $\lambda'$. An example is shown in Figure~\ref{fig:Titsprod}. \begin{figure}[!h] \[ \begin{tikzpicture}[scale=0.8,>=stealth] \draw[-, very thick] (-1.5,2.6) -- (1.875,-3.25); \draw[-, very thick] (-3,0) -- (0,0); \draw[-, very thick] (0,0)--(3,0) node[fill=white,midway] {$F$}; \draw[-, very thick] (3,0)--(6,0); \draw[-, very thick] (-1.5,-2.6) -- (0,0) node[fill=white,midway] {$G$}; \draw[-, very thick] (0,0) --(1.5,2.6); \draw[-, very thick] (1.125,-3.25) -- (4.5,2.6); \draw (1.5,-1.25) node {$FG$}; \draw (0,0) node {$\bullet$}; \draw (3,0) node {$\bullet$}; \draw (1.5,-2.6) node {$\bullet$}; \blue{ \draw (2.5,0) node (a) {$\bullet$}; \draw (-1.15,-2) node (b) {$\bullet$}; \draw[dashed] (b)--(a); \draw[->,line width=1] (2.5,0)--(1.77,-.4); } \end{tikzpicture} \] \caption{The product of faces in a line arrangement.} \label{fig:Titsprod} \end{figure} The product of two multiplicable faces admits a simple description in terms of sign vectors. Indeed, for small positive $\epsilon$, \[ f_H\bigl((1-\epsilon)\lambda + \epsilon\lambda'\bigr) = (1-\epsilon)f_H(\lambda) + \epsilon f_H(\lambda') \] has the same sign as $f_H(\lambda)$ unless $f_H(\lambda) = 0$, in which case it adopts the sign of $f_H(\lambda')$. In other words, we have \begin{equation}\label{eq:product} \sigma_H(FG) = \begin{cases} \sigma_H(F) & \text{if } \sigma_H(F) \neq 0,\\ \sigma_H(G) & \text{if } \sigma_H(F) = 0. \end{cases} \end{equation} It follows that when defined, the Tits product is associative. Moreover, if $C\in\mathcal{C}(\Hy)$ is a chamber and $F\in\Sigma(\Hy)$ is any face, then $FC$ is also a chamber, called the \emph{Tits projection} of $F$ onto $C$. Also, $CF=C$. We say that $\Hy$ is \emph{locally finite} if every $\lambda\in V$ has a neighborhood that intersects only finitely many hyperplanes in $\Hy$. In this case, all pairs of faces of $\Hy$ are multiplicable, and the Tits product gives $\Sigma(\Hy)$ the structure of a semigroup. Moreover, the set $\mathcal{C}(\Hy)$ is a two-sided ideal of $\Sigma(\Hy)$, with the right action being trivial. In particular, all this holds if $\Hy$ is finite. If the intersection of all hyperplanes in $\Hy$ is nonempty, this intersection is a face which is the unit for the Tits product, and $\Sigma(\Hy)$ is in this case a monoid. This holds in particular if all hyperplanes in $\Hy$ are linear. For more details, see \cite[Sections~1.4 and 10.1]{AbrBro:2008}, \cite[Appendix A]{Bro:2000}, or \cite[Section 2]{BroDia:1998}. \subsection{The celestial sphere}\label{ss:celestial} A \emph{ray} $r$ in $V$ is a subset of the form $r=f(\R_+)$, where $f:\R\to V$ is an affine transformation. The \emph{base} of the ray is $r_0:=f(0)$. Two rays $r$ and $s$ are \emph{parallel} if there exists $\lambda\in V$ such that $r=\lambda+s$ (equality of subsets). A \emph{celestial point} is a parallelism class of rays. The \emph{celestial sphere} in $V$ is the set $S_\infty(V)$ of celestial points in $V$. Let $[r]\in S_\infty(V)$ denote the parallelism class of a ray $r$ in $V$. See Figure~\ref{fig:cpoint}. \begin{figure}[!h] \begin{tikzpicture}[>=stealth] \draw[very thick,->] (0,0) node {$\bullet$} -- (1,1); \draw[very thick,->] (0,2) node {$\bullet$} -- (1,3); \draw[dashed] (1,1) node[above] {$s$} .. controls (3,3) .. (3.5,4.5); \draw[dashed] (1,3) node[above] {$r$} .. controls (2,4) .. 
(3.5,4.5);
\draw[dashed] (3.5,4.5) arc (45:75:6.364);
\draw[dashed] (3.5,4.5) arc (45:15:6.364);
\draw[fill=white] (3.5,4.5) circle (3pt) node[above right] {$[r]=[s]$};
\end{tikzpicture}
\caption{A celestial point.}
\label{fig:cpoint}
\end{figure}
Let $H$ be a hyperplane in $V$. For each $\sigma\in\{+,-,0\}$, let
\[
H_\infty^\sigma :=\{[r]\in S_\infty(V) \mid \text{ $r$ is a ray with $r_0\in H$ and $r\subseteq H^\sigma$}\}.
\]
In other words, $H_\infty^\sigma$ consists of those celestial points that can be represented by a ray based on $H$ and contained in the halfspace $H^\sigma$ (hyperplane $H$, if $\sigma=0$). We may refer to $H_\infty^+$ and $H_\infty^-$ as the positive and negative \emph{halfspheres} determined by $H$.
Let $\Hy$ be a hyperplane arrangement in $V$. A \emph{celestial face} is a nonempty subset of $S_\infty(V)$ of the form
\[
F = \bigcap_{H\in \Hy} H_\infty^{\sigma_H},
\]
where $(\sigma_H)_{H\in\Hy}$ is a sign sequence. Let $\Sigma_\infty(\Hy)$ denote the collection of celestial faces. It partitions the celestial sphere. When warranted, we refer to the faces in $\Sigma(\Hy)$ as \emph{mundane}.
The product of two celestial faces $F$ and $G$ is defined as follows. Choose rays $r$ and $s$ with a common base such that $[r]\in F$ and $[s]\in G$. Assume first that $r$ and $s$ are neither equal nor opposite. Let $f:\C\to V$ be an affine map that sends the positive real axis to $r$ and the positive imaginary axis to $s$. The product $FG$ is the celestial face that contains the celestial points $[f(e^{it})]$ for all $0<t<\epsilon$, for some $\epsilon>0$. (More plainly, we rotate $r$ towards $s$ a small positive angle. Of the two directions of rotation, we choose the one that stays in the convex sector determined by $r$ and $s$.) If $r$ and $s$ are equal or opposite, we stay put and $FG=F$. For this to be well-defined, we require that once all hyperplanes in $\Hy$ are translated to the common base of $r$ and $s$, only finitely many of them intersect the sector $\{f(\rho e^{it}) \mid \rho>0,\ 0<t<\epsilon\}$. Equivalently, $FG$ is the first celestial face entered by walking a small positive distance from a celestial point in $F$ to one in $G$, along a maximal celestial circle. See Figure~\ref{fig:cprod}.
\begin{figure}[!h]
\begin{tikzpicture}[>=stealth]
\draw[dashed,draw=blue, fill=blue!20!white] (1.5,0) -- (4,0) arc (0:-15:4) -- (0,0);
\draw[very thick, blue, ->] (0,0) node {$\bullet$} -- (1.5,0) node[above right] {$r$};
\draw[very thick, blue, ->] (0,0) -- (-1,-1) node[above left] {$s$};
\draw[dashed,blue] (-1,-1) -- (-2.828,-2.828);
\draw[dashed] (4,0) arc (0:-145:4);
\draw[dashed] (4,0) arc (0:10:4);
\draw[very thick] (4,0) node {$\bullet$} arc (0:-25:4);
\draw (4,0) node[right] {$F$};
\draw (4.35,-1) node {$FG$};
\draw (-2.828,-2.828) node {$\bullet$};
\draw (-2.828,-2.828) node[below left] {$G$};
\end{tikzpicture}
\caption{The product of two celestial faces.}
\label{fig:cprod}
\end{figure}
It is also possible to multiply a mundane face with a celestial face, in that order. Let $F\in \Sigma(\Hy)$ and $G\in \Sigma_\infty(\Hy)$. The product $FG\in \Sigma(\Hy)$ is the mundane face entered upon traveling a small positive distance starting from a point in $F$ and heading in the direction of a celestial point contained in $G$. A local condition similar to the preceding ones guarantees the existence of this product. See Figure~\ref{fig:fcprod}.
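The componentwise rule~\eqref{eq:product} makes these products easy to experiment with on a computer. The following Python sketch is an illustration only (the central arrangement of the three lines $x=0$, $y=0$, $x+y=0$ in $\R^2$ is chosen for concreteness and is not one of the arrangements studied later): it encodes faces by sign vectors, implements the Tits product, and checks it against the geometric definition and the absorption property $CF=C$ of chambers.
\begin{verbatim}
# Sketch: the Tits product via sign vectors, for the central line
# arrangement {x = 0, y = 0, x + y = 0} in R^2.
from fractions import Fraction

FUNCTIONALS = [lambda p: p[0],          # f(x,y) = x
               lambda p: p[1],          # f(x,y) = y
               lambda p: p[0] + p[1]]   # f(x,y) = x + y

def sign(q):
    return 0 if q == 0 else (1 if q > 0 else -1)

def face_of(point):
    """Sign vector of the face containing `point`."""
    return tuple(sign(f(point)) for f in FUNCTIONALS)

def tits(F, G):
    """Componentwise rule: F wins unless its sign is 0."""
    return tuple(g if f == 0 else f for f, g in zip(F, G))

# F = the ray {x > 0, y = 0}; G = the chamber containing (-1, -2).
F = face_of((Fraction(1), Fraction(0)))
G = face_of((Fraction(-1), Fraction(-2)))

# FG agrees with the geometric definition: the face of lam + eps*(mu - lam).
lam, mu = (Fraction(1), Fraction(0)), (Fraction(-1), Fraction(-2))
eps = Fraction(1, 1000)
step = tuple(l + eps * (m - l) for l, m in zip(lam, mu))
assert tits(F, G) == face_of(step)

# A chamber absorbs on the right: CF = C for every face F.
C = face_of((Fraction(2), Fraction(1)))
assert tits(C, F) == C
print(F, G, tits(F, G))
\end{verbatim}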
\begin{figure}[!h] \begin{tikzpicture}[>=stealth] \draw[very thick] (0,0) -- (3,3); \draw[very thick] (-1.5,1) -- (1.5,4); \draw[very thick] (-1,2) -- node[pos=.4,fill=white] {$F$} (3,2); \draw[dashed] (3,3) .. controls (4,4) .. (4.5,5.5); \draw[dashed] (1.5,4) .. controls (2.5,5) .. (4.5,5.5); \draw[dashed] (4.5,5.5) arc (45:75:6.364); \draw[dashed] (4.5,5.5) arc (45:15:6.364); \draw[fill=black] (4.5,5.5) circle (3pt) node[above right] {$G$}; \draw (1.25,2.75) node {$FG$}; \draw[very thick, blue,->] (1.5,2) node {$\bullet$} -- (2.25,2.75); \draw[blue,dashed] (2.25,2.75) .. controls (3.75,4.25) .. (4.5,5.5); \end{tikzpicture} \caption{The product of a mundane face and a celestial face.} \label{fig:fcprod} \end{figure} \begin{proposition}\label{p:celestial} Assume that $\Hy$ is locally finite and has finitely many parallelism classes. Then the preceding products are globally defined, the set $\Sigma_\infty(\Hy)$ is a semigroup, and the set $\Sigma(\Hy)$ is a right module over it. \end{proposition} \begin{proof} We resort to the \emph{cone} on $\Hy$. This is the linear arrangement $\cHy$ in $\widehat{V}:=\R\oplus V$ consisting of the linear hyperplane $H_0:= \{(0,v) \mid v\in V\}$ together with the linear hyperplanes $\widehat{H}:=\ker(\widehat{f}_H)$, for $H\in\Hy$, where $\widehat{f}_H$ is the unique linear functional on $\widehat{V}$ such that $\widehat{f}_H(1,v)=f_H(v)$. The \emph{induced} linear arrangement $\cHy_0:=\{H_0\cap\widehat{H} \mid H\in\Hy\}$ (in the space $V$) is finite, since two parallel hyperplanes in $\Hy$ have the same intersection with $H_0$. Hence $\Sigma(\cHy_0)$ is a monoid. Now, save for its central face, the set $\Sigma(\cHy_0)$ is in bijection with $\Sigma_\infty(\Hy)$, in such a way that the Tits product corresponds to the product of celestial faces. Thus, $\Sigma_\infty(\Hy)$ is a semigroup. Note that $\Sigma(\cHy_0)$ consists of the faces of $\cHy$ contained in $H_0$. On the other hand, let $\Sigma_+(\cHy)$ denote the subset of $\Sigma(\cHy)$ consisting of those faces contained in the positive halfspace of $H_0$. It is in bijection with $\Sigma(\Hy)$. The arrangement $\cHy$ need not be locally finite, but the product of a face in $\Sigma_+(\cHy)$ with a face in $\Sigma(\cHy_0)$ (in that order) is well-defined, since $\Hy$ is locally finite. Moreover, the two bijections in question intertwine the right action of $\Sigma(\cHy_0)$ on $\Sigma_+(\cHy)$ with the right action of $\Sigma_\infty(\Hy)$ on $\Sigma(\Hy)$. Since the former is associative (as an instance of the Tits product in $\Sigma(\cHy)$), $\Sigma(\Hy)$ is a right module over $\Sigma_\infty(\Hy)$. \end{proof} Figure~\ref{fig:cone} illustrates the discussion in Proposition~\ref{p:celestial} in the case $\Hy$ is an arrangement of points in a one-dimensional space. 
\begin{figure}[!h]
\begin{tikzpicture}[scale=2.2]
\draw[dashed] (-2.5,1)--(2.5,1);
\draw (-2,1) node {$\bullet$};
\draw (-1,1) node {$\bullet$};
\draw (0,1) node {$\bullet$};
\draw (1,1) node {$\bullet$};
\draw (2,1) node {$\bullet$};
\draw (0,0) node {$\bullet$};
\draw (0,-.2)--(0,1.5);
\draw (-2.5,0)--(2.5,0) node[right] {$H_0$};
\draw (-2.5,1.25)--(.4,-.2);
\draw (-1.5,1.5)--(.2,-.2);
\draw (2.5,1.25)--(-.4,-.2);
\draw (1.5, 1.5)--(-.2,-.2);
\draw (-2.75,1) node[left] {$\Sigma_+(\cHy) \cong \Sigma(\Hy)$:};
\draw (-2.75,0) node[left] {$\Sigma(\cHy_0)\setminus\{0\} \cong \Sigma_{\infty}(\Hy)$:};
\end{tikzpicture}
\caption{The cone of a rank $1$ arrangement.}
\label{fig:cone}
\end{figure}
The product $GF$ of a celestial face $G\in \Sigma_\infty(\Hy)$ and a mundane face $F\in \Sigma(\Hy)$, in that order, is in general not defined. Note that the corresponding pair of faces in $\Sigma(\cHy)$ is not multiplicable, since any neighborhood of $G$ intersects infinitely many hyperplanes in $\cHy$. See Figure~\ref{fig:cone}.
\section{Root systems, Coxeter complexes and Steinberg tori}\label{sec:products}
We turn to hyperplane arrangements associated to root systems. Our ultimate goal in this section is to arrive at a right module structure on the set of faces of the Steinberg torus (of an irreducible crystallographic root system) over the monoid of faces of the finite Coxeter arrangement. These notions are reviewed along the way; additional background may be found in \cite{AbrBro:2008,DPS:2009, Hum:1990}.
Throughout, $V$ denotes a Euclidean vector space, with inner product $\br{ \cdot\,{,}\,\cdot }$.
\subsection{The finite Coxeter arrangement}\label{ss:finarr}
Let $\Phi$ be a root system in $V$, in the sense of~\cite[Section 1.2]{Hum:1990} (a generalized root system in the sense of~\cite[Definition 1.5]{AbrBro:2008}). Let $\Delta$ be a set of simple roots. Every root $\beta\in\Phi$ belongs either to the nonnegative span of $\Delta$ and is designated \emph{positive}, or to the nonpositive span of $\Delta$ and is designated \emph{negative}. We write $\beta>0$ or $\beta<0$ accordingly. Let $\Pi = \{ \beta \in \Phi \mid \beta > 0\}$ denote the set of positive roots.
For any root $\beta \in \Phi$, let
\[
H_\beta := \{\lambda \in V \mid \br{\lambda, \beta} = 0\}
\]
be the hyperplane orthogonal to $\beta$. The set of hyperplanes
\[
\Hy(\Phi) :=\{ H_\beta \mid \beta\in\Phi\}
\]
is the \emph{Coxeter arrangement} associated to $\Phi$. Since $H_{\beta} = H_{-\beta}$, we have $\Hy(\Phi) =\{ H_\beta \mid \beta\in\Pi\}$.
Let $\Sigma:=\Sigma\bigl(\Hy(\Phi)\bigr)$ be the set of faces of the arrangement $\Hy(\Phi)$, and $\mathcal{C}:=\mathcal{C}\bigl(\Hy(\Phi)\bigr)$ the subset of chambers. Since $\Hy(\Phi)$ is finite and linear, $\Sigma$ is a monoid under the Tits product. The set $\mathcal{C}$ is a two-sided ideal, with trivial right action.
The \emph{dominant} or \emph{fundamental} chamber is
\[
C_{\emptyset}:=\{\lambda\in V \mid \br{ \lambda,\alpha } >0 \text{ for all }\alpha\in\Delta\}.
\]
The faces of $C_{\emptyset}$ are the sets of the form
\begin{equation}\label{e:CJ}
C_J:=\{\lambda\in V \mid \br{ \lambda, \alpha } = 0 \mbox{ for } \alpha\in J,\ \br{ \lambda, \alpha} > 0 \mbox{ for } \alpha\in \Delta\setminus J\},
\end{equation}
where $J$ is a subset of $\Delta$. Figure~\ref{fig:A2-finite} illustrates the situation for the root system $\Phi=A_2$ and Figure~\ref{fig:C2-finite} for $\Phi=C_2$.
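Concretely, the face decomposition of $\Hy(A_2)$ can be enumerated by machine from sign vectors alone. The following Python sketch is an illustration (the coordinates chosen for the simple roots are an assumption of the sketch, not notation from the paper): it samples one point in each face and recovers the count of $13$ faces, namely the central face, six rays, and six chambers.
\begin{verbatim}
# Sketch: enumerating the faces of Hy(A2) by sign vectors.
# We realize alpha_1, alpha_2 as unit vectors at 120 degrees.
import math

alpha1 = (1.0, 0.0)
alpha2 = (-0.5, math.sqrt(3) / 2)
highest = (alpha1[0] + alpha2[0], alpha1[1] + alpha2[1])
POS_ROOTS = [alpha1, alpha2, highest]

def sign(q, tol=1e-9):
    return 0 if abs(q) < tol else (1 if q > 0 else -1)

def face_of(point):
    return tuple(sign(point[0] * b[0] + point[1] * b[1]) for b in POS_ROOTS)

# Unit vectors at multiples of 30 degrees hit every ray and every chamber;
# the origin gives the central face C_Delta.
faces = {face_of((0.0, 0.0))}
for k in range(12):
    t = math.pi * k / 6
    faces.add(face_of((math.cos(t), math.sin(t))))

chambers = [F for F in faces if 0 not in F]
print(len(faces), len(chambers))   # 13 faces: 1 center + 6 rays + 6 chambers
assert len(faces) == 13 and len(chambers) == 6
\end{verbatim}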
\begin{figure}[!h] \begin{tabular}{c c c} \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)}, >=stealth,baseline=0] \draw (2,3) node {$\bullet$}; \draw[very thick,->] (2,3)--(1,5) node[above,yshift=.2cm] {$\alpha_2$}; \draw[very thick,->] (2,3)--(3,4) node[above right] {$\aff{\alpha}$}; \draw[very thick,->] (2,3)--(4,2) node[below right] {$\alpha_1$}; \end{tikzpicture} & \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)}, >=stealth,baseline=0] \draw (2,3) node {$\bullet$}; \draw[dashed,thin,->] (2,3)--(1,5); \draw[dashed,thin,->] (2,3)--(3,4); \draw[dashed,thin,->] (2,3)--(4,2); \draw[very thick] (0,3)--(4,3) node[right] {$\scriptstyle H_{\alpha_2}$}; \draw[very thick] (2,1)--(2,5) node[above right] {$\scriptstyle H_{\alpha_1}$}; \draw[very thick] (0,5)--(4,1) node[below right] {$\scriptstyle H_{\aff{\alpha}}$}; \draw (3.5,4.5) node {$\scriptstyle C_{\emptyset}$}; \end{tikzpicture} & \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},scale=3,baseline=-1cm] \draw[fill=blue!20!white,draw=none] (0,0) node {$\bullet$} node[below] {$C_{\{\alpha_1,\alpha_2\}}$} -- node[midway,below] {$C_{\{\alpha_2\}}$} (1,0) ..controls (.8,.6) and (.3,.5).. (0,1) -- node[midway,left] {$C_{\{\alpha_1\}}$} (0,0); \draw[very thick] (0,0)--(0,1.1); \draw[very thick] (0,0)--(1.1,0); \draw (.33,.33) node {$C_{\emptyset}$}; \end{tikzpicture} \\ (a) & (b) & (c) \end{tabular} \caption{ The root system $A_2$. (a) $\Delta=\{\alpha_1,\alpha_2\}$ and $\Pi=\Delta\cup\{\aff{\alpha}\}$.\\ (b) The arrangement $\Hy(A_2)$. (c) The faces of the fundamental chamber.} \label{fig:A2-finite} \end{figure} \begin{figure}[!h] \begin{tabular}{c c c} \begin{tikzpicture}[>=stealth,baseline=0] \draw (0,0) node {$\bullet$}; \draw[very thick,->] (0,0)--(2,0) node[right] {$\alpha_1$}; \draw[very thick,->] (0,0)--(0,2) node[above] {$\aff{\alpha}$}; \draw[very thick,->] (0,0)--(-1,1) node[above left] {$\alpha_2$}; \draw[very thick,->] (0,0)--(1,1) node[above right] {$\beta$}; \end{tikzpicture} & \begin{tikzpicture}[>=stealth,baseline=0] \draw[very thick] (-2,2)--(2,-2) node[below right] {\small $H_{\beta}$}; \draw[very thick] (-2,-2)--(2,2) node[above right] {\small $H_{\alpha_2}$}; \draw[very thick] (0,-2)--(0,2) node[above] {\small $H_{\alpha_1}$}; \draw[very thick] (-2,0)--(2,0) node[right] {\small $H_{\aff{\alpha}}$}; \draw (0,0) node {$\bullet$}; \draw (1,2) node {$C_{\emptyset}$}; \end{tikzpicture} & \begin{tikzpicture}[scale=3,baseline=1cm] \draw[fill=blue!20!white,draw=none] (0,0) node {$\bullet$} node[below] {$C_{\{\alpha_1,\alpha_2\}}$} -- node[midway,right, xshift=5pt] {$C_{\{\alpha_2\}}$} (1,1) ..controls (.8,1) and (.3,.7).. (0,1) -- node[midway,left] {$C_{\{\alpha_1\}}$} (0,0); \draw[very thick] (0,0)--(0,1.1); \draw[very thick] (0,0)--(1.1,1.1); \draw (.3,.7) node {$C_{\emptyset}$}; \end{tikzpicture} \\ (a) & (b) & (c) \end{tabular} \caption{ The root system $C_2$. (a) $\Delta=\{\alpha_1,\alpha_2\}$ and $\Pi=\Delta\cup\{\beta,\aff{\alpha}\}$.\\ (b) The arrangement $\Hy(C_2)$. (c) The faces of the fundamental chamber.} \label{fig:C2-finite} \end{figure} For each $\beta \in \Phi$, let $s_\beta$ denote the orthogonal reflection through $H_\beta$. Let $W$ be the subgroup of $\GL(V)$ generated by these reflections. This is the \emph{Coxeter group} of $\Phi$. It permutes the hyperplanes in $\Hy(\Phi)$ and therefore acts on the set $\Sigma$. We let $w\cdot F$ denote the action of $w\in W$ on $F\in\Sigma$. The closure of the fundamental chamber is a strict fundamental domain for the action of $W$ on $V$~\cite[Theorem 1.104]{AbrBro:2008}. 
Therefore, every face $F$ in $\Sigma$ is in the orbit of a unique face $C_J$ of $C_\emptyset$. We let
\[
\col(F):= \Delta\setminus J
\]
denote the complement of the unique subset of $\Delta$ such that
\begin{equation}\label{e:col}
F = w\cdot C_J
\end{equation}
for some $w\in W$. We call $\col(F)$ the \emph{color set} of $F$. Proposition~\ref{prp:Dcol} below provides information on the uniqueness of $w$.
The action of $W$ on $\mathcal{C}$ is simply-transitive~\cite[Theorem 1.69]{AbrBro:2008}. Therefore, given a face $F$ there is a unique $w\in W$ such that
\begin{equation}\label{e:wF}
w\cdot C_{\emptyset}=FC_{\emptyset}.
\end{equation}
We let
\[
w_F:=w
\]
denote this element of $W$. In other words, acting with the group element $w_F$ on the fundamental chamber has the same effect as projecting the face $F$ onto that chamber.
A face $F\in\Sigma$ has sign vector
$
\sigma(F) = \bigl(\sigma_{\beta}(F)\bigr)_{\beta \in \Pi},
$
where $\sigma_{\beta}(F)\in\{+, -, 0\}$ is the sign of $\br{ \lambda, \beta }$, $\lambda$ being any point in $F$. The face $C_\Delta$ has sign vector $(0,0,\ldots,0)$. It is the intersection of all the hyperplanes, and hence the unit for the Tits product. The fundamental chamber $C_{\emptyset}$ has sign vector $(+,+,\ldots,+)$. More generally, for a positive root $\beta$ and $\lambda \in C_J$, we have $\br{ w(\lambda), \beta} = \br{ \lambda, w^{-1}(\beta)}$, and hence
\begin{equation}\label{e:signJ}
\sigma_{\beta}(w\cdot C_J) =
\begin{cases}
0 & \mbox{if } w^{-1}\beta \in \spn\{ \alpha \mid \alpha \in J\}, \\
+ & \mbox{if } w^{-1}\beta \in \Pi \setminus \spn\{\alpha \mid \alpha \in J\}, \\
- & \mbox{if } -w^{-1}\beta \in \Pi \setminus \spn\{\alpha \mid \alpha \in J\}.
\end{cases}
\end{equation}
Let
\[ D(w) = \{ \alpha \in \Delta \mid w(\alpha) < 0 \}.\]
This is the \emph{descent set} of $w$, which will be discussed at length in Section \ref{sec:modules}.
\begin{prp} \label{prp:Dcol}
Let $F\in\Sigma$ be a face. The element $w_F$ is the unique $w\in W$ such that
\begin{equation}\label{e:Dcol}
F = w\cdot C_J \quad\text{and}\quad D(w) \subseteq \col(F),
\end{equation}
where $J=\Delta\setminus \col(F)$.
\end{prp}
\begin{proof}
We first show that $w_F$ fulfills~\eqref{e:Dcol}. Since $F$ is a face of $F C_{\emptyset}=w_F\cdot C_{\emptyset}$, there exists $I\subseteq\Delta$ such that $F=w_F\cdot C_I$. By the uniqueness in~\eqref{e:col}, we must have $I=J$. Now suppose there exists $\alpha\in D(w_F)\cap J$. Since $\alpha\in D(w_F)$, we have $w_F\alpha=-\beta$ for some positive root $\beta$. Since $\alpha\in J$, we have from~\eqref{e:signJ} that
\[
\sigma_{\beta}(F)= \sigma_{\beta}(w_F\cdot C_J) = 0.
\]
Hence, from~\eqref{eq:product}, we have
\[
\sigma_{\beta}(FC_{\emptyset}) = \sigma_{\beta}(C_{\emptyset}) = +.
\]
On the other hand, again from~\eqref{e:signJ},
\[
\sigma_{\beta}(w_F\cdot C_{\emptyset})= -.
\]
This contradicts $FC_{\emptyset}=w_F\cdot C_{\emptyset}$. Thus, $D(w_F) \subseteq \Delta\setminus J$.
It remains to establish uniqueness. Suppose $w\in W$ satisfies~\eqref{e:Dcol}. Since $D(w)\subseteq\col(F)$, we may apply~\cite[Proposition 4]{Bro:2000} to the chambers $C_\emptyset$ and $w\cdot C_\emptyset$ to conclude that $FC_\emptyset = w\cdot C_\emptyset$. Hence $w=w_F$ by~\eqref{e:wF}.
\end{proof}
The rank $1$ faces of $\Sigma$ are of the form $w\cdot C_{\Delta\setminus\{\alpha\}}$ where $w\in W$ and $\alpha\in\Delta$.
If we assign color $\alpha$ to all such faces, we obtain a \emph{balanced coloring} of $\Sigma$; i.e., every chamber of $\Sigma$ has exactly one rank $1$ face of each color~\cite[Proposition 1.128]{AbrBro:2008}. The set $\col(F)$ defined by~\eqref{e:col} is the set of colors of the rank $1$ faces of a face $F$. Thus, the face $w\cdot C_J$ has color set $\Delta\setminus J$.
\subsection{Coxeter complexes}\label{ss:complex}
Let $S=\{s_\alpha \mid \alpha\in\Delta\}$ be the set of simple reflections. The group $W$ is generated by $S$ and in fact $(W,S)$ is a \emph{Coxeter system}. Let $W_J :=\br{ s \mid s \in J }$ denote the \emph{parabolic} subgroup of $W$ generated by the subset $J \subseteq S$. The Coxeter complex of $(W,S)$ is the abstract simplicial complex whose faces are cosets of parabolic subgroups,
\[
\Sigma(W,S) := \{ wW_J \mid w \in W, J \subseteq S\},
\]
with inclusion of faces given by containment of subsets of $W$, i.e.,
\[
wW_J \leq vW_K \Leftrightarrow wW_J \supseteq vW_K.
\]
In particular, $W_S = W$ corresponds to the empty face, as it contains all cosets. The facets (maximal faces) of $\Sigma(W,S)$ correspond to the singletons $wW_{\emptyset} = \{ w\}$, and are thus indexed by elements of $W$. We have that
\[
w\cdot C_J \leq v\cdot C_K \iff wW_J \supseteq vW_K.
\]
This defines an order-preserving bijection $\Sigma\leftrightarrow \Sigma(W,S)$ between $\Sigma$ and the Coxeter complex $\Sigma(W,S)$~\cite[Theorem 1.111]{AbrBro:2008}. Chambers in $\Sigma$ correspond to facets in $\Sigma(W,S)$, and rank $1$ faces correspond to vertices. From now on, we refer to $\Sigma$ as the Coxeter complex.
For more details on finite Coxeter arrangements and complexes, see \cite[Sections~1.5 and 1.6]{AbrBro:2008}.
\subsection{The affine Coxeter arrangement}\label{ss:affine}
We assume from now on that the root system $\Phi$ is crystallographic (so $W$ is a Weyl group) and irreducible. For definitions, see~\cite[Appendix B]{AbrBro:2008} or~\cite[Section 2.9]{Hum:1990}. Such a system has an associated \emph{affine Weyl group} $\aff{W}$. This is the group generated by the reflections $s_{\beta,k}$ through the affine hyperplanes
\begin{equation}\label{e:affarr}
H_{\beta,k} := \{\lambda \in V:\br{\lambda, \beta} = k\} \qquad(\beta\in\Phi,\ k\in\Z).
\end{equation}
Let $\Phi^\vee$ be the set of \emph{coroots}
\[
\beta^\vee:=2\beta/\br{\beta,\beta}
\]
($\beta\in\Phi$). The additive subgroup of $V$ it generates is the \emph{coroot lattice} $\Z\Phi^\vee$. Composing two reflections $s_{\beta,k}$ corresponding to the same $\beta$ results in a translation by a vector in $\Z\Phi^\vee$; see Figure~\ref{fig:root}. Thus, $\aff{W}$ contains $\Z\Phi^\vee$. It also contains the finite Weyl group $W$, as this consists of the reflections across the hyperplanes $H_\beta$. The crystallographic condition guarantees that the action of $W$ on $V$ stabilizes $\Z\Phi^\vee$, and $\aff{W}$ identifies with the semidirect product $\Z\Phi^\vee\rtimes W$. The product in the latter group is
\[
(\mu,w)\cdot(\mu',w') = (\mu+w(\mu'), ww').
\]
The action of $\aff{W}$ on $V$ extends the action of $W$ by linear reflections and the action of $\Z\Phi^\vee$ by translations:
\[
(\mu,w)\cdot \lambda = \mu+w(\lambda),
\]
for $\mu\in \Z\Phi^\vee$, $w\in W$, and $\lambda\in V$.
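These formulas are straightforward to make concrete. In the Python sketch below (the matrices for the simple reflections of $A_2$ in the basis of simple coroots are the standard ones, but this realization is an assumption of the illustration, not notation from the paper), elements of $\Z\Phi^\vee\rtimes W$ are pairs $(\mu,w)$, and we verify that the product rule above is compatible with the action on points.
\begin{verbatim}
# Sketch: the semidirect product (mu, w)(mu', w') = (mu + w(mu'), w w')
# for Phi = A2, with W acting by integer matrices in the coroot basis.
S1 = ((-1, 1), (0, 1))   # s_1: a1^v -> -a1^v,  a2^v -> a1^v + a2^v
S2 = ((1, 0), (1, -1))   # s_2: a1^v -> a1^v + a2^v,  a2^v -> -a2^v

def apply(w, x):
    return tuple(sum(w[i][j] * x[j] for j in range(2)) for i in range(2))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def compose(g, h):            # (mu, w) * (nu, v) = (mu + w(nu), w v)
    (mu, w), (nu, v) = g, h
    wnu = apply(w, nu)
    return (tuple(m + x for m, x in zip(mu, wnu)), matmul(w, v))

def act(g, lam):              # (mu, w) . lam = mu + w(lam)
    mu, w = g
    return tuple(m + x for m, x in zip(mu, apply(w, lam)))

g = ((1, 0), S1)              # s_1 followed by translation by a1^v
h = ((0, 2), S2)
lam = (5, -3)
# The action is compatible with the product:
assert act(g, act(h, lam)) == act(compose(g, h), lam)
print(compose(g, h))
\end{verbatim}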
\begin{figure}[!h] \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)}, >=stealth,scale=1.5] \draw (1,3) -- (2,2) node[below] {\small $H_{\beta}$}; \draw (1,4) -- (3,2) node[below] {\small $H_{\beta,1}$}; \draw (1,5) -- (4,2) node[below] {\small $H_{\beta,2}$}; \draw[very thick,->] (1.5,2.5) node {$\bullet$} -- (2.5,3.5) node[below right, fill=white] {$\small \beta^{\vee}$}; \draw[very thick,->] (1.5,2.5) -- (2.2,3.2) node[below right, fill=white] {$\small \beta$}; \end{tikzpicture} \caption{A root and its coroot.} \label{fig:root} \end{figure} Moreover, $\Phi$ has a unique \emph{highest} root $\aff{\alpha}$, the group $\aff{W}$ is generated by $\aff{S}:=S\cup\{s_{\aff{\alpha},1}\}$, and $(\aff{W},\aff{S})$ is an irreducible Coxeter system~\cite[Sections 4.3 and 4.6]{Hum:1990}. For the root systems $A_2$ and $C_2$, the coroots corresponding to the positive roots are shown in Figures~\ref{fig:A2}(a) and~\ref{fig:C2}(a). For the root system $A_n$, we have $\alpha^\vee=\alpha$ for all $\alpha\in A_n$. For the root system $C_n$, some coroots are half the size of the corresponding root, others are equal. \begin{figure}[!h] \begin{tabular}{c c c} \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)}, >=stealth,baseline=0] \foreach \x in {3,4,5}{ \draw (-.2,\x)--(7.2-\x,\x); \draw (-.2,\x+.2)--(\x-.8,.8); \draw (\x-1,.8)--(\x-1,8.2-\x); } \foreach \x in {1,2}{ \draw (2.8-\x,\x)--(4.2,\x); \draw (\x-.2,5.2)--(4.2,\x+.8); \draw (\x-1,3.8-\x)--(\x-1,5.2); } \draw (2,3) node {$\bullet$}; \draw[very thick,->] (2,3)--(1,5) node[above,yshift=.2cm] {$\alpha_2^\vee$}; \draw[very thick,->] (2,3)--(3,4) node[above right] {$\aff{\alpha}^\vee$}; \draw[very thick,->] (2,3)--(4,2) node[below right] {$\alpha_1^\vee$}; \end{tikzpicture} & \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0] \foreach \x in {3,4,5}{ \draw (-.2,\x)--(7.2-\x,\x); \draw (-.2,\x+.2)--(\x-.8,.8); \draw (\x-1,.8)--(\x-1,8.2-\x); } \foreach \x in {1,2}{ \draw (2.8-\x,\x)--(4.2,\x); \draw (\x-.2,5.2)--(4.2,\x+.8); \draw (\x-1,3.8-\x)--(\x-1,5.2); } \draw (2,3) node {$\bullet$}; \draw[very thick] (-.7,3)--(4.7,3) node[right] {$\scriptstyle H_{\alpha_2}$}; \draw[very thick] (2,.3)--(2,5.7) node[above right] {$\scriptstyle H_{\alpha_1}$}; \draw[very thick] (.3,5.7)--(4.7,1.3) node[below right] {$\scriptstyle H_{\aff{\alpha},1}$}; \draw (2.33,3.33) node {$\scriptstyle A_{\emptyset}$}; \end{tikzpicture} & \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},scale=3,baseline=-1cm] \draw[fill=blue!20!white,very thick] (0,0) node {$\bullet$} node[below] {$A_{\{\alpha_1,\alpha_2\}}$} -- node[midway,below] {$A_{\{\alpha_2\}}$} (1,0) node {$\bullet$} node[below] {$A_{\{\alpha_2,\alpha_0\}}$} -- node[midway,right] {$A_{\{\alpha_0\}}$} (0,1) node {$\bullet$} node[above] {$A_{\{\alpha_1,\alpha_0\}}$} -- node[midway,left] {$A_{\{\alpha_1\}}$} (0,0); \draw (.33,.33) node {$A_{\emptyset}$}; \end{tikzpicture} \\ (a) & (b) & (c) \end{tabular} \caption{The affine arrangement $\aff{\Hy}(A_2)$. (a) Positive (co)roots. (b) Affine hyperplanes and the fundamental alcove. 
(c) The faces of the fundamental alcove.} \label{fig:A2} \end{figure} \begin{figure}[!h] \begin{tabular}{c c c} \begin{tikzpicture}[scale=.85,>=stealth,baseline=0] \draw (-2.2,2.2)--(2.2,-2.2); \draw (-2.2,-2.2)--(2.2,2.2); \foreach \x in {-2,...,2}{ \draw (-2.2,\x)--(2.2,\x); \draw (\x,-2.2)--(\x,2.2); } \foreach \x in {-1,1}{ \draw (\x*2.2,\x*1.8) -- (\x*1.8,\x*2.2); \draw (\x*2.2,-\x*1.8) -- (\x*1.8,-\x*2.2); \draw (\x*2.2,-\x*.2) -- (-\x*.2,\x*2.2); \draw (\x*2.2,\x*.2) -- (-\x*.2,-\x*2.2); } \draw (0,0) node {$\bullet$}; \draw[very thick,->] (0,0)--(2,0) node[right, xshift=3pt] {$\alpha_1^\vee$}; \draw[very thick,->] (0,0)--(0,2) node[above, yshift=3pt] {$\aff{\alpha}^\vee$}; \draw[very thick,->] (0,0)--(-2,2) node[above left] {$\alpha_2^\vee$}; \draw[very thick,->] (0,0)--(2,2) node[above right] {$\beta^\vee$}; \end{tikzpicture} & \begin{tikzpicture}[scale=.85,>=stealth,baseline=0] \foreach \x in {-2,...,2}{ \draw (-2.2,\x)--(2.2,\x); \draw (\x,-2.2)--(\x,2.2); } \foreach \x in {-1,1}{ \draw (\x*2.2,\x*1.8) -- (\x*1.8,\x*2.2); \draw (\x*2.2,-\x*1.8) -- (\x*1.8,-\x*2.2); \draw (\x*2.2,-\x*.2) -- (-\x*.2,\x*2.2); \draw (\x*2.2,\x*.2) -- (-\x*.2,-\x*2.2); } \draw (-2.2,2.2)--(2.2,-2.2); \draw[very thick] (-2.7,-2.7)--(2.7,2.7) node[above right] {$\scriptstyle H_{\alpha_2}$}; \draw[very thick] (0,-2.7)--(0,2.7) node[above] {$\scriptstyle H_{\alpha_1}$}; \draw[very thick] (-2.7,1)--(2.7,1) node[right] {$\scriptstyle H_{\aff{\alpha},1}$}; \draw (0,0) node {$\bullet$}; \draw (.3,.7) node {$\scriptstyle A_{\emptyset}$}; \end{tikzpicture} & \begin{tikzpicture}[scale=2.7,baseline=1.5cm] \draw[fill=blue!20!white,very thick] (0,0) node {$\bullet$} node[below] {$A_{\{\alpha_1,\alpha_2\}}$} -- node[midway,right,xshift=.2cm] {$A_{\{\alpha_2\}}$} (1,1) node {$\bullet$} node[above,xshift=.2cm] {$A_{\{\alpha_2,\alpha_0\}}$} -- node[midway,above] {$A_{\{\alpha_0\}}$} (0,1) node {$\bullet$} node[above,xshift=-.2cm] {$A_{\{\alpha_1,\alpha_0\}}$} -- node[midway,left] {$A_{\{\alpha_1\}}$} (0,0); \draw (.3,.6) node {$A_{\emptyset}$}; \end{tikzpicture} \\ (a) & (b) & (c) \end{tabular} \caption{The affine arrangement $\aff{\Hy}(C_2)$. (a) Positive coroots: $\alpha_1^\vee=\frac{1}{2}\alpha_1$, $\aff{\alpha}^\vee=\frac{1}{2}\aff{\alpha}$, $\alpha_2^\vee=\alpha_2$, $\beta^\vee=\beta$. (b) Affine hyperplanes and the fundamental alcove. (c) The faces of the fundamental alcove.} \label{fig:C2} \end{figure} The \emph{affine Coxeter arrangement} is \[ \aff{\Hy}(\Phi) := \{ H_{\beta, k} \mid \beta \in \Pi, k \in \Z\}. \] Two examples are shown in Figures~\ref{fig:A2}(b) and~\ref{fig:C2}(b). Let $\aff{\Sigma}$ denote the set of faces of $\aff{\Hy}(\Phi)$. Since $\aff{\Hy}(\Phi)$ is locally finite, $\aff{\Sigma}$ is a semigroup under the Tits product. In the poset $\aff{\Sigma}$ (with faces ordered by inclusion of their closures), every vertex is a minimal element. Adding in a smallest element turns $\aff{\Sigma}$ into a simplicial complex, isomorphic to the Coxeter complex of $\aff{W}$~\cite[Proposition~10.13]{AbrBro:2008}. The Tits product can be extended as well, by letting the smallest element act as a unit. This turns the semigroup $\aff{\Sigma}$ into a monoid. Since in $\aff{\Hy}(\Phi)$ there is one parallelism class for each linear hyperplane in $\Hy(\Phi)$, the set of celestial faces of $\aff{\Hy}(\Phi)$ identifies with the finite Coxeter complex $\Sigma$ minus its central face. Proposition~\ref{p:celestial} then yields a right $\Sigma$-module structure on $\aff{\Sigma}$. 
(The cone built on the affine arrangement as in the proof of the proposition is in this context known as the \emph{Tits cone}.)
The maximal faces of $\aff{\Sigma}$ are called \emph{alcoves}, \emph{chambers} being reserved for the maximal faces of $\Sigma$. The set $\aff{\Cc}$ of alcoves is a two-sided ideal in the semigroup $\aff{\Sigma}$. The right action of $\Sigma$ on $\aff{\Cc}$ is trivial. On the other hand, the right action of a chamber $C\in\mathcal{C}$ on a face $F\in \aff{\Sigma}$ results in an alcove $FC\in\aff{\Cc}$.
The \emph{fundamental} alcove is
\[
A_\emptyset:= C_\emptyset \cap \{ \lambda \in V \mid \br{\lambda, \aff{\alpha}} < 1\}.
\]
The faces of $A_\emptyset$ are the sets of the form
\begin{equation}\label{e:AJ}
A_J:=\begin{cases}
\hbox to 31pt{\hfill $C_J$} \cap \{ \lambda \in V \mid \br{\lambda, \aff{\alpha}} < 1\} &\text{if $\alpha_0\notin J$},\\
C_{J\setminus\{\alpha_0\}} \cap \{ \lambda \in V \mid \br{\lambda, \aff{\alpha}} = 1\} &\text{if $\alpha_0\in J$,}
\end{cases}
\end{equation}
where $J$ is a proper subset of $\aff{\Delta}:=\Delta\cup\{\alpha_0\}$, with $\alpha_0:=-\aff{\alpha}$ the lowest root, and $C_J$ is as in~\eqref{e:CJ}. See Figures~\ref{fig:A2}(c) and~\ref{fig:C2}(c).
The closure of $A_\emptyset$ is a fundamental domain for the action of $\aff{W}$ on $V$. Hence, each face in $\aff{\Sigma}$ is of the form $\mu+w\cdot A_J$, where $\mu\in \Z\Phi^\vee$, $w\in W$, and $J$ is a proper subset of $\aff{\Delta}$.
We may let $J=\aff{\Delta}$ in~\eqref{e:AJ}. Then $A_J=\emptyset$, and we may think of this \emph{empty face} as the smallest element that when added to $\aff{\Sigma}$ turns it into a simplicial complex (and a monoid).
The vertices of $\aff{\Sigma}$ are of the form $\mu+w\cdot A_{\aff{\Delta}\setminus\{\alpha\}}$, where $\mu$ and $w$ are as before, and $\alpha\in\aff{\Delta}$. If we assign color $\alpha$ to all such vertices, we obtain a balanced coloring of $\aff{\Sigma}$. The face $F=\mu+w\cdot A_J$ receives color set $\aff{\Delta}\setminus J$, which we denote $\col(F)$, just as for faces of the Coxeter complex.
A face $F\in\aff{\Sigma}$ can be encoded by an infinite sign vector $\sigma(F)$ that records whether the face is ``above", ``below", or ``on" a particular hyperplane~\cite[Section 2.7]{AbrBro:2008}. We have
\begin{equation}\label{eq:expanded}
\sigma(F) = \bigl( \sigma_{\beta,k}(F) \bigr)_{\beta \in \Pi,\, k \in \Z}\,,
\end{equation}
where $\sigma_{\beta,k}(F)\in\{+,-,0\}$ is the sign of $\br{ \lambda, \beta} - k$ for points $\lambda\in F$.
Fix $\beta\in\Pi$. The face $F$ will be contained in a stripe between two consecutive hyperplanes of the form $H_{\beta, k}$. More precisely, there is a unique $j$ such that
\begin{equation}\label{e:signj}
\sigma_{\beta,k}(F) =
\begin{cases}
+ & \text{ if }k<j, \\
- & \text{ if }k>j,\\
+ \text{ or } 0 & \text{ if }k=j.
\end{cases}
\end{equation}
Let $k_{\beta}(F)$ denote this integer $j$. Equivalently, $k_{\beta}(F) = j$ means $j \leq \br{ \lambda, \beta } < j +1$ for points $\lambda\in F$, and $\sigma_{\beta,j}(F) = 0$ or $+$ according to whether $\br{ \lambda, \beta }$ is equal to or greater than $j$.
In view of~\eqref{e:signj}, to gain full knowledge of the sign vector of $F$, we need no more than the pairs $(k_{\beta}(F),\sigma_{\beta,k_{\beta}(F)}(F))$ for all $\beta\in\Pi$. Thus, let us write instead
\begin{equation}\label{eq:compact}
\sigma(F) = \bigl(k_{\beta}(F),\sigma_{\beta,k_{\beta}(F)}(F)\bigr)_{\beta \in \Pi}.
\end{equation}
We refer to \eqref{eq:expanded} and \eqref{eq:compact} as the \emph{expanded} and \emph{compact} sign vector of $F$, respectively.
For example, if $\Phi=A_2$ and $F$ is the edge given by \[ \{ \lambda \in \R^2 \mid \br{ \lambda, \alpha_1 } = 1,\ -1 < \br{ \lambda, \alpha_2 } < 0,\ 0 < \br{ \lambda, \aff{\alpha} } < 1 \} \] (see Figure~\ref{fig:signvector}), the entries of its compact sign vector are \[ \sigma_{\alpha_1}(F) = (1,0), \quad \sigma_{\alpha_2}(F)=(-1,+), \quad\text{and}\quad \sigma_{\aff{\alpha}}(F)=(0,+). \] \begin{figure}[!h] \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0] \foreach \x in {3,4,5}{ \draw (-.2,\x)--(7.2-\x,\x); \draw (-.2,\x+.2)--(\x-.8,.8); \draw (\x-1,.8)--(\x-1,8.2-\x); } \foreach \x in {1,2}{ \draw (2.8-\x,\x)--(4.2,\x); \draw (\x-.2,5.2)--(4.2,\x+.8); \draw (\x-1,3.8-\x)--(\x-1,5.2); } \draw (2,3) node {$\bullet$}; \draw[very thick] (-.7,3)--(4.7,3) node[right] {$\scriptstyle H_{\alpha_2}$}; \draw (.3,2)--(4.7,2) node[right] {$\scriptstyle H_{\alpha_2,-1}$}; \draw[very thick] (2,.3)--(2,5.7) node[above right] {$\scriptstyle H_{\alpha_1}$}; \draw (3,.3)--(3,4.7) node[above right] {$\scriptstyle H_{\alpha_1,1}$}; \draw (.3,5.7)--(4.7,1.3) node[below right] {$\scriptstyle H_{\aff{\alpha},1}$}; \draw[very thick] (-.7,5.7)--(4.7,.3) node[below right] {$\scriptstyle H_{\aff{\alpha}}$}; \draw (3,2.5) node[fill=white] {$\scriptstyle F$}; \end{tikzpicture} \caption{An edge in the affine arrangement $\aff{\Hy}(A_2)$.} \label{fig:signvector} \end{figure} The compact sign vector of the fundamental alcove $A_{\emptyset}$ has \[ \sigma_\beta(A_{\emptyset}) =(0,+) \] for every $\beta\in\Pi$. For $w \in W$, let \[ \aff{D}(w) = \{ \alpha \in \aff{\Delta} \mid w(\alpha) < 0\},\] which we call the \emph{affine descent set} of $w$. This set is discussed in more detail in Section \ref{ss:affdes}. Arguments similar to those in Proposition~\ref{prp:Dcol} lead to the following result. \begin{prp}\label{prp:affDcol} Given a face $F\in\aff{\Sigma}$, there are unique $\mu\in \Z\Phi^\vee$, $w\in W$, and a proper subset $J$ of $\aff{\Delta}$ such that \begin{equation}\label{e:affDcol} F = \mu+w\cdot A_J \quad\text{and}\quad \aff{D}(w) \subseteq \col(F). \end{equation} \end{prp} The elements $\mu$ and $w$ are determined by \[ \mu + w\cdot A_{\emptyset} = FC_{\emptyset}, \] where the latter is the right action of the fundamental chamber $C_{\emptyset}\in\Sigma$ on the face $F\in\aff{\Sigma}$. The existence and uniqueness of $\mu$ and $w$ follow from the fact that the action of $\aff{W}$ on the set of alcoves is simply transitive. Letting \[ w_F := (\mu,w)\in \Z\Phi^\vee\rtimes W =\aff{W}, \] the condition defining this element of the affine Weyl group can be rewritten as \begin{equation}\label{e:wF2} w_F\cdot A_{\emptyset} = FC_{\emptyset}. \end{equation} \subsection{The Steinberg torus}\label{ss:invariance} Consider the action of the coroot lattice $\Z\Phi^\vee$ by translations on the ambient space $V$. This action preserves the arrangement $\aff{\Hy}(\Phi)$ (see Figure~\ref{fig:A2}), and hence also the set of faces $\aff{\Sigma}$. The \emph{Steinberg torus} \cite{DPS:2009} is the set of orbits for the action of the coroot lattice on the set of faces: \[ \Stb{\Sigma}:=\aff{\Sigma}/\Z\Phi^{\vee}. \] As $\aff{\Sigma}$ partitions $V$, $\Stb{\Sigma}$ partitions the geometric torus $V/\Z\Phi^{\vee}$. This cell decomposition of the torus is not simplicial, as different faces may share the same vertex set, but it does have the property that the poset of subfaces of a face is isomorphic to the poset of nonempty subsets of the vertex set of the face. 
Let $\Stb{\Cc}$ denote the set of maximal faces of $\Stb{\Sigma}$. The Steinberg torus may also be obtained as a quotient of the convex polytope \[ P_{\Phi} = \{ \lambda \in V \mid -1\leq \br{ \lambda, \alpha } \leq 1 \mbox{ for all } \alpha \in \Phi\}. \] A point $\lambda$ on the boundary of $P_{\Phi}$ satisfies $\br{ \lambda, \beta } = -1$ for some root $\beta\in\Phi$. To obtain the torus, each such point $\lambda$ is identified with the point $\lambda':=\lambda+\beta^\vee$, which satisfies $\br{ \lambda', \beta } = 1$ and also lies on the boundary. See Figure \ref{fig:tori}. The polytope $P_{\Phi}$ is the union of the closures of the alcoves of the form $w\cdot A_{\emptyset}$, $w \in W$. It is an \emph{alcoved polytope} in the sense of~\cite{LamPos:2007}, see particularly~\cite[Section 4]{LamPos:2012}. \begin{figure}[!h] \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=2] \draw[draw=none,fill=black!20!white] (-1,0)--(-1,1)--(0,1)--(1,0)--(1,-1)--(0,-1)--(-1,0); \draw[very thick] (-1,1)--(0,1); \draw[very thick] (1,-1)--(0,-1); \draw[very thick,dash pattern=on 10pt off 5pt] (-1,0)--(0,-1); \draw[very thick,dash pattern=on 10pt off 5pt] (1,0)--(0,1); \draw[very thick,dashed] (-1,0)--(-1,1); \draw[very thick,dashed] (1,-1)--(1,0); \draw[very thick] (-1,1)--(1,-1); \draw[very thick] (-1,0)--(1,0); \draw[very thick] (0,-1)--(0,1); \draw (-1,0) node {$\bullet$}; \draw (1,0) node {$\bullet$}; \draw (-1,1) node {$\bullet$}; \draw (0,1) node {$\bullet$}; \draw (0,-1) node {$\bullet$}; \draw (1,-1) node {$\bullet$}; \draw (0,0) node {$\bullet$}; \end{tikzpicture} \hspace{3cm} \begin{tikzpicture}[scale=2,baseline=0] \draw[draw=none,fill=black!20!white] (-1,-1)--(-1,1)--(1,1)--(1,-1)--(-1,-1); \draw[very thick,dash pattern=on 10pt off 5pt] (-1,1)--(-1,-1); \draw[very thick,dash pattern=on 10pt off 5pt] (1,1)--(1,-1); \draw[very thick] (-1,1)--(1,1); \draw[very thick] (-1,-1)--(1,-1); \draw[very thick] (-1,1)--(1,-1); \draw[very thick] (-1,-1)--(1,1); \draw[very thick] (0,1)--(0,-1); \draw[very thick] (-1,0)--(1,0); \draw (-1,1) node {$\bullet$}; \draw (1,1) node {$\bullet$}; \draw (-1,0) node {$\bullet$}; \draw (1,0) node {$\bullet$}; \draw (-1,-1) node {$\bullet$}; \draw (1,-1) node {$\bullet$}; \draw (0,1) node {$\bullet$}; \draw (0,-1) node {$\bullet$}; \draw (0,0) node {$\bullet$}; \end{tikzpicture} \caption{The polytopes $P_{A_2}$ and $P_{C_2}$. The Steinberg tori are obtained by identifying points on the boundary.} \label{fig:tori} \end{figure} The geometric construction of the product of faces (Figure~\ref{fig:fcprod}) shows that translations commute with the right action of celestial faces on the set of mundane faces. We thus have the following. \begin{prp}\label{prp:productinvariance} Let $F \in \aff{\Sigma}$, $G \in \Sigma$, and $\mu \in \Z\Phi^{\vee}$. Then $(\mu + F)G = \mu + FG$. \end{prp} Figure \ref{fig:affprodA} illustrates this fact. 
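Proposition~\ref{prp:productinvariance} can likewise be checked mechanically. In the Python sketch below (for $\Phi=A_2$; recording a point by its pairings with the two simple roots is a choice made for this illustration), a face is represented by its compact sign vector \eqref{eq:compact}, the product with a celestial face is computed by an infinitesimal step in the given direction, and translation by a coroot is seen to commute with the product.
\begin{verbatim}
# Sketch: (mu + F)G = mu + FG for Phi = A2, with exact rational arithmetic.
# A point is stored as its pairings (a1, a2) with alpha_1, alpha_2; the
# pairing with the highest root is a1 + a2.  Translating by the coroot
# mu = m1*a1^v + m2*a2^v adds (2*m1 - m2, -m1 + 2*m2) to the pairings.
from fractions import Fraction
from math import floor

PAIRINGS = [lambda a: a[0], lambda a: a[1], lambda a: a[0] + a[1]]

def compact_face(lam, d):
    """Compact sign vector of the face of lam + eps*d as eps -> 0+."""
    out = []
    for f in PAIRINGS:
        v, w = f(lam), f(d)
        if v != int(v):            # strictly between two hyperplanes
            out.append((floor(v), '+'))
        elif w > 0:                # on H_{beta,v}, moving up
            out.append((int(v), '+'))
        elif w < 0:                # on H_{beta,v}, moving down
            out.append((int(v) - 1, '+'))
        else:                      # stays on the hyperplane
            out.append((int(v), '0'))
    return tuple(out)

def translate(face, m1, m2):
    shift = (2*m1 - m2, -m1 + 2*m2)
    shifts = (shift[0], shift[1], shift[0] + shift[1])
    return tuple((k + s, sig) for (k, sig), s in zip(face, shifts))

lam = (Fraction(1), Fraction(-1, 2))  # a point on the edge F of the example
d = (Fraction(1), Fraction(1))        # a direction in the fundamental chamber
m1, m2 = 2, -1                        # mu = 2*a1^v - a2^v
mu = (2*m1 - m2, -m1 + 2*m2)
lam_shift = (lam[0] + mu[0], lam[1] + mu[1])
assert compact_face(lam_shift, d) == translate(compact_face(lam, d), m1, m2)
print(compact_face(lam, d))
\end{verbatim}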
\begin{figure}[!h] \begin{tikzpicture} \draw (0,0) node{ \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,>=stealth] \foreach \x in {3,4,5}{ \draw (-.5,\x)--(7.5-\x,\x); \draw (-.5,\x+.5)--(\x-.5,.5); \draw (\x-1,.5)--(\x-1,8.5-\x); } \foreach \x in {1,2}{ \draw (2.5-\x,\x)--(4.5,\x); \draw (\x-.5,5.5)--(4.5,\x+.5); \draw (\x-1,3.5-\x)--(\x-1,5.5); } \draw[dashed] (4.5,3)--(7,3); \draw[dashed] (-.5,3)--(-3,3); \draw[dashed] (4.5,.5)--(7,-2); \draw[dashed] (-.5,5.5)--(-3,8); \draw[dashed] (2,5.5)--(2,8); \draw[dashed] (2,.5)--(2,-2); \foreach \x in {0,3}{ \draw (1,5-\x) node { \begin{tikzpicture} \draw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }; \draw (3,4-\x) node { \begin{tikzpicture} \draw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }; } \foreach \x in {2,3,4}{ \draw (8-2*\x,\x) node { \begin{tikzpicture} \draw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }; } \draw (1.3,2.7) node[fill=white, inner sep=1pt] {$\scriptstyle F$}; \draw (2.3,3.7) node[fill=white, inner sep=1pt] {$\scriptstyle \mu+F$}; \draw[blue,->,very thick] (1.7,2.3) node {$\bullet$} --(1.7,2.8); \draw[blue,->,very thick] (2.7,3.3) node {$\bullet$} --(2.7,3.8); \draw[blue,dashed] (1.7,2.8) .. controls (1.7,6) .. (2,7); \draw[blue,dashed] (2.7,3.8) .. controls (2.7,5.5) .. (2,7); \draw (2,7) node {$\bullet$}; \draw (2,7) node[above right,xshift=2pt] {$G$}; \draw (6,3) node {$\bullet$}; \draw (6,-1) node {$\bullet$}; \draw (2,-1) node {$\bullet$}; \draw (-2,3) node {$\bullet$}; \draw (-2,7) node {$\bullet$}; \end{tikzpicture} }; \draw[very thick] (0,0) circle (4); \end{tikzpicture} \caption{Translate first and then walk, or vice versa. Elements of the coroot lattice are indicated with white circles.}\label{fig:affprodA} \end{figure} As an immediate consequence of Proposition \ref{prp:productinvariance}, we have the following. \begin{cor}\label{cor:productinvariance} The set $\Stb{\Sigma}$ is a right $\Sigma$-module and the canonical quotient map \[ \aff{\Sigma}\twoheadrightarrow\Stb{\Sigma} \] is a morphism of $\Sigma$-modules. \end{cor} Let $\overline{F}\in\Stb{\Sigma}$ denote the $\Z\Phi^\vee$-orbit of a face $F\in \aff{\Sigma}$. Given $G\in\Sigma$, we let $\overline{F}G\in\Stb{\Sigma}$ denote the right action of $G$ on $F$. Note, however, that the product of translates of two faces $F$ and $G$ of $\aff{\Sigma}$ is in general \emph{not} a translate of $FG$. For this reason, the semigroup structure of $\aff{\Sigma}$ does not descend to $\Stb{\Sigma}$. The finite Weyl group $W$ acts on $\Stb{\Sigma}$. This is explained in more detail in Section~\ref{sec:modules}; see the discussion around~\eqref{e:semilinear-coroot}. (Another way to see this is by noting that the polytope $P_{\Phi}$ is stable under the action of $W$ on $V$ and that this action preserves the identifications.) We denote the action of $w\in W$ on $\overline{F}\in\Stb{\Sigma}$ by \[ w\cdot \overline{F} := \overline{w\cdot F}. \] The action of $W$ on $\Stb{\Cc}$ is simply-transitive. The faces of $\Stb{\Sigma}$ can be identified with cosets of the \emph{quasi-parabolic} subgroups \[ W_J := \br{ s_\alpha \mid \alpha \in J}, \] where $J$ is a proper subset of $\aff{\Delta}$. Namely, \[ \Stb{\Sigma} \cong \{ wW_J \mid w \in W, J\subsetneq \aff{\Delta}\}, \] with inclusion of faces given by reverse inclusion of cosets. See \cite{DPS:2009}. The action of the affine Weyl group $\aff{W}$ on $\aff{\Sigma}$ is color-preserving, and hence so is the action of the subgroup $\Z\Phi^{\vee}$. 
Therefore, the complex $\Stb{\Sigma}$ inherits the balanced coloring of $\aff{\Sigma}$, for which the faces of the form $\overline{F} = w\cdot \overline{A_J}$ receive color set $\aff{\Delta}\setminus J$, which we again denote $\col(\overline{F})$. See Figure~\ref{fig:colortori}.

The following is a consequence of Proposition~\ref{prp:affDcol}.
\begin{cor}\label{cor:sDcol}
Given a face $\overline{F}\in\Stb{\Sigma}$, there is a unique $w\in W$ and a unique proper subset $J$ of $\aff{\Delta}$ such that
\begin{equation}\label{e:sDcol}
\overline{F} = w\cdot \overline{A_J} \quad\text{and}\quad \aff{D}(w) \subseteq \col(\overline{F}).
\end{equation}
\end{cor}
The element $w$ is determined by
\begin{equation}\label{e:wF3}
w\cdot \overline{A_{\emptyset}} = \overline{F}C_{\emptyset}.
\end{equation}
\begin{figure}[!h]
\begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=2]
\draw[blue,very thick] (-1,1)--(0,1);
\draw[blue,very thick] (1,-1)--(0,-1);
\draw[blue,very thick,dash pattern=on 10pt off 5pt] (-1,0)--(0,-1);
\draw[blue,very thick,dash pattern=on 10pt off 5pt] (1,0)--(0,1);
\draw[blue,very thick,dashed] (-1,0)--(-1,1);
\draw[blue,very thick,dashed] (1,-1)--(1,0);
\draw[green,very thick] (-1,1)--(0,0);
\draw[red,very thick] (0,0)--(1,-1);
\draw[red,very thick] (-1,0)--(0,0);
\draw[green,very thick] (0,0)--(1,0);
\draw[green,very thick] (0,-1)--(0,0);
\draw[red,very thick] (0,0)--(0,1);
\draw (-1,0) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (1,0) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (-1,1) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (0,1) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (0,-1) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (1,-1) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (0,0) node[circle,inner sep=2pt,fill=yellow,draw=black] {};
\end{tikzpicture}
\hspace{3cm}
\begin{tikzpicture}[scale=2,baseline=0]
\draw[blue,very thick,dash pattern=on 10pt off 5pt] (-1,1)--(-1,-1);
\draw[blue,very thick,dash pattern=on 10pt off 5pt] (1,1)--(1,-1);
\draw[blue,very thick] (-1,1)--(1,1);
\draw[blue,very thick] (-1,-1)--(1,-1);
\draw[green,very thick] (-1,1)--(1,-1);
\draw[green,very thick] (-1,-1)--(1,1);
\draw[red,very thick] (0,1)--(0,-1);
\draw[red,very thick] (-1,0)--(1,0);
\draw (-1,1) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (1,1) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (-1,0) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (1,0) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (-1,-1) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (1,-1) node[circle,inner sep=2pt,fill=cyan,draw=black] {};
\draw (0,1) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (0,-1) node[circle,inner sep=2pt,fill=magenta,draw=black] {};
\draw (0,0) node[circle,inner sep=2pt,fill=yellow,draw=black] {};
\end{tikzpicture}
\caption{The Steinberg tori for $A_2$ and $C_2$ with faces colored according to $W$-orbits.}
\label{fig:colortori}
\end{figure}

\section{Solomon's descent ring and the module of affine descent classes}\label{sec:modules}

We review Solomon's definition of the descent ring $\Sol(\Phi)$ and introduce an analogous object $\Stb{\Sol}(\Phi)$ in terms of Cellini's notion of affine descents. For the latter, the root system is assumed to be irreducible and crystallographic.
We then review the approach of Tits to the descent ring from~\cite{Tit:1976}, following Bidigare~\cite{Bid:1997} and Brown~\cite[Section 4.8]{Bro:2000}, and adapt it to the case of affine descents. Via the constructions of Section~\ref{sec:products}, this leads to a left module structure on $\Stb{\Sol}(\Phi)$ over $\Sol(\Phi)$.

\subsection{Descents}\label{ss:des}

In this section, $\Phi$ is allowed to be an arbitrary root system. The Coxeter group $W$ is generated by the set $S$ of simple reflections. Let $\ell$ denote the corresponding length function. According to~\cite[Proposition~4.4.6]{BjB:2005}, for any positive root $\beta$ and any $w\in W$,
\begin{equation}\label{e:des-equiv}
\ell(w) > \ell(ws_{\beta}) \iff w(\beta) < 0.
\end{equation}
For $w \in W$, the set of \emph{right descents} of $w$ is
\begin{equation}\label{e:des}
D(w) := \{\alpha\in\Delta \mid \ell(w) > \ell(ws_\alpha) \} = \{ \alpha \in \Delta \mid w(\alpha) < 0\}.
\end{equation}
For any $J \subseteq \Delta$, let
\[
x_J := \sum_{ D(w) \subseteq J} w
\]
denote the sum, in the group ring $\Z W$, of all elements of $W$ whose descent set is contained in $J$. As $J$ runs over the subsets of $\Delta$, the elements $x_J$ span a subring of $\Z W$, called \emph{Solomon's descent ring}~\cite[Theorem~1]{Sol:1976}. We denote it by $\Sol(\Phi)$. There is another standard basis of $\Sol(\Phi)$ given by the sums of elements with a fixed descent set,
\[
y_J := \sum_{ D(w) = J} w.
\]
The descent algebra was introduced by Solomon in~\cite{Sol:1976} and has been the object of many subsequent works including~\cite{ABN:2004,APVW:2002,BauHoh:2008,BBHT:1992,BonPfe:2008,Cel:1995,Ful:2001,GarReu:1989,MatOre:2008,Pat:1994,Sal:2008}.

\subsection{Affine descents}\label{ss:affdes}

We turn to \emph{affine descent sets}, a notion introduced by Cellini \cite[Section 2]{Cel:1995}, and further studied in~\cite{DPS:2009,Ful:2000,Pet:2005}. We assume that the root system $\Phi$ is irreducible and crystallographic, as in Sections~\ref{ss:affine} and~\ref{ss:invariance}. Thus, $W$ is a Weyl group. Let $\aff{\alpha}\in\Phi$ be the highest root and $\alpha_0 = -\aff{\alpha}$ the \emph{lowest} root, and let $\aff{\Delta}:=\Delta\cup \{\alpha_0\}$. The highest root is positive; in particular, $\alpha_0\notin \Delta$. The \emph{affine descent set} of an element $w\in W$ is
\begin{equation}\label{e:affdes}
\aff{D}(w) := \{ \alpha\in \aff{\Delta} \mid w(\alpha) < 0\}.
\end{equation}
Thus, $D(w) \subseteq \aff{D}(w)$, and the only difference occurs when $w$ does not take $\alpha_0$ to a positive root. Note that the set $\aff{D}(w)$ is defined only for elements $w$ of the finite Weyl group $W$, and not for general elements of the affine Weyl group $\aff{W}$. Every element has at least one affine descent, and no element can have more than $\abs{\Delta}$ affine descents. For any nonempty subset $J$ of $\aff{\Delta}$, let
\[
\Stb{x}_J := \sum_{\aff{D}(w) \subseteq J} w \qquad \mbox{and} \qquad \Stb{y}_J := \sum_{\aff{D}(w) = J} w.
\]
This is a refinement of the basis for Solomon's descent algebra in the sense that for $J \subseteq \Delta$,
\[
y_J = \Stb{y}_J + \Stb{y}_{J\cup\{\alpha_0\}}.
\]
While the elements $\Stb{x}_J$ (or equivalently, $\Stb{y}_J$) do not span a subring of $\Z W$, we show below (Theorem~\ref{thm:main}) that they span a left module over $\Sol(\Phi)$, which we denote by $\Stb{\Sol}(\Phi)$.

\begin{rmk}
Root systems with isomorphic Coxeter groups (such as $B_n$ and $C_n$) have isomorphic descent rings.
On the other hand, such systems may have non-isomorphic modules of affine descents. The module $\Stb{\Sol}(\Phi)$ depends on the root system and not just on the Weyl group.
\end{rmk}

\begin{rmk}\label{rmk:cell}
Cellini~\cite[Proposition 1.2]{Cel:1995} constructs a certain commutative subring of $\Z W$. It follows from~\cite[Lemma~2.4]{Cel:1995} that this subring lies inside $\Stb{\Sol}(\Phi)$. We do not pursue any further connections with Cellini's work in this paper.
\end{rmk}

\begin{rmk}\label{rmk:mosz}
In~\cite{Mos:1989}, Moszkowski defines a family of modules over Solomon's descent ring. This family contains the module $\Stb{\Sol}(\Phi)$, as we now explain. Let $T = \{ wsw^{-1} \mid s \in S, w \in W\}$ denote the set of \emph{reflections} in $W$. Fix a subset $R$ of $T$. For any element $w\in W$, define the $R$-descent set of $w$ to be:
\begin{equation}\label{e:Rdes}
D_R(w) := \{ r \in R \mid \ell(w) > \ell(wr) \}.
\end{equation}
Given $J\subseteq R$, let
\[
x_J^R := \sum_{D_R(w) \subseteq J} w \qquad \mbox{ and } \qquad y_J^R := \sum_{D_R(w) =J} w
\]
denote the sum of elements whose $R$-descent set is contained in, or, respectively, equal to $J$. Moszkowski shows that the subspace of $\Z W$ spanned by the elements $x_J^R$ (or equivalently, by the elements $y_J^R$), as $J$ runs over the subsets of $R$, is a left module over $\Sol(\Phi)$~\cite[Th\'eor\`eme~1]{Mos:1989}. (Moszkowski, like Solomon, indexes the basis elements by the complements of the subsets $J$ above.) Now let $s_0$ denote the linear reflection through the linear hyperplane orthogonal to $\alpha_0$, i.e., the hyperplane $\{\lambda \in V \mid \br{\lambda, \alpha_0} = 0\}$, and let $R=S \cup \{ s_0\}$. Applying~\eqref{e:des-equiv} to $\aff{\alpha}$, we have that
\[
\ell(w) > \ell(ws_0) \iff w(\alpha_0) > 0.
\]
Comparing~\eqref{e:affdes} to~\eqref{e:Rdes}, we see that
\[
D_R(w) = \begin{cases}
\aff{D}(w) \cup \{\alpha_0\} & \mbox{if } w(\alpha_0) > 0, \\
\aff{D}(w) \setminus \{\alpha_0\} & \mbox{if } w(\alpha_0) < 0.
\end{cases}
\]
It follows that, for $J \subseteq \Delta$,
\[
y^R_J = \Stb{y}_{J\cup \{\alpha_0\}} \qquad \mbox{and} \qquad \Stb{y}_J = y^R_{J\cup \{\alpha_0\}}.
\]
Thus, $\Stb{\Sol}(\Phi)$ is Moszkowski's module for $R=S\cup\{s_0\}$.
\end{rmk}

\subsection{The geometric approach to descents}\label{ss:geo-des}

We continue to assume that $\Phi$ is an irreducible crystallographic root system, $W$ is the associated Weyl group, and $\Sigma$, $\aff{\Sigma}$ and $\Stb{\Sigma}$ are the complexes discussed in Section~\ref{sec:products}. Let $\Z\Sigma$ denote the monoid ring of $\Sigma$ and consider the subring $(\Z\Sigma)^W$ of $W$-invariants. Tits~\cite{Tit:1976} showed that the latter is anti-isomorphic to Solomon's descent ring; see also Bidigare~\cite{Bid:1997}. We follow here the proof of this fact by Brown~\cite[Section~9.6]{Bro:2000}, and obtain counterparts for $\aff{\Sigma}$ and $\Stb{\Sigma}$.

As discussed in Section~\ref{sec:products}, the Tits product turns $\Sigma$ into a monoid, and $\aff{\Sigma}$ and $\Stb{\Sigma}$ are right $\Sigma$-modules, with the projection $\pi: \aff{\Sigma} \to \Stb{\Sigma}$ being a morphism of right $\Sigma$-modules. The Weyl group $W$ acts on both $\Z\Phi^\vee$ and $\aff{\Sigma}$, and these actions and the action of $\Z\Phi^\vee$ on $\aff{\Sigma}$ are related by the \emph{semilinearity} condition
\begin{equation}\label{e:semilinear-coroot}
w\cdot(\mu + F) = w\cdot \mu + w\cdot F
\end{equation}
for $w\in W$, $\mu\in\Z\Phi^\vee$, and $F\in\aff{\Sigma}$.
It follows that $W$ acts on $\Stb{\Sigma}$ and that $\pi$ is a morphism of left $W$-modules. The Weyl group $W$ also acts on the monoid $\Sigma$ and we have
\begin{equation}\label{e:semilinear-face}
w\cdot(FG) = (w\cdot F)(w\cdot G)
\end{equation}
for $w\in W$, $G$ in $\Sigma$ and $F$ in either $\Sigma$, $\aff{\Sigma}$, or $\Stb{\Sigma}$.

We linearize the sets $\aff{\Sigma}$ and $\Stb{\Sigma}$, obtaining (free) abelian groups $\Z\aff{\Sigma}$ and $\Z\Stb{\Sigma}$. We emphasize that $\Z\aff{\Sigma}$ consists of \emph{finite} linear combinations of elements of $\aff{\Sigma}$. For this reason, $0$ is the only element of $\Z\aff{\Sigma}$ invariant under the linear action of the (infinite) group $\aff{W}$. We consider the linear action of the finite Weyl group $W$ on the abelian groups $\Z\aff{\Sigma}$ and $\Z\Stb{\Sigma}$, and the corresponding subgroups of $W$-invariant elements. It follows from~\eqref{e:semilinear-face} that $(\Z\aff{\Sigma})^W$ and $(\Z \Stb{\Sigma})^W$ are right modules over the ring $(\Z\Sigma)^W$, and also that the map $\pi: \aff{\Sigma} \to \Stb{\Sigma}$ gives rise to a morphism of right $(\Z\Sigma)^W$-modules $\pi: (\Z\aff{\Sigma})^W \to (\Z \Stb{\Sigma})^W$.

Recall again from Section~\ref{sec:products} that the set of chambers $\mathcal{C}$ is a left ideal of the monoid $\Sigma$: the product of a face of $\Sigma$ and a chamber of $\Sigma$ is another chamber of $\Sigma$. Similarly, the right action of a chamber of $\Sigma$ on a face of $\aff{\Sigma}$ (respectively, of $\Stb{\Sigma}$) results in an alcove of $\aff{\Sigma}$ (respectively, a maximal face of $\Stb{\Sigma}$). This gives rise to three maps
\begin{equation}\label{e:phi-maps}
\Z\Sigma \to \End_{\Z}(\Z\mathcal{C}) \qquad \Z\aff{\Sigma} \to \Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc}) \qquad \Z\Stb{\Sigma} \to \Hom_{\Z}(\Z\mathcal{C},\Z\Stb{\Cc})
\end{equation}
denoted in every case by $\Theta$ and given by
\[
\Theta(F)(C) := FC
\]
(and extended by $\Z$-linearity). Here, $F$ denotes a face of either $\Sigma$, $\aff{\Sigma}$, or $\Stb{\Sigma}$, according to the case, while $C$ denotes a chamber of $\Sigma$ in every case. The abelian group $\End_{\Z}(\Z\mathcal{C})$ is a ring under composition, while both $\Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc})$ and $\Hom_{\Z}(\Z\mathcal{C},\Z\Stb{\Cc})$ are right $\End_{\Z}(\Z\mathcal{C})$-modules by precomposition. Associativity for the product of $\Sigma$ (or for the right action of $\Sigma$ on $\aff{\Sigma}$, or on $\Stb{\Sigma}$) translates into the fact that $\Theta(FG) = \Theta(F)\circ\Theta(G)$ for $G\in\Sigma$ and $F$ in either $\Sigma$, $\aff{\Sigma}$, or $\Stb{\Sigma}$. This says that the first map in~\eqref{e:phi-maps} is a morphism of rings, while the other two maps are morphisms of right $\Z\Sigma$-modules, where $\Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc})$ and $\Hom_{\Z}(\Z\mathcal{C},\Z\Stb{\Cc})$ are viewed as right $\Z\Sigma$-modules by restriction via $\Theta:\Z\Sigma \to \End_{\Z}(\Z\mathcal{C})$.

The sets $\mathcal{C}$, $\aff{\Cc}$, and $\Stb{\Cc}$ are stable under the action of $W$, and hence the groups $\End_{\Z}(\Z\mathcal{C})$, $\Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc})$, and $\Hom_{\Z}(\Z\mathcal{C}, \Z\Stb{\Cc})$ are acted upon by $W$ from the left. The action is
\[
(w\cdot f)(C) = w\cdot f(w^{-1}\cdot C)
\]
for $w\in W$, $C\in\mathcal{C}$, and $f$ in either $\End_{\Z}(\Z\mathcal{C})$, $\Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc})$, or $\Hom_{\Z}(\Z\mathcal{C},\Z\Stb{\Cc})$.
Equation~\eqref{e:semilinear-face} implies that
\[
\Theta(w\cdot F) = w\cdot\Theta(F)
\]
for $w\in W$ and $F$ in either $\Sigma$, $\aff{\Sigma}$, or $\Stb{\Sigma}$. It follows that each map $\Theta$ restricts as follows:
\[
(\Z\Sigma)^W \to \End_{\Z}(\Z\mathcal{C})^W, \qquad (\Z\aff{\Sigma})^W \to \Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc})^W, \qquad (\Z\Stb{\Sigma})^W \to \Hom_{\Z}(\Z\mathcal{C},\Z\Stb{\Cc})^W.
\]
These maps are still denoted by $\Theta$. The first one is a morphism of rings and the other two are morphisms of right $(\Z\Sigma)^W$-modules. Since the action of $W$ on $\mathcal{C}$ is free and transitive, we have isomorphisms
\[
\End_{\Z}(\Z\mathcal{C})^W = \End_{\Z W}(\Z\mathcal{C}) \cong \Z\mathcal{C}, \quad \Hom_{\Z}(\Z\mathcal{C},\Z\aff{\Cc})^W = \Hom_{\Z W}(\Z\mathcal{C},\Z\aff{\Cc}) \cong \Z\aff{\Cc},
\]
\[
\Hom_{\Z}(\Z\mathcal{C},\Z\Stb{\Cc})^W = \Hom_{\Z W}(\Z\mathcal{C},\Z\Stb{\Cc}) \cong \Z\Stb{\Cc},
\]
given in every case by $f \mapsto f(C_{\emptyset})$, where $C_{\emptyset}$ is the fundamental chamber of $\Sigma$. We may further identify $W$ with $\mathcal{C}$ by means of $w \leftrightarrow w\cdot C_{\emptyset}$. Consider the composite isomorphism of abelian groups
\begin{equation}\label{e:iso-comp}
\End_{\Z W}(\Z\mathcal{C}) \cong \Z W.
\end{equation}
A group element $u\in W$ corresponds to the endomorphism $f$ such that $f(C_{\emptyset}) = u\cdot C_{\emptyset}$. If another element $v\in W$ corresponds to the endomorphism $g$, then
\[
(f\circ g)(C_{\emptyset}) = f(v\cdot C_{\emptyset}) = v\cdot f(C_{\emptyset}) = vu\cdot C_{\emptyset},
\]
so $f\circ g$ corresponds to $vu$. Therefore, the isomorphism of rings~\eqref{e:iso-comp} reverses products. Similarly, we have $\aff{\Cc}\cong \aff{W}$ and $\Stb{\Cc}\cong W$ via the actions of these groups on the fundamental alcoves of $\aff{\Sigma}$ and $\Stb{\Sigma}$. This gives rise to isomorphisms of right $\End_{\Z W}(\Z\mathcal{C})$-modules
\[
\Hom_{\Z W}(\Z\mathcal{C},\Z\aff{\Cc}) \cong \Z\aff{W} \qquad\text{and}\qquad \Hom_{\Z W}(\Z\mathcal{C},\Z\Stb{\Cc}) \cong \Z W
\]
where now $\Z\aff{W}$ and $\Z W$ are first viewed as left $\Z W$-modules by multiplication, and then as right $\End_{\Z W}(\Z\mathcal{C})$-modules via the antimorphism~\eqref{e:iso-comp}. Composing the maps $\Theta$ with the preceding isomorphisms, we obtain three maps
\begin{equation}\label{e:psi-map}
(\Z\Sigma)^W \to \Z W, \qquad (\Z\aff{\Sigma})^W \to \Z\aff{W}, \qquad (\Z\Stb{\Sigma})^W \to \Z W,
\end{equation}
denoted in every case by $\Psi$ and given by
\[
\Psi\left(\sum_F a_F\, F\right) = \sum_F a_F\, w_F,
\]
where in each case $\sum_F a_F\, F$ stands for a $W$-invariant element of $\Z\Sigma$, $\Z\aff{\Sigma}$, or $\Z\Stb{\Sigma}$, and $w_F$ is the element determined by~\eqref{e:wF}, \eqref{e:wF2} or \eqref{e:wF3}. Note that in the second case $w_F\in\aff{W}$ is an element of the affine Weyl group, while in the other cases it is an element of the finite Weyl group $W$. The first map in~\eqref{e:psi-map} is an anti-morphism of rings and the other two are morphisms of right $(\Z\Sigma)^W$-modules, where $\Z\aff{W}$ and $\Z W$ are first viewed as left $\Z W$-modules by multiplication, and then as right $(\Z\Sigma)^W$-modules via the antimorphism $\Psi:(\Z\Sigma)^W \to \Z W$.

Let us analyze the subgroup $(\Z\Sigma)^W$ of $W$-invariants.
In the action of $W$ on $\Sigma$ there is one orbit for each subset $J$ of the set of simple roots $\Delta$; namely, the orbit of the face $C_{\Delta\setminus J}$, which consists of all the faces of color set $J$. Let us denote it by $\Sigma_J$. Then, the group $(\Z\Sigma)^W$ is freely generated by the elements
\[
\sigma_J := \sum_{F\in\Sigma_J} F,
\]
where $J$ runs over the subsets of $\Delta$. Define elements $x_J\in \Z W$ by
\[
x_J := \sum_{ w \in W:\, D(w) \subseteq J} w.
\]
It follows from Proposition~\ref{prp:Dcol} that there is a bijection
\[
\{w\in W \mid D(w)\subseteq J\} \leftrightarrow \Sigma_J
\]
that sends $w$ to $w\cdot C_{\Delta\setminus J}$; its inverse sends $F$ to $w_F$ as in \eqref{e:wF}. Therefore, the map $\Psi:(\Z\Sigma)^W \to \Z W$ satisfies
\[
\Psi( \sigma_J) = x_{J}.
\]
Let
\[
\Sol(\Phi) := \Z\{ x_J \mid J \subseteq \Delta\}
\]
denote the subgroup of $\Z W$ generated by the $x_J$, $J\subseteq\Delta$. Thus, $\Sol(\Phi)$ is the image of $\Psi$. As the latter is an anti-morphism of rings, $\Sol(\Phi)$ is a subring of $\Z W$. This is a result of Solomon~\cite[Theorem 1]{Sol:1976}. In addition, as $J$ varies, the sets $\{w\in W \mid D(w)= J\}$ are disjoint, and therefore the $x_J$ are linearly independent. It follows that the ring $\Sol(\Phi)$ is anti-isomorphic to $(\Z\Sigma)^W$ via $\Psi$. This is a result of Bidigare~\cite{Bid:1997}; the approach followed above is due to Brown~\cite[Section 9.6]{Bro:2000}.

A similar analysis applies to $(\Z\aff{\Sigma})^W$ and $(\Z\Stb{\Sigma})^W$. Given a nonempty subset $J$ of $\aff{\Delta}$ and $\mu\in\Z\Phi^\vee$, let $\aff{\Sigma}_{J,\mu}$ denote the $W$-orbit of the face $\mu + A_{\aff{\Delta}\setminus J}$ in $\aff{\Sigma}$, and let $\Stb{\Sigma}_J$ denote the $W$-orbit of the face $\overline{A_{\aff{\Delta}\setminus J}}$ in $\Stb{\Sigma}$. The groups $(\Z\aff{\Sigma})^W$ and $(\Z\Stb{\Sigma})^W$ are then freely generated by the elements
\[
\aff{\sigma}_{J,\mu} := \sum_{F\in\aff{\Sigma}_{J,\mu}} F \qquad\text{and}\qquad \Stb{\sigma}_J := \sum_{F\in\Stb{\Sigma}_J} F,
\]
respectively. For $J$ and $\mu$ as above, define elements $\aff{x}_{J,\mu}\in\Z\aff{W}$ and $\Stb{x}_J\in\Z W$ by
\[
\aff{x}_{J,\mu} := \sum_{w\in W:\, \aff{D}(w)\subseteq J} (\mu,w),\qquad\text{and}\qquad \Stb{x}_J := \sum_{w\in W:\, \aff{D}(w)\subseteq J} w.
\]
Invoking Proposition~\ref{prp:affDcol} and Corollary~\ref{cor:sDcol}, we obtain that the maps $\Psi:(\Z\aff{\Sigma})^W \to \Z\aff{W}$ and $\Psi:(\Z\Stb{\Sigma})^W \to \Z W$ satisfy
\[
\Psi(\aff{\sigma}_{J,\mu}) = \aff{x}_{J,\mu} \qquad\text{and}\qquad \Psi(\Stb{\sigma}_J) = \Stb{x}_{J}.
\]
Since the $\aff{x}_{J,\mu}$ are linearly independent, and similarly the $\Stb{x}_J$, the map $\Psi$ is injective in every case. Define
\[
\aff{\Sol}(\Phi) := \Z\{ \aff{x}_{J,\mu} \mid \emptyset\neq J \subseteq \aff{\Delta},\, \mu \in \Z\Phi^\vee\} \quad\text{and}\quad \Stb{\Sol}(\Phi) := \Z\{ \Stb{x}_J \mid \emptyset\neq J \subseteq \aff{\Delta}\}.
\]
The preceding shows that these groups are respectively isomorphic to $(\Z\aff{\Sigma})^W$ and $(\Z\Stb{\Sigma})^W$ via $\Psi$. Taking into account the multiplicative properties of the morphisms $\Psi$, we arrive at our main result.

\begin{thm}\label{thm:main}
Let $W$ be a Weyl group.
\begin{enumerate}[(i)]
\item $\aff{\Sol}(\Phi)$ is a left module over the ring $\Sol(\Phi)$. More precisely, it is a submodule of $\Z\aff{W}$, where the latter is a left module over $\Z W$ by multiplication and then over the subring $\Sol(\Phi)$ by restriction.
\item $\Stb{\Sol}(\Phi)$ is a left module over the ring $\Sol(\Phi)$. More precisely, it is a submodule of $\Z W$, where the latter is a left module over the subring $\Sol(\Phi)$ by multiplication.
\item The map $\Psi$ in its three versions
\[
(\Z\Sigma)^W \to \Sol(\Phi), \qquad (\Z\aff{\Sigma})^W \to \aff{\Sol}(\Phi), \qquad\text{and}\qquad (\Z\Stb{\Sigma})^W \to \Stb{\Sol}(\Phi)
\]
constitutes an anti-isomorphism of rings (first version) and an isomorphism of right $(\Z\Sigma)^W$-modules (second and third versions), where $\aff{\Sol}(\Phi)$ and $\Stb{\Sol}(\Phi)$ are viewed first as left $\Sol(\Phi)$-modules and then as right $(\Z\Sigma)^W$-modules via $\Psi:(\Z\Sigma)^W \to \Sol(\Phi)$.
\end{enumerate}
\end{thm}

\begin{ex}
We illustrate the isomorphism between the module of invariant faces of the Steinberg torus and the module of affine descents with an explicit computation. Consider the root system $A_2$ (Figures~\ref{fig:A2-finite} and~\ref{fig:A2}). There are $3$ rays of color $\{\alpha_2\}$ in the finite Coxeter complex $\Sigma$. They constitute the $W$-orbit $\Sigma_{\{\alpha_2\}}$ of the ray $C_{\{\alpha_1\}}$, and thus $\sigma_{\{\alpha_2\}}$ is the sum of these $3$ rays. In the Steinberg torus, there are $9$ edges in total, and $3$ of them constitute the $W$-orbit $\Stb{\Sigma}_{\{\alpha_1,\alpha_2\}}$ of the edge $\overline{A_{\{\alpha_0\}}}$. Thus, $\Stb{\sigma}_{\{\alpha_1,\alpha_2\}}$ is the sum of these $3$ edges. See Figure~\ref{fig:ray-edge}.
\begin{figure}[!h]
\begin{tabular}{c c}
\begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0]
\draw[very thick,draw=red] (-2,0) node[left] {$s_1s_2C_{\{\alpha_1\}}$} --(0,0) --node [pos=.85,left] {$C_{\{\alpha_1\}}$} (0,2);
\draw[very thick,draw=red] (0,0)-- node [pos=.85,left] {$s_2C_{\{\alpha_1\}}$} (2,-2);
\draw (0,0) node {$\bullet$};
\end{tikzpicture}
\hspace{1cm} & \hspace{1cm}
\begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0]
\draw[dashed,draw=blue,very thick] (-2,0) -- (-2,2);
\draw[draw=blue,very thick] (-2,2) -- node[midway,above] {$s_1A_{\{\alpha_0\}}$} (0,2) ;
\draw[dash pattern=on 10pt off 5pt,draw=blue,very thick] (0,2) -- node[midway,above right] {$A_{\{\alpha_0\}}$} (2,0);
\draw[dashed,draw=blue,very thick] (2,0) -- node[midway,below right] {$s_2A_{\{\alpha_0\}}$} (2,-2);
\draw[draw=blue,very thick] (2,-2) -- (0,-2);
\draw[dash pattern=on 10pt off 5pt,draw=blue,very thick] (0,-2) -- (-2,0);
\end{tikzpicture}
\\ & \\
(a) & (b) \\
\end{tabular}
\caption{(a) The rays of color $\{\alpha_2\}$. (b) The edges of color $\{\alpha_1,\alpha_2\}$. }\label{fig:ray-edge}
\end{figure}
The product of one of these edges by each one of the three rays is computed in Figure~\ref{fig:ray-edge2}(a). The result is the edge itself plus the $2$ adjacent chambers. Performing this operation for each of the $3$ edges, we obtain the $3$ edges back plus $6$ chambers. Modulo translations, the chambers tile the torus, as shown in Figure~\ref{fig:ray-edge2}(b). The $6$ chambers constitute the orbit $\Stb{\Sigma}_{\{\alpha_0,\alpha_1,\alpha_2\}}$, whose sum is $\Stb{\sigma}_{\{\alpha_0,\alpha_1,\alpha_2\}}$. In conclusion,
\begin{equation}\label{eq:ray-edge}
\Stb{\sigma}_{\{\alpha_1,\alpha_2\}} \cdot \sigma_{\{\alpha_2\}} = \Stb{\sigma}_{\{\alpha_1,\alpha_2\}} + \Stb{\sigma}_{\{\alpha_0,\alpha_1,\alpha_2\}}.
\end{equation} \begin{figure}[!h] \begin{tabular}{c c} \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=2,>=stealth] \draw[draw=none,fill=blue!20!white] (0,0)--(0,1)--(-1,2)--(-1,1)--(0,0); \draw[very thick,draw=blue] (-1,1)--(0,1); \draw (-1,2)--(1,0)--(1,-1)--(0,-1)--(-1,0)--(-1,2); \draw (-1,1)--(0,0)--(0,-1); \draw (0,0)--(1,0); \draw[very thick,draw=red] (-2,0)--(0,0)--(0,2); \draw[very thick,draw=red] (0,0) node {$\bullet$} -- (2,-2); \draw[->,very thick,red] (-.61,1)--(-.61,1.3); \draw[->,very thick,red] (-.61,1)--(-.91,1); \draw[->,very thick,red] (-.61,1)--(-.31,.7); \draw (-.61,1) node {$\bullet$}; \end{tikzpicture} \hspace{2cm} & \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=2] \draw[draw=none,fill=blue!20!white] (-1,0)--(-1,1)--(0,1)--(1,0)--(1,-1)--(0,-1)--(-1,0); \draw[draw=none,pattern color=white,pattern=bricks] (0,0)--(0,1)--(1,0)--(0,0); \draw[draw=none,pattern color=white,pattern=bricks] (0,0)--(0,-1)--(-1,0)--(0,0); \draw[draw=none,pattern color=white,pattern=crosshatch] (0,0)--(-1,1)--(-1,0)--(0,0); \draw[draw=none,pattern color=white,pattern=crosshatch] (0,0)--(1,0)--(1,-1)--(0,0); \draw[draw=blue,very thick] (-1,1)--(0,1); \draw[draw=blue,very thick] (1,-1)--(0,-1); \draw[draw=blue,very thick,dash pattern=on 10pt off 5pt] (-1,0)--(0,-1); \draw[draw=blue,very thick,dash pattern=on 10pt off 5pt] (1,0)--(0,1); \draw[draw=blue,very thick,dashed] (-1,0)--(-1,1); \draw[draw=blue,very thick,dashed] (1,-1)--(1,0); \draw (-1,1)--(1,-1); \draw (-1,0)--(1,0); \draw (0,-1)--(0,1); \end{tikzpicture} \\ & \\ (a) & (b) \\ \end{tabular} \caption{(a) Walking from an edge along $3$ rays. (b) $3$ edges and $6$ chambers. }\label{fig:ray-edge2} \end{figure} We turn to the Weyl group. Let $s_1$ and $s_2$ denote the reflections corresponding to the simple roots $\alpha_1$ and $\alpha_2$. The affine descent set $\aff{D}(w)$ of each group element $w\in W$ is shown below. The descent set is determined by $D(w) = \aff{D}(w)\setminus\{\alpha_0\}$. \begin{center} \begin{tabular}{r|l} $w$ & $\aff{D}(w)$ \\ \hline $\mathrm{id}$ & $\alpha_0$ \\ $s_1$ & $\alpha_0, \alpha_1$ \\ $s_2$ & $\alpha_0, \alpha_2$ \\ $s_1s_2$ & $\alpha_2$ \\ $s_2s_1$ & $\alpha_1$ \\ $s_1s_2s_1$ & $\alpha_1, \alpha_2$ \end{tabular} \end{center} It follows that \[ x_{\{\alpha_2\}} = \mathrm{id} + s_2 + s_1s_2 \qquad\text{and}\qquad \Stb{x}_{\{\alpha_1,\alpha_2\}} = s_1s_2 + s_2s_1 + s_1s_2s_1. \] Multiplying out in the Weyl group ring we find that \begin{multline}\label{eq:ray-edge2} x_{\{\alpha_2\}} \cdot \Stb{x}_{\{\alpha_1,\alpha_2\}} = s_1s_2 + s_2s_1 + s_1s_2s_1 \\ + \mathrm{id} + s_1 + s_2 + s_1s_2 + s_2s_1 + s_1s_2s_1 = \Stb{x}_{\{\alpha_1,\alpha_2\}} + \Stb{x}_{\{\alpha_0,\alpha_1,\alpha_2\}}. \end{multline} A comparison of \eqref{eq:ray-edge} and \eqref{eq:ray-edge2} witnesses the isomorphism $(\Z\Stb{\Sigma})^W\cong \Stb{\Sol}(\Phi)$. \end{ex} \section{Combinatorial models}\label{sec:models} We analyze the constructions of the preceding sections for the case of the root systems of types $A$ and $C$. We review the combinatorial description of the finite Coxeter complex of type $A$ in terms of set compositions and provide a similar model for the Steinberg torus in terms of \emph{spin necklaces}. We describe the module structure of the latter in these terms. We also discuss the situation for the root systems of type $C$, more briefly. Let $\{\varepsilon_1,\dots,\varepsilon_n\}$ denote the canonical basis of $\R^n$. 
We equip this space with the standard inner product, for which the $\varepsilon_i$ are orthonormal. \subsection{The root system $A_{n-1}$}\label{ss:rootA} The root system in question is \[ A_{n-1} := \{\varepsilon_i-\varepsilon_j \mid 1\leq i\neq j\leq n\}. \] For the ambient space we take $V_n:= \{x \in \R^n \mid \sum x_i = 0\}$, the subspace spanned by $A_{n-1}$. The Weyl group of $A_{n-1}$ may be identified with $S_n$, the symmetric group of permutations of $[n]$, via its action on the canonical basis of $\R^n$. The reflection associated to the root $\varepsilon_i-\varepsilon_j$ exchanges $\varepsilon_i$ for $\varepsilon_j$; it thus corresponds to the transposition $(i,j)$. See Figure~\ref{fig:ij}. \begin{figure}[!h] \[ \begin{tikzpicture} \draw (0,0) node (0) {$\bullet$}; \draw[->,very thick] (0,0)--(0,1) node[above] {$\varepsilon_i$}; \draw[->,very thick] (0,0)--(1,0) node[right] {$\varepsilon_j$}; \draw[->,very thick] (0,0)--(-1,1) node[above] {$\varepsilon_i-\varepsilon_j$}; \draw[very thick] (-1,-1)--(1,1) node[above right] {$H_{ij}$}; \end{tikzpicture} \] \caption{A root of type $A$.} \label{fig:ij} \end{figure} For the simple roots, we choose \[ \alpha_i := \varepsilon_{i+1}-\varepsilon_i, \] with $1\leq i\leq n-1$. Let $s_i$ denote the associated simple reflection. It corresponds to the elementary transposition $(i,i+1)$. The positive roots are then $\varepsilon_j-\varepsilon_i$ for $i<j$, and the lowest root $\alpha_0=-\aff{\alpha}$ is $\varepsilon_1-\varepsilon_n$. See Figure~\ref{fig:A2-finite} for an illustration. We let $w$ denote both an element of the Weyl group and the corresponding permutation. We write permutations in one-line form $w=w_1w_2\cdots w_n$, where $w_i=w(i)$, and adopt the convention that other subscripts are taken modulo $n$, so that $w_0=w_n$ and $w_{n+1}=w_1$. In these terms, we have that $w(\alpha_i)=\varepsilon_{w_{i+1}}-\varepsilon_{w_i}$, and therefore \[ w(\alpha_i) < 0 \iff w_{i+1}<w_i, \] for $i=0,\ldots,n-1$. We identify $\Delta$ with $\{1,\ldots,n-1\}$ and $\aff{\Delta}$ with $\{1,\ldots,n-1,n\}$ via \[ \alpha_0 \leftrightarrow n \quad\text{and}\quad \alpha_i\leftrightarrow i \] for $i=1,\ldots,n-1$. Then, the descent sets (ordinary and affine) of $w$ are \[ D(w) = \{i \mid 1\leq i\leq n-1,\,w_i>w_{i+1}\} \quad\text{and}\quad \aff{D}(w) = \{i \mid 1\leq i\leq n,\,w_i>w_{i+1}\}. \] For example, $D(25413)=\{2,3\}$ and $\aff{D}(25413)=\{2,3,5\}$. The hyperplane orthogonal to $\varepsilon_i-\varepsilon_j$ is $H_{ij} := \{ x \in V_n \mid x_i = x_j \}$. The Coxeter arrangement \[ \Hy(A_{n-1}) = \{ H_{ij} \mid 1\leq i < j \leq n\} \] is the \emph{braid arrangement}. The affine Coxeter arrangement is \[ \aff{\Hy}(A_{n-1}) = \{ H_{ij,k} \mid 1\leq i < j \leq n, k \in \Z \}, \] where $H_{ij,k} = \{ x \in V_n \mid x_j - x_i = k \}$. For any root $\alpha\in A_{n-1}$, we have $\br{ \alpha,\alpha} =2$. Therefore, $\Phi^\vee=\Phi$. The coroot lattice is thus spanned by the (simple) roots, and we have $\Z\Phi^\vee = \{x \in \Z^n \mid \sum x_i = 0\}$. \subsection{Faces of the finite Coxeter complex of type $A$}\label{ss:CCA} A \emph{composition} of a set $I$ is a sequence $(S_1,\ldots,S_k)$ of disjoint nonempty subsets of $I$ such that $S_1\cup\cdots\cup S_k=I$. Each $S_i$ is a \emph{block} of the composition. 
We represent such a composition by means of a string as follows \[ \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$S_1$} -- (1,0) node[state1] {$S_2$} -- (2,0) node[fill=white] {$\cdots$} -- (3,0) node[state1] {$S_k$}; \end{tikzpicture}. \] The faces of the Coxeter complex $\Sigma(A_{n-1})$ are in correspondence with compositions of the set $[n]$. A composition corresponds to the face determined by the (in)equalities \begin{align*} x_i=x_j & \text{ if $i$ and $j$ belong to the same block,}\\ x_i<x_j & \text{ if the block of $i$ precedes that of $j$.} \end{align*} The faces in $\Sigma(A_2)$, labeled with set compositions, are shown in Figure \ref{fig:A2Coxeter}. The picture shows the lines in $\Hy(A_{2})$ drawn in $V_3 \cong \R^2$. \begin{figure} \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=2] \draw (0,1) node[inner sep=0,rotate=60] (12) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=red,inner sep=1,draw] \draw (0,0) node[state1] {\color{white}{12}} -- (1,0) node[state1] {\color{white}{3}}; \end{tikzpicture} }; \draw (1,0) node[inner sep=0] (1) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {23}; \end{tikzpicture} }; \draw (1,-1) node[inner sep=0,rotate=-60] (13) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=red,inner sep=1,draw] \draw (0,0) node[state1] {\color{white}{13}} -- (1,0) node[state1] {\color{white}{2}}; \end{tikzpicture} }; \draw (0,-1) node[inner sep=0,rotate=60] (3) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {3} -- (1,0) node[state1] {12}; \end{tikzpicture} }; \draw (-1,0) node[inner sep=0] (23) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=red,inner sep=1,draw] \draw (0,0) node[state1] {\color{white}{23}} -- (1,0) node[state1] {\color{white}{1}}; \end{tikzpicture} }; \draw (-1,1) node[inner sep=0,rotate=-60] (2) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {2} -- (1,0) node[state1] {13}; \end{tikzpicture} }; \draw (0,0) node[ellipse,inner sep=1,draw=black,fill=yellow] (0) {123}; \draw (1,1) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {2} -- (2,0) node[state1] {3}; \end{tikzpicture} }; \draw (2,-1) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {3} -- (2,0) node[state1] {2}; \end{tikzpicture} }; \draw (1,-2) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {3} -- (1,0) node[state1] {1} -- (2,0) node[state1] {2}; \end{tikzpicture} }; \draw (-1,-1) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {3} -- (1,0) node[state1] {2} -- (2,0) node[state1] {1}; \end{tikzpicture} }; \draw (-2,1) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {2} -- (1,0) node[state1] {3} -- (2,0) node[state1] {1}; \end{tikzpicture} }; \draw (-1,2) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {2} -- (1,0) node[state1] {1} -- (2,0) node[state1] {3}; \end{tikzpicture} }; \draw[green,very thick] (-2,2)--(2)--(0)--(1)--(2,0); 
\draw[green,very thick] (0,-2)--(3)--(0); \draw[red,very thick] (-2,0)--(23)--(0)--(12)--(0,2); \draw[red,very thick] (2,-2)--(13)--(0); \end{tikzpicture} \caption{The faces in $\Sigma(A_2)$, with colors corresponding to $W$-orbits.} \label{fig:A2Coxeter} \end{figure} The partial order on faces (given by inclusion of closures) corresponds to refinement of set compositions. The finest set compositions are those for which all blocks are singletons. The chambers thus correspond to linear orders on $[n]$, which in turn are identified with permutations of $[n]$. With the preceding choices, the fundamental chamber corresponds to the identity permutation, and the action of the Weyl group $W(A_{n-1})$ on chambers corresponds to the action of $S_n$ on itself by left multiplication. More generally, the action of the Weyl group on faces translates as follows: if the face $F$ corresponds to the composition $(S_1,\ldots,S_k)$, then $w\cdot F$ corresponds to $\bigl(w(S_1),\ldots,w(S_k)\bigr)$. The color set of $F$ is \[ \col(F)=\{a_1,a_1+a_2,\ldots,a_1+a_2+\cdots+a_{k-1}\}\subseteq\{1,\ldots,n-1\}, \] where $a_i:=\abs{S_i}$. The faces of color $J$ constitute the $W$-orbit $\Sigma_J$ and correspond to the set compositions with block sizes prescribed by $J$. Figure \ref{fig:A2Coxeter} shows the orbits in $\Sigma(A_2)$. For example, the rays in red constitute the orbit of color set $\{2\}$, corresponding to the set compositions with block sizes $(2,1)$. The permutation $w_F$ associated to the face $F$ as in~\eqref{e:wF} (or as in Proposition~\ref{prp:Dcol}) is obtained by listing the blocks in the given order, and writing the elements in each block in increasing order. For example, if $F\in\Sigma(A_4)$ corresponds to the composition $ \begin{tikzpicture}[baseline=-2pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$134$} -- (1,0) node[state1] {$5$} -- (2,0) node[state1] {$2$}; \end{tikzpicture} $, then $w_F=13452\in S_5$. The sign vector of a face $F$ has entries \begin{equation}\label{eq:signF} \sigma_{ij}(F) = \begin{cases} 0 & \text{if $i$ and $j$ are in the same block,}\\ + & \text{if the block of $i$ precedes that of $j$,}\\ - & \text{if the block of $i$ succeeds that of $j$.} \end{cases} \end{equation} \subsection{Faces of the Steinberg torus of type $A$}\label{ss:STA} Let $I$ be a finite set. A \emph{spin necklace on $I$} consists of a partition of the set $I$ (into disjoint nonempty blocks), a cyclic order on the set of blocks, and a labeling of the edges of the cycle with integers from 1 to $n=|I|$ satisfying the following condition. Let $B$ be a block of the partition and $i$ and $j$ the labels of the edges of the cycle incident to $B$, with $i$ coming before $j$ when we read the labels according to the cyclic order. Then \begin{equation}\label{e:cyc-lab} j\equiv i+\abs{B} \mod n. \end{equation} In our pictures, the cyclic order is always represented clockwise. 
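Condition~\eqref{e:cyc-lab} is easy to check mechanically. As an illustration only, the following Python sketch encodes a spin necklace as a list of pairs, each consisting of a block and the label of its outgoing edge, read in the (clockwise) cyclic order; this encoding and the function name are ours and not part of the formal development. The sketch validates the two examples displayed next.
\begin{verbatim}
def is_spin_necklace(necklace, n):
    # `necklace` is a list of (block, out_label) pairs in cyclic
    # order; out_label is the label of the edge leaving the block.
    blocks = [set(b) for b, _ in necklace]
    labels = [lab for _, lab in necklace]
    # The blocks must partition {1, ..., n}.
    if sorted(x for b in blocks for x in b) != list(range(1, n + 1)):
        return False
    # Around the cycle, the outgoing label of each block must equal
    # its incoming label plus the block size, modulo n.
    return all((labels[t - 1] + len(blocks[t]) - labels[t]) % n == 0
               for t in range(len(necklace)))

# The two spin necklaces displayed below (blocks read clockwise):
spin1 = [({1, 3, 5}, 1), ({2}, 2), ({4, 6}, 4)]
spin2 = [({1, 3, 5}, 6), ({2}, 1), ({4, 6}, 3)]
assert is_spin_necklace(spin1, 6) and is_spin_necklace(spin2, 6)
\end{verbatim}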
For instance, the following is a spin necklace on the set $[6]$: \begin{equation}\label{e:spin1} \begin{tikzpicture}[node distance=.5cm,baseline=0] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [below left=of a,yshift=10pt] {$46$}; \node[state] (c) [above=of a] {$135$}; \node[state] (e) [below right=of a,yshift=10pt] {$2$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $4$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} \end{equation} The following is a spin necklace which differs from the previous one only in the labeling: \begin{equation}\label{e:spin2} \begin{tikzpicture}[node distance=.5cm,baseline=0] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [below left=of a,yshift=10pt] {$46$}; \node[state] (c) [above=of a] {$135$}; \node[state] (e) [below right=of a,yshift=10pt] {$2$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $6$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $1$} (b); \end{tikzpicture} \end{equation} Given a partition and a cyclic order on its blocks, a spin necklace is uniquely determined by the choice of one edge label (the remaining labels being determined by \eqref{e:cyc-lab}), and this choice is arbitrary. Note also that all edge labels in a spin necklace are distinct modulo $n$. This is because as we traverse the cycle from one edge to another, the labels increase by the total size of the intermediate blocks, which is an integer strictly between $0$ and $n$. A spin necklace has a distinguished block, the \emph{clasp}, defined as follows. Choose the edge labels from $\{1,\ldots,n\}$. Since the edge labels increase cyclically, the maximum label occurs immediately before the minimum label somewhere along the necklace. This intermediate block is the clasp of the necklace. In example \eqref{e:spin1}, the clasp is the block $\{1,3,5\}$. In example \eqref{e:spin2}, the clasp is $\{2\}$. The clasp may occur at a block of any size. Given a spin necklace, we can contract any edge (and remove its label) to obtain another spin necklace. If the edge connected blocks $B$ and $C$, then in the new spin necklace there is one new block $B\cup C$. Note that condition~\eqref{e:cyc-lab} is verified by the new necklace. For example, the spin necklace on the right below is obtained by contracting one edge from the one on the left. 
\[ \begin{tikzpicture}[node distance=.5cm,baseline=0] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [below left=of a,yshift=10pt] {$46$}; \node[state] (c) [above=of a] {$135$}; \node[state] (e) [below right=of a,yshift=10pt] {$2$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $6$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $1$} (b); \end{tikzpicture} \qquad \begin{tikzpicture}[node distance=.3cm,baseline=-7] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [below=of a] {$246$}; \node[state] (c) [above=of a] {$135$}; \path (b) edge[bend left=40] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=40] node[midway,right] {\footnotesize $6$} (b); \end{tikzpicture} \] There is a partial order on the set of spin necklaces on $I$ in which a spin necklace precedes another if the former can be obtained from the latter by contracting some of its edges. This is such that given a spin necklace, the elements below it form a poset isomorphic to the poset of nonempty subsets of its edge set. The minimal elements of this partial order are the spin necklaces with one block; there are $n$ of them. For example, when $n=4$ they are: \[ \begin{tikzpicture} \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \draw (0,0) node[state] (a) {$1234$}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $1$} (a); \end{tikzpicture} \begin{tikzpicture} \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \draw (0,0) node[state] (a) {$1234$}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $2$} (a); \end{tikzpicture} \begin{tikzpicture} \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \draw (0,0) node[state] (a) {$1234$}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $3$} (a); \end{tikzpicture} \begin{tikzpicture} \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \draw (0,0) node[state] (a) {$1234$}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $4$} (a); \end{tikzpicture}. \] The maximal elements are the spin necklaces for which each block is a singleton. There are then $n$ edges and each integer from 1 to $n$ occurs as a label. If we read the elements in the blocks starting at the clasp and proceeding cyclically, we obtain a bijection between the set of maximal elements and permutations of $[n]$. 
\[
\begin{tikzpicture}[node distance=.7cm,baseline=0pt]
\tikzstyle{state}=[ellipse,draw=black,inner sep=1pt]
\node (a) {};
\node[state] (b) [right=of a,xshift=10pt] {$5$};
\node[state] (c) [below right=of a] {$2$};
\node[state] (d) [below left=of a] {$6$};
\node[state] (e) [left=of a,xshift=-10pt] {$4$};
\node[state] (f) [above left=of a] {$1$};
\node[state] (g) [above right=of a] {$3$};
\path (b) edge[bend left=20] node[midway,xshift=-1pt,yshift=-2pt,right] {\footnotesize $6$} (c);
\path (c) edge[bend left=20] node[midway,below] {\footnotesize $1$} (d);
\path (d) edge[bend left=20] node[midway,xshift=1pt,yshift=-2pt,left] {\footnotesize $2$} (e);
\path (e) edge[bend left=20] node[midway,xshift=1pt,yshift=2pt,left] {\footnotesize $3$} (f);
\path (f) edge[bend left=20] node[midway,above] {\footnotesize $4$} (g);
\path (g) edge[bend left=20] node[midway,xshift=-1pt,yshift=2pt,right] {\footnotesize $5$} (b);
\end{tikzpicture}
\quad\leftrightarrow\quad 264135
\]
The faces of the Steinberg torus $\Stb{\Sigma}(A_{n-1})$ are in correspondence with spin necklaces on the set $[n]$, by means of the following procedure. Given a spin necklace, first locate its clasp $C$. Let $\ell$ and $r$ be the incoming and outgoing labels, respectively (with respect to the cyclic order). Then, by~\eqref{e:cyc-lab}, we have
\[
\ell+\abs{C}=r+n.
\]
Split $C$ into two blocks $C_1$ and $C_2$, consisting respectively of the first $n-\ell$ elements of $C$ and of the last $r$ elements. (The elements of $C$ are ordered by the standard order of $[n]$.) Note that $C_1$ may be empty (since $\ell$ may equal $n$), but $C_2$ cannot (since $r\geq 1$). Schematically,
\[
\begin{tikzpicture}[node distance=.5cm,baseline=0pt]
\tikzstyle{state}=[ellipse,draw=black, inner sep=1pt]
\node (a) {};
\node[state] (b) [left=of a] {$L$};
\node[state] (c) [above=of a] {$C_1\mid C_2$};
\node[state] (d) [right=of a] {$R$};
\path (b) edge[bend left=20] node[midway,left] {\footnotesize $\ell$} (c);
\path (c) edge[bend left=20] node[midway,right] {\footnotesize $r$} (d);
\path[dashed] (d) edge[bend left=70] (b);
\end{tikzpicture}
\,.
\]
We then turn the necklace into a string by listing the blocks in the given cyclic order, starting with $C_2$ and ending with $C_1$ (and dropping the edge labels):
\[
\begin{tikzpicture}
\tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw]
\draw (0,0) node[state1] {$C_2$} -- (1,0) node[state1] {$R$} -- (2,0) node[fill=white] {$\cdots$} -- (3,0) node[state1] {$L$} -- (4,0) node[state1] {$C_1$};
\end{tikzpicture}.
\]
We refer to this list as the \emph{split necklace}. If $C_1$ is nonempty, the split necklace is a composition of $[n]$. If $C_1$ is empty, we keep track of this event by displaying the list as follows:
\[
\begin{tikzpicture}
\tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw]
\draw (0,0) node[state1] {$C_2$} -- (1,0) node[state1] {$R$} -- (2,0) node[fill=white] {$\cdots$} -- (3,0) node[state1] {$L$} -- (4,0) node[fill=white] {};
\end{tikzpicture}.
\]
Note that the spin necklace can be reconstructed from the split necklace. The given spin necklace corresponds to the $\Z\Phi^\vee$-orbit of the affine face determined by the (in)equalities
\begin{align*}
x_i &=x_j \text{ if $i$ and $j$ belong to the same block,}\\
x_i &<x_j \text{ if the block of $i$ precedes that of $j$,}\\
x_i &=x_j+1 \text{ if $i$ belongs to $C_1$ and $j$ to $C_2$,}\\
x_i &<x_j+1 \text{ if $C_1=\emptyset$, $i$ belongs to $L$ and $j$ to $C_2$.}
\end{align*}
The blocks in question and their relative order are those of the split necklace.
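The passage from a spin necklace to its split necklace is algorithmic. Continuing the ad hoc encoding of the earlier sketch (again an illustration only, with a function name of our choosing), the clasp is found as the block whose outgoing edge carries the minimum label, so that its incoming edge carries the maximum:
\begin{verbatim}
def split_necklace(necklace, n):
    # The clasp is the block whose outgoing label is minimal; its
    # incoming label is then maximal.
    labels = [lab for _, lab in necklace]
    t = labels.index(min(labels))
    clasp = sorted(necklace[t][0])
    ell = labels[t - 1]                 # incoming label of the clasp
    C1, C2 = clasp[:n - ell], clasp[n - ell:]
    rest = [sorted(b) for b, _ in necklace[t + 1:] + necklace[:t]]
    # C1 may be empty (when ell = n); None records that event.
    return [C2] + rest + [C1 if C1 else None]

spin1 = [({1, 3, 5}, 1), ({2}, 2), ({4, 6}, 4)]
spin2 = [({1, 3, 5}, 6), ({2}, 1), ({4, 6}, 3)]
print(split_necklace(spin1, 6))   # [[5], [2], [4, 6], [1, 3]]
print(split_necklace(spin2, 6))   # [[2], [4, 6], [1, 3, 5], None]
\end{verbatim}
The two printed outputs match the split necklaces displayed next.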
For the spin necklaces~\eqref{e:spin1} and~\eqref{e:spin2}, the split necklaces are \[ \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$5$} -- (1,0) node[state1] {$2$} -- (2.1,0) node[state1] {$46$} -- (3.3,0) node[state1] {$13$}; \end{tikzpicture} \qquad\text{and}\qquad \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$2$} -- (1,0) node[state1] {$46$} -- (2.1,0) node[state1] {$135$} -- (3.4,0) node[fill=white] {}; \end{tikzpicture} , \] and the conditions defining the faces are \[ x_5<x_2<x_4=x_6<x_1=x_3=x_5+1 \] and \[ x_2<x_4=x_6<x_1=x_3=x_5<x_2+1, \] respectively. The faces in $\Stb{\Sigma}(A_2)$, labeled with spin necklaces, are shown in Figure \ref{fig:A2Steinberg}. \begin{figure}[!ht] \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=6] \draw[blue,very thick] (-1,1)-- node[midway,inner sep=0,fill=white] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$23$}}; \node[state] (c) [right=of a] {\color{white}{$1$}}; \path[black] (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path[black] (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} } (0,1); \draw[blue,very thick] (1,-1)-- node[midway,inner sep=0,fill=white] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$23$}}; \node[state] (c) [right=of a] {\color{white}{$1$}}; \path[black] (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path[black] (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} } (0,-1); \draw[blue,very thick] (-1,0)-- node[midway,inner sep=0,fill=white,rotate=-60] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$13$}}; \node[state] (c) [right=of a] {\color{white}{$2$}}; \path[black] (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path[black] (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} } (0,-1); \draw[blue,very thick] (1,0)-- node[midway,inner sep=0,fill=white,rotate=-60] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$13$}}; \node[state] (c) [right=of a] {\color{white}{$2$}}; \path[black] (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path[black] (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} } (0,1); \draw[blue,very thick] (-1,0)-- node[midway,inner sep=0,fill=white,rotate=60] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$12$}}; \node[state] (c) [right=of a] {\color{white}{$3$}}; \path[black] (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path[black] (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} } (-1,1); \draw[blue,very thick] (1,-1)-- node[midway,inner sep=0,fill=white,rotate=60] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$12$}}; 
\node[state] (c) [right=of a] {\color{white}{$3$}}; \path[black] (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path[black] (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} } (1,0); \draw[green,very thick] (-1,1)-- node[midway,inner sep=0,fill=white,rotate=-60] { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,fill=green,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {$2$}; \node[state] (c) [right=of a] {$13$}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (b); \end{tikzpicture} } (0,0); \draw[red,very thick] (0,0)-- node[midway,inner sep=0,fill=white,rotate=-60] { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,fill=red,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$13$}}; \node[state] (c) [right=of a] {\color{white}{$2$}}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $2$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (b); \end{tikzpicture} } (1,-1); \draw[red,very thick] (-1,0)-- node[midway,inner sep=0,fill=white] { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,fill=red,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$23$}}; \node[state] (c) [right=of a] {\color{white}{$1$}}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $2$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (b); \end{tikzpicture} } (0,0); \draw[green,very thick] (0,0)-- node[midway,inner sep=0,fill=white] { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,fill=green,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {$1$}; \node[state] (c) [right=of a] {$23$}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (b); \end{tikzpicture} } (1,0); \draw[green,very thick] (0,-1)-- node[midway,inner sep=0,fill=white,rotate=60] { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,draw,fill=green,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {$3$}; \node[state] (c) [right=of a] {$12$}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (b); \end{tikzpicture} } (0,0); \draw[red,very thick] (0,0)-- node[midway,inner sep=0,fill=white,rotate=60] { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,draw,fill=red,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$12$}}; \node[state] (c) [right=of a] {\color{white}{$3$}}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $2$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (b); \end{tikzpicture} } (0,1); \draw (.33,.33) node { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$3$}; \node[state] (c) [above=of a] {$1$}; \node[state] (e) [right=of a,xshift=-5pt] {$2$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (.67,-.33) node { \begin{tikzpicture}[node distance=.5cm] 
\tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$2$}; \node[state] (c) [above=of a] {$1$}; \node[state] (e) [right=of a,xshift=-5pt] {$3$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (.33,-.67) node { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$2$}; \node[state] (c) [above=of a] {$3$}; \node[state] (e) [right=of a,xshift=-5pt] {$1$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (-.33,-.33) node { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$1$}; \node[state] (c) [above=of a] {$3$}; \node[state] (e) [right=of a,xshift=-5pt] {$2$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (-.67,.33) node { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$1$}; \node[state] (c) [above=of a] {$2$}; \node[state] (e) [right=of a,xshift=-5pt] {$3$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (-.33,.67) node { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$3$}; \node[state] (c) [above=of a] {$2$}; \node[state] (e) [right=of a,xshift=-5pt] {$1$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $3$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (0,0) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black,fill=yellow,inner sep=1pt] \draw (0,1) node {}; \draw (0,0) node[state] (a) {$123$}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $3$} (a); \end{tikzpicture} }; \draw (0,1) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \draw (0,1) node {}; \draw (0,0) node[state] (a) {\color{white}{123}}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $2$} (a); \end{tikzpicture} }; \draw (-1,0) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \draw (0,1) node {}; \draw (0,0) node[state] (a) {\color{white}{123}}; \draw (a) .. controls (-1,1) and (1,1) .. 
node[midway, above] {\footnotesize $2$} (a); \end{tikzpicture} }; \draw (1,-1) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \draw (0,0) node[state] (a) {\color{white}{123}}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $2$} (a); \end{tikzpicture} }; \draw (1,0) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black,fill=cyan,inner sep=1pt] \draw (0,0) node[state] (a) {\color{white}{123}}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $1$} (a); \end{tikzpicture} }; \draw (0,-1) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black, fill=cyan,inner sep=1pt] \draw (0,0) node[state] (a) {\color{white}{123}}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $1$} (a); \end{tikzpicture} }; \draw (-1,1) node[fill=white,inner sep=0,scale=.75] { \begin{tikzpicture} \useasboundingbox (-.4,-.2) rectangle (.4,.75); \tikzstyle{state}=[ellipse,draw=black,fill=cyan,inner sep=1pt] \draw (0,0) node[state] (a) {\color{white}{123}}; \draw (a) .. controls (-1,1) and (1,1) .. node[midway, above] {\footnotesize $1$} (a); \end{tikzpicture} }; \end{tikzpicture} \caption{The faces of the Steinberg torus $\Stb{\Sigma}(A_2)$, with colors corresponding to $W$-orbits. Note the identifications along the boundary.} \label{fig:A2Steinberg} \end{figure} The partial order on faces (given by inclusion of closures) corresponds to the partial order on spin necklaces given by edge contraction. On the other hand, sliding consecutive beads (blocks) past each other in the necklace corresponds to walking between adjacent chambers. A permutation $w$ acts on a spin necklace by changing each block $B$ into $w(B)$, and keeping the edge labels. This corresponds to the action of the Weyl group on faces of the torus, and the set of edge labels of the necklace corresponds to the color set of the face (under the identification between $\aff{\Delta}$ and $\{1,\ldots,n\}$.) The orbits are thus parametrized by nonempty subsets of $\{1,\ldots,n\}$, with the orbit $\Stb{\Sigma}_J$ consisting of the spin necklaces with edge label set $J$. Figure \ref{fig:A2Steinberg} shows the orbits in $\Stb{\Sigma}(A_2)$. For example, the edges in red constitute the orbit of color set $\{2,3\}$. The permutation associated to the torus face as in~\eqref{e:wF3} (or as in Corollary~\ref{cor:sDcol}) is obtained by listing the blocks in the split necklace from left to right, and writing the elements in each block in increasing order. For example, the permutations associated to the faces (spin necklaces)~\eqref{e:spin1} and~\eqref{e:spin2} are $524613$ and $246135$, respectively. Recall an affine face $F$ has both an \emph{expanded} and a \emph{compact} sign vector (see Equations \eqref{eq:expanded} and \eqref{eq:compact}). For each pair $i < j$, there is a critical value $k = k_{ij}(F)$ (see \eqref{e:signj}), and the compact sign vector consists of the values $k_{ij}(F)$ along with the signs relative to the corresponding hyperplanes, which are either $+$ or $0$. 
In terms of spin necklaces these signs are simple: \begin{equation}\label{eq:signFbar} \sigma_{ij,k}(F) = \begin{cases} 0 &\mbox{ if $i$ and $j$ are in the same block,} \\ + &\mbox{ if $i$ and $j$ are in different blocks.} \\ \end{cases} \end{equation} \subsection{Products of faces in type $A$}\label{ss:prodA} We turn to a combinatorial description for the right module structure of $\Stb{\Sigma}(A_{n-1})$ over $\Sigma(A_{n-1})$. First, recall that the product of two faces in the finite Coxeter complex admits the following description. Let $F$ and $G\in\Sigma(A_{n-1})$ be the faces corresponding to the compositions $ \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$S_1$} -- (1,0) node[fill=white,inner sep=1] {$\cdots$} -- (2,0) node[state1] {$S_p$}; \end{tikzpicture} $ and $ \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$T_1$} -- (1,0) node[fill=white,inner sep=1] {$\cdots$} -- (2,0) node[state1] {$T_q$}; \end{tikzpicture} $ of $[n]$. Then the Tits product $FG\in\Sigma(A_{n-1})$ corresponds to the composition of $[n]$ whose blocks are the pairwise intersections $S_i \cap T_j$, arranged lexicographically: \[ \begin{tikzpicture}[baseline=-2.5pt,xscale=1.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$S_1\cap T_1$} -- (1,0) node[fill=white,inner sep=1] {$\cdots$} -- (2,0) node[state1] {$S_1\cap T_q$} -- (3,0) node[fill=white,inner sep=1] {$\cdots$} -- (4,0) node[state1] {$S_p\cap T_1$} -- (5,0) node[fill=white,inner sep=1] {$\cdots$} -- (6,0) node[state1] {$S_p\cap T_q$}; \end{tikzpicture} , \] with the understanding that empty intersections are removed from the string. For example, \[ \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$3567$} -- (1,0) node[state1] {$4$} -- (2,0) node[state1] {$12$}; \end{tikzpicture} \bm\cdot \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$26$} -- (1,0) node[state1] {$35$} -- (2,0) node[state1] {$17$} -- (3,0) node[state1] {$4$}; \end{tikzpicture} = \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$6$} -- (1,0) node[state1] {$35$} -- (2,0) node[state1] {$7$} -- (3,0) node[state1] {$4$} -- (4,0) node[state1] {$2$} -- (5,0) node[state1] {$1$}; \end{tikzpicture} \,. \] The agreement between the geometric definition of the Tits product and the combinatorial procedure is illustrated in Figure~\ref{fig:prodA2}.
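Since the rule above is purely combinatorial, such computations are easy to check mechanically. The following Python sketch (ours; the function name \texttt{tits\_product} is our own) implements the lexicographic-intersection rule and confirms the example just displayed.
\begin{verbatim}
# Sketch (our code): the Tits product of two set compositions,
# computed by the lexicographic-intersection rule described above.

def tits_product(F, G):
    """F, G: set compositions given as lists of blocks (Python sets).
    Intersect each block of F with each block of G, keeping the
    nonempty intersections in lexicographic order of (i, j)."""
    return [S & T for S in F for T in G if S & T]

# The worked example above: (3567|4|12) . (26|35|17|4) = (6|35|7|4|2|1).
F = [{3, 5, 6, 7}, {4}, {1, 2}]
G = [{2, 6}, {3, 5}, {1, 7}, {4}]
assert tits_product(F, G) == [{6}, {3, 5}, {7}, {4}, {2}, {1}]
\end{verbatim}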
\begin{figure}[!ht] \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=2] \draw (1,0) node[inner sep=0] (1) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {23}; \end{tikzpicture} }; \draw (0,-1) node[inner sep=0,rotate=60] (3) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {3} -- (1,0) node[state1] {12}; \end{tikzpicture} }; \draw (2,-1) node { \begin{tikzpicture}[scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {3} -- (2,0) node[state1] {2}; \end{tikzpicture} }; \draw[very thick] (-2,2)--(0,0); \draw[very thick] (0,-2)--(3)--(0,0)--(1)--(2,0); \draw[very thick] (-2,0)--(0,0)--(0,2); \draw[very thick] (2,-2)--(0,0); \draw (0,0) node[circle,fill] (0) {}; \draw[blue,dashed] (1.75,0) node {$\bullet$} -- (0,-1.75) node {$\bullet$}; \draw[blue,->,>=stealth,line width=2] (1.75,0)--(1.5,-.25); \end{tikzpicture} \vspace{.5cm} \[ \begin{tikzpicture}[baseline=-.1cm] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {23}; \end{tikzpicture} \bm\cdot \begin{tikzpicture}[baseline=-.1cm] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {3} -- (1,0) node[state1] {12}; \end{tikzpicture} = \begin{tikzpicture}[baseline=-.1cm,scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {3} -- (2,0) node[state1] {2}; \end{tikzpicture} \] \caption{The product of two faces of $\Sigma(A_2)$.} \label{fig:prodA2} \end{figure} The product of a face of the Steinberg torus by a face of the finite Coxeter complex admits a similar description. \begin{proposition}\label{p:prodSCA} Let $F\in\Stb{\Sigma}(A_{n-1})$ be a face of the Steinberg torus and $G\in\Sigma(A_{n-1})$ a face of the Coxeter complex. Let $(S_1,\ldots,S_k)$ be the composition of $[n]$ corresponding to $G$. To obtain the spin necklace on $[n]$ corresponding to the face $FG\in\Stb{\Sigma}(A_{n-1})$, replace each block $B$ in the spin necklace of $F$ by the string of intersections $(B\cap S_1,\ldots,B\cap S_k)$, removing those that are empty. The incoming edge to $B$ is now incoming to $B\cap S_1$, and the outgoing edge from $B$ is now outgoing from $B\cap S_k$. The labels of the intermediate edges are uniquely determined by~\eqref{e:cyc-lab}. \end{proposition} This proposition can be proved by considering the effect of the Tits product on sign vectors (Equation \eqref{eq:product}) and employing \eqref{eq:signF} and \eqref{eq:signFbar} for translating set compositions and spin necklaces into sign vectors.
Schematically, if $F$ and $G$ correspond respectively to \[ \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\phantom{$B$}}; \node[state] (c) [above=of a] {$B$}; \node[state] (d) [right=of a] {\phantom{$B$}}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $i$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $j$} (d); \path[dashed] (d) edge[bend left=30] (b); \end{tikzpicture} \qquad\text{and}\qquad \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$S_1$} -- (1,0) node[fill=white,inner sep=1] {$\cdots$} -- (2,0) node[state1] {$S_k$}; \end{tikzpicture} \,, \] then $FG$ corresponds to \[ \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\phantom{$B$}}; \node[state] (c) [left =of a, xshift=10pt, yshift=30pt] {$B\cap S_1$}; \node[state] (d) [right=of a, xshift=-10pt, yshift=30pt] {$B\cap S_k$}; \node[state] (e) [right=of a] {\phantom{$B$}}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $i$} (c); \path (d) edge[bend left=20] node[midway,right] {\footnotesize $j$} (e); \path[dashed] (c) edge[bend left=30] (d); \path[dashed] (e) edge[bend left=30] (b); \end{tikzpicture} \,. \] For a concrete example, let $F\in\Stb{\Sigma}(A_5)$ and $G\in\Sigma(A_5)$ be the faces corresponding to \[ \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [below right=of a,yshift=10pt,xshift=5pt] {$2$}; \node[state] (c) [below left=of a,yshift=10pt,xshift=-5pt] {$46$}; \node[state] (d) [above=of a] {$135$}; \path (b) edge[bend left=20] node[midway,below] {\footnotesize $2$} (c); \path (c) edge[bend left=20] node[midway,left] {\footnotesize $4$} (d); \path (d) edge[bend left=20] node[midway,right] {\footnotesize $1$} (b); \end{tikzpicture} \qquad\text{and}\qquad \begin{tikzpicture}[baseline=-.1cm,scale=.75] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {256} -- (1.5,0) node[state1] {13} -- (2.5,0) node[state1] {4}; \end{tikzpicture} , \] respectively. Then the face $FG\in\Stb{\Sigma}(A_5)$ corresponds to \[ \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [right=of a,yshift=5pt] {$2$}; \node[state] (c) [below right=of a] {$6$}; \node[state] (d) [below left=of a] {$4$}; \node[state] (e) [left=of a,yshift=5pt] {$5$}; \node[state] (f) [above = of a] {$13$}; \path (b) edge[bend left=20] node[midway,right] {\footnotesize $2$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $3$} (d); \path (d) edge[bend left=20] node[midway,left] {\footnotesize $4$} (e); \path (e) edge[bend left=20] node[midway,xshift=-3pt,yshift=-3pt,above] {\footnotesize $5$} (f); \path (f) edge[bend left=20] node[midway,xshift=3pt,yshift=-3pt,above] {\footnotesize $1$} (b); \end{tikzpicture} . \] The agreement between the geometric definition and the combinatorial procedure is illustrated in Figure~\ref{fig:prodA2torus}.
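The block refinement of Proposition~\ref{p:prodSCA} can likewise be checked mechanically. The following Python sketch (ours; the function name is our own) ignores the edge labels, which are then recovered from~\eqref{e:cyc-lab}, and reproduces the cyclic block sequence of the concrete example above.
\begin{verbatim}
# Sketch (our code): the effect of a Coxeter-complex face on the block
# structure of a spin necklace, as in Proposition \ref{p:prodSCA}.
# Edge labels are omitted; they are determined by \eqref{e:cyc-lab}.

def necklace_blocks_product(blocks, G):
    """blocks: the blocks of a spin necklace, listed cyclically from a
    chosen starting block; G: a set composition.  Each block B is
    replaced by the nonempty intersections (B & S_1, ..., B & S_k)."""
    return [B & S for B in blocks for S in G if B & S]

# The example above, reading the necklace of F cyclically from 135;
# the result, read cyclically from the same position, is 5|13|2|6|4.
F_blocks = [{1, 3, 5}, {2}, {4, 6}]
G = [{2, 5, 6}, {1, 3}, {4}]
assert necklace_blocks_product(F_blocks, G) == [{5}, {1, 3}, {2}, {6}, {4}]
\end{verbatim}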
\begin{figure}[!h] \begin{tikzpicture} \draw (0,.375) node { \begin{tikzpicture}[cm={1,0,.5,.8660254,(0,0)},baseline=0,scale=3] \draw[draw=none,fill=blue!10!white] (0,0)--(1,-1)--(0,-1)--(0,0); \draw[draw=none,fill=blue!10!white] (-1,0)--(-1.2,0)--(-1.2,.2)--(-1,0); \draw[draw=none,fill=blue!10!white] (1,0)--(1.2,0)--(1,.2)--(1,0); \draw[draw=none,fill=blue!10!white] (-1,1)--(-1,1.25) .. controls (-.9,1.5) and (-.5,1.1).. (-.25,1.25)--(0,1)--(-1,1); \draw[blue] (-.5,1) node[inner sep=0,scale=.75] (e1) { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,draw,fill=blue,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$23$}}; \node[state] (c) [right=of a] {\color{white}{$1$}}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw[blue] (.5,-1) node[inner sep=0,scale=.75] (e2) { \begin{tikzpicture}[node distance=.5cm,black] \tikzstyle{state}=[ellipse,draw,fill=blue,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {\color{white}{$23$}}; \node[state] (c) [right=of a] {\color{white}{$1$}}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw[blue,very thick] (-1,1)-- (e1) --(0,1); \draw[blue,very thick] (1,-1)-- (e2)--(0,-1); \draw[very thick] (-1.2,.2)--(.2,-1.2); \draw[very thick] (1.2,-.2)--(-.2,1.2); \draw[very thick] (-1,-.2)--(-1,1.2); \draw[very thick] (1,-1.2)--(1,.2); \draw[very thick] (-1.2,1.2)--(0,0); \draw[very thick] (0,0)--(1.2,-1.2); \draw[very thick] (-1,0)--(0,0); \draw[very thick,blue] (-1,0)--(-1.2,0); \draw[very thick] (0,0)--(1,0); \draw[very thick,blue] (1,0)--(1.2,0); \draw[very thick] (0,-1.2)--(0,0); \draw[very thick] (0,0)--(0,1.2); \draw[very thick] (-1,1)--(-1.2,1); \draw[very thick] (0,1)--(.2,1); \draw[very thick] (-.2,-1)--(0,-1); \draw[very thick] (1.2,-1)--(1,-1); \draw (.3,-.6) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$2$}; \node[state] (c) [above=of a] {$3$}; \node[state] (e) [right=of a,xshift=-5pt] {$1$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $0$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} }; \draw (0,0) node[circle,fill,inner sep=.1cm] {}; \draw (0,1) node[circle,fill,inner sep=.1cm] {}; \draw (-1,0) node[circle,fill,inner sep=.1cm] {}; \draw (1,-1) node[circle,fill,inner sep=.1cm] {}; \draw (1,0) node[circle,fill,inner sep=.1cm] {}; \draw (0,-1) node[circle,fill,inner sep=.1cm] {}; \draw (-1,1) node[circle,fill,inner sep=.1cm] {}; \draw[red,line width=2,->,>=stealth] (.15,-1) node {$\bullet$} -- (.15,-.8); \draw[red,line width=2,->,>=stealth] (-.85,1) node {$\bullet$} -- (-.85,1.2); \end{tikzpicture} }; \draw (0,0) circle (6); \draw[dashed] (1.5,2.598) -- (4,6.928); \draw[dashed] (-1.5,2.598) -- (-4,6.928); \draw[dashed] (1.5,-2.598) -- (4,-6.928); \draw[dashed] (-1.5,-2.598) -- (-4,-6.928); \draw[dashed] (3.6,0) -- (7,0); \draw[dashed] (-3.6,0) --(-7,0); \draw (3,5.196) node[circle,fill=red,inner sep=.1cm] (b) {}; \draw (-3,5.196) node[circle,fill=black,inner sep=.1cm] {}; \draw (3,-5.196) node[circle,fill=black,inner sep=.1cm] {}; \draw (-3,-5.196) node[circle,fill=black,inner sep=.1cm] {}; 
\draw (6,0) node[circle,fill=black,inner sep=.1cm] {}; \draw (-6,0) node[circle,fill=black,inner sep=.1cm] {}; \draw (4,6.928) node[inner sep=0,rotate=60] (12) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=red,draw,inner sep=1,draw] \draw (0,0) node[state1] {\color{white}{12}} -- (1,0) node[state1] {\color{white}{3}}; \end{tikzpicture} }; \draw[very thick,red] (b)--(12); \end{tikzpicture} \vspace{.5cm} \[ \begin{tikzpicture}[baseline=-.1cm,node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {$23$}; \node[state] (c) [right=of a] {$1$}; \path (b) edge[bend left=20] node[midway,above] {\footnotesize $1$} (c); \path (c) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} \bm\cdot \begin{tikzpicture}[baseline=-.1cm] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {12} -- (1,0) node[state1] {3}; \end{tikzpicture} = \begin{tikzpicture}[baseline=0,node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a,xshift=5pt] {$2$}; \node[state] (c) [above=of a] {$3$}; \node[state] (e) [right=of a,xshift=-5pt] {$1$}; \path (b) edge[bend left=20] node[midway,left] {\footnotesize $0$} (c); \path (c) edge[bend left=20] node[midway,right] {\footnotesize $1$} (e); \path (e) edge[bend left=20] node[midway,below] {\footnotesize $2$} (b); \end{tikzpicture} \] \caption{The product of a face of $\Stb{\Sigma}(A_2)$ with a face of $\Sigma(A_2)$.}\label{fig:prodA2torus} \end{figure} \subsection{The root system $C_n$}\label{ss:rootC} The root system in question is \[ C_n := \{\pm 2\varepsilon_i \mid 1\leq i\leq n\}\cup \{a\varepsilon_i+b\varepsilon_j \mid 1\leq i\neq j\leq n,\, a,b=\pm 1\}. \] It contains $2n^2$ vectors. The ambient space is $\R^n$. Let $[-n,n]$ denote the interval $\{-n,\ldots,-1,0,1,\ldots,n\}$. The Weyl group of $C_n$ may be identified with the wreath product $S_2\wr S_n$, or with the group of permutations $w$ of $[-n,n]$ such that $w(-i)=-w(i)$ for all $i\in [-n,n]$. Note that such a permutation has $w(0)=0$ and is determined by the list of values $w_i=w(i)$, $i=1,\ldots,n$. In examples, we represent negatives with bars, so that $w=25\Bar{1}\Bar{4}3$ stands for the following permutation of $[-5,5]$: \setcounter{MaxMatrixCols}{11} \[ \begin{pmatrix} -5 & -4 & -3 & -2 & -1 & 0 & 1 & 2 & 3 & 4 & 5\\ -3 & 4 & 1 & -5 & -2 & 0 & 2 & 5 & -1 & -4 & 3 \end{pmatrix}. \] We also adopt the convention that $w_{n+1}=w_0=0$. For the simple roots, we choose $\Delta=\{\alpha_1,\ldots,\alpha_n\}$ where \[ \alpha_1:= 2\varepsilon_1 \quad\text{and}\quad \alpha_i := \varepsilon_{i}-\varepsilon_{i-1} \] for $2\leq i\leq n$. The set of positive roots is then \[ \Pi =\{2\varepsilon_i \mid 1\leq i\leq n\} \cup \{\varepsilon_i\pm \varepsilon_j \mid i>j\}. \] The highest root is $\aff{\alpha}=\alpha_1+2\sum_{i=2}^n \alpha_i = 2\varepsilon_n$. See Figure~\ref{fig:C2-finite} for an illustration. We identify $\Delta$ with $\{0,1,\ldots,n-1\}$ and $\aff{\Delta}$ with $\{0,1,\ldots,n\}$ via \[ \alpha_0 \leftrightarrow n \quad\text{and}\quad \alpha_i\leftrightarrow i-1 \] for $i=1,\ldots,n$. With these choices, the descent sets (ordinary and affine) of $w$ are \[ D(w) = \{i \mid 0\leq i\leq n-1,\,w_i>w_{i+1}\} \quad\text{and}\quad \aff{D}(w) = \{i \mid 0\leq i\leq n,\,w_i>w_{i+1}\}. \] The integers in $[-n,n]$ are ordered in the standard fashion. 
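The following Python sketch (ours; the function name \texttt{descents} is our own) computes both descent sets directly from these conventions, encoding $w$ by the list of values $w_1,\ldots,w_n$ with bars written as minus signs; it confirms the two examples that follow.
\begin{verbatim}
# Sketch (our code): descent sets of a type-C signed permutation,
# using the conventions w_0 = w_{n+1} = 0 above.

def descents(w, affine=False):
    n = len(w)
    seq = [0] + list(w) + [0]        # pad with w_0 = w_{n+1} = 0
    top = n if affine else n - 1     # affine descents also test i = n
    return {i for i in range(top + 1) if seq[i] > seq[i + 1]}

assert descents([2, 5, -1, -4, 3], affine=True) == {2, 3, 5}
assert descents([-2, 5, -1, -4, 3], affine=True) == {0, 2, 3, 5}
\end{verbatim}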
For example, $\aff{D}(25\Bar{1}\Bar{4}3)=\{2,3,5\}$ and $\aff{D}(\Bar{2}5\Bar{1}\Bar{4}3)=\{0,2,3,5\}$. The Coxeter arrangement $\Hy(C_n)$ consists of the coordinate hyperplanes $\{ x \in \R^n \mid x_i = 0 \}$, $i=1,\ldots,n$, and the hyperplanes $\{ x \in \R^n \mid x_i = \epsilon x_j \}$, where $1\leq i < j \leq n$ and $\epsilon=\pm 1$. The set $C_n^\vee$ of coroots is the root system $B_n$. \subsection{Faces of the finite Coxeter complex of type $C$}\label{ss:CCC} The faces of the Coxeter complex $\Sigma(C_n)$ are in correspondence with compositions of the set $[-n,n]$ with the following property: reversing the order of the list of blocks has the same effect as replacing each block $B$ by its negative $-B:=\{-i\mid i\in B\}$. Such a composition has an odd number of blocks with the middle block being equal to its negative and containing $0$. For example, \[ \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (-2,0) node[state1] {$\Bar{5}\Bar{1}3$} -- (-1,0) node[state1] {$\Bar{4}$} -- (0,0) node[state1] {$\Bar{2} 0 2$} -- (1,0) node[state1] {$4$} -- (2,0) node[state1] {$\Bar{3}15$}; \end{tikzpicture}. \] is such a composition of $[-5,5]$. Given an integer $i$, write \[ x_i:= \epsilon(i) x_{\abs{i}}, \] where $\epsilon(i)=\pm 1$ and $\abs{i}$ denote the sign and the absolute value of $i$. With this convention, a composition as above corresponds to the face in $\Sigma(C_n)$ determined by the (in)equalities \begin{align*} x_i=x_j & \text{ if $i$ and $j$ belong to the same block,}\\ x_i<x_j & \text{ if the block of $i$ precedes that of $j$.} \end{align*} The faces in $\Sigma(C_2)$, labeled with compositions, are shown in Figure \ref{fig:C2Coxeter}. \begin{figure}[!h] \begin{tikzpicture}[baseline=0,scale=2] \draw (0,1.5) node[inner sep=0,rotate=90] (2) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,white,fill=red,draw=black,inner sep=1] \draw (0,0) node[state1] {$\bar 2$} -- (1,0) node[state1] {$\bar 1 0 1$} -- (2,0) node[state1] {2}; \end{tikzpicture} }; \draw (1,1) node[inner sep=0,rotate=45] (12) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 2 \bar 1$} -- (1,0) node[state1] {0} -- (2,0) node[state1] {12}; \end{tikzpicture} }; \draw (1.5,0) node[inner sep=0] (1) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,white,fill=red,inner sep=1,draw=black] \draw (0,0) node[state1] {$\bar 1$} -- (1,0) node[state1] {$\bar 2 0 2$} -- (2,0) node[state1] {1}; \end{tikzpicture} }; \draw (1,-1) node[inner sep=0,rotate=-45] (b21) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 1 2$} -- (1,0) node[state1] {0} -- (2,0) node[state1] {$\bar 2 1$}; \end{tikzpicture} }; \draw (0,-1.5) node[inner sep=0,rotate=90] (b2) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,white,fill=red,inner sep=1,draw=black] \draw (0,0) node[state1] {2} -- (1,0) node[state1] {$\bar 1 0 1$} -- (2,0) node[state1] {$\bar 2$}; \end{tikzpicture} }; \draw (-1,-1) node[inner sep=0,rotate=45] (b2b1) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {12} -- (1,0) node[state1] {0} -- (2,0) node[state1] {$\bar 2 \bar 1$}; \end{tikzpicture} }; \draw (-1.5,0) node[inner sep=0] (b1) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,white,fill=red,inner sep=1,draw=black] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {$\bar 2 0 2$} -- (2,0) node[state1] {$\bar 1$}; \end{tikzpicture} }; \draw (-1,1) node[inner sep=0,rotate=-45] 
(b12) { \begin{tikzpicture} \tikzstyle{state1}=[ellipse,fill=green,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 2 1$} -- (1,0) node[state1] {0} -- (2,0) node[state1] {$\bar 1 2$}; \end{tikzpicture} }; \draw (0,0) node[ellipse,inner sep=1,draw=black,fill=yellow] (0) {$\bar 2 \bar 1 0 12$}; \draw (1,2) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 2$} -- (1,0) node[state1] {$\bar 1$} -- (2,0) node[state1] {0} -- (3,0) node[state1] {1} -- (4,0) node[state1] {2}; \end{tikzpicture} }; \draw (2,.75) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 1$} -- (1,0) node[state1] {$\bar 2$} -- (2,0) node[state1] {0} -- (3,0) node[state1] {2} -- (4,0) node[state1] {1}; \end{tikzpicture} }; \draw (2,-.75) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 1$} -- (1,0) node[state1] {2} -- (2,0) node[state1] {0} -- (3,0) node[state1] {$\bar 2$} -- (4,0) node[state1] {1}; \end{tikzpicture} }; \draw (1,-2) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {2} -- (1,0) node[state1] {$\bar 1$} -- (2,0) node[state1] {0} -- (3,0) node[state1] {1} -- (4,0) node[state1] {$\bar 2$}; \end{tikzpicture} }; \draw (-1,-2) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {2} -- (1,0) node[state1] {1} -- (2,0) node[state1] {0} -- (3,0) node[state1] {$\bar 1$} -- (4,0) node[state1] {$\bar 2$}; \end{tikzpicture} }; \draw (-2,-.75) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {2} -- (2,0) node[state1] {0} -- (3,0) node[state1] {$\bar 2$} -- (4,0) node[state1] {$\bar 1$}; \end{tikzpicture} }; \draw (-2,.75) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {1} -- (1,0) node[state1] {$\bar 2$} -- (2,0) node[state1] {0} -- (3,0) node[state1] {2} -- (4,0) node[state1] {$\bar 1$}; \end{tikzpicture} }; \draw (-1,2) node { \begin{tikzpicture}[scale=.5] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$\bar 2$} -- (1,0) node[state1] {1} -- (2,0) node[state1] {0} -- (3,0) node[state1] {$\bar 1$} -- (4,0) node[state1] {2}; \end{tikzpicture} }; \draw[green,very thick] (-2,2)--(b12)--(0)--(b21)--(2,-2); \draw[green,very thick] (-2,-2)--(b2b1)--(0)--(12)--(2,2); \draw[red,very thick] (-2.82,0)--(b1)--(0)--(1)--(2.82,0); \draw[red,very thick] (0,-2.82)--(b2)--(0)--(2)--(0,2.82); \end{tikzpicture} \caption{The faces in $\Sigma(C_2)$, with colors corresponding to $W$-orbits.} \label{fig:C2Coxeter} \end{figure} Let $F\in\Sigma(C_n)$ be a face. The color set of $F$ is \[ \col(F)=\{a_0,a_0+a_1,\ldots,a_0+a_1+\cdots+a_{k-1}\}\subseteq\{0,1,\ldots,n-1\}, \] where $a_0$ is the number of positive elements in the block of $0$, $a_i$ is the size of the $i$-th block as we proceed outwards (either to the right or to the left), and the $(k-1)$-th is the penultimate block we encounter. Figure \ref{fig:C2Coxeter} shows the orbits in $\Sigma(C_2)$. For example, the rays in red constitute the orbit of color set $\{1\}$ and the rays in green that of color set $\{0\}$. 
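This recipe is easy to mechanise. The following Python sketch (ours; the function name \texttt{color\_set} is our own) computes the color set of a face of $\Sigma(C_n)$ from its symmetric composition and recovers the orbits of Figure~\ref{fig:C2Coxeter}.
\begin{verbatim}
# Sketch (our code): the color set of a face of Sigma(C_n), computed
# from its symmetric composition following the recipe above.

def color_set(comp):
    """comp: symmetric composition of [-n, n], a list of blocks (sets);
    the middle block contains 0.  Returns the partial sums
    {a_0, a_0 + a_1, ..., a_0 + ... + a_{k-1}}, the last of which
    stops at the penultimate block."""
    mid = next(i for i, B in enumerate(comp) if 0 in B)
    a = sum(1 for x in comp[mid] if x > 0)   # a_0
    cols = set()
    for B in comp[mid + 1:]:                 # blocks to the right of 0
        cols.add(a)                          # record the sum, then move on
        a += len(B)
    return cols                              # one-block composition: empty set
                                             # (our reading of the edge case)

# Orbits in Figure \ref{fig:C2Coxeter}:
assert color_set([{-2}, {-1, 0, 1}, {2}]) == {1}    # red rays
assert color_set([{-2, -1}, {0}, {1, 2}]) == {0}    # green rays
\end{verbatim}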
The permutation $w_F$ (as in~\eqref{e:wF}, or as in Proposition~\ref{prp:Dcol}) is obtained by first listing the positive elements in the block of $0$ in increasing order, then proceeding to the right and listing all the elements in each block in increasing order. \subsection{Faces of the Steinberg torus of type $C$}\label{ss:STC} Consider a partition of the set $[-n,n]$ and a cyclic order on the set of blocks, with the property that replacing each block by its negative has the same effect as reversing the order. In such a partition the block containing $0$ is equal to its negative. We represent such structures by drawing each block as a vertex of a regular polygon and proceeding clockwise according to the cyclic order. The result is a necklace with the property that flipping it across the diameter through $0$ has the same effect as replacing each block by its negative. Here is an example, for $n=5$: \begin{equation}\label{e:spinC} \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [right=of a,xshift=8pt, yshift=8pt] {$4$}; \node[state] (c) [below right=of a] {$\Bar{3}15$}; \node[state] (d) [below left=of a] {$\Bar{5}\Bar{1}3$}; \node[state] (e) [left=of a,xshift=-8pt, yshift=8pt] {$\Bar{4}$}; \node[state] (f) [above = of a] {$\Bar{2}02$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} . \end{equation} The faces of the Steinberg torus $\Stb{\Sigma}(C_n)$ are in correspondence with such necklaces. We leave to the reader the task of writing down the (in)equalities determining (an affine representative of) a face. The faces in $\Stb{\Sigma}(C_2)$, labeled with such necklaces, are shown in Figure \ref{fig:C2Steinberg}. Inclusion of faces corresponds to edge contractions, with the understanding that if you contract an edge, you must also contract its mirror image.
\begin{figure}[!h] \begin{tikzpicture}[baseline=0,scale=2] \draw (0,1.5) node[inner sep=1,scale=.75] (2) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=red,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$\bar 1 0 1$}; \node[state] (c) [right=of a,yshift=-10pt] {$2$}; \node[state] (d) [left=of a,yshift=-10pt] {$\bar 2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (1.5,1.5) node[inner sep=0,scale=.75] (12) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=green,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=-10pt] {$12$}; \node[state] (d) [left=of a,yshift=-10pt] {$\bar 2 \bar 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (1.5,0) node[inner sep=0,scale=.75] (1) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=red,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$\bar 2 0 2$}; \node[state] (c) [right=of a,yshift=-10pt] {$1$}; \node[state] (d) [left=of a,yshift=-10pt] {$\bar 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (1.5,-1.5) node[inner sep=0,scale=.75] (b21) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=green,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=-10pt] {$\bar 2 1$}; \node[state] (d) [left=of a,yshift=-10pt] {$\bar 1 2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (0,-1.5) node[inner sep=1,scale=.75] (b2) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,draw=black,fill=red,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$\bar 1 0 1$}; \node[state] (c) [right=of a,yshift=-10pt] {$\bar 2$}; \node[state] (d) [left=of a,yshift=-10pt] {$2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-1.5,-1.5) node[inner sep=0,scale=.75] (b2b1) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=green,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=-10pt] {$\bar 2 \bar 1$}; \node[state] (d) [left=of a,yshift=-10pt] {$12$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-1.5,0) node[inner sep=0,scale=.75] (b1) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=red,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$\bar 2 0 2$}; \node[state] (c) [right=of a,yshift=-10pt] {$\bar 1$}; \node[state] (d) [left=of a,yshift=-10pt] {$1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-1.5,1.5) node[inner sep=0,scale=.75] (b12) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,fill=green,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=-10pt] {$\bar 1 2$}; \node[state] (d) [left=of a,yshift=-10pt] {$\bar 2 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend 
left=20] (b); \end{tikzpicture} }; \draw (1.5,3) node[inner sep=0,scale=.75] (tr) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$1$}; \node[state] (d) [below=of a] {$\bar 2 2$}; \node[state] (e) [left=of a] {$\bar 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (3,1.5) node[inner sep=0,scale=.75] (rt) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$2$}; \node[state] (d) [below=of a] {$\bar 1 1$}; \node[state] (e) [left=of a] {$\bar 2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (3,-1.5) node[inner sep=0,scale=.75] (rb) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$\bar 2$}; \node[state] (d) [below=of a] {$\bar 1 1$}; \node[state] (e) [left=of a] {$2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (1.5,-3) node[inner sep=0,scale=.75] (br) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$1$}; \node[state] (d) [below=of a] {$\bar 2 2$}; \node[state] (e) [left=of a] {$\bar 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-1.5,-3) node[inner sep=0,scale=.75] (bl) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$\bar 1$}; \node[state] (d) [below=of a] {$\bar 2 2$}; \node[state] (e) [left=of a] {$1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-3,-1.5) node[inner sep=0,scale=.75] (lb) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$\bar 2$}; \node[state] (d) [below=of a] {$\bar 1 1$}; \node[state] (e) [left=of a] {$2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-3,1.5) node[inner sep=0,scale=.75] (lt) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$2$}; \node[state] (d) [below=of a] {$\bar 1 1$}; \node[state] (e) [left=of a] {$\bar 2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-1.5,3) node[inner sep=0,scale=.75] (tl) { \begin{tikzpicture}[node distance=.5cm] 
\tikzstyle{state}=[ellipse,white,fill=blue,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a] {$\bar 1$}; \node[state] (d) [below=of a] {$\bar 2 2$}; \node[state] (e) [left=of a] {$1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (b); \end{tikzpicture} }; \draw (0,0) node[inner sep=1,scale=.75] (0) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=yellow,inner sep=1pt] \draw (0,0) node[state] (a) {$\bar 2 \bar 1 0 1 2$}; \draw (a) .. controls (-1,-1) and (1,-1) .. (a); \end{tikzpicture} }; \draw (0,3) node[inner sep=0,scale=.75] (b22) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 2 2$}}; \node[state] (b) [above=of a] {\color{white}{$\bar 1 0 1$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (0,-3) node[inner sep=0,scale=.75] (ab22) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 2 2$}}; \node[state] (b) [above=of a] {\color{white}{$\bar 1 0 1$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (3,0) node[inner sep=0,scale=.75] (b11) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 1 1$}}; \node[state] (b) [above=of a] {\color{white}{$\bar 2 0 2$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (-3,0) node[inner sep=0,scale=.75] (ab11) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=magenta,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 1 1$}}; \node[state] (b) [above=of a] {\color{white}{$\bar 2 0 2$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (3,3) node[inner sep=0,scale=.75] (b2b112) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=cyan,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 2 \bar 1 1 2$}}; \node[state] (b) [above=of a] {\color{white}{$0$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (3,-3) node[inner sep=0,scale=.75] (a1b2b112) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=cyan,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 2 \bar 1 1 2$}}; \node[state] (b) [above=of a] {\color{white}{$0$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (-3,-3) node[inner sep=0,scale=.75] (a2b2b112) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=cyan,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 2 \bar 1 1 2$}}; \node[state] (b) [above=of a] {\color{white}{$0$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (-3,3) node[inner sep=0,scale=.75] (a3b2b112) { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw=black,fill=cyan,inner sep=1pt] \node[state] (a) {\color{white}{$\bar 2 \bar 1 1 2$}}; \node[state] (b) [above=of a] {\color{white}{$0$}}; \path (b) edge[bend left=30] (a); \path (a) edge[bend left=30] (b); \end{tikzpicture} }; \draw (.75,2) node[scale=.75] { \begin{tikzpicture}[node 
distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$1$}; \node[state] (d) [below right=of a,yshift=-5pt] {$2$}; \node[state] (e) [below left=of a,yshift=-5pt] {$\bar 2$}; \node[state] (f) [left=of a,yshift=3pt] {$\bar 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (2,.75) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$2$}; \node[state] (d) [below right=of a,yshift=-5pt] {$1$}; \node[state] (e) [below left=of a,yshift=-5pt] {$\bar 1$}; \node[state] (f) [left=of a,yshift=3pt] {$\bar 2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (2,-.75) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$\bar 2$}; \node[state] (d) [below right=of a,yshift=-5pt] {$1$}; \node[state] (e) [below left=of a,yshift=-5pt] {$\bar 1$}; \node[state] (f) [left=of a,yshift=3pt] {$2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (.75,-2.2) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$1$}; \node[state] (d) [below right=of a,yshift=-5pt] {$\bar 2$}; \node[state] (e) [below left=of a,yshift=-5pt] {$2$}; \node[state] (f) [left=of a,yshift=3pt] {$\bar 1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-.75,-2.2) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$\bar 1$}; \node[state] (d) [below right=of a,yshift=-5pt] {$\bar 2$}; \node[state] (e) [below left=of a,yshift=-5pt] {$2$}; \node[state] (f) [left=of a,yshift=3pt] {$1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-2,-.75) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$\bar 2$}; \node[state] (d) [below right=of a,yshift=-5pt] {$\bar 1$}; \node[state] (e) [below left=of a,yshift=-5pt] {$1$}; \node[state] (f) [left=of a,yshift=3pt] {$2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-2,.75) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) 
[above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$2$}; \node[state] (d) [below right=of a,yshift=-5pt] {$\bar 1$}; \node[state] (e) [below left=of a,yshift=-5pt] {$1$}; \node[state] (f) [left=of a,yshift=3pt] {$\bar 2$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw (-.75,2) node[scale=.75] { \begin{tikzpicture}[node distance=.5cm] \tikzstyle{state}=[ellipse,draw,inner sep=1pt] \node (a) {}; \node[state] (b) [above=of a] {$0$}; \node[state] (c) [right=of a,yshift=3pt] {$\bar 1$}; \node[state] (d) [below right=of a,yshift=-5pt] {$2$}; \node[state] (e) [below left=of a,yshift=-5pt] {$\bar 2$}; \node[state] (f) [left=of a,yshift=3pt] {$1$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} }; \draw[green,very thick] (a3b2b112)--(b12)--(0)--(b21)--(a1b2b112); \draw[green,very thick] (a2b2b112)--(b2b1)--(0)--(12)--(b2b112); \draw[red,very thick] (ab11)--(b1)--(0)--(1)--(b11); \draw[red,very thick] (ab22)--(b2)--(0)--(2)--(b22); \draw[blue,very thick] (b2b112)--(rt)--(b11)--(rb)--(a1b2b112)--(br)--(ab22)--(bl)--(a2b2b112)--(lb)--(ab11)--(lt)--(a3b2b112)--(tl)--(b22)--(tr)--(b2b112); \end{tikzpicture} \caption{The faces of the Steinberg torus $\Stb{\Sigma}(C_2)$, with colors corresponding to $W$-orbits. Note the identifications along the boundary.} \label{fig:C2Steinberg} \end{figure} Let $F\in\Stb{\Sigma}(C_n)$ be a face. The color set of $F$ is \[ \col(F)=\{a_0,a_0+a_1,\ldots,a_0+a_1+\cdots+a_{k-1}\}\subseteq\{0,1,\ldots,n\}, \] where $a_0$ is the number of positive elements in the block of $0$, $a_i$ is the size of the $i$-th block as we proceed around the corresponding necklace (either clockwise or counterclockwise), and the $(k-1)$-th is the last block we encounter before reaching the point diametrically opposed to the block of $0$. Figure \ref{fig:C2Steinberg} shows the orbits in $\Stb{\Sigma}(C_2)$. For example, the edges in red constitute the orbit of color set $\{1,2\}$ and the edges in green that of color set $\{0,2\}$. The permutation $w_F$ (as in~\eqref{e:wF3}, or as in Corollary~\ref{cor:sDcol}) is obtained by first listing the positive elements in the block of $0$ in increasing order, then proceeding clockwise and listing all the elements in each block in increasing order, stopping at the point diametrically opposed to the block of $0$. If there is a block at that location, we list only its negative elements, in increasing order. For example, if $F$ corresponds to the necklace~\eqref{e:spinC}, then $\col(F)=\{1,2,5\}$ and $w_F=24\Bar{3}15$. \subsection{Products of faces in type $C$}\label{ss:prodC} The product of two faces in the finite Coxeter complex of type $C$ admits the same description as in type $A$: we list the nonempty intersections of the blocks lexicographically.
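Since blocks are now finite sets of signed integers, the lexicographic-intersection sketch used in type $A$ applies verbatim (bars written as minus signs); as a check, it reproduces the product displayed below.
\begin{verbatim}
# Sketch (our code): the type-C Tits product via the same rule.
def tits_product(F, G):                      # as in the type-A sketch
    return [S & T for S in F for T in G if S & T]

F = [{-2, 1, 3, 5}, {-4, 0, 4}, {-5, -3, -1, 2}]
G = [{-3, 1, 5}, {-4, -2, 0, 2, 4}, {-5, -1, 3}]
assert tits_product(F, G) == [{1, 5}, {-2}, {3}, {-4, 0, 4},
                              {-3}, {2}, {-5, -1}]
\end{verbatim}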
For example, \[ \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$\Bar{2}135$} -- (1.5,0) node[state1] {$\Bar{4}04$} -- (3,0) node[state1] {$\Bar{5}\Bar{3}\Bar{1}2$}; \end{tikzpicture} \bm\cdot \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$1\Bar{3}5$} -- (1.5,0) node[state1] {$\Bar{4}\Bar{2}024$} -- (3,0) node[state1] {$\Bar{5}3\Bar{1}$}; \end{tikzpicture} \ = \ \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$15$} -- (.8,0) node[state1] {$\Bar{2}$} -- (1.5,0) node[state1] {$3$} -- (2.5,0) node[state1] {$\Bar{4}04$} -- (3.5,0) node[state1] {$\Bar{3}$} -- (4.2,0) node[state1] {$2$} -- (5,0) node[state1] {$\Bar{5}\Bar{1}$}; \end{tikzpicture} \,. \] The product of a face of the Steinberg torus by a face of the finite Coxeter complex admits the following description, similar to that in type $A$: we replace each block in the necklace representing a torus face by the string of intersections with the blocks of the composition representing a face in the Coxeter complex. Schematically, \[ \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {$A$}; \node[state] (c) [above=of a] {$B$}; \node[state] (d) [right=of a] {$C$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path[dashed] (d) edge[bend left=40] (b); \end{tikzpicture} \bm\cdot \begin{tikzpicture}[baseline=-2.5pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (0,0) node[state1] {$S_1$} -- (1,0) node[fill=white,inner sep=1] {$\cdots$} -- (2,0) node[state1] {$S_k$}; \end{tikzpicture} \quad = \quad \begin{tikzpicture}[node distance=.5cm,baseline=0pt] \tikzstyle{state}=[ellipse,draw=black, inner sep=1pt] \node (a) {}; \node[state] (b) [left=of a] {$A\cap S_k$}; \node[state] (c) [left =of a, xshift=8pt, yshift=30pt] {$B\cap S_1$}; \node[state] (d) [right=of a, xshift=-8pt, yshift=30pt] {$B\cap S_k$}; \node[state] (e) [right=of a] {$C\cap S_1$}; \path (b) edge[bend left=10] (c); \path (d) edge[bend left=10] (e); \path[dashed] (c) edge[bend left=30] (d); \path[dashed] (e) edge[bend left=50] (b); \end{tikzpicture} \,. 
\] For example, \[ \begin{tikzpicture}[node distance=.5cm,baseline=-2pt] \tikzstyle{state}=[ellipse,draw=black,inner sep=1pt] \node (a) {}; \node[state] (b) [right=of a,xshift=8pt, yshift=8pt] {$4$}; \node[state] (c) [below right=of a] {$\Bar{3}15$}; \node[state] (d) [below left=of a] {$\Bar{5}\Bar{1}3$}; \node[state] (e) [left=of a,xshift=-8pt, yshift=8pt] {$\Bar{4}$}; \node[state] (f) [above = of a] {$\Bar{2}02$}; \path (b) edge[bend left=20] (c); \path (c) edge[bend left=20] (d); \path (d) edge[bend left=20] (e); \path (e) edge[bend left=20] (f); \path (f) edge[bend left=20] (b); \end{tikzpicture} \bm\cdot \begin{tikzpicture}[node distance=.5cm,baseline=-3pt] \tikzstyle{state1}=[ellipse,fill=white,inner sep=1,draw] \draw (-2,0) node[state1] {$\Bar{5}3$} -- (-1,0) node[state1] {$\Bar{4}1$} -- (0,0) node[state1] {$\Bar{2} 0 2$} -- (1,0) node[state1] {$\Bar{1}4$} -- (2,0) node[state1] {$\Bar{3}5$}; \end{tikzpicture} \quad = \quad \begin{tikzpicture}[node distance=.5cm,baseline=2pt] \tikzstyle{state}=[ellipse,draw=black,inner sep=1pt] \node (o) {}; \node[state] (a) [above=of o] {$\Bar{2}02$}; \node[state] (b) [above right=of o,xshift=6pt,yshift=-2pt] {$4$}; \node[state] (c) [right=of o,xshift=7pt,yshift=-2pt] {$1$}; \node[state] (d) [below right=of o,xshift=-5pt] {$\Bar{3}5$}; \node[state] (e) [below left=of o,xshift=5pt] {$\Bar{5}3$}; \node[state] (f) [left=of o,xshift=-7pt,yshift=-2pt] {$\Bar{1}$}; \node[state] (g) [above left=of o,xshift=-6pt,yshift=-2pt] {$\Bar{4}$}; \path (a) edge[bend left=10] (b); \path (b) edge[bend left=10] (c); \path (c) edge[bend left=10] (d); \path (d) edge[bend left=15] (e); \path (e) edge[bend left=10] (f); \path (f) edge[bend left=10] (g); \path (g) edge[bend left=10] (a); \end{tikzpicture} \,. \] \providecommand{\germ}{\mathfrak} \bibliographystyle{plain}
{ "timestamp": "2014-06-18T02:06:35", "yymm": "1406", "arxiv_id": "1406.4230", "language": "en", "url": "https://arxiv.org/abs/1406.4230", "abstract": "Associated to each irreducible crystallographic root system $\\Phi$, there is a certain cell complex structure on the torus obtained as the quotient of the ambient space by the coroot lattice of $\\Phi$. This is the Steinberg torus. A main goal of this paper is to exhibit a module structure on (the set of faces of) this complex over the (set of faces of the) Coxeter complex of $\\Phi$. The latter is a monoid under the Tits product of faces. The module structure is obtained from geometric considerations involving affine hyperplane arrangements. As a consequence, a module structure is obtained on the space spanned by affine descent classes of a Weyl group, over the space spanned by ordinary descent classes. The latter constitute a subalgebra of the group algebra, the classical descent algebra of Solomon. We provide combinatorial models for the module of faces when $\\Phi$ is of type $A$ or $C$.", "subjects": "Combinatorics (math.CO)", "title": "The Steinberg torus of a Weyl group as a module over the Coxeter complex", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692325496974, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.7079584905267976 }
https://arxiv.org/abs/2107.03559
Remarks on algebraic dynamics in positive characteristic
In this paper, we study arithmetic dynamics in arbitrary characteristic, in particular in positive characteristic. We generalise some basic facts on arithmetic degree and canonical height in positive characteristic. As applications, we prove the dynamical Mordell-Lang conjecture for automorphisms of projective surfaces of positive entropy, and the Zariski dense orbit conjecture for automorphisms of projective surfaces and for endomorphisms of projective varieties with large first dynamical degree. We also study ergodic theory for the constructible topology. For example, we prove the equidistribution of backward orbits for finite flat endomorphisms with large topological degree. As applications, we give a simple proof of the weak dynamical Mordell-Lang property and prove a counting result for backward orbits without multiplicities. This gives some applications to equidistribution on Berkovich spaces.
\section{Introduction} Let ${\mathbf{k}}$ be an algebraically closed field. In this paper, most of the time (from Section 2 to Section 4), we are interested in the case ${\rm char}\, {\mathbf{k}}>0$. Many problems in arithmetic dynamics, such as the Dynamical Mordell-Lang conjecture and the Zariski dense orbit conjecture, were originally proposed in characteristic $0$. Indeed, their original statements do not hold in positive characteristic. But their known counter-examples often involve some Frobenius actions or some group structures. We suspect that the original statements of these conjectures are still valid for ``general'' dynamical systems in positive characteristic. \medskip The $p$-adic interpolation lemma (\cite[Theorem 1]{Poonen2014} and \cite[Theorem 3.3]{Bell2010}) is a fundamental tool in arithmetic dynamics. It has important applications to the Dynamical Mordell-Lang and Zariski dense orbit conjectures \cite{Bell2016, Bell2010, Amerik2008, E.Amerik2011, Xie2019}. But this lemma does not work in positive characteristic. Because of this, some very basic cases of the Dynamical Mordell-Lang and Zariski dense orbit conjectures are still open in positive characteristic. We hope that some corollaries of the $p$-adic interpolation lemma still survive in positive characteristic. For this, we propose the following conjecture. \begin{con}Set $K:=\overline{{\mathbb F}_p}((t))$ and $K^{\circ}=\overline{{\mathbb F}_p}[[t]]$ its valuation ring. Let $f: (K^{\circ})^r\to (K^{\circ})^r$ be an analytic automorphism satisfying $f={\rm id} \mod t.$ If there is no $n\geq 1$ such that $f^n={\rm id}$, then the $f$-periodic points are not dense in $(K^{\circ})^r$ with respect to the $t$-adic topology. \end{con} On the other hand, we observed that, under certain assumptions on the complexity of $f$, a global argument using heights can replace the local argument using the $p$-adic interpolation lemma. We generalise the notion of arithmetic degree and prove some basic properties of it in positive characteristic. In particular, we generalise Kawaguchi-Silverman-Matsuzawa's upper bound for the arithmetic degree \cite[Theorem 1.4]{Matsuzawa2020a} to positive characteristic. With this notion, we apply our observation to dynamical systems in positive characteristic. In particular, we prove the Dynamical Mordell-Lang and Zariski dense orbit conjectures in some cases (see Sections \ref{subintrodml} and \ref{subintrozdo}). \medskip Another aim of this paper is to study ergodic theory on algebraic varieties with respect to the constructible topology. Using this, we obtain some equidistribution results and apply them to get weak versions of the Dynamical Mordell-Lang and Manin-Mumford conjectures in arbitrary characteristic. This also gives some applications to equidistribution on Berkovich spaces. \subsection{Dynamical Mordell-Lang conjecture}\label{subintrodml} Let $X$ be a variety over ${\mathbf{k}}$ and $f: X\dashrightarrow X$ be a rational self-map. \begin{defi} We say $(X,f)$ satisfies the \emph{DML} property if for every $x\in X({\mathbf{k}})$ whose $f$-orbit is well defined and every subvariety $V$ of $X$, the set $\{n\geq 0|\,\, f^n(x)\in V\}$ is a finite union of arithmetic progressions. \end{defi} Here an arithmetic progression is a set of the form $\{an + b|\,\, n\in {\mathbb N}\}$ with $a,b \in {\mathbb N}$, possibly with $a = 0$. \begin{dmlcon}If ${\rm char}\,{\mathbf{k}}=0$, then $(X,f)$ satisfies the DML property.
\end{dmlcon} It was proved when $f$ is unramified \cite{Bell2010} and when $f$ is an endomorphism of ${\mathbb A}^2_{\overline{{\mathbb Q}}}$ \cite{Xie2017a}. See \cite{Bell2016, Ghioca2018} for other known results. In general, this conjecture does not hold in positive characteristic. An example is \cite[Example 3.4.5.1]{Bell2016}, as follows (see \cite{Ghioca2019a,Corvaja2021} for more examples). \begin{exe}\label{exenotdml} Let ${\mathbf{k}}=\overline{{\mathbb F}_p(t)}$, $f: {\mathbb A}^2\to {\mathbb A}^2$ be the endomorphism defined by $(x,y)\mapsto (tx, (1-t)y).$ Set $V:=\{x+y=1\}$ and $e=(1,1).$ Then $\{n\geq 0|\,\, f^n(e)\in V\}=\{p^n|\,\, n\geq 0\}.$ \end{exe} In \cite[Conjecture 13.2.0.1]{Bell2016}, Ghioca and Scanlon proposed a variant of the Dynamical Mordell-Lang conjecture in positive characteristic (=$p$-DML), which asks $\{n\geq 0|\,\, f^n(x)\in V\}$ to be a finite union of arithmetic progressions along with finitely many sets of the form $$\{\sum_{i=1}^mc_ip^{l_in_i}|\,\, n_i\in {\mathbb Z}_{\geq 0}, i=1,\dots,m\}$$ where $m\in {\mathbb Z}_{\geq 1}, l_i\in {\mathbb Z}_{\geq 0}, c_i\in {\mathbb Q}.$ See \cite{Ghioca2019a,Corvaja2021} for known results on $p$-DML. However, we suspect that a ``general'' dynamical system in positive characteristic still has the DML property. \begin{thm}\label{dmlsurface}Let $X$ be a projective surface over ${\mathbf{k}}.$ Let $f: X\to X$ be an automorphism. Assume that $\lambda_1(f)>1$. Then the pair $(X,f)$ satisfies the DML property. \end{thm} Here $\lambda_i(f)$ is the $i$-th dynamical degree of $f$ (see Section \ref{subsectiondydeg}). The following is a similar result for birational endomorphisms of ${\mathbb A}^2.$ It is stated in characteristic $0$ in \cite[Theorem A]{Xie2014}, but when $\lambda_1(f)>1$ its proof works in any characteristic. \begin{thm}\cite[Theorem A]{Xie2014}\label{thmdmlat} Let $f:{\mathbb A}^2\to {\mathbb A}^2$ be a birational endomorphism over ${\mathbf{k}}$. If $\lambda_1(f)>1$, then $({\mathbb A}^2,f)$ satisfies the DML property. \end{thm} \subsection{Zariski dense orbit conjecture}\label{subintrozdo} Let $X$ be a variety over ${\mathbf{k}}$ and $f: X\dashrightarrow X$ be a dominant rational self-map. Denote by ${\mathbf{k}}(X)^f$ the field of $f$-invariant rational functions on $X.$ Let $X_f({\mathbf{k}})$ be the set of points in $X({\mathbf{k}})$ whose $f$-orbit is well defined. For $x\in X_f({\mathbf{k}})$, denote by $O_f(x)$ the orbit of $x.$ \begin{defi} We say $(X,f)$ satisfies the \emph{SZDO} property if there is $x\in X_f({\mathbf{k}})$ such that $O_f(x)$ is Zariski dense in $X.$ We say $(X,f)$ satisfies the \emph{ZDO} property if either ${\mathbf{k}}(X)^f\neq {\mathbf{k}}$ or it satisfies the SZDO property. \end{defi} The Zariski dense orbit conjecture was proposed by Medvedev and Scanlon \cite[Conjecture 5.10]{Medvdevv1} and by Amerik, Bogomolov and Rovinsky \cite{E.Amerik2011}, and it strengthens a conjecture of Zhang \cite{zhang}. \begin{zdocon}\label{conexistszdo}If ${\rm char}\,{\mathbf{k}}=0$, then $(X,f)$ satisfies the ZDO property. \end{zdocon} This conjecture was proved for endomorphisms of projective surfaces \cite{Jia2020, Xie2019}, endomorphisms of $(\P^1)^N$ \cite{Medvdev,Xie2019} and endomorphisms of ${\mathbb A}^2$ \cite{Xie2017}. See \cite{Amerik2008,Amerik,E.Amerik2011,Fakhruddin2014,Bell2017,Bell2017a,Ghioca2017a,Ghioca2018b,Ghioca2019, Bell, Jia2021} for other known results. \medskip The original statement of the Zariski dense orbit conjecture is not true in characteristic $p>0$.
It is completely wrong over ${\mathbf{k}}=\overline{{\mathbb F}_p}$ and has counter-examples even when $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1$ (see \cite[Section 1.6]{Xie2019} and \cite[Remark 1.2]{Ghioca}). Concerning the variants of the Zariski dense orbit conjecture in positive characteristic proposed in \cite[Section 1.6]{Xie2019} and \cite[Conjecture 1.3]{Ghioca}, we get the following result. \begin{pro}\label{protautzdo}Let $K$ be an algebraically closed field extension of ${\mathbf{k}}$ with $\text{tr.d.}_{{\mathbf{k}}}K\geq \dim X$. Then $(X_K,f_K)$ satisfies the ZDO property. Here $X_{K}$ and $f_K$ are the base changes by $K$ of $X$ and $f.$ \end{pro} The following example shows that the assumption $\text{tr.d.}_{{\mathbf{k}}}K\geq \dim X$ is sharp. \begin{exe}Let $X$ be a variety over ${\mathbf{k}}:=\overline{{\mathbb F}_p}$ of dimension $d\geq 1$. Assume that $X$ is defined over ${\mathbb F}_p$. Let $F: X\to X$ be the Frobenius endomorphism. It is clear that $\overline{{\mathbb F}_p}(X)^F=\overline{{\mathbb F}_p}$. For every algebraically closed field extension $K$ of ${\mathbf{k}}$ with $\text{tr.d.}_{{\mathbf{k}}}K\leq d-1$, and every $x\in X_K(K)$, $O_{F_K}(x)$ is not Zariski dense in $X_K.$ \end{exe} On the other hand, the known counter-examples often involve some Frobenius actions. See \cite[Theorem 1.5, Question 1.7]{Ghioca} for this phenomenon. We suspect that when $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1,$ a ``general'' dynamical system in positive characteristic still has the ZDO property. Applying height arguments, we get the following results. \begin{thm}\label{thmendononpre}Assume that ${\rm char}\, {\mathbf{k}}=p>0$ and $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1.$ Let $f: X\to X$ be a dominant endomorphism of a projective variety. If $\lambda_1(f)>1$, then for every nonempty Zariski open subset $U$ of $X$, there is $x\in U({\mathbf{k}})$ with infinite orbit and $O_f(x)\subseteq U$. \end{thm} Theorem \ref{thmendononpre} can be viewed as a weak version of \cite[Corollary 9]{Amerik} in positive characteristic. \begin{thm}\label{thmzdosuraut}Assume that ${\rm char}\, {\mathbf{k}}=p>0$ and $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1.$ Let $f: X\to X$ be an automorphism of a projective surface. Then $(X,f)$ satisfies the ZDO property. \end{thm} The following result is a generalization of \cite[Theorem 1.12 (iii)]{Jia2021} to positive characteristic. \begin{thm}\label{thmzdola}Assume that ${\rm char}\, {\mathbf{k}}=p>0$ and $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1.$ Let $f: X\to X$ be a dominant endomorphism of a projective variety. Assume that $X$ is smooth of dimension $d\geq 2$, and $\lambda_1(f)>\max_{i=2}^d \{\lambda_{i}(f)\}$. Then $(X,f)$ satisfies the SZDO property. \end{thm} \subsection{Ergodic theory}\label{subsecergo} Let $X$ be a variety over ${\mathbf{k}}$. Denote by $|X|$ the underlying set of $X$ with the constructible topology, i.e. the topology on $X$ generated by the constructible subsets (see~\cite[Section~(1.9) and in particular (1.9.13)]{EGA-IV-I}). In particular, every constructible subset is open and closed. This topology is finer than the Zariski topology on $X.$ Moreover, $|X|$ is compact (and Hausdorff). Denote by ${\mathcal M}(|X|)$ the space of Radon measures on $|X|$ endowed with the weak-$\ast$ topology. \begin{thm}\label{thmRadon}Every $\mu\in {\mathcal M}(|X|)$ is of the form $$\mu=\sum_{i\geq 0}a_i\delta_{x_i}$$ where $\delta_{x_i}$ is the Dirac measure at $x_i\in X$ and $a_i\geq 0$.
\end{thm} \begin{rem}Theorem \ref{thmRadon} is inspired by \cite[Theorem A]{Gignac2014}. In \cite[Theorem A]{Gignac2014}, Gignac worked with the Zariski topology, which is not Hausdorff. Here, we use the constructible topology systematically. We think that the constructible topology is the right topology for studying ergodic theory in algebraic dynamics. For example, using the constructible topology, we may avoid the notion of finite signed Borel measures used in \cite[Theorem A]{Gignac2014}; instead, we use the more standard notion of Radon measures. \end{rem} A sequence $x_n\in X, n\geq 0$ is said to be \emph{generic} if every subsequence $x_{n_i}, i\geq 0$ is Zariski dense in $X.$ \begin{cor}\label{corgenericseqence}A sequence $x_n\in X, n\geq 0$ is generic if and only if $$\lim_{n\to \infty}\delta_{x_n}=\delta_{\eta},$$ where $\eta$ is the generic point of $X.$ \end{cor} \medskip Let $f: X\dashrightarrow X$ be a dominant rational self-map. Set $|X|_f:=|X|\setminus (\cup_{i\geq 1}I(f^i)).$ Because every Zariski closed subset of $X$ is open and closed in the constructible topology, $|X|_f$ is a closed subset of $|X|.$ The restriction of $f$ to $|X|_f$ is continuous. We still denote by $f$ this restriction. \medskip \subsubsection{DML problems} Applying Corollary \ref{corgenericseqence}, the Dynamical Mordell-Lang conjecture can be interpreted as the following equidistribution statement: \begin{dmlcon}[DML in form of equidistribution] For $x\in X_f({\mathbf{k}})$, if $O_f(x)$ is Zariski dense in $X$, then $$\lim_{n\to \infty}\delta_{f^n(x)}=\delta_{\eta}.$$ \end{dmlcon} \begin{rem} Here the assumption that $O_f(x)$ is Zariski dense in $X$ does not cause any problem: after replacing $x$ by some $f^m(x)$ and $f$ by a suitable iterate, we may assume that $\overline{O_f(x)}$ is irreducible; then, after replacing $X$ by $\overline{O_f(x)}$, we may assume that $O_f(x)$ is Zariski dense in $X$. \end{rem} \medskip Using Theorem \ref{thmRadon}, we give a quick proof of the weak dynamical Mordell-Lang theorem. The same result was proved in \cite[Corollary 1.5]{Bell2015} (see also \cite[Theorem 2.5.8]{Favre2000a}, \cite[Theorem D, Theorem E]{Gignac2014}, \cite[Theorem 2]{Petsche2015}, \cite[Theorem 1.10]{Bell2020}). \begin{thm}[Weak DML]\label{thmdml} Let $x\in X_f({\mathbf{k}})$ be a point with $\overline{O_f(x)}=X.$ Let $V$ be a proper subvariety of $X$. Then $\{n\geq 0|\,\, f^n(x)\in V\}$ is of Banach density zero in ${\mathbb Z}_{\geq 0}$, i.e. for every sequence of intervals $I_n, n\geq 0$ in ${\mathbb Z}_{\geq 0}$ with $\lim_{n\to \infty}\# I_n=+\infty$, we have $$\lim_{n\to \infty}\frac{\#(\{n\geq 0|\,\, f^n(x)\in V\}\cap I_n)}{\#I_n}=0.$$ \end{thm} We also prove the weak dynamical Mordell-Lang for coherent backward orbits. A slightly weaker version was proved in \cite[Theorem F]{Gignac2014}. This can be viewed as a weak version of \cite[Conjecture 1.5]{Xie2018}. \begin{thm}[Weak DML for coherent backward orbits]\label{thmdmlback} Let $x_n \in X_f({\mathbf{k}}), n\leq 0$ be a sequence of points such that $\overline{\{x_n, n\leq 0\}}=X$ and $f(x_n)=x_{n+1}$ for all $n\leq -1.$ Let $V$ be a proper subvariety of $X$. Then $\{n\leq 0|\,\, x_n\in V\}$ is of Banach density zero in ${\mathbb Z}_{\leq 0}.$ \end{thm} \subsubsection{Backward orbits} Now assume that $f: X\to X$ is a flat and finite endomorphism. Let $d_f:=[{\mathbf{k}}(X):f^*{\mathbf{k}}(X)]$ be the topological degree of $f$. It is just the $(\dim X)$-th dynamical degree of $f$.
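As a minimal illustration of the topological degree (not taken from the text above), consider a one-variable power map: the topological degree is the degree of the induced extension of function fields.
\begin{exe}
Let $f:{\mathbb A}^1\to {\mathbb A}^1$ be given by $x\mapsto x^d$ with $d\geq 1$. Then $f^*{\mathbf{k}}(x)={\mathbf{k}}(x^d)$ and
$$d_f=[{\mathbf{k}}(x):{\mathbf{k}}(x^d)]=d.$$
When $d$ is invertible in ${\mathbf{k}}$, this is also the number of preimages of a general closed point.
\end{exe}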
\medskip Recall that for every $x\in X$, the multiplicity of $f$ at $x$ is $$m_f(x):=\dim_{\kappa(f(x))}(O_{X,x}/m_{f(x)}O_{X,x})\in {\mathbb Z}_{\geq 1}$$ where $O_{X,x}$ is viewed as an $O_{X,f(x)}$-module via $f$. For every $x\in X$, we have $\sum_{y\in f^{-1}(x)}m_f(y)=d_f$ (see \cite[Theorem 2.4]{Gignac2014a}). \medskip In Section \ref{subsecfunct}, we define a natural pullback $f^*: {\mathcal M}(|X|)\to {\mathcal M}(|X|)$ which is continuous and satisfies, for every $x\in X$, $$f^*\delta_x=\sum_{y\in f^{-1}(x)}m_f(y)\delta_y.$$ We get the following equidistribution result. \begin{thm}\label{thmequpullback}Let $f: X\to X$ be a flat and finite endomorphism. Let $x\in X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X.$ Then for every sequence of intervals $I_n, n\geq 0$ in ${\mathbb Z}_{\geq 0}$ with $\lim_{n\to \infty}\# I_n=+\infty$, we have $$\lim_{n\to \infty}\frac{1}{\#I_n}(\sum_{i\in I_n}d_f^{-i}(f^i)^*\delta_{x})=\delta_{\eta}.$$ \end{thm} \begin{rem}The assumption $\overline{\cup_{i\geq 0}f^{-i}(x)}=X$ is necessary. Otherwise, the measures $$\frac{1}{\#I_n}(\sum_{i\in I_n}d_f^{-i}(f^i)^*\delta_{x}), n\geq 0$$ are supported on the proper closed subset $\overline{\cup_{i\geq 0}f^{-i}(x)}$ of $X.$ \end{rem} \medskip Applying Theorem \ref{thmequpullback}, we count the preimages of a point without multiplicities. \begin{thm}\label{thmcountprestr}Let $f: X\to X$ be a flat and finite endomorphism. Assume that the field extension ${\mathbf{k}}(X)/f^*{\mathbf{k}}(X)$ is separable. Let $x\in X({\mathbf{k}})$ be a point with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X.$ For $c\in (0,1]$, $n\geq 0,$ define $$S^n_c:=\min\{\#S|\,\, S\subseteq f^{-n}(x),\,\, \sum_{y\in S}m_{f^n}(y)\geq cd_f^n\}.$$ Then for every $c\in (0,1]$, we have $$\lim_{n\to \infty}(S^n_c)^{1/n}= d_f.$$ \end{thm} Taking $c=1$ in Theorem \ref{thmcountprestr}, we get the following corollary. \begin{cor}\label{corcountpre}Let $f: X\to X$ be a flat and finite endomorphism. If the field extension ${\mathbf{k}}(X)/f^*{\mathbf{k}}(X)$ is separable, then for every $x\in X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X,$ $$\lim_{n\to \infty}(\#f^{-n}(x))^{1/n}= d_f.$$ \end{cor} If the topological degree is large, we have the following stronger equidistribution result. \begin{thm}\label{thmseqd} Let $f: X\to X$ be a flat and finite endomorphism of a quasi-projective variety. Assume that \begin{equation}\label{equladdom}d_f:=\lambda_{\dim X}(f)>\max_{1\leq i\leq \dim X-1} \lambda_i(f). \end{equation} If the field extension ${\mathbf{k}}(X)/f^*{\mathbf{k}}(X)$ is separable, then for every $x\in X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X,$ $$\lim_{n\to \infty}d_f^{-n}(f^n)^*\delta_x=\delta_{\eta}.$$ Moreover, for every irreducible subvariety $V$ of $X$ of dimension $d_V\leq \dim X-1$, $$\limsup_{n\to \infty}(\#(f^{-n}(x)\cap V))^{1/n}\leq \lambda_{d_V}(f)<d_f.$$ \end{thm} Assumption \eqref{equladdom} holds for polarized endomorphisms of projective varieties. A similar statement for polarized endomorphisms can be found in \cite[Theorem 5.1]{Gignac2014a}. See \cite{Guedj2005,Dinh2015} for analogous results in the complex topology. \medskip Theorem \ref{thmseqd} is not true without Assumption \eqref{equladdom}. \begin{exe} Keep the notation of Example \ref{exenotdml} and set $g:=f^{-1}.$ Then $\lambda_i(g)=1, i=0,1,2.$ Denote by $1_{V}$ the characteristic function of $V$. Since $V$ is open and closed in $|{\mathbb A}^2|$, $1_{V}$ is continuous.
We have $$\lim_{n\to \infty}\int 1_V\,(g^{p^n})^*\delta_{e}=\lim_{n\to \infty}1_V(f^{p^n}(e))=1\neq 0=\int 1_V\,\delta_{\eta}.$$ \end{exe} \subsection{Relation to Berkovich spaces} We will see in Section \ref{subsecberko} that $|X|$ can be viewed as a closed subset of the Berkovich analytification $X^{{\rm an}}$ of $X$ w.r.t. the trivial norm on ${\mathbf{k}}$. So the statements in ergodic theory on $|X|$ can be translated to statements on $X^{{\rm an}}.$ See the translations of Corollary \ref{corgenericseqence} and Theorem \ref{thmseqd} in Section \ref{subsecberko}. Using the reduction map, we may also use ergodic theory w.r.t. the constructible topology to study endomorphisms on Berkovich spaces with good reduction. In Section \ref{subsectionreduction}, we apply Theorem \ref{thmseqd} to get an equidistribution result for endomorphisms of large topological degree with good reduction. \subsection{Notation and Terminology} \begin{points} \item[$\bullet$] For a set $S$, denote by $\# S$ the cardinality of $S.$ \item[$\bullet$] A \emph{variety} is an irreducible separated scheme of finite type over a field. A \emph{subvariety} of a variety $X$ is a closed subset of $X.$ \item[$\bullet$] For a variety $X$ (resp. a rational map $f: X\dashrightarrow Y$) over a field $k$ and a subfield $K$ of $k$, we say that $X$ (resp. $f$) is \emph{defined over $K$} if there is a variety $X_K$ (resp. a rational map $f_K$) over $K$ such that $X$ (resp. $f$) is the base change by $k$ of $X_K$ (resp. $f_K$). \item[$\bullet$] For a rational map $f: X\dashrightarrow Y$ between varieties, denote by $I(f)$ the indeterminacy locus of $f$. \item[$\bullet$] For a dominant rational self-map $f: X\dashrightarrow X$ between varieties, a subvariety $V$ of $X$ is said to be \emph{$f$-invariant} if $I(f)$ does not contain any irreducible component of $V$ and $f(V)\subseteq V.$ \item[$\bullet$] For a projective variety $X$, $N^i(X)$ is the group of numerical $i$-cycles of $X$ and $N^i(X)_{{\mathbb R}}:=N^i(X)\otimes {\mathbb R}.$ \item[$\bullet$] For two Cartier ${\mathbb R}$-divisors $D_1,D_2$, write $D_1\equiv D_2$ if $D_1,D_2$ are numerically equivalent. \item[$\bullet$] For a field extension $k/K$, $\text{tr.d.}_Kk$ is the transcendence degree of $k/K.$ \end{points} \subsection*{Acknowledgement} I would like to thank Xinyi Yuan. Section \ref{sectionergodictheory} of this paper is motivated by some interesting discussions with him. \section{Dynamical degree and arithmetic degree} \subsection{The dynamical degrees}\label{subsectiondydeg} In this section we recall the definition and some basic facts on dynamical degrees. Let $X$ be a variety over ${\mathbf{k}}$ and $f: X\dashrightarrow X$ a dominant rational self-map. Let $X'$ be a normal projective variety which is birational to $X$. Let $L$ be an ample (or just nef and big) divisor on $X'$. Denote by $f'$ the rational self-map of $X'$ induced by $f$. For $i=0,1,\dots,\dim X$, and $n\geq 0$, $(f'^n)^*(L^i)$ is the $(\dim X-i)$-cycle on $X'$ defined as follows: let $\Gamma$ be a normal projective variety with a birational morphism $\pi_1\colon\Gamma\to X'$ and a morphism $\pi_2\colon\Gamma\to X'$ such that $f'^n=\pi_2\circ\pi_1^{-1}$. Then $(f'^n)^*(L^i):= (\pi_1)_*\pi_2^*(L^i)$. The definition of $(f'^n)^*(L^i)$ does not depend on the choice of $\Gamma$, $\pi_1$ and $\pi_2$. The $i$-th \textit{dynamical degree} of $f$ is $$ \lambda_i(f):=\lim_{n\to\infty}((f'^n)^*(L^i)\cdot L^{\dim X-i})^{1/n}.
$$ The limit converges and does not depend on the choice of $X'$ and $L$ \cite{Russakovskii1997, Dinh2005, Truong2020,Dang2020}. Moreover, if $\pi: X\dashrightarrow Y$ is a generically finite and dominant rational map between varieties and $g\colon Y\dashrightarrow Y$ is a rational self-map such that $g\circ\pi=\pi\circ f$, then $\lambda_i(f)=\lambda_i(g)$ for all $i$; for details, we refer to \cite[Theorem 1]{Dang2020} (and the projection formula), or Theorem 4 in its arXiv version \cite{Dang}. The following result is easy when ${\mathbf{k}}$ is of characteristic 0 and $Z\not\subseteq {\rm Sing}\, X$. \begin{pro}\cite[Proposition 3.2]{Jia2021}\label{p:dyn_sub_var} Let $X$ be a variety over ${\mathbf{k}}$ and $f\colon X\dashrightarrow X$ a dominant rational self-map. Let $Z$ be an irreducible subvariety in $X$ which is not contained in $I(f)$ such that $f|_Z$ induces a dominant rational self-map of $Z$. Then $\lambda_i(f|_Z)\leq \lambda_i(f)$ for $i=0,1,\dots,\dim Z$. \end{pro} \subsection{Arithmetic degree} The arithmetic degree was defined in \cite{Kawaguchi2016} over a number field or a function field of characteristic zero. In this section we extend this definition to the case of function fields of positive characteristic and prove some basic facts about it. Let ${\mathbf{k}}=\overline{K(B)}$, where $K$ is an algebraically closed field and $B$ is a smooth projective curve. \subsubsection{Weil height} Let $X$ be a normal and projective variety over ${\mathbf{k}}.$ For every $L\in {\rm Pic}(X)$, we denote by $h_L: X({\mathbf{k}})\to {\mathbb R}$ a Weil height associated to $L$ and the function field $K(B)$. It is unique up to adding a bounded function. \begin{exe}\label{exekbheight} Assume that $X$ is defined over $K(B)$, i.e. there is a projective morphism $\pi: X_B\to B$ where $X_B$ is normal and projective and the geometric generic fiber of $\pi$ is $X$. Assume that there is a line bundle $L_B$ on $X_B$ whose restriction to $X$ is $L$. In this case, for every $x\in X({\mathbf{k}})$, we may take $h_L$ to be $$h_{(X_B, L_B)}(x)=[K(B)(x):K(B)]^{-1}(\overline{x}\cdot L_B),$$ where $\overline{x}$ is the Zariski closure of $x$ in $X_B.$ \end{exe} Keep the notation of Example \ref{exekbheight}. Let $b$ be a point in $B(K).$ It induces a norm $|\cdot|_b$ on $K(B)$. Denote by $K(B)_b$ the completion of $K(B)$ w.r.t. $|\cdot|_b$. Denote by ${\mathbb C}_b$ the completion of $\overline{K(B)_b}.$ Every field embedding $\tau: {\mathbf{k}}=\overline{K(B)}\hookrightarrow {\mathbb C}_b$ induces an embedding $\phi_{\tau}: X({\mathbf{k}})\hookrightarrow X({\mathbb C}_b).$ On $X({\mathbb C}_b)$, we have a natural $b$-adic topology induced by $|\cdot|_b$. \begin{rem}\label{remopenred}Let $x_b$ be a point in $X_b(K)$, where $X_b:=\pi^{-1}(b)$. Then $x_b$ defines a nonempty open subset $U_{x_b}$ consisting of all points in $X({\mathbb C}_b)$ whose reduction is $x_b.$ Moreover, for every $x\in \phi_{\tau}^{-1}(U_{x_b})$, $x_b$ is contained in the Zariski closure of $x$ in $X_B.$ \end{rem} \begin{lem}\label{lemlocheig} There is $d\geq 1$ such that for every $b\in B(K),$ every nonempty $b$-adic open subset $U\subseteq X({\mathbb C}_b)$ and every $l\geq 1$, there is $x\in X({\mathbf{k}})$ such that $\phi_{\tau}(x)\in U$, $\deg(x)\leq d$ and $h_L(x)\geq l$. \end{lem} \proof By the Noether normalization lemma, we only need to prove the lemma when $X=\P^N$ and $L={\mathcal O}(1).$ After replacing $K(B)$ by a finite extension and changing coordinates, we may assume that $0\in U.$ We may assume that $h_L$ is the naive height on $\P^N$, i.e.
the height defined by the model $(\P^N_B, {\mathcal O}_{\P^N_B}(1)).$ Pick any rational function $g\in K(B)\setminus \{0\}$ with $g(b)=0.$ Then for $n\geq 1$, $x_n:=(g^n,\dots, g^n)\in {\mathbb A}^N(K(B)).$ We have $h_L(x_n)\to \infty$ as $n\to \infty$ and $\phi_{\tau}(x_n)\to 0$ in the $b$-adic topology. This concludes the proof. \endproof \subsubsection{Admissible triples.} As in \cite{Jia2021}, we define an \textit{admissible triple} to be $(X,f,x)$ where $X$ is a quasi-projective variety over ${\mathbf{k}}$, $f\colon X\dashrightarrow X$ is a dominant rational self-map and $x\in X_f({\mathbf{k}})$. We say that $(X,f,x)$ \textit{dominates} (resp.~\textit{generically finitely dominates}) $(Y,g,y)$ if there is a dominant rational map (resp.~generically finite and dominant rational map) $\pi\colon X\dashrightarrow Y$ such that $\pi\circ f=g\circ\pi$, $\pi$ is well defined along $O_f(x)$ and $\pi(x)=y$. We say that $(X,f,x)$ is \textit{birational} to $(Y,g,y)$ if there is a birational map $\pi\colon X\dashrightarrow Y$ such that $\pi\circ f=g\circ\pi$ and if there is a Zariski dense open subset $V$ of $Y$ containing $O_g(y)$ such that $\pi|_U: U:=\pi^{-1}(V)\to V$ is a well-defined isomorphism and $\pi(x)=y$. In particular, if $(X,f,x)$ is birational to $(Y,g,y)$, then $(X,f,x)$ generically finitely dominates $(Y,g,y)$. \begin{rem} \leavevmode \begin{enumerate} \item If $(X,f,x)$ dominates $(Y,g,y)$ and if $O_f(x)$ is Zariski dense in $X$, then $O_g(y)$ is Zariski dense in $Y$. Moreover, if $(X,f,x)$ generically finitely dominates $(Y,g,y)$, then $O_f(x)$ is Zariski dense in $X$ if and only if $O_g(y)$ is Zariski dense in $Y$. \item Every admissible triple $(X,f,x)$ is birational to an admissible triple $(X',f',x')$ where $X'$ is projective. Indeed, we may pick $X'$ to be any projective compactification of $X$, $f'$ the self-map of $X'$ induced from $f$, and $x'=x$. \end{enumerate} \end{rem} \subsubsection{The set $A_f(x)$.} As in \cite{Jia2021}, we will associate to an admissible triple $(X,f,x)$ a subset $$A_f(x)\subseteq [1,\infty].$$ \begin{rem} We will show in Proposition \ref{proupboundarth} that $A_f(x)\subseteq [1,\lambda_1(f)].$ \end{rem} We first define it when $X$ is projective. Let $L$ be an ample divisor on $X$; we define $$A_f(x)\subseteq [1,\infty]$$ to be the limit set of the sequence $(h_L^+(f^n(x)))^{1/n}$, $n\geq 0$, where $h_L^+(\cdot):=\max\{h_L(\cdot),1\}$. The following lemma was proved in \cite[Lemma 3.8]{Jia2021} when ${\mathbf{k}}=\overline{{\mathbb Q}}$, but its proof still works in our case. It shows that the set $A_f(x)$ does not depend on the choice of $L$ and is invariant in the birational equivalence class of $(X,f,x)$. \begin{lemma}\cite[Lemma 3.8]{Jia2021}\label{lemsingwilldef} Let $\pi\colon X\dashrightarrow Y$ be a dominant rational map between projective varieties. Let $U$ be a Zariski dense open subset of $X$ such that $\pi|_U\colon U\to Y$ is well-defined. Let $L$ be an ample divisor on $X$ and $M$ an ample divisor on $Y$. Then there are constants $C\geq 1$ and $D>0$ such that for every $x\in U$, we have \begin{equation}\label{equationdomineq1} h_M(\pi(x))\leq Ch_L(x)+D. \end{equation} Moreover if $V:=\pi(U)$ is open in $Y$ and $\pi|_U\colon U\to V$ is an isomorphism, then there are constants $C\geq 1$ and $D>0$ such that for every $x\in U$, we have \begin{equation}\label{equationbirdomineq} C^{-1}h_L(x)-D\leq h_M(\pi(x))\leq Ch_L(x)+D.
\end{equation} \end{lemma} Now for every admissible triple $(X,f,x)$, we define $A_f(x)$ to be $A_{f'}(x')$ where $(X',f',x')$ is an admissible triple which is birational to $(X,f,x)$ such that $X'$ is projective. By Lemma~\ref{lemsingwilldef}, this definition does not depend on the choice of $(X',f',x')$. \subsubsection{The arithmetic degree.}\label{subsec_arithdeg} We define (see also \cite{Kawaguchi2016}): \[ \overline{\alpha}_f(x):=\sup A_f(x),\qquad\underline{\alpha}_f(x):=\inf A_f(x). \] We say that $\alpha_f(x)$ is well-defined and call it the \textit{arithmetic degree} of $f$ at $x$, if $\overline{\alpha}_f(x)=\underline{\alpha}_f(x)$; and, in this case, we set \[ \alpha_f(x):=\overline{\alpha}_f(x)=\underline{\alpha}_f(x). \] By Lemma~\ref{lemsingwilldef}, if $(X,f,x)$ dominates $(Y,g,y)$, then $\overline{\alpha}_f(x)\geq \overline{\alpha}_g(y)$ and $\underline{\alpha}_f(x)\geq\underline{\alpha}_g(y)$. Applying Inequality~\eqref{equationdomineq1} of Lemma~\ref{lemsingwilldef} to the case where $Y=X$ and $M=L$, we get the following trivial upper bound: let $f\colon X\dashrightarrow X$ be a dominant rational self-map, $L$ any ample line bundle on $X$ and $h_L$ a Weil height function associated to $L$; then there is a constant $C\geq 1$ such that for every $x\in X\setminus I(f)$, we have \begin{equation}\label{equationtrivialupper} h_L^+(f(x))\leq Ch_L^+(x). \end{equation} For a subset $A\subseteq [1,\infty)$, define $A^{1/\ell}:= \{a^{1/\ell}\mid a\in A\}$. We have the following simple properties, where the second half of \ref{eq:alpha_pow} uses Inequality~\eqref{equationtrivialupper}. \begin{pro}\label{probasicaf}We have: \begin{enumerate} \item $A_f(x)\subseteq [1,\infty)$. \item $A_f(x)=A_f(f^{\ell}(x))$, for any $\ell\geq 0$. \item \label{eq:alpha_pow} $A_{f}(x)=\bigcup_{i=0}^{\ell-1}(A_{f^{\ell}}(f^i(x)))^{1/\ell}$. In particular, $\overline{\alpha}_{f^{\ell}}(x)=\overline{\alpha}_{f}(x)^{\ell}$, $\underline{\alpha}_{f^{\ell}}(x)=\underline{\alpha}_{f}(x)^{\ell}$. \end{enumerate} \end{pro} The following lemma is easy. \begin{lemma} \label{lem_subvar} Let $f\colon X\dashrightarrow X$ be a dominant rational self-map of a projective variety $X$ and $W\subseteq X$ an $f$-invariant subvariety. Then $X_f({\mathbf{k}})\cap W({\mathbf{k}})\subseteq W_{f|_W}({\mathbf{k}})$ and for every $x\in X_f({\mathbf{k}})\cap W({\mathbf{k}})$, $\alpha_{f|_W}(x)=\alpha_f(x).$ \end{lemma} When ${\mathbf{k}}=\overline{{\mathbb Q}}$, the next result was proved in \cite[Theorem 1.4]{Matsuzawa2020a} in the smooth case and in \cite[Proposition 3.11]{Jia2021} in the singular case. The proof here in the function field case is much easier. \begin{pro}[Kawaguchi-Silverman-Matsuzawa's upper bound]\label{proupboundarth} For every admissible triple $(X,f,x_0)$, we have $\overline{\alpha}_f(x_0)\leq \lambda_1(f)$. \end{pro} \begin{proof} We may assume that $X$ is projective. Set $d:=\dim X.$ After replacing $f$ by a suitable iterate and $x_0$ by $f^n(x_0)$ for some $n\geq 0$, noting that $\lambda_1(f^m)=\lambda_1(f)^m$ and using Proposition \ref{probasicaf}, we may assume that the Zariski closure $Z_f(x_0)$ of $O_f(x_0)$ is irreducible. By Proposition~\ref{p:dyn_sub_var} and Lemma~\ref{lem_subvar}, we may replace $X$ by $Z_f(x_0)$ and assume that $O_f(x_0)$ is Zariski dense in $X$. Assume that $X$ is defined over $K(B)$, i.e. there is a projective morphism $\pi: {\mathcal X}\to B$ where ${\mathcal X}$ is projective and normal and the geometric generic fiber of $\pi$ is $X$.
Pick an ample line bundle $L_B$ on ${\mathcal X}$ and let $L$ be its restriction to $X$. We take the Weil height $h_L: X({\mathbf{k}})\to {\mathbb R}$ as follows: for every $x\in X({\mathbf{k}})$, $$h_L(x):=h_{({\mathcal X}, L_B)}(x)=[K(B)(x):K(B)]^{-1}(\overline{x}\cdot L_B).$$ We may assume that $x_0$ is defined over $K(B)$. Let $F: {\mathcal X}\dashrightarrow {\mathcal X}$ be the rational self-map over $B$ induced by $f.$ The relative dynamical degree formula \cite[Theorem 4]{Dang} shows that $$\lambda_1(F)=\max\{1,\lambda_1(f)\}=\lambda_1(f).$$ So for every $r>0$, there is $C_r>0$ such that for every $n\geq 0$, \begin{equation}\label{equcnln}((F^n)^*L_B\cdot L_B^d)\leq C_r (\lambda_1(f)+r)^n. \end{equation} Let ${\mathcal I}$ be the ideal sheaf of $\overline{x_0}$ on ${\mathcal X}.$ After replacing $L_B$ by a suitable multiple, we may assume that $L_B\otimes {\mathcal I}$ is globally generated. For every $n\geq 0$, there are divisors $H_i, i=1,\dots,d$ in $|L_B|$ such that $H_1\cap \dots \cap H_d$ has dimension $1$ and contains $\overline{x_0}$ as an irreducible component. Set $V_n:=H_1\cdot \dots \cdot H_d$. Let $\Gamma$ be a normal projective variety with a birational morphism $\pi_1\colon\Gamma\to {\mathcal X}$ and a morphism $\pi_2:\Gamma\to {\mathcal X}$ such that $F^n=\pi_2\circ\pi_1^{-1}$. Write $(\pi_1)^{\#}\overline{x_0}$ for the strict transform of $\overline{x_0}$ by $\pi_1.$ Then $(\pi_1)^{\#}\overline{x_0}$ is an irreducible component of $\cap_{i=1}^d(\pi_1^*H_i).$ As classes of $1$-cycles on $\Gamma$, we have $\pi_1^*V_n=\pi_1^*H_1\cdot\dots \cdot \pi_1^*H_d.$ By \cite[Lemma 3.3]{Jia2021}, $\pi_1^*V_n-(\pi_1)^{\#}\overline{x_0}$ is pseudo-effective. Then we have $$h_L(f^n(x_0))=(\overline{f^n(x_0)}\cdot L_B)=((\pi_1)^{\#}\overline{x_0}\cdot \pi_2^*L_B)$$ $$\leq (\pi_1^*H_1\cdot \dots \cdot \pi^*_1H_d\cdot \pi_2^*L_B)=((F^n)^*L_B\cdot L_B^d)\leq C_r (\lambda_1(f)+r)^n.$$ It follows that $$\overline{\alpha}_f(x_0)=\limsup_{n\to \infty}h_L(f^n(x_0))^{1/n}\leq \lim_{n\to \infty}(C_r (\lambda_1(f)+r)^n)^{1/n}=\lambda_1(f)+r.$$ Letting $r\to 0$, we conclude the proof. \end{proof} \subsection{Canonical height} Let $X$ be a normal projective variety and $f: X\to X$ a surjective endomorphism. Let $A$ be an ample divisor on $X$ and denote by $h_A$ a Weil height on $X({\mathbf{k}})$ associated to $A$ with $h_A\geq 1.$ \begin{pro}\label{procanheight} Let $D$ be a nonzero Cartier ${\mathbb R}$-divisor such that $f^*D\equiv\beta D$ where $\beta> \lambda_1(f)^{1/2}.$ Let $[D]\in N^1(X)_{{\mathbb R}}$ be the numerical class of $D.$ Then for every $x\in X({\mathbf{k}})$, the limit $h_{[D]}^+(x):=\lim_{n\to \infty}h_{D}(f^n(x))/\beta^n$ exists, depends only on the numerical class $[D]$, and satisfies the following properties: \begin{points} \item $h_{[D]}^+=h_{D}+O(h_A^{1/2})$; \item $h_{[D]}^+\circ f=\beta h_{[D]}^+$. \end{points} \end{pro} \proof This result was proved in \cite[Theorem 5]{Kawaguchi2016} in characteristic zero. The proof presented here is the same as that of \cite[Theorem 5]{Kawaguchi2016}, but slightly shorter.
By \cite[Proposition B.3]{Matsuzawa2020a}, there is $C>0$ such that for every $x\in X({\mathbf{k}})$, $$|h_D(f(x))-\beta h_D(x)|\leq Ch_A(x)^{1/2}.$$ Pick $\mu\in (\lambda_1(f)^{1/2}, \beta)$; by Proposition \ref{proupboundarth}, for every $x\in X({\mathbf{k}})$, there is $C_x>0$ such that for every $n\geq 0$, $$h_A(f^n(x))\leq C_x\mu^{2n}h_A(x).$$ Then we have $$|h_{D}(f^n(x))/\beta^n-h_{D}(f^{n-1}(x))/\beta^{n-1}|=\beta^{-n}|h_{D}(f^n(x))-\beta h_{D}(f^{n-1}(x))|$$ $$\leq \beta^{-n}Ch_A(f^{n-1}(x))^{1/2}\leq \beta^{-n}CC_x^{1/2}\mu^nh_A(x)^{1/2}=CC_x^{1/2}(\mu/\beta)^nh_A(x)^{1/2}.$$ Since $0<\mu/\beta<1$, $$h_{[D]}^+(x)=h_D(x)+\sum_{n\geq 1}(h_{D}(f^n(x))/\beta^n-h_{D}(f^{n-1}(x))/\beta^{n-1})$$ converges and $$|h_{[D]}^+(x)-h_D(x)|\leq \sum_{n\geq 1}|h_{D}(f^n(x))/\beta^n-h_{D}(f^{n-1}(x))/\beta^{n-1}|$$ $$\leq (\sum_{n\geq 1}CC_x^{1/2}(\mu/\beta)^n)h_A(x)^{1/2}=O(h_A(x)^{1/2}).$$ Then we get (i). The statement (ii) follows from the definition. For $D'\equiv D$, by \cite[Proposition B.3]{Matsuzawa2020a}, there is $B>0$ such that for every $x\in X({\mathbf{k}})$, $$|h_{D'}(x)-h_D(x)|\leq Bh_A(x)^{1/2}.$$ Then $$|h_{[D']}^+(x)-h_{[D]}^+(x)|=\lim_{n\to \infty}|h_{D'}(f^n(x))-h_{D}(f^n(x))|/\beta^n$$ $$\leq \limsup_{n\to \infty}Bh_A(f^n(x))^{1/2}/\beta^n\leq \limsup_{n\to \infty}BC_x^{1/2}h_A(x)^{1/2}(\mu/\beta)^n=0,$$ which concludes the proof.\endproof The following was proved in \cite[Lemma 9.1]{Matsuzawa2018} when ${\mathbf{k}}=\overline{{\mathbb Q}}$ and $X$ is smooth. After replacing \cite[Theorem 5]{Kawaguchi2016} by Proposition \ref{procanheight}, \cite[Lemma 9.1]{Matsuzawa2018} is still valid when ${\mathbf{k}}=\overline{K(B)}$ and $X$ is singular. \begin{pro}\label{proextarhitd} Assume that $\lambda_1(f)>1$. Let $D\not\equiv 0$ be a nef ${\mathbb R}$-Cartier divisor on $X$ such that $f^*D\equiv \lambda_1(f)D$. Let $V\subseteq X$ be a subvariety of positive dimension such that $(D^{\dim V}\cdot V)>0$. Then there exists a nonempty open subset $U\subseteq V$ and a set $S\subseteq U({\mathbf{k}})$ of bounded height such that for every $x\in U({\mathbf{k}})\setminus S$ we have $\alpha_f(x)=\lambda_1(f)$. \end{pro} \begin{cor}\label{cordenseal}Keep the notation in Proposition \ref{proextarhitd}. For every Zariski dense open subset $U$ of $X$, there is $x\in U({\mathbf{k}})$ such that $\alpha_f(x)=\lambda_1(f)$ and $O_f(x)\subseteq U$. \end{cor} \proof[Proof of Corollary \ref{cordenseal}] We may assume that $X$ is normal and that $X,f,D$ and $U$ are defined over $K(B).$ There is a normal and projective $B$-scheme $\pi: X_B\to B$ and a rational self-map $f_B: X_B\dashrightarrow X_B$ over $B$ such that the geometric generic fiber of $(X_B,f_B)$ is $(X,f).$ Let $b$ be a general point of $B(K)$ and denote by $(X_b,f_b)$ the fiber of $(X_B,f_B)$ above $b$. Then $f_b$ is an endomorphism of $X_b.$ Set $Z:=X\setminus U$. Let $Z_B$ be the Zariski closure of $Z$ in $X_B$ and set $U_b:=X_b\setminus Z_B$. By Proposition \ref{proxfnonempty} (see Section \ref{subsectionexweldef} for its proof), there is $x_b\in (U_b)_{f_b|_{U_b}}(K).$ Let $M$ be a very ample line bundle on $X_B.$ Take $W_B$ to be the intersection of $\dim X-1$ general elements of $|10M|$ passing through $x_b.$ By \cite[Theorem 0.4]{Benoist2011}, $W_B$ is irreducible. Let $W\subseteq X$ be the generic fiber of $W_B\to B$. It is of pure dimension $1$.
Then $(W\cdot D)>0.$ Because $W_B$ is irreducible, for every irreducible component $W'$ of $W$, $(W'\cdot D)>0.$ By Lemma \ref{lemlocheig} and Remark \ref{remopenred}, there are $x_n\in W'({\mathbf{k}}), n\geq 0$ such that $x_b\in \overline{\{x_n\}}$ and the height of $x_n$ tends to $+\infty$. Because $O_{f_b}(x_b)\subseteq U_b$, we have $O_f(x_n)\subseteq U$ for all $n\geq 0.$ By Proposition \ref{proextarhitd} applied to $W'$, for $n\gg 0$ we have $\alpha_f(x_n)=\lambda_1(f)$, which concludes the proof. \endproof \section{Proof of Theorem \ref{dmlsurface}} This proof mixes ideas from \cite{Xie2014} and \cite{Lesieutre2021}. \subsection{Reduction to the smooth case} By \cite{Lipman1978}, there is a minimal desingularization $\pi:X'\to X$. Then one may lift $f$ to an automorphism $f'$ of $X'.$ The following lemma allows us to replace $(X,f)$ by $(X',f')$ and assume that $X$ is smooth. \begin{lem}\label{lemredudesing}If $(X',f')$ satisfies the DML property, then $(X,f)$ satisfies the DML property. \end{lem} \proof Assume that $(X',f')$ satisfies the DML property. We only need to prove the following statement: for every $x\in X({\mathbf{k}})$ and every irreducible curve $C\subseteq X$, if $O_f(x)\cap C$ is infinite, then $C$ is $f$-periodic. Pick $x'\in \pi^{-1}(x)({\mathbf{k}})$. There is an irreducible component $C'$ of $\pi^{-1}(C)$ such that $O_{f'}(x')\cap C'$ is infinite. We have $\dim C'\leq 1.$ If $\pi(C')\neq C$, then $\pi(C')$ is a point, so $x=\pi(x')$ is preperiodic and $O_f(x)\cap C$ is finite, a contradiction. So $\pi(C')= C$ and $\dim C'=1.$ Since $(X',f')$ satisfies the DML property, $C'$ is $f'$-periodic. So $C=\pi(C')$ is $f$-periodic. \endproof \subsection{Numerical geometry}\label{subsectionnumgeo} Set $\lambda:=\lambda_1(f)>1$. There is a nef class $\theta^*\in N^1(X)_{{\mathbb R}}\setminus \{0\}$ such that $f^*\theta^*=\lambda\theta^*.$ By the projection formula, $\lambda_1(f^{-1})=\lambda.$ So there is a nef class $\theta_*\in N^1(X)_{{\mathbb R}}\setminus \{0\}$ such that $(f^{-1})^*\theta_*=\lambda\theta_*.$ Then $f^*\theta_*=\lambda^{-1}\theta_*.$ Since $\lambda^2({\theta^*}^2)=((f^*\theta^*)^2)=({\theta^*}^2),$ we get $({\theta^*}^2)=0.$ Similarly, $({\theta_*}^2)=0.$ By the Hodge index theorem, $(\theta^*\cdot \theta_*)>0.$ It follows that $((\theta^*+\theta_*)^2)>0.$ So $\theta^*+\theta_*$ is big and nef. Set $H:=\{\alpha\in N^1(X)_{{\mathbb R}}|\,\, (\theta^*\cdot \alpha)=(\theta_*\cdot \alpha)=0\}.$ It is clear that $N^1(X)_{{\mathbb R}}={\mathbb R}\theta^*\oplus {\mathbb R}\theta_*\oplus H$ and $f^*H=H.$ By the Hodge index theorem, the intersection form on $H$ is negative definite. Since $f^*$ preserves the intersection form, all eigenvalues of $f^*|_H$ are of norm $1$. Since $f^*$ is an automorphism of the lattice $N^1(X)\subseteq N^1(X)_{{\mathbb R}}$, all eigenvalues of $f^*: N^1(X)_{{\mathbb R}}\to N^1(X)_{{\mathbb R}}$ are algebraic integers. In particular both $\lambda$ and $\lambda^{-1}$ are algebraic integers. \begin{lem}\label{lemlaoneconj}There is $\sigma\in {\rm Gal}(\overline{{\mathbb Q}}/{\mathbb Q})$ such that $\sigma(\lambda)=\lambda^{-1}.$ \end{lem} \proof[Proof of Lemma \ref{lemlaoneconj}] Since $\lambda$ is an algebraic integer with $\lambda>1$, by the product formula, there is $\sigma\in {\rm Gal}(\overline{{\mathbb Q}}/{\mathbb Q})$ such that $|\sigma(\lambda)|<1.$ Because $\sigma(\lambda)$ is an eigenvalue of $f^*$ and $\lambda^{-1}$ is the unique eigenvalue of $f^*$ with norm $<1,$ we have $\sigma(\lambda)=\lambda^{-1}$.
\endproof Then $f^*\sigma(\theta^*)=\sigma(f^*\theta^*)=\sigma(\lambda)\sigma(\theta^*)=\lambda^{-1}\sigma(\theta^*).$ So there is $c>0$ such that $\theta_*=c\sigma(\theta^*).$ After replacing $\theta_*$ by $c^{-1}\theta_*$, we may assume that $\sigma(\theta^*)=\theta_*.$ \begin{cor}\label{corcurudst}For every curve $C$ of $X$, $(\theta^*\cdot C)=0$ if and only if $(\theta_*\cdot C)=0.$ \end{cor} \proof[Proof of Corollary \ref{corcurudst}] The subspace $P:=\{\alpha\in N^1(X)_{{\mathbb C}}|\,\, (\alpha\cdot C)=0\}$ is a hyperplane of $N^1(X)_{{\mathbb C}}$ defined over ${\mathbb Q}.$ We have $\sigma(P)=P.$ Embed $N^1(X)_{{\mathbb R}}$ in $N^1(X)_{{\mathbb C}}$. Then $\theta^*\in P$ if and only if $\theta_*=\sigma(\theta^*)\in \sigma(P)=P.$ \endproof \subsection{Canonical height} In this section, we assume that \begin{points} \item either ${\mathbf{k}}=\overline{{\mathbb Q}}$; \item or there is an algebraically closed subfield $K\subseteq {\mathbf{k}}$ and a curve $B$ over $K$ such that $X$ and $f$ are defined over $K(B)$ and ${\mathbf{k}}=\overline{K(B)}.$ \end{points} Let $A$ be an ample divisor of $X$ and denote by $h_A$ a Weil height on $X({\mathbf{k}})$ associated to $A$ with $h_A\geq 1.$ Pick ${\mathbb R}$-divisors $D^{*}$ and $D_*$ with numerical classes $\theta^*, \theta_*$. By \cite[Theorem 5]{Kawaguchi2016} and \cite{Kawaguchi2020} in characteristic zero and Proposition \ref{procanheight} in positive characteristic, for every $y\in X({\mathbf{k}})$, the limits $$h^+(y):=\lim_{n\to \infty}h_{D^*}(f^n(y))/\lambda^n$$ and $$h^-(y):=\lim_{n\to \infty}h_{D_*}(f^{-n}(y))/\lambda^n$$ exist, do not depend on the choice of $D^{*}$, $D_*$, $h_{D^*}$ and $h_{D_*},$ and satisfy the following properties: \begin{points} \item $h^+=h_{D^*}+O(h_A^{1/2})$, $h^-=h_{D_*}+O(h_A^{1/2})$; \item $h^+\circ f=\lambda h^+$ and $h^-\circ f=\lambda^{-1}h^-.$ \end{points} \begin{lem}\label{lemcomheigonc}Let $C$ be an irreducible curve of $X$ such that $(C\cdot \theta_*)>0.$ Then for every $M\geq 0$, there is $M'\geq 0$, such that $$\{y\in C({\mathbf{k}})|\,\, h^-(y)\leq M\}\subseteq \{y\in C({\mathbf{k}})|\,\, h_A(y)\leq M'\}.$$ \end{lem} \proof[Proof of Lemma \ref{lemcomheigonc}] There is $d>0$ such that $$h^{-}\geq h_{D_*}-dh_A^{1/2}.$$ Pick $a>0$ such that $a(D_*\cdot C)>(A\cdot C).$ Then there is $b>0$ such that for every $y\in C$, $$ah_{D_*}(y)+b\geq h_{A}(y).$$ So for every $y\in C,$ $$h^{-}(y)\geq a^{-1}(h_{A}(y)-b)-dh_A^{1/2}(y).$$ If $h^-(y)\leq M$, we get $$M\geq a^{-1}(h_{A}(y)-b)-dh_A^{1/2}(y)=(a^{-1}h_A^{1/2}(y)-d)h_A^{1/2}(y)-a^{-1}b.$$ This implies that $$h_A^{1/2}(y)\leq \max\{ad, aM+b+ad\}=aM+b+ad.$$ Then we get $h_A(y)\leq (aM+b+ad)^2.$ \endproof \subsection{The case $(C\cdot \theta_*)>0$} \begin{lem}\label{lemcthqezerfinite}Let $C$ be an irreducible curve of $X$ such that $(C\cdot \theta_*)>0.$ For every $x\in X({\mathbf{k}})$, $O_f(x)\cap C$ is finite. \end{lem} \proof[Proof of Lemma \ref{lemcthqezerfinite}] Let ${\mathbb F}$ be the minimal algebraically closed subfield of ${\mathbf{k}}$. So ${\mathbb F}=\overline{{\mathbb Q}}$ if ${\rm char}\, {\mathbf{k}}=0$ and ${\mathbb F}=\overline{{\mathbb F}_p}$ when ${\rm char}\, {\mathbf{k}}=p>0.$ There is an algebraically closed subfield ${\mathbf{k}}'$ of ${\mathbf{k}}$ with $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}'<\infty$ such that $X,f,C$ and $x$ are defined over ${\mathbf{k}}'$. After replacing ${\mathbf{k}}$ by ${\mathbf{k}}'$, we may assume $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}<\infty$.
Now we prove Lemma \ref{lemcthqezerfinite} by induction on $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}.$ \medskip When ${\mathbf{k}}=\overline{{\mathbb F}_p}$ for some prime $p>0,$ $O_f(x)$ is finite, so Lemma \ref{lemcthqezerfinite} holds. \medskip Assume ${\mathbf{k}}=\overline{{\mathbb Q}}.$ Set $I:=\{i\geq 0|\,\, f^i(x)\in C\}.$ For every $i\in I$, $h^-(f^i(x))=\lambda^{-i}h^-(x)\leq h^-(x).$ By Lemma \ref{lemcomheigonc}, there is $M>0$ such that $h_A(f^i(x))<M$ for every $i\in I.$ We conclude the proof by the Northcott property. \medskip Now we may assume that $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}\geq 1.$ There is an algebraically closed subfield $K\subseteq {\mathbf{k}}$ and a smooth irreducible projective curve $B$ over $K$, such that $X$, $f$, $C$ and $x$ are defined over $K(B)$ and ${\mathbf{k}}=\overline{K(B)}.$ There is a projective morphism $\pi: {\mathcal X}\to B$ whose geometric generic fiber is $X$. The automorphism $f$ extends to a birational self-map $f_B: {\mathcal X}\dashrightarrow {\mathcal X}$ over $B$. Let $A_B$ be an ample divisor on ${\mathcal X}$. Let $C_B$ be the Zariski closure of $C$ in ${\mathcal X}$. Let $A$ be the restriction of $A_B$ to the generic fiber $X.$ There is a nonempty open subset $U$ of $B$, such that $\pi$ is smooth above $U$ and $f_B|_{\pi^{-1}(U)}$ is an automorphism. After rescaling $\theta_*$, we may assume that $(A\cdot \theta_*)=1.$ For every $b\in B$, let $X_b:=\pi^{-1}(b)$, $C_b:=C_B\cap X_b$, let $f_b$ be the restriction of $f_B$ to $X_b$ and $A_b$ the restriction of $A_B$ to $X_b.$ After shrinking $U$, we may assume that $C_b$ is irreducible for every $b\in U.$ For every $n\geq 0$ and $b\in U$, we have $((f^n)^*A\cdot A)=((f_b^n)^*A_b\cdot A_b)$. So $\lambda_1(f_b)=\lambda_1(f)=\lambda>1.$ For $b\in U$, set $$\theta_{*,b}':=\lim_{n\to \infty}(f_b^{-n})^*A_b/\lambda^n.$$ The discussion in Section \ref{subsectionnumgeo} shows that ${\mathbb R}\theta_{*,b}'$ is the eigenspace of $(f_b^{-1})^*$ in $N^1(X_b)_{{\mathbb R}}$ for the eigenvalue $\lambda$. Set $\theta_{*,b}:=\theta_{*,b}'/(\theta_{*,b}'\cdot A_b).$ We have $$(\theta_{*,b}\cdot C_b)=(\theta_{*}\cdot C)>0.$$ \medskip Set $I:=\{i\geq 0|\,\, f^i(x)\in C\}.$ For every $i\in I$, $h^-(f^i(x))=\lambda^{-i}h^-(x)\leq h^-(x).$ By Lemma \ref{lemcomheigonc}, there is $M>0$ such that $h_A(f^i(x))<M$ for every $i\in I.$ For every point $y\in X$ defined over $K(B)$, its closure $s_y$ in ${\mathcal X}$ is a section of $\pi.$ We may assume that for every $y\in X(K(B))$, $h_A(y)=(A_B\cdot s_y)$. Also, for every section $s$ of $\pi$, its generic fiber defines a point $y_s\in X(K(B)).$ For every $y\in X(K(B))$, $\pi$ induces an isomorphism from $s_y$ to the curve $B$. Consider the Hilbert polynomial $$\chi(s^*_y{\mathcal O}(nA_B))=1-g(B)+n(s_y\cdot A_B)=1-g(B)+nh_A(y).$$ So there is a quasi-projective $K$-variety ${\mathcal M}_M$ that parameterizes the sections $s$ of $\pi$ with $h_A(y_s)\leq M$ (see \cite{debarre}). For every $b\in U$, denote by $e_b: {\mathcal M}_M\to X_b$ the morphism $s\mapsto s(b).$ Pick a sequence $b_i, i\geq 1$ of distinct points in $U(K)$. For $s_1, s_2\in {\mathcal M}_M$, $s_1=s_2$ if and only if $e_{b_i}(s_1)=e_{b_i}(s_2)$ for every $i\geq 1.$ For $l\geq 1$, set $$e_l:=\prod_{i=1}^le_{b_i}: {\mathcal M}_M\to \prod_{i=1}^lX_{b_i}.$$ By \cite[Lemma 8.1]{Xie2014}, there is $L\geq 1$ such that $e_L$ is quasi-finite.
For $j\in I$, $f^j(x)$ defines a point $s_{f^j(x)}\in {\mathcal M}_M.$ The induction hypothesis shows that, for $i=1,\dots,L$, $$e_{b_i}(\{f^j(x)|\,\, j\in I\})=\{f_{b_i}^j(x_{b_i})|\,\, j\in I\}\subseteq O_{f_{b_i}}(x_{b_i})\cap C_{b_i}$$ is finite. So $e_L(\{f^j(x)|\,\, j\in I\})$ is finite. Since $e_L$ is quasi-finite, $O_f(x)\cap C=\{f^j(x)|\,\, j\in I\}$ is finite. \endproof \subsection{Conclusion} Let $x\in X({\mathbf{k}})$ and let $C$ be an irreducible curve of $X$. If $(C\cdot \theta_*)>0,$ we conclude the proof by Lemma \ref{lemcthqezerfinite}. Now assume that $(C\cdot \theta_*)=0.$ Let $B(f)$ be the set of curves $C'$ with $(C'\cdot \theta_*)=0.$ By Corollary \ref{corcurudst}, $C'\in B(f)$ if and only if $(C'\cdot \theta^*)=0$, if and only if $(C'\cdot (\theta^*+\theta_*))=0$. Since $\theta^*+\theta_*$ is big and nef, $B(f)$ is finite. Since $f^*\theta^*=\lambda\theta^*,$ $C'\in B(f)$ if and only if $f(C')\in B(f).$ So every curve in $B(f)$ is periodic. Since $C\in B(f),$ $C$ is periodic. $\square$ \section{Zariski dense orbit conjecture} Let $X$ be a variety over ${\mathbf{k}}$ of dimension $d_X.$ Let $f: X\dashrightarrow X$ be a dominant rational self-map. \subsection{Existence of well-defined orbits}\label{subsectionexweldef} In characteristic $0$, the following result is well known. In positive characteristic, the proof is similar. \begin{pro}\label{proxfnonempty}For every Zariski dense open subset $U$ of $X$, there is $x\in U({\mathbf{k}})$ whose $f$-orbit is well defined and contained in $U.$ \end{pro} \proof[Proof of Proposition \ref{proxfnonempty}] After replacing $X, f$ by $U, f|_U$, we may assume that $X=U.$ So we only need to show that $X_f({\mathbf{k}})\neq \emptyset.$ Let ${\mathbb F}$ be the smallest algebraically closed subfield of ${\mathbf{k}}.$ So ${\mathbb F}=\overline{{\mathbb Q}}$ or $\overline{{\mathbb F}_p}.$ We may replace ${\mathbf{k}}$ by an algebraically closed subfield ${\mathbf{k}}'$ of ${\mathbf{k}}$ with $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}'<\infty$ such that $X,f$ are defined over ${\mathbf{k}}'$. Now assume that $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}<\infty.$ If ${\rm char}\,{\mathbf{k}}=0$, we conclude the proof by \cite[Proposition 3.22]{Xie2019}. Now assume that ${\rm char}\,{\mathbf{k}}=p>0.$ The case ${\mathbf{k}}=\overline{{\mathbb F}_p}$ is essentially proved in \cite[Proposition 5.5]{fa}. One may also see \cite[Proposition 6.2]{Xie2015}. In \cite[Proposition 6.2]{Xie2015}, $f$ is assumed to be birational, but its proof works for an arbitrary dominant rational self-map. Now assume that $\text{tr.d.}_{{\mathbb F}}{\mathbf{k}}\geq 1.$ There is a subfield $L$ of ${\mathbf{k}}$ which is finitely generated over ${\mathbb F}$ such that $X,f$ are defined over $L.$ Let $B$ be a projective and normal variety over ${\mathbb F}$ such that $L={\mathbb F}(B).$ There is a $B$-scheme $\pi: X_B\to B$ and a rational self-map $f_B: X_B\dashrightarrow X_B$ over $B$ such that the geometric generic fiber of $(X_B,f_B)$ is $(X,f).$ Let $b$ be a general point of $B({\mathbb F})$ and denote by $(X_b,f_b)$ the fiber of $(X_B,f_B)$ above $b$. Set $V_b:=X_b\setminus I(f_B)$; then $f_b$ is dominant. Applying the case over $\overline{{\mathbb F}_p}$ to $(V_b, f_b|_{V_b})$, there is $x_b\in (V_b)_{f_b|_{V_b}}({\mathbb F}).$ Cutting by general hyperplanes of $X_B$, there is an irreducible subvariety $S$ of $X_B$ of dimension $\dim S=\dim B$ passing through $x_b$ with $\pi(S)=B.$ Then the generic point of $S$ defines a point $x\in X_f({\mathbf{k}})$, which concludes the proof.
\endproof \subsection{Tautological upper bound} The following lemmas were proved in characteristic zero, but their proofs work in any characteristic. \begin{lemma}\cite[Lemma 2.15]{Jia2021}\label{l_inv_fun_field} Let $K$ be an algebraically closed field extension of ${\mathbf{k}}$. Then ${\mathbf{k}}(X)^{f}= {\mathbf{k}}$ if and only if $K(X_{K})^{f_{K}}= K$. \end{lemma} \begin{lem}\cite[Lemma 2.1]{Xie2019}\label{leminvratfunite}Let $X'$ be an irreducible variety over ${\mathbf{k}}$, $f': X'\dashrightarrow X'$ be a rational self-map and $\pi: X'\dashrightarrow X$ be a generically finite dominant rational map satisfying $f\circ \pi=\pi\circ f'.$ Then we have the following properties. \begin{points} \item If there exists $m\geq 1$ and $H\in {\mathbf{k}}(X)^{f^m}\setminus {\mathbf{k}}$, then there exists $G\in {\mathbf{k}}(X)^f\setminus {\mathbf{k}}$. \item There exists $H'\in {\mathbf{k}}(X')^{f'}\setminus {\mathbf{k}}$ if and only if there exists $H\in {\mathbf{k}}(X)^f\setminus {\mathbf{k}}$. \end{points} \end{lem} They show that the assumption ${\mathbf{k}}(X)^f={\mathbf{k}}$ is stable under base change, under taking positive iterates and under semiconjugacy by generically finite dominant rational maps. As an example of realization problems, the author asked the following question in \cite[Section 1.6]{Xie2019}. \begin{que}\label{querealzdo} What is the minimal transcendence degree $R({\mathbf{k}}, X, f)$ of an algebraically closed field extension $K$ of ${\mathbf{k}}$ such that $(X_K,f_K)$ satisfies the ZDO property? \end{que} Proposition \ref{protautzdo} gives a tautological upper bound for $R({\mathbf{k}}, X, f)$. \proof[Proof of Proposition \ref{protautzdo}] We may assume that ${\mathbf{k}}(X)^f= {\mathbf{k}}$. By Lemma \ref{l_inv_fun_field}, $K(X_{K})^{f_{K}}= K$. An irreducible $f_K$-invariant proper subvariety $V$ of $X_K$ is said to be \emph{maximal} if the only irreducible $f_K$-invariant subvariety strictly containing $V$ is $X_K.$ We note that $I(f_K)=I(f)\otimes_{{\mathbf{k}}}K$ is defined over ${\mathbf{k}}.$ \begin{lem}\label{leminvaroverk}Let $V$ be a maximal irreducible $f_K$-invariant proper subvariety. Then $V$ is defined over ${\mathbf{k}}.$ \end{lem} \proof[Proof of Lemma \ref{leminvaroverk}] Set $r:=\dim V<d_X.$ There is a subfield $L$ of $K$ which is finitely generated over ${\mathbf{k}}$ such that $V$ is defined over $L.$ Let $B$ be a projective and normal variety over ${\mathbf{k}}$ such that $L={\mathbf{k}}(B).$ Then there is a subvariety $V_B$ of $X\times B$ such that $\pi_2(V_B)=B$, where $\pi_2: X\times B\to B$ is the projection to the second coordinate, and $V=V_{\eta}\times_{L}K$, where $\eta$ is the generic point of $B$ and $V_{\eta}$ is the generic fiber of $\pi_2|_{V_B}.$ We have $\dim V_B=\dim B+r.$ Since $V$ is $f_K$-invariant, $V_B\subseteq X\times B$ is invariant under $f_B:=f\times {\rm id}$. Consider the projection to the first coordinate $\pi_1: X\times B\to X$. It is clear that $\pi_1(V_B)$ is irreducible and $f$-invariant. Since $V\subseteq \pi_1(V_B)_K$ and $V$ is maximal, we get either $V_B=\pi_1^{-1}(\pi_1(V_B))$ or $\pi_1(V_B)=X.$ In the former case, $V=\pi_1(V_B)_{K}$ is defined over ${\mathbf{k}}.$ Now we assume that $\pi_1(V_B)=X.$ Then $\dim B= \dim V_B-r\geq d_X-r\geq 1$ and ${\mathbf{k}}\subsetneq \pi_2^*({\mathbf{k}}(B))\subseteq {\mathbf{k}}(V_B)^{f_B|_{V_B}}.$ If $\dim V_B=d_X$, then $\pi_1|_{V_B}$ is generically finite and dominant, and Lemma \ref{leminvratfunite} gives a contradiction. Now assume that $\dim V_B\geq d_X+1.$ So a general fiber of $\pi_1|_{V_B}$ has dimension $s\geq 1$.
We have $\dim B=d_X+s-r>s.$ Let $H_1, \dots, H_s$ be very ample divisors on $B$ which are general in their linear systems. Then the intersection of $\pi_2^{-1}(H_i), i=1,\dots,s$ and a general fiber of $\pi_1|_{V_B}$ is of dimension $0$, and $W':=V_B\cap \pi_2^{-1}(H_1)\cap \dots \cap \pi_2^{-1}(H_s)$ is $f_B$-invariant. Because $\pi_1(W')=X,$ there is an irreducible component $W$ of $W'$ with $\pi_1(W)=X$, and there is $l\geq 1$ such that $W$ is $f_B^l$-invariant. We have $\dim W=d_X$ and $\dim\pi_2(W)=d_X-r>0,$ so ${\mathbf{k}}\subsetneq{\mathbf{k}}(W)^{(f_B|_W)^l},$ which contradicts Lemma \ref{leminvratfunite}. \endproof We only need to treat the case $\text{tr.d.}_{{\mathbf{k}}}K=d_X.$ So we may assume that $K=\overline{{\mathbf{k}}(X)}.$ The diagonal $\Delta$ of $X\times X$ defines a point $o$ in $X_K(K).$ Here we view $X_K$ as the geometric generic fiber of the second projection $\pi_2: X\times X\to X.$ Because $\pi_1(\Delta)=X$, where $\pi_1: X\times X\to X$ is the first projection, $O_{f_K}(o)$ is well defined and for every $n\geq 0$, $f_K^n(o)$ is not contained in any proper subvariety of $X_K$ defined over ${\mathbf{k}}.$ An irreducible component $W$ of $\overline{O_{f_K}(o)}$ of maximal dimension is $f_K$-periodic and is not contained in any proper subvariety of $X_K$ defined over ${\mathbf{k}}.$ By Lemma \ref{leminvaroverk} (applied to a suitable iterate of $f_K$ and a maximal invariant subvariety containing $W$, if $W\neq X_K$), $W=X_K$, which concludes the proof. \endproof In fact, with a slight modification, we prove a stronger result related to the strong form of the Zariski dense orbit conjecture \cite[Conjecture 1.4]{Xie2019}. \begin{pro}\label{protautstzdo}Assume that ${\mathbf{k}}(X)^f={\mathbf{k}}.$ Let $K$ be an algebraically closed field extension of ${\mathbf{k}}$ with $\text{tr.d.}_{{\mathbf{k}}}K\geq \dim X$. Then for every nonempty Zariski open subset $U$ of $X_K$, there is a point $x\in U(K)$ whose $f_K$-orbit is well defined, contained in $U$ and Zariski dense in $X_K.$ \end{pro} \proof[Proof of Proposition \ref{protautstzdo}]Keep the notation in the proof of Proposition \ref{protautzdo}. Let $Z$ be the Zariski closure of $X_K\setminus U$ in $X\times X.$ Pick a general point $b\in X({\mathbf{k}})$ and set $U_b:=(X\times\{b\})\setminus Z$, which is not empty. By Proposition \ref{proxfnonempty}, there is $x_b\in U_b$ whose $f$-orbit is well defined and contained in $U_b.$ Cutting by general hyperplanes of $X\times X$, there is an irreducible subvariety $S$ of $X\times X$ of dimension $\dim S=\dim X$ passing through $(x_b,b)$ such that $\pi_1(S)=X$ and $\pi_2(S)=X.$ The generic point of $S$ defines a point $x\in X_K(K).$ Then the $f_K$-orbit of $x$ is well defined and contained in $U.$ After replacing $o$ by $x$, the argument in the last paragraph of the proof of Proposition \ref{protautzdo} shows that $O_{f_K}(x)$ is Zariski dense in $X_K.$ \endproof \subsection{Height argument} The aim of this section is to prove Theorems \ref{thmendononpre}, \ref{thmzdosuraut} and \ref{thmzdola}. Assume that ${\rm char}\, {\mathbf{k}}=p>0$ and $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1.$ Let $f: X\to X$ be a dominant endomorphism of a projective variety. There is an algebraically closed subfield $K$ of ${\mathbf{k}}$ such that $\text{tr.d.}_K{\mathbf{k}}=1.$ So there is a smooth projective curve $B$ over $K$ such that $f,X$ are defined over $K(B).$ The Weil heights appearing in this section are associated to the function field $K(B).$ \proof[Proof of Theorem \ref{thmendononpre}] By Corollary \ref{cordenseal}, there exists a point $x\in U({\mathbf{k}})$ with $\alpha_f(x)=\lambda_1(f)>1$ and $O_f(x)\subseteq U$. So $x$ has infinite orbit.
\endproof \proof[Proof of Theorem \ref{thmzdola}]The proof of \cite[Proposition 8.6]{Jia2021} shows that for every $f$-periodic proper subvariety $V$ of period $m\geq 1,$ $\lambda_1(f^m|_V)<\lambda_1(f^m).$ By Proposition \ref{proextarhitd}, there exists a point $x\in X({\mathbf{k}})$ with $\alpha_f(x)=\lambda_1(f)>1$. Let $W$ be an irreducible component of $\overline{O_f(x)}$ of maximal dimension. There is $m\geq 1$ with $f^m(W)=W.$ There is $l\geq 0$ such that $f^l(x)\in W$. If $W\neq X$, by Proposition \ref{proupboundarth} and Lemma \ref{lem_subvar}, we get $$\lambda_1(f)^m=\overline{\alpha}_{f}(x)^m=\overline{\alpha}_{f^m}(f^l(x))\leq \lambda_1(f^m|_W)<\lambda_1(f)^m,$$ a contradiction. So $W=X$, which concludes the proof. \endproof The following theorem was proved in \cite[Theorem 1]{Invarianthypersurfaces}, but when $f$ is an automorphism, its proof works in arbitrary characteristic. \begin{thm}\label{thminvhyper}If $f$ is an automorphism and it preserves infinitely many (not necessarily irreducible) hypersurfaces, then ${\mathbf{k}}(X)^f\neq {\mathbf{k}}.$ \end{thm} \begin{pro}\label{proautbounded}Let $X$ be a projective variety over ${\mathbf{k}}$ of dimension $d_X$. Let $L$ be an ample line bundle on $X$. Let $f: X\to X$ be an automorphism such that $((f^n)^*L\cdot L^{d_X-1}),n\geq 0$ is bounded. Then $(X,f)$ satisfies the ZDO property. \end{pro} \proof[Proof of Proposition \ref{proautbounded}] Let ${\rm Aut}(X)$ be the scheme of automorphisms of $X.$ Every connected component of ${\rm Aut}(X)$ is a variety over ${\mathbf{k}}$, but ${\rm Aut}(X)$ may have infinitely many connected components. Because $((f^n)^*L\cdot L^{d_X-1}),n\geq 0$ is bounded, the Zariski closure $G$ of $\{f^n,\, n\geq 0\}$ in ${\rm Aut}(X)$ is a commutative algebraic group. After replacing $f$ by a suitable iterate, we may assume that $G$ is irreducible. We may assume that $f$ is of infinite order. So $\dim G\geq 1.$ For every $x\in X({\mathbf{k}})$, $\overline{O_f(x)}=\overline{G.x}$. Consider the morphism $\Phi: G\times X\to X\times X$ sending $(g,x)$ to $(g(x),x)$. Denote by $\pi_i: X\times X\to X$ the $i$-th projection. Consider the $G$-action on $X\times X$ by $g.(x,y)=(g(x),y).$ Set $F:=f\times {\rm id}: X\times X\to X\times X.$ The image $W$ of $\Phi$ is a constructible subset of $X\times X$. Let $Y$ be the Zariski closure of $W$ in $X\times X$. It is irreducible and $F$-invariant. Let $\Delta$ be the diagonal of $X\times X.$ Then $\Delta\subseteq W\subseteq Y.$ So $\pi_1(Y)=\pi_2(Y)=X.$ Because $\dim G\geq 1$ and the action of $G$ on $X$ is faithful, $Y\neq \Delta.$ So the general fiber of $\pi_2|_{Y}$ has dimension $r\geq 1.$ If $r=\dim X$, then for a general $x\in X({\mathbf{k}})$, $\overline{O_f(x)}=\overline{G.x}=X$, which concludes the proof. Now assume that $r<\dim X.$ We have $\dim Y=\dim X+r.$ The general fiber of $\pi_1|_{Y}$ also has dimension $r\geq 1.$ Let $H_1,\dots,H_r$ be very ample divisors on $X$ which are general in their linear systems. The intersection of $\pi_2^*H_1,\dots, \pi_2^*H_r$ and a general fiber of $\pi_1|_{Y}$ is proper. Set $Z:=Y\cap\pi_2^{-1}(\cap_{i=1}^r H_i).$ We have $\pi_1(Z)=X$, $\dim Z=\dim X$ and $\dim\pi_2(Z)=\dim(H_1\cap \dots \cap H_r)=\dim X-r\geq 1.$ Because $G$ is connected, every irreducible component of $Z$ is $G$-invariant.
In particular, let $T$ be an irreducible component of $Z$ with $\pi_1(T)=X$; then $T$ is $F$-invariant and we have $\dim T=\dim X$ and $\dim\pi_2(T)=\dim X-r\geq 1.$ Because ${\mathbf{k}}\subsetneq {\mathbf{k}}(T)^{F|_T}$ (as $F$ acts trivially on the second factor and $\dim\pi_2(T)\geq 1$) and $\pi_1\circ F|_T=f\circ \pi_1|_T,$ we conclude the proof by Lemma \ref{leminvratfunite}. \endproof \begin{thm}Assume that ${\rm char}\, {\mathbf{k}}=p>0$ and $\text{tr.d.}_{\overline{{\mathbb F}_p}} {\mathbf{k}}\geq 1.$ Let $f: X\to X$ be an automorphism of a projective surface. Then $(X,f)$ satisfies the ZDO property. \end{thm} \proof[Proof of Theorem \ref{thmzdosuraut}] By \cite{Lipman1978}, there is a minimal desingularization $\pi:X'\to X$. Then one may lift $f$ to an automorphism $f'$ of $X'.$ It is easy to see that $(X,f)$ satisfies the ZDO property if and only if $(X',f')$ satisfies the ZDO property. After replacing $(X,f)$ by $(X',f')$, we may assume that $X$ is smooth. By Theorem \ref{thmzdola}, we may assume that $\lambda_1(f)=1.$ Let $L$ be an ample line bundle on $X$. If $((f^n)^*L\cdot L), n\geq 0$ is unbounded, by Gizatullin \cite{Gizatullin1980}, there is a dominant rational map $\pi: X\dashrightarrow C$ to a smooth projective curve $C$ and an automorphism $f_C: C\to C$ such that $f_C\circ \pi=\pi\circ f.$ \footnote{In \cite{Gizatullin1980}, there is an assumption that ${\rm char}\, {\mathbf{k}}\neq 2,3.$ But it is checked in \cite{Cantat2019a} that this assumption in \cite{Gizatullin1980} can be removed.} After replacing $\pi:X\dashrightarrow C$ by a minimal resolution of $\pi$, we may assume that $\pi$ is a morphism. There is $m\geq 1$ such that $f_C^m={\rm id}$; then ${\mathbf{k}}\subsetneq \pi^*({\mathbf{k}}(C)^{f_C})\subseteq {\mathbf{k}}(X)^f$. Now we may assume that $((f^n)^*L\cdot L), n\geq 0$ is bounded. We conclude the proof by Proposition \ref{proautbounded}. \endproof \section{Ergodic theory}\label{sectionergodictheory} Let $X$ be a variety over ${\mathbf{k}}$. Denote by $|X|$ the underlying set of $X$ with the constructible topology, i.e. the topology on $X$ generated by the constructible subsets. This topology is finer than the Zariski topology on $X.$ Moreover $|X|$ is (Hausdorff) compact. Denote by $\eta$ the generic point of $X$. \medskip Using the Zariski topology, one may define a partial ordering on $|X|$ by $x\geq y$ if and only if $y\in \overline{x}.$ The noetherianity of $X$ implies that this partial ordering satisfies the descending chain condition: for every chain in $|X|$, $$x_1\geq x_2\geq \dots$$ there is $N\geq 1$ such that $x_n=x_N$ for every $n\geq N.$ For every $x\in |X|$, the Zariski closure of $x$ in $X$ is $U_x:=\overline{\{x\}}=\{y\in |X||\,\, y\leq x\}$, which is open and closed in $|X|.$ \medskip Let ${\mathcal M}(|X|)$ be the space of Radon measures on $|X|$ endowed with the weak-$\ast$ topology and ${\mathcal M}^1(|X|)$ the space of probability Radon measures on $|X|.$ Note that ${\mathcal M}^1(|X|)$ is compact. \proof[Proof of Theorem \ref{thmRadon}] We claim that for every Radon measure $\mu$ on $|X|$ with $\mu(|X|)>0$, there exists $x\in X$ such that $\mu(x)>0.$ \medskip Granting the claim, let $\mu$ be a Radon measure on $|X|$; we may assume that $\mu(|X|)>0$. Set $S(\mu):=\{x\in |X||\,\, \mu(x)>0\}.$ Then $S(\mu)$ is at most countable and, by the claim, $c:=\sum_{x\in S(\mu)}\mu(x)\in (0,\mu(|X|)].$ If $c=\mu(|X|)$, then we have $\mu=\sum_{x\in S(\mu)}\mu(x)\delta_x$, which concludes the proof. Assume that $c<\mu(|X|)$, and set $$\alpha:=\mu-\sum_{x\in S(\mu)}\mu(x)\delta_x.$$ Then $\alpha$ is a Radon measure with $\alpha(|X|)=\mu(|X|)-c>0$ and $S(\alpha)=\emptyset$.
This contradicts our claim. \medskip Now we only need to prove the claim. \begin{lem}\label{lemfindnexp}For $x\in |X|$, if $\mu(U_x)>0$ and $\mu(x)=0$, then there exists $y\in U_x\setminus \{x\}$ such that $\mu(U_y)>0.$ \end{lem} Now assume that for every $x\in |X|$, $\mu(x)=0.$ Since $|X|=\cup_{x\in X}U_x$ and $|X|$ is compact, there exists a finite subset $F$ of $|X|$ such that $|X|=\cup_{x\in F}U_x.$ Then there exists $x_0\in F$ such that $\mu(U_{x_0})>0.$ Since $\mu(x_0)=0$ by assumption, Lemma \ref{lemfindnexp} gives a sequence of points $x_i, i\geq 0$, with $x_i> x_{i+1}$, $\mu(U_{x_i})>0$ and $\mu(x_i)=0.$ This contradicts the descending chain condition. \endproof \proof[Proof of Lemma \ref{lemfindnexp}] Observe that $U_x\setminus \{x\}$ is open and $\mu(U_x\setminus \{x\})>0.$ Since $\mu$ is Radon, there exists a compact subset $K\subseteq U_x\setminus \{x\}$ such that $\mu(K)>0.$ Since $K\subseteq \cup_{z\in K}U_z$, there exists a finite set $x_1,\dots,x_m$ in $K$ such that $K\subseteq \cup_{i=1}^mU_{x_i}.$ Since $\sum_{i=1}^m\mu(U_{x_i})\geq \mu(K)>0,$ there exists some $1\leq i\leq m$ such that $\mu(U_{x_i})>0.$ Setting $y:=x_i$ concludes the proof. \endproof \proof[Proof of Corollary \ref{corgenericseqence}] Let $x_n\in X, n\geq 0$ be a sequence of points. We first assume that $x_n\in X, n\geq 0$ is generic. Because ${\mathcal M}^1(|X|)$ is compact, we only need to show that for every subsequence with $\lim_{i\to \infty}\delta_{x_{n_i}}=\mu$, we have $\mu=\delta_{\eta}.$ By Theorem \ref{thmRadon}, we may write $$\mu=\sum_{i=0}^ma_i\delta_{y_i}$$ where $m\in {\mathbb Z}_{\geq 0}\cup \{\infty\}$, the $y_i$ are distinct points, $a_i> 0$ and $\sum_{i=0}^ma_i=1.$ If $\mu\neq \delta_{\eta},$ we may assume that $y_0\neq \eta.$ Then $V:=\overline{\{y_0\}}$ is a closed proper subvariety of $X.$ Then we have $$1_{V}(x_{n_i})=\int 1_V\,\delta_{x_{n_i}}\to \int 1_V\,\mu\geq a_0>0$$ as $i\to \infty.$ So $x_{n_i}\in V$ for all but finitely many $i$, which is a contradiction. Now assume that $\lim_{n\to \infty}\delta_{x_{n}}=\delta_{\eta}.$ For every subsequence $x_{n_i}, i\geq 0$ and every closed proper subvariety $V$ of $X,$ $$\lim_{i\to\infty}1_V(x_{n_i})=\lim_{i\to\infty}\int1_V\,\delta_{x_{n_i}}=\int 1_V\,\delta_{\eta}=0.$$ So $x_{n_i}\not\in V$ for all but finitely many $i$. Hence every subsequence $x_{n_i}, i\geq 0$ is Zariski dense in $X$, i.e. the sequence is generic. \endproof \subsection{DML problems} Let $f: X\dashrightarrow X$ be a dominant rational self-map. Set $|X|_f:=|X|\setminus (\cup_{i\geq 1}I(f^i)).$ Because every Zariski closed subset of $X$ is open and closed in the constructible topology, $|X|_f$ is a closed subset of $|X|.$ The restriction of $f$ to $|X|_f$ is continuous. We still denote by $f$ this restriction. \medskip Denote by ${\mathcal P}(X,f)$ the set of $f$-periodic points in $|X|_f.$ Theorem \ref{thmRadon} implies directly the following lemma. \begin{lem}\label{leminvm}If $\mu\in {\mathcal M}^1(|X|_f)$ with $f_*\mu=\mu$, then there are $x_i\in {\mathcal P}(X,f), i\geq 0$ and $a_i\geq 0, i\geq 0$ with $\sum_{i\geq 0}a_i=1$ such that $$\mu=\sum_{i\geq 0}\frac{a_i}{\#O_f(x_i)}(\sum_{y\in O_f(x_i)}\delta_y).$$ \end{lem} Now we prove Theorem \ref{thmdml} and Theorem \ref{thmdmlback}. \proof[Proof of Theorem \ref{thmdml}] Let $x\in X_f({\mathbf{k}})$ be a point with $\overline{O_f(x)}=X.$ Let $V$ be a proper subvariety of $X$. Consider a sequence of intervals $I_n, n\geq 0$ in ${\mathbb Z}_{\geq 0}$ with $\lim\limits_{n\to \infty}\# I_n=+\infty$.
For every $n\geq 0$, set $\mu_n:=(\#I_n)^{-1}(\sum_{i\in I_n}\delta_{f^i(x)})\in {\mathcal M}^1(|X|_f).$ Because $$\frac{\#\{i\in I_n|\,\, f^i(x)\in V\}}{\#I_n}=\int 1_V\,\mu_n,$$ we only need to show that \begin{equation}\label{equlimmundml}\lim_{n\to \infty}\mu_n=\delta_{\eta}.\end{equation} Because ${\mathcal M}^1(|X|)$ is compact, we only need to show that for every convergent subsequence $\mu_{n_i}, i\geq 0$, $\mu_{n_i}\to \delta_{\eta}$ as $i\to \infty.$ Set $\mu:=\lim_{i\to \infty}\mu_{n_i}.$ We have $$f_*\mu=\lim_{i\to \infty}f_*\mu_{n_i}=\lim_{i\to \infty}\mu_{n_i}+\lim_{i\to \infty}(\#I_{n_i})^{-1}(\delta_{f^{\max I_{n_i}+1}(x)}-\delta_{f^{\min I_{n_i}}(x)})$$ $$=\lim_{i\to \infty}\mu_{n_i}=\mu.$$ For every $y\in {\mathcal P}(X,f)\setminus\{\eta\}$, $U_y$ is open and closed in $|X|_f.$ Then $$Y:=|X|_f\setminus (\cup_{y\in {\mathcal P}(X,f)\setminus\{\eta\}}U_y)$$ is an $f$-invariant closed subset of $|X|_f.$ Because $\overline{O_f(x)}=X,$ we have $x\in Y$: if $x\in U_y$ for some $y\in {\mathcal P}(X,f)\setminus\{\eta\}$, then $O_f(x)$ would be contained in the proper Zariski closed subset $\cup_{z\in O_f(y)}U_z$ of $X$. So for every $n\geq 0$, ${\rm Supp}\, \mu_n\subseteq Y.$ Because $Y\cap {\mathcal P}(X,f)=\{\eta\},$ Lemma \ref{leminvm} shows that $\mu=\delta_{\eta}.$ \endproof \proof[Proof of Theorem \ref{thmdmlback}] Let $x_n \in X_f({\mathbf{k}}), n\leq 0$ be a sequence of points such that $\overline{\{x_n, n\leq 0\}}=X$ and $f(x_n)=x_{n+1}$ for all $n\leq -1.$ Consider a sequence of intervals $I_n, n\geq 0$ in ${\mathbb Z}_{\leq 0}$ with $\lim\limits_{n\to \infty}\# I_n=+\infty$. For $n\geq 1,$ define $x_n:=f^n(x_0).$ For every $n\geq 0$, set $\mu_n:=(\#I_n)^{-1}(\sum_{i\in I_n}\delta_{x_i})\in {\mathcal M}^1(|X|_f).$ As in the proof of Theorem \ref{thmdml}, we only need to show \begin{equation}\label{equlimmunbackdml}\lim_{n\to \infty}\mu_n=\delta_{\eta}.\end{equation} Because ${\mathcal M}^1(|X|)$ is compact, we only need to show that for every convergent subsequence $\mu_{n_i}, i\geq 0$, $\mu_{n_i}\to \delta_{\eta}$ as $i\to \infty.$ Set $\mu:=\lim_{i\to \infty}\mu_{n_i}.$ We have $$f_*\mu=\lim_{i\to \infty}f_*\mu_{n_i}=\lim_{i\to \infty}\mu_{n_i}+\lim_{i\to \infty}(\#I_{n_i})^{-1}(\delta_{x_{\max I_{n_i}+1}}-\delta_{x_{\min I_{n_i}}})$$ $$=\lim_{i\to \infty}\mu_{n_i}=\mu.$$ For every $y\in {\mathcal P}(X,f)\setminus \{\eta\},$ $U_y\cap \{x_i, i\leq 0\}$ is finite; otherwise, $\{x_i, i\leq 0\}$ would be contained in $\cup_{z\in O_f(y)}U_z$ and hence not Zariski dense in $X.$ This implies that $\mu(U_y)=\lim_{i\to \infty}\mu_{n_i}(U_y)=0.$ So ${\rm Supp}\, \mu\subseteq Y:=|X|_f\setminus (\cup_{y\in {\mathcal P}(X,f)\setminus\{\eta\}}U_y).$ Because $Y\cap {\mathcal P}(X,f)=\{\eta\},$ Lemma \ref{leminvm} shows that $\mu=\delta_{\eta}.$ \endproof \subsection{Functoriality}\label{subsecfunct} Assume that $f: X\to X$ is a flat and finite endomorphism. Because the image by $f$ of every constructible subset is constructible, $f$ is open w.r.t. the constructible topology.
Moreover, for every $x\in X$, $f(U_x)=U_{f(x)}.$ \medskip Denote by $C(|X|)$ the space of continuous ${\mathbb R}$-valued functions on $|X|$ with the $L^{\infty}$ norm $\|\cdot\|.$ For every $\phi\in C(|X|)$, define $f_*\phi$ to be the function $$x\in |X| \mapsto f_*\phi(x):=\sum_{y\in f^{-1}(x)}m_f(y)\phi(y).$$ The following lemma shows that $f_*$ is a bounded linear operator on $C(|X|).$ \begin{lem}\label{lempushffun}For every $\phi\in C(|X|)$, $f_*\phi$ is continuous and $\|f_*\phi\|\leq d_f\|\phi\|.$ \end{lem} \proof By \cite[Proposition 2.8]{Gignac2014a}, for every $x\in |X|$, there is an open subset $V_x\subseteq U_x$ containing $x$ such that $V_x=f^{-1}(f(V_x))\cap U_x$ and for every $y\in f(V_x)$, $$m_f(x)=\sum_{z\in f^{-1}(y)\cap V_x}m_f(z).$$ Because $\{x\}=f^{-1}(f(x))\cap U_x$, such $V_x$ can be taken arbitrarily small. \medskip Because $\phi\in C(|X|),$ for every $x\in |X|$ and $r>0$, there is an open subset $V_x^r$ containing $x$ such that for every $y\in V_x^r$, $|\phi(y)-\phi(x)|<r.$ Let $w$ be a point in $|X|$. There are open neighborhoods $O_y$ of the points $y\in f^{-1}(w)$ such that for distinct $y_1,y_2\in f^{-1}(w),$ $O_{y_1} \cap O_{y_2}=\emptyset.$ For every $r>0$ and $y\in f^{-1}(w)$, we may take $V_y$ as in the first paragraph such that $V_y\subseteq O_y\cap V^{r/d_f}_y.$ Then $W^r_w:=\cap_{y\in f^{-1}(w)}f(V_y)$ is an open set containing $w.$ For every $x\in W^r_w$ and distinct $y_1,y_2\in f^{-1}(w)$, we have $$(f^{-1}(x)\cap V_{y_1})\cap (f^{-1}(x)\cap V_{y_2})=\emptyset.$$ Since $$d_f=\sum_{z\in f^{-1}(x)}m_f(z)\geq \sum_{y\in f^{-1}(w)}\sum_{z\in f^{-1}(x)\cap V_{y}}m_f(z)=\sum_{y\in f^{-1}(w)}m_f(y)=d_f,$$ we have $$f^{-1}(x)=\sqcup_{y\in f^{-1}(w)} (f^{-1}(x)\cap V_{y}).$$ Then we get $$|f_*\phi(x)-f_*\phi(w)|\leq \sum_{y\in f^{-1}(w)}|m_f(y)\phi(y)-\sum_{z\in V_y\cap f^{-1}(x)}m_f(z)\phi(z)|$$ $$\leq \sum_{y\in f^{-1}(w)}\sum_{z\in V_y\cap f^{-1}(x)}m_f(z)|\phi(y)-\phi(z)|< \sum_{y\in f^{-1}(w)}\sum_{z\in V_y\cap f^{-1}(x)}m_f(z)r/d_f=r.$$ So $f_*\phi$ is continuous. Moreover, for every $x\in |X|$, $$|f_*\phi(x)|=\Bigl|\sum_{y\in f^{-1}(x)}m_f(y)\phi(y)\Bigr|\leq \sum_{y\in f^{-1}(x)}m_f(y)\|\phi\|=d_f\|\phi\|,$$ which concludes the proof.\endproof Now one may define the pullback $f^*:{\mathcal M}(|X|)\to {\mathcal M}(|X|)$ by duality: for every $\mu\in {\mathcal M}(|X|)$ and $\phi\in C(|X|)$, $$\int \phi\, (f^*\mu)=\int (f_*\phi)\,\mu.$$ In particular, $f^*\mu(|X|)=d_f\mu(|X|).$ The pullback $f^*:{\mathcal M}(|X|)\to {\mathcal M}(|X|)$ is continuous w.r.t. the weak-$\ast$ topology on ${\mathcal M}(|X|)$ and one may check that for every $x\in |X|,$ $$f^*\delta_x=\sum_{y\in f^{-1}(x)}m_f(y)\delta_y.$$ \subsection{Backward orbits} Assume that $f: X\to X$ is a flat and finite endomorphism. In particular, $f$ is surjective. The aim of this section is to prove Theorems \ref{thmequpullback}, \ref{thmcountprestr} and \ref{thmseqd}. \medskip Let $TP(X,f)$ be the set of points $x\in |X|$ such that $\cup_{n\geq 0}f^{-n}(x)$ is finite. It is clear that $f^{-1}(TP(X,f))\subseteq TP(X,f).$ For $x\in TP(X,f)$, since $f:\cup_{n\geq 1}f^{-n}(x)\to \cup_{n\geq 0}f^{-n}(x)$ is surjective, it is bijective. So $x$ is periodic. Then $f^{-1}(TP(X,f))=TP(X,f)$ and for every $x\in TP(X,f)$, $f^{-1}(x)$ is a single point. For simplicity, we still denote by $f^{-1}(x)$ the unique point in it.
For every $x\in TP(X,f)$, $f^{-1}(U_x)=U_{f^{-1}(x)}.$ Then $$Y:=|X|\setminus \cup_{x\in TP(X,f)\setminus \{\eta\}}U_x$$ is a closed subset of $|X|$ such that $f^{-1}(Y)=f(Y)=Y.$ It is clear that $Y$ is exactly the set of $x\in |X|$ such that $\overline{\cup_{i\geq 0}f^{-i}(x)}=X.$ \begin{lem}\label{lemtotinm}For $\mu\in {\mathcal M}^1(|X|)$ supported in $Y$, if $d_f^{-1}f^*\mu=\mu$, then $\mu=\delta_{\eta}.$ \end{lem} \proof Assume that $\mu\neq \delta_{\eta}.$ We may assume that $\mu(\eta)=0$; otherwise, since $d_f^{-1}f^*\delta_{\eta}=\delta_{\eta}$, we may replace $\mu$ by $(\mu-\mu(\eta)\delta_{\eta})/(1-\mu(\eta)).$ By Theorem \ref{thmRadon}, one may write $$\mu=\sum_{i= 0}^ma_i\delta_{x_i}$$ where $m\in {\mathbb Z}_{\geq 0}\cup \{\infty\}$, the $x_i$ are distinct points in $Y\setminus \{\eta\}$, $a_i> 0$ and $\sum_{i\geq 0}a_i=1.$ We have $$\mu=d_f^{-1}f^*\mu=\sum_{i=0}^m\sum_{y\in f^{-1}(x_i)}\frac{a_im_f(y)}{d_f} \delta_y.$$ The terms on the right-hand side have pairwise disjoint supports. We may assume that the sequence $(a_i)$ is non-increasing. We claim that for every $i,$ $f^{-1}(x_i)$ is a single point. Otherwise, pick $l$ minimal such that $f^{-1}(x_l)$ is not a single point, and let $s\geq 0$ be maximal such that $a_{l+s}=a_l.$ Think of $\mu$ as the function $\mu: |X|\to [0,1]$ sending $x$ to $\mu(x)$. We have $\#\mu^{-1}(a_l)=s+1.$ On the other hand, $$\#\bigl((d_f^{-1}f^*\mu)^{-1}(a_l)\bigr)=\#\{i=l,\dots,l+s|\,\, f^{-1}(x_i) \text{ is a single point}\}\leq s,$$ which is a contradiction. Then we get $\mu=\sum_{i=0}^ma_i\delta_{f^{-1}(x_i)}.$ Iterating, for every $n\geq 0$ and every $i$, $f^{-n}(x_i)$ is a single point and an atom of $\mu$ of mass $a_i.$ Because for every $r>0$ the set $\{x\in |X||\,\, \mu(x)\geq r\}$ is finite, each $\cup_{n\geq 0}f^{-n}(x_i)$ is finite. Hence all $x_i, i=0,\dots, m$ are contained in $TP(X,f)\cap (Y\setminus\{\eta\})=\emptyset.$ We get a contradiction. \endproof \medskip \proof[Proof of Theorem \ref{thmequpullback}] Let $x$ be a point in $X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X.$ Let $I_n, n\geq 0$ be a sequence of intervals in ${\mathbb Z}_{\geq 0}$ with $\lim_{n\to \infty}\# I_n=+\infty$. Set $$\mu_n:=\frac{1}{\#I_n}(\sum_{i\in I_n}d_f^{-i}(f^i)^*\delta_{x})\in {\mathcal M}^1(|X|).$$ Because ${\mathcal M}^1(|X|)$ is compact, we only need to show that for every convergent subsequence $\mu_{n_i}, i\geq 0$, $\mu_{n_i}\to \delta_{\eta}$ as $i\to \infty.$ Set $\mu:=\lim_{i\to \infty}\mu_{n_i}.$ Then $$f^*\mu=\lim_{i\to \infty}f^*\mu_{n_i}=\lim_{i\to \infty}\frac{1}{\#I_{n_i}}(\sum_{j\in I_{n_i}}d_f^{-j}(f^{j+1})^*\delta_{x})$$ $$=\lim_{i\to \infty}d_f\mu_{n_i}+\lim_{i\to \infty}\frac{d_f}{\#I_{n_i}}(d_f^{-\max I_{n_i}-1}(f^{\max I_{n_i}+1})^*\delta_{x}-d_f^{-\min I_{n_i}}(f^{\min I_{n_i}})^*\delta_{x}).$$ Because $d_f^{-\max I_{n_i}-1}(f^{\max I_{n_i}+1})^*\delta_{x}(|X|)=d_f^{-\min I_{n_i}}(f^{\min I_{n_i}})^*\delta_{x}(|X|)=1,$ we get $$f^*\mu=\lim_{i\to \infty}d_f\mu_{n_i}=d_f\mu.$$ Because $x\in Y$, for every $n\geq 0$, ${\rm Supp}\, \mu_n\subseteq Y.$ So ${\rm Supp}\, \mu\subseteq Y.$ We conclude the proof by Lemma \ref{lemtotinm}. \endproof \proof[Proof of Theorem \ref{thmcountprestr}] Assume that ${\mathbf{k}}(X)/f^*{\mathbf{k}}(X)$ is separable. Let $x\in X({\mathbf{k}})$ be a point with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X.$ Pick $c\in (0,1].$ Because $$S^n_c\leq \#f^{-n}(x)\leq \sum_{y\in f^{-n}(x)}m_{f^n}(y)=d_f^n,$$ we have $$\limsup_{n\to \infty} (S^n_c)^{1/n}\leq \limsup_{n\to \infty} (\#f^{-n}(x))^{1/n}\leq d_f.$$ We now prove the inequality in the other direction.
\medskip By \cite[Theorem 2.1]{Gignac2014a} and \cite[Proposition 2.3]{Gignac2014a}, there is a proper Zariski closed subset $R$ of $X$ such that for every $y\in X({\mathbf{k}})\setminus R,$ $m_f(y)=1.$ Set $$\mu_n:=\frac{1}{n}(\sum_{i=1}^{n}d_f^{-i}(f^i)^*\delta_{x})\in {\mathcal M}^1(|X|).$$ By Theorem \ref{thmequpullback}, \begin{equation}\label{equmutoz}\lim_{n\to \infty}\mu_n=\delta_{\eta}.\end{equation} \medskip Set $D:=\{1,\dots, d_f\}$. Let $\Omega:=\sqcup_{n\geq 0}D^n$ be the set of words in $D$ of finite length. In particular, $D^{0}=\{\emptyset\}.$ By induction, one may define a map $$\theta: \Omega\to \sqcup_{n\geq 0} f^{-n}(x)\subseteq \sqcup_{n\geq 0} X$$ such that \begin{points} \item $\theta(D^n)=f^{-n}(x),$ in particular $\theta(\emptyset)=x;$ \item for every word $w_1\dots w_n\in D^n, n\geq 1,$ $$\theta(w_1\dots w_{n-1})=f(\theta(w_1\dots w_{n}));$$ \item for every $y\in f^{-n-1}(x)$ and $w_1\dots w_n\in D^n$ satisfying $\theta(w_1\dots w_{n})=f(y),$ $$\#\{w\in D|\,\, \theta(w_1\dots w_nw)=y\}=m_f(y).$$ \end{points} By \cite[Proposition 2.5]{Gignac2014a}, for every $y\in f^{-n-1}(x)$, $m_{f^{n+1}}(y)=m_{f^n}(f(y))m_f(y).$ This implies that for every $y\in f^{-n}(x),$ $$\#\{\omega\in D^n|\,\, \theta(\omega)=y\}=m_{f^n}(y).$$ Define a function $A:\Omega\to (0,1]$ by $$A: \omega\in D^n\mapsto m_{f^n}(\theta(\omega))^{-1}.$$ We have \begin{points} \item $\sum_{\omega\in D^n}A(\omega)=\#f^{-n}(x);$ \item for every $w_1\dots w_{n+1}\in D^{n+1}$, $$A(w_1\dots w_{n+1})=m_f(\theta(w_1\dots w_{n+1}))^{-1}A(w_1\dots w_{n}).$$ \end{points} We have $A(\emptyset)=1$ and $$A(w_1\dots w_{n+1})\geq d_f^{-1_R(\theta(w_1\dots w_{n+1}))}A(w_1\dots w_{n}).$$ Then we have $$\prod_{\omega\in D^{n+1}}A(\omega)=\prod_{\omega\in D^{n}}\prod_{w\in D}A(\omega w)\geq \prod_{\omega\in D^{n}}\prod_{w\in D}d_f^{-1_R(\theta(\omega w))}A(\omega)$$ $$=(\prod_{\omega\in D^{n+1}}d_f^{-1_R(\theta(\omega))})(\prod_{\omega\in D^{n}}A(\omega))^{d_f}=d_f^{-\int1_R\,(f^{n+1})^*\delta_x}(\prod_{\omega\in D^{n}}A(\omega))^{d_f}.$$ Set $B_n:=\log_{d_f}\prod_{\omega\in D^{n}}A(\omega).$ We get $$B_{n+1}/d_f^{n+1}\geq -d_f^{-n-1}\int1_R\,(f^{n+1})^*\delta_x+B_{n}/d_f^n.$$ Then we get $$B_n/d_f^{n}\geq \sum_{i=1}^{n}-d_f^{-i}\int1_R\,(f^{i})^*\delta_x=-n\int 1_R\,\mu_n.$$ For every $n\geq 0$, pick $E_n\subseteq f^{-n}(x)$ such that $$\sum_{y\in E_n}m_{f^n}(y)\geq cd_f^n$$ and $\#E_n=S^n_c.$ So $$\#\theta^{-1}(E_n)=\sum_{y\in E_n}m_{f^n}(y)\geq cd_f^n.$$ By the inequality of arithmetic and geometric means, we have $$S^n_c=\sum_{\omega\in \theta^{-1}(E_n)}A(\omega)\geq \#\theta^{-1}(E_n)(\prod_{\omega\in \theta^{-1}(E_n)}A(\omega))^{\frac{1}{\#\theta^{-1}(E_n)}}$$ $$\geq cd_f^n(\prod_{\omega\in \theta^{-1}(E_n)}A(\omega))^{\frac{1}{cd_f^n}}\geq cd_f^n(\prod_{\omega\in D^n}A(\omega))^{\frac{1}{cd_f^n}}$$ $$=cd_f^{n+B_n/cd_f^{n}}\geq cd_f^{n(1-c^{-1}\int 1_R\,\mu_n)}.$$ So $(S^n_c)^{1/n}\geq c^{1/n}d_f^{1-\int 1_R\,\mu_n}.$ By Equality \ref{equmutoz}, $$\liminf_{n\to \infty}(S^n_c)^{1/n}\geq d_f,$$ which concludes the proof. \endproof \medskip \proof[Proof of Theorem \ref{thmseqd}] Set $d_X:=\dim X.$ Assume that ${\mathbf{k}}(X)/f^*{\mathbf{k}}(X)$ is separable and $$\lambda_{d_X}(f)>\max_{1\leq i\leq d_X-1} \lambda_i(f).$$ Let $x$ be a point in $X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X.$ \medskip We first show that for every irreducible subvariety $V$ of $X$ of dimension $d_V:=\dim V<d_X$, \begin{equation}\label{equintvnum}\limsup_{n\to \infty}(\#(f^{-n}(x)\cap V))^{1/n}\leq \lambda_{d_V}(f).
\end{equation} Let $Y$ be a normal projective variety containing $X$ as a Zariski dense open subset. Let $Z$ be the Zariski closure of $V$ in $Y.$ Let ${\mathcal I}_Z$ be the ideal sheaf associated to $Z.$ Let $H$ be a very ample divisor on $Y$ such that ${\mathcal O}(H)\otimes {\mathcal I}_Z$ is generated by global sections. For every $n\geq 0,$ consider the following commutative diagram \[ \xymatrixcolsep{3.5pc} \xymatrix{ \Gamma_n\ar[d]_{\pi_1^n}\ar[dr]^{\pi_2^n}\\ Y \ar@{-->}[r]_{f^n} & Y } \] where $\pi_1^n$ is birational and is an isomorphism above $X.$ There are $H_1,\dots, H_{d_X-d_V}\in |H|$ such that the intersection of $H_1,\dots, H_{d_X-d_V}$ is proper, $V$ is an irreducible component of $\cap_{i=1}^{d_X-d_V}H_i$, and $V$ is the unique irreducible component of $\cap_{i=1}^{d_X-d_V}H_i$ meeting $f^{-n}(x)\cap V.$ Take $H_1',\dots, H_{d_V}'$ general among those elements of $|H|$ containing $x.$ Then the intersection of $H_1',\dots, H_{d_V}'$ and $f^n(V)$ at $x$ is proper. Since $f$ is finite, the intersection of $(f^{n})^{*}H_1',\dots, (f^n)^*H_{d_V}'$ and $V$ is proper at every $y\in f^{-n}(x)\cap V.$ We have $$(\pi_1^n)^{-1}(f^{-n}(x)\cap V)\subseteq (\cap_{i=1}^{d_X-d_V}(\pi_1^n)^*H_i)\cap(\cap_{i=1}^{d_V}(\pi_2^n)^*H_i'),$$ and every point $y\in (\pi_1^n)^{-1}(f^{-n}(x)\cap V)$ is isolated in $(\cap_{i=1}^{d_X-d_V}(\pi_1^n)^*H_i)\cap(\cap_{i=1}^{d_V}(\pi_2^n)^*H_i').$ By \cite[Lemma 3.3]{Jia2021}, $$(H^{d_X-d_V}\cdot(f^{n})^*H^{d_V})=((\pi_1^{n})^*H_1\cdot \dots \cdot (\pi_1^{n})^*H_{d_X-d_V}\cdot (\pi_2^{n})^*H_1'\cdot \dots \cdot (\pi_2^{n})^*H_{d_V}')$$ $$\geq \#(\pi_1^n)^{-1}(f^{-n}(x)\cap V)=\#(f^{-n}(x)\cap V).$$ Then we get $$\limsup_{n\to \infty}(\#(f^{-n}(x)\cap V))^{1/n}\leq \lim_{n\to \infty}(H^{d_X-d_V}\cdot(f^{n})^*H^{d_V})^{1/n}=\lambda_{d_V}(f).$$ \medskip Now we only need to show $$\lim_{n\to \infty}d_f^{-n}(f^n)^*\delta_x=\delta_{\eta}.$$ Because ${\mathcal M}^1(|X|)$ is compact, we only need to show that every convergent subsequence $d_f^{-n_i}(f^{n_i})^*\delta_{x}, i\geq 0,$ satisfies $\lim_{i\to \infty}d_f^{-n_i}(f^{n_i})^*\delta_{x}=\delta_{\eta}.$ Set $\mu:=\lim_{i\to \infty}d_f^{-n_i}(f^{n_i})^*\delta_{x}.$ By Theorem \ref{thmRadon}, we may write $$\mu=\sum_{i=0}^ma_i\delta_{x_i}$$ where $m\in {\mathbb Z}_{\geq 0}\cup \{\infty\}$, the $x_i$ are distinct points, $a_i> 0$ and $\sum_{i\geq 0}a_i=1.$ Assume that $\mu\neq \delta_{\eta}$. Then we may assume that $x_0\neq \eta.$ Set $r:=\dim \overline{\{x_0\}}<d_X.$ Then $$\int 1_{U_{x_0}}\,\mu\geq \int 1_{U_{x_0}}\,a_0\delta_{x_0}=a_0.$$ Pick $c\in (0,a_0).$ Then there is $N\geq 0$ such that for every $i\geq N$, $$\frac{\sum_{y\in f^{-n_i}(x)\cap \overline{\{x_0\}}}m_{f^{n_i}}(y)}{d_f^{n_i}}=\int 1_{U_{x_0}}\,d_f^{-n_i}(f^{n_i})^*\delta_{x}\geq c.$$ So $\sum_{y\in f^{-n_i}(x)\cap \overline{\{x_0\}}}m_{f^{n_i}}(y)\geq cd_f^{n_i},$ and then $\#(f^{-n_i}(x)\cap \overline{\{x_0\}})\geq S_c^{n_i}.$ By Theorem \ref{thmcountprestr} and Inequality \ref{equintvnum}, we get $$d_f>\lambda_{r}\geq \limsup_{i\to \infty}(\#(f^{-n_i}(x)\cap \overline{\{x_0\}}))^{1/n_i}\geq \liminf_{i\to \infty}(S_c^{n_i})^{1/n_i}\geq d_f,$$ which is a contradiction. \endproof \subsection{Berkovich spaces}\label{subsecberko} In this section, ${\mathbf{k}}$ is a complete nonarchimedean valued field with norm $|\cdot|$. See \cite{Berkovich1990} and \cite{Berkovich1993} for the basic theory of Berkovich spaces.
Let $X$ be a variety over ${\mathbf{k}}.$ Recall that, as a topological space, Berkovich's analytification of $X$ is $$X^{{\rm an}}:=\{(x,|\cdot|_x)|\,\, x\in X, |\cdot|_x \text{ is a norm on }\kappa(x) \text{ which extends } |\cdot| \text{ on }{\mathbf{k}}\},$$ endowed with the weakest topology such that \begin{points} \item $\tau: X^{{\rm an}}\to X$ defined by $(x,|\cdot|_x)\mapsto x$ is continuous; \item for every Zariski open $U\subseteq X$ and $\phi\in O(U)$, the map $|\phi|:\tau^{-1}(U)\to [0,+\infty)$ sending $(x,|\cdot|_x)$ to $|\phi(x)|_x$ is continuous. \end{points} Let ${\mathcal M}(X^{{\rm an}})$ be the space of Radon measures on $X^{{\rm an}}$ and let ${\mathcal M}^1(X^{{\rm an}})$ be the space of probability Radon measures on $X^{{\rm an}}.$ \subsection{Trivial norm case} Assume that $|\cdot|$ is the trivial norm. For every $x\in X$, let $|\cdot|_{x,0}$ be the trivial norm on $\kappa(x).$ Then we have an embedding $\sigma: X\to X^{{\rm an}}$ sending $x\in X$ to $(x,|\cdot|_{x,0}).$ We have $\tau\circ\sigma={\rm id}.$ One may check that the constructible topology on $X$ is exactly the topology induced by the topology on $X^{{\rm an}}$ and the embedding $\sigma.$ Because $|X|$ is compact, $\sigma(X)$ is closed in $X^{{\rm an}}$ and $\sigma: |X|\to \sigma(|X|)$ is a homeomorphism. \begin{rem}We note that, if $X$ is endowed with the constructible topology, $\tau: X^{{\rm an}}\to |X|$ is no longer continuous. \end{rem} Using the embedding $\sigma$, Corollary \ref{corgenericseqence} can be translated to a statement on $X^{{\rm an}}.$ \begin{cor}[=Corollary \ref{corgenericseqence}]A sequence $x_n\in X, n\geq 0$ is generic if and only if in ${\mathcal M}(X^{{\rm an}})$ $$\lim_{n\to \infty}\delta_{\sigma(x_n)}=\delta_{\sigma(\eta)}.$$ \end{cor} \medskip Let $f: X\to X$ be a finite flat morphism. It induces a morphism $f^{{\rm an}}: X^{{\rm an}}\to X^{{\rm an}}.$ We have $$f^{{\rm an}}\circ \sigma=\sigma\circ f \text{ and } \tau\circ f^{{\rm an}}=f\circ \tau.$$ According to \cite[Lemma 6.7]{Gignac2014a}, there is a natural pullback ${f^{{\rm an}}}^*: {\mathcal M}(X^{{\rm an}})\to {\mathcal M}(X^{{\rm an}}).$ One may check that the following diagram commutes. \[ \xymatrixcolsep{3.5pc} \xymatrix{ {\mathcal M}(|X|)\ar[d]^{\sigma_*}\ar[r]^{f^*}&{\mathcal M}(|X|)\ar[d]^{\sigma_*}\\ {\mathcal M}(X^{{\rm an}}) \ar[r]_{{f^{{\rm an}}}^*} & {\mathcal M}(X^{{\rm an}}) } \] Then we may translate Theorem \ref{thmseqd} to a statement on $X^{{\rm an}}.$ \begin{thm}[=Theorem \ref{thmseqd}] Let $f: X\to X$ be a flat and finite endomorphism of a quasi-projective variety. Assume that \begin{equation}\label{equladdom}d_f:=\lambda_{\dim X}(f)>\max_{1\leq i\leq \dim X-1} \lambda_i(f).
\end{equation} If the field extension ${\mathbf{k}}(X)/f^*{\mathbf{k}}(X)$ is separable, then for every $x\in X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f^{-i}(x)}=X,$ $$\lim_{n\to \infty}d_f^{-n}(f^n)^*\delta_{\sigma(x)}=\delta_{\sigma(\eta)}.$$ \end{thm} \subsection{Reduction}\label{subsectionreduction} Let ${\mathbf{k}}^{\circ}$ be the valuation ring of ${\mathbf{k}}$ and ${\mathbf{k}}^{\circ\circ}$ the maximal ideal of ${\mathbf{k}}^{\circ}.$ Set $\widetilde{{\mathbf{k}}}:={\mathbf{k}}^{\circ}/{\mathbf{k}}^{\circ\circ}$, the residue field of ${\mathbf{k}}.$ Let ${\mathcal X}$ be a flat projective scheme over ${\mathbf{k}}^{\circ}.$ Denote by $X_0$ its special fiber; it is a (possibly reducible) variety over $\widetilde{{\mathbf{k}}}.$ Let $X$ be the generic fiber of ${\mathcal X}.$ Denote by ${\rm red}: X^{{\rm an}}\to X_0$ the reduction map. It is anti-continuous, i.e. for every Zariski open subset $U$ of $X_0$, ${\rm red}^{-1}(U)$ is closed. In particular, for the constructible topology on $X_0$, ${\rm red}: X^{{\rm an}}\to |X_0|$ is Borel measurable. Let $Y_1,\dots, Y_m$ be the irreducible components of $X_0$ and $\eta_i,i=1,\dots,m,$ the generic points of the $Y_i.$ Denote by $\xi_i$ the unique point in ${\rm red}^{-1}(\eta_i).$ \medskip For every $\mu\in {\mathcal M}(X^{{\rm an}})$, we may define its push-forward ${\rm red}_*\mu\in {\mathcal M}(|X_0|)$ as follows: for every $\phi\in C(|X_0|)$, $$\int \phi\, {\rm red}_*\mu:=\int ({\rm red}^*\phi)\, \mu.$$ Because ${\rm red}^*\phi$ is Borel measurable and bounded, $\int ({\rm red}^*\phi)\, \mu$ is well defined and we have $|\int ({\rm red}^*\phi)\, \mu|\leq \|\phi\|_{\infty}\mu(X^{{\rm an}}).$ We note that, in general, ${\rm red}_*: {\mathcal M}(X^{{\rm an}})\to {\mathcal M}(|X_0|)$ is not continuous. \begin{exe}\label{exerednotmeacon} Let ${\mathcal X}=\P^N_{{\mathbf{k}}^{\circ}}.$ For $n\geq 0$, let $x_n$ be the Gauss point of the polydisc $\{|T_i|\leq 1-1/(n+2), i=1,\dots, N\}\subseteq ({\mathbb A}^N)^{{\rm an}}\subseteq (\P^N)^{{\rm an}}.$ We have $\delta_{x_n}\to \delta_{\xi_1}$ as $n\to \infty$, but for every $n\geq 0$, $${\rm red}_*\delta_{x_n}=\delta_{{\rm red}(x_n)}=\delta_{[1:0:\dots:0]}\neq \delta_{\eta_1}={\rm red}_*\delta_{\xi_1}.$$ \end{exe} \begin{pro}\label{prolimgen}Let $\mu_n\in {\mathcal M}^1(X^{{\rm an}}), n\geq 0$ be a sequence of probability Radon measures on $X^{{\rm an}}.$ Assume that there are $a_i\geq 0, i=1,\dots,m$ with $\sum_{i=1}^ma_i=1$ such that $${\rm red}_*(\mu_n)\to \sum_{i=1}^m a_i\delta_{\eta_i}$$ as $n\to \infty.$ Then we have $$\mu_n\to \sum_{i=1}^m a_i\delta_{\xi_i}$$ as $n\to \infty.$ \end{pro} \proof Because $X^{{\rm an}}$ is compact, ${\mathcal M}^1(X^{{\rm an}})$ is weak-$\ast$ compact.
So we may assume that $$\lim_{n\to \infty}\mu_n=\mu$$ for some $\mu\in {\mathcal M}^1(X^{{\rm an}}).$ We first show that ${\rm Supp}\, \mu\subseteq \{\xi_1,\dots,\xi_m\}.$ Otherwise, $\mu(X^{{\rm an}}\setminus \{\xi_1,\dots,\xi_m\})>0.$ Then there is a compact subset $K$ of $X^{{\rm an}}\setminus \{\xi_1,\dots,\xi_m\}$ such that $\mu(K)>0.$ For every $x\in K$, set $V_x:={\rm red}^{-1}(\overline{{\rm red}(x)}).$ It is an open neighborhood of $x$ in $X^{{\rm an}}\setminus \{\xi_1,\dots,\xi_m\}.$ Because $K$ is compact, finitely many of the $V_x$ cover $K$; hence there is some $x\in K$ such that $\mu(V_x)>0.$ Set $Z:=\overline{{\rm red}(x)}.$ There is a compact subset $S\subseteq V_x$ such that $\mu(S)>0.$ By Urysohn's Lemma, there is a continuous function $\chi: X^{{\rm an}}\to [0,1]$ such that $\chi|_S=1$ and $\chi|_{X^{{\rm an}}\setminus V_x}=0.$ Then we have $$0=\lim_{n\to \infty}\int 1_Z\,{\rm red}_*\mu_n=\lim_{n\to \infty}\int ({\rm red}^*1_Z)\,\mu_n=\lim_{n\to \infty}\int 1_{V_x}\,\mu_n$$ $$\geq \lim_{n\to \infty}\int \chi\,\mu_n=\int \chi\, \mu\geq \mu(S)>0,$$ which is a contradiction. Now we may write $\mu=\sum_{i=1}^m b_i\delta_{\xi_i}$ with $b_i\geq 0$ and $\sum_{i=1}^m b_i=1.$ For each $i=1,\dots,m$, set $U_i:=Y_i\setminus (\cup_{j\neq i}Y_j).$ Then ${\rm red}^{-1}(U_i)$ is a closed subset contained in the open subset ${\rm red}^{-1}(Y_i).$ By Urysohn's Lemma, there is a continuous function $\chi_i: X^{{\rm an}}\to [0,1]$ such that $\chi_i|_{{\rm red}^{-1}(U_i)}=1$ and $\chi_i|_{X^{{\rm an}}\setminus {\rm red}^{-1}(Y_i)}=0.$ Then we have $$b_i=\int\chi_i\,\mu=\lim_{n\to \infty}\int \chi_i\,\mu_n$$ $$\geq \lim_{n\to \infty}\mu_n({\rm red}^{-1}(U_i))=\lim_{n\to \infty}\int 1_{U_i}\,{\rm red}_*\mu_n$$ $$=\int 1_{U_i}\,(\sum_{j=1}^m a_j\delta_{\eta_j})=a_i.$$ Because $\sum_{i=1}^{m}b_i=\sum_{i=1}^m a_i=1$, we get $b_i=a_i$ for every $i=1,\dots,m.$ This concludes the proof. \endproof Now assume that $X_0$ is irreducible and smooth. Denote by $\eta$ the generic point of $X_0$ and $\xi$ the unique point in ${\rm red}^{-1}(\eta).$ Let $F:{\mathcal X}\to {\mathcal X}$ be a finite endomorphism. Denote by $f$ and $f_0$ the restrictions of $F$ to $X$ and $X_0.$ We note that for $i=0,\dots, \dim X$, one has $\lambda_i(f)=\lambda_i(f_0).$ \medskip By Theorem \ref{thmseqd} and Proposition \ref{prolimgen}, we get the following equidistribution result for endomorphisms with good reduction. \begin{cor}\label{corseqdber} Assume that $$d_f:=\lambda_{\dim X}(f)>\max_{1\leq i\leq \dim X-1} \lambda_i(f).$$ If the field extension $\widetilde{{\mathbf{k}}}(X_0)/f_0^*\widetilde{{\mathbf{k}}}(X_0)$ is separable, then for every $x\in X({\mathbf{k}})$ with $\overline{\cup_{i\geq 0}f_0^{-i}({\rm red}(x))}=X_0,$ $$\lim_{n\to \infty}d_f^{-n}(f^n)^*\delta_x=\delta_{\xi}.$$ \end{cor} One may compare Corollary \ref{corseqdber} with \cite[Theorem A]{Gignac2014a} for polarized endomorphisms. See \cite{Guedj2005,Dinh2015} for corresponding results in the complex topology. \newpage
{ "timestamp": "2021-07-09T02:07:58", "yymm": "2107", "arxiv_id": "2107.03559", "language": "en", "url": "https://arxiv.org/abs/2107.03559", "abstract": "In this paper, we study arithmetic dynamics in arbitrary characteristic, in particular in positive characteristic. We generalise some basic facts on arithmetic degree and canonical height in positive characteristic. As applications, we prove the dynamical Mordell-Lang conjecture for automorphisms of projective surfaces of positive entropy, the Zariski dense orbit conjecture for automorphisms of projective surfaces and for endomorphisms of projective varieties with large first dynamical degree. We also study ergodic theory for constructible topology. For example, we prove the equidistribution of backward orbits for finite flat endomorphisms with large topological degree. As applications, we give a simple proof for weak dynamical Mordell-Lang and prove a counting result for backward orbits without multiplicities. This gives some applications for equidistributions on Berkovich spaces.", "subjects": "Dynamical Systems (math.DS); Algebraic Geometry (math.AG)", "title": "Remarks on algebraic dynamics in positive characteristic", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692318706084, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.7079584900345461 }
https://arxiv.org/abs/2205.08633
Classification as Direction Recovery: Improved Guarantees via Scale Invariance
Modern algorithms for binary classification rely on an intermediate regression problem for computational tractability. In this paper, we establish a geometric distinction between classification and regression that allows risk in these two settings to be more precisely related. In particular, we note that classification risk depends only on the direction of the regressor, and we take advantage of this scale invariance to improve existing guarantees for how classification risk is bounded by the risk in the intermediate regression problem. Building on these guarantees, our analysis makes it possible to compare algorithms more accurately against each other and suggests viewing classification as unique from regression rather than a byproduct of it. While regression aims to converge toward the conditional expectation function in location, we propose that classification should instead aim to recover its direction.
\section{Introduction} The correct assignment of binary labels to data is a fundamental problem of machine learning. Practitioners naturally seek to minimize the portion of observations they misclassify, yet in practice, minimizing a loss function over binary labels and outcomes is computationally intractable. Therefore, modern approaches start with regression, which may be viewed as a convex relaxation of the original classification problem. They identify a real-valued function that minimizes a smooth \emph{surrogate risk criterion} pre-selected by the algorithm designer, and they then threshold the resulting predictions to arrive at binary classifications. Several influential results in statistical machine learning relate the performance of a classifier to the performance of this intermediate regression procedure \citep{lugosi_bayes-risk_2004,zhang_statistical_2004,doi:10.1198/016214505000000907}. Attempts to attack this problem theoretically have, with few exceptions, proceeded by relating the classification risk to the surrogate risk, reducing the analysis to the better understood problem of stochastic optimization \citep{pmlr-v35-hazan14a,10.1162/089976604773135104}. Results on surrogate risk convergence, when combined with this analysis, produce theoretical guarantees that provide guidance on the choice of surrogate loss for classification tasks. In this paper, we aim to improve this guidance by taking advantage of a fundamental geometric difference between classification and regression. In particular, we note that classification risk depends only on the direction of a regressor, whereas surrogate risk depends also on its scale. By taking advantage of the scale invariance of classification, we achieve tighter bounds relating classification risk to surrogate risk. The precision gained in these bounds can help practitioners better compare classification procedures and thus design better algorithms. Throughout our analysis, we reframe the problem of classification as “direction recovery.” To illustrate, suppose the conditional expectation function $\bb E [Y|X]$ may be represented by some $\beta^*$ in a multidimensional vector space. Rather than aim to produce predictions close to $\beta^*$ in \emph{location}, as in traditional regression, we instead aim to produce predictions close to $\beta^*$ in \emph{direction}. We show that procedures designed to converge in direction to $\beta^*$ may achieve lower classification error than those that only seek to minimize regression error, and that upper bounds for existing procedures may be sharpened by studying convergence in direction. We hope this perspective shows the potential of treating classification not only as a byproduct of regression, but as a unique problem deserving a tailored approach. \subsection{Outline} The remainder of the paper proceeds as follows. In Section 2, we identify slack in existing bounds of classification risk and proceed to reduce the slack by introducing a notion of angle $\theta$ between a regressor and its optimal value. We characterize how a small angle $\theta$ controls excess classification risk. To achieve small angles in practice, we turn to surrogate loss minimization in Section 3. We show that regularized least squares obtains the optimal classifier when features are uncorrelated, and otherwise, we study how regularization biases the predictor away from the optimal direction. Lastly, we present simulations in Section 4 that exemplify how surrogate loss minimization can succeed or fail to minimize the relevant angle.
\subsection{Related Work} Recently, great attention has been paid to how bounds based on the excess risk alone can be systematically improved based on a margin condition, which says that the posterior probability of a positive label does not concentrate near one-half \citep{mammen_smooth_1999}. While these results are of great conceptual and practical importance, and similarly illustrate how bounds based on the surrogate risk can fail to adequately capture the classification problem, they rely on specific properties of the distribution at hand. In contrast, our results focus primarily on the structure of classification problems in general, without attention to a specific class of distributions. Our results can, however, be strengthened by combining them with a margin condition, and we illustrate this in the paper. Perhaps most similar to our work in spirit is the example of \citet{10.1007/11503415_20}, in which the excess classification risk decays exponentially fast, whereas the excess surrogate risk decays no faster than $O(1/n)$. While they make use of a margin condition and a specific class of distributions, particular attention is paid to the discrepancy between classification and stochastic optimization, and the scale invariance of the optimal set of classifiers is used analytically. Like the authors, we give attention in this paper to the subtle aspects of classification that meaningfully affect classifier performance. In the words of the authors, \begin{quote} ``In classification problems, there are many relevant probabilistic, analytic and geometric parameters to play with when one studies the convergence rates... probably, we have not understood to the end [the] rather subtle interplay between various parameters that influence the behaviour of this type of classifier.'' \end{quote} We believe that a powerful such parameter is the notion of direction we introduce in this paper. Finally, canonical results relating the classification error to the surrogate risk can be found in \cite{doi:10.1198/016214505000000907}, and the results there are foundational to the analysis in this paper. \section{Bounding Excess Classification Risk} In this section, we present a new bound on excess classification risk. To begin, we describe our setting and our motivation for why these bounds are useful in practice. Then we show evidence of slack in existing bounds that can be substantially reduced, and finally, we reduce that slack to achieve tighter bounds. \subsection{Setting} Consider a setting where features $X$ fix a linear conditional expectation function $f^*(X) = \bb E [Y|X] $ of a binary outcome $Y \in \{ -1,1\}$. A machine learner minimizes a surrogate convex loss function $\phi$ over data $(X,Y)$ to construct an estimate $f(X)$ of $f^*(X)$. The sign of $f$ fixes their classification decisions $\hat Y \equiv \sign f$. Although they are minimizing $\phi$-loss, ultimately they care about classification loss: the probability that $f$ takes a different sign than $Y$. Fortunately, it has been shown that successfully minimizing the convex surrogate loss constrains the classification error. Machine learners use these guarantees in practice to select their learning procedure. We discuss one contemporary approach to bounding the classification error in the next section, and we show slack in that approach which can be reduced to achieve tighter bounds. In practice, these tighter bounds can aid investigations of how learning procedures perform.
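To make the setting concrete, the following minimal Python sketch may be helpful (our illustration, not part of the formal development; the distributional choices anticipate the simulations of Section 4, and all variable names are ours). It generates data with a linear conditional expectation, minimizes the square surrogate loss over linear predictors, and thresholds the fitted regressor to classify.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 2
beta_star = np.array([1.0, -3.0])

# Features and a linear conditional probability p* = 1/2 + <beta*, X>;
# with X ~ U(-1/8, 1/8)^2 and beta* = (1, -3), p* stays inside [0, 1].
X = rng.uniform(-0.125, 0.125, size=(n, d))
p_star = 0.5 + X @ beta_star
Y = np.where(rng.uniform(size=n) < p_star, 1.0, -1.0)

# Square surrogate loss: fit f(x) = <beta, x> by least squares.
# Here f*(X) = E[Y|X] = 2 p* - 1 = 2 <beta*, X>, so the population
# minimizer points in the direction of beta*.
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Threshold the regressor to classify, and estimate the 0-1 risk.
risk_01 = np.mean(np.sign(X @ beta_hat) != Y)
print(beta_hat, risk_01)
\end{verbatim}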
\subsection{Finding slack in existing bounds for excess classification risk} Contemporary bounds generally relate the excess classification risk to some increasing function $\psi$ of the excess $\phi$-risk, taking the form \begin{equation} \bb{P}(\sign f \ne Y) - \bb{P}(\sign f^* \ne Y) \le \psi(\bb{E}\phi(f,Y) - \bb{E}\phi(f^*,Y)) \label{eq:usual-bound} \end{equation} (cf. Thms. 1 and 3 of \citet{doi:10.1198/016214505000000907}). Since the slack is revealed by following the proof of this bound, we sketch the main idea. The argument on which this bound is based relies first on fixing a value of $f^*$ and then computing the associated excess classification risk of an arbitrary $f$, \begin{align} \bb{P}(\sign f \ne Y) &- \bb{P}(\sign f^* \ne Y) = \bb{E}\left[|f^*| \mathbbm{1}\{\sign f \ne \sign f^*\} \right] \label{eq:cls-risk-step} \end{align} \citet{doi:10.1198/016214505000000907} show how to bound the step function (\ref{eq:cls-risk-step}) by a smooth convex function based on the $\phi$-risk. We illustrate their bound in Figure \ref{fig:slack} for a given $f^*$ and shade the associated slack in green (when $\sign f = \sign f^*$) and in yellow (when $\sign f \ne \sign f^*$). The way that we will ultimately reduce this slack is by noting that the LHS of (\ref{eq:cls-risk-step}) depends on $f$ only through whether $f^*$ and $f$ share the same sign. Therefore, we have the opportunity to rewrite the bound (\ref{eq:usual-bound}) in terms of a predictor $g$ that i) satisfies $\sign g = \sign f$ so that the LHS of (\ref{eq:usual-bound}) is unchanged but ii) corresponds to a tighter convex bound on the RHS of (\ref{eq:usual-bound}). To do so, we will choose $g$ among the rescalings of $f$, i.e., among all vectors pointing in the direction of $f$. \begin{figure}[b] \centering \includegraphics[width=.75 \columnwidth]{slack_edited.pdf} \caption{\label{fig:slack}Illustration of the upper bound for a given $f^*$. The green and yellow regions correspond to slack that we aim to reduce.} \end{figure} \subsection{Tightening the bound: a first example} In order to illustrate the relevance of the direction of $f$ to its associated classification loss, we present a visual example. In particular, we consider the problem of learning a linear classification rule in the presence of features that satisfy a notion of symmetry called rotational invariance. \begin{definition} The law of $X$ is \emph{rotation invariant} if it satisfies $\bb{P}(X \in S) = \bb{P}(RX \in S)$ for any measurable set $S$ and any rotation $R$ of its coordinates. \end{definition} Note that rotational invariance is a stronger condition than uncorrelated features. This property allows us to exactly characterize the probability that a linear predictor $\tilde \beta$ yields a different classification from the optimal $\beta^*$, based on the angle $\theta(\tilde \beta, \beta^*)$ between them. The probability grows in $\theta$ as follows. \begin{figure} \centering \includegraphics[width=.5 \columnwidth]{diagram3.pdf} \caption{\label{fig:theory_diag_3} Top-down view of the projection $Z$ on the plane spanned by $(\tilde \beta, \beta^*)$. Dashed lines correspond to the classification boundaries associated with each $\beta \in \{ \tilde \beta, \beta^*\}$. Points to the right are positively classified by $\beta$ while those to the left are negatively classified.
Thus, the two sectors between the dashed lines (with combined measure $\frac{\theta}{\pi}$) designate observations that are classified differently by $\tilde \beta$ and $\beta^*$.} \end{figure} \begin{lemma} If the law of $X$ is rotation invariant, we have \begin{equation} \bb{P}(\mathrm{sign}\bks{\beta^*}{X}\ne \mathrm{sign}\bks{\tilde\beta}{X}) = \frac{\theta(\tilde \beta, \beta^*)}{\pi} \label{rot-inv-eq} \end{equation} \end{lemma} \begin{proof} We have \begin{align*} \bb{P}(\mathrm{sign}\bks{\beta^*}{X}\ne \mathrm{sign}\bks{\tilde\beta}{X}) = \bb{P}\left(\mathrm{sign}\bk{\tilde\beta}{\frac{X}{\norms{X}_2}}\ne \sign \bk{\beta^*}{\frac{X}{\norms{X}_2}}\right) \end{align*} Now let $Z$ denote the projection of $X/\norms{X}_2$ onto the plane spanned by $(\tilde \beta, \beta^*)$. By rotation invariance, the direction of $Z$ is uniformly distributed on the circle, and the signs associated with $\tilde \beta$ and with $\beta^*$ differ precisely when this direction belongs to a union of two sectors of the circle of total measure $\theta/\pi$. This is illustrated in Figure \ref{fig:theory_diag_3}, which visualizes the projection $Z$ from above. \end{proof} The key insight which leads to the angle $\theta(\beta^*,\beta)$ on the RHS of (\ref{rot-inv-eq}) is that the classification rules $x \mapsto \sign\bk{\beta}{x}$ and $x \mapsto \sign\bk{\beta^*}{x}$ are invariant to rescaling the linear predictors $\beta$ and $\beta^*$. The angle, which corresponds to the distance between $\beta/\norm{\beta}$ and $\beta^*/\norm{\beta^*}$ along the surface of the unit sphere, emerges as a natural, \emph{scale invariant} measure of the distance between the two predictors. In the following section, we will see that this intuition extends far beyond the simple case of rotationally invariant features. \subsection{The general angle criterion} In the general case, the situation is more nuanced. However, the key insight that the classifier $\mathbbm{1}\{f \ge 0\}$ is invariant to rescaling $f$ remains. To begin, we need to know how to actually express the angle $\theta$ between two vectors $u$ and $v$. In the following lemma, we use a geometric argument to write $\sin \theta$ as the minimum distance between a rescaled $u$ and a normalized $v$. \begin{lemma} \label{lemma-sin-theta} The angle $\theta(u,v)$ between vectors $u, v \in \bb{R}^d$ with $\bk{u}{v} > 0$ satisfies \[\sin \theta(u,v) = \inf_{t \ge 0} \norm{tu - \frac{v}{\norm{v}_2}},\] and $\theta(-u,v) = \pi - \theta(u,v)$ for all $u,v\in \bb{R}^d$. \end{lemma} \begin{proof} As seen in Figure \ref{fig:theory_diag_2}, the distance $\norm{tu - \frac{v}{\norm{v}_2}}$ is minimized when $t=t^*$, where $t^*u$ is the orthogonal projection of $v/\norms{v}$ onto the line spanned by $u$. The points $(v/\norms{v}, 0, t^*u)$ form a right triangle with hypotenuse of length $1$, in which the side $(t^*u, v/\norms{v})$ is opposite the angle $\theta(u,v)$. \end{proof} \begin{figure} \centering \includegraphics[width=.6 \columnwidth]{diagram2.pdf} \caption{\label{fig:theory_diag_2} $\sin \theta$ is the shortest distance between $span(u)$ and $\frac{v}{\norm v}$.} \end{figure} Now that we no longer require the law of $X$ to be rotation invariant, we must deal directly with $L^2(\bb P)$. This requires us to define the relevant angle of a predictor with respect to the norm $\norm{f}_{2,\bb P} = \bb{E}[f^2]^{\frac 1 2}$ in the probability space.
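Before doing so, we pause to note that both lemmas above are easy to check numerically. The following Python sketch (ours, purely illustrative; the vectors $u$ and $v$ are arbitrary choices with $\bk{u}{v}>0$) verifies the variational formula of Lemma \ref{lemma-sin-theta} and Monte Carlo estimates the disagreement probability of the first lemma for rotation-invariant Gaussian features.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
u = np.array([1.0, -3.0, 0.5])   # plays the role of beta-tilde
v = np.array([2.0, -1.0, 1.0])   # plays the role of beta*; here <u, v> > 0

# Angle via the usual inner-product formula.
theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The infimum over t >= 0 of ||t u - v/||v|| || is attained at the
# orthogonal projection, t* = <u, v/||v||> / ||u||^2, and equals sin(theta).
w = v / np.linalg.norm(v)
t_star = (u @ w) / (u @ u)
assert np.isclose(np.linalg.norm(t_star * u - w), np.sin(theta))

# The standard Gaussian law is rotation invariant, so the probability that
# u and v classify a draw differently should be close to theta / pi.
X = rng.standard_normal((1_000_000, 3))
disagree = np.mean(np.sign(X @ u) != np.sign(X @ v))
print(disagree, theta / np.pi)   # the two agree up to Monte Carlo error
\end{verbatim}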
Motivated by Lemma \ref{lemma-sin-theta}, we generalize our notion of the relevant angle $\theta_{2,\bb{P}}(\beta,\beta^*)$ to $L^2(\bb P)$ by the relation \begin{equation} \sin\theta_{2,\bb P}(u,v) = \inf_{t > 0}\norm{tu - \frac{v}{\norm{v}_{2,\bb P}}}_{2,\bb P}, \label{eq:general-sin} \end{equation} when $ \bb{E}[uv]>0$ and $\theta_{2,\bb P}(u,v) = \pi - \theta_{2,\bb P}(-u,v)$ when $ \bb{E}[uv]<0$. Equipped with the suitable notion of angle, we can establish a main result. Here we bound the excess classification risk of an arbitrary predictor $f$ according to the direction of $f$. Note that while the $L^2(\bb P)$ distance and the square loss are essential ingredients in its proof, the result applies to \emph{any} predictor, however it is obtained. \begin{theorem}\label{thm:sin-bound} Let $f^* = \bb{E}[Y|X]$. Then, the excess classification risk is bounded as \[\bb{P}(Y \ne \sign f) - \bb{P}(Y \ne \sign f^*) \le \norm{f^*}_{2,\bb{P}} \sin \theta_{2,\bb P}(f,f^*).\] \end{theorem} We prove this result using the canonical bound given by Theorem 1 in \citet{doi:10.1198/016214505000000907}, which relates the classification risk in excess of $f^*$ to the excess surrogate loss. Stated for the square loss, the result reduces to the following. \begin{theorem}[{\citet[Thm. 1]{doi:10.1198/016214505000000907}}]\label{thm:bartlett-square} \begin{equation*} \bb{P}(Y \ne \sign f) - \bb{P}(Y \ne \sign f^*) \le \norm{f-f^*}_{2,\bb P}\label{eq:bartlett-square} \end{equation*} \end{theorem} We use the canonical theorem to prove our new result. \begin{proof}[Proof of Theorem \ref{thm:sin-bound}] Let $C_f$ denote the convex cone of functions $g$ satisfying $\sign g = \sign f$ almost surely. For any $g \in C_f$ we can apply Theorem \ref{thm:bartlett-square} to obtain \begin{align*} \bb{P}(Y \ne \sign f) - \bb{P}(Y \ne \sign f^*) &= \bb{P}(Y \ne \sign g) - \bb{P}(Y \ne \sign f^*) \\ &\le \norm{g-f^*}_{2,\bb P}. \end{align*} Optimizing over the bounds obtained in this manner yields \begin{equation} \bb{P}(Y \ne \sign f) - \bb{P}(Y \ne \sign f^*) \le \inf_{g \in C_f}\left\{\norm{g-f^*}_{2,\bb P}\right\} \end{equation} While we cannot tractably minimize over all $g$ in the cone $C_f$, we can minimize over the $g$ that rescale $f$, noting that these rescalings satisfy $\Set[tf]{t > 0} \subset C_f$. This gives us the bound \begin{align*} \bb{P}(Y \ne \sign f) - \bb{P}(Y \ne \sign f^*) &\le \inf_{t > 0 }\left\{\norm{tf-f^*}_{2,\bb P}\right\} \\ &= \norm{f^*}_{2, \bb P} \inf_{t > 0}\left\{\norm{\frac{tf}{\norm{f^*}_{2, \bb P}}-\frac{f^*}{\norm{f^*}_{2, \bb P}}}_{2,\bb P}\right\}. \intertext{Making the change of variable $t'= t/\norm{f^*}_{2, \bb P}$ gives} &= \norm{f^*}_{2, \bb P} \inf_{t' > 0}\left\{\norm{t'f-\frac{f^*}{\norm{f^*}_{2, \bb P}}}_{2,\bb P}\right\} \\ &= \norm{f^*}_{2, \bb P} \sin \theta_{2,\bb P}(f,f^*), \end{align*} by our definition of $\theta_{2, \bb P}$. This is what we aimed to show. \end{proof} In fact, \citet{doi:10.1198/016214505000000907} proved stronger versions of Theorem \ref{thm:bartlett-square} under the \emph{low-noise condition} (also sometimes called a \emph{margin condition}), and the same machinery can be applied to yield stronger versions of our Theorem \ref{thm:sin-bound}. To state these, we first introduce the low-noise condition. It characterizes the extent to which the best prediction of the outcome is close to the classification boundary.
\begin{definition} Given $\a \in [0,1]$, the pair $(X,Y)$ is said to satisfy the $\a$-noise condition if for some constants $c, C > 0$, $f^*=\bb{E}[Y|X]$ satisfies \begin{equation*} \bb{P}\left(\left|f^* \right| < \ep \right) \le C\ep^{\a/(1-\a)}, \end{equation*} for all $0 < \ep < c$. \end{definition} Under the above condition, \citet{doi:10.1198/016214505000000907} proved the following improvement on their bound. \begin{theorem}[{Special case of \citet[Theorem 3]{doi:10.1198/016214505000000907}}] Suppose $(X,Y)$ satisfies the $\a$-noise condition with constants $c, C > 0$. Then \[\bb{P}(\sign f \ne Y) - \bb{P}(\sign f^* \ne Y) \le \frac{\norm{f-f^*}_{2,\bb P}^{1 + \a}}{4c'}\] for some $c'$ which depends only on $c$ and $C$.\label{thm:bartlett-margin} \end{theorem} Repeating the proof of Theorem \ref{thm:sin-bound}, and replacing our use of Theorem \ref{thm:bartlett-square} with the improved Theorem \ref{thm:bartlett-margin}, gives the following improved result. \begin{theorem}\label{thm:sin-margin} Suppose $(X,Y)$ satisfies the $\a$-noise condition with constants $c, C > 0$. Then, for the same $c'$ appearing in Theorem \ref{thm:bartlett-margin}, it holds that \begin{align} \bb{P}(\sign f \ne Y) - \bb{P}(\sign f^* \ne Y) &\le \inf_{g \in C_f} \left\{ \frac{\norm{g-f^*}_{2,\bb P}^{1 + \a}}{4c'} \right\} \\ &\le \frac{\norm{f^*}_{2, \bb P}^{1+\a}}{4c'} \left( \sin \theta_{2,\bb P}(f,f^*)\right)^{1+\a} \end{align} \end{theorem} \begin{remark} An interesting aspect of the bounds in Theorem \ref{thm:sin-margin} and Theorem \ref{thm:sin-bound} is that they are \emph{never weaker} than the corresponding bounds of \citet{doi:10.1198/016214505000000907}, which relate classification risk to the excess square loss and on which they are based. As such, since $\norm{f^*}_{2,\bb P}$ is a problem-invariant constant, procedures that are tailored to the minimization of $\theta_{2,\bb{P}}(f,f^*)$ will yield stronger bounds than those based only on control of the excess mean squared error. \end{remark} \begin{remark} For the rotationally invariant case, we show in the appendix that the excess classification error is given precisely by \[\frac{1}{\pi}\left(\int_{0}^\theta \sin t\, dt \right)\bb{E}|\bk{X}{\beta^*}|.\] We explain how this implies a convergence rate of $\frac{1}{n}$, whereas the standard bound of \citet{doi:10.1198/016214505000000907} only guarantees a rate of $\frac{1}{\sqrt n}$. Therefore, we see that rotation invariant linear classification produces fast rates, even without the imposition of a margin condition. This demonstrates how an angle-based analysis of learning procedures can improve convergence guarantees over traditional bounds. \end{remark} \section{Relationship between surrogate loss minimization and $\theta$} \subsection{Defining classification calibration} Thus far, we have related excess classification risk to the angle $\theta$ between a predictor $\tilde \beta$ and the optimal $\beta^*$, showing that the excess risk is guaranteed to be small when $\theta$ is small. Now we turn our attention to how $\theta$ is actually determined. In particular, we consider procedures that minimize a surrogate loss function $\phi$ over a set $S\subset \bb{R}^d$ of linear predictors and present guarantees on their maximum associated values of $\theta$. Procedures that are guaranteed to achieve $\sin(\theta) = 0$ in the population are of particular interest, as they converge to the optimal classifier.
We call these ``classification calibrated.'' In the case of minimizing surrogate risk over $S$, we define this special trait as follows. \begin{definition} A procedure that minimizes the $\phi$-risk $\bb E [\phi (Y, \langle \beta, X \rangle )]$ over $S$ is \emph{classification calibrated} if its constrained minimizer $\tilde \beta$ is also a global minimizer of the classification risk $\bb P [ \sign \langle \beta, X \rangle \neq Y]$. \end{definition} Note that this definition does not require the procedure to identify the global minimizer of the $\phi$-risk. In fact, in our simulations we will provide examples in which the global minimizer of the $\phi$-risk is not contained in $S$, but the constrained minimizer in $S$ yields the global minimum of classification risk nonetheless. In this section, we study classification calibration in settings with well-specified models that are regularized so that $S$ is a ball of positive radius $r$, i.e., \[S = \Set[v \in \bb R^d]{\norm v \le r}.\] In the following lemma, we start by noting that this choice of $S$ contains a global minimizer of the classification risk. The question therefore becomes, when does minimizing the surrogate loss $\phi$ within $S$ identify this global minimizer? \begin{lemma} $S$ contains a global minimizer of the classification risk. \end{lemma} \begin{proof} Recall that since $\mathrm{sign}\bk{\beta}{X} = \mathrm{sign}\bk{c \beta}{X}$ for any $c >0$, the classifications associated with any given $\beta$ are invariant to rescaling $\beta$. This is illustrated in Figure \ref{fig:theory_diag_1}. Since $S$ contains a neighborhood of the origin, it is guaranteed to contain a rescaling $\tilde \beta = c \beta^* \in S$ of some global minimizer $\beta^*$ of the classification risk. \end{proof} \begin{figure} \centering \includegraphics[width=.6 \columnwidth]{diagram1.pdf} \caption{\label{fig:theory_diag_1} Example of a classification calibrated predictor. The optimal regressor $\beta^*$ corresponds to the same classifications as the minimizer $\tilde \beta$ in the orange ball depicting $S$. Instances to the right of the dashed line are classified as positive and those to the left of the dashed line are classified as negative. } \end{figure} \subsection{Studying $\theta$ under Square Loss Minimization} In this section, we will investigate the case where $\phi$ is the square loss $\phi(Y, f(X)) = (Y-f(X))^2$. We consider de-meaned features $X$ that may or may not be correlated and then characterize the population square loss in terms of their covariance matrix $\Sigma = \bb{E}[XX^\top]$. Then, we will bound the angle between the constrained population minimizer $\tilde \beta$ and the unconstrained population minimizer $\beta^*$ in terms of $\Sigma$, showing that when features are uncorrelated, the angle is 0. Finally, we will investigate whether the excess misclassification risk can be controlled in terms of $\beta^*$, $\tilde \beta$, and $\Sigma$ alone. \subsubsection{Characterizing the population loss} We will show that the population loss associated with a linear predictor $\beta$ can be expressed in terms of $\Sigma$. We begin with the following lemma as an intermediate step to computing the mean square loss. \begin{lemma}\label{lem:quad-form-sigma} If $\bb{E}[XX^\top]=\Sigma$ then $\bb{E}\bk{X}{v}^2=v^\top\Sigma v$.
\end{lemma} \textit{Proof in appendix.} This result makes it possible for us to express the mean square loss associated with a linear predictor $\beta$ in terms of $\Sigma$ and $\beta^*$, the orthogonal projection of $Y$ onto the span of the features. We will then see that choosing $\beta$ to minimize the mean square loss corresponds to minimizing the expression $ (\beta^* - \beta)^\top \Sigma (\beta^* - \beta)$. \begin{lemma}\label{lem:pop-loss} Let $\beta^*$ be the orthogonal projection of $Y$ onto the span of the features, $\Set[\bks{\gamma}{X}]{\gamma \in \bb{R}^d}$ in $L^2(\bb P)$. Then \begin{align*} \bb{E}(Y-\bks{\beta}{X})^2 &= \bb{E}(Y-\bks{\beta^*}{X})^2 + \bb{E}(\bks{\beta^*}{X}-\bks{\beta}{X})^2 \\ &= C + (\beta^* - \beta)^\top \Sigma (\beta^* - \beta), \end{align*} where $C$ is a constant independent of $\beta$. \end{lemma} \textit{Proof in appendix.} These lemmas have particularly useful implications in cases where the features are uncorrelated and standardized, that is, when $X$ is isotropic according to the following definition. \begin{definition} A random vector $X \in \bb{R}^d$ is called \emph{isotropic} if $\bb{E}[XX^\top] = I$. \end{definition} When this holds, the following corollary proves that minimizing the mean square loss in the population corresponds to choosing $\beta$ to minimize its distance from $\beta^*$. \begin{corollary} The $\beta$ that minimizes $ \bb{E}(Y-\bks{\beta}{X})^2$ also minimizes $\norm{\beta^* - \beta}_2^2$. \label{cor-beta-min-distance} \end{corollary} \textit{Proof in appendix.} \subsubsection{Bounding the angle} We now shift our attention to bounding the angle between a linear predictor $\beta$ and the best linear predictor $\beta^*$. We see that in the isotropic case, minimizing square loss recovers a $\tilde \beta$ whose angle with $\beta^*$ is 0. That is, minimizing square loss gives the optimal classifier. \begin{proposition} \label{prop:sq-loss-isotropic} If $X$ is isotropic, then the minimizer $\tilde \beta \in S$ of the square loss $\bb{E}(Y-\bks{\beta}{X})^2$ satisfies \[ \sin \theta (\tilde \beta, \beta^*) = 0 \] \end{proposition} \noindent This follows easily from the following, more general result given a feature covariance matrix $\Sigma$. \begin{theorem} \label{thm:sq-loss-general} In general, the minimizer $\tilde \beta \in S$ of the square loss satisfies \[\sin \theta (\tilde \beta, \beta^*) \le \inf_{a \ge 0} \norm{a\Sigma - I}_{\mathrm{op}}\] \end{theorem} \textit{Proof in appendix.} \subsection{Studying $\theta$ under General Loss Minimization} When we consider minimizing a general surrogate loss function $\phi$ in a setting where the law of $X$ is rotation invariant, we can guarantee convergence to the optimal classifier. \begin{proposition} Suppose that the following conditions hold. \label{prop:general-loss} \begin{enumerate}[label=(\roman*)] \item The law of $X$ is rotation invariant. \label{it:rotinv} \item $\bb{P}(Y=1|X) = \eta(\bks{X}{\beta^*})$ for some $\beta^* \in \bb{R}^d$. \label{it:suff} \item The loss function $\phi(Y,f)$ is convex in $f$. \label{it:conv} \end{enumerate} Then the constrained minimizer $\tilde \beta$ of $\beta \mapsto \bb{E}\phi(Y,\bks{\beta}{X})$ subject to $\norm{\beta} \le r$ is unique and satisfies $\tilde \beta = c\beta^*$ for some $c \in \bb{R}$. \end{proposition} \textit{Proof in appendix.} In the simulations in the following section, we present evidence that this result cannot be completely relaxed without further assumptions.
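To illustrate Proposition \ref{prop:sq-loss-isotropic} and Theorem \ref{thm:sq-loss-general} at the population level, the following minimal Python sketch may be helpful (ours, purely illustrative; the covariance matrices, the value of the multiplier, and all names are our choices). By Lemma \ref{lem:pop-loss}, minimizing the square loss over $S$ amounts to minimizing $(\beta^* - \beta)^\top \Sigma (\beta^* - \beta)$ subject to $\norm{\beta}\le r$; when the constraint binds, the KKT conditions give the ridge-type solution $\tilde\beta = (\Sigma+\lambda I)^{-1}\Sigma\beta^*$ for the multiplier $\lambda \ge 0$ at which $\norm{\tilde\beta}=r$. The sketch fixes a $\lambda$ and compares the resulting direction with $\beta^*$ for isotropic versus correlated features.

\begin{verbatim}
import numpy as np

def angle(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

beta_star = np.array([1.0, -3.0])
lam = 5.0   # multiplier of a binding norm constraint (illustrative value)

for Sigma in (np.eye(2), np.array([[1.0, 0.8], [0.8, 1.0]])):
    # Constrained population square-loss minimizer, in ridge form.
    beta_tilde = np.linalg.solve(Sigma + lam * np.eye(2), Sigma @ beta_star)
    print(np.round(beta_tilde, 3), round(angle(beta_tilde, beta_star), 3))

# With Sigma = I, beta_tilde = beta* / (1 + lam): the constraint merely
# rescales beta*, the angle is 0, and classification is unaffected.
# With correlated features, beta_tilde is rotated away from beta*, so the
# angle (and with it the upper bound of the theorem above) becomes positive.
\end{verbatim}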
\section{Application} \begin{figure} \centering \includegraphics[width=\columnwidth]{new_simfig_1.pdf} \caption{\label{fig:sqloss_plot} Minimizing square loss when $p^*$ is linear in $X$. Gray lines correspond to correctly specified models without binding norm constraints, and they are therefore associated with excess risks that converge to zero in panels (a) and (b). In panel (a) we see that the misspecified models trained on uncorrelated features (solid red and solid orange lines) have excess 0-1 risk converging to zero despite the fact that panel (b) shows they do not recover a globally minimized $\phi$-risk. Meanwhile, when features are correlated (dashed lines), the misspecified models yield zero excess risk according to neither 0-1 loss nor square loss.} \end{figure} In this section, we construct simulations that illustrate classification calibration in practice. We also present evidence motivating future investigations into when surrogate loss minimization can recover optimal classifiers. In these simulations we construct features $X$ that are either normally or uniformly distributed, and that may or may not be correlated. According to a fixed true $\beta^*$, these ultimately determine true underlying probabilities $p^*$ of a binary outcome $Y$. We then construct $Y$ according to Bernoulli draws with probability $p^*$. This data-generating process defines primitives $(\beta^*, p^*)$ that are unobserved by a machine learner, as well as data $(X,Y)$ that are observed. The machine learner constructs models of the data-generating process to predict $Y$ from $X$. We suppose the learner passes training instances of $(X,Y)$ through a procedure that minimizes a convex loss function $\phi$, either square or logistic loss, and thus produces estimated regressors $f_\phi(\langle \tilde \beta, X \rangle)$ on a test set. To measure the success of their procedure, we compute the excess $\phi$-risk, $\bb{E}[\phi (Yf_\phi (X))] - \bb{E}[\phi (Yf^*(X))] $, as well as the excess 0-1 risk, $\bb{P}[ \sign (\langle \tilde \beta, X \rangle) \neq Y] - \bb{P}[ \sign (\langle \beta^*, X \rangle) \neq Y] $. We consider cases where $p^*$ is linear or nonlinear in $X$. This allows us to explore two kinds of misspecification: one where the models are only misspecified through a norm restriction, and the other where the estimating model itself is structurally different from the data-generating process. \subsection{$p^*$ is linear in features} We first consider the linear case $p^* = \frac{1}{2} + \langle \beta^*, X \rangle$, for which minimizing square loss produces a well-specified model. Recall that in Proposition \ref{prop:sq-loss-isotropic}, we showed that when $X$ is isotropic, models minimizing square loss are classification calibrated. That is, they recover the optimal classifier even when their norm constraint $r$ prevents them from recovering the optimal regressor. Meanwhile, when features are not isotropic (Theorem \ref{thm:sq-loss-general}), more restrictive choices of $r$ prevent the convergence of the classifications to the optimal. We demonstrate these results in Figure \ref{fig:sqloss_plot}, where we plot excess 0-1 classification risk and excess $\phi$-risk. Features are distributed as $U(-\frac{1}{8}, \frac{1}{8})$ and they fix $p^* = \frac{1}{2} + \langle \beta^*, X \rangle $ with $\beta^* = (1,-3)$. Consider first when there is no binding norm restriction, $r = \infty$.
The model is correctly specified regardless of whether features are correlated, and as the gray lines show, both 0-1 and $\phi$ excess risks converge to 0 as the training size grows. Meanwhile, when we impose a misspecified norm constraint on the models (red and orange lines), the $\phi$ excess risks no longer converge to 0. Yet 0-1 excess risk still converges to 0 so long as the features are uncorrelated (solid red and orange lines), marking the cases where models are classification calibrated. We separately minimized logistic loss to model the same data-generating process. While the functional form is misspecified in this case, we were surprised to see that an analogue of Proposition \ref{prop:sq-loss-isotropic} still held. As seen in Figure \ref{fig:logloss_plot}, these models were again classification calibrated in the isotropic case. This suggests future exploration of a wider class of surrogate loss functions that yield optimal classifiers associated with linear predictors. \begin{figure}[b] \centering \includegraphics[width=.55 \columnwidth]{new_simfig_4.pdf} \caption{\label{fig:logloss_plot} Minimizing logistic loss when $p^*$ is linear in $X$. In this case, misspecified models with a tight norm restriction (solid red and solid orange lines) are seen to be classification calibrated in the isotropic case, with excess 0-1 risk converging to zero. Meanwhile, tightening the norm restrictions leads to convergence toward non-zero values. This suggests that our results on square loss may be extended to other convex surrogate loss functions.} \end{figure} \subsection{$p^*$ is nonlinear in features} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{new_simfig_3.pdf} \caption{\label{fig:rotinv_plot} Minimizing square and logistic loss when $p^*$ is a nonlinear transformation of $\langle \beta^*, X \rangle$. Adjusting the data-generating process is seen to affect convergence of excess 0-1 risk to zero, showing there is a limit to how much we can extend the result in Proposition \ref{prop:general-loss}. When features are uniformly distributed but $\beta^*$ is non-symmetric (dashed red line), models are not classification-calibrated regardless of whether $\phi$ is square or logistic loss. } \end{figure} We next explored weakening the assumption that $p^*$ is linear in $X$ to see whether we could extend the result in Proposition \ref{prop:general-loss} to cases where the law of $X$ is not rotation invariant. We learned that this result cannot be generalized without further assumptions. In this new set of simulations, we adjusted the data-generating process by passing $\langle \beta^*, X \rangle$ through the logistic CDF to construct the true underlying probabilities $p^*$. The results are depicted in Figure \ref{fig:rotinv_plot}. We first considered specifications of $X$ satisfying rotation invariance: we constructed two independent features, each distributed as $N(0,1)$. For both symmetric and non-symmetric choices of $\beta^*$, the resulting models were classification-calibrated regardless of whether square or logistic loss was minimized (black lines in each panel), supporting the result in Proposition \ref{prop:general-loss}. However, when we instead constructed features that are independent but distributed as $U(-1,1)$, so that the law of $X$ is not rotation invariant, we saw that excess 0-1 risk does not necessarily converge to 0. This is depicted by the dashed red lines corresponding to $\beta^* = (1,-3)$.
We therefore conclude that, when $p^*$ is not linear in $X$ and the law of $X$ is not rotation invariant, additional assumptions are required to guarantee convergence to the optimal classifier. \section{Conclusion} In this paper, we used a geometric distinction between classification and regression problems to more precisely characterize how loss in one setting relates to loss in the other. Using the scale invariance of classification, we were able to improve the bounds used by theorists and practitioners to compare classification procedures against one another. We hope that this work will help inform decisions about which classification algorithms to deploy in practice, and that it may open the door for effective new algorithms that aim to directly predict the direction of the conditional expectation function rather than its location. \newpage \bibliographystyle{plainnat}
{ "timestamp": "2022-05-19T02:02:56", "yymm": "2205", "arxiv_id": "2205.08633", "language": "en", "url": "https://arxiv.org/abs/2205.08633", "abstract": "Modern algorithms for binary classification rely on an intermediate regression problem for computational tractability. In this paper, we establish a geometric distinction between classification and regression that allows risk in these two settings to be more precisely related. In particular, we note that classification risk depends only on the direction of the regressor, and we take advantage of this scale invariance to improve existing guarantees for how classification risk is bounded by the risk in the intermediate regression problem. Building on these guarantees, our analysis makes it possible to compare algorithms more accurately against each other and suggests viewing classification as unique from regression rather than a byproduct of it. While regression aims to converge toward the conditional expectation function in location, we propose that classification should instead aim to recover its direction.", "subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)", "title": "Classification as Direction Recovery: Improved Guarantees via Scale Invariance", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692305124305, "lm_q2_score": 0.7248702702332476, "lm_q1q2_score": 0.7079584890500434 }
https://arxiv.org/abs/1702.04529
Semi-Baxter and strong-Baxter: two relatives of the Baxter sequence
In this paper, we enumerate two families of pattern-avoiding permutations: those avoiding the vincular pattern $2-41-3$, which we call semi-Baxter permutations, and those avoiding the vincular patterns $2-41-3$, $3-14-2$ and $3-41-2$, which we call strong-Baxter permutations. We call the associated enumeration sequences semi-Baxter numbers and strong-Baxter numbers. We prove that the semi-Baxter numbers additionally enumerate plane permutations (avoiding $2-14-3$). The problem of counting these permutations was open and has given rise to several conjectures, which we also prove in this paper. For each family (that of semi-Baxter -- or equivalently, plane -- and that of strong-Baxter permutations), we describe a generating tree, which translates into a functional equation for the generating function. For semi-Baxter permutations, it is solved using (a variant of) the kernel method: this gives an expression for the generating function while also proving its D-finiteness. From the obtained generating function, we derive closed formulas for the semi-Baxter numbers, a recurrence that they satisfy, as well as their asymptotic behavior. For strong-Baxter permutations, we show that their generating function is (a slight modification of) that of a family of walks in the quarter plane, which is known to be non D-finite.
\section{Introduction} The purpose of this article is the study of two enumeration sequences, which we call the \emph{semi-Baxter sequence} and the \emph{strong-Baxter sequence}. They enumerate, among other objects, families of pattern-avoiding permutations closely related to the well-known family of Baxter permutations, and to the slightly less popular one of twisted Baxter permutations, which are both counted by the sequence of Baxter numbers~\cite[sequence A001181]{OEIS}. Recall that a permutation $\pi = \pi_1 \pi_2 \dots \pi_n$ contains the vincular\footnote{ Throughout the article, we adopt the convention of denoting by the symbol $\underbracket[.5pt][1pt]{~~~}$ the elements that are required to be adjacent in an occurrence of a vincular pattern, rather than using the historical notation with dashes wherever elements are not required to be consecutive. For instance, our pattern $2\underbracket[.5pt][1pt]{41}3$ is sometimes written $2-41-3$ in the literature.} pattern $2\underbracket[.5pt][1pt]{41}3$ if there exists a subsequence $\pi_i \pi_j \pi_{j+1} \pi_k$ of $\pi$ (with $i<j<k-1$), called an \emph{occurrence} of the pattern, that satisfies $\pi_{j+1} < \pi_i < \pi_k < \pi_j$. Containment and occurrences of the patterns $3\underbracket[.5pt][1pt]{14}2$, $3\underbracket[.5pt][1pt]{41}2$, $2\underbracket[.5pt][1pt]{14}3$ and $\underbracket[.5pt][1pt]{14}23$ are defined similarly. A permutation not containing a pattern avoids it. Baxter permutations~\cite[among many others]{BM} are those that avoid both $2\underbracket[.5pt][1pt]{41}3$ and $3\underbracket[.5pt][1pt]{14}2$, while twisted Baxter permutations~\cite[and references therein]{Twis} are the ones avoiding $2\underbracket[.5pt][1pt]{41}3$ and $3\underbracket[.5pt][1pt]{41}2$. We denote by $Av(P)$ the family of permutations avoiding all patterns in $P$. The two sequences that will be our main focus are first the one enumerating permutations avoiding $2\underbracket[.5pt][1pt]{41}3$, called \emph{semi-Baxter permutations}, and second the one enumerating permutations avoiding all three patterns $2\underbracket[.5pt][1pt]{41}3$, $3\underbracket[.5pt][1pt]{14}2$ and $3\underbracket[.5pt][1pt]{41}2$, called \emph{strong-Baxter permutations}. Remark that a permutation avoiding the (classical) pattern $231$ necessarily avoids $2\underbracket[.5pt][1pt]{41}3$, $3\underbracket[.5pt][1pt]{14}2$ and $3\underbracket[.5pt][1pt]{41}2$, and recall that $Av(231)$ is enumerated by the sequence of Catalan numbers. Therefore, the definitions in terms of pattern-avoidance and the enumeration results given above can be summarized as shown in Figure~\ref{fig:inclusions}. \begin{figure}[ht] \begin{center} \begin{tabular}{ccccccccc} Catalan & $\leq$ & strong-Baxter & $\leq$ & Baxter & $\leq$ & semi-Baxter & $\leq$ & factorial \\ & & & & & & & & \\ & & $Av(2\underbracket[.5pt][1pt]{41}3,$ & \rotatebox[origin=c]{30}{$\subseteq$} & $Av(2\underbracket[.5pt][1pt]{41}3,3\underbracket[.5pt][1pt]{14}2)$ & \rotatebox[origin=c]{-30}{$\subseteq$} & & & all \\ $Av(231)$ & $\subseteq$ & ~~~~~$3\underbracket[.5pt][1pt]{14}2,$ & & & & $Av(2\underbracket[.5pt][1pt]{41}3)$ & $\subseteq$ & permutations \\ & & ~~~~~~$3\underbracket[.5pt][1pt]{41}2)$ & \rotatebox[origin=c]{-30}{$\subseteq$} & $Av(2\underbracket[.5pt][1pt]{41}3,3\underbracket[.5pt][1pt]{41}2)$ & \rotatebox[origin=c]{30}{$\subseteq$} & & & \\ \end{tabular} \end{center} \caption{Sequences from Catalan to factorial numbers, with nested families of pattern-avoiding permutations that they enumerate. 
} \label{fig:inclusions} \end{figure} \bigskip The focus of this paper is the study of the two sequences of semi-Baxter and strong-Baxter numbers. \smallskip We deal with the semi-Baxter sequence (enumerating semi-Baxter permutations) in Section~\ref{sec:semi}. It has been proved in~\cite{Kasraoui} (as a special case of a general statement) that this sequence also enumerates \emph{plane permutations}, defined by the avoidance of $2\underbracket[.5pt][1pt]{14}3$. This sequence is referenced as A117106 in~\cite{OEIS}. We first give a more specific proof that plane permutations and semi-Baxter permutations are equinumerous, by providing a common generating tree (or succession rule) with two labels for these two families. Basics and references about generating trees can be found in Section~\ref{sec:review_gentree}. We solve completely the problem of enumerating semi-Baxter permutations (or equivalently, plane permutations), pushing further the techniques that were used to enumerate Baxter permutations in~\cite{BM}. Namely, we start from the functional equation associated with our succession rule for semi-Baxter permutations, and we solve it using variants of the kernel method \cite{BM,iteratedKM}. This results in an expression for the generating function for semi-Baxter permutations, showing that this generating function is D-finite\footnote{Recall that $F(x)$ is D-finite when there exist $k \geq 0$ and polynomials $Q(x), Q_0(x), \dots, Q_k(x)$ of $\mathbb{Q}[x]$ with $Q_k(x) \neq 0$ such that $Q_0(x) F(x) + Q_1(x) F'(x) + Q_2(x) F''(x) + \dots + Q_k(x) F^{(k)}(x) = Q(x)$.}. From it, we obtain several formulas for the semi-Baxter numbers: first, a complicated closed formula; second, a simple recursive formula; and third, three simple closed formulas that were conjectured by D. Bevan~\cite{BevanPrivate}. The problem of enumerating plane permutations was posed by M.~Bousquet-M\'elou and S.~Butler in~\cite{BB}. Some conjectures related to this enumeration problem were later proposed, in particular by D.~Bevan~\cite{bevan,BevanPrivate} and M.~Martinez and C.~Savage~\cite{savage}. Not only do we solve the problem of enumerating plane permutations (or equivalently, semi-Baxter permutations) completely, but we also prove these conjectures. In addition, from one of these (former) conjectures (relating the semi-Baxter sequence to sequence \text{A005258} of~\cite{OEIS}, whose terms are sometimes called Ap\'ery numbers), we easily deduce the asymptotic behavior of semi-Baxter numbers. We mention that it has been conjectured in~\cite{BaxterShattuck} by A.~Baxter and M.~Shattuck that permutations avoiding $\underbracket[.5pt][1pt]{14}23$ are also enumerated by the same sequence, but we have not been able to prove it. \smallskip In Section~\ref{sec:strong}, we focus on the study of strong-Baxter permutations and of the strong-Baxter sequence. Again, we provide a generating tree for strong-Baxter permutations, and translate the corresponding succession rule into a functional equation for their generating function. However, we do not solve the equation using the kernel method. Instead, from the functional equation, we prove that the generating function for strong-Baxter permutations is a very close relative of the one for a family of walks in the quarter plane studied in~\cite{bostan}. As a consequence, the generating function for strong-Baxter permutations is not D-finite. 
Families of permutations with non D-finite generating functions are quite rare in the literature on pattern-avoiding permutations (although mostly studied for classical patterns, instead of vincular ones -- see the analysis in~\cite{nonDF1,nonDF2}): this makes the example of strong-Baxter permutations particularly interesting. \bigskip The rest of the article is organized as follows. Section~\ref{sec:review_gentree} recalls easy facts about the Catalan sequence, and includes basics about generating trees and succession rules. Sections~\ref{sec:semi}, \ref{sec:Bax} and \ref{sec:strong} then focus on the sequences of semi-Baxter numbers, Baxter numbers, and strong-Baxter numbers, respectively, and on the associated families of pattern-avoiding permutations. \section{The Catalan family $Av(231)$ and a Catalan succession rule}\label{sec:review_gentree} Generating trees and succession rules will be important for our work. We give a brief general presentation below. Details can be found for instance in~\cite{GFGT,Eco,BM,West_gt}. We also review the classical succession rule for Catalan numbers. This rule encodes generating trees for many Catalan families (see \cite{Eco}), but we will present only a generating tree for the family of permutations avoiding $231$, since we will build on it later in this work. \medskip Consider any combinatorial class $\mathcal{C}$, that is to say any set of discrete objects equipped with a notion of size, such that there is a finite number of objects of size $n$ for any integer $n$. Assume also that $\mathcal{C}$ contains exactly one object of size $1$. A \emph{generating tree} for $\mathcal{C}$ is an infinite rooted tree, whose vertices are the objects of $\mathcal{C}$, each appearing exactly once in the tree, and such that objects of size $n$ are at level $n$ in the tree, that is to say at distance $n-1$ from the root (thus, the root is at level $1$, its children are at level $2$, and so on). The children of some object $c \in \mathcal{C}$ are obtained by adding an \emph{atom} (\emph{i.e.}~a piece of object that makes its size increase by $1$) to $c$. Of course, since every object should appear only once in the tree, not all additions are possible. We ensure the unique appearance property by considering only additions that follow some restricted rules. We will call the \emph{growth} of $\mathcal{C}$ the process of adding atoms following these prescribed rules. Our focus in this section is on $Av(231)$, the set of permutations avoiding the pattern $231$: a permutation $\pi$ avoids $231$ when it does not contain any subsequence $\pi_i \pi_j \pi_k$ (with $i<j<k$) such that $\pi_k < \pi_i < \pi_j$. A growth for $Av(132)$ was originally described in~\cite{West_gt}, by insertion of a maximal element, which can be translated by symmetry into a growth for $Av(231)$ by insertion of a maximal or a leftmost element. In our paper, we are however interested in a different (and not symmetric) growth for $Av(231)$: by insertion of a rightmost element, in the same spirit as what is done in~\cite{BFR} for subclasses of $Av(231)$. Indeed, throughout the paper our permutations will grow by performing ``local expansions'' on the right of any permutation $\pi$. More precisely, when inserting $a \in \{1, \dots, n+1\}$ on the right of any $\pi$ of size $n$, we obtain the permutation $\pi' = \pi'_1 \dots \pi'_{n}\pi'_{n+1}$ where $\pi'_{n+1}=a$, $\pi'_i = \pi_i$ if $\pi_i < a$ and $\pi'_i = \pi_i +1$ if $\pi_i \geq a$. We use the notation $\pi \cdot a$ to denote $\pi'$.
For instance, $1\,4\,2\,3 \cdot 3 = 1\,5\,2\,4\,3$. This is easily understood from the diagrams representing permutations (which consist of points in the Cartesian plane at coordinates $(i,\pi_i)$): a local expansion corresponds to adding a new point on the right of the diagram, which lies vertically between two existing points (or below the lowest, or above the highest), and then normalizing the picture obtained -- see Figure~\ref{fig:av231}. These places where new elements may be inserted are called \emph{sites} of the permutation. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.75]{231av_growth} \end{center} \caption{ The growth of a permutation avoiding $231$: active sites are marked with $\Diamond$ and non-active sites by $\times$.} \label{fig:av231} \end{figure} Clearly, performing this growth without restriction on the values $a$ would produce a generating tree for the family of all permutations. To ensure that only permutations avoiding $231$ appear in the tree (it is then obvious that all such permutations appear exactly once), insertions are not allowed in all sites, but only in those where the insertion does not create an occurrence of $231$ -- see Figure~\ref{fig:av231}. Such sites are called \emph{active sites}. In the considered example, the active sites of $\pi \in Av(231)$ are easily characterized as those above the largest element $\pi_i$ such that there exists $j$ with $i<j$ and $\pi_i<\pi_j$. The first few levels of the generating tree for $Av(231)$ are shown in Figure~\ref{fig:gen_tree} (left). \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{Gen_tree_perm} \qquad \includegraphics[scale=0.4]{Gen_tree} \end{center} \caption{Two ways of looking at the generating tree for $Av(231)$: with objects (left) and with labels from the succession rule $\Omega_{Cat}$ (right).} \label{fig:gen_tree} \end{figure} \medskip Of importance for enumeration purposes is the general shape of a generating tree, not the specific objects labeling its nodes. From now on, when we write generating tree, we mean this shape of the tree, without the objects labeling the nodes. A \emph{succession rule} is a compact way of representing such a generating tree for a combinatorial class $\mathcal{C}$ without referring to its objects, but identifying them with \emph{labels}. Therefore, a succession rule consists of one starting label corresponding to the label of the root, and of productions encoding the way labels spread in the generating tree. From their beginnings~\cite{West_gt}, generating trees and succession rules have been used to derive enumerative results. As we explain in~\cite{paper1}, the sequence enumerating the class $\mathcal{C}$ can be recovered from the succession rule itself, without reference to the specifics of the objects in $\mathcal{C}$: indeed, the $n$th term of the sequence is the total number of labels (counted with repetition) that are produced from the root by $n-1$ applications of the productions, or equivalently, the number of nodes at level $n$ in the generating tree. From the growth for $Av(231)$ described above, examining carefully how the number of active sites evolves when performing insertions (as done in~\cite{West_gt} or~\cite{BFR} for example), we obtain the following classical succession rule associated with Catalan numbers (corresponding to the tree shown in Figure~\ref{fig:gen_tree}, right): $$\Omega_{Cat}=\left\{\begin{array}{ll} (1)\\ (k) \rightsquigarrow (1), (2), \dots , (k), (k+1).
\end{array}\right.$$ The intended meaning of the label $(k)$ is the number of active sites of a permutation, minus $1$. In Figure~\ref{fig:av231} and similar figures later on, the labels of the permutations are indicated below them. \section{Semi-Baxter numbers}\label{sec:semi} \subsection{Definition, context, and summary of our results} \label{sec:intro_semiBax} \begin{definition} \label{dfn:semiBaxPerm} A \emph{semi-Baxter permutation} is a permutation that avoids the pattern $2\underbracket[.5pt][1pt]{41}3$. \end{definition} \begin{definition} \label{dfn:semiBaxNumber} The sequence of \emph{semi-Baxter numbers}, $(SB_n)$, is defined by taking $SB_n$ to be the number of semi-Baxter permutations of size $n$. \end{definition} The name ``semi-Baxter'' has been chosen because $2\underbracket[.5pt][1pt]{41}3$ is one of the two patterns (namely, $2\underbracket[.5pt][1pt]{41}3$ and $3\underbracket[.5pt][1pt]{14}2$) whose avoidance defines the family of so-called Baxter permutations~\cite{CGHK78,Gire}, enumerated by the Baxter numbers~\cite[sequence \text{A001181}]{OEIS}. (Remark that up to symmetry, we could have defined semi-Baxter permutations by the avoidance of $3\underbracket[.5pt][1pt]{14}2$, obtaining the same sequence.) Note that $2\underbracket[.5pt][1pt]{41}3$ is also one of the two patterns (namely, $2\underbracket[.5pt][1pt]{41}3$ and $3\underbracket[.5pt][1pt]{41}2$) whose avoidance defines the family of so-called twisted Baxter permutations~\cite{Rea05,West}, also enumerated by the Baxter numbers. The first few terms of the sequence of semi-Baxter numbers are \[1, 2, 6, 23, 104, 530, 2958, 17734, 112657, 750726, 5207910, 37387881, 276467208, \ldots\] The family of semi-Baxter permutations has already appeared in the literature on a few occasions. Indeed, it is an easy exercise to see that the avoidance of $2\underbracket[.5pt][1pt]{41}3$ is equivalent to that of the barred pattern $25\bar{3}14$, which has been studied by L. Pudwell in~\cite{Pudwell}. (The definition of barred patterns, which is not essential to our work, can be found in~\cite{Pudwell}.) In that work, by means of enumeration schemes, L. Pudwell suggests that the enumerative sequences of semi-Baxter permutations and \emph{plane permutations} (see Definition~\ref{dfn:planePerm} below) coincide. This conjecture was later proved as a special case of a general statement in~\cite[Corollary 1.9(b)]{Kasraoui}. In Section~\ref{sec:GenTreeSemiBax} we give an alternative and self-contained proof that plane permutations and semi-Baxter permutations are indeed equinumerous. The sequence enumerating plane permutations has already been registered on the OEIS~\cite{OEIS} as sequence \text{A117106}; it is therefore our sequence $(SB_n)$. The enumeration of plane permutations has received a fair amount of attention in the literature. It first arose as an open problem in~\cite{BB}. Indeed, this family of permutations was identified as a superset of forest-like permutations, which are thoroughly investigated in~\cite{BB}. A forest-like permutation is any permutation whose Hasse graph is a forest -- the Hasse graph of a permutation $\pi$ of size $n$ is the oriented graph on the vertex set $\{1,\ldots,n\}$, which includes an edge from $i$ to $j$ (for $i<j$) if and only if $\pi(i)<\pi(j)$ and there is no $k$ such that $i<k<j$ and $\pi(i)<\pi(k)<\pi(j)$, and with all edges pointing upward. For instance, the Hasse graphs of permutations $2413$ and $2143$ are depicted in Figure~\ref{fig:Hasse}.
The figure also shows that the Hasse graph of $2413$ is plane (\emph{i.e.}, it can be drawn in the plane without any crossing of edges), while that of $2143$ is not. \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.5] \begin{scope} \node at (-.5,-.5) {$1$}; \node at (-.5,2.5) {$2$}; \node at (2.5,2.5) {$4$}; \node at (2.5,-.5) {$3$}; \filldraw[black] (0,0) circle (2pt); \filldraw[black] (2,0) circle (2pt); \filldraw[black] (2,2) circle (2pt); \filldraw[black] (0,2) circle (2pt); \draw[->] (0,0) -- (0,2); \draw[->] (0,0) -- (2,2); \draw[->] (2,0) -- (2,2); \end{scope}\end{tikzpicture}\hspace{2.5cm} \begin{tikzpicture}[scale=0.5] \begin{scope} \node at (-.5,-.5) {$1$}; \node at (-.5,2.5) {$3$}; \node at (2.5,2.5) {$4$}; \node at (2.5,-.5) {$2$}; \filldraw[black] (0,0) circle (2pt); \filldraw[black] (2,0) circle (2pt); \filldraw[black] (2,2) circle (2pt); \filldraw[black] (0,2) circle (2pt); \draw[->] (0,0) -- (0,2); \draw[->] (0,0) -- (2,2); \draw[->] (2,0) -- (0,2); \draw[->] (2,0) -- (2,2); \end{scope} \end{tikzpicture} \caption{\label{fig:Hasse}The Hasse graphs of permutations $2413$ (left) and $2143$ (right).} \end{figure} The authors of~\cite{BB} named plane permutations those permutations whose Hasse graph is plane, characterized them as those avoiding $2\underbracket[.5pt][1pt]{14}3$, and called for their enumeration. This enumerative problem was studied from a rather experimental perspective, as one case among many, through enumeration schemes by L.~Pudwell in~\cite{Pudwell}. Then, D.~Bevan computed the first 37 terms of their enumerative sequence~\cite{bevan}, by iterating a functional equation provided in \cite[Theorem 13.1]{bevan}. Although~\cite{bevan} gives a functional equation for the generating function of semi-Baxter numbers, there is no proved formula (closed or recursive) for $SB_n$. There is however a conjectured explicit formula, which, in addition, gives information about their asymptotic behavior (see Proposition~\ref{prop:conj} and Corollary~\ref{cor:asym_SB_n}). Another recursive formula for $SB_n$ has been conjectured by M.~Martinez and C.~Savage in~\cite{savage}, in relation to \emph{inversion sequences} avoiding some patterns (definition and precise statement are provided in Subsection~\ref{sec:inv}). Finally, closed formulas for $SB_n$ have been conjectured by D.~Bevan in~\cite{BevanPrivate}. \medskip Our results about semi-Baxter numbers are the following. Most importantly, we solve the problem of enumerating semi-Baxter permutations, as well as plane permutations. We provide a common succession rule that governs their growth, presented in Subsection~\ref{sec:GenTreeSemiBax}. Next, in Subsection~\ref{sec:inv}, we show that inversion sequences avoiding the patterns $210$ and $100$ grow along the same rule, thereby proving a first formula for $SB_n$ and settling a conjecture of~\cite{savage}. Then, by means of standard tools we translate the succession rule into a functional equation whose solution is the generating function of semi-Baxter numbers. Subsection~\ref{sec:resultsGF} gives a closed expression for the generating function of semi-Baxter numbers, together with closed, recursive and asymptotic formulas for $SB_n$.
The results of this subsection are proved in Subsection~\ref{sec:proofs_GF} following the same method as in~\cite{MBM_Xin}: the functional equation is solved using the obstinate kernel method, a first closed formula for $SB_n$ is obtained by Lagrange inversion, and the recursive formula follows from it by applying the method of \emph{creative telescoping}~\cite{zeilberger}, which can then be applied again to prove that the explicit formulas for $SB_n$ conjectured in~\cite{BevanPrivate} are correct. Finally, we prove the formula for $SB_n$ conjectured in~\cite{bevan}, which in turn gives us the asymptotic behavior of $SB_n$. \subsection{Succession rule for semi-Baxter permutations and plane permutations}\label{sec:GenTreeSemiBax} Similarly to the case of permutations avoiding $231$ described in Section~\ref{sec:review_gentree}, we will provide below generating trees for semi-Baxter permutations and for plane permutations, where permutations grow by insertion of an element on the right. Recall that for any permutation $\pi$ of size $n$ and for any $a \in \{1, \dots, n+1\}$, $\pi \cdot a$ denotes the permutation $\pi'$ of size $n+1$ such that $\pi'_{n+1}=a$, $\pi'_i = \pi_i$ if $\pi_i < a$ and $\pi'_i = \pi_i +1$ if $\pi_i \geq a$. \begin{proposition} \label{prop:semi_rule} A generating tree for semi-Baxter permutations can be obtained by insertions on the right, and it is isomorphic to the tree generated by the following succession rule: $$\Omega_{semi}=\left\{\begin{array}{ll} (1,1)\\ (h,k) \rightsquigarrow \hspace{-3mm}& (1,k+1), \dots , (h,k+1)\\ & (h+k,1), \dots , (h+1,k). \end{array}\right.$$ \end{proposition} \begin{proof} First, observe that by removing the last element of a permutation avoiding $2\underbracket[.5pt][1pt]{41}3$, we obtain a permutation that still avoids $2\underbracket[.5pt][1pt]{41}3$. So, a generating tree for semi-Baxter permutations can be obtained with local expansions on the right. For $\pi$ a semi-Baxter permutation of size $n$, the \emph{active sites} are by definition the points $a$ (or equivalently the values $a$) such that $\pi \cdot a$ is also semi-Baxter, \emph{i.e.}, avoids $2\underbracket[.5pt][1pt]{41}3$. The other points $a$ are called non-active sites. An occurrence of $2\underbracket[.5pt][1pt]{31}$ in $\pi$ is a subsequence $\pi_j \pi_i \pi_{i+1}$ (with $j<i$) such that $\pi_{i+1}<\pi_j<\pi_i$. Obviously, the non-active sites $a$ of $\pi$ are characterized by the fact that $a \in (\pi_j,\pi_i]$ for some occurrence $\pi_j \pi_i \pi_{i+1}$ of $2\underbracket[.5pt][1pt]{31}$. We call a \emph{non-empty descent} of $\pi$ a pair $\pi_i \pi_{i+1}$ such that there exists $\pi_j$ that makes $\pi_j \pi_i \pi_{i+1}$ an occurrence of $2\underbracket[.5pt][1pt]{31}$. Note that in the case where $\pi_{n-1} \pi_n$ is a non-empty descent, choosing $\pi_j = \pi_{n} +1$ always gives an occurrence of $2\underbracket[.5pt][1pt]{31}$, and it is the smallest possible value of $\pi_j$ for which $\pi_j \pi_{n-1} \pi_n$ is an occurrence of $2\underbracket[.5pt][1pt]{31}$. To each semi-Baxter permutation $\pi$ of size $n$, we assign a label $(h,k)$, where $h$ (resp. $k$) is the number of active sites of $\pi$ smaller than or equal to (resp. greater than) $\pi_n$. Remark that $h,k\geq1$, since $1$ and $n+1$ are always active sites. Moreover, the label of the permutation $\pi=1$ is $(1,1)$, which is the root in $\Omega_{semi}$. Consider a semi-Baxter permutation $\pi$ of size $n$ and label $(h,k)$.
Proving Proposition~\ref{prop:semi_rule} amounts to showing that permutations $\pi \cdot a$ have labels $(1,k+1), \dots , (h,k+1), (h+k,1), \dots , (h+1,k)$ when $a$ runs over all active sites of $\pi$. Figure~\ref{fig:semi}, which shows an example of a semi-Baxter permutation $\pi$ with label $(2,2)$ and all the corresponding $\pi \cdot a$ with their labels, should help in understanding the case analysis that follows. Let $a$ be an active site of $\pi$. Assume first that $a > \pi_n$ (this happens exactly $k$ times), so that $\pi \cdot a$ ends with an ascent. The occurrences of $2\underbracket[.5pt][1pt]{31}$ in $\pi \cdot a$ are the same as in $\pi$. Consequently, the active sites are not modified, except that the active site $a$ of $\pi$ is now split into two active sites of $\pi \cdot a$: one immediately below $a$ and one immediately above. It follows that $\pi \cdot a$ has label $(h+k+1-i,i)$ if $a$ is the $i$-th active site from the top. Since $i$ ranges from $1$ to $k$, this gives the second row of the production of $\Omega_{semi}$. Assume next that $a=\pi_n$. Then, $\pi \cdot a$ ends with a descent, but an empty one. Similarly to the above case, we therefore get one more active site in $\pi \cdot a$ than in $\pi$, and $\pi \cdot a$ has label $(h,k+1)$, the last label in the first row of the production of $\Omega_{semi}$. Finally, assume that $a < \pi_n$ (this happens exactly $h-1$ times). Now, $\pi \cdot a$ ends with a non-empty descent, which is $(\pi_n +1) a$. It follows from the discussion at the beginning of this proof that all sites of $\pi \cdot a$ in $(a+1,\pi_n+1]$ become non-active, while all others remain active if they were so in $\pi$ (again, with $a$ replaced by two active sites surrounding it, one below it and one above). If $a$ is the $i$-th active site from the bottom, it follows that $\pi \cdot a$ has label $(i,k+1)$, hence giving all missing labels in the first row of the production of $\Omega_{semi}$. \end{proof} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{2413av_growth} \caption{The growth of semi-Baxter permutations (with notation as in Figure~\ref{fig:av231}). Non-empty descents are represented with bold lines.} \label{fig:semi} \end{figure} \begin{definition} A \emph{plane permutation} is a permutation that avoids the vincular pattern $2\underbracket[.5pt][1pt]{14}3$ (or equivalently, the barred pattern $21\bar{3}54$). \label{dfn:planePerm} \end{definition} \begin{proposition}\label{prop:plane} A generating tree for plane permutations can be obtained by insertions on the right, and it is isomorphic to the tree generated by the succession rule $\Omega_{semi}$. \end{proposition} \begin{proof} The proof of this statement follows by applying the same steps as in the proof of Proposition~\ref{prop:semi_rule}. First, observe that by removing the last element of a permutation avoiding $2\underbracket[.5pt][1pt]{14}3$, we obtain a permutation that still avoids $2\underbracket[.5pt][1pt]{14}3$. So, a generating tree for plane permutations can be obtained with local expansions on the right. For $\pi$ a plane permutation of size $n$, the active sites are by definition the values $a$ such that $\pi \cdot a$ is also plane, \emph{i.e.}, avoids $2\underbracket[.5pt][1pt]{14}3$. The other points $a$ are called non-active sites. An occurrence of $2\underbracket[.5pt][1pt]{13}$ in $\pi$ is a subsequence $\pi_j \pi_i \pi_{i+1}$ (with $j<i$) such that $\pi_i<\pi_j<\pi_{i+1}$.
Note that the non-active sites $a$ of $\pi$ are characterized by the fact that $a \in (\pi_j,\pi_{i+1}]$ for some occurrence $\pi_j \pi_i \pi_{i+1}$ of $2\underbracket[.5pt][1pt]{13}$. We call a \emph{non-empty ascent} of $\pi$ a pair $\pi_i \pi_{i+1}$ such that there exists $\pi_j$ that makes $\pi_j \pi_i \pi_{i+1}$ an occurrence of $2\underbracket[.5pt][1pt]{13}$. As in the proof of Proposition~\ref{prop:semi_rule}, if $\pi_{n-1} \pi_n$ is a non-empty ascent, $\pi_j = \pi_{n-1} +1$ is the smallest value of $\pi_j$ such that $\pi_j \pi_{n-1} \pi_n$ is an occurrence of $2\underbracket[.5pt][1pt]{13}$. Now, to each plane permutation $\pi$ of size $n$, we assign a label $(h,k)$, where $h$ (resp. $k$) is the number of active sites of $\pi$ greater than (resp. smaller than or equal to) $\pi_n$. Remark that $h,k\geq1$, since $1$ and $n+1$ are always active sites. Moreover, the label of the permutation $\pi=1$ is $(1,1)$, which is the root in $\Omega_{semi}$. The proof is concluded by showing that the permutations $\pi \cdot a$ have labels $(1,k+1), \dots , (h,k+1), (h+k,1), \dots , (h+1,k)$, when $a$ runs over all active sites of $\pi$. If $a \leq \pi_n$, $\pi \cdot a$ ends with a descent, and it follows as in the proof of Proposition~\ref{prop:semi_rule} that the active sites of $\pi \cdot a$ are the same as those of $\pi$ (with $a$ split into two sites). This gives the second row of the production of $\Omega_{semi}$ (the label $(h+k+1-i,i)$ for $1 \leq i \leq k$ corresponding to $a$ being the $i$-th active site from the bottom). If $a=\pi_n +1$, $\pi \cdot a$ ends with an empty ascent, and hence has label $(h,k+1)$ again as in the proof of Proposition~\ref{prop:semi_rule}. Finally, if $a>\pi_n +1$ (which happens $h-1$ times), $\pi \cdot a$ ends with a non-empty ascent. The discussion at the beginning of the proof implies that all sites of $\pi \cdot a$ in $(\pi_n+1,a]$ are deactivated while all others remain active. If $a$ is the $i$-th active site from the top, it follows that $\pi \cdot a$ has label $(i,k+1)$, hence giving all missing labels in the first row of the production of $\Omega_{semi}$. \end{proof} Because the two families of semi-Baxter and of plane permutations grow according to the same succession rule, we obtain the following. \begin{corollary}\label{cor:equinum} Semi-Baxter permutations and plane permutations are equinumerous. In other words, $SB_n$ is also the number of plane permutations of size $n$. \end{corollary} Note that the two generating trees for semi-Baxter and for plane permutations which are encoded by $\Omega_{semi}$ are of course isomorphic: this provides a size-preserving bijection between these two families. It is however not defined directly on the objects themselves, but only by reference to the generating tree structure. \subsection{Another occurrence of semi-Baxter numbers} \label{sec:inv} In this section we provide another occurrence of semi-Baxter numbers, not in terms of pattern-avoiding permutations, but rather in terms of pattern-avoiding inversion sequences. This occurrence appears as a conjecture in the work of M.~Martinez and C.~Savage on these families of objects~\cite{savage}. Recall that an \emph{inversion sequence} of size $n$ is an integer sequence $(e_1,e_2,\dots,e_n)$ satisfying $0\leq e_i< i$ for all $i\in\{1,2,\dots,n\}$. In \cite{savage} the authors introduce the notion of pattern avoidance in inversion sequences in quite general terms.
Of interest to us here is only the set $\mathbf{I}_n(210,100)$ of inversion sequences avoiding the patterns $210$ and $100$: an inversion sequence $(e_1,\ldots,e_n)$ \emph{contains} the pattern $q$, with $q=q_1\ldots q_k$, if there exist $k$ indices $1\leq i_1<\ldots<i_k\leq n$ such that $e_{i_1}\ldots e_{i_k}$ is order-isomorphic to $q$; otherwise $(e_1,\ldots, e_n)$ is said to \emph{avoid} $q$. For instance, the sequence $(0,0,1,3,1,3,2,7,3)$ avoids both $210$ and $100$, while $(0,0,1,2,0,1,1)$ avoids $210$ but contains $100$. The family $\cup_n\mathbf{I}_n(210,100)$ can moreover be characterized as follows. A \emph{weak left-to-right maximum} of an inversion sequence $(e_1,e_2,\dots,e_n)$ is an entry $e_i$ satisfying $e_i \geq e_j$ for all $j\leq i$. Every inversion sequence $e$ can be decomposed into $e^{top}$, which is the (weakly increasing) sequence of weak left-to-right maxima of $e$, and $e^{bottom}$, which is the (possibly empty) sequence of the remaining entries of $e$. \begin{proposition}[\cite{savage}, Observation 10] \label{prop:charact_inv_seq} An inversion sequence $e$ avoids $210$ and $100$ if and only if $e^{top}$ is weakly increasing and $e^{bottom}$ is strictly increasing. \end{proposition} The enumeration of inversion sequences avoiding $210$ and $100$ is solved in \cite{savage}, with a summation formula as reported in Proposition~\ref{prop:enum_inv_seq} below. Let top$(e)=\max\,(e^{top})$ and bottom$(e)=\max\, (e^{bottom})$. If $e^{bottom}$ is empty, the convention is to take $\text{bottom}(e)=-1$. \begin{proposition}[\cite{savage}, Theorem 32] \label{prop:enum_inv_seq} Let $Q_{n,a,b}$ be the number of inversion sequences $e\in\mathbf{I}_n(210,100)$ with top$(e)=a$ and bottom$(e)=b$. Then \[Q_{n,a,b}=\sum_{i=-1}^{b-1}Q_{n-1,a,i}+\sum_{j=b+1}^a Q_{n-1,j,b},\] with initial conditions $Q_{n,a,b}=0$ if $n\leq a$, and $Q_{n,a,-1}=\frac{n-a}{n}\binom{n-1+a}{a}$. Hence, \begin{equation}\label{conjsav} |\mathbf{I}_n(210,100)|=\sum_{a=0}^{n-1}\sum_{b=-1}^{a-1} Q_{n,a,b}=\frac{1}{n+1}\binom{2n}{n}+\sum_{a=0}^{n-1}\sum_{b=0}^{a-1} Q_{n,a,b}. \end{equation} \end{proposition} We prove the following conjecture of~\cite[Section 2.27]{savage}. \begin{thm}\label{conjecture} There are as many inversion sequences of size $n$ avoiding $210$ and $100$ as plane permutations of size $n$. In other words, $|\mathbf{I}_n(210,100)| = SB_n$. \end{thm} \begin{proof} We prove the statement by exhibiting a growth for $\cup_n \mathbf{I}_n(210,100)$ that can be encoded by $\Omega_{semi}$. Given an inversion sequence $e\in\mathbf{I}_n(210,100)$, we make it grow by adding a rightmost entry. Let $a=\mbox{top}(e)$ and $b=\mbox{bottom}(e)$. From Proposition~\ref{prop:charact_inv_seq}, it follows that $f=(e_1,\dots,e_n,p)$ is an inversion sequence of size $n+1$ avoiding $210$ and $100$ if and only if $n\geq p>b$. Moreover, if $p\geq a$, then $f^{top}$ comprises $p$ in addition to the elements of $e^{top}$, and $f^{bottom}=e^{bottom}$; and if $b<p<a$, then $f^{top}=e^{top}$ and $f^{bottom}$ comprises $p$ in addition to the elements of $e^{bottom}$. Now, we assign to any $e\in\mathbf{I}_n(210,100)$ the label $(h,k)$, where $h=a-b$ and $k=n-a$. The sequence $e=(0)$ has label $(1,1)$, since $a=$ top$(e)=0$ and $b=$ bottom$(e)=-1$. Let $e$ be an inversion sequence of $\mathbf{I}_n(210,100)$ with label $(h,k)$.
The labels of the inversion sequences of $\mathbf{I}_{n+1}(210,100)$ produced by adding a rightmost entry $p$ to $e$ are \begin{itemize} \item $(h+k,1), (h+k-1,2),\dots, (h+1,k)$ when $p = n, n-1,\dots, a+1$, \item $(h,k+1)$ when $p=a$, \item $(1,k+1), \dots, (h-1,k+1)$ when $p=a-1,\dots, b+1$, \end{itemize} which concludes the proof that the growth of $\cup_n \mathbf{I}_n(210,100)$ by addition of a rightmost entry is encoded by $\Omega_{semi}$. \end{proof} \begin{remark} In addition, in~\cite[Section 2.27]{savage} it is proved that the set $\mathbf{I}_n(210,100)$ has as many inversion sequences as the sets $\mathbf{I}_n(210,110)$, $\mathbf{I}_n(201,100)$, and $\mathbf{I}_n(201,101)$. Thus, our proof that the semi-Baxter numbers enumerate inversion sequences avoiding $210$ and $100$ also solves the enumeration problem for exactly four cases of~\cite[Table 2]{savage}. \end{remark} \subsection{Enumerative results} \label{sec:resultsGF} For $h,k\geq 1$, let $S_{h,k}(x)\equiv S_{h,k}$ denote the size generating function for semi-Baxter permutations having label $(h,k)$. The rule $\Omega_{semi}$ translates into a functional equation for the generating function $S(x;y,z)\equiv S(y,z)=\sum_{h,k\geq 1} S_{h,k} y^h z^k$. \begin{proposition} \label{prop:funcEqSemiBax} The generating function $S(y,z)$ satisfies the following functional equation: \begin{equation}\label{funeq1} S(y,z)= xyz+\frac{xyz}{1-y} \left( S(1,z) - S(y,z) \right) + \frac{xyz}{z-y} \left( S(y,z) - S(y,y) \right) .\end{equation} \end{proposition} \begin{proof} Starting from the growth of semi-Baxter permutations according to $\Omega_{semi}$ we write: \begin{align*} S(y,z) &= xyz+x\sum_{h,k\geq 1} S_{h,k} \left( (y+y^2+\dots +y^h)z^{k+1}+(y^{h+k}z+y^{h+k-1}z^2+\dots +y^{h+1}z^k) \right) \\ &= xyz+x\sum_{h,k\geq 1} S_{h,k}\left( \frac{1-y^{h}}{1-y} y\, z^{k+1} + \frac{1-\left(\frac{y}{z}\right)^{k}}{1-\frac{y}{z}}y^{h+1} z^{k}\right) \\ &= xyz+ \frac{xyz}{1-y} \left ( S(1,z) - S(y,z) \right) + \frac{xyz}{z-y} \left ( S(y,z) - S(y,y) \right) \, . \qedhere \end{align*} \end{proof} From Proposition~\ref{prop:funcEqSemiBax}, a great deal of information can be derived about the generating function $S(1,1)$ of semi-Baxter numbers, and about these numbers themselves. The results we obtain are stated below, but the proofs are postponed to Subsection~\ref{sec:proofs_GF}. A Maple worksheet recording the computations in these proofs is available from the authors' webpage\footnote{for instance at \url{http://user.math.uzh.ch/bouvel/publications/Semi-Baxter.mw}}. First, using the ``obstinate kernel method'' (used for instance in~\cite{BM} to enumerate Baxter permutations), we can give an expression for $S$. We let $\bar{a}$ denote $1/a$, and $\Omega_{\geq}[F(x;a)]$ denote the \emph{non-negative part} of $F$ in $a$, where $F$ is a formal power series in $x$ whose coefficients are Laurent polynomials in $a$. More precisely, if $F(x;a)=\sum_{n\geq0,i\in\mathbb{Z}}f(n,i)a^ix^n$, then \[\Omega_{\geq}[F(x;a)]=\sum_{n\geq0}x^n\sum_{i\geq0}f(n,i)\,a^i.\] \begin{thm}\label{thm:GFSemiBaxter} Let $W(x;a)\equiv W$ be the unique formal power series in $x$ such that \begin{equation*} W=x\bar{a}(1+a)(W+1+a)(W+a).
\end{equation*} The series solution $S(y,z)$ of eq.~\eqref{funeq1} satisfies \[S(1+a,1+a)={\Omega}_{\geq}\left[F(a,W)\right],\] where the function $F(a,W)$ is defined by \begin{equation}\label{eqP} \begin{array}{ll}F(a,W)=&(1+a)^2\,x+\left(\bar{a}^5+\bar{a}^4+2+2a\right)\,x\,W\\ \\ &+\left(-\bar{a}^5-\bar{a}^4+\bar{a}^3-\bar{a}^2-\bar{a}+1\right)\,x\,W^2+\,\left(\bar{a}^4-\bar{a}^2\right)\,x\,W^3. \end{array} \end{equation}\smallskip \end{thm} Note that in Theorem~\ref{thm:GFSemiBaxter}, $W$ and $F(a,W)$ are algebraic series in $x$ whose coefficients are Laurent polynomials in $a$ with rational coefficients. It follows, as in~\cite[page 6]{BM}, that $S(1+a,1+a)={\Omega}_{\geq}[F(a,W)]$ is D-finite\footnote{By definition, a multivariate generating function $F(\textbf{x})$, where $\textbf{x} = (x_1, \dots, x_k)$, is D-finite when it satisfies a system of linear partial differential equations, one for each $i = 1\dots k$, of the form $Q_{i,0}(\textbf{x})F(\textbf{x}) + Q_{i,1}(\textbf{x}) \frac{\partial}{\partial x_i} F(\textbf{x}) + Q_{i,2} \frac{\partial^2}{\partial x_i^2} F(\textbf{x}) + \dots + Q_{i,r_i} \frac{\partial^{r_i}}{\partial x_i^{r_i}} F(\textbf{x})= 0$, where the $Q_{i,j}$ are polynomials. As stated in~\cite[Theorem B.3]{Flaj}, D-finiteness is preserved by specialization of the variables.}, and hence also $S(1,1)$. Using Lagrange inversion, we can derive from Theorem~\ref{thm:GFSemiBaxter} an explicit but complicated expression for the coefficients of $S(1,1)$, which is reported in Corollary~\ref{cor:CoefSemiBaxter} in Subsection~\ref{sec:proofs_GF}. Surprisingly, this complicated expression hides a very simple recurrence, which also appears as a conjecture in~\cite{bevan}. \begin{proposition}\label{prop:recSemiBaxter} The numbers $SB_n$ are recursively characterized by $SB_0=0$, $SB_1=1$, and for $n\geq2$ \begin{equation}\label{recurrenceSB} SB_n=\frac{11n^2+11n-6}{(n+4)(n+3)}SB_{n-1}+\frac{(n-3)(n-2)}{(n+4)(n+3)}SB_{n-2}. \end{equation} \end{proposition} From the recurrence of Proposition~\ref{prop:recSemiBaxter}, we can in turn prove closed formulas for semi-Baxter numbers, which have been conjectured in~\cite{BevanPrivate}. These are much simpler than the one given in Corollary~\ref{cor:CoefSemiBaxter} by Lagrange inversion, and also very similar to the summation formula for Baxter numbers (which we recall in Subsection~\ref{sec:Bax_known}). \begin{thm}\label{thm:nice_formulas_SB_n} For any $n\geq 2$, the number $SB_n$ of semi-Baxter permutations of size $n$ satisfies \begin{align*} SB_n & = \frac{24}{(n-1) n^2 (n+1) (n+2)} \sum_{j=0}^n \binom{n}{j+2} \binom{n+2}{j} \binom{n+j+2}{j+1} \\ & = \frac{24}{(n-1) n^2 (n+1) (n+2)} \sum_{j=0}^n \binom{n}{j+2} \binom{n+1}{j} \binom{n+j+2}{j+3}\\ & = \frac{24}{(n-1) n^2 (n+1) (n+2)} \sum_{j=0}^n \binom{n+1}{j+3} \binom{n+2}{j+1} \binom{n+j+3}{j}. \end{align*} \end{thm} There is actually a fourth formula that has been conjectured in~\cite{BevanPrivate}, namely \[ SB_n = \frac{24}{(n-1) n (n+1)^2 (n+2)} \sum_{j=0}^n \binom{n+1}{j} \binom{n+1}{j+3} \binom{n+j+2}{j+2}. \] Bringing the multiplicative factors inside the sums, it is easy to see (for instance, by going back to the definition of binomial coefficients as quotients of factorials) that it is term-by-term equal to the second formula of Theorem~\ref{thm:nice_formulas_SB_n}.
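These formulas are straightforward to cross-check numerically. The short Python sketch below (illustrative only; the helper names \texttt{sb\_recurrence}, \texttt{sb\_closed} and \texttt{sb\_brute} are ours, and only the standard library is used) computes $SB_n$ from the recurrence of Proposition~\ref{prop:recSemiBaxter}, from the first closed formula of Theorem~\ref{thm:nice_formulas_SB_n}, and -- via Theorem~\ref{conjecture} -- by brute-force counting of inversion sequences avoiding $210$ and $100$; all three computations agree with the first terms of the sequence listed in Subsection~\ref{sec:intro_semiBax}.

\begin{verbatim}
from math import comb
from itertools import product

def sb_recurrence(N):
    # Recurrence SB_n = ((11n^2+11n-6) SB_{n-1} + (n-3)(n-2) SB_{n-2}) / ((n+4)(n+3));
    # the division is exact, so integer arithmetic suffices.
    SB = [0, 1]
    for n in range(2, N + 1):
        SB.append(((11*n*n + 11*n - 6) * SB[n-1]
                   + (n-3) * (n-2) * SB[n-2]) // ((n+4) * (n+3)))
    return SB[1:]

def sb_closed(n):
    # First closed formula of the theorem above (valid for n >= 2).
    s = sum(comb(n, j+2) * comb(n+2, j) * comb(n+j+2, j+1) for j in range(n+1))
    return 24 * s // ((n-1) * n * n * (n+1) * (n+2))

def sb_brute(n):
    # |I_n(210,100)|: no indices i<j<k with e_i > e_j >= e_k (covers both patterns).
    def ok(e):
        return not any(e[i] > e[j] >= e[k]
                       for k in range(len(e))
                       for j in range(k) for i in range(j))
    return sum(ok(e) for e in product(*(range(i) for i in range(1, n+1))))

print(sb_recurrence(9))                      # 1, 2, 6, 23, 104, 530, 2958, ...
print([sb_closed(n) for n in range(2, 10)])
print([sb_brute(n) for n in range(1, 9)])    # same values, via inversion sequences
\end{verbatim}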
As indicated in Subsection~\ref{sec:intro_semiBax}, in addition to the formulas reported in Theorem~\ref{thm:nice_formulas_SB_n} above, two conjectural formulas for $SB_n$ have been proposed in the literature, in different contexts. The first one has been shown in Subsection~\ref{sec:inv} (eq.~\eqref{conjsav}). This formula was proved in~\cite{savage} and its validity for semi-Baxter numbers follows from Theorem~\ref{conjecture}. The second formula is attributed to M. Van Hoeij and reported by D. Bevan in~\cite{bevan}. This second conjecture is an explicit formula for semi-Baxter numbers that involves the numbers $a_n=\sum_{j=0}^n\binom{n}{j}^2\binom{n+j}{j}$ (sequence \text{A005258} on~\cite{OEIS}). We will prove in Subsection~\ref{sec:proofs_GF} the validity of this conjecture by using the recursive formula for semi-Baxter numbers (Proposition~\ref{prop:recSemiBaxter}). \begin{proposition}[\cite{bevan}, Conjecture 13.2]\label{prop:conj} For $n\geq2$, $$SB_n=\frac{24}{5}\,\frac{(5n^3-5n+6)a_{n+1}-(5n^2+15n+18)a_n}{(n-1)n^2(n+2)^2(n+3)^2(n+4)}.$$ \end{proposition} \begin{remark} With Corollary~\ref{cor:CoefSemiBaxter}, Theorem~\ref{thm:nice_formulas_SB_n} and Proposition~\ref{prop:conj}, we get five expressions for the $n$th semi-Baxter number as a sum over $j$. Note that although the sums are equal, the corresponding summands in each sum are not. Therefore, Corollary~\ref{cor:CoefSemiBaxter}, Theorem~\ref{thm:nice_formulas_SB_n} and Proposition~\ref{prop:conj} give five essentially different ways of expressing the semi-Baxter numbers. Note however that we are not aware of any combinatorial interpretation of the summation index $j$, for any of them. \end{remark} From the formula of Proposition~\ref{prop:conj}, we can derive the dominant asymptotics of $SB_n$. \begin{corollary}\label{cor:asym_SB_n} Let $\lambda = \frac{1}{2}(\sqrt{5}-1)$. It holds that \[SB_n \sim A\,\frac{\mu^{n}}{n^{6}},\] where $A=\frac{12}{\pi}\,5^{-1/4} \lambda^{-15/2}\approx94.34$ and $\mu=\lambda^{-5}= (11+5\sqrt{5})/2$. \end{corollary} \subsection{Enumerative results: proofs}\label{sec:proofs_GF} Recall that $S(y,z)$ denotes the multivariate generating function of semi-Baxter permutations. In Theorem~\ref{thm:GFSemiBaxter}, we gave an expression for $S(1+a,1+a)$; we now prove it. \begin{proof}[Proof of Theorem~\ref{thm:GFSemiBaxter}] The linear functional equation of eq.~\eqref{funeq1} has two catalytic variables, $y$ and $z$. To solve eq.~\eqref{funeq1} it is convenient to set $y=1+a$ and collect all the terms involving $S(1+a,z)$, obtaining the {\em kernel form} of the equation: \begin{equation} \label{eq1az} K(a,z) S(1+a,z)= xz(1+a)-\frac{xz(1+a)}{a} S(1,z) - \frac{xz(1+a)}{z-1-a} S(1+a,1+a) , \end{equation} where the kernel is \[K(a,z)=1-\frac{xz(1+a)}{a}- \frac{xz(1+a)}{z-1-a}.\] For brevity, we refer to the right-hand side of eq.~\eqref{eq1az} as $R(x,a,z,S(1,z),S(1+a,1+a))$. The kernel is quadratic in $z$. Denoting by $Z_{+}(a)$ and $Z_{-}(a)$ the solutions of $K(a,z)=0$ with respect to $z$, and $Q=\sqrt{a^2-2ax-6a^2x+x^2+2ax^2+a^2x^2-4a^3x}$, we have \begin{align*} Z_{+}(a) & =\frac{1}{2}\frac{a+x+ax-Q}{x(1+a)}=(1+a)+(1+a)^2x+\frac{(1+a)^3(1+2a)}{a}x^2+O(x^3),\\ Z_{-}(a) & =\frac{1}{2}\frac{a+x+ax+Q}{x(1+a)}=\frac{a}{(1+a)x}-a-(1+a)^2x-\frac{(1+a)^3(1+2a)}{a}x^2+O(x^3). \end{align*} Both $Z_{+}$ and $Z_{-}$ are Laurent series in $x$ whose coefficients are Laurent polynomials in $a$.
However, only the kernel root $Z_{+}$ is a formal power series in $x$ whose coefficients are Laurent polynomials in $a$. So, setting $z=Z_{+}$, the function $S(1+a,z)$ is a formal power series in $x$ whose coefficients are Laurent polynomials in $a$, and the right-hand side of eq.~\eqref{eq1az} is equal to zero, \emph{i.e.}, $R(x,a,Z_{+},S(1,Z_{+}),S(1+a,1+a))=0$. Note in addition that the coefficients of $Z_{+}$ are multiples of $(1+a)$. At this point we follow the usual kernel method (see for instance~\cite{BM}) and attempt to eliminate the term $S(1,Z_+)$ by exploiting transformations that leave the kernel, $K(a,z)$, unchanged. Examining the kernel shows that the transformations $$\Phi:(a,z)\rightarrow\left(\frac{z-1-a}{1+a},z\right)\;\;\;\mbox{ and }\;\;\;\Psi:(a,z)\rightarrow\left(a,\frac{z+za-1-a}{z-1-a}\right)$$ leave the kernel unchanged and generate a group of order $10$. Among all the elements of this group we consider the following pairs $(f_1(a,z),f_2(a,z))$: $$\left[a,z\right]\mathop{\longleftrightarrow}_{\Phi}\left[\frac{z- 1- a}{1+a},z\right]\mathop{\longleftrightarrow}_{\Psi}\left[\frac{z- 1- a}{1+a},\frac{z-1}{a}\right]\mathop{\longleftrightarrow}_{\Phi}\left[\frac{z- 1- a}{az},\frac{z-1}{a}\right]\mathop{\longleftrightarrow}_{\Psi} \left[\frac{z- 1- a}{az},\frac{1+a}{a}\right].$$ These have been chosen since, for each of them, $f_1(a,Z_{+})$ and $f_2(a,Z_{+})$ are formal power series in $x$ with Laurent polynomial coefficients in $a$. Consequently, all the series $S(1+f_1(a,Z_{+}),f_2(a,Z_{+}))$ are formal power series in $x$. It follows that, substituting each of these pairs for $(a,z)$ in eq.~\eqref{eq1az}, we obtain a system of five equations, whose left-hand sides are all $0$, and with six unknowns: \[ \begin{cases} 0=R(x,a,Z_{+},S(1,Z_{+}),S(1+a,1+a))\\ \\ 0=R\left(x,\frac{Z_{+}- 1- a}{1+a},Z_{+},S(1,Z_{+}),S(1+\frac{Z_{+}- 1- a}{1+a},1+\frac{Z_{+}- 1- a}{1+a})\right)\\ \\ 0=R\left(x,\frac{Z_{+}-1}{a},\frac{Z_{+}- 1- a}{1+a},S(1,\frac{Z_{+}-1}{a}),S(1+\frac{Z_{+}- 1- a}{1+a},1+\frac{Z_{+}- 1- a}{1+a})\right) \\ \\ 0=R\left(x,\frac{Z_{+}- 1- a}{aZ_{+}},\frac{Z_{+}-1}{a},S(1,\frac{Z_{+}-1}{a}),S(1+\frac{Z_{+}- 1- a}{aZ_{+}},1+\frac{Z_{+}- 1- a}{aZ_{+}})\right)\\ \\ 0=R\left(x,\frac{Z_{+}- 1- a}{aZ_{+}},\frac{1+a}{a},S(1,\frac{1+a}{a}),S(1+\frac{Z_{+}- 1- a}{aZ_{+}},1+\frac{Z_{+}- 1- a}{aZ_{+}})\right). \end{cases} \] Eliminating all unknowns except $S(1+a, 1+a)$ and $S(1,1+\bar{a})$, this system reduces (after some work) to the following equation: \begin{equation} \label{final} S(1+a, 1+a)+\frac{(1+a)^2x}{a^4}S\left(1,1+\bar{a}\right)+P(a,Z_{+})=0, \end{equation} where $P(a,z)=(-z+1+a)(-za^4+z^2a^4-za^3+z^2a^3-z^3a^2-2a^2+z^2a^2+za^2-4a+5az-3az^2+z^3a+3z-z^2-2)/({z}{a}^4(z-1))$. Note that the coefficient of $S(1,1+\bar{a})$ in eq.~\eqref{final} turns out to be equal to $(1+a)^2x\bar{a}^4$ only after setting $z=Z_+$ and simplifying the resulting expression. Now, the form of eq.~\eqref{final} allows us to separate its terms according to the power of $a$: \begin{itemize} \item $S(1+a,1+a)$ is a power series in $x$ with polynomial coefficients in $a$ whose lowest power of $a$ is $0$, \item $S(1,1+\bar{a})$ is a power series in $x$ with polynomial coefficients in $\bar{a}$ whose highest power of $a$ is $0$; consequently, we obtain that $\frac{(1+a)^2x}{a^4}S(1,1+\bar{a})$ is a power series in $x$ with polynomial coefficients in $\bar{a}$ whose highest power of $a$ is $-2$.
\end{itemize} Hence, when we expand the series $-P(a,Z_+)$ as a power series in $x$, the non-negative powers of $a$ in the coefficients must be equal to those of $S(1+a,1+a)$, while the negative powers of $a$ come from $\frac{(1+a)^2x}{a^4}S(1,1+\bar{a})$. Then, in order to obtain a better expression for $P(a,z)$, we perform a further substitution, setting $z=w+1+a$. More precisely, let $W\equiv W(x;a)$ be the power series in $x$ defined by $W=Z_{+}-(1+a)$. Since $K(a,W+1+a)=0$, the function $W$ is recursively defined by \begin{equation} \label{Wexpr} W=x\bar{a}(1+a)(W+1+a)(W+a), \end{equation} as claimed. Moreover, we have the following expression for $F(a,W) := -P(a,Z_{+})$: \begin{equation*} \begin{array}{ll}F(a,W) = - P(a,W+1+a)=&\displaystyle\,(1+a)^2\,x+\left(\frac{1}{a^5}+\frac{1}{a^4}+2+2a\right)\,x\,W\\[2ex] &\displaystyle+\left(-\frac{1}{a^5}-\frac{1}{a^4}+\frac{1}{a^3}-\frac{1}{a^2}-\frac{1}{a}+1\right)\,x\,W^2\\[2ex] &\displaystyle+\left(\frac{1}{a^4}-\frac{1}{a^2}\right)\,x\,W^3, \end{array} \end{equation*} in which the denominator of $-P(a,W+1+a)$ is eliminated by substituting the right-hand side of eq.~\eqref{Wexpr} for a factor $W$. \end{proof} From the expression of $S(1+a,1+a)$ obtained above, Lagrange inversion allows us to derive an explicit expression for the semi-Baxter numbers, as shown below in Corollary~\ref{cor:CoefSemiBaxter}. From it, we next obtain the simple recurrence of Proposition~\ref{prop:recSemiBaxter}, the conjectured simpler formulas for $SB_n$ given in Theorem~\ref{thm:nice_formulas_SB_n}, and the asymptotic estimate of $SB_n$ stated in Corollary~\ref{cor:asym_SB_n}. \begin{corollary}\label{cor:CoefSemiBaxter} The number $SB_n$ of semi-Baxter permutations of size $n$ satisfies, for all $n\geq2$: \begin{eqnarray*} {SB}_n&\hspace{-3mm}=\frac{1}{n-1}\sum_{j=0}^{n}\binom{n-1}{j}\biggr[\binom{n-1}{j+1}\left[\binom{n+j+1}{j+5}+2\binom{n+j+1}{j}\right]+2\binom{n-1}{j+2}\left[-\binom{n+j+2}{j+5}+\binom{n+j+1}{j+3}\right.\\[-.5ex] &\left.\qquad\quad\quad-\binom{n+j+2}{j+2}+\binom{n+j+1}{j}\right]+3\binom{n-1}{j+3}\left[\binom{n+j+2}{j+4}-\binom{n+j+2}{j+2}\right]\biggr]. \end{eqnarray*} \end{corollary} \begin{proof} The $n$th semi-Baxter number, $SB_n$, is the coefficient of $x^n$ in $S(1,1)$, which we denote as usual $[x^n]S(1,1)$. Notice that this number is also the coefficient $[a^0x^n]S(1+a,1+a)$, and so by Theorem~\ref{thm:GFSemiBaxter} it is the coefficient of $a^0x^n$ in $F(a,W) = - P(a,W+1+a)$, namely \[\begin{array}{ll}SB_n=\displaystyle[a^0x^{n-1}]&\hspace{-3mm}\displaystyle\left((1+a)^2+\left(\frac{1}{a^5}+\frac{1}{a^4}+2+2a\right)\,W+\left(-\frac{1}{a^5}-\frac{1}{a^4}+\frac{1}{a^3}-\frac{1}{a^2}-\frac{1}{a}+1\right)\,W^2\right.\\[3ex] &\hspace{-3mm}\displaystyle+\left.\left(\frac{1}{a^4}-\frac{1}{a^2}\right)\,W^3\right).\end{array}\] This expression can be evaluated from the coefficients $[a^sx^k]W^i$, for $i=1,2,3$.
Precisely, \[\begin{array}{ll} SB_n=&\hspace{-3mm}[a^5x^{n-1}]W+[a^4x^{n-1}]W+2[a^0x^{n-1}]W+2[a^{-1}x^{n-1}]W-[a^5x^{n-1}]W^2-[a^4x^{n-1}]W^2\\ \\ &\hspace{-3mm}+[a^3x^{n-1}]W^2-[a^2x^{n-1}]W^2-[a^1x^{n-1}]W^2+\left[a^0x^{n-1}\right]W^2+[a^4x^{n-1}]W^3-[a^2x^{n-1}]W^3.\end{array}\] Lagrange inversion applied to eq.~\eqref{Wexpr} then proves that $$[a^sx^k]W^i=\frac{i}{k}\sum_{j=0}^{k-i}\binom{k}{j}\binom{k}{j+i}\binom{k+j+i}{j+s},\mbox{ for }i=1,2,3.$$ We can then substitute this into the above expression for $SB_n$ and, for $n\geq2$, obtain the announced explicit formula for the semi-Baxter numbers by writing $SB_n=\sum_{j=0}^{n-1} F_{SB}(n,j)$, where \belowdisplayskip=-10pt \begin{eqnarray} \notag F_{SB}(n,j)\hspace{-6mm}&&\displaystyle=\frac{1}{n-1}\binom{n-1}{j}\Biggr[\binom{n-1}{j+1}\left[\binom{n+j+1}{j+5}+2\binom{n+j+1}{j}\right]\\[1ex]\label{eq:summand} &&\quad\displaystyle +2\,\binom{n-1}{j+2}\left[-\binom{n+j+2}{j+5}+\binom{n+j+1}{j+3}-\binom{n+j+2}{j+2}+\binom{n+j+1}{j}\right]\\[1ex]\notag &&\quad\displaystyle+3\,\binom{n-1}{j+3}\left[\binom{n+j+2}{j+4}-\binom{n+j+2}{j+2}\right]\Biggr]. \end{eqnarray} \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:recSemiBaxter}] From Corollary~\ref{cor:CoefSemiBaxter}, we can write $SB_n=\sum_{j=0}^{n-1} F_{SB}(n,j)$, where the summand $F_{SB}(n,j)$ given by eq.~\eqref{eq:summand} is hypergeometric, and we prove the announced recurrence using \emph{creative telescoping}~\cite{zeilberger}. The Maple package {\tt SumTools[Hypergeometric][Zeilberger]} implements this approach: using $F_{SB}(n,j)$ as input, it yields \begin{multline}\label{zeil1} (n+5)(n+6)\cdot F_{SB}(n+2,j) - (11n^2+55n+60)\cdot F_{SB}(n+1,j) - n(n-1)\cdot F_{SB}(n,j)\\ = G_{SB}(n, j + 1)-G_{SB}(n, j), \end{multline} where $G_{SB}(n,j)$ is known as the certificate. It has the additional property that $G_{SB}(n,j) / F_{SB}(n,j)$ is a rational function of $n$ and $j$. The expression $G_{SB}(n,j)$ is quite cumbersome and we do not report it here --- it can be readily reconstructed using \texttt{Zeilberger} as done in the Maple worksheet associated with our paper. To complete the proof of the recurrence, it is sufficient to sum both sides of eq.~\eqref{zeil1} over $j$, for $j$ ranging from $0$ to $n+1$. Since the coefficients on the left-hand side of eq.~\eqref{zeil1} are independent of $j$, summing it over $j$ gives \begin{multline} (n+5)(n+6) \cdot SB_{n+2} - (11n^2+55n+60)\cdot SB_{n+1} - n(n-1)\cdot SB_n \\ - (11n^2+55n+60) \cdot F_{SB}(n+1,n+1) - n(n-1)\cdot (F_{SB}(n,n)+F_{SB}(n,n+1)). \end{multline} Summing the right-hand side over $j$ gives a telescoping series, which simplifies to $G_{SB}(n, n + 2)-G_{SB}(n, 0)$. From the explicit expressions of $F_{SB}(n,j)$ and $G_{SB}(n,j)$, it is elementary to check that \[ F_{SB}(n+1,n+1) = F_{SB}(n,n) = F_{SB}(n,n+1) = G_{SB}(n, n + 2) = G_{SB}(n, 0) =0. \] Summing eq.~\eqref{zeil1} therefore gives \[ (n+5)(n+6) \cdot SB_{n+2} - (11n^2+55n+60)\cdot SB_{n+1} - n(n-1)\cdot SB_n =0. \] Shifting $n \mapsto n-2$ and rearranging finally gives the recurrence of Proposition~\ref{prop:recSemiBaxter}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:nice_formulas_SB_n}] For each of the summation formulas given in Theorem~\ref{thm:nice_formulas_SB_n}, we apply the method of creative telescoping, as in the proof of Proposition~\ref{prop:recSemiBaxter}. In all three cases, this produces a recurrence satisfied by these numbers, and each time we find exactly the recurrence given in Proposition~\ref{prop:recSemiBaxter}. 
Checking that the initial terms of the sequences coincide completes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:conj}] For the sake of brevity, we write $A(n)=5n^3-5n+6$ and $B(n)=5n^2+15n+18$, so that the statement becomes \begin{equation}\label{pr:conj} SB_n=\frac{24(A(n)\,a_{n+1}-B(n)\,a_n)}{5(n-1)n^2(n+2)^2(n+3)^2(n+4)}. \end{equation} The validity of eq.~\eqref{pr:conj} is proved by induction on $n$, using Proposition~\ref{prop:recSemiBaxter} and the following recurrence satisfied by the numbers $a_n$, for $n\geq1$: \begin{equation}\label{rec:aperi} a_{n+1}=\frac{11n^2+11n+3}{(n+1)^2}\,a_n+\frac{n^2}{(n+1)^2}\,a_{n-1}, \mbox{ with }a_0=1,\mbox{ and }a_1=3. \end{equation} For $n=2,3$, it holds that $SB_2=({A(2)a_3-B(2)a_2})/{2000}=({36\cdot147-68\cdot19})/{2000}=2$ and $SB_3=({A(3)a_4-B(3)a_3})/{23625}=({126\cdot1251-108\cdot147})/{23625}=6$. Then, suppose that eq.~\eqref{pr:conj} is valid for $n-1$ and $n-2$. In order to prove it for $n$, consider the recursive formula of eq.~\eqref{recurrenceSB} and substitute $SB_{n-1}$ and $SB_{n-2}$ into it using eq.~\eqref{pr:conj}. After some manipulation, using eq.~\eqref{rec:aperi}, we can write $SB_n$ as in eq.~\eqref{pr:conj}. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:asym_SB_n}] Applying the main theorem of~\cite{McIntosh}, it follows immediately that \[ a_n \sim \frac{\mu^{n+1/2}}{2 \pi \lambda n \sqrt{\nu}} \textrm{ \quad for } \lambda = \frac{\sqrt{5}-1}{2}, \mu= \frac{11+5\sqrt{5}}{2} \textrm{ and } \nu=\frac{2\sqrt{5}}{3-\sqrt{5}}. \] Plugging this expression into the relation \[ SB_n=\frac{24}{5(n-1)n^2(n+2)^2(n+3)^2(n+4)}\,\big((5n^3-5n+6)a_{n+1}-(5n^2+15n+18)a_n\big), \] we see that only the first of these two terms contributes to the asymptotic behavior of $SB_n$, and more precisely that \[ SB_n \sim A \mu^n n^{-6} \textrm{ \quad for } \mu \textrm{ as above and } A=\frac{24 \mu^{3/2}}{2 \pi \lambda \sqrt{\nu}}. \] The claimed statement then follows by noticing that $\mu=\lambda^{-5}$ and $\nu = \frac{\sqrt{5}}{\lambda^2}$. \end{proof} \section{Baxter numbers}\label{sec:Bax} This section starts with an overview of some known results about Baxter numbers. We believe this helps in understanding the relations, similarities and differences between this well-known sequence and the two main sequences studied in our work (semi-Baxter numbers in Section~\ref{sec:semi} and strong-Baxter numbers in Section~\ref{sec:strong}). Next, studying two families of restricted semi-Baxter permutations enumerated by Baxter numbers, we show that $\Omega_{semi}$ generalizes two known succession rules for Baxter numbers. \subsection{Baxter numbers and restricted permutations}\label{sec:Bax_known} \emph{Baxter permutations} (see~\cite{Gire} among others) are usually defined as permutations avoiding the two vincular patterns $2\underbracket[.5pt][1pt]{41}3$ and $3\underbracket[.5pt][1pt]{14}2$. Denoting by $B_n$ the number of Baxter permutations of size $n$, the sequence $(B_n)$ is known as the sequence of \emph{Baxter numbers}. It is identified as sequence \text{A001181} in~\cite{OEIS} and its first terms are $1,2,6,22,92,422,2074,10754, 58202, 326240, 1882960, 11140560,\dots$. Since~\cite{CGHK78}, an explicit formula for $B_n$ has been known: \begin{equation*} \textrm{for all } n\geq1,\ B_{n}=\frac{2}{n(n+1)^2}\sum_{j=1}^{n}\binom{n+1}{j-1}\binom{n+1}{j}\binom{n+1}{j+1}. \end{equation*} In~\cite{BM}, M. Bousquet-M\'elou investigates further properties of Baxter numbers. 
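This closed formula is straightforward to check against the first terms of the sequence listed above. The following short sketch (in Python; it is our own check and is not taken from~\cite{BM}) reproduces them:
\begin{verbatim}
from math import comb

def baxter(n):
    # explicit formula for B_n recalled above; the division is exact
    s = sum(comb(n + 1, j - 1) * comb(n + 1, j) * comb(n + 1, j + 1)
            for j in range(1, n + 1))
    return 2 * s // (n * (n + 1)**2)

print([baxter(n) for n in range(1, 9)])
# [1, 2, 6, 22, 92, 422, 2074, 10754]
\end{verbatim}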
The above formula can also be found in~\cite[Theorem 1]{BM}. Moreover, using the succession rule reviewed in Proposition~\ref{prop:growthBaxterPerm} below, \cite{BM} characterizes the generating function of Baxter numbers as the solution of a bivariate functional equation. It is then solved with the obstinate kernel method, implying that the generating function for Baxter numbers is D-finite~\cite[Theorem 4]{BM}. Although technical details differ, it is the same approach as the one we used in Section~\ref{sec:semi}. In light of our recurrence for semi-Baxter numbers (see Proposition~\ref{prop:recSemiBaxter}), it is also interesting to note that Baxter numbers satisfy a similar recurrence, reported by R. L. Ollerton in~\cite{OEIS}, namely \begin{equation*}B_0=0, \quad B_1=1, \quad \text{ and for } n\geq2, B_n=\frac{7n^2+7n-2}{(n+3)(n+2)}B_{n-1}+\frac{8(n-2)(n-1)}{(n+3)(n+2)}B_{n-2}.\end{equation*} In addition to Baxter permutations, several combinatorial families are enumerated by Baxter numbers. See for instance~\cite{ffno}, which collects some of them and provides links between them. We will be specifically interested in a second family of restricted permutations which is also enumerated by Baxter numbers, namely the \emph{twisted Baxter permutations}, defined by the avoidance of $2\underbracket[.5pt][1pt]{41}3$ and $3\underbracket[.5pt][1pt]{41}2$~\cite{Rea05,West}. \subsection{Succession rules for Baxter and twisted Baxter permutations} It is clear from their definition in terms of pattern-avoidance that the families of Baxter and twisted Baxter permutations are subsets of the family of semi-Baxter permutations. Therefore, the growth of semi-Baxter permutations provided in Subsection~\ref{sec:GenTreeSemiBax} can be restricted to each of these families, producing a succession rule for Baxter numbers. In the following, we present these two restrictions, which happen to be (variants of) well-known succession rules for Baxter numbers. This reinforces our conviction that the generalization of Baxter numbers to semi-Baxter numbers is natural. \medskip Let us first consider Baxter permutations. To that end, recall that an LTR (left-to-right) maximum of a permutation $\pi$ is an element $\pi_i$ such that $\pi_i > \pi_j$ for all $j<i$. Similarly, an RTL maximum (resp. RTL minimum) of $\pi$ is an element $\pi_i$ such that $\pi_i > \pi_j$ (resp. $\pi_i < \pi_j$) for all $j>i$. Following~\cite[Section 2.1]{BM}, we can make Baxter permutations grow by adding new maximal elements to them, which may be inserted either immediately before an LTR maximum or immediately after an RTL maximum. Giving any Baxter permutation the label $(h,k)$, where $h$ (resp. $k$) is the number of its RTL (resp. LTR) maxima, this gives the most classical succession rule for Baxter numbers. \begin{proposition}[\cite{BM}, Lemma 2] \label{prop:growthBaxterPerm} The growth of Baxter permutations by insertion of a maximal element is encoded by the rule $$\Omega_{Bax}=\left\{\begin{array}{ll} (1,1)\\ (h,k) \rightsquigarrow \hspace{-3mm}& (1,k+1), \dots , (h,k+1)\\ & (h+1,1), \dots , (h+1,k), \end{array}\right.$$ where $h$ (resp. $k$) is the number of RTL (resp. LTR) maxima. \end{proposition} But note that the class of Baxter permutations is invariant under the $8$ symmetries of the square. Consequently, up to a $90^\circ$ rotation, inserting a new maximum element in a Baxter permutation can easily be regarded as inserting a new element on the right of a Baxter permutation (as we did for semi-Baxter permutations). 
These new elements are then inserted immediately below an RTL minimum or immediately above an RTL maximum. Note that, in a semi-Baxter permutation, these are always active sites, so the generating tree associated with $\Omega_{Bax}$ is a subtree of the generating tree associated with $\Omega_{semi}$. Through the rotation, the interpretation of the label $(h,k)$ of a Baxter permutation is modified as follows: $h$ (resp. $k$) is the number of its RTL minima (resp. RTL maxima), that is to say, the number of active sites below (resp. above) the last element of the permutation. As expected, this coincides with the interpretation of labels in the growth of semi-Baxter permutations according to $\Omega_{semi}$. \medskip Turning to twisted Baxter permutations and specializing the growth of semi-Baxter permutations, we obtain the following. \begin{proposition}\label{prop:twist_rule} A generating tree for twisted Baxter permutations can be obtained by insertions on the right, and it is isomorphic to the tree generated by the following succession rule: $$\Omega_{TBax}=\left\{\begin{array}{ll} (1,1)\\ (h,k) \rightsquigarrow \hspace{-3mm}& (1,k), \dots ,(h-1,k), (h,k+1)\\ & (h+k,1), \dots , (h+1,k). \end{array}\right.$$ \end{proposition} \begin{proof} As in the proof of Proposition~\ref{prop:semi_rule}, we let twisted Baxter permutations grow by performing local expansions on the right, as illustrated in Figure~\ref{fig:twisted_growth}. (This is possible since removing the last element in a twisted Baxter permutation produces a twisted Baxter permutation.) Let $\pi$ be a twisted Baxter permutation of size $n$. By definition, an active site of $\pi$ is an element $a$ such that $\pi\cdot a$ avoids the two forbidden patterns. Then, we assign to $\pi$ a label $(h,k)$, where $h$ (resp. $k$) is the number of active sites smaller than or equal to (resp. greater than) $\pi_n$. As in the proof of Proposition~\ref{prop:semi_rule}, the permutation $1$ has label $(1,1)$, and we now describe the labels of the permutations $\pi\cdot a$ when $a$ runs over all the active sites of $\pi$. If $a<\pi_n$, then $\pi\cdot a$ ends with a non-empty descent and, as in the proof of Proposition~\ref{prop:semi_rule}, all sites of $\pi$ in the range $(a+1,\pi_n+1]$ become non-active in $\pi\cdot a$ (due to the avoidance of $2\underbracket[.5pt][1pt]{41}3$). Moreover, due to the avoidance of $3\underbracket[.5pt][1pt]{41}2$, the site immediately above $a$ in $\pi\cdot a$ also becomes non-active. All other active sites of $\pi$ remain active in $\pi\cdot a$, hence giving the labels $(i,k)$, for $1\leq i<h$, in the productions of $\Omega_{TBax}$ ($(i,k)$ corresponds to the case where $a$ is the $i$th active site from the bottom). If $a=\pi_n$, no sites of $\pi$ become non-active, giving the label $(h,k+1)$. If $a>\pi_n$, then $\pi\cdot a$ ends with an ascent and no site of $\pi$ becomes non-active. Hence, we obtain the missing labels in the production of $\Omega_{TBax}$: $(h+k+1-i,i)$, for $1\leq i\leq k$ (the label $(h+k+1-i,i)$ corresponds to $a$ being the $i$th active site from the top). 
\end{proof} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{twisted_growth} \caption{The growth of twisted Baxter permutations (with notation as in Figure~\ref{fig:semi}).} \label{fig:twisted_growth} \end{figure} We remark that although $\Omega_{TBax}$ is not precisely the succession rule presented in~\cite{Twis} for twisted Baxter permutations, it is an obvious variant of it: indeed, starting from the rule of~\cite{Twis}, it is enough to replace every label $(q,r)$ by $(r+1,q-1)$ to recover $\Omega_{TBax}$. It follows immediately from the proof of Proposition~\ref{prop:twist_rule} that $\Omega_{TBax}$ is a specialization of $\Omega_{semi}$. With $\Omega_{Bax}$, we therefore obtain two such specializations. In addition, we can observe that the productions of $\Omega_{TBax}$ on the second line are the same as in $\Omega_{semi}$, whereas the productions on the first line of $\Omega_{Bax}$ are the same as in $\Omega_{semi}$. This means that the restrictions imposed by these two specializations are ``independent''. We will combine them in Section~\ref{sec:strong}, obtaining a succession rule which consists of the first line of $\Omega_{TBax}$ and the second line of $\Omega_{Bax}$. \section{Strong-Baxter numbers} \label{sec:strong} While Section~\ref{sec:semi} studied a sequence larger than the Baxter numbers (with a family of permutations containing both the Baxter and twisted Baxter permutations), we now turn to a sequence smaller than the Baxter numbers (associated with a family of permutations included in both families of Baxter and twisted Baxter permutations). We present a succession rule for this sequence, and properties of its generating function. \subsection{Strong-Baxter numbers, strong-Baxter permutations, and their succession rule} \begin{definition} A \emph{strong-Baxter permutation} is a permutation that avoids all three vincular patterns $2\underbracket[.5pt][1pt]{41}3$, $3\underbracket[.5pt][1pt]{14}2$ and $3\underbracket[.5pt][1pt]{41}2$. \end{definition} \begin{definition} The sequence of \emph{strong-Baxter numbers} is the sequence that enumerates strong-Baxter permutations. \end{definition} We have added the sequence enumerating strong-Baxter permutations to the OEIS, where it is now registered as \cite[A281784]{OEIS}. It starts with: \[1, 2, 6, 21, 82, 346, 1547, 7236, 35090, 175268, 897273, 4690392, 24961300, \ldots\] The pattern-avoidance definition makes it clear that the family of strong-Baxter permutations is the intersection of the two families of Baxter and twisted Baxter permutations. In that sense, these permutations ``satisfy two Baxter conditions'', hence the name strong-Baxter. \medskip A succession rule for strong-Baxter numbers is given by the following proposition. \begin{proposition} A generating tree for strong-Baxter permutations can be obtained by insertions on the right, and it is isomorphic to the tree generated by the following succession rule: $$\Omega_{strong}=\left\{\begin{array}{ll} (1,1)\\ (h,k) \rightsquigarrow \hspace{-3mm}& (1,k), \dots ,(h-1,k), (h,k+1)\\ & (h+1,1), \dots , (h+1,k). 
\end{array}\right.$$ \end{proposition} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{strong_growth} \caption{The growth of strong-Baxter permutations (with notation as in Figure~\ref{fig:semi}).} \label{fig:strong} \end{figure} \begin{proof} As in the proofs of Propositions~\ref{prop:semi_rule} and~\ref{prop:twist_rule}, we build a generating tree for strong-Baxter permutations by performing local expansions on the right, as illustrated in Figure~\ref{fig:strong}. Note that this is possible since removing the last point from any strong-Baxter permutation gives a strong-Baxter permutation. Let $\pi$ be a strong-Baxter permutation of size $n$. By definition, the active sites of $\pi$ are the values $a$ such that $\pi \cdot a$ is a strong-Baxter permutation. Any non-empty descent (resp. ascent) of $\pi$ is a pair $\pi_i \pi_{i+1}$ such that there exists $\pi_j$ that makes $\pi_j \pi_i \pi_{i+1}$ an occurrence of $2\underbracket[.5pt][1pt]{31}$ (resp. $2\underbracket[.5pt][1pt]{13}$). Then, the non-active sites $a$ of $\pi$ are characterized by the fact that $a \in (\pi_{i+1},\pi_i]$ (resp. $a \in (\pi_i,\pi_j]\,$), for some occurrence $\pi_j \pi_i \pi_{i+1}$ of $2\underbracket[.5pt][1pt]{31}$ (resp. $2\underbracket[.5pt][1pt]{13}$). Note that in the case where $\pi_{n-1} \pi_n$ is a non-empty descent (resp. ascent), choosing $\pi_j = \pi_{n} +1$ (resp. $\pi_j = \pi_{n}-1$) always gives an occurrence of $2\underbracket[.5pt][1pt]{31}$ (resp. $2\underbracket[.5pt][1pt]{13}$), and it is the smallest (resp. largest) possible value of $\pi_j$ for which $\pi_j \pi_{n-1} \pi_n$ is an occurrence of $2\underbracket[.5pt][1pt]{31}$ (resp. $2\underbracket[.5pt][1pt]{13}$). To the strong-Baxter permutation $\pi$ we assign the label $(h,k)$, where $h$ (resp. $k$) is the number of active sites that are smaller than or equal to (resp. greater than) $\pi_n$. As in the proof of Proposition~\ref{prop:semi_rule}, the permutation $1$ has label $(1,1)$, and we now need to describe, for $\pi$ of label $(h,k)$, the labels of the permutations $\pi \cdot a$ when $a$ runs over all active sites of $\pi$. So, let $a$ be such an active site. If $a < \pi_n$, then $\pi \cdot a$ ends with a non-empty descent. As in the proof of Proposition~\ref{prop:semi_rule}, all sites of $\pi \cdot a$ in $(a+1,\pi_n+1]$ become non-active (due to the avoidance of $2\underbracket[.5pt][1pt]{41}3$). Moreover, due to the avoidance of $3\underbracket[.5pt][1pt]{41}2$, the site immediately above $a$ in $\pi \cdot a$ also becomes non-active. All other active sites of $\pi$ remain active in $\pi \cdot a$, hence giving the labels $(i,k)$ for $1 \leq i < h$ in the production of $\Omega_{strong}$ (again, $i$ is such that $a$ is the $i$-th active site from the bottom). If $a=\pi_n$, no site of $\pi$ becomes non-active, giving the label $(h,k+1)$ in the production of $\Omega_{strong}$. Finally, if $a > \pi_n$, then $\pi \cdot a$ ends with an ascent. Because of the avoidance of $3\underbracket[.5pt][1pt]{14}2$, we need to consider the occurrences of $2\underbracket[.5pt][1pt]{13}$ in $\pi$ to identify which active sites of $\pi$ become non-active in $\pi \cdot a$. It follows from a discussion similar to that in the proof of Proposition~\ref{prop:semi_rule} that all sites of $\pi \cdot a$ in $[\pi_n+1,a)$ become non-active. Hence, we obtain the missing labels in the production of $\Omega_{strong}$: $(h+1,i)$ for $1 \leq i \leq k$ (where $i$ indicates that $a$ is the $i$-th active site from the top). 
\end{proof} In the same sense that both $\Omega_{Bax}$ and $\Omega_{TBax}$ specialize $\Omega_{semi}$, it is easy to see that the succession rule $\Omega_{strong}$ is a specialization of the rule $\Omega_{Bax}$ (for Baxter permutations) as well as of the rule $\Omega_{TBax}$ (for twisted Baxter permutations). In this case, the rule $\Omega_{strong}$ associated with the intersection of these two families is simply obtained by taking, for each object produced, the minimum label among the two labels given by $\Omega_{Bax}$ and $\Omega_{TBax}$. This appears clearly in the following representation: $$\begin{array}{lllcccccccc} \Omega_{semi}:&(h,k)&\rightarrow&(1,k+1)&\dots&(h-1,k+1)&(h,k+1)&(h+k,1)&\dots&(h+1,k)\\ \Omega_{Bax}:&(h,k)&\rightarrow&(1,k+1)&\dots&(h-1,k+1)&(h,k+1)&(h+1,1)&\dots&(h+1,k)\\ \Omega_{TBax}:&(h,k)&\rightarrow&(1,k)&\dots&(h-1,k)&(h,k+1)&(h+k,1)&\dots&(h+1,k)\\ \Omega_{strong}:&(h,k)&\rightarrow&(1,k)&\dots&(h-1,k)&(h,k+1)&(h+1,1)&\dots&(h+1,k). \end{array}$$ This is easily explained. Note first that in all four cases $h$ (resp. $k$) records the number of active sites below (resp. above) the rightmost element of a permutation. Then, it is enough to remark that among the active sites of a semi-Baxter permutation (avoiding $2\underbracket[.5pt][1pt]{41}3$), the avoidance of $3\underbracket[.5pt][1pt]{41}2$ deactivates only sites above the rightmost element of the permutation, while the avoidance of $3\underbracket[.5pt][1pt]{14}2$ deactivates only sites below it. \subsection{Generating function of strong-Baxter numbers} Let $I_{h,k}(x)\equiv I_{h,k}$ denote the generating function for strong-Baxter permutations having label $(h,k)$, with $h,k\geq1$, and let $I(x;y,z)\equiv I(y,z)=\sum_{h,k\geq 1} I_{h,k} y^h z^k$. (The notation $I$ stands for Intersection, of the families of Baxter and twisted Baxter permutations.) \begin{proposition} The generating function $I(y,z)$ satisfies the following functional equation: \begin{equation} \label{inters} I(y,z)=xyz+\frac{x}{1-y}(y\,I(1,z)-I(y,z))+xz\,I(y,z)+\frac{xyz}{1-z}(I(y,1)-I(y,z)). \end{equation} \end{proposition} \begin{proof} From the growth of strong-Baxter permutations according to $\Omega_{strong}$ we write: \begin{align*} I(y,z) &= xyz+x\sum_{h,k\geq 1} I_{h,k} \left( (y+y^2+\dots +y^{h-1})z^{k}+y^h z^{k+1}+y^{h+1}(z+z^2+\dots +z^k) \right) \\ &= xyz+x\sum_{h,k\geq 1} I_{h,k}\left( \frac{1-y^{h-1}}{1-y} y\, z^{k} +y^h z^{k+1}+ \frac{1-z^{k}}{1-z} y^{h+1}\, z\right) \\ &= xyz+ \frac{x}{1-y} \left (y\,I(1,z) - I(y,z) \right) + xz\,I(y,z)+\frac{xyz}{1-z} \left ( I(y,1) - I(y,z) \right) \, . \qedhere \end{align*} \end{proof} In order to study the nature of the generating function $I(1,1)$ for strong-Baxter numbers, we look at the kernel of eq.~\eqref{inters}, which is \begin{equation} \label{kernel_in} K(y,z)=1+x\left(\frac{1}{1-y}-z+\frac{yz}{1-z}\right). \end{equation} We perform the substitutions $y=1+a$ and $z=1+b$ so that eq.~\eqref{kernel_in} is rewritten as \begin{equation} \label{kerab_in} K(1+a,1+b)= 1-x Q(a,b) \textrm{ \ where \ } Q(a,b) = \frac{1}{a}+\frac{1}{b}+\frac{a}{b}+a+2+b. 
\end{equation} As in the proof of Theorem~\ref{thm:GFSemiBaxter} (see Subsection~\ref{sec:proofs_GF}), we look for the birational transformations $\Phi$ and $\Psi$ in $a$ and $b$ that leave the kernel unchanged, which are: $$\Phi:(a,b)\rightarrow\left(a,\frac{1+a}{b}\right),\;\;\;\mbox{ and }\;\;\;\Psi:(a,b)\rightarrow\left(-\frac{b}{a(1+b)},b\right).$$ One observes, using Maple for example, that the group generated by these two transformations is not of small order. We actually suspect that it is of infinite order, preventing us from using the obstinate kernel method to solve eq.~\eqref{inters}. Nevertheless, after the substitution $y=1+a$ and $z=1+b$, the kernel we obtain in eq.~\eqref{kerab_in} resembles the kernels of functional equations associated with the enumeration of families of walks in the (positive) quarter plane \cite{BMM}. \begin{proposition}\label{prop:walks} Let $W(t;a,b)$ be the generating function for walks confined to the quarter plane and using $\{(-1,0)$, $(0,-1),(1,-1)$, $(1,0)$, $(0,1)\}$ as step set, where $t$ counts the number of steps and $a$ (resp. $b$) records the $x$-coordinate (resp. $y$-coordinate) of the ending point. The function $W(t;a,b)$ satisfies the following functional equation: \begin{equation}\label{walks} W(t;a,b)=1+t\left(\frac{1}{a}+\frac{1}{b}+\frac{a}{b}+a+b\right)W(t;a,b)-\frac{t}{a}W(t;0,b)-t\frac{(1+a)}{b}W(t;a,0). \end{equation} \end{proposition} Not only can we take inspiration from the literature on walks in the quarter plane for our problem of solving eq.~\eqref{inters}, but, by modifying the step set, we can even arrange that $K(1+a,1+b)$ is exactly the kernel arising in the functional equation for enumerating a family of walks. \begin{lemma}\label{lem:strong} Let $W_2(t;a,b)$ be the generating function for walks confined to the quarter plane and using $\{(-1,0),(0,-1),(1,-1),(1,0),(0,1),(0,0), (0,0)\}$ as step (multi-)set, where $t$ counts the number of steps and $a$ (resp. $b$) records the $x$-coordinate (resp. $y$-coordinate) of the ending point. The difference with the step set of Proposition~\ref{prop:walks} is that we have added two copies of the trivial step $(0,0)$, which are distinguished (they can be considered as counterclockwise and clockwise loops, for instance). The generating functions $W(t;a,b)$ and $W_2(t;a,b)$ are related by \begin{equation} \label{eq:WW2} W_2(x;a,b) =W\left(\frac{x}{1-2x};a,b\right) \frac{1}{1-2x}. \end{equation} Moreover, denoting by $J(x;a,b):=I(x;1+a,1+b)$ the generating function for strong-Baxter permutations, it holds that \begin{equation} \label{eq:JW2} J(x;a,b) = (1+a)(1+b)\,x\,W_2(x;a,b). \end{equation} \end{lemma} \begin{proof} First, walks counted by $W_2$ can be described from walks counted by $W$ as follows: a $W_2$-walk is a (possibly empty) sequence of trivial steps, followed by a $W$-walk where, after each step, we insert a (possibly empty) sequence of trivial steps. This simple combinatorial argument shows that $W_2(x;a,b) =W(\frac{x}{1-2x};a,b) \frac{1}{1-2x}$. Next, consider the kernel form of eq.~\eqref{inters} after substituting $y=1+a$ and $z=1+b$, which is \begin{equation}\label{comparing1} (1-xQ(a,b)) J(x;a,b)=x(1+a)(1+b)-x\,\frac{1+a}{a}\,J(x;0,b)-x\,\frac{(1+a)(1+b)}{b}\,J(x;a,0). \end{equation} Compare it to the kernel form of eq.~\eqref{walks}: \begin{equation}\label{comparing2} (1-t(Q(a,b)-2))W(t;a,b)=1-\frac{t}{a}\,W(t;0,b)-t\,\frac{(1+a)}{b}\,W(t;a,0). 
\end{equation} Substituting $t=\frac{x}{1-2x}$ in eq.~\eqref{comparing2}, and multiplying this equation by $(1+a)(1+b)x$, we see that $(1+a)(1+b)\,x\,W_2(x;a,b)$ satisfies eq.~\eqref{comparing1}, proving our claim. \end{proof} Combined with results of~\cite{bostan}, this easily gives the following theorem. \begin{thm}\label{naturegf} The generating function $I(1,1)$ of strong-Baxter numbers is not D-finite. The same holds for the refined generating function $I(a+1,b+1)$. \end{thm} \begin{proof} Because D-finiteness is preserved by specialization, it is enough to prove that $I(1,1)$ is not D-finite. So, with the notation of Lemma~\ref{lem:strong}, our goal is to prove that $J(x;0,0)$ is not D-finite. Recall from eq.~\eqref{eq:JW2} that $J(x;a,b) = (1+a)(1+b)\,x\,W_2(x;a,b)$, so $J(x;0,0)$ and $W_2(x;0,0)$ coincide up to a factor $x$. Therefore, it is enough to prove that $W_2(x;0,0)$ is not D-finite. It is proved in~\cite{bostan} that $W(t;0,0)$ is not D-finite. Consequently, since $\frac{1}{1-2x}$ and $\frac{x}{1-2x}$ are rational series, it follows from eq.~\eqref{eq:WW2} that $W_2(x;0,0)$ is not D-finite, as desired. \end{proof} Moreover, some information on the asymptotic behavior of the number of strong-Baxter permutations can be derived from the connection to walks confined to the quarter plane. In~\cite{bostan} the following proposition is presented. \begin{proposition}[Denisov and Wachtel, \cite{bostan}(Theorem~4)]\label{DW} Let $\mathfrak{S}\subseteq\{0,\pm1\}^2$ be a step set which is not confined to a half-plane. Let $e_n$ denote the number of excursions of length $n$ confined to the quarter plane $\mathbb{N}^2$ and using only steps in $\mathfrak{S}$. Then, there exist constants $K$, $\rho$, and $\alpha$, which depend only on $\mathfrak{S}$, such that: \begin{itemize} \item if the walk is aperiodic, $e_n\sim K\,\rho^n\,n^\alpha$, \item if the walk is periodic (then of period 2), $e_{2n}\sim K\,\rho^{2n}\,(2n)^\alpha,\;\; e_{2n+1}=0$. \end{itemize} \end{proposition} From~\cite[Section 2.5]{bostan}, the growth constant $\rho_W$ associated with $W(t;0,0)$ is an algebraic number whose minimal polynomial is $\mu_{\rho}=t^3+t^2-18t-43$; its approximate value is $4.729031538$. We show below that the growth constant of the strong-Baxter numbers is closely related to $\rho_W$. \begin{corollary}\label{cor:strong} \label{cor:growth_strong} The growth constant for the strong-Baxter numbers is $\rho_W+2 \approx 6.729031538$. \end{corollary} \begin{proof} From Lemma~\ref{lem:strong}, $I(x;1,1)=x\,W_2(x;0,0)=x\,W(\frac{x}{1-2x};0,0)\,\frac{1}{1-2x}$. From the discussion above, $\frac{1}{\rho_W}$ is the radius of convergence of $W(t;0,0)$. The radius of convergence of $g(x)=\frac{x}{1-2x}$ is $\frac{1}{2}$, and $\lim_{{x\to 1/2} \atop {x<1/2}}g(x) = +\infty > \frac{1}{\rho_W}$. So, the composition $W(g(x);0,0)$ is supercritical (see \cite[p. 411]{Flaj}), and the radius of convergence of $W(\frac{x}{1-2x};0,0)$ is $g^{-1}\left(\frac{1}{\rho_W}\right)=\frac{1}{\rho_W+2}$. Since $\frac{1}{\rho_W+2}$ is smaller than the radius of convergence $\frac{1}{2}$ of $\frac{1}{1-2x}$, $\frac{1}{\rho_W+2}$ is also the radius of convergence of $x\,W(\frac{x}{1-2x};0,0)\,\frac{1}{1-2x} = I(x;1,1)$, proving our claim. \end{proof} \subsection*{Acknowledgements} The comments of several colleagues on an earlier draft of our paper have helped us improve it significantly. 
First, we would like to thank David Bevan for sharing his conjectural formulas for $SB_n$ in~\cite{BevanPrivate}, for bringing to our attention the conjecture about the enumeration of permutations avoiding $\underbracket[.5pt][1pt]{14}23$, and for suggesting the method used in an earlier version of this paper to derive the asymptotic behavior of $SB_n$. We also thank Christian Krattenthaler for independently suggesting this method and for pointing us to the reference~\cite{MBM_Xin}. We are very grateful to the referee for their numerous and helpful suggestions. In particular, the current proof of the asymptotic behavior of $SB_n$ (together with the reference~\cite{McIntosh}) was suggested to us by the referee. Finally, we thank Andrew Baxter for clarifying the status of the conjecture about $\underbracket[.5pt][1pt]{14}23$-avoiding permutations, which brought reference~\cite{Kasraoui} to our attention.
{ "timestamp": "2018-01-12T02:04:57", "yymm": "1702", "arxiv_id": "1702.04529", "language": "en", "url": "https://arxiv.org/abs/1702.04529", "abstract": "In this paper, we enumerate two families of pattern-avoiding permutations: those avoiding the vincular pattern $2-41-3$, which we call semi-Baxter permutations, and those avoiding the vincular patterns $2-41-3$, $3-14-2$ and $3-41-2$, which we call strong-Baxter permutations. We call semi-Baxter numbers and strong-Baxter numbers the associated enumeration sequences. We prove that the semi-Baxter numbers enumerate in addition plane permutations (avoiding $2-14-3$). The problem of counting these permutations was open and has given rise to several conjectures, which we also prove in this paper.For each family (that of semi-Baxter -- or equivalently, plane -- and that of strong-Baxter permutations), we describe a generating tree, which translates into a functional equation for the generating function. For semi-Baxter permutations, it is solved using (a variant of) the kernel method: this gives an expression for the generating function while also proving its D-finiteness. From the obtained generating function, we derive closed formulas for the semi-Baxter numbers, a recurrence that they satisfy, as well as their asymptotic behavior. For strong-Baxter permutations, we show that their generating function is (a slight modification of) that of a family of walks in the quarter plane, which is known to be non D-finite.", "subjects": "Combinatorics (math.CO)", "title": "Semi-Baxter and strong-Baxter: two relatives of the Baxter sequence", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692284751636, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.7079584875732892 }
https://arxiv.org/abs/0709.4070
Quasi-period collapse and GL_n(Z)-scissors congruence in rational polytopes
Quasi-period collapse occurs when the Ehrhart quasi-polynomial of a rational polytope has a quasi-period less than the denominator of that polytope. This phenomenon is poorly understood, and all known cases in which it occurs have been proven with ad hoc methods. In this note, we present a conjectural explanation for quasi-period collapse in rational polytopes. We show that this explanation applies to some previous cases appearing in the literature. We also exhibit examples of Ehrhart polynomials of rational polytopes that are not the Ehrhart polynomials of any integral polytope. Our approach depends on the invariance of the Ehrhart quasi-polynomial under the action of affine unimodular transformations. Motivated by the similarity of this idea to the scissors congruence problem, we explore the development of a Dehn-like invariant for rational polytopes in the lattice setting.
\section{Introduction} A \emph{convex rational} (respectively, \emph{integral}) \emph{polytope} \( P \subset \mathbb{R}^{n} \) is the convex hull of finitely many points in \( \mathbb{Q}^{n} \) (respectively, \( \mathbb{Z}^{n} \)). The \emph{dimension} of \( P \) is the dimension of the affine subspace of \( \mathbb{R}^{n} \) spanned by \( P \). Dilating \( P \) by a positive integer factor \( k \) yields the polytope \( k P = \braces{x \in \mathbb{R}^{n} : \tfrac{1}{k} x \in P} \). The \emph{denominator} of \( P \) is the minimum positive integer \( \mathcal{D} \) such that \( \mathcal{D} P \) is an integral polytope. A seminal result of Ehrhart in 1962 \cite{Ehr62} provides a beautiful description of the counting function giving the number \( \lvert kP \cap \mathbb{Z}^{n} \rvert \) of integer lattice points in \( k P \). \begin{thm}[\cite{Ehr62}] \label{thm:EhrhartQPs} If \( P \subset \mathbb{R}^{n} \) is a \( d \)-dimensional rational polytope, then \( \lvert kP \cap \mathbb{Z}^{n} \rvert \) is given by the restriction to the positive integers of a degree-\( d \) quasi-polynomial \( \mathcal{L}_{P}: \mathbb{Z} \to \mathbb{Z} \). That is, there exist periodic functions \( c_{0}, \dotsc, c_{d} \colon \mathbb{Z} \to \mathbb{Q} \) such that \( c_{d} \) is not identically zero and \begin{equation*} \lvert kP \cap \mathbb{Z}^{n} \rvert = \mathcal{L}_{P}(k) = c_{d}(k) k^{d} + \dotsb + c_{1}(k) k + c_{0}(k), \qquad k \in \mathbb{Z}_{> 0}. \end{equation*} \end{thm} We call \( \mathcal{L}_{P} \) the \emph{Ehrhart quasi-polynomial} of \( P \). A positive integer \( N \) is a \emph{quasi-period} of \( \mathcal{L}_{P} \) (or of \( P \)) if \( N \) is divisible by the periods of all of the coefficient functions \( c_{i} \), \( 0 \le i \le d \). (We do not assume that \( N \) is the minimum such positive integer.) When \( P \) is an integral polytope, \( \mathcal{L}_{P} \) has quasi-period 1; that is, \( \mathcal{L}_{P}(k) \) is a polynomial function of \( k \). More generally, the denominator \( \mathcal{D} \) of a polytope \( P \) is a quasi-period of \( \mathcal{L}_{P} \) \cite{Ehr62}. It is somewhat surprising that \( \mathcal{D} \) is not always the \emph{minimum} quasi-period of \( P \). When the minimum quasi-period of \( P \) is less than \( \mathcal{D} \), we say that \emph{quasi-period collapse} has occurred. Several important polyhedra appearing in the representation theory of Lie algebras exhibit period collapse, but the known proofs of these results are not given in terms of the polyhedral geometry \cite{DLM04, DLM06, DW02, KR86}. Quasi-period collapse cannot happen in dimension \( 1 \), but there exist families of polygons in \( \mathbb{R}^{2} \) with arbitrarily large denominators whose minimum quasi-periods are 1. This result was originally proved in \cite{MW05}, where the proof of polynomiality involved subdividing the polygons into polygonal pieces whose Ehrhart quasi-polynomials could be computed. The periodic parts for these pieces could be seen by inspection to cancel, with the result that the counting function for the entire polygon was a polynomial. In this paper, we give a new approach to understanding quasi-period collapse in rational polytopes. This approach yields a much simpler explanation for the polynomiality of the Ehrhart quasi-polynomials appearing in \cite{MW05} (see Example \ref{exm:MW05Triang} below). The demonstration again depends upon polyhedral subdivisions. 
However, instead of explicitly computing the Ehrhart quasi-polynomials of the pieces in this subdivision, we rearrange unimodular images of the pieces to form an integral polytope. Since this rearrangement does not change the number of lattice points in the polytope or in any of its dilations, it follows immediately that the original Ehrhart quasi-polynomial is a polynomial. Thus we avoid computing the Ehrhart quasi-polynomials of the individual pieces. This approach provides a unified framework for demonstrating quasi-period collapse of rational polytopes. We conjecture that a polytope exhibits quasi-period collapse only when the pieces of some subdivision of the polytope can be rearranged by affine unimodular transformations to form a polyhedral complex with the ``right'' denominator. See Conjecture \ref{conj:EhrhartPolysIffUnionOfIntegral} for a precise statement. This motivates a study of the invariants of rational polyhedra under polyhedral subdivision and piecewise unimodular transformations. This is reminiscent of the scissors congruence problem for the group of rigid motions in \( \mathbb{R}^{3} \). In the classical scissors congruence problem, congruence classes of polyhedra are parameterized by volume and the Dehn invariant \cite{Syd65}. This suggests that an analogous system of invariants might determine when two rational polyhedra are equidecomposable with respect to the group \( \Aff_n(\mathbb{Z}) \cong \SLZ \ltimes \mathbb{Z}^{n} \) of affine unimodular transformations. \section{Proving polynomiality of Ehrhart quasi-polynomials} \label{sec:Examples} The phenomenon of quasi-period collapse for rational polytopes is in general poorly understood. In this section, we give examples of rational polytopes that can be shown to have quasi-period 1 by subdivision and rearrangement of unimodular images of the pieces. These examples serve to motivate the following section, in which we conjecture that this method applies to all examples of quasi-period collapse among rational polytopes. \begin{exm} \label{exm:MW05Triang} Given an integer $\mathcal{D} \geq 2$, let $T$ be the triangle with vertices \( (0,0)^{t} \), \( (1,\frac{\mathcal{D}-1}{\mathcal{D}})^{t} \), and $(\mathcal{D},0)^{t}$. Subdivide \( T \) into two triangles by the line \( x = 1 \) (see left of Figure \ref{fig:TriangRearrange}). Let \( L \) be the ``one-third-open'' triangle strictly to the left of the line, and let \( R \) be the closed triangle to the right. Thus we have \begin{align*} L & = \conv \{% (0, 0)^{t}, (1, 0)^{t}, (1,\tfrac{\mathcal{D}-1}{\mathcal{D}})^{t} \} \; \backslash \; [(1, 0)^{t}, (1,\tfrac{\mathcal{D}-1}{\mathcal{D}})^{t}] \\ R & = \conv \{ (1, 0)^{t}, (\mathcal{D}, 0)^{t}, (1,\tfrac{\mathcal{D}-1}{\mathcal{D}})^{t} \}. \end{align*} Let \( U \) be the affine unimodular transformation \( \mathbb{R}^{2} \to \mathbb{R}^{2} \) defined by \[ U(x) = \begin{bmatrix} \mathcal{D}-1 & -\mathcal{D} \\ -1 & 1 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \] Then \( U(L) \) and \( R \) are disjoint, and their union is the \emph{integral} triangle \[ T' = \conv\{ (1, 0)^{t}, (1, 1)^{t}, (\mathcal{D}, 0)^{t} \} \] (see right of Figure \ref{fig:TriangRearrange}). By construction, \( \mathcal{L}_{T'} = \mathcal{L}_{T} \), and so, since \( T' \) is integral, \( \mathcal{L}_{T} \) is a polynomial. \end{exm} The triangle in Example \ref{exm:MW05Triang} first appeared in \cite{MW05}, where it was used to establish the following theorem. 
\begin{thm} \label{thm:Ex} Given an integer $\mathcal{D} \ge 2$, there exists a polygon with denominator $\mathcal{D}$ whose Ehrhart quasi-polynomial is a polynomial. \end{thm} \ifpdf \pdfsyncstop \fi \begin{figure}[tbp] \includegraphics{Figures/TriangRearrange} \caption{% Triangle \( T \) in the case \( \mathcal{D} = 3 \) on left, and \( \SLZ \)-equi\-decom\-posable integral triangle on right }% \label{fig:TriangRearrange} \end{figure} \ifpdf \pdfsyncstart \fi \begin{exm} \label{exm:StanleyPyramid} In \cite{Sta97}, Stanley gives an example of a \( 3 \)-dimensional non-integral polyhedron with quasi-period 1. Let \( P \subset \mathbb{R}^{3} \) be the convex hull of the points \( (0, 0, 0)^{t} \), \( (1, 0, 0)^{t} \), \( (1, 1, 0)^{t} \), \( (0, 1, 0)^{t} \), and \( (1/2, 0, 1/2)^{t} \). This is the pyramid pictured on the left side of Figure \ref{fig:StanleyPyramid}. To see that \( \mathcal{L}_{P} \) is a polynomial, dissect \( P \) by the plane perpendicular to the vector \( w = (-1, 1, 1)^{t} \). The intersection of this plane with \( P \) is indicated by the dark gray triangle in Figure \ref{fig:StanleyPyramid}. Let \( U \) be the unimodular transformation of \( \mathbb{R}^{3} \) whose matrix with respect to the standard basis is \begin{equation*} \begin{bmatrix*}[r] 1 & 0 & 0 \\ 1 & 0 & -1 \\ -1 & \makebox[\negone][r]{\( 1 \)} & 2 \end{bmatrix*}. \end{equation*} Applying this transformation to the part of \( P \) lying in the half-space \( \braces{x \in \mathbb{R}^{3} : w \cdot x \ge 0}\), while leaving the rest of \( P \) fixed, maps \( P \) to the integral simplex on the right side of Figure \ref{fig:StanleyPyramid}. \end{exm} \ifpdf \pdfsyncstop \fi \begin{figure}[tbp] \includegraphics{Figures/Pyramid} \caption{% Non-integral polyhedron and its integral image under piecewise unimodular transformation }% \label{fig:StanleyPyramid} \end{figure} \ifpdf \pdfsyncstart \fi In the preceding examples, we showed that a non-integral polytope had a polynomial Ehrhart quasi-polynomial because it was, in some sense, a disguised integral polytope---it was an integral polytope up to rearrangement and unimodular transformation of its pieces. One might be tempted to conjecture that all polytopes with polynomial Ehrhart quasi-polynomials are disguised integral polytopes in this sense. In particular, this would imply that, for any rational polytope \( Q \), if \( \mathcal{L}_{Q} \) is a polynomial, then \( \mathcal{L}_{Q} = \mathcal{L}_{P} \) for some \emph{integral} polytope \( P \). However, this turns out not to be the case. There exist Ehrhart polynomials that are not the Ehrhart polynomials of any integral polytope. \begin{exm} \label{exm:nonintqp} Let \( T \) be the triangle from Example \ref{exm:MW05Triang}, and let \( Q \) be the quadrilateral that results from the union of \( T \) with its reflection about the \( x \)-axis. Then \( \mathcal{L}_{Q}(k) = 2 \mathcal{L}_{T}(k) - \mathcal{D} k - 1 \) (correcting for the double-counting of the points on the \( x \)-axis). Hence, \( \mathcal{L}_{Q} \) is also a polynomial. Yet we claim that \( \mathcal{L}_{Q} \) is not the Ehrhart polynomial of any integral polygon. This is because \( Q \) has only two lattice points on its boundary, so, by \cite[Theorem 3.1]{MW05}, the coefficient of the linear term of \( \mathcal{L}_{Q} \) is 1. But any integral polygon \( P \) has at least three lattice points on its boundary, so, by Pick's theorem, the coefficient of the linear term of \( \mathcal{L}_{P} \) is at least \( 3/2 \). 
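Both claims lend themselves to a direct computational check. The following sketch (in Python; the specialization \( \mathcal{D}=3 \) and the range of dilations are our own choices) counts lattice points in \( kT \) and \( kQ \) by brute force from the defining inequalities of \( T \), and confirms the relation \( \mathcal{L}_{Q}(k) = 2 \mathcal{L}_{T}(k) - \mathcal{D} k - 1 \) for small \( k \).
\begin{verbatim}
def L_T(k, D):
    # lattice points in k*T, T = conv{(0,0), (1,(D-1)/D), (D,0)}:
    # y >= 0, D*y <= (D-1)*x, D*y + x <= k*D
    return sum(1 for x in range(k * D + 1)
                 for y in range(k * D + 1)
                 if D * y <= (D - 1) * x and D * y + x <= k * D)

def L_Q(k, D):
    # Q is the union of T and its reflection about the x-axis
    return sum(1 for x in range(k * D + 1)
                 for y in range(-k * D, k * D + 1)
                 if D * abs(y) <= (D - 1) * x and D * abs(y) + x <= k * D)

D = 3
print([L_T(k, D) for k in range(1, 6)])  # 4, 9, 16, 25, 36 -- fits (k+1)^2
print([L_Q(k, D) for k in range(1, 6)])  # 4, 11, 22, 37, 56 -- fits 2k^2+k+1
print(all(L_Q(k, D) == 2 * L_T(k, D) - D * k - 1 for k in range(1, 6)))
\end{verbatim}
The quadratic fit \( 2k^{2}+k+1 \) for \( \mathcal{D}=3 \) exhibits the linear coefficient \( 1 \) discussed above.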
\end{exm} \section{Conjectures} \label{sec:Conjectures} As seen in the example concluding the previous section, a polytope \( P \) may have quasi-period 1 and yet not be the result of rearranging unimodular images of the pieces of an integral polytope. Therefore, a more flexible formulation of the process carried out in the preceding examples is necessary if we hope to find a general explanation for the phenomenon of quasi-period collapse. To this end, recall that a \emph{simplex} is the convex hull of a finite set of affinely independent points. An \emph{open simplex} is the interior of a simplex with respect to the affine subspace that it spans. We call an open simplex \emph{integral} if its closure is integral. The function \( \mathcal{L}_{S} \) counting the lattice points in integral dilations of a \( d \)-dimensional open simplex \( S \) satisfies a well-known reciprocity property: \( \mathcal{L}_{S}(k) = (-1)^{d} \mathcal{L}_{\bar{S}} (-k)\), where \( \bar{S} \) is the closure of \( S \) \cite{Ehr67}. In particular, if \( S \) is an integral open simplex, then \( \mathcal{L}_{S}(k) \) is a polynomial function of \( k \). \begin{exm:nonintqpCont} The quadrilateral \( Q \) is a disjoint union of \( T \) and the reflection about the \( x \)-axis of those points in \( T \) strictly above the \( x \)-axis. As in Example \ref{exm:MW05Triang}, each of these two sets may in turn be partitioned into open simplices that, under suitable rearrangement by unimodular transformations, form a disjoint union of integral open simplices. \end{exm:nonintqpCont} Let \( \Aff_n(\mathbb{Z}) \cong \SLZ \ltimes \mathbb{Z}^{n} \) be the group of affine unimodular transformations on \( \mathbb{R}^{n} \). To make the process employed above precise, we define the notion of \( \SLZ \)-equidecomposability. This definition first appeared in~\cite[\S3.1]{Kan98}; it is analogous to the classical Euclidean notion of equidecomposability (see, e.g., \cite[Chapter 7]{AZ04}). \begin{defn} \label{defn:G-Equidecomposability} We say that two subsets $P, Q \subset \mathbb{R}^n$ are \emph{\( \SLZ \)-equi\-de\-com\-posable} if there are open simplices \( T_{1}, \dotsc, T_{r} \) and affine unimodular transformations \( U_{1}, \dotsc, U_{r} \in \Aff_n(\mathbb{Z}) \) such that \begin{equation*} P = \coprod_{i=1}^{r} T_{i} \quad \text{and} \quad Q = \coprod_{i=1}^{r} U_{i}(T_{i}). \end{equation*} (Here, \( \coprod \) indicates disjoint union.) \end{defn} \begin{conj} \label{conj:EhrhartPolysIffUnionOfIntegral} Suppose that \( P \) is a rational polytope with quasi-period 1. Then there exists a disjoint union \( Q \) of integral open simplices such that \( P \) and \( Q \) are \( \SLZ \)-equidecomposable. \end{conj} Conjecture \ref{conj:EhrhartPolysIffUnionOfIntegral} has a natural generalization to polytopes whose quasi-periods collapse to values larger than \( 1 \): if \( P \) has minimum quasi-period \( N \), we conjecture that \( P \) is \( \SLZ \)-equidecomposable with a disjoint union of open simplices whose denominators are at most \( N \). The decompositions employed in Examples \ref{exm:MW05Triang} and \ref{exm:StanleyPyramid} were reasonably easy to find. However, a systematic method of finding such decompositions is obviously desirable if we hope to extend this approach to a general technique for proving polynomiality of Ehrhart quasi-polynomials. 
\begin{openproblem} Find a systematic and useful technique that, given a rational polytope \( P \) that is \( \SLZ \)-equidecomposable with some integral polytope \( Q \), produces a decomposition \( \{T_{i}\} \) of \( P \) and a set of unimodular maps \( \{U_{i}\} \) as in Definition \ref{defn:G-Equidecomposability}. \end{openproblem} \section{% \texorpdfstring{% \( \SLZ \)% }{% GL\_n(Z)% }-Scissors Congruence } \label{sec:GLZ-ScissorsCongruence} Another phenomenon that appeared in the examples from Section \ref{sec:Examples} was the equality of the Ehrhart quasi-polynomials of two distinct polytopes. We say that two rational polytopes \( P \) and \( Q \) are \emph{Ehrhart equivalent} if and only if their Ehrhart quasi-polynomials are equal. Obviously, any two \( \SLZ \)-equidecomposable polytopes are Ehrhart equivalent. But what about the converse? Suppose a rational polytope $Q$ has the same Ehrhart quasi-polynomial as a polytope $P$. Are $P$ and $Q$ $\SLZ$-equidecomposable? The answer is known to be ``yes'' in the case \( d=2 \) \cite[Theorem 1.3]{Gre93}. An analogy with the scissors congruence problem suggests that this is no longer the case for \( d \ge 3 \). Nonetheless, as we prove below, a weak version of the converse direction does hold (Proposition \ref{prop:EhrEquivImpliesWeaklyEquidecomp}). We also propose an ansatz for a $\SLZ$-Dehn invariant, based on a theorem for reflexive polygons. \begin{question} \label{question:EhrEquivIffEquidecomp} Are Ehr\-hart-equi\-va\-lent rational polytopes always $\SLZ$-equi\-decom\-posable? \end{question} \subsection{Weak $\SLZ$-scissors congruence} If we allow more general translations of the pieces in a decomposition of \( P \), we get weak scissors congruences. \begin{defn} Two rational polytopes $P, Q \subset \mathbb{R}^n$ are \emph{weakly $\SLZ$-equi\-de\-com\-posable} if they can be decomposed into rational polytopes $P_1, \ldots, P_r$ and $Q_1, \ldots, Q_r$, respectively, such that $P_i$ is equivalent to $Q_i$ via $\SLZ \ltimes \mathbb{Q}^n$. \end{defn} This is equivalent to saying that there is a factor $k \in \mathbb{Z}_{>0}$ such that $kP$ and $kQ$ are (ordinarily) $\SLZ$-equidecomposable. Observe that the weak version of $\SLZ$-equidecomposability does not imply that the Ehrhart quasi-poly\-no\-mi\-als agree everywhere. Nonetheless, they will agree at infinitely many arguments. Therefore, if two \emph{integral} polytopes are weakly $\SLZ$-equi\-de\-com\-posable, then they must be Ehrhart equivalent. \begin{prop} \label{prop:EhrEquivImpliesWeaklyEquidecomp} Let $P$ and $Q$ be Ehrhart-equivalent rational polytopes. Then $P$ and $Q$ are weakly $\SLZ$-equidecomposable. \end{prop} \begin{cor} Two integral polytopes are Ehrhart equivalent if and only if they are weakly $\SLZ$-equidecomposable. \end{cor} \begin{proof}[% Proof of Proposition \ref{prop:EhrEquivImpliesWeaklyEquidecomp}% ]% By a famous theorem of Kempf et al., there is a positive integer \( N \) such that \( NP \) and \( NQ \) are both integral and admit unimodular triangulations---i.e., triangulations whose simplices are $\Aff_n(\mathbb{Z})$-equivalent to the standard simplex \cite{KKMSD73}. It is well known that the Ehrhart polynomial of a polytope determines the $f$-vector of a unimodular triangulation of that polytope (see, e.g., \cite[Corollary 2.5]{Sta80}). Hence, the triangulations of \( NP \) and \( NQ \) have the same $f$-vector, and all simplices of a given dimension are equivalent under $\Aff_n(\mathbb{Z})$. 
Therefore, the corresponding simplices of \( P \) and \( Q \) are equivalent under \( \SLZ \ltimes \mathbb{Q}^n \). The claim follows. \end{proof} \subsection{A $\SLZ$-Dehn invariant?} For the classical scissors congruence problem in three dimensions, one uses rigid motions rather than lattice-preserving transformations. The volume and the Dehn invariant \begin{equation*} \operatorname{Dehn}(P) \ = \ \sum_{\text{\( e \) an edge of \( P \)}} \text{length}(e) \otimes \text{angle}(e) \quad \in \quad \mathbb{R} \otimes_\mathbb{Z} \mathbb{R} / \mathbb{Z} \pi \end{equation*} provide a complete set of invariants. That is, $3$-dimensional polytopes $P$ and $Q$ are scissors congruent if and only if they have the same volume and the same Dehn invariant. The ``only if'' part is relatively easy to see (see~\cite[Chapter 7]{AZ04}), because the Dehn invariant is additive, and decompositions of polyhedra satisfy the following two properties. \begin{itemize} \item[$(\pi)$] A decomposition edge through a two-dimensional face contributes an angle of $\pi$, so it does not contribute to the Dehn invariant. \item[$(2\pi)$] A decomposition edge through the interior contributes an angle of $2\pi$, so it does not contribute to the Dehn invariant. \end{itemize} \begin{prob} Can we manufacture a Dehn-like invariant in the $\SLZ$ case? \end{prob} This invariant, once constructed, will likely be more appropriate for detecting when two lattice polytopes are $\SLZ$-equidecomposable into lattice polytopes, in particular, when unimodular triangulations exist. The role of the full circle $2\pi$ should be played by the ``12'' of Poonen and Rodriguez-Villegas~\cite{PRV00}. \begin{thm} The sum of the lengths of a reflexive polygon and its dual is $12$. \end{thm} Here, a lattice polygon is reflexive if it contains a unique interior lattice point, and the length is measured with respect to the lattice. The polygon does not need to be convex. In the non-convex case, the definition of the dual is a little harder~\cite{PRV00, HS04}. Around a subdivision edge, we see a polygon with a distinguished interior point---the projection of the edge (see Figure \ref{fig:subdEdge}). \ifpdf \pdfsyncstop \fi \begin{figure}[thb] \centering \includegraphics[width=40mm]{Figures/link} \caption{Projecting a subdivision edge} \label{fig:subdEdge} \end{figure} \ifpdf \pdfsyncstart \fi This gives rise in a canonical way to a (possibly non-convex) reflexive polygon. So we could mimic property $(2\pi)$ of the Dehn invariant by mapping to $\mathbb{Z}/12$. Is there a way to incorporate the property $(\pi)$? \input{HaaseMcAllister.bbl} \end{document}
{ "timestamp": "2007-09-26T15:12:17", "yymm": "0709", "arxiv_id": "0709.4070", "language": "en", "url": "https://arxiv.org/abs/0709.4070", "abstract": "Quasi-period collapse occurs when the Ehrhart quasi-polynomial of a rational polytope has a quasi-period less than the denominator of that polytope. This phenomenon is poorly understood, and all known cases in which it occurs have been proven with ad hoc methods. In this note, we present a conjectural explanation for quasi-period collapse in rational polytopes. We show that this explanation applies to some previous cases appearing in the literature. We also exhibit examples of Ehrhart polynomials of rational polytopes that are not the Ehrhart polynomials of any integral polytope.Our approach depends on the invariance of the Ehrhart quasi-polynomial under the action of affine unimodular transformations. Motivated by the similarity of this idea to the scissors congruence problem, we explore the development of a Dehn-like invariant for rational polytopes in the lattice setting.", "subjects": "Combinatorics (math.CO)", "title": "Quasi-period collapse and GL_n(Z)-scissors congruence in rational polytopes", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692277960746, "lm_q2_score": 0.7248702702332476, "lm_q1q2_score": 0.7079584870810378 }
https://arxiv.org/abs/1005.1634
Interference Alignment in Regenerating Codes for Distributed Storage: Necessity and Code Constructions
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n-1 >= 2k-1 termed the MISER code, which achieves the cut-set bound on the repair bandwidth for the exact-repair of systematic nodes, (b) proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k-3 in the absence of symbol extension, and (d) the construction, also explicit, of MSR codes for d = k+1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA, and IA is also a crucial component of the non-existence proof for d < 2k-3. To the best of our knowledge, the constructions presented in this paper are the first, explicit constructions of regenerating codes that achieve the cut-set bound.
\section{Introduction}\label{sec:intro} In a distributed storage system, information pertaining to a data file is dispersed across nodes in a network in such a manner that an end-user~(whom we term a data-collector, or DC) can retrieve the data stored by tapping into neighboring nodes. A popular option that reduces network congestion and leads to increased resiliency in the face of node failures is to employ erasure coding, for example by calling upon maximum-distance-separable~(MDS) codes such as Reed-Solomon~(RS) codes. Let $B$ be the total number of message symbols, each drawn from a finite field $\mathbb{F}_q$ of size $q$. With RS codes, data is stored across $n$ nodes in the network in such a way that the entire data can be recovered by a data-collector by connecting to any arbitrary $k$ nodes, a process of data recovery that we will refer to as \textit{reconstruction}. Several distributed storage systems such as RAID-6, OceanStore~\cite{oceanstore} and Total~Recall~\cite{totalRecall} employ such an erasure-coding option. Upon failure of an individual node, a self-sustaining data storage network must necessarily possess the ability to repair the failed node. An obvious means to accomplish this is to permit the replacement node to connect to any $k$ nodes, download the entire data, and extract the data that was stored in the failed node. For example, RS codes treat the data stored in each node as a single symbol belonging to the finite field $\mathbb{F}_q$. When this is coupled with the restriction that individual nodes perform linear operations over $\mathbb{F}_q$, it follows that the smallest unit of data that can be downloaded from a node to assist in the repair of a failed node (namely, an $\mathbb{F}_q$ symbol) equals the amount of information stored in the node itself. As a consequence of the MDS property of an RS code, when carrying out repair of a failed node, the replacement node must necessarily collect data from at least $k$ other nodes. As a result, it follows that the total amount of data downloaded to repair a failed node can be no smaller than $B$, the size of the entire file. But clearly, downloading the entire $B$ units of data in order to recover the data stored in a single node that stores only a fraction of the entire data file is wasteful, and raises the question as to whether there is a better option. Such an option is provided by the concept of a \emph{regenerating code} introduced by Dimakis et~al.~\cite{DimKan1}. Regenerating codes overcome the difficulty encountered when working with an RS code by employing codes whose symbol alphabet is a vector over $\mathbb{F}_q$, i.e., an element of $\mathbb{F}_q^{\alpha}$ for some parameter $\alpha > 1$. Each node stores a vector symbol, or equivalently, stores $\alpha$ symbols over $\mathbb{F}_q$. In this setup, it is clear that while maintaining linearity over $\mathbb{F}_q$, it is possible for an individual node to transfer a fraction of the data stored within the node. Apart from this new parameter $\alpha$, two other parameters $(d, \beta)$ are associated with regenerating codes. Thus we have \[ \{ q, \ [n, \ k, \ d], \ (\beta, \ \alpha, \ B)\} \] as the parameter set of a regenerating code. Under the definition of regenerating codes introduced in \cite{DimKan1}, a failed node is permitted to connect to an arbitrary subset of $d$ nodes out of the remaining $(n-1)$ nodes while downloading $\beta \leq \alpha$ symbols from each node.
The total amount $d\beta$ of data downloaded for repair purposes is termed the \textit{repair bandwidth}. Typically, with a regenerating code, the repair bandwidth $d\beta$ is small compared to the size of the file $B$. Fig.~\ref{fig:intro_recon} and Fig.~\ref{fig:intro_regen} illustrate reconstruction and node repair respectively, also depicting the relevant parameters. \begin{figure}[t] \centering \subfloat[]{\includegraphics[trim=0in 1.81in 7in 0in, clip, width=0.3\textwidth]{fig_intro_recon.pdf}\label{fig:intro_recon}} \hspace{.1\textwidth} \subfloat[]{\includegraphics[trim=0in 1.81in 7in 0in, clip, width=0.3\textwidth]{fig_intro_regen.pdf}\label{fig:intro_regen}} \caption{\small The regenerating codes setup: (a) data reconstruction, and (b) repair of a failed node.} \label{fig:completegraph} \end{figure} The cut-set bound of network coding can be invoked to show that the parameters of a regenerating code must necessarily satisfy \cite{YunDimKan}: \begin{eqnarray} B & \leq & \sum_{i=0}^{k-1} \min\{\alpha,(d-i)\beta\}. \label{eq:cut_set_bound} \end{eqnarray} It is desirable to minimize both $\alpha$ as well as $\beta$, since minimizing $\alpha$ results in a minimum storage solution while minimizing $\beta$~(for a fixed $d$) results in a solution that minimizes the repair bandwidth. It turns out that there is a tradeoff between $\alpha$ and $\beta$. The two extreme points in this tradeoff are termed the minimum storage regenerating~(MSR) and minimum bandwidth regenerating~(MBR) points respectively. The parameters $\alpha$ and $\beta$ for the MSR point on the tradeoff can be obtained by first minimizing $\alpha$ and then minimizing $\beta$ to obtain \begin{eqnarray} \alpha_{\text{MSR}} & = & \frac{B}{k} , \nonumber \\ \beta_{\text{MSR}} & = & \frac{B}{k(d-k+1)}. \label{eq:MSR_parameters} \end{eqnarray} Reversing the order leads to the MBR point, which thus corresponds to \begin{eqnarray} \beta_{\text{MBR}} & = & \frac{2B}{k(2d-k+1)} , \nonumber \\ \alpha_{\text{MBR}} & = & \frac{2dB}{k(2d-k+1)} . \label{eq:MBR_parameters} \end{eqnarray} The focus of the present paper is on the MSR point. Note that regenerating codes with $(\alpha = \alpha_{\text{MSR}})$ and $(\beta = \beta_{\text{MSR}})$ are necessarily MDS codes over the vector alphabet $\mathbb{F}_q^{\alpha}$. This follows since the ability to reconstruct the data from any arbitrary $k$ nodes necessarily implies a minimum distance $d_{\min}=n-k+1$. Since the code size equals $ \left( q^{\alpha} \right) ^k$, this meets the Singleton bound, causing the code to be an MDS code. \subsection{Choice of the Parameter $\beta$}\label{subsec:beta_1} Let us next rewrite \eqref{eq:MSR_parameters} in the form \begin{eqnarray} \alpha_{\text{MSR}} & = & \beta_{\text{MSR}} (d-k+1) \nonumber \\ B & = & \beta_{\text{MSR}} (d-k+1)k. \label{eq:beta_as_quantum}\end{eqnarray} Thus if one is able to construct an $[n,\;k,\;d]$ MSR code with repair bandwidth achieving the cut-set bound for a given value of $\beta$, then both $\alpha_{\text{MSR}}=(d-k+1) \beta_{\text{MSR}}$ and the size $B=k \, \alpha_{\text{MSR}}$ of the file are necessarily fixed. It thus makes sense to speak of an achievable triple \[ (\beta, \ \ \alpha=(d-k+1) \beta, \ \ B= k \alpha). \] However if a triple $(\beta, \alpha, B)$ is achievable, then so is the triple $(\ell \beta, \ell \alpha, \ell B)$ simply through a process of divide and conquer, i.e., we divide up the message file into $\ell$ sub-files and apply the code for $(\beta, \alpha, B)$ to each of the $\ell$ sub-files.
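As a quick numeric illustration of \eqref{eq:cut_set_bound}, \eqref{eq:MSR_parameters} and \eqref{eq:MBR_parameters}, the following short Python sketch (our own illustration, not part of the original development) checks that both extreme points meet the cut-set bound with equality for the parameter set $[n=4, \ k=2, \ d=3]$ used later in the paper, and that achievability is preserved under the striping of the file into $\ell$ sub-files.

\begin{verbatim}
# Sketch (illustrative only): the cut-set bound and its two extreme points.
from fractions import Fraction

def cut_set_bound(k, d, alpha, beta):
    # Right-hand side of B <= sum_{i=0}^{k-1} min(alpha, (d-i)*beta).
    return sum(min(alpha, (d - i) * beta) for i in range(k))

def msr_point(B, k, d):
    # Minimize alpha first, then beta.
    return Fraction(B, k), Fraction(B, k * (d - k + 1))

def mbr_point(B, k, d):
    # Minimize beta first, then alpha.
    return (Fraction(2 * d * B, k * (2 * d - k + 1)),
            Fraction(2 * B, k * (2 * d - k + 1)))

k, d, B = 2, 3, 4
for alpha, beta in (msr_point(B, k, d), mbr_point(B, k, d)):
    assert cut_set_bound(k, d, alpha, beta) == B   # bound met with equality
    for l in range(1, 5):                          # striping into l sub-files
        assert cut_set_bound(k, d, l * alpha, l * beta) == l * B
\end{verbatim}

For the MSR point this gives $(\alpha, \beta) = (2, 1)$, and for the MBR point $(\alpha, \beta) = (12/5, 4/5)$, consistent with the formulas above.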
Hence, codes that are applicable for the case $\beta=1$ are of particular importance, as they permit codes to be constructed for every larger integral value of $\beta$. In addition, a code with small $\beta$ will involve manipulating a smaller number of message symbols and hence will, in general, be of lower complexity. For these reasons, in the present paper, codes are constructed for the case $\beta=1$. Setting $\beta=1$ at the MSR point yields \begin{equation} \alpha_{\text{MSR}}=d-k+1 . \label{eq:MSR_beta1_parameters}\end{equation} Note that when $\alpha=1$, we have $B=k$ and meeting the cut-set bound would imply $d = k$. In this case, any $[n,k]$-MDS code will achieve the bound. Hence, we will consider $\alpha > 1$ throughout. \subsection{Additional Terminology} \subsubsection{Exact versus Functional Repair} In general, the cut-set bound~(as derived in~\cite{DimKan1}) applies to functional-repair, that is, it applies to networks which replace a failed node with a replacement node which can carry out all the functions of the earlier failed node, but which does not necessarily store the same data. Thus, under functional-repair, the network needs to inform all nodes in the network of the replacement. This requirement is obviated under exact-repair, where a replacement node stores exactly the same data as was stored in the failed node. We will use the term {\em exact-repair MSR code} to denote a regenerating code operating at the minimum storage point that is capable of exact-repair. \subsubsection{Systematic Codes} A systematic regenerating code can be defined as a regenerating code designed in such a way that the $B$ message symbols are explicitly present amongst the $k \alpha$ code symbols stored in a select set of $k$ nodes, termed the systematic nodes. Clearly, in the case of systematic regenerating codes, exact-repair of the systematic nodes is mandated. A data-collector connecting to the $k$ systematic nodes obtains the $B$ message symbols in an uncoded form, making systematic nodes a preferred choice for data recovery. This makes the fast repair of systematic nodes a priority, motivating the interest in minimizing the repair bandwidth for the exact-repair of systematic nodes. ~ The immediate question that this raises is whether or not the combination of (a) restriction to repair of systematic nodes and (b) requirement of exact-repair of the systematic nodes leads to a bound on the parameters $(\alpha, \beta)$ different from the cut-set bound. It turns out that the same bound on the parameters $(\alpha, \beta)$ appearing in \eqref{eq:MSR_parameters} still applies, and this is established in Section~\ref{sec:notation}. \subsection{Exact-repair MSR Codes as Network Codes}\label{subsec:net_cod} The existence of regenerating codes for the case of functional-repair was proved~(\cite{DimKan1,YunDimKan}) after casting the reconstruction and repair problems as a multicast network coding problem, and using random network codes to achieve the cut-set bound. As shown in our previous work \cite{ourNCC_NC}, construction of exact-repair MSR codes for the repair of systematic nodes is most naturally mapped to a non-multicast problem in network coding, for which very few results are available. \begin{figure}[h] \centering \includegraphics[trim=8in 3.3in 9in 1.5in, clip=true, width=0.8\textwidth]{fig_storage_multicast.pdf} \caption{\small The MSR code design problem for the exact-repair of just the systematic nodes, as a non-multicast network coding problem.
Here, $[n=4, \ k=2, \ d=3]$ with $\beta=1$, giving $(\alpha=2, \ B=4)$. Unmarked edges have capacity $\alpha$. Nodes labelled \textit{DC} are data-collector sinks, and those labelled \textit{$\ell'$} are replacement node sinks.} \label{fig:stoMult423} \end{figure} The non-multicast network for the parameter set $[n=4,\ k=2,\ d=3]$ with $\beta=1$ is shown in Fig.~\ref{fig:stoMult423}. In general, the network can be viewed as having $k$ source nodes, corresponding to the $k$ systematic nodes, generating $\alpha$ symbols each per channel use. The parity nodes correspond to downlink nodes in the graph. To capture the fact that a parity node can store only $\alpha$ symbols, it is split~(as in~\cite{YunDimKan}) into two parts connected by a link of capacity $\alpha$: parity node $m$ is split into $m_{\text{in}}$ and $m_{\text{out}}$ with all incoming edges arriving at $m_{\text{in}}$ and all outgoing edges emanating from $m_{\text{out}}$. The sinks in the network are of two types. The first type corresponds to data-collectors which connect to an arbitrary collection of $k$ nodes in the network for the purposes of data reconstruction. Hence there are ${{n}\choose{k}}$ sinks of this type. The second type of sinks represents a replacement node that is attempting to duplicate a failed systematic node, with the node replacing systematic node $\ell$ denoted by $\ell'$. Sinks of this type connect to an arbitrary set of $d$ out of the remaining $(n-1)$ nodes, and hence they are $k { {n-1}\choose{d}}$ in number. It is the presence of these sinks that gives the problem a non-multicast nature. Thus, the present paper provides an instance where explicit code constructions achieve the cut-set bound for a non-multicast network, by exploiting the specific structure of the network. ~ \paragraph*{Relation Between $\beta$ and Scalar/Vector Network Coding} The choice of $\beta$ as unity~(as in Fig.~\ref{fig:stoMult423}) may be viewed as an instance of scalar network coding. Upon increase in the value of $\beta$, the capacity of each data pipe is increased by a factor of $\beta$, thereby transforming the problem into a \textit{vector network coding} problem. Thus, $\beta=1$ implies the absence of \textit{symbol extension}, which, in general, reduces the complexity of system implementation and is thus of greater practical interest.
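The sink structure described above is easy to enumerate programmatically; the following small Python sketch (ours, purely illustrative) lists the two types of sinks for the network of Fig.~\ref{fig:stoMult423} and confirms the counts ${n \choose k}$ and $k{n-1 \choose d}$.

\begin{verbatim}
# Sketch (illustrative only): enumerate the sinks of the non-multicast network.
from itertools import combinations
from math import comb

def sinks(n, k, d):
    nodes = range(1, n + 1)
    # Data-collector sinks: one per k-subset of the n storage nodes.
    dc = list(combinations(nodes, k))
    # Replacement-node sinks: for each systematic node l (nodes 1..k),
    # one sink per d-subset of the remaining n-1 nodes.
    repl = [(l, helpers) for l in range(1, k + 1)
            for helpers in combinations([m for m in nodes if m != l], d)]
    return dc, repl

dc, repl = sinks(n=4, k=2, d=3)
assert len(dc) == comb(4, 2)         # 6 data-collector sinks
assert len(repl) == 2 * comb(3, 3)   # 2 replacement sinks, as in the figure
\end{verbatim}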
The construction does the next best thing, namely, it carries out repair that is approximately exact~\footnote{The code consists of an exact-repair part along with an auxiliary part whose repair is not guaranteed to be exact. This is explained in greater detail in Section~\ref{sec:MDSplus}.}. \end{itemize} ~ Note that the only explicit codes of the MDS type to have previously been constructed are for the small parameter values $[n=4, \ k=2,\ d=3]$ and $[n=5, \ k=3,\ d=4]$. Prior work is described in greater detail in Section~\ref{sec:priorWork}. ~ The remainder of the paper is organized as follows. A brief overview of the prior literature in this field is given in the next section, Section~\ref{sec:priorWork}. The setting and notation are explained in Section~\ref{sec:notation}. The appearance of interference alignment in the context of distributed storage for construction of regenerating codes is detailed in Section~\ref{sec:intf_align} along with an illustrative example. Section~\ref{sec:gen_explicit} describes the MISER code. The non-existence of linear exact-repair MSR codes for $d < 2k-3$ in the absence of symbol extension can be found in Section~\ref{sec:non_exist_alpha_3}, along with the proof establishing the necessity of interference alignment. Section~\ref{sec:MDSplus} describes the explicit construction of an MSR code for $d=k+1$. The final section, Section~\ref{sec:conclusion}, draws conclusions. \section{Prior Work}\label{sec:priorWork} The concept of regenerating codes, introduced in~\cite{DimKan1,YunDimKan}, permits storage nodes to store more than the minimal $B/k$ units of data in order to reduce the repair bandwidth. Several distributed systems are analyzed there, and estimates of the mean node availability in such systems are obtained. Using these values, the substantial performance gains offered by regenerating codes in terms of bandwidth savings are demonstrated. The problem of minimizing repair bandwidth for the \textit{functional} repair of nodes is considered in~\cite{DimKan1,YunDimKan}, where it is formulated as a multicast network-coding problem in a network having an infinite number of nodes. A cut-set lower bound on the repair bandwidth is derived. Coding schemes achieving this bound are presented in~\cite{YunDimKan, WuAchievable}, which, however, are non-explicit. These schemes require a large field size, and the repair and reconstruction algorithms are also of high complexity. Computational complexity is identified as a principal concern in the practical implementation of distributed storage codes in~\cite{Complexi}, and a treatment of the use of random, linear, regenerating codes for achieving functional-repair can be found there. The authors in~\cite{WuDimISIT} and~\cite{ourAllerton} independently introduce the notion of exact-repair. The idea of using interference alignment in the context of exact-repair codes for distributed storage appears first in~\cite{WuDimISIT}. Code constructions of the MDS type are provided, which meet the cut-set lower bound when $k=2$. Even here, the constructions are not explicit, and have large complexity and field-size requirements. The first explicit construction of regenerating codes for the MBR point appears in \cite{ourAllerton}, for the case $d=n-1$. These codes carry out uncoded exact-repair and hence have zero repair complexity. The required field size is of the order of $n^2$, and in terms of minimizing bandwidth, the codes achieve the cut-set bound.
A computer search for exact-repair MSR codes for the parameter set $[n=5,~k=3,~d=4], \ ~\beta=1$, is carried out in~\cite{DimSearch}, and for this set of parameters, codes for several values of field size are obtained. A setting slightly different from the exact-repair situation is considered in~\cite{WuArxiv}, where optimal MDS codes are given for the parameters $d=k+1$ and $n>2k$. Again, the schemes given here are non-explicit, and have high complexity and large field-size requirements. We next describe the setting and notation to be used in the current paper. \section{Setting and Notation} \label{sec:notation} The distributed storage system considered in this paper consists of $n$ storage nodes, each having the capacity to store $\alpha$ symbols. Let $\underline{\mathbf{u}}$ be the message vector of length $B$ comprising the $B$ message symbols. Each message symbol can independently take values from $\mathbb{F}_q$, a finite field of size $q$. In this paper, we consider only linear storage codes. As in traditional coding theory, by a linear storage code we mean that every stored symbol is a linear combination of the message symbols, and only linear operations are permitted on the stored symbols. Thus all symbols considered belong to $\mathbb{F}_q$. For $m=1,\ldots,n$, let the $(B \times \alpha)$ matrix $\mathbf{G}^{(m)}$ denote the generator matrix of node $m$. Node $m$ stores the following $\alpha$ symbols \begin{equation} \underline{\mathbf{u}}^t\mathbf{G}^{(m)}. \end{equation} \noindent In the terminology of network coding, each column of the nodal generator matrix $\mathbf{G}^{(m)}$ corresponds to the \textit{global kernel}~(linear combination vector) associated to a symbol stored in the node. The $(B \times n \alpha)$ generator matrix for the entire distributed-storage code is given by \begin{equation} \mathbb{G} \ = \ \begin{bmatrix} \mathbf{G}^{(1)} & \mathbf{G}^{(2)} & \cdots & \mathbf{G}^{(n)} \end{bmatrix}. \end{equation} Note that under exact-repair, the generator matrix of the code remains unchanged. We will interchangeably speak of a node as either storing $\alpha$ symbols, by which we will mean the symbols $\underline{\mathbf{u}}^t\mathbf{G}^{(m)}$, or else as storing $\alpha$ vectors, by which we will mean the corresponding set of $\alpha$ global kernels that form the columns of the nodal generator matrix $\mathbf{G}^{(m)}$. We partition the $B(=k\alpha)$-length vector $\underline{\mathbf{u}}$ into $k$ components, $\underline{u}_i$ for $i=1,\ldots,k$, each comprising $\alpha$ distinct message symbols: \begin{equation} \underline{\mathbf{u}}= \begin{bmatrix} \underline{u}_1 \\ \vdots \\ \underline{u}_k\end{bmatrix}. \end{equation} We also partition the nodal generator matrices analogously into $k$ sub-matrices as \begin{equation} \mathbf{G}^{(m)} = \begin{bmatrix} G^{(m)}_1 \vspace{5pt} \\ \vdots \vspace{5pt} \\ G^{(m)}_k \vspace{5pt} \end{bmatrix} \label{eq:notation_1}, \end{equation} \noindent where each $G^{(m)}_i$ is an $(\alpha \times \alpha)$ matrix. We will refer to $G^{(m)}_i$ as the $i^{\text{th}}$ component of $\mathbf{G}^{(m)}$. Thus, node $m$ stores the $\alpha$ symbols \begin{equation} \underline{\mathbf{u}}^t \mathbf{G}^{(m)} = \sum_{i=1}^{k} \underline{u}^t_i G^{(m)}_i \label{eq:notation_2}. \end{equation} ~ Out of the $n$ nodes, the first $k$ nodes~(i.e., nodes $1,\ldots,k$) are systematic.
Thus, for systematic node $\ell$ \begin{eqnarray} G^{(\ell)}_i = \left \lbrace \begin{array}{ll} I_{\alpha} &\text{if } i=\ell \\ 0_\alpha &\text{if } i\neq \ell \end{array} \right. \quad \forall i \in \{1,\ldots,k \}, \end{eqnarray} where $0_\alpha$ and $I_\alpha$ denote the $(\alpha \times \alpha)$ zero matrix and identity matrix respectively; systematic node $\ell$ thus stores the $\alpha$ message symbols comprising $\underline{u}_\ell$. Upon failure of a node, the replacement node connects to an arbitrary set of $d$ remaining nodes, termed \textit{helper nodes}, downloading $\beta$ symbols from each. Thus, each helper node passes a collection of $\beta$ linear combinations of the symbols stored within the node. As described in Section~\ref{subsec:beta_1}, an MSR code with $\beta=1$ can be used to construct an MSR code for every higher integral value of $\beta$. Thus it suffices to provide constructions for $\beta=1$, and that is what we do here. When $\beta=1$, each helper node passes just a single symbol. Again, we will often describe the symbol passed by a helper node in terms of its associated global kernel, and hence will often speak of a helper node passing a \textit{vector}~\footnote{A simple extension to the case of $\beta > 1$ lets us treat the global kernels of the $\beta$ symbols passed by a helper node as a \textit{subspace} of dimension at most $\beta$. This `subspace' viewpoint has been found useful in proving certain general results at the MBR point in \cite{ourAllerton}, and for the interior points of the tradeoff in~\cite{ourInterior_pts}.}. ~ Throughout the paper, we use superscripts to refer to node indices, and subscripts to index the elements of a matrix. The letters $m$ and $\ell$ are reserved for node indices; in particular, the letter $\ell$ is used to index systematic nodes. All vectors are assumed to be column vectors. The vector $\underline{e}_i$ represents the standard basis vector of length $\alpha$, i.e., $\underline{e}_i$ is an $\alpha$-length unit vector with $1$ in the $i$th position and $0$s elsewhere. For a positive integer $p$, we denote the $(p \times p)$ zero matrix and the $(p \times p)$ identity matrix by $0_p$ and $I_p$ respectively. We say that a set of vectors is \textit{aligned} if the vector space spanned by them has dimension at most one. ~ We next turn our attention to the question of whether or not the combination of (a) restriction to systematic-node repair and (b) requirement of exact-repair of the systematic nodes leads to a bound on the parameters $(\alpha, \beta)$ different from the cut-set bound appearing in~\eqref{eq:cut_set_bound}. The theorem below shows that the cut-set bound comes into play even if functional repair of a single node is required. \begin{thm} Any $[n, \ k, \ d]$-MDS regenerating code~(i.e., a regenerating code satisfying $B=k\alpha$) that guarantees the functional-repair of even a single node must satisfy the cut-set lower bound of~\eqref{eq:cut_set_bound} on the repair bandwidth, i.e., must satisfy \begin{equation} \beta \geq \frac{B}{k(d-k+1)}. \end{equation} \end{thm} \begin{IEEEproof} First, consider the case when $\beta=1$. Let $\ell$ denote the node that needs to be repaired, and let $\{m_i \mid i=1, \ldots, d\}$ denote the $d$ helper nodes assisting in the repair of node $\ell$. Further, let $\{\underline{\mathbf{\gamma}}^{(m_i, \; \ell)}\mid i=1,\ldots,d\}$ denote the vectors passed by these helper nodes.
At the end of the repair process, let the $(B \times \alpha)$ matrix $\mathbf{G}^{(\ell)}$ denote the generator matrix of the replacement node~(since we consider only functional-repair in this theorem, $\mathbf{G}^{(\ell)}$ need not be identical to the generator matrix of the failed node). Looking back at the repair process, the replacement node obtains $\mathbf{G}^{(\ell)}$ by operating linearly on the collection of $d$ vectors $\{\underline{\mathbf{\gamma}}^{(m_i, \; \ell)}\mid i=1,\ldots,d\}$ of length $B$. This, in turn, implies that the dimension of the nullspace of the matrix \begin{equation} \begin{bmatrix} \mathbf{G}^{(\ell)} & \underline{\mathbf{\gamma}}^{(m_1,\; \ell)} & \cdots & \underline{\mathbf{\gamma}}^{(m_d,\; \ell)} \end{bmatrix} \label{eq:nullspace_alpha}\end{equation} should be greater than or equal to the number of columns of $\mathbf{G}^{(\ell)}$, which is $\alpha$. However, the MDS property requires that at the end of the repair process, the global kernels associated to any $k$ nodes be linearly independent, and in particular, that the matrix \begin{equation} \begin{bmatrix}\mathbf{G}^{(\ell)} & \underline{\mathbf{\gamma}}^{(m_1,\; \ell)} & \cdots & \underline{\mathbf{\gamma}}^{(m_{k-1},\; \ell)} \end{bmatrix} \end{equation} have full rank, so that the rank of the matrix in \eqref{eq:nullspace_alpha} is at least $(\alpha+k-1)$. Since the matrix in \eqref{eq:nullspace_alpha} has $(\alpha+d)$ columns, the rank-nullity theorem gives $\alpha + d \geq (\alpha+k-1)+\alpha$. It follows that we must have \[ d \ \geq \ k-1+\alpha. \] The proof for the case $\beta>1$, when every helper node passes a set of $\beta$ vectors, is a straightforward extension that leads to: \begin{equation} d\beta \ \geq\ (k-1)\beta + \alpha. \end{equation} Rearranging the terms in the equation above and substituting $\alpha = \frac{B}{k}$ leads to the desired result. \end{IEEEproof} ~ \noindent Thus, we recover equation~\eqref{eq:MSR_parameters}, and in an optimal code with $\beta=1$, we will continue to have \[ d \ = \ k-1+\alpha. \] In this way, we have shown that even the setting that we address here, namely that of the exact-repair of the systematic nodes, leads us to the same cut-set bound on the repair bandwidth as in~\eqref{eq:cut_set_bound}. \noindent The next section explains how the concept of interference alignment arises in the distributed-storage context. \section{Interference Alignment in Regenerating Codes}\label{sec:intf_align} The idea of interference alignment has recently been proposed in \cite{CadJafar}, \cite{MotKhan} in the context of wireless communication. The idea here is to design the signals of multiple users in such a way that at every receiver, the signals from all the unintended users occupy a subspace of the given space, leaving the remainder of the space free for the signal of the intended user. In the distributed-storage context, the concept of `interference' comes into play during the exact-repair of a failed node in an MSR code. We present the example of a systematic MSR code with $[n=4, \; k=2, \; d=3]$ and $\beta=1$, which gives $(\alpha=d-k+1=2,\; B=k\alpha = 4)$. Let $\{ u_1, \ u_2, \ u_3, \ u_4 \}$ denote the four message symbols. Since $k=2$ here, we may assume that nodes $1$ and $2$ are systematic and that node $1$ stores $\{ u_1, \ u_2\}$ and node $2$ stores $\{ u_3, \ u_4 \}$. Nodes $3$ and $4$ are then the parity nodes, each storing two linear functions of the message symbols.
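As a concrete preview of the alignment condition formalized below, the following minimal Python sketch works out the repair of node $1$ over $\mathbb{F}_5$; the coefficients are illustrative choices of our own and are not claimed to be those of the code in the figure below (in particular, the sketch covers only this one repair, not the full code).

\begin{verbatim}
# Sketch (illustrative only): aligned interference during repair of node 1.
q = 5
u1, u2, u3, u4 = 3, 1, 4, 2      # arbitrary message symbols in GF(5)

s3 = (u1 + u2 + u3) % q          # first symbol of parity node 3
s4 = (u1 + 2*u2 + 2*u3) % q      # first symbol of parity node 4
# Interference components u3 and 2*u3 are scalar multiples (aligned),
# so the single symbol u3 passed by node 2 cancels both of them.
h2 = u3
a = (s3 - h2) % q                # desired component u1 + u2
b = (s4 - 2*h2) % q              # desired component u1 + 2*u2
# The desired components are linearly independent; solve for u1, u2.
rec_u2 = (b - a) % q
rec_u1 = (a - rec_u2) % q
assert (rec_u1, rec_u2) == (u1, u2)
\end{verbatim}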
\begin{figure}[h] \centering \includegraphics[trim= 0.1in 6.4in 4in 0in, clip=true,width=\textwidth]{fig_42msr_regensys} \caption{\small Illustration of interference alignment during exact-repair of systematic node $1$.} \label{fig:fig_42msr_regensys} \end{figure} Consider repair of systematic node $1$ wherein the $d=3$ nodes, nodes $2$, $3$ and $4$, serve as helper nodes. The second systematic node, node $2$, can only pass a linear combination of the message symbols $u_3$ and $u_4$. The two symbols passed by the parity nodes are, in general, functions of all four message symbols: $(a_1 u_1 + a_2 u_2 + a_3 u_3 + a_4 u_4)$ and $(b_1 u_1 + b_2 u_2 + b_3 u_3 + b_4 u_4)$ respectively. Using the symbols passed by the three helper nodes, the replacement node for node $1$ needs to be able to recover the message symbols $\{u_1,u_2\}$. For obvious reasons, we will term $(a_1 u_1 + a_2 u_2 )$ and $(b_1 u_1 + b_2 u_2)$ the \textit{desired} components of the messages passed by parity nodes $3$ and $4$, and the terms $(a_3 u_3 + a_4 u_4)$ and $(b_3 u_3 + b_4 u_4)$ the \textit{interference} components. Since node $2$ cannot provide any information pertaining to the desired symbols $\{ u_1, \ u_2\}$, the replacement node must be able to recover the desired symbols from the desired components $(a_1 u_1 + a_2 u_2 )$ and $(b_1 u_1 + b_2 u_2)$ of the messages passed to it by the parity nodes $3$ and $4$. To access the desired components, the replacement node must be in a position to subtract out the interference components $(a_3 u_3 + a_4 u_4)$ and $(b_3 u_3 + b_4 u_4)$ from the received linear combinations $(a_1 u_1 + a_2 u_2 + a_3 u_3 + a_4 u_4)$ and $(b_1 u_1 + b_2 u_2 + b_3 u_3 + b_4 u_4)$; the only way to subtract out the interference components is by making use of the linear combination of $\{u_3,u_4\}$ passed by node $2$. It follows that this can only happen if the interference components $(a_3 u_3 + a_4 u_4)$ and $(b_3 u_3 + b_4 u_4)$ are aligned, meaning that they are scalar multiples of each other. An explicit code over $\mathbb{F}_5$ for the parameters chosen in the example is shown in Fig.~\ref{fig:fig_42msr_regensys}. The exact-repair of systematic node $1$ is shown, for which the remaining nodes pass the first of the two symbols stored in them. Observe that under this code, the interference components in the two symbols passed by the parity nodes are aligned in the direction of $u_3$, i.e., are scalar multiples of $u_3$. Hence node $2$ can simply pass $u_3$, and the replacement node can then make use of $u_3$ to cancel~(i.e., subtract out) the interference. In the context of regenerating codes, interference alignment was first used by Wu et al. \cite{WuDimISIT} to provide a scheme~(although not an explicit one) for exact-repair at the MSR point. However, interference alignment is employed there only to a limited extent: only a portion of the interference components is aligned and, as a result, the scheme is optimal only for the case $k=2$. In the next section, we describe the construction of the MISER code, which aligns interference and achieves the cut-set bound on the repair bandwidth for the repair of systematic nodes. This is the \textit{first} interference-alignment-based explicit code construction that meets the cut-set bound. \section{Construction of the MISER Code}\label{sec:gen_explicit} In this section we provide an explicit construction for a systematic MDS code that achieves the lower bound on repair bandwidth for the exact-repair of systematic nodes, which we term the MISER code.
We begin with an illustrative example that explains the key ideas behind the construction. The general code construction for parameter sets of the form $n=2k,~d=n-1$ closely follows the construction in the example. A simple code-shortening technique is then employed to extend this code construction to the more general parameter set $n \geq 2k,~d=n-1$. The construction technique can also be extended to the even more general case of arbitrary $n$, $d \geq 2k-1$, under the added requirement, however, that the replacement node connect to all of the remaining systematic nodes. \subsection{An Example} \label{sec:example} The example deals with the parameter set $[n=6,\;k=3,\;d=5]$, $\beta=1$, so that $(\alpha=d-k+1=3,\;B=k\alpha=9)$. We select $\mathbb{F}_7$ as the underlying finite field so that all message and code symbols are drawn from $\mathbb{F}_7$. Note that we have $\alpha=k=3$ here. This is true in general: whenever $n=2k$ and $d=n-1$, we have $\alpha=d-k+1=k$, which simplifies the task of code construction. ~ \subsubsection{Design of Nodal Generator Matrices} As $k=3$, the first three nodes are systematic and store data in uncoded form. Hence \begin{equation} \mathbf{G}^{(1)} = \begin{bmatrix} I_3 \vspace{2pt} \\ 0_3 \vspace{2pt} \\ 0_3 \end{bmatrix} , \ \mathbf{G}^{(2)} = \begin{bmatrix} 0_3 \vspace{2pt} \\ I_3 \vspace{2pt} \\ 0_3 \end{bmatrix} , \ \mathbf{G}^{(3)} = \begin{bmatrix} 0_3 \vspace{2pt} \\ 0_3 \vspace{2pt} \\ I_3 \end{bmatrix}~. \end{equation} A key ingredient of the code construction presented here is the use of a Cauchy matrix~\cite{cauchy}. Let \begin{equation} {\Psi}_3 = \left[ \resizebox{!}{!}{\begin{tabular}{*{3}{c}} ${\psi}_1^{(4)}$ & ${\psi}_1^{(5)}$ & ${\psi}_1^{(6)}$ \vspace{2pt} \\ ${\psi}_2^{(4)}$ & ${\psi}_2^{(5)}$ & ${\psi}_2^{(6)}$ \vspace{2pt} \\ ${\psi}_3^{(4)}$ & ${\psi}_3^{(5)}$ & ${\psi}_3^{(6)}$ \end{tabular}} \right] \label{eq:cauchy} \end{equation} be a $(3 \times 3)$ matrix such that each of its sub-matrices is full rank. Cauchy matrices have this property, and in our construction we will assume ${\Psi}_3$ to be a Cauchy matrix. ~ We choose the generator matrix of parity node $m~(m=4,5,6)$ to be \begin{equation} \mathbf{G}^{(m)} = \left[ \resizebox{!}{!}{\renewcommand{\arraystretch}{1.2}\begin{tabular}{*{3}{c}} $2{\psi}_1^{(m)} $&$ 0 $&$ 0 $ \\ $2{\psi}_2^{(m)} $&$ {\psi}_1^{(m)} $&$ 0$ \\ $2{\psi}_3^{(m)} $&$ 0 $&$ {\psi}_1^{(m)} $ \\ \hline \vspace{-11pt} \\ $ {\psi}_2^{(m)} $&$ 2{\psi}_1^{(m)} $&$ 0 $ \\ $ 0 $&$ 2{\psi}_2^{(m)} $&$ 0 $ \\ $0$ &$ 2{\psi}_3^{(m)} $&${\psi}_2^{(m)}$ \\ \hline \vspace{-11pt} \\ $ {\psi}_3^{(m)} $&$ 0 $&$ 2{\psi}_1^{(m)} $ \\ $ 0 $&$ {\psi}_3^{(m)} $&$ 2{\psi}_2^{(m)} $ \\ $ 0 $&$ 0 $&$ 2{\psi}_3^{(m)}$ \\ \end{tabular}} \right], \end{equation} where the locations of the non-zero entries of the $i$th sub-matrix are restricted to lie either along the diagonal or else within the $i$th column. The generator matrix is designed keeping in mind the need for interference alignment, and this will be made clear in the discussion below concerning the exact-repair of systematic nodes. The choice of the scalar `$2$' plays an important role in the data reconstruction property; the precise role of this scalar will become clear when that property is discussed.
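As a programmatic restatement of this design rule, the Python sketch below (ours, purely illustrative) builds a $(3 \times 3)$ Cauchy matrix over $\mathbb{F}_7$ (not necessarily the one used in the example figure) and assembles the parity-node generator matrices, then checks the alignment property that the repair algorithm relies on.

\begin{verbatim}
# Sketch (illustrative only): parity generator matrices of the example.
q, alpha, eps = 7, 3, 2          # field size, alpha = k, scalar epsilon

def cauchy_matrix(xs, ys, q):
    # Entry (i, j) is 1/(x_i - y_j) in GF(q); the x's and y's are distinct.
    return [[pow(x - y, -1, q) for y in ys] for x in xs]

Psi = cauchy_matrix([0, 1, 2], [3, 4, 5], q)

def parity_generator(m):
    # G^(m) as a (9 x 3) array; m in {0, 1, 2} indexes parity nodes 4, 5, 6.
    # Column j of component i is eps*psi^(m) if i == j, else psi_i^(m)*e_j.
    psi = [Psi[i][m] for i in range(alpha)]
    G = [[0] * alpha for _ in range(alpha * alpha)]
    for i in range(alpha):
        for j in range(alpha):
            if i == j:
                for r in range(alpha):
                    G[alpha * i + r][j] = (eps * psi[r]) % q
            else:
                G[alpha * i + j][j] = psi[i]
    return G

# During repair of systematic node l, each parity node passes column l;
# every interference component i != l of that column lies along e_l.
for l in range(alpha):
    for m in range(alpha):
        col = [row[l] for row in parity_generator(m)]
        for i in range(alpha):
            if i != l:
                block = col[alpha * i: alpha * (i + 1)]
                assert all(block[r] == 0 for r in range(alpha) if r != l)
\end{verbatim}

In the same notation, the desired component of the passed columns, taken across the three parity nodes, is $2\Psi_3$, which is invertible; this is the independence that the replacement node exploits after cancelling the aligned interference.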
An example of the $[6, \; 3, \; 5]$ MISER code over $\mathbb{F}_7$ is provided in Fig.~\ref{fig:example_635}, where the Cauchy matrix $\Psi_3$ is chosen as \begin{equation} \Psi_3 = \left[\begin{tabular}{>{$}c<{$}>{$}c<{$}>{$}c<{$}} 5 & 4 & 1 \\ 2 & 5 & 4 \\ 3 & 2 & 5 \end{tabular} \right]. \end{equation} Also depicted in the figure is the exact-repair of node $1$, for which each of the remaining nodes passes the first symbol that it stores. It can be seen that the first symbols stored in the three parity nodes $4$, $5$ and $6$ have their interference components (components $2$ and $3$) aligned and their desired components (component $1$) linearly independent. \begin{figure} \centering \includegraphics[trim=0in 0.8in 0 0, clip=true, width=\textwidth]{fig_635_example} \caption{\small An example of the $[6, \; 3, \; 5]$ MISER code over $\mathbb{F}_7$. Here, $\{u_1,\ldots,u_9\}$ denote the message symbols and the code symbols stored in each of the nodes are shown. Exact-repair of node $1$ is also depicted.} \label{fig:example_635} \end{figure} ~ The key properties of the MISER code will be established in what follows, namely: \begin{itemize} \item that the code is an MDS code over the alphabet $\mathbb{F}_q^\alpha$, a property that enables data reconstruction, and \item that the code has the ability to carry out exact-repair of the systematic nodes while achieving the cut-set bound on the repair bandwidth. \end{itemize} We begin by establishing the exact-repair property. ~ \subsubsection{Exact-repair of Systematic Nodes} Our algorithm for systematic node repair is simple. As noted above, each node stores $\alpha=k$ symbols. These $k$ symbols are assumed to be ordered so that we may speak of the first symbol stored by a node, etc. To repair systematic node $\ell$, $1 \leq \ell \leq k$, each of the remaining nodes passes its respective $\ell$th symbol. Suppose that in our example construction here, node $1$ fails. Each of the parity nodes then passes on its first symbol or, equivalently, in terms of global kernels, the first column of its generator matrix for the repair of node $1$. Thus, from nodes $4,\ 5,$ and $6$, the replacement node obtains \begin{equation} \hspace{-2pt} \left[\hspace{-2pt} \resizebox{1.2cm}{!}{\begin{tabular}{c} $2{\psi}_1^{(4)}$ \\ $2{\psi}_2^{(4)}$ \\ $2{\psi}_3^{(4)}$ \vspace{2pt}\\ \hline \vspace{-.3cm} \\ ${\psi}_2^{(4)}$ \\ $0$ \\ $0$ \vspace{2pt}\\ \hline \vspace{-.3cm} \\ ${\psi}_3^{(4)}$ \\ $0$ \\ $0$ \end{tabular}} \hspace{-2pt}\right]\hspace{-2pt}, \quad \left[\hspace{-2pt} \resizebox{1.2cm}{!}{\begin{tabular}{c} $2{\psi}_1^{(5)}$ \\ $2{\psi}_2^{(5)}$ \\ $2{\psi}_3^{(5)}$ \vspace{2pt}\\ \hline \vspace{-.3cm} \\ ${\psi}_2^{(5)}$ \\ $0$ \\ $0$ \vspace{2pt}\\ \hline \vspace{-.3cm} \\ ${\psi}_3^{(5)}$ \\ $0$ \\ $0$ \end{tabular}} \hspace{-2pt}\right]\hspace{-2pt}, \quad \left[\hspace{-2pt} \resizebox{1.2cm}{!}{\begin{tabular}{c} $2{\psi}_1^{(6)}$ \\ $2{\psi}_2^{(6)}$ \\ $2{\psi}_3^{(6)}$ \vspace{2pt}\\ \hline \vspace{-.3cm} \\ ${\psi}_2^{(6)}$ \\ $0$ \\ $0$ \vspace{2pt}\\ \hline \vspace{-.3cm} \\ ${\psi}_3^{(6)}$ \\ $0$ \\ $0$ \end{tabular}} \hspace{-2pt}\right]~. \end{equation} Note that in each of these vectors, the desired~(first) components are a scaled version of the respective columns of the Cauchy matrix $\Psi_3$. The interference~(second and third) components are aligned along the vector $[1 \ \ 0 \ \ 0]^t$. Thus, each interference component is aligned along a single dimension.
Systematic nodes $2$ and $3$ then pass a single vector each, designed to cancel out this interference. Specifically, nodes $2$ and $3$ respectively pass the vectors \begin{equation} \left[\hspace{-2pt} \resizebox{0.6cm}{!}{\begin{tabular}{c} $0$ \\ $0$ \\ $0$ \\ \hline $1$ \\ $0$ \\ $0$ \\ \hline $0$ \\ $0$ \\ $0$ \end{tabular}} \hspace{-2pt}\right], \quad \left[\hspace{-2pt} \resizebox{0.6cm}{!}{\begin{tabular}{c} $0$ \\ $0$ \\ $0$ \\ \hline $0$ \\ $0$ \\ $0$ \\ \hline $1$ \\ $0$ \\ $0$ \end{tabular}} \hspace{-2pt}\right]~. \end{equation} The net result is that after interference cancellation has taken place, replacement node $1$ is left with access to the columns of the matrix \[ \left[ \resizebox{!}{!}{\begin{tabular}{c} $2{\Psi}_3$ \\ \hline $0_3$ \\ \hline $0_3$ \end{tabular}} \right] . \] Thus the desired component is a scaled Cauchy matrix ${\Psi}_3$. By multiplying this matrix on the right by $\frac{1}{2}\Psi_3^{-1}$, one recovers \[ \left[ \resizebox{!}{!}{\begin{tabular}{c} $I_3$ \\ \hline $0_3$ \\ \hline $0_3$ \end{tabular}} \right] \] as desired. Along similar lines, when node $2$ or $3$ fails, the parity nodes pass the second or third columns of their generator matrices respectively. The design of the generator matrices for the parity nodes is such that interference alignment holds during the repair of either systematic node, hence enabling the exact-repair of all the systematic nodes. ~ \subsubsection{Data Reconstruction~(MDS property)}\label{sec:eg_recon} For the reconstruction property to be satisfied, a data-collector downloading the symbols stored in any three nodes should be able to recover all the nine message symbols. That is, the $(9 \times 9)$ matrix formed by the columnwise concatenation of any three nodal generator matrices should be non-singular. We consider the different possible sets of three nodes that the data-collector can connect to, and provide appropriate decoding algorithms to handle each case. (a) \textit{Three systematic nodes:} When a data-collector connects to all three systematic nodes, it obtains all the message symbols in uncoded form and hence reconstruction is trivially satisfied. (b) \textit{Two systematic nodes and one parity node:} Suppose the data-collector connects to systematic nodes $2$ and $3$, and parity node $4$. It obtains all the symbols stored in nodes $2$ and $3$ in uncoded form and proceeds to subtract their effect from the symbols in node $4$. It is thus left to decode the message symbols $\underline{u}_1$, which are encoded using the matrix $G^{(4)}_1$ given by \begin{equation} G^{(4)}_1= \left[ \resizebox{!}{!}{\begin{tabular}{ccc} $2{\psi}_1^{(4)} $&$0 $&$0 $\\ $ 2{\psi}_2^{(4)} $ &$ {\psi}_1^{(4)} $&$ 0$\\ $ 2{\psi}_3^{(4)} $&$ 0$&$ {\psi}_1^{(4)} $ \end{tabular}} \right]~. \end{equation} This lower-triangular matrix is non-singular since, by definition, all the entries of a Cauchy matrix are non-zero. The message symbols $\underline{u}_1$ can hence be recovered by inverting $G^{(4)}_1$. (c) \textit{All three parity nodes:} We consider next the case when a data-collector connects to all three parity nodes. Let $C_1$ denote the $(9 \times 9)$ matrix formed by the columnwise concatenation of the generator matrices of these three nodes. ~ \textit{Claim 1:} The data-collector can recover all the message symbols encoded using the matrix \begin{equation} C_1 = \left[ \mathbf{G}^{(4)} \quad \mathbf{G}^{(5)} \quad \mathbf{G}^{(6)} \right].
\end{equation} \begin{IEEEproof} We permute the columns of $C_1$ to obtain a second matrix $C_2$ in which the $i^{th}\; (i=1,2,3)$ columns of all the three nodes are adjacent to each other as shown below: \begin{equation} C_2 = \label{eq:invertNonsysStart} \left[ \resizebox{!}{2.5cm}{\begin{tabular}{ccc|ccc|ccc} $2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ 2{\psi}_1^{(6)} $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\ $2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 2{\psi}_2^{(6)} $&$ {\psi}_1^{(4)} $&$ {\psi}_1^{(5)} $&$ {\psi}_1^{(6)} $&$ 0 $&$ 0 $&$ 0 $\\ $2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 2{\psi}_3^{(6)} $&$ 0 $&$ 0 $&$ 0 $&$ {\psi}_1^{(4)} $&$ {\psi}_1^{(5)} $&$ {\psi}_1^{(6)} $ \vspace{2pt}\\ \hline ${\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ {\psi}_2^{(6)} $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ 2{\psi}_1^{(6)} $&$ 0 $&$ 0 $&$ 0$ \\ $0 $&$ 0 $&$ 0 $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 2{\psi}_2^{(6)} $&$ 0 $&$ 0 $&$ 0$ \\ $0 $&$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 2{\psi}_3^{(6)} $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ {\psi}_2^{(6)}$ \vspace{2pt} \\ \hline ${\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ {\psi}_3^{(6)} $&$ 0 $&$ 0 $&$ 0 $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ 2{\psi}_1^{(6)}$ \\ $0 $&$ 0 $&$ 0 $&$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ {\psi}_3^{(6)} $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 2{\psi}_2^{(6)}$ \\ $0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 2{\psi}_3^{(6)}$ \\ \multicolumn{3}{c}{$\underbrace{\qquad\qquad\qquad\qquad}$}&\multicolumn{3}{c}{$\underbrace{\qquad\qquad\qquad\qquad}$}&\multicolumn{3}{c}{$\underbrace{\qquad\qquad\qquad\qquad}$}\\ \multicolumn{3}{c}{\small group 1}&\multicolumn{3}{c}{\small group 2}&\multicolumn{3}{c}{\small group 3} \end{tabular}} \right] \nonumber~.\end{equation} ~ Note that a permutation of the columns does not alter the information available to the data-collector and hence is a permissible operation. This rearrangement of coded symbols, while not essential, simplifies the proof. We then post-multiply $C_2$ by a block-diagonal matrix with diagonal blocks ${\Psi}_3^{-1}$ to obtain the matrix $C_3$ given by \begin{eqnarray} C_3 &=& C_2 \left[ \resizebox{!}{!}{\begin{tabular}{ccc} ${\Psi}_3^{-1} $&$ 0_3 $&$ 0_3 $\\ $0_3 $&$ {\Psi}_3^{-1} $&$ 0_3 $\\ $0_3 $&$ 0_3 $&$ {\Psi}_3^{-1} $ \end{tabular}} \right] \\ &=& \left[ \begin{tabular}{ccc|ccc|ccc} $2 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\ $0 $&$ 2 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\ $0 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $\\ \hline $0 $&$ 1 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\ $0 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\ $0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 1 $&$ 0 $\\ \hline $0 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $\\ $0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $&$ 2 $&$ 0 $\\ $0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $ \end{tabular} \right]. \end{eqnarray} To put things back in perspective, the data-collector at this point has access to the coded symbols \[ \underline{u}^t C_3 \] associated with the three parity nodes.
From the nature of the matrix it is evident that the message symbols $u_1$, $u_5$ and $u_9$ are now available to the data-collector, and their effect can be subtracted from the remaining symbols to obtain \begin{equation} [u_2 \ u_3 \ u_4 \ u_6 \ u_7 \ u_8] \underbrace{\left[ \begin{tabular}{cccccc} $ 2 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 0 $\\ $ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $\\ $ 1 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $\\ $ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 1 $\\ $ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $\\ $ 0 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $&$ 2 $\\ \end{tabular} \right]}_{C_4} \label{eq:invertNonsysEnd}.\end{equation} As $2^2 \neq 1$ in $\mathbb{F}_7$, the matrix $C_4$ above can be verified to be non-singular (under a simultaneous permutation of its rows and columns, it decomposes into three $2 \times 2$ blocks, each with determinant $2^2 - 1 \neq 0$), and thus the remaining message symbols can also be recovered by inverting $C_4$. \end{IEEEproof} (d) \textit{One systematic node and two parity nodes:} Suppose the data-collector connects to systematic node $1$ and parity nodes $4$ and $5$. All symbols of node $1$, i.e., $\underline{u}_1$, are available to the data-collector. Thus, it needs to decode the message-vector components $\underline{u}_2$ and $\underline{u}_3$, which are encoded using the matrix $B_1$ given by \begin{equation} B_1 = \begin{bmatrix} G_2^{(4)} & G_2^{(5)} \\ G_3^{(4)}& G_3^{(5)} \end{bmatrix}. \end{equation} ~ \textit{Claim 2:} The block matrix $B_1$ above is non-singular, and in this way, the message-vector components $\underline{u}_2$ and $\underline{u}_3$ can be recovered. \begin{IEEEproof} Once again, we begin by permuting the columns of $B_1$. For $i=2,3,1$ (in this order), we group the $i^{th}$ columns of the two parity nodes together to give the matrix \begin{equation} B_2 = \left[ \hspace{-.3cm}\resizebox{7.5cm}{!}{ \renewcommand{\arraystretch}{1.3} \begin{tabular}{cc|cc|cc} $ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$0 $&$ 0 $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)}$ \\ $ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $&$ 0 $&$ 0$ \\ $2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ 0 $&$ 0 $ \\ \hline $ 0 $&$ 0 $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)}$\\ $ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $\\ $ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 0 $&$ 0 $ \end{tabular}} \right]. \end{equation} \noindent Let $\Psi_2$ be the $(2 \times 2)$ sub-matrix of the Cauchy matrix $\Psi_3$ given by \begin{equation}{\Psi}_2 = \left[ \resizebox{!}{!}{\begin{tabular}{cc} ${\psi}_2^{(4)}$ & ${\psi}_2^{(5)}$ \\ ${\psi}_3^{(4)}$ & ${\psi}_3^{(5)}$ \end{tabular}} \right]. \end{equation} Since every sub-matrix of $\Psi_3$ is non-singular, so is $\Psi_2$. Keeping in mind the fact that the data-collector can perform any linear operation on the columns of $B_2$, we next multiply the last two columns of $B_2$ by ${\Psi}_2^{-1}$ (while leaving the other $4$ columns unchanged) to obtain the matrix \begin{equation} B_3 = \left[ \resizebox{!}{!}{\begin{tabular}{cc|cc|cc} $ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$0 $&$ 0 $&$ 1 $&$ 0$\\ $ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $&$ 0 $&$ 0$ \\ $2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ 0 $&$ 0 $ \\ \hline $ 0 $&$ 0 $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$0$&$1$\\ $ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $\\ $ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 0 $&$ 0 $ \end{tabular}} \right]~.
\end{equation} The message symbols associated to the last two columns of $B_2$ are now available to the data-collector, and their effect on the rest of the encoded symbols can be subtracted out to get \begin{equation} B_4 =\left[ \resizebox{5.3cm}{!}{\begin{tabular}{cc|cc} $ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$0$\\ $2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)}$&$ {\psi}_2^{(4)}$&$ {\psi}_2^{(5)} $\\ \hline $ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)}$&$ 2{\psi}_2^{(4)}$&$ 2{\psi}_2^{(5)} $\\ $ 0 $&$ 0 $&$ 2{\psi}_3^{(4)}$&$ 2{\psi}_3^{(5)}$ \end{tabular}} \right]~.\end{equation} Along the lines of the previous case, the matrix $B_4$ above can be shown to be non-singular. We note that this condition is equivalent to reconstruction in a MISER code with $k=2$ when a data-collector attempts to recover the data by connecting to the two parity nodes. \end{IEEEproof} \subsection{The General MISER Code for $n = 2k,~d=n-1$} \label{sec:MISER_gen} In this section, the construction of the MISER code for the general parameter set $n = 2k,~d=n-1$ is provided. Since the MISER code is built to satisfy the cut-set bound, we have $d=\alpha+k-1$, which implies that \begin{equation} k=\alpha~. \end{equation} This relation will play a key role in the design of the generator matrices for the parity nodes, as it permits each parity node to reserve $\alpha=k$ symbols associated to linearly independent global kernels for the repair of the $k$ systematic nodes. In the example just examined, we had $\alpha=k=3$. The construction of the MISER code for the general parameter set $n = 2k,~d=n-1$ is very much along the lines of the construction of the example code. \subsubsection{Design of Nodal Generator Matrices} ~ The first $k$ nodes are systematic and store the message symbols in uncoded form. Thus the component generator matrices $G^{(\ell)}_i $, $1 \leq i \leq k$, of the $\ell$th systematic node, $1 \leq \ell \leq k$, are given by \begin{eqnarray} G^{(\ell)}_i = \left \lbrace \begin{array}{ll} I_{\alpha} &\text{if } i=\ell \\ 0_\alpha &\text{if } i\neq \ell \end{array} \right. \label{eq:explicitSystematicGenMxs}. \end{eqnarray} Let $\Psi$ be an $\left(\alpha \times (n-k)\right)$ matrix with entries drawn from $\mathbb{F}_q$ such that every sub-matrix of $\Psi$ is of full rank. Since $n-k=\alpha=k$, we have that $\Psi$ is a square matrix \footnote{In Section~\ref{sec:connect_to_all_systematic}, we extend the construction to the even more general case of arbitrary $n$, $d \geq 2k-1$, under the added requirement, however, that the replacement node connect to all of the remaining systematic nodes. In that section, we will be dealing with a rectangular $\left(\alpha \times (n-k)\right)$ matrix $\Psi$.}. Let the columns of $\Psi$ be given by \begin{eqnarray} \Psi=\begin{bmatrix} \underline{\psi}^{(k+1)} & \underline{\psi}^{(k+2)} & \cdots & \underline{\psi}^{(n)} \end{bmatrix} \end{eqnarray} where the $m$th column is given by \begin{equation} \underline{\psi}^{(m)}=\begin{bmatrix}{\psi}^{(m)}_1 \\ \vdots \\ {\psi}^{(m)}_\alpha \end{bmatrix} . \end{equation} A Cauchy matrix is an example of such a matrix, and in our construction, we will assume ${\Psi}$ to be a Cauchy matrix. ~ \begin{defn}[Cauchy matrix] An $(s \times t)$ Cauchy matrix $\Psi$ over a finite field $\mathbb{F}_q$ is a matrix whose $(i,j)$th element ($1 \leq i \leq s$, $1 \leq j \leq t$) equals $\frac{1}{(x_i-y_j)}$, where $x_1,\ldots,x_s,y_1,\ldots,y_t$ are $s+t$ distinct elements of $\mathbb{F}_q$.
\end{defn} ~ Thus the minimum field size required for the construction of an $(s \times t)$ Cauchy matrix is $s+t$. Hence if we choose $\Psi$ to be a Cauchy matrix, \begin{equation} q \geq \alpha + n - k. \end{equation} Any finite field satisfying this condition will suffice for our construction. Note that since $n-k \geq \alpha \geq 2$, we have $q \geq 4$. ~ We introduce some additional notation at this point. Denote the $j$th column of the $(\alpha \times \alpha)$ matrix $G^{(m)}_i$ as $\underline{g}^{(m)}_{i,j}$, i.e., \begin{equation} G^{(m)}_i = \left[\underline{g}^{(m)}_{i,1}\quad \cdots\quad \underline{g}^{(m)}_{i,\alpha}\right].\end{equation} The code is designed assuming a regeneration algorithm under which each of the $\alpha$ parity nodes passes its $\ell^{\text{th}}$ column for the repair of the $\ell^{\text{th}}$ systematic node. With this in mind, for $k+1 \leq m \leq n$, $1 \leq i,j \leq \alpha$, we choose \begin{equation} \underline{g}^{(m)}_{i,j}= \left\lbrace \begin{array}{ll} \epsilon \underline{\psi}^{(m)} &\text{if }i = j \\ {\psi}^{(m)}_i\underline{e}_j\;\; &\text{if }i\neq j \end{array} \right.\label{eq:choose_g} \end{equation} where $\epsilon$ is an element of $\mathbb{F}_q$ such that $\epsilon \neq 0 $ and $\epsilon^2 \neq 1$~(in the example provided in the previous section, $\epsilon \in \mathbb{F}_7$ was set equal to $2$). The latter condition, $\epsilon^2 \neq 1$, is needed during the reconstruction process, as was seen in the example. Note that such a value $\epsilon$ always exists as long as $q \geq 4$. As in the example, the generator matrices are designed keeping in mind the need for interference alignment. This property is utilized in the exact-repair of systematic nodes, as described next. ~ \subsubsection{Exact-Repair of Systematic Nodes} The repair process we associate with the MISER code is simple. The repair of a failed systematic node, say node $\ell$, involves each of the remaining $d=n-1$ nodes passing its $\ell$th symbol (or equivalently, the associated global kernel). In the set of $\alpha$ vectors passed by the parity nodes, the $\ell$th (desired) component is independent, and the remaining (interference) components are aligned. The interference components are cancelled using the vectors passed by the remaining systematic nodes. Independence in the desired component then allows for recovery of the desired message symbols. The next theorem describes the repair algorithm in greater detail. \begin{thm} In the MISER code, a failed systematic node can be exactly repaired by downloading one symbol from each of the remaining $d=n-1$ nodes. \end{thm} \begin{IEEEproof} Consider the repair of systematic node $\ell$.
Each of the remaining $(n-1)$ nodes passes its $\ell$th column, so that the replacement node has access to the global kernels represented by the columns shown below: \[ \left[ \renewcommand{\arraystretch}{1.43} \begin{tabular}{>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$} \underline{e}_{\ell} &\cdots & \underline{0} & \underline{0} &\cdots & \underline{0} & {\psi}^{(k+1)}_1\underline{e}_{\ell} & \cdots & {\psi}^{(n)}_1\underline{e}_{\ell} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \underline{0} &\cdots & \underline{e}_{\ell} & \underline{0} &\cdots & \underline{0} & {\psi}^{(k+1)}_{\ell-1}\underline{e}_{\ell} & \cdots &{\psi}^{(n)}_{\ell-1}\underline{e}_{\ell} \\ \underline{0} &\cdots & \underline{0} & \underline{0} &\cdots & \underline{0} & \textcolor{blue}{\epsilon \underline{\psi}^{(k+1)} }& \textcolor{blue}{\cdots} & \textcolor{blue}{\epsilon \underline{\psi}^{(n)}} \\ \underline{0} &\cdots & \underline{0} & \underline{e}_{\ell} &\cdots & \underline{0} & {\psi}^{(k+1)}_{\ell+1}\underline{e}_{\ell} & \cdots & {\psi}^{(n)}_{\ell+1}\underline{e}_{\ell} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \underline{0} &\cdots & \underline{0} & \underline{0} &\cdots & \underline{e}_{\ell} & {\psi}^{(k+1)}_{k}\underline{e}_{\ell} & \cdots & {\psi}^{(n)}_{k}\underline{e}_{\ell}\vspace{-.35cm} \\ \multicolumn{6}{>{$}c<{$}}{\underbrace{\hspace{4.5cm}}}&\multicolumn{3}{>{$}c<{$}}{\underbrace{\hspace{3.45cm}}}\vspace{-.1cm}\\ \multicolumn{6}{c}{From systematic nodes}&\multicolumn{3}{c}{From parity nodes} \vspace{-1cm} \end{tabular} \right], \vspace{1cm} \] where $\underline{e}_{\ell}$ denotes the $\ell$th unit vector of length $\alpha$ and $\underline{0}$ denotes a zero vector of length $\alpha$. Observe that apart from the desired $\ell$th component, every other component is aligned along the vector $\underline{e}_{\ell}$. The goal is to show that some $\alpha$ linear combinations of the columns above will give us a matrix whose $\ell$th component equals the $(\alpha \times \alpha)$ identity matrix, and has zeros everywhere else. But this is clear from the interference alignment structure just noted in conjunction with linear independence of the $\alpha$ vectors in the desired component: \begin{equation} \{ \underline{\psi}^{(k+1)}, ~\cdots~, \underline{\psi}^{(n)} \} . \end{equation} \end{IEEEproof} Next, we discuss the data reconstruction property. ~ \subsubsection{Data Reconstruction~(MDS Property)} \label{sec:recon} For reconstruction to be satisfied, a data-collector downloading all symbols stored in any arbitrary $k$ nodes should be able to recover the $B$ message symbols. For this, we need the $(B \times B)$ matrix formed by the columnwise concatenation of any arbitrary collection of $k$ nodal generator matrices to be non-singular. The proof of this property is along the lines of the proof in the example. For completeness, a proof is presented in the appendix. \begin{thm}\label{thm:MISER_gen_recon} A data-collector connecting to any $k$ nodes in the MISER code can recover all the $B$ message symbols. \end{thm} \begin{IEEEproof} Please see the Appendix. 
\end{IEEEproof}
~

\begin{note} It is easily verified that both the reconstruction and repair properties continue to hold even when we choose the generator matrices of the parity nodes $\underline{g}^{(m)}_{i,j}$, $k+1 \leq m \leq n$, $1 \leq i,j \leq \alpha$, to be given by:
\begin{equation} \underline{g}^{(m)}_{i,j}= \left\lbrace \begin{array}{l l} \Sigma_i \underline{\psi}^{(m)} &\text{if }i = j \\ {\psi}^{(m)}_i\underline{e}_j\;\; &\text{if }i\neq j \end{array} \right.\label{eq:choose_g2} \end{equation}
where $\Sigma_i = \text{diag}\{\epsilon_{i,1}~,~\ldots~,~\epsilon_{i,\alpha}\}$ is an $(\alpha \times \alpha)$ diagonal matrix satisfying
\begin{enumerate} \item $\epsilon_{i,j} \neq 0$, $\quad\quad~ \forall ~i,j$ \item $\epsilon_{i,j} \, \epsilon_{j,i} \neq 1$, $\quad \forall ~i \neq j$. \end{enumerate}
~
\noindent The first condition suffices to ensure exact-repair of systematic nodes. The two conditions together ensure that the~(MDS) reconstruction property holds as well. \end{note}
\subsection{The MISER Code for $n \geq 2k,~d=n-1$}
In this section, we show how the MISER code construction for $n = 2k,~d=n-1$ can be extended to the more general case $n \geq 2k, \ d=n-1$. From the cut-set bound~\eqref{eq:MSR_beta1_parameters}, for this parameter regime, we get \begin{equation} k \leq \alpha~. \end{equation} We begin by first showing how an incremental change in parameters is possible.
~
\begin{thm} \label{thm:smaller_k} An $[n,\ k, \ d]$ linear, systematic, exact-repair MSR code ${\cal C}$ can be derived from an $[n'=n+1,k'=k+1,d'=d+1]$ linear, systematic, exact-repair MSR code ${\cal C}'$. Furthermore, if $d'=a k'+b$ in code ${\cal C}'$, then $d=a k+b+(a-1)$ in code ${\cal C}$. \end{thm}
\begin{IEEEproof} We begin by noting that \begin{eqnarray} n-k & = & n'-k' \\ \alpha'& = & \alpha =d-k+1 \\ B' = k'(d'-k'+1) & = & B + \alpha . \label{eq:B_difference} \end{eqnarray} In essence, we use code shortening \cite{SloaneBook} to derive code ${\cal C}$ from code ${\cal C}'$. Specification of code ${\cal C}$ requires that, given a collection of $B=k \alpha$ message symbols, we identify the $\alpha$ code symbols stored in each of the $n$ nodes. We assume, without loss of generality, that in code ${\cal C}$ the nodes are numbered $1$ through $n$, with nodes $1$ through $k$ representing the systematic nodes. We next create an additional node numbered $0$. The encoding algorithm for code ${\cal C}$ is based on the encoding algorithm for code ${\cal C}'$. Given a collection of $B$ message symbols to be encoded by code ${\cal C}$, we augment this collection by an additional $\alpha$ message symbols, all of which are set equal to zero. The first set of $B$ message symbols will be stored in systematic nodes $1$ through $k$, and the string of $\alpha$ zeros will be stored in node $0$. Nodes $0$ through $k$ are then regarded as constituting a set of $k'=(k+1)$ systematic nodes for code ${\cal C}'$. The remaining $(n-k)$ parity nodes are filled using the encoding process associated with code ${\cal C}'$, applied to the message symbols stored in the $k'$ nodes numbered $0$ through $k$. Note that both codes ${\cal C}$ and ${\cal C}'$ share the same number $(n-k)$ of parity nodes. To prove the data reconstruction property of ${\cal C}$, it suffices to prove that all the $B$ message symbols can be recovered by connecting to an arbitrary set of $k$ nodes.
Given a data-collector connecting to a particular set of $k$ nodes, we examine the corresponding scenario in code ${\cal C}'$ in which the data-collector connects to node $0$ in addition to these $k$ nodes. By the assumed MDS property of code ${\cal C}'$, all the $B$ message symbols along with the $\alpha$ message symbols stored in node $0$ can be decoded using the data stored in these $(k+1)$ nodes. However, since the $\alpha$ symbols stored in node $0$ are all set equal to zero, they clearly play no part in the data-reconstruction process. It follows that the $B$ message symbols can be recovered using the data from the $k$ nodes (leaving aside node $0$), thereby establishing that code ${\cal C}$ possesses the required MDS data-reconstruction property. A similar argument can be used to establish the repair property of code ${\cal C}$ as well. Finally, we have \begin{eqnarray*} d' & = & a k'+b \\ \ \Rightarrow \ d+1 & = & a(k+1) + b \\ \ \Rightarrow \ d & = & a k + b + (a-1). \end{eqnarray*} \end{IEEEproof}
~
By iterating the procedure in the proof of Theorem~\ref{thm:smaller_k} above $i$ times, we obtain:
~
\begin{cor} \label{cor:MSR_higher_d} An $[n,\ k, \ d]$ linear, systematic, exact-repair MSR code ${\cal C}$ can be constructed by shortening an $[n'=n+i,k'=k+i,d'=d+i]$ linear, systematic, exact-repair MSR code ${\cal C}'$. Furthermore, if $d'=a k'+b$ in code ${\cal C}'$, then $d=a k+b+i(a-1)$ in code ${\cal C}$. \end{cor}
~
\begin{note} It is shown in the sequel~(Section~\ref{subsec:equivalence}) that every linear, exact-repair MSR code can be made systematic. Thus, Theorem~\ref{thm:smaller_k} and Corollary~\ref{cor:MSR_higher_d} apply to any linear, exact-repair MSR code~(not just systematic ones). In addition, note that the theorem and the associated corollary hold for general values of $[n, \ k, \ d]$ and are not restricted to the case of $d=n-1$. Furthermore, a little thought will show that they apply to linear codes ${\cal C}'$ that perform functional repair as well. \end{note}
~
The next corollary follows from Corollary~\ref{cor:MSR_higher_d} and the code-shortening method employed in Theorem~\ref{thm:smaller_k}.
\begin{cor} \label{cor:MISER_code-shortening} The MISER code for $n \geq 2k, \ d=n-1$ can be obtained by shortening the MISER code for $n'=n+(n-2k), \ k'=k + (n-2k), \ d'=d+(n-2k)=n'-1$. \end{cor}
~
\begin{figure} \centering \includegraphics[trim=0in 0.7in 0in 0in, clip=true,width=\textwidth]{fig_shortening_example} \caption{\small Construction of a $[n=5, \; k=2, \; d=4]$ MISER code from a $[n'=6, \; k'=3, \; d'=5]$ MISER code. Shortening the code with respect to node zero is equivalent to removing systematic node $0$ as well as the top component of every nodal generator matrix. The resulting $[n=5, \; k=2, \; d=4]$ MISER code has $\{u_4,\ldots,u_9\}$ as its $B=k\alpha=6$ message symbols.} \label{fig:MISER_shorten} \end{figure}
\textit{Example:} The code-shortening procedure represented by Theorem~\ref{thm:smaller_k} is illustrated by the example shown in Fig.~\ref{fig:MISER_shorten}. Here it is shown how a MISER code having code parameters $[n'=6, \; k'=3, \; d'=5]$, $\beta'=1$ and $(\alpha'=d'-k'+1=3, B'=\alpha^{'} k'=9)$ yields, upon shortening with respect to the message symbols in node $0$, a MISER code having code parameters $[n=5, \; k=2, \; d=4]$, $\beta=1$ and $(\alpha=d-k+1=3, B=\alpha k=6)$.
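As an illustrative aside (this sketch is ours and is not part of the original presentation), the following minimal Python program, assuming the \texttt{sympy} library and hypothetical toy parameters $[n=4,\;k=2,\;d=3]$~($\alpha=2$, $B=4$) over $\mathbb{F}_7$, instantiates the nodal generator matrices of equation~\eqref{eq:choose_g} with a Cauchy matrix $\Psi$ and numerically checks both the reconstruction~(MDS) property and the exact-repair of systematic nodes. Nodes and components are indexed from zero, and the \texttt{rank\_mod} helper is our own.
\begin{verbatim}
# Sketch (ours): MISER construction of eq. (choose_g) for hypothetical toy
# parameters n = 4, k = 2, d = 3 (alpha = 2, B = 4) over F_7; indices are
# zero-based, with nodes 0..k-1 systematic and nodes k..n-1 parity.
from itertools import combinations
from sympy import Matrix, eye, zeros

q, n, k = 7, 4, 2
alpha = n - k                 # alpha = d - k + 1 = n - k for n = 2k, d = n-1
xs, ys = [0, 1], [2, 3]       # x_i + y_j is nonzero in F_7
Psi = Matrix(alpha, n - k, lambda i, j: pow(xs[i] + ys[j], q - 2, q))
eps = 2                       # eps != 0 and eps^2 != 1 in F_7

def parity_component(m, i):
    # component G^{(m)}_i of parity node m: column j equals eps * psi^{(m)}
    # when i == j, and psi^{(m)}_i * e_j otherwise
    cols = []
    for j in range(alpha):
        if i == j:
            cols.append(eps * Psi[:, m])
        else:
            e = zeros(alpha, 1)
            e[j] = 1
            cols.append(Psi[i, m] * e)
    return Matrix.hstack(*cols)

def node_matrix(v):
    # full (B x alpha) nodal generator matrix of node v
    if v < k:
        blocks = [eye(alpha) if i == v else zeros(alpha) for i in range(k)]
    else:
        blocks = [parity_component(v - k, i) for i in range(k)]
    return Matrix.vstack(*blocks)

def rank_mod(M):
    # rank of an integer matrix over F_q via Gaussian elimination mod q
    A = [[int(x) % q for x in row] for row in M.tolist()]
    r = 0
    for c in range(M.cols):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], q - 2, q)
        A[r] = [inv * x % q for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(x - f * y) % q for x, y in zip(A[i], A[r])]
        r += 1
    return r

G = [node_matrix(v) for v in range(n)]

# reconstruction (MDS): every k-node concatenation is non-singular over F_q
for S in combinations(range(n), k):
    assert Matrix.hstack(*[G[v] for v in S]).det() % q != 0

# exact-repair of systematic node l: every helper passes its l-th column,
# and the columns of G^{(l)} must lie in the span of the passed vectors
for l in range(k):
    passed = Matrix.hstack(*[G[v][:, l] for v in range(n) if v != l])
    assert rank_mod(passed) == rank_mod(Matrix.hstack(passed, G[l]))
print("MDS and systematic-repair checks passed")
\end{verbatim}
The repair check above verifies exactly the interference-alignment mechanism of the proof: the desired components of the parity columns are linearly independent, while the interference components are aligned and can be cancelled by the systematic helpers.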
\subsection{Extension to $ 2k-1 \leq d \leq n-1$ When The Set of Helper Nodes Includes All Remaining Systematic Nodes} \label{sec:connect_to_all_systematic}
In this section, we present a simple extension of the MISER code to the case when $2k-1 \leq d \leq n-1$, under the additional constraint, however, that the set of $d$ helper nodes assisting a failed systematic node includes the remaining $k-1$ systematic nodes. The theorem below shows that the code provided in Section~\ref{sec:MISER_gen} for $n=2k, \ d=n-1$ supports the case $d=2k-1, \ d \leq n-1$ as well, as long as this additional requirement is met. From here on, extension to the general case $d \geq 2k-1, \ d \leq n-1$ is straightforward via the code-shortening result in Theorem~\ref{thm:smaller_k}. Note that unlike in the previous instance, the $(\alpha \times (n-k))$ Cauchy matrix used in the construction for $d < n-1$ is a rectangular matrix.
\begin{thm} For $d=2k-1, \ d \leq n-1$, the code defined by the nodal generator matrices in equations~(\ref{eq:explicitSystematicGenMxs}) and~(\ref{eq:choose_g}) achieves reconstruction and optimal, exact-repair of systematic nodes, provided the replacement node connects to all the remaining systematic nodes. \end{thm}
\begin{IEEEproof} \textit{Reconstruction:} The reconstruction property follows directly from the reconstruction property of the original code.
\textit{Exact-repair of systematic nodes:} The replacement node connects to the $(k-1)$ remaining systematic nodes and an arbitrary set of $\alpha$ parity nodes~(since meeting the cut-set bound requires $d=k-1 + \alpha$). Consider a distributed storage system having only these $(k-1+\alpha)$ helper nodes along with the failed node as its nodes. Such a system has $d=n-1, \ d=2k-1$ and is identical to the system described in Section~\ref{sec:MISER_gen}. Hence exact-repair of systematic nodes meeting the cut-set bound is guaranteed. \end{IEEEproof}
\subsection{Analysis of the MISER Code}
\paragraph{Field Size Required} The constraint on the field size comes from the construction of the $\left(\alpha \times (n-k)\right)$ matrix $\Psi$, which must have all of its sub-matrices of full rank. For our constructions, since $\Psi$ is chosen to be a Cauchy matrix, any field of size $(n+d-2k+1)$ or higher suffices. For specific parameters, the matrix $\Psi$ can be handcrafted to yield smaller field sizes.
\paragraph{Complexity of Exact-Repair of Systematic Nodes} Each node participating in the exact-repair of systematic node $i$ simply passes its $i$th symbol, without any processing. The replacement node has to multiply the inverse of an~($\alpha \times \alpha$) Cauchy matrix with an $\alpha$-length vector and then perform $(k-1)$ subtractions for interference cancellation.
\paragraph{Complexity of Reconstruction} The complexity analysis is provided for the case $n=2k, \ d=n-1$; the other cases follow along similar lines. A data-collector connecting to the $k$ systematic nodes can recover all the data without any additional processing. A data-collector connecting to some $k$ arbitrary nodes has to (in the worst case) multiply the inverse of a $(k \times k)$ Cauchy matrix with $k$ vectors, along with operations of lower order of complexity.
\subsection{Relation to Subsequent Work~\cite{Changho}}
Two regenerating codes are equivalent if one code can be transformed into the other via a non-singular symbol remapping~(this definition is formalized in Section~\ref{subsec:equivalence}).
The capabilities and properties of equivalent codes are thus identical in every way. The initial presentation of the MISER code in~\cite{ourITW}~(the name `MISER' was coined only subsequently) provided the construction of the code along with two~(of three) parts of what may be termed a complete decoding algorithm, namely: (a) reconstruction by a data-collector, and (b) exact-repair of failed systematic nodes. It was not known whether the third part of decoding, i.e., repair of a failed parity node, could be carried out by the MISER code. Following the initial presentation of the MISER code, the authors of \cite{Changho} show how a \textit{common eigenvector} approach can be used to establish that exact-repair of the parity nodes is also possible under the MISER code construction\footnote{In~\cite{Changho}, a class of regenerating codes is presented that has the same parameters as the MISER code. This class of codes can, however, be shown to be equivalent to the MISER code (and hence to each other) under the equivalence notion presented in Section~\ref{subsec:equivalence}.}.
\section{Necessity of Interference Alignment and Non-Existence of Scalar, Linear, Exact-repair MSR Codes for $d<2k-3$}\label{sec:non_exist_alpha_3}
In Section~\ref{sec:gen_explicit}, explicit, exact-repair MSR codes are constructed for the parameter regime $d \geq 2k-1, \ d=n-1$, performing reconstruction and exact-repair of systematic nodes. These constructions are based on the concept of interference alignment. Furthermore, these codes have the desirable property of having the smallest possible value for the parameter $\beta$, i.e., $\beta=1$. As previously discussed in Section~\ref{subsec:net_cod}, the problem of constructing exact-repair MSR codes is (in part) a non-multicast network coding problem. In particular, for the case of $\beta=1$, it reduces to a \textit{scalar network coding} problem. Upon an increase in the value of $\beta$, the capacity of every data pipe is increased by a factor of $\beta$, thereby transforming it into a \textit{vector network coding} problem. Thus, $\beta=1$ corresponds to the absence of symbol extension, which in general reduces the complexity of system implementation. Furthermore, as noted in Section~\ref{subsec:beta_1}, an MSR code for every larger integer value of $\beta$ can be obtained by concatenating multiple copies of a $\beta=1$ code. For this reason, the case of $\beta=1$ is of special interest, and a large section of the literature in the field of regenerating codes~(\cite{WuDimISIT,ourAllerton,ourITW,DimSearch,WuArxiv,ourInterior_pts,Changho,ourProductMatrix,puyol}) is devoted to this case.

In the present section, we show that for $d<2k-3$, there exist no linear, exact-repair MSR codes achieving the cut-set bound on the repair bandwidth in the absence of symbol extension. In fact, we show that the cut-set bound cannot be achieved even if exact-repair of only the systematic nodes is desired. We first assume the existence of such a linear, exact-repair MSR code $\mathcal{C}$ satisfying: \begin{equation} (\beta=1,\ B=k\alpha,\ \alpha=d-k+1) \end{equation} and \begin{equation} (d < 2k-3 \Rightarrow \alpha < k-2).\end{equation} Subsequently, we derive properties that this code must necessarily satisfy. Many of these properties hold for a larger regime of parameters and are therefore of independent interest. In particular, we prove that \textit{interference alignment}, in the form described in Section~\ref{sec:intf_align}, is \textit{necessary}.
We will show that when $d <2k-3$ the system becomes over-constrained, leading to a contradiction. We begin with some additional notation.
~
\begin{note} In recent work, subsequent to the original submission of this paper, it is shown in \cite{Jafar_arxiv,Changho_arxiv_intfalign} that the MSR point under exact-repair can be achieved asymptotically for all $[n, ~k, ~d]$ via an infinite symbol extension, i.e., in the limit as $\beta \rightarrow \infty$. This is established by presenting a scheme under which $\lim_{\beta \rightarrow \infty} \frac{\gamma}{d \beta} = 1$. Note that in the asymptotic setup, since both $\alpha$ and $B$ are multiples of $\beta$, these two parameters tend to infinity as well. \end{note}
\subsection{Additional Notation}\label{sec:subspaceview}
We introduce some additional notation for the vectors passed by the helper nodes to the replacement node. For $\ell,m \in \{1,\ldots,n\},\ \ell \neq m$, let $\underline{\gamma}^{(m,\ell)}$ denote the vector passed by node $m$ for repair of node $\ell$. In keeping with our component notation, we will use $\underline{\gamma}^{(m,\ell)}_i$ to denote the $i$th component, $1 \leq i \leq k$, of this vector. Recall that a set of vectors is \textit{aligned} when the vector space spanned by them has dimension no more than one. Given a matrix $A$, we denote its column-space by $\text{colspace}[A]$ and its~(right) null space by $\text{nullspace}[A]$. Clearly, $\underline{\gamma}^{(m,\ell)} \in \text{colspace}\left[\mathbf{G}^{(m)}\right]$.
\subsection{Equivalent Codes}\label{subsec:equivalence}
Two codes $\mathcal{C}$ and $\mathcal{C}'$ are equivalent if $\mathcal{C}'$ can be represented in terms of $\mathcal{C}$ by \begin{enumerate}[i)] \item a change of basis of the vector space generated by the message symbols~(i.e., a remapping of the message symbols), and \item a change of basis of the column-spaces of the nodal generator matrices~(i.e., a remapping of the symbols stored within a node). \end{enumerate} A more rigorous definition is as follows.
~
\begin{defn}[Equivalent Codes] Two codes $\mathcal{C}$ and $\mathcal{C}'$ are equivalent if \begin{eqnarray} \mathbf{G}'^{(m)} &=& W \;\mathbf{G}^{(m)}\; U^{(m)} \\ \underline{\gamma}'^{(m,\ell)} &=& W \;\underline{\gamma}^{(m,\ell)} \end{eqnarray} $\forall~\ell,m \in \{1,\ldots,n\},\;\ell\neq m$, for some $(B\times B)$ non-singular matrix $W$, and some $(\alpha \times \alpha)$ non-singular matrix $U^{(m)}$. \end{defn}
~
Since the only operator required to transform a code into an equivalent one is a symbol remapping, the capabilities and properties of equivalent codes are identical in every respect. Hence, in the sequel, we will not distinguish between two equivalent codes, and the notion of code equivalence will play an important role in the present section. Here, properties of a code that is equivalent to a given code are first derived, and the equivalence then guarantees that these properties hold for the given code as well. The next theorem uses the notion of equivalent codes to show that every linear exact-repair MSR code can be made systematic.
~
\begin{thm} Every linear, exact-repair MSR code can be made systematic via a non-singular linear transformation of the rows of the generator matrix, which simply corresponds to a re-mapping of the message symbols. Furthermore, the choice of the $k$ nodes that are to be made systematic can be arbitrary. \end{thm}
\begin{IEEEproof} Let the generator matrix of the given linear, exact-repair MSR code $\mathcal{C}$ be $\mathbb{G}$.
We will derive an equivalent code $\mathcal{C}'$ that has its first $k$ nodes in systematic form. The reconstruction~(MDS) property of code $\mathcal{C}$ implies that the $(B \times B)$ sub-matrix of $\mathbb{G}$, \[ \left[ \mathbf{G}^{(1)}~\mathbf{G}^{(2)}~\cdots~\mathbf{G}^{(k)}\right]\] is non-singular. Define an equivalent code $\mathcal{C}'$ having its generator matrix $\mathbb{G}'$ as: \begin{equation} \mathbb{G}' = \left[ \mathbf{G}^{(1)}~\mathbf{G}^{(2)}~\cdots~\mathbf{G}^{(k)}\right]^{-1} ~ \mathbb{G}. \label{eq:convert_to_systematic}\end{equation} Clearly, the $B$ left-most columns of $\mathbb{G}'$ form a $(B \times B)$ identity matrix, thus making the equivalent code $\mathcal{C}'$ systematic. As the repair is exact, the code will retain the systematic form following any number of failures and repairs. The transformation in equation~\eqref{eq:convert_to_systematic} can involve any arbitrary set of $k$ nodes in $\mathcal{C}$, thus proving the second part of the theorem. \end{IEEEproof}
~
The theorem above permits us to restrict our attention to the class of systematic codes, and assume the first $k$ nodes~(i.e., nodes $1,\ldots,k$) to be systematic. Recall that, for systematic node $\ell~(\in \{1,\ldots,k\})$, \begin{eqnarray} G^{(\ell)}_i = \left \lbrace \begin{array}{ll} I_{\alpha} &\text{if } i=\ell \\ 0_\alpha &\text{if } i\neq \ell \end{array} \right. \quad \forall i \in \{1,\ldots,k \}. \end{eqnarray} Thus, systematic node $\ell$ stores the $\alpha$ symbols in $\underline{u}_\ell$.
\subsection{Approach}
An exact-repair MSR code should be capable of performing exact-repair of any failed node by connecting to any arbitrary subset of $d$ of the remaining $(n-1)$ nodes, while meeting the cut-set bound on repair bandwidth. This requires a number of repair scenarios to be satisfied. Our proof of non-existence considers a less restrictive setting, in which exact-repair of only the systematic nodes is to be satisfied. Further, we consider only the situation where a failed systematic node is to be repaired by downloading data from a specific set of $d$ nodes, comprising the $(k-1)$ remaining systematic nodes and some collection of $\alpha$ parity nodes. Thus, for the remainder of this section, we will restrict our attention to a subset of the $n$ nodes in the distributed storage network of size $(k+\alpha)$, namely, the set of $k$ systematic nodes and the first $\alpha$ parity nodes. Without loss of generality, within this subset, we will assume that nodes $1$ through $k$ are the systematic nodes and that nodes $(k+1)$ through $(k+\alpha)$ are the $\alpha$ parity nodes. Then, with this notation, upon failure of systematic node $\ell$, $1 \leq \ell \leq k$, the replacement node is assumed to connect to nodes $\{1,\ldots,k+\alpha\}\backslash\{\ell\}$. The generator matrix $\mathbb{G}$ of the entire code can be written in a block-matrix form as shown in Fig.~\ref{fig:non_ach_1}. In the figure, each~(block) column represents a node and each~(block) row, a component. The first $k$ and the remaining $\alpha$ columns contain, respectively, the generator matrices of the $k$ systematic nodes and the $\alpha$ parity nodes. \begin{figure}[h] \centering \includegraphics[trim=0.5in 2.7in 1.5in 0.4in, clip=true, width=0.5\textwidth]{fig_non_ach_1.pdf} \caption{\small The generator matrix $\mathbb{G}$ of the entire code. The first $k$~(block) columns are associated with the systematic nodes $1$ to $k$, and the next $\alpha$~(block) columns with the parity nodes $(k+1)$ to $(k+\alpha)$.
Empty blocks denote zero matrices.} \label{fig:non_ach_1} \end{figure} We now outline the steps involved in proving the non-existence result. Along the way, we will uncover some interesting and insightful properties possessed by linear, exact-repair MSR codes. \begin{enumerate} \item We begin by establishing that in order to satisfy the data reconstruction property, each sub-matrix in the parity-node section of the generator matrix~(see Fig.~\ref{fig:non_ach_1}) must be non-singular. \item Next, we show that the vectors passed by the $\alpha$ parity nodes for the repair of any systematic node must necessarily satisfy two properties: \begin{itemize} \item alignment of the interference components, and \item linear independence of the desired component. \end{itemize} \item We then prove that in the collection of $k$ vectors passed by a parity node for the respective repair of the $k$ systematic nodes, every $\alpha$-sized subset must be linearly independent. This is a key step that links the vectors stored in a node to those passed by it, and enables us to replace the $\alpha$ columns of the generator matrix of a parity node with the vectors it passes to aid in the repair of some subset of $\alpha$ systematic nodes. We will assume that these $\alpha$ systematic nodes are, in fact, nodes $1$ through $\alpha$. \item Finally, we will show that the necessity of satisfying multiple interference-alignment conditions simultaneously turns out to be over-constraining, forcing alignment in the desired components as well. This leads to a contradiction, thereby proving the non-existence result. \end{enumerate}
\subsection{Deduced Properties}
\begin{pty}[Non-singularity of the Component Submatrices]\label{pty:nec_recon} Each of the component submatrices $\{ G^{(m)}_i \mid k+1 \leq m \leq k+ \alpha, \ \ 1 \leq i \leq k \}$ is non-singular. \end{pty}
\begin{IEEEproof} Consider a data-collector connecting to systematic nodes $2$ to $k$ and parity node $(k+1)$. The data-collector thus has access to the block matrix shown in Fig.~\ref{fig:non_ach_2}. \begin{figure}[h] \centering \includegraphics[trim=0.3in 3.5in 3in 0.3in, clip=true, width=0.45\textwidth]{fig_non_ach_2} \caption{\small The block matrix accessed by a data-collector connecting to systematic nodes $2$ through $k$ and parity node $(k+1)$.} \label{fig:non_ach_2} \end{figure} For the data-collector to recover all the data, this block matrix must be non-singular, forcing $G_1^{(k+1)}$ to be non-singular. A similar argument shows that the same must hold in the case of each of the other component submatrices. \end{IEEEproof}
~
\begin{cor}\label{cor:component_colspace} Let $H=[H_1^t \ H_2^t \ \cdots \ H_k^t]^t$ be a $(k \alpha \times \ell)$ matrix, each of whose $\ell \geq 1$ columns is a linear combination of the columns of $\mathbf{G}^{(m)}$ for some $m\in\{k+1,\ldots,k+\alpha\}$, having $k$ components $\{H_i\}$ of size $(\alpha \times \ell)$. Thus \[ \text{colspace}[H] \subseteq \text{colspace}[\mathbf{G}^{(m)}] . \] Then for every $i \in \{1,\ldots,k\}$, we have \begin{equation} \text{nullspace}[H_i] = \text{nullspace}[H] . \end{equation} \end{cor}
\begin{IEEEproof} Clearly, \begin{equation} \text{nullspace}[H] \subseteq \text{nullspace}[H_i]. \end{equation} Let $H = \mathbf{G}^{(m)} A$, for some $(\alpha \times \ell)$ matrix $A$. Then \begin{equation} H_i = G_i^{(m)} A. \end{equation} For a vector $\underline{v} \in \text{nullspace}[H_i]$, \begin{equation} H_i \; \underline{v} = G_i^{(m)} A \; \underline{v} =\underline{0}.
\end{equation} However, since $G_i^{(m)}$ is of full rank~(Property~\ref{pty:nec_recon}), it follows that \begin{eqnarray} A \; \underline{v} &=&\underline{0} \\ \Rightarrow \ \mathbf{G}^{(m)} A \; \underline{v} &=& H \underline{v} = \underline{0} \\ \Rightarrow \ \text{nullspace}[H_i] &\subseteq& \text{nullspace}[H]. \end{eqnarray} \end{IEEEproof} The corollary says, in essence, that any linear dependence relation that holds amongst the columns of any of the components $H_i$ also extends to the columns of the entire matrix $H$ itself.
~
We next establish properties that are mandated by the repair capabilities of exact regenerating codes. Consider the situation where a failed systematic node, say node $\ell$, \ $1 \leq \ell \leq k$, is repaired using one vector~(as $\beta=1$) from each of the remaining $(k-1+\alpha)$ nodes.
~
\begin{defn} When considering repair of systematic node $\ell$, $1 \leq \ell \leq k$, the $\ell$th component $\{ \underline{\gamma}^{(m,\ell)}_\ell\}$ of each of the $\alpha$ vectors $\{ \underline{\gamma}^{(m,\ell)} \mid k+1 \leq m \leq k+ \alpha \}$ passed by the $\alpha$ parity nodes will be termed the {\em desired component}. The remaining components $\{ \underline{\gamma}^{(m,\ell)}_i \mid i \neq \ell \}$ will be termed {\em interference components}. \end{defn}
~
The next property highlights the necessity of interference alignment in any exact-repair MSR code. Clearly, the vectors passed by the remaining $(k-1)$ systematic nodes have $\ell^{\text{th}}$ component equal to $\underline{0}$, and thus the onus of recovering the `desired' $\ell^{\text{th}}$ component of replacement node $\ell$ falls on the $\alpha$ parity nodes. However, the vectors passed by the parity nodes have non-zero `interference' components that can be nulled out only by the vectors passed by the systematic nodes. This forces an alignment in these interference components, and this is shown more formally below.
~
\begin{pty}[Necessity of Interference Alignment]\label{pty:IA_necessary} In the vectors $\{ \underline{\gamma}^{(m,\ell)} \mid k+1 \leq m \leq k+\alpha \}$ passed by the $\alpha$ parity nodes for the repair of any systematic node (say, node $\ell$), the set of $\alpha$ interference components $\{ \underline{\gamma}^{(m,\ell)}_i \}$, $1 \leq i \leq k$, $ i \neq \ell$, must necessarily be \textit{aligned}, and the desired components $\{ \underline{\gamma}^{(m,\ell)}_\ell \}$ must necessarily be linearly independent. \end{pty}
\begin{IEEEproof} We assume without loss of generality that $\ell=1$, i.e., we consider repair of systematic node $1$. The matrix depicted in Fig.~\ref{fig:non_ach_3} consists of the $\alpha$ vectors to be recovered at the replacement node, alongside the $d$ vectors passed by the $d$ helper nodes $2,\ldots,k+\alpha$. This matrix may be decomposed into three sub-matrices, namely: a $(B \times \alpha)$ matrix $\Gamma_1$, comprising the $\alpha$ columns to be recovered at the replacement node; a $(B \times (k-1))$ matrix $\Gamma_2$, comprising the $(k-1)$ vectors passed by the remaining systematic nodes; and a $(B \times \alpha)$ matrix $\Gamma_3$, comprising the $\alpha$ vectors passed by the parity nodes.
\begin{figure}[h] \centering \includegraphics[trim=0.4in 2.2in 0.4in 0.4in, clip=true,width=0.6\textwidth]{fig_non_ach_3} \caption{\small Matrix depicting the $\alpha$~(global-kernel) vectors to be recovered by replacement node 1~(represented by the matrix $\Gamma_1$), alongside the $d$ vectors passed by the helper nodes $2, \ldots,k+\alpha$~(represented by $[ \Gamma_2 \mid \Gamma_3]$). } \label{fig:non_ach_3} \end{figure} The vectors $\{\underline{\gamma}_1^{(k+1,1)},~ \ldots~ ,\underline{\gamma}_1^{(k+\alpha,1)}\}$ appearing in the first row of the matrix constitute the desired component; for every $i \in \{2,\ldots,k\}$, the vectors $\{\underline{\gamma}_i^{(k+1,1)},~ \ldots~ ,\underline{\gamma}_i^{(k+\alpha,1)}\}$ constitute interference components. An exact-repair of node $1$ is equivalent to the recovery of $\Gamma_1$ from the columns of $\Gamma_2$ and $\Gamma_3$ through a linear transformation, and hence it must be that \begin{equation} \text{colspace} [\Gamma_1] \ \subseteq \ \text{colspace}\left[ \Gamma_2 | \Gamma_3 \right], \label{eq:IA_rk_arg_1}\end{equation} where the `$|$' operator denotes concatenation. When we restrict attention to the first components of the matrices, we see that we must have \begin{equation} \text{colspace}[I_{\alpha}] \ \subseteq \ \text{colspace} \left[\underline{\gamma}_1^{(k+1,1)}~ \ldots~ \underline{\gamma}_1^{(k+\alpha,1)}\right], \end{equation} thereby forcing the desired components $\{\underline{\gamma}_1^{(k+1,1)},~ \ldots~ ,\underline{\gamma}_1^{(k+\alpha,1)}\}$ to be linearly independent. \vspace{5pt} Further, from \eqref{eq:IA_rk_arg_1} it follows that \begin{equation} \text{colspace} \left[\Gamma_1 | \Gamma_2\right] \ \subseteq \ \text{colspace}\left[ \Gamma_2 | \Gamma_3 \right]. \label{eq:IA_rk_arg_2}\end{equation} Clearly, $\text{rank}[\Gamma_1] = \alpha$, and from Fig.~\ref{fig:non_ach_3} it can be inferred that \begin{equation} \text{rank}[\Gamma_1 | \Gamma_2] \ = \ \alpha + \text{rank}[\Gamma_2]~. \label{eq:IA_rk_arg_3}\end{equation} Moreover, as the first component in $\Gamma_3$ is of rank $\alpha$, \begin{eqnarray} \text{rank}[\Gamma_2 | \Gamma_3] \ &\leq& \ \text{rank}[\Gamma_2] + \alpha \label{eq:IA_rk_arg_4}\\ &=&\ \text{rank}[\Gamma_1 | \Gamma_2]. \label{eq:IA_rk_arg_5}\end{eqnarray} It follows from equations~\eqref{eq:IA_rk_arg_2} and~\eqref{eq:IA_rk_arg_5} that \begin{equation} \text{colspace} \left[\Gamma_1 | \Gamma_2\right] \ = \ \text{colspace}\left[ \Gamma_2 | \Gamma_3 \right], \label{eq:IA_rk_arg_6}\end{equation} and this forces the interference components in $\Gamma_3$ to be aligned. Thus, for $i\in\{2,\dots,k\}$, \begin{equation} \text{colspace}\left[\underline{\gamma}_i^{(k+1,1)}~\cdots~\underline{\gamma}_i^{(k+\alpha,1)}\right] \subseteq \text{colspace}\left[\underline{\gamma}_i^{(i,1)}\right]. \end{equation} \end{IEEEproof}
~
\begin{note} Properties~\ref{pty:nec_recon} and~\ref{pty:IA_necessary} also hold for all $\beta \geq 1$, in which case each of the $\alpha$ helper parity nodes passes a $\beta$-dimensional subspace, and each interference component needs to be confined to a $\beta$-dimensional subspace. Furthermore, the two properties also hold for all $[n,~k,~d]$ exact-repair MSR codes, when $(k-1)$ of the $d$ helper nodes along with the replacement node are viewed as systematic.
\end{note}
\begin{figure}[t] \centering \includegraphics[trim=0in 1.5in 0.2in 0in,clip=true,width=0.7\textwidth]{fig_fromTo.pdf} \caption{\small Table indicating the vectors passed by the $\alpha$ parity nodes to repair the first $\alpha$ systematic nodes.}\label{fig:fig_fromTo} \end{figure}
~
The next property links the vectors stored in a parity node to the vectors it passes to aid in the repair of any set of $\alpha$ systematic nodes.
~
\begin{pty}\label{pty:alpha_ind} For $d < 2k-1$, the vectors passed by a parity node to repair any arbitrary set of $\alpha$ systematic nodes are linearly independent, i.e., for $m \in \{k+1,\ldots,k+\alpha\}$, it must be that every subset of size $\alpha$ drawn from the set of vectors \[\left\lbrace\underline{\gamma}^{(m,1)},\ldots,\underline{\gamma}^{(m,k)}\right\rbrace \] is linearly independent. (Thus the matrix $[ \underline{\gamma}^{(m,1)}~\ldots~\underline{\gamma}^{(m,k)} ]$ may be viewed as the generator matrix of a $[k,\alpha]$-MDS code.) \end{pty}
\begin{IEEEproof} Consider Fig.~\ref{fig:fig_fromTo}, which depicts the vectors passed by parity nodes $\{k+1,\ldots,k+\alpha\}$ to repair systematic nodes $\{1,\ldots,\alpha\}$. From Property~\ref{pty:IA_necessary} one can infer that in column $i \in \{1,\ldots,\alpha\}$, the $i^{\text{th}}$~(desired) components of the $\alpha$ vectors are linearly independent, and the $j^{\text{th}}$~(interference) components for all $j \in \{1,\ldots,k\}\backslash\{i\}$ are aligned. In particular, for all $j\in\{\alpha+1,\ldots,k\}$, the $j^{\text{th}}$ components of each column are aligned. Note that as $d<2k-1$ we have $k>\alpha$, which guarantees that the set $\{\alpha+1,\ldots,k\}$ is non-empty, and hence the presence of an $(\alpha+1)$th component. We will prove Property~\ref{pty:alpha_ind} by contradiction. Suppose, for example, we were to have \begin{equation} \underline{\gamma}^{(k+1,1)} \in \text{colspace}\left[\underline{\gamma}^{(k+1,2)}~\cdots~\underline{\gamma}^{(k+1,\alpha)}\right], \end{equation} which is an example situation under which the $\alpha$ vectors passed by parity node $(k+1)$ for the respective repair of the first $\alpha$ systematic nodes would fail to be linearly independent. Restricting our attention to component $(\alpha+1)$, we get \begin{equation} \underline{\gamma}^{(k+1,1)}_{\alpha+1} \in \text{colspace}\left[\underline{\gamma}^{(k+1,2)}_{\alpha+1}~\cdots~\underline{\gamma}^{(k+1,\alpha)}_{\alpha+1}\right]. \label{eq:non_ach_pty_3_1}\end{equation} Now, alignment of component $(\alpha+1)$ along each column forces the same dependence in all other parity nodes, i.e., \begin{equation} \underline{\gamma}^{(m,1)}_{\alpha+1} \in \text{colspace}\left[\underline{\gamma}^{(m,2)}_{\alpha+1}~\cdots~\underline{\gamma}^{(m,\alpha)}_{\alpha+1}\right] \quad \forall m \in \{k+2,\ldots,k+\alpha\} \label{eq:non_ach_pty_3_2}.\end{equation} Noting that a vector passed by a helper node lies in the column-space of its generator matrix, we now invoke Corollary~\ref{cor:component_colspace}: \begin{equation} \text{nullspace}\left[\underline{\gamma}^{(m,1)}_{\alpha+1}~\cdots~\underline{\gamma}^{(m,\alpha)}_{\alpha+1}\right] = \text{nullspace}\left[\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] \quad \forall m \in \{k+1,\ldots,k+\alpha\}. \end{equation} This, along with equations~\eqref{eq:non_ach_pty_3_1} and \eqref{eq:non_ach_pty_3_2}, implies \begin{equation} \underline{\gamma}^{(m,1)} \in \text{colspace}\left[\underline{\gamma}^{(m,2)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] \quad \forall m\in \{k+1,\ldots,k+\alpha\}. \end{equation} Thus the dependence in the vectors passed by one parity node carries over to every other parity node. In particular, we have \begin{eqnarray} \underline{\gamma}^{(m,1)}_1 &\in& \text{colspace}\left[\underline{\gamma}^{(m,2)}_1~\cdots~\underline{\gamma}^{(m,\alpha)}_1\right] \quad \forall m \in \{k+1,\ldots,k+\alpha\}. \label{eq:fromto_onecolspothers}\end{eqnarray} However, from Property~\ref{pty:IA_necessary}, we know that the vectors passed to systematic nodes $2$ to $\alpha$ have their first components aligned, i.e., \begin{equation} \text{rank}\left[ \underline{\gamma}^{(k+1,\ell)}_1~\ldots~\underline{\gamma}^{(k+\alpha,\ell)}_1\right] \leq 1 \qquad \forall \ell \in \{2,\ldots,\alpha\}.\label{eq:fromto_confined}\end{equation} Aggregating all instantiations~(w.r.t.\ $m$) of equation~\eqref{eq:fromto_onecolspothers}, we find that the desired component is confined as follows: \begin{eqnarray} \text{colspace}\left[\left\lbrace \underline{\gamma}^{(m,1)}_1\right\rbrace_{m=k+1}^{k+\alpha}\right] &\subseteq& \text{colspace}\left[\left\lbrace \underline{\gamma}^{(m,\ell)}_1\right\rbrace_{(m,~\ell)=(k+1,~2)}^{(k+\alpha,~\alpha)} \right]\\ \Rightarrow \text{rank}\left[\left\lbrace \underline{\gamma}^{(m,1)}_1\right\rbrace_{m=k+1}^{k+\alpha}\right] &\leq& \text{rank}\left[\left\lbrace \underline{\gamma}^{(m,\ell)}_1\right\rbrace_{(m,~\ell)=(k+1,~2)}^{(k+\alpha,~\alpha)} \right]\\ &\leq& \sum_{\ell=2}^{\alpha}\text{rank}\left[\left\lbrace \underline{\gamma}^{(m,\ell)}_1\right\rbrace_{m=k+1}^{k+\alpha} \right]\\ &\leq& \alpha-1, \end{eqnarray} where the last inequality follows from equation~\eqref{eq:fromto_confined}. This contradicts the assertion of Property~\ref{pty:IA_necessary} with respect to the desired component: \begin{equation} \text{rank}\left[\left\lbrace \underline{\gamma}^{(m,1)}_1\right\rbrace_{m=k+1}^{k+\alpha}\right] = \ \alpha.\end{equation} \end{IEEEproof}
~
\begin{note} It turns out that an attempted proof of the analogue of this property for the case $\beta>1$ fails to go through. \end{note}
~
The connection between the vectors passed by a parity node and those stored by it, resulting from Property~\ref{pty:alpha_ind}, is presented in the following corollary.
~
\begin{cor}\label{cor:storedISpassed} If there exists a linear, exact-repair MSR code for $d<2k-1$, then there exists an equivalent linear, exact-repair MSR code, where, for each parity node, the $\alpha$ columns of the generator matrix are respectively the vectors passed for the repair of the first $\alpha$ systematic nodes. \end{cor}
\begin{IEEEproof} Since a node can pass only a function of what it stores, the vectors passed by a parity node $m\in\{k+1,\ldots,k+\alpha\}$ for repair of the systematic nodes must belong to the column-space of its generator matrix, i.e., \begin{equation} \text{colspace}\left[\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] \subseteq \text{colspace}\left[\mathbf{G}^{(m)}\right]. \end{equation} Further, Property~\ref{pty:alpha_ind} asserts that the vectors it passes for repair of the first $\alpha$ systematic nodes are linearly independent, i.e., \begin{eqnarray} \text{rank}\left[\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] &=& \alpha \ = \ \text{rank}\left[\mathbf{G}^{(m)}\right]. \end{eqnarray} It follows that the generator matrix $\mathbf{G}^{(m)}$ is a non-singular transformation of the vectors $\left[\;\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\;\right]$ that are passed for the repair of the first $\alpha$ systematic nodes, and the two codes with generator matrices given by the two representations are hence equivalent. \end{IEEEproof}
~
In the equivalent code, each row of Fig.~\ref{fig:fig_fromTo} corresponds to the generator matrix $\mathbf{G}^{(m)}$ of the associated parity node, i.e., \begin{equation} \mathbf{G}^{(m)} = \left[\underline{\gamma}^{(m,1)} \; \cdots \; \underline{\gamma}^{(m,\alpha)} \right] \qquad \forall \ m\in\{k+1,\ldots,k+\alpha\}.\label{eq:nonach_storedISpassed} \end{equation} Since the capabilities of a code are identical to those of an equivalent code, we will restrict our attention to this generator matrix for the remainder of this section. The two properties that follow highlight some additional structure in this code.
~
\begin{pty}[Code structure - what is stored] \label{pty:nonach_struct_stored} For $d<2k-1$, any component ranging from $(\alpha+1)$ to $k$ across the generator matrices of the parity nodes differs only by the presence of a multiplicative diagonal matrix on the right, i.e., \begin{equation} \begin{tabular}{>{$}c<{$}>{$}c<{$}>{$}c<{$}>{$}c<{$}} G^{(k+1)}_{\alpha+1} = H_{\alpha+1} ~\Lambda^{(k+1)}_{\alpha+1}, &G^{(k+2)}_{\alpha+1} = H_{\alpha+1} ~\Lambda^{(k+2)}_{\alpha+1}, & \quad \cdots \quad & G^{(k+\alpha)}_{\alpha+1} = H_{\alpha+1}~ \Lambda^{(k+\alpha)}_{\alpha+1}\\ \vdots & \quad \vdots \quad &\quad \ddots \quad & \vdots \\ G^{(k+1)}_{k} \ = \ H_{k} ~ \Lambda^{(k+1)}_{k},&G^{(k+2)}_{k} \ = \ H_{k} ~ \Lambda^{(k+2)}_{k},& \quad \cdots \quad &G^{(k+\alpha)}_{k} \ = \ H_{k} ~ \Lambda^{(k+\alpha)}_{k}\end{tabular} \label{eq:nonach_mxBelow} \end{equation} where the matrices of the form $\Lambda_*^{(*)}$ are $(\alpha \times \alpha)$ diagonal matrices (and where, for instance, we can choose $H_{\alpha+1} = G^{(k+1)}_{\alpha+1}$, in which case $\Lambda^{(k+1)}_{\alpha+1}=I_{\alpha}$). \end{pty}
\begin{IEEEproof} Consider the first column in Fig.~\ref{fig:fig_fromTo}, comprising the vectors passed by the $\alpha$ parity nodes to repair node $1$. Property~\ref{pty:IA_necessary} tells us that in these $\alpha$ vectors, the components ranging from $(\alpha+1)$ to $k$ constitute interference, and are hence aligned.
Clearly, the same statement holds for every column in Fig.~\ref{fig:fig_fromTo}. Thus, the respective components across these columns are aligned. Since the generator matrices of the parity nodes are as in~\eqref{eq:nonach_storedISpassed}, the result follows. \end{IEEEproof}
~
For the repair of a systematic node, a parity node passes a vector from the column-space of its generator matrix, i.e., the vector $\underline{\gamma}^{(m,\ell)}$ passed by parity node $m$ for repair of failed systematic node $\ell$ can be written in the form: \begin{equation} \underline{\gamma}^{(m,\ell)}~ =~ \mathbf{G}^{(m)}~ \underline{\theta}^{(m,\ell)}\end{equation} for some $\alpha$-length vector $\underline{\theta}^{(m,\ell)}$. In the equivalent code obtained in~\eqref{eq:nonach_storedISpassed}, a parity node simply stores the $\alpha$ vectors it passes to repair the first $\alpha$ systematic nodes. On the other hand, the vector passed to systematic node $\ell$, $ \alpha+1 \leq \ell \leq k$, is a linear combination of these $\alpha$ vectors. The next property employs Property~\ref{pty:alpha_ind} to show that every coefficient in this linear combination is non-zero.
~
\begin{pty}[Code structure - what is passed]\label{pty:nonach_struct_passed} For $d<2k-1$, and a helper parity node $m$ assisting a failed systematic node $\ell$:\\ (a) For $\ell \in \{1,\ldots,\alpha\}$, $\underline{\theta}^{(m,\ell)}= \underline{e}_\ell$, and\\ (b) For $\ell \in \{\alpha+1,\ldots,k\}$, every element of $\underline{\theta}^{(m,\ell)}$ is non-zero. \end{pty}
\begin{IEEEproof} Part~(a) is a simple consequence of the structure of the code. We will prove part~(b) by contradiction. Suppose $\theta^{(m,\ell)}_{\alpha}=0$ for some $\ell \in \{\alpha+1,\ldots,k\}$~(the argument for any other zero element is identical). Then $\underline{\gamma}^{(m,\ell)}$ is a linear combination of only the first $(\alpha-1)$ columns of $\mathbf{G}^{(m)}$. This implies \begin{equation} \underline{\gamma}^{(m,\ell)} \in \text{colspace}\left[\underline{\gamma}^{(m,1)} \cdots \underline{\gamma}^{(m,\alpha-1)} \right]. \end{equation} This clearly violates Property~\ref{pty:alpha_ind}, thus leading to a contradiction. \end{IEEEproof}
\subsection{Proof of Non-existence}
We now present the main theorem of this section, namely, the non-achievability proof. The proof, in essence, shows that the conditions of interference alignment necessary for exact-repair of systematic nodes, coupled with the MDS property of the code, over-constrain the system, leading to alignment in the desired components as well. We begin with a toy example that will serve to illustrate the proof technique. Consider the case $[n=7,~k=5,~d=6]$. Then it follows from \eqref{eq:MSR_beta1_parameters} that $(\alpha=d-k+1=2,~B=k \alpha=10)$. In this case, as depicted in Figure~\ref{fig:nonAch_finalProof}, in the vectors passed by parity nodes $6$ and $7$, (a) when repairing systematic node $3$, there is alignment in components $4$ and $5$, and (b) when repairing systematic node $4$, there is alignment in component $5$. It is shown that this, in turn, forces alignment in component $4$~(the desired component) during repair of node $4$, contradicting the linear independence of the desired components asserted by Property~\ref{pty:IA_necessary}; this chain of implications is made explicit below.
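Anticipating the notation of the proof of Theorem~\ref{thm:non_exist} below~(where $A \prec B$ denotes that $A$ and $B$ are scalar multiples of each other, and $\Lambda^{(*)}_*$, $\Theta^{(*,*)}$ are the diagonal matrices defined there), the chain of implications for this toy example~(our elaboration, instantiating equations~\eqref{eq:nonach_final_4}--\eqref{eq:nonach_final_9} with $k=5$, $\alpha=2$) reads:
\begin{eqnarray*}
\Lambda^{(6)}_{4}\,\Theta^{(6,3)} &\prec& \Lambda^{(7)}_{4}\,\Theta^{(7,3)} \qquad \text{(alignment in component $4$, repair of node $3$)},\\
\Lambda^{(7)}_{5}\,\Theta^{(7,3)} &\prec& \Lambda^{(6)}_{5}\,\Theta^{(6,3)} \qquad \text{(alignment in component $5$, repair of node $3$)},\\
\Lambda^{(6)}_{5}\,\Theta^{(6,4)} &\prec& \Lambda^{(7)}_{5}\,\Theta^{(7,4)} \qquad \text{(alignment in component $5$, repair of node $4$)}.
\end{eqnarray*}
Multiplying the three relations and cancelling the common non-singular diagonal factors yields $\Lambda^{(6)}_{4}\,\Theta^{(6,4)} \prec \Lambda^{(7)}_{4}\,\Theta^{(7,4)}$, i.e., alignment in the desired component $4$ during repair of node $4$, which is the desired contradiction.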
\begin{figure}[h] \centering \includegraphics[trim=1in 8.2in 2in 3.1in,clip=true,width=.7\textwidth]{fig_nonAch_finalProof.pdf} \caption{\small A toy example, with parameters $[n=7,~k=5,~d=6]$, to illustrate the proof of non-existence.}\label{fig:nonAch_finalProof} \end{figure}
~
\begin{thm} \label{thm:non_exist} Linear, exact-repair MSR codes achieving the cut-set bound on the repair-bandwidth do not exist for $d<2k-3$ in the absence of symbol extension~(i.e., when $\beta=1$). \end{thm}
\begin{IEEEproof} Recall that achieving the cut-set bound on the repair bandwidth in the absence of symbol extension gives $d=k-1+\alpha$. For the parameter regime $d<2k-3$ under consideration, we get $k \geq \alpha+3$. Furthermore, since $\alpha >1$~\footnote{As discussed previously in Section~\ref{sec:intro}, $\alpha=1$ corresponds to a trivial scalar MDS code; hence, we omit this case from consideration.}, we have $n \geq k+2$~(as $n \geq d+1=k+\alpha$). Hence the system contains at least $(\alpha+3)$ systematic nodes and at least two parity nodes. We use Property~\ref{pty:nonach_struct_stored} to express the generator matrix of any parity node, say node $m$, in the form: \[ \mathbf{G}^{(m)} \ = \ \left[\begin{tabular}{>{$}c<{$}} G^{(m)}_1 \\ \vdots \\ G^{(m)}_{\alpha} \\ H_{\alpha+1} \Lambda^{(m)}_{\alpha+1} \\ \vdots \\ H_{k} \ \Lambda^{(m)}_{k} \end{tabular} \right]. \label{eq:nonach_nodemx} \] In this proof, we will use the notation $A \prec B$ to indicate that the matrices $A$ and $B$ are scalar multiples of each other, i.e., $A = \kappa B$ for some non-zero scalar $\kappa$, and write $A \nprec B$ to indicate that matrices $A$ and $B$ are \textit{not} scalar multiples of each other. We will restrict our attention to components $(\alpha+2)$ and $(\alpha+3)$. First, consider repair of systematic node $(\alpha+1)$. By the interference alignment property, Property~\ref{pty:IA_necessary}, \begin{eqnarray} \underline{\gamma}_{\alpha+2}^{(k+1,\alpha+1)} &\prec& \underline{\gamma}_{\alpha+2}^{(k+2,\alpha+1)} \\ \text{i.e.,}~~~~~~~~~G^{(k+1)}_{\alpha+2} ~\underline{\theta}^{(k+1,\alpha+1)} &\prec& G^{(k+2)}_{\alpha+2}~ \underline{\theta}^{(k+2,\alpha+1)}\label{eq:nonach_final_1}\\ \Rightarrow ~ H_{\alpha+2}~ \Lambda^{(k+1)}_{\alpha+2}~ \underline{\theta}^{(k+1,\alpha+1)} &\prec& H_{\alpha+2}~ \Lambda^{(k+2)}_{\alpha+2}~ \underline{\theta}^{(k+2,\alpha+1)}\label{eq:nonach_final_2}\\ \Rightarrow ~~~~~~~~~\Lambda^{(k+1)}_{\alpha+2}~ \underline{\theta}^{(k+1,\alpha+1)} &\prec& \Lambda^{(k+2)}_{\alpha+2}~ \underline{\theta}^{(k+2,\alpha+1)}\label{eq:nonach_final_3},\end{eqnarray} where equation~\eqref{eq:nonach_final_3} uses the non-singularity of $H_{\alpha+2}$ (which is a consequence of Property~\ref{pty:nec_recon}). We will use the notation $\Theta^{(*,*)}$ to denote an $(\alpha \times \alpha)$ diagonal matrix with the elements on its diagonal equal to the respective elements of $\underline{\theta}^{(*,*)}$.
Observing that the matrices $\Lambda^{(*)}_{*}$ are diagonal matrices, we rewrite equation~\eqref{eq:nonach_final_3} as \begin{equation} \Lambda^{(k+1)}_{\alpha+2} \Theta^{(k+1,\alpha+1)} \prec \Lambda^{(k+2)}_{\alpha+2} \Theta^{(k+2,\alpha+1)}\label{eq:nonach_final_4}.\end{equation} Similarly, alignment conditions on the $(\alpha+3)$th component in the vectors passed for repair of systematic node $(\alpha+1)$ give \begin{equation}\Lambda^{(k+2)}_{\alpha+3} \Theta^{(k+2,\alpha+1)} \prec \Lambda^{(k+1)}_{\alpha+3} \Theta^{(k+1,\alpha+1)} \label{eq:nonach_final_5},\end{equation} and those on the $(\alpha+3)$th component in the vectors passed for repair of systematic node $(\alpha+2)$ give \begin{equation}\Lambda^{(k+1)}_{\alpha+3} \Theta^{(k+1,\alpha+2)} \prec \Lambda^{(k+2)}_{\alpha+3} \Theta^{(k+2,\alpha+2)} \label{eq:nonach_final_6}.\end{equation} Observe that in equations~\eqref{eq:nonach_final_4},~\eqref{eq:nonach_final_5} and \eqref{eq:nonach_final_6}, the matrices $\Lambda^{(*)}_{*}$ and $\Theta^{(*,*)}$ are non-singular diagonal matrices. As a consequence, taking the product~(of the respective left-hand and right-hand sides) of equations~\eqref{eq:nonach_final_4},~\eqref{eq:nonach_final_5} and~\eqref{eq:nonach_final_6}, followed by a cancellation of common terms, leads to: \begin{equation} \Lambda^{(k+1)}_{\alpha+2} \Theta^{(k+1,\alpha+2)} \prec \Lambda^{(k+2)}_{\alpha+2} \Theta^{(k+2,\alpha+2)}\label{eq:nonach_final_9}. \end{equation} This is clearly in contradiction to Property~\ref{pty:IA_necessary}, which mandates linear independence of the desired components in the vectors passed for repair of systematic node $(\alpha+2)$: \begin{eqnarray} H_{\alpha+2} \Lambda^{(k+1)}_{\alpha+2} \underline{\theta}^{(k+1,\alpha+2)} &\nprec& H_{\alpha+2} \Lambda^{(k+2)}_{\alpha+2} \underline{\theta}^{(k+2,\alpha+2)},\label{eq:nonach_final_7}\\ \text{i.e.},\qquad \Lambda^{(k+1)}_{\alpha+2} \Theta^{(k+1,\alpha+2)} &\nprec& \Lambda^{(k+2)}_{\alpha+2} \Theta^{(k+2,\alpha+2)}\label{eq:nonach_final_8}. \end{eqnarray} \end{IEEEproof}
\section{Explicit Codes for $d=k+1$}\label{sec:MDSplus}
In this section, we give an explicit MSR code construction for the parameter set $\left[n,~k,~d=k+1\right]$, capable of repairing any failed node with a repair bandwidth equal to that given by the cut-set bound. This parameter set is relevant since \begin{enumerate}[a)] \item the total number of nodes $n$ in the system can be arbitrary~(and is not constrained to be equal to $d+1$), making the code pertinent for real-world distributed storage systems where it is natural for the system to expand/shrink, and \item $k+1$ is the smallest value of the parameter $d$ that offers a reduction in repair bandwidth, making the code suitable for networks with low connectivity.\end{enumerate} The code is constructed for $\beta=1$, i.e., the code does not employ any symbol extension. All subsequent discussion in this section will implicitly assume $\beta=1$. For most values of the parameters $[n, \; k , \; d]$, $d=k+1$ falls in the $d<2k-3$ regime, where we have shown (Section~\ref{sec:non_exist_alpha_3}) that exact-repair is not possible. When repair is not exact, a nodal generator matrix is liable to change after a repair process. Thus, for the code construction presented in this section, we drop the global kernel viewpoint and refer directly to the symbols stored or passed.
~
As a build-up to the code construction, we first inspect the trivial case of $d=k$.
In this case, the cut-set lower bound on repair bandwidth is given by \begin{equation} d \geq k = B. \end{equation} Thus the parameter regime $d=k$ mandates the repair bandwidth to be no less than the file size $B$, and has the remaining parameters satisfying \begin{equation} \left(\alpha=1, \ B=k\right). \end{equation} An MSR code for these parameters is necessarily an $[n,~k]$ scalar MDS code. Thus, in this code, node $i$ stores the symbol \begin{equation} \left(\underline{p}_i^t \, \underline{u}\right), \end{equation} where $\underline{u}$ is a $k$-length vector containing all the message symbols, and $\lbrace\underline{p}_i\rbrace_{i=1}^{n}$ is a set of $k$-length vectors such that any arbitrary $k$ of the $n$ vectors are linearly independent. Upon failure of a node, the replacement node can connect to any arbitrary $d=k$ nodes and download one symbol each, thereby recovering the entire message, from which the desired symbol can be extracted.
\begin{figure} \centering \includegraphics[trim=0in 0.2in 0in 0in, clip, width=\textwidth]{fig_dkp1Evolution3.pdf} \caption{\small Evolution of a node through multiple repairs in the MSR $d=k+1$ code.} \label{fig:dkp1_node_Evolution} \end{figure}
When $d=k+1$, the cut-set bound~\eqref{eq:MSR_beta1_parameters} gives \begin{equation} \left( \alpha=d-k+1=2, \ B=\alpha k=2k\right) . \end{equation} Let the $2k$ message symbols be the elements of the $2k$-dimensional column vector \[\left[ \begin{tabular}{c} $\underline{u}_1$\\ $\underline{u}_2$ \end{tabular} \right], \] where $\underline{u}_1$ and $\underline{u}_2$ are $k$-length column vectors. In the case of $d=k+1$, a code analogous to the $d=k$ code would have node $i$ storing the two symbols: \begin{equation} \left(\underline{p}_i^t \, \underline{u}_1, ~~\underline{p}_i^t \,\underline{u}_2\right). \label{eq:dkp1_init} \end{equation} Maintaining the code as in~\eqref{eq:dkp1_init} after one or more node repairs necessitates \textit{exact} repair of any failed node. Since, in this regime, exact-repair is not possible for most values of the parameters, we allow an auxiliary component in our code, as described below. In our construction, the symbols stored in the nodes are initialized as in~\eqref{eq:dkp1_init}. On repair of a failed node, the code allows for an auxiliary component in the second symbol. Thus, under this code, the two symbols stored in node $i,~1 \leq i \leq n$, are \begin{equation} \text{\huge (}\underbrace{\underline{p}_i^t\,\underline{u}_1, \qquad \underline{p}_i^t \, \underline{u}_2}_{\text{Exact component}}~+\hspace{-.43cm}\underbrace{\underline{r}_i^t\,\underline{u}_1}_{\text{Auxiliary component}}\hspace{-.8cm}\text{\huge )}, \end{equation} where $\underline{r}_i$ is a $k$-length vector corresponding to the auxiliary component. Further, the value of $\underline{r}_i$ may change when node $i$ undergoes repair. Hence we term this repair process \textit{approximately-exact-repair}. For a better understanding, the system can be viewed as analogous to a $Z$-channel; this is depicted in Fig.~\ref{fig:dkp1_node_Evolution}, where the evolution of a node through successive repair operations is shown. In the latter half of this section, we will see that the vectors $\{\underline{r}_i\}_{i=1}^{n}$ do not, at any point in time, influence either the reconstruction or the repair process. A brief numerical sketch of this symbol structure and of the reconstruction step is given below, after which we proceed to a formal description of the code construction.
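The following minimal Python sketch~(ours, not the paper's implementation) assumes the \texttt{sympy} library and hypothetical toy parameters $[n=6,~k=3,~d=4]$ over $\mathbb{F}_{11}$; the $\underline{p}_i$ are taken as rows of a Vandermonde matrix, the $\underline{r}_i$ are arbitrary, and the data-collector is assumed to know the current global kernels, including the auxiliary vectors $\underline{r}_i$.
\begin{verbatim}
# Sketch (ours): stored symbols and reconstruction for the d = k+1 code,
# with hypothetical toy parameters n = 6, k = 3 over F_11 (q = 11).
import random
from sympy import Matrix

random.seed(1)
q, k, n = 11, 3, 6
modq = lambda M: M.applyfunc(lambda x: x % q)

# p_i: rows of an (n x k) Vandermonde matrix, so any k rows are independent
P = Matrix(n, k, lambda i, j: pow(i + 1, j, q))
# r_i: arbitrary auxiliary vectors (they play no role in decoding)
R = Matrix(n, k, lambda i, j: random.randrange(q))

u1 = Matrix([1, 5, 9])        # message vectors u_1 and u_2 (B = 2k symbols)
u2 = Matrix([3, 0, 7])

# node i stores the pair (p_i^t u_1, p_i^t u_2 + r_i^t u_1)
store = [(modq(P[i, :] * u1)[0], modq(P[i, :] * u2 + R[i, :] * u1)[0])
         for i in range(n)]

# a data-collector connects to k arbitrary nodes, say nodes {0, 2, 5}
idx = [0, 2, 5]
Pk = P.extract(idx, list(range(k)))
Rk = R.extract(idx, list(range(k)))
s1 = Matrix([store[i][0] for i in idx])
s2 = Matrix([store[i][1] for i in idx])

Pk_inv = Pk.inv_mod(q)
u1_hat = modq(Pk_inv * s1)                   # recover u_1 (exact components)
u2_hat = modq(Pk_inv * (s2 - Rk * u1_hat))   # cancel r_i^t u_1, recover u_2
assert u1_hat == u1 and u2_hat == u2
print("recovered:", list(u1_hat), list(u2_hat))
\end{verbatim}
The sketch mirrors the two-step decoding of Theorem~\ref{thm:dkp1_recon}: recover $\underline{u}_1$ from the exact components, subtract its effect, and then recover $\underline{u}_2$.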
\subsection{Code Construction}
Let $\lbrace\underline{p}_i\rbrace_{i=1}^{n}$ be a set of $k$-length vectors such that any arbitrary $k$ of the $n$ vectors are linearly independent. Further, let $\{\underline{r}_i\}_{i=1}^{n}$ be a set of $k$-length vectors initialized to arbitrary values. Unlike $\lbrace\underline{p}_i\rbrace$, the vectors $\{\underline{r}_i\}$ do not play a role either in reconstruction or in repair. In our code, node $i$ stores the two symbols: \begin{equation} \left(\underline{p}_i^t~\underline{u}_1, ~~ \underline{p}_i^t\,\underline{u}_2+\underline{r}_i^t\,\underline{u}_1\right). \end{equation} Upon failure of a node, the exact component, as the name suggests, is exactly repaired. However, the auxiliary component may undergo a change. The net effect is what we term \textit{approximately-exact-repair}. The code is defined over the finite field $\mathbb{F}_q$ of size $q$. The sole restriction on $q$ comes from the construction of the set of vectors $\lbrace\underline{p}_i\rbrace_{i=1}^{n}$ such that every subset of $k$ vectors is linearly independent. For instance, these vectors can be chosen as the rows of an $(n \times k)$ Vandermonde matrix or an $(n \times k)$ Cauchy matrix, in which case any finite field of size $q\geq n$ or $q \geq n+k$, respectively, will suffice.
\textit{Example: } Fig.~\ref{fig:dkp1_example} depicts a sample code construction over $\mathbb{F}_{11}$ for the parameters $[n=8,~k=5,~d=6]$ with $\beta=1$, giving $(\alpha=2,\ B=10)$. Here, \[ \left[\begin{tabular}{>{$}c<{$}} \underline{p}_1^t \\ \vdots \\ \underline{p}_8^t \end{tabular}\right] = \left[\begin{tabular}{>{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}} 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\\ 4&5&3&1&1\\ 3&6&1&1&7\\ 3&7&8&3&4 \end{tabular} \right],~ \left[\begin{tabular}{>{$}c<{$}} \underline{r}_1^t \\ \vdots \\ \underline{r}_8^t \end{tabular}\right] = \left[\begin{tabular}{>{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}} 0& 0& 1& 2& 2\\ 2& 0& 1& 1& 1\\ 0& 0& 0& 10&0\\ 1& 2& 1& 0& 1\\ 1& 0& 0& 1& 0\\ 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0\\ 1& 0& 4& 0& 0 \end{tabular} \right]. \]
\begin{figure}[t] \centering \includegraphics[trim=1.1in 4.7in 2.8in 0.5in, clip, width=\textwidth]{fig_example_dkp1.pdf} \caption{\small A sample MSR $d=k+1$ code for the parameters $[n=8,~k=5,~d=6]$, $(\beta=1,\;\alpha=2,\;B=10)$, over $\mathbb{F}_{11}$. Also depicted is the repair of node $8$, assisted by helper nodes $1$ to $6$.} \label{fig:dkp1_example} \end{figure}
The two theorems below show that the code described above is an $[n,~k,~d=k+1]$ MSR code by establishing, respectively, the reconstruction and repair properties of the code.
~
\begin{thm}[Reconstruction, i.e., MDS property] In the code presented, all the $B$ message symbols can be recovered by a data-collector connecting to any arbitrary $k$ nodes.\label{thm:dkp1_recon} \end{thm}
\begin{IEEEproof} Due to symmetry, we assume (without loss of generality) that the data-collector connects to the first $k$ nodes. The data-collector then obtains access to the $2k$ symbols stored in the first $k$ nodes: \begin{equation} \left\lbrace\underline{p}_i^t\,\underline{u}_1, \quad \underline{p}_i^t\,\underline{u}_2\,+\,\underline{r}_i^t\,\underline{u}_1\right\rbrace_{i=1}^{k}. \end{equation} By construction, the vectors $\lbrace\underline{p}_i\rbrace_{i=1}^{k}$ are linearly independent, allowing the data-collector to recover the first message vector $\underline{u}_1$.
Next, the data-collector subtracts the effect of $\underline{u}_1$ from the second term. Finally, in a manner analogous to the decoding of $\underline{u}_1$, the data-collector recovers the second message vector $\underline{u}_2$. \end{IEEEproof}
~
\begin{thm}[Node repair] In the code presented, \textit{approximately}-exact-repair of any failed node can be achieved by connecting to an arbitrary subset of $d~(=k+1)$ of the remaining $(n-1)$ nodes.\label{thm:dkp1_regen} \end{thm}
\begin{IEEEproof} Due to symmetry, it suffices to consider the case where helper nodes $\{1,\ldots,k+1\}$ assist in the repair of a failed node $f \notin \{1,\ldots,k+1\}$. The two symbols stored in node $f$ prior to failure are \[\left(\underline{p}_{f}^t\,\underline{u}_1, \quad \underline{p}_{f}^t\,\underline{u}_2+\underline{r}_{f}^t\,\underline{u}_1\right).\] However, since repair is guaranteed to be only approximately exact, it suffices for the replacement node to obtain \[\left(\underline{p}_{f}^t\,\underline{u}_1, \quad \underline{p}_{f}^t\,\underline{u}_2+\underline{\tilde{r}}_{f}^t\,\underline{u}_1\right),\] where $\underline{\tilde{r}}_{f}$ is an arbitrary vector that need not be identical to $\underline{r}_{f}$. The helper nodes $\{1,\ldots,k+1\}$ pass one symbol each, formed by a linear combination of the symbols stored in them. More specifically, helper node $i, \, 1 \leq i \leq k+1$, under our repair algorithm, passes the symbol \begin{equation}\lambda_i\left(\underline{p}_{i}^t\,\underline{u}_1\right) \,+\, \left( \underline{p}_{i}^t\,\underline{u}_2+\underline{r}_{i}^t\,\underline{u}_1\right). \end{equation} We introduce some notation at this point. For $\ell \in \{k, \, k+1\}$, let $P_{\ell}$ be an $(\ell \times k)$ matrix with the vectors $\underline{p}_1, \ldots,\underline{p}_{\ell}$ as its $\ell$ rows, and let $R_{\ell}$ be a second $(\ell \times k)$ matrix with the vectors $\underline{r}_1, \ldots,\underline{r}_{\ell}$ as its $\ell$ rows. Further, let $\Lambda_{\ell}=\text{diag}\{\lambda_1,\ldots,\lambda_{\ell}\}$ be an $(\ell \times \ell)$ diagonal matrix. In terms of these matrices, the $(k+1)$ symbols obtained by the replacement node can be written as the $(k+1)$-length vector \begin{equation} (\Lambda_{k+1} P_{k+1} + R_{k+1} )~\underline{u}_1 + (P_{k+1})~\underline{u}_2~\label{eq:dkp1_regenvec}.\end{equation} The precise values of the scalars $\{\lambda_i\}_{i=1}^{k+1}$ are derived below.
~
\paragraph*{Recovery of the First Symbol} Let $\underline{\rho}$ denote the vector of coefficients of the linear combination of the received symbols that the replacement node uses to recover the first symbol stored in the failed node, i.e., we need \begin{equation} \underline{\rho}^t \left((\Lambda_{k+1} P_{k+1} + R_{k+1} )~\underline{u}_1 + (P_{k+1})~\underline{u}_2\right) \ = \ \underline{p}_{f}^t~\underline{u}_1.\end{equation} This requires elimination of $\underline{u}_2$, i.e., we need \begin{equation} \underline{\rho}^{t} P_{k+1} = \underline{0}^t. \label{eq:dkp1_repair_1}\end{equation} To accomplish this, we first choose \begin{equation} \underline{\rho} = \left[\begin{tabular}{>{$}c<{$}} \underline{\rho}_1 \\ -1 \end{tabular}\right], \end{equation} and in order to satisfy equation~\eqref{eq:dkp1_repair_1}, we set \begin{equation} \underline{\rho}_1^{t} = \underline{p}_{k+1}^t P_k^{-1}. \label{eq:dkp1_repair_2}\end{equation} Note that the $(k \times k)$ matrix $P_k$ is non-singular by construction.
Now that $\underline{u}_2$ is eliminated, to obtain $\underline{p}_{f}^t~\underline{u}_1$ we need \begin{eqnarray} \underline{\rho}^{t} \left( \Lambda_{k+1} P_{k+1} + R_{k+1} \right) & = & \underline{p}_{f}^t \\ \Rightarrow \quad \underline{\rho}_1^{t} \left( \Lambda_{k} P_{k} + R_{k} \right) & = & \underline{p}_f^t + \left( \lambda_{k+1} \; \underline{p}^{t}_{k+1} + \; \underline{r}^{t}_{k+1} \right). \label{eq:dkp1_repair_25}\end{eqnarray} Choosing $\lambda_{k+1}=0$ and substituting the value of $\underline{\rho}^{t}_1$ from equation~\eqref{eq:dkp1_repair_2}, a few straightforward manipulations show that choosing \begin{equation} \Lambda_k = \left(\text{diag}\left[\underline{p}_{k+1}^t ~P_k^{-1}\right]\right)^{-1} \text{diag}\left[\left(\underline{p}_{f}^t ~-~ \underline{p}_{k+1}^t~ P_k^{-1}~R_k ~+~ \underline{r}_{k+1}^t\right) P_k^{-1}\right]\end{equation} satisfies equation~\eqref{eq:dkp1_repair_25}, thereby enabling the replacement node to exactly recover the first symbol. The non-singularity of the matrix $\text{diag}\left[\underline{p}_{k+1}^t ~P_k^{-1}\right]$ used here is justified as follows. Consider \begin{equation} \left[\underline{p}_{k+1}^t~ P_k^{-1}\right] P_k= \underline{p}_{k+1}^t ~. \end{equation} If any element of $\left[\underline{p}_{k+1}^t ~P_k^{-1}\right]$ were zero, a linear combination of $(k-1)$ rows of $P_k$ would yield $\underline{p}_{k+1}^t$, contradicting the linear independence of every subset of $k$ vectors in $\{\underline{p}_i\}_{i=1}^{n}$. ~ \paragraph*{Recovery of the Second Symbol} Since the scalars $\{\lambda_i\}_{i=1}^{k+1}$ have already been utilized in the exact recovery of the first symbol, we are left with fewer degrees of freedom. This, in turn, gives rise to the presence of an auxiliary term in the second symbol. Let $\underline{\delta}$ denote the linear combination of the received symbols that the replacement node uses to obtain its second symbol $(\underline{p}_{f}^t~\underline{u}_2+\underline{\tilde{r}}_{f}^t~\underline{u}_1)$, i.e., we need \begin{equation} \underline{\delta}^t \left((\Lambda_{k+1} P_{k+1} + R_{k+1} )~\underline{u}_1 + (P_{k+1})~\underline{u}_2\right) \ = \ \underline{p}_{f}^t~\underline{u}_2+\underline{\tilde{r}}_{f}^t~\underline{u}_1.\label{eq:dkp1_delta1}\end{equation} Since the vector $\underline{\tilde{r}}_{f}$ is allowed to take any arbitrary value, the condition in~\eqref{eq:dkp1_delta1} reduces to the requirement \begin{equation} \underline{\delta}^{t} P_{k+1} = \underline{p}^{t}_f. \label{eq:dkp1_repair_3} \end{equation} To accomplish this, we first choose \begin{equation} \underline{\delta} = \left[\begin{tabular}{>{$}c<{$}} \underline{\delta}_1 \\ 0 \end{tabular}\right], \end{equation} where, in order to satisfy equation~\eqref{eq:dkp1_repair_3}, we choose \begin{equation} \underline{\delta}_1^{t} = \underline{p}_{f}^t P_k^{-1}~. \label{eq:dkp1_repair_4}\end{equation} \end{IEEEproof} In the example provided in Fig.~\ref{fig:dkp1_example}, node $8$ is repaired by downloading one symbol each from nodes $1$ to $6$. The linear combination coefficients used by the helper nodes are \[ \left[ \lambda_1 ~\cdots~ \lambda_{6} \right] = \left[ 6~1~3~3~1~0\right]. \] The replacement node retains the exact part, and obtains a different auxiliary part, with $\tilde{\underline{r}}_{8} = \left[6~2~4~7~9\right]$.
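This example is easy to check numerically. The following Python sketch (our illustration, not the authors' code) replays the example of Fig.~\ref{fig:dkp1_example} over $\mathbb{F}_{11}$: it verifies reconstruction over every $5$-subset of nodes, recomputes the coefficients $\{\lambda_i\}$ from the closed-form expression for $\Lambda_k$ (which simplifies here because $P_5$ is the identity and $\underline{r}_6=\underline{0}$), and confirms that the repair of node $8$ returns its exact component together with the auxiliary vector $\tilde{\underline{r}}_8=[6~2~4~7~9]$ quoted above.

\begin{verbatim}
# Sketch (not the authors' code): verify the [n=8, k=5, d=6] example over F_11.
import itertools, random

q = 11
P = [[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0],[0,0,0,0,1],
     [4,5,3,1,1],[3,6,1,1,7],[3,7,8,3,4]]
R = [[0,0,1,2,2],[2,0,1,1,1],[0,0,0,10,0],[1,2,1,0,1],[1,0,0,1,0],
     [0,0,0,0,0],[0,0,0,1,0],[1,0,4,0,0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def solve(A, b):
    # Gaussian elimination over F_q; the paper's construction guarantees
    # that every 5x5 submatrix of P is invertible.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % q)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], q - 2, q)          # Fermat inverse
        M[c] = [x * inv % q for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] % q:
                M[r] = [(x - M[r][c] * y) % q for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

random.seed(1)
u1 = [random.randrange(q) for _ in range(5)]
u2 = [random.randrange(q) for _ in range(5)]
node = [(dot(P[i], u1), (dot(P[i], u2) + dot(R[i], u1)) % q) for i in range(8)]

# Reconstruction from every 5-subset of nodes: recover u1, then u2.
for S in itertools.combinations(range(8), 5):
    A = [P[i] for i in S]
    v1 = solve(A, [node[i][0] for i in S])
    v2 = solve(A, [(node[i][1] - dot(R[i], v1)) % q for i in S])
    assert v1 == u1 and v2 == u2

# Repair of node f = 8 by helpers 1..6.  With P_5 = I the formula for
# Lambda_k reduces to componentwise division by the entries of p_6.
f = 7
p6Rk = [sum(P[5][i] * R[i][j] for i in range(5)) % q for j in range(5)]
lam = [(P[f][j] - p6Rk[j] + R[5][j]) * pow(P[5][j], q - 2, q) % q
       for j in range(5)] + [0]               # lambda_{k+1} = 0
assert lam == [6, 1, 3, 3, 1, 0]              # matches the quoted coefficients
y = [(lam[i] * node[i][0] + node[i][1]) % q for i in range(6)]
rho   = P[5][:] + [q - 1]                     # [p_6^t P_5^{-1}, -1]
delta = P[f][:] + [0]                         # [p_f^t P_5^{-1},  0]
assert dot(rho, y) == dot(P[f], u1)           # exact component recovered
rtil = [6, 2, 4, 7, 9]
assert dot(delta, y) == (dot(P[f], u2) + dot(rtil, u1)) % q
print("reconstruction and repair verified")
\end{verbatim}

Running the sketch confirms, in particular, that the coefficients $\lambda_i$ recomputed from the formula for $\Lambda_k$ coincide with those quoted above.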
\section{Conclusions}\label{sec:conclusion} This paper considers the problem of constructing MDS regenerating codes achieving the cut-set bound on repair bandwidth, and presents four major results. First, we present the construction of an explicit code, termed the MISER code, that is capable of performing data reconstruction as well as optimal exact-repair of the systematic nodes; the construction is based on the concept of interference alignment. Second, we show that interference alignment is, in fact, necessary to enable exact-repair in an MSR code. Third, using the necessity of interference alignment as a stepping stone, we derive several properties that every exact-repair MSR code must possess. It is then shown that these properties over-constrain the system in the absence of symbol extension for $d<2k-3$, leading to the non-existence of any linear, exact-repair MSR code in this regime. Finally, we present an explicit MSR code for $d=k+1$, suited for networks with low connectivity. This is the first explicit code in the regenerating codes literature that does not impose any restriction on the total number of nodes $n$ in the system.
{ "timestamp": "2010-09-14T02:02:36", "yymm": "1005", "arxiv_id": "1005.1634", "language": "en", "url": "https://arxiv.org/abs/1005.1634", "abstract": "Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. However regenerating codes possess in addition, the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed as exact.The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n-1 >= 2k-1 termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact-repair of systematic nodes, (b) proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k-3 in the absence of symbol extension, and (d) the construction, also explicit, of MSR codes for d = k+1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA and IA is also a crucial component to the non-existence proof for d < 2k-3. To the best of our knowledge, the constructions presented in this paper are the first, explicit constructions of regenerating codes that achieve the cut-set bound.", "subjects": "Information Theory (cs.IT); Distributed, Parallel, and Cluster Computing (cs.DC); Networking and Internet Architecture (cs.NI)", "title": "Interference Alignment in Regenerating Codes for Distributed Storage: Necessity and Code Constructions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692352660529, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.7079584866908938 }
https://arxiv.org/abs/1902.02241
A classical functional generalization of the first Barnes lemma
We give a brief account and a simpler proof of a contour integral formula for the Gauss hypergeometric function. This formula is an alternative to Barnes's integral formula and generalizes the first Barnes lemma.
\section{Introduction} The Gauss hypergeometric function (denoted by $F(a,b;c)$ throughout the present paper) has been studied in depth, and several integral representations can be found in books dealing with special functions (see e.g.~\cite[Sections 8.3, 8.8]{viola}). An important integral was discovered by Barnes (see formula (\ref{barnes}) below), who built an alternative theory of the function $F(a,b;c)$ based on this integral formula. One useful feature of formulas of the type (\ref{barnes}) lies in the possibility of applying the saddle point method to obtain a precise asymptotic estimate of the function involved (see the monograph~\cite{pariskaminski}). Another interesting property of (\ref{barnes}) is that it possesses a wide range of extensions to generalized hypergeometric series (see~\cite[Sections 4.6, 4.7]{slater}). The contour integral formula proved in the present paper is not new (see~\cite[Section 14.53]{whittakerwatson} and~\cite[formula (15.6.7)]{nist}). However, we believe it merits the present short note, because our proof appears to be simpler than that in~\cite{whittakerwatson}, and is independent of Barnes's integral formula (\ref{barnes}). We remark that formula (\ref{maintheorem}) encompasses (and, in the present note, relies on) the first Barnes lemma (see (\ref{firstbarnes}) below), whose proof in~\cite{bailey} is very similar to the proof of Barnes's integral formula (\ref{barnes}). Therefore our contribution allows one to use the residue theorem only in the proof of the first Barnes lemma. After that, one can prove the contour integral formula (\ref{maintheorem}) as in the present paper, and finally combine the two results to prove (\ref{barnes}), with an argument similar to~\cite[Section 14.53]{whittakerwatson}, without applying the residue theorem a second time as in~\cite{whittakerwatson}. Also, our argument is very simple but apparently has been generally overlooked in this context, and may have further applications. The first Barnes lemma is often considered as an integral analogue of the Gauss summation formula \begin{equation} \label{gauss} F(a,b;c;1)= \frac{\Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)}. \end{equation} In addition, formula (\ref{maintheorem}) can be seen as an integral analogue of the formula connecting the values of hypergeometric functions of $z$ and $1-z$ (see (\ref{zetatoneminus}) below), and this is precisely the context where (\ref{maintheorem}) is used in~\cite[Section 14.53]{whittakerwatson}. Let us also point out two formulas close to (\ref{maintheorem}): the first one, usually used in the proof of the second Barnes lemma (see e.g.~\cite[p.43]{bailey}), and the second one, obtained in 1939 by S.O. Rice for his function $H_n(\xi,p;v)={}_3 F_2 (-n,n+1,\xi;1,p;v)$ (see~\cite[Vol I, p.193]{bateman}). We mention these formulas at the end of the present paper. \section{The main result and a few similar formulas} We denote by $(\xi)_n$ the product $\xi(\xi+1)\cdots (\xi+n-1)$ for any complex number $\xi$ and for any $n=1,2,\dots$, and we put $(\xi)_0=1$. We say that $\xi$ is {\it admissible} if $\xi$ is neither a negative integer nor $0$. The Gauss hypergeometric function $F(a,b;c;z)$ is defined over the unit disc $|z|<1$ in the complex plane by the series \begin{equation} \label{hyperdefinit} F(a,b;c;z) = \sum_{k=0}^\infty \frac{(a)_k (b)_k }{k! (c)_k } z^k, \end{equation} where $a$, $b$ and $c$ are complex numbers and $c$ is admissible.
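As a quick numerical illustration of this definition (ours, not part of the original text), the partial sums of the series (\ref{hyperdefinit}) can be compared against a library implementation; here we use Python's \texttt{mpmath}, whose \texttt{hyp2f1} evaluates $F(a,b;c;z)$.

\begin{verbatim}
# Partial sums of the defining series vs. mpmath's hyp2f1 (valid for |z| < 1).
from mpmath import mp

mp.dps = 20
a, b, c, z = mp.mpf('0.3'), mp.mpf('0.4'), mp.mpf('1.5'), mp.mpf('0.2')
term = total = mp.mpf(1)                      # the k = 0 term
for k in range(60):
    # ratio of the (k+1)-st term to the k-th term of the series
    term *= (a + k) * (b + k) / ((k + 1) * (c + k)) * z
    total += term
print(total)
print(mp.hyp2f1(a, b, c, z))                  # agrees to working precision
\end{verbatim}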
Note that the series $F(a,b;c;z)$ may terminate: this happens when $a$ or $b$ is not admissible. In this case the function (\ref{hyperdefinit}) is a polynomial in $z$, and could be defined even if $c$ is not admissible, provided that $c\le\min\{a,b\}$. Let $\Gamma(z)$ be the Euler gamma function, defined in the complex half-plane ${\rm Re }\, z>0$ by \[ \Gamma(z)=\int\limits_0^\infty t^{z-1} e^{-t} {\rm d} t, \] and extended to a meromorphic function in the complex plane, with simple poles at $z=-n$ with residue $\frac{(-1)^n}{n!}$ ($n=0,1,2,\dots$), for example by splitting the integration path $(0,\infty)$ into the union of $(0,1)$ and $(1,\infty)$. Two main properties of the function $\Gamma(z)$ are important in the following: the Stirling formula \[ \log \Gamma(z) = \Big(z-\frac{1}{2}\Big) \log z - z +\frac{1}{2} \log (2\pi) + o(1), \] valid as $|z|\to\infty$ with $|\arg z|<\pi-\delta $ for any $\delta>0$, and the functional equations \[ \Gamma(z) \Gamma(1-z)= \frac{\pi}{\sin \pi z}, \qquad \Gamma(z+1)=z\Gamma(z). \] The Barnes integral representation (see e.g.~\cite[Theorem 2.4.1]{andrewsaskeyroy}) of the function (\ref{hyperdefinit}) is given by \begin{equation} \label{barnes} \frac{\Gamma(a) \Gamma(b)}{\Gamma(c)} F(a,b;c;z) = \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \frac{\Gamma(a+s) \Gamma(b+s) \Gamma(-s)}{\Gamma(c+s)} (-z)^s {\rm d} s, \end{equation} valid under the conditions that $|z|<1$, $z\not=0$ and $|\arg(-z)|<\pi$, and that $a$, $b$ and $c$ are admissible. The path $L$ of integration is curved, if necessary, in such a way that it separates the poles $s=-a-n$ and $s=-b-n$ ($n=0,1,2,\dots$) at the left of $L$ from the poles $s=0,1,2,\dots$ at the right of $L$. In the sequel, we denote by $F(a,b;c;z)$ the analytic function defined for $z\notin [1,\infty)$ either by the series (\ref{hyperdefinit}), if $|z|<1$, or by the integral (\ref{barnes}), if $z\notin [0,\infty)$. The first Barnes lemma (see e.g.~\cite[Theorem 2.4.2]{andrewsaskeyroy}) states that \begin{equation} \label{firstbarnes} \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \Gamma(a+s) \Gamma(b+s) \Gamma(c-s) \Gamma(d-s) \,{\rm d} s = \frac{\Gamma(a+c) \Gamma(a+d) \Gamma(b+c) \Gamma(b+d)}{\Gamma(a+b+c+d)}, \end{equation} provided that $a+c$, $a+d$, $b+c$ and $b+d$ are admissible. Using (\ref{barnes}) and (\ref{firstbarnes}) one can prove (see~\cite[Sect. 14.53]{whittakerwatson}) that \begin{multline} \label{zetatoneminus} F(a,b;c;z) = \frac{\Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)} F(a,b;1+a+b-c;1-z) \\ + \frac{\Gamma(c) \Gamma(a+b-c)}{\Gamma(a) \Gamma(b)} (1-z)^{c-a-b} F(c-a,c-b;1+c-a-b;1-z). \end{multline} Using (\ref{firstbarnes}) we can prove an integral formula that encompasses (\ref{zetatoneminus}), which is a generalization of (\ref{gauss}). For this reason we named formula (\ref{maintheorem}) below a functional generalization of the first Barnes lemma. \begin{theorem}\cite[Section 14.53]{whittakerwatson} Let $a$, $b$, $c$ and $z$ be complex numbers such that $z\notin (-\infty,0]$, and that $a$, $c-a$, $b$, $c-b$ and $c$ are admissible. Then \begin{equation} \label{maintheorem} \frac{\Gamma(a) \Gamma(c-a) \Gamma(b) \Gamma(c-b)}{\Gamma(c)} F(a,b;c;1-z) = \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \Gamma(a+s) \Gamma(b+s) \Gamma(c-a-b-s) \Gamma(-s) z^s {\rm d} s, \end{equation} where the integration path $L$ separates the poles $s=-a-n$ and $s=-b-n$ $(n=0,1,2,\dots)$ on the left of $L$ from the poles $s=n$ and $s=c-a-b+n$ $(n=0,1,2,\dots)$ on the right of $L$. \end{theorem} \begin{proof} Suppose that $|1-z|<1$.
For any $n=0,1,2,\dots$ we have \[ (-1)^n \frac{{\rm d}^n}{{\rm d} z^n} F(a,b;c;1-z) \Big|_{z=1} = \frac{(a)_n (b)_n}{(c)_n} = \frac{\Gamma(a+n)}{\Gamma(a)} \frac{\Gamma(b+n)}{\Gamma(b)} \frac{\Gamma(c)}{\Gamma(c+n)}. \] The integral on the right-hand side of (\ref{maintheorem}) is an analytic function in the domain $|\arg z|<2 \pi$ (see~\cite[Lemma 2.4]{pariskaminski}), which plainly contains the disc $|1-z|<1$. This implies that the derivative of the integral in (\ref{maintheorem}) with respect to $z$ equals \[ \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \Gamma(a+s) \Gamma(b+s) \Gamma(c-a-b-s) \Gamma(-s) s z^{s-1} {\rm d} s, \] this being an integral of the same type as in (\ref{maintheorem}), once it is noticed that $-s \Gamma(-s) = \Gamma(1-s)$, and after substituting the variable $s$ with $t$ by putting $s=1+t$, and then renaming $t$ as $s$. We thus have \begin{multline*} (-1)^n \frac{{\rm d}^n}{{\rm d} z^n} \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \Gamma(a+s) \Gamma(b+s) \Gamma(c-a-b-s) \Gamma(-s) z^s {\rm d} s \Big|_{z=1} \\ = \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \Gamma(a+s) \Gamma(b+s) \Gamma(c-a-b-s) \Gamma(n-s) {\rm d} s \qquad (n=0,1,2,\dots). \end{multline*} By (\ref{firstbarnes}) the last integral equals \[ \frac{\Gamma(c-a) \Gamma(c-b) \Gamma(a+n) \Gamma(b+n)}{\Gamma(c+n)}, \] therefore (\ref{maintheorem}) is proved for $|1-z|<1$, because all the derivatives of both sides of (\ref{maintheorem}) coincide at $z=1$. By analytic continuation (\ref{maintheorem}) holds for $z\notin(-\infty,0]$. \end{proof} From (\ref{maintheorem}), using Stirling's formula, the residue theorem, and changing $z$ into $1-z$, after a few simplifications one easily obtains (\ref{zetatoneminus}), very much as in the standard proofs of (\ref{barnes}) and (\ref{firstbarnes}). Of course, it is possible to go the other way around, which is the usual route to proving (\ref{maintheorem}). Let us finish this short paper with two formulas formally close to (\ref{maintheorem}): the first one (see~\cite[p.43]{bailey}) is \begin{multline*} \sum_{n=0}^\infty \frac{(\alpha_1)_n (\alpha_2)_n (\alpha_3)_n }{n! (\beta_1)_n (\beta_2)_n} = \frac{\Gamma(\beta_1)}{\Gamma(\alpha_1) \Gamma(\beta_1-\alpha_1) \Gamma(\alpha_2) \Gamma(\beta_1-\alpha_2)} \\ \times \frac{1}{2\pi i} \int\limits_{-i \infty}^{i \infty} \Gamma(\alpha_1+s) \Gamma(\alpha_2+s) \Gamma(\beta_1-\alpha_1-\alpha_2-s) \Gamma(-s) F(\alpha_3,-s;\beta_2;1){\rm d} s, \end{multline*} and is used in the standard proof of the second Barnes lemma. As to the second one, let us consider (see~\cite[Vol I, p.193]{bateman}) the sequence of polynomials \[ H_n(\xi,p;v)= \sum_{j=0}^n \frac{(-n)_j (n+1)_j (\xi)_j}{j!^2 (p)_j}\, v^j \qquad (n=0,1,2,\dots). \] Here $\xi$, $p$ and $v$ are complex numbers and $p+n+1$ is admissible. Then \begin{multline*} \Gamma(p-q) \Gamma(q) \Gamma(p-\xi) \Gamma(\xi) H_n(\xi,p;v) \\ = \frac{\Gamma(p)}{2\pi i} \int\limits_{\sigma-i \infty}^{\sigma+i \infty} \Gamma(s) \Gamma(q-s) \Gamma(\xi-s) \Gamma(p-q-\xi+s) H_n (s,q;v) {\rm d} s, \end{multline*} where $0<{\rm Re}\, \sigma<{\rm Re}\, q$ and $0<{\rm Re} (\xi - \sigma) < {\rm Re} (p-q)$. It is worth noticing that the generating function of the sequence $H_n$ is \[ \sum_{n=0}^\infty t^n H_n(\xi,p;v)= \frac{1}{1-t} F(\xi,1/2;p; -4vt (1-t)^{-2}). \]
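As a concrete sanity check (ours, not part of the original argument), formula (\ref{maintheorem}) can be verified numerically with Python's \texttt{mpmath} library. Whenever $\max(-{\rm Re}\,a,-{\rm Re}\,b)<\sigma<\min(0,{\rm Re}(c-a-b))$, the path may be taken as the straight vertical line ${\rm Re}\,s=\sigma$; the four gamma factors then make the integrand decay like $e^{-2\pi|t|}$ along the line, so ordinary quadrature converges rapidly.

\begin{verbatim}
# Numerical check of the main formula on the straight path Re(s) = sigma.
from mpmath import mp

mp.dps = 25
a, b, c, z = mp.mpf('0.3'), mp.mpf('0.4'), mp.mpf('1.5'), mp.mpf('0.8')
sigma = mp.mpf('-0.15')  # separates s = -a-n, -b-n from s = n, c-a-b+n

def integrand(t):
    s = sigma + 1j * t
    return (mp.gamma(a + s) * mp.gamma(b + s)
            * mp.gamma(c - a - b - s) * mp.gamma(-s) * z**s)

rhs = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)  # ds = i dt
lhs = (mp.gamma(a) * mp.gamma(c - a) * mp.gamma(b) * mp.gamma(c - b)
       / mp.gamma(c) * mp.hyp2f1(a, b, c, 1 - z))
assert abs(lhs - rhs) < mp.mpf('1e-18')
print(lhs, rhs)          # both sides agree to high precision
\end{verbatim}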
{ "timestamp": "2019-02-07T02:16:52", "yymm": "1902", "arxiv_id": "1902.02241", "language": "en", "url": "https://arxiv.org/abs/1902.02241", "abstract": "We give a brief account and a simpler proof of a contour integral formula for the Gauss hypergeometric function. Such formula is alternative to Barnes's integral formula and generalizes the first Barnes Lemma.", "subjects": "Complex Variables (math.CV); Number Theory (math.NT)", "title": "A classical functional generalization of the first Barnes lemma", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692352660528, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.7079584866908937 }
https://arxiv.org/abs/1911.04090
A post hoc test on the Sharpe ratio
We describe a post hoc test for the Sharpe ratio, analogous to Tukey's test for pairwise equality of means. The test can be applied after rejection of the hypothesis that all population Signal-Noise ratios are equal. The test is applicable under a simple correlation structure among asset returns. Simulations indicate the test maintains nominal type I rate under a wide range of conditions and is moderately powerful under reasonable alternatives.
\section{Introduction} Sharpe's ``reward-to-variability ratio'' was originally devised to compare the performance of mutual funds. Sharpe found it to be weakly predictive of out-of-sample performance when measured over a \emph{decade} of returns. \cite{Sharpe:1966} Early research on the \txtSR, as it came to be known\footnote{Although it was described over a decade earlier by A. D. Roy. \cite{roySafety1952}}, ignored its statistical nature, treating it like an observable population parameter, though this was soon remedied. \cite{CambridgeJournals:4493808,jobsonkorkie1981,lo2002} More recently, statistical procedures have been proposed to test whether the population \txtSRs of several assets (\eg mutual funds, ETFs, hedge funds, \etc) are equal. \cite{Leung2008,wright2014} Here we propose a test to be used to compare pairwise differences after application of such a test. \section{The test} Suppose one has observed \ssiz \iid samples of some $\nlatf$-vector \vreti, representing the returns of $\nlatf$ different ``assets.'' We imagine these assets to be different mutual funds, or trading strategies, ETFs, \etc From the sample one computes the \txtSR of each asset, resulting in a $\nlatf$-vector, \svsr. One natural question to ask is whether the \txtSNRs (the population analogues of the \txtSRs) of the assets are all equal. One can test the hypothesis of equal \txtSNRs via the tests of Leung and Wong, or Wright, Yam and Yung. \cite{Leung2008,wright2014} The test of Wright \etal, for example, uses asymptotic normality of \svsr to construct a statistic following a $\chi^2$ distribution under the null. In the case where one rejects the null hypothesis of equality, one seeks a \posthoc test to determine which pairs of the \nlatf assets have different \txtSNRs. Testing the equality of \txtSNRs is analogous to the classical procedure for testing equality of means via ANOVA. \cite{bain1992introduction} The \posthoc procedure that classically follows a rejection of the null in ANOVA is Tukey's range test, sometimes called the ``honest significant difference'' (HSD) test. \cite{tukey_hsd,bretz2016multiple} In the ANOVA, and in Tukey's HSD, the observed quantity is assumed to have identical variance among all individuals, but potentially different means in different groups. For this reason, the variance is estimated by pooling all observations. It is unnecessary to assume equal volatility of the returns of the $\nlatf$ different assets when testing the \txtSNR. While this simplifies our \posthoc test somewhat, typically in the testing of asset returns one observes them contemporaneously, and they are generally correlated. Tukey's HSD proceeds by computing an upper quantile of the \emph{range} of independent normals divided by a rescaled $\chi$ variable. When the sample means of two groups differ by more than this amount, one rejects the null that they are equal. Our test will perform a similar computation. Previously the author showed that when returns are drawn from a multivariate normal distribution with correlation \RMAT, then \begin{equation} \svsr \approx \normlaw{\pvsnr,\oneby{\ssiz}\wrapParens{\RMAT + \frac{1}{2} \Mdiag{\pvsnr} \wrapParens{\RMAT\hadm\RMAT} \Mdiag{\pvsnr}}}, \label{eqn:apx_srdist_gaussian} \end{equation} where \pvsnr is the vector of \txtSNRs and \ssiz is the sample size. \cite{pav2019maxsharpe} Note that the approximate covariance matrix here generalizes the well-known standard error of the scalar \txtSR.
\cite{Johnson:1940,jobsonkorkie1981,lo2002,pav_ssc} In the case of the small \txtSNRs likely to be encountered in practice, that approximation may be further simplified to \begin{equation} \svsr \approx \normlaw{\pvsnr,\oneby{\ssiz}\RMAT}. \label{eqn:apx_srdist_simple} \end{equation} Then under the null hypothesis that $\pvsnr=\pvsnr[0]$, one observes \begin{equation} \vect{z} = \sqrt{\ssiz}\ichol{\RMAT} \wrapParens{\svsr - \pvsnr[0]} \approx \normlaw{\vzero,\eye}, \label{eqn:bonf_zform_simple} \end{equation} where $\ichol{\RMAT}$ is the inverse of the (symmetric) square root of $\RMAT$. As previously, we assume a simple rank-one form for the correlation matrix, \begin{equation} \label{eqn:simple_RMAT} \RMAT=\makerho{\rho}{\wrapParens{1-\rho}}, \end{equation} where $\abs{\rho} \le 1$. \cite{pav2019maxsharpe} Under this assumption, it is simple to show that \begin{equation} \label{eqn:simple_RMAT_ichol_simple} \ichol{\RMAT} = \makerho{c}{\wrapParens{1-\rho}^{-1/2}}, \end{equation} for some constant $c$. Now we consider the difference in \txtSRs of two assets, indexed by $i$ and $j$. Let $\vect{v}=\basev[i]-\basev[j]$, where $\basev[i]$ is the \kth{i} column of the identity matrix. From \eqnref{bonf_zform_simple} we have \begin{align*} \trAB{\vect{v}}{\vect{z}} &= \trAB{\vect{v}}{\sqrt{\ssiz}\ichol{\RMAT} \wrapParens{\svsr - \pvsnr[0]}},\nonumber\\ &= \sqrt{\ssiz}\trAB{\vect{v}}{\wrapBracks{\makerho{c}{\wrapParens{1-\rho}^{-1/2}}}} \wrapParens{\svsr - \pvsnr[0]},\\ &= \sqrt{\frac{\ssiz}{1-\rho}}\trAB{\vect{v}}{\svsr}. \end{align*} Here we have used that $\trAB{\vect{v}}{\vone} = 0$ and that, under the null hypothesis, \pvsnr[0] is some constant times \vone. Thus \begin{equation} \ssr[i] - \ssr[j] = \sqrt{\frac{1-\rho}{\ssiz}} \wrapParens{z_i - z_j}. \end{equation} Now note that \vect{z} is distributed as a standard multivariate normal. So the \emph{range} of \svsr, which is to say $\max_{i,j}\wrapParens{\ssr[i] - \ssr[j]}$, is distributed as $\sqrt{\wrapParens{1-\rho}/\ssiz}$ times the range of a standard $\nlatf$-variate normal. Stated as a hypothesis test, we reject the null of equal \txtSNRs when \begin{equation} \label{eqn:hyp_test_inf_df} \max_{i,j} \abs{\ssr[i] - \ssr[j]} \ge HSD = \qtuk{1-\typeI}{\nlatf}{\infty} \sqrt{\frac{(1-\rho)}{\ssiz}}, \end{equation} an event which, under the null, occurs with probability \typeI; here $\qtuk{1-\typeI}{k}{l}$ is the upper $\typeI$-quantile of the Tukey distribution with $k$ and $l$ degrees of freedom. In the \textsc{R} language, this quantile may be computed via the \texttt{qtukey} function. \cite{Rlang,OdehEvans} With $l=\infty$, the cutoff $HSD$ is the rescaled upper \typeI quantile of the range of \nlatf independent Gaussians. That is, $\qtuk{1-\typeI}{\nlatf}{\infty}$ is the number such that $$ 1-\typeI = \nlatf \int_{-\infty}^{\infty} \dnorm[x]\wrapParens{\pnorm[x + \qtuk{1-\typeI}{\nlatf}{\infty}] - \pnorm[x]}^{\nlatf-1} \dx. $$ We note that the approximation of \eqnref{apx_srdist_gaussian} may be too coarse for the computation of the $HSD$ cutoff. Even if the covariance given there is approximately correct, it is likely that the distributional shape of $\svsr$ is far enough from multivariate normal that we cannot use Tukey's distribution for a cutoff, especially when $\ssiz$ is small and $\nlatf$ is large. In that case, one is tempted to heuristically compare the observed range to \begin{equation} \label{eqn:hsd_n_df} HSD=\qtuk{1-\typeI}{\nlatf}{\ssiz-1}\sqrt{\frac{(1-\rho)}{\ssiz-1}}. \end{equation}
The reasoning here is that we are essentially computing the range of (non-independent) $t$ statistics, up to scaling, which is almost the same as the Tukey distribution, namely the range of normals divided by a pooled $\chi$ variable. In our testing below we refer to the cutoff of \eqnref{hyp_test_inf_df} as the ``$df=\infty$'' cutoff and to the cutoff of \eqnref{hsd_n_df} as the ``$df=\ssiz-1$'' cutoff. \paragraph*{Bonferroni Cutoff: } We note that an alternative calculation provides a very similar cutoff value. Consider two assets with correlation $\rho$, and suppose the \txtSNRs of the two assets are, respectively, $\psnr\wrapParens{1 + \epsilon}$ and \psnr. The difference in \txtSRs can then be shown to be approximately normal: \cite{pav_ssc} \begin{equation} \wrapBracks{\ssr[1] - \ssr[2]} \rightsquigarrow \normlaw{\epsilon\psnr,\frac{2}{\ssiz} \wrapParens{1 - \rho} + \frac{\psnrsq}{2\ssiz}\wrapParens{1 + \wrapParens{1+\epsilon}^2 - 2\rho^2\wrapParens{1+\epsilon}} }. \label{eqn:del_correlated_sr} \end{equation} Assuming that $\psnrsq / \ssiz$ will be very small for most practical work, one can compute the alternative cutoff, a ``Bonferroni Cutoff,'' as $$ BC = \sqrt{\frac{2 \wrapParens{1-\rho}}{\ssiz}} \qnorm{1 - \typeI/{\nlatf \choose 2}}, $$ where $\qnorm{\typeI}$ is the $\typeI$ quantile of the standard normal distribution. This cutoff is based on a Bonferroni correction that recognizes we are performing $\nlatf \choose 2$ pairwise comparison tests. The cutoff $BC$ is typically very similar to $HSD$ (for $df=\infty$) or slightly smaller (and is easier to compute). We note that since $BC$ is based on a normal approximation, it may suffer from the same issues as the $HSD$ cutoff does for small samples. However, there is hope that one can compute an exact small-\ssiz Bonferroni cutoff. \paragraph*{Arbitrary correlation structure: } The test outlined above is strictly only applicable to the rank-one correlation matrix, $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}}$. To apply the test to assets with arbitrary correlation matrices, one would like to appeal to a stochastic dominance result. For example, if one could adapt Slepian's lemma to the distribution of the \emph{range}, then the above analysis could be applied with $\rho$ taken to be the smallest off-diagonal correlation, to give a test with maximum type I rate $\typeI$. However, it is not immediately clear that Slepian's lemma can be so modified. \cite{slepian1962one,zeitouni2015gaussian,yin2019stochastic} The Bonferroni Cutoff, on the other hand, is easily adapted to this kind of worst-case analysis. \section{Examples} \subsection{Simulations under the null} \paragraph{Basic Simulations} We spawn 4 years of daily data (252 days per year) from 16 assets, each with \txtSNR of $1\yrto{-\halff}$. Returns are multivariate normal with correlation $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}},$ for $\rho=0.8$. We compute the \txtSR of the simulated returns, $\svsr$, then compute the range $\max_{i,j} \ssr[i] - \ssr[j]$. We repeat this experiment 5,000 times. In \figref{null_basic_qq_plot}, we Q-Q plot these simulated ranges of the \txtSR against the theoretical quantile function $$ \sqrt{\frac{1-\rho}{\ssiz}}\qtuk{\cdot}{\nlatf}{\infty}. $$ We see good agreement between theoretical and actual values, with little deviation from the $y=x$ line.
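A condensed version of this experiment can be run in Python. The sketch below is ours, not the paper's simulation code (the figures suggest the original was produced with R and knitr); it assumes SciPy 1.7 or later, whose \texttt{scipy.stats.studentized\_range} plays the role of R's \texttt{qtukey}, and applies the $df=\ssiz-1$ cutoff of \eqnref{hsd_n_df}.

\begin{verbatim}
# Null simulation: ranges of Sharpe ratios vs. the transformed Tukey cutoff.
import numpy as np
from scipy.stats import studentized_range

rng = np.random.default_rng(42)
n_obs, n_assets, rho = 4 * 252, 16, 0.8
snr = 1.0 / np.sqrt(252)                      # 1/sqrt(yr), in daily units
cov = rho * np.ones((n_assets, n_assets)) + (1 - rho) * np.eye(n_assets)
mu = np.full(n_assets, snr)                   # unit volatility => mean = SNR
chol = np.linalg.cholesky(cov)

n_sim = 2000
ranges = np.empty(n_sim)
for i in range(n_sim):
    x = mu + rng.standard_normal((n_obs, n_assets)) @ chol.T
    sr = x.mean(axis=0) / x.std(axis=0, ddof=1)
    ranges[i] = sr.max() - sr.min()

# df = n - 1 cutoff: q(0.95, 16, 1007) * sqrt((1 - rho)/(n - 1))
q = studentized_range.ppf(0.95, n_assets, n_obs - 1)
hsd = q * np.sqrt((1 - rho) / (n_obs - 1))
print("empirical type I rate:", (ranges >= hsd).mean())  # near 0.05
\end{verbatim}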
\begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_null_basic_qq_plot-1} \caption[The quantiles of the range of \txtSR from 5,000 simulations are plotted against a transformed Tukey distribution, $\sqrt{(1-\rho)/\ssiz}\, \qtuk{\cdot}{\nlatf}{\infty}$]{The quantiles of the range of \txtSR from 5,000 simulations are plotted against a transformed Tukey distribution, $\sqrt{(1-\rho)/\ssiz}\, \qtuk{\cdot}{\nlatf}{\infty}$. The points show little deviation from the plotted $y=x$ line. }\label{fig:null_basic_qq_plot} \end{figure} \end{knitrout} We then convert these simulated ranges to p-values via the \texttt{ptukey} function in \textsc{R}, using the $df=\infty$ cutoff and the actual $\rho$. We Q-Q plot these putative p-values against a uniform law in \figref{null_basic_pp_plot}, and again find very good agreement. To check the tails we have transformed the p-values to $p \mapsto \abs{2p - 1}$ and plotted in log-log scale. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_null_basic_pp_plot-1} \caption{The computed p-values from 5,000 simulations are plotted against a uniform law, visually confirming that the p-values are nearly uniform. Simulations use the exact $\rho$ to compute the p-values via \code{ptukey}. We transform the p-values and plot $\abs{2p - 1}$ in log-log space to emphasize the tails. The points show little deviation from the plotted $y=x$ line. }\label{fig:null_basic_pp_plot} \end{figure} \end{knitrout} \clearpage \paragraph{Varying \ssiz and \nlatf:} Next we perform the same kind of simulations, but vary the number of days observed in each simulation, \ssiz, as well as the number of different assets, \nlatf. We let the former vary from 20 to 1,280 measured in days, and the latter vary from 8 to 32. We take $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}},$ for $\rho=0.8$ and set the \txtSNR to $1\yrto{-\halff}$. We assume $252$ days per year for annualizing the \txtSR. For each set of $50,000$ simulations, we compute the empirical type I rate at the nominal $0.05$ level by comparing the range to the HSD cutoff. We tabulate rejections using both the $df=\infty$ and $df=\ssiz-1$ cutoffs. We plot that type I rate against $\ssiz$ in \figref{null_day_scan_plot} for the different values of $\nlatf$. For the $df=\infty$ cutoff, the procedure is apparently anticonservative, yielding too many type I errors, when the sample size is small and the number of assets is large. For the $df=\ssiz-1$ cutoff, however, the nominal type I rate is approximately achieved. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_null_day_scan_plot-1} \caption[The empirical type I rate at the nominal 0.05 level is plotted against the number of days in each simulation, for different values of $\nlatf$]{The empirical type I rate at the nominal 0.05 level is plotted against the number of days in each simulation, for different values of $\nlatf$. Simulations use the exact $\rho$ to perform the hypothesis test. The two facets show rejection rates under the $df=\ssiz-1$ and $df=\infty$ cutoffs. 
When using the $df=\infty$ cutoff, the test is anti-conservative for the ``large \nlatf, small \ssiz'' case, but the nominal rate is nearly achieved for the $df=\ssiz-1$ cutoff.}\label{fig:null_day_scan_plot} \end{figure} \end{knitrout} \paragraph{Varying $\rho$:} We next perform the same simulations under the null, with $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}},$ but scanning through $\rho$. We set $\ssiz=1,008$ days, $\nlatf=16$, and set the \txtSNR to $1\yrto{-\halff}$. We compute the empirical type I rate at the nominal $0.05$ level for each set of $50,000$ simulations. We plot that type I rate against $\rho$ in \figref{null_rho_scan_plot}. For these values of $\ssiz, \nlatf$, the procedure achieves a near-nominal type I rate, and does not vary in a systematic way with $\rho$. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_null_rho_scan_plot-1} \caption[The empirical type I rate at the nominal 0.05 level is plotted against the correlation, $\rho$]{The empirical type I rate at the nominal 0.05 level is plotted against the correlation, $\rho$. Simulations use the exact $\rho$ to perform the hypothesis test, and the $df=\ssiz-1$ cutoff. }\label{fig:null_rho_scan_plot} \end{figure} \end{knitrout} \paragraph{Feasible Estimator, Varying $\rho$:} In the simulations above we have used the actual $\rho$ in computing the threshold for rejection of the null. We repeat the experiments using a \emph{feasible} test where we estimate $\rho$ from the sample. We compute the correlation of returns, then take the median value of the upper triangle of the correlation matrix. In the first set of simulations, the true correlation matrix follows $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}}.$ We set $\ssiz=1,008$ days, $\nlatf=16$, and set the \txtSNR to $1\yrto{-\halff}$. We compute the empirical type I rate at the nominal $0.05$ level for each set of $50,000$ simulations. We plot that type I rate against $\rho$ in \figref{null_feas_rho_scan_plot}. For these values of $\ssiz, \nlatf$, the procedure achieves a near-nominal type I rate, and does not appear to suffer from having estimated the $\rho$. In fact the plot greatly resembles \figref{null_rho_scan_plot} where we have used the actual $\rho$. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_null_feas_rho_scan_plot-1} \caption[The empirical type I rate at the nominal 0.05 level is plotted against the correlation, $\rho$]{The empirical type I rate at the nominal 0.05 level is plotted against the correlation, $\rho$. Simulations use an \emph{estimated} $\rho$ to perform the hypothesis test, and the $df=\ssiz-1$ cutoff. }\label{fig:null_feas_rho_scan_plot} \end{figure} \end{knitrout} \paragraph{Feasible Estimator, Misspecified Model, Varying $\rho$:} We repeat those simulations, estimating the $\rho$ from the sample, but now we let the correlation matrix take an ``AR(1)'' structure. That is, we let $\RMAT[i,j] = \rho^{\abs{i-j}}$, and vary $\rho$. Again we have $\ssiz=1,008$ days and $\nlatf=16$, and the \txtSNR is equal to $1\yrto{-\halff}$. We compute the empirical type I rate at the nominal $0.05$ level for each set of $50,000$ simulations. For these simulations, we also record the type I rate when the $\rho$ is not estimated, but instead assumed to be $0$.
Given that $\rho=0$ forms a kind of `stochastic lower bound', we expect the procedure to be conservative when performed this way. Indeed we see in the plot that the empirical type I rate decreases to zero as $\rho$ increases. For the case where we take the median sample correlation as the estimate of $\rho$, the procedure is somewhat conservative for small $\rho$, then anti-conservative for large $\rho$. This is not surprising: for large $\rho$, the median element of \RHAT will be fairly large, even though many pairs of assets are only weakly correlated. A more robust heuristic for estimating the $\rho$ is needed. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_null_feas_ar1_rho_scan_plot-1} \caption[The empirical type I rate at the nominal 0.05 level is plotted against the correlation, $\rho$, for simulations where the correlation follows an AR(1) structure]{The empirical type I rate at the nominal 0.05 level is plotted against the correlation, $\rho$, for simulations where the correlation follows an AR(1) structure. Simulations use an \emph{estimated} $\rho$ to perform the hypothesis test, and the $df=\ssiz-1$ cutoff. We include separate lines for the cases where $\rho$ is estimated, and for where it is assumed to equal $0$. For the estimated $\rho$, the procedure is conservative for small and moderate $\rho$, but anticonservative for $\rho$ near 1. When $\rho=0$ is assumed, the procedure is increasingly conservative in $\rho$. }\label{fig:null_feas_ar1_rho_scan_plot} \end{figure} \end{knitrout} \clearpage \subsection{Simulations under the alternative} We next perform the same simulations under the alternative. It is somewhat difficult to quantify the power of this procedure because the procedure can reject multiple nulls for a given experiment. Indeed, in the simulations under the null above, we analyzed the rate of \emph{any} rejections for the multiple comparisons performed in a single simulation. \paragraph{Under the alternative, one good:} In the first set of simulations we let \pvsnr have a single non-zero value, call it \psr, and vary that \psr. We then compute, as the `range', the \txtSR of the single good asset minus the minimum \txtSR of the $\nlatf - 1$ remaining assets. Because we are only testing $\nlatf - 1$ comparisons, rather than ${\nlatf \choose 2}$, we expect a rejection rate below the nominal type I rate when $\psr=0$. Moreover, we are performing a one-sided test. As such it may be more natural to compare the rejection rate to $\typeI/2$. We take $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}},$ letting $\rho$ vary from 0 to 0.9; we set $\ssiz=1,008$ days, $\nlatf=16$, and let \psnr vary from $0\yrto{-\halff}$ to $1.5\yrto{-\halff}$. We compute the rejection rate at the nominal $0.05$ level for each set of $10,000$ simulations. We plot that (true) rejection rate against $\psnr$ in \figref{alt_sim_onehi_scan_plot}. For these values of $\ssiz, \nlatf$, the procedure is fairly weak, only achieving power of one half for large \psnr or highly correlated assets. It is not surprising that the power is increasing in $\rho$: one expects less spread among the assets for higher $\rho$, thus a true difference in \txtSNR is more easily detected. This same effect is visible in the paired test for equality of \txtSNRs.
\cite{pav_ssc} \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_alt_sim_onehi_scan_plot-1} \caption[The empirical rejection rate at the nominal 0.05 level is plotted against the correlation, $\rho$]{The empirical rejection rate at the nominal 0.05 level is plotted against the correlation, $\rho$. The population consists of one good asset with \txtSNR equal to \psnr, and the remainder with zero \txtSNR. Rejection is based on the \txtSR of the single good asset minus the minimum \txtSR of the remaining assets. Simulations use the exact $\rho$ to perform the hypothesis test, and the $df=\ssiz-1$ cutoff. We plot a horizontal line at half the nominal type I rate, 0.025, because we are performing a one-sided test. }\label{fig:alt_sim_onehi_scan_plot} \end{figure} \end{knitrout} \paragraph{Under the alternative, half good:} We repeat those experiments, but set half the $16$ assets to have \txtSNR equal to \psnr, and the rest to have zero \txtSNR. We compute, as the `range', the maximum \txtSR of the good assets minus the minimum \txtSR of the remaining assets. We are effectively testing $\wrapParens{\nlatf / 2}^2$ comparisons, rather than ${\nlatf \choose 2}$, so we again expect a rejection rate below the nominal type I rate when $\psr=0$. As above we take $\RMAT=\makerho{\rho}{\wrapParens{1-\rho}},$ let $\rho$ vary from 0 to 0.9, set $\ssiz=1,008$ days and $\nlatf=16$, and let \psnr vary from $0\yrto{-\halff}$ to $1.5\yrto{-\halff}$. We compute the rejection rate at the nominal $0.05$ level for each set of $10,000$ simulations. We plot that (true) rejection rate against $\psnr$ in \figref{alt_sim_halhi_scan_plot}. For these values of $\ssiz, \nlatf$, the procedure is again fairly underpowered, with higher power for more correlated assets. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.646\textwidth]{posthoc_alt_sim_halhi_scan_plot-1} \caption[The empirical rejection rate at the nominal 0.05 level is plotted against the correlation, $\rho$]{The empirical rejection rate at the nominal 0.05 level is plotted against the correlation, $\rho$. The population consists of half good assets with \txtSNR equal to \psnr, and the remainder with zero \txtSNR. Rejection is based on the maximum \txtSR of the good assets minus the minimum \txtSR of the remaining assets. Simulations use the exact $\rho$ to perform the hypothesis test, and the $df=\ssiz-1$ cutoff. We plot a horizontal line at half the nominal type I rate, 0.025, because we are performing a one-sided test. }\label{fig:alt_sim_halhi_scan_plot} \end{figure} \end{knitrout} \clearpage \subsection{Real Assets} We now apply the technique to real asset returns. \paragraph{Five Industry Portfolios:} We consider the 5 industry portfolios, whose returns are computed and distributed by French. \cite{ind_5_def} The dataset consists of 1104 months of returns, from Jan 1927 to Dec 2018. The returns are highly correlated, and the correlation matrix is likely well modeled by the form $\makerho{\rho}{\wrapParens{1-\rho}},$ with $\rho$ estimated as approximately $0.8$. The \txtSRs range from $0.485\yrto{-\halff}$ for Other to $0.667\yrto{-\halff}$ for Healthcare.
First we perform the hypothesis test of equality of \txtSNRs, as proposed by Wright \etal \cite{wright2014} We compute a statistic of $12.2$, which should be distributed as a $\chi^2\wrapParens{4}$ under the null. \cite{pav_ssc} This corresponds to a p-value of $0.016$, and we reject the null of equality of all \txtSNRs. Using the $df=\ssiz-1$ formulation and the estimated $\rho$, we compute $HSD = 0.18\yrto{-\halff}$ for $\typeI = 0.05$, and narrowly reject the equality of \txtSNRs for Other and Healthcare. In \figref{mind5_lolly_plot}, we plot these \txtSRs, along with error bars at plus and minus one $HSD$. \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.390\textwidth]{posthoc_mind5_lolly_plot-1} \caption[The annualized \txtSR of French's 5 industry portfolios are plotted, as computed on monthly returns from Jan 1927 to Dec 2018]{The annualized \txtSR of French's 5 industry portfolios are plotted, as computed on monthly returns from Jan 1927 to Dec 2018. We plot error bars at $\pm HSD$ for $\typeI=0.05$. We narrowly reject equality of the \txtSNR of Other and Healthcare. }\label{fig:mind5_lolly_plot} \end{figure} \end{knitrout} \paragraph{Sharpe's 34 Mutual Funds: } We consider the returns of the 34 mutual funds described by Sharpe in his original paper. \cite{Sharpe:1966} We transcribed the annualized percent return and standard deviation values from Sharpe's Table I. In his paper, Sharpe computed the ``reward-to-variability ratio'' of each using a fixed rate of $3\%$; however, we compute the \txtSR without subtracting a fixed rate. The \txtSRs range from $0.549\yrto{-\halff}$ for Incorporated Investors to $1.087\yrto{-\halff}$ for American Business Shares. We do not have access to the series of returns, and cannot estimate the correlation structure. Somewhat optimistically, we make the wild guess $\rho=0.85$. Based on this value, and setting $\typeI=0.05$, we compute $HSD = 0.68\yrto{-\halff}$, and we fail to reject the null hypothesis that all \txtSNRs are equal. In \figref{sr34_lolly_plot}, we plot these \txtSRs, along with error bars at $\pm HSD$. Given the lack of separation of the funds, it is curious that Sharpe found correlation between the in-sample and out-of-sample \txtSRs of his funds. \cite{Sharpe:1966} \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[h] \includegraphics[width=0.975\textwidth,height=0.812\textwidth]{posthoc_sr34_lolly_plot-1} \caption{The annualized \txtSR of Sharpe's 34 mutual funds are plotted. Returns are from the decade 1954-1963. \cite{Sharpe:1966} We plot error bars at $\pm HSD$. We fail to reject the nulls that all pairwise differences are zero. }\label{fig:sr34_lolly_plot} \end{figure} \end{knitrout} \section{Future Work} A number of issues remain outstanding: \begin{enumerate} \item The heuristic use of the $df=\ssiz-1$ cutoff requires theoretical justification. \item A stochastic inequality like Slepian's lemma for ranges would allow one to apply the test using a lower bound $\rho$ to achieve maximum type I rate. \item Should we expect the Tukey HSD cutoff and the Bonferroni Cutoff to be nearly equal, or will one dominate the other under certain conditions? \item Can we quantify the power of the test? \item Though we suspect it cannot, can the power of the test be improved? \end{enumerate} \bibliographystyle{plainnat}
{ "timestamp": "2019-11-12T02:22:10", "yymm": "1911", "arxiv_id": "1911.04090", "language": "en", "url": "https://arxiv.org/abs/1911.04090", "abstract": "We describe a post hoc test for the Sharpe ratio, analogous to Tukey's test for pairwise equality of means. The test can be applied after rejection of the hypothesis that all population Signal-Noise ratios are equal. The test is applicable under a simple correlation structure among asset returns. Simulations indicate the test maintains nominal type I rate under a wide range of conditions and is moderately powerful under reasonable alternatives.", "subjects": "Methodology (stat.ME); Portfolio Management (q-fin.PM)", "title": "A post hoc test on the Sharpe ratio", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692332287862, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.7079584852141397 }
https://arxiv.org/abs/1903.00554
$t$-Pebbling in $k$-connected diameter two graphs
Graph pebbling models the transportation of consumable resources. As two pebbles move across an edge, one reaches its destination while the other is consumed. The $t$-pebbling number is the smallest integer $m$ so that any initially distributed supply of $m$ pebbles can place $t$ pebbles on any target vertex via pebbling moves. The 1-pebbling number of diameter two graphs is well-studied. Here we investigate the $t$-pebbling number of diameter two graphs under the lens of connectivity.
\section{Introduction}\label{s:intro} Graph pebbling has an interesting history, with many challenging open problems. Calculating pebbling numbers of graphs is a well-known computationally difficult problem. See \cite{Hurl1,Hurl2} for more background. A {\it configuration} $C$ of pebbles on the vertices of a connected graph $G$ is a function $C : V(G){\rightarrow} {\mathbb N}$ (the nonnegative integers), so that $C(v)$ counts the number of pebbles placed on the vertex $v$. We write $|C|$ for the {\it size} $\sum_v C(v)$ of $C$; i.e. the number of pebbles in the configuration. A {\it pebbling step} from a vertex $u$ to one of its neighbors $v$ reduces $C(u)$ by two and increases $C(v)$ by one. Given a specified {\it root} vertex $r$ we say that $C$ is $t$-{\it fold} $r$-{\it solvable} if some sequence of pebbling steps places $t$ pebbles on $r$. We are concerned with determining $\pi_t(G,r)$, the minimum positive integer $m$ such that every configuration of size $m$ on the vertices of $G$ is $t$-fold $r$-solvable. The $t$-{\it pebbling number} of $G$ is defined to be $\pi_t(G) = \max_{r\in V(G)} \pi_t(G,r)$. We avoid writing $t$ when $t=1$. The pebbling number of diameter 2 graphs was determined and characterized by the following theorem. For the purpose of the present work, it is enough to know that a pyramidal graph has no {\it universal} vertex (a vertex adjacent to every other vertex) and has connectivity 2. \begin{thm} \label{t:Diam2} \cite{ClHoHu,PaSnVo} For a diameter 2 graph $G$ with connectivity $k$ and $n$ vertices, $\pi(G)=n+1$ if and only if $k=1$ or $G$ is pyramidal. Otherwise (i.e. $k=2$ and $G$ is not pyramidal, or $k\geq3$), $\pi(G)=n$. \end{thm} In contrast, other than the following bound, little is known about the $t$-pebbling number of diameter 2 graphs. \begin{thm}\label{t:d2bound}\cite{HeHeHu} If $G$ is a diameter 2 graph on $n$ vertices then $\pi_t(G) \le \pi(G)+4t-4$. Moreover, $\liminf_{t{\rightarrow}\infty}\pi_t(G)/t = 4$. \end{thm} The goal of the present paper is to determine the exact $t$-pebbling number of a large subfamily of diameter 2 graphs by considering their connectivity. Define ${\cal G}(n,k)$ to be the set of all $k$-connected graphs on $n$ vertices having a universal vertex. Set $f_t(n,k) = n + 4t - k - 2$ and $h_t(n)=n+2t-2$. Notice that $h_t(n)\ge f_t(n,k)$ if and only if $k\ge 2t$. Define $p_t(n,k)=\max\{f_t(n,k),h_t(n)\}$. The main result is the following theorem, which is proved in Section \ref{s:thm}. \begin{thm}\label{Thm} If $G\in{\cal G}(n,k)$ then $\pi_t(G)=p_t(n,k)$. \end{thm} We observe from our result that, for any fixed $t$, in the family of graphs with a universal vertex, there are graphs whose $t$-pebbling number is much lower than the bound given by Theorem \ref{t:d2bound}, and also graphs attaining that bound: when $k\ge 2t$ we have $p_t(n,k)=(n+4t-4)-2(t-1)$; when $k< 2t$ we have $p_t(n,k)=(n+4t-4)-(k-2)$. It will be useful to take advantage of Menger's Theorem. The version of Menger's theorem that we use is the following (exercise 4.2.28 in \cite{West}). \begin{thm} \label{Menger} {\bf (Menger's Theorem)} \cite{West} Let $G$ be a $k$-connected graph and $S=\{v_1,\ldots,v_k\}$ be a multiset of vertices of $G$. For any $r\not\in S$ there are $k$ pairwise-internally-disjoint paths, one from each $v_i$ to $r$. \end{thm} \section{Technical Lemmas} \label{s:tech} We begin with a lemma that is used to prove lower bounds on the pebbling number of a graph by helping to show that certain configurations are unsolvable.
For a vertex $v$, define its {\it open neighborhood} $N(v)$ to be the set of vertices adjacent to $v$, and its {\it closed neighborhood} $N[v]=N(v)\cup\{v\}$. We say that a vertex $y$ is a {\it junior sibling of} a vertex $x$ (or, more simply, {\it junior to} $x$) if $N(y)\subseteq N[x]$, and that $y$ is a {\it junior} if it is junior to some vertex $x$. \begin{lem} \label{l:JuniorRemoval} {\bf (Junior Removal Lemma)} \cite{AGH3} Given the graph $G$ with root $r$ and $t$-fold $r$-solvable configuration $C$, suppose that $y$ is a junior with $C(y)=0$. Then $C$ (restricted to $G-y$) is $t$-fold $r$-solvable in $G-y$. \end{lem} Given a configuration $C$ of pebbles, we say that a path $Q=(r,q_1,\ldots,q_j)$ with $j\geq 1$ is a {\it slide} from $q_j$ to $r$ if no $q_i$ is empty (that is, every $q_i$ carries at least one pebble) and $q_j$ has at least two pebbles. A {\it potential move} is a pair of pebbles sitting on the same vertex. To say that $C$ has $j$ potential moves means that the $j$ pairs are pairwise disjoint. For example, any configuration on 5 vertices with values $0,1,1,2,$ and $7$ has 4 potential moves. The {\it potential} of $C$, ${\sf pot}(C)$, is the maximum $j$ for which $C$ has $j$ potential moves. Because every solution that requires a pebbling move uses a potential move, the following fact is evident. \begin{fct} \label{f:pot} Let $r$ be an empty vertex in a configuration $C$ with ${\sf pot}(C)<t$. Then $C$ is not $t$-fold $r$-solvable. \end{fct} Basic counting yields the following lemma. \begin{lem}\label{l:PotLem} {\bf (Potential Lemma)} Let $G$ be a graph on $n$ vertices. If $C$ is a configuration on $G$ of size $n+y$ ($y\ge 0$) having $z$ zeros, then ${\sf pot}(C)\ge\lceil \frac{y+z}{2}\rceil$. \end{lem} A nice application of the Potential Lemma is the following, which we will use repeatedly in the arguments that follow. \begin{lem}\label{l:zeros} {\bf (Slide Lemma)} Let $r$ be a vertex of a $k$-connected graph $G$. Let $C$ be a configuration on $G$ of size $n+y$ ($y\geq 0$) with $z$ zeros. If $\lceil\frac{y+3z}{2}\rceil\le k$ then $C$ is $\lceil\frac{y+z}{2}\rceil$-fold $r$-solvable. \end{lem} {\noindent\bf Proof.\ \ } Set $p=\lceil\frac{y+z}{2}\rceil$. By Lemma \ref{l:PotLem} we can choose a set $P$ of $p$ potential moves. Note that the hypothesis implies that $p\le k-z$. Delete all non-root zeros to obtain $G'$. Since $G$ is $k$-connected and $p\le k-z$, the graph $G'$ is $p$-connected. Thus Menger's Theorem \ref{Menger} implies that there are $p$ pairwise disjoint slides in $G'$ from $P$ to $r$, which yield $p$ $r$-solutions. {\hfill $\Box$\bigskip} \section{Proof of Theorem \ref{Thm}} \label{s:thm} The proof will follow from Lemmas \ref{l:LowerBd} and \ref{l:upper_bound}, below. Let $u$ be a universal vertex of a graph $G\in{\cal G}(n,k)$. If $C$ is a configuration of size $n+2t-3$ with $u$ empty and every other vertex odd then ${\sf pot}(C)=t-1$, and so $C$ is not $t$-fold $u$-solvable. Hence $\pi_t(G,u)\ge n+2t-2$.
On the other hand, if $|C|\ge n+2t-2$ then ${\sf pot}(C)\ge t$ when $u$ is empty, and ${\sf pot}(C)\ge t-1$ when $u$ is not; either way $C$ is $t$-fold $u$-solvable because $u$ is universal. Thus $\pi_t(G,u)= n+2t-2$, which is always at most $p_t(n,k)$. \subsection{Lower bound}\label{ss:lower_bound} Clearly, $\pi_t(G)\geq \pi_t(G,u)=h_t(n)$. Now let $r$ be any non-universal vertex of $G$, and let $s$ be a vertex at distance two from $r$. Let $X$ be any $(r,s)$-cutset of size $k$ (in particular, $u\in X$) and define the configuration $F_t(n,k)$ by placing 0 on $r$ and $X$, $4t-1$ on $s$, and 1 on each vertex of $V(G)-(X\cup\{r,s\})$; then $|F_t(n,k)| =(4t-1)+(n-k-2)= f_t(n,k)-1$. Since the vertices of $X-\{u\}$ have 0 pebbles and all of them are juniors to $u$, Lemma \ref{l:JuniorRemoval} implies that if $t$ pebbles can reach $r$ then $2t$ pebbles can reach $u$. But, with exactly $2t-1$ potential moves in $F_t(n,k)$, by Fact \ref{f:pot}, we can place at most $2t-1$ pebbles on $u$. Therefore $\pi_t(G,r)\geq f_t(n,k)$, implying $\pi_t(G)\geq f_t(n,k)$. We record these results as \begin{lem}\label{l:LowerBd} For $G\in{\cal G}(n,k)$ we have $\pi_t(G)\ge p_t(n,k)$. \end{lem} \subsection{Upper bound}\label{ss:upper_bound} We will prove that any configuration of size $f_t(n,k)$ when $k\leq 2t$, and of size $h_t(n)$ when $k\geq 2t$, is $t$-fold $r$-solvable for any $r\in V(G)$. \begin{lem}\label{l:upper_bound} For $k\ge 2$, let $G\in{\cal G}(n,k)$ with a universal vertex $u$, and let $r$ be any root vertex. Then $\pi_t(G,r) \leq p_t(n,k).$ \end{lem} \begin{figure} \begin{center} \includegraphics[height=2.05in]{table.pdf} \end{center} \caption{The values $m$ for which $\pi_t(G)=|V(G)|+m$.} \label{f:chart2} \end{figure} {\noindent\bf Proof.\ \ } First note that the lemma is true when $t=1$. Indeed, in this case we have $k\ge 2t$, and so $p_t(n,k)=h_t(n)=n+2t-2=n$. On the other hand, because no pyramidal graph has a universal vertex, we have from Theorem \ref{t:Diam2} that $\pi(G)=n$, hence $\pi(G,r)\leq n$. In addition, the lemma holds for $k=2$. Indeed, in this case we have $k\le 2t$, and so $p_t(n,k)=f_t(n,k)=n+4t-k-2=n+4t-4$. Also, we have by Theorem \ref{t:d2bound} that $\pi_t(G,r)\le n+4t-4$. Hence, we may assume that $t\ge 2$ and $k\geq 3$. Figure \ref{f:chart2} shows the structure of this proof. As was noted above, the grey section has been proved before. We continue by proving the dashed-bordered, lower left section and the diagonal circled entries together, and then the solid-bordered, upper right section by induction. \\ \noindent {\it Base case.} We will simultaneously address the case $k=2t-1$ (the circled entries), for which $|C|=f_t(n,k)=n+2t-1$, and the case $ k \geq 2t$ (the dashed-bordered section), for which $|C|=h_t(n)=n+2t-2$, by writing $ k \geq 2t-1 $ and considering a configuration of size $|C|=n+2t-2+\phi$, where $\phi=1$ if $k=2t-1$ and $\phi=0$ otherwise.
The natural idea we leverage here is to repeat the argument that zeros force potential, which, combined with connectivity, yields either more solutions or more zeros. Let $x \geq 0$ be such that $k=2t-1+x$. By Lemma \ref{l:PotLem}, since we may assume that $C(r)=0$ (otherwise induct on $t$), we have at least $\lceil (2t-2+1)/2\rceil=t$ potential moves. Therefore, we have at least $t$ solutions if there are at least $t$ different slides from them to $r$. Thus we consider the case in which there are at most $t-1$ slides; that is, from some vertex $v$ on which a potential move sits, there is no path to $r$ without an internal zero once the remaining $t-1$ slides are accounted for. Since $G$ is $k$-connected, this implies that $C$ has at least $k-(t-1)$ zeros between $v$ and $r$ and so, counting $r$ as well, $C$ has at least $k-(t-1)+1=t+1+x$ zeros. Assume that there are exactly $z=t+1+j$ zeros, for some $j\ge x$. Then, by Lemma \ref{l:PotLem}, $C$ has at least $$\left\lceil\frac{(2t-2)+(t+1+j)}{2}\right\rceil =t+\left\lceil\frac{t-1+j}{2}\right\rceil$$ potential moves. If there are at least $t-\left\lceil\frac{t-1+j}{2}\right\rceil$ slides from them to $r$, then we can use those slides for that many solutions. Then, the other $\left\lceil\frac{t-1+j}{2}\right\rceil$ solutions can be obtained from the remaining $2\left\lceil\frac{t-1+j}{2}\right\rceil$ potential moves, putting $2\left\lceil\frac{t-1+j}{2}\right\rceil$ pebbles on the universal vertex $u$ and then $\left\lceil\frac{t-1+j}{2}\right\rceil$ on $r$. Otherwise, there are at most $t-\left\lceil\frac{t-1+j}{2}\right\rceil-1$ slides, from which we find, using $k=2t-1+x$, at least $$k-\left(t-\left\lceil\frac{t-1+j}{2}\right\rceil-1\right)+1 =t+x+\left\lceil\frac{t-1+j}{2}\right\rceil+1$$ zeros. Clearly, this number cannot exceed the total number of zeros $z=t+1+j$; therefore $j\ge x+\left\lceil\frac{t-1+j}{2}\right\rceil \ge x+\frac{t-1+j}{2}$, and so $j\ge t-1+2x$. Let $j=t-1+2x+i$ for some $i\ge 0$; then $z=t+1+j=t+1+t-1+2x+i= 2t+2x+i$. Applying Lemma \ref{l:PotLem} again, there are at least $$\left\lceil\frac{(2t-2)+(2t+2x+i)}{2}\right\rceil=2t+x-1+\lceil i/2\rceil$$ potential moves. If either $x\geq 1$ or $i\ge 1$, then there are at least $2t$ potential moves, so we can move $2t$ pebbles to the universal vertex $u$, and then $t$ to $r$. Hence, we consider the case for which $x=i=0$; i.e. $k=2t-1$, $z=2t$, and $|C|=n+2t-1$ (because $\phi=1$ in such a case). We let $T$ be the star centered on $u$, having leaves $r$ and the nonzero vertices of $G$. Clearly, $T$ is a subgraph of $G$ with $n+2t-1$ pebbles on it and with either $2+(n-z)$ or $1+(n-z)$ vertices, depending on whether $u$ is empty or not. In either case $n(T)\leq 2+n-z=2+n-2t$. Therefore, since $$ \pi_t(T,r) = n(T)+4t-3 \leq (2+n-2t)+4t-3 = n+2t-1 = |C(T)|,$$ we see that $C$ is $t$-fold $r$-solvable. \\ \noindent {\it Induction step.} Finally, we consider the cases when $k<2t-1$ (the solid-bordered section); so $|C|=f_t(n,k)=n + 4t - k - 2$. Since $2(t-1)=2t-1-1\geq k$, we have $\pi_{t-1}(G,r)= f_{t-1}(n,k)=n+4(t-1)-k-2=n+4t-k-2-4=|C|-4$. Hence, if $C$ has a solution of cost at most 4, we are done, since the remaining $|C|-4$ pebbles yield the other $t-1$ solutions. Otherwise, noting that we may again assume $C(r)=0$, there is at most one vertex $v$ having two or more pebbles, and such a vertex carries at most 3 pebbles: four pebbles on one vertex, or two pebbles on each of two vertices, would yield a solution of cost at most 4 through $u$. This implies the contradiction $|C|\leq 3+(n-2)<n+4t-k-2=|C|$, which completes the proof. {\hfill $\Box$\bigskip} In future work we intend to study $k$-connected diameter two graphs without a universal vertex, and use that work as a base step toward studying graphs of larger diameter.
{ "timestamp": "2019-03-05T02:03:21", "yymm": "1903", "arxiv_id": "1903.00554", "language": "en", "url": "https://arxiv.org/abs/1903.00554", "abstract": "Graph pebbling models the transportation of consumable resources. As two pebbles move across an edge, one reaches its destination while the other is consumed. The $t$-pebbling number is the smallest integer $m$ so that any initially distributed supply of $m$ pebbles can place $t$ pebbles on any target vertex via pebbling moves. The 1-pebbling number of diameter two graphs is well-studied. Here we investigate the $t$-pebbling number of diameter two graphs under the lense of connectivity.", "subjects": "Combinatorics (math.CO)", "title": "$t$-Pebbling in $k$-connected diameter two graphs", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692325496974, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.7079584847218885 }
https://arxiv.org/abs/2006.02265
Finite groups contain large centralizers
Every finite non-abelian group of order $n$ has a non-central element whose centralizer has order exceeding $n^{1/3}$. The proof does not rely on the classification of finite simple groups, yet it uses the Feit-Thompson theorem.
\section{Introduction} A classical theorem of Brauer and Fowler \cite{BF55} states that a finite non-abelian group $G$ of even order with a center of odd order has a non-central element $x$ such that \[ |G|<|C_G(x)|^3. \] For finite non-abelian solvable groups, Bertram \cite{eB84} proved the same inequality and asked whether the exponent $3$ could be improved to $2$. This question was answered affirmatively by Isaacs \cite{iI86}, who showed that every finite non-abelian solvable group contains a non-central element whose centralizer has order exceeding its index. In \cite{GR20}, Guralnick and Robinson considered some variants of the Brauer-Fowler theorem. Among other results, they prove in \cite[Theorem 5]{GR20} that any finite non-abelian group $G$ has a non-central element $x$ such that \[ |G| < \frac{6}{5}|C_G(x)|^3. \] Their proof does not rely on the classification of finite simple groups but uses the Feit-Thompson odd order theorem as well as a degenerate case of a result of Griess \cite{rG78}. In fact, using the classification, they slightly improved this result, showing the following: \begin{theoremA}\cite{GR20}\label{T:ThmGeneral} Let $G$ be a finite non-abelian group. Then, there exists a non-central element $x$ of $G$ such that \[ |G|<|C_G(x)|^3. \] \end{theoremA} The purpose of this short note is to give a proof of this result without using the classification, but still using the Feit-Thompson theorem. Note that as a consequence of the aforementioned result of Bertram \cite{eB84} (see also \cite[Lemma 5.1]{GR20}), to prove Theorem \ref{T:ThmGeneral} it suffices to consider finite non-solvable groups. Hence, since all finite groups of odd order are solvable by the Feit-Thompson odd order theorem, we are reduced to considering a finite non-abelian group $G$ such that $G/Z(G)$ has even order. Therefore, Theorem \ref{T:ThmGeneral} follows from the following statement: \begin{theoremA}\label{T:ThmEven} Let $G$ be a finite non-abelian group of even order and let $t$ be a non-central element of $G$ such that $t^2$ is central. Then, there exists a non-central element $x$ of $G$ such that \[ |G|\le |C_G(t)|^2\left( |C_G(x)| - \frac{1}{2} \right). \] \end{theoremA} We remark that in general the exponents in Theorem \ref{T:ThmGeneral} and Theorem \ref{T:ThmEven} cannot be improved, as occurs in $\mathrm{SL}(2,2^n)$, where the centralizer of an involution has order $2^n$ and the maximum order of a centralizer of a non-identity element is $2^n+1$. \section{Proof of Theorem \ref{T:ThmEven}} Let $G$ be a finite non-abelian group and let $t$ be a non-central element of $G$ such that $t^2$ is central. Write $Z=Z(G)$ and $k(G)$ for the number of conjugacy classes of $G$. Also, we let $i(Z)$ denote the number of involutions of $Z$. \subsubsection*{Claim}\label{Claim} The following inequality holds: \[ |G| \le (1 + i(Z))|C_G(t)| + (k(G) - |Z|) |C_G(t)|^2. \] \begin{claimproof} Let $W$ be the set of pairs $(x,y)$ in $G\times G$ such that $x$ is a conjugate of $t$ which inverts $y$, that is \[ W =\left\{ (x,y)\in t^G \times G \, : \, y^x = y^{-1} \right\}, \] and set $W_y = \left\{ x \in t^G : (x,y)\in W\right\}$ for an element $y$ of $G$. It is clear that a central element $y$ of $G$ with $W_y\neq \emptyset$ must be an involution or the identity element, and in either case $W_y = t^G$. For an arbitrary involution $y$ of $G$, the set $W_y$ equals $t^G\cap C_G(y)$, and for any other element $y$ of $G$ we have that either $W_y$ is empty or equals $t^G\cap C_G(y)x$ for any $x\in W_y$.
In particular, we have that $|W_y|\le |C_G(y)|$ for every $y\in G\setminus Z$. Therefore, this yields: \begin{align*} |W| = \sum_{y\in G} |W_y| & \le (1 + i(Z))|t^G| + \sum_{y\in G\setminus Z} |C_G(y)| \\ & = (1 + i(Z))|t^G| + \sum_{i=1}^r |y_i^G||C_G(y_i)|, \end{align*} where $y_1,\ldots, y_r$ are representatives of the non-central conjugacy classes of $G$. Thus $r=k(G)-|Z|$ and so \begin{eqnarray}\label{eq1} |W| \le (1 + i(Z)) \frac{|G|}{|C_G(t)|} + (k(G) - |Z|) |G|. \end{eqnarray} On the other hand, observe that every element $x\in t^G$ inverts all elements of $[x,G]$, since $x^{-2}$ is central and so \[ [x,g]^{x} = x^{-2} g^{-1} x g x = g^{-1} x^{-1} gx = [x,g]^{-1} \] for any $g$ in $G$. As $|[x,G]|=|x^{-1}x^G|=|t^G|$ for $x\in t^G$, we then have that \begin{eqnarray}\label{eq2} \left|W \right| \ge \sum_{x\in t^G} |[x,G]| = |t^G|^2 = \frac{|G|^2}{|C_G(t)|^2}. \end{eqnarray} Hence, comparing (\ref{eq1}) and (\ref{eq2}) we get the desired inequality. \end{claimproof} Now, let $x$ be an element of $G\setminus Z$ such that the order $|C_G(x)|$ is the maximum of all orders of centralizers of non-central elements, {\it i.e.} \[ |C_G(x)| = \max \left\{ |C_G(y)| : y\in G\setminus Z \right\}. \]Then, the class equation yields that \[ |G| \ge |Z| + ( k(G) - |Z| )\frac{|G|}{|C_G(x)|} \] and so $k(G) - |Z| < |C_G(x)|$, since $|G|-|Z|<|G|$. Thus, we get that $k(G)-|Z|\le |C_G(x)|-1$. Combining this with the inequality given by \nameref{Claim}, it follows that \begin{align*} |G| & \le (1 + i(Z)) |C_G(t)| + ( |C_G(x)| - 1 ) |C_G(t)|^2 \\ & \le |C_G(t)|^2 \left( |C_G(x)| - 1 + \frac{|Z|}{|C_G(t)| } \right), \end{align*} since $1+i(Z) \le |Z|$. Finally, since $t\in C_G(t)\setminus Z$ while $Z\le C_G(t)$, the center $Z$ is a proper subgroup of $C_G(t)$, so Lagrange's theorem gives $|Z|/|C_G(t)|\le 1/2$. This yields the desired inequality.
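As an aside, the inequality of Theorem \ref{T:ThmEven} is easy to test computationally on small examples. The following sketch (ours, not part of the proof; it assumes SymPy's permutation group facilities) verifies it for the dihedral group of order 8.
\begin{verbatim}
from sympy.combinatorics import DihedralGroup

G = DihedralGroup(4)          # symmetries of the square, order 8
Z = G.center()
noncentral = [g for g in G.elements if g not in Z]

# t: a non-central element whose square is central (e.g. any reflection)
t = next(g for g in noncentral if g**2 in Z)
ct = G.centralizer(t).order()

# x: a non-central element whose centralizer has maximal order
cx = max(G.centralizer(g).order() for g in noncentral)

assert G.order() <= ct**2 * (cx - 1/2)
\end{verbatim}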
{ "timestamp": "2020-07-23T02:10:58", "yymm": "2006", "arxiv_id": "2006.02265", "language": "en", "url": "https://arxiv.org/abs/2006.02265", "abstract": "Every finite non-abelian group of order $n$ has a non-central element whose centralizer has order exceeding $n^{1/3}$. The proof does not rely on the classification of finite simple groups, yet it uses the Feit-Thompson theorem.", "subjects": "Group Theory (math.GR)", "title": "Finite groups contain large centralizers", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692325496974, "lm_q2_score": 0.72487026428967, "lm_q1q2_score": 0.7079584847218884 }
https://arxiv.org/abs/0901.3163
Deflated Hermitian Lanczos Methods for Multiple Right-Hand Sides
A deflated and restarted Lanczos algorithm to solve hermitian linear systems, and at the same time compute eigenvalues and eigenvectors for application to multiple right-hand sides, is described. For the first right-hand side, eigenvectors with small eigenvalues are computed while simultaneously solving the linear system. Two versions of this algorithm are given. The first is called Lan-DR and is based on conjugate gradient (CG) implementation of the Lanczos algorithm. This version will be optimal for the hermitian positive definite case. The second version is called MinRes-DR and is based on the minimum residual (MinRes) implementation of Lanczos algorithm. This version is optimal for indefinite hermitian systems where the CG algorithm is subject to instabilities. For additional right-hand sides, we project over the calculated eigenvectors to speed up convergence. The algorithms used for subsequent right-hand sides are called D-CG and D-MinRes respectively. After some introductory examples are given, we show tests for the case of Wilson fermions at kappa critical. A considerable speed up in the convergence is observed compared to unmodified CG and MinRes.
\section{CG and MinRes Examples} We have presented and described deflation methods for non-hermitian systems in previous talks\cite{1} and papers\cite{2}. These methods simultaneously solve the linear equations and compute eigenvalues and eigenvectors. Recently, we have also developed similar methods for hermitian systems\cite{3}, and we will partially describe that work here. See also Ref.~\cite{4} for a description of a new approach for solving multiple right-hand sides using hermitian \lq\lq seed'' methods. Krylov subspace methods develop polynomial solutions of $Ax=b$ for a hermitian matrix $A$ given an initial residual vector $r_0$. This polynomial space, $K$, after $m$ iterations of a method like conjugate gradient (CG) or minimum residual (MinRes), is given by \begin{equation} K=\mathrm{Span}\{r_0, Ar_0, A^2r_0,\dots, A^{m-1}r_0\}. \end{equation} The polynomial produced, considered as a continuous function in eigenvalue space, $\lambda$, is of degree $m$ or less and has the value 1 at $\lambda=0$. If the problem is solved exactly, one can show that this polynomial has a zero at the position of each eigenvalue, $\lambda_i$. To get a feeling for the types of behavior expected from the standard CG and MinRes routines, consider a hermitian problem of dimension 1000 with positive definite eigenvalues $\lambda_i=0.1,1,2,3,\dots, 999$. The CG and MinRes polynomials developed after 70 iterations are shown in Fig.~1. The eigenvalues are circled in blue. The CG polynomial is closer to zero at the lowest eigenvalue, but the MinRes polynomial is better at zeroing out most of the smaller eigenvalues. The residual norm curves for these solutions, as a function of iteration number, are shown in Fig.~2. Notice that the better MinRes eigenvalue polynomial is reflected in the slightly faster convergence versus CG. \begin{figure} \begin{center} \leavevmode \includegraphics*[scale=0.5]{wwfig.pdf} \caption{MinRes (solid green) and CG (dot dashed red) polynomials of degree 70 in eigenvalue space on a small problem with a positive definite spectrum of dimension 1000. The real eigenvalues are shown with blue circles.} \vspace{1cm} \includegraphics*[scale=0.5]{wwf150convcurves.pdf} \caption{Residual norm curves for MinRes (solid green) and CG (dot dashed red) on the problem in Fig.~1.} \end{center} \end{figure} Let us consider another example, which has an indefinite eigenvalue spectrum and shows the effects of deflation. Fig.~3 shows the MinRes and CG polynomials developed after 110 iterations for a hermitian system of dimension 1000 whose diagonal entries are generated with random numbers distributed Normal (0,1), but shifted by 2.0 to the right. There are then 22 negatives among the 1000 eigenvalues. Both algorithms have polynomials with value 1 at $\lambda=0$. One can again see the MinRes polynomial is doing a better job of zeroing the eigenvalue spectrum than the CG one. Fig.~4 shows the resultant residual norm curves. MinRes slightly outperforms CG again. This indefinite spectrum problem is more difficult for CG, resulting in a spiked residual norm curve. Notice that both methods display so-called super-linear convergence after about 125 iterations, when the small eigenvalues in the spectrum are effectively removed or deflated out. This is also referred to as the deflation \lq\lq knee''. A method which retained and used such deflated eigenvector information to solve additional right-hand sides, $b$, of $Ax=b$, would clearly be greatly sped up. Both of these standard methods are, of course, unrestarted.
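To reproduce the flavor of this first example, the following short Python sketch (ours, not the code behind the figures; it assumes SciPy 1.12 or later for the rtol keyword) runs CG and MinRes on the diagonal positive definite system described above and records the residual norm history.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, minres

eigs = np.concatenate(([0.1], np.arange(1.0, 1000.0)))  # 0.1, 1, ..., 999
A = diags(eigs)                 # diagonal matrix, so the spectrum is exact
b = np.ones(1000)
b /= np.linalg.norm(b)          # initial residual norm is 1 for x0 = 0

def tracker(history):
    def cb(xk):                 # cg and minres call back with the iterate
        history.append(np.linalg.norm(b - A @ xk))
    return cb

res_cg, res_mr = [], []
cg(A, b, rtol=1e-12, maxiter=70, callback=tracker(res_cg))
minres(A, b, rtol=1e-12, maxiter=70, callback=tracker(res_mr))

print("CG residual after 70 iterations:    ", res_cg[-1])
print("MinRes residual after 70 iterations:", res_mr[-1])
\end{verbatim}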
For similar restarted methods, it will be important to keep the low eigenvalue information across the restart so that the Krylov subspace polynomials will not have to redevelop it. Thus, to effectively implement deflation of small eigenvalues in restarted methods, it is indispensable to simultaneously solve the small eigenvalue/eigenvector problem along with the linear equations. As a bonus, this low eigenvalue information can then be used to speed up the solution of subsequent right-hand sides, $b$, of $Ax=b$. We implement these ideas in a hermitian context with a deflated Lanczos/conjugate gradient algorithm combination, Lan-DR($m,k$)/D-CG, which is optimal for a positive definite spectrum. Similarly, a deflated minimum residual combination, MinRes-DR($m,k$)/D-MinRes, is designed to be optimal for indefinite hermitian systems. Here $m$ stands for the dimension of the Krylov subspace and $k$ is the number of deflated eigenvectors. The second algorithm in the two cases, either D-CG or D-MinRes, refers to the form of the algorithm that projects over these deflated eigenvectors to speed up the solution for additional multiple right-hand sides. We will see that the general behaviors for CG or MinRes in the simple examples considered will carry over to the new algorithms on lattice QCD problems. Please see Ref.~\cite{3} for the mathematical definition of these algorithms and more details. \begin{figure} \begin{center} \leavevmode \includegraphics*[scale=0.5]{wwf110indef.pdf} \caption{MinRes (solid green) and CG (dot-dashed red) polynomials of degree 110 in eigenvalue space on a small indefinite spectrum problem with dimension 1000. The real eigenvalues are shown with blue circles.} \end{center} \end{figure} \begin{figure} \begin{center} \leavevmode \includegraphics*[scale=0.5]{wwf150convindef.pdf} \caption{Residual norm curves for MinRes (solid green) and CG (dot-dashed red) on the problem in Fig.~3.} \end{center} \end{figure} \begin{figure} \begin{center} \leavevmode \includegraphics*[viewport=0 0 800 530, scale=0.4]{D-CG.pdf} \caption{D-CG convergence curves compared to CG on a $\beta=6.0$ quenched $20^3\times 32$ lattice at $\kappa=0.15720$, solving through $M^{\dagger}Mx=M^{\dagger}b$.} \label{D-CG} \end{center} \end{figure} \begin{figure} \begin{center} \leavevmode \includegraphics*[viewport=0 0 800 530, scale=0.4]{D-MinRes.pdf} \caption{D-MinRes convergence curves compared to MinRes on the same configuration and quark mass as Fig.~5, solving through $\gamma_5 Mx=\gamma_5 b$.} \end{center} \end{figure} \section{Lattice Applications} In lattice applications, as in the simple examples considered above, we would expect the Lan-DR($m,k$)/D-CG combination to be effective mainly on problems with positive definite spectra, while the MinRes-DR($m,k$)/D-MinRes combination is designed for indefinite spectra. Designating $M$ as the Wilson matrix, we will consider $M^{\dagger}M$ as a model for the former spectrum and $\gamma_5 M$ as a model for the latter in our tests. The following examples show the residual norm for convergence of $M^{-1}$ itself so that results from the various algorithms can be compared. The residual norm quoted is normalized to one for an initial guess of $x=0$ for the solution vector. These runs are on quenched $\beta=6.0$ $20^3\times 32$ lattices at essentially kappa critical ($\kappa =0.15720$). Fig.~5 shows the residual norm as a function of matrix-vector products (MVPs) of the $M^{\dagger}M$ problem using CG. 
Although the convergence curves are not shown, the Lan-DR($m,k$) results, done for $m=200$ and various $k$ values (20, 60, or 150), are quite similar to the CG result, which took about 4900 MVPs to converge to a relative residual norm of $10^{-8}$. (The actual number of MVPs for Lan-DR to reach the same level of convergence ranged from about 5300 for $k=20$ to about 4950 for $k=150$.) The eigenvectors are identified from the solution of the exterior eigenvalue problem using regular Ritz vectors and are then passed on to D-CG, which uses a Galerkin projection to deflate out these eigenvectors. The next right-hand side is then greatly accelerated. Notice the sharp deflation knee occurring at $\sim 4000$ iterations and the subsequent super-linear convergence. The slope of this curve after the knee is the rate full deflation will achieve when a sufficient number of eigenvectors are kept. We find that full convergence to $10^{-8}$ is achieved in a little over 300 iterations when 150 eigenvectors are deflated on additional right-hand sides. We would not expect to see additional acceleration from deflating more eigenvectors since the rate of convergence approximately matches the slope of the CG curve after the deflation knee. We have referred to this phenomenon as \lq\lq saturation'' (first ref. in \cite{1}). \begin{figure} \begin{center} \leavevmode \includegraphics*[scale=0.5]{ldrf17.pdf} \caption{D-CG convergence curves compared to CG on the same configuration and $\kappa$ as Fig.~5, solving through $\gamma_5 Mx=\gamma_5 b$.} \end{center} \end{figure} Fig.~6 shows the residual norm of the MinRes algorithms on the same lattice matrix. Again, the MinRes-DR($m,k$) results, with $m=200$ and $k=20, 60$ or $150$, are not shown but converge similarly to the unrestarted MinRes result, which took about 2800 MVPs to converge to a relative residual norm of $10^{-8}$. (The actual number of MVPs for MinRes-DR to reach the same level of convergence ranged from about 3100 for $k=20$ to about 2900 for $k=150$.) Note that the unrestarted MinRes solution takes fewer MVPs compared to pure CG since there are two MVPs per iteration of the CG normal equations as opposed to the one with MinRes. The small eigenvectors are identified from the harmonic Ritz vectors, which are passed on to the D-MinRes algorithm. Similar to D-CG, a Galerkin projection is again applied at the beginning of the algorithm for multiple right-hand sides. In this case, we obtain full convergence of the residual norm to $10^{-8}$ in $\sim 300$ iterations with 150 eigenvectors, which is slightly faster than with D-CG. We do not recommend the use of the Lan-DR/D-CG combination for indefinite spectra. We see the effect of such usage in Fig.~7, which shows the use of CG on the original right-hand side (Lan-DR would be similar) and D-CG on additional right-hand sides of the same matrix ($\gamma_5 M$) used in Fig.~6. The extremely spiked nature of the residual norms is evidence of the instability, similar to the example in Fig.~4. For this particular configuration the residual norm converged. We have also seen cases where it does not converge. However, although the first right-hand side may not converge, we have observed that the $k$ deflated eigenvectors and eigenvalues output still produce accelerated convergence for additional right-hand sides. To avoid roundoff errors associated with the Lanczos algorithm we apply a periodic re-orthogonalization over the eigenvectors kept at restart during the solution of the first right-hand side.
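To make the Galerkin projection step concrete, here is a hypothetical sketch (ours, not the authors' implementation) of how an additional right-hand side can be started: the initial guess is chosen in the span of the $k$ kept eigenvectors so that the initial residual is orthogonal to them, after which the ordinary iteration proceeds.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import cg

def galerkin_start(A, b, V):
    # Return x0 = V y such that b - A x0 is orthogonal to span(V).
    # V is an n-by-k matrix whose columns are the (approximate)
    # eigenvectors kept from the solve of the first right-hand side.
    AV = A @ V
    y = np.linalg.solve(V.conj().T @ AV, V.conj().T @ b)
    return V @ y

# usage sketch for an additional right-hand side b:
#   x0 = galerkin_start(A, b, V)
#   x, info = cg(A, b, x0=x0, rtol=1e-8)
\end{verbatim}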
Also, please note that to achieve the full effect of the deflated eigenvectors for additional right-hand sides, we typically run the first right-hand side (using Lan-DR or MinRes-DR) well past convergence of the linear equations. This overhead contributes at most a factor of two in MVPs on the first right-hand side, and often less. \section{Summary} Lan-DR($m,k$) combines the solution of linear equations with the calculation of $k$ Ritz eigenvectors. It is optimal for positive definite spectra. If calculated to sufficient precision, it provides D-CG with an efficient starting point for multiple right-hand sides. MinRes-DR($m,k$)/D-MinRes does the same for systems with an indefinite spectrum using harmonic Ritz vectors. Although the Lan-DR/D-CG combination is simpler to implement, both MinRes-DR and D-MinRes converged slightly faster than their counterparts as a function of MVPs for our tests with the quenched Wilson matrix. \section{Acknowledgment} Calculations were done with the HPC systems at Baylor University.
{ "timestamp": "2009-01-20T23:42:04", "yymm": "0901", "arxiv_id": "0901.3163", "language": "en", "url": "https://arxiv.org/abs/0901.3163", "abstract": "A deflated and restarted Lanczos algorithm to solve hermitian linear systems, and at the same time compute eigenvalues and eigenvectors for application to multiple right-hand sides, is described. For the first right-hand side, eigenvectors with small eigenvalues are computed while simultaneously solving the linear system. Two versions of this algorithm are given. The first is called Lan-DR and is based on conjugate gradient (CG) implementation of the Lanczos algorithm. This version will be optimal for the hermitian positive definite case. The second version is called MinRes-DR and is based on the minimum residual (MinRes) implementation of Lanczos algorithm. This version is optimal for indefinite hermitian systems where the CG algorithm is subject to instabilities. For additional right-hand sides, we project over the calculated eigenvectors to speed up convergence. The algorithms used for subsequent right-hand sides are called D-CG and D-MinRes respectively. After some introductory examples are given, we show tests for the case of Wilson fermions at kappa critical. A considerable speed up in the convergence is observed compared to unmodified CG and MinRes.", "subjects": "High Energy Physics - Lattice (hep-lat)", "title": "Deflated Hermitian Lanczos Methods for Multiple Right-Hand Sides", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692311915194, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.7079584837373857 }
https://arxiv.org/abs/2106.14408
A Bound on the Edge-Flipping Distance between Triangulations (Revisiting the Proof)
We revisit here a fundamental result on planar triangulations, namely that the flip distance between two triangulations is upper-bounded by the number of proper intersections between their straight-segment edges. We provide a complete and detailed proof of this result in a slightly generalised setting using a case-based analysis that fills several gaps left by previous proofs of the result.
\section{Introduction} Triangulations of finite sets of points in the plane are important in many areas of applied geometry, from computer graphics, rendering, and computational geometry to imaging, computational fluid dynamics, and finite element theory. However, triangulations of the same finite set of points are not unique in general, and we therefore need tools to compare different triangulations. One tool to do this is to define a local operation called edge-flipping and define an ``edge-flipping distance'' between different triangulations of the same set of points. An edge flip is an operation on a triangulation where, given two adjacent triangles that form a convex quadrilateral, we replace the shared diagonal of the quadrilateral by its other diagonal, as illustrated in \cref{fig: flip conv and non conv}. This operation maps the quadrilateral to itself but its interior is triangulated differently, the rest of the triangulation remaining unchanged. A flip does not create any conflict between the new edge and the rest of the triangulation's edges. Thus, the edge-flipping operation maps a triangulation to another slightly different triangulation. \begin{figure}[tbhp] \centering \subfloat[][Convex case]{\includegraphics[width=0.66\textwidth]{Pictures/flip.png}} \hspace{2em} \subfloat[][Non convex case]{\includegraphics[width=0.2\textwidth]{Pictures/unflippable.png}}\\ \caption{Flip operation on two neighbouring triangles of a triangulation. The flip is well defined if the quadrilateral is convex. The diagonal of a non convex quadrilateral in a triangulation is not flippable.} \label{fig: flip conv and non conv} \end{figure} It is a well-known fact that given any finite set of points $S$ in the plane and two triangulations $T_1$ and $T_2$ of this set, there always exists a finite sequence of edge-flipping operations transforming $T_1$ into $T_2$: $T_1 \xrightarrow{flip} T^{(2)} \xrightarrow{flip} T^{(3)} \hdots \xrightarrow{flip} T_2$. The traditional proof consists in first showing that given any triangulation of planar points, we can reach a reference triangulation, and since the flip operation is clearly reversible, the result follows immediately. The Delaunay triangulation is commonly chosen as the reference triangulation, and the flip sequence to reach it is one that increases the minimum angle of the triangulation at each flip \cite{LAWSON1977flip}. Unfortunately, if there are four co-circular points, the Delaunay triangulation is not unique and extra effort is needed to prove that any Delaunay triangulation is reachable from any other one using only edge flips. A more elegant proof that does not require any high-level knowledge of triangulations, such as the Delaunay triangulation and its angular properties, can be found in Osherovich and Bruckstein \cite{osherovich2008all}. The core idea of this proof is to look at ``ears of triangulations'' of polygons and try to cut them in a goal-oriented way and then generalise by induction. Given two triangulations $T_1$ and $T_2$ of the same finite set of vertices $S$, there exist in general many sequences of edge flips that map $T_1$ to $T_2$. Denote by $d_f(T_1,T_2)$ the shortest length of all possible such sequences of flips. It is easy to check that $d_f$ is a distance measure over the set of triangulations of a given finite point set (for details see \cref{prop: d_f distance measure}), and it is therefore called the edge-flipping distance.
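To make the convexity condition concrete, here is a small Python sketch (ours, not from the paper) of the standard orientation test deciding whether the diagonal $ac$ of the quadrilateral $abcd$ is flippable: it is exactly when $b$ and $d$ lie strictly on opposite sides of the line $ac$, and $a$ and $c$ lie strictly on opposite sides of the line $bd$.
\begin{verbatim}
def cross(o, p, q):
    # signed area of triangle (o, p, q); positive iff q lies to the
    # left of the directed line from o to p
    return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])

def flippable(a, b, c, d):
    # the diagonal ac of quadrilateral abcd is flippable iff abcd
    # is strictly convex
    return (cross(a, c, b) * cross(a, c, d) < 0 and
            cross(b, d, a) * cross(b, d, c) < 0)

assert flippable((0, 0), (1, -1), (2, 0), (1, 1))         # convex case
assert not flippable((0, 0), (1, -1), (2, 0), (1, -0.5))  # reflex at d
\end{verbatim}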
Computing the minimal sequence of flips is a difficult computational challenge. Lubiw et al. \cite{lubiw2015flip} have shown that it is actually an NP-complete task. The same year, Aichholzer et al. \cite{aichholzer2015flip} proved that the problem is also NP-complete when considering triangulations of simple planar polygons. Thus, we may search instead for an upper bound on the edge-flipping distance. For this, we will look at edge intersections of the original and target triangulations. Due to the maximality of triangulations, we have the following equivalence: two triangulations $T_1$ and $T_2$ of the same finite set of points $S$ are identical if and only if there are no proper intersections between edges of $T_1$ and those of $T_2$ (see \cref{prop: = equiv no inter}). We say that two edges of two different triangulations intersect properly if the ``open edges'' (excluding the endpoints) intersect at a single point. Note that identical edges in two different triangulations do not properly intersect; they are superimposed. Remarkably, it was found that the flip distance $d_f(T_1,T_2)$, i.e. the minimum number of flips required to go from $T_1$ to $T_2$, is upper-bounded by the number of proper intersections between the two triangulations, denoted as $\#(T_1,T_2)$. The proof of this beautiful result was originally proposed by Hanke et al. \cite{hanke1996edge} and was later revisited in the reference book on triangulations by De Loera et al. \cite{de2010triangulations}. Both proofs are quite difficult to follow due to the case-based analysis involved and to some omissions, minor errors, and even some logical flaws in the proofs provided in those references. The purpose of this paper is to revisit the proof in a way that is hopefully readable, complete, and understandable by anyone with some background in planar geometry and triangulations. \section{Edge-flipping distance and intersections: an upper bound} \label{sec: edge flipping dist and inter upper bound} We consider triangulations of a fixed finite set of points $S$ in the plane. We assume that the edges of the triangulations are straight segments, this being an essential assumption for counting proper edge intersections. In particular, we do not triangulate the outer face, i.e. the outside of the convex hull of $S$. While Hanke et al. \cite{hanke1996edge} and De Loera \cite{de2010triangulations} both consider triangulations of the convex hull of $S$ without any further constraint, we notice that the result and proof still hold for triangulations with more general border constraints, e.g. a fixed non-convex outer border and some fixed inner faces called holes that are not triangulated. Formally, let $B = (B_0, B_1,\cdots, B_h)$ with $h\ge 0$ be the border constraints of $S$. Each constraint $B_i$ is a simple polygon defined on vertices of $S$. The polygons $B_i$ must not overlap, although they are allowed to share vertices but not edges. $B_0$ is taken to be a simple polygon of $S$ such that its outer face does not contain any point of $S$. $B_0$ may be chosen as the boundary of the convex hull of $S$, but we will not limit our choice to this example. The set of further polygonal constraints $(B_1,\cdots,B_h)$, which can be empty, defines holes of the triangulation. Each $B_i$ for $i\ge 1$ is defined such that its inner region does not contain any point of $S$. Note that the holes are not necessarily triangles and can have arbitrarily long polygonal boundaries $B_i$.
See \cref{fig: border example} for an illustration of a triangulation with general border constraints. \begin{figure}[tbhp] \centering \includegraphics[width=0.7\textwidth]{Pictures/border_example.png} \caption{Example of a triangulation with general border constraints $B$. Here, we constrain the triangulation to have $h=2$ fixed holes defined by $B_1$ and $B_2$. The outer border of the triangulation is constrained to the polygon $B_0$, which in this case is not convex as it is not the polygon defined by the border of the convex hull of $S$. Given $B = (B_0, B_1, B_2)$, we will compare different triangulations of $S$ that respect the border constraints $B$.} \label{fig: border example} \end{figure} We want to show that the edge-flipping distance of points in any such region of the plane is upper-bounded by the number of edge intersections between both triangulations. The proof is heavily based on the original ideas of \cite{hanke1996edge} and \cite{de2010triangulations} with all details thoroughly explained, and hopefully all problems removed. The goal is to prove the following theorem: \begin{theorem} \label{th: goal} If $T_1$ and $T_2$ are two triangulations of a planar region defined by a finite set of points $S$ with a set of boundary constraints $B$, then we have that: $$ d_f(T_1,T_2) \le \#(T_1,T_2). $$ In particular, if $n$ is the number of vertices in $S$, if $n_b$ is the number of boundary vertices of $S$, and if $h$ is the number of holes of $S$, i.e. $B = (B_0, \cdots, B_h)$ and $n_b$ is the sum of the lengths of the polygons $B_i$ for $0\le i \le h$, then: $$ d_f(T_1,T_2) \le \#(T_1,T_2) \le (3n-2n_b-3-3h)^2.$$ \end{theorem} The main idea of the proof is to establish the existence of a sequence of edge flips that strictly decreases the number of intersections at each step. To achieve this, we consider the edges in $T_1$ with maximal number of intersections with $T_2$. We then prove the existence of one of these maximal edges such that it is flippable and such that flipping it strictly reduces the number of intersections with $T_2$. We will first show that such maximal edges are flippable, which means that they are diagonals of convex quadrilaterals in $T_1$. Second, we will show that there exists a class of configurations of maximal edges for which flipping strictly reduces the number of intersections. Third, we will show that if no maximal edge belongs to this class of configurations, then there must exist at least one maximal edge, necessarily outside this class, whose flip strictly reduces the number of intersections with $T_2$, as we would otherwise reach a contradiction. Therefore, there is at least one edge in $T_1$ that we can flip and for which this flip strictly reduces the number of intersections with $T_2$. This result proves \cref{th: goal}. We will denote, unless mentioned otherwise, the edge between the vertices $a$ and $b$ as $ab$. We will call a quadrilateral of a triangulation $T$ the shape formed by two adjacent triangles of $T$ (sharing an edge). Given a quadrilateral created by two adjacent triangles, we will say that an edge intersects with this quadrilateral if it crosses the interior of the quadrilateral.
If $ab$, $cd$ and $ef$ are edges between vertices of the triangulations, we will denote by $\#(ab,T_2)$ the number of edges of $T_2$ intersecting $ab$; by $\#(ab,cd,T_2)$ the number of edges of $T_2$ intersecting both $ab$ and $cd$; by $\#(ab,cd,ef,T_2)$ the number of edges of $T_2$ intersecting $ab$, $cd$ and $ef$; by $\#_a(cd,T_2)$ the number of edges of $T_2$ intersecting $cd$ that emerge from $a$; and finally by $\#_a(cd,ef,T_2)$ the number of edges of $T_2$ intersecting both $cd$ and $ef$ that emerge from $a$, where an edge is said to emerge from $a$ if $a$ is one of its vertices. See \cref{fig: notations inter} for an illustration of each case. \begin{figure}[tbhp] \centering \subfloat[][$\#(ab,T_2)$]{\includegraphics[width=0.25\textwidth]{Pictures/inter_ab.png}} \subfloat[][$\#(ab,cd,T_2)$]{\includegraphics[width=0.35\textwidth]{Pictures/inter_ab_cd.png}} \subfloat[][$\#(ab,cd,ef,T_2)$]{\includegraphics[width=0.4\textwidth]{Pictures/inter_ab_cd_ef.png}}\\ \subfloat[][$\#_a(bc,T_2)$]{\includegraphics[width=0.3\textwidth]{Pictures/inter_a_bc.png}} \subfloat[][$\#_a(bc,de,T_2)$]{\includegraphics[width=0.35\textwidth]{Pictures/inter_a_bc_de.png}} \caption{Notations for counting the types of intersecting edges. The edges drawn in black are assumed to belong to $T_1$ and those in red are assumed to belong to $T_2$. This colour code will remain consistent throughout this paper.} \label{fig: notations inter} \end{figure} Note that while Hanke et al. \cite{hanke1996edge} stated \cref{th: goal} with a strict inequality, it is easy to find equality cases. For instance, any triangulation of three points is trivially reduced to a triangle and there cannot be any intersection. Another simple example consists of the two possible triangulations of a convex set of four points, which differ by only one flipped edge and have exactly one edge intersection. This small misstatement was also corrected in the proof of De Loera et al. \cite{de2010triangulations}. \subsection{Preliminary properties of triangulations} Some of the important tools for proving \cref{th: goal} are listed as the following properties: \begin{proposition} \label{prop: planar} If $T$ is a triangulation of a finite set of points, then $T$ is a planar graph, i.e. its edges do not intersect. \end{proposition} \begin{proposition} \label{prop: no vertex inside quad} Let $T_1$ and $T_2$ be two triangulations of the same finite set of points. Consider any quadrilateral of $T_1$ defined by two adjacent triangles of $T_1$. Consider any edge of $T_2$ intersecting with the quadrilateral. Necessarily, the considered edge is of one of three possible kinds: both vertices of the edge are outside the quadrilateral, or one of the vertices is one of the vertices of the quadrilateral and the other vertex is outside the quadrilateral, or the edge is a diagonal of the quadrilateral. \end{proposition} \begin{proposition} \label{prop: no intersec inside quad} Let $T_1$ and $T_2$ be two triangulations of the same finite set of points. Consider any quadrilateral of $T_1$ defined by two adjacent triangles. Consider any two edges of $T_2$ that both intersect the quadrilateral. Then these edges cannot meet inside the quadrilateral. \end{proposition} \begin{proposition} \label{prop: vertex linked to the closest edge} Let $T$ be a triangulation of a finite set of points $S$. Let $a\in S$ be a vertex of $T$ and $x\in\mathbb{R}^2$ be a point that is not necessarily in $S$.
Assume that the segment $[ax[$, open at $x$, linking $a$ to $x$ does not intersect any edge of $T$. If $x$ is a vertex of $T$, i.e. $x\in S$, then $ax$ is an edge of $T$. Likewise, if $x$ is not a vertex of $T$ but is on an edge $bc$ of $T$ with $b\neq a$ and $c\neq a$, then $ab$ and $ac$ are edges of $T$. \end{proposition} \begin{proposition} \label{prop: = equiv no inter} Let $T_1$ and $T_2$ be two triangulations of the same finite set of points, then $T_1\neq T_2$ if and only if the number of intersections between both triangulations is strictly positive. \end{proposition} \begin{proposition} \label{prop: d_f distance measure} The function $d_f$ between triangulations of the same finite set of points is a distance measure. \end{proposition} \begin{proposition} \label{prop: inter edge not edge other triangu} Let $T_1$ and $T_2$ be two triangulations of the same finite set of points. If $T_2$ intersects an edge $ac$ of $T_1$, then $ac$ is not an edge of $T_2$. In particular, when $T_1\neq T_2$, the edges $ac\in \argmax\limits_{\widetilde{ac}\in T_1} \#(\widetilde{ac},T_2)$ of $T_1$ with maximal number of intersections with $T_2$ cannot be edges of $T_2$. \end{proposition} \begin{proposition} \label{prop: inter not on border} Let $T_1$ and $T_2$ be two triangulations of the same finite set of points. If $ac$ is an edge of $T_1$ with at least one intersection with $T_2$, then $ac$ cannot be a border edge. Likewise, all border edges are in both $T_1$ and $T_2$. \end{proposition} \begin{proof} \Cref{prop: planar} follows from the definition of a triangulation. Triangulations are defined as maximal planar graphs, i.e. such that adding any extra edge to the triangulation would render it non planar and thus cause an intersection. One can see that if not all the faces are triangles, then there is a face with 4 or more vertices. By definition, the inside of the face is empty, without any edge passing through it. Therefore, we can simply add an edge between two non-adjacent vertices of the face, passing through its interior, which violates the maximality assumption while keeping the graph planar. Conversely, if all faces are triangles, then adding an edge without causing intersections would require a face with an available diagonal. But faces with a diagonal have at least 4 vertices, which contradicts the triangles-only assumption. Thus the graph is maximally planar. Therefore, we have the equivalence of triangulations as maximal planar graphs and as connected graphs that have only triangle faces. What is important to remember is that should two edges of a triangulation meet, then necessarily they meet at a vertex of the triangulation. \end{proof} \begin{proof} \Cref{prop: no vertex inside quad} is a consequence of the fact that $T_1$ and $T_2$ are triangulations of the same set of points $S$. As such, when defining the quadrilateral as two adjacent triangles in $T_1$, there cannot be any other vertex of $S$ inside the quadrilateral, as it would lie inside one of the two faces or in the interior of the diagonal edge. Therefore, if an edge of $T_2$ passes inside the quadrilateral, then necessarily its vertices are either outside the quadrilateral or on its boundary. The only possibility for a vertex on the boundary is to be one of the four vertices of the quadrilateral, since an edge of a triangulation cannot pass through a vertex.
By definition, the edges of the boundary of the quadrilateral do not intersect the quadrilateral, as they do not pass through its interior. Thus the three possibilities: the edge in $T_2$ is a diagonal of the quadrilateral, or one of its vertices is on the boundary but the other one is outside it, or both vertices are outside it. In summary, there cannot be a vertex inside the quadrilateral. \end{proof} \begin{proof} \begin{figure}[tbhp] \centering \includegraphics[width=0.3\textwidth]{Pictures/no_inter_inside.png} \caption{Edges of $T_2$ cannot meet inside a quadrilateral composed of two neighbouring triangles of $T_1$.} \label{fig: no intersec inside quad} \end{figure} \Cref{prop: no intersec inside quad} is proven similarly to the previous property. See \cref{fig: no intersec inside quad} for an illustration. Two edges of $T_2$ cannot intersect since $T_2$ is planar (\cref{prop: planar}); they are either disjoint or they meet at a common vertex. Now assume two edges of $T_2$ intersect a quadrilateral of adjacent triangles of $T_1$. If these edges meet inside the quadrilateral, then there is a vertex inside it. However, since $T_1$ and $T_2$ triangulate the same set of vertices, that means that there is a vertex inside the triangulated quadrilateral, which is a contradiction. Therefore, edges of $T_2$ cannot meet inside the quadrilateral. \end{proof} \begin{proof} \begin{figure}[tbhp] \centering \includegraphics[width=0.8\textwidth]{Pictures/ax_no_inter_triangle.png} \caption{If $T$ has no intersection with $[ax[$ and $x$ is on an edge of $T$ without being a vertex, then $ab$ and $ac$ are edges of $T$. If instead $x$ is a vertex of $T$, then $ax$ is an edge of $T$.} \label{fig: vertex linked to the closest edge} \end{figure} \Cref{prop: vertex linked to the closest edge} is one of the most important tools in proving \cref{th: goal}. Both Hanke et al. \cite{hanke1996edge} and De Loera et al. \cite{de2010triangulations} implicitly use it; however, they neither explicitly formulate nor prove it. The proof follows from the fact that triangles only have three edges. See \cref{fig: vertex linked to the closest edge} for an illustration of both cases. Consider the case when $x$ is not a vertex of $T$ but is on an edge $bc$ of $T$ with $a\notin\{b,c\}$. The open segment $]ax[$ does not intersect any other edge of the triangulation and is not included in one of the edges. As such, it is necessarily included in a face $F$. Since $]ax[\subset F$, $a$ is a vertex of $F$ and $bc$ is an edge of $F$. In turn, this implies that the vertices of $F$ are $a$, $b$, and $c$. Therefore, $ab$ and $ac$ are edges of $T$. If $x$ is now a vertex of $T$, then $ax$ has no intersection with any other edge of $T$. If $ax$ is not an edge of $T$, then adding it to $T$ would not create an intersection, which would violate the maximality principle of the triangulation $T$. Therefore, $ax$ is necessarily part of the triangulation. \end{proof} \begin{proof} \Cref{prop: = equiv no inter} is the fundamental property for the edge-flipping distance to be a proper distance. If the triangulations are identical, then there are no intersections. Conversely, assume that the triangulations have no intersections. If they are not equal, then one of the triangulations has an edge that is not in the other. Without loss of generality assume $ab$ is an edge of $T_1$ but not of $T_2$.
Since $ab$ does not intersect any edge of $T_2$, the open edge $]ab[$ is included in the interior of a face of $T_2$, and thus it is a diagonal of a face of $T_2$. Adding the edge $ab$ to $T_2$ would then produce an expanded planar graph, which violates the maximality of the triangulation $T_2$. Therefore, both triangulations share the same edges, implying that they are equal. \end{proof} \begin{proof} \Cref{prop: d_f distance measure} can be proven by simply checking that $d_f$ satisfies the defining properties of distance measures. The measure $d_f(T_1,T_2)$ is non-negative. Due to the fact that edge flips are invertible, with inverses being edge flips, the sequence of opposite flips from $T_2$ to $T_1$ is of minimal length. This result gives the symmetry of $d_f$. If the two triangulations are identical then no edge flip is necessary, and if no edge flip is needed then the triangulations are identical (see \cref{prop: = equiv no inter}). The minimality provides the triangle inequality. If $T_1$, $T_2$, and $T_3$ are three triangulations of $S$, then the shortest sequence of edge flips from $T_1$ to $T_3$ does not necessarily pass through $T_2$ and is at most as long as any sequence going from $T_1$ to $T_2$ and then from $T_2$ to $T_3$. In particular, its length is at most the sum of the minimal ones, i.e. $d_f(T_1,T_3)\le d_f(T_1,T_2)+d_f(T_2,T_3)$. Thus $d_f$ is a distance measure. \end{proof} \begin{proof} The proof of \cref{prop: inter edge not edge other triangu} directly follows from the planarity of triangulations. Indeed, assume an edge $ac$ of $T_1$ has an intersection with an edge $bd$ of $T_2$. If $ac$ is an edge of $T_2$, then $ac$ and $bd$ are two edges of $T_2$ that intersect, which is impossible since $T_2$ is planar. From \cref{prop: = equiv no inter}, if $T_1\neq T_2$, then they have at least one intersection. This implies that the maximally intersecting edges of $T_1$ with $T_2$ are not edges of $T_2$. \end{proof} \begin{proof} \Cref{prop: inter not on border} is a consequence of \cref{prop: inter edge not edge other triangu}. Indeed, consider the border constraint $B_i$ for $0\le i\le h$. We can define a cyclic ordering of the vertices of the border polygon $B_i$, such as a clockwise ordering. Then, each border vertex is linked to its successor; otherwise, adding that edge would preserve planarity while respecting the border constraints (as border polygons do not overlap nor share edges) but violate the maximality of triangulations. Therefore, all triangulations of the same finite set of points share the same border edges. Since all border edges belong to both triangulations, an edge of one triangulation properly intersecting a border edge of the other would properly intersect an edge of its own triangulation, which is impossible by planarity (the edges being straight segments, an edge cannot self-intersect either). Hence intersecting edges between triangulations cannot be border edges. \end{proof} \subsection{Upper-bounding the flip distance} We are now equipped with the basic tools for proving \cref{th: goal}. In all the following, we will assume that $T_1$ and $T_2$ are two triangulations of the same finite set of points $S$ with $T_1\neq T_2$. Due to \cref{prop: = equiv no inter} there is at least one intersection. In particular, if $ac$ is an edge with maximal number of intersections with $T_2$, then due to \cref{prop: inter edge not edge other triangu} $ac$ cannot be an edge of $T_2$. Following the sketch of the proof of this theorem outlined in the introduction of \cref{sec: edge flipping dist and inter upper bound}, the first step is to prove that all edges with maximal intersections can be flipped.
\begin{lemma} \label{lemma 1} All quadrilaterals of $T_1$, formed by adjacent triangles of $T_1$, that contain a diagonal with a maximum number of intersections with $T_2$ are necessarily convex. \end{lemma} \begin{proof} First, realise that if the quadrilateral is not convex, then by definition the flip of the diagonal is an impossible operation (see \cref{fig: flip conv and non conv}). Let us prove that the quadrilateral $abcd$ with diagonal $ac$ in $T_1$ is convex, where $ac$ has a maximum number of intersections with $T_2$. Assume the quadrilateral is not convex and that $a$ is the reflex vertex, i.e. the outer angle $\widehat{bad}<\pi$. The trick will be to prove that all edges that intersect $ac$ necessarily also intersect $cd$ and $cb$, which will imply by maximality that $cd$ and $cb$ also have maximal intersections with $T_2$. We will then find an edge that intersects $cd$ or $cb$ but not $ac$, and this will violate the maximality assumption on $ac$. \begin{figure}[tbhp] \centering \includegraphics[width=0.3\textwidth]{Pictures/lemma_1_5_types_inter_non_conv.png} \caption{All 5 possible types of edges of $T_2$ that intersect $ac$ in a non convex configuration.} \label{fig: lemma 1 5 types intersec} \end{figure} There are $5$ types of possible edges that can intersect $ac$ (see \cref{fig: lemma 1 5 types intersec}): those that intersect $cb$ and $cd$ while intersecting $ac$, those that intersect $cd$ and come from $b$ while intersecting $ac$, those that intersect $ab$ and $cd$ while intersecting $ac$, those that intersect $ad$ and $cb$ while intersecting $ac$, and those that come from $d$ and intersect $cb$ and $ac$. This can be summarised in the following summation, where it is important to remember that different terms in the sum correspond to different edges: \begin{align} \label{ac,T2 lemma 1} \#(ac,T_2) = \, &\#(ab,cd,ac,T_2) + \#(bc,cd,ac,T_2) + \#(da,bc,ac,T_2) \nonumber\\ &+ \#_b(cd,ac,T_2) + \#_d(bc,ac,T_2) \end{align} \begin{figure}[tbhp] \centering \includegraphics[width=0.6\textwidth]{Pictures/lemma_1_ef_implies_no_ad.png} \caption{If $T_2$ has an edge $ef$ (or $bf$) intersecting $ac$ that passes through $]ab]$, then it cannot also have an edge intersecting $ac$ that passes through $]ad]$ since that would create illegal self intersections inside the non-convex quadrilateral because $a$ is a reflex vertex.} \label{fig: lemma 1 ef implies no ad} \end{figure} \begin{figure}[tbhp] \centering \includegraphics[width=0.7\textwidth]{Pictures/lemma_1_x_ag_not_all_vertical.png} \caption{The edge $gh$ of $T_2$ intersecting $ac$ closest to $a$, passing through $]ab]$, crosses $ac$ at point $x$. In order to satisfy the triangulation assumption, necessarily $ag$ and $ah$ are in the triangulation. At least one of these two edges intersects $cd$ but they do not intersect $ac$, which violates the maximality assumption on $ac$.} \label{fig: lemma 1 x ag not all vertical} \end{figure} Assume that not all of them pass through $cb$ and $cd$. For instance, suppose there is an edge $ef$ in $T_2$ that intersects $ab$ and $cd$ and that intersects $ac$, or that comes from $b$ and intersects $cd$ and $ac$ (see \cref{fig: lemma 1 ef implies no ad}). Then, since the quadrilateral is not convex, any edge that intersects $ad$ and $ac$ also intersects $ef$ inside the quadrilateral. Similarly, all edges that come from $d$ and intersect $ac$ also intersect $ef$ inside the quadrilateral.
Due to \cref{prop: no intersec inside quad}, there cannot be an intersection of edges of $T_2$ inside the quadrilateral. Thus, no edge of $T_2$ passes through $]ad]$ (i.e. intersects $ad$ or comes from $d$). Therefore, all edges intersecting $ac$ necessarily intersect $cd$. We can now set to zero all terms of the previous formula that do not involve $cd$: \begin{equation} \#(ac,T_2) = \, \#(ab,cd,ac,T_2) + \#(bc,cd,ac,T_2) + \#_b(cd,ac,T_2). \end{equation} Therefore, there are at least as many edges intersecting $cd$ as intersecting $ac$. By maximality of $ac$, $cd$ is maximal and cannot have any edge intersecting it that does not also intersect $ac$. Denote by $x$ the intersection point on $ac$ closest to $a$ (see \cref{fig: lemma 1 x ag not all vertical}). This point $x$ does not belong to $S$ but lies on an edge $gh$ of $T_2$. Then $[ax[$ does not intersect any edge of $T_2$ and is not included in an edge of $T_2$, since $ac$ is not in $T_2$ because it has intersections with the triangulation (\cref{prop: inter edge not edge other triangu}). Therefore, $ag$ and $ah$ are edges of $T_2$ (\cref{prop: vertex linked to the closest edge}). Since $gh$ intersects $cd$, one of its vertices is in the half-plane delimited by $cd$ opposite to $a$. Without loss of generality choose this vertex to be $g$. Furthermore, since $gh$ also passes through the semi-open edge $]ab]$, $g$ also lies in the cone defined by $cba$ (in the direction of the face $cba$). Because $a$ is a reflex vertex, all points inside this cone and in the half-plane delimited by $cd$ opposite to $a$ are also in the cone delimited by $cad$, implying that linking them with $a$ creates an intersection with $cd$. In particular, this result holds for $g$. Therefore, $ag$ intersects $cd$. However, $ag$ does not intersect $ac$. This contradicts the results we previously established: all edges intersecting $ac$ intersect $cd$, and by maximality there are no edges intersecting $cd$ that do not intersect $ac$. We have thus proven that all edges intersecting $ac$ also intersect $cd$ and $bc$. \begin{figure}[tbhp] \centering \includegraphics[width=0.7\textwidth]{Pictures/lemma_1_x_ae_all_vertical.png} \caption{All edges of $T_2$ intersecting $ac$ also intersect $cd$ and $bc$, which implies that they are also maximal. Thus all edges intersecting $cd$ or $bc$ are exactly those that intersect $ac$. Consider the closest edge intersecting $ac$, at point $x$. In order to satisfy the triangulation assumption, $ae$ and $af$ are edges of $T_2$, but at least one (not necessarily both) of these edges intersects either $cd$ or $bc$ but not $ac$, which leads to a contradiction.} \label{fig: lemma 1 x ae all vertical} \end{figure} This result implies that the number of intersections of $cd$ with $T_2$ is at least the number of intersections of $ac$ with $T_2$. By maximality of $ac$, $cd$ is also maximal. Similarly, $cb$ is also maximal. All edges that intersect $cd$ or $cb$ must intersect $ac$. We now apply a reasoning similar to the previous one (see \cref{fig: lemma 1 x ae all vertical}). Since the intersecting edges of $ac$ intersect $cd$ and $cb$, they cannot come from $d$ or $b$, nor pass through $ad$ or $ab$. Therefore, if we denote by $ef$ an intersecting edge of $ac$, then necessarily $e$ and $f$ are different from $d$ and $b$, which implies that they both need to be outside of the quadrilateral, as there are no vertices inside the quadrilateral. Consider the edge $ef$ intersecting $ac$ at the point $x$ closest to $a$.
Because it is the closest to $a$, $[ax[$ has no intersection with $T_2$, with $x$ on the edge $ef$ of $T_2$. Thus $ae$ and $af$ are edges of $T_2$ (see \cref{prop: vertex linked to the closest edge}). Furthermore, because $a$ is a reflex vertex, at least one of the vertices $e$ or $f$ needs to be inside one of the cones $cad$ or $cab$, as otherwise $ef$ would not intersect $ac$. Assume without loss of generality that $e$ lies in the cone $cad$ (in the direction of the face $cad$ but on the side of the edge $cd$ opposite to $a$). Then $ea$, an edge of $T_2$, intersects $cd$, but does not intersect $ac$. This contradicts the previous results: all edges of $T_2$ intersecting $cd$ (or $cb$) intersect $ac$. Given the assumption that the quadrilateral $abcd$ is not convex, we have systematically reached a contradiction in all cases. Therefore $abcd$ is necessarily convex. \end{proof} Hanke et al. \cite{hanke1996edge} proved a weaker version of \cref{lemma 1}. Indeed, they only showed that there exists at least one convex quadrilateral with diagonal having maximal intersections. This is due to the fact that once they proved that all edges intersecting $ac$ also intersect $cd$ and $bc$, they then claimed that these two edges could not be on the border of the convex hull (or more generally on the border of the triangulation). This observation led them to reiterate their argument by induction on the next quadrilateral until either reaching a desired convex quadrilateral with diagonal having maximal intersections or reaching a contradiction, as they would have to reach the border of the convex hull (or more generally the border of the triangulation). These arguments are not incorrect. However, the proof of \cref{th: goal} requires the knowledge that all quadrilaterals with diagonal having a maximal number of intersections must be convex. Unfortunately, Hanke et al. \cite{hanke1996edge} implicitly assume this knowledge, which was neither claimed nor proven, when they call upon crucial similar inductive arguments in the rest of their proof. De Loera et al. \cite{de2010triangulations} reinforced the original paper's claim and fixed the last part of the proof of \cref{lemma 1}, by invoking the edge $ef$ of $T_2$ intersecting $ac$ closest to $a$, to show that all quadrilaterals with diagonal having maximal intersections are convex. However, their proof is incomplete and their illustration of the edge $ef$ does not tell the full story. Indeed, they are misled by a non-general drawing that omits a possible case. Their illustration corresponds to that of the top case in \cref{fig: lemma 1 x ae all vertical}, with an incorrect claim that since $ef$ intersects $ac$ and $cd$, then $ae$ intersects $cd$. This claim is wrong, as shown on the bottom of \cref{fig: lemma 1 x ae all vertical}. Indeed, merely having that $ef$ intersects $ac$, $bc$, and $cd$ only implies that at least one of $ae$ or $af$ intersects $cd$ or $bc$. Note that while this may seem like nitpicking, in a contradiction proof based on a hierarchy of case analyses, it is essential to explore all cases. In particular, while graphical illustrations provide insight, they can easily mislead by omitting cases and hence lead to false claims. While in this case the omission has only a minor impact, as one could argue that the reasoning was implicitly without loss of generality, and while one easily completes the considerations for the overlooked cases, such oversights could be much more detrimental at later stages of the proof.
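Before resuming, and since the whole argument is bookkeeping on $\#(T_1,T_2)$, we record a hypothetical sketch (ours, not from the paper) of how proper intersections can be counted for points in general position (no three collinear), reusing the orientation test from the earlier sketch.
\begin{verbatim}
def cross(o, p, q):
    return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])

def proper_intersection(e, f):
    # open segments crossing at a single interior point; shared
    # endpoints and superimposed edges do not count (general
    # position assumed, so no three points are collinear)
    (a, b), (c, d) = e, f
    return (cross(a, b, c) * cross(a, b, d) < 0 and
            cross(c, d, a) * cross(c, d, b) < 0)

def num_intersections(edges1, edges2):
    # the quantity #(T1, T2), with edges given as pairs of 2D points
    return sum(proper_intersection(e, f)
               for e in edges1 for f in edges2)
\end{verbatim}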
Let us return to the proof of \cref{th: goal}. We now know that all edges in $T_1$ with maximal intersections with $T_2$ can be flipped. Unfortunately, not every maximally intersecting edge will necessarily reduce the total number of intersections with $T_2$ when flipped. However, we only need to find one of these maximal edges that reduces the number of intersections after being flipped. Next, we will scrutinise a class of configurations of quadrilaterals with maximally intersecting diagonals for which we are guaranteed to strictly reduce the number of intersections with $T_2$ by flipping the diagonal edge.
\begin{lemma} \label{lemma 2} Let $abcd$ be a convex quadrilateral in $T_1$ whose diagonal $ac$ has the maximum number of intersections with $T_2$. If $T_2$ contains an edge $eb$ intersecting $da$ or $cd$, or if it contains an edge $dg$ intersecting $ab$ or $bc$, or if $bd$ is an edge of $T_2$, then flipping $ac \xrightarrow{flip} bd$ reduces the total number of intersections with $T_2$. In other words, if an edge of $T_2$ intersects a maximal edge of $T_1$ and comes from a vertex of the quadrilateral around the maximal edge, then flipping that maximal edge reduces the number of intersections. \end{lemma}
\begin{proof}
First, consider the easy case where $bd$ is an edge of $T_2$. As $bd$ belongs to $T_2$, it has no intersections with this triangulation. On the other hand, since $T_1\neq T_2$ and $ac$ has the maximal number of intersections with $T_2$, $ac$ has at least one intersection with $T_2$. Therefore flipping $ac$ to $bd$ reduces the number of intersections by $\#(ac,T_2) - \#(bd,T_2) = \#(ac,T_2)\ge 1$.
\begin{figure}[tbhp] \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{Pictures/lemma_2_all_possibilities_inter_bd.png}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{Pictures/lemma_2_all_possibilities_inter_bd_not_ad.png}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{Pictures/lemma_2_all_possibilities_inter_bd_ad.png}} \\ \caption{Left: all 9 types of possible edges of $T_2$ that can intersect $bd$. Middle: the assumed existing edge $be$ along with all 6 types of possible edges of $T_2$ that can intersect $bd$ but not $ad$. All these edges must necessarily intersect $be$ inside the quadrilateral, implying that they cannot exist. Right: the assumed existing edge $be$ along with all 3 types of possible edges of $T_2$ that can intersect $bd$ and $ad$. All edges intersecting $bd$ intersect $ad$, but $be$ intersects $ad$ and not $bd$. Thus, there is at least one less intersection with $bd$ than with $ad$, and a fortiori with the maximum $ac$.} \label{fig: lemma 2} \end{figure}
Let us now consider the other cases. The idea is to look at the edges in $T_2$ that intersect $bd$. The assumption of an existing edge coming from $b$ (or $d$) intersecting $ac$ and one other edge of the quadrilateral will imply that edges intersecting $bd$ are restricted to intersect that same edge of the quadrilateral, in order to avoid illegal intersections or meetings of edges of $T_2$ inside the quadrilateral. The concluding argument will be that $eb$ is an edge intersecting an edge of the quadrilateral without intersecting $bd$, which implies that $bd$ has at least one less intersection with $T_2$ than $ac$ has. Assume without loss of generality that we are in the case where $eb$ intersects $ac$ and $ad$ (see \cref{fig: lemma 2}). Let us look at edges that intersect $bd$.
Note that we do not have $bd$ in $T_1$, but we can still estimate the number of intersections between this segment and $T_2$ and compare it with the number of intersections of $ac$ to justify the flip. Edges that intersect $bd$ must either intersect $ad$ or, exclusively, not intersect $ad$ and be of one of the following six types: intersect $ab$ and $bc$, come from $a$ and intersect $bc$, come from $c$ and intersect $ab$, intersect $ab$ and $dc$, come from $a$ and intersect $dc$, or be $ac$. We will show that such edges must necessarily intersect $be$. First, $ac$ has at least one intersection with $T_2$, therefore it cannot be in $T_2$. Second, any edge passing through the semi-open segments $[ab[$ and $]bc]$ will intersect $eb$ inside the triangle $abc$, which is impossible since both triangulations share the same vertices. Similarly, any edge passing through the semi-open segments $[ab[$ and $]dc]$ will intersect $eb$ inside the quadrilateral, which is forbidden for the same reasons. As such, the only possibilities for edges intersecting $bd$ are edges that also intersect $ad$. In addition, $eb$ intersects $ad$, but it does not intersect $bd$. Therefore, $bd$ has at least one less intersection with $T_2$ than $ad$, which in turn has no more than the maximal number of intersections: $\#(bd,T_2) \le \#(ad,T_2) - 1 < \max_{\widetilde{ac}\in T_1} \#(\widetilde{ac},T_2) = \#(ac,T_2)$. Therefore, flipping $ac \xrightarrow{flip} bd$ reduces the number of intersections with $T_2$ by at least one.
\end{proof}
Note that Hanke et al. \cite{hanke1996edge} omit from their proof of \cref{lemma 2} the case where $bd$ is in $T_2$. The claim and proof still hold, and it is not mathematically detrimental later in the proof, as they correctly argue that ``if the edge-flipping operation $ac\xrightarrow[]{} bd$ decreases the number of intersections between the triangulations $T_1$ and $T_2$, then we are done'' and have found a maximal edge that reduces the total number of intersections, which is the goal for proving \cref{th: goal}. Indeed, $bd$ being an edge of $T_2$ provides such a case. Explicitly mentioning this case in \cref{lemma 2} greatly improves the clarity of the main proof of the paper, while simultaneously yielding a sanity check that we did not forget any possible case in the case-based proof. Although De Loera et al. \cite{de2010triangulations} do not explicitly write down \cref{lemma 2} (presumably due to length constraints of their book), they wisely mention explicitly the case when $bd$ is an edge of $T_2$. Before diving into the proof of the main theorem, we shall next show that if a quadrilateral has a maximally intersecting diagonal, then it cannot belong to a specific class of configurations. This will later help us to find a maximal edge that reduces the number of intersections when flipped.
\begin{lemma} \label{lemma 2.2} If $abcd$ is a convex quadrilateral in $T_1$ with diagonal $ac$ of maximal intersections with $T_2$, then $T_2$ cannot have an edge $ae$ intersecting $bc$ or $cd$. Similarly, $T_2$ cannot have an edge $cf$ intersecting $ab$ or $ad$. Also, $T_2$ cannot have the edge $ac$. In summary, edges of $T_2$ that come from $a$ or $c$ cannot intersect the polygon $abcd$. \end{lemma}
\begin{proof}
The discussion for $ac$ is trivial. Indeed, this edge cannot be an edge of $T_2$ as we assumed $T_1\neq T_2$ (see \cref{prop: inter edge not edge other triangu}).
\begin{figure}[tbhp] \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{Pictures/lemma_2-2_all_possibilities_inter_ac.png}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{Pictures/lemma_2-2_all_possibilities_inter_ac_not_bc.png}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{Pictures/lemma_2-2_all_possibilities_inter_ac_bc.png}} \\ \caption{Left: all 9 types of possible edges of $T_2$ that can intersect $ac$. Middle: the assumed existing edge $ae$ along with all 6 types of possible edges of $T_2$ that can intersect $ac$ but not $bc$. All these edges must necessarily intersect $ae$ inside the quadrilateral, implying that they cannot exist. Right: the assumed existing edge $ae$ along with all 3 types of possible edges of $T_2$ that can intersect $ac$ and $bc$. All edges intersecting $ac$ intersect $bc$. By maximality of $ac$, $bc$ is also maximal and no edge can intersect it that does not also intersect $ac$. However, $ae$ intersects $bc$ and not $ac$, which leads to a contradiction.} \label{fig: lemma 2.2} \end{figure}
Let us now consider the other cases. Assume without loss of generality that there is an edge $ae$ in $T_2$ intersecting $bc$ (see \cref{fig: lemma 2.2}). We want to show that all edges intersecting $ac$ will intersect $bc$. This fact would imply, by maximality of $ac$, that $bc$ has maximal intersections with $T_2$, and then that $ac$ and $bc$ are intersected by exactly the same edges. This would lead to a contradiction, since $ae$ intersects $bc$ but not $ac$. We look at the different possibilities for an edge intersecting $ac$ that does not intersect $bc$: the edge intersects $ab$ and $ad$, or it comes from $b$ and intersects $ad$, or it intersects $ab$ and comes from $d$, or it intersects $ab$ and $dc$, or it comes from $b$ and intersects $cd$, or it is simply $bd$. In any of these cases, the presence of such an edge would create an intersection with $ae$ inside the triangle $abc$, which is forbidden since both $T_1$ and $T_2$ are triangulations of the same set of points. Therefore, all edges intersecting $ac$ must intersect $bc$. Furthermore, $ae$ intersects $bc$ but does not intersect $ac$. Hence, $bc$ has at least one more intersection with $T_2$ than $ac$. This violates the maximality assumption on $ac$. Thus, $T_2$ cannot have such an edge $ae$ intersecting $bc$. In summary, we have proven that $T_2$ cannot have an edge coming from $a$ intersecting the polygon $abcd$. Likewise, $T_2$ cannot have an edge coming from $c$ intersecting $abcd$.
\end{proof}
Both Hanke et al. \cite{hanke1996edge} and De Loera et al. \cite{de2010triangulations} prove and use \cref{lemma 2.2} without writing it out explicitly. We believe that this choice is detrimental to clarity, as this result is invoked with the same importance as \cref{lemma 2}. Furthermore, writing it out as a separate lemma avoids some awkward or unnecessarily complex formulations. Indeed, De Loera et al. \cite{de2010triangulations} claim that ``if $T_2$ has an edge which crosses the sides of the quadrilateral $abcd$ (present in $T_1$) and of which at least one of its two vertices is $a$, $b$, $c$, or $d$, then flipping $ac$ in $T_1$ decreases the number of intersections''. While the claim is mathematically accurate, the cases of such an edge with vertex $a$ or $c$ simply do not exist, which misleads readers into thinking that they might exist and that, should they exist, flipping the maximal edge $ac$ would reduce the number of intersections.
We are now ready for the final part of the proof. Recall that, so far, we have proven that all maximally intersecting edges of $T_1$ with $T_2$ are located within a convex quadrilateral (\cref{lemma 1}), that there exists a class of configurations of these quadrilaterals that guarantees that flipping the maximal edge will reduce the number of intersections (\cref{lemma 2}), and that some configurations of these quadrilaterals are forbidden (\cref{lemma 2.2}). We now have to prove that if there are no quadrilaterals with maximally intersecting diagonals satisfying the configurations of \cref{lemma 2}, then we can still find at least one maximal edge whose flip reduces the total number of intersections with $T_2$.
\begin{lemma} \label{lemma 3} There exists a convex quadrilateral $abcd$ in $T_1$ with diagonal $ac$ with maximum number of intersections with $T_2$ for which performing the flip operation $ac \xrightarrow{flip} bd$ strictly reduces the number of intersections with $T_2$. \end{lemma}
\begin{proof}
We once again proceed by contradiction. Assume that flipping any maximally intersecting edge with $T_2$ does not strictly reduce the total number of intersections with $T_2$. In particular, no quadrilateral with maximally intersecting diagonal satisfies the assumptions of \cref{lemma 2}. With this assumption in mind, we first present a sketch of the proof. Consider a quadrilateral of $T_1$ with maximally intersecting diagonal with $T_2$. Our assumptions imply that it cannot lie on the borders of the triangulations. Afterwards, we prove that at most one kind of diagonally intersecting edge of this quadrilateral can exist. This result allows us to count how many intersections some of the edges of the quadrilateral have. In particular, it allows us to show that there exists a $3$-zigzag of edges of the polygon, including its diagonal edge, that have maximal intersections with $T_2$. After proving that all edges of the quadrilateral have at least one intersection with $T_2$, we then show that each corner of the quadrilateral is cut by a ``corner cutter'' edge of $T_2$. We then construct a strip delimited by two sequences of vertices $u$ and $v$, such that the strip is already triangulated in $T_1$, and such that all edges from a vertex of one sequence to a vertex of the other sequence, i.e. edges $u_iv_j$ of $T_1$, have maximal intersections with $T_2$. As the sequence can never reach the border of the triangulation, as the set of vertices is finite, and as the edges of $T_1$ do not self-intersect, we necessarily reach a cycle, and the strip has the same topology as a ring. We thus rename the strip as the ring. Since we are working in the Euclidean plane, the ring necessarily has a reflex vertex on at least one of the $u$ sequence or the $v$ sequence (not necessarily on both). We then analyse the structure of the ring around this reflex vertex. In particular, we show that concatenations of triangles of the strip in clockwise order form a convex polygon, as long as the vertices are within an angle of $\pi$ of the reference anti-clockwise border edge of the strip located at the reflex vertex. This result allows us to prove that the corner cutters of this vertex in each quadrilateral of the strip that has this vertex must intersect this reference anti-clockwise border edge of the strip, as long as the quadrilateral is within an angular reach of $\pi$ from it.
By then looking at the first vertex of the strip around the reflex vertex that is not within an angular reach of $\pi$ from the reference anti-clockwise border edge, we will prove that two corner cutters of neighbouring quadrilaterals intersect within the strip, which leads to a contradiction. The existence of ``corner cutters'', to be rigorously defined later, is essential in this proof. Unfortunately, both Hanke et al. \cite{hanke1996edge} and De Loera et al. \cite{de2010triangulations} claim their existence without convincing arguments. They both directly claim, without proof, that if a quadrilateral $abcd$ has a diagonal $ac$ with maximal intersections with $T_2$, and if that quadrilateral does not fulfil the assumptions of \cref{lemma 2}, then \cref{lemma 2.2} directly implies that there exist ``corner cutter'' edges in $T_2$ for each vertex of the quadrilateral, i.e. that there exists in $T_2$ an edge intersecting $ab$ and $bc$, one intersecting $cb$ and $cd$, one intersecting $dc$ and $da$, and another one intersecting $ad$ and $ab$. While it is true that \cref{lemma 2.2} is responsible for this result, this claim is not trivial and deserves to be proven. Moreover, the proof requires an advanced familiarity with the geometry of triangulations. In order to be understandable for all, we prefer to use another approach. Furthermore, we provide a detailed proof for each of our claims.
\begin{figure}[tbhp] \centering \subfloat[][Disjoint candidate neighbour areas]{\includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_border_1.png}}\hfill \subfloat[][Intersecting candidate neighbour areas]{\includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_border_2.png}}\\ \subfloat[][Disjoint candidate neighbour areas]{\includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_border_1_not_convex_hull_outside.png}}\hfill \subfloat[][Intersecting candidate neighbour areas]{\includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_border_2_not_convex_hull_outside.png}}\\ \subfloat[][Disjoint candidate neighbour areas]{\includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_border_1_not_convex_hull.png}}\hfill \subfloat[][Intersecting candidate neighbour areas]{\includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_border_2_not_convex_hull.png}}\\ \caption{Top: the edge $ab$ is on the outer border $B_0$ and on the convex hull. Middle: the edge $ab$ is on the outer border $B_0$ but is not on the polygon defined by the border of the convex hull. Bottom: the edge $ab$ is on an inner border $B_i$ defined by a fixed hole, i.e. a fixed inner non-triangulated face. Because we assume that flipping the diagonal $ac\xrightarrow{flip}bd$ does not reduce the number of intersections with $T_2$, \cref{lemma 2,lemma 2.2} constrain the domain of existence of neighbours of $a$ and of neighbours of $b$ in $T_2$. Consider the angularly extremal neighbours $e$ and $f$ in $T_2$ of $a$ and of $b$ that are closest to the quadrilateral $abcd$. Since $ab$ is on the border of the triangulation, it is an edge of $T_2$ as well and cannot have intersections with this triangulation. Thus, to maintain a triangulation, $e$ and $f$ are necessarily shared neighbours of both $a$ and $b$. This implies that $e=f$ and, in particular, that the candidate areas of neighbours of $a$ and of $b$ in $T_2$ intersect. We then deduce that $aeb$ is a face of $T_2$ that includes the two other vertices $c$ and $d$, leading to a contradiction.} \label{fig: lemma 3 not border} \end{figure}
We will now present the detailed proof.
Let $abcd$ be a convex quadrilateral whose diagonal has maximum intersections with $T_2$. We will first prove that $abcd$ cannot lie on the border of the triangulation. Note that both Hanke et al. \cite{hanke1996edge} and De Loera et al. \cite{de2010triangulations} implicitly and directly obtain this essential property by claiming the existence of ``corner cutters''. Assume $abcd$ lies on the border of the triangulation (see \cref{fig: lemma 3 not border}). Without loss of generality, assume $ab$ is a boundary edge. Then, $ab$ is necessarily an edge of $T_2$. We will then reach a contradiction by looking at the neighbours of $a$ and $b$ in $T_2$. First, since flipping $ac\xrightarrow{flip}bd$ does not reduce the number of intersections with $T_2$, we cannot have $bd$ in $T_2$, nor an edge $be$ intersecting $ad$ or $cd$ (\cref{lemma 2}). This means that we cannot have an edge $be$ in $T_2$, where $e\neq a$ is a vertex in the cone defined by $abc$ (in the direction of the triangle $abc$). In summary, $b$ does not have a neighbour $e$ in $T_2$ for which $be$ intersects the quadrilateral $abcd$. Likewise, $a$ has no neighbour $e$ in $T_2$ such that the edge $ae$ intersects $abcd$. Then, notice that both $a$ and $b$ have neighbours in $T_2$, different from $b$ and $a$ respectively, that lie in the half-plane delimited by $ab$ containing the quadrilateral $abcd$. Indeed, if $ab$ is on the outer boundary $B_0$ and also on the border of the convex hull, then that half-plane is where all their neighbours are. On the other hand, if $ab$ lies on the outer boundary $B_0$ but not on the border of the convex hull, or if it lies on an inner boundary $B_i$ with $i\ge 1$ (the following reasoning also applies if $ab$ is on the outer boundary and on the border of the convex hull), then, due to the presence of the edges $ad$ and $bc$ in $T_1$, necessarily $a$ and $b$ have neighbours in $T_2$ in that half-plane. Without loss of generality, look at $ad$. If $d$ is a neighbour of $a$ in $T_2$, then we are done. If not, then $ad$ has intersections with $T_2$. Taking the edge $ef$ whose intersection point $x$ with $ad$ is closest to $a$, \cref{prop: vertex linked to the closest edge} gives that $ae$ and $af$ are edges of $T_2$. However, necessarily one of $e$ or $f$ is in the open half-plane delimited by $ab$ containing the quadrilateral $abcd$, as otherwise $ef$ would not intersect $ad$. Say $e$ is in that open half-plane. Note also that $f\neq b$, as $bf$ would then intersect $ad$, which is forbidden as previously proven. Therefore, $a$ has a neighbour different from $b$ in that half-plane. The same goes for $b$. We now look at the vertices in the open half-plane delimited by $ab$ containing $abcd$ that are neighbours in $T_2$ of $a$ or $b$. Let $e$ and $f$ be such neighbours of $a$ and $b$ respectively. We showed that $e$ necessarily lies in the cone $b'ad$ that does not include $abcd$, where $b'$ is the point of $\mathbb{R}^2$ obtained by rotating $b$ around $a$ by an angle $\pi$ ($b'$ is on the line $ab$, with $a$ belonging to the open segment $]b'b[$). Likewise, $f$ is in the cone $a'bc$ that does not include $abcd$, where $a'\in\mathbb{R}^2$ is obtained by rotating $a$ around $b$ by an angle $\pi$. We now fix $e$ to be such a neighbour of $a$ that is angularly closest to the edge $ad$. The orientation we are considering is the one that goes around $a$ from $ab$ to $ad$ without passing through the quadrilateral $abcd$.
We can do this since we know that at least one neighbour $e$ lies in the half-plane delimited by $ab$ containing $abcd$, since there is no edge $ae$ intersecting the quadrilateral $abcd$, and since we have a finite set of points. Note that the case $e=d$ is not excluded. Similarly, we fix $f$ to be such a neighbour of $b$ angularly closest to the edge $bc$. We here consider the opposite orientation around $b$, going from $ba$ to $bc$ without passing through $abcd$. The edge $ab$ is in $T_2$. Locally, the part of the plane close to this edge inside the quadrilateral $abcd$ is part of a triangulated face of $T_1$. Therefore, it is also part of a triangulated face of $T_2$, as $T_1$ and $T_2$ triangulate the same domain. That face has $ab$ as an edge and is inside the half-plane delimited by $ab$ containing $abcd$. By closure of this face, it also contains the edge $ae$, with $e$ as defined above, i.e. the neighbour of $a$ angularly closest to $ad$ in the orientation around $a$ going from $ab$ to $ad$ without passing through $abcd$. Likewise, the face has $bf$ as an edge. Thus $a$, $b$, $e$, and $f$ are vertices of this face. Since that face has to be a triangle, which can only have $3$ vertices, we have $e=f$. Two possibilities now arise. The first is that the previously used cones $b'ad$ and $a'bc$ do not intersect in the half-plane delimited by $ab$ containing $abcd$. This leads to a contradiction, as it would impose that $e$ cannot be equal to $f$. The second possibility is that these cones do intersect in this half-plane, and further work is necessary to reach a contradiction. Although we could have had $e=d$ or $f=c$, we cannot have $e=c$ or $f=d$. Indeed, $ac$ has intersections with $T_2$ by the maximality assumption, since $T_1\neq T_2$ (\cref{prop: inter edge not edge other triangu}), and so it cannot be an edge of $T_2$, whereas $ae$ is an edge of $T_2$. On the other hand, as mentioned previously, $bd$ is assumed not to belong to $T_2$, as flipping $ac\xrightarrow{flip}bd$ would then strictly reduce the number of intersections with $T_2$, whereas $bf$ is an edge of $T_2$. Therefore, $c$ and $d$ must lie within the triangle $abe$. However, $abe$ is a face of $T_2$, so it cannot have any other vertices of the triangulation inside it. We have reached a contradiction. We have thus proven that $abcd$ cannot lie on the border of the triangulation. In fact, the contradictory assumption in the previous argument was that $ab$ was an edge of $T_2$. Indeed, we only used the assumption that $ab$ is on the border of the triangulation to get that $ab$ is an edge of both triangulations. Therefore, we actually proved a more general statement: under our assumptions, $ab$ cannot be an edge of $T_2$, i.e. it must have an intersection with $T_2$. As $ab$ was chosen without loss of generality, we proved that none of the edges of the quadrilateral are in $T_2$, i.e. they each have at least one intersection with $T_2$. For those unconvinced, we will reprove this result later when we need it.
\begin{figure}[tbhp] \centering \includegraphics[width=0.4\textwidth]{Pictures/lemma_3_not_both_diag.png} \caption{Both types of diagonally intersecting edges cannot coexist. We cannot simultaneously have at least one edge intersecting $ab$ and $cd$ and at least one edge intersecting $bc$ and $ad$, as otherwise there would be a forbidden intersection inside the quadrilateral.
We assume from now on that there are no edges intersecting $ab$ and $cd$.} \label{fig: lemma 3 not both diags} \end{figure}
\begin{figure} \centering \includegraphics[width=0.3\textwidth]{Pictures/lemma_3_possibilities.png} \caption{All five types of possible edges of $T_2$ intersecting the quadrilateral. They comprise the four corner cutters and the single possible diagonal direction.} \label{fig: lemma 3 possibilities} \end{figure}
Before continuing any further, it is important to realise that edges of $T_2$ cannot intersect the quadrilateral in two different diagonal directions, i.e. we cannot have an edge that intersects $ab$ and $dc$ while another one intersects $bc$ and $ad$ (see \cref{fig: lemma 3 not both diags}). This would indeed imply that there is an intersection of edges of $T_2$ inside the quadrilateral, which is forbidden. Therefore, there is at most one possible diagonal direction of intersection. Without loss of generality, we will assume from now on that the edges diagonally intersecting the quadrilateral can only be those that intersect $ad$ and $bc$. This means that we impose from now on, without loss of generality, that $\#(ab,cd,T_2) = 0$. We insist that this assumption is essential and invite the reader to keep it in mind. An assumption without loss of generality only claims that the proof for either case will be the same up to renaming the vertices accordingly. However, this renaming implies that all consequences derived using this general assumption must be renamed when the assumption changes. In particular, De Loera et al. \cite{de2010triangulations} forget about this important assumption in their proof, which later leads them to a false claim and a logical flaw. We now want to show that the edges $ad$, $ac$, and $bc$ all have a maximal number of intersections with $T_2$. We will say that these edges form a $3$-zigzag of maximally intersecting edges. To prove this result, let us count the number of intersections of the edges of the $3$-zigzag, and also of $bd$. Recall that, since $abcd$ does not satisfy the assumptions of \cref{lemma 2}, $bd$ cannot be an edge of $T_2$. We also assumed that $\#(ab,dc,T_2)=0$, due to our assumption that the edges diagonally intersecting the quadrilateral can only be those that intersect $ad$ and $bc$. Since $ac$ has an intersection with $T_2$, $ac$ is not an edge of $T_2$ (\cref{prop: inter edge not edge other triangu}). Due to \cref{lemma 2.2}, we also have that $\#_a(bc,T_2) = \#_a(cd,T_2) = \#_c(ad,T_2) = \#_c(ab,T_2) = 0$. By assumption, we do not satisfy the hypotheses of \cref{lemma 2}, thus $\#_b(ad,T_2) = \#_b(dc,T_2) = \#_d(ab,T_2) = \#_d(bc,T_2) = 0$. The only possibilities of edges intersecting the quadrilateral $abcd$ are displayed in \cref{fig: lemma 3 possibilities}. We can now count intersections with $T_2$:
\begin{align}
\#(ac,T_2) &= \#(ab,da,T_2) + \#(bc,cd,T_2) + \#(da,bc,T_2), \label{eqn 1}\\
\#(bc,T_2) &= \#(ab,bc,T_2) + \#(bc,cd,T_2) + \#(da,bc,T_2), \label{eqn 2}\\
\#(da,T_2) &= \#(ab,da,T_2) + \#(da,cd,T_2) + \#(da,bc,T_2), \label{eqn 3}\\
\#(bd,T_2) &= \#(ab,bc,T_2) + \#(da,cd,T_2) + \#(da,bc,T_2). \label{eqn 4}
\end{align}
By the maximality of $ac$, we have the following implications:
\begin{align}
\cref{eqn 1} \ge \cref{eqn 2} &\implies \#(ab,da,T_2) \ge \#(ab,bc,T_2), \label{eqn 5}\\
\cref{eqn 1} \ge \cref{eqn 3} &\implies \#(bc,cd,T_2) \ge \#(da,cd,T_2). \label{eqn 6}
\end{align}
By assumption, the flip operation does not reduce the number of intersections, thus $bd$ has at least as many intersections with $T_2$ as $ac$ has:
\begin{equation} \label{eqn 7} \cref{eqn 4} \ge \cref{eqn 1} \implies \#(ab,bc,T_2)+\#(da,cd,T_2)\ge \#(ab,da,T_2)+\#(bc,cd,T_2). \end{equation}
By combining the previously obtained results, we prove equality between the following quantities:
\begin{equation} \label{eqn 8} \begin{cases} \cref{eqn 7} \\ \cref{eqn 5} + \cref{eqn 6} \end{cases} \hspace{-1.3em}\implies \#(ab,da,T_2)+\#(bc,cd,T_2) = \#(ab,bc,T_2)+\#(da,cd,T_2). \end{equation}
Using this equality, we can replace the value of $\#(ab,da,T_2)$ in one of our inequalities to obtain:
\begin{equation} \label{eqn 9} \begin{cases} \cref{eqn 8} \\ \cref{eqn 5} \end{cases} \implies \#(ab,bc,T_2)+\#(da,cd,T_2)-\#(bc,cd,T_2) \ge \#(ab,bc,T_2). \end{equation}
By adding $\#(bc,cd,T_2)$ to this inequality and removing $\#(ab,bc,T_2)$ on both sides, we get:
\begin{equation} \label{eqn 10} \cref{eqn 9}+\#(bc,cd,T_2) \implies \#(da,cd,T_2) \ge \#(bc,cd,T_2). \end{equation}
We have thus proven that the following quantities are in fact equal:
\begin{equation} \label{eqn 11} \begin{cases} \cref{eqn 6} \\ \cref{eqn 10} \end{cases} \implies \#(bc,cd,T_2)=\#(da,cd,T_2). \end{equation}
An analogous argument gives the symmetric result:
\begin{equation} \label{eqn 12} \#(ab,da,T_2)=\#(ab,bc,T_2). \end{equation}
By substituting the two previous results in \cref{eqn 1,eqn 2,eqn 3,eqn 4}, we prove the desired result:
\begin{equation} \#(ac,T_2) = \#(bc,T_2) =\#(da,T_2)=\#(bd,T_2). \end{equation}
Therefore, the edges $ad$, $ac$, and $bc$ are all three maximally intersecting. Thus, the $3$-zigzag of edges of the quadrilateral $abcd$ that contains the diagonal edge $ac$ and follows the direction of the possible diagonally intersecting edges (recall that there are no edges intersecting both $ab$ and $cd$) is a $3$-zigzag of maximally intersecting edges with $T_2$.
\begin{figure}[tbhp] \centering \subfloat[][Joint neighbour in the half-plane delimited by $ab$ not including $abcd$]{\includegraphics[width=0.45\textwidth]{Pictures/lemma_3_inter_1.png}} \hspace{2em} \subfloat[][Disjoint candidate neighbour areas in the half-plane delimited by $ab$ not including $abcd$]{\includegraphics[width=0.45\textwidth]{Pictures/lemma_3_inter_2.png}}\\ \subfloat[][Intersecting candidate neighbour areas in the half-plane delimited by $ab$ not including $abcd$]{\includegraphics[width=0.45\textwidth]{Pictures/lemma_3_inter_3.png}}\\ \caption{Since $ab$ has no intersections with $T_2$, it is an edge of $T_2$. Denote by $F$ the face of $T_2$ with edge $ab$ that is locally around $ab$ on the same half-plane delimited by $ab$ as the quadrilateral $abcd$. As $F$ is a triangle, it is necessarily entirely included within this half-plane. In particular, its third vertex $e$, which is a common neighbour of both $a$ and $b$ in $T_2$, cannot lie on the opposite half-plane, although $a$ and $b$ might have other common neighbours in that other half-plane. As $e$ is a shared neighbour of $a$ and $b$, the domains of existence of neighbours of $a$ and of $b$ must intersect. If this statement is not already contradictory, then the face $F$ contains the vertices $c$ and $d$, which leads to a contradiction.} \label{fig: lemma 3 all intersected} \end{figure}
In the next part of the proof, we will need to know that all the edges of the quadrilateral $abcd$ are intersected.
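Before doing so, we note in passing that the counting argument above lends itself to a mechanical sanity check. The following small Python sketch (an aside of ours, not part of the proof) enumerates small non-negative counts for the five admissible crossing types of \cref{fig: lemma 3 possibilities} and verifies that the constraints \cref{eqn 5,eqn 6,eqn 7} indeed force the equalities \cref{eqn 11,eqn 12} as well as the equality of the four edge counts.
\begin{verbatim}
from itertools import product

# Counts for the five admissible crossing types with the quadrilateral:
# the four corner cutters and the single allowed diagonal direction.
for ab_da, ab_bc, bc_cd, da_cd, da_bc in product(range(5), repeat=5):
    ac = ab_da + bc_cd + da_bc        # eqn (1)
    bc = ab_bc + bc_cd + da_bc        # eqn (2)
    da = ab_da + da_cd + da_bc        # eqn (3)
    bd = ab_bc + da_cd + da_bc        # eqn (4)
    # Maximality of ac among bc and da, and the assumption that the
    # flip ac -> bd does not reduce the number of intersections:
    if ac >= bc and ac >= da and bd >= ac:
        assert bc_cd == da_cd and ab_da == ab_bc   # eqns (11), (12)
        assert ac == bc == da == bd
print("counting argument verified on all enumerated configurations")
\end{verbatim}
The enumeration is of course only a finite-range check; the algebraic derivation above is what proves the statement.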
Recall that we already proved that all edges of $abcd$ are intersected when proving that the quadrilateral could not lie on the border of the triangulation, as the proof there only used the border assumption to get an edge of $abcd$ that belonged to both triangulations, which then led to a contradiction. For sceptical readers, we here re-derive a proof. Note that Hanke et al. \cite{hanke1996edge} and De Loera et al. \cite{de2010triangulations} implicitly and directly obtain this property by claiming the existence of ``corner cutters''. Without loss of generality, assume that the edge $ab$ is not intersected by $T_2$ (see \cref{fig: lemma 3 all intersected}). There is no loss of generality, since we will not use the fact that we fixed the only possibility for diagonally intersecting edges of the quadrilateral to be those intersecting $ad$ and $bc$. Since $T_2$ does not intersect $ab$, $ab$ is an edge of $T_2$ (\cref{prop: inter edge not edge other triangu}). As $ab$ is an edge of both triangulations, and as the area of the plane locally around $ab$ on the half-plane delimited by this edge containing the quadrilateral $abcd$ is triangulated in $T_1$, this area is necessarily also triangulated in $T_2$, albeit possibly differently. In particular, the area locally around $ab$ on this half-plane is part of a face $F$ of $T_2$. As $T_2$ is a triangulation, $F$ is a triangle, which implies that $F$ is entirely included in this half-plane. The vertices of $F$ are $a$, $b$, and some vertex $e$. Necessarily, $e$ is also in this half-plane. Since the quadrilateral $abcd$ does not satisfy the assumptions of \cref{lemma 2}, and since $ac$ is not in $T_2$ by maximality of $ac$ due to $T_1\neq T_2$ (\cref{prop: inter edge not edge other triangu}), the edges $ae$ and $be$ cannot intersect the quadrilateral $abcd$. Therefore, $e$ needs to be in the cone defined by $bad$ that does not include $abcd$, and it must also be in the cone defined by $abc$ that does not include $abcd$. However, $e$ must also lie on the half-plane delimited by $ab$ containing the quadrilateral $abcd$. Thus, if both previously defined cones do not intersect in this half-plane, we have reached a contradiction. If they do, then necessarily both $c$ and $d$ are included within the triangle $abe$. However, $abe$ is the face $F$ of $T_2$, which cannot have any other vertices inside it. We have reached a contradiction. We have thus proven in detail that the edges $ab$, $bc$, $cd$, and $ad$ of the quadrilateral $abcd$ are not part of $T_2$, and thus each have at least one intersection with this triangulation. Now, we want to prove that for each vertex of the quadrilateral there is at least one ``corner cutter''. A ``corner cutter'' of $a$ for $abcd$ is defined as an edge of $T_2$ that intersects $ab$ and $ad$. The same definition extends to the other vertices of the quadrilateral. Assume that $a$ is not cut, i.e. there is no edge of $T_2$ intersecting $ab$ and $ad$: $\#(ab,da,T_2) = 0$. Using \cref{eqn 12}, we get that $\#(ab,bc,T_2)=0$. However, the edges of $T_2$ intersecting $ab$ are those that intersect $ab$ and $ad$ and those that intersect $ab$ and $bc$ (see \cref{fig: lemma 3 possibilities}). Since none of these edges exist, $T_2$ has no edge intersecting $ab$. This contradicts the previous result that $ab$ has at least one intersection with $T_2$. Therefore $a$ is necessarily cut. Similarly, if we assume $b$ is not cut, i.e.
$\#(ab,bc,T_2)=0$, then \cref{eqn 12} gives that $\#(ab,da,T_2) = 0$, which implies once again that $ab$ has no intersection with $T_2$, which is a contradiction. Thus $b$ is cut. By symmetry, $c$ and $d$ are also cut. We have thus proven that all the corners of the quadrilateral $abcd$ are cut. We here take a short break from the detailed proof to recall the idea of the proof and how all the effort made so far is essential for the parts to come. The sketch of the remaining steps towards the final contradiction is the following. We will construct a sequence of maximal edges. More formally, we will construct a triangulated ring in $T_1$, where each edge connecting a vertex on one side of the ring to a vertex on the other side of it is an edge with maximal intersections with $T_2$. Adjacent pairs of faces in this ring form quadrilaterals whose diagonals are maximal edges, so we can apply the previous reasoning to them, as we assumed that we could not find any such quadrilateral for which flipping the diagonal edge would reduce the number of intersections. In particular, all the quadrilaterals will have corner cutters. The reason why we have a ring is that we will sequentially create a strip-like structure that cannot reach the border of the triangulation and that cannot create self-intersections; thus, by finiteness of the set of vertices, we create a cycle, coined as a ring. However, since we are on the plane, a ring necessarily has at least one reflex vertex. By analysing corner cutters of quadrilaterals in the ring having that vertex, we will find two corner cutters that intersect inside the triangulated ring, which leads to a contradiction. We now continue the detailed proof. The structure we want to create will be defined by two sequences of connected vertices in $T_1$. We denote by $(u_k)$ and $(v_l)$ these sequences, which are yet to be defined. Start with $u_1 = d$, $u_2 = c$, $v_1 = a$, and $v_2 = b$. Note that the strip delimited by $u_1u_2$ and $v_1v_2$ is already triangulated in $T_1$ by the triangles $acd$ and $abc$. Further note that all edges of $T_1$ of the form $u_iv_j$ for $(i,j)\in\{1,2\}^2$, namely $da$, $ca$, and $cb$, have maximal intersections with $T_2$. The task is to incrementally find $u_{k+1}$ or $v_{l+1}$, given $u_1,u_2,\cdots,u_k$ and $v_1,v_2,\cdots,v_l$, while maintaining the property that the curved strip delimited by $u$ and $v$ is already triangulated in $T_1$, and that all edges of $T_1$ of the form $u_i v_j$ have maximal intersections with $T_2$. Proving how to do the construction from $k=l=2$ to $k+1=3$ or $l+1=3$ will explain how to perform the incremental construction for general $k$ and $l$. See \cref{fig: lemma 3 strip construction} for an illustration.
\begin{figure}[tbhp] \centering \includegraphics[width=0.5\textwidth]{Pictures/lemma_3_strip_construction.png} \caption{Since the quadrilateral $abcd$ is not on the borders of the triangulation, $b$ and $c$ share a common neighbour $t$ different from $a$. As we showed that $bc$ has maximal intersections with $T_2$, the quadrilateral $abtc$ is a new quadrilateral with a diagonal having maximal intersections with $T_2$. Furthermore, the assumption that flipping its diagonal does not result in a strict decrease in the number of intersections with $T_2$ holds for this new quadrilateral. Thus, by reapplying the same proof as we did on $abcd$, $abtc$ has a $3$-zigzag of maximally intersecting edges. In particular, $bt$ or $ct$ must also have maximal intersections with $T_2$.
If $bt$ is maximally intersecting $T_2$, then add $u_3 = t$ to the $u$ sequence of the strip. If it is not, then $ct$ is maximally intersecting $T_2$, and we then add $v_3 = t$ to the $v$ sequence of the strip. We can reiterate the reasoning in the next quadrilateral, which exists as $abtc$ cannot lie on the border of the triangulations, and which has a maximally intersecting diagonal with $T_2$. This allows us to construct incrementally the desired strip.} \label{fig: lemma 3 strip construction} \end{figure}
We previously showed that, because we assumed without loss of generality that there are no edges of $T_2$ intersecting simultaneously $ab$ and $cd$, the edges $ad$, $ac$, and $bc$ of the 3-zigzag of the quadrilateral $abcd$ passing through the diagonal $ac$ all have maximal intersections, i.e. $ad$ and $bc$ are also maximally intersecting edges. Since $abcd$ is not on the border of the triangulations, the edge $bc$ is the diagonal of a quadrilateral in $T_1$, i.e. there exists a vertex $t\neq a$ such that $bt$ and $ct$ are edges of $T_1$. This new quadrilateral $abtc$ has its diagonal $bc$ with maximal intersections with $T_2$, which implies that it is convex due to \cref{lemma 1}. Furthermore, we assumed that no such quadrilateral could have a diagonal for which a flip would strictly reduce the number of intersections. Therefore, we can redo on $abtc$ the same reasoning that we did for $abcd$. The induction is not as trivial as it seems, and it is easy to make mistakes and omit important cases. Both Hanke et al. \cite{hanke1996edge} and De Loera et al. \cite{de2010triangulations} commit major logical flaws by applying the same reasoning too hastily. By applying the induction, there exists a $3$-zigzag of edges of $abtc$ passing through its diagonal that all have maximum intersections with $T_2$. There are two possible $3$-zigzags for $abtc$: the first consists of the edges $ac$, $bc$, and $bt$; the second consists of $ab$, $bc$, and $ct$. Without further assumptions, we do not know which of these two $3$-zigzags is maximal, even though we assumed that the only diagonally intersecting edges of $abcd$ are those intersecting $ad$ and $bc$. We will look at the only two cases: either $bt$ has maximal intersections with $T_2$, or it does not and then $ct$ has maximal intersections with $T_2$.
\begin{figure}[tbhp] \centering \includegraphics[width=0.6\textwidth]{Pictures/lemma_3_second_case_zigzag_must_be_careful.png} \caption{We cannot exclude the following case: although $ad$, $ac$, and $bc$ form a 3-zigzag of maximally intersecting edges with $T_2$, and although, by assumption, every edge of $T_2$ diagonally intersecting $abcd$ intersects $ad$ and $bc$, the 3-zigzag of maximally intersecting edges with $T_2$ in $abtc$ is $ab$, $bc$, and $ct$, and not $ac$, $bc$, and $bt$. It is possible that all diagonally intersecting edges of $abcd$ are corner cutters in the next quadrilateral, whereas corner cutters of one vertex of $abcd$ become diagonally intersecting edges of $abtc$. This new zigzag does not necessarily create a trivial conflict with corner cutters of either $abcd$ or $abtc$. There is thus no apparent contradiction in this case.} \label{fig: lemma 3 second case zigzag must be careful} \end{figure}
De Loera et al. \cite{de2010triangulations} here forget one of the cases, which is the most important gap in their book version of the proof.
This omission is due to their overlooking that the 3-zigzag was obtained under the fundamental assumption that edges of $T_2$ diagonally intersecting $abcd$ could only intersect $ad$ and $bc$, i.e. the assumption that $\#(ab,cd, T_2) = 0$. Indeed, they implicitly assume that the 3-zigzag extends to $ac$, $bc$, and $bt$ in $abtc$. But the direction of the diagonal is given by the direction of the possible diagonally intersecting edges of $T_2$ with the quadrilateral. Unfortunately, having that all edges of $T_2$ diagonally intersecting $abcd$ intersect $ad$ and $bc$ does not imply that they all also intersect $ac$ and $bt$. In fact, it is possible that none of these edges intersect $bt$. In such cases, we could actually have that all edges of $T_2$ diagonally intersecting $abtc$ intersect $ab$ and $ct$ instead. Even by taking into account the existence of corner cutters, there are cases where this other direction of 3-zigzag does not immediately imply a contradiction. Indeed, edges diagonally intersecting $ad$ and $bc$ could all be corner cutters of $c$ in $abtc$, corner cutters of $b$ in $abcd$ could all be diagonally intersecting $abtc$ by intersecting $ab$ and $ct$, and corner cutters of $c$ in $abcd$ could all be corner cutters of $c$ in $abtc$. Because of their mistake, fixing the initial $3$-zigzag incorrectly fixes for the authors, inductively, a whole sequence of zigzags in which each new maximal edge has as a vertex the latest vertex added to the sequence. For this reason, De Loera et al. \cite{de2010triangulations} only introduce the term ``zigzag''. As we have seen, such a zigzagging only occurs between three edges of a quadrilateral and does not generalise: a new vertex from a 3-zigzag may not be a vertex of the next maximal edge in the sequence. To fix this, we introduced the concept of a $3$-zigzag rather than a zigzag. Overall, this error shows the importance of rigorous attention to detail in the derivation of the proof, rather than using very quick and seemingly reasonable arguments justified by misleading figures. Such arguments may hold in some particular configurations but turn out to be false for general ones when all cases are carefully scrutinised. See \cref{fig: lemma 3 second case zigzag must be careful} for an illustration of such a possible but, so far, overlooked case. We return to the discussion on which of the two $3$-zigzags is maximal. Consider the first case: $bt$ has maximal intersections with $T_2$. We then choose $u_{k+1} = u_3 = t$. The strip delimited by $u_1u_2\cdots u_{k+1}$ and $v_1v_2\cdots v_l$ is already triangulated in $T_1$, and each edge of $T_1$ of the form $u_i v_j$ has maximal intersections with $T_2$. Note that we have not yet created $v_{l+1} = v_3$. As we showed previously for $abcd$, the quadrilateral $abtc$ cannot lie on the border of the triangulation, therefore $b$ and $t$ have a second shared neighbour, i.e. there exists a vertex $p\neq c$ such that $bp$ and $tp$ are edges of $T_1$. This new quadrilateral has a maximally intersecting diagonal. We can thus continue from here the construction of $u$ and $v$ by induction. Consider the second case: $bt$ does not have maximal intersections with $T_2$. Since $abtc$ must have a $3$-zigzag of maximally intersecting edges including its diagonal edge, $ct$ has maximal intersections with $T_2$. We thus choose $v_{l+1} = v_3 = t$, but do not yet create $u_{k+1}$.
The strip delimited by $u_1u_2\cdots u_k$ and $v_1v_2\cdots v_{l+1}$ is already triangulated in $T_1$, and each edge of $T_1$ of the form $u_i v_j$ has maximal intersections with $T_2$. As we showed previously for $abcd$, the quadrilateral $abtc$ cannot lie on the border of the triangulation, therefore $c$ and $t$ have a second shared neighbour, i.e. there exists a vertex $q\neq b$ such that $cq$ and $tq$ are edges of $T_1$. This new quadrilateral has a maximally intersecting diagonal. We can thus continue from here the construction of $u$ and $v$ by induction. Using this process, we can incrementally construct the sequences $u$ and $v$ until we reach a cycle, that is, until we reach a point where the current $k,l$ points are $u_k = u_1$ and $v_l = v_1$ for $k$ or $l$ different from $1$. Indeed, reaching a cycle is inescapable. Since the quadrilaterals considered at each step always have a maximally intersecting diagonal with $T_2$, they can never lie on the border of the triangulation, which implies that we can always continue the construction process. However, the number of vertices, and thus of edges, is finite. Therefore, at some point, we will have cycled. Hanke et al. \cite{hanke1996edge} miss the fact that cycling is a possibility and directly claim that this inductive process necessarily reaches the border and thus leads to a contradiction. De Loera et al. \cite{de2010triangulations} correct this by analysing a cycle of maximal edges. Unfortunately, they incorrectly construct this cycle as a global zigzag of maximal edges; in fact, a correct analysis of the 3-zigzags, as carried out above, shows that the cycle construction does not provide a global zigzag of maximal edges. This logical error is the greatest flaw in their proof. See \cref{fig: lemma 3 ring} for an example of a ring of maximal edges, constructed by analysing local 3-zigzags of maximal edges, but for which the maximal edges do not globally zigzag throughout the ring.
\begin{figure}[tbhp] \centering \includegraphics[width=0.5\textwidth]{Pictures/lemma_3_ring.png} \caption{The strip delimited by $u$ and $v$ with edges of $T_1$ between both sequences, i.e. edges of $T_1$ of the form $u_iv_j$, all of which have maximal intersections with $T_2$. As the strip cannot reach the border of the triangulation, by finiteness of the problem it necessarily cycles. Furthermore, since it cannot self-intersect, and since we are in the Euclidean plane, the strip has the topology of a ring, and it necessarily has at least one reflex vertex.} \label{fig: lemma 3 ring} \end{figure}
As the strip is delimited by $u_1\cdots u_ku_1$ and $v_1v_2\cdots v_lv_1$, which are made of edges of $T_1$, there necessarily is one vertex $u_i$ or $v_j$ that is a reflex vertex. See \cref{fig: lemma 3 ring} for an illustration. We say $u_i$ is a reflex vertex of the strip if $\widehat{u_{i-1}u_i u_{i+1}} < \pi$, where $\widehat{u_{i-1}u_i u_{i+1}}$ is the geometric angle of the cone $u_{i-1}u_i u_{i+1}$ that does not contain the triangles of the strip with at least one vertex among $u_{i-1}$, $u_i$, or $u_{i+1}$. It can be seen as saying that the ``outer'' angle of the strip at $u_i$ is strictly smaller than $\pi$. Similarly, we say $v_j$ is a reflex vertex of the strip if the ``outer'' angle of the strip at $v_j$ is strictly smaller than $\pi$. If the strip did not have a reflex vertex, then necessarily it would intersect one of the edges $u_i v_j$, which is forbidden as the strip is composed only of edges of the planar triangulation $T_1$.
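For concreteness, testing whether a vertex of a boundary chain is reflex is the standard orientation test on three consecutive vertices. The following minimal Python sketch (with made-up coordinates; we illustrate the primitive on a simple polygon walked counter-clockwise, the orientation convention for the strip being the one defined above) flags a vertex as reflex when the walk turns clockwise there.
\begin{verbatim}
def orient(p, q, r):
    # Cross product sign: positive = left turn, negative = right turn.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def is_reflex(prev_v, v, next_v):
    # For a simple polygon walked counter-clockwise, a right turn at v
    # means the interior angle at v exceeds pi: v is a reflex vertex.
    return orient(prev_v, v, next_v) < 0

# Hypothetical boundary chain, walked counter-clockwise:
poly = [(0, 0), (4, 1), (8, 0), (7, 4), (4, 2), (1, 4)]
reflex = [v for p, v, n in zip(poly[-1:] + poly[:-1], poly,
                               poly[1:] + poly[:1]) if is_reflex(p, v, n)]
print(reflex)  # here: [(4, 1), (4, 2)]
\end{verbatim}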
We thus have that the strip has the topology of a ring. Without loss of generality, assume that the ring has a reflex vertex at a vertex of the $u$ sequence, denoted $u_i$. By renumbering the $v$ vertices, let $v_{-1}, v_0, v_1, \cdots, v_r$ be the vertices $v_j$, in clockwise order, such that $u_i v_j$ are edges of $T_1$. Note that $r\ge 0$, i.e. there are at least $2$ vertices of the $v$ sequence connected to $u_i$ in $T_1$ on the strip. Indeed, by construction of the strip, each vertex of $u$ is connected to at least one vertex of $v$ in $T_1$, and vice versa each vertex of $v$ is connected to at least one vertex of $u$ in $T_1$. If $u_i$ were connected to only one vertex of $v$ in $T_1$, say $v_j$, then look at the quadrilateral of the strip $u_i u_{i-1} v_j u_{i+1}$, which has $u_i v_j$ as maximally intersecting diagonal with $T_2$. This quadrilateral is convex (\cref{lemma 1}). This property implies that the angle formed by $u_{i-1}u_iu_{i+1}$ including the quadrilateral is smaller than or equal to $\pi$. Therefore, such a vertex $u_i$ with only one neighbour in $v$ in the strip cannot be a reflex vertex. Thus, the reflex vertex $u_i$ of the ring has at least two neighbours in $v$ in $T_1$. We want to show that all corner cutters of $u_i$ in quadrilaterals $v_{j-1}v_{j}v_{j+1}u_i$ necessarily intersect $u_{i-1}u_i$ if the angle $\widehat{u_{i-1}u_i v_{j+1}}$, defined by the cone $u_{i-1}u_i v_{j+1}$ containing the vertices $v_s$ for $-1\le s\le j$, is smaller than or equal to $\pi$. For that statement to be proven, we first need to show that the concatenation of these triangles forms a convex polygon $u_i u_{i-1} v_{-1}\cdots v_{j+1}$ when the previously defined angle satisfies $\widehat{u_{i-1}u_i v_{j+1}}\le \pi$.
\begin{figure}[tbhp] \centering \includegraphics[width=0.7\textwidth]{Pictures/lemma_3_reflex_convex.png} \caption{The successive concatenation of triangles of the ring around $u_i$ forms a convex polygon $u_iu_{i-1}v_{-1}v_0\cdots v_{j+1}$ as long as the angle defined by the cone $u_{i-1}u_iv_{j+1}$ containing the other vertices of the polygon is smaller than or equal to $\pi$. This result is due to the convexity of the successive overlapping quadrilaterals of the strip, which all have the common vertex $u_i$ on their diagonal.} \label{fig: lemma 3 reflex convex} \end{figure}
We prove the convexity of the concatenation of triangles, while the angle at $u_i$ of the agglomerated polygon is less than or equal to $\pi$, by a simple induction. See \cref{fig: lemma 3 reflex convex} for an illustration. The result easily holds for $u_i u_{i-1} v_{-1} v_0$, as by construction of the ring this quadrilateral is convex. Assume now that the polygon $u_i u_{i-1} v_{-1}\cdots v_{s}$ is convex for some $s\ge 0$, and that the angle, defined by the cone $u_{i-1}u_iv_{s+1}$ and including $v_{-1}\cdots v_{s}$, is smaller than or equal to $\pi$. By construction of the ring, $u_iv_{s-1}v_{s}v_{s+1}$ is a convex quadrilateral. This fact constrains $v_{s+1}$ to lie in the cone defined by $u_iv_{s-1}v_s$ and including the triangle $u_iv_{s-1}v_s$. However, due to the angular assumption on $v_{s+1}$ with respect to $u_{i-1}u_i$, necessarily $v_{s+1}$ also lies in the half-plane delimited by $u_{i-1}u_i$ and including the vertices $v_{-1}\cdots v_{s}$. Thus, $v_{s+1}$ belongs to the intersection of both domains. Then, all segments $u_{i-1}v_{s+1}$ and $v_{-1}v_{s+1}, \cdots, v_{s}v_{s+1}$ are included in the polygon $u_i u_{i-1} v_{-1}\cdots v_{s+1}$, which means that it is convex.
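The convexity of the concatenated polygon can be tested with the same orientation primitive: a simple polygon is convex exactly when all turns along its boundary have the same sign. Below is a small self-contained Python sketch with hypothetical coordinates for a fan of triangles around $u_i$ (the coordinates are ours, for illustration only).
\begin{verbatim}
def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def is_convex_polygon(poly):
    # All boundary turns must share one orientation sign
    # (general position assumed: no three consecutive collinear vertices).
    n = len(poly)
    signs = {orient(poly[i - 1], poly[i], poly[(i + 1) % n]) > 0
             for i in range(n)}
    return len(signs) == 1

# Fan around u_i: triangles u_i u_{i-1} v_{-1}, u_i v_{-1} v_0, u_i v_0 v_1
# concatenated into the polygon u_i u_{i-1} v_{-1} v_0 v_1.
u_i, u_im1 = (0, 0), (-2, 3)
v = [(-1, 4), (1, 4), (3, 3)]               # v_{-1}, v_0, v_1
print(is_convex_polygon([u_i, u_im1] + v))  # True for these coordinates
\end{verbatim}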
\begin{figure}[tbhp] \centering \includegraphics[width=0.7\textwidth]{Pictures/lemma_3_reflex_corner_cutters.png} \caption{The corner cutter of $u_i$ in the quadrilateral $u_iu_{i-1}v_{-1}v_0$ necessarily intersects $u_{i-1}u_i$. Furthermore, the polygon of successively concatenated triangles is convex when it is entirely contained within a half-plane delimited by $u_{i-1}u_i$. Thus, a simple induction shows that all the corner cutters of $u_i$ inside the quadrilaterals of this convex polygon must intersect $u_{i-1}u_i$. However, the corner cutter of $u_i$ in the first quadrilateral that cannot be concatenated with the previous convex polygon in a convex way intersects the corner cutter of $u_i$ of the previous quadrilateral inside the triangulated ring. This leads to the final contradiction.} \label{fig: lemma 3 corner cutters} \end{figure}
We now prove that, in each quadrilateral of such a concatenation of triangles $u_i u_{i-1} v_{-1}\cdots v_{j+1}$ with the previously defined angle $\widehat{u_{i-1}u_iv_{j+1}}\le \pi$, the corner cutters of $u_i$ intersect $u_{i-1}u_i$. See \cref{fig: lemma 3 corner cutters} for an illustration. First, notice that they cannot come from $u_{i-1}$ or $v_s$ for any $-1\le s\le j-1$, as such corner cutters would be edges that intersect a quadrilateral with maximal diagonal while coming from one of the vertices of that same quadrilateral, a situation we showed could never hold under the assumption that no flip of a maximally intersecting edge reduces the total number of intersections. Thus, the corner cutters of $u_i$ in $u_iu_{i-1}v_{-1}v_0$ intersect $u_{i-1}u_i$. Look now at the corner cutters of $u_i$ in the quadrilateral $u_i v_{-1}v_0v_1$. These edges of $T_2$ necessarily intersect either $u_{i-1}u_i$ or $u_{i-1}v_{-1}$; in particular, they cannot come from $u_{i-1}$. However, by convexity of the polygon $u_iu_{i-1}v_{-1}v_0v_1$, if they intersect $u_{i-1}v_{-1}$, then they also intersect the corner cutters of $u_i$ in the previous quadrilateral $u_i u_{i-1}v_{-1}v_0$, of which we know there exists at least one, as these corner cutters intersect $u_{i-1}u_i$ and $u_{i}v_0$. Thus, the corner cutters of $u_i$ in $u_i v_{-1}v_0v_1$ intersect $u_{i-1}u_i$. We can repeat this argument, looking at corner cutters of $u_i$ along successive quadrilaterals and using the fact that the polygon $u_i u_{i-1} v_{-1}\cdots v_{s+1}$ remains convex for $-1\le s\le j$, which proves by induction that, in each quadrilateral of a concatenation of triangles $u_i u_{i-1} v_{-1}\cdots v_{j+1}$ with the previously defined angle $\widehat{u_{i-1}u_iv_{j+1}}\le \pi$, the corner cutters of $u_i$ intersect $u_{i-1}u_i$. We are now ready to reach a contradiction. See \cref{fig: lemma 3 corner cutters} for an illustration. Since $u_i$ is a reflex vertex of the ring, the previously defined angle $\widehat{u_{i-1}u_iu_{i+1}}$, along the cone containing the vertices $v_s$ for $-1\le s\le r$, is strictly greater than $\pi$. We can thus take the first vertex $w_{j+2}$ such that $\widehat{u_{i-1}u_iw_{j+2}} >\pi$, where $w_{j+2}$ is either $v_{j+2}$ for some $j\le r-2$ or $u_{i+1}$, i.e. $\widehat{u_{i-1}u_iv_{j+1}}\le \pi$ and $\widehat{u_{i-1}u_iw_{j+2}}> \pi$ using the previous definition of angles. Since $\widehat{u_{i-1}u_iv_{j+1}}\le \pi$, the corner cutters of $u_i$ in the quadrilateral $u_i w_{j-1}v_jv_{j+1}$ intersect $u_{i-1}u_i$, where $w_{j-1}$ is $v_{j-1}$ if $j\ge 0$ and $u_{i-1}$ otherwise.
On the other hand, the corner cutters of $u_i$ in $u_i v_j v_{j+1}w_{j+2}$ intersect $u_i v_j$ and $u_i w_{j+2}$. In particular, they cannot intersect $u_{i-1}u_i$, since $\widehat{u_{i-1}u_iw_{j+2}}> \pi$. Thus, they must intersect the previous corner cutters inside the polygon $u_i w_{j-1} v_{j}v_{j+1}w_{j+2}$, which is not allowed, as it would create either a vertex inside a face of the triangulation $T_1$ or intersecting edges within the planar triangulation $T_2$. However, these corner cutters must exist, as previously proven. Therefore, we have reached a contradiction. Because De Loera et al. \cite{de2010triangulations} missed an important case when assuming that only the first case was possible when extending the $3$-zigzag to the next quadrilateral (they incorrectly and implicitly claim that $bt$ is maximal when $ad$, $ac$, and $bc$ are maximal), they seem to reach a contradiction much faster. Indeed, as a consequence of their incorrect claim, any vertex of the ring can only be present in two overlapping quadrilaterals, i.e. any $u_i$ is connected to two vertices $v_j$ and $v_{j+1}$ but not $v_{j+2}$, and likewise all $v_j$ are connected to two vertices $u_i$ and $u_{i+1}$ but not $u_{i+2}$. This incorrect result leads to a simplified analysis of corner cutters at reflex vertices of the ring, say $u_i$ without loss of generality, as then the corner cutters of $u_i$ in the two successive quadrilaterals must necessarily meet inside the overlapping quadrilaterals, which leads to a contradiction. To avoid the pitfall of De Loera et al. \cite{de2010triangulations}, we saw that further work was needed, as simply analysing two consecutive corner cutters around a reflex vertex does not necessarily provide a contradiction. We must instead find the right successive overlapping quadrilaterals for that reflex vertex that guarantee a contradiction when studying their corner cutters of the reflex vertex. This is the final contradiction against the earliest assumption. This assumption was that no matter which maximally intersecting edge of $T_1$ with $T_2$ we chose, flipping it would not strictly reduce the number of intersections with $T_2$. Thus, we have proven that there exists a maximally intersecting edge of $T_1$ that we can flip, and for which this flip operation reduces the number of intersections with $T_2$ by at least one.
\end{proof}
We can now conclude the proof of \cref{th: goal}.
\begin{proof}
Starting from $T_1$, as long as the current triangulation differs from $T_2$, there exist edges of the current triangulation with at least one intersection with $T_2$, and in particular edges with the maximal number of intersections with $T_2$. According to \cref{lemma 3}, there exists at least one of these edges such that, when flipped, the total number of intersections with $T_2$ reduces by at least one. To reach $T_2$ we need only reduce the number of intersections to $0$. By following the strategy of flipping a maximally intersecting edge with $T_2$ that reduces the number of intersections, we perform a sequence of flips, each of which reduces this number by at least one, until the current triangulation reaches $T_2$. Therefore, we have found a sequence of edge flips bringing $T_1$ to $T_2$ of length at most the original total number of intersections $\#(T_1,T_2)$ between $T_1$ and $T_2$. As $d_f(T_1,T_2)$ is the length of a minimum sequence of edge flips from $T_1$ to $T_2$, we have proven that ${d_f(T_1,T_2) \le \#(T_1,T_2)}$.
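Before turning to the counting estimate, the inequality just proven can also be checked exhaustively on a toy instance. The following self-contained Python sketch (our own illustration, independent of the cited works) enumerates the five triangulations of a convex pentagon, builds the flip graph, and verifies $d_f(T_1,T_2)\le \#(T_1,T_2)$ for every pair; it also confirms the inner-edge count $3n-2n_b-3$ derived below for the hole-free case.
\begin{verbatim}
from itertools import combinations
from collections import deque

n = 5  # convex pentagon with vertices labelled 0..4 in cyclic order

def crossing(d1, d2):
    # Two diagonals of a convex polygon cross iff their endpoints
    # interleave in the cyclic order of the polygon's vertices.
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return (a < c < b < d) or (c < a < d < b)

diagonals = [(i, j) for i, j in combinations(range(n), 2)
             if (j - i) % n not in (1, n - 1)]
# A triangulation = a maximal set of pairwise non-crossing diagonals
# (n - 3 = 2 of them for the pentagon).
tris = [frozenset(s) for s in combinations(diagonals, n - 3)
        if not any(crossing(p, q) for p, q in combinations(s, 2))]
assert len(tris) == 5  # the Catalan number C_3

def flip_distance(T1, T2):
    # Breadth-first search in the flip graph: for a convex polygon, two
    # triangulations are one flip apart iff they differ in one diagonal.
    dist, queue = {T1: 0}, deque([T1])
    while queue:
        T = queue.popleft()
        for U in tris:
            if len(T ^ U) == 2 and U not in dist:
                dist[U] = dist[T] + 1
                queue.append(U)
    return dist[T2]

def inter(T1, T2):
    # #(T1, T2): crossing pairs; shared boundary edges never cross.
    return sum(crossing(p, q) for p in T1 for q in T2)

for T1, T2 in combinations(tris, 2):
    assert flip_distance(T1, T2) <= inter(T1, T2)
assert all(len(T) == 3 * n - 2 * n - 3 for T in tris)  # n_b = n, h = 0
print("d_f <= #(T1, T2) verified on all pairs of pentagon triangulations")
\end{verbatim}
Needless to say, such a check on one small instance proves nothing; it merely illustrates the statement on concrete data.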
We can now conclude the proof of \cref{th: goal}. \begin{proof} Starting from $T_1$, as long as the current triangulation differs from $T_2$, there exist edges of the current triangulation with at least one intersection with $T_2$, and among them edges with a maximal number of intersections with $T_2$. According to \cref{lemma 3}, at least one of these edges is such that, when flipped, the total number of intersections with $T_2$ is reduced by at least one. To reach $T_2$ we need only reduce the number of intersections to $0$. By following the strategy of flipping a maximally intersecting edge that reduces the number of intersections, we perform a sequence of flips that reduces this number by at least one at each step until the current triangulation coincides with $T_2$. Therefore, we have found a sequence of edge flips bringing $T_1$ to $T_2$ of length at most the original total number of intersections $\#(T_1,T_2)$ between $T_1$ and $T_2$. Since the flip distance is the minimum length of such a sequence, we have proven that ${d_f(T_1,T_2) \le \#(T_1,T_2)}$. The last inequality of \cref{th: goal} is achieved by naively upper-bounding the number of intersections between both triangulations using a worst-case analysis. Since both triangulations share the same boundary edges, in the worst case, i.e. the one giving the most intersections, all inner edges of $T_1$ intersect all inner edges of $T_2$. We then estimate the number of these edges using the Euler formula. The computation is well known and thus detailed neither in Hanke et al. \cite{hanke1996edge} nor in De Loera et al. \cite{de2010triangulations}. We nevertheless provide its details here for completeness. For planar graphs with holes, the Euler formula is \begin{equation} n-e+f=1-h, \end{equation} where $n$ is the number of vertices, $e$ is the number of edges, $f$ is the number of faces (excluding the outer face and the holes), and $h$ is the number of holes (i.e. the number of fixed non-triangulated inner faces). For each face, we can count the number of edges it sees. Since all faces are triangles, this amounts to a count of $3f$. Looking at it from the point of view of the edges instead, each inner edge is neighboured by exactly two faces and each border edge is neighboured by exactly one face. Thus we get an edge count of $2e_{int}+e_{b}$, where $e_{int}$ is the number of inner edges and $e_b$ is the number of boundary edges. Note that $e_b = n_b$ is the sum of the lengths of the polygons $B_i$, and that it is at least the number of boundary vertices, with equality exactly when each boundary vertex is present on a single boundary polygon. To see this, cyclically order the vertices in a clockwise fashion along the boundary for each boundary contour (around each hole and around the outer face). This naturally translates to a cyclical ordering of boundary edges where each edge can be uniquely associated with its counterclockwise vertex and vice versa. We therefore have an edge count of $2e_{int}+n_b$. By equating both counts, we get that $3f = 2e_{int}+n_b$. We then plug this result into the Euler formula (multiplied on both sides by 3) to obtain $3n-3e+2e_{int}+n_b = 3 - 3h$. However, $e = e_{int}+e_b$. Therefore, $e_{int} = 3n-2n_b-3+3h$. In particular, note how this formula does not depend on the triangulation of the set of vertices. Therefore, we can crudely upper-bound the total number of intersections by $\#(T_1,T_2)\le(3n-2n_b-3+3h)^2$. Hanke et al. \cite{hanke1996edge} similarly estimated the number of inner edges as $3n-2n_b-3$, as they only considered full triangulations of the convex hull of the points $S$. De Loera et al. \cite{de2010triangulations} also use this assumption in their book and provide the same bound. However, as we have shown, the proof naturally extends to more general triangulations that can incorporate holes and an outer border given by an arbitrary simple polygon rather than the boundary of the convex hull. \end{proof}
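As a quick sanity check of the edge count just derived (our own illustration, not part of the cited works), the following snippet evaluates $e_{int}=3n-2n_b-3+3h$ and the resulting crude bound, and verifies the count on two small configurations:

\begin{verbatim}
# Sanity check of e_int = 3n - 2*n_b - 3 + 3h and of the crude bound
# #(T1,T2) <= e_int^2 (our illustration, not from the original papers).
def inner_edges(n, n_b, h):
    return 3 * n - 2 * n_b - 3 + 3 * h

def crude_bound(n, n_b, h):
    return inner_edges(n, n_b, h) ** 2

# Triangulated convex octagon: 8 vertices, all on the boundary, no
# hole; it has 8 - 3 = 5 diagonals, i.e. 5 inner edges.
assert inner_edges(8, 8, 0) == 5
# Square ring (a square with a square hole): 8 boundary vertices, one
# hole; any triangulation of the ring has 8 triangles and 8 inner edges.
assert inner_edges(8, 8, 1) == 8
\end{verbatim}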
\section{Conclusion} We revisited the proof that the flip distance between two triangulations of the same finite set of vertices in a planar region is upper-bounded by the total number of edge intersections between them, $d_f \le \#(T_1,T_2)$, and we provided a crude upper-bound estimate of this number of intersections. This upper bound can be attained for specific configurations. On the other hand, there are known examples of optimal sequences of edge flips that do not necessarily follow the heuristic of systematically reducing the number of intersections \cite{hanke1996edge}.

The global line of attack of the proof is due to Hanke et al. \cite{hanke1996edge}, later revised by De Loera et al. \cite{de2010triangulations} in their book. Our main contribution is to provide what we believe to be a fully detailed and extensively revised proof that corrects several errors and false claims, while detailing every step in a hopefully readable fashion. Furthermore, by working through the full details, we showed that the proof readily applies to triangulations of simple polygons and of polygonal regions with holes, and we refined the estimate of the number of intersections to take this possibility into account. \bibliographystyle{siamplain}
{ "timestamp": "2021-06-29T02:26:34", "yymm": "2106", "arxiv_id": "2106.14408", "language": "en", "url": "https://arxiv.org/abs/2106.14408", "abstract": "We revisit here a fundamental result on planar triangulations, namely that the flip distance between two triangulations is upper-bounded by the number of proper intersections between their straight-segment edges. We provide a complete and detailed proof of this result in a slightly generalised setting using a case-based analysis that fills several gaps left by previous proofs of the result.", "subjects": "Computational Complexity (cs.CC)", "title": "A Bound on the Edge-Flipping Distance between Triangulations (Revisiting the Proof)", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378963, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.7079584802916257 }
https://arxiv.org/abs/math/0610316
Producing Set Theoretic Complete Intersection Monomial Curves in P^n
In this paper we produce infinitely many examples of set-theoretic complete intersection monomial curves in $\mathbb{P}^{n+1}$, starting with a set-theoretic complete intersection monomial curve in $\mathbb{P}^{n}$. In most of the cases our results cannot be obtained through the semigroup gluing technique, and we can tell apart explicitly which cases are new.
\section{Introduction} It is well known that a variety in an $n$-space can be written as the intersection of $n$ hypersurfaces set-theoretically, see \cite{eis}. It is then natural to ask whether this number is minimal. A curve in $n$-space which is the intersection of $n-1$ hypersurfaces is called a set-theoretic complete intersection, s.t.c.i. for short. If moreover its defining ideal is generated by $n-1$ polynomials, then it is called an ideal-theoretic complete intersection, abbreviated i.t.c.i. Determining set-theoretic or ideal-theoretic complete intersection curves is a classical and longstanding problem in algebraic geometry. An associated problem is to give explicitly the equations of the hypersurfaces involved. When the characteristic of the field $K$ is positive, it is known that all monomial curves are s.t.c.i. in $\mathbb{P}^{n}$, see \cite{moh}. However, the question is still open in the characteristic zero case despite the tremendous progress in this direction, see for example \cite{eto2,kat,thoma4} and the references there for some recent activity. The purpose of the present paper is to describe a method to produce \textit{infinitely many} s.t.c.i. monomial curves starting from one single s.t.c.i. monomial curve, see section 4. Our approach has the side novelty of describing explicitly the equations of the hypersurfaces on which these new monomial curves lie as s.t.c.i. On the other hand, semigroup gluing being one of the most popular techniques of recent research, we develop numerical criteria to determine when these new curves can or cannot be obtained via gluing, see section 3. In the last section we discuss several consequences and variations of these results. \section{Preliminaries} Throughout the paper, $K$ will be assumed to be an algebraically closed field of characteristic zero. By an \textit{affine monomial curve} $C(m_{1},\dots,m_{n})$, for some positive integers $m_{1}<\cdots<m_{n}$ with $gcd(m_{1},\dots,m_{n})=1$, we mean a curve with generic zero $(v^{m_{1}},\dots ,v^{m_{n}})$ in the affine $n$-space $\mathbb{A}^{n}$, over $K$. By a \textit{projective monomial curve} $\overline{C}(m_{1},\dots,m_{n})$ we mean a curve with generic zero \[(u^{m_{n}},u^{m_{n}-m_{1}}v^{m_{1}},\dots ,u^{m_{n}-m_{n-1}}v^{m_{n-1}},v^{m_{n}}) \] in the projective $n$-space $\mathbb{P}^{n}$, over $K$. Note that $\overline{C}(m_{1},\dots,m_{n})$ is the projective closure of $C(m_{1},\dots,m_{n})$. Whenever we write $\overline{C} \subset \mathbb{P}^n$ to simplify the notation, we always mean a monomial curve $\overline{C}(m_{1},\dots,m_{n})$ for some fixed positive integers $m_1<\dots<m_n$ with $gcd(m_1,\dots,m_n)=1$. Let $m$ be a positive integer in the numerical semigroup generated by $m_1,\dots,m_n$, i.e. $m=s_1 m_1 +\cdots+ s_n m_n$ where $s_1,\dots ,s_n$ are some non-negative integers. Note that in general there is no unique choice of $s_1,\dots,s_n$ to represent $m$ in terms of $m_1,\dots,m_n$. We define the degree $\delta(m)$ of $m$ to be the minimum of all possible sums $s_1+\cdots+s_n$. If $\ell$ is a positive integer with $gcd(\ell,m)=1$, then we say that the monomial curve $\overline{C}(\ell m_1,\dots,\ell m_n, m)$ in $\mathbb{P}^{n+1}$ is an {\em extension} of $\overline{C}$. We similarly define $C(\ell m_1,\dots,\ell m_n, m)$ to be an {\em extension} of $C=C(m_{1},\dots,m_{n})$. We say that an extension is {\em nice} if $\delta(m) > \ell$ and {\em bad} otherwise, adopting the terminology of \cite{pf}.
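As an illustrative aside (ours, not part of the original text), the degree $\delta(m)$ can be computed by a standard dynamic programming recursion, after which an extension is classified as nice or bad by comparing $\delta(m)$ with $\ell$:

\begin{verbatim}
# Illustrative sketch (ours): delta(m) is the least s_1 + ... + s_n
# with m = s_1 m_1 + ... + s_n m_n and s_i >= 0 (an unbounded
# coin-change dynamic program).
import math

def delta(m, gens):
    best = [0] + [math.inf] * m
    for t in range(1, m + 1):
        for g in gens:
            if g <= t and best[t - g] + 1 < best[t]:
                best[t] = best[t - g] + 1
    return best[m]  # math.inf when m is not in the numerical semigroup

def is_nice(ell, m, gens):
    # the extension C(ell m_1, ..., ell m_n, m) is nice iff delta(m) > ell
    return math.gcd(ell, m) == 1 and delta(m, gens) > ell

# For the curve C(3,4,6) with m = 6s + 7 considered later in the paper,
# delta(m) = s + 2; e.g. s = 3 gives delta(25) = 5, a nice extension.
assert delta(25, (3, 4, 6)) == 5
assert is_nice(1, 25, (3, 4, 6))
\end{verbatim}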
When the integers $m_1,\dots,m_n$ are fixed and understood in a discussion, we will use $\overline{C}_{\ell,m}$ to denote the extensions $\overline{C}(\ell m_1,\dots,\ell m_n, m)$ in $\mathbb{P}^{n+1}$, and use $C_{\ell,m}$ to denote the extensions $C(\ell m_1,\dots,\ell m_n, m)$ in $\mathbb{A}^{n+1}$. \subsection{Extensions of Monomial Curves in $\mathbb{A}^{n}$} Let $C=C(m_{1},\dots,m_{n})$ be a s.t.c.i. monomial curve in $\mathbb{A}^{n}$. In this section, we show that all extensions of $C$, in the sense defined above, are s.t.c.i. For this we first define, for any ideal $I \subset K[x_1,\dots,x_{n+1}]$, $\Gamma_{\ell}(I)$ to be the ideal generated by all polynomials of the form $\Gamma_{\ell}({g})$, where $\Gamma_{\ell}({g(x_1,\dots ,x_{n+1})})=g(x_1,\dots ,x_n,x_{n+1}^{\ell})$, for all $g\in I$. We use the following trick of M. Morales: \begin{lemma}[\mbox{\cite[Lemma ~3.2]{mor}}]\label{mor} Let $Y_{\ell}$ be the monomial curve $C(\ell m_1,\dots,\ell m_n,m_{n+1})$ in $\mathbb{A}^{n+1}$. Then $I(Y_{\ell})=\Gamma_{\ell}(I(Y_1))$. \end{lemma} For any extension of $C$ of the form $C_{\ell,m}$, we obviously have $I(C)\subset I(C_{\ell,m})$ and $I(C_{\ell,m})\cap K[x_1,\dots,x_n]=I(C)$. The exact relation between the ideals of $C$ and $C_{\ell,m}$ is given by the following lemma. \begin{lemma}\label{afinlemma} Let $m=s_1 m_1 +\cdots+ s_n m_n$. For any positive integer $\ell$ with $gcd(\ell,m)=1$ we have $I(C_{\ell,m})=I(C)+(G)$, where $G={x_{1}}^{s_{1}}\cdots \,\,{x_{n}}^{s_{n}}-x_{n+1}^{\ell}$. \end{lemma} \begin{proof} \mbox{} \par \textbf{Case $\ell=1$:} We show that $I(C_{1,m})=I(C)+({x_{1}}^{s_{1}}\cdots \,\,{x_{n}}^{s_{n}}-x_{n+1})$.\\ For any polynomial $f\in K[x_{1},\dots ,x_{n+1}]$, there are polynomials $g\in K[x_{1},\dots ,x_{n}]$ and $h\in K[x_{1},\dots ,x_{n+1}]$ such that \begin{eqnarray*} f(x_{1},\dots ,x_{n+1})&=& f(x_{1},\dots ,x_{n},x_{n+1}-x_{1}^{s_{1}}\cdots x_{n}^{s_{n}}+ x_{1}^{s_{1}}\cdots x_{n}^{s_{n}}) \\ &=&g(x_{1},\dots ,x_{n})+(x_{1}^{s_{1}}\cdots x_{n}^{s_{n}}-x_{n+1})h(x_{1},\dots ,x_{n+1}). \end{eqnarray*} This identity implies that $f\in I(C_{1,m})$ if and only if $g\in I(C)$. \textbf{Case $\ell>1$:} Applying Lemma \ref{mor} with $Y_1=C_{1,m}$ we have \begin{eqnarray*} I(C_{\ell,m})&=&\Gamma_{\ell}(I(C_{1,m})), \; \; \text{by Lemma \ref{mor}} \\ &=& \Gamma_{\ell}(I(C)+(x_{1}^{s_{1}}\cdots x_{n}^{s_{n}}-x_{n+1})) \; \; \text{by the first part of this lemma} \\ &=& I(C)+(G). \end{eqnarray*} \end{proof} This lemma provides an alternative proof of the following theorem, which is a special case of \cite[Theorem~2]{thoma4}. \begin{theorem}\label{afinstci} If{~} $C \subset \mathbb{A}^{n}$ is a s.t.c.i. monomial curve, then all extensions of the form $C_{\ell,m} \subset \mathbb{A}^{n+1}$ are also s.t.c.i. monomial curves. \end{theorem} \begin{proof} Since $I(C_{\ell,m})=I(C)+(G)$ by Lemma \ref{afinlemma}, it follows that \begin{eqnarray*} Z(I(C_{\ell,m}))&=& Z(I(C)+(G)) \\ C_{\ell,m}&=&Z(I(C))\bigcap Z(G), \end{eqnarray*} where $Z(\cdot)$ denotes the zero set as usual. Hence $C_{\ell,m}$ is a s.t.c.i. if $C$ is. \end{proof} \section{Extensions that cannot be obtained by gluing} If $\overline{C}(m_1,\dots ,m_{n+1})$ is a monomial curve in $\mathbb{P}^{n+1}$, then there is a corresponding semigroup $\mathbb{N}T$, where \[ T=\{(m_{n+1},0),(m_{n+1}-m_1,m_1),\dots ,(m_{n+1}-m_n,m_n),(0,m_{n+1})\} \subset \mathbb{N}^2. \] Let $T=T_1\bigsqcup T_2$ be a decomposition of $T$ into two disjoint proper subsets.
Without loss of generality assume that the cardinality of $T_1$ is less than or equal to the cardinality of $T_2$. $\mathbb{N}T$ is called a \textit{gluing} of $\mathbb{N}T_1$ and $\mathbb{N}T_2$ if there exists a nonzero $\alpha \in \mathbb{N}T_1 \bigcap \mathbb{N}T_2$ such that $\mathbb{Z}\alpha=\mathbb{Z}T_1 \bigcap \mathbb{Z}T_2$. Following the literature we write $I(T)$ for the ideal of the toric variety corresponding to the affine semigroup $\mathbb{N}T$. Note that if $\mathbb{N}T$ is a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$ then we have $I(T)=I(T_1)+I(T_2)+(G_\alpha)$, where $G_\alpha$ is the relation polynomial, see \cite{thoma4}. We note that the condition $\mathbb{Z}\alpha=\mathbb{Z}T_1 \bigcap \mathbb{Z}T_2$ is not fulfilled when $T_1$ is not a singleton. Hence we record this observation as the following \begin{proposition}\label{gluingprop1} If $T_1$ is not a singleton then $\mathbb{N}T$ is not a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$. \end{proposition} \begin{proof} If $T_1$ is not a singleton, then neither is $T_2$ by the assumption on the cardinalities of these sets. Thus $\mathbb{Z}T_1$ and $\mathbb{Z}T_2$ are submodules of $\mathbb{Z}^2$ of rank two each. It is elementary to show that their intersection has rank two. For instance, let $r$ and $t$ be generators of $\mathbb{Z}T_1$; then the images of $r$ and $t$ have finite order in the finite group $\mathbb{Z}^2/\mathbb{Z}T_2$, meaning that $ar$ and $bt$ are in $\mathbb{Z}T_2$ for some positive integers $a$ and $b$. Then the rank two $\mathbb{Z}$-module generated by $ar$ and $bt$ is contained in the intersection $\mathbb{Z}T_1\cap \mathbb{Z}T_2$, which must itself be of rank two, being a submodule of $\mathbb{Z}^2$. Hence the intersection cannot be generated by a single element. Thus $\mathbb{N}T$ is not a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$. \end{proof} This proposition means that the only way to show that an extension in $\mathbb{P}^{n+1}$ is a s.t.c.i. via gluing is to apply the technique to a projective monomial curve in $\mathbb{P}^{n}$. Thus we discuss the case where $T_1$ is a singleton. But if $T_1$ is $\{(m_{n+1},0)\}$ or $\{(0,m_{n+1})\}$ then $\mathbb{N}T_1 \bigcap \mathbb{N}T_2=\{(0,0)\}$. So it is sufficient to deal with the case where $T_1$ is of the form $\{(m_{n+1}-m_{i},m_{i})\}$, for some $i \in \{1,\dots ,n\}$. From now on, $\Delta_i$ denotes the greatest common divisor of the positive integers $m_1,\dots ,\widehat{m_i},\dots ,m_{n+1}$ ($m_i$ is omitted), for $i=1,\dots,n$. Note that we have $gcd(\Delta_i,m_i)=1$, for all $i=1,\dots,n$, since $gcd(m_1,\dots,m_{n+1})=1$. \begin{proposition}\label{gluingprop2} If \;$T_1=\{(m_{n+1}-m_{i_0},m_{i_0})\}$ for some fixed $i_0 \in \{1,\dots ,{n}\}$, then $\mathbb{N}T$ is a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$ if and only if there exist non-negative integers $d_j$, for $j=1,\dots,\widehat{i}_0,\dots,n+1$, satisfying the following two conditions: \\ {\rm (I)} $\displaystyle \Delta_{i_0}m_{i_0}=\sum_{\substack{j=1 \\j \neq i_0}}^{n+1} d_jm_j$, and {\rm (II)} $\displaystyle \Delta_{i_0} \geq \sum_{\substack{j=1 \\j \neq i_0}}^{n+1} d_j$. \end{proposition} \begin{proof} Let $\alpha=\Delta_{i_0}(m_{n+1}-m_{i_0},m_{i_0})$. We first show that $\mathbb{Z}T_1 \bigcap \mathbb{Z}T_2=\mathbb{Z}\alpha$. Since $\Delta_{i_0}=gcd(m_1,\dots ,\widehat{m_{i_0}},\dots ,m_{n+1})$, there are $z_j \in \mathbb{Z}$, for $j=1,\dots,\widehat{i}_0,\dots,n+1$, such that $\Delta_{i_0}=\sum_{j \neq i_0} z_jm_j$.
So, $\Delta_{i_0}m_{i_0}=\sum_{j \neq {i_0}} m_{i_0}z_jm_j$ which implies that \[ \Delta_{i_0}(m_{n+1}-m_{i_0},m_{i_0})=\sum_{j \neq {i_0}} m_{i_0}z_j(m_{n+1}-m_j,m_j)+(\Delta_{i_0}-\sum_{j \neq {i_0}} m_{i_0}z_j)(m_{n+1},0).\] Thus $\alpha=\Delta_{i_0}(m_{n+1}-m_{i_0},m_{i_0})\in \mathbb{Z}T_1 \bigcap \mathbb{Z}T_2$ implying $\mathbb{Z}\alpha \subseteq \mathbb{Z}T_1 \bigcap \mathbb{Z}T_2$. For the converse inclusion, take $c(m_{n+1}-m_{i_0},m_{i_0})\in \mathbb{Z}T_1 \bigcap \mathbb{Z}T_2$, for some $c\in \mathbb{Z}$. Then, obviously we have $c(m_{n+1}-m_{i_0},m_{i_0}) \in \mathbb{Z}T_2$ which implies that $cm_{i_0} \in \mathbb{Z}(\{m_1,\dots ,\widehat{m_{i_0}},\dots ,m_{n+1}\})=\mathbb{Z}\Delta_{i_0}$. So, $\Delta_{i_0}$ divides $cm_{i_0}$. If $\Delta_{i_0}>1$, then $\Delta_{i_0}$ divides $c$, since $gcd(\Delta_{i_0},m_{i_0})=1$. If $\Delta_{i_0}=1$, obviously $\Delta_{i_0}$ divides $c$. Thus, $c(m_{n+1}-m_{i_0},m_{i_0})$ is a multiple of $\alpha$ and $\mathbb{Z}T_1 \bigcap \mathbb{Z}T_2 \subseteq \mathbb{Z}\alpha$. Since $\mathbb{Z}T_1 \bigcap \mathbb{Z}T_2 = \mathbb{Z}\alpha$, it will follow by definition that $\mathbb{N}T$ is a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$ if and only if $\alpha \in \mathbb{N}T_1 \bigcap \mathbb{N}T_2$. But, if $\alpha \in \mathbb{N}T_1 \bigcap \mathbb{N}T_2$ then there exist non-negative integers $d_j$ and $d$ for which we have \begin{eqnarray*}\Delta_{i_0}(m_{n+1}-m_{i_0},m_{i_0})&=&\sum_{j \neq {i_0}} d_j(m_{n+1}-m_j,m_j)+d(m_{n+1},0)\\ (\Delta_{i_0}m_{n+1}-\Delta_{i_0}m_{i_0},\Delta_{i_0}m_{i_0}) &=&([d+\sum_{j \neq {i_0}} d_j]m_{n+1}-\sum_{j \neq {i_0}} d_jm_j,\sum_{j \neq {i_0}} d_jm_j). \end{eqnarray*} Thus, $\Delta_{i_0}m_{i_0}=\sum_{j \neq {i_0}} d_jm_j$ and $d=\Delta_{i_0}-\sum_{j \neq {i_0}} d_j$. Since $d\geq0$, we see that the conditions ${\rm (I)}$ and ${\rm (II)}$ hold. On the other hand, if ${\rm (I)}$ and ${\rm (II)}$ hold then we observe that $\alpha \in \mathbb{N}T_1 \bigcap \mathbb{N}T_2$, by the equalities above. Thus, the condition $\alpha \in \mathbb{N}T_1 \bigcap \mathbb{N}T_2$ is equivalent to the existence of the non-negative integers $d_j$ satisfying ${\rm (I)}$ and ${\rm (II)}$. \end{proof} As a direct consequence of Proposition \ref{gluingprop2} we get the following \begin{corollary}\label{gluingcor1} If $\Delta_{i_0}=1$, for some fixed $i_0 \in \{1,\dots ,{n}\}$, then $\mathbb{N}T$ cannot be obtained as a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$, where $T_1=\{(m_{n+1}-m_{i_0},m_{i_0})\}$ and $T_2=T - T_1$. \end{corollary} \begin{proof} We apply Proposition \ref{gluingprop2}. If ${\rm (I)}$ does not hold, we are done. If it holds, then we have two cases: either $\displaystyle \sum_{\substack{j=1 \\j \neq i_0}}^{n+1} d_j =1$ or $\displaystyle \sum_{\substack{j=1 \\j \neq i_0}}^{n+1} d_j >1$. The first case forces $m_{i_0}=m_j$ for some $j\neq i_0$, from (I), but this contradicts the assumption that the $m_i$'s are distinct. The second case causes (II) to fail, as $\Delta_{i_0}=1$. \end{proof} \begin{example} If we consider the curve $\overline{C}(2,3,4,8)\subset \mathbb{P}^4$ and take $i_0=2$, then the conditions ${\rm (I)}$ and ${\rm (II)}$ of the above proposition hold. Thus this curve can be obtained by gluing. But if we consider the monomial curve $\overline{C}(2,4,7,8)\subset \mathbb{P}^4$, then for every choice of $i_0$, either $\Delta_{i_0}=1$, or else condition ${\rm (II)}$ of the above proposition fails. Hence this curve cannot be obtained by gluing.
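Conditions ${\rm (I)}$ and ${\rm (II)}$ can also be checked mechanically. The following sketch (ours, purely illustrative, not part of the original text) brute-forces the exponents $d_j$, which condition ${\rm (II)}$ bounds by $\Delta_{i_0}$; it confirms both computations above:

\begin{verbatim}
# Illustrative sketch (ours): check conditions (I) and (II) of the
# proposition for T_1 = {(m_{n+1}-m_{i0}, m_{i0})} by brute force.
from itertools import product
from math import gcd
from functools import reduce

def can_glue(ms, i0):
    # ms = (m_1, ..., m_{n+1}); i0 is a 0-based index into ms
    others = [m for j, m in enumerate(ms) if j != i0]
    Delta = reduce(gcd, others)
    target = Delta * ms[i0]               # condition (I)
    # condition (II) forces every d_j <= Delta, so the search is finite
    for ds in product(range(Delta + 1), repeat=len(others)):
        if sum(ds) <= Delta and \
           sum(d * m for d, m in zip(ds, others)) == target:
            return True
    return False

assert can_glue((2, 3, 4, 8), 1)        # i_0 = 2 (1-based) as above
assert not any(can_glue((2, 4, 7, 8), i) for i in range(3))
\end{verbatim}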
\end{example} \begin{corollary}\label{badextensions} Let $\overline{C}_{\ell,m}\subset \mathbb{P}^{n+1}$ be a bad extension of $\overline{C}=\overline{C}(m_1,\dots ,m_n)$, i.e. $\ell \geq \delta(m)$. If $\overline{C}$ is a s.t.c.i. on the hypersurfaces $f_1=\cdots=f_{n-1}=0$, then $\overline{C}_{\ell,m}$ can be shown to be a s.t.c.i. on the hypersurfaces $f_1=\cdots=f_{n-1}=0$ and $F=x_{n+1}^{\ell}-x_0^{\ell-\delta(m)}x_1^{s_1} \cdots x_n^{s_n}=0$ by the technique of gluing, where $m=s_1m_1+\cdots+s_nm_n$ and $s_1+\cdots+s_n=\delta(m)$. \end{corollary} \begin{proof}Since $m_1<\cdots<m_n$ and $m=s_1m_1+\cdots+s_nm_n \leq \delta(m)m_n \leq \ell m_n$, it follows that $\ell m_n$ is the largest element of $\{\ell m_1,\dots ,\ell m_n,m\}$. The extension $\overline{C}_{\ell,m}$ corresponds to the semigroup $\mathbb{N}T$, where $T=T_1 \bigcup T_2$, $T_1=\{(\ell m_n-m,m)\}$ and $T_2=\{(\ell m_n,0),(\ell m_n-\ell m_1,\ell m_1),\dots ,(\ell m_n-\ell m_{n-1},\ell m_{n-1}),(0,\ell m_n)\}$. Since $gcd(\ell m_1,\dots ,\ell m_n)=\ell$, $\ell m=s_1(\ell m_1)+\cdots+s_n(\ell m_n)$ and $\ell \geq \delta(m)$, $\mathbb{N}T$ is a gluing of $\mathbb{N}T_1$ and $\mathbb{N}T_2$, by Proposition \ref{gluingprop2}. Since $I(T)=I(T_1)+I(T_2)+(F)$, the claim follows from \cite[Theorem~2]{thoma4}. \end{proof} \section{The Main Results} Since \emph{bad} extensions are shown to be a s.t.c.i. by the technique of gluing (see Corollary \ref{badextensions} above), we study \textit{nice} extensions of monomial curves in this section. By using the theory developed in the previous section one can check which of these extensions can be obtained by the technique of gluing semigroups. Throughout this section we will assume that \begin{itemize} \item $\overline{C}=\overline{C}(m_1,\dots,m_n) \subset \mathbb{P}^n$ is a s.t.c.i. on $f_1=\cdots=f_{n-1}=0$ \item $m=s_1m_1+\cdots+s_nm_n$ for some nonnegative integers $s_1,\dots,s_n$ such that $s_1+\cdots+s_n=\delta(m)$ \item $\ell$ is a positive integer with $gcd(\ell,m)=1$ \item $\delta(m)>\ell$. \end{itemize} \begin{remark}\label{inclusion} Since $\overline{C}$ is s.t.c.i. on $f_1=\cdots=f_{n-1}=0$, its affine part $C$ is s.t.c.i. on $g_1=\cdots=g_{n-1}=0$, where $g_i(x_1,\dots ,x_n)=f_i(1,x_1,\dots ,x_n)$ is the dehomogenization of $f_i$, $i=1,\dots ,n-1$. It follows from Theorem \ref{afinstci} that $C_{\ell,m}$ is a s.t.c.i. on the hypersurfaces $g_i=0$ and $G={x_{1}}^{s_{1}}\cdots\,\,{x_{n}}^{s_{n}}-x_{n+1}^{\ell}=0$. So, the ideal of the affine curve $C_{\ell,m}$ contains the $g_i$'s and $G$. Hence the ideal of the projective closure of $C_{\ell,m}$ must contain (at least) the $f_i$'s and $F$, where $F$ is the homogenization of $G$. Now, since $f_1,\dots , f_{n-1}, F \in I(\overline{C}_{\ell,m})$, we always have $\overline{C}_{\ell,m} \subseteq Z(f_1,\dots ,f_{n-1},F)$. \end{remark} \subsection{The case where $f_i$'s are general, but $m$ is special} In this section we assume that $m$ is a multiple of $m_n$, i.e. $m=s_nm_n$ where $s_n$ is a positive integer. Note that $(s_1,\dots,s_{n-1})=(0,\dots,0)$ and $\delta(m)=s_n$ in this case. \begin{theorem}\label{projstci} Let $\overline{C}\subset \mathbb{P}^{n}$ be a s.t.c.i. on the hypersurfaces $f_1=\cdots=f_{n-1}=0$, $gcd(\ell,s_nm_n)=1$ and $s_n>\ell$. Then, the nice extensions $\overline{C}_{\ell,s_nm_n}$ in $\mathbb{P}^{n+1}$ are s.t.c.i. on $f_1=\cdots=f_{n-1}=F=0$ where $F=x_{n}^{s_n}-x_{0}^{s_n-\ell}x_{n+1}^{\ell}$. \end{theorem} \begin{proof} The fact that these nice extensions are s.t.c.i.
can be seen easily by \cite[Theorem~3.4]{thoma2} taking $b_1=m_1,\dots,b_{n-1}=m_{n-1}$, $d=m_n$ and $k=(s_n-\ell)m_n$. In addition to this, we provide here the equation of the binomial hypersurface $F=0$ on which these extensions lie as s.t.c.i. monomial curves. Since $\overline{C}_{\ell,s_nm_n} \subseteq Z(f_{1},\dots ,f_{n-1},F)$, it remains to prove the converse inclusion. Take a point $P=(p_0,\dots ,p_n,p_{n+1}) \in Z(f_1,\dots ,f_{n-1},F)$. Then, since $f_i \in K[x_0,\dots,x_n]$, we have $f_i(P)=f_i(p_0,\dots ,p_n)=0$, for all $i=1,\dots ,n-1$. Since $Z(f_1,\dots ,f_{n-1})=\overline{C}$ in $\mathbb{P}^n$ by assumption, the last observation implies that \[(p_0,\dots,p_n)=(u^{m_{n}},u^{m_{n}-m_{1}}v^{m_{1}}, \dots,u^{m_{n}-m_{n-1}}v^{m_{n-1}},v^{m_{n}}). \] If $p_0=0$ then $u=0$, yielding that $(p_0,\dots,p_{n-1},p_n)=(0,\dots ,0,p_n)$. Since $s_n>\ell$, we have also $p_n=0$, by $F(0,\dots ,0,p_n,p_{n+1})={p_{n}}^{s_n}-p_{0}^{s_n-\ell}p_{n+1}^{\ell}=0$. So we observe that $(p_0,\dots ,p_n,p_{n+1})=(0,\dots ,0,1)$ which is on the curve $\overline{C}_{\ell,s_nm_n}$. If $p_0=1$ then $(1,p_1,\dots ,p_n,p_{n+1}) \in Z(g_1,\dots ,g_{n-1},G)$ by the assumption, where $g_i$ and $G$ are the polynomials defined in Remark \ref{inclusion}. Since $C_{\ell,s_nm_n}$ is a s.t.c.i. on the hypersurfaces $g_1=\cdots=g_{n-1}=0$ and $G=0$ it follows that $(1,p_1,\dots ,p_n,p_{n+1}) \in C_{\ell,s_nm_n} \subset \overline{C}_{\ell,s_nm_n}$. \end{proof} \subsection{The case where $f_i$'s are special and $m$ is general} Assume now that $m$ is not a multiple of $m_n$, i.e. $(s_1,\dots ,s_{n-1}) \neq (0,\dots ,0)$. Recall that we choose $s_1,\dots,s_n$ in the representation of $m=s_1m_1+\cdots+s_nm_n$ in such a way that $s_{1}+\dots +s_{n}$ is minimum, i.e. $s_{1}+\dots +s_{n}=\delta(m)$. First we prove a lemma where no restriction on the $f_i$ is required. \begin{lemma}\label{projlemma} Let $\overline{C}\subset \mathbb{P}^{n}$ be a s.t.c.i. on $f_1=\cdots=f_{n-1}=0$ and $\delta(m)>\ell$. Then, $Z(f_{1},\dots ,f_{n-1},F)= \overline{C}_{\ell,m}\cup L \subset \mathbb{P}^{n+1}$, where $F={x_{1}}^{s_{1}}\cdots\,\,{x_{n}}^{s_{n}}-x_{0}^{\delta(m)- \ell}x_{n+1}^{\ell}$ and $L$ is the line $x_{0}=\cdots=x_{n-1}=0$. \end{lemma} \begin{proof} We first prove $\overline{C}_{\ell,m}\bigcup L \subseteq Z(f_1,\dots ,f_{n-1},F)$. In the light of Remark \ref{inclusion}, it is sufficient to see that $L \subseteq Z(f_1,\dots ,f_{n-1},F)$. For this, we take a point $P=(p_0,\dots ,p_{n+1})$ on the line $L$, i.e., $P=(0,\dots ,0,p_n,p_{n+1})$. Since $(s_1,\dots ,s_{n-1}) \neq (0,\dots ,0)$ and $\delta(m)>\ell$, we see that $F(P)=0$. Letting $v\in K$ be any $m_n$-th root of $p_n$, we get $(0,\dots ,0,p_n)=(0,\dots ,0,v^{m_n}) \in \overline{C}=Z(f_{1},\dots ,f_{n-1})$. Since the polynomials $f_i$ are in $K[x_0,\dots ,x_n]$, it follows that $f_i(P)=f_i(0,\dots ,0,p_n)=0$, for all $i=1,\dots,n-1$. Thus $P \in Z(f_1,\dots ,f_{n-1},F)$. For the converse inclusion, take $P=(p_0,\dots ,p_n,p_{n+1}) \in Z(f_1,\dots ,f_{n-1},F)$. Then, for all $i=1,\dots ,n-1$, we get $f_i(p_0,\dots ,p_n)=f_i(P)=0$ implying that \[(p_0,\dots,p_n)=(u^{m_{n}},u^{m_{n}-m_{1}}v^{m_{1}},\dots, u^{m_{n}-m_{n-1}}v^{m_{n-1}},v^{m_{n}}). \] If $p_0=0$ then $u=0$, yielding that $(p_0,\dots ,p_n)=(0,\dots ,0,p_n)$. Thus, we get $P=(p_0,\dots ,p_n,p_{n+1})=(0,\dots ,0,p_n,p_{n+1})\in L$. If $p_0=1$ then by assumption we know that $P=(1,p_1,\dots ,p_n,p_{n+1}) \in Z(g_1,\dots ,g_{n-1},G)$. Since $C_{\ell,m}$ is a s.t.c.i.
on the hypersurfaces $g_1=\cdots=g_{n-1}=0$ and $G=0$ it follows that $P=(1,p_1,\dots ,p_n,p_{n+1}) \in C_{\ell,m} \subset \overline{C}_{\ell,m}$. \end{proof} To get rid of $L$ in the intersection of the hypersurfaces $f_1=\cdots=f_{n-1}=0$ and $F=0$, we modify the polynomial $F={x_{1}}^{s_{1}}\cdots\,\,{x_{n}}^{s_{n}}-x_0^{\delta(m)- \ell}x_{n+1}^{\ell}$ of Lemma~\ref{projlemma}, as in the work of Bresinsky (see \cite{bre}), for some special choice of $f_1,\dots,f_{n-1}$. In this way we construct a new polynomial $F^{*}$ from $F$ such that $Z(f_1,\dots,f_{n-1},F^{*})=\overline{C}_{\ell,m}$, where $F^{*}$ is a polynomial of the form \[ F^*=x_n^{\alpha}+x_0^{\beta}H(x_0,\dots ,x_{n+1}),\] where $\beta$ is a positive integer. Note that when $x_0=0$, the vanishing of $F^{*}$ implies that $x_n=0$. It follows from the last part of the proof of Lemma \ref{projlemma} that this property of $F^{*}$ ensures that we have a point at infinity in the intersection of $f_1=\cdots=f_{n-1}=0$ and $F^{*}=0$, instead of a line. The construction of $F^{*}$ can be described as follows. We first assume that $f_i=x_i^{a_i}-x_0^{a_i-b_i}x_{n}^{b_i}=0$, where $a_i>b_i$ are positive integers, for all $i=1,\dots ,n-1$. Let $p=a_1\cdots a_{n-1}$ and $p_i=\frac{b_i}{a_i}p$, for $i=1,\dots ,n-1$. Take the $p$-th power of $F$ and substitute $x_0^{a_i-b_i}x_n^{b_i}$ for every occurrence of $x_i^{a_i}$, for all $i=1,\dots ,n-1$. Then we have \begin{eqnarray*} F^p&=&x_0^{\gamma}x_n^{\alpha}+ x_0^{\delta(m)-\ell}H(x_0,\dots ,x_{n+1}) \,\,\,\, mod(f_1,\dots,f_{n-1})\\ &=&x_0^{\gamma}[x_n^{\alpha}+x_0^{\delta(m)-\ell-\gamma}H(x_0,\dots ,x_{n+1})] \,\,\,\, mod(f_1,\dots ,f_{n-1}) \end{eqnarray*} where $\gamma=\sum_{i=1}^{n-1} (p-p_i)s_{i}$, $\alpha=ps_n+\sum_{i=1}^{n-1}p_i s_{i}$ and $H$ is a polynomial. Letting \[F^{*}(x_0,\dots ,x_{n+1})=x_n^{\alpha}+x_0^{\delta(m)-\ell-\gamma}H(x_0,\dots ,x_{n+1})\] we observe that \begin{equation}\label{eqn} F^p(x_0,\dots ,x_{n+1})=x_0^{\gamma}F^*(x_0,\dots ,x_{n+1}) \,\,\,\, mod(f_1,\dots ,f_{n-1}). \end{equation} Recall that $m$ is an element of the numerical semigroup generated by $m_1,\dots,m_n$, i.e. $m=s_1m_1+\cdots+s_nm_n$ with $s_1+\cdots+s_n=\delta(m)$. If $m$ is large enough that $s_n > \ell+\sum_{i=1}^{n-1} (p-p_i-1)s_{i}$ (or equivalently $\delta(m)-\ell-\gamma>0$) then $F^*$ is the required polynomial. (Otherwise, $F^*$ may not be a polynomial.) Hence we conclude the following \begin{theorem}\label{projstci1} Let $p$, $p_i$, $f_i$ and $F^*$ be as above. Assume that $m$ is chosen so that $s_n > \ell+ \sum_{i=1}^{n-1} (p-p_i-1)s_{i}$. Then, for all $\ell<\delta(m)$ with $gcd(\ell,m)=1$, the nice extensions $\overline{C}_{\ell,m} \subset \mathbb{P}^{n+1}$ are s.t.c.i. on $f_1=\cdots=f_{n-1}=0$ and $F^*=0$. \end{theorem} \begin{proof} We will show that $\overline{C}_{\ell,m}$ is a s.t.c.i. on $f_1=\cdots=f_{n-1}=0$ and $F^*=0$. To do this, take a point $P=(p_0,\dots ,p_{n+1}) \in \overline{C}_{\ell,m}$. Then, $F(P)=0$ and $f_i(P)=0$, for all $i=1,\dots ,n-1$, since $Z(f_1,\dots ,f_{n-1},F)=\overline{C}_{\ell,m} \bigcup L$, by Lemma \ref{projlemma}. From equation (\ref{eqn}) it follows that $F^*(P)=0$ or $p_0=0$. Since $P$ is a point on the monomial curve $\overline{C}_{\ell,m}$, it can be parameterized as follows: \[(u^{m},u^{m-\ell m_{1}}v^{\ell m_{1}},\dots ,u^{m-\ell m_{n}}v^{\ell m_{n}},v^{m}) \] So if $p_0=0$, we get $u=0$ and thus $p_i=0$, for all $i=1,\dots ,n$. Therefore $P=(0,\dots ,0,1)$ and hence $F^*(P)=0$ in any case.
Conversely, let $P=(p_0,\dots ,p_{n+1}) \in Z(f_1,\dots ,f_{n-1},F^*)$. If $p_0=0$, then $p_i=0$ by $f_i(P)=0$, for all $i=1,\dots ,n-1$. Since $\delta(m)-\ell-\gamma>0$, we have $p_n=0$ by $F^*(P)=0$. Thus $P=(0,\dots ,0,1)$ which is always on the curve $\overline{C}_{\ell,m}$. If $p_0=1$ then $C$ is a s.t.c.i. on the hypersurfaces given by $g_i=x_i^{a_i}-x_{n}^{b_i}=0$, for $i=1,\dots ,n-1$, by the assumption. Hence, Theorem \ref{afinstci} implies that $C_{\ell,m}$ is a s.t.c.i. on $g_1=\cdots=g_{n-1}=0$ and $G={x_{1}}^{s_{1}}\cdots\,\,{x_{n}}^{s_{n}}-x_{n+1}^{\ell}=0$. Thus $P=(1,p_1,\dots,p_{n+1}) \in C_{\ell,m} \subset \overline{C}_{\ell,m}$. \end{proof} \begin{remark} The \emph{nice} extensions in Theorem \ref{projstci1} can also be shown to be s.t.c.i. by using \cite[Theorem~3.4]{thoma2}. But to show that the hypotheses of \cite[Theorem~3.4]{thoma2} are satisfied by these extensions is much more difficult than the proof here. As a byproduct we also constructed here the hypersurface $F^*=0$ on which these \emph{nice} extensions are s.t.c.i. \end{remark} \begin{example} We start with $\overline{C}=\overline{C}(3,4,6)\subset \mathbb{P}^{3}$. Let $\ell=1$ and $m=6s+7$, for some positive integer $s$. Then $\delta(m)=s+2$, $s_1=s_2=1$ and $s_3=s$. Thus we get the nice extensions $\overline{C}_{1,6s+7}=\overline{C}(3,4,6,6s+7)\subset \mathbb{P}^{4}$. Since $\Delta_1=gcd(4,6,6s+7)=1$, $\Delta_2=gcd(3,6,6s+7)=1$ and $\Delta_3=gcd(3,4,6s+7)=1$ it follows from Corollary \ref{gluingcor1} that these curves cannot be obtained by gluing. Using the software Macaulay, it is easy to see that the ideal of $\overline{C}_{1,6s+7}$ is minimally generated by the polynomials \begin{eqnarray*} f_1&=&x_1^2-x_0x_3,\\ f_2&=&x_2^3-x_0x_3^2, \\ f_3&=&x_3^{s+3}-x_0^{s-1}x_1x_2^2x_4, \\ f_4&=&x_2x_3^{s+1}-x_0^{s}x_1x_4, \\ f_5&=&x_1x_3^{s+2}-x_0^{s}x_2^2x_4, \\ F&=&x_1x_2x_3^{s}-x_0^{s+1}x_4. \end{eqnarray*} Since $\overline{C}(3,4,6)\subset \mathbb{P}^{3}$ is a s.t.c.i. on the surfaces $f_1=0$ and $f_2=0$, it follows from Theorem \ref{projstci1} that $\overline{C}_{1,6s+7}$ is a s.t.c.i. on $f_1=0$, $f_2=0$ and $$F^*=x_3^{6s+7}-6x_0^{s-1}x_1x_2^2x_3^{5s+4}x_4+ 15x_0^{2s}x_2x_3^{4s+4}x_4^2-20x_0^{3s}x_1x_3^{3s+3}x_4^3+$$ $$+15x_0^{4s}x_2^2x_3^{2s+1}x_4^4-6x_0^{5s}x_1x_2x_3^sx_4^5+ x_0^{6s+1}x_4^6=0$$ provided that $s>2$. \end{example} \section{Variations and consequences of the main results} In this section, we give some consequences of Theorem \ref{projstci} and hence all the notation is as in that theorem. We also include some theorems about \emph{nice} extensions of projective monomial curves that are variations of Theorem \ref{projstci1}. \subsection{Consequences of Theorem \ref{projstci}} Since arithmetically Cohen-Macaulay monomial curves are s.t.c.i. in $ \mathbb{P}^3$ (see \cite{rv-ACM}), we get the following corollary as a consequence of Theorem \ref{projstci}. \begin{corollary}\label{corprojmain} Let $\overline{C}(m_1,m_2,m_3)$ be an arithmetically Cohen-Macaulay monomial curve in $\mathbb{P}^3$. Let $m=s_3m_3$, $gcd(\ell,m)=1$ and $\delta(m)=s_3>\ell$. Then the nice extensions $\overline{C}_{\ell,s_3m_3}=\overline{C}(\ell m_1,\ell m_2,\ell m_3,s_3m_3)$ are all s.t.c.i. in $\mathbb{P}^4$. \mbox{} \hfill $\Box$ \end{corollary} \begin{remark} There are very few examples of s.t.c.i. monomial curves in $\mathbb{P}^n$, where $n>3$. We know that the rational normal curve $\overline{C}(1,2,\dots ,n)$ is a s.t.c.i. in $\mathbb{P}^{n}$, for any $n>0$ (see \cite{rv-normal,thoma2}).
Applying Theorem \ref{projstci} to $\overline{C}(1,2,\dots ,n) \subset \mathbb{P}^{n}$, we can produce infinitely many new examples of s.t.c.i. monomial curves in $\mathbb{P}^{n+1}$: \end{remark} \begin{corollary} For all positive integers $\ell$, $n$ and $s$ with $gcd(\ell,sn)=1$, the monomial curves $\overline{C}(\ell,2\ell,\dots ,n\ell,sn) \subset \mathbb{P}^{n+1}$ are s.t.c.i. \end{corollary} \begin{proof} Let $m=sn$. Clearly $\delta(m)=s$. If $s \leq \ell$, then the curves $\overline{C}_{\ell,m}=\overline{C}(\ell,2\ell,\dots ,n\ell,sn) \subset \mathbb{P}^{n+1}$ are bad extensions of $\overline{C}(1,2,\dots,n)\subset \mathbb{P}^n$. Hence they are s.t.c.i. by Corollary \ref{badextensions}. If $s>\ell$, then these curves are nice extensions of $\overline{C}(1,2,\dots,n)\subset \mathbb{P}^n$. Therefore they are s.t.c.i. by Theorem \ref{projstci}. \end{proof} In \cite{mt}, all complete intersection (i.t.c.i.) lattice ideals are characterized by gluing semigroups. But, for a given projective monomial curve it is not easy to find two subsemigroups whose ideals are complete intersections. So, as another application of Theorem \ref{projstci} we can produce infinitely many i.t.c.i. monomial curves: \begin{proposition}\label{projitci} If \,$\overline{C}\subset\mathbb{P}^{n}$ is an i.t.c.i., then the nice extensions $\overline{C}_{\ell,s_nm_n}\subset\mathbb{P}^{n+1}$ are i.t.c.i. for all positive integers $\ell$ and $s_n$ with $s_n>\ell$, $gcd(\ell,s_nm_n)=1$. \end{proposition} \begin{proof} Since $\overline{C}$ is a s.t.c.i. on the binomial hypersurfaces $f_1=\cdots=f_{n-1}=0$, it follows from Theorem \ref{projstci} that $\overline{C}_{\ell,s_nm_n}$ is a s.t.c.i. on $f_1=\cdots=f_{n-1}=0$ and $F(x_0,\dots ,x_{n+1})=x_{n}^{s_n}-x_{0}^{s_n-\ell}x_{n+1}^{\ell}=0$. Since these are all binomial, the monomial curves $\overline{C}_{\ell,s_nm_n}$ are i.t.c.i. on the same hypersurfaces, by \cite[Theorem~4]{bmt}. \end{proof} \begin{corollary}\label{itci} The monomial curves $\overline{C}(\ell m_1,\ell m_2,s_2m_2)$ are i.t.c.i. in $\mathbb{P}^{3}$, for all positive integers $m_1,m_2,\ell$ and $s_2$ with $s_2>\ell$, $gcd(\ell,s_2m_2)=1$. \end{corollary} \begin{proof} Let $m=s_2m_2$. Then $\delta(m)=s_2$ and $\overline{C}_{\ell,m}=\overline{C}(\ell m_1,\ell m_2,s_2m_2)$ is a nice extension of $\overline{C}(m_1,m_2)$, by the assumption $s_2>\ell$. Since $\overline{C}(m_1,m_2)$ is an i.t.c.i. on $x_1^{m_2}-x_0^{m_2-m_1}x_2^{m_1}=0$, it follows from Proposition \ref{projitci} that the nice extensions $\overline{C}(\ell m_1,\ell m_2,s_2m_2)$ are i.t.c.i. on $x_1^{m_2}-x_0^{m_2-m_1}x_2^{m_1}=0$ and $x_2^{s_2}-x_0^{s_2-\ell}x_3^{\ell}=0.$ \end{proof} To produce infinitely many examples of i.t.c.i. curves, our method starts from just one i.t.c.i. curve, whereas the semigroup gluing method produces only one example starting from one i.t.c.i. The following example illustrates this point. \begin{example}\label{exampleitci} From Corollary \ref{itci}, we know that $\overline{C}(1,2,4)$ is an i.t.c.i. on \[ f_1=x_1^2-x_0x_2=0\; \text{~and~} \; f_2=x_2^2-x_0x_3=0. \] Take two positive integers $\ell$ and $s$ with $s>\ell$, $gcd(\ell,4s)=1$. Then the monomial curves $\overline{C}(\ell,2\ell,4\ell,4s)\subset \mathbb{P}^4$ are nice extensions of $\overline{C}(1,2,4)\subset \mathbb{P}^3$. Thus, by Proposition \ref{projitci}, the monomial curves $\overline{C}(\ell,2\ell,4\ell,4s)$ are i.t.c.i. on \[ f_1=x_1^2-x_0x_2=0, \; f_2=x_2^2-x_0x_3=0 \; \text{~and~} \; F=x_3^s-x_0^{s-\ell}x_4^{\ell}=0.
\] The nice extensions $\overline{C}(\ell,2\ell,4\ell,4s)$ can also be obtained by gluing the subsemigroups generated by $T_1=\{(4s-\ell,\ell)\}$ and $T_2=\{(4s,0),(4s-2\ell,2\ell),(4s-4\ell,4\ell),(0,4s)\}$. But, in this case one has to know that $\overline{C}(\ell,2\ell,2s)$ is an i.t.c.i. for each $\ell$ and $s$. In other words, starting with the fact that $\overline{C}(1,2,4)$ is an i.t.c.i., the gluing method can only produce $\overline{C}(1,2,4,8)$ as an i.t.c.i. monomial curve. \end{example} \subsection{Variations of Theorem \ref{projstci1}} Recall that our method starts with a monomial curve $\overline{C}=Z(f_1,\dots,f_{n-1})$ in $\mathbb{P}^n$ and produces infinitely many nice extensions $\overline{C}_{\ell,m}=Z(f_1,\dots,f_{n-1},F^*)$ in $\mathbb{P}^{n+1}$. Since the construction of $F^*$ depends on the choice of $f_1,\dots, f_{n-1}$, it is possible to start with another curve $\overline{C}=Z(f_1,\dots,f_{n-1})$ in $\mathbb{P}^n$ and obtain new families of nice extensions. In this section we provide two examples of this sort. For instance, if we assume that $\overline{C}$ is a s.t.c.i. on the hypersurfaces $f_i=x_i^{a_i}-x_0^{a_i-b_i}x_{i+1}^{b_i}=0$, where $a_i > b_i$ are positive integers, $i=1,\dots ,n-1$, then under some suitable conditions we obtain other families of s.t.c.i. nice extensions. Let $p=a_1\cdots a_{n-1}$, $q_0=b_1 \cdots b_{n-1}$ and $q_i=a_1\cdots a_ib_{i+1}\cdots b_{n-1}$, $i=1,\dots ,n-2$. The first variation is the following \begin{theorem}\label{projstci3} Let $p,q_0,\dots,q_{n-2}$ be as above. For all $m$ which give rise to $s_n >\ell+\sum_{i=0}^{n-2}(p-q_i-1)s_{i+1}$ and for all $\ell$ with $\ell<\delta(m)$ and $gcd(\ell,m)=1$, the nice extensions $\overline{C}_{\ell,m}\subset\mathbb{P}^{n+1}$ are s.t.c.i. on $f_1=\cdots=f_{n-1}=F^*=0$. \hfill $\Box$ \end{theorem} Now, we give another variation where $m=s_im_i+s_jm_j$, for $i,j\in \{1,\dots,n\}$. For notational convenience we take $i=1$ and $j=n$. \begin{theorem}\label{projstci2} Let $\overline{C}\subset \mathbb{P}^{n}$ be a s.t.c.i. on the hypersurfaces given by \begin{eqnarray*}f_1&=&x_1^{a}-x_0^{a-b}x_{n}^{b}=0\\ f_i&=&x_i^{a_i}+x_0^{b_i}A(x_1,\dots ,x_n)+x_{1}^{c_i}B(x_2,\dots ,x_n)=0, \end{eqnarray*} where $a$, $b$, $a-b$, $a_i$, $b_i$, and $c_i$ are positive integers for $i=2,\dots ,n-1$, and $A$ and $B$ are some polynomials. For all $m$ which give rise to $s_n > \ell+(a-b-1)s_1$ and for all $\ell$ with $\ell<\delta(m)$ and $gcd(\ell,m)=1$, the nice extensions $\overline{C}_{\ell,m}\subset\mathbb{P}^{n+1}$ are s.t.c.i. on $f_1=\cdots=f_{n-1}=F^*=0$. \hfill $\Box$ \end{theorem} \section*{Acknowledgements} The author would like to extend his sincere thanks to F. Arslan, \"{O}. Ki\c{s}isel, M. Morales, S. Sert\"{o}z and A. Thoma for their numerous comments, and to the anonymous referee for his suggestions which improved the final presentation of the paper. \bigskip
{ "timestamp": "2008-05-30T12:04:09", "yymm": "0610", "arxiv_id": "math/0610316", "language": "en", "url": "https://arxiv.org/abs/math/0610316", "abstract": "In this paper we produce infinitely many examples of set-theoretic complete intersection monomial curves in $\\mathbb{P}^{n+1}$, starting with a set-theoretic complete intersection monomial curve in $\\mathbb{P}^{n}$ . In most of the cases our results cannot be obtained through semigroup gluing technique and we can tell apart explicitly which cases are new.", "subjects": "Algebraic Geometry (math.AG); Commutative Algebra (math.AC)", "title": "Producing Set Theoretic Complete Intersection Monomial Curves in P^n", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812327313545, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.7079434189210003 }
https://arxiv.org/abs/cs/0501003
Implementation of Motzkin-Burger algorithm in Maple
The subject of this paper is an implementation of the well-known Motzkin-Burger algorithm, which solves the problem of finding the full set of solutions of a system of linear homogeneous inequalities. There exist a number of implementations of this algorithm, but to the best of the author's knowledge there was none in Maple.
\section{The problem} Consider a linear homogeneous system of inequalities with rational coefficients \begin{equation}\label{linear_system1} l_j(x) = a_{j1} x_1 + \ldots + a_{jn} x_n \leqslant 0, \quad a_{ji} \in \mathbb{Q}, \quad (j=1,\ldots,m). \end{equation} Let $r$ be the rank of the matrix of \eqref{linear_system1}. Recall some common facts from the theory of convex sets (see, for example, \cite{Fulton}). Every set of solutions $X_1,\ldots,X_p$ of system \eqref{linear_system1} generates a convex cone $C$ of solutions \begin{equation}\label{decomposition} k_1 X_1 + \ldots + k_p X_p, \qquad k_i \geqslant 0. \end{equation} We call a finite set of solutions $X_1,\ldots,X_p \in \mathbb{Q}^n$ a set of generators for $C$ if every element of $C$ is a conical combination \eqref{decomposition} of $X_1,\ldots,X_p$. Hereinafter we will consider only minimal sets of generators, i.e. such that none of the $X_i$, $i=1,\ldots,p$, can be expressed through the others with positive coefficients. We call $d$ the dimension of a cone $C$ if $d$ is the dimension of the minimal linear space spanned by $C$. The solution locus of each inequality is a half-space of $\mathbb{Q}^n$. For any subsystem $\{l_j(x)\leqslant 0, j \in J\}$ of \eqref{linear_system1} the solution set is a convex polyhedral cone. The faces of the latter are the intersections of $C$ with the solution loci of equation subsystems $\{l_j(x)=0, j \in I \subseteq J\}$ with linearly independent left-hand sides. The problem discussed in this paper is to solve \eqref{linear_system1} over $\mathbb{Q}^n$. This problem is closely related to the following one. Extend the system \eqref{linear_system1} by a new inequality \begin{equation}\label{new} l(x) = a_{1} x_1 + \ldots + a_{n} x_n \leqslant 0. \end{equation} Not all the points of the cone $C$ satisfy \eqref{new}; thus the system \eqref{linear_system1} together with \eqref{new} defines a new cone $C^\star$. We are interested in computing the set of generators for $C^\star$ given the set of generators for $C$. The Motzkin-Burger algorithm solves this problem. Iterating the Motzkin-Burger algorithm solves the first problem by extending the trivial system $\{0\leqslant 0\}$ by the inequalities of \eqref{linear_system1} one by one. \section{Algorithm description} \subsection{Theoretical study} Let us recall some well-known facts (generally following \cite{Fulton}, \cite{Chernikov}, \cite{Solodovnikov}). The set of generators for the cone $C^\star$ consists of the generators for the cone $C$ which satisfy the inequality $l(x) < 0$, together with the whole set of generators for the cone $C \cap H$, where $H:=\{l(x)=0\}$. Thus, in order to compute the cone $C^\star$ we need to study both the structure of the cone $C$ and the structure of the intersection. The cone $C$ is a Minkowski sum of its linear subspace $L$ and a strongly convex cone~$P$ (that is, a cone with apex at the origin containing no line through the origin). The vectors $u$ of the base $U$ of $L$ satisfy $\{l_j(u)=0, j=1,\ldots,m\}$, whereas any element $v$ of the set $V$ of generators of~$P$ satisfies $l_j(v) < 0$ for at least one $j$. Let us denote vectors $X$ belonging to $U$ or $V$ by $X^+\,$ if $\,l(X)>0$, $X^-\,$ if $\,l(X)<0\,$ and $X^0\,$ if $\,l(X)=0\,$. Also denote the linear subspace of the new cone with its base and the strongly convex cone of the new cone with its set of generators by $L^\star$, $U^\star$, $P^\star$, $V^\star$, respectively. Hence the vector $u^0$ belongs to $U^\star$, while $u^-$, $v^0$ and $v^-$ belong to $V^\star$.
However, $\{u \in U : l(u)=0\}$ is not $U^\star$ in general. By discarding all the $u^+$'s and $v^+$'s we lose a number of generators for the cone $C$, and they have to be replaced by the whole set of generators for $H \cap C$. Since $H \cap C = \left(H \cap L \right) \cup \left(H \cap P\right)$ we can describe how to convert $U$ to $U^\star$ and $V$ to $V^\star$ separately. There are two cases with respect to the intersection $H \cap L$: $H \cap L = L$ or $H \cap L \not=L$. In the first case, $U^\star = U$. In the second case the new inequality \eqref{new} reduces the dimension of $L$ by one. Choose one $u^- \in U$ (if there is no such vector in $U$, take $-u^+$). Let us form $\dim L - 1$ pairs of vectors $u^-$ and $u_j \in U\setminus\{u^-\}$. For every pair consider the conic combinations \begin{equation}\label{pair's cone} au^- + bu_j, \quad a,b \geqslant 0. \end{equation} The intersection of $H$ with the cone \eqref{pair's cone} is the ray of positive multiples of \begin{equation} u^\star_j := -l(u^-)u_j + l(u_j)u^-. \end{equation} It is clear that $l(u_j^\star) = 0$. Thus, the $\dim L - 1$ vectors $u^\star_j$ form the base of $L^\star$. Note that the resulting base is not unique, since different vectors can be chosen for $u^-$ if they exist. The conversion of $V$ to $V^\star$ also depends on the existence of the abovementioned $u^-$. The element $u^-$ belongs to $V^\star$ if it exists. The new inequality \eqref{new} induces the affine transformation \begin{equation}\label{transform} -l(u^-)v + l(v)u^- \end{equation} of the elements $v \in V$, with the result that every vector \eqref{transform} satisfies \eqref{new}. It is evident that the set $\{u^-,-l(u^-)v + l(v)u^- \text{ for all } v \in V\}$ is minimal in the abovementioned sense; thus the latter is the set of generators $V^\star$. If no such $u^- \in U$ exists, the transformation \eqref{transform} of $V$ is impossible. The new inequality separates $P$ into two parts. As already mentioned, the elements $v^-$ belong to $V^\star$. To find all the vectors that lie in $H$, let us consider any pair $v^-,v^+ \in V$. For this pair the combination \begin{equation} v^\star_k := -l(v^-)v^+ + l(v^+)v^- \end{equation} lies in $H$. The set of all the $v^\star_k$'s generates a convex polyhedral cone in $H$, but there are too many elements in $\{v^\star_k\}$ for it to be the set of generators for the latter cone. The minimal faces that contain a pair of generators are exactly the two-dimensional ones. Therefore, in order to avoid superfluous solutions we only need to reject all the pairs of generators that do not lie on faces of dimension $2$. Checking whether two vectors lie on a face of dimension $2$ may be done in two ways. The first way is to check whether there exists $\{l_j(x), j \in J, |J|=r-2\}$ with linearly independent $l_j(x)$ such that $l_j(v^-)=l_j(v^+)=0$ holds true for all $j \in J$. The second way is to check that there exists no third $v \in V$ such that $l_j(v)=0$ for all $j$ with $l_j(v^-)=l_j(v^+)=0$ (here neither the rank $r$ nor the linear independence of the $l_j$'s is taken into account). We therefore come to the algorithm that solves a system of linear inequalities over $\mathbb{Q}^n$: \vspace{\baselineskip} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[Input:\ \ ] $S:=\{l_1(x) ,\ldots,l_m(x) \}$ --- the left-hand sides of the inequalities \item[\hspace{3.4em}] of \eqref{linear_system1}, $n$ --- the space dimension.
\item[Output:] $U=\{u_1,\ldots,u_t\}$ --- the base of the maximal linear subspace $L$ of \item[\hspace{3.4em}] the solution cone of~\eqref{linear_system1}, \item[\hspace{3.4em}] $V=\{v_1,\ldots,v_s\}$ --- the set of generators for the strongly convex \item[\hspace{3.4em}] cone $P$ of the solutions of the system~\eqref{linear_system1}. \end{description} \begin{enumerate} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item $U_{current}:=\{(1,\ldots,0),\ldots,(0,\ldots,1)\}$; \item $V_{current}:=\emptyset$; \item $S_{current}:=\{0 \leqslant 0\}$; \item $i:=1$; \item $l:=l_i(x)$; \item \textbf{if} $\exists u \in U_{current}$: $l(u) \not= 0$, \textbf{then} \par\vspace{-0.75\baselineskip} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] $U_{current} := \{l(u) u_i - l(u_i) u, \quad u_i \in U_{current}\setminus\{u\}\}$; \item[\quad] $w:=-(l(u)/|l(u)|)u$; \item[\quad] $V_{current} := \{w, -l(w) v_i + l(v_i) w , \quad v_i \in V_{current}\}$; \end{description} \par\vspace{-0.75\baselineskip} \item \textbf{else} \par\vspace{-0.75\baselineskip} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] $V^1_{current}:=\{v \in V_{current} | \; l(v) \leqslant 0 \}$; $V^2_{current}:=\emptyset$; \item[\quad] \textbf{for} $\forall (v_k, v_s) \in V_{current}^2 \;:\; l(v_k) < 0$, $l(v_s) > 0$ \textbf{do} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] $S^\star:=\{ l_j(x) \in S_{current} | \; l_j(v_k) = l_j(v_s) = 0\}$; \item[\quad] \textbf{if} $S^\star \not= \emptyset$ \textbf{then} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] \textbf{for} $\forall\;v \in V_{current} \setminus \{v_k,v_s\}$ \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] \textbf{for} $\forall\; l_j(x) \in S^\star$ \textbf{do} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] \textbf{if} $l_j(v) \not= 0$ \textbf{then} \item[\qquad]$V^2_{current}:= V^2_{current}\cup\{-l(v_s)v_k+ l(v_k)v_s \} $; \item[\quad] \textbf{end if;} \end{description} \item[\quad] \textbf{end do;} \end{description} \item[\quad] \textbf{end do;} \end{description} \item[\quad] \textbf{end if;} \end{description} \item[\quad] \textbf{end do;} \end{description} \par\vspace{-0.7\baselineskip}\par \textbf{end if;} \item $V_{current}:=V^1_{current} \cup V^2_{current}$; \item $S_{current}:=S_{current} \cup \{l_i(x) \}$; \item $S:=S \setminus \{l_i(x)\}$; \item $i:=i+1$; \item \textbf{if} $S = \emptyset$ \textbf{then} \par\vspace{-0.75\baselineskip} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[\quad] goto $14$; \end{description} \par\vspace{-0.75\baselineskip} \textbf{end if;} \item goto $5$; \item return $U_{current}, V_{current}$. \end{enumerate} \subsection{Some improvements} There are some tricks concerning the preparation of the system in order to lower the number of inequalities and/or variables. Firstly, the system may have no occurrences of some of the $x_k$, $k=1,\ldots,n$. In this case we can ``clean up'' the system and thus reduce the number of variables. Secondly, we can avoid the linear subspace $L$ of the cone of solutions by performing a change of variables. This trick also enables one to simplify the original system of inequalities. The dimension of $L$ is $n-r$, where $r$ is the rank of the matrix of \eqref{linear_system1}, as was mentioned above.
Let us define $r$ new variables $y_1,\ldots,y_r$ to equal any $r$ linearly independent left-hand sides of \eqref{linear_system1} (for example, the first $r$ ones): \begin{flalign} l_j(x) = - y_j &\leqslant 0, \quad j=1,\ldots,r,\\ l_j(x) &\leqslant 0, \quad j=r+1,\ldots,m. \end{flalign} By solving the system $\{l_j(x) = - y_j, j=1,\ldots,r\}$ for $\{x_1,\ldots,x_n\}$ we obtain the $(n-r)$-dimensional space of solutions \begin{align}\label{subs1} \begin{split} &x_{i_1} = f_1(y_1,\ldots,y_r,x_{i_{r+1}},\ldots,x_{i_{n}}),\\ &\dots\\ &x_{i_r} = f_r(y_1,\ldots,y_r,x_{i_{r+1}},\ldots,x_{i_{n}}),\\ \end{split}\\ \begin{split} &x_{i_{r+1}} = x_{i_{r+1}}\\ &\dots\\ &x_{i_n} = x_{i_n} \end{split} \end{align} where $(i_1,\ldots,i_n)$ is a permutation of the indices $1,\ldots,n$ (we do not know \textit{a priori} which variables $x_i$ are most likely to be solved for). Upon substitution \eqref{subs1} the system \eqref{linear_system1} does not depend on $x_{i_{r+1}}, \ldots, x_{i_n}$, because $l_{r+1}(x),\ldots,l_m(x)$ are linear combinations of the first $r$ ones. Therefore, substitution \eqref{subs1} reduces the system to one of $m$ inequalities in $r$ independent variables $y_1,\ldots,y_r$. The first $r$ inequalities of the reduced system simplify to just $-y_i \leqslant 0$. This inequality subsystem has the \textit{a priori} known solution $E_r$ --- the standard base of the $r$-dimensional linear space. Hence we may remove these inequalities from the reduced system. More precisely, to solve the system we iterate the Motzkin-Burger algorithm starting from the system of inequalities $\{-y_i \leqslant 0, i=1,\ldots,r\}$ with $U=\emptyset$ and $V=E_r$. Thus the system is reduced to one of $m-r$ inequalities in $r$ independent variables. Because $U=\emptyset$ at the start of the algorithm, no part of the algorithm is needed except the most complicated one, namely the conversion of $V$ to $V^\star$ when no $u^-$ exists. The application of the Motzkin-Burger algorithm yields a set of solutions of the reduced system. By means of the $n-r$ substitutions \eqref{subs1} with $(x_{i_{r+1}},\ldots,x_{i_{n}}) = E^k_{n-r}$, where $E^k_{n-r}$ are the base vectors of the $(n-r)$-dimensional linear space, we then obtain the solution set of the original system. The case $r = n$ is also of interest for simplifying the system, even though the number of variables is not lowered. The calculation procedure is repeated in full, except that there is no need to substitute the base vectors of the $(n-r)$-dimensional linear space into the components $(x_{i_{r+1}},\ldots,x_{i_{n}})$ of the solutions. Also of interest is the case $r=m$, where $m$ is the number of inequalities. Upon the abovementioned substitution, the system is reduced to a diagonal one. Therefore, the whole system has an \textit{a priori} known solution in terms of the new variables. We therefore come to the algorithm of solving the system \eqref{linear_system1} with a special preparation prior to the application of the Motzkin-Burger algorithm: \vspace{\baselineskip} \begin{description} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item[Input:\ \ ] $S:=\{l_1(x) ,\ldots,l_m(x) \}$ --- the left-hand sides of the inequalities \item[\hspace{3.4em}] of \eqref{linear_system1}, $n$ --- the space dimension.
\item[Output:] $U=\{u_1,\ldots,u_t\}$ --- the base of the maximal linear subspace $L$ of \item[\hspace{3.4em}] the solution cone of~\eqref{linear_system1}, \item[\hspace{3.4em}] $V=\{v_1,\ldots,v_s\}$ --- the set of generators for the strongly convex \item[\hspace{3.4em}] cone $P$ of the solutions of the system~\eqref{linear_system1}. \end{description} \begin{enumerate} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item Find all indices $Bad \subset \{1,\ldots,n\}$ such that $S$ has no occurrences of $x_j$,\; $j \in Bad$; \item Renumber the indices (not encountered in $Bad$) of the variables so that $S$ explicitly depends on $n-|Bad|$ variables only; \item Choose $Base$ --- a maximal linearly independent subsystem of $\{l_j(x), j=1,\ldots,m\}$; $r:=|Base|$; \item Solve $\{Base_i = -y_i, i=1,\ldots,r\}$ for $\{x_1,\ldots,x_n\}$ to obtain the set of identities \eqref{subs1}; \item $I := \{i_{r+1},\ldots,i_n\}$; \item $U^\bullet:=\emptyset$; \item $V^\bullet:= \{\underbrace{(1,0,\ldots,0)}_{\text{$r$ components}}, \ldots,(0,\ldots,0,1)\}$; \item \textbf{if} $r < m$ \textbf{then} \par\vspace{-0.75\baselineskip} \begin{enumerate} \setlength{\itemsep}{-\parsep} \setlength{\labelsep}{1em} \item Substitute \eqref{subs1} into $S$ in order to obtain $S^\bullet$; \item Reorder the inequalities $S$ such that $-y_i \leqslant 0$ are the first $r$ ones. \item Iterate the part of the Motzkin-Burger algorithm mentioned above, excluding items 1, 2, 3, 4, 6, and taking $U_{current}=U^\bullet$, $V_{current}=V^\bullet$, $S_{current}=S^\bullet$, $i = r+1$ as the initial condition. \end{enumerate} \textbf{end if;} \item $E:=\{\underbrace{(1,0,\ldots,0)}_{\text{$n-r$ components}},\ldots, (0,\ldots,0,1)\}$; \item For each element $e \in E$ substitute $\{y_1=\ldots=y_r=0\}$ and $(x_{I_1},\ldots, x_{I_{n-r}})=e$ into \eqref{subs1} and form $u := (x_1,\ldots,x_n) \in U$; \item For each element $v \in V^\bullet$ substitute $\{(y_1,\ldots,y_r)=v\}$ and $x_{I_1}=\ldots=x_{I_{n-r}}=0$ into \eqref{subs1} and form $v := (x_1,\ldots,x_n) \in V$; \item return $U, V$. \end{enumerate} \subsection{Algorithm complexity} Let us not distinguish between arithmetic and comparison operations. Let $n$ be the number of independent variables, $p$ the number of elements in $V$ at the current step, $l$ the number of vectors $v^-$ in $V$, $k$ the number of vectors $v^+$ in $V$, $q$ the number of inequalities already examined by the current step, and $m$ the total number of inequalities. The calculation of the value of $l(x)$ has complexity $2n-1$. If there exists $u \in U$ such that $l(u)\not=0$, then the conversion $U \rightarrow U^\star$ and $V \rightarrow V^\star$ requires the calculation of $n-1$ differences of the form $l(u^-)u_j-l(u_j)u^-$ with complexity $2 \cdot 2n + 1 = 4n+1$ each. Therefore, the overall complexity of converting both lists is $2(n-1)(4n+1)\sim 8n^2$. The case when no such $u \in U$ exists is of baffling complexity. Let us estimate the complexity of one iteration of this case as a function $f(p,q,n)$. For efficiency reasons we suppose that all the values $l_j(v)$, for $l_j(x) \in S$ and $v \in V$, are computed beforehand in order to exclude repeated calculations. The selection of $v^-$ takes $l$ operations. The next step --- selecting the pairs $v^-,v^+$ --- requires $2kl$ more operations. The number of pairs reaches its maximum when $k=l=\frac{p}{2}$, so let $kl=\frac{p^2}{4}$ below. For every pair $v^-,v^+$ we need to choose those inequalities $l_j(x)$ for which $l_j(v^-)=l_j(v^+)=0$ holds.
\subsection{Algorithm complexity}
We do not distinguish between arithmetic and comparison operations. Let $n$ be the number of independent variables, $p$ the number of elements in $V$ at the current step, $l$ the number of $v^-, v \in V$, $k$ the number of $v^+, v \in V$, $q$ the number of inequalities already examined by the current step, and $m$ the total number of inequalities. The calculation of the value of $l(x)$ has complexity $2n-1$. If there exists $u \in U$ with $l(u)\not=0$, then the conversion $U \rightarrow U^\star$ and $V \rightarrow V^\star$ requires the calculation of $n-1$ differences of the form $l(u^-)u_j-l(u_j)u^-$, each with complexity $2 \cdot 2n + 1 = 4n+1$. Therefore, the overall complexity of converting both lists is $2(n-1)(4n+1)\sim 8n^2$.

The case when no such $u \in U$ exists is far more expensive. Let us estimate the complexity of one iteration of this case as a function $f(p,q,n)$. For efficiency we assume that the values of $l_j(x) \in S$ at all $v \in V$ are computed beforehand, so as to exclude repeated calculations. Selecting $v^-$ takes $l$ operations. The next step --- selecting the pairs $v^-,v^+$ --- requires $2kl$ more operations. The number of pairs reaches its maximum when $k=l=\frac{p}{2}$, so we take $kl=\frac{p^2}{4}$ below. For every pair $v^-,v^+$ we must choose those inequalities $l_j(x)$ for which $l_j(v^-)=l_j(v^+)=0$. This requires $2q$ operations. Then every pair must be examined as to whether none of the remaining $p-2$ elements of $V$ annihilates all the chosen inequalities. Let the number of chosen inequalities be as large as possible, \textit{i.e.,} $\frac{q}{2}$. Checking all the pairs then has complexity $\frac{p^2}{4}(p-2)\frac{q}{2} = \frac{p^3q}{8} - \frac{p^2q}{4}$. If all the pairs satisfy the conditions above, the overall complexity of forming the new elements $v$ is $\frac{3p^2}{4}$. In total, $f(p,q,n) = pq(2n-1) + \frac{p}{2} + \frac{p^2}{2} + \frac{p^3q}{8} - \frac{p^2q}{4} + \frac{3p^2}{4} = \frac{1}{8}p(p^2q+10p-2pq+16qn-8q+4) \sim \frac{1}{8}p^3q$.

These intensive calculations start at $p_0=3$: for $p \leqslant 2$ there is at most one pair of generators in $V$, so at most one new generator is produced by the algorithm, and it merely replaces another. As was mentioned, the number $p_k$ of elements $v \in V$ grows as $\frac{p_{k-1}}{2}+\frac{p_{k-1}^2}{4}$ at every $k$-th step. Iterating this map one can see that $p_k = p_k(p_{k-1}) = O(p_0^{2^{k-1}})$, so the worst-case complexity of a pure iteration of the most complicated part of the algorithm is $f(p_0,m,n) = O(mp_0^{(2^{m-4})^3}/8) = O(mp_0^{2^{3(m-4)}}) = O(mp_0^{2^{3m}}) = O(m3^{2^{3m}})$. Practice shows that such a terrible complexity almost never occurs: many pairs are rejected at every step, the number of pairs is generally not too large, and the number of probed inequalities for every pair is usually lower than $\frac{q}{2}$. Moreover, systems of full rank are usually infeasible once the number of inequalities is sufficiently large. For moderate systems the overall computation time therefore depends mainly on $n$, although the amount of intermediate data can still grow exponentially.
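To make the conversion step concrete, here is a compact Python illustration of the $V \to V^\star$ conversion for one new inequality (the case $U=\emptyset$ relevant after the preparation above). It is a sketch of the scheme, not the Maple package; exact integer or rational arithmetic is assumed, and all names are ours:
\begin{verbatim}
# One step of the V -> V* conversion: incorporate a new inequality
# a.x <= 0 into the generator list V of the current cone (U empty).
# 'rows' holds the inequalities processed so far; it supplies the
# adjacency test analyzed in the complexity estimate above.
def dot(a, v):
    return sum(x * y for x, y in zip(a, v))

def tight(v, rows):
    return frozenset(j for j, r in enumerate(rows) if dot(r, v) == 0)

def add_inequality(V, rows, a):
    s = [dot(a, v) for v in V]
    minus = [(v, sv) for v, sv in zip(V, s) if sv < 0]
    plus = [(v, sv) for v, sv in zip(V, s) if sv > 0]
    Vstar = [v for v, sv in zip(V, s) if sv <= 0]   # keep v- and v0
    for vm, sm in minus:
        for vp, sp in plus:
            common = tight(vm, rows) & tight(vp, rows)
            # The pair contributes a new generator only if no other
            # element of V annihilates all the commonly tight rows.
            if not any(w is not vm and w is not vp and
                       common <= tight(w, rows) for w in V):
                # a . (sp*vm - sm*vp) = sp*sm - sm*sp = 0
                Vstar.append([sp * x - sm * y
                              for x, y in zip(vm, vp)])
    rows.append(a)
    return Vstar
\end{verbatim}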
\subsection{Practical experience}
We implemented the above algorithms in Maple as a single package \texttt{motzkin\_burger} consisting of three user procedures: \texttt{Conehull}, \texttt{MB} and \texttt{CheckSolutions}. The first computes a set of generators for the solution cone of a system of inequalities. The second performs a single Motzkin-Burger iteration. The last is a tool that allows the user to verify the correctness of a set of solutions. Their prototypes are
\vspace{\baselineskip}
\texttt{Conehull(L,x,n,options)}
\vspace{\baselineskip}
\texttt{MB(L,U,V,x,n)}
\vspace{\baselineskip}
\texttt{CheckSolutions(L,S,x,n)}
\vspace{\baselineskip}
Here \texttt{L} is a list of homogeneous polynomials of degree $1$ in the variables \texttt{x${}_{\text{\texttt{i}}}$}, and \texttt{n} is the dimension of the solution space; \texttt{U} is the base of the linear subspace of the solution cone, \texttt{V} is the set of generators of the strongly convex cone contained in it; \texttt{S} is any set of solutions. The parameter \texttt{options} currently accepts only one value, ``\texttt{as is}'', which instructs the procedure not to perform the change of variables. Both \texttt{Conehull} and \texttt{MB} return an \texttt{exprseq} consisting of two $2$-dimensional lists, namely the lists \texttt{U} and \texttt{V} described above. \texttt{CheckSolutions} also returns an \texttt{exprseq}, but it is interpreted differently: the first list contains the vectors that are genuine solutions, whereas the second contains the incorrect ones.

The timings presented in the table below were obtained in Maple 7 on a computer with a 700~MHz Duron CPU and 256~MB of RAM. For every combination of ($n$,$m$,$r$) a random system with these parameters and sufficiently small integer coefficients was generated. Here $n$ is the space dimension, $m$ is the number of inequalities in the system, and $r$ is the rank of the system. The values $t_1$ and $t_2$ are the times in seconds for solving these systems with and without the optional parameter of the \texttt{Conehull} procedure, respectively. All these systems are saved as text files attached to the package.
\vspace{\baselineskip}
{\centering
\begin{tabular}[t]{|p{60pt}|p{60pt}|p{30pt}|p{80pt}|p{45pt}|}
\hline
space dimension, $n$ & number of inequalities, $m$ & rank, $r$ & \texttt{Conehull}, option \texttt{"as is"}, $t_1$, sec & \texttt{Conehull}, $t_2$, sec\\
\hline
5 & 5 & 5 & 0.010 & 0.120\\
\hline
5 & 7 & 3 & 0.090 & 0.050\\
\hline
10 & 10 & 10 & 0.130 & 0.261\\
\hline
10 & 15 & 5 & 0.150 & 0.180\\
\hline
20 & 20 & 20 & 1.783 & 2.103\\
\hline
20 & 30 & 10 & 1.512 & 1.021\\
\hline
30 & 30 & 15 & 2.864 & 2.003\\
\hline
40 & 40 & 20 & 8.663 & 4.316\\
\hline
40 & 40 & 30 & 45.326 & 17.775\\
\hline
50 & 50 & 40 & 89.118 & 1816.392\\
\hline
50 & 50 & 45 & 73.766 & 1549.628\\
\hline
\end{tabular}
}
\vspace{\baselineskip}
One can see that \texttt{Conehull} with the option ``\texttt{as is}'' solves these examples in a time that, with rare exceptions, only grows with the size of the problem. Without the option the behavior is more complicated: for fixed $n$ and $m$, systems of full or almost full rank may be solved faster than systems of small rank. Of course, starting from some dimension the change of variables slows down the performance (a consequence of the inefficient Maple \texttt{subs} implementation we use), but we hope that systems of larger dimension will eventually be solved faster with this preprocessing. Improving this performance bottleneck is the main aim of further work.\looseness=-1
{ "timestamp": "2005-01-01T05:55:41", "yymm": "0501", "arxiv_id": "cs/0501003", "language": "en", "url": "https://arxiv.org/abs/cs/0501003", "abstract": "Subject of this paper is an implementation of a well-known Motzkin-Burger algorithm, which solves the problem of finding the full set of solutions of a system of linear homogeneous inequalities. There exist a number of implementations of this algorithm, but there was no one in Maple, to the best of the author's knowledge.", "subjects": "Computational Geometry (cs.CG); Computational Complexity (cs.CC); Symbolic Computation (cs.SC)", "title": "Implementation of Motzkin-Burger algorithm in Maple", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812327313545, "lm_q2_score": 0.7310585844894971, "lm_q1q2_score": 0.7079434132467782 }
https://arxiv.org/abs/2302.14698
Heuristic Modularity Maximization Algorithms for Community Detection Rarely Return an Optimal Partition or Anything Similar
Community detection is a fundamental problem in computational sciences with extensive applications in various fields. The most commonly used methods are the algorithms designed to maximize modularity over different partitions of the network nodes. Using 80 real and random networks from a wide range of contexts, we investigate the extent to which current heuristic modularity maximization algorithms succeed in returning maximum-modularity (optimal) partitions. We evaluate (1) the ratio of the algorithms' output modularity to the maximum modularity for each input graph, and (2) the maximum similarity between their output partition and any optimal partition of that graph. We compare eight existing heuristic algorithms against an exact integer programming method that globally maximizes modularity. The average modularity-based heuristic algorithm returns optimal partitions for only 19.4% of the 80 graphs considered. Additionally, results on adjusted mutual information reveal substantial dissimilarity between the sub-optimal partitions and any optimal partition of the networks in our experiments. More importantly, our results show that near-optimal partitions are often disproportionately dissimilar to any optimal partition. Taken together, our analysis points to a crucial limitation of commonly used modularity-based heuristics for discovering communities: they rarely produce an optimal partition or a partition resembling an optimal partition. If modularity is to be used for detecting communities, exact or approximate optimization algorithms are recommendable for a more methodologically sound usage of modularity within its applicability limits.
\section{Introduction}
Community detection, the process of inductively identifying communities within a network, is a core problem in the computational sciences, especially physics, computer science, biology, and computational social science \cite{zhang2014,fortunato2022newman}. Among common approaches for community detection are the modularity maximization algorithms, which are designed to maximize a utility function, modularity \cite{newman_modularity_2006}, across all possible ways that the nodes of the input network can be partitioned into communities. Modularity measures the fraction of edges within communities minus the expected fraction if the edges were distributed randomly, where the random distribution of edges serves as a degree-preserving null model. Despite their name and design philosophy, current modularity maximization algorithms, which are used by no less than tens of thousands of peer-reviewed studies, are not guaranteed to maximize modularity \cite{good_performance_2010,newman_equivalence_2016}. This has led to uncertainty in the extent to which they succeed in returning a modularity-maximum (optimal) partition or something similar. Modularity is among the first objective functions proposed for optimization-based detection of communities \cite{newman_modularity_2006,fortunato2016}. Several limitations of modularity, including the resolution limit \cite{fortunato_2007}, have led researchers to develop alternative methods for detecting communities using stochastic block modeling \cite{Karrer_2011,sbm_2014,liu2021scalable,serrano2021community}, information theoretic approaches \cite{rosvall_2007,rosvall_2008}, and alternative objective functions \cite{Aldecoa_2011_surprise,traag_2015_surprise,marchese2022detecting}. In spite of its shortcomings, modularity is the most commonly used method for community detection \cite{sobolevsky2014general,fortunato2022newman}. Despite the widespread adoption of modularity-based heuristic methods \cite{clauset_finding_2004,blondel_fast_2008,rb_pots_2008,sobolevsky2014general,zhang2014,paris_2018,traag_louvain_2019,edmot_2019} for community detection, it remains unclear to what extent they succeed in returning an optimal partition or a partition resembling an optimal partition. This study aims to address this question through computational experimentation. After describing the methods and materials, we present the main results of our computational analysis and discuss the methodological ramifications and future directions.
\section{Methods and Materials}
We investigate the extent to which eight commonly used heuristic modularity maximization algorithms \cite{clauset_finding_2004,blondel_fast_2008,rb_pots_2008,sobolevsky2014general,zhang2014,paris_2018,traag_louvain_2019,edmot_2019} succeed in returning an optimal partition or a partition similar to an optimal partition. To achieve this objective, we quantify the proximity of their results to the globally optimal partition(s), which we obtain using an exact Integer Programming (IP) model for maximizing modularity \cite{brandes2007modularity,agarwal_modularity-maximizing_2008,dinh_toward_2015}. Throughout the paper, we use the terms network and graph interchangeably.
\subsection{Modularity}
Consider the simple undirected graph $G=(V,E)$ with $|V|=n$ nodes, $|E|=m$ edges, adjacency matrix entries $a_{ij}$, and a $k$-partition $X=\{C_1,C_2, \dots, C_k \}$ of the node set $V$.
The modularity function $Q_{(G,X)}$ is computed \cite{newman_modularity_2006,fortunato2016} according to Eq.\ \eqref{eq0}
\begin{equation} \label{eq0}
Q_{(G,X)}= \frac{1}{2m} \sum \limits_{(i,j) \in V^2, i\leq j} \left( a_{ij} - \gamma\frac{d_id_j}{2m}\right) \delta(i,j)
\end{equation}
\noindent where $d_i$ is the degree of node $i$, $\gamma$ is the resolution parameter\footnote{Without loss of generality, we set $\gamma=1$ for all the analysis in this paper.}, and $\delta(i,j)$ is 1 if nodes $i$ and $j$ are in the same community and 0 otherwise. The term associated with each pair of nodes $(i,j)$ is alternatively represented as $b_{ij}=a_{ij} -\gamma\frac{d_id_j}{2m}$ and referred to as the modularity matrix entry for $(i,j)$. The modularity maximization problem for input graph $G=(V,E)$ involves finding a partition $X^*$ whose associated $Q_{(G,X^*)}$ is globally maximum over all possible partitions of the node set $V$.
\subsection{Sparse IP formulation of modularity maximization}
Consider the simple graph $G=(V,E)$ with modularity matrix entries $b_{ij}$, obtained using the resolution parameter $\gamma$. For each pair of distinct nodes $(i,j), i<j$, the binary decision variable $x_{ij}$ encodes whether their community membership is the same ($x_{ij}=0$) or different ($x_{ij}=1$). Accordingly, the problem of maximizing the modularity of input graph $G$ can be formulated as an IP model \cite{dinh_toward_2015} as in Eq.\ \eqref{eq1}.
\begin{equation} \label{eq1}
\begin{split}
&\max_{x_{ij}} Q = \frac{1}{2m} \left( \sum\limits_{(i,j) \in V^2 , i< j} b_{ij}(1- x_{ij}) + \sum\limits_{(i,i) \in V^2} b_{ii} \right) \\
&\text{s.t.} \quad x_{ik}+x_{jk} \geq x_{ij} \quad \forall (i,j) \in V^2 , i< j, k\in K(i,j) \\
& \quad \quad x_{ij} \in \{0,1\} \quad \forall (i,j) \in V^2 , i< j
\end{split}
\end{equation}
In Eq.\ \eqref{eq1}, the optimal objective function value equals the maximum modularity for the input graph and an optimal community assignment is characterized by the optimal values of the $x_{ij}$ variables. $K(i,j)$ indicates a minimum-cardinality separating set \cite{dinh_toward_2015} for the nodes $i,j$. Using $K(i,j)$ in the IP model of this problem leads to a more efficient formulation with $\mathcal{O}(n^2)$ constraints \cite{dinh_toward_2015} instead of $\mathcal{O}(n^3)$ constraints in earlier IP formulations of the problem \cite{brandes2007modularity,agarwal_modularity-maximizing_2008}. Solving this optimization problem is NP-complete \cite{brandes2007modularity}. We use the \textit{Gurobi} solver (version 10.0) \cite{gurobi} to solve it for our computational experiments, as outlined in Subsection \ref{ss:data}.
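For concreteness, the following \texttt{gurobipy} sketch is our illustration of Eq.\ \eqref{eq1}, not the code used for the experiments; for brevity it generates the dense $\mathcal{O}(n^3)$ family of triangle constraints rather than the sparse separating sets $K(i,j)$ of \cite{dinh_toward_2015}, and all identifiers are our own.
\begin{verbatim}
import itertools
import gurobipy as gp
from gurobipy import GRB

def max_modularity_ip(G, gamma=1.0):
    """Solve Eq. (1) for a simple undirected networkx graph G.
    Dense O(n^3) triangle formulation, for clarity only."""
    nodes = list(G.nodes)
    m2 = 2.0 * G.number_of_edges()
    d = dict(G.degree)
    b = {(i, j): G.has_edge(i, j) - gamma * d[i] * d[j] / m2
         for i, j in itertools.combinations(nodes, 2)}
    mod = gp.Model("modularity")
    x = mod.addVars(b.keys(), vtype=GRB.BINARY)  # 1 iff i, j separated
    diag = sum(-gamma * d[i] ** 2 / m2 for i in nodes)  # the b_ii terms
    mod.setObjective(
        (gp.quicksum(b[e] * (1 - x[e]) for e in b) + diag) * (1.0 / m2),
        GRB.MAXIMIZE)
    # Transitivity of community membership, in all three orderings.
    for i, j, k in itertools.combinations(nodes, 3):
        mod.addConstr(x[i, j] + x[j, k] >= x[i, k])
        mod.addConstr(x[i, j] + x[i, k] >= x[j, k])
        mod.addConstr(x[i, k] + x[j, k] >= x[i, j])
    mod.optimize()
    return mod.ObjVal, {e: round(x[e].X) for e in b}
\end{verbatim}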
\subsection{Reviewing heuristic modularity maximization algorithms}
We evaluate eight modularity maximization heuristics known as Clauset-Newman-Moore (CNM) \cite{clauset_finding_2004}, Louvain \cite{blondel_fast_2008}, Leicht-Newman (LN) \cite{rb_pots_2008}, Combo \cite{sobolevsky2014general}, Belief \cite{zhang2014}, Paris \cite{paris_2018}, Leiden \cite{traag_louvain_2019}, and EdMot-Louvain \cite{edmot_2019}. A Python implementation of all these algorithms is available in the Community Discovery library (\textit{CDlib}) \cite{rossetti2019cdlib}. We briefly describe how these eight algorithms use modularity to discover communities.

The CNM algorithm initializes each node as a community by itself and then follows a greedy scheme of merging the two communities that contribute the maximum positive value to the modularity \cite{clauset_finding_2004}. The Louvain algorithm involves two iterative steps of (1) locally moving nodes for increasing modularity and (2) aggregating the communities from the first step \cite{blondel_fast_2008}. It has probably been the most commonly used method for community detection, but it may sometimes lead to disconnected components in the same community \cite{traag_louvain_2019}. The LN algorithm uses spectral optimization to maximize modularity and also supports directed graphs \cite{rb_pots_2008}. The Combo algorithm is a general community detection method through optimization which includes modularity maximization. It involves finding the best merger, split, or recombination of communities to maximize modularity, as well as performing a series of Kernighan-Lin bisections \cite{kernighan1970efficient} on the communities as long as they increase modularity \cite{sobolevsky2014general}. The Belief algorithm seeks the consensus of different high-modularity partitions through a message-passing algorithm \cite{zhang2014}, motivated by the premise that maximizing modularity can lead to many poorly correlated competing partitions. The Paris algorithm is suggested to be a modularity-maximization scheme with a sliding resolution \cite{paris_2018}; that is, an algorithm capable of capturing the multi-scale community structure of real networks without a resolution parameter. It generates a hierarchical community structure based on a simple distance between communities using a nearest-neighbour chain \cite{paris_2018}. The Leiden algorithm attempts to resolve a defect of the Louvain algorithm in returning badly connected communities. It is suggested to guarantee well-connected communities in which all subsets of all communities are locally optimally assigned \cite{traag_louvain_2019}. The EdMot-Louvain algorithm (EdMot for short) is suggested to overcome the hypergraph fragmentation issue observed in previous motif-based community detection methods \cite{edmot_2019}. It first creates the graph of higher-order motifs (small dense subgraph patterns) and then partitions it using the Louvain method to maximize modularity based on higher-order motifs \cite{edmot_2019}.

To evaluate these eight modularity-based community detection algorithms in maximizing modularity, we measure their performance in terms of (1) the ratio of their output modularity to the maximum modularity for each input graph and (2) the maximum similarity between their output partition and any optimal partition of that graph. We obtain optimal partitions by solving the IP model in Eq.\ \eqref{eq1} using the Gurobi solver with a termination criterion ensuring global optimality \cite{gurobi}.
\subsection{Measures for evaluating heuristic algorithms}
For a quantitative measure of proximity to global optimality, we define and use the \textit{Global Optimality Percentage} (GOP) as the ratio of the modularity returned by a heuristic method for a network to the globally maximum modularity for that network (obtained by solving the IP model in Eq.\ \eqref{eq1}). In all cases where the modularity returned by a heuristic method equals the maximum modularity for the input graph, we set GOP$=1$. In cases where a heuristic algorithm returns a partition with a negative modularity value, we set GOP$=0$ to facilitate a simple numerical comparison of the partitions based on their extent of sub-optimality. We also use a quantitative measure for the similarity of a partition to an optimal partition: normalized Adjusted Mutual Information (AMI) \cite{vinh_AMI} is a measure of similarity of partitions which (unlike normalized mutual information \cite{vinh_AMI}) adjusts the measurement for the similarity that two partitions may have by pure chance. The AMI of a pair of identical partitions (or permutations of the same partition) equals 1. For two different partitions, however, AMI takes a smaller value (including 0, or negative values close to 0, for two extremely dissimilar partitions).
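Both measures are straightforward to compute from community label vectors. The following sketch is our illustration (it assumes \texttt{numpy} label arrays and scikit-learn's \texttt{adjusted\_mutual\_info\_score}); taking the maximum over a list of optimal labelings anticipates the conservative reporting described in Subsection \ref{sec:results2}.
\begin{verbatim}
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score

def modularity(A, labels, gamma=1.0):
    """Modularity for adjacency matrix A and integer label vector,
    using the standard double-sum convention of Eq. (0)."""
    d = A.sum(axis=1)
    m2 = d.sum()                               # equals 2m
    B = A - gamma * np.outer(d, d) / m2        # modularity matrix b_ij
    same = labels[:, None] == labels[None, :]  # delta(i, j)
    return (B * same).sum() / m2

def gop_and_ami(A, heuristic, optima, gamma=1.0):
    """GOP and best AMI of one heuristic labeling against all optimal
    labelings (assumes the maximum modularity is positive)."""
    q = modularity(A, heuristic, gamma)
    q_star = modularity(A, optima[0], gamma)
    gop = 0.0 if q < 0 else min(q / q_star, 1.0)
    ami = max(adjusted_mutual_info_score(opt, heuristic)
              for opt in optima)
    return gop, ami
\end{verbatim}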
\subsection{Data and resources} \label{ss:data}
For this evaluation, we include {60 real networks}\footnote{The 60 real networks are loaded from the publicly accessible network repository \href{https://networks.skewed.de/}{Netzschleuder} as simple undirected graphs. The complete list of network names is provided in the Appendix.} with no more than {2812 edges} as well as 10 Erd\H{o}s-R\'{e}nyi graphs and 10 Barab\'{a}si-Albert graphs with 125--153 edges. The computational experiments were implemented in Python 3.9 using a notebook computer with an Intel Core i7-11800H @ 2.30GHz CPU and 64 GB of RAM running Windows 10.
\section{Results}
We present the main results from our experiments in the following four subsections. In Subsection \ref{sec:results1}, we compare partitions from different algorithms on a single network. In Subsection \ref{sec:results2}, we examine the multiplicity of optimal partitions and investigate the similarity between multiple optimal partitions of the same networks. In Subsection \ref{sec:results3}, we evaluate the effectiveness of the heuristic algorithms on 80 networks by measuring the distance of sub-optimal partitions from an optimal partition. Finally, in Subsection \ref{sec:results4}, we investigate the success rate of the heuristic algorithms in finding an optimal partition.
\subsection{Comparing partitions from different algorithms on one network}\label{sec:results1}
Figure \ref{fig:facebook} shows one of the 80 graphs and the partitions (represented by node colors) returned by nine community detection methods. This graph\footnote{ {facebook\_friends} network \cite{maier2017cover} from the \href{https://networks.skewed.de/}{Netzschleuder} repository} represents an anonymized Facebook \textit{ego network}\footnote{A network of one person's social ties to other persons and their ties to each other}. Nodes are Facebook users, and an edge exists if the two users were friends on Facebook in April 2014 \cite{maier2017cover}.
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_bayan.pdf} \caption{IP, $Q^*=0.7157714, \\\hspace{\textwidth} k=28, \text{AMI}=1$} \label{subfig:ip} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_CNM.pdf} \caption{CNM, $Q=0.6971,\\\hspace{\textwidth} k=30, \text{AMI}=0.829$} \label{subfig:cnm} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_combo.pdf} \caption{Combo, $Q=0.7157709,\\\hspace{\textwidth} k=13, \text{AMI}=0.949$} \label{subfig:combo} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_EdMot.pdf} \caption{EdMot, $Q=0.4902,\\\hspace{\textwidth} k=53, \text{AMI}=0.651$} \label{subfig:edmot} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_leiden.pdf} \caption{Leiden, $Q=0.7082,\\\hspace{\textwidth} k=32, \text{AMI}=0.908$} \label{subfig:leiden} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_louvain.pdf} \caption{Louvain, $Q=0.7087,\\\hspace{\textwidth} k=29, \text{AMI}=0.920$} \label{subfig:louvain} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_paris.pdf} \caption{Paris, $Q=0.0338,\\\hspace{\textwidth} k=20, \text{AMI}=0.363$} \label{subfig:paris} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_rb_pots.pdf} \caption{LN, $Q=0.7139,\\\hspace{\textwidth} k=28, \text{AMI}=0.971$} \label{subfig:ln} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[trim={10.6cm 10.6cm 10.6cm 10.6cm},clip,width=\textwidth]{figures/facebook_friends_belief.pdf} \caption{Belief, $Q=0.4566, \\\hspace{\textwidth}k=3, \text{AMI}=0.786$} \label{subfig:belief} \end{subfigure} \caption{Modularity maximization for one network and nine methods leading to one optimal partition (panel a) from the IP model and eight sub-optimal partitions (panels b-i) from eight heuristic algorithms with different modularity values ($Q$), number of communities ($k$), and similarity to an optimal partition ($\text{AMI}$). (Magnify the high-resolution figure on screen for details.) } \label{fig:facebook} \end{figure} Panel \ref{subfig:ip} of Figure \ref{fig:facebook} shows the optimal partition obtained by solving the IP model in Eq.\ \eqref{eq1} for the network {facebook\_friends} to global optimality. It involves $k=28$ communities and a maximum modularity value of $Q^*=0.7157714$. The results from the eight heuristic modularity maximization algorithms are all sub-optimal partitions as depicted in panels \ref{subfig:cnm}--\ref{subfig:belief} of Figure \ref{fig:facebook}. Compared to other algorithms, the two algorithms Combo and LN have more success in achieving proximity to an optimal partition. 
LN returns a partition with $k=28$ communities and a modularity of $Q=0.7139$, which has the highest AMI among all heuristics ($0.971$). The relative success of the Combo algorithm is in returning a high-modularity partition with $Q=0.7157709$, but with $k=13$ communities and a lower AMI ($0.949$) compared to LN. The sub-optimal partitions from the other six algorithms have more substantial variations in $Q$, AMI, and $k$, as shown by the values in the corresponding subcaptions in Figure \ref{fig:facebook}.
\subsection{Multiplicity of optimal partitions}\label{sec:results2}
While the partition which maximizes modularity is often unique, some graphs have multiple optimal partitions. For all networks considered in our analysis, we obtain all optimal partitions by running the Gurobi solver with a special configuration for finding all optimal partitions \cite{gurobi}. Figure \ref{fig:multiplicity} shows a protein network\footnote{ {interactome\_pdz} network \cite{pdzbase2005} from the \href{https://networks.skewed.de/}{Netzschleuder} repository} and its four optimal partitions. In this network, nodes represent proteins and an edge represents a binding interaction between two proteins (PDZ-domain-mediated protein–protein binding interaction) \cite{pdzbase2005}. All four optimal partitions have $Q^*=0.80267$ and $k=29$. For this network, the differences between optimal partitions are in the community assignments for two nodes, which are shown by red arrows in Figure \ref{fig:multiplicity}. The six pairwise AMI values for the similarity between each pair of optimal partitions are all $>0.98$, confirming the high level of similarity between the optimal partitions in Figure \ref{fig:multiplicity}.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim={8cm 8cm 8cm 8cm},clip,width=\textwidth]{figures/interactome_pdz_opt1.pdf}
\caption{Indicated nodes are blue and green}
\label{subfig:op1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim={8cm 8cm 8cm 8cm},clip,width=\textwidth]{figures/interactome_pdz_opt2.pdf}
\caption{Indicated nodes are green and green}
\label{subfig:op2}
\end{subfigure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim={8cm 8cm 8cm 8cm},clip,width=\textwidth]{figures/interactome_pdz_opt3.pdf}
\caption{Indicated nodes are blue and pale orange}
\label{subfig:op3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim={8cm 8cm 8cm 8cm},clip,width=\textwidth]{figures/interactome_pdz_opt4.pdf}
\caption{Indicated nodes are green and pale orange}
\label{subfig:op4}
\end{subfigure}
\caption{A protein network and its four optimal partitions (panels a-d). The red arrows show the differences between optimal partitions. (Magnify the high-resolution figure on screen for details.) }
\label{fig:multiplicity}
\end{figure}
Obtaining all optimal partitions for all 80 networks, we observed that 89\% of the graphs have unique optimal partitions, so the multiplicity of optimal partitions is a relatively rare event. Given the possibility of graphs having multiple optimal partitions, we calculated the AMI between the partition of each heuristic algorithm and each of the multiple optimal partitions of that graph. We then conservatively reported the maximum AMI of each heuristic for each graph to quantify the similarity between that partition and its closest optimal partition.
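The exact solver configuration used for this enumeration is not spelled out here; one plausible setup (our assumption), continuing the \texttt{gurobipy} sketch given earlier, relies on Gurobi's solution pool.
\begin{verbatim}
# Continuing the earlier sketch: mod is the model and x the tupledict
# of separation variables; set these parameters before calling
# optimize().  The parameter choices below are our assumption.
mod.Params.PoolSearchMode = 2   # search exhaustively for pool solutions
mod.Params.PoolGap = 0.0        # retain only solutions at the optimum
mod.Params.PoolSolutions = 100  # arbitrary cap on the pool size
mod.optimize()
optima = []
for s in range(mod.SolCount):
    mod.Params.SolutionNumber = s
    optima.append({e: round(x[e].Xn) for e in x})
\end{verbatim}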
By this conservative measure, a low value of AMI for the partition obtained by a heuristic algorithm indicates its dissimilarity to any optimal partition. Our results show that the rarely observed multiple optimal partitions of a graph often have a high degree of similarity (AMI values $>0.9$) because their differences are only in the community assignments of a very few nodes (as in Figure \ref{fig:multiplicity}). Dissimilarity between multiple optimal partitions of a network is exceptional, but it has been observed in one of our 80 networks: \textit{contiguous USA}\footnote{ {contiguous\_usa} network \cite{knuth1993stanford} from the \href{https://networks.skewed.de/}{Netzschleuder} repository}, where nodes are US states and edges indicate a land-based border between two states. The AMI of the two optimal partitions for this network is exceptionally low $(0.34)$. Upon further investigation, we observed that one optimal partition combines five communities of the other optimal partition. This makes the two partitions related in terms of belonging to a clustering hierarchy, while they are not similar according to an AMI definition of partition similarity. Setting aside the exceptional cases, which are possible due to the mathematical symmetries resulting from the value of $\gamma$ used in Eq.\ \eqref{eq0} for defining modularity, our results show that there is usually a distinct uniqueness to an optimal partition (or a group of similar optimal partitions) for a given network in comparison to sub-optimal partitions. This new perspective challenges the premise that maximizing modularity leads to many poorly correlated competing partitions with almost the same modularity \cite{zhang2014}. It is failing to actually maximize modularity that may leave us with many poorly correlated competing partitions with unclear distance to maximum modularity. What remains to be analyzed is how different sub-optimal partitions are from an optimal partition, and how often heuristic modularity maximization algorithms return sub-optimal partitions. We investigate these two questions in the next two subsections.
\subsection{Evaluating heuristic algorithms on 80 networks}\label{sec:results3}
To summarize the results on all 80 networks and eight heuristics, we present four scatterplots of GOP and AMI. Figure \ref{fig:heuristic} shows GOP on the y-axes and AMI on the x-axes for the combination of each network and algorithm. For each algorithm (represented by colors), there are 60 data points for the 60 real networks and 2 data points for the average of 10 Erd\H{o}s-R\'{e}nyi and the average of 10 Barab\'{a}si-Albert graphs. The first three letters of the network names are indicated on each data point (magnify the figure on screen for details). 45-degree lines are drawn to indicate the cases where GOP and AMI are equal. In other words, the 45-degree lines show cases where the extent of sub-optimality ($1-\text{GOP}$) is associated with a dissimilarity ($1-\text{AMI}$) of the same size between the sub-optimal partition and any optimal partition.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\textwidth]{figures/scatter_subs.pdf}
\caption{Global optimality percentage and normalized adjusted mutual information measured for eight modularity maximization heuristics in comparison with (all) globally optimal partitions.
(Magnify the high-resolution figure on screen for details.)}
\label{fig:heuristic}
\end{figure}
Looking at the y-axes values in Figure \ref{fig:heuristic}, we observe that there is substantial variation in the values of GOP (i.e., the extent of sub-optimality) returned by the eight heuristic algorithms. The Belief algorithm returns partitions associated with negative modularity values for most of the 80 instances (leading to data points with GOP$=0$ concentrated at the bottom of Figure \ref{fig:heuristic}). The Paris algorithm returns partitions with modularity values substantially smaller than the maximum modularity. Aside from a few exceptions, all data points for Leiden and LN have the same position, indicating their identical performance on most of these instances. The two algorithms CNM and EdMot seem to have higher variation in GOP (compared to the other algorithms) for these instances. Overall, the four best-performing algorithms in returning close-to-maximum modularity values are, in increasing order of performance, LN, Leiden, Louvain, and Combo. Although these instances are graphs with no more than 2812 edges, they are, according to Figure \ref{fig:heuristic}, challenging instances for these heuristic algorithms. Given that modularity maximization is an NP-complete problem \cite{brandes2007modularity}, one can argue that the performance of these heuristic methods in terms of proximity to an optimal partition does not improve for larger networks. The x-axes values in Figure \ref{fig:heuristic} show the considerable dissimilarity between the sub-optimal partitions and an optimal partition for these 80 instances. Except for the Combo algorithm, a large number of the sub-optimal partitions obtained by these heuristic algorithms have AMI values smaller than {$0.6$}. This indicates that their sub-optimal partitions are substantially different from any optimal partition. Even for data points concentrated at the top of Figure \ref{fig:heuristic}, which have $0.95<\text{GOP}<1$, we see AMI values substantially smaller than 1. Compared to other heuristics, Combo appears to consistently return partitions with large AMIs on a larger number of these 80 instances. Focusing on the position of the data points, we observe that they are mostly located above their corresponding 45-degree line. This indicates that sub-optimal partitions tend to be disproportionately dissimilar to any optimal partition. This result goes against the naive viewpoint that close-to-maximum modularity partitions are also close to an optimal partition. This finding confirms previous concerns that heuristic modularity maximization algorithms have a high risk of failing to obtain relevant communities \cite{kawamoto2019counting} and that they may result in degenerate solutions of communities that could be far from the underlying community structure \cite{good_performance_2010}.
\subsection{Success rate of heuristic algorithms in maximizing modularity}\label{sec:results4}
Our results on GOP for the eight heuristic algorithms allow us to answer a fundamental question about the heuristic modularity maximization algorithms: how often does each algorithm return an optimal (modularity-maximum) partition? We report the fraction of networks (out of 80) for which a given algorithm returns an optimal partition. Combo \cite{sobolevsky2014general} has the highest success rate, returning an optimal partition for $55\%$ of the networks. LN \cite{rb_pots_2008} and Leiden \cite{traag_louvain_2019} maximize modularity for 26.2\% of the networks considered.
Louvain \cite{blondel_fast_2008} has a success rate of 18.7\%. The algorithms CNM \cite{clauset_finding_2004}, EdMot \cite{edmot_2019}, Paris \cite{paris_2018}, and Belief \cite{zhang2014} have success rates of 5\%, 2.5\%, 1.2\%, and 0\%, respectively. These are arguably low success rates for what the name \textit{modularity maximization algorithm} implies, or for the design philosophy of discovering network communities through maximizing a function. Earlier, from Figure \ref{fig:heuristic}, we observed that near-optimal partitions tend to be disproportionately dissimilar to any optimal partition. In other words, close-to-maximum modularity partitions are rarely close to any optimal partition. Taken together with the low success rates of heuristic algorithms in maximizing modularity, our results indicate a crucial mismatch between the design philosophy of modularity maximization algorithms for community detection and their performance: heuristic modularity maximization algorithms rarely return an optimal partition or a partition resembling an optimal partition.
\section{Conclusions and Future Directions}
We analyzed eight heuristic modularity-based algorithms for community detection. While our findings are limited to these algorithms, their usage by tens of thousands of peer-reviewed studies indicates the importance of assessing their performance. Most heuristic algorithms for modularity maximization tend to scale well for large networks \cite{zhao2021community}. They are widely used not only because of their scalability, but also because the high risk of failing to obtain relevant communities is not well understood \cite{kawamoto2019counting}. The scalability of these heuristics comes at a cost: their partitions have no guarantee of proximity to an optimal partition \cite{good_performance_2010} and, as our computational analysis showed, they rarely return an optimal partition. Moreover, we showed that their sub-optimal partitions tend to be disproportionately dissimilar to any optimal partition. An emphasis on scalability is certainly valid for some applications and contexts. However, for many other applications, more attention to the quality of partitions, or a trade-off between quality and running time, seems more justifiable than sacrificing optimality/quality for scalability. Our findings suggest that developing an actual modularity maximization algorithm \cite{aref2022bayan} is recommendable for a more methodologically sound usage of modularity in community detection. Such an algorithm can also reveal the theoretical limits \cite{fortunato2022newman} of maximum modularity partitions in retrieving ground-truth communities. Understanding what modularity does and does not provide has been complicated by the under-studied sub-optimality of heuristic \textit{modularity-inspired algorithms} and their methodological consequences. This is because previous methodological studies \cite{lancichinetti_limits_2011,Chen_2018,global2018,kawamoto2019counting,peixoto_2023} had rarely disentangled the heuristic aspect of these algorithms from the fundamental concept of modularity. Our study is a continuation of previous efforts \cite{good_performance_2010,kawamoto2019counting} in separating the effects of sub-optimality (or the choice of using greedy approaches) from the effects of using modularity on discovering network communities.
A promising path forward seems to be using the advances in integer programming to push the computational limits of solving the IP formulation of the modularity maximization problem \cite{brandes2007modularity,agarwal_modularity-maximizing_2008,dinh_toward_2015} more efficiently for graphs of practically relevant size and order. This approach acknowledges that there will be a limit to the largest graph whose modularity can be maximized within a reasonable time using fixed computing resources. Heuristic modularity-inspired algorithms may also serve network and data science practitioners better if a new paradigm brings more attention to partition optimality/quality alongside scalability. This allows researchers from the computational sciences to have more accurate methods for clustering networked data, thereby improving upon widely used computational tools for understanding networked systems.
\subsubsection{Author contributions}
Conceptualization (SA); data curation (SA, HC); formal analysis (SA, MM); funding acquisition (SA); investigation (SA); methodology (SA,MM); project administration (SA); resources (SA, HC, MM); software (SA, HC, MM); supervision (SA, MM); validation (SA, HC, MM); visualization (SA, MM); writing - original draft preparation (SA); writing - review \& editing (SA, MM).
\subsubsection{Acknowledgements}
We acknowledge Zachary P. Neal for pointing us to this problem and Santo Fortunato for the encouraging correspondence. The free and public access to community detection algorithms in the \href{https://cdlib.readthedocs.io/}{CDlib} maintained by Giulio Rossetti and colleagues, and network data repositories maintained by Tiago P. Peixoto (the \href{https://networks.skewed.de/}{Netzschleuder}) and by Aaron Clauset and his students (the Index of Complex Networks - \href{https://icon.colorado.edu/}{ICON}) has been particularly helpful in this study, for which we are grateful. This study has been supported by the Data Sciences Institute at the University of Toronto.
\bibliographystyle{splncs04}
{ "timestamp": "2023-03-01T02:21:00", "yymm": "2302", "arxiv_id": "2302.14698", "language": "en", "url": "https://arxiv.org/abs/2302.14698", "abstract": "Community detection is a fundamental problem in computational sciences with extensive applications in various fields. The most commonly used methods are the algorithms designed to maximize modularity over different partitions of the network nodes. Using 80 real and random networks from a wide range of contexts, we investigate the extent to which current heuristic modularity maximization algorithms succeed in returning maximum-modularity (optimal) partitions. We evaluate (1) the ratio of the algorithms' output modularity to the maximum modularity for each input graph, and (2) the maximum similarity between their output partition and any optimal partition of that graph. We compare eight existing heuristic algorithms against an exact integer programming method that globally maximizes modularity. The average modularity-based heuristic algorithm returns optimal partitions for only 19.4% of the 80 graphs considered. Additionally, results on adjusted mutual information reveal substantial dissimilarity between the sub-optimal partitions and any optimal partition of the networks in our experiments. More importantly, our results show that near-optimal partitions are often disproportionately dissimilar to any optimal partition. Taken together, our analysis points to a crucial limitation of commonly used modularity-based heuristics for discovering communities: they rarely produce an optimal partition or a partition resembling an optimal partition. If modularity is to be used for detecting communities, exact or approximate optimization algorithms are recommendable for a more methodologically sound usage of modularity within its applicability limits.", "subjects": "Social and Information Networks (cs.SI); Statistical Mechanics (cond-mat.stat-mech); Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Optimization and Control (math.OC)", "title": "Heuristic Modularity Maximization Algorithms for Community Detection Rarely Return an Optimal Partition or Anything Similar", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812345563904, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.7079434089067641 }
https://arxiv.org/abs/1710.05251
Symmetries in the time-averaged dynamics of networks: reducing unnecessary complexity through minimal network models
Complex networks are the subject of fundamental interest from the scientific community at large. Several metrics have been introduced to characterize the structure of these networks, such as the degree distribution, degree correlation, path length, clustering coefficient, centrality measures etc. Another important feature is the presence of network symmetries. In particular, the effect of these symmetries has been studied in the context of network synchronization, where they have been used to predict the emergence and stability of cluster synchronous states. Here we provide theoretical, numerical, and experimental evidence that network symmetries play a role in a substantially broader class of dynamical models on networks, including epidemics, game theory, communication, and coupled excitable systems. Namely, we see that in all these models, nodes that are related by a symmetry relation show the same time-averaged dynamical properties. This discovery leads us to propose reduction techniques for exact, yet minimal, simulation of complex networks dynamics, which we show are effective in order to optimize the use of computational resources, such as computation time and memory.
\section{Outline}
\label{sec:oultine}
In recent years a large body of research has investigated the dynamics of complex networks, including percolation \cite{moore2000epidemics}, epidemics \cite{ganesh2005effect, earn2000simple}, synchronization \cite{SUCNS, pecora1990synchronization,Ro:Pi:Ku,belykh2005synchronization1,belykh2005synchronization2,belykh2011mesoscale,belykh2001cluster,taylor2009dynamical,tinsley2012chimera,nature_fs}, evolutionary games \cite{nature_gt, weibull1997evolutionary}, neuronal models \cite{luke2013complete,uzuntarla2017inverse}, and traffic dynamics \cite{korea,korea3,So:Va,Oh:Sa,Mo:Pa02,Gu:Gu,fs_comm_01, fs_comm_02, fs_comm_03}. These studies are relevant to better model, understand, design, and control networks in technological, biological, and social applications. Extensive research has shown that the topology of these networks (e.g., their degree distribution \cite{pastor2001epidemic,barabasi1999emergence}, degree correlation \cite{newman2003mixing,newman2002assortative}, and community structure \cite{girvan2002community}) plays a significant role in their dynamical time evolution \cite{boccaletti2006complex}.
\begin{figure}[h!t!]
\centering
\includegraphics[width=0.9\textwidth]{fig01}
\caption{\textbf{Effects of the network symmetries in three different dynamical models/networks:} (a) the Zachary's Karate Club network \cite{zachary1977information}, (b) the Bellsouth network \cite{dataset}, and (c) a random graph; nodes colored the same are in the same symmetry cluster, except the gray-colored nodes, each of which is in a cluster by itself. (d-f) The Prisoner's Dilemma game, (g-i) the network traffic model, and (j-l) the dynamics of the Kinouchi-Copelli model \cite{kc_model}, simulated on the networks in (a-c).}
\label{fig:AB}
\end{figure}
Another important feature of the network topology is the presence of network symmetries, which so far have remained to some extent unexplored, with some exceptions \cite{nature_fs, nicosia2013remote, golubitsky2003symmetry, golubitsky2012singularities, NCs}. These symmetries have been shown to be commonplace in many real networks \cite{macarthur2009spectral}, hence it becomes important to understand how they can affect the dynamics of the network. References \cite{nature_fs, nicosia2013remote, golubitsky2003symmetry, golubitsky2012singularities, NCs} have focused on the role played by the network symmetries in the emergence of cluster synchronization. Here we consider several well-known dynamical models on networks and illustrate the effects of the underlying network symmetries on the network dynamics. Our study indicates that network symmetries play a role in all the dynamical models considered, namely evolutionary games \cite{nature_gt, weibull1997evolutionary}, network traffic \cite{korea,korea3,So:Va,Oh:Sa,Mo:Pa02,Gu:Gu,fs_comm_01, fs_comm_02, fs_comm_03}, and the propagation of excitation among excitable systems \cite{KI1,KI2,KI3}, and thus suggests that the effect of the symmetries on the dynamics may be a rather general feature of complex networks. However, as we will see, the particular effect of the symmetries varies based on the particular type of dynamics considered. Here, the topology of a network is described by the adjacency matrix $A=\{A_{ij}\}$, where $A_{ij}=A_{ji}$ is equal to $1$ if nodes $i$ and $j$ affect each other and is equal to $0$ otherwise. We define a network symmetry as a permutation of the network nodes that leaves the network structure unaltered.
The symmetries of the network form a (mathematical) group $\mathcal{G}$. Each element of the group can be described by a permutation matrix $\Pi$ that re-orders the nodes in a way that leaves the network structure unchanged (that is, each $\Pi$ commutes with $A$: $\Pi A = A\Pi$). Though the set of symmetries (or automorphisms) of a network can be quite large, even for small networks, it can be computed from knowledge of the matrix $A$ by using widely available discrete algebra routines. In fact, except for simple cases in which the symmetries can be identified by inspection, for an arbitrary network the use of software is required. In this study we used Sage \cite{sage}, an open-source mathematical software system. Once the symmetries are identified, the nodes of the network can be partitioned into $M$ \textit{clusters} by finding the orbits of the symmetry group, i.e., the disjoint sets of nodes that, when all of the symmetry operations are applied, permute among one another within the same set.
\section{Symmetries of the Network Dynamics}
\label{sec:snd}
Figure\ \ref{fig:AB}(a), (b), and (c) shows three examples of undirected networks: the Zachary's Karate Club network \cite{zachary1977information} of $N=34$ nodes, the Bell South network \cite{knight2011internet} of $N=51$ nodes, and a randomly generated Erd\"os-R\'enyi (ER) graph of $N=20$ nodes, respectively. Each node of the Karate Club network is a member of a university karate club, and a connection represents a friendship relation between the members. The nodes of the Bell South network belong to the IP/MPLS (Multiprotocol Label Switching, a switching mechanism used in high-performance telecommunications networks) backbone, and connections represent routing paths. In Fig.\ \ref{fig:AB}(a), (b), and (c) the colors of the nodes indicate the clusters they belong to, either non-trivial clusters (clusters with more than one node) or trivial clusters (clusters with only one node). All the nodes in trivial clusters are colored gray, while the non-trivial clusters are colored differently. The Karate Club network in Fig.\ \ref{fig:AB}(a) has $C=4$ nontrivial clusters, $23$ trivial clusters, and $480$ symmetries. The Bell South network in Fig.\ \ref{fig:AB}(b) displays $C=9$ nontrivial clusters, $24$ trivial clusters, and $29,859,840$ symmetries. The random network in Fig.\ \ref{fig:AB}(c) has $C=6$ non-trivial clusters, $8$ trivial clusters, and $8$ symmetries. The rest of Fig.\ \ref{fig:AB} has a total of nine panels, one for each combination of the three networks and the three dynamical models. The three dynamical models are: evolutionary game theory (d-f), network traffic (g-i), and propagation of excitation among excitable systems (j-l). We now briefly introduce the models, which are all stochastic in nature.

The evolutionary game theory dynamics models the evolution of cooperation and defection in a population of coupled agents (nodes) playing the Prisoner's Dilemma game. At each time step a node is randomly selected and its strategy is updated. The new strategy to be adopted is probabilistically determined based on the payoffs of the nodes surrounding the selected node and their strategy selection. Each one of the network nodes (agents) iteratively plays a version of the Prisoner's Dilemma game \cite{nature_gt}. Each node $i$ can either be a cooperator ($S_{i}=1$) or a defector ($S_{i}=0$). The network connectivity is defined by the adjacency matrix $A$, described earlier.
We define a payoff between two players, based on the well known \emph{Prisoner's Dilemma} game. There are two types of strategy adopted by the players: \emph{cooperation} and \emph{defection}. A cooperator pays a cost $c$ for each one of the agents it is connected to, and a defector pays nothing \cite{nature_gt}. Each node receives a benefit equal to $b$ for each cooperator it is connected to. When playing the game, node $i$ receives a payoff equal to
\begin{equation}
\xi_{i} = \sum_{j}(A_{ij}bS_{j}- A_{ji}cS_{i}).
\end{equation}
We define the fitness \cite{nature_gt} of each node to be $f_{i}=1-\omega+\omega \xi_{i}$, where $0 \leq \omega \leq 1$ measures the intensity of selection: $\omega\simeq 1$ means strong selection, that is, the fitness is almost equal to the payoff, and $\omega\simeq 0$ means weak selection, that is, the fitness is almost independent of the payoff and close to $1$. The literature \cite{nature_gt, wu2007cooperation, wang2010evolutionary, tan2014structure} focuses on the case of weak selection, which is also what we consider here (in all our simulations we set either $\omega=0.1$ or $\omega=0.2$). Following \cite{nature_gt} we choose a `Death-birth' (DB) updating rule for the game evolution. Namely, in each time step a randomly selected node $i$ is replaced by a new offspring (node). The new offspring evolves into either a cooperator or a defector depending on the fitness of the surrounding agents. We set the probability that the new node is a cooperator to $\sigma(F_{Ci}-F_{Di})$, where $F_{Ci}$ and $F_{Di}$ are the fitnesses of the cooperators and defectors among the neighboring nodes and $\sigma$ is a monotonically increasing function such that $0\leq\sigma\leq 1$. This reflects a higher propensity of turning into a cooperator based on how \emph{well} the neighbors of a given node that are cooperators are doing with respect to the other neighbors of that node that are defectors. The total fitness of the neighbors of player $i$ is equal to
\begin{equation}
F_{i}=\sum\limits_{j}^{N}A_{ij}f_{j}
\label{eq:total_fitness}
\end{equation}
The total fitness of the cooperators, $F_{Ci}$, and of the defectors, $F_{Di}$, among the neighbors of node $i$ is defined as
\begin{equation}
\begin{split}
F_{Ci} =&\sum\limits_{j}^{N}A_{ij}S_{j}f_{j} = \sum_{j}A_{ij}S_{j}(1-\omega)+\omega\sum_{j}A_{ij}S_{j}\xi_{j} \cr
F_{Di} =& F_i - F_{Ci} = \sum_{j}^{N}A_{ij}(1-S_{j})(1-\omega)+\omega\sum_{j}A_{ij}(1-S_{j})\xi_{j}
\end{split}
\label{eq:fitness_cooperator}
\end{equation}
Letting $x_i = (F_{Ci}-F_{Di})$, we write the probability that the new offspring will be a cooperator as $\sigma(x_i)$. Here we set $\sigma(x_i) = \gamma x_i + \epsilon$, where $\gamma>0$ and $\epsilon$ are two arbitrary constants. In all our numerical simulations the values of $\gamma$ and $\epsilon$ were chosen so as to ensure $0\leq \sigma \leq 1$ for all $i$'s. In Sec.\ S1 of the Supplementary Information we obtain a set of equations that describe the time evolution of the game and prove its equivariance with respect to permutations of the network nodes that are in the automorphism group of $A$. We numerically iterated the game on several networks, including the three shown in Figs.\ \ref{fig:AB} (a), (b) and (c), for a number of time steps, and for each node $i$ we monitored $\left<S_i\right>$, the fraction of time the node spends in the cooperator state. For each run, the game was iterated until a state was reached in which the number of cooperators and defectors stabilized.
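As a concrete illustration, one DB update can be coded directly from the definitions above. The following Python sketch is ours and not the simulation code used for the figures; it assumes \texttt{numpy}, a symmetric $0/1$ adjacency matrix, and values of $\gamma$ and $\epsilon$ for which the resulting probability stays in $[0,1]$.
\begin{verbatim}
import numpy as np

def db_step(A, S, b, c, omega, gamma, eps, rng):
    """One 'death-birth' update of the networked Prisoner's Dilemma.
    A: symmetric 0/1 adjacency matrix; S: 0/1 strategy vector."""
    payoff = b * (A @ S) - c * S * A.sum(axis=0)   # the payoffs xi_i
    f = 1.0 - omega + omega * payoff               # fitness f_i
    i = rng.integers(len(S))                       # node to be replaced
    F_C = A[i] @ (S * f)        # total fitness of cooperator neighbors
    F_D = A[i] @ ((1 - S) * f)  # total fitness of defector neighbors
    S[i] = 1 if rng.random() < gamma * (F_C - F_D) + eps else 0
    return S
\end{verbatim}
Iterating this update (with \texttt{rng = np.random.default\_rng()}) and time-averaging $S_i$ yields quantities analogous to those plotted in Fig.~\ref{fig:AB}(d)-(f).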
Figures \ref{fig:AB}(d), (e) and (f) show $\left<S_i\right>$, the fraction of time that each node spends in the cooperator state, for each one of the nodes of the networks in Fig.\ \ref{fig:AB}(a), (b) and (c), respectively.

In the network traffic model, packets originate at source nodes, are routed through a sequence of intermediate nodes until they reach their destination nodes, and are then removed from the network \cite{korea,korea3,So:Va,Oh:Sa,Mo:Pa02,Gu:Gu,fs_comm_01, fs_comm_02, fs_comm_03}. At every intermediate node, packets are placed at the bottom of that node's queue. When they reach the top of the queue they get routed to one of the neighboring nodes. Here we consider a simple routing strategy that attempts to avoid nodes with large queues by assigning them a lower probability of being selected for routing. Figure 1(g), (h), and (i) show the rate of growth of the queue length (the number of packets in the queue) at each node for the three networks shown in Figure 1(a), (b), and (c), respectively.

Finally, we consider a network of coupled excitable systems \cite{KI1}. Each one of these systems can be in one of three states: quiescent, excited, or refractory. Nodes that are excited can excite neighboring nodes that are in the quiescent state with a certain probability. Figure 1(j), (k), and (l) show the frequency of excitation at each node for the three networks shown in Figure 1(a), (b), and (c), respectively. A more precise and detailed description of the evolutionary game theory model is provided in Supplementary Information Sec.\ S1, of the network traffic model in Supplementary Information Sec.\ S2, and of the excitable systems model in Supplementary Information Sec.\ S3.

Our main result is that for all three networks and all three dynamical models, \emph{nodes that belong to the same cluster show the same time-averaged dynamics.} This is illustrated in detail in Fig.\ 1 (d)-(l). While this observation holds irrespective of the particular network and type of dynamics, the particular time-averaged value attained by the nodes in the same cluster is network and model specific. Note that in the figure nodes are ordered by their degree (which is the label on the abscissa axis). For example, for the game theory model, we observe that the nodes in the same cluster show approximately the same frequency of being a cooperator (or defector), but this does not necessarily correlate with the degree. Here we see that the symmetries in the network topology can predict the dynamics better than the nodes' degrees. For each one of the dynamical models considered, we have performed an analysis to: (i) predict the emergence of clusters when the dynamics is averaged in time and (ii) predict the values attained by the nodes in each cluster. The results of this analysis are reported for the evolutionary game, the communication model, and the excitable systems model in the Supplementary Information, Secs.\ S1, S2, and S3.
\section{Quotient Graph Reduction}
\label{sec:qtgr}
As mentioned in the introduction, symmetries are common features of biological networks, technological networks, social networks, etc. MacArthur \textit{et al}.\ \cite{macarthur2008symmetry} have analyzed datasets of large complex networks and have found that these present large numbers of symmetries; see Supplementary Information Sec.\ S4.
Intensive research in social sciences, biology, engineering, and physics uses numerical simulations of large complex networks to understand and predict their dynamical behavior (e.g., in a given network, the critical value of the infection rate above which an epidemic occurs), in order to better characterize and control real-world phenomena. Our results in Secs.\ \ref{sec:snd}, S1, S2, and S3 point out that nodes that are related by a symmetry operation display the same time-averaged dynamical behavior. This immediately raises the question of whether a reduction of the dynamics is possible in which \emph{duplicate nodes} can be omitted, leading to \emph{minimal models} of complex networks, and thus to a better exploitation of computational resources in numerical simulations, such as computation time and memory. Related questions have been asked in the literature on complex networks, where nodes have been grouped according to some of their features, most notably the degree \cite{pastor2001epidemic}, and these approaches have been successful at predicting and explaining several network properties, in particular in the case of scale-free networks \cite{barabasi1999emergence}. While these approaches are typically based on mean-field models and are thus approximate, here our grouping of nodes is based on the exact concept of a symmetry. Our ultimate goal is to generate {minimal network models} that reproduce certain features of the dynamics by using the least possible number of nodes. In the case of synchronization dynamics, we know that a \emph{quotient network reduction} is possible, in which the exact cluster-synchronous time evolution is generated by a minimal number of nodes (i.e., one node for each cluster). However, how to obtain minimal network models for other types of dynamics remains an open question. To address this issue, here we briefly review the concept of a \emph{quotient network}. Under the action of the symmetry group, the set of the network nodes is partitioned into $C$ disjoint structural equivalence classes called the group \emph{orbits}, $\mathcal{O}^1,\mathcal{O}^2,\ldots,\mathcal{O}^C$, such that $\bigcup\limits_{\ell =1}^{C}\mathcal{O}^{\ell} = \{1,\ldots,N\}$ and $\mathcal{O}^i \cap \mathcal{O}^j=\emptyset$ for $i,j=1,2,\ldots,C$, $j\neq i$ (in particular, $\sum_{\ell=1}^{C}|\mathcal{O}^{\ell}| = N$). Then we can define a $C \times C$ matrix $\hat{A}$ corresponding to the quotient network such that for each pair of orbits $(\mathcal{O}^u, \mathcal{O}^v)$, \begin{equation} \hat{A}_{uv}= \sum_{j \in \mathcal{O}^v} A_{ij}, \end{equation} for any $i \in \mathcal{O}^u$ (i.e., the sum is independent of the choice of $i \in \mathcal{O}^u$), and for $u,v = 1,2,\ldots,C$. \begin{figure}[ht!] \centering \includegraphics[width=0.8\textwidth]{fig04} \caption{(a) A 10 node network and (b) its 5 node quotient reduction. Inset of (a) shows one Arduino board with radio transmitter, which we used in our experiments. Plots (c) and (d) are experimental time traces showing the running average for the fraction of times each node spends in the cooperator state for the full network in (a) and the quotient network in (b), respectively. Colors in (c) and (d) are consistent with (a) and (b).} \label{fig:experiment} \end{figure} In Fig.\ \ref{fig:experiment}(a) we show a randomly generated network of 10 nodes and in Fig.\ \ref{fig:experiment}(b) its quotient graph reduction. We see that all the nodes in the same cluster (these are colored the same in the figure on the left) map to only one node of the quotient network (on the right), and the color identifies the reduced node.
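To make the construction concrete, the following sketch (ours, not from the paper) assembles $\hat A$ from an adjacency matrix and a given orbit partition; computing the orbits themselves requires a graph-automorphism tool, which we do not reproduce here. We reuse the path graph from the earlier sketch, whose automorphism group has orbits $\{0,2\}$ and $\{1\}$.
\begin{verbatim}
import numpy as np

def quotient_matrix(A, orbits):
    """C x C quotient matrix: A_hat[u, v] = sum_{j in O^v} A[i, j],
    which is the same for every representative i in O^u."""
    C = len(orbits)
    A_hat = np.zeros((C, C))
    for u, O_u in enumerate(orbits):
        i = O_u[0]                    # any representative of O^u works
        for v, O_v in enumerate(orbits):
            A_hat[u, v] = A[i, O_v].sum()
    return A_hat

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])            # path 0-1-2
print(quotient_matrix(A, [[0, 2], [1]]))   # [[0. 1.]
                                           #  [2. 0.]]
\end{verbatim}
Note that, consistently with the remark below, the quotient of this undirected path is directed: $\hat A_{12}=1$ while $\hat A_{21}=2$.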
Note that the quotient network may be directed even if the original full network is undirected. It is also possible that nodes of the quotient network form self-connections, which represent connections between nodes belonging to the same cluster in the original graph. In general, the mathematical equations we have derived in Sec.\ \ref{sec:snd} and in Secs.\ S1, S2, and S3 of the Supplementary Information for all three models can be projected onto the corresponding quotient network equations (see the Supplementary Information Sec.\ S4 for an example). However, obtaining an equivalent model that can be simulated on the quotient network and reproduce the original full network dynamics may require a particular adaptation of the model, which is model-specific. We show that the quotient graph can be conveniently exploited in simulations involving large networks to reduce computation time and memory. These simulations may be used to model various types of dynamics, including epidemics, congestion, and the emergence of cooperation, as discussed in Sec.\ \ref{sec:snd}, just to mention a few examples. In order to demonstrate this point we consider the evolutionary game theory model presented in Sec.\ \ref{sec:snd} and, for a number of networks, we study how well the corresponding quotient graphs can approximate the evolutionary dynamics of the original full network. In what follows we describe the evolution of the Prisoner's Dilemma dynamics, as described in Sec.\ \ref{sec:snd}, on the quotient network. We indicate with $\hat{S}_j \in \{0,1\}$ the strategy of node $j$ of the quotient network, where $0$ represents defection and $1$ represents cooperation. At each iteration of the game, the payoff received by quotient node $i$ from its neighboring nodes is equal to $\hat{\xi}_i = \sum_j \hat{A}_{ij} \left(b\hat{S}_j-c\hat{S}_i\right)$. Note that this expression for the payoff differs from that for the full network in Eq.\ (1), as the mapping of nodes from the full network to the quotient network preserves the indegree of the nodes, but not the outdegree. Moreover, the fitness of quotient node $i$ is equal to $\hat{f}_i=1-\omega+\omega\hat{\xi}_i$. Similarly we write the expressions \begin{equation} \begin{split} \hat{F}_i &=\sum_{j}\hat{A}_{ij}\hat{f}_j, \cr \hat{F}_{Ci} &=\sum_{j}\hat{A}_{ij}\hat{S}_{j}(1-\omega)+\omega\sum_{j}\hat{A}_{ij}\hat{S}_{j}\hat{\xi}_{j}, \cr \hat{F}_{Di} &=\sum_{j}\hat{A}_{ij}(1-\hat{S}_{j})(1-\omega)+\omega\sum_{j}\hat{A}_{ij}(1-\hat{S}_{j})\hat{\xi}_{j}, \end{split} \end{equation} where $\hat{F}_i$, $\hat{F}_{Ci}$ and $\hat{F}_{Di}$ are the total fitness, the fitness of the neighboring cooperators, and the fitness of the neighboring defectors of a quotient node $i$, respectively. We set the same cost $c$ and benefit $b$ as for the full network. At each time step, a node of the quotient network is selected with probability proportional to the cardinality of its orbit and replaced by a new offspring. This new node becomes a cooperator with probability $\sigma\left(\hat{F}_{Ci} - \hat{F}_{Di}\right)$, where $\sigma$ is the function defined in Sec.\ \ref{sec:snd}. We expect that the time averages satisfy $\left<\xi_f\right>\approx\left<\hat{\xi}_q\right>$, $\left<F_{Cf}\right>\approx\left<\hat{F}_{Cq}\right>$ and $\left<F_{Df}\right>\approx\left<\hat{F}_{Dq}\right>$, where node $f$ of the full network maps to node $q$ of the quotient network (for nodes of the full network that are in the same cluster, we know that the time averages are the same from the analysis in Sec.\ \ref{sec:snd}).
We also expect the time-averaged strategy $\left<\hat{S}_q\right>$ of a node of the quotient network to be the same as the time-averaged strategy $\left<S_f\right>$ of the corresponding node of the full network. Figure \ref{fig:sim_match_quotient} compares the simulation results for the Bellsouth network and its quotient reduction. As can be seen, we find very good agreement between the full and quotient network time-averaged dynamics. This indicates that, if we are interested in the time-averaged behavior, we can equivalently perform a simulation on the full network or on the reduced quotient network. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{fig05} \caption{Comparison between simulation results of the evolutionary game on the Bellsouth network in Fig.\ 1(b) and its quotient version. The diamond marks correspond to the simulation on the full network and the square marks correspond to the simulation on the quotient network. Points that are colored the same correspond to nodes in the same cluster (coloring is consistent with Fig.\ 1(a) and (b), except for the gray colored nodes, each of which is in a cluster by itself).} \label{fig:sim_match_quotient} \end{figure} In order to test the robustness of our results in a real setting, we have built an experimental network of coupled agents iteratively playing the Prisoner's Dilemma. This network is composed of 10 wirelessly coupled transceiver modules (nRF24L01 2.4 GHz RF transceivers on Arduino boards; details in Supplementary Sec.\ S7), as shown in Fig.\ \ref{fig:experiment}(a). The transceiver modules act as the nodes of the network. All the transceiver modules have unique addresses, i.e., communication is one-to-one. To construct this network, the communication links of each module are restricted as per the topology of the network, i.e., only the connected modules can communicate and share information with each other. For example, radio module 7 in Fig.\ \ref{fig:experiment}(a) can only communicate with modules 4, 6, 8, and 10. This experimental realization is subject to practical limitations that are hard to reproduce in simulation. In particular, in our experimental setting these limitations are mainly imposed by the reliability of the radio communication between the individual units (details in the Supplementary Information Sec.\ S7). We have also built an experimental version of the quotient network in Fig.\ \ref{fig:experiment}(b). The experimental time traces for the networks in Fig.\ \ref{fig:experiment}(a) and (b) when the game is played are shown in Fig.\ \ref{fig:experiment}(c) and (d), respectively. We see that: i) nodes of the full network that are in the same cluster attain the same frequency of cooperation as the game is iterated, and ii) at large times the quotient network predicts the full network experimental dynamics well. Since the quotient graphs have in general fewer nodes than the full graphs, they can be advantageous in terms of both the memory and the time needed in simulation. The size of the adjacency matrix reduces from $N \times N$ to $C \times C$ and the number of nonzero elements of the matrix decreases from $NK_{av}$ to roughly $\sqrt{NC}K_{av}$, where $K_{av}$ is the average node degree of the full network. We define the reduction coefficient $\rho_g=C/N$. A smaller value of the ratio $\rho_g$ indicates a higher reduction of the number of nodes in the quotient graph with respect to the original graph.
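As an illustration with hypothetical numbers (ours, not measurements from the paper): for a full network with $N=1000$ nodes, $C=200$ clusters and average degree $K_{av}=10$, the adjacency matrix shrinks from $N^2=10^6$ to $C^2=4\times 10^4$ entries, the number of nonzero elements drops from $NK_{av}=10^4$ to roughly $\sqrt{NC}\,K_{av}\approx 4.5\times 10^3$, and the reduction coefficient is $\rho_g=C/N=0.2$.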
Note that a critical aspect of the simulation of large real networks is the limitation of software memory allocation. We computed the CPU time required by a single iteration of the Prisoner's Dilemma dynamics for several networks and their quotients. Fig.\ \ref{fig:error_result}(a) and Table S2 in the Supplementary Information show the CPU time ratio $\rho_t = t_q/t_f$, where $t_f$ and $t_q$ are the CPU times for an iteration of the full and of the quotient network, respectively. For both the full network and the quotient network we also computed the simulation convergence time, which we measured as follows. We randomly picked an initial condition for each node of the quotient network and assigned the same initial condition to all the nodes in the corresponding cluster of the full network. At each time step, we computed the running average of the fraction of time a node spent in the cooperation state. For each node we measured its individual convergence time, i.e., the time after which the running average remained steadily within a $[- \delta, +\delta]$ interval around the final state. The convergence time of the network was taken to be the largest of the convergence times of the nodes. Fig.\ \ref{fig:error_result}(b) and Table S2 in the Supplementary Information show the convergence time ratio $\rho_c = \tau_q/\tau_f$, where $\tau_f$ and $\tau_q$ are the convergence times for the full and the quotient network, respectively. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{fig06} \caption{(a) CPU time ratio. (b) Convergence time ratio. (c) Error between the full and quotient network time-averaged dynamics.} \label{fig:error_result} \end{figure} We note that simulating the dynamics on the quotient graph, rather than on the original network, can reduce the computational effort, while producing approximately the same time-averaged dynamics. In Fig.\ \ref{fig:error_result}(c) and Table S2 in the Supplementary Information we show the accuracy parameter ${\Delta}$, defined as the normalized average difference of the time-averaged frequency of cooperation between the full and the quotient networks, \begin{equation} \Delta = \frac{1}{N}\sum_{i}^{N}\frac{|\langle S \rangle_{i}-\langle \hat{S} \rangle_{i}|}{\langle S \rangle_{i}}, \end{equation} where $\langle S \rangle_{i}$ and $\langle \hat{S} \rangle_{i}$ are the time-averaged frequencies of cooperation of node $i$ and of its corresponding node in the quotient network, respectively. Figure \ref{fig:error_result} shows that as the reduction coefficient is lowered, the CPU time ratio $\rho_t$ and the convergence time ratio $\rho_c$ decrease in a linear fashion, but the normalized error parameter $\Delta$ is roughly independent of $\rho_g$. It is important to note the scales of the $y$-axes of the plots in Figs.\ \ref{fig:error_result}(a), (b) and (c). For example, for a strong reduction coefficient $\rho_g \simeq 0.2$, corresponding to the Media Owners Group network, the CPU time is lowered by roughly $95 \%$ but the normalized error $\Delta$ only increases by approximately $2\%$. \section{Conclusions} By applying a symmetry analysis to a network, we have uncovered clusters of nodes that are structurally and functionally equivalent. This becomes apparent when monitoring the time-averaged state of the nodes in a variety of network models (previous work had only focused on the particular case of synchronization dynamics) and is confirmed in an experimental realization of an evolutionary game played on a network, in the presence of noise and communication losses.
Thus it appears that the emergence of symmetry clusters in the time-averaged dynamics of networks is a quite general feature. For the case of the evolutionary game, we obtain a reduction technique for exact, yet minimal, simulation of complex network dynamics, which produces similar dynamical results while requiring less computation time and memory. However, a generalization of this quotient network reduction to other types of dynamics is nontrivial. The reason is that each dynamical model involves a different set of \emph{rules}, which may be difficult to convert into equivalent rules for the quotient network. We hope our work will stimulate further research into reduction techniques that can be applied to a variety of dynamical models. \textbf{Supplementary Material.} The Supplementary Material for this paper includes seven sections. The evolutionary game theory model, the network traffic model, and the biological excitable system model are described in Sections S1, S2, and S3, respectively. The equations describing the quotient network dynamics are presented in Section S4. Tables for the symmetries of real networks and the computational aspects of the quotient network simulations are included in Sections S5 and S6, respectively. The experimental realization is described in Section S7. \textbf{Acknowledgement.} This work was supported by the Office of Naval Research through ONR Award No.\ N00014-16-1-2637, the National Science Foundation through NSF grant CMMI-1727948, NSF grant CRISP-1541148, and the Defense Threat Reduction Agency's Basic Research Program under grant No.\ HDTRA1-13-1-0020. The authors acknowledge help in running the experiment from Fabio Della Rossa, Robert Morris, Jonathan Ungaro, John Padilla, and Shakeeb Ahmad, all from the University of New Mexico. \bibliographystyle{unsrt}
{ "timestamp": "2018-12-27T02:11:08", "yymm": "1710", "arxiv_id": "1710.05251", "language": "en", "url": "https://arxiv.org/abs/1710.05251", "abstract": "Complex networks are the subject of fundamental interest from the scientific community at large. Several metrics have been introduced to characterize the structure of these networks, such as the degree distribution, degree correlation, path length, clustering coefficient, centrality measures etc. Another important feature is the presence of network symmetries. In particular, the effect of these symmetries has been studied in the context of network synchronization, where they have been used to predict the emergence and stability of cluster synchronous states. Here we provide theoretical, numerical, and experimental evidence that network symmetries play a role in a substantially broader class of dynamical models on networks, including epidemics, game theory, communication, and coupled excitable systems. Namely, we see that in all these models, nodes that are related by a symmetry relation show the same time-averaged dynamical properties. This discovery leads us to propose reduction techniques for exact, yet minimal, simulation of complex networks dynamics, which we show are effective in order to optimize the use of computational resources, such as computation time and memory.", "subjects": "Physics and Society (physics.soc-ph); Chaotic Dynamics (nlin.CD)", "title": "Symmetries in the time-averaged dynamics of networks: reducing unnecessary complexity through minimal network models", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812299938007, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.7079434055712438 }
https://arxiv.org/abs/1111.2964
Fano Hypersurfaces in Positive Characteristic
We prove that a general Fano hypersurface in a projective space over an algebraically closed field of arbitrary characteristic is separably rationally connected.
\section{Typical Curves and Deformation Theory} Let $n$ be an integer $\ge 3$. Let $X$ be a hypersurface of degree $n$ in $\mathbb{P}^n$. Let $C$ be a smoothly embedded rational curve of degree $e$ in $X$. We have the normal bundle short exact sequence: $$\begin{CD} 0@>>>TC@>>>TX|_C@>>>\mathcal{N}_{C|X}@>>>0\\ \end{CD}$$ By adjunction, $\det TX\cong\O_X(1)$, so the degree of $TX|_C$ equals the degree of $\O_{\P^n}(1)|_C$, namely $e$. Thus the degree of the normal bundle $\mathcal{N}_{C|X}$ is $e-2$ and the rank is $n-2$. \begin{definition}\label{typicaldef} Let $e$ be a positive integer $\le n$. A smoothly embedded rational curve $C$ of degree $e$ in $X$ is \emph{typical}, if the normal bundle is the following: \begin{equation*}\mathcal{N}_{C|X}\cong\big\{ \begin{array}{ll} \mathscr{O}^{\oplus(n-3)}\oplus\mathscr{O}(-1),&\text{if } e= 1,\\ \mathscr{O}^{\oplus(n-e)}\oplus\mathscr{O}(1)^{\oplus(e-2)}, &\text{if } e\ge 2. \end{array}\end{equation*} The curve $C$ is a \emph{typical line} if the degree of $C$ is one. \end{definition} Note that when $e=n$, typical rational curves of degree $n$ are very free. \begin{lemma}\label{typicalline} Let $L$ be a smoothly embedded line in a hypersurface $X$ of degree $n$. Then $L$ is typical if and only if both of the following conditions hold: \begin{enumerate} \item $h^1(L, \mathcal{N}_{L|X})=0$, \item $h^1(L,\mathcal{N}_{L|X}(-1))\le1$. \end{enumerate} \end{lemma} \proof We may assume that $\N_{L|X}\cong \mathcal{O}(a_1)\oplus\dots\oplus\mathcal{O}(a_{n-2})$, where $a_1\ge\cdots\ge a_{n-2}$. Condition (1) is equivalent to $a_1\ge\cdots\ge a_{n-2}\ge -1$. Together with condition (2), $a_{n-2}$ is either $0$ or $-1$. When $a_{n-2}=0$, $\N_{L|X}$ is semipositive, contradicting the fact that the degree of $\N_{L|X}$ is $-1$. When $a_{n-2}=-1$, condition (2) forces $a_{n-3}\ge 0$, so $\N_{L|X}/\O(a_{n-2})$ is semipositive. Since the degree of the normal bundle is $-1$, we get $a_1=\cdots=a_{n-3}=0$, i.e., $L$ is typical.\qed \begin{lemma}\label{typicalcurve} Let $C$ be a smoothly embedded rational curve of degree $e$ in a hypersurface $X$ of degree $n$, where $2\le e\le n$. Then $C$ is typical if and only if both of the following conditions hold: \begin{enumerate} \item $h^1(C, \mathcal{N}_{C|X}(-1))=0$, \item $h^1(C,\mathcal{N}_{C|X}(-2))\le n-e$. \end{enumerate} \end{lemma} \proof Recall that the rank of the normal bundle $\mathcal{N}_{C|X}$ is $n-2$ and the degree is $e-2$. We may assume that $\mathcal{N}_{C|X}\cong \mathcal{O}(a_1)\oplus\dots\oplus\mathcal{O}(a_{n-2})$, where $a_1\ge\cdots\ge a_{n-2}$. Condition (1) is equivalent to $a_{n-2}\ge 0$. Condition (2) implies that at most $n-e$ of the $a_i$'s are $0$. By degree count, $C$ is a typical rational curve of degree $e$. \qed Typical rational curves in the hypersurface $X$ are deformation open as very free curves in the following sense. Let $H_n$ be the Hilbert scheme of hypersurfaces of degree $n$ in $\P^n$. It is isomorphic to some projective space. Let $\X\rightarrow H_n$ be the universal hypersurface. The morphism $\X\rightarrow H_n$ is flat projective and there exists a relative very ample invertible sheaf $\O_\X(1)$ on $\X$. Let $R_{e,n}$ be the Hilbert scheme parameterizing flat projective families of one-dimensional subschemes in $\X$ with the Hilbert polynomial $P(d)=ed+1$. By \cite{Kollar} Theorem 1.4, $R_{e,n}$ is projective over $H_n$. Let $\C$ be the universal family over $R_{e,n}$, denoted by $\pi : \C\rightarrow R_{e,n}$. We have the following diagram, $$ \xymatrix{ \C \ar[d]_\pi \ar[r]^<<<<<<i & R_{e,n}\times_{H_n} \X \ar[ld] \\ R_{e,n}&} $$ where $i$ is a closed immersion.
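The numerical criterion in Lemma \ref{typicalcurve} is elementary to check by hand, but it can be convenient to evaluate it mechanically when experimenting with splitting types. The following short Python sketch is ours and is not part of the proofs; it only encodes the standard fact $h^1(\P^1,\O(a))=\max(-a-1,0)$ and the two conditions of Lemma \ref{typicalcurve} (so it applies to the case $e\ge 2$).
\begin{verbatim}
def h1(a):
    """h^1 of the line bundle O(a) on P^1."""
    return max(-a - 1, 0)

def is_typical(split, n, e):
    """Check the criterion for N = O(a_1) + ... + O(a_{n-2}), e >= 2."""
    assert len(split) == n - 2 and sum(split) == e - 2
    h1_minus1 = sum(h1(a - 1) for a in split)   # h^1(C, N(-1))
    h1_minus2 = sum(h1(a - 2) for a in split)   # h^1(C, N(-2))
    return h1_minus1 == 0 and h1_minus2 <= n - e

n, e = 7, 4
print(is_typical([1, 1, 0, 0, 0], n, e))   # True: the typical splitting type
print(is_typical([2, 1, -1, 0, 0], n, e))  # False: h^1(N(-1)) > 0
\end{verbatim}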
\begin{prop}\label{typicaldefopen} Let $e$ be a positive integer $\le n$. There exists an open subset in $R_{e,n}$ parameterizing typical curves of degree $e$ in hypersurfaces of degree $n$. \end{prop} \proof Every typical curve of degree $e$ in a hypersurface of degree $n$ gives a point in $R_{e,n}$. Any small deformation of a smoothly embedded rational curve is still a smoothly embedded rational curve. Thus the proposition follows from Lemma \ref{typicalline}, Lemma \ref{typicalcurve} and the upper semicontinuity theorem \cite{Hartshorne} III.12.8.\qed \begin{lemma}\label{smoothdefopen} There exists an open subset in $R_{e,n}$ such that for every closed point $(C,X)$ in the open subset, $C$ lies in the smooth locus of $X$. \end{lemma} \proof Let $S\subset \X$ be the relative singular locus in the universal hypersurface; $S$ is a closed subset of $\X$. Since $\pi$ is proper, the locus $\pi(i^{-1}(R_{e,n}\times_{H_n} S))$ is a closed subset of $R_{e,n}$ parametrizing the pairs $(C',X')$ such that $C'$ intersects the singular locus of $X'$. Thus the complement $U$ is open in $R_{e,n}$ and satisfies the desired property. \qed Let $L$ be a typical line in a hypersurface $X$ of degree $n$ in $\P^n$. By definition, $\mathcal{N}_{L|X}\cong\mathscr{O}^{\oplus(n-3)}\oplus\mathscr{O}(-1)$. We have a canonically defined \emph{trivial subbundle} $\O^{\oplus(n-3)}$ of $\N_{L|X}$. \begin{prop}\label{typicalconic} Let $X$ be a hypersurface of degree $n$ in $\P^n$. Let $L$ and $M$ be two typical lines in $X$ intersecting transversally at only one point $p$. Assume that the following conditions hold: \begin{enumerate} \item the direction $T_p L$ is not in the trivial subbundle of $\mathcal{N}_{M|X}$; \item the direction $T_p M$ is not in the trivial subbundle of $\N_{L|X}$. \end{enumerate} Then the pair $(L\cup M, X)\in R_{2,n}$ can be smoothed to a pair $(C, X^\prime)$ where $C$ is a typical conic in $X^\prime$. Furthermore, there exists an open neighborhood of $(L\cup M,X)$ in which any smoothing of $(L\cup M,X)$ is a typical conic. \end{prop} \proof Let $D$ be the union of the lines $L$ and $M$. Since $D$ is a local complete intersection and lies in the smooth locus of $X$, the normal bundle $\mathcal{N}_{D|X}$ is locally free. We have the following short exact sequence. $$\begin{CD} 0@>>>\mathcal{N}_{L|X}@>>>\mathcal{N}_{D|X}|_L@>>>T_p L\otimes T_p M@>>>0 \end{CD}$$ By \cite{GHS} Lemma 2.6, the locally free sheaf $\mathcal{N}_{D|X}|_L$ is the sheaf of rational sections of $\mathcal{N}_{L|X}$ which have at most one pole at the direction of $T_p M$. Since $\mathcal{N}_{L|X}\cong\mathscr{O}^{\oplus(n-3)}\oplus\mathscr{O}(-1)$, condition (2) implies that $\N_{D|X}|_L$ is isomorphic to $\O^{\oplus(n-2)}$. By the same argument, condition (1) implies that the sheaf $\mathcal{N}_{D|X}|_M$ is isomorphic to $\mathscr{O}^{\oplus(n-2)}$. Now we have the following short exact sequence. $$\begin{CD} 0@>>>\mathcal{N}_{D|X}|_M(-p)@>>>\mathcal{N}_{D|X}@>>>\mathcal{N}_{D|X}|_L@>>>0\\ @.@|@.@|@.\\ @. \mathscr{O}(-1)^{\oplus(n-2)} @. @.\mathscr{O}^{\oplus(n-2)} @. \end{CD}$$ First we claim that $D$ can be smoothed. Since $h^1(D,\N_{D|X})=0$, the pair $(D,X)$ is unobstructed in $R_{2,n}$, cf. \cite{Kollar} I.2. By \cite{Starr0} Lemma 3.17, it suffices to show that the map $$H^0(D,\mathcal{N}_{D|X})\rightarrow H^0(L,\N_{D|X}|_L)\rightarrow T_p L\otimes T_p M$$ is surjective. Since $H^1(M,\mathcal{N}_{D|X}|_M(-p))=0$, the first map is surjective. Since $H^1(L,\mathcal{N}_{D|X}|_L)=0$, the second map is surjective.
Let $q$, $r$ be two distinct points on $L-\{p\}$. Taking the long exact sequence associated to the above short exact sequence, we get $h^1(D,\mathcal{N}_{D|X}(-q))=0$ and $h^1(D,\mathcal{N}_{D|X}(-q-r))=n-2$. Now for any smoothing $(D_t,X_t)$ of $(D,X)$ over $T$, we can specify two distinct points $q_t$ and $r_t$ on $D_t$ which specialize to $q$ and $r$ on $D$. By Lemma \ref{smoothdefopen}, after shrinking $T$, the conic $D_t$ lies in the smooth locus of $X_t$. Thus $D_t$ is smoothly embedded. By the upper semicontinuity theorem and Lemma \ref{typicalcurve}, $D_t$ is a typical conic in $X_t$. \qed \begin{definition}\label{tc} Let $X$ be a hypersurface of degree $n$ in $\P^n$. A \emph{typical comb} with $m$ teeth in $X$ is a reduced curve in $X$ with $m+1$ irreducible components $C, L_1,\cdots,L_m$ satisfying the following conditions: \begin{enumerate} \item $C$ is a typical conic in $X$; \item $L_1,\cdots,L_m$ are disjoint typical lines in $X$ and each $L_i$ intersects $C$ transversally at $p_i$. \end{enumerate} The conic $C$ is called the \emph{handle} of the comb and the $L_i$'s are called the \emph{teeth}. \end{definition} \begin{prop}\label{typicalcomb} Let $X$ be a hypersurface of degree $n$ in $\P^n$. Let $D=C\cup L_1\cup\cdots\cup L_{n-2}$ be a typical comb with $n-2$ teeth in $X$. Let $p_i$ be the intersection point $L_i\cap C$. Assume that the following conditions hold: \begin{enumerate} \item the direction $T_{p_i} C$ is not in the trivial subbundle of $\mathcal{N}_{L_i|X}$; \item the directions $T_{p_i} L_i$ are general in $\N_{C|X}$ such that the sheaf $\N_{D|X}|_C$ is isomorphic to $\O(1)^{\oplus(n-2)}$. \end{enumerate} Then the pair $(D, X)\in R_{n,n}$ can be smoothed to a pair $(C', X^\prime)$ where $C'$ is a very free curve in $X^\prime$. \end{prop} \proof The proof is very similar to the proof of Proposition \ref{typicalconic}; here we only sketch it. Condition (1) implies that the sheaf $\N_{D|X}|_{L_i}$ is isomorphic to $\O^{\oplus(n-2)}$ for each $i$. We have the following short exact sequence. $$\begin{CD} 0@>>>\bigoplus_i\mathcal{N}_{D|X}|_{L_i}(-p_i)@>>>\mathcal{N}_{D|X}@>>>\mathcal{N}_{D|X}|_C@>>>0\\ @.@|@.@|@.\\ @. \bigoplus_i\mathscr{O}(-1)^{\oplus(n-2)} @. @.\mathscr{O}(1)^{\oplus(n-2)} @. \end{CD}$$ Since $H^1(D,\N_{D|X})=0$, $D$ is unobstructed. By diagram chasing, the map $H^0(D,\N_{D|X})\rightarrow \bigoplus_i T_{p_i}C\otimes T_{p_i} L_i$ is surjective. Thus we can smooth the typical comb $D$. Now we may choose a smoothing $(D_t,X_t)$ and specify two distinct points $(q_t, r_t)$ which specialize to two distinct points $(q,r)$ on $C-\{p_1,\cdots,p_{n-2}\}$. By the long exact sequence, we know that $h^1(D, \N_{D|X}(-q-r))=0$. By Lemma \ref{smoothdefopen} and the upper semicontinuity theorem, a general smoothing of the pair $(D,X)$ gives a very free curve in a general hypersurface.\qed \section{An Example} In this section, we construct a hypersurface of degree $n$ in $\P^n$ which contains a special configuration of lines. Later we will use this example to produce a very free curve in a general hypersurface. Let $n$ be an integer $\ge 4$. Let $\lbrack x_0:\dots:x_n\rbrack$ be the homogeneous coordinates for $\mathbb{P}^n$. Let $X$ be the hypersurface of degree $n$ in the projective space $\P^n$ defined by the vanishing of the following polynomial $F$.
\begin{eqnarray*} \begin{array}{lllll} F \,=\, x_0^{n-1}x_n &+x_1^{n-3}x_n^2x_0 & +(x_1^{n-1}+x_0x_1^{n-2}+\dots+x_0^{n-3}x_1^2)x_2 &+(x_2^{n-1}+x_0x_2^{n-2}+\dots+x_0^{n-3}x_2^2)x_3 &+\dots\\ & +x_1^{n-4}x_n^3x_3 &+(x_0x_1^{n-2}+\dots+x_0^{n-3}x_1^2)x_3 &+(x_0x_2^{n-2}+\dots+x_0^{n-3}x_2^2)x_4&+\dots\\ &\quad\vdots &\quad\vdots &\quad\vdots &\\ &+x_1x_n^{n-2}x_{n-2} &+(x_0^{n-4}x_1^{3}+x_0^{n-3}x_1^2)x_{n-2} &+(x_0^{n-4}x_2^{3}+x_0^{n-3}x_2^2)x_{n-1}&+\dots\\ &+x_n^{n-1}x_{n-1}&+x_0^{n-3}x_1^2x_{n-1} &+x_0^{n-3}x_2^2x_{1}&+\dots \end{array}\end{eqnarray*} \begin{notation}\label{spiky}Let $p$ be the point $\lbrack 1:0:\dots:0\rbrack$ and $q$ be the point $\lbrack 0:1:0:\dots:0\rbrack$. Let $e_j$ denote the $j$-th coordinate point of $\P^n$. Let $L_i$ be the line spanned by $\{e_0,e_i\}$ for $i=1,\dots, n-1$ and $L_n$ be the line spanned by $\{e_1,e_n\}$. It is easy to check that they all lie in the hypersurface $X$. Let $C$ be the union of $L_1,\cdots, L_n$. The following picture shows the configuration of the points and the lines in the projective space. \end{notation} \begin{center} \includegraphics[scale=0.5]{spiky.jpg} \end{center} \begin{lemma} \begin{enumerate} \item Both $p$ and $q$ lie in the smooth locus of $X$. \item The tangent space $T_p X$ is the hyperplane $\{x_n=0\}$, which is spanned by the lines $L_1,\dots,L_{n-1}$. \item The tangent space $T_q X$ is the hyperplane $\{x_2=0\}$. \end{enumerate} \end{lemma} \proof By taking the partial derivatives of $F$, we have $\frac{\partial F}{\partial x_i}(p)=0$ for $i=0,\cdots,n-1$ and $\frac{\partial F}{\partial x_n}(p)=1$. Similarly, we have $\frac{\partial F}{\partial x_i}(q)=0$ for $i\neq 2$ and $\frac{\partial F}{\partial x_2}(q)=1$.\qed \begin{lemma} The lines $L_1,\cdots, L_{n-1}$ are in the smooth locus of $X$. \end{lemma} \proof We prove the case of the line $L_1$; the remaining cases can be checked directly by the same method. Denote $L_1=\{[x_0:x_1:0:\dots:0]\in\mathbb{P}^n\}$. By restricting the partial derivatives of the defining equation of the hypersurface $X$ to $L_1$, we get the following. \begin{equation} \begin{array}{l} \frac{\partial F}{\partial x_2}|_{L_1}=x_1^{n-1}+x_0x_1^{n-2}+\dots+x_0^{n-3}x_1^2 \\ \frac{\partial F}{\partial x_3}|_{L_1}=x_0x_1^{n-2}+\dots+x_0^{n-3}x_1^2 \\ \vdots\\ \frac{\partial F}{\partial x_{n-2}}|_{L_1}=x_0^{n-4}x_1^{3}+x_0^{n-3}x_1^2 \\ \frac{\partial F}{\partial x_{n-1}}|_{L_1}=x_0^{n-3}x_1^2 \\\frac{\partial F}{\partial x_n}|_{L_1}=x_0^{n-1} \end{array}\label{L1}\end{equation} For points on $L_1$ with $x_0\neq 0$, we have $\frac{\partial F}{\partial x_n}|_{L_1}\neq 0$. At the point $q$, $\frac{\partial F}{\partial x_2}|_{L_1}\neq 0$. Hence every point on the line $L_1$ is a smooth point of $X$. \qed \begin{lemma} The line $L_n$ is in the smooth locus of $X$. \end{lemma} \proof By restricting the partial derivatives of the defining equation of $X$ to $L_n$, we get the following. \begin{equation} \begin{array}{l} \frac{\partial F}{\partial x_0}|_{L_n}=x_1^{n-3}x_n^2 \\ \frac{\partial F}{\partial x_3}|_{L_n}=x_1^{n-4}x_n^3 \\ \vdots\\ \frac{\partial F}{\partial x_{n-2}}|_{L_n}=x_1x_n^{n-2} \\ \frac{\partial F}{\partial x_{n-1}}|_{L_n}=x_n^{n-1} \\\frac{\partial F}{\partial x_2}|_{L_n}=x_1^{n-1} \end{array}\label{Ln}\end{equation} For points on $L_n$ with $x_1\neq 0$, we have $\frac{\partial F}{\partial x_2}|_{L_n}\neq 0$. For points on $L_n$ with $x_n\neq 0$, we have $\frac{\partial F}{\partial x_{n-1}}|_{L_n}\neq 0$. Hence every point on the line $L_n$ is a smooth point of $X$.
\qed \begin{prop}\label{x0p1}With the notation as above, $X$ satisfies the following properties. \begin{enumerate} \item The lines $L_1,\dots,L_n$ are typical in $X$. \item For $i=1,\cdots, n-1$, the trivial subbundle of the normal bundle $\mathcal{N}_{L_i|X}$ at $p$ is generated by $\partial_{\overline{i+1}}-\partial_{\overline{i+2}},\dots,\partial_{\overline{i+n-3}}-\partial_{\overline{i+n-2}}$, where $\overline{j}$ takes values in $1,\dots, n-1$ mod $n-1$. \item The trivial subbundle of the normal bundle $\mathcal{N}_{L_1|X}$ at $q$ is generated by $\partial_3,\dots, \partial_{n-1}$. \item The trivial subbundle of the normal bundle $\mathcal{N}_{L_n|X}$ at $q$ is generated by $\partial_3,\dots,\partial_{n-1}$. \end{enumerate} \end{prop} \proof Let $L$ be a line in $X$. We have the following short exact sequences. $$\begin{CD} 0 @>>> \mathcal{N}_{L|X}(-1) @>>> \mathcal{N}_{L|\mathbb{P}^n}(-1) @>>> \mathcal{N}_{X|\mathbb{P}^n}|_{L}(-1) @>>> 0\\ @. @| @| @| @. \\ 0 @>>> \mathcal{N}_{L|X}(-1) @>>> \mathscr{O}_{L}^{\oplus(n-1)} @>>> \mathscr{O}_{L}(n-1)@>>> 0 \end{CD}$$ The associated long exact sequence is the following. $$\begin{CD} 0\rightarrow H^0(L, \mathcal{N}_{L|X}(-1)) \rightarrow k^{n-1}@>\alpha>> H^0(L, \mathscr{O}(n-1))\rightarrow H^1(L, \mathcal{N}_{L|X}(-1))\rightarrow0 \end{CD}$$ where the map $\alpha$ sends the natural basis of $k^{n-1}$ to the derivatives of $F$ with respect to the normal directions of $L$ in $\P^n$. By Lemma \ref{typicalline}, $L$ is typical if and only if the image of $\alpha$ is of codimension one in $H^0(L,\mathscr{O}(n-1))$. When $L=L_1$, by (\ref{L1}), the restrictions $\frac{\partial F}{\partial x_2}|_{L_1},\dots, \frac{\partial F}{\partial x_n}|_{L_1}$ span a codimension-one subspace of $H^0(L_1,\mathscr{O}_{L_1}(n-1))$. Thus we get that $H^1(L_1,\mathcal{N}_{L_1|X}(-1) )$ is one dimensional, i.e., $L_1$ is typical in $X$. By the short exact sequence above, $\mathcal{N}_{L_1|X}(-1)$ is the subbundle of the trivial bundle $\mathscr{O}_{L_1}^{\oplus(n-1)}$ which maps to $0$ in $\mathscr{O}_{L_1}(n-1)$. Let $\partial_2,\dots,\partial_n$ be the generators of $\mathscr{O}_{L_1}^{\oplus(n-1)}$. We get that $\mathcal{N}_{L_1|X}(-1)$ is generated by $x_0(\partial_2-\partial_3)-x_1(\partial_3-\partial_4), \dots, x_0(\partial_{n-2}-\partial_{n-1})-x_1\partial_{n-1}, x_0^2\partial_{n-1}-x_1^2\partial_n$ as an $\O_{L_1}$-module. If we restrict the bundle at $p$ and $q$, we get (2) and (3) for $L_1$. When $L=L_2,\cdots, L_{n-1}$, the claims follow in a similar way. When $L=L_n$, (4) follows from the same computation as above by applying (\ref{Ln}).\qed With the description of the trivial subbundles of the normal bundles of lines in $X$ as above, we get the following corollaries. \begin{cor}\label{cond1} We have the following statements. \begin{enumerate} \item The lines $L_1$ and $L_n$ are typical in $X$. \item The direction $T_q L_1$ is not in the trivial subbundle of $\N_{L_n|X}$. \item The direction $T_q L_n$ is not in the trivial subbundle of $\N_{L_1|X}$.\qed \end{enumerate} \end{cor} \begin{cor} \label{cond2}We have the following statements. \begin{enumerate} \item The lines $L_2,\cdots,L_{n-1}$ are typical in $X$. \item The direction $T_p L_1$ is not in the trivial subbundle of $\N_{L_i|X}$ for $2\le i\le n-1$. \item The directions $T_p L_2,\cdots,T_p L_{n-1}$ span the normal bundle $\N_{L_1|X}$ at $p$.\qed \end{enumerate} \end{cor} \section{Introduction} In this paper, we work with varieties over an algebraically closed field $k$ of arbitrary characteristic.
\begin{definition}[\cite{Kollar} IV.3] A variety $X$ defined over $k$ is \emph{rationally connected} if there is a family of irreducible proper rational curves $g: U\rightarrow Y$ and an evaluation morphism $u:U\rightarrow X$ such that the morphism $u^{(2)}:U\times_Y U\rightarrow X\times X$ is dominant. A variety $X$ is \emph{separably rationally connected} if there exists a proper rational curve $f:\P^1\rightarrow X $ such that the image lies in the smooth locus of $X$ and the pullback of the tangent sheaf $f^*TX$ is ample. Such rational curves are called \emph{very free} curves. \end{definition} We refer to Koll\'ar's book \cite{Kollar} or the work of Koll\'ar-Miyaoka-Mori \cite{KMM} for the background. If $X$ is separably rationally connected, then $X$ is rationally connected. The converse is true when the ground field is of characteristic zero, by using the generic smoothness of the dominant map $u^{(2)}$. In positive characteristic, the converse statement is open. In characteristic zero, a very important class of rationally connected varieties is formed by the Fano varieties, i.e., smooth varieties with ample anticanonical bundles. In positive characteristic, we only know that they are rationally chain connected. \begin{question}[Koll\'ar] In arbitrary characteristic, are Fano varieties separably rationally connected? \end{question} The question is open even for Fano hypersurfaces in projective spaces. In this paper, we prove the following theorems. \begin{theorem}\label{mainn} In arbitrary characteristic, a general Fano hypersurface of degree $n$ in $\P^n_k$ contains a minimal very free rational curve of degree $n$, i.e., the pullback of the tangent bundle has the splitting type $\O(2)\oplus\O(1)^{\oplus(n-2)}$. \end{theorem} \begin{theorem}\label{main} In arbitrary characteristic, a general Fano hypersurface in $\mathbb{P}^n_k$ is separably rationally connected. \end{theorem} de Jong-Starr \cite{dJS1} proved that every family of separably rationally connected varieties over a curve admits a rational section. Thus, using Theorem \ref{main}, we give another proof of Tsen's theorem. \begin{cor} Every family of Fano hypersurfaces in $\P^n$ over a curve admits a rational section.\qed \end{cor} \begin{ack} The author would like to thank his advisor Professor Jason Starr for helpful discussions. \end{ack} \section{Proof of the Main Theorem} \begin{lemma}[\cite{Harris} Ex 13.8]\label{p=0} Let $C$ be the union of the $n$ lines $L_1,\cdots,L_n$ in $\P^n$ as in Notation \ref{spiky}. Then the Hilbert polynomial of $C$ is $P(d)=nd+1$. In particular, the arithmetic genus of $C$ is $0$. \end{lemma} \proof This can be computed directly. For any $d>0$ and $i=1,\cdots,n-1$, the restrictions to $L_i$ of the homogeneous polynomials of degree $d$ span the space generated by $x_0^d, x_0^{d-1}x_i,\cdots,x_i^d$; similarly, the restrictions to $L_n$ span the space generated by $x_1^d, x_1^{d-1}x_n,\cdots,x_n^d$. These glue to sections of $\O_C(d)$ subject to matching conditions at the points $p$ and $q$, and a direct count gives $n(d+1)-(n-1)=nd+1$ independent sections. Thus when $d\gg 0$, $P(d)=h^0(C,\O_C(d))=nd+1$.\qed The curve $C$ is an example of a curve with a rational $n$-fold point, cf. \cite{dawei} 3.7. The following lemma is an analogue of \cite{dawei} Lemma 3.8. \begin{lemma}\label{h0C} With the same notation as in Lemma \ref{p=0}, the following properties hold for $C$ for every positive integer $d$: \begin{enumerate} \item $h^0(C, \O_C(d))=nd+1$ and $h^1(C,\O_C(d))=0$. \item $h^1(\P^n,\I_C(d))=0$. \item $h^0(\P^n,\I_C(d))=h^0(\P^n,\O(d))-nd-1$.
\end{enumerate} \end{lemma} \proof By Riemann-Roch and Lemma \ref{p=0}, we have $$h^0(C,\O_C(d))\ge \chi(\O_C(d))=nd+1-p_a(C)=nd+1.$$ On the other hand, every global section of $\O_C(d)$ is obtained by gluing global sections on each component, which imposes at least $n-1$ linear conditions. Since we have $h^0(L_i,\O(d))=d+1$ for every $i$, $$h^0(\O_C(d))\le n(d+1)-(n-1)=nd+1.$$ Therefore, $h^0(C, \O_C(d))=nd+1$ for every positive integer $d$. Since the image of $r:H^0(\P^n,\O(d))\rightarrow H^0(C,\O_C(d))$ has dimension $nd+1$ as in Lemma \ref{p=0}, the map $r$ is surjective. The lemma follows by considering the long exact sequence associated to the ideal sheaf of $C$. \qed \begin{construction} Let $C$ be the union of the $n$ lines $L_1,\cdots,L_n$ in $\P^n$ as in Notation \ref{spiky}. If we consider $L_1\cup L_n$ as a conic in $\P^n$, there exists a smooth affine pointed curve $(T,0)$ and a smoothing $D'\rightarrow (T,0)$ satisfying the following conditions: \begin{enumerate} \item The special fiber $D'_0$ is $L_1\cup L_n$; \item For any $t\in T-\{0\}$, $D'_t$ is a smooth conic contained in the plane spanned by $L_1$ and $L_n$. \end{enumerate} We can choose $n-2$ sections $s_i:(T,0)\rightarrow D'$ for $i=1,\cdots, n-2$ such that $s_i(0)=p$ for all $i$ and such that for $t\in T-\{0\} $ the points $s_i(t)$ are all distinct on $D'_t$. For any $s_i(t)$, there exists a unique line $L_{i+1}(t)$ through $s_i(t)$ parallel to $L_{i+1}$. After gluing the families of lines $L_{i+1}(t)$ to $D'_t$ at $s_i(t)$ for all $i$, we get a family of reducible curves $\pi:D\rightarrow (T,0)$ satisfying the following conditions: \begin{enumerate} \item The special fiber $D_0$ is the curve $C$ constructed in Notation \ref{spiky}. \item For any $t\in T-\{0\}$, $D_t$ is a comb with handle $D'_t$ and with lines as teeth. \end{enumerate} The family $\pi:D\rightarrow (T,0)$ is flat by Lemma \ref{p=0}. We have the following diagram. $$\xymatrix{D_0=C \ar[r] \ar[d] & D \ar[d]_\pi \ar[r]^i &\P^n_T \ar@{->}[ld]^\pi \\0 \ar[r] &(T,0)}$$ \end{construction} \begin{lemma}\label{flatlift} Let $\I_D$ be the ideal sheaf of $D$ in $\P^n_T$. The sheaf $\pi_*\I_D(d)$ is locally free on $T$ for any $d>0$. \end{lemma} \proof By the cohomology and base change theorem \cite{Hartshorne} III.12.9, it suffices to show that $h^0(\P^n_t, \I_{D_t}(d))$ is constant in $t$. By upper semicontinuity and Lemma \ref{h0C}, we have $h^0(\P^n_t, \I_{D_t}(d))\le h^0(\P^n,\O(d))-nd-1$. On the other hand, for any $t\in T-\{0\}$, the curve $D_t$ is a local complete intersection and $\O_{D_t}(d)$ is a positive bundle on $D_t$. Thus we have $h^1(D_t,\O_{D_t}(d))=0$ and $h^0(D_t,\O_{D_t}(d))=nd+1$. Consider the following exact sequence. $$\begin{CD} 0@>>>H^0(\P^n_t,\I_{D_t}(d))@>>>H^0(\P^n_t, \O(d))@>>>H^0(D_t,\O_{D_t}(d)) \end{CD}$$ We get $h^0(\P^n_t,\I_{D_t}(d))\ge h^0(\P^n,\O(d))-nd-1$.\qed \proof [Proof of Theorem \ref{mainn}] The theorem is trivial for $n=2,3$. We can assume that $n\ge 4$. By \cite{Kollar} IV.3.11 and Lemma \ref{smoothdefopen}, it suffices to produce one very free curve in a hypersurface of degree $n$. By Lemma \ref{flatlift}, after shrinking $T$, the hypersurfaces of degree $n$ containing $D_t$ in $\P^n_t$ form a trivial projective bundle over $(T,0)$. Thus the family $\pi:D\rightarrow (T,0)$ admits a lifting to a flat family of pairs $\pi:(D,\X_T)\rightarrow (T,0)$ in $R_{n,n}$ such that the special fiber $(D_0,\X_0)$ is the pair $(C,X)$ constructed in Section 3.
$$\xymatrix{ D \ar[d]_\pi \ar[r]^i &\X_T \ar[r] \ar[ld] &\P^n_T \ar[lld]^\pi \\(T,0)}$$ All the following steps of the proof require shrinking $T$ if necessary. By Proposition \ref{typicalconic} and Corollary \ref{cond1}, we may assume that the handle $D'_t$ is a typical conic in $\X_t$ for every $t\in T-\{0\}$. By Proposition \ref{typicaldefopen} and Corollary \ref{cond2} (1), all the teeth of the comb $D_t$ are typical. Thus for every $t\in T-\{0\}$, we get a typical comb $D_t$ as in Definition \ref{tc}. Now the theorem follows if we verify the two conditions in Proposition \ref{typicalcomb}. Since these are open conditions, it suffices to check them on the special fiber $(C,X)$, which is done in Corollary \ref{cond2}.\qed \proof [Proof of Theorem \ref{main}] By \cite{Kollar} IV.3.11 and Lemma \ref{smoothdefopen}, it suffices to produce one very free curve in a hypersurface of degree $d$. Let $Y$ be a general smooth Fano hypersurface of degree $d$ in $\P^n$. When $d=n$, this is proved in Theorem \ref{mainn}. When $d<n$, we may choose a general linear subspace $L$ of dimension $d$ such that $Y\cap L$ is smooth and contains a very free curve $f:\P^1\rightarrow Y\cap L$ by Theorem \ref{mainn}. By the normal bundle exact sequence $$\begin{CD} 0@>>>T(Y\cap L)@>>>TY|_{Y\cap L}@>>>\N_{Y\cap L|Y}@>>>0, \end{CD}$$ the sheaf $f^*T(Y\cap L)$ is positive and the sheaf $\N_{Y\cap L|Y}$ is isomorphic to the restriction of $\N_{L|\P^n}$, which is $\O(1)^{\oplus(n-d)}$. Therefore the pullback bundle $f^*TY$ is positive. Thus $f:\P^1\rightarrow Y\cap L\rightarrow Y$ is a very free curve in $Y$.\qed
{ "timestamp": "2011-11-15T02:01:44", "yymm": "1111", "arxiv_id": "1111.2964", "language": "en", "url": "https://arxiv.org/abs/1111.2964", "abstract": "We prove that a general Fano hypersurface in a projective space over an algebraically closed field of arbitrary characteristic is separably rationally connected.", "subjects": "Algebraic Geometry (math.AG)", "title": "Fano Hypersurfaces in Positive Characteristic", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812299938006, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.7079434055712437 }
https://arxiv.org/abs/1403.4059
Automorphisms of normal quasi-circular domains
It was shown by Kaup that every origin-preserving automorphism of quasi-circular domains is a polynomial mapping. In this paper, we study how the weight of quasi-circular domains and the degree of such automorphisms are related. By using the Bergman mapping, we prove that every origin-preserving automorphism of normal quasi-circular domains in $\mathbb C^2$ is linear.
\section{Introduction} A domain $D$ is called a {\it quasi-circular} domain (or {\it $(m_1,\ldots, m_n)$-circular}) if it is invariant under the following mapping: \[ D \ni (z_1,\ldots,z_n) \mapsto (e^{i m_1\theta}z_1,\ldots, e^{i m_n\theta} z_n) \in D, \mbox{ for some $m_1,\ldots,m_n\in \mathbb Z_{+}$}. \] The $n$-tuple $(m_1,\ldots, m_n)$ is called the {\it weight} of the quasi-circular domain $D$. In particular, if $m_1=\cdots=m_n$, it is called circular. For instance, the following are examples of quasi-circular domains: \begin{align*} {\mathbb G}_2&=\{(z_1+z_2,z_1z_2)\colon |z_1|,|z_2|<1\},\\ \mathbb E &=\{z\in\mathbb C^3; |z_1-\overline{z_2}z_3|+ |z_2-\overline{z_1}z_3| + |z_3|^2<1\}. \end{align*} These quasi-circular domains have been studied from various aspects (see \cite{Agler2001,Agler2004,Edi,Jarnicki,Kos,Pflug,Young} and the references therein). The symmetrized bidisk $\mathbb G_2$ is known as an example of a bounded pseudoconvex domain on which the Carath\'{e}odory and Kobayashi distances coincide, but which cannot be exhausted by domains biholomorphic to convex domains \cite{Costara2004,Edi2004}. The domain $\mathbb G_2$ also appeared in \cite{Costara2005} in connection with the $2 \times 2$ spectral Nevanlinna-Pick problem. Thus quasi-circular domains have received much interest in recent decades. A famous theorem due to Cartan asserts that every origin-preserving automorphism of a circular domain is linear. By using Cartan's theorem, the automorphism groups of various circular domains have been extensively investigated. It is also known that every origin-preserving automorphism of a quasi-circular domain is a polynomial mapping \cite[Lemma 1]{Kaup}. For quasi-circular domains, the following problem arises naturally: \begin{problem}\label{problem} Let $D$ be a quasi-circular domain and suppose that $f$ is an origin-preserving automorphism of $D$. Describe how the weight $(m_1,\ldots, m_n)$ of $D$ and the degree of the polynomial $f$ are related. \end{problem} In this paper, we focus our attention on one of the most interesting cases, \mbox{$\deg f=1$}. Our main result tells us that every origin-preserving automorphism of a normal quasi-circular domain in $\mathbb C^2$ is a linear mapping: \begin{theorem} Let $D, D'\subset \mathbb C^2$ be normal quasi-circular domains. Then a biholomorphism $f:D\rightarrow D'$ which preserves the origin is linear. In particular, an automorphism $f \in \mbox{Aut}(D)$ which preserves the origin is linear. \end{theorem} This theorem gives an answer to Problem \ref{problem} for $\deg f =1$ and $n=2$. For circular domains, there is a commutation relation between $\rho_\theta(z)=e^{i \theta} z$ and every origin-preserving automorphism $\varphi$, namely $\varphi \circ \rho_\theta = \rho_\theta \circ \varphi$. This relation is essential for the proof of Cartan's theorem. However, we do not pursue this idea here (see Remark \ref{rem}); instead, we pursue an idea given by Ishi and Kai \cite{Ishi}. The proof of our main theorem uses the Bergman mapping (also known as the Bergman coordinate). This mapping was introduced by Stefan Bergman. After its discovery, the Bergman mapping appeared in many studies (cf. \cite{G-K0,G-K,Ishi,Lu,Song,Tsuboi}) and played a substantial role in them. We will see that the Bergman mapping also plays an essential role in our study. The organization of this paper is as follows. Subsequent to this introduction, Section \ref{sec2} provides some basic properties of minimal and representative domains.
In Section \ref{sec3}, we introduce the normal quasi-circular domains and the Bergman mapping. We will see that the Bergman mapping is a linear mapping if a domain is normal quasi-circular. This fact is used to prove our main result. \section{Preliminaries}\label{sec2} In this section, we introduce the minimal and representative domains and collect some basic facts. Let us begin this section with some basic properties of the Bergman kernel. \subsection{Basics of Bergman kernels} Let $D$ be a domain in $\mathbb C^n$. Define the Bergman space $A^2(D)$ by \begin{align*} A^2(D)=\left\lbrace f\in \mathcal O(D); \int_D |f(z)|^2 dV(z) < \infty \right\rbrace. \end{align*} The Bergman space is a Hilbert space with the inner product $$ \langle f,g \rangle = \int_{D} f(z)\overline{g(z)} dV(z) .$$ The reproducing kernel of the Bergman space is called the Bergman kernel and we denote it by $K_D$. Namely, the Bergman kernel is the unique function satisfying the following property: \begin{align} f(z)=\int_D f(w) K_D(z,w)dV(w), \quad \mbox{ for all } f\in A^2(D). \end{align} The Bergman kernel is also obtained from a complete orthonormal basis $\{ e_k \}_{k\in \mathbb N}$ of the Bergman space: \begin{align}\label{CONS} K_D(z,w)=\sum_{k\in \mathbb N}e_k(z)\overline{e_k(w)}. \end{align} An important property of the Bergman kernel is its relative invariance under biholomorphisms. Let $\varphi:D\rightarrow D'$ be a biholomorphism. Then the Bergman kernels $K_D$ and $K_{D'}$ satisfy the following relation: \begin{align}\label{trans} K_D(z,w)=\overline{\det J(\varphi, w)} K_{D'}(\varphi(z) ,\varphi (w)) \det J(\varphi, z). \end{align} Here $J(\varphi,z)$ is the Jacobian matrix of $\varphi = {}^t (\varphi_1, \ldots , \varphi_n)$ at $z$: $$J(\varphi,z):= \begin{pmatrix} \dfrac{\partial \varphi_1}{\partial z_1}(z)& \cdots & \dfrac{\partial \varphi_1}{\partial z_n}(z) \\ \vdots & \ddots & \vdots\\ \dfrac{\partial \varphi_n}{\partial z_1}(z) & \cdots &\dfrac{\partial \varphi_n }{\partial z_n}(z) \end{pmatrix}.$$ The unit disk is an example whose Bergman kernel has an explicit form. \begin{example}\label{disk} For the unit disk $\mathbb D=\{|z|<1\} \subset \mathbb C$, the set $\{(\frac{k+1}{\pi} )^{\frac{1}{2}}z^k \}_{k\in\mathbb Z_{\geq 0}}$ forms a complete orthonormal basis of the Bergman space $A^2(\mathbb D)$. Using \eqref{CONS}, we compute the Bergman kernel: \begin{align*} K_{\mathbb D}(z,w)&=\dfrac{1}{\pi}\sum_{k=0}^\infty (k+1)(z\overline{w})^k =\dfrac{1}{\pi(1-z\overline{w})^2 }. \end{align*} \end{example} Further information about the Bergman kernel can be found in \cite{Greene,Krantz}. \subsection{Minimal domains and Representative domains} Recall that a bounded domain $D$ in $\mathbb C^n$ is called a minimal domain with center at $z_0\in D$ if $\mbox{Vol}(D') \geq \mbox{Vol}(D)$ for any biholomorphism $\varphi: D\rightarrow D'$ such that $\det J(\varphi, z_0)=1$. It is known that minimality is equivalent to the following condition on the Bergman kernel \cite{Maschler}: \begin{theorem} A bounded domain $D$ is a minimal domain with center at $z_0$ if and only if \begin{align} K_D(z,z_0)\equiv c, \quad \mbox{for any $z\in D$,} \end{align} where $c$ is a non-zero constant. \end{theorem} By the reproducing property of the Bergman kernel, we see that \begin{align*} 1=\int_D 1 \cdot K_D(z,z_0) dV(z)= c\int_D dV(z). \end{align*} Thus the constant $c$ must be equal to $1/\mbox{Vol}(D)$. By Example \ref{disk}, we know that $K_{\mathbb D}(z,0)=1/\pi=1/\mbox{Vol} (\mathbb D) $.
Thus the unit disk is a minimal domain with center at the origin. For $z,w\in D$ such that $K_D(z,w)\not=0$, we define an $n \times n$ matrix $T_D$ by $$T_D (z,w):= \begin{pmatrix} \dfrac{\partial^2 }{\partial \overline{w_1}\partial z_1}\log K_D(z,w) & \cdots & \dfrac{\partial^2 }{\partial \overline{w_1}\partial z_n}\log K_D(z,w) \\ \vdots & \ddots & \vdots\\ \dfrac{\partial^2 }{\partial \overline{w_n}\partial z_1}\log K_D(z,w) & \cdots & \dfrac{\partial^2 }{\partial\overline{ w_n}\partial z_n}\log K_D(z,w) \end{pmatrix}.$$ The matrix $T_D(z,z)$ is a positive definite hermitian matrix for all $z\in D$. The matrix $T_D$ possesses the following transformation formula under biholomorphisms: \begin{align}\label{eq:T} T_D(z,w)=\overline{ {}^t J(\varphi, w)} T_{D'}(\varphi(z) ,\varphi (w)) J(\varphi, z), \quad \mbox{if $K_D(z,w)\not=0$} . \end{align} Following a paper by Q.-K.~Lu \cite{Lu}, we introduce the representative domain. \begin{definition} A bounded domain $D$ in $\mathbb C^n$ is called a representative domain (in the sense of Q.-K.~Lu) if there exists a point $z_0\in D$ such that $T_D(z,z_0)$ is a constant matrix for all $z\in D$. The point $z_0$ is called the center of the representative domain. \end{definition} We already know that the unit disk is minimal with center at the origin. Moreover, it is also representative with the same center. Indeed, by a simple computation, we have $T_{\mathbb D}(z,w)=2/(1-z\overline{w})^2$ and also $T_{\mathbb D}(z,0)=2$. We finish this section with a remark on zeros of the Bergman kernel $K_D$. \begin{remark} The matrix $T_{D}(z,w)$ is not well-defined for $z,w\in D$ such that $K_D(z,w)=0$. If a domain $D$ is a homogeneous bounded domain, then it is known that $K_D(z,w)\not =0$ for all $z,w\in D$ (see \cite[Proposition 3.1]{Ishi}). However, there are non-homogeneous examples whose Bergman kernels are not zero-free. For instance, the following are such examples: \begin{enumerate} \item[(i)] $\{z\in \mathbb C; r<|z|<1\}$ for $r<e^{-2}$ (see \cite{S}), \item[(ii)] $\{z\in\mathbb C^n; |z_1|+ \cdots + |z_n|<1 \}$ for $n\geq 3$ (see \cite{Boas}), \item[(iii)] $\left\lbrace (z_1,z_2)\in\mathbb C^2; |z_2|< \dfrac{1}{1+|z_1|} \right\rbrace $ (see \cite{Boas2}). \end{enumerate} For further information about zeros of the Bergman kernel, see \cite{Ahn,Jarnicki2,Krantz2,Y,Y2} and the references therein. \end{remark} \section{Bergman mapping for quasi-circular domains}\label{sec3} The purpose of this section is to study the Bergman mapping for quasi-circular domains and prove our main theorems (Theorems \ref{main} and \ref{main0}). We begin our study with some definitions. In the following we only consider bounded domains which contain the origin. \begin{definition}\label{def:quasi} Let $ m_1,\ldots,m_n \in \mathbb Z_{+}$. A bounded domain $D$ is called quasi-circular (or $(m_1,\ldots,m_n)$-circular) if $(e^{i m_1\theta}z_1,\ldots, e^{i m_n\theta} z_n)\in D$ for any $\theta \in \mathbb R$ and $(z_1,\ldots,z_n)\in D$. The $n$-tuple $(m_1,\ldots, m_n)$ is called the weight of a quasi-circular domain. \end{definition} If $m_1=\cdots = m_n$, then it is a usual circular domain. Now we define the normal quasi-circular domains. \begin{definition} Let $D\subset \mathbb C^2$ be a quasi-circular domain and $(m_1, m_2)$ its weight. Without loss of generality we may assume that $m_1 \leq m_2 $ and $\mathrm{gcd}(m_1, m_2)=1$. A quasi-circular domain $D$ is called normal if $m_1\geq2$. \end{definition} Let us give some concrete examples.
Consider the following two domains: \begin{align*} D_1&=\{(z_1, z_2)\in \mathbb B^2; | z_1^3+z_2^2 |<1\},\\ D_2&=\{(z_1, z_2)\in \mathbb B^2; | z_1^2+z_2 |<1\}. \end{align*} Then $D_1$ is a $(2,3)$-circular domain and hence normal. On the other hand, $D_2$ is not normal, since $D_2$ is a $(1,2)$-circular domain. It is easy to see that a $(p_1,p_2)$-circular domain is normal for any prime numbers $p_1, p_2$ such that $p_1 < p_2$. As the first step of our study, we prove the minimality of quasi-circular domains. \begin{proposition}\label{minimal} If a domain $D\subset \mathbb C^2$ is quasi-circular, then it is a minimal domain with center at the origin. \end{proposition} \begin{proof} Define $f_{\theta}: D\rightarrow D$ by $f_{\theta}(z_1, z_2)=(e^{i m_1\theta}z_1, e^{i m_2\theta} z_2)$ for $z\in D$ and $\theta \in \mathbb R$. Then the Jacobian matrix $J(f_\theta,z)$ is given by $\mathrm{diag}(e^{im_1 \theta},e^{im_2 \theta} )$. Since $D$ is quasi-circular, $f_{\theta}$ is an automorphism of $D$. By the relative invariance of the Bergman kernel \eqref{trans} we obtain \begin{align}\label{eq:Bergman} K_{D}(z,0)=K_D(f_\theta(z),0), \end{align} for any $\theta\in \mathbb R$. Using \eqref{eq:Bergman} and the Taylor expansion, we have \begin{align*} K_D(z,0)&=\sum_{k\in\mathbb Z^2_{\geq 0}} a_k z_1^{k_1} z_2^{k_2}\\ &=\sum_{k\in\mathbb Z^2_{\geq 0}} e^{i(\sum_{j=1}^2 m_j k_j)\theta } a_k z_1^{k_1} z_2^{k_2}=K_D(f_\theta(z),0). \end{align*} It follows that $a_k= e^{i(\sum_{j=1}^2 m_j k_j)\theta } a_k$ for any $\theta \in \mathbb R$ and $k \in \mathbb Z_{\geq 0}^2$. Since $\sum_{j=1}^2 m_j k_j \not =0$ except for $(k_1,k_2)=(0,0)$, all the coefficients except the constant term are zero. Thus $K_{D}(z,0)$ is a constant. Since $D$ is bounded, we know that $K_D(0,0)$ is non-zero. Therefore, we conclude that $K_{D}(z,0)$ is a non-zero constant. \end{proof} For the minimality of quasi-circular domains, we do not need to assume normality. On the other hand, we do need normality to prove that a quasi-circular domain is representative. \begin{proposition}\label{repre} Let $D \subset \mathbb C^2$ be a normal quasi-circular domain. Then $D$ is a representative domain with center at the origin. \end{proposition} \begin{proof} In the following, for simplicity, we use the notation $$K_{\overline{i}j}(z,w)= \frac{\partial^2}{\partial{\overline{w}_i}\partial{z_j} }\log K_D(z,w).$$ By the transformation formula \eqref{eq:T}, we have \begin{align}\label{TD} \begin{pmatrix} K_{\overline{1}1}(z,0) & K_{\overline{1}2}(z,0)\\ K_{\overline{2}1}(z,0) & K_{\overline{2}2}(z,0) \end{pmatrix} = \begin{pmatrix} K_{\overline{1}1}(f_\theta(z),0) & e^{i(m_2-m_1)\theta} K_{\overline{1}2}(f_\theta(z),0)\\ e^{i(m_1-m_2)\theta} K_{\overline{2}1}(f_\theta(z),0) & K_{\overline{2}2}(f_\theta(z),0) \end{pmatrix} \end{align} for any $\theta\in \mathbb R$. By an argument similar to the one used in the proof of Proposition \ref{minimal}, we know that $K_{\overline{k}k}(z,0)$ is a constant for $k=1,2$. Let us prove that $K_{\overline{1}2}(z,0)$ is a constant. By \eqref{TD} and the Taylor expansion, we see that \begin{align*} K_{\overline{1}2}(z,0)&= \sum_{k\in\mathbb Z^2_{\geq 0}} a_k z_1^{k_1} z_2^{k_2}\\ &=e^{i(m_2-m_1)\theta} \sum_{k\in\mathbb Z^2_{\geq 0}} e^{i(\sum_{j=1}^2 m_j k_j)\theta } a_k z_1^{k_1} z_2^{k_2}\\ &= e^{i(m_2-m_1)\theta} K_{\overline{1}2}(f_\theta(z),0). \end{align*} Then we obtain $a_k=e^{i(m_2-m_1 + \sum_{j=1}^2 m_j k_j )\theta } a_k$.
Since $m_2-m_1>0$, the number $c_{k,m}=m_2-m_1 + \sum_{j=1}^2 m_j k_j$ is non-zero for any $k_1,k_2 \geq 0$. Thus we know that $a_k=0$ for any $k_1,k_2 \geq 0$ and $K_{\overline{1}2}(z,0)\equiv 0$. Next we consider $K_{\overline{2}1}(z,0)$. A similar argument shows that \begin{align*} K_{\overline{2}1}(z,0)&= \sum_{k\in\mathbb Z^2_{\geq 0}} a'_k z_1^{k_1} z_2^{k_2}\\ &=e^{i(m_1-m_2)\theta} \sum_{k\in\mathbb Z^2_{\geq 0}} e^{i(\sum_{j=1}^2 m_j k_j)\theta } a'_k z_1^{k_1} z_2^{k_2}\\ &= e^{i(m_1-m_2)\theta} K_{\overline{2}1}(f_\theta(z),0). \end{align*} It follows that $a'_k=e^{i(m_1-m_2 + \sum_{j=1}^2 m_j k_j )\theta } a'_k$. For any $k_1\geq 0, k_2\geq 1$, the number $c'_{k,m}=m_1-m_2 + \sum_{j=1}^2 m_j k_j$ is non-zero. It follows that $a'_k=0$ for $k_1\geq 0, k_2 \geq 1$. There remains the task of proving that $c'_{k,m}\not =0$ for $k_1\geq 1$ and $k_2=0$. In this case, $c'_{k,m}=(k_1+1)m_1-m_2$. If $c'_{k,m}=0$, then $(k_1+1)m_1=m_2$. On the other hand, by the conditions $\mathrm{gcd}(m_1,m_2)=1$ and $m_1\geq 2$, we have $n m_1\not = m_2$ for any $n \geq 2$; otherwise $m_1$ would divide $m_2$, forcing $m_1=1$. This is a contradiction. Thus we conclude that $a'_k=0$ except possibly for the case $(k_1 ,k_2)=(0,0) $. It follows that $K_{\overline{2}1}(z,0)$ is a constant. Hence $T_D(z,0)$ is a constant matrix. \end{proof} We note that the constant term of $K_{\overline{2}1}(z,0)$ also vanishes, since $c'_{(0,0),m}=m_1-m_2\not=0$. In other words, $T_D(z,0)$ is a diagonal matrix. Now we introduce the Bergman mapping and recall its relevant properties. \begin{definition} Let $D$ be a bounded domain. Set $U_p^D:=\{ z\in D; K_D(z,p)\not=0 \}$. Define a mapping $\sigma_p^D:U_p^D \rightarrow \mathbb C^n$ by \begin{align} \sigma_{p}^D (z):=\left. T_D(p,p)^{-1/2}\mathrm{grad}_{\overline{w}}\log \dfrac{K_D(z,w)}{K_D(p,w) } \right|_{w=p} ,\quad \mbox{for } z\in U_p^D. \end{align} The mapping $\sigma_p^D$ is called the Bergman mapping defined at $p$. \end{definition} Here we set $$\mathrm{grad}_{\overline{w}} f(w) := {}^{t} \left( \dfrac{\partial f}{\partial \overline{w_1}} (w), \ldots, \dfrac{\partial f}{\partial \overline{w_n}}(w) \right),$$ for anti-holomorphic functions $f$ on $D$. By the definition of the Bergman mapping, we easily verify the following properties (see also \cite{Ishi}): \begin{align} \sigma_{p}^D(p)&=0, \label{property1}\\ J(\sigma_p^D,z)&=T_D(p,p)^{-1/2} T_D(z,p) \quad \mbox{for } z\in U_p^D.\label{property2} \end{align} Let $\varphi: D\rightarrow D'$ be a biholomorphism. Define an $n\times n$ matrix $L(\varphi, p)$ by $$L(\varphi, p):= T_{D'}(\varphi(p),\varphi(p) )^{-1/2} \overline{{}^t J(\varphi, p) ^{-1}} T_D(p,p)^{1/2}.$$ The matrix $L(\varphi, p)$ is a unitary matrix. Indeed, \eqref{eq:T} gives $T_D(p,p)^{-1}=J(\varphi, p)^{-1} T_{D'}(\varphi(p),\varphi(p))^{-1} \overline{{}^t J(\varphi, p)^{-1}}$, and hence \begin{align*} L(\varphi, p)^* L(\varphi, p)&= T_{D}(p,p )^{1/2} \left( J(\varphi, p) ^{-1} T_{D'}(\varphi(p),\varphi(p) )^{-1} \overline{{}^t J(\varphi, p) ^{-1}} \right)T_D(p,p)^{1/2}\\ &=T_{D}(p,p )^{1/2} T_D(p,p )^{-1} T_{D}(p,p )^{1/2}\\ &=I. \end{align*} The following relation between the Bergman mapping $\sigma^D_p$ and the unitary matrix $L(\varphi,p)$ is essential for our purpose (see \cite{Ishi}). \begin{proposition}\label{diagram} Let $D, D'$ be bounded domains and $\varphi: D\rightarrow D'$ a biholomorphism. One has $\sigma_{\varphi(p)}^{D'} \circ \varphi= L(\varphi,p) \circ \sigma_p^D$. In other words, the diagram \[\xymatrix{ U_p^D \ar[d]_{\sigma_{p}^D} \ar[r]^\varphi_\sim \ar@{}[dr]|\circlearrowright & U_{\varphi(p)}^{D'} \ar[d]^{\sigma_{\varphi(p)}^{D'}} \\ \mathbb C^n \ar[r]^{L(\varphi,p)} & \mathbb C^n \\ } \] is commutative.
\end{proposition}
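For the unit disk (the case $n=1$), the unitarity of $L(\varphi,p)$ can also be checked numerically. The sketch below assumes the formula $T_{\mathbb D}(z,z)=2/(1-|z|^2)^2$ obtained earlier and an arbitrary M\"obius automorphism $\varphi$ of the disk, for which $J(\varphi,p)=\varphi'(p)$, so that unitarity reduces to $|L(\varphi,p)|=1$.
\begin{verbatim}
# Numerical check that |L(phi, p)| = 1 for the unit disk (n = 1).
a = 0.3 + 0.2j                    # phi(z) = (z - a)/(1 - conj(a) z)
phi  = lambda z: (z - a) / (1 - a.conjugate() * z)
dphi = lambda z: (1 - abs(a)**2) / (1 - a.conjugate() * z)**2  # phi'(z)
T    = lambda z: 2 / (1 - abs(z)**2)**2                        # T_D(z, z)

p = 0.5 - 0.1j                    # an arbitrary point of the disk
L = T(phi(p))**(-0.5) * (1 / dphi(p).conjugate()) * T(p)**0.5
print(abs(L))                     # = 1.0 up to rounding error
\end{verbatim}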
In the following, we consider minimal representative domains. In \cite[Corollary 2]{Lu}, Q.-K.~Lu proved that if $D$ and $D'$ are both representative domains in $ \mathbb C^n$ then any biholomorphism which maps the center of $D$ to that of $D'$ is an affine transformation. If two domains $D,D'$ are minimal representative domains with the center at the origin, then we obtain the following: \begin{proposition}\label{comm} Assume that $D$ and $D'$ are minimal representative domains with the center at the origin in $ \mathbb C^n$. Then any biholomorphism which maps the center of $D$ to that of $D'$ is linear. \end{proposition} \begin{proof} Since the two domains $D$ and $D'$ are both minimal, we know that $K_{D}(z,0)\not =0$ and $K_{D'}(z',0)\not =0$ for all $z\in D$ and $z' \in D'$. It follows that $U_{0}^{D}=D$ and $U_{0}^{D'}=D'$. Next we prove that the Bergman mapping $\sigma_{0}^D$ is linear. Since $D$ is representative, we know that $T_D(z,0)\equiv T_D(0,0)$. This, together with \eqref{property2}, implies that $J(\sigma_{0}^D, z )=T_D(0,0)^{1/2}$. Thus the Jacobian matrix $J(\sigma_{0}^D, z)$ is a constant matrix. It follows that $\sigma_{0}^D$ is an affine transformation. By \eqref{property1}, we have $\sigma_{0}^D(z)=T_D(0,0)^{1/2}z$. By the same argument, $\sigma_{0}^{D'}$ is also linear. Let $f: D \rightarrow D' $ be a biholomorphism which maps the center of $D$ to that of $D'$. The above argument and Proposition \ref{diagram} give us the following commutative diagram: \[\xymatrix{ D \ar[d]_{T_{D}(0,0)^{\frac{1}{2}} =\sigma_{0}^D} \ar[r]^f_\sim \ar@{}[dr]|\circlearrowright & D' \ar[d]^{ \sigma_{0}^{D'}= T_{D'}(0,0)^{\frac{1}{2}} } \\ \mathbb C^n \ar[r]^{L(f,0)} & \mathbb C^n. \\ } \] Therefore we conclude that $f(z)=T_{D'}(0,0)^{-\frac{1}{2}} L(f, 0) T_{D}(0,0)^{\frac{1}{2}}z$, which is obviously linear. \end{proof} This proposition is a natural generalization of the argument given in \cite[Section 2.2]{Ishi} to minimal representative domains. Now we are ready to prove our main result. \begin{theorem}\label{main} Let $D, D'\subset \mathbb C^2$ be normal quasi-circular domains and $f:D\rightarrow D'$ an origin-preserving biholomorphism. Then the mapping $f$ is given by $f=T_{D'}(0,0)^{-\frac{1}{2}} L(f, 0) T_{D}(0,0)^{\frac{1}{2}}$. In particular, the mapping $f$ is a linear mapping. \end{theorem} \begin{proof} Since the second part of the theorem follows from the first part, we prove only the first part of the theorem. By Propositions \ref{minimal} and \ref{repre}, we know that $D, D'$ are minimal representative domains with the center at the origin. Then we obtain the following commutative diagram by Proposition \ref{comm}: \[\xymatrix{ D \ar[d]_{T_{D}(0,0)^{\frac{1}{2} }= \sigma_0^D } \ar[r]^f_\sim \ar@{}[dr]|\circlearrowright & D' \ar[d]^{\sigma_0^{D'}=T_{D'}(0,0)^{\frac{1}{2}} } \\ \mathbb C^2 \ar[r]^{L(f,0)} & \mathbb C^2. \\ } \] Hence we conclude that $f(z)=T_{D'}(0,0)^{-\frac{1}{2}} L(f, 0) T_{D}(0,0)^{\frac{1}{2}}z$. \end{proof} As a special case of this theorem we obtain the following. \begin{theorem}\label{main0} Let $D$ be a normal quasi-circular domain in $\mathbb C^2$ and $f$ an origin-preserving automorphism of $D$. Then we have $f=T_{D}(0,0)^{-\frac{1}{2}} L(f, 0) T_{D}(0,0)^{\frac{1}{2}}$. In particular, the mapping $f$ is a linear mapping. \end{theorem} We should note that there is a concrete example of a $(1,2)$-circular domain whose automorphism group contains an origin-preserving automorphism which is not linear.
In \cite{Zapa}, Zapa{\l}owski studied proper holomorphic self-mappings of the symmetrized $(p,n)$-ellipsoid $\mathbb E_{p,n}:=\pi_n (\mathbb B_{p,n} )$ where $\pi_n=(\pi_{n,1}, \ldots, \pi_{n,n} )$ and $\mathbb B_{p,n}$ are defined by \begin{align*} \mathbb B_{p,n}&:= \left\lbrace z\in \mathbb C^n: \sum_{j=1}^n |z_j|^{2p}<1 \right\rbrace,\\ \pi_{n,k}(z)&:= \sum_{1\leq j_1 < \cdots <j_k\leq n} z_{j_1} \cdots z_{j_k}, \quad 1\leq k \leq n,\quad z=(z_1,\ldots, z_n)\in \mathbb C^n. \end{align*} The symmetrized $(p,n)$-ellipsoid $\mathbb E_{p,n}$ is a $(1,2,\ldots, n)$-circular domain. In the same paper, Zapa{\l}owski also determined the automorphism group of $\mathbb E_{p,n}$. For instance, if $(p,n)=(1/2,2)$ then the automorphism group of $\mathbb E_{1/2,2}$ contains the following mapping: \begin{align*} \varphi(z_1, z_2)=\left(\zeta z_1 , \zeta^2 \left(\frac{z_1^2}{4} -z_2 \right)\right), \quad (z_1, z_2) \in \mathbb E_{1/2,2}, \end{align*} where $\zeta \in \mathbb T=\{z\in\mathbb C; |z|=1\}$. Obviously, $\varphi$ is a non-linear mapping which preserves the origin. Thus we cannot drop the condition ``$m_1 \geq 2$" in the definition of normality. We conclude this paper with some remarks. \begin{remark} We showed that every normal quasi-circular domain is a representative domain which is also minimal with the same center. One may expect that these two kinds of centers coincide with each other whenever a domain is minimal and representative. However, this expectation is not true in general. Indeed, Maschler proved that there are representative domains which are not minimal domains with the same center \cite[Corollary 2]{Maschler}. \end{remark} \begin{remark} In \cite{Ishi}, Ishi and Kai defined the representative domain as the image of the Bergman mapping. In their definition, the representative domain $D$ always satisfies $T_D(z,0) \equiv I$. Namely, it is a representative domain with the center at the origin in the sense of Q.-K. Lu. As we have seen in this section, for the linearity of the Bergman mapping, the condition $T_D(z,0)\equiv T_D(0,0)$ is essential. This is why we use Q.-K. Lu's definition in this paper. \end{remark} \begin{remark}\label{rem} Let $D$ be a circular domain in $\mathbb C^2$ and put $\rho_\theta(z_1,z_2)=(e^{i\theta} z_1,e^{i\theta} z_2 )$. Since every circular domain is a minimal representative domain with the center at the origin, we can prove Cartan's theorem by using the Bergman mapping \cite{Ishi}. For circular domains, there is another way to prove Cartan's theorem: one has the relation $\varphi \circ \rho_\theta = \rho_\theta \circ \varphi$ for any $\varphi \in \mbox{Aut}(D)$ such that $\varphi(0)=0$, and by using this relation one can conclude that $\varphi$ must be linear (cf. \cite[Chapter 6]{Gong} or \cite[Chapter 5]{Nara}). In contrast to the circular case, we cannot obtain such a relation for $f_\theta(z_1,z_2)=(e^{im_1\theta} z_1,e^{im_2\theta} z_2 )$ by using the Cartan uniqueness theorem. Put $g=f_{-\theta} \circ \varphi^{-1} \circ f_\theta \circ \varphi$. The matrix $J(f_{\theta},z)$ is an element of the center of $\mbox{Mat$_{2\times 2}(\mathbb C)$}$ if and only if $m_1=m_2$. In other words, if $m_1\not=m_2$, then the matrix $J(f_{\theta},z)$ does not commute with all elements of $\mbox{Mat$_{2\times 2}(\mathbb C)$}$. Thus we cannot conclude that $J(g,z)=I$ in the same way as for circular domains. \end{remark} \section*{Acknowledgement} The author would like to thank Professor Kang-Tae Kim for helpful discussion.
The author also thanks Hyeseon Kim and Van Thu Ninh for their comments on an earlier draft of the manuscript. Moreover, the author is indebted to Pawe{\l} Zapa{\l}owski, who pointed out an error in an earlier version of the manuscript. \bibliographystyle{amsplain}
{ "timestamp": "2014-03-18T01:13:06", "yymm": "1403", "arxiv_id": "1403.4059", "language": "en", "url": "https://arxiv.org/abs/1403.4059", "abstract": "It was shown by Kaup that every origin-preserving automorphism of quasi-circular domains is a polynomial mapping. In this paper, we study how the weight of quasi-circular domains and the degree of such automorphisms are related. By using the Bergman mapping, we prove that every origin-preserving automorphism of normal quasi-circular domains in $\\mathbb C^2$ is linear.", "subjects": "Complex Variables (math.CV); Differential Geometry (math.DG); Functional Analysis (math.FA)", "title": "Automorphisms of normal quasi-circular domains", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812354689082, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.7079434038996458 }
https://arxiv.org/abs/2109.00543
Controlled frames in n-Hilbert spaces and their tensor products
The concepts of controlled frames and their duals in n-Hilbert spaces and in tensor products of such spaces are introduced, and some of their characterizations are given. We further study the relationship between controlled frames and bounded linear operators in the tensor product of n-Hilbert spaces. At the end, the direct sum of controlled frames in an n-Hilbert space is considered.
\section{Introduction} \smallskip\hspace{.6 cm}In the study of vector spaces, the main characteristic of a basis is that every element can be represented as a superposition of the elements in the basis.\;A frame is also a sequence of elements in a Hilbert space, which allows every element to be written as a linear combination of the elements in the frame.\,However, the corresponding coefficients are not necessarily unique.\;So, a frame can be considered as a generalization of a basis.\;In fact, frames play an important role in theoretical research on wavelet analysis, signal denoising, feature extraction, robust signal processing, etc.\;In 1946, D.\,Gabor \cite{Gabor} first initiated a technique for rebuilding signals using a family of elementary signals.\;In 1952, Duffin and Schaeffer abstracted Gabor's method to define frames for Hilbert spaces in their fundamental paper \cite{Duffin}.\,Later on, in 1986, as the wavelet era began, Daubechies et al.\,\cite{Daubechies} observed that frames can be used to find series expansions of functions in \,$L^{\,2}\,(\,\mathbb{R}\,)$\, which are very similar to the expansions using orthonormal bases. \,Controlled frames are one of the newest generalizations of frames.\,I.\,Bogdanova et al.\,\cite{I} introduced controlled frames for spherical wavelets to get a numerically more efficient approximation algorithm.\,Thereafter, P.\,Balaz \cite{B} developed weighted and controlled frames in Hilbert spaces.\,S.\,Rabinson \cite{S} presented the basic concepts of the tensor product of Hilbert spaces.\,The tensor product of Hilbert spaces \,$X$\, and \,$Y$\, is a certain linear space of operators, which was represented by Folland in \cite{Folland} and by Kadison and Ringrose in \cite{Kadison}. \;The concept of a \,$2$-inner product space was first introduced by Diminnie et al.\,\cite{Diminnie} in the 1970's.\;In 1989, A.\,Misiak \cite{Misiak} developed the generalization of a \,$2$-inner product space for \,$n \,\geq\, 2$. In this paper, our focus is to study controlled frames associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in \,$n$-Hilbert spaces and their tensor products.\,We will see that any controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, is a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in an \,$n$-Hilbert space, and the converse is also true under some conditions.\,In the tensor product of \,$n$-Hilbert spaces, we shall establish that the image of a controlled frame under a bounded linear operator is again a controlled frame if and only if the operator is invertible.\,Dual controlled frames in the tensor product of \,$n$-Hilbert spaces are discussed and, finally, we shall establish that the direct sum of controlled frames associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, is again a controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, under some sufficient conditions. \section{Preliminaries} \begin{theorem}\cite{Christensen}\label{th1} Let \,$H_{\,1},\, H_{\,2}$\; be two Hilbert spaces and \;$U \,:\, H_{\,1} \,\to\, H_{\,2}$\; be a bounded linear operator with closed range \;$\mathcal{R}_{\,U}$.\;Then there exists a bounded linear operator \,$U^{\dagger} \,:\, H_{\,2} \,\to\, H_{\,1}$\, such that \,$U\,U^{\dagger}\,x \,=\, x\; \;\forall\; x \,\in\, \mathcal{R}_{\,U}$. \end{theorem} The operator \,$U^{\dagger}$\, defined in Theorem \ref{th1} is called the pseudo-inverse of \,$U$.
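A finite-dimensional illustration of Theorem \ref{th1} may be helpful (a sketch only: the rank-deficient matrix $U$ below is an arbitrary example, and \texttt{numpy.linalg.pinv} returns the Moore--Penrose pseudo-inverse, which satisfies $U\,U^{\dagger}\,x \,=\, x$ for every $x$ in the range of $U$).
\begin{verbatim}
import numpy as np

# H1 = R^3, H2 = R^4, and U a bounded operator with closed range (rank 2).
U = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [0.0, 0.0, 0.0]])

U_dagger = np.linalg.pinv(U)              # Moore-Penrose pseudo-inverse

x = U @ np.array([2.0, -1.0, 3.0])        # an arbitrary element of R_U
assert np.allclose(U @ U_dagger @ x, x)   # U U^dagger acts as identity on R_U
\end{verbatim}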
\begin{theorem}\cite{Kreyzig}\label{th1.051} The set \,$\mathcal{S}\,(\,H_{1}\,)$\; of all self-adjoint operators on a Hilbert space \,$H_{1}$\; is a partially ordered set with respect to the partial order \,$\leq$\, which is defined for \,$T,\,S \,\in\, \mathcal{S}\,(\,H_{1}\,)$\, by \[T \,\leq\, S \,\Leftrightarrow\, \left<\,T\,f,\, f\,\right> \,\leq\, \left<\,S\,f,\, f\,\right>\; \;\forall\; f \,\in\, H_{1}.\] \end{theorem} \begin{definition}\cite{Kreyzig} A self-adjoint operator \,$U \,:\, H_{1} \,\to\, H_{1}$\, is called positive if \,$\left<\,U\,x \,,\, x\,\right> \,\geq\, 0$\, for all \,$x \,\in\, H_{1}$.\;In notation, we write \,$U \,\geq\, 0$.\;A self-adjoint operator \,$V \,:\, H_{1} \,\to\, H_{1}$\, is called a square root of \,$U$\, if \,$V^{\,2} \,=\, U$.\;If, in addition, \,$V \,\geq\, 0$, then \,$V$\, is called the positive square root of \,$U$\, and is denoted by \,$V \,=\, U^{1 \,/\, 2}$. \end{definition} \begin{theorem}\cite{Kreyzig}\label{th1.05} The positive square root \,$V \,:\, H_{1} \,\to\, H_{1}$\, of an arbitrary positive self-adjoint operator \,$U \,:\, H_{1} \,\to\, H_{1}$\, exists and is unique.\;Further, the operator \,$V$\, commutes with every bounded linear operator on \,$H_{1}$\, which commutes with \,$U$. \end{theorem} In a complex Hilbert space, every bounded positive operator is self-adjoint. \begin{definition}\cite{Christensen} A sequence \,$\left\{\,f_{\,i}\,\right\}_{i \,=\, 1}^{\infty}$\, in a separable Hilbert space \,$H_{1}$\, is said to be a frame for \,$H_{1}$\, if there exist positive constants \,$A,\, B$\, such that \begin{equation}\label{ee1} A\; \|\,f\,\|^{\,2} \,\leq\, \sum\limits_{i \,=\, 1}^{\infty}\, \left|\ \left <\,f,\, f_{\,i} \, \right >\,\right|^{\,2} \,\leq\, B \,\|\,f\,\|^{\,2}\; \;\forall\; f \,\in\, H_{1}. \end{equation} The constants \,$A$\, and \,$B$\, are called frame bounds.\,If the collection \,$\left\{\,f_{\,i}\,\right\}_{i \,=\, 1}^{\infty}$\, satisfies only the right inequality of (\ref{ee1}), then it is called a Bessel sequence with bound \,$B$.
\end{definition} \begin{definition}\cite{B} Let \,$C$\, be a bounded linear operator on \,$H_{1}$\, which has a bounded inverse.\,A frame controlled by the operator \,$C$, or a \,$C$-controlled frame, is a family of vectors \,$\left\{\,f_{\,i}\,\right\}_{i \,=\, 1}^{\infty}$\, in \,$H_{1}$\, such that there exist constants \,$0 \,<\, A \,\leq\, B \,<\, \infty$\, satisfying \[A\; \|\,f\,\|^{\,2} \,\leq\, \sum\limits_{i \,=\, 1}^{\infty}\, \left <\,f,\, f_{\,i} \, \right >\,\left<\,C\,f_{\,i},\, f\,\right> \,\leq\, B \,\|\,f\,\|^{\,2}\; \;\forall\; f \,\in\, H_{1}.\] \end{definition} The controlled frame operator \,$S \,:\, H_{1} \,\to\, H_{1}$\, is defined by \[S\,f \,=\, \sum\limits_{i \,=\, 1}^{\infty}\, \left <\,f,\, f_{\,i} \, \right >\,C\,f_{\,i}\; \;\forall\; f \,\in\, H_{1}.\] \begin{definition}\cite{Upender}\label{def0.001} The tensor product of Hilbert spaces \,$\left(\,H_{1},\, \left<\,\cdot,\, \cdot\,\right>_{1}\,\right)$\, and \,$\left(\,H_{2},\, \left<\,\cdot,\, \cdot\,\right>_{2}\,\right)$\, is denoted by \,$H_{1} \,\otimes\, H_{2}$\, and it is defined to be an inner product space associated with the inner product \[\left<\,f \,\otimes\, g,\, f^{\,\prime} \,\otimes\, g^{\,\prime}\,\right> \,=\, \left<\,f,\, f^{\,\prime}\,\right>_{1}\;\left<\,g,\, g^{\,\prime}\,\right>_{2}\; \;\forall\; f,\, f^{\,\prime} \,\in\, H_{1}\; \;\&\; \;g,\, g^{\,\prime} \,\in\, H_{2}.\] The norm on \,$H_{1} \,\otimes\, H_{2}$\, is given by \[\left\|\,f \,\otimes\, g\,\right\| \,=\, \|\,f\,\|_{\,1}\;\|\,g\,\|_{\,2}\; \;\forall\; f \,\in\, H_{1}\; \;\&\; \,g \,\in\, H_{2}.\] The space \,$H_{1} \,\otimes\, H_{2}$\, is a Hilbert space with respect to the above inner product. \end{definition} For \,$Q \,\in\, \mathcal{B}\,(\,H_{1}\,)$\, and \,$T \,\in\, \mathcal{B}\,(\,H_{2}\,)$, the tensor product of the operators \,$Q$\, and \,$T$\, is denoted by \,$Q \,\otimes\, T$\, and defined as \[\left(\,Q \,\otimes\, T\,\right)\,A \,=\, Q\,A\,T^{\,\ast}\; \;\forall\; \;A \,\in\, H_{1} \,\otimes\, H_{2}.\] \begin{theorem}\cite{Folland}\label{th1.1} Suppose \,$Q,\, Q^{\prime} \,\in\, \mathcal{B}\,(\,H_{1}\,)$\, and \,$T,\, T^{\prime} \,\in\, \mathcal{B}\,(\,H_{2}\,)$.\,Then \begin{itemize} \item[$(i)$]\,$Q \,\otimes\, T \,\in\, \mathcal{B}\,(\,H_{1} \,\otimes\, H_{2}\,)$\, and \,$\left\|\,Q \,\otimes\, T\,\right\| \,=\, \|\,Q\,\|\; \|\,T\,\|$. \item[$(ii)$] \,$\left(\,Q \,\otimes\, T\,\right)\,(\,f \,\otimes\, g\,) \,=\, Q\,f \,\otimes\, T\,g$\, for all \,$f \,\in\, H_{1},\, g \,\in\, H_{2}$. \item[$(iii)$] $\left(\,Q \,\otimes\, T\,\right)\,\left(\,Q^{\,\prime} \,\otimes\, T^{\,\prime}\,\right) \,=\, (\,Q\,Q^{\,\prime}\,) \,\otimes\, (\,T\,T^{\,\prime}\,)$. \item[$(iv)$] \,$Q \,\otimes\, T$\, is invertible if and only if \,$Q$\, and \,$T$\, are invertible, in which case \,$\left(\,Q \,\otimes\, T\,\right)^{\,-\, 1} \,=\, \left(\,Q^{\,-\, 1} \,\otimes\, T^{\,-\, 1}\,\right)$. \item[$(v)$] \,$\left(\,Q \,\otimes\, T\,\right)^{\,\ast} \,=\, \left(\,Q^{\,\ast} \,\otimes\, T^{\,\ast}\,\right)$. \end{itemize} \end{theorem}
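Theorem \ref{th1.1} can be illustrated in finite dimensions (a sketch only: \texttt{numpy.kron} realizes $Q \,\otimes\, T$ on $\mathbb{C}^{m} \,\otimes\, \mathbb{C}^{n} \cong \mathbb{C}^{mn}$, a realization isomorphic to the operator picture of Definition \ref{def0.001}; all matrices below are arbitrary examples).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Q, T = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
f, g = rng.standard_normal(2), rng.standard_normal(3)

QT = np.kron(Q, T)
# (ii): (Q (x) T)(f (x) g) = Qf (x) Tg
assert np.allclose(QT @ np.kron(f, g), np.kron(Q @ f, T @ g))
# (i):  ||Q (x) T|| = ||Q|| ||T||   (operator 2-norms)
assert np.isclose(np.linalg.norm(QT, 2),
                  np.linalg.norm(Q, 2) * np.linalg.norm(T, 2))
# (v):  (Q (x) T)^* = Q^* (x) T^*   (transposes, since the data are real)
assert np.allclose(QT.T, np.kron(Q.T, T.T))
\end{verbatim}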
\begin{definition}\cite{Mashadi} An \,$n$-norm on a linear space \,$X$\, (\,over the field \,$\mathbb{K}$\, of real or complex numbers\,) is a function \[\left(\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right) \,\longmapsto\, \left\|\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\|,\; x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n} \,\in\, X\] from \,$X^{\,n}$\, to the set \,$\mathbb{R}$\, of all real numbers such that for every \,$x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n} \,\in\, X$\, and \,$\alpha \,\in\, \mathbb{K}$, \begin{itemize} \item[(i)]\;\; $\left\|\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\| \,=\, 0$\; if and only if \,$x_{\,1},\, \cdots,\, x_{\,n}$\; are linearly dependent, \item[(ii)]\;\;\; $\left\|\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\|$\; is invariant under permutations of \,$x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}$, \item[(iii)]\;\;\; $\left\|\,\alpha\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\| \,=\, |\,\alpha\,|\, \left\|\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\|$, \item[(iv)]\;\; $\left\|\,x \,+\, y,\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\| \,\leq\, \left\|\,x,\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\| \,+\, \left\|\,y,\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\|$. \end{itemize} A linear space \,$X$, together with an \,$n$-norm \,$\left\|\,\cdot,\, \cdots,\, \cdot \,\right\|$, is called a linear \,$n$-normed space. \end{definition} \begin{definition}\cite{Misiak} Let \,$n \,\in\, \mathbb{N}$\; and \,$X$\, be a linear space of dimension greater than or equal to \,$n$\; over the field \,$\mathbb{K}$\, of real or complex numbers.\;An \,$n$-inner product on \,$X$\, is a map \[\left(\,x,\, y,\, x_{\,2},\, \cdots,\, x_{\,n}\,\right) \,\longmapsto\, \left<\,x,\, y \,|\, x_{\,2},\, \cdots,\, x_{\,n} \,\right>,\; x,\, y,\, x_{\,2},\, \cdots,\, x_{\,n} \,\in\, X\] from \,$X^{n \,+\, 1}$\, to the set \,$\mathbb{K}$\, such that for every \,$x,\, y,\, x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n} \,\in\, X$\, and \,$\alpha \,\in\, \mathbb{K}$, \begin{itemize} \item[(i)]\;\; $\left<\,x_{\,1},\, x_{\,1} \,|\, x_{\,2},\, \cdots,\, x_{\,n} \,\right> \,\geq\, 0$\; and \;$\left<\,x_{\,1},\, x_{\,1} \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> \;=\; 0$\; if and only if \;$x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}$\; are linearly dependent, \item[(ii)]\;\; $\left<\,x,\, y \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> \;=\; \left<\,x,\, y \;|\; x_{\,i_{\,2}},\, \cdots,\, x_{\,i_{\,n}} \,\right> $\; for every permutation \\$\left(\, i_{\,2},\, \cdots,\, i_{\,n} \,\right)$\; of \;$\left(\, 2,\, \cdots,\, n \,\right)$, \item[(iii)]\;\; $\left<\,x,\, y \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> \;=\; \overline{\left<\,y,\, x \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> }$, \item[(iv)]\;\; $\left<\,\alpha\,x,\, y \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> \;=\; \alpha \,\left<\,x,\, y \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> $, \item[(v)]\;\; $\left<\,x \,+\, y,\, z \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> \;=\; \left<\,x,\, z \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right> \,+\, \left<\,y,\, z \;|\; x_{\,2},\, \cdots,\, x_{\,n} \,\right>$. \end{itemize} A linear space \,$X$\, together with an \,$n$-inner product \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>$\, is called an \,$n$-inner product space.
\end{definition} \begin{theorem}\cite{Gunawan} For an \,$n$-inner product space \,$\left(\,X,\, \left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>\,\right)$, the Cauchy--Schwarz inequality \[\left|\,\left<\,x,\, y \,|\, x_{\,2},\, \cdots,\, x_{\,n}\,\right>\,\right| \,\leq\, \left\|\,x,\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\|\, \left\|\,y,\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\|\] holds for all \,$x,\, y,\, x_{\,2},\, \cdots,\, x_{\,n} \,\in\, X$, where \[\left \|\,x_{\,1},\, x_{\,2},\, \cdots,\, x_{\,n}\,\right\| \,=\, \sqrt{\left <\,x_{\,1},\, x_{\,1} \;|\; x_{\,2},\, \cdots,\, x_{\,n}\,\right>}\] is the induced \,$n$-norm. \end{theorem} \begin{definition}\cite{Mashadi} Let \,$\left(\,X,\, \left\|\,\cdot,\, \cdots,\, \cdot \,\right\|\,\right)$\; be a linear \,$n$-normed space.\;A sequence \,$\{\,x_{\,k}\,\}$\; in \,$X$\, is said to converge to some \,$x \,\in\, X$\; if \[\lim\limits_{k \to \infty}\,\left\|\,x_{\,k} \,-\, x,\, e_{\,2},\, \cdots,\, e_{\,n} \,\right\| \,=\, 0\] for every \,$ e_{\,2},\, \cdots,\, e_{\,n} \,\in\, X$, and it is called a Cauchy sequence if \[\lim\limits_{l,\, k \,\to\, \infty}\,\left \|\,x_{l} \,-\, x_{\,k},\, e_{\,2},\, \cdots,\, e_{\,n}\,\right\| \,=\, 0\] for every \,$ e_{\,2},\, \cdots,\, e_{\,n} \,\in\, X$.\;The space \,$X$\, is said to be complete if every Cauchy sequence in this space is convergent in \,$X$.\;An \,$n$-inner product space is called an \,$n$-Hilbert space if it is complete with respect to its induced norm. \end{definition} \begin{definition}\label{def0.1}\cite{Prasenjit} A sequence \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, in an \,$n$-Hilbert space \,$H$\, is said to be a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, if there exist constants \,$0 \,<\, A \,\leq\, B \,<\, \infty$\, such that \begin{equation}\label{eee1} A \, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2} \,\leq\, \sum\limits^{\infty}_{i \,=\, 1}\,\left|\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right|^{\,2} \,\leq\, B\, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2} \end{equation} for all \,$f \,\in\, H$.\;The infimum of all such \,$B$\, is called the optimal upper frame bound and the supremum of all such \,$A$\, is called the optimal lower frame bound.\;A sequence \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, satisfying only the right inequality of (\ref{eee1}) is called a Bessel sequence associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in \,$H$\, with bound \,$B$. \end{definition}
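Definition \ref{def0.1} can be made concrete for $n=2$ (a sketch under illustrative assumptions: $H=\mathbb{R}^{3}$ equipped with the standard $2$-inner product $\left<x, y \,|\, z\right> \,=\, \left<x, y\right>\left<z, z\right> \,-\, \left<x, z\right>\left<z, y\right>$, the fixed element $a_{2}=e_{3}$, and the family $\{e_{1},\, e_{2},\, e_{1}+e_{2}\}$, whose optimal bounds turn out to be $A=1$ and $B=3$).
\begin{verbatim}
import numpy as np

def ip2(x, y, z):                  # standard 2-inner product on R^3
    return (x @ y) * (z @ z) - (x @ z) * (z @ y)

a2 = np.array([0.0, 0.0, 1.0])
F  = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([1.0, 1.0, 0.0])]

rng = np.random.default_rng(1)
for _ in range(1000):
    f   = rng.standard_normal(3)
    s   = sum(ip2(f, fi, a2) ** 2 for fi in F)
    nf2 = ip2(f, f, a2)            # = ||f, a_2||^2
    assert 1.0 * nf2 - 1e-9 <= s <= 3.0 * nf2 + 1e-9  # A = 1, B = 3
\end{verbatim}
Here $\left<f, f \,|\, a_{2}\right>$ is the squared length of the projection of $f$ onto $a_{2}^{\perp}$, so this family behaves exactly like the classical frame $\{(1,0),\,(0,1),\,(1,1)\}$ of $\mathbb{R}^{2}$, whose frame operator has eigenvalues $1$ and $3$.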
Let \,$L_{F}$\, denote the linear subspace of an \,$n$-Hilbert space \,$H$\, spanned by the non-empty finite set \,$F \,=\, \left\{\,a_{\,2},\, a_{\,3},\, \cdots,\, a_{\,n}\,\right\}$, where \,$a_{\,2},\, a_{\,3},\, \cdots,\, a_{\,n}$\, are fixed elements in \,$H$.\;Then the quotient space \,$H \,/\, L_{F}$\, is a normed linear space with respect to the norm \[\left\|\,x \,+\, L_{F}\,\right\|_{F} \,=\, \left\|\,x \,,\, a_{\,2} \,,\, \cdots \,,\, a_{\,n}\,\right\|\; \;\text{for every}\; x \,\in\, H.\] Let \,$M_{F}$\, be the algebraic complement of \,$L_{F}$; then \,$H \,=\, L_{F} \,\oplus\, M_{F}$.\;Define \[\left<\,x \,,\, y\,\right>_{F} \,=\, \left<\,x \,,\, y \;|\; a_{\,2} \,,\, \cdots \,,\, a_{\,n}\,\right>\; \;\text{on}\; \;H.\] Then \,$\left<\,\cdot \,,\, \cdot\,\right>_{F}$\, is a semi-inner product on \,$H$\, and this semi-inner product induces an inner product on the quotient space \,$H \,/\, L_{F}$\, which is given by \[\left<\,x \,+\, L_{F} \,,\, y \,+\, L_{F}\,\right>_{F} \,=\, \left<\,x \,,\, y\,\right>_{F} \,=\, \left<\,x \,,\, y \,|\, a_{\,2} \,,\, \cdots \,,\, a_{\,n} \,\right>\;\; \;\forall \;\; x,\, y \,\in\, H.\] By identifying \,$H \,/\, L_{F}$\; with \,$M_{F}$\; in an obvious way, we obtain an inner product on \,$M_{F}$.\;Now, for every \,$x \,\in\, M_{F}$, we define \,$\|\,x\,\|_{F} \;=\; \sqrt{\left<\,x \,,\, x \,\right>_{F}}$\, and it can be easily verified that \,$\left(\,M_{F} \,,\, \|\,\cdot\,\|_{F}\,\right)$\; is a normed space.\;Let \,$H_{F}$\; be the completion of the inner product space \,$M_{F}$.\\ \begin{definition}\label{def0.0001}\cite{Prasenjit} Let \,$\left\{\,f_{\,i}\,\right\}_{i \,=\, 1}^{\infty}$\; be a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$.\;Then the bounded linear operator \,$ T_{F} \,:\, l^{\,2}\,(\,\mathbb{N}\,) \,\to\, H_{F}$, defined by \,$T_{F}\,\{\,c_{i}\,\} \,=\, \sum\limits_{i \,=\, 1}^{\infty} \;c_{\,i}\,f_{\,i}$, is called the pre-frame operator, and its adjoint operator, described by \[T_{F}^{\,\ast} \,:\, H_{F} \,\to\, l^{\,2}\,(\,\mathbb{N}\,),\;T_{F}^{\,\ast}\,f \,=\, \left \{\, \left <\,f,\, f_{i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right >\,\right \}_{i \,=\, 1}^{\infty},\] is called the analysis operator.\;The operator \,$S_{F} \,:\, H_{F} \,\to\, H_{F}$\, given by \[S_{F}\,f \,=\, T_{F}\,T_{F}^{\,\ast}\,f \,=\, \sum\limits^{\infty}_{i \,=\, 1}\; \left <\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \, \right >\,f_{\,i},\; \;\text{for all}\; \,f \,\in\, H_{F},\] is called the frame operator.
\end{definition} It is easy to verify that \,$A\,I_{F} \,\leq\, S_{F} \,\leq\, B\,I_{F}$.\,Since \,$S^{\,-1}_{F}$\; commutes with both \,$S_{F}$\; and \,$I_{F}$, multiplying the inequality \,$ A\,I_{F} \,\leq\, S_{F} \,\leq\, B\,I_{F}$\, by \,$S^{\,-1}_{F}$, we get \,$B^{\,-1}\,I_{F} \,\leq\, S^{\,-1}_{F} \,\leq\, A^{\,-1}\,I_{F}$.\,For more details on frames in \,$n$-Hilbert spaces and their tensor products one can go through the papers \cite{Prasenjit, GP, PK}.\\ For the remaining part of this paper, \,$\left(\,H,\, \left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right> \,\right)$\; is considered to be an \,$n$-Hilbert space.\,$I_{F}$\, will denote the identity operator on \,$H_{F}$\, and \,$\mathcal{B}\,(\,H_{F}\,)$\, denotes the space of all bounded linear operators on \,$H_{F}$.\,$\mathcal{G}\,\mathcal{B}\,(\,H_{F}\,)$\, denotes the set of all bounded linear operators which have a bounded inverse.\,If \,$S,\, R \,\in\, \mathcal{G}\,\mathcal{B}\,(\,H_{F}\,)$, then \,$R^{\,\ast},\, R^{\,-\, 1}$\, and \,$S\,R$\, also belong to \,$\mathcal{G}\,\mathcal{B}\,(\,H_{F}\,)$.\,$\mathcal{G}\,\mathcal{B}^{\,+}\,(\,H_{F}\,)$\, is the set of all positive operators in \,$\mathcal{G}\,\mathcal{B}\,(\,H_{F}\,)$, and \,$C,\,C_{1},\,C_{2}$\, are invertible operators in \,$\mathcal{G}\,\mathcal{B}\,\left(\,H_{F}\,\right)$.\,$l^{\,2}(\,\mathbb{N}\,)$\, denotes the space of square summable scalar-valued sequences indexed by the set of natural numbers \,$\mathbb{N}$. \section{Controlled frame in $n$-Hilbert space} \smallskip\hspace{.6 cm} In this section, we introduce the notion of a controlled frame in an \,$n$-Hilbert space and discuss several of its properties.\,At the end, the dual controlled frame in an \,$n$-Hilbert space is presented. \begin{definition} Let \,$C \,\in\, \mathcal{G}\,\mathcal{B}\left(\,H_{F}\,\right)$.\,A frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, controlled by the operator \,$C$, or a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$, is a family of vectors \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, in \,$H$\, such that there exist constants \,$0 \,<\, A \,\leq\, B \,<\, \infty$\, satisfying \begin{align} A \, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}& \,\leq\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\nonumber\\ &\,\leq\, B\, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\; \;\forall\; f \,\in\, H_{F}\label{eq1}. \end{align} \end{definition} The constants \,$A$\, and \,$B$\, are called the lower and upper bounds of the \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$, respectively. \begin{description} \item[$(i)$] The family of vectors \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is called a \,$C$-controlled tight frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, if \,$A \,=\, B$, and it is called a \,$C$-controlled Parseval frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, if \,$A \,=\, B \,=\, 1$. \item[$(ii)$] If only the right inequality of (\ref{eq1}) is satisfied, then it is called a \,$C$-controlled Bessel sequence associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in \,$H$\, with bound \,$B$.
\end{description} \begin{remark} Suppose that \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled tight frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with bound \,$A$.\,Then for each \,$f \,\in\, H_{F}$, we have \begin{align*} &\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> \,=\, A\, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\\ &\Rightarrow\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, A^{\,-\, 1 \,/\, 2}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,A^{\,-\, 1 \,/\, 2}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\hspace{1.3cm} \,=\, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}. \end{align*} This verifies that \,$\left\{\,A^{\,-\, 1 \,/\, 2}\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled Parseval frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$. \end{remark} \begin{definition}\label{def0.0002} Let \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, be a \,$C$-controlled Bessel sequence associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in \,$H$.\,The operator \,$T_{C} \,:\, l^{\,2}\,(\,\mathbb{N}\,) \,\to\, H_{F}$\, given by \,$T_{C}\,\left(\,\left\{\,c_{i}\,\right\}_{i \,=\, 1}^{\,\infty}\,\right) \,=\, \sum\limits^{\infty}_{i \,=\, 1}\;c_{\,i}\,C\,f_{\,i}$\, is called the pre-frame operator or the synthesis operator.\,The adjoint operator, described by \[T^{\,\ast}_{C} \,:\, H_{F} \,\to\, l^{\,2}\,(\,\mathbb{N}\,),\; T_{C}^{\,\ast}\,f \,=\, \left\{\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,\right\}_{i \,=\, 1}^{\,\infty},\; \,f \,\in\, H_{F},\] is called the analysis operator.\,The \,$C$-controlled frame operator \,$S_{C} \,:\, H_{F} \,\to\, H_{F}$\, is defined as \[S_{C}\,f \,=\, T_{C}\,T^{\,\ast}_{C}\,f \,=\, \sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,C\,f_{\,i}\; \;\forall\; f \,\in\, H_{F}.\] \end{definition} For each \,$f \,\in\, H_{F}$, we have \begin{align*} \left<\,S_{C}\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> &\,=\, \left<\,\sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>. \end{align*} Thus, if \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$, then from (\ref{eq1}), we have \begin{align*} &A\,\left<\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> \,\leq\, \left<\,S_{C}\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> \,\leq\, B\,\left<\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\Rightarrow\, A\,I_{F} \,\leq\, S_{C} \,\leq\, B\,I_{F}. \end{align*} From this we can conclude that \,$S_{C}$\, is bounded, self-adjoint, positive and invertible.
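The operator $S_{C}$ admits a simple finite-dimensional sketch (illustrative only, in the ordinary Hilbert space $\mathbb{C}^{2}$, i.e.\ the classical controlled-frame setting that the above definitions generalize; the family $\{f_{i}\}$ and the operator $C$ are chosen so that $C$ commutes with the usual frame operator, which makes $S_{C}$ self-adjoint).
\begin{verbatim}
import numpy as np

f = [np.array([1.0, 0.0]),
     np.array([0.0, 1.0]),
     np.array([1.0, 0.0])]        # usual frame operator S = diag(2, 1)
C = np.diag([2.0, 0.5])           # positive, invertible, commutes with S

# Controlled frame operator  S_C x = sum_i <x, f_i> C f_i,  i.e.  S_C = C S.
S_C = sum(np.outer(C @ fi, fi) for fi in f)

# Optimal controlled-frame bounds A, B in  A I <= S_C <= B I.
eigs = np.linalg.eigvalsh(S_C)
A, B = eigs.min(), eigs.max()     # here A = 0.5 and B = 4.0

x = np.array([3.0, -2.0])
lhs = sum(np.vdot(fi, x) * np.vdot(x, C @ fi) for fi in f).real
assert A * (x @ x) - 1e-9 <= lhs <= B * (x @ x) + 1e-9
\end{verbatim}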
\begin{theorem}\label{th2.1} Let \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, be a \,$C$-controlled Bessel sequence associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in \,$H$\, with bound \,$B$.\,Then the operator given by \[U \,:\, l^{\,2}\,(\,\mathbb{N}\,) \,\to\, H_{F},\; U\,\left(\,\left\{\,a_{i}\,\right\}_{i \,=\, 1}^{\,\infty}\,\right) \,=\, \sum\limits^{\infty}_{i \,=\, 1}\;a_{\,i}\,C\,f_{\,i}\] is a well-defined and bounded operator from \,$l^{\,2}\,(\,\mathbb{N}\,)$\, into \,$H_{F}$\, with \,$\|\,U\,\| \,\leq\, \sqrt{B}\,\left\|\,C^{\,1 \,/\, 2}\,\right\|$. \end{theorem} \begin{proof} Suppose \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled Bessel sequence associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, in \,$H$\, with bound \,$B$.\,Then for each \,$f \,\in\, H_{F}$, we have \[\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> \,\leq\, B\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}.\] Let \,$\left\{\,a_{i}\,\right\}_{i \,=\, 1}^{\,\infty} \,\in\, l^{\,2}\,(\,\mathbb{N}\,)$.\,For arbitrary \,$l \,>\, k$, we have \begin{align*} &\left\|\,\sum\limits^{l}_{i \,=\, 1}\,a_{\,i}\,C\,f_{\,i} \,-\, \sum\limits^{k}_{i \,=\, 1}\,a_{i}\,C\,f_{\,i}\,\right\|_{F}^{\,2} \,=\, \left\|\,\sum\limits^{l}_{i \,=\, k+1}\,a_{\,i}\,C\,f_{\,i},\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\\ &=\sup\limits_{f \,\in\, H_{F},\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left|\,\left<\,\sum\limits^{l}_{i \,=\, k+1}\,a_{\,i}\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right|^{\,2}\\ &=\,\sup\limits_{f \,\in\, H_{F},\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left|\,\sum\limits^{l}_{i \,=\, k+1}\,a_{\,i}\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right|^{\,2}\\ &\leq\,\sup\limits_{f \,\in\, H_{F},\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left\{\,\sum\limits^{l}_{i \,=\, k+1}\,\left<\,f,\, C\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right\}\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left|\,a_{i}\,\right|^{\,2}\\ &\leq\,\sup\limits_{\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left<\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left<\,f,\, C\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left|\,a_{i}\,\right|^{\,2}\\ &=\,\sup\limits_{\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left<\,C\,S_{C}\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left|\,a_{i}\,\right|^{\,2}\\ &=\,\sup\limits_{\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left\|\,\left(\,C\,S_{C}\,\right)^{1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left|\,a_{i}\,\right|^{\,2}\\ &\leq\,\sup\limits_{\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\| \,=\, 1}\,\left\|\,C^{1 \,/\, 2}\,\right\|^{\,2}\,\left<\,S_{C}\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left|\,a_{i}\,\right|^{\,2}\\ &\leq\, B\,\left\|\,C^{1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, k \,+\, 1}^{\,l}\,\left|\,a_{i}\,\right|^{\,2}. \end{align*} Since \,$\left\{\,a_{i}\,\right\}_{i \,=\, 1}^{\,\infty} \,\in\, l^{\,2}\,(\,\mathbb{N}\,)$, the right-hand side tends to \,$0$\, as \,$k,\, l \,\to\, \infty$.\,This shows that the partial sums of \,$\sum\limits^{\infty}_{i \,=\, 1}\,a_{\,i}\,C\,f_{\,i}$\, form a Cauchy sequence, which is convergent in \,$H_{F}$.\,Thus \,$U$\, is well-defined and, taking \,$k \,=\, 0$\, and letting \,$l \,\to\, \infty$\, in the above estimate, bounded with \,$\|\,U\,\| \,\leq\, \sqrt{B}\,\left\|\,C^{\,1 \,/\, 2}\,\right\|$. \end{proof} For arbitrary \,$f \,\in\, H_{F}$\, and \,$\left\{\,a_{i}\,\right\}_{i \,=\, 1}^{\,\infty} \,\in\, l^{\,2}\,(\,\mathbb{N}\,)$, we have \begin{align*} \left<\,f,\, U\,\left\{\,a_{i}\,\right\} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>& \,=\, \left<\,f,\, \sum\limits_{i \,=\, 1}^{\,\infty}\,a_{i}\,C\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\overline{\,a_{i}}\,\left<\,C\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>. \end{align*} Therefore, \[\left<\,f,\, U\,\left\{\,a_{i}\,\right\}\right>_{F} \,=\, \left<\,\left\{\,\left<\,C\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right\},\, \left\{\,a_{i}\,\right\}\,\right>\] and hence \[U^{\,\ast}\,f \,=\, \left\{\,\left<\,C\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right\}_{i \,=\, 1}^{\,\infty}\; \;\forall\; f \,\in\, H_{F}.\] The following theorem shows that any controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, is a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$. \begin{theorem} Let \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, be a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, and \,$C \,\in\, \mathcal{G}\,\mathcal{B}^{\,+\,}\left(\,H_{F}\,\right)$.\,Then \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$.
\end{theorem} \begin{proof} Suppose that \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with bounds \,$A$\, and \,$B$.\,Then for each \,$f \,\in\, H_{F}$, we have \begin{align*} &A\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2} \,=\, A\,\left\|\,C^{\, 1 \,/\, 2}\,C^{\,-\, 1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}\\ &\leq\, A\,\left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\left\|\,C^{\,-\, 1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}\\ &\leq\, \left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,C^{\,-\, 1 \,/\, 2}\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, C^{\,-\, 1 \,/\, 2}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,C^{\,-\, 1 \,/\, 2}\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,f_{\,i},\, C^{\, 1 \,/\, 2}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\left<\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,C^{\,-\, 1 \,/\, 2}\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,f_{\,i},\, C^{\, 1 \,/\, 2}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\left<\,S_{F}\,C^{\,-\, 1 \,/\, 2}\,f,\, C^{\, 1 \,/\, 2}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\left<\,S_{F}\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> \,=\, \left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left|\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right|^{\,2}\\ &\Rightarrow\,A\,\left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,-\, 2}\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2} \,\leq\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left|\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right|^{\,2}. \end{align*} On the other hand, for each \,$f \,\in\, H_{F}$, we have \begin{align*} &\sum\limits_{i \,=\, 1}^{\,\infty}\,\left|\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right|^{\,2} \,=\, \left<\,f,\, S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ & =\,\left<\,f,\, C^{\,-\, 1}\,C\,S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \left<\,\left(\,C^{\,-\, 1}\,C\,S_{F}\,\right)^{\,1 \,/\, 2} f,\, \left(\,C^{\,-\, 1}\,C\,S_{F}\,\right)^{\,1 \,/\, 2}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \left\|\,\left(\,C^{\,-\, 1}\,C\,S_{F}\,\right)^{\,1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\\ &\leq\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left\|\,\left(\,C\,S_{F}\,\right)^{\,1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\\ &=\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left<\,f,\, C\,S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left<\,f,\, S_{C}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\leq\, B\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}. 
\end{align*} Thus, \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with bounds \\$A\,\left\|\,C^{\,1 \,/\, 2}\,\right\|^{\,-\, 2}$\, and \,$B\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}$.\,This completes the proof. \end{proof} The next theorem shows that any frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, is a controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, under some conditions. \begin{theorem} Let \,$C \,\in\, \mathcal{G}\,\mathcal{B}^{\,+\,}\left(\,H_{F}\,\right)$\, be a self-adjoint operator.\,If \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$, then \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$. \end{theorem} \begin{proof} Suppose that \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with bounds \,$A$\, and \,$B$.\,Then for each \,$f \,\in\, H_{F}$, we have \begin{align*} &A\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2} \,=\, A\,\left\|\,C^{\,-\, 1 \,/\, 2}\,C^{\,1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}\\ &\leq\, A\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left\|\,C^{\,1 \,/\, 2}\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}\\ &\leq\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,C^{\,1 \,/\, 2}\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C^{\,1 \,/\, 2}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left<\,C^{\,1 \,/\, 2}\,f,\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f_{\,i},\, C^{\,1 \,/\, 2}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left<\,C^{\,1 \,/\, 2}\,f,\, C^{\,1 \,/\, 2}\,S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\left<\,f,\, C\,S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\Rightarrow\, A\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,-\, 2}\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}\\ & \,\leq\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>. \end{align*} On the other hand, for each \,$f \,\in\, H_{F}$, we have \begin{align*} &\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\left<\,f,\, C\,S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right> \,=\, \left<\,C\,f,\, S_{F}\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\leq\,\left\|\,C\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|\,\left\|\,S_{F}\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|\\ &\leq\,\|\,C\,\|\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|\,B\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|\\ &\,=\, B\,\|\,C\,\|\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|^{\,2}.
\end{align*} Thus, \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$. \\This completes the proof. \end{proof} From every controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, we can get a canonical controlled tight frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, with frame bound \,$1$. \begin{theorem} Let \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, be a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with frame operator \,$S_{C}$.\,Suppose \,$S_{C}^{\,-\, 1}$\, commutes with \,$C$.\,Then \,$\left\{\,S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled Parseval frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$. \end{theorem} \begin{proof} The existence of a unique positive square root of \,$S_{C}^{\,-\, 1}$\, follows from Theorem \ref{th1.05}; since \,$S_{C}^{\,-\, 1 \,/\, 2}$\, is a limit of a sequence of polynomials in \,$S_{C}^{\,-\, 1}$, it commutes with \,$S_{C}^{\,-\, 1}$\, and therefore with \,$S_{C}$.\,Then for each \,$f \,\in\, H_{F}$, we have \begin{align*} f &\,=\, S_{C}^{\,-\, 1 \,/\, 2}\,S_{C}\,S_{C}^{\,-\, 1 \,/\, 2}\,f\\ &=\,\sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,S_{C}^{\,-\, 1 \,/\, 2}\,C\,f_{\,i}\\ &=\,\sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,C\,S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i}. \end{align*} Now, for each \,$f \,\in\, H_{F}$, we have \begin{align*} &\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2} \,=\, \left<\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\,=\, \left<\,\sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,C\,S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\,\sum\limits_{i \,=\, 1}^{\,\infty}\, \left<\,f,\, S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n} \,\right>\,\left<\,C\,S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>. \end{align*} Thus, \,$\left\{\,S_{C}^{\,-\, 1 \,/\, 2}\,f_{\,i}\,\right\}^{\,\infty}_{\,i \,=\, 1}$\, is a \,$C$-controlled Parseval frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$. \end{proof} We now give the concept of a dual controlled frame in an \,$n$-Hilbert space. \begin{definition}\label{defi1} Let \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, be a \,$C$-controlled frame associated to \,$\left(\,a_{\,2} \,,\, \cdots \,,\, a_{\,n}\,\right)$\, for \,$H$.\,Then a \,$C$-controlled frame \,$\left\{\,g_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, associated to \,$\left(\,a_{\,2} \,,\, \cdots \,,\, a_{\,n}\,\right)$\, satisfying \[f \,=\, \sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\, g_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,f_{\,i}\; \;\;\forall\; f \,\in\, H_{F}\] is called a dual controlled frame or an alternative dual controlled frame associated to \,$\left(a_{\,2},\, \cdots,\, a_{\,n}\right)$\, of \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$. \end{definition}
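In the classical Hilbert-space case, the choice $g_{\,i} \,=\, \left(\,S_{C}^{\,\ast}\,\right)^{-1}f_{\,i}$ always satisfies the reconstruction identity of Definition \ref{defi1}, since $\sum_{i}\left<f, (S_{C}^{\ast})^{-1}f_{i}\right>C\,f_{i} \,=\, S_{C}\,S_{C}^{-1}f \,=\, f$. The following finite-dimensional sketch (illustrative data in $\mathbb{C}^{2}$, not taken from the paper) verifies this numerically.
\begin{verbatim}
import numpy as np

f = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
C = np.array([[1.5, 0.2], [0.2, 1.0]])            # positive and invertible

S_C = sum(np.outer(C @ fi, fi) for fi in f)       # matrix of S_C
g = [np.linalg.inv(S_C.conj().T) @ fi for fi in f]  # candidate dual vectors

x = np.array([0.7, -1.3])
recon = sum(np.vdot(gi, x) * (C @ fi) for fi, gi in zip(f, g))
assert np.allclose(recon, x)                      # f = sum <f, g_i> C f_i
\end{verbatim}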
\begin{theorem}\label{th3.1} Let \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, and \,$\left\{\,g_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, be two \,$C$-controlled Bessel sequences associated to \,$\left(\,a_{\,2} \,,\, \cdots \,,\, a_{\,n}\,\right)$\, in \,$H$.\,Then the following are equivalent: \begin{itemize} \item[$(i)$] $f \,=\, \sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\, g_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,f_{\,i}\; \;\forall\; f \,\in\, H_{F}$. \item[$(ii)$] $f \,=\, \sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,g_{\,i}\; \;\forall\; f \,\in\, H_{F}$. \end{itemize} \end{theorem} \begin{proof}$(\,i\,) \,\Rightarrow\, (\,ii\,)$ Let \,$T_{C}$\, and \,$T_{C^{\,\prime}}$\, be the pre-frame operators of \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, and \,$\left\{\,g_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$, respectively.\;Composing \,$T_{C}$\, with the adjoint of \,$T_{C^{\,\prime}}$, for all \,$f \,\in\, H_{F}$, we get \[T_{C}\,T_{C^{\,\prime}}^{\,\ast} \,:\, H_{F} \,\to\, H_{F},\; T_{C}\,T_{C^{\,\prime}}^{\,\ast}\,f \,=\, \sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\, g_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,f_{\,i}.\] Now, in terms of pre-frame operators, \,$(\,i\,)$\, can be written as \;$T_{C}\,T_{C^{\,\prime}}^{\,\ast} \,=\, I_{F}$, and this is equivalent to \,$T_{C^{\,\prime}}\,T^{\,\ast}_{C} \,=\, I_{F}$.\;Therefore, for each \,$f \,\in\, H_{F}$, \[f \,=\, \sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,g_{\,i}.\] Similarly, \,$(\,ii\,) \,\Rightarrow\, (\,i\,)$\, follows. \end{proof} \begin{remark} Suppose that the equivalent conditions of Theorem \ref{th3.1} are satisfied.\;Then using the Cauchy--Schwarz inequality, for every \,$f \,\in\, H_{F}$, we have \begin{align*} &\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2} \,=\, \left<\,f,\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &\,=\, \left<\,\sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,C\,g_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ &=\, \sum\limits^{\,\infty}_{i \,=\, 1}\, \left<\,f,\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,g_{\,i},\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\\ & \leq \left(\sum\limits^{\,\infty}_{i \,=\, 1}\left|\,\left<\,f,\, f_{\,i}\,|\,a_{2},\, \cdots,\, a_{n}\,\right>\,\right|^{\,2}\right)^{1 \,/\, 2} \left(\,\sum\limits^{\,\infty}_{i \,=\, 1}\left|\,\left<\,C\,f,\, g_{\,i} \,|\, a_{2},\, \cdots,\, a_{n}\,\right>\,\right|^{\,2}\right)^{1 \,/\, 2}\\ &\leq\, \left(\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right)^{1 \,/\, 2}\,\times\\ & \left(\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,C\,f,\, g_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,g_{\,i},\, C\,f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right)^{1 \,/\, 2}\\ &\leq\,\left(\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\right)^{1 \,/\, 2}\,\times\\ &\hspace{2cm} B^{1 \,/\, 2}\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,2}\, \left\|\,C\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|\\
&\Rightarrow\, \dfrac{1}{B\,\left\|\,C^{\,-\, 1 \,/\, 2}\,\right\|^{\,4}\,\|\,C\,\|^{\,2}}\; \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}\\ &\hspace{1cm} \,\leq\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>\,\left<\,C\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>. \end{align*} This shows that \,$\left\{\,f_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, is a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$.\;Similarly, it can be shown that \,$\left\{\,g_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, is also a \,$C$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$. \end{remark} \section{Controlled frame in tensor product of $n$-Hilbert spaces} \smallskip\hspace{.6 cm}In this section, we present the concept of a controlled frame in the tensor product of \,$n$-Hilbert spaces and give a characterization.\,Dual controlled frames in the tensor product of \,$n$-Hilbert spaces are also described.\,Finally, we consider the direct sum of controlled frames in \,$n$-Hilbert spaces.\\ Let \,$H$\, and \,$K$\, be two \,$n$-Hilbert spaces associated with the \,$n$-inner products \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>_{1}$\, and \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>_{2}$, respectively.\;The tensor product of \,$H$\, and \,$K$\, is denoted by \,$H \,\otimes\, K$\, and it is defined to be an \,$n$-inner product space associated with the \,$n$-inner product given by \[\left<\,f_{\,1} \,\otimes\, g_{\,1},\, f_{\,2} \,\otimes\, g_{\,2} \,|\, f_{\,3} \,\otimes\, g_{\,3},\, \,\cdots,\, f_{\,n} \,\otimes\, g_{\,n}\,\right>\] \begin{equation}\label{eqn1} \,=\, \left<\,f_{\,1},\, f_{\,2} \,|\, f_{\,3},\, \,\cdots,\, f_{\,n}\,\right>_{1}\,\left<\,g_{\,1},\, g_{\,2} \,|\, g_{\,3},\, \,\cdots,\, g_{\,n}\,\right>_{2}, \end{equation} for all \,$f_{\,1},\, f_{\,2},\, f_{\,3},\, \,\cdots,\, f_{\,n} \,\in\, H$\, and \,$g_{\,1},\, g_{\,2},\, g_{\,3},\, \,\cdots,\, g_{\,n} \,\in\, K$.\\ The \,$n$-norm on \,$H \,\otimes\, K$\, is defined by \[\left\|\,f_{\,1} \,\otimes\, g_{\,1},\, f_{\,2} \,\otimes\, g_{\,2},\, \,\cdots,\,\, f_{\,n} \,\otimes\, g_{\,n}\,\right\|\] \begin{equation}\label{eqn1.1} \hspace{.6cm} =\,\left\|\,f_{\,1},\, f_{\,2},\, \cdots,\, f_{\,n}\,\right\|_{1}\;\left\|\,g_{\,1},\, g_{\,2},\, \cdots,\, g_{\,n}\,\right\|_{2}, \end{equation} for all \,$f_{\,1},\, f_{\,2},\, \,\cdots,\, f_{\,n} \,\in\, H\, \;\text{and}\; \,g_{\,1},\, g_{\,2},\, \,\cdots,\, g_{\,n} \,\in\, K$, where the \,$n$-norms \,$\left\|\,\cdot,\, \cdots,\, \cdot \,\right\|_{1}$\, and \,$\left\|\,\cdot,\, \cdots,\, \cdot \,\right\|_{2}$\, are generated by \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>_{1}$\, and \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>_{2}$, respectively.\;The space \,$H \,\otimes\, K$\, is complete with respect to the above \,$n$-inner product.\;Therefore the space \,$H \,\otimes\, K$\, is an \,$n$-Hilbert space.\\ Consider \,$G \,=\, \left\{\,b_{\,2},\, b_{\,3},\, \cdots,\, b_{\,n}\,\right\}$, where \,$b_{\,2},\, b_{\,3},\, \cdots,\, b_{\,n}$\, are fixed elements in \,$K$, and let \,$L_{G}$\, denote the linear subspace of \,$K$\, spanned by \,$G$.\,Now, we can define the Hilbert space \,$K_{G}$\, with respect to the inner product given by \[\left<\,p \,+\, L_{G}\,,\, q \,+\, L_{G}\,\right>_{G} \,=\, \left<\,p \,,\, q\,\right>_{G} \,=\, \left<\,p \,,\, q \,|\, b_{\,2} \,,\, \cdots \,,\, b_{\,n} \,\right>_{2}\; \;\forall\; p,\, q \,\in\, K.\]
\begin{remark} According to Definition \ref{def0.001}, \,$H_{F} \,\otimes\, K_{G}$\, is the Hilbert space with respect to the inner product: \[\left<\,p \,\otimes\, q \,,\, p^{\,\prime} \,\otimes\, q^{\,\prime}\,\right> \,=\, \left<\,p \,,\, p^{\,\prime}\,\right>_{F}\;\left<\,q \,,\, q^{\,\prime}\,\right>_{G},\] for all \,$p,\, p^{\,\prime} \,\in\, H_{F}\; \;\text{and}\; \;q,\, q^{\,\prime} \,\in\, K_{G}$. \end{remark} \begin{definition} Let \,$C_{1} \,\otimes\, C_{2} \,\in\, \mathcal{G}\,\mathcal{B}\left(\,H_{F} \,\otimes\, K_{G}\,\right)$, let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, be sequences of vectors in \,$H$\, and \,$K$, respectively, and let \,$a_{\,2} \,\otimes\, b_{\,2},\, a_{\,3} \,\otimes\, b_{\,3},\,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}$\, be fixed elements in \,$H \,\otimes\, K$.\;Then the sequence \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, in \,$H \,\otimes\, K$\, is said to be a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \,\otimes\, K$\, if there exist constants \,$0 \,<\, A \,\leq\, B \,<\, \infty$\, such that \begin{align} &A \left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}\label{eqp1.1}\\ & \leq \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, f_{\,i} \,\otimes\, g_{\,j} \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\nonumber\\ &\hspace{1.5cm}\left<\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\nonumber\\ &\leq\, B\,\left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}; \;\forall\, \;f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}.\nonumber \end{align} The constants \,$A,\,B$\, are called the frame bounds.\;If the sequence \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, satisfies only the right inequality of (\ref{eqp1.1}), then it is called a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled Bessel sequence associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, in \,$H \,\otimes\, K$. \end{definition} \begin{theorem}\label{th2.1} Let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, be sequences of vectors in the \,$n$-Hilbert spaces \,$H$\, and \,$K$, respectively.\;The sequence \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1} \,\subseteq\, H \,\otimes\, K$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$\, if and only if \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, is a \,$C_{1}$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, is a \,$C_{2}$-controlled frame associated to \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$K$.
\end{theorem} \begin{proof} Suppose that the sequence \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$.\;Then, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G} \,-\, \{\,\theta \,\otimes\, \theta\,\}$, there exist constants \,$A,\,B \,>\, 0$\, such that \begin{align*} &A \left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}\\ & \leq \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, f_{\,i} \,\otimes\, g_{\,j} \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1.5cm}\left<\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &\leq\, B\,\left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}. \end{align*} Using (\ref{eqn1}) and (\ref{eqn1.1}), we get \begin{align*} &A\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\\ &\leq\,\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i}\,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\times\\ &\hspace{1.5cm}\left<\,C_{1}\,f_{\,i},\, f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\\ &\leq\,B\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\\ &\Rightarrow\, A\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\\ &\leq\,\sum\limits_{\,i \,=\, 1}^{\,\infty}\,\left<\,f ,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\times\\ &\hspace{1.5cm}\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\\ &\leq\,B\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}. \end{align*} Since \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$\, is a non-zero element, i.\,e., \,$f \,\in\, H_{F}$\, and \,$g \,\in\, K_{G}$\, are non-zero elements, and since we may assume that every \,$f_{\,i},\, C_{1}\,f_{\,i}$\, and \,$a_{\,2},\, \cdots$, \,$ a_{\,n}$\, are linearly independent and every \,$g_{\,j},\, C_{2}\,g_{\,j}$\, and \,$b_{\,2},\, \cdots$, \,$ b_{\,n}$\, are linearly independent, the sums \[\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2},\] \[\sum\limits_{\,i \,=\, 1}^{\,\infty}\,\left<\,f ,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\] are non-zero.\;Therefore, by the above inequality, we get \begin{align*} &\dfrac{A\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\,\left\|\,f,\, a_{\,2},\, \cdots,\,
a_{\,n}\,\right\|_{1}^{\,2}}{\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}}\\ &\leq\,\sum\limits_{\,i \,=\, 1}^{\,\infty}\,\left<\,f ,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\\ &\leq\,\dfrac{B\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}}{\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}} \end{align*} This implies that \begin{align*} A_{1} \, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|_{1}^{\,2}& \,\leq\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\nonumber\\ &\,\leq\, B_{1}\, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\; \;\forall\; f \,\in\, H_{F}, \end{align*} where \,$A_{1} \,=\, \inf\limits_{g \,\in\, K_{G}}\,\left\{\,\dfrac{A\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}}{\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}}\,\right\}$\, and \\$B_{1} \,=\, \sup\limits_{g \,\in\, K_{G}}\,\left\{\,\dfrac{B\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}}{\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}}\,\right\}$.\\This shows that \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, is a \,$C_{1}$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$.\;Similarly, it can be shown that \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, is a \,$C_{2}$-controlled frame associated to \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$K$.\\ Conversely, suppose that \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, is a \,$C_{1}$-controlled frame associated to \,$(\,a_{2},\, \cdots,\, a_{n}\,)$\, for \,$H$\, with bounds \,$A,\,B$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, is a \,$C_{2}$-controlled frame associated to \,$\left(\,b_{2},\, \cdots,\, b_{n}\,\right)$\, for \,$K$\, with bounds \,$C,\,D$.\,Then \begin{align*} A \, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n} \,\right\|_{1}^{\,2}& \,\leq\, \sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\nonumber\\ &\,\leq\, B\, \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\; \;\forall\; f \,\in\, H_{F}, \end{align*} \begin{align*} C\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2} \,&\leq\,\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\\ &\leq\, D\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\; \;\forall\; g \,\in\, K_{G}. 
\end{align*} Multiplying the above two inequalities and using (\ref{eqn1}) and (\ref{eqn1.1}), we get \begin{align*} &A\,C\,\left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}\\ & \leq \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, f_{\,i} \,\otimes\, g_{\,j} \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1.5cm}\left<\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &\leq\, B\,D\,\left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}; \;\forall\, \;f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}. \end{align*} Hence, \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \otimes b_{\,2},\, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,)$\, for \,$H \,\otimes\, K$.\;This completes the proof. \end{proof} \begin{remark} Let \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, be a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2}$, \,$\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \,\otimes\, K$.\;According to Definition \ref{def0.0001}, the frame operator \,$S_{C_{1} \,\otimes\, C_{2}} \,:\, H_{F} \,\otimes\, K_{G} \,\to\, H_{F} \,\otimes\, K_{G}$\, is described by \begin{align*} &S_{C_{1} \,\otimes\, C_{2}}\,(\,f \,\otimes\, g\,)\\ & = \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\left<\,f \otimes g,\, f_{\,i} \otimes g_{\,j} \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right>\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\left(\,f_{\,i} \otimes g_{\,j}\,\right) \end{align*} for all \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$. \end{remark} \begin{proposition} If \,$S_{C_{1}},\, \,S_{C_{2}}$\, and \,$S_{C_{1} \,\otimes\, C_{2}}$\, are the corresponding frame operators for \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty},\, \,\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, and \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$, respectively, then \,$S_{C_{1} \,\otimes\, C_{2}} \,=\, S_{C_{1}} \,\otimes\, S_{C_{2}}$\, and \,$S_{C_{1} \,\otimes\, C_{2}}^{\,-\, 1} \,=\, S^{\,-\, 1}_{C_{1}} \,\otimes\, S^{\,-\, 1}_{C_{2}}$.
\end{proposition} \begin{proof} Since \,$S_{C_{1} \,\otimes\, C_{2}}$\, is the frame operator for \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$, we have \begin{align*} &S_{C_{1} \,\otimes\, C_{2}}\,(\,f \,\otimes\, g\,)\\ & = \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\left<\,f \otimes g,\, f_{\,i} \otimes g_{\,j} \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right>\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\left(\,f_{\,i} \otimes g_{\,j}\,\right)\\ & \,=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left(\,C_{1}\,f_{\,i} \,\otimes\, C_{2}\,g_{\,j}\,\right)\\ &= \left(\sum\limits_{\,i \,=\, 1}^{\,\infty}\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,C_{1}\,f_{\,i}\right) \,\otimes\, \left(\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,C_{2}\,g_{\,j}\right)\\ &=\, S_{C_{1}}\,f \,\otimes\, S_{C_{2}}\,g \,=\, \left(\,S_{C_{1}} \,\otimes\, S_{C_{2}}\,\right)\,(\,f \,\otimes\, g\,)\; \;\forall\; f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}. \end{align*} Thus, \,$S_{C_{1} \,\otimes\, C_{2}} \,=\, S_{C_{1}} \,\otimes\, S_{C_{2}}$.\;Since \,$S_{C_{1}}$\, and \,$S_{C_{2}}$\, are invertible, by Theorem \ref{th1.1} \,$(iv)$, \,$S_{C_{1} \,\otimes\, C_{2}}^{\,-\, 1} \,=\, \left(\,S_{C_{1}} \,\otimes\, S_{C_{2}}\,\right)^{\,-\, 1} \,=\, S^{\,-\, 1}_{C_{1}} \,\otimes\, S^{\,-\, 1}_{C_{2}}$.\;This completes the proof. \end{proof} \begin{proposition} Let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, be a \,$C_{1}$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with bounds \,$A,\,B$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, be a \,$C_{2}$-controlled frame associated to \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$K$\, with bounds \,$C,\,D$, having their corresponding frame operators \,$S_{C_{1}}$\, and \,$S_{C_{2}}$, respectively.\;Then \,$A\,C\,I_{F \,\otimes\, G} \,\leq\, S_{C_{1} \,\otimes\, C_{2}} \,\leq\, B\,D\,I_{F \,\otimes\, G}$, where \,$I_{F \,\otimes\, G}$\, is the identity operator on \,$H_{F} \,\otimes\, K_{G}$. \end{proposition} \begin{proof} Since \,$S_{C_{1}}$\, and \,$S_{C_{2}}$\, are frame operators, we have \[A\,I_{F} \,\leq\, S_{C_{1}} \,\leq\, B\,I_{F},\; \;C\,I_{G} \,\leq\, S_{C_{2}} \,\leq\, D\,I_{G},\] where \,$I_{F}$\, and \,$I_{G}$\, are the identity operators on \,$H_{F}$\, and \,$K_{G}$, respectively.\;Taking the tensor product of the above two inequalities, we get \begin{align*} &A\,C \left(\,I_{F} \,\otimes\, I_{G}\,\right) \,\leq\, \left(\,S_{C_{1}} \,\otimes\, S_{C_{2}}\,\right) \,\leq\, B\,D\,\left(\,I_{F} \,\otimes\, I_{G}\,\right)\\ &\Rightarrow\,A\,C\,I_{F \,\otimes\, G} \,\leq\, S_{C_{1} \,\otimes\, C_{2}} \,\leq\, B\,D\,I_{F \,\otimes\, G}. \end{align*} This completes the proof. \end{proof} Next, we establish the frame decomposition formula in \,$H \,\otimes\, K$.
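Before doing so, we pause for a minimal numerical sanity check of the factorization \,$S_{C_{1} \,\otimes\, C_{2}} \,=\, S_{C_{1}} \,\otimes\, S_{C_{2}}$\, (this sketch is ours and is purely illustrative: it models \,$H_{F}$\, and \,$K_{G}$\, as real finite-dimensional Hilbert spaces with finitely many frame vectors, so the \,$n$-inner products reduce to ordinary inner products and the controlled frame operator takes the matrix form \,$S_{C} \,=\, C\,F\,F^{T}$, where the columns of \,$F$\, are the frame vectors).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d1, d2, m1, m2 = 3, 4, 7, 9            # toy dimensions and frame sizes
F = rng.standard_normal((d1, m1))      # columns f_i: a frame for R^{d1}
G = rng.standard_normal((d2, m2))      # columns g_j: a frame for R^{d2}

def positive_invertible(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + np.eye(d)         # positive definite controller

C1, C2 = positive_invertible(d1), positive_invertible(d2)

# S_C f = sum_i <f, f_i> C f_i, i.e. S_C = C F F^T in matrix form
S1 = C1 @ F @ F.T
S2 = C2 @ G @ G.T

# frame operator of {f_i (x) g_j} controlled by C1 (x) C2;
# the columns of np.kron(F, G) are exactly the tensors f_i (x) g_j
S12 = np.kron(C1, C2) @ np.kron(F, G) @ np.kron(F, G).T

assert np.allclose(S12, np.kron(S1, S2))
\end{verbatim}
The assertion succeeds because \,$\mathrm{kron}(F, G)\,\mathrm{kron}(F, G)^{T} \,=\, \mathrm{kron}(F F^{T},\, G G^{T})$, which is the finite-dimensional shadow of the computation in the proof above.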
\begin{proposition}\label{eq2.4} Let \,$\left\{\,f_{\,i}\,\right\}^{\infty}_{i \,=1\, }$\, and \,$\left\{\,g_{\,j}\,\right\}^{\infty}_{j \,=1\, }$\, be \,$C_{1}$-controlled and \,$C_{2}$-controlled frames associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, and \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$H$\, and \,$K$\, with the corresponding frame operators \,$S_{C_{1}}$\, and \,$S_{C_{2}}$, respectively.\;Then for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \begin{align*} & f \,\otimes\, g \\ &\,=\, \sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f \,\otimes\, g,\, S^{\,-1}_{C_{1} \,\otimes\, C_{2}}\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right >\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right), \\ & f \,\otimes\, g \\ &\,=\, \sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f \,\otimes\, g,\, f_{\,i} \,\otimes\, g_{\,j} \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right >\,S^{\,-1}_{C_{1} \,\otimes\, C_{2}}\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right), \end{align*} provided both series converge unconditionally for all \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$. \end{proposition} \begin{proof} Since \,$S_{C_{1}}$\, is the corresponding frame operator for \,$\left\{\,f_{\,i}\,\right\}^{\infty}_{i \,=1\, }$, we have \begin{align*} f \,=\, S_{C_{1}}\, S^{\,-1}_{C_{1}}\,f &\,=\, \sum\limits^{\infty}_{i \,=\, 1}\,\left <\,S^{\,-1}_{C_{1}}\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\,\right >\,C_{1}\,f_{\,i}\\ & \;=\; \sum\limits^{\infty}_{i \,=\, 1}\,\left <\,f,\, S^{\,-1}_{C_{1}}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\,\right >\,C_{1}\,f_{\,i}\; \;\forall\; f \,\in\, H_{F}. \end{align*} Similarly, it can be shown that \[g \,=\, \sum\limits^{\infty}_{j \,=\, 1}\,\left <\,g,\, S^{\,-1}_{C_{2}}\,g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\,\right >\,C_{2}\,g_{\,j}\; \;\forall\; g \,\in\, K_{G}.\] Thus, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \begin{align*} & f \,\otimes\, g \\ &\,= \left(\sum\limits^{\infty}_{i \,=\, 1}\left <\,f,\, S^{\,-1}_{C_{1}}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\,\right>\,C_{1}\,f_{\,i}\right) \otimes \left(\sum\limits^{\infty}_{j \,=\, 1}\left <\,g,\, S^{\,-1}_{C_{2}}\,g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\,\right >\,C_{2}\,g_{\,j}\right)\\ &=\,\sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f,\, S^{\,-1}_{C_{1}}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\,\right>_{1}\,\left <\,g,\, S^{\,-1}_{C_{2}}\,g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\,\right>_{2}\,\left(\,C_{1}\,f_{\,i} \,\otimes\, C_{2}\,g_{\,j}\,\right)\\ &=\sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f \,\otimes\, g,\, S^{\,-1}_{C_{1}}\,f_{\,i} \,\otimes\, S^{\,-1}_{C_{2}}\,g_{\,j} \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right>\,\left(\,C_{1}\,f_{\,i} \,\otimes\, C_{2}\,g_{\,j}\,\right)\\ &=\sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f \,\otimes\, g,\, \left(\,S^{\,-1}_{C_{1}} \,\otimes\, S^{\,-1}_{C_{2}}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right>\,\left(\,C_{1}\,f_{\,i} \,\otimes\, C_{2}\,g_{\,j}\,\right)\\ &\,=\, \sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f \,\otimes\, g,\, S^{\,-1}_{C_{1} \,\otimes\, C_{2}}\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right >\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right). \end{align*}
Since \,$\left\{\,\left <\,f,\, S^{\,-1}_{C_{1}}\,f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\,\right >_{1}\,\right \}^{\infty}_{i \,=\, 1},\, \left\{\,\left <\,g,\, S^{\,-1}_{C_{2}}\,g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\,\right >_{2}\,\right \}^{\infty}_{j \,=\, 1} \,\in\, l^{\,2} (\,\mathbb{N}\,)$\, and \,$\left\{\,f_{\,i}\,\right\}^{\infty}_{i \,=\, 1}$\, and \,$\left\{\,g_{\,j}\,\right\}^{\infty}_{j \,=1\, }$\, are \,$C_{1}$-controlled and \,$C_{2}$-controlled Bessel sequences associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, and \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, in \,$H$\, and \,$K$, respectively, the above series converge unconditionally.\;On the other hand, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, \begin{align*} &f \,\otimes\, g \,=\, S^{\,-1}_{C_{1}}\, S_{C_{1}}\,f \,\otimes\, S^{\,-1}_{C_{2}}\, S_{C_{2}}\,g \\ &=\,S^{\,-1}_{C_{1}}\,\left(\,\sum\limits^{\infty}_{i \,=\, 1}\,\left <\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\,\right >_{1}\,C_{1}\,f_{\,i}\,\right) \,\otimes\, S^{\,-1}_{C_{2}}\,\left(\,\sum\limits^{\infty}_{j \,=\, 1}\,\left <\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\,\right >_{2}\,C_{2}\,g_{\,j}\,\right)\\ &\,=\, \sum\limits^{\infty}_{i,\,j \,=\, 1}\,\left <\,f \,\otimes\, g,\, f_{\,i} \,\otimes\, g_{\,j} \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right >\,S^{\,-1}_{C_{1} \,\otimes\, C_{2}}\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right). \end{align*} This completes the proof. \end{proof} \begin{corollary} Let \,$\left\{\,f_{\,i}\,\right\}^{\infty}_{i \,=1\, }$\, and \,$\left\{\,g_{\,j}\,\right\}^{\infty}_{j \,=1\, }$\, be \,$C_{1}$-controlled and \,$C_{2}$-controlled tight frames associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, and \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$H$\, and \,$K$\, with bounds \,$A_{1}$\, and \,$A_{2}$, respectively.\;Then for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \[ f \,\otimes\, g \,= \dfrac{1}{A_{1}\,A_{2}}\sum\limits^{\infty}_{i,\,j \,=\, 1}\left<\,f \,\otimes\, g,\, f_{\,i} \,\otimes\, g_{\,j} \,|\, a_{\,2} \otimes b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right >\left(\,C_{1} \,\otimes\, C_{2}\,\right)\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right).\] \end{corollary} In the next theorem, we establish that the image of a controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, under a bounded linear operator is again a controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, if and only if the operator is invertible.
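Before turning to it, here is a second small sketch (again ours and purely illustrative): in a real finite-dimensional model, with the controller chosen to commute with the ordinary frame operator so that \,$S_{C}$\, is self-adjoint, the first decomposition formula of Proposition \ref{eq2.4} can be checked numerically in the base space.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 6
F = rng.standard_normal((d, m))      # columns f_i: a frame for R^d
G0 = F @ F.T                         # ordinary frame operator (SPD)
C = G0 + np.eye(d)                   # positive, invertible, commutes with G0

S = C @ G0                           # controlled frame operator S_C (SPD here)
f = rng.standard_normal(d)

# f = sum_i <f, S^{-1} f_i> C f_i   (first decomposition formula)
coeff = F.T @ np.linalg.inv(S) @ f   # entries <f, S^{-1} f_i>
assert np.allclose(C @ F @ coeff, f)
\end{verbatim}
Choosing \,$C$\, as a polynomial in \,$F\,F^{T}$\, is only a convenience that keeps \,$S_{C}$\, symmetric in this toy model; it is not required by the theory above.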
\begin{theorem} Let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, be a \,$C_{1}$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, with bounds \,$A,\,B$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, be a \,$C_{2}$-controlled frame associated to \,$(\,b_{\,2},\, \cdots$, \,$b_{\,n}\,)$\, for \,$K$\, with bounds \,$C,\,D$, having their corresponding frame operators \,$S_{C_{1}}$\, and \,$S_{C_{2}}$, respectively.\;Suppose \,$U_{1} \,\in\, \mathcal{B}\,(\,H_{F}\,)$\, and \,$U_{2} \,\in\, \mathcal{B}\,(\,K_{G}\,)$\, are such that \,$C_{1}$\, and \,$C_{2}$\, commute with \,$U_{1}$\, and \,$U_{2}$, respectively.\;Then \,$ \left\{\,\Delta_{i\,j} \,=\, \left(\,U_{1} \,\otimes\, U_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right)\,\right\}^{\infty}_{i,\,j \,=\, 1}$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots$,\, $a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \,\otimes\, K$\, if and only if \,$U_{1} \,\otimes\, U_{2}$\, is an invertible operator on \,$H_{F} \,\otimes\, K_{G}$. \end{theorem} \begin{proof} First we suppose that \,$U_{1} \,\otimes\, U_{2}$\, is invertible on \,$H_{F} \,\otimes\, K_{G}$.\;Then by Theorem \ref{th1.1}, \,$U_{1}$\, and \,$U_{2}$\, are invertible on \,$H_{F}$\, and \,$K_{G}$, respectively.\;For each \,$f \,\in\, H_{F}$\, and \,$g \,\in\, K_{G}$, we have \begin{equation}\label{eq1.05} \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1} \,\leq\, \left\|\,U^{\,-\,1}_{1}\,\right\|\,\left\|\,U^{\,\ast}_{1}\,(\,f\,),\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1},\;\text{and} \end{equation} \begin{equation}\label{eq1.5} \left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2} \,\leq\, \left\|\,U^{\,-\, 1}_{2}\,\right\|\,\left\|\,U^{\,\ast}_{2}\,(\,g\,),\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}.\hspace{.7cm} \end{equation} Now, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \begin{align*} &\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, \left(\,U_{1} \,\otimes\, U_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1.2cm}\left<\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,U_{1} \,\otimes\, U_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &=\,\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, \left(\,U_{1}\,f_{\,i} \,\otimes\, U_{2}\,g_{\,j}\,\right) \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1.2cm}\left<\,\left(\,C_{1}\,U_{1}\,f_{\,i} \,\otimes\, C_{2}\,U_{2}\,g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &=\,\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f,\, U_{1}\,f_{\,i}\,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,g,\, U_{2}\,g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\times\\ &\hspace{1.5cm}\left<\,C_{1}\,U_{1}\,f_{\,i},\, f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{2}\,U_{2}\,g_{\,j},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\\ &=\,\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,U_{1}^{\,\ast}\,f,\, f_{\,i}\,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,U_{2}^{\,\ast}\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\times\\ &\hspace{1.5cm}\left<\,C_{1}\,f_{\,i},\, U_{1}^{\,\ast}\,f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{2}\,g_{\,j},\, U_{2}^{\,\ast}\,g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\\ &=\,\sum\limits_{\,i \,=\, 1}^{\,\infty}\,\left<\,U_{1}^{\,\ast}\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, U_{1}^{\,\ast}\,f \,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\times\\ &\hspace{1.5cm}\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,U_{2}^{\,\ast}\,g,\, g_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, U_{2}^{\,\ast}\,g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\\ &\leq\,B\,D\,\left\|\,U_{1}^{\,\ast}\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,U_{2}^{\,\ast}\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\\ &\leq\, B\,D\,\left\|\,U_{1}^{\,\ast}\,\right\|^{\,2}\,\left\|\,U_{2}^{\,\ast}\,\right\|^{\,2}\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,g \,,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\\ &=\, B\,D\,\left\|\,U_{1} \,\otimes\, U_{2}\,\right\|^{\,2}\,\left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}. \end{align*} On the other hand, \begin{align*} &\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, \left(\,U_{1} \,\otimes\, U_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1.2cm}\left<\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,U_{1} \,\otimes\, U_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &\geq\, A\,C\,\left\|\,U_{1}^{\,\ast}\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,U_{2}^{\,\ast}\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}\\ &\geq\, \dfrac{A\,C\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|_{1}^{\,2}\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|_{2}^{\,2}}{\left\|\,U^{\,-\,1}_{1}\,\right\|^{\,2}\,\left\|\,U^{\,-\,1}_{2}\,\right\|^{\,2}}\;[\;\text{by (\ref{eq1.05}) and (\ref{eq1.5})}\;]\\ &=\, \dfrac{A\,C}{\left\|\,\left(\,U_{1} \,\otimes\, U_{2}\,\right)^{\,-\, 1}\,\right\|^{\,2}}\,\left\|\,f \,\otimes\, g,\, a_{\,2} \,\otimes\, b_{\,2},\,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right\|^{\,2}.
\end{align*} Therefore, the sequence \,$\left\{\,\Delta_{i\,j}\,\right\}_{i,\,j \,=\, 1}^{\,\infty}$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \,\otimes\, K$.\\ Conversely, suppose that \,$\left\{\,\Delta_{i\,j}\,\right\}_{i,\,j \,=\, 1}^{\,\infty}$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots$, \,$a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \,\otimes\, K$.\;Now, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, \begin{align*} &\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, \Delta_{i\,j} \,|\, a_{\,2} \,\otimes\, b_{\,2}, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\Delta_{i\,j}\\ &=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\left<\,f \otimes g,\, U_{1}\,f_{\,i} \,\otimes\, U_{2}\,g_{\,j} \,|\,a_{\,2} \,\otimes\, b_{\,2}, \,\cdots,\, a_{\,n} \otimes b_{\,n}\,\right>\,\left(\,C_{1}\,U_{1}\,f_{\,i} \,\otimes\, C_{2}\,U_{2}\,g_{\,j}\,\right)\\ &= \left(\sum\limits_{\,i \,=\, 1}^{\,\infty}\left<\,U_{1}^{\,\ast}f,\, f_{i}\,|\,a_{2},\, \cdots,\, a_{n}\,\right>_{1}\,U_{1}\,C_{1}\,f_{i}\,\right) \,\otimes\, \left(\sum\limits_{\,j \,=\, 1}^{\,\infty}\left<\,U_{2}^{\,\ast}\,g,\, g_{j}\,|\,b_{2},\, \cdots,\, b_{n}\,\right>_{2}\,U_{2}\,C_{2}\,g_{j}\right)\\ &=\, U_{1}\,S_{C_{1}}\,U_{1}^{\,\ast}\,f \,\otimes\, U_{2}\,S_{C_{2}}\,U_{2}^{\,\ast}\,g = \left(\,U_{1} \,\otimes\, U_{2}\,\right)\,\left(\,S_{C_{1}} \,\otimes\, S_{C_{2}}\,\right)\,\left(\,U^{\,\ast}_{1} \,\otimes\, U^{\,\ast}_{2}\right)\,(\,f \,\otimes\, g\,)\\ &=\, \left(\,U_{1} \,\otimes\, U_{2}\,\right)\,S_{C_{1} \,\otimes\, C_{2}}\,\left(\,U_{1} \,\otimes\, U_{2}\,\right)^{\,\ast}\,(\,f \,\otimes\, g\,) \end{align*} Hence, the frame operator for \,$\left\{\,\Delta_{i\,j}\,\right\}_{i,\,j \,=\, 1}^{\,\infty}$\, is \,$\left(\,U_{1} \,\otimes\, U_{2}\,\right)\,S_{C_{1} \,\otimes\, C_{2}}\,\left(\,U_{1} \,\otimes\, U_{2}\,\right)^{\,\ast}$\, and therefore it is invertible.\;Also, we know that \,$S_{C_{1} \,\otimes\, C_{2}}$\, is invertible and hence \,$U_{1} \,\otimes\, U_{2}$\, is invertible on \,$H_{F} \,\otimes\, K_{G}$.\,This completes the proof. \end{proof} We now present the concept of a dual controlled frame in \,$H \,\otimes\, K$. 
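In view of Proposition \ref{eq2.4}, a natural candidate is always at hand (this observation is ours and is only a pointer, not a new result): writing \,$e_{\,i} \,\otimes\, h_{\,j} \,=\, S^{\,-1}_{C_{1} \,\otimes\, C_{2}}\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,=\, S^{\,-1}_{C_{1}}\,f_{\,i} \,\otimes\, S^{\,-1}_{C_{2}}\,g_{\,j}$, the first decomposition formula of that proposition is exactly the reconstruction identity (\ref{eq2.1}) required in the definition below, so this family is a dual \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame whenever it is itself a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$.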
\begin{definition} Let \,$\left\{\,f_{\,i} \otimes g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, be a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$(\,a_{\,2} \otimes b_{\,2},\, \cdots$, \,$a_{\,n} \otimes b_{\,n}\,)$\, for \,$H \otimes K$.\;Then a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame \,$\left\{\,e_{\,i} \otimes h_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$\, satisfying \begin{align} &f \,\otimes\, g \nonumber\\ &\,=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, e_{\,i} \,\otimes\, h_{\,j} \,|\, a_{\,2} \,\otimes\, b_{\,2}, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\label{eq2.1} \end{align} for all \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, is called a dual \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame of \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\infty}_{i,\,j = 1}$\, associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2},\, \cdots$, \,$a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \otimes K$. \end{definition} \begin{theorem}\label{th3.2} Let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty} \,,\, \left\{\,e_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, be a pair of dual \,$C_{1}$-controlled frames associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty} \,,\, \left\{\,h_{\,j}\,\right\}^{\,\infty}_{j \,=\, 1}$\, be a pair of dual \,$C_{2}$-controlled frames associated to \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$K$.\;Then \,$\left\{\,e_{\,i} \,\otimes\, h_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, is a dual \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame of \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \otimes K$.
\end{theorem} \begin{proof} By Theorem \ref{th2.1}, \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$, \,$\left\{\,e_{\,i} \,\otimes\, h_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, are \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frames associated to \,$\left(\,a_{2} \,\otimes\, b_{2},\, \,\cdots,\, a_{n} \,\otimes\, b_{n}\,\right)$\, for \,$H \,\otimes\, K$.\;Since \,$\left\{\,e_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, and \,$\left\{\,h_{\,j}\,\right\}^{\,\infty}_{j \,=\, 1}$\, are dual \,$C_{1}$-controlled and \,$C_{2}$-controlled frames associated to \,$\left(a_{2},\, \cdots,\, a_{n}\right)$\, and \,$\left(b_{2},\, \cdots,\, b_{n}\right)$\, of \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$, respectively, for all \,$f \,\in\, H_{F}$, \,$g \,\in\, K_{G}$, \[f \,=\, \sum\limits^{\,\infty}_{i \,=\, 1}\left<\,f,\, e_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,C_{1}\,f_{\,i},\, \;\text{and}\; g \,=\, \sum\limits^{\,\infty}_{j \,=\, 1}\left<\,g,\, h_{j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,C_{2}\,g_{j}.\] Then, for all \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \begin{align*} &f \,\otimes\, g \\ &\,=\, \left(\sum\limits^{\,\infty}_{i \,=\, 1}\left<\,f,\, e_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,C_{1}\,f_{\,i}\right) \otimes \left(\sum\limits^{\,\infty}_{j \,=\, 1}\left<\,g,\, h_{j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,C_{2}\,g_{j}\right)\\ &=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f,\, e_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\, \left<\,g,\, h_{\,j} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left(\,C_{1}\,f_{\,i} \,\otimes\, C_{2}\,g_{\,j}\,\right)\\ &=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, e_{\,i} \,\otimes\, h_{\,j} \,|\, a_{\,2} \,\otimes\, b_{\,2}, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right). \end{align*} This completes the proof. \end{proof} \begin{theorem} Let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty},\, \left\{\,e_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, be a pair of dual \,$C_{1}$-controlled frames associated to \,$\left(a_{\,2},\, \cdots,\, a_{\,n}\right)$\, for \,$H$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty},\, \left\{\,h_{\,j}\,\right\}^{\,\infty}_{j \,=\, 1}$\, be a pair of dual \,$C_{2}$-controlled frames associated to \,$\left(b_{\,2},\, \cdots,\, b_{\,n}\right)$\, for \,$K$.\;Suppose \,$U \,\in\, \mathcal{B}\,(\,H_{F}\,)$\, and \,$V \,\in\, \mathcal{B}\,(\,K_{G}\,)$\, are unitary operators such that \,$C_{1}$\, and \,$C_{2}$\, commute with \,$U$\, and \,$V$, respectively.\;Then \,$\Lambda \,=\, \left\{\,\left(\,U \,\otimes\, V\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right)\,\right\}^{\,\infty}_{i,\,j = 1}$\, and \,$\Gamma \,=\, \left\{\,\left(\,U \,\otimes\, V\,\right)\,\left(\,e_{\,i} \,\otimes\, h_{\,j}\,\right)\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, also form a pair of dual \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frames associated to \,$(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,)$\, for \,$H \,\otimes\, K$.
\end{theorem} \begin{proof} By Theorem \ref{th3.2}, the sequences \,$\left\{\,f_{\,i} \,\otimes\, g_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, and \,$\left\{\,e_{\,i} \,\otimes\, h_{\,j}\,\right\}^{\,\infty}_{i,\,j \,=\, 1}$\, form a pair of dual \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frames associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$.\;Now, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \begin{align*} &\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, \left(\,U \,\otimes\, V\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right) \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1cm}\left<\,\left(\,C_{1} \,\otimes\, C_{2}\,\right)\,\left(\,U \,\otimes\, V\,\right)\,\left(\,f_{\,i} \,\otimes\, g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &=\,\sum\limits_{i,\, j \,=\, 1}^{\,\infty}\,\left<\,f \,\otimes\, g,\, \left(\,U\,f_{\,i} \,\otimes\, V\,g_{\,j}\,\right) \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\,\times\\ &\hspace{1cm}\left<\,\left(\,C_{1}\,U\,f_{\,i} \,\otimes\, C_{2}\,V\,g_{\,j}\,\right),\, f \,\otimes\, g \,|\, a_{\,2} \,\otimes\, b_{\,2},\, \cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right>\\ &= \sum\limits_{\,i \,=\, 1}^{\,\infty}\,\left<\,f,\, U\,f_{i}\,|\,a_{2},\, \cdots,\, a_{n}\,\right>_{1}\,\left<\,C_{1}\,U\,f_{\,i},\, f \,|\,a_{2},\, \cdots,\, a_{n}\,\right>_{1}\,\times\\ &\hspace{1cm}\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,g,\, V\,g_{\,j}\,|\, b_{2},\, \cdots,\, b_{n}\,\right>_{2}\,\left<\,C_{2}\,V\,g_{\,j},\, g \,|\, b_{2},\, \cdots,\, b_{n}\,\right>_{2}\\ &=\,\sum\limits_{\,i \,=\, 1}^{\,\infty}\,\left<\,U^{\,\ast}\,f,\, f_{i}\,|\,a_{2},\, \cdots,\, a_{n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, U^{\,\ast}\,f \,|\,a_{2},\, \cdots,\, a_{n}\,\right>_{1}\,\times\\ &\hspace{1cm}\sum\limits_{\,j \,=\, 1}^{\,\infty}\,\left<\,V^{\,\ast}\,g,\, g_{\,j}\,|\, b_{2},\, \cdots,\, b_{n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,j},\, V^{\,\ast}\,g \,|\, b_{2},\, \cdots,\, b_{n}\,\right>_{2}.\\ \end{align*} Since \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, is a \,$C_{1}$-controlled frame associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, for \,$H$\, and \,$\{\,g_{\,j}\,\}_{j \,=\,1}^{\infty}$\, is a \,$C_{2}$-controlled frame associated to \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$K$, the above calculation shows that the sequence \,$\Lambda$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$.\;Similarly, it can be shown that \,$\Gamma$\, is a \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$. 
Furthermore, for each \,$f \,\otimes\, g \,\in\, H_{F} \,\otimes\, K_{G}$, we have \begin{align*} &\sum\limits_{i,\, j = 1}^{\infty}\left<\,f \otimes g,\, \left(U \otimes V\right)\,\left(e_{i} \otimes h_{j}\right)\,|\,a_{2} \otimes b_{2}, \,\cdots,\, a_{n} \otimes b_{n}\,\right>\left(\,C_{1} \,\otimes\, C_{2}\,\right)\left(U \otimes V\right)\left(f_{i} \otimes g_{j}\right)\\ &=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\left<\,f \,\otimes\, g,\, \left(\,U\,e_{i} \,\otimes\, V\,h_{j}\,\right) \,|\, a_{2} \,\otimes\, b_{2}, \,\cdots,\, a_{n} \,\otimes\, b_{n}\,\right>\,\left(\,C_{1}\,U\,f_{i} \,\otimes\, C_{2}\,V\,g_{j}\,\right)\\ &=\, \sum\limits_{i,\, j \,=\, 1}^{\,\infty}\left<\,f \,\otimes\, g,\, \left(\,U\,e_{i} \,\otimes\, V\,h_{j}\,\right) \,|\, a_{2} \,\otimes\, b_{2}, \,\cdots,\, a_{n} \,\otimes\, b_{n}\,\right>\,\left(\,U\,C_{1}\,f_{i} \,\otimes\, V\,C_{2}\,g_{j}\,\right)\\ &= U\sum\limits_{\,i = 1}^{\infty}\left<\,U^{\,\ast}\,f,\, e_{\,i}\,|\,a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,C_{1}\,f_{\,i} \,\otimes\, V\sum\limits_{\,j = 1}^{\,\infty}\left<\,V^{\,\ast}\,g,\, h_{\,j}\,|\,b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,C_{2}\,g_{\,j}\\ & \,=\, U\,U^{\,\ast}\,(\,f\,) \,\otimes\, V\,V^{\,\ast}\,(\,g\,) \,=\, f \,\otimes\, g. \end{align*} Hence, \,$\Lambda$\, and \,$\Gamma$\, form a pair of dual \,$\left(\,C_{1} \,\otimes\, C_{2}\,\right)$-controlled frames associated to \,$\left(\,a_{\,2} \,\otimes\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\otimes\, b_{\,n}\,\right)$\, for \,$H \,\otimes\, K$. \end{proof} Now, we end this section by considering controlled frames in the direct sum of \,$n$-Hilbert spaces \,$H$\, and \,$K$.\;The direct sum of the \,$n$-Hilbert spaces \,$H$\, and \,$K$\, is denoted by \,$H \,\oplus\, K$\, and is defined to be an \,$n$-Hilbert space associated with the \,$n$-inner product \[\left<\,f_{\,1} \,\oplus\, g_{\,1},\, f_{\,2} \,\oplus\, g_{\,2} \,|\, f_{\,3} \,\oplus\, g_{\,3},\, \,\cdots,\, f_{\,n} \,\oplus\, g_{\,n}\,\right>\] \begin{equation}\label{eqnn1} \,=\, \left<\,f_{\,1},\, f_{\,2} \,|\, f_{\,3},\, \,\cdots,\, f_{\,n}\,\right>_{1} \,+\, \left<\,g_{\,1},\, g_{\,2} \,|\, g_{\,3},\, \,\cdots,\, g_{\,n}\,\right>_{2}, \end{equation} for all \,$f_{\,1},\, f_{\,2},\, f_{\,3},\, \,\cdots,\, f_{\,n} \,\in\, H$\, and \,$g_{\,1},\, g_{\,2},\, g_{\,3},\, \,\cdots,\, g_{\,n} \,\in\, K$.\\ The \,$n$-norm on \,$H \,\oplus\, K$\, is defined by \[\left\|\,f_{\,1} \,\oplus\, g_{\,1},\, f_{\,2} \,\oplus\, g_{\,2},\, \,\cdots,\, f_{\,n} \,\oplus\, g_{\,n}\,\right\|\] \begin{equation}\label{eqnn1.1} \hspace{.6cm} =\,\left\|\,f_{\,1},\, f_{\,2},\, \cdots,\, f_{\,n}\,\right\|_{1} \,+\, \left\|\,g_{\,1},\, g_{\,2},\, \cdots,\, g_{\,n}\,\right\|_{2}, \end{equation} for all \,$f_{\,1},\, f_{\,2},\, \,\cdots,\, f_{\,n} \,\in\, H\, \;\text{and}\; \,g_{\,1},\, g_{\,2},\, \,\cdots,\, g_{\,n} \,\in\, K$, where the \,$n$-norms \,$\left\|\,\cdot,\, \cdots,\, \cdot \,\right\|_{1}$\, and \,$\left\|\,\cdot,\, \cdots,\, \cdot \,\right\|_{2}$\, are generated by \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>_{1}$\, and \,$\left<\,\cdot,\, \cdot \,|\, \cdot,\, \cdots,\, \cdot\,\right>_{2}$, respectively.\\ The space \,$H_{F} \,\oplus\, K_{G}$\, is the Hilbert space with respect to the inner product: \[\left<\,p \,\oplus\, q,\, p^{\,\prime} \,\oplus\, q^{\,\prime}\,\right> \,=\, \left<\,p,\, p^{\,\prime}\,\right>_{F} \,+\, \left<\,q,\, q^{\,\prime}\,\right>_{G},\] for all \,$p,\, p^{\,\prime} \,\in\, H_{F}\; \;\text{and}\; \;q,\, q^{\,\prime} \,\in\, K_{G}$.
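For later use we record an elementary comparison between these norms (this estimate is ours; it is immediate from (\ref{eqnn1.1}) together with the inequality \,$(x \,+\, y)^{\,2} \,\leq\, 2\,\left(\,x^{\,2} \,+\, y^{\,2}\,\right)$\, for real \,$x,\, y \,\geq\, 0$): \begin{equation}\label{eqnn1.2} \left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}_{1} \,+\, \left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|^{\,2}_{2} \,\leq\, \left\|\,f \,\oplus\, g,\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right\|^{\,2} \,\leq\, 2\,\left(\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}_{1} \,+\, \left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|^{\,2}_{2}\,\right), \end{equation} for all \,$f \,\in\, H$\, and \,$g \,\in\, K$.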
\begin{definition} Let \,$T \,\in\, \mathcal{B}\left(\,H_{F}\,\right)$\, and \,$U \,\in\, \mathcal{B}\left(\,K_{G}\,\right)$.\;Then the direct sum of the operators \,$T$\, and \,$U$\, is the operator \,$T \,\oplus\, U :\, H_{F} \,\oplus\, K_{G} \,\to\, H_{F} \,\oplus\, K_{G}$\, defined by \,$(\,T \,\oplus\, U\,)\,(\,f \,\oplus\, g\,) \,=\, T\,f \,\oplus\, U\,g$. \end{definition} It is easy to verify that \,$T \,\oplus\, U$\, is a well-defined bounded linear operator whose norm is given by \,$\left\|\,T \,\oplus\, U\,\right\| \,=\, \max\,\{\,\|\,T\,\|,\, \|\,U\,\|\,\}$.\\ In the following theorem, we show that the direct sum of controlled frames is a controlled frame in the direct sum of the underlying \,$n$-Hilbert spaces, under suitable sufficient conditions. \begin{theorem} Let \,$\{\,f_{\,i}\,\}_{i \,=\,1}^{\infty}$\, and \,$\{\,g_{\,i}\,\}_{i \,=\,1}^{\infty}$\, be \,$C_{1}$-controlled and \,$C_{2}$-controlled frames associated to \,$\left(\,a_{\,2},\, \cdots,\, a_{\,n}\,\right)$\, and \,$\left(\,b_{\,2},\, \cdots,\, b_{\,n}\,\right)$\, for \,$H$\, and \,$K$\, with bounds \,$A,\,B$\, and \,$C,\,D$, respectively.\;Then \,$\left\{\,f_{\,i} \,\oplus\, g_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, is a \,$\left(\,C_{1} \,\oplus\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\oplus\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right)$\, for \,$H \,\oplus\, K$, provided for each \,$f \,\in\, H_{F}$\, and \,$g \,\in\, K_{G}$, we have \begin{itemize} \item[$(i)$]$\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{2}\,g_{\,i},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2} \,=\, 0$, and \item[$(ii)$]$\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,i} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{1}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1} \,=\, 0$.
\end{itemize} \end{theorem} \begin{proof} For each \,$f \,\oplus\, g \,\in\, H_{F} \,\oplus\, K_{G}$, we have \begin{align*} &\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f \,\oplus\, g,\, f_{\,i} \,\oplus\, g_{\,i} \,|\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right>\,\times\\ &\hspace{1.5cm}\left<\,\left(\,C_{1} \,\oplus\, C_{2}\,\right)\,\left(\,f_{\,i} \,\oplus\, g_{\,i}\,\right),\, f \,\oplus\, g \,|\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right>\\ &=\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f \,\oplus\, g,\, f_{\,i} \,\oplus\, g_{\,i} \,|\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right>\,\times\\ &\hspace{1.5cm}\left<\,C_{1}\,f_{\,i} \,\oplus\, C_{2}\,g_{\,i},\, f \,\oplus\, g \,|\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right>\\ &=\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left\{\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1} \,+\, \left<\,g,\, g_{\,i} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\right\}\,\times\\ &\hspace{1.5cm}\left\{\,\left<\,C_{1}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1} \,+\, \left<\,C_{2}\,g_{\,i},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\right\}\\ &=\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f,\, f_{\,i} \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1}\,\left<\,C_{1}\,f_{\,i},\, f \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1} \,+\\ &\hspace{1.5cm}\,\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,g,\, g_{\,i} \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\,\left<\,C_{2}\,g_{\,i},\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2}\;[\;\text{the cross terms vanish by } (i) \text{ and } (ii)\;]\\ &\leq\, B\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}_{1} \,+\, D\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|^{\,2}_{2}\\ &\leq\, \max\,\{\,B,\,D\,\}\,\left(\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}_{1} \,+\, \left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|^{\,2}_{2}\,\right)\\ &\leq\, \max\,\{\,B,\,D\,\}\,\left\|\,f \,\oplus\, g,\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right\|^{\,2}\;[\;\text{by (\ref{eqnn1.2})}\;]. \end{align*} On the other hand, for \,$f \,\oplus\, g \,\in\, H_{F} \,\oplus\, K_{G}$, we have \begin{align*} &\sum\limits_{i \,=\, 1}^{\,\infty}\,\left<\,f \,\oplus\, g,\, f_{\,i} \,\oplus\, g_{\,i} \,|\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right>\,\times\\ &\hspace{1.5cm}\left<\,\left(\,C_{1} \,\oplus\, C_{2}\,\right)\,\left(\,f_{\,i} \,\oplus\, g_{\,i}\,\right),\, f \,\oplus\, g \,|\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right>\\ &\geq\,A\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}_{1} \,+\, C\,\left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|^{\,2}_{2}\\ &\geq\, \min\,\{\,A,\,C\,\}\,\left(\,\left\|\,f,\, a_{\,2},\, \cdots,\, a_{\,n}\,\right\|^{\,2}_{1} \,+\, \left\|\,g,\, b_{\,2},\, \cdots,\, b_{\,n}\,\right\|^{\,2}_{2}\,\right)\\ &\geq\, \dfrac{1}{2}\,\min\,\{\,A,\,C\,\}\,\left\|\,f \,\oplus\, g,\, a_{\,2} \,\oplus\, b_{\,2},\, \cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right\|^{\,2}\;[\;\text{by (\ref{eqnn1.2})}\;]. \end{align*} Thus, the sequence \,$\left\{\,f_{\,i} \,\oplus\, g_{\,i}\,\right\}^{\,\infty}_{i \,=\, 1}$\, is a \,$\left(\,C_{1} \,\oplus\, C_{2}\,\right)$-controlled frame associated to \,$\left(\,a_{\,2} \,\oplus\, b_{\,2},\, \,\cdots,\, a_{\,n} \,\oplus\, b_{\,n}\,\right)$\, for \,$H \,\oplus\, K$\, with bounds \,$\frac{1}{2}\,\min\,\{\,A,\,C\,\}$\, and \,$\max\,\{\,B,\,D\,\}$.\;This completes the proof. \end{proof}
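\begin{remark} We note (this observation is ours) that conditions $(i)$ and $(ii)$ above are not vacuous: they hold, for instance, whenever the two families are supported on disjoint index sets. Indeed, if \,$f_{\,2\,i} \,=\, \theta$\, and \,$g_{\,2\,i \,-\, 1} \,=\, \theta$\, for all \,$i$\, (inserting zero vectors into a controlled frame changes none of the frame sums), then every summand in $(i)$ and $(ii)$ contains a factor \,$\left<\,f,\, \theta \,|\, a_{\,2},\, \cdots,\, a_{\,n}\,\right>_{1} \,=\, 0$\, or \,$\left<\,C_{2}\,\theta,\, g \,|\, b_{\,2},\, \cdots,\, b_{\,n}\,\right>_{2} \,=\, 0$, so both sums vanish. \end{remark}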
{ "timestamp": "2021-09-03T02:00:26", "yymm": "2109", "arxiv_id": "2109.00543", "language": "en", "url": "https://arxiv.org/abs/2109.00543", "abstract": "The concepts of controlled frames and it's dual in n-Hilbert spaces and their tensor products have been introduced and then some of their characterizations are given. We further study the relationship between controlled frame and bounded linear operator in tensor product of n-Hilbert spaces. At the end, the direct sum of controlled frames in n-Hilbert space is being considered.", "subjects": "Functional Analysis (math.FA)", "title": "Controlled frames in n-Hilbert spaces and their tensor products", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812345563904, "lm_q2_score": 0.7310585727705126, "lm_q1q2_score": 0.7079434032325417 }
https://arxiv.org/abs/1706.05215
Constructing edge-disjoint spanning trees in augmented cubes
Let T1, T2, ..., Tk be spanning trees in a graph G. If, for any pair of vertices u and v of G, the paths between u and v in the trees Ti (1 ≤ i ≤ k) do not contain common edges, then T1, T2, ..., Tk are called edge-disjoint spanning trees in G. The design of multiple edge-disjoint spanning trees has applications to reliable communication protocols. The n-dimensional augmented cube, denoted as AQn, is a variation of the hypercube that possesses some properties superior to those of the hypercube. For AQn (n > 2), a construction of n-1 edge-disjoint spanning trees is given; the result is optimal with respect to the number of edge-disjoint spanning trees.
\section{Introduction} \indent A graph $G$ is a triple consisting of a vertex set $V(G)$, an edge set $E(G)$, and a relation that associates with each edge two vertices called its endpoints\cite{we}. The topology of a network is modeled as a graph whose vertices correspond to nodes, while edges represent direct physical connections between nodes. This paper deals with the well-established problem of handling the maximum possible number of communication requests without using a single physical link more than once, known as the edge-disjoint spanning trees problem. The hypercube $Q_n$ is one of the most versatile and efficient interconnection networks discovered for parallel computation. Many variants of the hypercube have been proposed. The augmented cube, proposed by Choudam and Sunitha\cite{c}, is one such variation. An $n$-dimensional augmented cube $AQ_n$ can be formed as an extension of $Q_n$ by adding some links. For any positive integer $n$, $AQ_n$ is a vertex-transitive, $(2n-1)$-regular and $(2n-1)$-connected graph with $2^n$ vertices. $AQ_n$ retains all favorable properties of $Q_n$ since $Q_n \subset AQ_n$. Moreover, $AQ_n$ possesses some embedding properties that $Q_n$ does not. The main merit of augmented cubes is that their diameters are about half of those of the corresponding hypercubes. \\ A tree $T$ is called a spanning tree of a graph $G$ if $V(T)= V(G)$. Two spanning trees $T_1$ and $T_2$ in $G$ are edge-disjoint if $E(T_1) \cap E(T_2)= \emptyset$. \\ The edge-disjoint spanning trees (EDSTs for short) problem has received a great deal of attention in recent years because of its numerous applications on interconnection networks, such as fault-tolerant broadcasting and secure message distribution\cite{f,j,h,t,l,n,w,x,y}.\\ Barden, Hadas, Davis and Williams\cite{b} proved that there exist $n$ EDSTs in a hypercube of dimension $2n$ and provided a construction for obtaining the maximum number of EDSTs in a hypercube. Wang, Shen and Fan\cite{wa} proved the existence of $n-1$ EDSTs in an augmented cube $AQ_n$ and asked, ``how to derive an effective algorithm that constructs edge-disjoint spanning trees based on our algorithm, in an augmented cube?'' Motivated by this question, we provide a construction for obtaining the maximum number of EDSTs ($n-1$ EDSTs) in the augmented cube $AQ_n$ ($n \geq 3$). \section {Preliminaries} The definition of the $n$-dimensional augmented cube is as follows. Let $n \geq 1$ be an integer. The $n$-dimensional augmented cube, denoted by $AQ_n$, is a graph with $2^n$ vertices, and each vertex $u$ can be distinctly labeled by an $n$-bit binary string, $u = u_1u_2\ldots u_n$. $AQ_1$ is the graph $K_2$ with vertex set $\{0, 1\}$. For $n \geq 2$, $AQ_n$ can be recursively constructed from two copies of $AQ_{n-1}$, denoted by $AQ^0_{n-1}$ and $AQ^1_{n-1}$, by adding $2^n$ edges between $AQ^0_{n-1}$ and $AQ^1_{n-1}$ as follows:\\ Let $V(AQ^0_{n-1}) = \{0u_2\ldots u_n : u_i \in \{0, 1\}, 2 \leq i \leq n\}$ and $V(AQ^1_{n-1}) = \{1v_2\ldots v_n : v_i \in \{0, 1\}, 2 \leq i \leq n\}$. A vertex $u = 0u_2\ldots u_n$ of $AQ^0_{n-1}$ is joined to a vertex $v = 1v_2\ldots v_n $ of $AQ^1_{n-1}$ if and only if, for every $i$, $2 \leq i \leq n$, either\\ 1. $u_i = v_i$; in this case an edge $\langle u, v \rangle$ is called a hypercube edge and we say $v = u^h$, or\\ 2. 
$u_i = \overline{v_i}$; in this case an edge $\langle u, v \rangle$ is called a complement edge and we say $v = u^c$.\\ Let $E^h_n = \{\langle u, u^h \rangle : u \in V(AQ^0_{n-1})\}$ and $E^c_n = \{\langle u, u^c \rangle : u \in V(AQ^0_{n-1})\}$. See Fig.$1$.\\ \begin{center} \unitlength 1mm \linethickness{0.4pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(294.151,50.401)(0,0) \put(15,22.5){\line(0,1){18.25}} \put(34.5,22.5){\line(0,1){18.25}} \put(73.25,22.5){\line(0,1){18.25}} \put(121.5,22.5){\line(0,1){18.25}} \put(57.25,22.5){\line(0,1){18.25}} \put(96,22.5){\line(0,1){18.25}} \put(144.25,22.5){\line(0,1){18.25}} \put(34.75,40.5){\line(1,0){22.5}} \put(73.5,40.5){\line(1,0){22.5}} \put(121.75,40.5){\line(1,0){22.5}} \put(34.75,23){\line(1,0){22.25}} \put(73.5,23){\line(1,0){22.25}} \put(121.75,23){\line(1,0){22.25}} \multiput(34.5,40.25)(.0439453125,-.0336914063){512}{\line(1,0){.0439453125}} \multiput(73.25,40.25)(.0439453125,-.0336914063){512}{\line(1,0){.0439453125}} \multiput(121.5,40.25)(.0439453125,-.0336914063){512}{\line(1,0){.0439453125}} \multiput(34.75,23.25)(.0439453125,.0336914063){512}{\line(1,0){.0439453125}} \multiput(73.5,23.25)(.0439453125,.0336914063){512}{\line(1,0){.0439453125}} \multiput(121.75,23.25)(.0439453125,.0336914063){512}{\line(1,0){.0439453125}} \put(96.25,40.25){\line(1,0){25.5}} \put(96,23.5){\line(1,0){25.5}} \multiput(73.5,40.25)(.097082495,-.0337022133){497}{\line(1,0){.097082495}} \multiput(96,23)(.0929672447,.0337186898){519}{\line(1,0){.0929672447}} \multiput(96,40.25)(.0942460317,-.0337301587){504}{\line(1,0){.0942460317}} \multiput(121.5,40.25)(-.0920303605,-.0336812144){527}{\line(-1,0){.0920303605}} \qbezier(73.25,40.5)(111,55.5)(143.75,40.5) \qbezier(73.5,22.75)(105.375,8.625)(143.75,23) \put(293.25,123.5){\circle*{1.803}} \put(14.75,40.75){\circle*{1.581}} \put(15,21.75){\circle*{1.581}} \put(34.25,23){\circle*{1.581}} \put(34.5,39.75){\circle*{1.581}} \put(57.25,39.75){\circle*{1.581}} \put(57.25,22.5){\circle*{1.581}} \put(74,22.5){\circle*{1.581}} \put(73.25,40.5){\circle*{1.581}} \put(96.5,40.5){\circle*{1.581}} \put(122,40.25){\circle*{1.581}} \put(144.5,40){\circle*{1.581}} \put(144,23.25){\circle*{1.581}} \put(121.75,22.75){\circle*{1.581}} \put(96,23.5){\circle*{1.581}} \put(14.75,44.75){\makebox(0,0)[cc]{$\tiny{0}$}} \put(14.75,17.25){\makebox(0,0)[cc]{$\tiny{1}$}} \put(33,44.75){\makebox(0,0)[cc]{$\tiny{00}$}} \put(57.75,46.5){\makebox(0,0)[cc]{$\tiny{10}$}} \put(32.5,18.5){\makebox(0,0)[cc]{$\tiny{01}$}} \put(57.75,18){\makebox(0,0)[cc]{$\tiny{11}$}} \put(74,45.75){\makebox(0,0)[cc]{$\tiny{000}$}} \put(73,18.75){\makebox(0,0)[cc]{$\tiny{010}$}} \put(95,43.75){\makebox(0,0)[cc]{ $\tiny{001}$}} \put(95,18.75){\makebox(0,0)[cc]{$\tiny{011}$}} \put(121.5,44.25){\makebox(0,0)[cc]{$\tiny{101}$}} \put(145.75,44.5){\makebox(0,0)[cc]{$\tiny{100}$}} \put(145.75,18.5){\makebox(0,0)[cc]{$\tiny{110}$}} \put(121.25,19.5){\makebox(0,0)[cc]{$\tiny{111}$}} \put(14.75,13){\makebox(0,0)[cc]{$\tiny AQ_{1}$}} \put(44.75,14){\makebox(0,0)[cc]{$\tiny AQ_{2}$}} \put(106.25,11.75){\makebox(0,0)[cc]{$\tiny AQ_{3}$}} \put(64,6.5){\makebox(0,0)[cc]{Fig.$1$}} \put(59,31.25){\makebox(0,0)[cc]{$\tiny AQ^1_{1}$}} \put(31,32.25){\makebox(0,0)[cc]{$\tiny AQ^0_{1}$}} \put(77,31.5){\makebox(0,0)[cc]{$\tiny AQ^0_{2}$}} \put(140.25,31.75){\makebox(0,0)[cc]{$\tiny AQ^1_{2}$}} \end{picture} \end{center} For undefined terminology and notations see \cite{we}. 
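As an illustration (this description and the sketch below are ours and are not taken from \cite{c} or \cite{we}), the above recursion unfolds to the following direct description: two $n$-bit strings are adjacent in $AQ_n$ if and only if one is obtained from the other either by flipping a single bit (the hypercube edge at some level of the recursion) or by complementing an entire suffix $u_iu_{i+1}\ldots u_n$ (the complement edge at that level). A short Python sketch generating the edge set accordingly:
\begin{verbatim}
from itertools import product

def aq_edges(n):
    """Edge set of AQ_n, with vertices as n-bit tuples."""
    V = list(product((0, 1), repeat=n))
    E = set()
    for u in V:
        for i in range(n):
            h = u[:i] + (1 - u[i],) + u[i + 1:]       # flip bit i+1
            c = u[:i] + tuple(1 - b for b in u[i:])   # complement suffix
            E.add(frozenset((u, h)))
            E.add(frozenset((u, c)))
    return E

# sanity check: AQ_n is (2n-1)-regular on 2^n vertices
for n in (1, 2, 3, 4):
    assert len(aq_edges(n)) == (2 * n - 1) * 2 ** (n - 1)
\end{verbatim}
For example, in $AQ_3$ the vertex $000$ is adjacent to $100 = 000^h$ by a hypercube edge and to $111 = 000^c$ by a complement edge, in agreement with Fig.$1$.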
\section {Construction of edge-disjoint spanning trees in augmented cubes } Our proof is by induction. \\ As $AQ_n$ is $(2n-1)$-regular, $ |E(AQ_n)|=(2n-1)(2^{n-1})= n2^n-2^{n-1}$. When we construct any spanning tree on $2^n$ vertices of $AQ_n$ we need exactly $2^n -1$ edges hence we can construct at most $n-1$ EDSTs. Still, for $n \geq 3$, $2^{n-1}+ n-1 (< 2^n -1)$ number of edges remain uncovered by these $n-1$ EDSTs, but by our method, we are able to construct the tree on $2^{n-1}+ n$ vertices containing uncovered edges. \begin{theorem}Let $n\geq 3$ be an integer. There exist $n-1$ edge-disjoint spanning trees in augmented cube $AQ_n$. \end{theorem} \begin{proof} First if $n = 3$, we construct two EDSTs $T_1$ and $T_2$ as follows. See Fig.2.\\ \begin{center} \unitlength 0.5mm \linethickness{0.4pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(344.75,88.375)(0,0) \put(9.291,21.041){\circle*{1.581}} \put(112.041,21.041){\circle*{1.581}} \put(45.791,21.041){\circle*{1.581}} \put(149.291,21.041){\circle*{1.581}} \put(9.791,53.54){\circle*{1.581}} \put(113.291,53.54){\circle*{1.581}} \put(47.041,53.54){\circle*{1.581}} \put(150.541,53.54){\circle*{1.581}} \put(4.25,57.75){\makebox(0,0)[cc]{$\tiny{000}$}} \put(53,56.5){\makebox(0,0)[cc]{ $\tiny{001}$}} \put(-1,14.75){\makebox(0,0)[cc]{$\tiny{010}$}} \put(48,14.25){\makebox(0,0)[cc]{$\tiny{011}$}} \put(159,16.75){\makebox(0,0)[cc]{$\tiny{111}$}} \put(157.25,58.5){\makebox(0,0)[cc]{$\tiny{101}$}} \put(104.75,55){\makebox(0,0)[cc]{$\tiny{100}$}} \put(105,14){\makebox(0,0)[cc]{$\tiny{110}$}} \put(9.75,53.5){\line(-1,0){.25}} \put(9.25,21){\line(0,1){33.5}} \put(8.5,54.5){\line(0,1){0}} \put(9.25,21.25){\line(1,0){4.25}} \put(12.75,21.25){\line(1,0){33}} \put(45.75,21.25){\line(0,1){0}} \put(45.75,21.25){\line(0,1){0}} \qbezier(8.75,20.5)(9,22.875)(9.25,21.75) \qbezier(9.25,21.75)(9,22)(8.75,21.25) \qbezier(8,21.25)(74,42.25)(112,21.25) \multiput(112,21.25)(.03941908714,.03371369295){964}{\line(1,0){.03941908714}} \qbezier(45.75,21)(111.875,50.625)(149.5,20.75) \qbezier(112.75,53.25)(65.625,88.375)(10,54) \qbezier(150.5,54.25)(101.75,86.125)(47,54.5) \put(75.25,1.75){\makebox(0,0)[cc]{Fig.2(a). Spanning tree $T_1$ in $AQ_3$}} \put(194.291,22.041){\circle*{1.581}} \put(298.541,20.041){\circle*{1.581}} \put(231.541,22.041){\circle*{1.581}} \put(335.041,22.041){\circle*{1.581}} \put(195.541,54.54){\circle*{1.581}} \put(299.041,54.54){\circle*{1.581}} \put(232.791,54.54){\circle*{1.581}} \put(336.291,54.54){\circle*{1.581}} \put(190,58.75){\makebox(0,0)[cc]{$\tiny{000}$}} \put(238.75,57.5){\makebox(0,0)[cc]{ $\tiny{001}$}} \put(184.75,15.75){\makebox(0,0)[cc]{$\tiny{010}$}} \put(233.75,15.25){\makebox(0,0)[cc]{$\tiny{011}$}} \put(344.75,17.75){\makebox(0,0)[cc]{$\tiny{111}$}} \put(343,59.5){\makebox(0,0)[cc]{$\tiny{101}$}} \put(290.5,56){\makebox(0,0)[cc]{$\tiny{100}$}} \put(290.75,15){\makebox(0,0)[cc]{$\tiny{110}$}} \put(195.5,54.5){\line(1,0){36.75}} \put(232.25,54.5){\line(0,-1){33.25}} \multiput(231.25,22.25)(.06920326864,.03370786517){979}{\line(1,0){.06920326864}} \put(297.75,28){\line(0,-1){.25}} \put(299,54.5){\line(-1,0){.25}} \put(298.5,20.75){\line(0,1){33}} \put(298.5,55){\line(1,0){37.75}} \multiput(299,54.75)(.03765690377,-.03373430962){956}{\line(1,0){.03765690377}} \multiput(194.25,22.25)(.046979866,.033557047){149}{\line(1,0){.046979866}} \multiput(201.25,27.25)(.0396039604,.03372524752){808}{\line(1,0){.0396039604}} \put(263.25,4.25){\makebox(0,0)[cc]{Fig.2(b). 
The edges left uncovered by $T_1$ and $T_2$ again form a tree, say $T_3$, on $7$ vertices; see Fig.~3.\\
\begin{center}
[Fig.~3: Tree $T_3$ in $AQ_3$.]
\end{center}
For the inductive step, let $AQ_{n+1}$ be decomposed into two augmented cubes $AQ^0_n$ and $AQ^1_n$, with vertex sets $\{u^0_i, v^0_i : 1 \leq i \leq 2^{n-1}\}$ and $\{u^1_i, v^1_i : 1 \leq i \leq 2^{n-1}\}$ respectively. Denote by $T^0_1, T^0_2, \dots, T^0_{n-1}$ the EDSTs in $AQ^0_n$, and by $T^1_1, T^1_2, \dots, T^1_{n-1}$ the identical corresponding EDSTs in $AQ^1_n$. Let $T^0_n$ be the tree made up of the edges of $AQ^0_n$ left uncovered by its $n-1$ EDSTs, with vertices $u^0_1, u^0_2, \dots, u^0_{2^{n-1}}, v^0_1, v^0_2, \dots, v^0_n$, and let $T^1_n$ be the identical corresponding tree in $AQ^1_n$, with vertices $u^1_1, u^1_2, \dots, u^1_{2^{n-1}}, v^1_1, v^1_2, \dots, v^1_n$. We construct $n$ EDSTs in $AQ_{n+1}$. The first $n-2$ of them are obtained from $T^0_i$ and $T^1_i$ by adding the hypercube edge $\langle v^0_i, v^1_i \rangle \in E(AQ_{n+1})$ joining the two internal vertices $v^0_i \in V(T^0_i)$ and $v^1_i \in V(T^1_i)$, for $1 \leq i \leq n-2$.
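Each of these $n-2$ graphs is indeed a spanning tree of $AQ_{n+1}$: the two constituent trees span the two copies, the added hypercube edge joins them without creating a cycle, and the edge count is exactly right,
\[
|E(T^0_i)| + |E(T^1_i)| + 1 = (2^n - 1) + (2^n - 1) + 1 = 2^{n+1} - 1.
\]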
See Fig.~4.\\
\begin{center}
[Fig.~4: The trees $T^0_1, T^0_2, \dots, T^0_{n-2}$ in $AQ^0_n$ joined to the corresponding trees $T^1_1, T^1_2, \dots, T^1_{n-2}$ in $AQ^1_n$ by the hypercube edges $\langle v^0_i, v^1_i \rangle$.]
\end{center}
The $(n-1)^{th}$ EDST is constructed from $T^0_{n-1}$ by adding all complement edges $E^c_{n+1} = \{\langle u, u^c \rangle : u \in V(AQ^0_{n})\}$; see Fig.~5.\\
\begin{center}
[Fig.~5: The tree $T^0_{n-1}$ in $AQ^0_n$ together with the complement edges $\langle u^0_i, (u^0_i)^c \rangle$ and $\langle v^0_i, (v^0_i)^c \rangle$, $1 \leq i \leq 2^{n-1}$, joining it to $AQ^1_n$.]
\end{center}
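The count works out here as well. The map $u \mapsto u^c$ matches every vertex of $AQ^0_n$ with a distinct vertex of $AQ^1_n$, so each vertex of $AQ^1_n$ hangs off $T^0_{n-1}$ as a leaf by its unique complement edge. The resulting graph is therefore connected and acyclic, with
\[
|E(T^0_{n-1})| + |E^c_{n+1}| = (2^n - 1) + 2^n = 2^{n+1} - 1
\]
edges, i.e., it is a spanning tree of $AQ_{n+1}$.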
The $n^{th}$ EDST is constructed from $T^1_{n-1}$ by adding the hypercube edges $\langle v^0_i, v^1_i \rangle \in E(AQ_{n+1})$, $n \leq i \leq 2^{n-1}$, which join the vertices $v^0_i \in V(AQ^0_n)$ to the vertices $v^1_i \in V(T^1_{n-1})$. The $2^{n-1} + (n-1)$ vertices of $AQ^0_n$ not yet connected to $T^1_{n-1}$ by these hypercube edges are $u^0_i$ ($1 \leq i \leq 2^{n-1}$) and $v^0_1, v^0_2, \dots, v^0_{n-1}$. Since the edge $\langle v^0_n, v^1_n \rangle$ has already been added, adding the edges of the tree $T^0_n$, which contains the vertex $v^0_n$, connects all of these remaining vertices to $T^1_{n-1}$.
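Together with the connectivity just described, the edge count confirms that a spanning tree of $AQ_{n+1}$ results:
\[
\underbrace{2^n - 1}_{|E(T^1_{n-1})|} \;+\; \underbrace{2^{n-1} - n + 1}_{\text{hypercube edges}} \;+\; \underbrace{2^{n-1} + n - 1}_{|E(T^0_n)|} \;=\; 2^{n+1} - 1.
\]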
See Fig.~6.\\
\begin{center}
[Fig.~6: The tree $T^1_{n-1}$ in $AQ^1_n$, the hypercube edges $\langle v^0_i, v^1_i \rangle$ for $n \leq i \leq 2^{n-1}$, and the leftover tree $T^0_n$ in $AQ^0_n$ attached through $v^0_n$.]
\end{center}
\multiput(129.189,29.144)(.0325534,-.0381302){26}{\line(0,-1){.0381302}} \multiput(130.036,28.153)(.0327303,-.035491){27}{\line(0,-1){.035491}} \multiput(130.919,27.195)(.0328484,-.0329902){28}{\line(0,-1){.0329902}} \multiput(131.839,26.271)(.0353497,-.0328829){27}{\line(1,0){.0353497}} \multiput(132.793,25.383)(.0379897,-.0327173){26}{\line(1,0){.0379897}} \multiput(133.781,24.533)(.0407829,-.0324886){25}{\line(1,0){.0407829}} \multiput(134.801,23.72)(.0456486,-.0335908){23}{\line(1,0){.0456486}} \multiput(135.851,22.948)(.0490329,-.033265){22}{\line(1,0){.0490329}} \multiput(136.929,22.216)(.0526642,-.032857){21}{\line(1,0){.0526642}} \multiput(138.035,21.526)(.0565776,-.0323577){20}{\line(1,0){.0565776}} \multiput(139.167,20.879)(.0641943,-.0335201){18}{\line(1,0){.0641943}} \multiput(140.322,20.275)(.0692793,-.0328638){17}{\line(1,0){.0692793}} \multiput(141.5,19.717)(.074892,-.0320743){16}{\line(1,0){.074892}} \multiput(142.698,19.203)(.0869314,-.033353){14}{\line(1,0){.0869314}} \multiput(143.915,18.737)(.0949248,-.0323084){13}{\line(1,0){.0949248}} \multiput(145.15,18.317)(.1041,-.031038){12}{\line(1,0){.1041}} \multiput(146.399,17.944)(.126254,-.032437){10}{\line(1,0){.126254}} \multiput(147.661,17.62)(.141559,-.030644){9}{\line(1,0){.141559}} \multiput(148.935,17.344)(.183379,-.032403){7}{\line(1,0){.183379}} \multiput(150.219,17.117)(.215233,-.029586){6}{\line(1,0){.215233}} \multiput(151.51,16.94)(.32431,-.03199){4}{\line(1,0){.32431}} \put(152.808,16.812){\line(1,0){1.3012}} \put(154.109,16.733){\line(1,0){2.6066}} \put(156.715,16.727){\line(1,0){1.3016}} \multiput(158.017,16.798)(.32447,.03031){4}{\line(1,0){.32447}} \multiput(159.315,16.919)(.215383,.028473){6}{\line(1,0){.215383}} \multiput(160.607,17.09)(.183544,.031455){7}{\line(1,0){.183544}} \multiput(161.892,17.31)(.15943,.033651){8}{\line(1,0){.15943}} \multiput(163.167,17.58)(.12642,.031784){10}{\line(1,0){.12642}} \multiput(164.432,17.897)(.113737,.033273){11}{\line(1,0){.113737}} \multiput(165.683,18.263)(.0950905,.0318172){13}{\line(1,0){.0950905}} \multiput(166.919,18.677)(.0871026,.0329031){14}{\line(1,0){.0871026}} \multiput(168.138,19.138)(.0750568,.0316868){16}{\line(1,0){.0750568}} \multiput(169.339,19.645)(.0694482,.0325053){17}{\line(1,0){.0694482}} \multiput(170.52,20.197)(.0643667,.0331878){18}{\line(1,0){.0643667}} \multiput(171.678,20.795)(.0567441,.0320648){20}{\line(1,0){.0567441}} \multiput(172.813,21.436)(.0528333,.0325843){21}{\line(1,0){.0528333}} \multiput(173.923,22.12)(.0492042,.0330111){22}{\line(1,0){.0492042}} \multiput(175.005,22.847)(.0458216,.0333544){23}{\line(1,0){.0458216}} \multiput(176.059,23.614)(.0426565,.0336222){24}{\line(1,0){.0426565}} \multiput(177.083,24.421)(.0381583,.0325205){26}{\line(1,0){.0381583}} \multiput(178.075,25.266)(.0355192,.0326997){27}{\line(1,0){.0355192}} \multiput(179.034,26.149)(.0330185,.03282){28}{\line(1,0){.0330185}} \multiput(179.959,27.068)(.0329134,.0353214){27}{\line(0,1){.0353214}} \multiput(180.847,28.022)(.03275,.0379615){26}{\line(0,1){.0379615}} \multiput(181.699,29.009)(.0325237,.0407548){25}{\line(0,1){.0407548}} \multiput(182.512,30.028)(.0336301,.0456196){23}{\line(0,1){.0456196}} \multiput(183.285,31.077)(.0333072,.0490042){22}{\line(0,1){.0490042}} \multiput(184.018,32.155)(.0329024,.0526359){21}{\line(0,1){.0526359}} \multiput(184.709,33.26)(.0324064,.0565497){20}{\line(0,1){.0565497}} \multiput(185.357,34.391)(.0335754,.0641654){18}{\line(0,1){.0641654}} \multiput(185.962,35.546)(.0329235,.0692509){17}{\line(0,1){.0692509}} 
\multiput(186.521,36.723)(.0321389,.0748643){16}{\line(0,1){.0748643}} \multiput(187.035,37.921)(.0334278,.0869026){14}{\line(0,1){.0869026}} \multiput(187.503,39.138)(.0323901,.0948969){13}{\line(0,1){.0948969}} \multiput(187.925,40.372)(.031128,.104073){12}{\line(0,1){.104073}} \multiput(188.298,41.62)(.032545,.126226){10}{\line(0,1){.126226}} \multiput(188.624,42.883)(.030766,.141533){9}{\line(0,1){.141533}} \multiput(188.9,44.157)(.032561,.183351){7}{\line(0,1){.183351}} \multiput(189.128,45.44)(.029771,.215207){6}{\line(0,1){.215207}} \multiput(189.307,46.731)(.03227,.32428){4}{\line(0,1){.32428}} \put(189.436,48.028){\line(0,1){1.3011}} \put(189.515,49.329){\line(0,1){1.4205}} \put(52.25,7.25){\makebox(0,0)[cc]{$AQ^0_n$}} \put(154.75,7.5){\makebox(0,0)[cc]{$AQ^1_n$}} \put(150.25,22){\framebox(18.25,57.75)[cc]{}} \put(49.25,62){\framebox(17.5,16.5)[cc]{}} \put(58.5,51.5){\circle*{.707}} \put(67,69.25){\circle*{.707}} \put(151,69.25){\circle*{.707}} \put(150.75,51.5){\circle*{.707}} \put(58.5,36){\circle*{.707}} \put(58.5,32){\circle*{.707}} \put(150.5,36.5){\circle*{.707}} \put(58.5,44){\circle*{.707}} \put(150.25,32.25){\circle*{.707}} \put(158,59){\makebox(0,0)[cc]{ $T^1_{(n-1)}$}} \put(57.75,70){\makebox(0,0)[cc]{$T^0_n$}} \put(48,30.5){\makebox(0,0)[cc]{$v^0_{2^{n-1}}$}} \put(143.25,28.25){\makebox(0,0)[cc]{$v^1_{2^{n-1}}$}} \multiput(66.5,69.25)(10.53125,.03125){8}{\line(1,0){10.53125}} \put(58.5,51.75){\line(1,0){91.75}} \put(58.25,44.25){\line(1,0){92.25}} \multiput(58.75,36.5)(11.40625,.03125){8}{\line(1,0){11.40625}} \put(103.25,9.75){\makebox(0,0)[cc]{Fig.6}} \put(70.75,70.5){\makebox(0,0)[cc]{$v^0_{n}$}} \put(148.75,72){\makebox(0,0)[cc]{$v^1_{n}$}} \put(51.75,52){\makebox(0,0)[cc]{$v^0_{n+1}$ }} \put(140.25,54){\makebox(0,0)[cc]{$v^1_{n+1}$ }} \put(58.25,32.5){\line(1,0){92.25}} \end{picture} \end{center} Still, we have uncovered edges of $AQ_{n+1}$ namely edges of tree $T^1_n$, the hypercube edge $\langle v^0_{n-1}, v^1_{n-1} \rangle$ and hypercube edges $\langle u^0_i, u^1_i \rangle$ ($1 \leq i \leq 2^{n-1}$). We can easily observe that these uncovered hypercube edges along with the tree $T^1_n$ again form a new tree on $(2^{n-1}+ n) + (2^{n-1} +1) = 2^n + n+1$ vertices. See Fig.7. 
\\
\begin{center}
% [Picture drawing code omitted.]
Fig.7: the tree $T^1_n$ in $AQ^1_n$, together with the hypercube edge $\langle v^0_{n-1}, v^1_{n-1} \rangle$ and the hypercube edges $\langle u^0_1, u^1_1 \rangle, \ldots, \langle u^0_{2^{n-1}}, u^1_{2^{n-1}} \rangle$.
\end{center}
\end{proof}
\noindent {\bf Acknowledgment:} The author gratefully acknowledges the Department of Science and Technology, New Delhi, India, for the award of the Women Scientist Scheme for research in Basic/Applied Sciences.
https://arxiv.org/abs/1111.4494
Structured Sparse Aggregation
We introduce a method for aggregating many least squares estimators so that the resulting estimate has two properties: sparsity and structure. That is, only a few candidate covariates are used in the resulting model, and the selected covariates follow some structure over the candidate covariates that is assumed to be known a priori. While sparsity is well studied in many settings, including aggregation, structured sparse methods are still emerging. We demonstrate a general framework for structured sparse aggregation that allows for a wide variety of structures, including overlapping grouped structures and general structural penalties defined as set functions on the set of covariates. We show that such estimators satisfy structured sparse oracle inequalities --- their finite sample risk adapts to the structured sparsity of the target. These inequalities reveal that under suitable settings, the structured sparse estimator performs at least as well as, and potentially much better than, a sparse aggregation estimator. We empirically establish the effectiveness of the method using simulation and an application to HIV drug resistance.
\section{Introduction}
In statistical learning, sparsity and variable selection are well studied and fundamental topics. Given a large set of candidate covariates, sparse models use only a few in the model. Sparse techniques often improve out of sample performance and aid in model interpretation. We focus on the linear regression setting. Here, we model a vector of responses $\textbf{y}$ as a linear combination of $M$ predictors, represented as an $n \times M$ data matrix $\textbf{X}$, via the equation $\textbf{y} = \textbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$, where $\boldsymbol{\beta}$ is a vector of linear coefficients and $\boldsymbol{\epsilon}$ is a vector of stochastic noise. The task is then to produce an estimate of $\boldsymbol{\beta}$, denoted $\widehat{\boldsymbol{\beta}}$, using $\textbf{X}$ and $\textbf{y}$. Sparse modeling techniques produce a $\widehat{\boldsymbol{\beta}}$ with only a few nonzero entries and the remaining entries set equal to zero, effectively excluding many covariates from the model. One example of a sparse regression method is the lasso estimator~\citep{tibshiraniLasso}:
\begin{align} \widehat{\boldsymbol{\beta}}_{\mbox{lasso}} = \underset{\boldsymbol{\beta} \in \mathbb{R}^M}{\operatorname{argmin}} \|\textbf{y} - \textbf{X}\boldsymbol{\beta}\|_2^2 + \lambda \sum_{j=1}^M |{\beta}_j|. \end{align}
In the above, $\lambda>0$ is a tuning parameter. Here, the $\ell_1$ penalty encourages many entries of $\widehat{\boldsymbol{\beta}}_{\mbox{lasso}}$ to be identically zero, giving a sparse estimator.

Suppose now that additional structural information is available about the covariates. We then seek to incorporate this information in our sparse modeling strategy, giving a {\em structured, sparse} model. For example, consider a factor covariate with $u$ levels, such as in an ANOVA model, encoded as a set of $u-1$ indicator variables in $\textbf{X}$. Taking this structure into account, we would then jointly select or exclude this set of covariates from our sparse model. More generally, suppose that we have a graph with $M$ nodes, each node corresponding to a covariate. This graph might represent a spatial relationship between the covariates. A sparse model incorporating this information might jointly include or exclude sets of predictors corresponding to neighborhoods or cliques of the graph. In summary, sparsity seeks a $\widehat{\boldsymbol{\beta}}$ with few nonzero entries, whereas structured sparsity seeks a sparse $\widehat{\boldsymbol{\beta}}$ where the nonzero entries follow some a priori defined pattern.

As an example, consider the results displayed in Figure~\ref{twoDimOneRunFigure}. In the top left, we see a coefficient vector, rearranged as a square matrix. The nonzero entries, represented as white squares, have a clear structure with respect to the familiar two dimensional lattice. On the bottom row, we display the results of two sparse methods, including the lasso. The top right panel displays the results of one of the methods of this paper. Since our method also takes the structural information into account, it is able to more accurately re-create the sparsity pattern pictured in the top left.

Though methods for structured sparsity are still emerging, there are now many examples in the literature. The grouped lasso~\citep{Yuan06modelselection} allows for joint selection of covariates, where the groups of covariates partition the set of covariates.
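As a concrete point of reference before turning to such structured extensions, the following is a minimal numerical sketch of the plain lasso display above. It is only an illustration: the simulated data, penalty level, and variable names are our own assumptions, and we use scikit-learn's \texttt{Lasso} solver, whose objective scales the squared error term by $1/(2n)$, so that its \texttt{alpha} parameter corresponds to $\lambda/(2n)$ in our notation.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, M = 100, 50
X = rng.standard_normal((n, M))
beta = np.zeros(M)
beta[:5] = 2.0                        # sparse truth: 5 active covariates
y = X @ beta + rng.standard_normal(n)

# scikit-learn minimizes (1/(2n))*||y - X b||_2^2 + alpha*||b||_1,
# so alpha corresponds to lambda/(2n) in the display above.
fit = Lasso(alpha=0.1).fit(X, y)
print(np.flatnonzero(fit.coef_))      # indices of the selected covariates
\end{verbatim}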
Subsequent work \citep*{Huang:2009:LSS:1553374.1553429,Jacob:2009:GLO:1553374.1553431,Jenatton09structuredsparse} extended the grouped lasso to allow for more flexible structures based on overlapping groups of covariates. Further, \cite{Bach10hierarchical} and~\cite*{Zhao-cap-penalty} proposed methods for hierarchical structures, and~\cite{xingTreeGuided} as well as \cite{pengMasterPredictors} gave methods in the multi-task setting for coherent variable selection across tasks.

In this paper, we present an aggregation estimator that produces structured sparse models. In the linear regression setting, aggregation estimators combine many estimates of $\boldsymbol{\beta}$: $\{\widehat{\boldsymbol{\beta}}_1,\ldots,\widehat{\boldsymbol{\beta}}_B\}$ in some way to give an improved estimate $\widehat{\boldsymbol{\beta}}_{\mbox{Aggregate}}$. See \cite*{buneaAggregation} and the references therein for discussions of aggregation in general settings, and~\cite{Yang_adaptiveregression,Yang_adaptiveregressionBern} for methods in the linear regression setting. In particular, we extend the methods and results given by~\cite{RigTsy10}, who focused on sparse aggregation, where the estimated $\widehat{\boldsymbol{\beta}}_{\mbox{Aggregate}}$ has many entries equal to zero. Their sparse aggregation method combines, in a weighted average, the least squares estimates for each subset of the set of candidate covariates. The weight of a particular model in the average decays, in part, exponentially in the number of covariates in the model, i.e., the sparsity of the model. This strategy encourages a sparse $\widehat{\boldsymbol{\beta}}_{\mbox{Aggregate}}$. We extend this idea by proposing an alternate set of weights that instead depend on the structured sparsity of the sparsity patterns, accordingly encouraging a structured sparse $\widehat{\boldsymbol{\beta}}_{\mbox{Aggregate}}$. We give extensions that cover a wide range of possible structure-inducing strategies. These include overlapping grouped structures and structural penalties based on hierarchical structures or arbitrary set functions. These parallel many convex methods for structured sparsity from the literature; see Section~\ref{structuedAgg}.

Though structure can be useful for interpretability, we must consider whether injecting structure into a sparse method has a beneficial impact under reasonable conditions. In this paper we demonstrate that our estimators perform no worse than sparse estimators when the true model is structured sparse. In the group sparsity case, they can give dramatic improvements. These results hold for a very general class of structural modifications, including overlapping grouped structures.

We first give a review of the sparse aggregation method of~\cite{RigTsy10} in Section~\ref{ES}. In Section~\ref{structuedAgg} we discuss our methods for structured sparse aggregation. We introduce two settings: structurally penalized sparse aggregation (Section~\ref{structPenalizedNorm}), and group structured sparse aggregation (Section~\ref{groupedEllZero}). We present the theoretical properties of these estimators in Section~\ref{Theory}. We then present a simulation study and an application to HIV drug resistance in Section~\ref{applicationSimHIV}. We finally give some concluding remarks and suggestions for future directions in Section~\ref{conclude}. Proofs of general versions of the main theoretical results are given in the supplementary material.
\section{Sparsity Pattern Aggregation}
\label{ES}
The sparse aggregation method of~\cite{RigTsy10} builds on the results of~\cite{leungInfoMixing}. The method creates an aggregate estimator from a weighted average of the $2^M$ ordinary least squares regressions on all subsets of the $M$ candidate covariates. The method encourages sparsity by letting the weight in the average for a particular model increase as the sparsity of the model increases. We first establish our notation and setting, and then present the basic formulas behind the method. We finally discuss its implementation via a stochastic greedy algorithm.

\subsection{Settings and the Sparsity Pattern Aggregation Estimator}
We consider the linear regression model:
\begin{align} \textbf{y} = \textbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}. \end{align}
Here, we have a response $\textbf{y} \in \mathbb{R}^n$, an $n \times M$ data matrix $\textbf{X} = [\textbf{x}_1,\ldots,\textbf{x}_M]$ --- where $\textbf{x}_i \in \mathbb{R}^n$ --- and a vector of coefficients $\boldsymbol{\beta} \in \mathbb{R}^M$. From here on, we assume that $\textbf{X}$ is normalized so $\|\textbf{x}_i\|_2^2 \leq 1 \ \forall i$. The entries of the $n$-vector of errors $\boldsymbol{\epsilon}$ are $\mbox{i.i.d.}$ $N(0,\sigma^2)$. Assume that $\sigma^2$ is known. Let $\|\cdot\|_p$ denote the $\ell_p$ norm for $p \geq 1$. Let $\mbox{supp}(\cdot)$ denote the support of a vector, the set of indices for which the entries are nonzero. Denote $\|\cdot\|_0 = |\mbox{supp}(\cdot)|$ as the $\ell_0$ norm.

Let the set $\mathcal{I} = \{1,\ldots,M\}$ index the set of candidate covariates. Define the set $\mathcal{P} = \{0,1\}^M$; $|\mathcal{P}| = |2^\mathcal{I}| = 2^M$. $\mathcal{P}$ encodes all sparsity patterns over our set of candidate covariates --- the $i$th element of $\textbf{p} \in \mathcal{P}$ is 1 if covariate $i$ is included in the model, and 0 if it is excluded. Let $\widehat{\boldsymbol{\beta}}_\textbf{p}$ be the ordinary least squares solution restricted to the sparsity pattern $\textbf{p}$:
\begin{align} \label{OLS} \widehat{\boldsymbol{\beta}}_\textbf{p} &= \underset{\boldsymbol{\beta} \in \mathbb{R}^M: \ \mbox{supp}(\boldsymbol{\beta}) \subseteq \mbox{supp}(\textbf{p})}{\operatorname{argmin}}\| \textbf{y} - \textbf{X}\boldsymbol{\beta}\|^2_2. \end{align}
Define the training error of an estimate $\widehat{\boldsymbol{\beta}}$ to be:
\begin{align} \mbox{Error}(\widehat{\boldsymbol{\beta}}) = \|\textbf{y} - \textbf{X}\widehat{\boldsymbol{\beta}}\|^2_2. \end{align}
Then, the sparsity pattern aggregate estimator coefficients are defined as:
\begin{align} \label{SPA-estimator} \widehat{\boldsymbol{\beta}}^{SPA}:= \frac{\sum_{\textbf{p} \in \mathcal{P}} \widehat{\boldsymbol{\beta}}_\textbf{p} \exp\left( -\frac{1}{4\sigma^2} \mbox{Error}(\widehat{\boldsymbol{\beta}}_\textbf{p}) - \frac{\|\textbf{p}\|_0}{2} \right) \pi_\textbf{p}}{\sum_{\textbf{p} \in \mathcal{P}} \exp\left( -\frac{1}{4\sigma^2} \mbox{Error}(\widehat{\boldsymbol{\beta}}_\textbf{p}) - \frac{\|\textbf{p}\|_0}{2} \right) \pi_\textbf{p}}. \end{align}
Here, we obtain $\widehat{\boldsymbol{\beta}}^{SPA}$ by taking a weighted average over all sparsity patterns $\textbf{p}$. The weights in this average are the product of an exponentiated unbiased estimate of the risk and a prior, $\pi_\textbf{p}$, over the sparsity patterns.
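For intuition, the following brute-force sketch evaluates Equation~\ref{SPA-estimator} exactly by enumerating all $2^M$ patterns; it is feasible only for very small $M$, and the function and variable names are our own assumptions. The prior is passed in as a function so that any of the priors discussed in this paper may be used; the specific choice of~\cite{RigTsy10} is given in the next display.
\begin{verbatim}
import itertools
import numpy as np

def spa_exact(X, y, sigma2, prior):
    """Brute-force sparsity pattern aggregation over all 2^M patterns."""
    n, M = X.shape
    log_w, betas = [], []
    for p in itertools.product([0, 1], repeat=M):
        pi_p = prior(p)
        if pi_p == 0:
            continue                          # pattern excluded by the prior
        idx = [j for j in range(M) if p[j]]
        beta_p = np.zeros(M)
        if idx:
            # least squares fit restricted to supp(p)
            beta_p[idx], *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        err = np.sum((y - X @ beta_p) ** 2)   # Error(beta_p)
        log_w.append(-err / (4 * sigma2) - sum(p) / 2 + np.log(pi_p))
        betas.append(beta_p)
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())           # numerically stable weights
    return np.sum((w / w.sum())[:, None] * np.array(betas), axis=0)
\end{verbatim}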
This weighted average strategy is based on the work of~\cite{leungInfoMixing}, who demonstrated that this form results in several appealing theoretical properties, which form the basis of the theory of~\cite{RigTsy10} and of our own methods. \cite{RigTsy10} consider the following prior:
\begin{align} \label{SPA_PRIOR} \pi_{\textbf{p}} := \left\{ \begin{array}{lr} \frac{1}{H} \left( \frac{\|\textbf{p}\|_0}{2eM} \right)^{\|\textbf{p}\|_0} & \|\textbf{p}\|_0 \leq R\\ \frac{1}{2} & \|\textbf{p}\|_0 = M\\ 0 & \mbox{else} \end{array} \right. . \end{align}
Here, $H$ is a normalizing constant and $R = \mbox{rank}(\textbf{X})$. The above prior places exponentially less weight on sparsity patterns as their $\ell_0$ norm increases, up-weighting sparse models. The weight of $1/2$ on the OLS solution is included for theoretical calculations; in practice this case is treated like the other cases --- see the supplementary material. This specific choice of prior has many theoretical and computational advantages. In Section~\ref{structuedAgg}, we consider modifications to the prior weight to encourage both structure and sparsity.

\subsection{Computation}
\label{stochasticAlgorithm}
Exact computation of the sparsity pattern aggregate estimator is clearly impractical, since it would require fitting $2^M$ models. \cite{RigTsy10} give a Metropolis-Hastings stochastic greedy algorithm based on work by~\cite{PAC_bayes} for approximating the sparsity pattern aggregate --- the procedure is reviewed in the supplement. The procedure performs a random walk over the hypercube of all sparsity patterns. Beginning with an empty model, in each step, one covariate is randomly selected from the candidate set, and proposed to be added to the model if it is not already in the current model, or to be removed from the current model otherwise. These proposals are accepted or rejected using a Metropolis step, with acceptance probability given by the product of the exponentiated difference in penalized risk and the ratio of prior weights.

Two practical concerns arise from this approach. First, the algorithm assumes that $\sigma^2$ is known. Second, the Metropolis algorithm requires significantly more computation than competing sparse methods. Regarding the variance,~\cite{RigTsy10} proposed a two-stage scheme: the algorithm is run twice, and the residuals from the first run provide an estimate of the variance for the second run. To the second point, a simple analysis of the algorithm reveals that at each iteration of the MCMC method, we must fit a linear regression model. In order to effectively explore the sparsity pattern hypercube, we must run the Markov chain for on the order of $M$ iterations, where $M$ is the number of candidate predictors. We can therefore expect computation times on the order of a linear regression fit times $M$. When $M$ is of much higher order than the number of observations, this is a concern. This makes the sparse estimator difficult to compute in very high dimensional settings. However, in a structured sparse problem, we may have structural information that effectively reduces the order of $M$, such as in group sparsity.
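The following sketch of this random walk is ours and is only illustrative: the names are assumptions, the pattern prior is supplied by the caller, and burn-in as well as the two-stage variance estimate described above are omitted. It averages the restricted least squares fits visited by the chain, whose stationary distribution is proportional to the aggregation weights.
\begin{verbatim}
import numpy as np

def spa_metropolis(X, y, sigma2, prior, n_iter, seed=0):
    """Random-walk Metropolis approximation of the aggregate estimator."""
    rng = np.random.default_rng(seed)
    n, M = X.shape

    def fit(p):                        # restricted least squares fit
        idx = np.flatnonzero(p)
        beta = np.zeros(M)
        if idx.size:
            beta[idx], *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        return beta

    def log_weight(p, beta):
        err = np.sum((y - X @ beta) ** 2)
        return -err / (4 * sigma2) - p.sum() / 2 + np.log(prior(p))

    p = np.zeros(M, dtype=int)         # begin with the empty model
    beta = fit(p)
    lw = log_weight(p, beta)
    avg = np.zeros(M)
    for _ in range(n_iter):
        q = p.copy()
        q[rng.integers(M)] ^= 1        # propose flipping one covariate
        if prior(q) > 0:
            beta_q = fit(q)
            lw_q = log_weight(q, beta_q)
            if np.log(rng.random()) < lw_q - lw:   # Metropolis step
                p, beta, lw = q, beta_q, lw_q
        avg += beta
    return avg / n_iter
\end{verbatim}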
\section{Structured, Sparse Aggregation}
\label{structuedAgg}
The sparsity pattern aggregate estimator derives its sparsity property from placing a prior on sparsity patterns that is decreasing in the $\ell_0$ norm of the pattern. This up-weights models whose sparsity patterns have low $\ell_0$ norm, encouraging sparsity. We propose basing similar priors on different set functions than the $\ell_0$ norm. These set functions are chosen so that the resulting estimator simultaneously encourages sparsity and structure. Thus, the resulting estimators upweight structured, sparse models. We consider two classes of functions: structurally penalized $\ell_0$ norms and grouped $\ell_0$ norms.

\subsection{Penalized Structured Sparsity Aggregate Estimator}
\label{structPenalizedNorm}
Consider penalizing the $\ell_0$ norm with some non-negative set function that measures the structure of the sparsity pattern. We will show later (see Assumption~\ref{assumptionLOS} in Section~\ref{TheoryS1}) that if this set function is non-negative and does not exceed $M$, we can guarantee theoretical properties similar to those of the sparsity pattern aggregate estimator. More formally, consider the following extension:
\begin{align} \textbf{p} \in \mathcal{P}: \|\textbf{p}\|_{0,c} &:= \|\textbf{p}\|_0 + \|\textbf{p}\|_c,\\ \mbox{where: }\|\textbf{p}\|_c := \|\mbox{supp}(\textbf{p})\|_c: 2^\mathcal{I} &\to [0,M] \subset \mathbb{R},\\ \|\textbf{0}\|_c & := 0. \end{align}
We then define the following prior on $\mathcal{P}$:
\begin{align} \label{SSPA_PRIOR} \pi_{\textbf{p},c} := \left\{ \begin{array}{lr} \frac{1}{H_c} \left( \frac{\|\textbf{p}\|_{0,c}}{2eM} \right)^{\|\textbf{p}\|_{0,c}} & \|\textbf{p}\|_{0} \leq R\\ \frac{1}{2} & \|\textbf{p}\|_{0} = M\\ 0 & \mbox{else} \end{array} \right. . \end{align}
Here, $H_c$ is a normalizing constant. For our subsequent theoretical analysis, we note that since $\|\textbf{p}\|_{0,c} \leq 2M$, we know that $H_c \leq 4$. We then define the structured sparsity aggregate (SSA) estimator as:
\begin{align} \widehat{\boldsymbol{\beta}}^{SSA}:= \frac{\sum_{\textbf{p} \in \mathcal{P}} \widehat{\boldsymbol{\beta}}_\textbf{p} \exp\left( -\frac{1}{4\sigma^2} \mbox{Error}(\widehat{\boldsymbol{\beta}}_\textbf{p}) - \frac{\|\textbf{p}\|_0}{2} \right) \pi_{\textbf{p},c}}{\sum_{\textbf{p} \in \mathcal{P}} \exp\left( -\frac{1}{4\sigma^2} \mbox{Error}(\widehat{\boldsymbol{\beta}}_\textbf{p}) - \frac{\|\textbf{p}\|_0}{2} \right) \pi_{\textbf{p},c}}. \end{align}
We now discuss some possible choices for the structural penalty $\|\cdot\|_c$. Note that the general consequence of the prior is that sparsity patterns with higher values of $\|\cdot\|_c$ will be down-weighted. At the same time, the prior still contains the $\ell_0$ norm as an essential element, and so it enforces a trade-off between sparsity and the structure captured by the additional term.
\begin{itemize}
\item \textbf{Covariate Weighting}. Consider the function $\|\textbf{p}\|_c = \sum_{i=1}^M c_i\textbf{p}_i$ such that $\sum_{i=1}^M c_i < M$, $c_i>0 \ \forall \ i$. This has the effect of weighting the covariates, discouraging those with high weight from entering the model. These weights can be determined in a wide variety of ways, including simple prior belief elicitation. This weighting scheme is related to the prior suggested in~\cite{Hoeting&1999} in the Bayesian model averaging setting. This strategy also has the flavor of the individual weighting in the adaptive lasso, where \cite{Zou_2006} considered weighting each coordinate in the lasso using coefficient estimates from OLS or marginal regression.
\item \textbf{Graph Structures}. Generalizing previous work, \cite{Bach10structuredsparse} suggested many structure-inducing set functions in the regularization setting. Many of these functions can be easily adapted to this framework.
For example, given a directed acyclic graph (DAG) structure over $\mathcal{I}$, the following penalty encourages a hierarchical structure:
\begin{align} \|\textbf{p}\|_c = |\{\mbox{Ancestors of supp}(\textbf{p})\}|. \end{align}
If we desire strong hierarchy, we can additionally define $\pi_{\textbf{p},c} := 0$ if the sparsity pattern of $\textbf{p}$ does not obey the hierarchical structure implied by the DAG. Strong hierarchy may also greatly increase the speed of the MCMC algorithm by restricting the number of predictors potentially sampled at each step.

Alternatively, suppose we have a set of weights over pairs of predictors represented by the function $d: \mathcal{I} \times \mathcal{I} \to \mathbb{R}^+$. Given a graph over the candidate covariates, this could correspond to edge weights, or the shortest path between two nodes (covariates). More generally, it could correspond to a natural geometric structure such as a line or a lattice; see~\cite*{percivalStructureSparse} for such an example. We can use these weights to define the cut function:
\begin{align} \|\textbf{p}\|_c = \sum_{i \in \mbox{supp}(\textbf{p}); \ j \notin \mbox{supp}(\textbf{p})} d(i,j). \end{align}
This encourages sparsity patterns to partition the set $\mathcal{I}$ into two maximally disconnected sets, as defined by low values of $d(\cdot,\cdot)$. This would give sparsity patterns corresponding to isolated neighborhoods in the graph.
\item \textbf{Cluster Counting}. We finally propose a new $\|\cdot\|_c$ that measures the structure of the sparsity pattern by counting the number of clusters in $\textbf{p}$. Suppose we now have a symmetric weight function $d: \mathcal{I} \times \mathcal{I} \to \mathbb{R}^+$. Suppose we also set a constant $h>0$. Then, we count the clusters using the following procedure:
\begin{enumerate}
\item Define the fully connected weighted graph over the set supp$(\textbf{p})$ with weights given by $d(\cdot,\cdot)$.
\item Break all edges with weight greater than $h$.
\item Return the remaining number of connected components as $\|\textbf{p}\|_c$.
\end{enumerate}
This definition encourages sparsity patterns that are clustered with respect to $d(\cdot,\cdot)$. For computational considerations, note that this strategy is the same as single linkage clustering with parameter $h$, or building a minimal spanning tree and breaking all edges with weight greater than $h$. For many geometries, this definition of $\|\cdot\|_c$ is easy to compute and update for the MCMC algorithm; a small sketch of this computation appears after this list.
\end{itemize}
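As an illustration of the cluster counting item above (forward-referenced there), the following sketch computes $\|\textbf{p}\|_c$ with a simple union--find pass; the function names are our own assumptions.
\begin{verbatim}
def cluster_count(support, d, h):
    """Cluster-counting penalty: connected components of the graph on
    supp(p) after deleting all edges with weight d(i, j) > h."""
    support = list(support)
    parent = {i: i for i in support}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for a in range(len(support)):
        for b in range(a + 1, len(support)):
            i, j = support[a], support[b]
            if d(i, j) <= h:
                parent[find(i)] = find(j)   # merge the two clusters
    return len({find(i) for i in support})

# e.g., on a line with d(i, j) = |i - j| and h = 1:
# cluster_count({1, 2, 3, 7, 8}, lambda i, j: abs(i - j), 1) == 2
\end{verbatim}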
\subsection{Group Structured Sparsity Aggregate Estimator}
\label{groupedEllZero}
In the framework of structured sparsity, one popular representation of structure is via groups of variables, cf.~\cite{Yuan06modelselection}. For example, a factor covariate with $u$ levels, as in an ANOVA model, can be represented as a collection of $u-1$ indicator variables. We would not select these variables individually, instead preferring to include or exclude them as a group. In the case where these groups partition $\mathcal{I}$, this structure can be easily incorporated into the prior, theory, and implementation of the sparsity pattern aggregate estimator. Suppose we a priori define:
\begin{align}
\mathcal{G} &:= \{g\}\ \mbox{such that } g\subset \mathcal{I} \ \forall g, \ \cup_{g \in \mathcal{G}} g = \mathcal{I}, \mbox{ and } g \cap g' = \emptyset \ \forall g \neq g' \in \mathcal{G}, \\
\|\boldsymbol{\beta}\|_{0,\mathcal{G}} &:= |\{g: g \cap \mbox{supp}(\boldsymbol{\beta}) \neq \emptyset \}|,\\
\|\boldsymbol{\beta}\|_{1,\mathcal{G}} &:= \sum_{g \in \mathcal{G}} \|\boldsymbol{\beta}_g\|_2.
\end{align}
$\|\boldsymbol{\beta}\|_{1,\mathcal{G}}$ is the same as the grouped lasso penalty~\citep{Yuan06modelselection}, which is used to induce sparsity at the group level in the regularization setting. $\|\boldsymbol{\beta}\|_{0,\mathcal{G}}$ is simply the number of groups needed to cover the sparsity pattern of $\boldsymbol{\beta}$. Thus, we have simply replaced sparsity patterns over all subsets of predictors with sparsity patterns over all subsets of groups of predictors. We can show that the theoretical framework of~\cite{RigTsy10} holds with $\mathcal{P} = \{0,1\}^{|\mathcal{G}|}$, $\|\boldsymbol{\beta}\|_0$ replaced with $\|\boldsymbol{\beta}\|_{0,\mathcal{G}}$, and $\|\boldsymbol{\beta}\|_1$ replaced with $\|\boldsymbol{\beta}\|_{1,\mathcal{G}}$. A more interesting and flexible case arises when we allow the elements of $\mathcal{G}$ to overlap. Here, we adopt the framework of~\cite{Jacob:2009:GLO:1553374.1553431}, who gave a norm and penalty for inducing sparsity patterns using overlapping groups in the regularization setting. In this case, we define the groups as any collection of sets of covariates:
\begin{align}
\mathcal{G} &:= \{g\} \ \mbox{such that } g\subset \mathcal{I} \ \forall g; \mbox{and } \cup_{g \in \mathcal{G}} g = \mathcal{I}.
\end{align}
We now define the $\mathcal{G}$-decomposition as the following set of size $|\mathcal{G}|$:
\begin{align}
\mathcal{V}_\mathcal{G}(\boldsymbol{\beta}) &= \{\textbf{v}_g: g \in \mathcal{G}, \textbf{v}_g \in \mathbb{R}^M \mbox{ s.t. } \mbox{supp}(\textbf{v}_g) \subseteq g \},\\
\mbox{such that} & \sum_{\textbf{v}_g \in \mathcal{V}_\mathcal{G}(\boldsymbol{\beta})} \textbf{v}_g = \boldsymbol{\beta}.
\end{align}
That is, $\mathcal{V}_\mathcal{G}(\boldsymbol{\beta})$ contains a single $\textbf{v}_g$ for each $g \in \mathcal{G}$. For arbitrary $\mathcal{G}$ and $\boldsymbol{\beta}$, $\mathcal{V}_\mathcal{G}(\boldsymbol{\beta})$ is not unique. We then define the following functions, analogous to the $\ell_0$ and $\ell_1$ norms of the usual sparsity framework:
\begin{align}
\|\boldsymbol{\beta}\|_{0,\mathcal{G}} &= \min_{G \subset \mathcal{G}; \cup_{g \in G} g = \mbox{supp}(\boldsymbol{\beta})} |G|, \label{groupedEllZeroNorm}\\
\|\boldsymbol{\beta}\|_{1,\mathcal{G}} &= \min_{\mathcal{V}_\mathcal{G}(\boldsymbol{\beta})}\left( \sum_{g \in \mathcal{G}} \|\textbf{v}_g\|_2 \right) \label{groupedEllOneNorm}
\end{align}
In $\|\cdot\|_{1,\mathcal{G}}$, the minimum is over all possible decompositions $\mathcal{V}_\mathcal{G}(\cdot)$. Computing $\|\cdot\|_{0,\mathcal{G}}$ is difficult for arbitrary $\mathcal{G}$, as it is a minimum set cover problem. However, in most applications $\mathcal{G}$ has some regular structure which allows for efficient computation.
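For the partition case, both grouped norms are immediate to compute; a brief \texttt{R} sketch follows (illustrative names only; for overlapping $\mathcal{G}$, Equation~\ref{groupedEllZeroNorm} instead requires solving a small set cover over the groups meeting $\mbox{supp}(\boldsymbol{\beta})$).
\begin{verbatim}
# Grouped l0 and l1 norms when 'groups' partitions {1, ..., M}.
group_l0 <- function(beta, groups)
  sum(sapply(groups, function(g) any(beta[g] != 0)))
group_l1 <- function(beta, groups)
  sum(sapply(groups, function(g) sqrt(sum(beta[g]^2))))

groups <- list(1:2, 3:4, 5:6)
beta   <- c(1, -1, 0, 0, 0, 2)
group_l0(beta, groups)  # 2: two groups meet the support
group_l1(beta, groups)  # sqrt(2) + 0 + 2
\end{verbatim}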
The norm in Equation~\ref{groupedEllZeroNorm} leads to the following choice of prior on $\mathcal{P}$:
\begin{align}
\label{SSPA_GROUPED_PRIOR}
\pi_{\textbf{p}, \mathcal{G}} := \left\{ \begin{array}{lr} \frac{1}{H_ \mathcal{G}} \left( \frac{\|\textbf{p}\|_{0,\mathcal{G}}}{2e|\mathcal{G}|} \right)^{\|\textbf{p}\|_{0, \mathcal{G}}} & \|\textbf{p}\|_{0} \leq R\\ \frac{1}{2} & \|\textbf{p}\|_{0} = M\\ 0 & \mbox{else} \end{array} \right. .
\end{align}
By considering all unions of groups, we obtain an upper bound for the normalizing constant: $H_\mathcal{G} \leq 4$. We then define the grouped sparsity aggregate (GSA) estimator as:
\begin{align}
\widehat{\boldsymbol{\beta}}^{GSA}:= \frac{\sum_{\textbf{p} \in \mathcal{P}} \widehat{\boldsymbol{\beta}}_\textbf{p} \exp\left( -\frac{1}{4\sigma^2} \mbox{Error}(\widehat{\boldsymbol{\beta}}_\textbf{p}) - \frac{\|\textbf{p}\|_0}{2} \right) \pi_{\textbf{p}, \mathcal{G}}}{\sum_{\textbf{p} \in \mathcal{P}} \exp\left( -\frac{1}{4\sigma^2} \mbox{Error}(\widehat{\boldsymbol{\beta}}_\textbf{p}) - \frac{\|\textbf{p}\|_0}{2} \right) \pi_{\textbf{p}, \mathcal{G}}}.
\end{align}
We leave $\mathcal{G}$ general throughout this section and the subsequent theoretical analysis. There are many possible definitions of $\mathcal{G}$, such as connected components or neighborhoods in a graph, groups of factor predictors, or application driven groups --- see~\cite{Jacob:2009:GLO:1553374.1553431} for some examples. In particular, many of the structures mentioned in Section~\ref{structPenalizedNorm} can be encoded as a series of groups.
\section{Theoretical Properties}
\label{Theory}
\cite{RigTsy10} showed that the sparsity pattern aggregate estimator enjoys strong theoretical properties. In summary, they showed that the estimator adapts to the sparsity of the target, measured in both the $\ell_0$ and $\ell_1$ norms. Further, they showed that their sparsity oracle inequalities are optimal in a minimax sense, and in particular superior to rates obtained for popular estimators such as the lasso. Moreover, their results require fewer assumptions than those of the lasso, cf.~\cite{Bickel_simultaneousanalysis}. In the supplementary material, we give a theoretical framework for aggregation using priors of our form --- Equations~\ref{SSPA_PRIOR} and~\ref{SSPA_GROUPED_PRIOR}. The following subsections give specific applications of this theory, yielding a set of structured sparse oracle inequalities, the first of their kind.
\subsection{Structurally Penalized $\ell_0$ Norm}
\label{TheoryS1}
We first state an assumption:
\begin{asm}
\label{assumptionLOS}
For all $\textbf{p} \in \mathcal{P}$ where $R>\|\textbf{p}\|_0>0$:
\begin{align}
\frac{\|\textbf{p}\|_0}{\|\textbf{p}\|_{0,c}} \leq \log\left( 1 + \frac{eM}{\max(\|\textbf{p}\|_{0,c},1)}\right).
\end{align}
\end{asm}
Numerical analysis reveals that a sufficient condition for this assumption is $0 \leq \|\textbf{p}\|_c \leq M$.
\begin{prp}
\label{prop1pen}
Suppose Assumption~\ref{assumptionLOS} holds.
For any $M\geq1, n \geq1$, the structured sparsity aggregate estimator satisfies:
\begin{align}
\mathbb{E} \|\textbf{X}{\widehat{\boldsymbol{\beta}}^{SSA}} - \textbf{y} \|^2_2 \leq \min_{\boldsymbol{\beta} \in \mathbb{R}^M} \left\{ \|\textbf{X}{\boldsymbol{\beta}} - \textbf{y}\|^2_2 + \min \left\{\frac{\sigma^2 R}{n}, \mbox{ } 9 \sigma^2 \frac{M_c(\boldsymbol{\beta})}{n} \log \left( 1 + \frac{eM}{\max(M_c(\boldsymbol{\beta}), 1)}\right) \right\} \right\} + \frac{8\sigma^2}{n}\log 2
\end{align}
Here, $R = \mbox{rank}(\textbf{X})$, and $M_c(\boldsymbol{\beta}) = \|\mbox{sparsity}(\boldsymbol{\beta})\|_{0,c}$, where $\mbox{sparsity}(\boldsymbol{\beta})$ is the sparsity pattern of $\boldsymbol{\beta}$.
\end{prp}
The key additional requirement of the next proposition is the existence of some $\gamma \geq 1$ such that $\forall \textbf{p} \in \mathcal{P}: \|\textbf{p}\|_0 \leq \|\textbf{p}\|_{0,c} \leq \gamma \| \textbf{p}\|_{0}$.
\begin{prp}
\label{prop2pen}
Suppose Assumption~\ref{assumptionLOS} holds. Suppose the structural penalty in the structured sparsity aggregate (SSA) estimator satisfies $\forall \textbf{p} \in \mathcal{P}: \|\textbf{p}\|_0 \leq \|\textbf{p}\|_{0,c} \leq \gamma \| \textbf{p}\|_{0}$ for some $\gamma \geq 1$. Then for any $M\geq 1, n \geq 1$, the SSA estimator satisfies:
\begin{align}
\mathbb{E}\|\textbf{X}{\widehat{\boldsymbol{\beta}}^{SSA}} - \textbf{y} \|^2_2 \leq \min_{\boldsymbol{\beta} \in \mathbb{R}^M} \{ \|\textbf{X}{\boldsymbol{\beta}} - \textbf{y}\|^2_2 + \phi_{n,M}(\boldsymbol{\beta}) \} + \frac{\sigma^2}{n}(9\log(1+eM) + 8\log2)
\end{align}
where $\phi_{n,M}(0) := 0$ and for $\boldsymbol{\beta} \neq 0$:
\begin{align}
\phi_{n,M}(\boldsymbol{\beta}) = \min\left[ \frac{\sigma^2 R}{n},\frac{9\sigma^2 M_c(\boldsymbol{\beta})}{n} \log \left( 1+ \frac{eM}{\max(M_c(\boldsymbol{\beta}),1)}\right), \frac{11\sigma \sqrt{\gamma}\|\boldsymbol{\beta}\|_1}{\sqrt{n}}\sqrt{\log \left( 1 + \frac{3eM\sigma}{\|\boldsymbol{\beta}\|_1\sqrt{\gamma n}}\right)} \ \right]
\end{align}
\end{prp}
\subsection{Grouped $\ell_0$ Norm}
\label{TheoryS2}
We first state an assumption:
\begin{asm}
\label{assumptionGRPLO}
For all $\textbf{p} \in \mathcal{P}$ where $R>\|\textbf{p}\|_0>0$:
\begin{align}
\frac{\|\textbf{p}\|_0}{\|\textbf{p}\|_{0,\mathcal{G}}} \leq \log\left( 1 + \frac{e|\mathcal{G}|}{\max(\|\textbf{p}\|_{0,\mathcal{G}},1)}\right).
\end{align}
\end{asm}
This assumption does not hold uniformly for all sparsity patterns and all choices of $\mathcal{G}$. A sufficient condition for the assumption is:
\begin{align}
\max_{g \in \mathcal{G}} |g| \leq \log(1 + e|\mathcal{G}|/R).
\end{align}
In particular, for sparsity patterns with low $\ell_0$ norm relative to $M$, the assumption is satisfied provided the cardinality of $\mathcal{G}$ is large enough.
\begin{prp}
\label{prop1group}
Suppose Assumption~\ref{assumptionGRPLO} holds.
For any $M\geq1, n \geq1$, the grouped sparsity aggregate estimator satisfies:
\begin{align}
\mathbb{E} \|\textbf{X}{\widehat{\boldsymbol{\beta}}^{GSA}} - \textbf{y} \|^2_2 \leq \min_{\boldsymbol{\beta} \in \mathbb{R}^M} \left\{ \|\textbf{X}{\boldsymbol{\beta}} - \textbf{y}\|^2_2 + \min \left\{\frac{\sigma^2 R}{n}, \mbox{ } 9 \sigma^2 \frac{M_\mathcal{G}(\boldsymbol{\beta})}{n} \log \left( 1 + \frac{e|\mathcal{G}|}{\max(M_\mathcal{G}(\boldsymbol{\beta}), 1)}\right) \right\} \right\} + \frac{8\sigma^2}{n}\log 2
\end{align}
Here, $R = \mbox{rank}(\textbf{X})$, and $M_\mathcal{G}(\boldsymbol{\beta}) = \|\mbox{sparsity}(\boldsymbol{\beta})\|_{0,\mathcal{G}}$, where $\mbox{sparsity}(\boldsymbol{\beta})$ is the sparsity pattern of $\boldsymbol{\beta}$.
\end{prp}
\begin{prp}
\label{prop2group}
Suppose Assumption~\ref{assumptionGRPLO} holds. Then for any $M\geq 1, n \geq 1$, the grouped sparsity aggregate estimator satisfies:
\begin{align}
\mathbb{E}\|\textbf{X}{\widehat{\boldsymbol{\beta}}^{GSA}} - \textbf{y} \|^2_2 \leq \min_{\boldsymbol{\beta} \in \mathbb{R}^M} \{ \|\textbf{X}{\boldsymbol{\beta}} - \textbf{y}\|^2_2 + \phi_{n,\mathcal{G}}(\boldsymbol{\beta}) \} + \frac{\sigma^2}{n}(9\log(1+e|\mathcal{G}|) + 8\log2)
\end{align}
where $\phi_{n,\mathcal{G}}(0) := 0$ and for $\boldsymbol{\beta} \neq 0$:
\begin{align}
\phi_{n,\mathcal{G}}(\boldsymbol{\beta}) = \min\left[ \frac{\sigma^2 R}{n},\frac{9\sigma^2 M_\mathcal{G}(\boldsymbol{\beta})}{n} \log \left( 1+ \frac{e|\mathcal{G}|}{\max(M_\mathcal{G}(\boldsymbol{\beta}),1)}\right), \frac{11\sigma \|\boldsymbol{\beta}\|_{1,\mathcal{G}}}{\sqrt{n}}\sqrt{\log \left( 1 + \frac{3e|\mathcal{G}|\sigma}{\|\boldsymbol{\beta}\|_{1,\mathcal{G}}\sqrt{n}}\right)} \ \right]
\end{align}
\end{prp}
\subsection{Discussion of the Results}
For each class of prior, we give two main results. The first shows that each procedure enjoys adaptation in terms of the appropriate structured sparsity measuring set function --- $\|\cdot\|_{0,c}$ and $\|\cdot\|_{0,\mathcal{G}}$, respectively. The bound is thus best when the structured sparsity of the regression function is small, as measured by the appropriate set function; the estimator adapts to the structured sparsity of the target. The second demonstrates that the estimators also adapt to structured sparsity measured in terms of a corresponding convex norm --- $\|\cdot\|_1$ and $\|\cdot\|_{1,\mathcal{G}}$. This is useful when some entries of $\boldsymbol{\beta}$ contribute little to the convex norm, but still incur a penalty in the corresponding set function. For example, a small isolated entry of $\boldsymbol{\beta}$ contributes little to the $\ell_1$ norm, but is heavily weighted in the structurally penalized $\ell_0$ norm. Compared with the corresponding results in~\cite{RigTsy10}, these results reveal some benefits and drawbacks of adding structure to the sparse aggregation procedure. In the penalized case, the results show that the structured estimator enjoys the same rates as the sparse estimator when the penalty is low. When structure is not present in the target, the sparse estimator is superior, as expected. Proposition~\ref{prop2pen} is still given in terms of the $\ell_1$ norm, which only measures sparsity. The price for adding structure to the procedure appears in the additional factor of $\sqrt{\gamma}$.
While these results are not dramatic, the previous discussion (Section~\ref{structPenalizedNorm}) and the subsequent simulation study (Section~\ref{applicationSimHIV-sim}) show that the penalized version is flexible and powerful in practice. In the grouped case, the results are more appealing. Since the grouped $\ell_0$ and $\ell_1$ norms are potentially much smaller than their ungrouped counterparts, the results here give better constants than their sparse versions. These improvements may be dramatic: previous work on the grouped lasso, cf.~\cite{Lounici_takingadvantage} and \cite{zhang_benefits}, revealed great benefits to grouped structures. Following the setting of~\cite{Lounici_takingadvantage}, consider a multi-task regression problem in which we desire the same sparsity pattern across tasks. Then, if the number of tasks is of the same or higher order than the number of samples per task ($n$), a grouped aggregation approach would reduce the order (in $n$) of the rates in the theoretical results. We can also expect such improvements for collections of groups that overlap only mildly. The propositions given in the previous subsections are simplified versions of those proved for the sparsity pattern aggregate estimator in~\cite{RigTsy10}. The full results can be extended to our estimators; we omit the derivation for brevity. In addition to these more complex statements, \cite{RigTsy10} also gave a detailed theoretical discussion of these results in comparison to the lasso and BIC aggregation estimators~\citep{buneaAggregation}, concluding that their estimator enjoys superior and near optimal rates. Since our rates differ by no more than constants when the target is truly structured and sparse, we conclude that in such settings a structured approach can give great benefits.
\section{Applications}
\label{applicationSimHIV}
\subsection{Simulation Study}
\label{applicationSimHIV-sim}
We now turn to a simulation study. \cite{RigTsy10} presented a detailed simulation study comparing their sparsity pattern aggregate estimator --- see Section~\ref{ES} --- to numerous sparse regression methods, and demonstrated that the sparsity pattern aggregate was superior to the competitor methods. Therefore, we primarily compare our technique to the sparsity pattern aggregate estimator. We will show that the structured sparsity pattern aggregate estimator is superior in appropriate settings where the target is structured. For brevity, we consider only the structurally penalized $\ell_0$ norm. In the following, we employ our cluster counting penalty, described in Section~\ref{structPenalizedNorm}, with $h=3$. We consider two settings that offer natural geometries and notions of structure: connected components in a line structure (see, e.g., the top left display in Figure~\ref{oneDimOneRunFigure}), and blocks in a two-dimensional lattice (see, e.g., the top left display in Figure~\ref{twoDimOneRunFigure}). Using these natural geometries, we let $d(\cdot, \cdot)$ be Euclidean distance. We set the appropriate entries of a true coefficient vector $\overline{\boldsymbol{\beta}}$ to $+1$ or $-1$ uniformly at random. The entries of the $n \times M$ design matrix $\textbf{X}$ are independent standard normal variables. We additionally generate an $n \times M$ matrix $\textbf{X}_{test}$ to measure prediction performance, see below.
We consider different values of $n$, the number of data points; $M$, the number of candidate covariates, represented as columns in $\textbf{X}$; $C$, the number of clusters, as measured by the cluster counting penalty applied to the true sparsity pattern; and $C_{on}$, the number of nonzero entries per cluster in $\overline{\boldsymbol{\beta}}$. We enforce non-overlapping clusters, giving $\|\overline{\boldsymbol{\beta}}\|_0 = C\times C_{on}$. For direct comparison we follow~\cite{RigTsy10}, set the noise level $\sigma = \|\overline{\boldsymbol{\beta}}\|_0/9$, and run the MCMC algorithm for 7000 iterations, discarding the first 3000. We repeat each simulation setting 250 times. We use two metrics to measure performance. First, prediction risk:
\begin{align}
\mbox{Prediction}(\widehat{\boldsymbol{\beta}}) := \frac{\|\textbf{X}_{\mbox{test}}(\overline{\boldsymbol{\beta}} - \widehat{\boldsymbol{\beta}})\|_2^2}{n}.
\end{align}
Our second metric measures the estimation of $\overline{\boldsymbol{\beta}}$:
\begin{align}
\mbox{Recovery}(\widehat{\boldsymbol{\beta}}) := \frac{\|\overline{\boldsymbol{\beta}} - \widehat{\boldsymbol{\beta}}\|_2^2}{\|\overline{\boldsymbol{\beta}}\|_2^2}.
\end{align}
In each of the above, $\widehat{\boldsymbol{\beta}}$ denotes some estimate of $\overline{\boldsymbol{\beta}}$. We compare our structured sparsity aggregate estimator (SSA) against the sparsity pattern aggregate estimator (SPA) and the lasso --- note that the true coefficients, while clustered, are not smooth, making these settings inappropriate applications for structured smooth estimators such as the 1d or 2d fused lasso~\citep*{fusedLasso}. For the lasso, we choose the tuning parameter $\lambda$ using 10-fold cross validation, and refit the model using ordinary least squares regression, both within and outside cross validation. This strategy effectively uses the lasso only for its variable selection properties and avoids shrinkage in $\widehat{\boldsymbol{\beta}}$. We employ the \texttt{R} package {\tt glmnet}~\citep{glmnetRpackage} to fit the lasso. Tables~\ref{penalizedEllZeroSimulationOneD} and~\ref{penalizedEllZeroSimulationTwoD} display the results. In all cases, the structured sparse estimator is superior to the sparse estimator, and both methods are superior to the lasso. Although the mean prediction and recovery for the aggregation estimators are within two standard errors of each other, for paired runs on the same simulated data set the structured sparse estimator is superior in both metrics at least 95\% of the time, for all settings. Figures~\ref{oneDimOneRunFigure} and~\ref{twoDimOneRunFigure} display results for a sample sparsity pattern in both settings. We can clearly see the superiority of the aggregation methods over the lasso. In both figures, both aggregation methods correctly estimated the true sparsity pattern. However, with the sparse estimator, the Markov chain spent many iterations adding and dropping covariates far away from the true clusters. This did not happen with the structured estimator, giving a much sharper picture of the sparsity pattern in both cases. Rejecting these wandering steps gave the structured estimator better numerical performance in both prediction and estimation.
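To fix ideas, here is a condensed \texttt{R} sketch of one replicate of the one-dimensional setting and of the two metrics. The parameter values and names are illustrative; this is not the code used for the reported results.
\begin{verbatim}
set.seed(1)
n <- 100; M <- 200; C <- 3; C_on <- 5
beta <- numeric(M)
starts <- (0:(C - 1)) * (M %/% C) + 1   # non-overlapping clusters
for (s in starts)
  beta[s:(s + C_on - 1)] <- sample(c(-1, 1), C_on, replace = TRUE)
sigma  <- sum(beta != 0) / 9            # noise level, as above
X      <- matrix(rnorm(n * M), n, M)
X_test <- matrix(rnorm(n * M), n, M)
y <- as.vector(X %*% beta + rnorm(n, sd = sigma))

prediction <- function(b) sum((X_test %*% (beta - b))^2) / n
recovery   <- function(b) sum((beta - b)^2) / sum(beta^2)
\end{verbatim}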
\subsection{Application to HIV Drug Resistance}
We now explore a data application which calls for a structured sparse approach. Standard drug therapy for Human Immunodeficiency Virus (HIV) inhibits the activity of proteins produced by the virus. HIV is able to change its protein structure easily and become resistant to the drugs. The goal is then to determine which mutations drive this resistance. We use regression to determine the relationship between a particular strain of HIV's resistance to a drug and its protein sequence. \cite{Soo} studied this problem using sparse regression techniques. Casting this problem as linear regression, the continuous response is drug resistance, measured by the $\log$ dosage of the drug needed to effectively negate the virus' reproduction. The covariates derive from the protein sequences. Each sequence is 99 amino acids long, so we view each of these 99 positions as a factor. Breaking each of these factors into levels, we obtain mutation covariates, which form our set of candidate predictors. If a location displays $A$ different amino acids across the data, we obtain $A-1$ mutation covariates. Thus, each covariate is an indicator variable for the occurrence of a particular amino acid at a particular location in the protein sequence. Note that many positions in the protein sequence display no variation throughout the data set --- these positions always display the same amino acid --- and are therefore dropped from the analysis. In summary, the predictors are mutations in the sequence, and the response is the log dosage. A sparse model would show exactly which mutations are most important in driving resistance. We are interested in which mutations predict drug resistance, rather than only which locations predict drug resistance. Therefore, we do not select the mutation covariates from a location jointly; we instead treat each mutation separately. Additional biological information gives us reason to believe a structured, sparse model is more appropriate. Proteins typically function via active sites; that is, localized areas of the protein are more important to the protein's function than others. Viewing the sequence as a simple linear structure, we expect selected mutations to occur clustered in this structure. We can cluster the mutations by defining a distance in a straightforward way: since each mutation covariate is also associated with a location, we can define $d(\cdot, \cdot)$, the distance between a pair of mutation covariates, as the absolute difference in their locations. We apply our structured sparse aggregation (SSA) method along with sparse aggregation (SPA), forward stepwise regression, and the lasso to the data for the drug Saquinavir (SQV) --- see~\cite*{database} for details on the data and~\cite{percivalStructureSparse} for another structured sparse approach to the analysis; the data are available as a data set in the \texttt{R} package {\tt BLINDED}~\citep{casparRpackage}. We set $h=3$ in our cluster counting structural penalty for the structured aggregation method. We display a comparison of the sparsity patterns for the methods in Figure~\ref{hivFigure}. We see that each method selects similar mutations. As expected, the structured sparse estimator encourages clustered selection of mutations, giving us two clear important regions. In contrast, the sparse aggregation estimator, stepwise regression, and the lasso suggest mutations across the protein sequence. We finally evaluate the predictive performance of the four methods using data splitting. We split the data into three equal groups, and compare the mean test error from using each set of two groups as a training set, and the third as a test set.
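For concreteness, the distance and the splitting scheme can be sketched in \texttt{R} as follows; \texttt{loc} (mutation locations) and \texttt{fit\_fn} (any of the fitting routines above) are placeholders, not part of any package's interface.
\begin{verbatim}
# Distance between mutation covariates i and j: difference in location.
d <- function(i, j) abs(loc[i] - loc[j])

# Three-fold data splitting; report the mean test error.
fold <- sample(rep(1:3, length.out = nrow(X)))
test_err <- sapply(1:3, function(k) {
  b <- fit_fn(X[fold != k, ], y[fold != k])
  mean((y[fold == k] - X[fold == k, ] %*% b)^2)
})
mean(test_err)
\end{verbatim}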
Table~\ref{hivTable} shows that both aggregation estimators are superior to the lasso and stepwise regression. Although the mean test error is lower for the sparse aggregation estimator, it is within a single standard deviation of the structured estimator's mean test error. Therefore, the structured estimator gives comparable predictive power, with the extra benefit of superior biological interpretability.
\section{Conclusion}
\label{conclude}
In this paper, we proposed simple modifications of a powerful sparse aggregation technique, giving a framework for structured sparse aggregation. We presented methods for two main classes of structured sparsity: set function based structure and group based structure. These aggregation estimators place the highest weight on models whose sparsity patterns are the most sparse and structured. We showed that these estimators enjoy appropriate oracle inequalities --- they adapt to the structured sparsity of the targets. Further, we showed that in practice these methods are effective in the appropriate settings. In the theory throughout this paper, we considered a particular structure in the prior in order to easily compare theoretical properties with sparse estimators. In practice, the form of the prior may be modified further. For example, we need not restrict our structural penalty to be less than the number of predictors; in our current formulation, this restriction forced us to consider sparsity and structure with equal weight. Although both the sparsity pattern and structured sparsity pattern estimators show good promise theoretically and in practice, several practical challenges remain. First, while~\cite{RigTsy10} suggested a strategy for dealing with the assumption that $\sigma^2$ is known, it requires running another Markov chain to find a good estimate for $\sigma^2$. This strategy is slow, and the stochastic greedy algorithm is much slower than comparable sparse techniques. While the algorithm is not prohibitively slow, speedups would greatly enhance its utility. Currently, the algorithm must be run for at least approximately $10 \times M$ iterations so that it has time to search over all $M$ covariates. Since each iteration requires an OLS regression fit, if $M$ is of the same or greater order than $n$, this is a significant drawback. Thus, the estimator does not scale well to high dimensions. In future work, we can also consider a specialized version of the stochastic greedy algorithm adapted to our structured priors.
{ "timestamp": "2011-11-22T02:00:19", "yymm": "1111", "arxiv_id": "1111.4494", "language": "en", "url": "https://arxiv.org/abs/1111.4494", "abstract": "We introduce a method for aggregating many least squares estimator so that the resulting estimate has two properties: sparsity and structure. That is, only a few candidate covariates are used in the resulting model, and the selected covariates follow some structure over the candidate covariates that is assumed to be known a priori. While sparsity is well studied in many settings, including aggregation, structured sparse methods are still emerging. We demonstrate a general framework for structured sparse aggregation that allows for a wide variety of structures, including overlapping grouped structures and general structural penalties defined as set functions on the set of covariates. We show that such estimators satisfy structured sparse oracle inequalities --- their finite sample risk adapts to the structured sparsity of the target. These inequalities reveal that under suitable settings, the structured sparse estimator performs at least as well as, and potentially much better than, a sparse aggregation estimator. We empirically establish the effectiveness of the method using simulation and an application to HIV drug resistance.", "subjects": "Methodology (stat.ME)", "title": "Structured Sparse Aggregation", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812345563904, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.7079433975583194 }
https://arxiv.org/abs/2010.09712
independence: Fast Rank Tests
In 1948 Hoeffding devised a nonparametric test that detects dependence between two continuous random variables X and Y, based on the ranking of n paired samples (Xi,Yi). The computation of this commonly-used test statistic takes O(n log n) time. Hoeffding's test is consistent against any dependent probability density f(x,y), but can be fooled by other bivariate distributions with continuous margins. Variants of this test with full consistency have been considered by Blum, Kiefer, and Rosenblatt (1961), Yanagimoto (1970), Bergsma and Dassios (2010). The so far best known algorithms to compute these stronger independence tests have required quadratic time. Here we improve their run time to O(n log n), by elaborating on new methods for counting ranking patterns, from a recent paper by the author and Leng (SODA'21). Therefore, in all circumstances under which the classical Hoeffding independence test is applicable, we provide novel competitive algorithms for consistent testing against all alternatives. Our R package, independence, offers a highly optimized implementation of these rank-based tests. We demonstrate its capabilities on large-scale datasets.
\bibliographystyle{apalike}
\fancyhead[R]{\small\ifnum\value{page}<2\relax\else\thepage\fi}
\usepackage[usenames]{xcolor}
\definecolor{webgreen}{rgb}{0,.5,0}
\definecolor{webbrown}{rgb}{.6,0,0}
\usepackage[colorlinks=true, linkcolor=webgreen, urlcolor=blue, filecolor=webbrown, citecolor=webgreen]{hyperref}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{tikz}
\usepackage{url}
\usepackage{relsize}
\usepackage{enumitem}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usepackage{tcolorbox}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem} \newtheorem*{theorem*}{Theorem}
\newtheorem{corollary}[theorem]{Corollary} \newtheorem*{corollary*}{Corollary}
\newtheorem{lemma}[theorem]{Lemma} \newtheorem*{lemma*}{Lemma}
\newtheorem{proposition}[theorem]{Proposition} \newtheorem*{proposition*}{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition} \newtheorem*{definition*}{Definition}
\newtheorem{example}[theorem]{Example} \newtheorem*{example*}{Example}
\newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{conjecture*}{Conjecture}
\newtheorem{problem}[theorem]{Problem} \newtheorem*{problem*}{Problem}
\newtheorem{todo}[theorem]{TO DO} \newtheorem*{todo*}{TO DO}
\newtheorem{question}[theorem]{Question} \newtheorem*{question*}{Question}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark} \newtheorem*{remark*}{Remark}
\newtheorem{notation}[theorem]{Notation} \newtheorem*{notation*}{Notation}
\numberwithin{theorem}{section}
\begin{document}
\runningtitle{\MakeLowercase{{independence:}} Fast Rank Tests}
\twocolumn[
\aistatstitle{\MakeLowercase{\LARGE\texttt{independence:}} Fast Rank Tests}
\aistatsauthor{ Chaim Even-Zohar }
\aistatsaddress{ The Alan Turing Institute } ]
\begin{abstract}
In 1948 Hoeffding devised a nonparametric test that detects dependence between two continuous random variables $X$ and $Y$, based on the ranking of $n$ paired samples $(X_i,Y_i)$. The computation of this commonly-used test statistic takes $O(n \log n)$ time. Hoeffding's test is consistent against any dependent probability density $f(x,y)$, but can be fooled by other bivariate distributions with continuous margins. Variants of this test with full consistency have been considered by Blum, Kiefer, and Rosenblatt (1961), Yanagimoto (1970), Bergsma and Dassios (2010). The best previously known algorithms to compute these stronger independence tests have required quadratic time. Here we improve their run time to $O(n \log n)$, by elaborating on new methods for counting ranking patterns, from a recent paper by the author and Leng (SODA'21). Therefore, in all circumstances under which the classical Hoeffding independence test is applicable, we provide novel competitive algorithms for consistent testing against all alternatives. Our \texttt{R}~package, \texttt{independence}, offers a highly optimized implementation of these rank-based tests. We demonstrate its capabilities on large-scale datasets.
\end{abstract}
\section{OVERVIEW}
Part~\ref{setting} briefly outlines the current need for consistent, fast, and distribution-free independence testing and measuring. Part~\ref{hoef} surveys Hoeffding's original $D$ measure of dependence \citep{hoeffding1948non}. Since it is solely based on ranks, the derived test statistic is indeed distribution-free. Hoeffding's test is only consistent against absolutely continuous bivariate distributions. Various examples are given in Part~\ref{fooling} of simple dependent alternatives that Hoeffding's $D$ fails to detect.
Part~\ref{variations} presents the two main refinements of Hoeffding's dependence measure that yield consistent distribution-free tests against all alternatives. They go back to works of \cite{blum1961distribution}, \cite{yanagimoto1970measures}, and others. Recent works by \cite{bergsma2014consistent}, and others, have popularized and extended these variants of Hoeffding's test. Part~\ref{algo} gives $O(n \log n)$ time algorithms for these consistent variants of Hoeffding's independence test, based on techniques of \cite{even2019counting}. Such an improvement from quadratic to near linear time has been regarded as the most important factor in selecting an independence test \citep{chatterjee2020new}, and may be considered as overcoming a natural barrier \citep[cf.][]{deheuvels2006quadratic}. By designing a dedicated algorithm for these statistics, we achieve an additional \mbox{5x-10x} improvement over direct application of our general method. Part~\ref{rpackage} briefly describes the functionality of our newly developed \texttt{R} package \texttt{independence} \citep[available on \texttt{CRAN},][]{independenceR}, and demonstrates its unique capabilities. Finally, in Part~\ref{related} we briefly survey other ranking-based independence tests that have been proposed within the same setting as Hoeffding's test.
\section{SETTING}\label{setting}
Analyzing the empirical relationship between two variables is a classical subject in statistics. Perhaps the most basic problem given bivariate data is to decide whether there is any relation at all between the two variables. It is therefore natural that \emph{testing for independence} has long been a primary interest of statistical researchers and practitioners. One crucial feature is a test's \emph{consistency}. Traditional tests of linear correlation, such as Pearson's $r$, Spearman's $\rho$, or Kendall's $\tau$, may rule out the independence hypothesis in some cases. However, they are not consistent, since many forms of dependency are uncorrelated, and hence undetectable by them. In general, one seeks consistent tests that assume as little as possible about the underlying distribution, and hence eliminate many alternatives given enough data. Another key feature is the \emph{computational complexity} of the proposed test. In the times of ``big data'', we are able to discover latent connections and subtle patterns that are often detectable only with a great number of samples. In a typical scenario of, say, several million data records, and testing potential relations between tens of attributes, even the gap between linear and quadratic runtime is a matter of vital importance. This work provides \emph{consistent and efficient} independence tests for $n$ paired samples of two real-valued variables $X$ and~$Y$. The variables are only assumed to be non-atomic, and the tests are \emph{distribution-free}. This is the classical setting of Hoeffding's $D$ and its variants, where one has so far been compelled to choose between consistency and efficiency. For other, simpler data types, consistent linear-time methods are already widely available, such as chi-square tests for categorical variables. In contrast, testing for independence in more involved settings, such as real vectors and processes, is an active area of research. The rank-based tests that we implement here extend to more general settings in various ways, and it is plausible that our approach can help accelerate and optimize some of these generalized methods.
The implemented tests are \emph{nonparametric}, as no assumptions are made on the distribution of $X$ or $Y$, other than continuity which is normally evident from the nature of the data. Given $n$ independent observations, $\{(X_1,Y_1),\dots,(X_n,Y_n)\}$, we only make use of the relative ranking of the two coordinates, which is invariant to any monotone reparametrization of the variables. This information may be encoded by the rank-matching permutation: $\pi : \{1,\dots,n\} \to \{1,\dots,n\}$ where $\pi(\textrm{rank\,}X_i) = \textrm{rank\,}Y_i$. Since $\pi$ is uniformly random under independence, the null distribution only depends on~$n$, and conveniently one can use distribution-free $p$-values. \section{HOEFFDING'S TEST}\label{hoef} We first review Hoeffding's independence test, its usefulness, and limitations. Our presentation mostly follows the seminal paper by \cite{hoeffding1948non}. \subsection{Hoeffding's Independence Coefficient} Let $X$ and $Y$ be two real-valued random variables on the same probability space, with a joint cumulative distribution function $F(x,y) = P(X \leq x, Y \leq y)$. Their marginal distribution functions are assumed to be continuous, and are denoted respectively by \begin{eqnarray*} &F_X(x) = F(x, \infty) = P(X \leq x) \\ &F_Y(y) = F(\infty, y) = P(Y \leq y) \end{eqnarray*} The null hypothesis, that the two variables are independent, is expressed as: $F \equiv F_XF_Y$. In order to measure how much $F$ departs from independence, Hoeffding proposed the following Cram\'er--von Mises type statistic. $$ D \;=\; \int\left(F(x,y) - F_X(x)F_Y(y)\right)^2dF(x,y) $$ The integration $dF$ over $\mathbb{R}^2$ can be interpreted in the sense of Riemann--Stieltjes or Lebesgue--Stieltjes. This notation emphasizes the repeated use of the same $F$ both for evaluating the deviation and for averaging it. It is clear that $D=0$ if $X$ and $Y$ are independent. Conversely, \citet[Theorem~3.1]{hoeffding1948non} showed that $D>0$ whenever the joint distribution is dependent and \emph{absolutely} continuous, that is, has a joint density function $f(x,y)$ integrating to~$F(x,y)$. Next, we briefly sketch Hoeffding's derivation of an estimator for~$D$, since it will be useful later on. Expanding the square, we write \begin{equation} \label{dfffff} D \,=\, \int\left(F\,F - 2\,F\,F_XF_Y + F_XF_YF_XF_Y\right)dF \tag{$\star$} \end{equation} The cumulative distribution functions are then replaced by integrals, \begin{align*} \textstyle F(x,y) \;&=\; \int_{\mathbb{R}^2} I(x'<x,y'<y)\,dF(x',y') \\ F_X(x) \;&=\; \int_{\mathbb{R}^2} I(x'<x)\,dF(x',y') \\ 1 \;&=\; \int_{\mathbb{R}^2} dF(x',y') \end{align*} where $I(\cdots)=1$ if the given condition holds and~$0$ otherwise. We end up with an expression of the following form. $$ D \;=\; \int\!\!\int\!\!\int\!\!\int\!\!\int \phi\left(x_1,y_1,\dots,x_5,y_5\right) dF\,dF\,dF\,dF\,dF$$ The function $\phi:(\mathbb{R}^2)^{5} \to \mathbb{R}$ can be written explicitly, and only depends on the order relations between its inputs, and not on the distribution~$F$. \subsection{Hoeffding's Test Statistic} \label{hoefdn} Recall that we are given $n$ independent observations, $(X_i,Y_i)$. Hoeffding's test statistic is essentially obtained by using the empirical distribution instead of the unknown~$F$. 
Specifically, the integrals turn into a finite average:
\begin{equation}\label{dn}\tag{$\diamond$}
D_n \;=\;\; \frac{\displaystyle{\sum}_{i,j,k,l,m} \phi\left(X_i,Y_i,\dots,X_m,Y_m\right)}{n(n-1)(n-2)(n-3)(n-4)}
\end{equation}
The sum is over all combinations of five distinct indices $i,j,k,l,m \in \{1,\dots,n\}$. Estimators obtained this way are called U-statistics, and were developed and studied by \cite{hoeffding1948class}. Such an estimator is unbiased, $E(D_n)=D$, and also consistent, $D_n \to D$ in probability, where $D$ is the population parameter as above. This expression for $D_n$ can be interpreted in terms of rank patterns, in the sense of the following definitions. Recall that any set of points $\left\{(x_1,y_1),\dots,(x_n,y_n)\right\}$ induces a permutation $\pi \in S_n$ such that $\pi(\textrm{rank\,}x_i) = \textrm{rank\,}y_i$. We use the one-line notation $\pi(1)\pi(2)\dots\pi(n)$ for permutations, for example $\pi = 1423$. The number of $k$-point subsets inducing a given \emph{pattern} $\sigma \in S_k$ is denoted by $\#\sigma(\pi)$, or $\#\sigma$ for short. For example $\#21(1423)=2$, since the pattern $21$ occurs twice, at the marked entries $1\underline{4}\underline{2}3$ and~$1\underline{4}2\underline{3}$. The statistic $D_n$ can now be represented by counting patterns of order 5 in the permutation induced by $(X_1,Y_1),\dots,(X_n,Y_n)$. Indeed, by a straightforward case analysis of the kernel $\phi$, the sum in Equation~(\ref{dn}) equals
\begin{multline*}
4\left(\#12345 + \#12354 + \dots + \#54321\right) \\ \;-\; 2\left(\#14325 + \#14352 + \dots + \#52341\right)
\end{multline*}
This combination involves all occurrences of 5-patterns whose middle entry is~3. The eight patterns where it separates 1 and 2 from 4 and 5 are added, and the other sixteen patterns are subtracted. For every point $(X_i,Y_i)$, let $a_i$, $b_i$, $c_i$ and $d_i$ be the number of other points in the four quadrants around it, as follows.
\begin{align*}
a_i = \left|\left\{j\;:\;\substack{X_j<X_i\\Y_j>Y_i}\right\}\right| && b_i = \left|\left\{j\;:\;\substack{X_j>X_i\\Y_j>Y_i}\right\}\right| \\
c_i = \left|\left\{j\;:\;\substack{X_j<X_i\\Y_j<Y_i}\right\}\right| && d_i = \left|\left\{j\;:\;\substack{X_j>X_i\\Y_j<Y_i}\right\}\right|
\end{align*}
\cite{hoeffding1948non} gives a formula for computing~$D_n$, which can be rephrased by means of these variables as follows:
\begin{multline*}
D_n \;=\; \tfrac{1}{n(n-1)\cdots(n-4)} \sum\limits_{i=1}^{n} \scalebox{1.2}{[} a_i(a_i-1)d_i(d_i-1) \\ \;+\; b_i(b_i-1)c_i(c_i-1) \;-\; 2\,a_i b_i c_i d_i \scalebox{1.2}{]}
\end{multline*}
\cite{blum1961distribution} studied Hoeffding's statistic, and proposed a slightly more convenient estimator:
$$ B_n \;=\; \frac{1}{n^5}\sum\limits_{i=1}^n\,\left[a_id_i-b_ic_i\right]^2 $$
Asymptotically in~$n$, these two statistics differ by a constant offset to the normalized null distribution, and by a negligible term under continuous alternatives. The sequence $\{a_i\}$ is essentially the \emph{inversion code} of the induced permutation~$\pi$ \cite[p.~36]{stanley2011enumerative}. The computation of this sequence, and thereby the other three, in $O(n \log n)$ time is a textbook exercise, see~\cite[14.1-7, for example]{cormen2009introduction} or the tools we use in~\S\ref{algo}. Hence the computation of Hoeffding's~$D_n$ costs $O(n \log n)$ time.
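For reference, the quadrant-count formula above translates directly into the following naive \texttt{R} sketch. It runs in quadratic time and assumes no ties (continuous data); it is meant only as a check against faster implementations, not as the $O(n \log n)$ algorithm.
\begin{verbatim}
# Naive O(n^2) reference for Hoeffding's D_n (assumes no ties).
hoeffding_Dn <- function(x, y) {
  n <- length(x)
  a <- b <- cc <- d <- numeric(n)
  for (i in 1:n) {
    a[i]  <- sum(x < x[i] & y > y[i])
    b[i]  <- sum(x > x[i] & y > y[i])
    cc[i] <- sum(x < x[i] & y < y[i])
    d[i]  <- sum(x > x[i] & y < y[i])
  }
  sum(a*(a-1)*d*(d-1) + b*(b-1)*cc*(cc-1) - 2*a*b*cc*d) /
    prod(n - 0:4)
}

# The rank-matching permutation of Section 2:
perm_of <- function(x, y) rank(y)[order(x)]
\end{verbatim}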
\subsection{Hoeffding's Independence Test}
\label{htest}
As usual, Hoeffding's test is performed by computing $D_n$ and comparing it with its distribution under the independence hypothesis. If $D_n$ is significantly large then independence is rejected. This happens with probability tending to one under any alternative with $D>0$. The null distribution is uniquely determined regardless of $F_X$ and $F_Y$, since any ordering of the $X_i$'s is equally likely, and the same holds for the~$Y_i$'s. For large $n$, the distribution is approximated using the following limit law~\cite[Theorem 8.1]{hoeffding1948non}.
$$ n \, D_n \;\;\; \xrightarrow[\;\; n \to \infty \;\;]{} \;\;\; \sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{\pi^4j^2k^2}\left(Z_{jk}^2 - 1\right) $$
where $Z_{jk}$ are independent standard Gaussian random variables. See also~\cite{blum1961distribution}. For further information on the distribution of general linear combinations of pattern counts in a random permutation see~\cite{janson2015asymptotic} and \cite{even2018patterns}. Efficient implementations of Hoeffding's~$D$ test are available in popular software, e.g.~\citet[\href{https://cran.r-project.org/package=wdm}{\texttt{wdm} package}]{R}, \citet[\href{https://reference.wolfram.com/language/ref/HoeffdingD.html}{\texttt{HoeffdingD}}]{Mathematica}, \citet[\href{https://v8doc.sas.com/sashtml/proc/zconcept.htm}{\texttt{PROC CORR}}]{sas2015base}.
\begin{remark*}
Although our discussion mainly adopts the standpoint of hypothesis testing, Hoeffding's $D_n$ is also highly relevant from a descriptive point of view. It is meaningful as a distribution-free measure of dependence, consistently estimating the population's $D \in [0,\tfrac{1}{30}]$. Such a quantity may serve as a distance function in the context of \emph{variable clustering}, for example, and Hoeffding's~$D_n$ in particular has been recommended in these pursuits, see \cite{harrell2015regression}.
\end{remark*}
\section{FOOLING HOEFFDING}
\label{fooling}
As mentioned above, the consistency of Hoeffding's test was established under the condition that the joint distribution of $(X,Y)$ is absolutely continuous. It is not guaranteed that $D>0$ for other dependent alternatives. Some distributions with no density still happen to be detectable. For example, $D=\tfrac{1}{30}$ in the important case of a monotone dependency, $Y=\varphi(X)$ almost surely. However, $D$~might vanish under other dependent distributions. We discuss various examples.
\paragraph{(I)}
The first example was given by \cite{yanagimoto1970measures}. It can be described as a perturbation of an independent distribution. Without loss of generality we start from the uniform measure, $U([0,1]\times[0,1])$. Inside this square we choose a parallelogram made of two axes-parallel right triangles: \raisebox{-0.1em}{\tikz[scale=0.125]{\draw[black,line width=1] (0,0)--(3,2)--(0,2)--(-3,0)--(0,0)--(0,2);}}. The probability mass in the interior of the triangles is then transferred to the diagonal edges, upward in one triangle and downward in the other. Clearly, this makes the coordinates $X$ and $Y$ dependent, while still non-atomic. See Figure~\ref{fool}, and the paper of \cite{yanagimoto1970measures}.
\begin{figure}[t]
\centering
\tikz[scale=8]{ \foreach[evaluate={ \x = 0.49*rand+0.5; \y = 0.49*rand+0.5; \y = (\x>0.3 && \x<0.6 && \y>0.1 && \y<2*\x-0.5 ? 2*\x-0.5 : \y); \y = (\x>=0.6 && \x<0.9 && \y<0.7 && \y>2*\x-1.1 ? 2*\x-1.1 : \y); }] \i in {1,...,1000}{ \node[gray,mark size=1pt] at (\x,\y) {$\bullet$};} \draw[black, line width=1.5] (0,0) rectangle (1,1); }
\vspace{.3in}
\caption{A Random Sample of 1000 Points from Yanagimoto's Counterexample.}
\vspace{.1in}
\label{fool}
\end{figure}
This distribution satisfies $D=0$.
Indeed, $F_X(x)=x$ for $x \in [0,1]$ because the probability mass is only ``swept'' vertically. Also $F_Y(y)=y$ for $y \in [0,1]$ as the two diagonals complement each other. Finally, $F=F_XF_Y$ almost everywhere with respect to~$F$. Indeed, let $(x,y)$ be a point in the support of this distribution. At least one of the four quadrants around it is unchanged, so $F(x,y)=xy$ follows. Of course this is not the case in the interior of the parallelogram, but that area is invisible to the integral~$dF$.
\paragraph{(II)}
One can observe that Yanagimoto's counterexample is actually quite robust. The position, size and ratio of the parallelogram do not matter. Several disjoint parallelograms would work as well. Moreover, the two diagonal lines may be replaced by other monotone curves that preserve the marginals. For example, one may sweep all the in-between probability mass to the following two hyperbola branches:
$$ \begin{array}{c} 2y(1-x)=1 \\[0.25em] 2x(1-y)=1 \end{array} \;\;\;\;\;\;\;\; \raisebox{-1.125em}{\tikz[scale=1]{ \fill[draw=black,fill=lightgray,line width=0.7] (0,1)--(0,0.5)--(0.050,0.526)--(0.100,0.556)--(0.150,0.588)--(0.200,0.625)--(0.250,0.667)--(0.300,0.714)--(0.350,0.769)--(0.400,0.833)--(0.450,0.909)--(0.5,1)--(0,1) (1,0)--(0.5,0)--(0.526,0.050)--(0.556,0.100)--(0.588,0.150)--(0.625,0.200)--(0.667,0.250)--(0.714,0.300)--(0.769,0.350)--(0.833,0.400)--(0.909,0.450)--(1,0.5)--(1,0); \draw[lightgray,line width=0.8] (0,0)--(1,0)--(1,1)--(0,1)--(0,0);}} $$
This clears an area of $\log 2 \approx 0.69$ in the unit square while $F_X(x)=x$ and~$F_Y(y)=y$ for all $x,y \in [0,1]$. Moreover, splitting this distribution at the median of~$X$ completely coincides with the splitting by~$Y$. Still, Hoeffding's $D=0$ for the same reasons as before.
\paragraph{(III)}
We give another new counterexample, of a different nature. It is based on the binary expansion $X = 0.X_1X_2X_3\dots$ where $X_i$ are independent Bernoulli$(\tfrac12)$ random variables. Denote by $\ell = \ell(X) \in \mathbb{N}$ the length of the first run in this binary representation, namely $X_1 = X_2 = \dots = X_{\ell} \neq X_{\ell+1}$. Let $Y = 0.Y_1Y_2Y_3\dots$ be obtained by resampling the first $\ell$ digits of~$X$, so that $Y_1,\dots,Y_{\ell}$ and $X$ are independent given~$\ell$, while $Y_i = X_i$ for $i > \ell$. Here is an example, with the first run marked.
\begin{align*}
X \;&=\; \texttt{ 0.\underline{00000}1101000101...} \\
Y \;&=\; \texttt{ 0.\underline{01101}1101000101...}
\end{align*}
Each of the two variables $X$ and $Y$ follows a continuous uniform distribution on the interval $[0,1]$, and their joint distribution is clearly dependent. However, we show that these variables fool Hoeffding's test.
\begin{proposition}\label{fool2}
For $X$ and $Y$ distributed as above, Hoeffding's $D = 0$.
\end{proposition}
The proof is given in the supplementary material, \S\ref{foolproof}. The dependency in this counterexample is stronger than the previous one in the sense that for any given $X$ there are only finitely many options for~$Y$, and often as few as two. We note that the proof works the same even if we restrict the distribution to $Y<0.5$. Then it satisfies $Y = \phi(X)$ with probability $2/3$, where $\phi(x) = (x \bmod 0.5)$, yet Hoeffding's $D=0$. It is an interesting phenomenon that the resulting probability measure is \emph{singular} with respect to the independent distribution with the same marginals. Even so, Hoeffding's statistic cannot tell them apart.
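This construction is easy to simulate. A short \texttt{R} sketch, truncating the binary expansions to a finite number of digits (an approximation of the construction above; names are illustrative):
\begin{verbatim}
# Sample (X, Y) from counterexample (III), truncated to nbits digits.
sample_xy <- function(nbits = 30) {
  xb <- sample(0:1, nbits, replace = TRUE)
  l <- which(xb != xb[1])[1] - 1            # length of the first run
  if (is.na(l)) l <- nbits                  # all digits equal (rare)
  yb <- xb
  yb[1:l] <- sample(0:1, l, replace = TRUE) # resample the first run
  pw <- 2^-(1:nbits)
  c(X = sum(xb * pw), Y = sum(yb * pw))
}
xy <- t(replicate(10000, sample_xy()))
\end{verbatim}
On such samples, Proposition~\ref{fool2} and the consistency results of Part~\ref{variations} predict that Hoeffding's $D_n$ stays near its null distribution, while the refined statistics detect the dependence as $n$ grows.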
We argue that the occurrence of this kind of dependency in a real-world application is not unimaginable. In numerical simulations, for example, it might happen that two outcomes share certain parts of their binary expansion. The above examples can be modified to guarantee that the dependency stays undetectable also by standard correlation tests, such as Spearman's~$\rho$ and Kendall's~$\tau$. One way to do that is by adding a ``mirror image'', i.e., taking $X' = \pm X \in [-1,1]$, where the sign is independent of $(X,Y)$ and determined by flipping a fair coin.
\section{VARIATIONS}\label{variations}
Many variations of Hoeffding's test and extensions in various directions have been proposed over the years. Here we describe the two main variants that maintain the original spirit of the test while achieving consistency for all non-atomic real-valued $X$ and~$Y$.
\subsection{Refined Hoeffding Test}
The above counterexamples highlight where the blind spots of Hoeffding's test are. The integral~$dF$ that appears in $D$'s definition might miss certain regions in the plane, where $F \neq F_X F_Y$. From this perspective, it is natural to try a double integral $dF_X dF_Y$ instead, which leads to the following modification of~$D$.
$$ R \;=\; \iint\left(F(x,y) - F_X(x)F_Y(y)\right)^2dF_X(x)dF_Y(y) $$
This measure of independence is often attributed to Blum, Kiefer, and Rosenblatt \citep{blum1961distribution}. In their paper, they discuss several possible variants and extensions of Hoeffding's test, and this one is defined as $\gamma_F^2$ on page~490. Curiously, a similar measure had already been defined in an early work of \cite{hoffding1940masstabinvariante}, see \citep[$\Phi^2$~on page~62]{hoeffding1994collected}. It was later introduced elsewhere, such as in the work of \citet[page 58]{yanagimoto1970measures}. The parameter $R$ vanishes if and only if $X$ and $Y$ are independent \cite[bottom of page~490]{blum1961distribution}. Therefore, a consistent estimator of~$R$ yields a consistent independence test against all alternatives. We hence derive a test statistic $R_n$ given $n$ independent samples, similar to $D_n$ above. We first expand,
\begin{equation} \label{bfffff}
R \,=\, \iint\left(F\,F - 2\,F\,F_XF_Y + F_XF_YF_XF_Y\right)\,dF_XdF_Y \tag{$\star\star$}
\end{equation}
The integral of the last term separates and simplifies to $\int F_X^2dF_X\int F_Y^2dF_Y = 1/9$. The other terms are replaced with integrals over indicator functions, as for $D$ in (\ref{dfffff}) above. We end up with a fivefold integral $\int\!\!\int\!\!\int\!\!\int\!\!\int \psi \, dFdFdFdFdF$, for some $\psi$ that only depends on the rank patterns induced by its five input points. Given $(X_1,Y_1), \dots, (X_n,Y_n)$, the estimator $R_n$ may be defined by a sum over sets of five samples, similar to Equation~(\ref{dn}), with $\phi$ replaced by~$\psi$. The independence test works as described in Section~\ref{htest}, using $R_n$ instead. It turns out that, assuming independence, $D_n$ and $R_n$ are equivalent up to an order $n^{-3/2}$ term \citep[for example]{even2018patterns}. In particular, $nD_n$ and $nR_n$ follow the same asymptotic null distribution, described by \cite{hoeffding1948non} and \cite{blum1961distribution}. In Part~\ref{algo} we give a first near linear time algorithm for computing~$R_n$.
\subsection{Bergsma--Dassios--Yanagimoto Test}
\label{bdy}
The next step in the evolution of Hoeffding's independence test originates in multiple works, including the following question on the distribution of permutation patterns.
Recall that the permutation induced by $k$ samples $(X_1,Y_1),\dots,(X_k,Y_k)$ is uniformly distributed under the independence hypothesis, with each $\pi \in S_k$ equally likely. Hoeffding's $D$, and its stronger variant $R$, provide a converse statement: if every 5-pattern $\pi \in S_5$ is equally likely to be induced by $(X_1,Y_1),\dots,(X_5,Y_5)$ then $X$ and $Y$ are independent. \cite{yanagimoto1970measures} showed that already equiprobable 4-patterns imply independence, but 3-patterns are not sufficient. This implication is shown by taking a nonnegative combination of the nonnegative dependence measures $D$ and $R$,
$$ T = D + 2 R $$
Since $D=R=0$ if and only if $X$ and $Y$ are independent, also $T=0$ exactly in that case. Yanagimoto observed that, with this choice of combination, the order-five terms in the expansions (\ref{dfffff}) and~(\ref{bfffff}) add up to a constant, leaving
\begin{multline*}
T \;=\; 2 \iint F \, F\, dF_X \, dF_Y \;+\; \int F \, F \, dF \\ \;-\; 2 \int F_X \, F_Y \, F \, dF \;-\; \frac{1}{9}
\end{multline*}
By counting $F$'s in these integrals, one can represent $T = \int\!\!\int\!\!\int\!\!\int \vartheta \, dFdFdFdF$, where $\vartheta$ depends only on the induced patterns of four points, as claimed. \citet*[][]{bergsma2010nonparametric} and \citet*[][]{bergsma2010consistent,bergsma2014consistent} independently derived this independence measure, and denoted it by~$\tau^{\star} = 12T$. It seems they were the first to utilize it for testing independence in practice, and its popularity has risen in the past decade. Testing with $T$ proceeds similarly to Hoeffding's test. The test statistic $T_n$ is an estimator of $T$ from $n$ samples, defined by fourfold summation analogously to Equation~(\ref{dn}). The null distribution of $T_n/3$ tends to the same limit as for $R_n$ and $D_n$ \citep{nandy2016large, dhar2016study}. The computation of~$T_n$ and our new $O(n \log n)$ time algorithm are discussed in Part~\ref{algo}.
\newcommand{\cross}[4]{\raisebox{-7.5pt}{\tikz[scale=0.15]{ \foreach \x/\y in {1/#1,2/#2,3/#3,4/#4} { \draw[black,fill=black] (\x,\y) circle (0.2);} \draw[densely dotted,line width=0.4] (0,2.5)--(5,2.5); \draw[densely dotted,line width=0.4] (2.5,0)--(2.5,5); }}}
The case for the Bergsma--Dassios--Yanagimoto test is supported by its simplicity, as it uses the ranking patterns of four points rather than five. \cite{bergsma2014consistent} discuss an analogy between $\tau^{\star}$ and Kendall's $\tau$ correlation coefficient. They offer another compelling interpretation, noting that a formula of this test statistic classifies the pattern occurrences in the induced permutation into two types.
\begin{equation*}
T_n \;=\; \frac{1}{\tbinom{n}{4}}\left( \tfrac{1}{18} \sum_{\sigma \in \mathcal{C}} \#\sigma \,-\, \tfrac{1}{36} \sum_{\sigma \in \mathcal{D}} \#\sigma \right)
\end{equation*}
where
\begin{align*}
\mathcal{C} \;&=\; \left\{ 1234, 1243, 2134, 2143, 3412, 3421, 4312, 4321 \right\} \\
\mathcal{D} \;&=\; S_4 \setminus \mathcal{C}
\end{align*}
are the sets of \emph{concordant} and \emph{discordant} 4-patterns, respectively.
This classification of four-point rankings is best explained graphically as follows, by means of their \emph{cruciform partition}:
\begin{align*}
\mathcal{C}:& \;\;\; \cross1234 \;\, \cross1243 \;\, \cross2134 \;\, \cross2143 \;\, \cross3412 \;\, \cross3421 \;\, \cross4312 \;\, \cross4321 \\[0.5em]
\mathcal{D}:& \;\;\; \cross1324 \;\, \cross1342 \;\, \cross1423 \;\, \cross1432 \;\, \cross2314 \;\, \cross2341 \;\, \cross2413 \;\, \cross2431 \\
&\;\;\; \cross3124 \;\, \cross3142 \;\, \cross3214 \;\, \cross3241 \;\, \cross4123 \;\, \cross4132 \;\, \cross4213 \;\, \cross4231
\end{align*}
Thus, departure from independence manifests as fewer quadruples with one point in each corner, and more with two pairs in opposite corners.
\paragraph{Remark}
The statistic $T_n$ also arises in the combinatorial context of \emph{quasirandom} permutations. These are deterministic sequences $\pi_n \in S_n$ with asymptotic properties that random permutations satisfy with probability approaching one. For example, any box $[a,b]\times[c,d] \subseteq [0,1]^2$ contains $(b-a)(d-c)n \pm o(n)$ points of the form $(i/n,\pi_n(i)/n)$. \cite{cooper2004quasirandom} gives several equivalent characterizations, including various notions of discrepancy, and properties of the Fourier coefficients of~$\pi_n$. Another one is $\#\sigma(\pi_n)/\tbinom{n}{k} \to 1/k!$ for every $k$-pattern~$\sigma$, for all $k \in \mathbb{N}$. The latter quasirandom property relates to the characterization of independence by patterns due to \cite{hoeffding1948non}. \cite{hoppen2011testing,hoppen2013limits} similarly use $\#\sigma(\pi_n)$ to define more general \emph{limit permutations} that correspond to alternative distributions. As noted by \cite{even2018patterns}, the consistency of the Bergsma--Dassios--Yanagimoto statistic translates to the following simple criterion:
$$ \left\{\pi_n\right\} \text{ is quasirandom} \;\;\;\; \Leftrightarrow \;\;\;\; T_n\left(\pi_n\right) \to 0 $$
A series of papers on quasirandom and other limit permutations have independently reached the same conclusion \citep{kral2013quasirandom,glebov2015finitely,chan2019characterization}.
\section{ALGORITHMS}\label{algo}
We now present our algorithms for computing the consistent variants $R_n$ and $T_n$ of Hoeffding's statistic. The linear relation $T = D + 2R$ between these population coefficients implies that also for the sample statistics,
$$ T_n \;=\; D_n + 2R_n $$
See \cite{even2018patterns} and \cite{drton2018high}. Since Hoeffding's $D_n$ is computed in time $O(n \log n)$, as shown in \S\ref{hoefdn}, a near linear time computation of~$R_n$ reduces to that of~$T_n$. By definition $T_n$ can be computed in~$O(n^4)$ time, though \cite{bergsma2010consistent} noted that it can be improved to~$O(n^3)$, and left its complexity as an open problem. Later, \cite{bergsma2014consistent} suggested approximating this statistic by averaging a random subset of the $\tbinom{n}{4}$ terms in its representation as a U-statistic. However, under the null hypothesis $T_n$ is degenerate of rank two, hence this approximation incurs a substantial loss of information even if as many as $O(n^2)$ terms are taken \citep{janson1984asymptotic}. \cite{weihs2016efficient} proposed an algorithm that precisely computes $T_n$ in $O(n^2 \log n)$ time and near linear memory use. \cite{heller2016computing} showed it can be computed in $O(n^2)$ time and memory.
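For validating implementations on small inputs, $T_n$ can also be computed by brute force, directly from the pattern-counting formula of \S\ref{bdy}. A minimal \texttt{R} sketch ($O(n^4)$ time, assuming no ties; a reference only, not one of our algorithms):
\begin{verbatim}
# Brute-force T_n: classify every four-point pattern.
conc <- c("1234","1243","2134","2143","3412","3421","4312","4321")
Tn_bruteforce <- function(x, y) {
  quads <- combn(length(x), 4)
  is_conc <- apply(quads, 2, function(q)
    paste(rank(y[q])[order(x[q])], collapse = "") %in% conc)
  (sum(is_conc) / 18 - sum(!is_conc) / 36) / choose(length(x), 4)
}
\end{verbatim}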
Also the apparent quadratic order cost of $R_n$ seems to have been regarded as a natural barrier, since it is an empirical version of the double integral $dF_X dF_Y$ defining~$R$, compared to the single integral $dF$ in~$D$; see (\ref{dfffff}) and~(\ref{bfffff}). In fact, Deheuvels, Peccati and Yor have written that the fully consistent $R_n$ is less popular than $D_n$ only because it requires the summation of $n^2$ terms \citep[p.~496]{deheuvels2006quadratic}. Recently, \cite{chatterjee2020new} has considered near linear computational time as the major concern when selecting a bivariate independence test in practice. Our description of the near linear algorithm is completely self-contained, despite being inspired by the general method of \cite{even2019counting}. It uses as a data structure a numeric array of fixed size~$n$, named \textsc{sum-array}, that supports assigning a value to a given position in $O(\log n)$ time, and also supports range sum queries in $O(\log n)$ time. Such an array is easy to implement with a complete binary tree of depth $\left\lceil\log n\right\rceil$, and goes back at least to \cite{shiloach1982n2log}. If $A$ is a \textsc{sum-array}, then $A.\textsc{prefix-sum}(y)$ returns $A[1] + \dots + A[y]$ in logarithmic time, and similarly for $A.\textsc{suffix-sum}$. Given the $n$ samples $(X_1,Y_1),\dots,(X_n,Y_n)$, we first sort them in $O(n\log n)$ time and compute the ranking permutation $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$, satisfying $\pi(\mathrm{rank}\,X_i) = \mathrm{rank}\,Y_i$, as explained above in \S\ref{setting}. The inverse permutation $\pi^{-1}$ is clearly computable in linear time. We then run Algorithm~\ref{alg} on $\pi$. We claim that its output is the statistic $\tau^\star(\pi)$ of \cite{bergsma2014consistent}, which is $12T_n$ in our notation. \begin{algorithm}[tb] \caption{Compute the $\tau^*$ statistic} \linespread{1.25}\selectfont \label{alg} \begin{algorithmic}[1] \Function{quad}{permutation $\pi \in S_n$} \State $N \leftarrow 0$ \State $A \leftarrow \textsc{sum-array}(0,\dots,0)$ \Comment{of size $n$} \State $A_u \leftarrow \textsc{sum-array}(0,\dots,0)$ \State $A_d \leftarrow \textsc{sum-array}(0,\dots,0)$ \State $A_{ud} \leftarrow \textsc{sum-array}(0,\dots,0)$ \For{$x$ \textbf{in} $(1,\dots,n)$} \State $N_u \leftarrow A.\textsc{prefix-sum}(\pi(x))$ \State $N_d \leftarrow A.\textsc{suffix-sum}(\pi(x))$ \State $N_{du} \leftarrow A_d.\textsc{prefix-sum}(\pi(x))$ \State $N_{ud} \leftarrow A_u.\textsc{suffix-sum}(\pi(x))$ \State $N_{udu} \leftarrow A_{ud}.\textsc{prefix-sum}(\pi(x))$ \State $A[\pi(x)] \leftarrow 1$ \State $A_u[\pi(x)] \leftarrow N_u$ \State $A_d[\pi(x)] \leftarrow N_d$ \State $A_{ud}[\pi(x)] \leftarrow N_{ud}$ \State $\Delta \leftarrow 2N_{udu} - N_{du} N_d - N_{ud}N_u + (x-2) N_u N_d$ \State $N \leftarrow N + \Delta$ \EndFor \State \Return $N$ \EndFunction \vspace{0.5em} \Function{tau-star}{permutation $\pi \in S_n$} \State $S \leftarrow \textsc{quad}(\pi)$ \State $S \leftarrow S + \textsc{quad}(\pi(n),\dots,\pi(1))$ \State $S \leftarrow S + \textsc{quad}(\pi^{-1}(1),\dots,\pi^{-1}(n))$ \State $S \leftarrow S + \textsc{quad}(\pi^{-1}(n),\dots,\pi^{-1}(1))$ \State \Return $\frac23 - \frac{1}{4}\, S / \binom{n}{4}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{theorem} \label{algworks} Algorithm~\ref{alg} returns $\tau^{\star}$. \end{theorem} The proof is given in the supplementary material, \S\ref{algproof}. 
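For concreteness, the following is one standard way to realize the \textsc{sum-array}, as a classical binary indexed (Fenwick) tree. This sketch is ours, written in \texttt{R} for readability (the package itself implements the structure in \texttt{C++}), and all names in it are illustrative; \textsc{assign} is realized by adding the appropriate difference along the update path, and a suffix sum is simply the total minus a prefix sum.

\begin{verbatim}
# Illustrative Fenwick-tree SUM-ARRAY of size n, all zeros.
sum.array.new <- function(n) list(tree = numeric(n), n = n)

# prefix(A, i) returns A[1] + ... + A[i] in O(log n) time.
sum.array.prefix <- function(A, i) {
  s <- 0
  while (i > 0) { s <- s + A$tree[i]; i <- i - bitwAnd(i, -i) }
  s
}

# assign(A, i, v) sets position i to v in O(log n) time; R
# copies on modification, so the updated structure is returned.
sum.array.assign <- function(A, i, v) {
  old <- sum.array.prefix(A, i) -
         (if (i > 1) sum.array.prefix(A, i - 1) else 0)
  d <- v - old
  while (i <= A$n) {
    A$tree[i] <- A$tree[i] + d
    i <- i + bitwAnd(i, -i)
  }
  A
}
\end{verbatim}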
Here are some remarks on the running time of this algorithm. \begin{enumerate} \item The statements on lines 8--16 are called $4n$ times and run in $O(\log n)$ time. All the rest runs in~$O(n)$. In total, the algorithm executes in $O(n \log n)$ time. \item Computing $\tau^{\star}$ by the scheme described in \cite{even2019counting} would require 32 equivalents of the subroutine \textsc{quad}, rather than 4 in this dedicated algorithm. This yields an eightfold speedup. \item As with the previous algorithms for this problem in the literature, we work under the assumption that arithmetic operations on numbers of $\log n$ digits cost~$O(1)$ time, where $n$ is the input length. \end{enumerate} \section{R PACKAGE} \label{rpackage} We have implemented the above algorithm in \texttt{C++}, and have made it available in a new \texttt{R} package, named \texttt{independence} \citep[on CRAN,][]{independenceR}. Our package provides methods for near linear time computation of the following three statistics. \begin{itemize}[itemsep=0.25em,topsep=0pt] \item Hoeffding's $D_n$ \item The refined Hoeffding statistic $R_n$ \item Bergsma--Dassios--Yanagimoto's $\tau^{\star} = 12T_n$ \end{itemize} Run times of these three tests on large-scale datasets are summarized in Table~\ref{table}. These experiments were performed using an Intel 3.30GHz CPU and less than 8GB RAM. \begin{table}[h] \caption{Run Times for Independence Tests (sec)} \label{table} \begin{center} \begin{tabular}{crrr} \textbf{$n$} & \textbf{Hoeffding} & \textbf{Refined} & \textbf{Tau-Star} \\ \hline \\ $10^5$ & 0.16 \;& 0.23 \;& 0.22 \;\\ $10^6$ & 0.60 \;& 3.51 \;& 3.38 \;\\ $10^7$ & 6.74 \;& 58.86 \;& 55.42 \;\\ $10^8$ & 85.74 \;& 888.75 \;& 805.75 \;\\ \end{tabular} \end{center} \end{table} For computing $p$-values, if needed, we call the method \texttt{pHoeffInd} from the \texttt{R} package \texttt{TauStar} by \cite{TauStar}. See the documentation of the package \texttt{independence} for technical details about $p$-value precision, some limitations due to the machine's word size, and the handling of ties and missing values. Figure~\ref{code} shows a short code excerpt using the package. It illustrates how the dependency given by \cite{yanagimoto1970measures}, as described in~\S\ref{fooling}, goes undetected by Hoeffding's original test but is clearly visible to its refined variants. \begin{figure}[t] \centering \begin{tcolorbox} \begin{verbatim} > library(independence) > set.seed(12345) > f <- function(a,b) ifelse(a>b, + pmin(b,a/2), pmax(b,(a+1)/2)) > x <- runif(300) > y <- f(x, runif(300)) > hoeffding.D.test(x,y)$p.value [1] 0.4589397 > hoeffding.refined.test(x,y)$p.value [1] 2.138784e-10 > tau.star.test(x,y)$p.value [1] 1.053099e-08 \end{verbatim} \end{tcolorbox} \vspace{.3in} \caption{Using the \texttt{R} Package \texttt{independence} where Hoeffding's $D$ Fails.} \vspace{.3in} \label{code} \end{figure} \section{RELATED TESTS} \label{related} Ever since Hoeffding introduced his~$D$, methods of testing and quantifying dependence have continued to be developed. Many competing ideas for other population coefficients of dependence and test statistics have emerged in the statistical literature, often driven by the need to go beyond Hoeffding's original setting of two real-valued variables. Several tests have been proposed for \emph{multidimensional} variables, $X \in \mathbb{R}^p$ and~$Y \in \mathbb{R}^q$, as well as in more general spaces. 
For example, the \emph{distance correlation} (dCor) by \citet{szekely2007measuring,lyons2013distance} is based on pairwise distances; another approach by~\citet{gretton2005measuring,gretton2008kernel} relies on Hilbert--Schmidt kernel structures associated with the two spaces (HSIC); and there are other recent approaches \citep[e.g.][]{heller2013consistent,deb2019multivariate,shi2020rate}. Another extension is to more than two variables. Indeed, if $X,Y,Z$, etc.~are pairwise independent, it is still interesting to test whether they are \emph{mutually independent}. In the problem of \emph{conditional independence}, one asks whether $X$ and $Y$ are independent under conditioning on~$Z$ \citep[for example]{zhang2011kernel}. However, we leave such generalizations for another time, and remain within the realm of \emph{rank-based independence testing for two real variables}, as even in this specific setting of Hoeffding's test, there are several other approaches to discuss. Recall that we only assume that $X$ and $Y$ are real-valued and non-atomic, and that $n$ iid samples $(X_i,Y_i)$ are given. Nothing is assumed about their joint or marginal distribution, though it would also make sense to discuss the ``copula'' setting, where $F_X$ and $F_Y$ are known, or only one of them is, or they are merely known to be equal, as well as other assumptions on the marginals. In our setting, a distribution-free statistic only depends on the ranking permutation, $\pi \in S_n$ with $\pi(\mathrm{rank}\,X_i) = \mathrm{rank}\,Y_i$. Under the null hypothesis, every $\pi \in S_n$ is equally likely. Therefore, our independence measures may be viewed as ways to order $S_n$ such that permutations that are more likely under dependent alternatives tend to appear near the end. We list a selection of other approaches to independence testing that depend only on ranks. \begin{itemize} \itemsep0.05em \item Hoeffding's dependence measure $D$ is an $L^2$ norm of $(F_{XY} - F_XF_Y)$, and so is the statistic $D_n$ with empirical cdfs. A~Kolmogorov--Smirnov type statistic uses the maximum norm instead \citep{blum1961distribution, deheuvels1979fonction, Hmisc}. \item Other ideas along these lines: introducing weight functions \citep{rosenblatt1975quadratic,de1980cramer}, using the $L^1$ distance \citep{Hmisc}, or comparing the empirical characteristic functions of $F_{XY}$ and $F_XF_Y$ \citep{feuerverger1993consistent,csorgHo1985testing}. \item \cite{szekely2009brownian} suggest applying their \emph{distance correlation} (dCor) to ranks. This means using it with the points $\{(x,\pi(x))\}_{x=1}^{n}$ rather than the original samples. \item From another viewpoint, Hoeffding's formula in \S\ref{hoefdn} averages Pearson's $\chi^2$ over all partitions of the data into $2 \times 2$ contingency tables: $\tfrac{a_i}{c_i}\!|\!\tfrac{b_i}{d_i}$. \citet{heller2016consistent} study a scheme of tests that take more partitions, finer partitions, and other ways to weight the scores (HHG). \item Another test that uses binning is the \emph{Maximal Information Coefficient} (MIC) by \cite{reshef2011detecting}, and its variations \citep{reshef2013equitability,gorfine2012comment,simon2014comment,renyi1959measures}. \item \cite{chan2019characterization} found more combinations of 4-patterns that consistently characterize independence, similar to $\mathcal{C}$ for $\tau^\star$ in \S\ref{bdy}. As noted by \cite{even2019counting}, the following two are computable in linear time. 
\begin{align*} &\{1324, 1342, 2413, 2431, 3124, 3142, 4213, 4231\} \\ &\{1324, 1423, 2314, 2413, 3142, 3241, 4132, 4231\} \end{align*} In general, such combinations form a convex cone of consistent independence measures. \item A new, quickly computable measure $\xi_n$ by \cite{chatterjee2020new} uses $\sum_{x=1}^{n-1}|\pi(x+1)-\pi(x)|$. See also \cite{shi2020power,wang2015efficient}. \item \cite{garcia2014independence} suggested using the longest monotone subsequence in~$\pi$. \end{itemize} \subsubsection*{Acknowledgements} The author was supported by the Lloyd’s Register Foundation / Alan Turing Institute programme on Data-Centric Engineering.
{ "timestamp": "2020-10-27T01:20:12", "yymm": "2010", "arxiv_id": "2010.09712", "language": "en", "url": "https://arxiv.org/abs/2010.09712", "abstract": "In 1948 Hoeffding devised a nonparametric test that detects dependence between two continuous random variables X and Y, based on the ranking of n paired samples (Xi,Yi). The computation of this commonly-used test statistic takes O(n log n) time. Hoeffding's test is consistent against any dependent probability density f(x,y), but can be fooled by other bivariate distributions with continuous margins. Variants of this test with full consistency have been considered by Blum, Kiefer, and Rosenblatt (1961), Yanagimoto (1970), Bergsma and Dassios (2010). The so far best known algorithms to compute these stronger independence tests have required quadratic time. Here we improve their run time to O(n log n), by elaborating on new methods for counting ranking patterns, from a recent paper by the author and Leng (SODA'21). Therefore, in all circumstances under which the classical Hoeffding independence test is applicable, we provide novel competitive algorithms for consistent testing against all alternatives. Our R package, independence, offers a highly optimized implementation of these rank-based tests. We demonstrate its capabilities on large-scale datasets.", "subjects": "Computation (stat.CO); Combinatorics (math.CO)", "title": "independence: Fast Rank Tests", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812336438724, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.7079433968912153 }
https://arxiv.org/abs/2005.02896
Holes with hats and Erdős-Hajnal
A "hole-with-hat" in a graph $G$ is an induced subgraph of $G$ that consists of a cycle of length at least four, together with one further vertex that has exactly two neighbours in the cycle, adjacent to each other, and the "house" is the smallest, on five vertices. It is not known whether there exists $\epsilon>0$ such that every graph $G$ containing no house has a clique or stable set of cardinality at least $|G|^\epsilon$; this is one of the three smallest open cases of the Erdős-Hajnal conjecture and has been the subject of much study.We prove that there exists $\epsilon>0$ such that every graph $G$ with no hole-with-hat has a clique or stable set of cardinality at least $|G|^\epsilon$
\section{Introduction} Graphs in this paper are finite and simple, and $|G|$ denotes the number of vertices of a graph $G$. A graph is {\em $H$-free} if it has no induced subgraph isomorphic to $H$. The Erd\H{o}s-Hajnal conjecture~\cite{EH0,EH} asserts: \begin{thm}\label{EHconj} {\bf Conjecture:} For every graph $H$, there exists $\varepsilon>0$ such that every $H$-free graph $G$ has a clique or stable set of cardinality at least $|G|^\varepsilon$. \end{thm} This has not yet been proved when $H$ is the five-vertex path $P_5$, and that problem motivates the work of this paper. The complement of $P_5$ is the {\em house}, the graph consisting of a cycle of length four, together with one extra vertex with two neighbours in the cycle that are adjacent to each other. By taking complements, we see that proving \ref{EHconj} when $H$ is the house is the same problem as proving it when $H=P_5$. The house is the smallest example of a ``hole-with-hat''. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8,auto=left] \tikzstyle{every node}=[inner sep=1.5pt, fill=black,circle,draw] \node (v0) at (0,2) {}; \node (v1) at (-1,0) {}; \node (v2) at (1,0) {}; \node (v3) at (-1,-2) {}; \node (v4) at (1,-2) {}; \draw (v0) -- (v1); \draw (v0) -- (v2); \draw (v1) -- (v2); \draw (v2) -- (v4); \draw (v3) -- (v4); \draw (v1) -- (v3); \end{tikzpicture} \caption{A house.} \label{fig:house} \end{figure} A {\em hole} in a graph $G$ is an induced cycle of length at least four. If $C$ is a hole in $G$, a vertex $v\in V(G)\setminus V(C)$ is said to be a {\em hat} for $C$ if $v$ has exactly two neighbours $x,y\in V(C)$, and $x,y$ are adjacent. The subgraph induced on $V(C)\cup \{v\}$ is then said to be a {\em hole-with-hat} in $G$; and we say $G$ is {\em hole-with-hat-free} if there is no hole-with-hat in $G$. The main result of this paper is: \begin{thm}\label{mainthm} There exists $\varepsilon>0$ such that for every hole-with-hat-free graph $G$, there is a clique or stable set in $G$ of cardinality at least $|G|^\varepsilon$. \end{thm} Here are some earlier theorems of a similar nature: \begin{itemize} \item If $G$ contains no hole, there is a clique or stable set in $G$ of cardinality at least $|G|^{1/2}$. (This is immediate because such graphs are perfect.) \item If $G$ contains no house and no hole of odd length, then again there is a clique or stable set in $G$ of cardinality at least $|G|^{1/2}$. (Again, because such graphs are perfect, a consequence of the ``strong perfect graph theorem''~\cite{SPGT}.) \item For each $\ell>0$, there exists $\varepsilon>0$ such that if $G$ contains no house and no hole of length at least $\ell$, then $G$ has a clique or stable set of cardinality at least $|G|^\varepsilon$. (This is a combination of a theorem of Bousquet, Lagoutte, and Thomass\'e~\cite{lagoutte} and a theorem of Bonamy, Bousquet and Thomass\'e~\cite{bonamy}.) \end{itemize} Working in structural graph theory, one always hopes to find a collection of non-crossing decompositions that together break the graph into simpler pieces. That is because the existence of such a collection leads into the well-understood area of tree-decompositions. However, such collections do not often appear in the context of forbidden induced subgraphs. Here we give a weakening of this notion that is still almost as useful, and does work more often for induced subgraphs. Hole-with-hat-free graphs typically admit a certain kind of separation that we call a ``fracture''. 
(This is related to the ``amalgam'' decomposition of hole-with-hat-free graphs due to Conforti, Cornu\'ejols, Kapoor and Vu\v{s}kovi\'c~\cite{conforti}.) A fracture is a certain kind of partition of the vertex set of our graph $G$ into three parts (actually four parts, but we merge two of them for this sketch) $A,B,C$, where $A,B$ are ``anticomplete'', that is, there are no edges between them. We call $A$ and $B$ the ``small'' and ``big'' sides of the fracture. (There is no symmetry between $A$ and $B$ in the full definition of a fracture.) Let $S$ be the union of all small sides of fractures. The graph $R$ obtained from $G$ by deleting $S$ does not admit a fracture with nonempty small side (because such a fracture would extend to one in the whole graph $G$, and we would have deleted all its small side, including the part in $R$); so $R$ has a very restricted type. We can base a proof on this, provided we can show that $R$ still contains a substantial part of $G$: in other words, that $S$ is not too big. And we could show this, if we could prove that: \medskip {\em Every component of $S$ is anticomplete to the big side of some fracture of $G$. } \medskip This is where ``non-crossing'' would be useful. If it were true that fractures form a set of non-crossing separations, then every component of $S$ would be a component of one small side, and therefore anticomplete to the corresponding big side. This is not true, but we have a substitute: we can show that for any two fractures, if some component of the union of their small sides is not a component of either small side, then the two big sides are equal. It follows from this that every component of $S$ is anticomplete to the big side of some fracture, which is what we needed. A similar idea works in several other situations, and we hope to find further uses for it in the future. A graph $P$ is {\em perfect} if chromatic number equals clique number for every induced subgraph of $P$. We denote the set of nonnegative real numbers by $\mathbb{R}^+$. Let $G$ be a graph and $f:V(G)\rightarrow \mathbb{R}^+$; if $X\subseteq V(G)$, we define $f(X)=\sum_{v\in X}f(v)$, and if $P$ is an induced subgraph of $G$ we define $f(P)=f(V(P))$. We say that $f$ is {\em good on $G$} if $f(P)\le 1$ for every perfect induced subgraph $P$ of $G$. Now let $\alpha\ge 1$. We denote by $f^\alpha$ the function $g$ on $G$ defined by $g(v)=(f(v))^{\alpha}$ for each $v\in V(G)$. Let us say that $G$ is {\em $\alpha$-narrow} if $f^\alpha(G)\le 1$ for every good function $f$ on $G$. Here is a result of Chudnovsky and Safra~\cite{bull}, with a short proof by Chudnovsky and Zwols~\cite{zwols}: \begin{thm}\label{narrowtoEH} If $G$ is $\alpha$-narrow then $G$ has a clique or stable set of cardinality at least $|G|^\varepsilon$, where $\varepsilon=1/(2\alpha)$. \end{thm} \noindent{\bf Proof.}\ \ Let $P$ be a perfect induced subgraph of $G$ with as many vertices as possible, and let $p=|P|$. Let $f(v)=1/p$ for all $v\in V(G)$. Then $f$ is good on $G$, and so $f^\alpha(G)\le 1$, that is, $p^{-\alpha}|G|\le 1 $, and so $p\ge |G|^{1/\alpha}=|G|^{2\varepsilon}$. But $P$ is perfect, and so $P$, and hence $G$, has a clique or stable set of cardinality at least $p^{1/2}\ge |G|^\varepsilon$. This proves \ref{narrowtoEH}.~\bbox In view of \ref{narrowtoEH}, the following implies \ref{mainthm}: \begin{thm}\label{mainthm2} There exists $\alpha\ge 1$ such that every hole-with-hat-free graph is $\alpha$-narrow. 
\end{thm} Let us say a pair $(G,f)$ is {\em $\alpha$-critical} if $G$ is a graph and $f:V(G)\rightarrow \mathbb{R}^+$ is a good function on $G$, such that \begin{itemize} \item every proper induced subgraph of $G$ is $\alpha$-narrow; and \item $f^\alpha(G)> 1$. \end{itemize} To prove \ref{mainthm2}, choose an appropriately large value of $\alpha$, suppose (for a contradiction) that \ref{mainthm2} is not satisfied, and look at a counterexample $G$ with as few vertices as possible. Hence there is a good function $f$ on $G$ with $f^\alpha(G)> 1$, and so $(G,f)$ is $\alpha$-critical. Consequently, \ref{mainthm2} can be reformulated as: \begin{thm}\label{mainthm3} There exists $\alpha\ge 1$ such that for every $\alpha$-critical pair $(G,f)$, there is a hole-with-hat in $G$. \end{thm} We will prove this at the end of the final section, but let us sketch the proof now. We are proving $\alpha$-narrowness instead of proving the statement of \ref{mainthm} directly, in order to handle homogeneous sets; so vertices will have non-negative weights, but for this sketch the reader could assume that all vertex-weights are one. If there are two disjoint anticomplete sets of vertices that both have linear total weight, then we win by induction; so we assume there are no two such sets. By a theorem of R\"odl, there is a subgraph $X$ containing a linear fraction of the total weight of $G$, such that either $X$ is sparse (in a weighted sense) or its complement is; and the second is impossible, by a theorem of Bonamy, Bousquet and Thomass\'e. So the first holds. We look at fractures of $X$. If $A,B,C$ is such a fracture, then $C$ has very small total weight, and so at least one of $A,B$ has big weight. But not both $A,B$ have linear total weight, since they are anticomplete; and with some sleight of hand we can arrange that it is always the small side $A$ that has small weight, and therefore that most of the weight of $X$ resides in $B$. Let $S$ be the union of all the small sides of fractures of $X$. By the remarkable fact that we described earlier, every component of $S$ is anticomplete to some big side, and therefore has small weight; and so $S$ itself has small weight (because otherwise we could group its components into two sets both with big weight). That means that deleting $S$ from $X$ gives a graph $R$ that still has big weight. But every fracture in $R$ extends to a fracture in $X$ (this is another useful feature of fractures, and the reason for using ``forcers'', which we do not explain here), and therefore $R$ has no fracture with nonempty small side. Hence $R$ has a very restricted type, and in particular it is $\alpha'$-narrow where $\alpha'$ is much less than $\alpha$; and it follows that $G$ itself is $\alpha$-narrow, which is what we wanted to show. This completes the sketch. \section{Complete pairs of sets} Hole-with-hat-free graphs have some convenient structural properties, which we will prove next. Let $A,B\subseteq V(G)$ be disjoint; we say they are {\em complete} to each other if every vertex in $A$ is adjacent to every vertex in $B$, and {\em anticomplete} if there are no edges between $A,B$. We are concerned in this section with how the remainder of a hole-with-hat-free graph can attach to a pair of sets of vertices that are complete to each other. A graph is {\em anticonnected} if its complement is connected; and its {\em anticomponents} are the complements of the components of its complement. 
If $X\subseteq V(G)$, we say that $X$ is {\em connected} if $G[X]$ is connected, and {\em anticonnected} if $G[X]$ is anticonnected. If $C\subseteq V(G)$, a vertex $v\in V(G)\setminus C$ is {\em mixed} on $C$ if $v$ is neither complete nor anticomplete to $C$. We begin with: \begin{thm}\label{wiggly1} Let $G$ be a hole-with-hat-free graph. Let $C,D$ be disjoint anticonnected subsets of $V(G)$, complete to each other. Then no vertex of $V(G)\setminus (C\cup D)$ is both mixed on $C$ and mixed on $D$. \end{thm} \noindent{\bf Proof.}\ \ Suppose that $v\in V(G)\setminus (C\cup D)$ is both mixed on $C$ and mixed on $D$. Since $C$ is anticonnected, there exist nonadjacent $c_1,c_2\in C$ such that $v$ is adjacent to $c_1$ and not to $c_2$; and choose $d_1,d_2\in D$ similarly. Then the subgraph induced on $\{c_1,c_2,d_1,d_2,v\}$ is a house, contradicting that $G$ is hole-with-hat-free. This proves \ref{wiggly1}.~\bbox \begin{thm}\label{wiggly2} Let $G$ be a hole-with-hat-free graph. Let $C,D$ be disjoint subsets of $V(G)$, complete to each other, such that $C$ is connected. Let $P$ be a connected subgraph of $G\setminus (C\cup D)$, such that some vertex of $P$ has a neighbour in $C$, and no vertex of $P$ is complete to $C$. For every $v\in V(P)$, there exists $u\in V(P)$, mixed on $C$, such that every vertex in $D$ adjacent to $v$ is also adjacent to $u$. \end{thm} \noindent{\bf Proof.}\ \ Suppose the claim is false, and choose a counterexample with $P$ minimal. Choose $v\in V(P)$ such that no vertex of $P$ both has a neighbour in $C$ and is adjacent to all the neighbours of $v$ in $D$. Choose $u\in V(P)$ mixed on $C$. By the minimality of $P$, $P$ is an induced path between $u,v$, and $u$ is the only vertex of $P$ with a neighbour in $C$. Choose $d\in D$ adjacent to $v$ and not to $u$. By the minimality of $P$, $v$ is the only vertex of $P$ adjacent to $d$. Since $C$ is connected, there exist adjacent $c_1,c_2\in C$ such that $u$ is adjacent to $c_1$ and not to $c_2$. But then the subgraph induced on $V(P)\cup \{c_1,c_2,d\}$ is a hole-with-hat, a contradiction. This proves \ref{wiggly2}.~\bbox \begin{thm}\label{wiggly3} Let $G$ be a hole-with-hat-free graph. Let $C,D$ be disjoint subsets of $V(G)$, complete to each other, such that $C$ is connected and $D$ is anticonnected. Let $P$ be a connected subgraph of $G\setminus (C\cup D)$, such that some vertex of $P$ has a neighbour in $C$, and no vertex of $P$ is complete to $C$. If some vertex of $P$ has a neighbour in $D$, then some vertex of $P$ is mixed on $C$ and complete to $D$. \end{thm} \noindent{\bf Proof.}\ \ If some vertex of $P$ has a neighbour in $D$, then by \ref{wiggly2}, some vertex $v\in V(P)$ is mixed on $C$ and has a neighbour in $D$, and therefore by \ref{wiggly1}, $v$ is complete to $D$. This proves \ref{wiggly3}.~\bbox \begin{thm}\label{wiggly4} Let $G$ be a hole-with-hat-free graph. Let $C,D$ be disjoint nonempty subsets of $V(G)$, complete to each other, such that $C$ is connected and anticonnected, and $D$ is anticonnected. Let $P$ be a connected subgraph of $G$ with $V(P)\cap (C\cup D)=\emptyset$, such that some vertex of $P$ has a neighbour in $C$, and no vertex of $P$ is complete to $C$. Then no vertex of $P$ is mixed on $D$. \end{thm} \noindent{\bf Proof.}\ \ Suppose not, and choose $C,D,P$ not satisfying the theorem, with $P$ minimal. Choose $u\in V(P)$ with a neighbour in $C$, and therefore mixed on $C$; and choose $v\in V(P)$ mixed on $D$. 
From the minimality of $P$, it follows that $P$ is an induced path with ends $u,v$, and $u$ is the only vertex of $P$ with a neighbour in $C$, and no vertex of $P$ different from $v$ is mixed on $D$. By \ref{wiggly3}, $u$ is complete to $D$, and in particular $u\ne v$. Let $u'$ be the neighbour of $u$ in $P$. It follows that $C\cup \{u\}$ is connected and anticonnected, and $u'$ is mixed on it; and this contradicts the minimality of $P$. This proves \ref{wiggly4}.~\bbox \begin{thm}\label{wiggly5} Let $G$ be a hole-with-hat-free graph. Let $C,D$ be disjoint nonempty subsets of $V(G)$, complete to each other, such that $C$ is connected and anticonnected, and $D$ is connected and anticonnected. Then there do not exist connected subgraphs $P,Q$ of $G\setminus (C\cup D)$, with $V(P\cap Q)\ne \emptyset$, such that \begin{itemize} \item some vertex of $P$ has a neighbour in $C$, and no vertex of $P$ is complete to $C$; \item some vertex of $Q$ has a neighbour in $D$, and no vertex of $Q$ is complete to $D$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ Suppose that such $P,Q$ exist, and choose $C,D,P,Q$ with $|V(P)|+|V(Q)|$ minimal. Choose $w\in V(P\cap Q)$, and choose $p\in V(P)$ with a neighbour in $C$, and $q\in V(Q)$ with a neighbour in $D$. From the minimality of $|V(P)|+|V(Q)|$, it follows that $P$ is an induced path with ends $p,w$, and no vertex of $P$ different from $p$ has a neighbour in $C$; and similarly for $Q$; and $V(P\cap Q)=\{w\}$. Suppose that $p$ is complete to $D$. Hence $p\notin V(Q)$, and so $p\ne w$. Then $C\cup\{p\}$ is connected and anticonnected, and complete to $D$, and the two paths $P\setminus p$ and $Q$ contradict the minimality of $|V(P)|+|V(Q)|$. Thus $p$ is not complete to $D$. Since no other vertex of $P$ has a neighbour in $C$, it follows from \ref{wiggly2} that no vertex of $P$ has a neighbour in $D$. Similarly no vertex of $Q$ has a neighbour in $C$. In particular, no vertex of $P\cup Q$ is complete to $C$, contrary to \ref{wiggly4}. This proves \ref{wiggly5}.~\bbox \section{Decomposing hole-with-hat-free graphs} If $v\in V(G)$, we denote by $N(v)=N_G(v)$ the set of all neighbours of $v$ in $G$. If $N\subseteq V(G)$, we denote by $G[N]$ the induced subgraph with vertex set $N$. A {\em weighted graph} is a pair $(G,w)$, where $G$ is a graph and $w:V(G)\rightarrow \mathbb{R}^+$ is a function, such that $w(G)=1$. Let $\varepsilon>0$. We say a weighted graph $(G,w)$ is {\em $\varepsilon$-coherent} if \begin{itemize} \item for every $v\in V(G)$, $w(v)< \varepsilon$; \item for every $v\in V(G)$, $w(N_G(v))< \varepsilon$; and \item if $A,B\subseteq V(G)$ are disjoint and anticomplete then $\min (w(A), w(B))< \varepsilon$. \end{itemize} First we need: \begin{thm}\label{bigcomp} Let $(G,w)$ be an $\varepsilon$-coherent weighted graph. If $X\subseteq V(G)$ with $w(X)\ge 3\varepsilon$, there is a component $Y$ of $G[X]$ with $w(Y)> w(X)-\varepsilon$. \end{thm} \noindent{\bf Proof.}\ \ Let $Z$ be a union of components of $G[X]$, minimal such that $w(Z)\ge \varepsilon$. Since $X\setminus Z$ is anticomplete to $Z$, it follows that $w(X\setminus Z)<\varepsilon$, and so $w(Z)> w(X)-\varepsilon$. Choose a component $Y$ of $G[X]$ with $Y\subseteq Z$. From the minimality of $Z$, $w(Z\setminus Y)<\varepsilon$, and so $w(Y)\ge ( w(X)-\varepsilon)-\varepsilon\ge \varepsilon$, and therefore $Z=Y$ from the minimality of $Z$. But then $w(Y)=w(Z)> w(X)-\varepsilon$. 
This proves \ref{bigcomp}.~\bbox The component $Y$ of \ref{bigcomp} satisfies $w(Y)> w(X)-\varepsilon\ge 2\varepsilon$, and since the remainder of $G[X]$ is anticomplete to $Y$ and therefore has weight less than $\varepsilon$, it follows that $Y$ is unique. We call $Y$ the {\em big component} of $G[X]$. If $A\subseteq V(G)$, each vertex in $V(G)\setminus A$ with a neighbour in $A$ is called an {\em attachment} of $A$. Let $\varepsilon>0$, with $5\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph. Let $C,D$ be disjoint subsets of $V(G)$, such that: \begin{itemize} \item $|C|\ge 2$, and $G[C]$ is connected and anticonnected; \item $D\ne \emptyset$, and $D$ is the set of all vertices in $V(G)\setminus C$ that are complete to $C$; and \item $C$ contains no attachment of the big component of $G\setminus (C\cup D)$. \end{itemize} (Note that since there is a vertex in $D$ complete to $C$, it follows that $w(C)< \varepsilon$, and similarly $w(D)< \varepsilon$. Since $5\varepsilon\le 1$, there is a big component of $G\setminus (C\cup D)$.) In these circumstances we call $(C,D)$ a {\em split} of $(G,w)$. As we shall see, splits are a useful kind of decomposition in hole-with-hat-free graphs. Let us say a {\em forcer} is a graph $F$ with eight vertices $v_1,\ldots, v_8$, where $v_1\hbox{-} v_2\hbox{-} v_3\hbox{-} v_4$ and $v_5\hbox{-} v_6\hbox{-} v_7\hbox{-} v_8$ are induced paths of $F$, and $\{v_1,\ldots, v_4\}$ is complete to $\{v_5,\ldots, v_8\}$. We call these two paths the {\em constituent paths} of the forcer. A {\em forcer in $G$} means an induced subgraph of $G$ that is a forcer, and $G$ is {\em forcer-free} if there is no forcer in $G$. Now we prove the main result of this section. It is a strengthening of the ``amalgam'' decomposition of hole-with-hat-free graphs due to Conforti, Cornu\'ejols, Kapoor and Vu\v{s}kovi\'c~\cite{conforti}. \begin{thm}\label{getsplit} Let $\varepsilon>0$, with $5\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph, where $G$ is hole-with-hat-free. Let $F$ be a forcer in $G$. Then there is a split $(C,D)$ of $G$ such that $G[C], G[D]$ both contain a constituent path of $F$. \end{thm} \noindent{\bf Proof.}\ \ Let $F$ be a forcer, and let $P_1,P_2$ be the constituent paths of $F$. Thus there are disjoint subsets $X_1, X_2$ of $V(G)$, such that \begin{itemize} \item $V(P_1)\subseteq X_1$, and $X_1$ is connected and anticonnected; \item $V(P_2)\subseteq X_2$, and $X_2$ is connected and anticonnected; and \item $X_1, X_2$ are complete to one another. \end{itemize} Choose such $(X_1,X_2)$ maximal in the sense that there is no choice of $(X_1', X_2')$ satisfying the same conditions, with $X_i\subseteq X_i'$ for $i = 1,2$ and $|X_1'\cup X_2'|>|X_1\cup X_2|$. We call this property the {\em maximality} of $(X_1,X_2)$. Let $X_3$ be the set of all vertices in $V(G)\setminus (X_1\cup X_2)$ that are complete to $X_1\cup X_2$, and let $R=V(G)\setminus (X_1\cup X_2\cup X_3)$. For $i=1,2$, let $R_i$ be the set of vertices in $R$ that are complete to $X_i$. \\ \\ (1) {\em $R_1$ is anticomplete to $X_2$, and $R_2$ is anticomplete to $X_1$, and so $R_1\cap R_2=\emptyset$.} \\ \\ Suppose that some $v\in R_1$ has a neighbour in $X_2$. Since $v\notin X_3$, $v$ is mixed on $X_2$. But then $X_2'=X_2\cup \{v\}$ is connected and anticonnected, and the pair $(X_1,X_2')$ violates the maximality of $(X_1,X_2)$. This proves (1). 
\bigskip For $i = 1,2$, let $S_i$ be the union of all components of $G[R\setminus (R_1\cup R_2)]$ that have an attachment in $X_i$. Let $S_3=R\setminus (R_1\cup R_2\cup S_1\cup S_2)$. \\ \\ (2) {\em $S_1\cap S_2=\emptyset$. Moreover, $S_1$ is anticomplete to $X_2\cup R_2\cup S_2$, and $S_2$ is anticomplete to $X_1\cup R_1\cup S_1$.} \\ \\ By \ref{wiggly4}, $S_1\cap S_2=\emptyset$, and no vertex of $S_1$ is mixed on $X_2$; since no vertex of $S_1$ is complete to $X_2$ (such a vertex would belong to $R_2$), it follows that $S_1$ is anticomplete to $X_2$, and similarly $S_2$ is anticomplete to $X_1$. By \ref{wiggly3}, $S_1$ is anticomplete to $R_2$, and $S_2$ is anticomplete to $R_1$. Finally, $S_1$ is anticomplete to $S_2$, since each is a union of components of $G[R\setminus (R_1\cup R_2)]$. This proves (2). \begin{figure}[h!] \centering \begin{tikzpicture}[scale=0.8,auto=left] \tikzstyle{every node}=[inner sep=10pt, fill=none,circle,draw] \node (X3) at (0,1) {}; \node (S3) at (0,3) {}; \node (R1) at (-4,2.5) {}; \node (R2) at (4,2.5) {}; \node (X1) at (-2,-2) {}; \node (X2) at (2,-2) {}; \node (S1) at (-2,.5) {}; \node (S2) at (2,.5) {}; \draw[line width = 6pt] (R1) -- (X1); \draw[line width = 6pt] (X1) -- (X2); \draw[line width = 6pt] (X2) -- (R2); \draw[line width = 6pt] (X2) -- (X3); \draw[line width = 6pt] (X1) -- (X3); \foreach \from/\to in {S1/R1, S1/X1,S1/X3,R1/X3,R1/S3,S3/X3,S3/R2,X3/R2,X3/S2,S2/R2,S2/X2} \draw [decoration={snake}, decorate] (\from) -- (\to); \draw [decoration={snake}, decorate] (R1) to [bend left =40](R2); \tikzstyle{every node}=[] \draw (R1) node [] {\scriptsize$R_1$}; \draw (R2) node [] {\scriptsize$R_2$}; \draw (X1) node [] {\scriptsize$X_1$}; \draw (S1) node [] {\scriptsize$S_1$}; \draw (X2) node [] {\scriptsize$X_2$}; \draw (S2) node [] {\scriptsize$S_2$}; \draw (S3) node [] {\scriptsize$S_3$}; \draw (X3) node [] {\scriptsize$X_3$}; \end{tikzpicture} \caption{Thick lines indicate complete pairs, and wiggly lines indicate possible edges.} \label{fig:getsplit} \end{figure} Thus, in summary, the sets $X_1,X_2,X_3,R_1,R_2,S_1,S_2,S_3$ are pairwise disjoint and have union $V(G)$. The pairs $$(X_1,X_2), (X_1,X_3), (X_2,X_3), (R_1,X_1), (R_2,X_2)$$ are complete to each other; the pairs $$(R_1,X_2), (R_2,X_1), (S_1,X_2), (S_2,X_1), (S_3,X_1), (S_3,X_2), (S_1,R_2), (S_2,R_1), (S_1,S_2), (S_1,S_3),(S_2,S_3)$$ are anticomplete; and there may be edges between the pairs not listed. Every component of $G[S_i]$ has an attachment in $X_i$ for $i = 1,2$. Define $T=X_1\cup X_2\cup X_3\cup R_1\cup R_2$. Choose $x_1\in X_1$ and $x_2\in X_2$. Then every vertex in $T$ is adjacent to one of $x_1,x_2$, and so $w(T)< 2\varepsilon$. Hence $G\setminus T$ has a big component $Y$, and since the sets $S_1,S_2,S_3$ are pairwise anticomplete, we may assume that $Y$ is disjoint from $S_1$, by exchanging $X_1,X_2$ if necessary. But then $(X_1,X_2\cup X_3\cup R_1)$ is a split of $G$ satisfying the theorem. This proves \ref{getsplit}.~\bbox Let us say a split $(C,D)$ of $G$ is {\em optimal} if there is no split $(C',D')$ with $C\subseteq C'$ and $C'\ne C$. Let $(C,D)$ be an optimal split. Let $A$ be the union of all components of $G\setminus (C\cup D)$ that have an attachment in $C$; and let $B$ be the union of all other components of $G\setminus (C\cup D)$ (including the big component). Let us call $(A,C,D,B)$ a {\em fracture} of $G$. (Note that there are no edges between $B$ and $A\cup C$ but there may well be edges between $A$ and $D$. Also, $B\ne \emptyset$, since it contains the big component of $G\setminus (C\cup D)$, but $A$ might be empty.) From \ref{getsplit} we have immediately: \begin{thm}\label{getfracture} Let $\varepsilon>0$, with $5\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph, where $G$ is hole-with-hat-free. Let $F$ be a forcer in $G$. 
Then there is a fracture $(A,C,D,B)$ of $G$ such that $G[C]$ contains a constituent path of $F$. \end{thm} We need some observations about fractures. \begin{thm}\label{fracture} Let $\varepsilon>0$, with $5\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph, where $G$ is hole-with-hat-free. Let $(A,C,D,B)$ be a fracture of $G$. \begin{itemize} \item For each $a\in A$, there is an attachment of the big component of $G\setminus (C\cup D)$ that is nonadjacent to~$a$. \item For each $a\in A$, and every anticomponent $X$ of $G[D]$, $a$ is not mixed on~$X$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ Let $Y$ be the big component of $G\setminus (C\cup D)$; thus $Y\subseteq B$, and all its attachments belong to $D$. Suppose that $a\in A$, and $a$ is adjacent to every vertex of $D$ that has a neighbour in $Y$. Let $P$ be the component of $G[A]$ that contains $a$; then some attachment of $P$ belongs to $C$. By \ref{wiggly2}, we may choose $v\in P$ mixed on $C$, such that every vertex in $D$ adjacent to $a$ is also adjacent to $v$. Let $D'$ be the set of all neighbours of $v$ in $D$. Then $(C\cup \{v\}, D')$ is a split (because $D'$ contains all attachments of $Y$), contradicting that $(C,D)$ is optimal. This proves the first assertion. Now suppose that $a\in A$ is mixed on an anticomponent $X$ of $G[D]$. Let $P$ be the component of $G[A]$ that contains $a$. Choose $v\in P$ mixed on $C$; then $P$ contradicts \ref{wiggly4} applied to the connected anticonnected set $C$ and the anticonnected set $X$. This proves \ref{fracture}.~\bbox \section{Multiple fractures} A fracture $(A,C,D,B)$ of $G$ is a kind of separation of $G$, because deleting $C\cup D$ disconnects $A$ from $B$. (But $A$ might be empty.) Also the order of this separation is small, since $w(C\cup D)\le 2\varepsilon$ in the usual notation. It would be nice if these separations did not ``cross'', so that they give us a tree-decomposition of $G$, but that is not true. Nevertheless, something like that is true, as we see in this section. \begin{thm}\label{crossing} Let $\varepsilon>0$, with $6\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph, where $G$ is hole-with-hat-free. Let $(A,C,D,B)$ and $(A', C', D', B')$ be fractures in $G$. Then either \begin{itemize} \item every connected subgraph of $G[A\cup A']$ is contained in one of $A$, $A'$; or \item the big component of $G\setminus (C\cup D)$ equals the big component of $G\setminus (C'\cup D')$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ We suggest that, to follow this argument, the reader imagine a $4\times 4$ matrix with rows labelled $A,C,D,B$ and columns $A', C', D', B'$. We remind the reader that $C$ is complete to $D$, and $A\cup C$ is anticomplete to $B$, and the same for $(A',C',D',B')$. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8,auto=left] \tikzstyle{every node}=[] \draw [-] (1,-2) -- (1,2); \draw [-] (0,-2) -- (0,2); \draw [-] (-1,-2) -- (-1,2); \draw [-] (-2,1) -- (2,1); \draw [-] (-2,0) -- (2,0); \draw [-] (-2,-1) -- (2,-1); \node at (-2.5,1.5) {\scriptsize $A$}; \node at (-2.5,0.5) {\scriptsize $C$}; \node at (-2.5,-0.5) {\scriptsize $D$}; \node at (-2.5,-1.5) {\scriptsize $B$}; \node at (-1.5,2.5) {\scriptsize $A'$}; \node at (-.5,2.5) {\scriptsize $C'$}; \node at (.5,2.5) {\scriptsize $D'$}; \node at (1.5,2.5) {\scriptsize $B'$}; \end{tikzpicture} \caption{Two fractures.} \label{fig:matrix1} \end{figure} Let $Y$ be the big component of $G\setminus (C\cup D)$, and define $Y'$ similarly. 
Since $w(Y), w(Y')> 1-3\varepsilon\ge 1/2$, it follows that $Y\cap Y'\ne \emptyset$, and since $Y\subseteq B$ and $Y'\subseteq B'$, we deduce that $Y\cap Y'\cap B\cap B'\ne \emptyset$. \\ \\ (1) {\em If $C\cap B'\ne \emptyset$ then $D\cap (A'\cup C')=\emptyset$, and if $B\cap C'\ne \emptyset$ then $(A\cup C)\cap D'=\emptyset$.} \\ \\ Let $u\in C\cap B'$. If $v\in D\cap (A'\cup C')$, then $v$ is adjacent to $u$ (because $C$ is complete to $D$), and yet $v$ is nonadjacent to $u$ (because $A'\cup C'$ and $B'$ are anticomplete), a contradiction. This proves the first statement, and the second follows by symmetry. \\ \\ (2) {\em We may assume that $A\cap (C'\cup D')\ne \emptyset$, and $(C\cup D)\cap A'\ne \emptyset$, and at least one of $B\cap D', D\cap B'$ is nonempty.} \\ \\ If $A\cap (C'\cup D')= \emptyset$, then the first outcome of the theorem holds, and similarly if $(C\cup D)\cap A'=\emptyset$. If $B\cap D', D\cap B'$ are both empty, then $Y,Y'\subseteq B\cap B'$, and so $Y=Y'$ and the second outcome of the theorem holds. This proves (2). \bigskip From the third assertion of (2) and symmetry, we may assume that $B\cap D'\ne \emptyset$. By (1), $(A\cup C)\cap C'=\emptyset$. From (2), $A\cap D'\ne \emptyset$; so by (1) $B\cap C'=\emptyset$. Hence $D\cap C'\ne \emptyset$, because $C'\ne \emptyset$. Every vertex in $C\cap A'$ is complete to $D\cap C'$ and hence to $C'$; but no vertex in $A'$ is complete to $C'$ from the definition of a fracture, and so $C\cap A'=\emptyset$. By (2), $D\cap A'\ne \emptyset$. By (1), $C\cap B'=\emptyset$, and so $C\cap D'\ne \emptyset$. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8,auto=left] \tikzstyle{every node}=[] \draw [-] (1,-2) -- (1,2); \draw [-] (0,-2) -- (0,2); \draw [-] (-1,-2) -- (-1,2); \draw [-] (-2,1) -- (2,1); \draw [-] (-2,0) -- (2,0); \draw [-] (-2,-1) -- (2,-1); \node at (-2.5,1.5) {\scriptsize $A$}; \node at (-2.5,0.5) {\scriptsize $C$}; \node at (-2.5,-0.5) {\scriptsize $D$}; \node at (-2.5,-1.5) {\scriptsize $B$}; \node at (-1.5,2.5) {\scriptsize $A'$}; \node at (-.5,2.5) {\scriptsize $C'$}; \node at (.5,2.5) {\scriptsize $D'$}; \node at (1.5,2.5) {\scriptsize $B'$}; \node at (-1.5,1.5) {\scriptsize $?$}; \node at (-1.5,.5) {\scriptsize $\emptyset$}; \node at (-1.5,-1.5) {\scriptsize $?$}; \node at (-.5,1.5) {\scriptsize $\emptyset$}; \node at (-.5,.5) {\scriptsize $\emptyset$}; \node at (-.5,-1.5) {\scriptsize $\emptyset$}; \node at (.5,-.5) {\scriptsize $?$}; \node at (1.5,1.5) {\scriptsize $?$}; \node at (1.5,.5) {\scriptsize $\emptyset$}; \node at (1.5,-.5) {\scriptsize $?$}; \tikzstyle{every node}=[inner sep=1.5pt, fill=black,circle,draw] \node at (-1.5,-.5) {}; \node at (-.5,-.5) {}; \node at (.5,1.5) {}; \node at (.5,-1.5) {}; \node at (1.5,-1.5) {}; \node at (.5,.5) {}; \end{tikzpicture} \caption{A solid dot means a nonempty set, and $?$ means we don't know.} \label{fig:matrix2} \end{figure} Since $C=C\cap D'$ is anticonnected, and every vertex in $A$ has a nonneighbour in $C$, and every vertex in $B\cap D'$ has a nonneighbour in $A\cap D'$, it follows that $(A\cup B\cup C)\cap D'$ is anticonnected. But each vertex in $D\cap A'$ has a neighbour in $(A\cup B\cup C)\cap D'$ (namely, in $C\cap D'$), and by \ref{fracture}, it follows that $D\cap A'$ is complete to $(A\cup B\cup C)\cap D'$. Similarly, since $D\cap (A'\cup B'\cup C')$ is anticonnected, it follows that $A\cap D'$ is complete to $D\cap (A'\cup B'\cup C')$. 
(Thus we almost have symmetry between $(A,C,D,B)$ and $(A', C', D', B')$; but not quite, because we do not know that $D\cap B'\ne \emptyset$.) Let $Q$ be the set of vertices in $D\cap D'$ that are not complete to $D\cap A'$, and let $Q'$ be the set of vertices in $D\cap D'$ that are not complete to $ A\cap D'$. Let $R=(D\cap D')\setminus (Q\cup Q')$. \\ \\ (3) {\em $Q\cap Q'=\emptyset$, and $Q,Q', R$ are pairwise complete.} \\ \\ Since there is a vertex in $A\cap D'$ and it is complete to $D\cap A'$, \ref{fracture} implies that $A\cap D'$ is complete to $Q$; and so $Q\cap Q'=\emptyset$. If $u\in Q$ and $v\in Q'\cup R$, there is a vertex in $D\cap A'$ adjacent to $v$ and not to $u$; so \ref{fracture} implies that $u,v$ are adjacent. Thus $Q$ is complete to $Q'\cup R$, and similarly $Q'$ is complete to $R$. This proves (3). \\ \\ (4) {\em $A'\cap B$ is anticomplete to $Q'\cup (B\cap D')$, and $B'\cap A$ is anticomplete to $Q\cup (D\cap B')$.} \\ \\ Each vertex in $A'\cap B$ is anticomplete to $A\cap D'$, and so by \ref{fracture}, also anticomplete to $Q'\cup (B\cap D')$. Similarly $B'\cap A$ is anticomplete to $Q\cup (D\cap B')$. This proves (4). \bigskip Choose $v\in D\cap A'$; then by \ref{fracture}, there is an attachment $q$ of $Y'$ nonadjacent to $v$. Since $v$ is adjacent to all vertices of $D'\setminus Q$, it follows that $q\in Q$. Similarly there is an attachment $q'$ of $Y$ with $q'\in Q'$. Let $X=((A\cup C)\cap D')\cup Q'$, and $X'=(D\cap (A'\cup C'))\cup Q$. Then $X, X'$ are disjoint, and complete to each other, and each of them is both connected and anticonnected. Now some vertex of $Y$ is adjacent to $q'$, and so has a neighbour in $X$; and no vertex of $Y$ is complete to $X$ (because $Y\subseteq B\cap (B'\cup D')$, since there are no edges between $B\cap A'$ and $B\cap D'$). Similarly some vertex of $Y'$ has a neighbour in $X'$, and no vertex of $Y'$ is complete to $X'$. Since $Y\cap Y'$ is non-null, this contradicts \ref{wiggly5}. This proves \ref{crossing}.~\bbox \begin{thm}\label{bigbag} Let $\varepsilon>0$, with $6\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph, where $G$ is hole-with-hat-free. Let $\mathcal{F}$ be the set of all fractures of $G$, and let $\mathcal{A}$ be the union of all the sets $A$ for $(A,C,D,B)\in \mathcal{F}$. Then $w(\mathcal{A})< 3\varepsilon$. \end{thm} \noindent{\bf Proof.}\ \ Let $Z$ be the vertex set of a component of $G[\mathcal{A}]$. For each $(A,C,D,B)\in \mathcal{F}$, we call each component of $G[A]$ a {\em piece}; a piece is {\em maximal} if it is not properly contained in any other piece (taken over all $(A,C,D,B)\in \mathcal{F}$). Thus $Z$ can be expressed as the union of vertex sets of maximal pieces. For each maximal piece $X$, let $(A,C,D,B)\in \mathcal{F}$ such that $X$ is a component of $G[A]$, and let $Y$ be the big component of $G\setminus (C\cup D)$; we call $Y$ the {\em fulcrum} of $X$. (There may be more than one choice of $(A,C,D,B)\in \mathcal{F}$ for a given set $X$, and correspondingly more than one choice of fulcrum: choose one, arbitrarily). We observe: \\ \\ (1) {\em If $X, X'$ are maximal pieces such that either $V(X\cap X')\ne \emptyset$, or $X$ is not anticomplete to $X'$, then $X,X'$ have the same fulcrum.} \\ \\ Suppose not. Let $X$ be a component of $G[A]$ where $(A,C,D,B)\in \mathcal{F}$, and define $(A',C', D', B')$ similarly. 
By \ref{crossing}, it follows that every connected subgraph of $G[A\cup A']$ is a subgraph of one of $G[A], G[A']$, and in particular the connected subgraph induced on $V(X)\cup V(X')$ is a subgraph of one of $G[A], G[A']$, say $G[A]$. But $X$ is a component of $G[A]$, so $V(X')\subseteq V(X)$; and since $X\ne X'$ (they have different fulcrums), this contradicts that $X'$ is a maximal piece. This proves (1). \bigskip Choose a connected subgraph $H$ of $G[Z]$, maximal such that $V(H)$ is the union of maximal pieces all with the same fulcrum $Y$. Suppose that $V(H)\ne Z$. Since $G[Z]$ is connected, there is a vertex $v_1\in Z\setminus V(H)$ with a neighbour $v_2\in V(H)$. Choose a maximal piece $X_1$ containing $v_1$, and a maximal piece $X_2$ containing $v_2$ with fulcrum $Y$. By (1), $X_1$ has fulcrum $Y$, contrary to the maximality of $H$. Thus $V(H)=Z$, and so $Y$ is anticomplete to $Z$. Since $w(Y)\ge \varepsilon$, it follows that $w(Z)<\varepsilon$. Since this holds for each component of $G[\mathcal{A}]$, \ref{bigcomp} implies that $w(\mathcal{A})<3\varepsilon$. This proves \ref{bigbag}.~\bbox Let us say $X\subseteq V(G)$ is a {\em homogeneous set} of $G$ if for every vertex $v\in V(G)\setminus X$, either $v$ is complete or anticomplete to $X$. Let $G$ be a graph; we say that $G$ is {\em guarded} if for every forcer $F$ in $G$, there is a homogeneous set $X$ of $G$ with $X\ne V(G)$ such that $G[X]$ contains a constituent path of $F$. \begin{thm}\label{flat} Let $\varepsilon>0$, with $6\varepsilon\le 1$, and let $(G,w)$ be an $\varepsilon$-coherent weighted graph, where $G$ is hole-with-hat-free. Then there exists $Z\subseteq V(G)$ with $|Z|>1$, such that $G[Z]$ is connected and guarded, and $w(Z)>1-4\varepsilon$. \end{thm} \noindent{\bf Proof.}\ \ Define $\mathcal{F}, \mathcal{A}$ as in \ref{bigbag}, and let $W=V(G)\setminus \mathcal{A}$. By \ref{bigbag}, $w(W)> 1-3\varepsilon\ge 3\varepsilon$. By \ref{bigcomp}, $G[W]$ has a big component, with vertex set $Z$ say, where $w(Z)> 1-4\varepsilon$. Hence $|Z|>1$, since $w(v)< \varepsilon< 1-4\varepsilon$ for each vertex $v$. Let $F$ be a forcer in $G[Z]$. Then by \ref{getfracture}, there is a fracture $(A,C,D,B)$ of $G$ such that $|V(F)\cap C|\ge 4$. Let $X=C\cap Z$. Since $C$ is a homogeneous set of $G\setminus A$, it follows that $X$ is a homogeneous set of $G[Z]$, and it contains a constituent path of $F$. (Also $X\ne Z$, since every vertex of the nonempty set $D$ is complete to $C$, and so $w(X)\le w(C)<\varepsilon<1-4\varepsilon<w(Z)$.) This proves \ref{flat}.~\bbox \section{$\alpha$-critical pairs} In this section we explore the properties of $\alpha$-critical pairs, and combine these results with \ref{flat} to prove \ref{mainthm3}. \begin{thm}\label{smalldeg} Let $\alpha\ge 2$, and let $(G,f)$ be $\alpha$-critical. Then $f(w)< 1-4^{-1/\alpha}$ for each $w\in V(G)$. \end{thm} \noindent{\bf Proof.}\ \ Let $w\in V(G)$, and let $c=f(w)$. Let $N=N_G(w)$, and let $M=V(G)\setminus (N\cup \{w\})$. Since $(G,f)$ is $\alpha$-critical, it follows that $G[N]$ is $\alpha$-narrow, and so is $G[M]$. Let $p$ be the maximum of $f(P)$ over all perfect induced subgraphs of $G[N]$, and let $q$ be the maximum of $f(Q)$ over all perfect induced subgraphs $Q$ of $G[M]$. We claim that $f^\alpha (N)\le p^\alpha$. If $f(v)=0$ for every $v\in N$ then the statement is true, so we may assume that $f(v)>0$ for some $v\in N$, and hence $p>0$. So the function $f(v)/p\;(v\in N)$ is a good function on $G[N]$, and since $G[N]$ is $\alpha$-narrow, we deduce that $f^\alpha (N)\le p^\alpha$. Similarly $f^\alpha(M)\le q^\alpha$. 
But if $P$ is a perfect induced subgraph of $G[N]$ then $G[V(P)\cup \{w\}]$ is perfect, and therefore $f(V(P)\cup \{w\})\le 1$; and so $p\le 1-c$, and similarly $q\le 1-c$. Thus $$1<f^\alpha(G)=f^\alpha(N)+f^\alpha(M)+f^\alpha(w)\le p^\alpha+q^\alpha+f^\alpha(w)\le 2(1-c)^\alpha+c^\alpha.$$ Now for $0\le x\le 1$, the function $g(x)=2(1-x)^\alpha+x^\alpha$ has the value $1$ when $x=1$, and its value increases with $x$ for $2/3\le x\le 1$: indeed, $g'(x)=\alpha\left(x^{\alpha-1}-2(1-x)^{\alpha-1}\right)$, and for $x\ge 2/3$ we have $x\ge 2(1-x)$, so $x^{\alpha-1}\ge 2^{\alpha-1}(1-x)^{\alpha-1}\ge 2(1-x)^{\alpha-1}$ since $\alpha\ge 2$. Thus $g(x)\le 1$ for $2/3\le x\le 1$. Since $g(c)>1$, it follows that $c<2/3$, and so $c^\alpha\le 1/2$; and consequently $2(1-c)^\alpha> 1/2$, that is, $c<1-4^{-1/\alpha}$. This proves \ref{smalldeg}.~\bbox \begin{thm}\label{strongEH} Let $\alpha\ge 1$, and let $(G,f)$ be $\alpha$-critical. Let $A,B\subseteq V(G)$ be disjoint and either complete or anticomplete. Then not both $f^\alpha(A), f^\alpha(B)> 2^{-\alpha}$. \end{thm} \noindent{\bf Proof.}\ \ Let $P$ be a perfect induced subgraph of $G[A]$ with $f(P)$ maximum, and choose $Q$ in $G[B]$ similarly. Since $P\cup Q$ is a perfect induced subgraph of $G$, it follows that $f(P)+f(Q)\le 1$, and from the symmetry we may assume that $f(P)\le 1/2$. We may also assume that $f(P)>0$, $f(P)=p$ say, and so $f(v)/p\;(v\in A)$ is a good function on $G[A]$; and we may assume that $B\ne \emptyset$, so that $G[A]$ is a proper induced subgraph of $G$ and hence $\alpha$-narrow, and consequently $f^\alpha(A)\le p^\alpha\le 2^{-\alpha}$. This proves \ref{strongEH}.~\bbox Next we need the following consequence of a theorem of R\"odl~\cite{rodl}: \begin{thm}\label{rodl} For all $\varepsilon>0$ and every graph $H$, there exists $\delta > 0$ such that for every $H$-free graph $G$, there is a subset $X\subseteq V(G)$ with $|X|\ge \delta |V(G)|$ such that one of $G[X], \overline{G}[X]$ has maximum degree less than $\varepsilon |X|$. \end{thm} We also need the following theorem of Bousquet, Lagoutte and Thomass\'e~\cite{lagoutte}: \begin{thm}\label{lagoutte} For every path $H$, there exists $\varepsilon>0$ such that for every $H$-free graph $G$ with $|G|>1$, either some vertex of $G$ has degree at least $\varepsilon|G|$, or there are disjoint anticomplete subsets $A,B\subseteq V(G)$ with $|A|,|B|\ge \varepsilon|G|$. \end{thm} We recall that the {\em house} is the complement of $P_5$. Let us say $G$ is {\em house-free} if $G$ contains no house. \begin{thm}\label{nohouse} For all $\varepsilon>0$ there exists $\delta>0$ such that, if $G$ is house-free and $|G|>1$, then either \begin{itemize} \item there are disjoint sets $A,B\subseteq V(G)$, complete to each other, with $|A|,|B|\ge \varepsilon\delta |G|$, or \item there exists $X\subseteq V(G)$ with $|X|\ge \delta|G|$ such that $G[X]$ has maximum degree less than $\varepsilon|X|$. \end{itemize} \end{thm} \noindent{\bf Proof.}\ \ Choose $\varepsilon'>0$ such that \ref{lagoutte} holds with $H, \varepsilon$ replaced by $P_5, \varepsilon'$ respectively. Now let $\varepsilon>0$; we must show that there exists $\delta$ as in the theorem. Thus we may assume that $\varepsilon\le \varepsilon'$, by reducing $\varepsilon$ if necessary. Choose $\delta$ as in \ref{rodl}. Now let $G$ be a house-free graph with $|G|>1$. The complement $\overline{G}$ of $G$ is $P_5$-free, and so by \ref{rodl}, there is a subset $X\subseteq V(G)$ with $|X|\ge \delta |V(G)|$ such that one of $G[X], \overline{G}[X]$ has maximum degree less than $\varepsilon |X|$. 
If $G[X]$ has maximum degree less than $\varepsilon |X|$ then the theorem holds, so we assume that $G[X]$ has maximum degree at least $\varepsilon |X|$ (and so $|X|>1$), and therefore $\overline{G}[X]$ has maximum degree less than $\varepsilon |X|$. By \ref{lagoutte} applied to $\overline{G}[X]$, there are disjoint anticomplete (in $\overline{G}$) subsets $A,B\subseteq X$ with $|A|,|B|\ge \varepsilon|X|$. But then $A$ is complete to $B$ in $G$, and $|A|,|B|\ge \varepsilon|X|\ge \delta\varepsilon |G|$. This proves \ref{nohouse}.~\bbox From \ref{nohouse} we deduce: \begin{thm}\label{fnohouse} Let $\varepsilon>0$, and choose $\delta>0$ satisfying \ref{nohouse}. Let $\alpha\ge 1$, such that $\varepsilon\delta 2^{\alpha}>1$. Let $(G,f)$ be $\alpha$-critical, where $G$ is house-free. Then there is a subset $X\subseteq V(G)$ and a good function $g$ on $G[X]$ with $g(v)\le f(v)$ for each $v\in X$, such that $g^\alpha(X)\ge \delta f^\alpha(G)$ and $g^\alpha(N_G(v)\cap X)< \varepsilon g^\alpha(X)$ for every vertex $v\in X$. \end{thm} \noindent{\bf Proof.}\ \ By rational approximation, we may assume that $f^\alpha$ is rational. Choose an integer $T>0$ such that $Tf^\alpha(v)$ is an integer for all $v\in V(G)$. Let $G'$ be obtained from $G$ by replacing each vertex $v$ by a clique $W_v$ of cardinality $Tf^\alpha(v)$, where \begin{itemize} \item the sets $W_v\;(v\in V(G))$ are pairwise disjoint; \item for all distinct $u,v\in V(G)$ adjacent in $G$, $W_u$ is complete to $W_v$ in $G'$; and \item for all distinct $u,v\in V(G)$ nonadjacent in $G$, $W_u$ is anticomplete to $W_v$ in $G'$. \end{itemize} It follows that $G'$ is also house-free, and $|G'|=Tf^\alpha(G)>T\ge 1$. From \ref{nohouse} applied to $G'$, we deduce that either \begin{itemize} \item there are disjoint sets $A',B'\subseteq V(G')$, complete to each other, with $|A'|,|B'|\ge \varepsilon\delta |G'|$, or \item there exists $X'\subseteq V(G')$ with $|X'|\ge \delta|G'|$ such that $G'[X']$ has maximum degree less than $\varepsilon|X'|$. \end{itemize} Suppose that the first bullet holds. Let $A$ be the set of vertices in $G$ such that $W_v\cap A'\ne \emptyset$, and define $B$ similarly. Then $A$ is complete to $B$ in $G$. Moreover $$f^\alpha(A)\ge |A'|/T\ge \varepsilon\delta |G'|/T\ge \varepsilon\delta f^\alpha(G)>\varepsilon\delta,$$ and similarly $f^\alpha(B)\ge \varepsilon\delta f^\alpha(G)$. By \ref{strongEH}, $\varepsilon\delta \le 2^{-\alpha}$, contrary to the hypothesis. Thus the second bullet holds. Let $X$ be the set of all $v\in V(G)$ such that $W_v\cap X'\ne \emptyset$; and for each $v\in V(G)$ let $g(v)$ satisfy $T(g(v))^\alpha=|W_v\cap X'|$. Thus $g^\alpha(X)=|X'|/T\ge \delta|G'|/T=\delta f^\alpha(G)$, and $g(v)\le f(v)$ for each $v\in V(G)$. Let $v\in X$. The union of the sets $W_u\cap X'$ over all $u\in N(v)\cap X$ has cardinality less than $\varepsilon|X'|$ (indeed, less than $\varepsilon|X'|-|W_v\cap X'|+1$); and so $Tg^\alpha(N(v)\cap X)< \varepsilon|X'|= \varepsilon Tg^\alpha(X)$, that is, $g^\alpha(N(v)\cap X)< \varepsilon g^\alpha(X)$. This proves \ref{fnohouse}.~\bbox Since $P_4$-free graphs are perfect, a theorem of Erd\H{o}s and Hajnal~\cite{EH} (see also Alon, Pach and Solymosi~\cite{alon}) implies: \begin{thm}\label{noforcer} There exists $\varepsilon>0$ such that if $G$ is forcer-free then $G$ has a clique or stable set of cardinality at least $|G|^\varepsilon$. 
\end{thm}
We also need a theorem of Jacob Fox (he did not publish his proof, but we gave a proof in~\cite{pathsandanti}):
\begin{thm}\label{Fox2}
Let $H$ be a graph for which there exists a constant $\delta>0$ such that every $H$-free graph $G$ has a clique or stable set of cardinality at least $|G|^{\delta}$. Then every $H$-free graph is $\frac{3}{\delta}$-narrow.
\end{thm}
By combining \ref{noforcer} and \ref{Fox2} we obtain:
\begin{thm}\label{forcernarrow}
There exists $\alpha\ge 1$ such that every forcer-free graph is $\alpha$-narrow.
\end{thm}
We deduce:
\begin{thm}\label{homog}
Let $\alpha'\ge 1$ be such that every forcer-free graph is $\alpha'$-narrow. Let $\alpha\ge \alpha'$, and let $G$ be a graph such that every proper induced subgraph is $\alpha$-narrow. Let $g$ be a good function on $G$. Let $Z\subseteq V(G)$ with $|Z|>1$, such that $G[Z]$ is connected and guarded. Let $d$ be the maximum of $g^\alpha(N_G(v)\cap Z)$ over all $v\in Z$. Then $g^\alpha(Z)\le \max(2d,d^{1-\alpha'/\alpha})$.
\end{thm}
\noindent{\bf Proof.}\ \ If $G[Z]$ is not anticonnected, there are two vertices $u,v\in Z$ such that $N(u)\cup N(v)\supseteq Z$, and so $g^\alpha(Z)\le 2 d$ as required. So we may assume that $G[Z]$ is anticonnected. Let us list all subsets $X$ of $Z$ with the properties that $X$ is a homogeneous set of $G[Z]$, and $X\ne Z$, and $X$ is maximal with these two properties; let these subsets be $W_1,\ldots, W_k$ say. Thus $W_1\cup\cdots\cup W_k=Z$, because $|Z|\ge 2$ and so each singleton subset of $Z$ is a subset of one of $W_1,\ldots, W_k$. We claim that $W_1,\ldots, W_k$ are pairwise disjoint. Suppose that $W_1\cap W_2\ne \emptyset$ say. Choose $w_1\in W_1\setminus W_2$ and $w_2\in W_2\setminus W_1$. If $w_1,w_2$ are nonadjacent, then since $W_2$ is a homogeneous set, $w_1$ has no neighbours in $W_2$, and so, since $W_1$ is homogeneous, each vertex of $W_2$ has no neighbour in $W_1$; and so $G[W_1\cup W_2]$ is not connected. If $w_1,w_2$ are adjacent, then similarly $G[W_1\cup W_2]$ is not anticonnected. Since $G[Z]$ is both connected and anticonnected, it follows that $W_1\cup W_2\ne Z$. But $W_1\cup W_2$ is a homogeneous set of $G[Z]$, contrary to the maximality of $W_1$. This proves that $W_1,\ldots, W_k$ form a partition of $Z$, and so $k>1$.

Choose $w_i\in W_i$ for $1\le i\le k$, and let $G'$ be the graph induced on $\{w_1,\ldots, w_k\}$. From the hypothesis, $G'$ is forcer-free, and so $\alpha'$-narrow. Let $t=\alpha/\alpha'\ge 1$. For $1\le i\le k$, let $P_i$ be a perfect induced subgraph of $G[W_i]$ with $g(P_i)$ maximum, and let $p(w_i)=g(P_i)$. By an application of Lov\'asz' substitution lemma~\cite{lovasz}, it follows that $p$ is a good function on $G'$, and so $p^{\alpha'}(G')\le 1$. For $1\le i\le k$, $G[W_i]$ is $\alpha$-narrow (since $W_i\ne Z$, $G[W_i]$ is a proper induced subgraph of $G$). Since $g(v)/p(w_i)\;(v\in W_i)$ is good on $G[W_i]$, it follows that $g^\alpha(W_i)\le p(w_i)^\alpha$. But also, since $G[Z]$ is connected, and $W_i\ne Z$, there is a vertex $v\in Z\setminus W_i$ complete to $W_i$; and so $g^\alpha(W_i)\le d$ by hypothesis. Thus
$$g^\alpha(W_i) \le \min(p(w_i)^\alpha,d)\le p(w_i)^{\alpha'}d^{1-1/t}.$$
Hence
$$g^\alpha(Z)=\sum_{1\le i\le k}g^\alpha(W_i)\le d^{1-1/t}\sum_{1\le i\le k}p(w_i)^{\alpha'}= d^{1-1/t}p^{\alpha'}(G')\le d^{1-1/t}.$$
This proves \ref{homog}.~\bbox

We deduce \ref{mainthm3}, which we restate:
\begin{thm}\label{mainthm4}
There exists $\alpha\ge 1$ such that for every $\alpha$-critical pair $(G,f)$, there is a hole-with-hat in $G$.
\end{thm}
\noindent{\bf Proof.}\ \ Let $\varepsilon=1/6$, and choose $\delta>0$ satisfying \ref{nohouse}. From \ref{forcernarrow}, there exists $\alpha'\ge 1$ such that every forcer-free graph is $\alpha'$-narrow. Let $\alpha\ge \max(2,\alpha')$ be such that $\varepsilon\delta 2^{\alpha/\alpha'}>1$. We claim that $\alpha$ satisfies the theorem. Suppose not; then there is an $\alpha$-critical pair $(G,f)$ such that $G$ is hole-with-hat-free. By \ref{fnohouse}, there is a subset $X\subseteq V(G)$ and a good function $g$ on $G[X]$ with $g(v)\le f(v)$ for each $v\in X$, such that $g^\alpha(X)\ge \delta f^\alpha(G)>\delta$ and $g^\alpha(N_G(v)\cap X)< \varepsilon g^\alpha(X)$ for every vertex $v\in X$. Let $g^\alpha(X)=\lambda$; then $\delta\le \lambda\le f^\alpha(G)$. Let $H=G[X]$, and define $w(v)=g^\alpha(v)/\lambda$ for each $v\in X$. Then $(H, w)$ is a weighted graph.
\\
\\
(1) {\em $(H,w)$ is $\varepsilon$-coherent.}
\\
\\
By \ref{smalldeg}, for each $v\in V(H)$, $g(v)\le f(v)< 1-4^{-1/\alpha}\le 1/2$, and so $w(v)< (1-4^{-1/\alpha})^\alpha/\lambda \le 2^{-\alpha}/\lambda\le \varepsilon$ (since $\lambda\ge \delta$). Also, for each vertex $v\in V(H)$,
$$\lambda w(N_H(v))=g^\alpha(N_G(v)\cap X) < \varepsilon g^\alpha(X)=\varepsilon\lambda,$$
and so $w(N_H(v)) <\varepsilon.$ Third, by \ref{strongEH}, if $A,B\subseteq V(H)$ are disjoint and anticomplete, then not both $w(A), w(B)\ge \varepsilon$, since $\varepsilon> 2^{-\alpha}/\lambda$ (because $\lambda\ge \delta$). Consequently $(H, w)$ is $\varepsilon$-coherent. This proves~(1).
\bigskip

By \ref{flat}, and since $\varepsilon=1/6$, there exists $Z\subseteq V(H)$ with $|Z|>1$, such that $H[Z]$ is connected and guarded, and $w(Z)>1-4\varepsilon$. Let $d$ be the maximum of $\lambda w(N_G(v)\cap Z)$ over all $v\in Z$; then $d\le \varepsilon\lambda$. By \ref{homog},
$$\lambda w(Z)\le \max(2d,d^{1-\alpha'/\alpha})\le \max(2\varepsilon\lambda,(\varepsilon\lambda)^{1-\alpha'/\alpha}).$$
But $2\varepsilon\lambda\ge (\varepsilon\lambda)^{1-\alpha'/\alpha}$, since $2(\varepsilon\lambda)^{\alpha'/\alpha}\ge 1$ (again because $\lambda\ge \delta$). Thus $\lambda w(Z)\le 2\varepsilon\lambda$, and so $w(Z)\le 2\varepsilon=1/3$, contradicting that $w(Z)> 1-4\varepsilon=1/3$. This proves \ref{mainthm4}.~\bbox
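The elementary analytic inequality invoked in the proof of \ref{smalldeg} is easy to verify numerically. The following short Python snippet is a spot check only (our illustration, not part of the argument); it samples $g(x)=2(1-x)^\alpha+x^\alpha$ on $[2/3,1]$ for several exponents $\alpha\ge 2$:
\begin{verbatim}
# Spot check (not a proof): g(x) = 2(1-x)^a + x^a <= 1
# for 2/3 <= x <= 1 and a >= 2, as used in the proof of the degree bound.
for a in (2, 2.5, 3, 5, 10):
    for k in range(1001):
        x = 2/3 + (1/3) * k / 1000
        assert 2 * (1 - x) ** a + x ** a <= 1 + 1e-12, (a, x)
print("inequality holds at all sampled points")
\end{verbatim}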
{ "timestamp": "2020-06-16T02:33:26", "yymm": "2005", "arxiv_id": "2005.02896", "language": "en", "url": "https://arxiv.org/abs/2005.02896", "abstract": "A \"hole-with-hat\" in a graph $G$ is an induced subgraph of $G$ that consists of a cycle of length at least four, together with one further vertex that has exactly two neighbours in the cycle, adjacent to each other, and the \"house\" is the smallest, on five vertices. It is not known whether there exists $\\epsilon>0$ such that every graph $G$ containing no house has a clique or stable set of cardinality at least $|G|^\\epsilon$; this is one of the three smallest open cases of the Erdős-Hajnal conjecture and has been the subject of much study.We prove that there exists $\\epsilon>0$ such that every graph $G$ with no hole-with-hat has a clique or stable set of cardinality at least $|G|^\\epsilon$", "subjects": "Combinatorics (math.CO)", "title": "Holes with hats and Erdős-Hajnal", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759660443166, "lm_q2_score": 0.7217432182679956, "lm_q1q2_score": 0.7079405764545542 }
https://arxiv.org/abs/2003.05606
Orientations without forbidden patterns on three vertices
Given a set $F$ of oriented graphs, a graph $G$ is an $F$-graph if it admits an $F$-free orientation. Building on previous work by Bang-Jensen and Urrutia, we propose a master algorithm that determines if a graph admits an $F$-free orientation when $F$ is a subset of the orientations of $P_3$ and the transitive triangle. We extend previous results of Skrien by studying the class of $F$-graphs, when $F$ is any set of oriented graphs of order three. Structural characterizations for all such sets are provided, except for the so-called perfectly-orientable graphs and one of its subclasses, which remain as open problems.
\section{Introduction} \label{sec:Introduction}

For a set of oriented graphs $F$, a graph is an {\em $F$-graph} if it admits an $F$-free orientation. The concept of $F$-graph was introduced by Skrien in \cite{skrienJGT6}, where he studied $F$-graphs when $F$ consists of a subset of the orientations of $P_3$. Following Skrien, we will use $B_1$, $B_2$, and $B_3$ to denote the orientations of $P_3$, see Figure \ref{Fig:smallor}. Also in \cite{skrienJGT6}, Skrien proved structural characterizations of $F$-graphs for every $F \subseteq \{B_1,B_2,B_3\}$, except for $\{B_1\}$ and $\{B_2\}$; notice that $\{B_1\}$- and $\{B_2\}$-graphs are actually the same class, known as {\em perfectly-orientable graphs}. Studying the structure of $B_1$-free orientable graphs has caught the interest of several authors. In particular, Hartinger and Milani\v{c}, and the same authors with Bre\v sar and Kos, have thoroughly studied this family in a series of papers \cite{bresarDAM248, hartingerJGT2016, hartingerDM2017}. We will follow their terminology and call the class of $\{B_1\}$-graphs $1$-{\em perfectly-orientable graphs} ({\em $1$-p.o.\ graphs} for short). They have nice results when the problem is restricted to some families; e.g., they showed that a cograph is $1$-p.o.\ if and only if it is $K_{2,3}$-free. Nonetheless, characterizing the class of $1$-p.o.\ graphs through forbidden induced subgraphs remains an open problem in the general case.

From the algorithmic point of view, Urrutia and Gavril found a polynomial time algorithm to recognize $1$-perfectly orientable graphs \cite{urrutiaIPL41}. Furthermore, in \cite{bangjensenJCTB59}, the authors show that for any subset $F$ of $\{B_1,B_2,B_3\}$, there is a polynomial time algorithm to determine if a graph admits an $F$-free orientation. They do so by reducing each of these problems to $2$-SAT. Recall that in the classic article \cite{aspvallIPL8}, $2$-SAT is solved by proceeding over an auxiliary digraph constructed from the $2$-SAT instance. By using these two techniques, we extend the aforementioned result from \cite{bangjensenJCTB59} to any subset of $\{B_1, B_2, B_3, T_3\}$, where $T_3$ is the transitive tournament of order $3$. Instead of reducing our problem to $2$-SAT, we give an explicit construction of an auxiliary digraph $D^+$. Then, we follow the same procedure used in \cite{aspvallIPL8} over $D^+$. Thus, we obtain a certifying polynomial time algorithm to determine if a graph belongs to the class of $F$-graphs, for any set $F \subseteq \{ B_1, B_2, B_3, T_3\}$.

In addition to the algorithm mentioned above, in this paper we extend Skrien's work by proposing characterizations of $F$-graphs when $F$ is any set of oriented graphs on three vertices, except for $\{\overrightarrow{C_3},B_1\}$ and $\{B_1\}$, where $\overrightarrow{C_3}$ denotes the directed $3$-cycle. Probably the most interesting case is the family of $T_3$-graphs, for which we provide a characterization in terms of forbidden homomorphic images of a family of graphs. The characterization of $T_3$-graphs turns out to be surprisingly natural, and the obstructions are obtained by ``reverse-engineering'' the no-certificates provided by the recognition algorithm.

\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
\begin{scope}[xshift=2cm, scale=0.7]
\node [vertex] (1) at (-1.45,0){};
\node [vertex] (2) at (0,2){};
\node [vertex] (3) at (1.45,0){};
\node at (0,-1.2){$K_1+\overrightarrow{K_2}$};
\draw[arc] (1) edge (3);
\end{scope}
\end{tikzpicture}
\\~\\
\begin{tikzpicture}
\begin{scope}[xshift=2cm, scale=0.7]
\node [vertex] (1) at (-1.45,0){};
\node [vertex] (2) at (0,2){};
\node [vertex] (3) at (1.45,0){};
\node at (0,-1.2){$\overrightarrow{C_3}$};
\draw[arc] (1) edge (2);
\draw[arc] (2) edge (3);
\draw[arc] (3) edge (1);
\end{scope}
\end{tikzpicture}
\ \ \ \ \ \
\begin{tikzpicture}[every circle node/.style ={circle,draw,minimum size= 2pt,inner sep=2.5pt, outer sep=0.5pt},every rectangle node/.style ={}]
\begin{scope}[xshift=2cm, scale=0.7]
\node [circle] (1) at (-1.45,0){};
\node [circle] (2) at (0,2){};
\node [circle] (3) at (1.45,0){};
\node[] at (0,-1.2){$T_3$};
\draw [arc] (1) edge (2);
\draw [arc] (2) edge (3);
\draw [arc] (1) edge (3);
\end{scope}
\end{tikzpicture}
\\~\\
\begin{tikzpicture}
\begin{scope}[xshift=2cm, scale=0.7]
\node [vertex] (1) at (-1.45,0){};
\node [vertex] (2) at (0,2){};
\node [vertex] (3) at (1.45,0){};
\node at (0,-1.2){$B_1$};
\draw [arc] (1) edge (2);
\draw [arc] (3) edge (2);
\end{scope}
\end{tikzpicture}
\ \ \ \ \ \
\begin{tikzpicture}
\begin{scope}[xshift=2cm, scale=0.7]
\node [vertex] (1) at (-1.45,0){};
\node [vertex] (2) at (0,2){};
\node [vertex] (3) at (1.45,0){};
\node at (0,-1.2){$B_2$};
\draw [arc] (2) edge (1);
\draw [arc] (2) edge (3);
\end{scope}
\end{tikzpicture}
\ \ \ \ \ \
\begin{tikzpicture}[every circle node/.style ={circle,draw,minimum size= 2pt,inner sep=2.5pt, outer sep=0.5pt},every rectangle node/.style ={}]
\begin{scope}[xshift=2cm, scale=0.7]
\node [vertex] (1) at (-1.45,0){};
\node [vertex] (2) at (0,2){};
\node [vertex] (3) at (1.45,0){};
\node[] at (0,-1.2){$B_3$};
\draw [arc] (1) edge (2);
\draw [arc] (2) edge (3);
\end{scope}
\end{tikzpicture}
\caption{All possible orientations of non-empty graphs on three vertices.}
\label{Fig:smallor}
\end{center}
\end{figure}

We refer the reader to \cite{bangjensenDigraphs} for undefined basic terms. We denote the oriented graphs on three vertices as in Figure~\ref{Fig:smallor}. Given a set $A$, we define $A \times 1 = A$ and $A \times 0 = \varnothing$. For a statement $P$, we denote by $\mathbbm{1}_{[P]}$ the truth value of $P$. In other words, $\mathbbm{1}_{[P]}=1$ if $P$ is true, and $\mathbbm{1}_{[P]}=0$ otherwise. We say that any set $F \subseteq \{B_1, B_2, B_3, T_3\}$ is a {\em simple set}.

For a graph $G$ and a simple set $F$, we construct the {\em constraint digraph} $D^+$ associated to $G$ and $F$ as follows. The vertex set, $V^+$, of $D^+$ is the set $\{(x,y) \colon xy \in E_G\}$; notice that for every edge $xy \in E_G$, both $(x,y)$ and $(y,x)$ belong to $V^+$. We define the following sets of arcs:
\begin{itemize}
\item $A_1 = \{ ((y,x),(z,y)) \colon xy \in E_G, yz \in E_G, zx \notin E_G \}$,
\item $A_2 = \{ ((x,y),(y,z)) \colon xy \in E_G, yz \in E_G, zx \notin E_G \}$,
\item $A_3 = \{ ((x,y),(z,y)) \colon xy \in E_G, yz \in E_G, zx \notin E_G \} \cup \{ ((y,x),(y,z)) \colon xy \in E_G, yz \in E_G, zx \notin E_G \}$, and
\item $A_t = \{ ((x,y),(y,z)) \colon xy \in E_G, yz \in E_G, zx \in E_G \} \cup \{ ((y,x),(x,z)) \colon xy \in E_G, yz \in E_G, zx \in E_G \}$.
\end{itemize}
Finally, we define the arc set, $A^+$, of $D^+$ as
\[ A^+ = (A_1 \times \mathbbm{1}_{[B_1 \in F]}) \cup (A_2 \times \mathbbm{1}_{[B_2 \in F]}) \cup (A_3 \times \mathbbm{1}_{[B_3 \in F]}) \cup (A_t \times \mathbbm{1}_{[T_3 \in F]}). \]
In the following section we will use the constraint digraph for our algorithm. We will also use it at the end of this paper to find a structural characterization of $\{T_3\}$-graphs.

The rest of the paper is organized as follows. In Section \ref{sec:Algorithm}, the algorithm to recognize $F$-graphs, where $F$ is any subset of $\{ B_1, B_2, B_3, T_3 \}$, is presented. In Section \ref{sec:small}, we characterize $F$-graphs for most of the cases not covered in \cite{skrienJGT6}. Section \ref{sec:T3} is devoted to the characterization of $\{ T_3 \}$-graphs. Conclusions and some open problems are presented in Section \ref{sec:conclusions}.

\section{Algorithm}\label{sec:Algorithm}

In this section we propose a master algorithm that finds an $F$-free orientation of a graph $G$, or outputs that it is not possible to find one. We say that it is a master algorithm since it works for any set $F \subseteq \{ B_1,B_2,B_3,T_3 \}$. We begin by observing some properties of the constraint digraph, $D^+$.

\begin{proposition}\label{reversedarc}
Let $G$ be a graph and $F \subseteq \{ B_1, B_2, B_3, T_3 \}$. Then, in $D^+$, $(x,y) \to (z,w)$ if and only if $(w,z) \to (y,x)$.
\end{proposition}
\begin{proof}
Proving one implication is enough to prove the whole statement. Observe that $((x,y),(z,w)) \in A^+$ if and only if $((x,y),(z,w)) \in A_i$ for some $i \in \{ 1,2,3,t \}$. We will prove the statement for the case when $((x,y),(z,w)) \in A_1$; the other cases follow the same line of argumentation. If $((x,y),(z,w))\in A_1$ then $w = x$, $yx \in E_G$, $xz \in E_G$ and $zy \notin E_G$. Thus $zx \in E_G$, $xy \in E_G$ and $yz \notin E_G$, therefore $((x,z),(y,x)) \in A_1$. Hence, $((w,z), (y,x))\in A_1$ if and only if $((x,y),(z,w)) \in A_1$.
\end{proof}

From here, the following two propositions are easy to obtain.

\begin{proposition}\label{reversedpath}
Let $G$ be a graph and $F \subseteq \{ B_1,B_2, B_3,T_3\}$. There is a directed path from $(x,y)$ to $(z,w)$ in $D^+$ if and only if there is a directed path from $(w,z)$ to $(y,x)$ in $D^+$.
\end{proposition}
\begin{proof}
Proceed by induction over the length of the directed path. Proposition~\ref{reversedarc} is the base case, and it is used again in the inductive step.
\end{proof}

Let $D$ be a digraph and let $\overleftarrow{D}$ be the digraph obtained from $D$ by reversing every arc. A digraph $D$ is {\em skew-symmetric} if it is isomorphic to $\overleftarrow{D}$.

\begin{proposition}\label{skewsymmetric}
Let $G$ be a graph and $F\subseteq\{B_1,B_2,B_3,T_3\}$. The constraint digraph of $G$ and $F$ is skew-symmetric.
\end{proposition}
\begin{proof}
Let $D^+$ be the constraint digraph of $G$ and $F$. Consider the function $\varphi \colon V^+ \to V^+$ defined by $\varphi((x,y)) = (y,x)$. By Proposition~\ref{reversedarc}, it is easy to see that $\varphi$ is a digraph isomorphism between $D^+$ and $\overleftarrow{D^+}$.
\end{proof}

By the isomorphism shown in the previous proof, every strong component $S$ in $D^+$ has a dual component, $\overline{S}$ (which might be equal to $S$), induced by the vertices of the form $(y,x)$ where $(x,y)\in S$.
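Before proceeding, let us make the construction of $D^+$ concrete. The following Python sketch (our illustration; the representation of $G$ as a set of \texttt{frozenset} edges is a convention of the sketch, not of the paper) generates $V^+$ and the arc sets exactly as defined above:
\begin{verbatim}
from itertools import permutations

def constraint_digraph(vertices, edges, F):
    # vertices: iterable; edges: set of frozensets {x, y};
    # F: subset of {'B1', 'B2', 'B3', 'T3'}.
    adj = lambda u, v: frozenset((u, v)) in edges
    Vp = {(x, y) for e in edges for x, y in permutations(e)}
    Ap = set()
    for x, y, z in permutations(list(vertices), 3):
        if not (adj(x, y) and adj(y, z)):
            continue
        if not adj(z, x):                  # x-y-z induces a path
            if 'B1' in F: Ap.add(((y, x), (z, y)))           # A_1
            if 'B2' in F: Ap.add(((x, y), (y, z)))           # A_2
            if 'B3' in F:                                    # A_3
                Ap.update({((x, y), (z, y)), ((y, x), (y, z))})
        elif 'T3' in F:                    # x-y-z induces a triangle
            Ap.update({((x, y), (y, z)), ((y, x), (x, z))})  # A_t
    return Vp, Ap
\end{verbatim}
An arc $((x,y),(z,w))$ records that orienting $xy$ from $x$ to $y$ forces $zw$ to be oriented from $z$ to $w$, which is precisely how the digraph is used in what follows.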
By Proposition~\ref{reversedpath}, a strong component $S_1$ reaches another one, $S_2$, if and only if $\overline{S_2}$ reaches $\overline{S_1}$. A well-known algorithm of Tarjan \cite{tarjanSIAMJC1972} generates the strong components of a digraph in reverse topological order (i.e., if $S_1$ reaches $S_2$ then $S_2$ is generated before $S_1$).

Let us go back to the construction of the constraint digraph. Suppose that we want to find an $F$-free orientation of $G$. An arc $((x,y),(z,w))$ in $D^+$ tells us that, in order to achieve such an orientation, if we orient the edge $xy$ from $x$ to $y$, then we must orient the edge $zw$ from $z$ to $w$. Inductively, if there is a path from $(x,y)$ to $(z,w)$ and we orient the edge $xy$ from $x$ to $y$, then we must orient the edge $zw$ from $z$ to $w$. Thus, if $(x,y)$ and $(y,x)$ belong to the same strong component, $G$ does not admit an $F$-free orientation. In fact the reverse implication is also true. To see this, we will consider the famous $2$-satisfiability algorithm due to Aspvall, Plass and Tarjan \cite{aspvallIPL8}.

\begin{algorithm}[$2$-satisfiability algorithm \cite{aspvallIPL8}] \label{alg:tarjan}
Process the strong components, $S$, of $D^+$ in reverse topological order as follows:

\hspace{0.2cm}{\em General Step.} If $S$ is marked, do nothing. Otherwise if $S = \overline{S}$ then stop: $G$ does not admit an $F$-free orientation. Otherwise mark $S$ \texttt{true} and $\overline{S}$ \texttt{false}.
\end{algorithm}

Clearly, the algorithm stops inside the general step only if there is a vertex $(x,y)\in V^+$ in the same strong component as $(y,x)$. Otherwise, the $\{\texttt{true,false}\}$-colouring of $D^+$ induces an $F$-free orientation of $G$. We prove the latter fact in the following proposition.

\begin{proposition}\label{proposition:algorithm1}
Let $G$ be a graph and $F$ a simple set. If Algorithm \ref{alg:tarjan} outputs a $\{\texttt{true},\texttt{false} \}$-colouring of the vertices in $D^+$ then the vertices with colour \texttt{true} induce an $F$-free orientation of $G$.
\end{proposition}
\begin{proof}
Clearly, if $(x,y)$ is marked with \texttt{true}, then $(y,x)$ is marked with \texttt{false}. Also, every vertex receives one and only one truth colour. Hence the \texttt{true}-coloured vertices of $D^+$ induce an orientation of $G$; that is, if $(x,y)$ is marked \texttt{true}, then $xy$ is oriented as $(x,y)$. We now prove that it is an $F$-free orientation of $G$. To do so, we must prove that for any two oriented edges $(x,y),(w,z) \in V^+$ that induce an oriented graph in $F$, at least one is marked with \texttt{false}. By construction of $A^+$, if $(x,y)$ and $(w,z)$ induce an oriented graph in $F$, then $(x,y) \to (z,w)$ and $(w,z) \to (y,x)$. Hence it suffices to show that if $(x,y)$ is marked with \texttt{true} and $(x,y) \to (z,w)$, then $(z,w)$ is also marked with \texttt{true}. Since the algorithm marks all the vertices in the same strong component at once, it is enough to show that for any two strong components $S_1$ and $S_2$ of $D^+$, if $S_1$ is \texttt{true}-coloured and $S_1$ reaches $S_2$, then $S_2$ is also \texttt{true}-coloured.

Suppose that $S_1$ is marked with \texttt{true} and it reaches $S_2$, but $S_2$ is \texttt{false}-coloured. Since $S_1$ reaches $S_2$, $S_2<S_1$, where $<$ is the reverse topological order of the strong components of $D^+$. Since $S_2$ is marked with \texttt{false}, the component $\overline{S_2}$ was processed before $S_2$ (i.e.\ $\overline{S_2} < S_2$).
Analogously $S_1 < \overline{S_1}$. Transitivity of $<$ implies that $\overline{S_2} < \overline{S_1}$. Since $S_1$ reaches $S_2$, by Proposition~\ref{reversedpath}, $\overline{S_2}$ reaches $\overline{S_1}$, and then $\overline{S_1} < \overline{S_2}$. The previous inequalities yield the chain $\overline{S_1} < \overline{S_2} < S_2 < S_1 < \overline{S_1}$, which is impossible. Therefore if $S_1$ reaches $S_2$ and $S_1$ is marked with \texttt{true}, then $S_2$ is marked with \texttt{true} as well.
\end{proof}

Now it is easy to prove the following result.

\begin{theorem}\label{thm:algorithm1}
Let $G$ be a graph and $F$ a simple set. The following are equivalent:
\begin{itemize}
\item $G$ admits an $F$-free orientation,
\item there are no vertices $(x,y),(y,x)\in V^+$ contained in the same strongly connected component of $D^+$,
\item for any strong component $S$, $S \cap \overline{S} = \varnothing$ (i.e.\ $S \ne \overline{S}$).
\end{itemize}
\end{theorem}
\begin{proof}
The equivalence between the second and third items is trivial. In the paragraph preceding Algorithm \ref{alg:tarjan} it was shown that the first statement implies the second one. The remaining implication is proved by Algorithm \ref{alg:tarjan} and Proposition \ref{proposition:algorithm1}.
\end{proof}

The order of $D^+$ is $2m$, where $m$ is the size of $G$. Also note that $d_{D^+}((x,y)) \le d_G(x)+d_G(y)$. Thus $|A^+| \le m \Delta(G) \le mn$. Since both the $2$-satisfiability algorithm and Tarjan's algorithm for generating the strong components of a digraph run in time linear in the size of the digraph, our algorithm runs in $O(|V^+|+|A^+|) = O(m\Delta(G))$ time once $D^+$ is constructed.

\section{Graph properties and small forbidden orientations}\label{sec:small}

In this section we study the family of $F$-graphs when $F$ consists of oriented graphs on three vertices. In \cite{skrienJGT6} Skrien studied the cases when $F$ is a set of orientations of $P_3$. For this reason, we study $F$-graphs when either $K_1 + \overrightarrow{K_2} \in F$ or $F$ contains at least one orientation of $C_3$.

Clearly, any orientation of a ($K_1+K_2$)-free graph is ($K_1+\overrightarrow{K_2}$)-free. Moreover, it is not hard to verify that if a graph admits a ($K_1+\overrightarrow{K_2}$)-free orientation, then it is ($K_1+K_2$)-free. Since the class of ($K_1+K_2$)-free graphs coincides with the class of complete multipartite graphs, if $K_1+\overrightarrow{K_2}\in F$, then the family of $F$-graphs is the intersection of $(F - \{K_1 + \overrightarrow{K_2}\})$-graphs and complete multipartite graphs. Therefore, we only consider families of $F$-graphs when $K_1 + \overrightarrow{K_2} \not \in F$ and $F$ contains an orientation of $C_3$. It is straightforward to verify that if the set of forbidden orientations consists of connected graphs, then the associated hereditary property is closed under disjoint unions. Thus, it suffices to study connected graphs.

\begin{table}
\begin{center}
\begin{tabular}{| c | l |}
\hline
Forbidden orientations & Graph family\\
\hline
$B_1,B_2,B_3$ & Complete graphs. \\
\hline
$B_1,B_2$ & Proper circular-arc graphs.\\
\hline
$B_1,B_3$ & Nested interval graphs.\\
\hline
$B_2,B_3$ & Nested interval graphs. \\
\hline
$B_1$ & Open \\
\hline
$B_2$ & Open \\
\hline
$B_3$ & Comparability graphs.
\\ \hline
\end{tabular}
\caption{This table is found in \cite{skrienJGT6}.}
\label{Tab:skrien}
\end{center}
\end{table}

Skrien's results from \cite{skrienJGT6} are included in Table~\ref{Tab:skrien}. Recall that he found an alternative characterization for all sets containing orientations of $P_3$, except for $1$-p.o.\ graphs. Bang-Jensen, Huang and Prisner also studied $1$-perfectly orientable graphs; in particular, they proved the following result in \cite{bangjensenJCTB59}.

\begin{proposition}\label{bangjensen}\cite{bangjensenJCTB59}
Every graph with exactly one induced cycle of length greater than $3$ is $1$-perfectly orientable.
\end{proposition}

This result can be equivalently restated as follows: every triangle-free graph is $1$-perfectly orientable if it has only one induced cycle. With a simpler proof than the one found in \cite{bangjensenJCTB59}, we prove the biconditional version of this result, which is a corollary to the following proposition.

\begin{proposition}\label{B1T3-free}
The following statements are equivalent for a connected graph $G$:
\begin{enumerate}
\item $G$ admits a $\{B_1,T_3\}$-free orientation,
\item $G$ admits an orientation such that $d^+(x) \le 1$ for every vertex $x \in V_G$,
\item there is a function $f \colon V_G \to V_G$ such that $E_G = \{xy \colon x \ne y, f(x) = y \}$,
\item $G$ is unicyclic,
\item $G$ has no more edges than vertices.
\end{enumerate}
\end{proposition}
\begin{proof}
It is not hard to notice that the first two items are equivalent, and so are the second and third one. It is also straightforward to show that if $G$ has no more edges than vertices, then $G$ is unicyclic (recall that $G$ is connected); so item 5 implies item 4.

Now we prove that the second item implies the fifth one. Let $D_G$ be an orientation of $G$ such that $d^+(x)\le 1$ for every vertex $x$ of $G$. Consider the function $i \colon A_{D_G}\to V_G$ where $i((x,y)) = x$. Since $d^+(x)\le 1$, $i$ is an injective function. Thus $|E_G| = |A_{D_G}| \le |V_G|$.

To conclude the proof we show that if $G$ is unicyclic, it admits a $\{B_1,T_3\}$-free orientation. If $G$ is a tree, root $G$ at any vertex and orient the edges from descendant to ancestor. If $G$ is a cycle, orient $G$ in a cyclic way. In any other case, let $C$ be the only cycle in $G$. Orient $C$ in a cyclic way. Notice that $G/C$ is a tree. Root $G/C$ at the vertex corresponding to $C$, and orient the edges of $G/C$ from descendant to ancestor. We have now oriented all edges of $G$, and it is not hard to notice that this orientation is $\{B_1,T_3\}$-free.
\end{proof}

\begin{corollary}\label{B1C3T3-free}
A graph $G$ admits a $\{B_1, \overrightarrow{C_3}, T_3\}$-free orientation if and only if $G$ is unicyclic and triangle-free.
\end{corollary}
\begin{proof}
Suppose $G$ admits a $\{B_1,\overrightarrow{C_3}, T_3\}$-free orientation. Clearly, $G$ is triangle-free and, by Proposition~\ref{B1T3-free}, $G$ is also a unicyclic graph. On the other hand, consider a triangle-free unicyclic graph $G$. By Proposition~\ref{B1T3-free}, it admits a $\{B_1, T_3\}$-free orientation $D_G$. Since $G$ is triangle-free, $D_G$ is $\{B_1, \overrightarrow{C_3}, T_3\}$-free.
\end{proof}

The family of $F$-graphs when $F = \{T_3, \overrightarrow{C_3}, B_3\}$ has already been characterized; it is a particular case of the Gallai-Hasse-Roy-Vitaver Theorem.

\begin{proposition}\label{B3C3T3-free}
A graph is bipartite if and only if it admits a $\{T_3,\overrightarrow{C_3}, B_3\}$-free orientation.
\end{proposition}

In \cite{skrienJGT6}, Skrien shows that a graph is a proper circular-arc graph if and only if it is a $\{B_1,B_2\}$-graph. A proper circular-arc graph is a graph that admits an intersection model where no arc is contained in another. A family of sets $\mathcal{A}$ is said to have the Helly property if every subfamily $\mathcal{B} \subseteq \mathcal{A}$ whose members pairwise intersect has a non-empty total intersection. A (proper) Helly circular-arc graph is a graph that admits an intersection model that satisfies the Helly property (and in which no arc is contained in another). We extend Skrien's result to proper Helly circular-arc graphs.

\begin{proposition}\label{B1B2C3-free}
A graph $G$ admits a $\{B_1, B_2, \overrightarrow{C_3}\}$-free orientation if and only if $G$ is a proper Helly circular-arc graph.
\end{proposition}
\begin{proof}
Let $G$ be a graph that admits a $\{B_1,B_2, \overrightarrow{C_3}\}$-free orientation. By line two of Table~\ref{Tab:skrien}, we know that $G$ must be a proper circular-arc graph. Corollary 5 in \cite{linDAM2013} shows that a proper circular-arc graph is a proper Helly circular-arc graph if it contains neither the Haj\'os graph nor a $4$-wheel as an induced subgraph. It is not hard to notice that neither of those graphs admits a $\{B_1, B_2,\overrightarrow{C_3}\}$-free orientation. Thus, since $G$ is a proper circular-arc graph, $G$ must be a proper Helly circular-arc graph.

In \cite{mckeeDM2003} it is proved that a model of a proper circular-arc graph is the model of a proper Helly circular-arc graph if and only if no two or three arcs cover its circle. Consider a proper Helly circular-arc graph $G$. Let $\mathcal{A} = \{A_1, A_2, \dots, A_n\}$ be a model of $G$ in which no two or three arcs cover the circle. Moreover, we can assume that no endpoints of the arcs in $\mathcal{A}$ coincide. Let us denote by $l_i$ the anti-clockwise endpoint of $A_i$, and by $r_i$ the clockwise endpoint. We denote by $D_G$ the following orientation of $G$. Consider an edge $A_iA_j\in E_G$. By moving in a clockwise motion around the circle, we see that the endpoints of $A_i$ and $A_j$ form the sequence $[l_i, l_j,r_i,r_j]$ or $[l_j,l_i,r_j,r_i]$. We orient $A_iA_j$ from $A_i$ to $A_j$ when we see $[l_i,l_j, r_i,r_j]$; in the other case we orient it from $A_j$ to $A_i$. Bearing in mind that there are no three arcs that cover the circle, it is easy to see that $D_G$ is $\{B_1,B_2,\overrightarrow{C_3}\}$-free.
\end{proof}

Since every graph admits an acyclic orientation, every graph admits a $\overrightarrow{C_3}$-free orientation; this is not the case for $T_3$-free orientations. Recall that a graph is {\em locally bipartite} if the open neighbourhood of every vertex induces a bipartite graph.

\begin{proposition}\label{obs1:T3}
For any graph $G$ the following statements hold:
\begin{itemize}
\item if $G$ is $3$-colourable, then it admits a $T_3$-free orientation,
\item if $G$ admits a $T_3$-free orientation, then it is $K_4$-free,
\item if $G$ admits a $T_3$-free orientation, then it is locally bipartite.
\end{itemize}
\end{proposition}
\begin{proof}
Let $G$ be a graph with a proper $3$-colouring $(V_0, V_1, V_2)$. By orienting the edges of $G$ from $V_i$ to $V_{i+1}$, with indices taken modulo 3, we obtain a $T_3$-free orientation of $G$. In order to prove the second item, it suffices to notice that $K_4$ does not admit a $T_3$-free orientation. Let $D_G$ be a $T_3$-free orientation of a graph $G$.
For any vertex $x \in V_G$, the sets $N_{D_G}^+(x)$ and $N_{D_G}^-(x)$ form a partition of $N_G(x)$. Since $D_G$ is $T_3$-free, $N_{D_G}^+(x)$ and $N_{D_G}^-(x)$ are independent sets.
\end{proof}

As we will see later, the statements in the previous proposition are far from being necessary and sufficient conditions for a graph $G$ to admit a $T_3$-free orientation. For the moment, recall the well-known result of Mycielski stating that the chromatic number of triangle-free graphs is unbounded \cite{mycielskiCM1995}. Thus, there are graphs with arbitrarily large chromatic number that admit a $T_3$-free orientation. Nonetheless, for perfect graphs, the first condition of the previous proposition actually characterizes the graphs admitting a $T_3$-free orientation.

\begin{proposition}\label{perfectgraphsT3}
A perfect graph $G$ admits a $T_3$-free orientation if and only if it is $3$-colourable.
\end{proposition}
\begin{proof}
Consider a perfect graph $G$. By Proposition~\ref{obs1:T3}, if $G$ is $3$-colourable it admits a $T_3$-free orientation. On the other hand, suppose that $G$ admits a $T_3$-free orientation. By Proposition~\ref{obs1:T3}, $G$ is $K_4$-free. Since $G$ is perfect, $G$ is $3$-colourable.
\end{proof}

Since comparability graphs are perfect graphs, the following proposition stems from Proposition~\ref{perfectgraphsT3}.

\begin{proposition}\label{B3T3-free}
A graph admits a $\{B_3,T_3\}$-free orientation if and only if it is a $3$-colourable comparability graph.
\end{proposition}
\begin{proof}
If a graph $G$ admits a $\{B_3,T_3\}$-free orientation, then it is a comparability graph. Thus, $G$ is a perfect graph that admits a $T_3$-free orientation. By Proposition~\ref{perfectgraphsT3}, $G$ is a $3$-colourable comparability graph.

Now suppose that $G$ is a $3$-colourable comparability graph. Since $G$ is perfect, it is $K_4$-free. Consider the partial order of the vertices, $<$, induced by the edges of $G$. Let $X_1 = \{ x \in V_G \colon x$ is $<$-minimal$\}$, $X_3 = \{ x \in V_G \colon x$ is $<$-maximal$\}$ and $X_2 = V_G - (X_1 \cup X_3)$. It follows from the construction of the sets $X_i$ and the fact that $G$ is $K_4$-free that $X_i$ is an independent set for each $i \in \{ 1,2,3 \}$. Orient the edges from $X_1$ to $X_2$, from $X_2$ to $X_3$ and from $X_3$ to $X_1$; name this orientation $D_G$. Clearly, $D_G$ is $T_3$-free. In order to show that $D_G$ is also $B_3$-free, consider three vertices $x,y,z \in V_G$ that induce a path in $G$, with $y$ its middle vertex. Since $\{ x,y,z \}$ does not induce a triangle, it cannot happen that $x < y < z$ nor that $z < y < x$. Thus $x<y$ and $z<y$, or $y<x$ and $y<z$. Then $\{ x,y,z \}$ induces either a $B_1$ or a $B_2$ in $D_G$. We conclude that $D_G$ is a $\{ B_3,T_3 \}$-free orientation of $G$.
\end{proof}

Before proceeding to study the non-perfect graphs that admit a $T_3$-free orientation, allow us to study two very simple subclasses.

\begin{proposition}\label{B1B2T3-free}
A graph $G$ admits a $\{B_1,B_2,T_3\}$-free orientation if and only if $\Delta(G) \le 2$. Equivalently, $G$ admits a $\{ B_1,B_2,T_3 \}$-free orientation if and only if $G$ is a disjoint union of paths and cycles.
\end{proposition}
\begin{proof}
Recall that $\Delta(G) \le 2$ if and only if $G$ is a disjoint union of paths and cycles. Suppose that there is a vertex $x \in V_G$ with at least three distinct neighbours, $y,z,w$, and let $D_G$ be an orientation of $G$. Two of the three neighbours, say $y$ and $z$, are both in-neighbours or both out-neighbours of $x$ in $D_G$; by the symmetry between $B_1$ and $B_2$, we may assume they are in-neighbours. If $yz \in E_G$ then $\{ x,y,z \}$ induces a $T_3$ in $D_G$.
On the other hand, if $yz \not \in E_G$, then $\{ x,y,z \}$ induces a $B_1$ in $D_G$. Thus if $\Delta(G) \ge 3$, $G$ does not admit a $\{B_1, B_2, T_3\}$-free orientation. To conclude the proof, consider a disjoint union of paths and cycles $G$. By orienting every cycle and path of $G$ in a directed way, we obtain a $\{B_1,B_2,T_3\}$-free orientation of $G$.
\end{proof}

\begin{proposition}\label{B1B3T3-free}
A connected graph $G$ admits a $\{B_1,B_3,T_3\}$-free orientation if and only if $G$ is a star or a triangle.
\end{proposition}
\begin{proof}
It is trivial to find a $\{B_1,B_3,T_3\}$-free orientation of a star or a triangle. Recall that a connected graph $G$ is a star if and only if $G$ is $\{P_4,C_4,C_3\}$-free. Notice that neither $P_4$ nor $C_4$ admits a $\{B_1,B_3\}$-free orientation. Thus if $G$ does not contain a triangle and admits a $\{B_1,B_3,T_3\}$-free orientation, $G$ is a star. On the contrary, if $G$ contains a triangle, observe that none of the three connected supergraphs of $C_3$ on four vertices (the paw, the diamond, and $K_4$) admits a $\{B_1,B_3,T_3\}$-free orientation. Thus, if $G$ contains a triangle $C$, then $G=C$.
\end{proof}

\section{$\{ T_3 \}$-graphs} \label{sec:T3}

The following results build up to a characterization of the family of graphs that admit a $\{T_3\}$-free orientation.

\begin{proposition}\label{homduality}
Consider a set of tournaments $F$ and an $F$-graph $H$. If a graph $G$ admits a homomorphism $\varphi \colon G \to H$, then $G$ admits an $F$-free orientation.
\end{proposition}
\begin{proof}
Consider an $F$-free orientation $D_H$ of $H$. We obtain an orientation $D_G$ of $G$ in the following way: there is an arc $(x,y)$ in $D_G$ if and only if $(\varphi(x), \varphi(y))$ is an arc in $D_H$. Since $\varphi$ is a graph homomorphism, by the way we chose to orient the edges of $G$, $\varphi$ induces a digraph homomorphism $\varphi_D \colon D_G \to D_H$. Thus, every tournament $T$ in $D_G$ can be embedded in $D_H$. Since $F$ consists of tournaments and $D_H$ is an $F$-free orientation of $H$, $D_G$ is also an $F$-free orientation of $G$.
\end{proof}

If a graph $G$ admits a homomorphism to another graph $H$, we write $G \to H$; and $G \not \to H$ otherwise. If $\mathcal{F}$ is a set of graphs, we write $\mathcal{F} \not \to H$ if $G \not \to H$ for every graph $G \in \mathcal{F}$.

\begin{corollary} \label{cor:motivation}
For every set of tournaments $F$, there is a set of graphs $\mathcal{F}$ such that for any graph $G$, $G$ admits an $F$-free orientation if and only if $\mathcal{F} \not \to G$.
\end{corollary}
\begin{proof}
By Proposition~\ref{homduality}, an example of such a set is the set of graphs that do not admit an $F$-free orientation.
\end{proof}

Corollary \ref{cor:motivation} motivates the characterization we propose of $\{T_3\}$-graphs; i.e., we find a set of graphs, $\mathcal{F}$, such that a graph $G$ admits a $T_3$-free orientation if and only if $\mathcal{F} \not \to G$. First we introduce some definitions.

Consider two paths $P = x_1 \cdots x_n$ and $Q = y_1 \cdots y_m$ such that $n+m \ge 4$. If we embed $P$ and $Q$ in two distinct parallel lines on the plane, add the edges $x_1y_1$, $x_n y_m$, and triangulate the inside region of the resulting cycle in such a way that each of the new edges has one end in $P$ and the other in $Q$, we say that the resulting embedded graph $G_e$ is a {\em t-embedding} of $P$ and $Q$. Any graph $G$ isomorphic to a t-embedding of $P$ and $Q$ will be called a {\em t-join} of $P$ and $Q$.
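For a concrete example: if $P=x_1x_2$ and $Q=y_1y_2$, then the edges $x_1y_1$ and $x_2y_2$ complete the cycle $x_1x_2y_2y_1$, and either of the two possible diagonals ($x_1y_2$ or $x_2y_1$) triangulates its inside region with $2+2-2=2$ triangles; hence, up to isomorphism, the only t-join of two paths on two vertices each is the diamond, $K_4$ minus an edge.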
A graph obtained from a t-join, $G$, of two paths, $P = x_1 \cdots x_n$ and $Q = y_1 \cdots y_m$, by identifying $x_1$ with $x_n$ and $y_1$ with $y_m$, is called a {\em donut}. If we identify $x_1$ with $y_m$ and $y_1$ with $x_n$, it is called a {\em M\"obius donut}. In both cases we say that $G$ is the {\em spanning} t-join of the (M\"obius) donut; $P$ and $Q$ will be the {\em underlying} paths. Note that if one of the underlying paths has only one vertex, then the donut is a wheel. In order to avoid loops, we will not consider donuts when both of the underlying paths are on two vertices, nor M\"obius donuts when either of the initial or final vertices of $P$ ($Q$) is adjacent to all vertices of $Q$ ($P$). As a final definition, if the number of triangles in the t-join is even, we will say that the resulting donut (M\"obius donut) is an {\em even donut} ({\em even M\"obius donut}); otherwise we say it is an {\em odd donut} ({\em odd M\"obius donut}).

It is not hard to prove the following statement with an inductive argument.

\begin{remark}\label{numbertriangles}
The number of triangles in a t-join is the total number of vertices of $P$ and $Q$ minus two.
\end{remark}

Donuts and M\"obius donuts are defined as quotient graphs. The following remark might reside in the land of trivial results, but will be used in the main proof.

\begin{remark}\label{quotienthom}
Consider a homomorphism $\varphi \colon G \to H$ and a relation $R$ over $V_G$ such that if $xRy$ then $\varphi(x) = \varphi(y)$. Then $\varphi$ induces a homomorphism $\varphi' \colon G/R \to H$.
\end{remark}

Recall that $D^+ = (V^+,A^+)$ denotes the constraint digraph defined in Section~\ref{sec:Algorithm}. From the definition of $A^+$ it follows that, if $F=\{T_3\}$, then for any graph $G$ every arc of $D^+$ lies on a directed cycle of length three: for a triangle $xyz$ of $G$, the arcs $((x,y),(y,z))$, $((y,z),(z,x))$ and $((z,x),(x,y))$ all belong to $A_t$. Hence two vertices of $D^+$ are mutually reachable whenever they are joined by a path in the underlying graph, and we may think of $D^+$ as a graph. For any graph $G$, we denote by $G^+$ the constraint graph of $G$ with the set $\{T_3\}$. Recall that $G$ admits a $T_3$-free orientation if and only if $(x,y)$ and $(y,x)$ are in different connected components of $G^+$ for every edge $xy\in E_G$.

\begin{lemma}\label{donut-contradictingpath}
Let $G$ be a graph that does not admit a $\{T_3\}$-free orientation. Then there is an odd donut or an even M\"obius donut, $D$, such that $D \to G$.
\end{lemma}
\begin{proof}
Since $G$ does not admit a $T_3$-free orientation, by Theorem~\ref{thm:algorithm1} there is an edge $xy \in E_G$ such that $(x,y)$ and $(y,x)$ lie in the same connected component of $G^+$. Let $P = a_1 \cdots a_n$ be an $(x,y)(y,x)$-path in $G^+$; i.e., $a_1 = (x,y)$, $a_n = (y,x)$ and $a_ia_{i+1} \in E^+$ for $1 \le i< n$. Each vertex of $G^+$ is an orientation of an edge of $G$; denote by $t_i$ the tail of the arc $a_i$ and by $h_i$ its head. For instance, $t_1 = x = h_n$ and $h_1 = y = t_n$. Since $a_ia_{i+1} \in E^+$ for $1 \le i< n$, the set $\{t_i, h_i, t_{i+1}, h_{i+1}\}$ induces a triangle in $G$. So $|\{t_i,h_i\} \cap \{t_{i+1}, h_{i+1} \}| = 1$, and by definition of $E^+$ one of the following must hold: $t_i = h_{i+1}$ or $h_i = t_{i+1}$.

We define the function $f^+ \colon V_P \to V_G$ by $f^+(a_1) = x$ if $x \in \{ t_2,h_2 \}$, and $f^+(a_1) = y$ if $y \in \{ t_2,h_2 \}$; and for $2 \le i \le n$, $f^+(a_i) = h_i$ if $t_i = h_{i-1}$, and $f^+(a_i) = t_i$ if $h_i = t_{i-1}$. In other words, for $2 \le i \le n$, $f^+$ maps the arc $a_i = (t_i,h_i)$ to the vertex $w \in \{t_i,h_i\}$ such that $w \not \in \{ t_{i-1}, h_{i-1} \}$. Let us observe that for $i \ge 2$ and every vertex $w \in \{ t_i,h_i \}$, $w = f^+(a_j)$ for some $j \le i$. For $i = 2$ it follows from the definition of $f^+(a_1)$.
If $i>2$ and $w \ne f^+(a_i)$, by definition of $f^+$ we know that $w \in \{ t_{i-1},h_{i-1} \}$, and we conclude by induction on $i$. Now, we define the function $f^- \colon V_P \to V_G$ as $f^-(a_1) = x$ if $x \not \in \{ t_2,h_2 \}$, and $f^-(a_1) = y$ if $y \not \in \{ t_2,h_2 \}$; and for $2 \le i \le n$, $f^-(a_i) = t_i$ if $t_i = h_{i-1}$, and $f^-(a_i) = h_i$ if $h_i = t_{i-1}$. Notice that for $1\le i\le n$, $\{ t_i,h_i \} = \{ f^+(a_i), f^-(a_i) \}$, and that $f^-(a_i) \in \{ t_{i-1}, h_{i-1} \}$ for $2\le i\le n$. The following claim includes these and additional observations.

\begin{claim}\label{claimt}
For the functions $f^+$ and $f^-$, and for $1 \le i \le n$, the following hold:
\begin{enumerate}
\item $t_i = h_{i+1}$ or $h_i = t_{i+1}$, but not both, for $i<n$,
\item $f^-(a_i) \in \{t_{i-1},h_{i-1}\}$ for every $i\ge 2$,
\item if $i\ge 2$, for every vertex $w \in \{t_i, h_i\}$, $w = f^+(a_j)$ for some $j \le i$,
\item $f^+(a_i)f^-(a_i), f^+(a_i)f^+(a_{i-1}), f^+(a_i)f^-(a_{i-1})\in E_G$ for $i\ge 2$, and
\item $f^-(a_i) = f^-(a_{i-1})$ and $f^-(a_{i-1}) f^+(a_i)\in E_G$, or $f^-(a_i) = f^+(a_{i-1})$ and $f^-(a_{i-1})f^-(a_i) \in E_G$.
\end{enumerate}
\end{claim}

Since $f^-(a_i) \ne f^+(a_i)$, by Claim~\ref{claimt}.3, for every $i \ge 2$ the set $\{ j \colon \ j< i, f^+(a_j) = f^-(a_i) \}$ is non-empty, so we may define $k(i) = \max\{ j \colon \ j< i, f^+(a_j) = f^-(a_i)\}$. Note that if $k(i)< i-1$ then $f^+(a_{i-1}) \ne f^-(a_i)$, so by Claim~\ref{claimt}.5, $f^-(a_{i-1}) = f^-(a_i)$, hence $k(i-1) = k(i)$. With a backward induction argument, if $k(i) < j\le i$ then $k(i)= k(j)$.

We define the function $c \colon \{ 0, \dots, n \} \to \mathbb{Z}_2$ recursively: $c(0) = 0$, $c(1)=1$ and $c(i) = c(k(i))+1$. For an integer $i \in \{2, \dots, n\}$, we define its $0${\em -predecessor}, $p_0(i)$, as $\max \{ j \colon \ j < i, c(j) = 0\}$; analogously we define its $1$-{\em predecessor}, $p_1(i)$. The following claim follows from the definitions of $c$ and $k(i)$.

\begin{claim}\label{claimt2}
For $i \ge 2$, if $c(i) = 0$ ($c(i) = 1$) then the following statements hold:
\begin{itemize}
\item $p_1(i) = k(i)$ ($p_0(i) = k(i)$),
\item if $k(i)<i-1$, then the $0$-predecessor of $i$, $p_0(i)$, is $i-1$ (if $k(i)<i-1$, then the $1$-predecessor of $i$, $p_1(i)$, is $i-1$),
\item on the other hand, if $k(i) = i-1$ then $p_0(i) =k(k(i)) = k(i-1)$ (if $k(i) = i-1$ then $p_1(i) = k(k(i)) = k(i-1)$), and
\item $p_0(2) = 0$, $p_1(2) = 1$ and, for $i>2$, $p_0(i), p_1(i)\ge 1$.
\end{itemize}
\end{claim}

We proceed to construct a graph $D$ with vertex set $V_D = \{0, \dots, n\}$, whose edge set $E_D$ is defined recursively; Figure~\ref{Fig:donut} shows an example of the construction of $D$. First, set $E_1 = \{01\}$, and $E_i = E_{i-1} \cup \{ ip_0(i),ip_1(i) \}$ for $2\le i\le n$; finally, $E_D = E_n$. By construction of $E_D$ it is clear that $c^{-1}(0)$ induces a path $Q_0 = x_1 \cdots x_m$, where $x_1 = 0$ and $x_i$ is the $0$-predecessor of $x_{i+1}$, and that $c^{-1}(1)$ induces a path $Q_1 = y_1 \cdots y_s$, where $y_1 = 1$ and $y_i$ is the $1$-predecessor of $y_{i+1}$. Since every vertex is adjacent to its $0$- and $1$-predecessors, proceeding by induction we can notice that $D$ is a t-join of $Q_0$ and $Q_1$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[every circle node/.style ={circle,draw,minimum size= 2pt,inner sep=2.5pt, outer sep=0.5pt},every rectangle node/.style ={}]
\begin{scope}[xshift=2cm, scale=0.7]
\node [circle] (1) at (-3,0)[label={south:$0$}]{};
\node [circle] (2) at (-3,2)[label={north:$1$}]{};
\node [circle] (3) at (-1,2)[label={north:$2$}]{};
\node [circle] (4) at (-1,0)[label={south:$3$}]{};
\node [circle] (5) at (1,0)[label={south:$4$}]{};
\node [circle] (6) at (1,2)[label={north:$5$}]{};
\node [circle] (7) at (3,0)[label={south:$6$}]{};
\foreach \from/\to in {1/2,2/3,3/1,1/4,3/4,4/5,3/5,3/6,5/6,6/7,5/7}
\draw [-, shorten <=1pt, shorten >=1pt, >=stealth, line width=.7pt] (\from) to (\to);
\end{scope}
\end{tikzpicture}
\caption{The construction of the graph $D$ up to $E_6$, where $k(2)= 0$, $k(3) = 2$, $k(4) = 2$, $k(5)= 4$ and $k(6) = 5$. Thus, by definition of $c$, we have $0 = c(0)=c(3)=c(4)=c(6)$ and $1 = c(1) = c(2) = c(5)$. Then, $p_0(2) = 0$, $p_1(2) = 1$, $p_0(3) = 0$, $p_1(3) = 2$, $p_0(4) = 3$, $p_1(4) = 2$, $p_0(5) = 4$, $p_1(5) = 2$, and $p_0(6) = 4$, $p_1(6) = 5$. So $E_1 = \{01\}$, $E_2 = \{01, 20, 21 \}$, $E_3 = E_2 \cup \{ 30, 32 \}$, $E_4 = E_3 \cup \{ 43,42 \}$, $E_5 = E_4 \cup \{ 54,52 \}$ and $E_6 = E_5 \cup \{ 64,65 \}$.}
\label{Fig:donut}
\end{center}
\end{figure}

Consider the function $\varphi \colon V_D \to V_G$ defined as follows: $\varphi(0) = f^-(a_1)$ and $\varphi(i) = f^+(a_i)$ for $1 \le i \le n$. By construction of $E_D$, in order to prove that $\varphi$ is a homomorphism, it suffices to show that $\varphi(0) \varphi(1) \in E_G$ and that, for $i \ge 2$, $\varphi(i)\varphi(p_0(i)), \varphi(i)\varphi(p_1(i)) \in E_G$. Clearly, by Claim~\ref{claimt}.4, for every $1 \le i \le n$, $f^-(a_i)f^+(a_i) \in E_G$, so $\varphi(0)\varphi(1) \in E_G$.

Let $i \ge 2$ and suppose that $c(i)= 0$. Let $j=p_1(i)$ and $l = p_0(i)$. By Claim~\ref{claimt2}, $j = k(i)$, and by definition of $k(i)$, $f^+(a_j) = f^-(a_i)$, so $f^+(a_j)f^+(a_i) \in E_G$; therefore $\varphi(p_1(i))\varphi(i) \in E_G$. Suppose that $k(i)< i-1$; then by Claim~\ref{claimt2}, $l = i-1$, and by Claim~\ref{claimt}.4, $f^+(a_i)f^+(a_{i-1}) \in E_G$, thus $f^+(a_i)f^+(a_l) \in E_G$. Otherwise $k(i) = i-1$, and then $l = p_0(i) = k(k(i)) = k(i-1)$ (third item of Claim~\ref{claimt2}). By definition of $k(i-1)$, $f^+(a_l) = f^+(a_{k(i-1)}) = f^-(a_{i-1})$. Hence, using Claim~\ref{claimt}.4, we know that $f^+(a_i)f^-(a_{i-1}) \in E_G$, so by the last equality, $f^+(a_i)f^+(a_l) \in E_G$; therefore $\varphi(i)\varphi(p_0(i)) \in E_G$. So for every $i \ge 2$ such that $c(i)=0$, $\varphi(i)\varphi(p_1(i)),\varphi(i)\varphi(p_0(i)) \in E_G$. The case when $c(i) = 1$ follows analogously. Notice that we were assuming that $\varphi(p_0(i)) = f^+(a_{p_0(i)})$ and $\varphi(p_1(i)) = f^+(a_{p_1(i)})$, which holds whenever $p_0(i),p_1(i)\ge 1$ (fourth statement of Claim~\ref{claimt2}); in the remaining case, $p_0(2)=0$, the required edge $\varphi(2)\varphi(0) = f^+(a_2)f^-(a_1)$ belongs to $E_G$ directly by Claim~\ref{claimt}.4. So we conclude that $\varphi \colon D\to G$ is a homomorphism.

At this point, it is not hard to notice that $\{\varphi(0), \varphi(1)\} = \{ x,y \} = \{ \varphi(n), \varphi(k(n)) \}$. Thus $\varphi(0) = \varphi(n)$ and $\varphi(1) = \varphi(k(n))$, or $\varphi(0) = \varphi(k(n))$ and $\varphi(1) = \varphi(n)$. Consider the relation $R$ on $V_D$ that identifies every vertex with itself, $n$ with $0$ and $k(n)$ with $1$ if $\varphi(n) = \varphi(0)$; and $n$ with $1$ and $k(n)$ with $0$ otherwise.
Since $0$ is the first vertex of the path $Q_0$, $1$ is the first vertex of $Q_1$, $n$ is the last vertex of the path $Q_{c(n)}$, and $k(n)$ is the last one of $Q_{c(k(n))}$ (note that $c(n) = c(k(n))+1$), the quotient $D/R$ is either a donut or a M\"obius donut. By Remark~\ref{quotienthom} and by definition of $R$, $\varphi$ induces a homomorphism $\varphi \colon D/R \to G$. So in order to conclude the proof we must show that $D/R$ is either an even M\"obius donut or an odd donut. By Remark~\ref{numbertriangles}, $D/R$ is an even (M\"obius) donut if and only if $|V_D|$ is even, so $D/R$ is an even (M\"obius) donut if and only if $n+1$ is even.

Recall that $a_i = (t_i,h_i)$ and $f^+(a_i) \in \{ t_i,h_i \}$. Consider the function $c' \colon \{ a_1,\dots,a_n \} \to \mathbb{Z}_2$ defined by $c'(a_i) = 0$ if $c(i) = 0$ and $f^+(a_i) = h_i$, or if $c(i) = 1$ and $f^+(a_i) =t_i$; otherwise $c'(a_i) = 1$.

First, we will show that $c'(a_i) = c'(a_{i-1})+1$ for $2 \le i \le n$. We will prove the case when $c'(a_{i-1}) = 0$, $c(i-1) = 0$ and $f^+(a_{i-1}) = h_{i-1}$; the rest of the cases follow in a very similar manner. First consider the case when $k(i) = i-1$. In this case $f^-(a_i) = f^+(a_{i-1})$, and since $c(i) = c(k(i))+1$ and $c(k(i)) = c(i-1) = 0$, we get $c(i) = 1$. Since $f^-(a_i) = f^+(a_{i-1})$ and $f^+(a_{i-1}) = h_{i-1}$, by Claim~\ref{claimt}.5, $f^-(a_i) = t_i$, so $f^+(a_i) = h_i$. Therefore $c'(a_i) = 1 = c'(a_{i-1})+1$. If $k(i)<i-1$, then $k(i-1) = k(i)$ (observed in the paragraph after Claim~\ref{claimt}). Since $c(i-1) = 0$, by definition of $c$, $c(k(i-1)) = 1$; but $k(i) = k(i-1)$, so $c(i) = 1+1 = 0$. From the choice of $k(i)$ it follows that $f^-(a_i) = f^+(a_{k(i)}) = f^+(a_{k(i-1)}) = f^-(a_{i-1})$. We are considering the case when $f^+(a_{i-1}) = h_{i-1}$, so $f^-( a_{i-1}) = t_{i-1}$, and since $f^-(a_i) = f^-( a_{i-1})$, by Claim~\ref{claimt}.1, $f^-(a_i) = h_i$, so $f^+(a_i) = t_i$. We already knew that $c(i) = 0$, so $c'(a_i) = 1$. Therefore $c'(a_i) = c'(a_{i-1})+1$ for $2\le i\le n$. Thus, $n$ is odd if and only if $c'(a_n) = c'(a_1)$; equivalently, $n+1$ is even if and only if $c'(a_n) = c'(a_1)$.

Recall that $t_1 = x = h_n$, $h_1 = y = t_n$ and $c(1) = 1$, so $c'(a_1) = 1$ if and only if $f^+(a_1) = y$. Moreover, $c'(a_n) = 1$ if and only if $f^+(a_n) = h_n= x$ and $c(n) = 1$, or $f^+(a_n) = y$ and $c(n) = 0$. So $c'(a_1) = c'(a_n)$ if and only if one of the following holds:
\begin{itemize}
\item $c(1) = 1$, $f^+(a_1) = y$, $f^+(a_n) = x$ and $c(n) = 1$,
\item $c(1) = 1$, $f^+(a_1) = y$, $f^+(a_n) = y$ and $c(n) = 0$,
\item $c(1) = 1$, $f^+(a_1) = x$, $f^+(a_n) = x$ and $c(n) = 0$, or
\item $c(1) = 1$, $f^+(a_1) = x$, $f^+(a_n) = y$ and $c(n) = 1$.
\end{itemize}
Thus if $c'(a_1) = c'(a_n)$, then $D/R$ is a M\"obius donut. It is not hard to notice that if none of the four items holds, then $D/R$ is a donut. So if $D/R$ is a M\"obius donut then $c'(a_1) = c'(a_n)$, and if it is a donut then $c'(a_1) = c'(a_n)+1$. Therefore if $D/R$ is a M\"obius donut then it is an even M\"obius donut, and if it is a donut then it is an odd donut. We already proved that there is a homomorphism $\varphi \colon D/R \to G$, and this concludes the proof of the lemma.
\end{proof}

We are ready to state our main result.

\begin{theorem}\label{T3-free}
Let $\mathcal{F}$ be the set of all odd donuts and even M\"obius donuts. Then, a graph $G$ admits a $T_3$-free orientation if and only if $\mathcal{F} \not \to G$.
\end{theorem}
\begin{proof}
It is not hard to notice that neither odd donuts nor even M\"obius donuts admit a $T_3$-free orientation. Thus, by Proposition~\ref{homduality}, if an odd donut or an even M\"obius donut maps to a graph $G$, then $G$ does not admit a $T_3$-free orientation. On the other hand, suppose $G$ does not admit a $T_3$-free orientation. By Lemma~\ref{donut-contradictingpath}, $G$ contains a homomorphic image of an odd donut or of an even M\"obius donut.
\end{proof}

\section{Conclusions} \label{sec:conclusions}

We present a summary of our results in Table~\ref{Tab:summary}, as an extension of Table~\ref{Tab:skrien}.

\begin{table}
\begin{center}
\begin{tabular}{| c | l | l |}
\hline
Forbidden orientations & Graph family & Reference \\
\hline
$B_1$ & $1$-perfectly orientable graphs. & Open problem. \\
\hline
$B_2$ & $1$-perfectly orientable graphs. & Open problem.\\
\hline
$B_3$ & Comparability graphs. & Skrien \cite{skrienJGT6}. \\
\hline
$\overrightarrow{C_3}$ & All graphs. & Trivial. \\
\hline
$T_3$ & Odd-donut and even-M\"obius-donut hom.-free graphs. & Theorem~\ref{T3-free}.\\
\hline
$B_1,B_2$ & Proper circular-arc graphs. & Skrien \cite{skrienJGT6}.\\
\hline
$B_1,B_3$ & Nested interval graphs. & Skrien \cite{skrienJGT6}.\\
\hline
$B_1,\overrightarrow{C_3}$ & {\em Transitive-perfectly orientable graphs}. & Open problem. \\
\hline
$B_1,T_3$ & Unicyclic graphs. & Proposition~\ref{B1T3-free}.\\
\hline
$B_2,B_3$ & Nested interval graphs. & Skrien \cite{skrienJGT6}.\\
\hline
$B_2,\overrightarrow{C_3}$ & {\em Transitive-perfectly orientable graphs}. & Open problem. \\
\hline
$B_2,T_3$ & Unicyclic graphs. & Proposition~\ref{B1T3-free}.\\
\hline
$B_3,\overrightarrow{C_3}$ & Comparability graphs. & Definition.\\
\hline
$B_3,T_3$ & $3$-colourable comparability graphs. & Proposition~\ref{B3T3-free}.\\
\hline
$\overrightarrow{C_3},T_3$ & Triangle-free graphs. & Trivial. \\
\hline
$B_1,B_2,B_3$ & Complete graphs. & Skrien \cite{skrienJGT6}. \\
\hline
$B_1,B_2,\overrightarrow{C_3}$ & Proper Helly circular-arc graphs. & Proposition~\ref{B1B2C3-free}.\\
\hline
$B_1,B_2,T_3$ & $\Delta(G)\le 2$. & Proposition~\ref{B1B2T3-free}.\\
\hline
$B_1,B_3,\overrightarrow{C_3}$ & Nested interval graphs. & Direct implication of results in \cite{skrienJGT6}.\\
\hline
$B_1,B_3,T_3$ & Triangles and stars. & Proposition~\ref{B1B3T3-free}.\\
\hline
$B_1,\overrightarrow{C_3},T_3$ & Triangle-free unicyclic graphs. & Corollary~\ref{B1C3T3-free}.\\
\hline
$B_2,B_3,\overrightarrow{C_3}$ & Nested interval graphs. & Direct implication of results in \cite{skrienJGT6}.\\
\hline
$B_2,B_3,T_3$ & Triangles and stars. & Proposition~\ref{B1B3T3-free}.\\
\hline
$B_2,\overrightarrow{C_3},T_3$ & Triangle-free unicyclic graphs. & Corollary~\ref{B1C3T3-free}.\\
\hline
$B_3,\overrightarrow{C_3},T_3$ & Bipartite graphs. & Proposition~\ref{B3C3T3-free}.\\
\hline
$B_1,B_2,B_3,\overrightarrow{C_3}$ & Complete graphs. & Trivial.\\
\hline
$B_1,B_2,B_3,T_3$ & $K_3$, $K_2$ and $K_1$. & Trivial.\\
\hline
$F'\cup \{\overrightarrow{C_3},T_3\}$ & Triangle-free $F'$-graphs. & Straightforward observation.\\
\hline
$F'\cup \{K_1+\overrightarrow{K_2}\}$ & Complete-multipartite $F'$-graphs. & Straightforward observation.\\
\hline
\end{tabular}
\caption{Graph classes characterized by forbidden orientations of order $3$.}
\label{Tab:summary}
\end{center}
\end{table}

Note that for any hereditary property $P$ closed under homomorphic pre-images (that is, if $G \in P$ and $H \to G$, then $H \in P$), there is a characterization analogous to Theorem~\ref{T3-free}.
In particular, the class of $\{\overrightarrow{T_k}\}$-graphs is closed under homomorphic pre-images. Finding the minimum integer $n(k)$ such that every orientation of $K_{n(k)}$ contains a copy of $\overrightarrow{T_k}$ is a well-known hard problem. Thus, generalizing Theorem~\ref{T3-free} to larger transitive tournaments could be hard to do. But, if we knew the value of $n(k)$, could we construct a set of forbidden homomorphic pre-images?

The following result is a straightforward generalization of Proposition~\ref{perfectgraphsT3}.

\begin{proposition}
Let $P$ be a non-empty hereditary property closed under homomorphic pre-images. Then, the following conditions are equivalent:
\begin{itemize}
\item there is a perfect graph $G$ that does not belong to $P$,
\item there is a graph $G$ that does not belong to $P$,
\item there is a positive integer $k$ such that a perfect graph $G$ belongs to $P$ if and only if $G$ is $k$-colourable,
\item there is a positive integer $r>1$ such that a perfect graph $G$ belongs to $P$ if and only if $G$ is $K_r$-free, and
\item there is a tournament $T$ such that a perfect graph $G$ belongs to $P$ if and only if $G$ is a $\{ \overrightarrow{C_3},T \}$-graph.
\end{itemize}
\end{proposition}
\begin{proof}
We will only show the equivalence between the last two items. To do so, it suffices to notice that a graph is $K_r$-free if and only if it admits a $\{\overrightarrow{C_3}, \overrightarrow{T_r}\}$-free orientation.
\end{proof}

The proof of Lemma~\ref{donut-contradictingpath} is technical and tedious to read; finding a simpler proof remains an open problem. Nonetheless, we believe it is important to point out the following: the master algorithm is a certifying one, i.e., it gives a graph $G$ an $F$-free orientation if it has one, or it finds an obstruction to $G$ being an $F$-graph; these obstructions, however, live in the constraint digraph $D^+$, not in $G$. Our proof yields a polynomial time extension of the master algorithm (in the case when $F = \{T_3\}$) that outputs an obstruction living in $G$: namely, a forbidden homomorphic pre-image $W$ and a homomorphism $\varphi \colon W \to G$. Several of the reductions to $2$-SAT are examples of certifying algorithms that exhibit an obstruction that does not belong to the graph $G$. The algorithm in \cite{hellESA2014} also exhibits obstructions that do not belong to $G$. A reverse-engineering technique similar to the one used in the proof of Lemma~\ref{donut-contradictingpath} could work to find obstructions in $G$.

We extended Skrien's results by finding characterizations for almost all sets of $F$-graphs where $F$ consists of oriented graphs on three vertices. We say that a graph $G$ is a {\em transitive-perfectly orientable graph} if it admits a $\{B_1,\overrightarrow{C_3}\}$-free orientation. Finding nice characterizations of perfectly orientable graphs and transitive-perfectly orientable graphs remain open problems.
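To complement Theorem~\ref{T3-free}, the smallest obstructions can also be checked mechanically. The following self-contained Python sketch (our illustration, not part of the paper) exhaustively tests all orientations of a small graph for a transitive triangle; it confirms that the wheel with a five-cycle rim, which is an odd donut, admits no $T_3$-free orientation, while the $3$-colourable wheel with a four-cycle rim does:
\begin{verbatim}
from itertools import product, combinations, permutations

def has_T3_free_orientation(vertices, edges):
    # Brute force over all 2^|E| orientations of `edges` (list of pairs).
    for signs in product((0, 1), repeat=len(edges)):
        arcs = {(v, u) if s else (u, v)
                for (u, v), s in zip(edges, signs)}
        transitive = any(               # look for a transitive triangle
            (a, b) in arcs and (b, c) in arcs and (a, c) in arcs
            for triple in combinations(vertices, 3)
            for a, b, c in permutations(triple))
        if not transitive:
            return True
    return False

rim5 = [(i, (i + 1) % 5) for i in range(5)]
wheel5 = rim5 + [(5, i) for i in range(5)]   # odd donut (5 triangles)
assert not has_T3_free_orientation(range(6), wheel5)

rim4 = [(i, (i + 1) % 4) for i in range(4)]
wheel4 = rim4 + [(4, i) for i in range(4)]   # 3-colourable, a T3-graph
assert has_T3_free_orientation(range(5), wheel4)
\end{verbatim}
The negative answer for the five-rim wheel can also be seen by hand: every hub triangle must be oriented cyclically, which forces the rim orientation to alternate around the cycle, and this is impossible on an odd rim.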
{ "timestamp": "2020-03-13T01:06:23", "yymm": "2003", "arxiv_id": "2003.05606", "language": "en", "url": "https://arxiv.org/abs/2003.05606", "abstract": "Given a set $F$ of oriented graphs, a graph $G$ is an $F$-graph if it admits an $F$-free orientation. Building on previous work by Bang-Jensen and Urrutia, we propose a master algorithm that determines if a graph admits an $F$-free orientation when $F$ is a subset of the orientations of $P_3$ and the transitive triangle.We extend previous results of Skrien by studying the class of $F$-graphs, when $F$ is any set of oriented graphs of order three. Structural characterizations for all such sets are provided, except for the so-called perfectly-orientable graphs and one of its subclasses, which remain as open problems.", "subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)", "title": "Orientations without forbidden patterns on three vertices", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759649262344, "lm_q2_score": 0.7217432182679956, "lm_q1q2_score": 0.707940575647586 }
https://arxiv.org/abs/2212.06738
Absolute integral closures of commutative rings
Absolute integral closures of general commutative unital rings are explored. All rings admit absolute integral closures, but in general they are not unique. Among the reduced rings with finitely many minimal prime ideals, finite products of domains are the only rings for which they are unique. Arguments using model theory suggest that the same holds for all infinite rings that are finite products of connected rings. Universal absolute integral closures, which contain every aic of a given ring, are shown to exist for certain subrings of products of domains.
\section{Introduction}\label{sec:intro} The absolute integral closure of an integral domain is a well-known concept. It has been much studied, and there are many applications. See \cite{H} for an overview.\\ A definition of an absolute integral closure for a general commutative ring was proposed recently by Martin Brandenburg, in a question he posted on MathOverflow regarding the uniqueness of such closures. He showed that every ring has an absolute integral closure. We will present the definition presently.\\ We work in the category $\xc{CRg}$ of commutative unital rings. Given a ring $R$, its set of minimal prime ideals is denoted by $\xr{min}(R)$, the set of regular elements by $\xr{reg}(R)$, the set of nilpotents by $\xr{nil}(R)$, and the group of units by $R^\times$. We write $\xr{Q}(R)$ for $\xr{reg}(R)^{-1}R$, the total ring of fractions of $R$. When $R$ is an integral domain, this is just the field of fractions. Recall that an $R\in\xc{CRg}$ is called \textit{connected} if $\xr{spec}(R)$ is connected topologically, that is, if $R$ is not the direct product of two nontrivial rings, that is, if $R$ has no idempotents other than the trivial ones, $0$ and $1$.\\ A ring extension $R\subseteq S$ is called \textit{tight} if non-zero ideals of $S$ contract to non-zero ideals of $R$. It suffices, of course, that this holds for all non-zero principal ideals of $S$. Like integrality, tightness is a transitive property: when $R\subseteq S\subseteq T$ is a chain of two tight ring extensions, $T$ is tight over $R$.\\ A ring $S$ is \textit{absolutely integrally closed}, or \textit{ai closed} for short, if every monic $f\in S[X]$ factorizes as a product of monic linear polynomials in $S[X]$. Uniqueness of the factorization (up to a permutation of the factors) is not required.\\ If such an $S$ is moreover integral and tight over $R$, it is called an \textit{absolute integral closure} or \textit{aic} of $R$. Such $S$ are not uniquely determined up to isomorphism, as we will see. To see they exist, note that every ring $R$ has an ai closed integral extension $S$ (\cite{ST}). If we take an ideal $J$ of $S$ that is maximal wrt.\ the property that $J\cap R=0$, then $S/J$ is an aic of $R$.\\ The purpose of these notes is to investigate under what conditions on $R$ aics \textit{are} unique up to an $R$-\textit{isomorphism} (an isomorphism in the category $_R\xc{CRg}$ of $R$-algebras). For rings $R$ that have a unique aic, we will denote it by $R^{\,+}$\!, the usual notation for the absolute integral closure of $R$ when $R$ is a domain. When $R$ \textit{is} a domain, $R^{\,+}$ is its unique aic in the sense used here, see Prop.~\ref{prop:1} below.\\ Some of the basics concerning aics are set out in section \ref{sec:first}. We show that every subring of an arbitrary product of domains, over which the product is integral and tight, possesses a unique \textit{universal} aic, that is, an aic into which every aic of the ring can be embedded (Prop.~\ref{prop:2}). And we show that aics survive some forms of base change: localization and dividing out a prime ideal.\\ Section \ref{sec:redux} discusses reduced rings that have only finitely many minimal prime ideals. Such a ring is a subring of a finite product of domains, and therefore it has a universal aic $T$\!. Th.~\ref{thm:1} provides a criterion to determine when a given aic of the ring is isomorphic to $T$\!. For rings of this type, $R^{\,+}$\! exists iff $R$ is a finite product of domains. This is Th.~\ref{thm:2} in section \ref{sec:sample}.
Actual counterexamples to aic uniqueness are also given and analyzed in that section.\\ Model theoretic methods are used in section \ref{sec:mt} to produce aics, and it is argued that infinite rings, except for finite products of domains, possess non-isomorphic aics. Prop.~\ref{prop:8} shows this, under the assumption that certain first-order theories related to the ring are well-behaved in a certain respect. The assumption is more or less self-evident, but for obvious reasons it is difficult to rigorously demonstrate that no first-order formula can exist that implies certain, infinitely many, others, to be specified in the section. \section{First properties}\label{sec:first} First, we establish that the new definition of absolute integral closure agrees with the existing one for domains. \begin{prop}\label{prop:1}If $R$ is a domain, its unique aic is its $R$-plus, that is, the integral closure of $R$ in the algebraic closure $\overline{\xr{Q}(R)}$ of $\xr{Q}(R)$. \begin{proof} If $S$ is an aic of $R$, by integrality, lying-over holds for $S/R$, so there is a prime ideal $\xf{q}$ of $S$ with $\xf{q}\cap R=0$. As $S$ is tight over $R$, we have $\xf{q}=0$, so $S$ is a domain, and $\xr{Q}(R)\subseteq\xr{Q}(S)$ is an algebraic extension of fields. Hence $S$ can be embedded in $\overline{\xr{Q}(R)}$, and since $S$ is ai closed and integral over $R$, its image in $\overline{\xr{Q}(R)}$ must simply coincide with $R^{\,+}$\!. \end{proof} \end{prop} Next, for a finite set $\{R_a\mid a\in A\}$ of domains with product $P=\prod_{a\in A}R_a$, the product $P^{\,+}=\prod_{a\in A}R^{\,+}_a$\! is the unique aic of $P$. This is easily seen, since, as is usual with products, the product factors are completely unaware of each other, and properties only have to be verified in each component separately. It is as if one studies multiple independent rings simultaneously.\\ A case in point is the trivial ring $P=1$, which is its own aic, and is, in this sense, on par with the algebraically closed fields. The trivial ring is sometimes called the \textit{zero} ring and written as $0$, which suggests that it is the initial object of the category $\xc{CRg}$, the one that maps to every other ring. In fact it is the terminal object $1$, the one every other ring maps \textit{to}; the name $0$ should be reserved for the true initial object, which is $\xb{Z}$ - but this is an aside. At any rate, being the terminal object of $\xc{CRg}$, the ring $1$ is the direct product of the empty family of domains.\\ When $A$ is infinite, $P'\coloneqq\prod_{a\in A}R^{\,+}_a$ is not even integral over $P$, although every aic $S$ of $P$ still embeds in it. For let $e_a$ be the idempotent of $P$ that has $1$ as its $a$-th coordinate and zero elsewhere. Then the ideal $S_a=e_aS$ of $S$ is actually a ring. It is the product of infinitely many trivial rings and a single non-trivial one. Clearly, $S$ embeds in $S'\!=\prod_{a\in A}S_a$ via $s\mapsto(e_as)_{a\in A}$. The $S_a$ are aics of the $R_a$, so they can only be the rings $R^{\,+}_a$\!. We insert a lemma. \begin{lem}\label{lem:1}If $R\subseteq T$ in $\xc{CRg}$ and $T$ is absolutely integrally closed, the integral closure $S$ of $R$ in $T$ is also ai closed. \begin{proof} If $f\in S[X]$ is monic of degree $n$, it factors as $\prod_{i=1}^n(X-t_i)$ in $T[X]$. Then the $t_i$ are integral over the subring $R[c_{n-1},\cdots,c_{0}]$ of $S$ generated by the coefficients $c_i$ of $f$. But the latter are integral over $R$, hence so are the $t_i$. So $t_i\in S$ for all $i$, and $S$ is ai closed.
\end{proof} \end{lem} For infinite $A$, write $\overline{P}$ for the integral closure of $P$ in $P'$\!, which is $\{p'\in P'\mid\exists_{n\in\xb{N}}\forall_{a\in A}\exists_{f_a\in R_a[X]}(f_a\text{ monic}\,\wedge\,\xr{deg}(f_a)\le n\,\wedge\,f_a(p_a')=0)\}$. It is an aic of $P$. Indeed, $\overline{P}$ is ai closed by Lemma~\ref{lem:1}. And if $0\ne p\in\overline{P}$, take an $a\in A$ such that $e_ap\ne0$. Then $p$ is integral over $P$, so its $a$-th coordinate is integral over $R_a$, hence satisfies a monic equation with coefficients in $R_a$ that has a non-zero constant term $b_a$ (cancel a factor $X$ if necessary, which is possible because $R^{\,+}_a$ is a domain). And then $b=(b_x)_{x\in A}\in e_ap\overline{P}\cap P\subseteq p\overline{P}\cap P$ if we put $b_x=0$ for $x\in A-\{a\}$.\\ When $S$ is an aic of $P$ then, as above, $S\rightarrowtail S'\!=\prod_{a\in A}S_a\cong\prod_{a\in A}R^{\,+}_a=P'$\! is a morphism of $P$-algebras, so, since $S$ is integral over $P$, the image is contained in $\overline{P}$, and $\overline{P}$ is the universal aic of $P$. The image of $S$ may be a proper subset of $\overline{P}$. For, given $p'\in\overline{P}$, and $n\in\xb{N}$ plus $f_a\in R_a[X]$ monic of degree $\le n$ for all $a\in A$, such that $\forall_{a\in A}f_a(p_a')=0$, we can multiply each $f_a$ by a suitable power of $X$, and assume that $\xr{deg}(f_a)=n$ for all $a$. Then $f=(f_a)_{a\in A}\in P[X]$ is monic of degree $n$. But, in case every $f_a$ is separable, $f$ has $n^{|A|}$ roots in $\overline{P}$, while all we know about $S$ is that it is ai closed, so only $n$ zeroes of $f$ are guaranteed there.\\ Now let $R$ be a subring of $P$ (with $A$ still infinite). The integral closure $\overline{R}$ of $R$ in $\overline{P}$ is ai closed (Lemma~\ref{lem:1}). But it is not necessarily tight over $R$. It is when $P$ is integral and tight over $R$, for then $\overline{R}=\overline{P}$ is tight over $P$, so over $R$. And, in that case, all the $e_a$ are in $\overline{R}$, for they are integral over $R$. But that does not mean that $\overline{R}=\overline{P}$ (it would for finite $A$). Since the $e_a$ are in $\overline{R}$, such $\overline{R}$ have zero-divisors. Finite products are a special case of infinite products, for we can always throw in infinitely many trivial factors.\\ We collect these observations. \begin{prop}\label{prop:2}Let $P=\prod_{a\in A}R_a$ be a product of integral domains. \begin{enumerate}[label=\normalfont{(}\normalfont\arabic*)] \item If the index set $A$ is finite, $P^{\,+}$ exists, and it is the product of the $R^{\,+}_a$\!. \item For infinite $A$, $P$ has a unique universal aic. \item If $R$ is a subring of $P$ over which $P$ is integral and tight, then $R$ has a unique universal aic. It is the integral closure of $R$ in $\prod_{a\in A}R^{\,+}_a$\!.$\hfill\square$ \end{enumerate} \end{prop} Note that universal absolute integral closures, insofar as they exist, are not necessarily unique up to isomorphism. Two universal aics of course embed into one another. But unless their construction is based on a reduction to a collection of domains $R$ and their $R^{\,+}$ (as in the present setup), they may have any number of roots for a given monic polynomial $f$ over the base ring. The cardinality of the zero set per $f$ must be the same in both aics, but that doesn't imply global isomorphism.
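As a concrete illustration of item (3) - an example inserted here for convenience, which reappears in much greater detail in section \ref{sec:sample} - let $k$ be a field and $R=k[X,Y]/(XY)$. Reduction modulo the two minimal primes $(X)$ and $(Y)$ embeds $R$ into the product of domains $P=k[Y]\times k[X]$. This extension is integral: any $(f,g)\in P$ is a zero of the monic polynomial $(Z-u)(Z-v)\in R[Z]$, where $u,v\in R$ are the classes of $f(Y)$ and $g(X)$. And it is tight: if $(f,g)\ne0$, say with $f\ne0$, then $Y\cdot(f,g)=(Yf,0)$ is the image of the non-zero element $Yf(Y)$ of $R$, so $(f,g)P\cap R\ne0$. Hence, by (3), $R$ possesses a unique universal aic, namely the integral closure of $R$ in $k[Y]^+\times k[X]^+$.

Returning to the general case, the following two propositions, known for domains (\cite{H}), remain valid. The proof of the first one is immediate.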
\begin{prop}\label{prop:3}If $R\subseteq S\subseteq T$ and $T$ is an aic of $R$, then $T$ is an aic of $S$.$\hfill\square$ \end{prop} \begin{prop}\label{prop:4}If $S$ is an aic of $R$ and $F\subseteq\xr{reg}(R)$ is a monoid, $F^{-1}S$ is an aic of $F^{-1}R$. And $F\subseteq\xr{reg}(S)$. If $S=R^{\,+}$, one also has $F^{-1}S=(F^{-1}R)^+$. \begin{proof} The first statement is straightforward. For the second one, if $fs=0$ for an $f\in F$ and $0\ne s\in S$, by tightness $ss'\!=r\in R-\{0\}$ for some $s'$\!. But then $fr=0$, contradicting $f\in\xr{reg}(R)$. For the third statement, if $S=R^{\,+}$ and $T$ is an aic of $F^{-1}R$, then since $R\subseteq F^{-1}R$, the integral closure $Z$ of $R$ in $T$ is ai closed by Lemma~\ref{lem:1}. If $0\ne z\in Z$ and $r/f\in zT\cap F^{-1}R$ for some $f\in F$ and $0\ne r\in R$, say $r/f=zt$, then, because $t$ is integral over $F^{-1}R$ - say $t^2+(x/f')t+x'/f'\!=0$ in $T$\!, with $x,x'\in R$ and $f'\in F$, keeping the degree at two for simplicity - we find $f't\in Z$ (multiply the equation by $f'^2$), hence $0\ne rf'\!=ztff'\in zZ\cap R$. So $Z$ is tight over $R$, and it follows that $Z\cong S$ as $R$-algebras. But since $T=F^{-1}Z$, we obtain $T\cong F^{-1}S$. \end{proof} \end{prop} The next point was covered in the proof. The one after that deals with tightness and (de)localization. \begin{cor}\label{cor:1}If the extension $R\subseteq S$ is tight, one has $\xr{reg}(R)\subseteq\xr{reg}(S)$.$\hfill\square$ \end{cor} \begin{prop}\label{prop:5}If $R\subseteq S$, $F\subseteq\xr{reg}(R)$ is a monoid, and $S$ is tight over $R$, then $F^{-1}S$ is tight over $F^{-1}R$. Conversely, if $F^{-1}R\subseteq F^{-1}S$ is tight and the elements of $F$ are also regular in $S$, then $R\subseteq S$ is tight. \begin{proof} When $0\ne\frac{z}{f}\in F^{-1}S$ with $z\in S$ and $f\in F$, then $zs=r\in R-\{0\}$ for some $s\in S$. If $\frac{zs}{f}=0$ in $F^{-1}S$, then $f'r=0$ in $S$ for some $f'\in F$, contradicting regularity of $f'$\!. So $\frac{r}{f}=\frac{z}{f}\cdot\frac{s}{1}$ is a non-zero element of $\frac{z}{f}F^{-1}S\cap F^{-1}R$. For the converse, for every $0\ne z\in S$ there exist $r\in R$, $s\in S$ and $f,f'\in F$ such that $0\ne\frac{z}{1}\frac{s}{f}=\frac{r}{f'}$ in $F^{-1}S$. Hence $0=f''(zsf'-rf)$ in $S$ for an $f''\in F$, so, because $f''\in\xr{reg}(S)$, $zsf'\!=rf\in R$. And $zsf'\ne0$, for else $\frac{zs}{f}$ would be zero in $F^{-1}S$. \end{proof} \end{prop} If $J$ is an ideal of an aic $S$ of $R$, then $S/J$ cannot, of course, be expected to be tight over $R/(J\cap R)$. Still, this property \textit{does} hold in case $J$ is a prime ideal. \begin{lem}\label{lem:2}If $R\to S$ is integral, $\xf{q}\in\xr{spec}(S)$ and $J\supseteq\xf{q}$ is an ideal of $S$ with $J\cap R=\xf{q}\cap R$, then $J=\xf{q}$. \begin{proof} Let $\xf{p}=\xf{q}\cap R$. Then $\xf{q}\cap(R-\xf{p}) = \varnothing$, so $\xf{q}S_\xf{p}\in\xr{spec}(S_\xf{p})$. It is readily seen that $(\xf{q}S_\xf{p})\cap R_\xf{p}=\xf{p}R_\xf{p}$. But $R_\xf{p}\to S_\xf{p}$ is integral, and $\xf{p}R_\xf{p}\in\xr{max}(R_\xf{p})$, so $\xf{q}S_\xf{p}$ is a maximal ideal of $S_\xf{p}$ (for $R_\xf{p}/\xf{p}R_\xf{p}\subseteq S_\xf{p}/\xf{q}S_\xf{p}$ is also integral). But $\xf{q}S_\xf{p}\subseteq JS_\xf{p}\subsetneq S_\xf{p}$ (for $J\cap(R-\xf{p})$ is also $\varnothing$). Hence $JS_\xf{p}=\xf{q}S_\xf{p}$, and this implies that $J=\xf{q}$. \end{proof} \end{lem} Now if $\xf{p}$ is a minimal prime ideal of $R$, there is a $\xf{q}\in\xr{spec}(S)$ lying over it. Since, by Lemma~\ref{lem:2}, there are no inclusion relations between the $S$-primes lying over $\xf{p}$, this $\xf{q}$ must be a minimal prime of $S$.
Then, by Lemma~\ref{lem:2}, $S/\xf{q}$ is tight over $R/\xf{p}$, hence an aic of $R/\xf{p}$, and hence $S/\xf{q}=(R/\xf{p})^{+}$\!. This goes at least \textit{some} way towards aic uniqueness, as the $R/\xf{p}$ are the domains closest to $R$. So: \begin{cor}\label{cor:2}If $S$ is an aic of $R$ and $\xf{q}\in\xr{spec}(S)$, then $S/\xf{q}=(R/(\xf{q}\cap R))^{+}$\!.$\hfill\square$ \end{cor} If in Lemma~\ref{lem:2} one only assumes that $\xf{q}$ is \textit{primary}, or even merely that $\sqrt{\xf{q}}\in\xr{spec}(S)$, the conclusion reads $\sqrt{J}=\sqrt{\xf{q}}$. Finally, we note that M. Artin's beautiful proof (\cite{H}, Thm.\ 2.6) of a result we quote here goes through verbatim. \begin{prop}\label{prop:6}If $S$ is an aic of $R$ and $\xf{p}$ and $\xf{q}$ are prime ideals of $S$, then $\xf{p}+\xf{q}\in\xr{spec}(S)\cup\{S\}$.$\hfill\square$ \end{prop} \section{Reduced rings with finite minimal spectrum}\label{sec:redux} As an illustration of Prop.~\ref{prop:2}, we consider reduced rings $R$ for which $\xr{min}(R)$ is a finite set, and we discuss their (unique) universal aic.\\ The trivial ring $R=1$ is the case $\xr{min}(R)=\varnothing$, and integral domains are just the special case where $\xr{min}(R)$ is a one-point space. For if $\xr{min}(R)=\{\xf{p}\}$, then $\xf{p}=\bigcap\xr{min}(R)=\bigcap\xr{spec}(R)=\xr{nil}(R)=0$. This class of rings also includes all noetherian reduced rings.\\ Let $R\subseteq S$ be a given tight integral extension, with $S$ an ai closed ring. Then we have $\xr{nil}(S)\cap R=\xr{nil}(R)=0$, so by tightness $S$ must also be reduced.\\ For $\xf{p}\in\xr{min}(R)$, take a minimal prime ideal $\tilde{\xf{p}}$ of $S$ lying over it. Then, if $I=\bigcap_{\xf{p}\in\xr{min}(R)}\tilde{\xf{p}}$, we have $I\cap R=\bigcap\xr{min}(R)=\xr{nil}(R)=0$, hence $I=0$. So if $\xf{Q}\in\xr{min}(S)$, since $\xr{min}(R)$ is finite, the product $\prod_{\xf{p}\in\xr{min}(R)}\tilde{\xf{p}}$ exists and is contained in $I=0\subseteq\xf{Q}$. Thus $\xf{Q}$ contains, and hence by minimality equals, one of the $\tilde{\xf{p}}$. Therefore, $S$ is also ``semiglobal'', that is to say, $\xr{min}(S)=\{\tilde{\xf{p}}\mid\xf{p}\in\xr{min}(R)\}$ is finite.\\ Let $K$ and $L$ be the total rings of fractions of $R$ and $S$, respectively. So $K=\xr{reg}(R)^{-1}R$. The prime ideals of $K$ are of the form $\xf{p}K$, where $\xf{p}\in\xr{spec}(R)$ with $\xf{p}\cap\xr{reg}(R)=\varnothing$, i.e. $\xf{p}$ consists of zero divisors of $R$. These $\xf{p}$ are precisely the minimal prime ideals of $R$. For if $\xf{p}\in\xr{min}(R)$ and $r\in\xf{p}$, then $r\in\xf{p}R_\xf{p}$. Being a localisation of a reduced ring, $R_\xf{p}$ is again reduced. So $0=\xr{nil}(R_\xf{p})=\bigcap\xr{spec}(R_\xf{p})=\xf{p}R_\xf{p}$, for $\xf{p}R_\xf{p}$ is the only prime ideal of $R_\xf{p}$. (Note that $R_\xf{p}$ therefore must be a field.) So $r=0$ in $R_\xf{p}$, and hence there is an $r'\in R-\xf{p}$ for which $rr'\!=0$ in $R$. So $r$ is a zero divisor of $R$. Conversely, if $rr'\!=0$ and $r'\ne0$, then there is a $\xf{p}\in\xr{min}(R)$ with $r'\notin\xf{p}$, because $\bigcap\xr{min}(R)=0$. Therefore, $r\in\xf{p}$. So if a prime $\xf{q}$ of $R$ contains only zero divisors, it is contained in $\bigcup\xr{min}(R)$. But this is a finite union, and so, by the prime avoidance lemma, $\xf{q}$ is contained in some minimal prime of $R$, hence is itself a minimal prime.\\ Thus $\xr{spec}(K)=\{ \xf{p}K\mid\xf{p}\in\xr{min}(R)\}$. And we have $R\subseteq K$. Note that $\xf{p}K\cap R=\xf{p}$ when $\xf{p}$ is a minimal prime.
Indeed, if $r=u/v$ in $K$ with $u\in\xf{p}$ and $v\in R$ regular, then there is a $w\in\xr{reg}(R)$ with $w(vr-u)=0$ in $R$, so $vr=u\in\xf{p}$. If $v\in\xf{p}$, then by the above $v$ is a zero divisor of $R$, contradiction. So $r\in\xf{p}$. As a result, $R/\xf{p}\subseteq K/\xf{p}K$.\\ It follows that $K$ is a zero-dimensional reduced ring, that is, a von Neumann regular ring. For if $\xf{p}$ and $\xf{q}$ are minimals of $R$ with $\xf{p}K\subseteq\xf{q}K$, then $\xf{p}\subseteq\xf{q}K\cap R=\xf{q}$, hence $\xf{p}=\xf{q}$. So if $\xf{p}\in\xr{min}(R)$, we have $\xf{p}K\in\xr{max}(K)$, and $K/\xf{p}K$ is a field. It is in fact the quotient field of $R/\xf{p}$. The ring $L$ is also VNR, and it contains $S$ as a subring.\\ $R\subseteq S\subseteq L=\xr{reg}(S)^{-1}S$, and $\xr{reg}(R)\subseteq\xr{reg}(S)$ in view of Cor.~\ref{cor:1}. Thus $K=\xr{reg}(R)^{-1}R\subseteq\xr{reg}(R)^{-1}S$ (since localizations are flat) $\subseteq\xr{reg}(S)^{-1}S=L$. Hence $K/\xf{p}K$ is a subfield of $L/\tilde{\xf{p}}L$. And the extension $K/\xf{p}K\subseteq L/\tilde{\xf{p}}L$ is algebraic, because $S$ is integral over $R$.\\ The natural map $K\to\prod_{\xf{p}\in\xr{min}(R)}K/\xf{p}K$ is injective, for the kernel is the intersection of all prime ideals of $K$, and $K$ is reduced. As $\xr{dim}(K)=0$, by the CRT this is actually an isomorphism, and $K$ is a finite product of fields. It is easy to see that in fact $\xr{Q}(R/\xf{p})\cong K/\xf{p}K\cong K_{\xf{p}K}\cong R_\xf{p}$.\\ The extension $R\rightarrowtail P\coloneqq\prod_{\,\xf{p}\in\xr{min}(R)}R/\xf{p}$ is integral and tight. Integrality holds because any $(\overline{r}_\xf{p})_{\xf{p}}\in P$, with lifts $r_\xf{p}\in R$, is a zero of the monic polynomial $\prod_{\,\xf{p}\in\xr{min}(R)}(X-r_\xf{p})\in R[X]$. As for tightness: by the finiteness of the minimal spectrum, $\prod_{\,\xf{q}\in\xr{min}(R)-\{\xf{p}\}}\xf{q}$ exists. It is not contained in $\xf{p}$ (as $\xf{p}$ is prime and contains none of the $\xf{q}$), and, therefore, $\bigcap_{\,\xf{q}\in\xr{min}(R)-\{\xf{p}\}}\xf{q}\nsubseteq\xf{p}$, for every $\xf{p}$. So if $0\ne p=(\overline{r}_\xf{p})_\xf{p}\in P$, with $\overline{r}_\xf{p}\ne0$, say, with lift $r_\xf{p}\in R-\xf{p}$, and $c$ is in $\bigcap_{\,\xf{q}\in\xr{min}(R)-\{\xf{p}\}}\xf{q}-\xf{p}$, then $cr_\xf{p}\in pP\cap(R-\{0\})$.\\ Take $A=\xr{min}(R)$ and put $R_a=R/a$ for $a\in A$. Then, by (1) of Prop.~\ref{prop:2}, $T=P^+=\prod_{\xf{p}\in\xr{min}(R)}(R/\xf{p})^+$ is the unique aic of $P$, hence, by (3) of Prop.~\ref{prop:2}, the universal aic of $R$. And $(R/\xf{p})^+=S/\tilde{\xf{p}}$ by Cor.~\ref{cor:2}. $T$ is also a subring of $\overline{K}\coloneqq\prod_{\xf{p}\in\xr{min}(R)}C_\xf{p}$, where $C_\xf{p}$ is the algebraic closure of the field $K/\xf{p}K$ (and of $L/\tilde{\xf{p}}L$). This $\overline{K}$ may be regarded as the ``algebraic closure'' of $R$ (or, equally, of $K$, $S$ or $L$). Clearly, $T$ is also the integral closure of $R$ in $\overline{K}$.\\ The tightness of $T$ over $R$ can also be seen directly. Let $0\ne t\in T$\!, and, for $\xf{p}\in\xr{min}(R)$, denote the $\xf{p}$-th coordinate of $t$ by $t_\xf{p}$, and pick a $\xf{p}$ with $t_\xf{p}\in C_\xf{p}$ nonzero. Take a monic $f\in R[X]$ with $f(t)=0$ in $T$\!. If the constant term $f(0)$ is in $\xf{p}$, it becomes zero in $K/\xf{p}K$, hence in $C_\xf{p}$. As $t_\xf{p}$ is a root of the image of $f$ in $C_\xf{p}[X]$ and $C_\xf{p}$ is a field, $t_\xf{p}$ is also a root of (the image of) $g=(f-f(0))/X\in R[X]$. We then replace $f$ by $g$. If the new $f(0)$ is in $\xf{p}$ again, repeat the process until $f(0)\notin\xf{p}$. With $c\in R$ as above, which is in all minimals of $R$ except $\xf{p}$, put $h\coloneqq cf$.
Then $h(t_\xf{p})=0$, and for $\xf{p}\ne\xf{q}\in\xr{min}(R)$, we have $c=0$ in $K/\xf{q}K$, so $h=0$ in $C_\xf{q}[X]$. Hence $h(t)=0$ in $T$\!. But $h(0)=cf(0)\notin\xf{p}$, and therefore $h(0)$ is a nonzero element of $tT$ that is in $R$.\\ The image of $S$ in $\overline{K}$ under the composition map $S\subseteq L\rightarrowtail\prod_{\xf{p}\in\xr{min}(R)}L/\tilde{\xf{p}}L\subseteq\overline{L}=\overline{K}$ is integral over $R$, so it is contained in $T$\!, and this confirms the universality of the absolute integral closure $T$\!.\\ $T$ has $2^{|\xr{min}(R)|}$ idempotents. Denote by $e_{(\xf{p})}$ the idempotent whose $\xf{p}$-th coordinate is $1$ and whose other coordinates are $0$. Then every idempotent is the sum of the elements of a subset of $\{e_{(\xf{p})}\mid\xf{p}\in\xr{min}(R)\}$. And $e_{(\xf{p})}e_{(\xf{q})}=0$ when $\xf{p}\ne\xf{q}$. Therefore, the $e_{(\xf{p})}$ form a fundamental system of orthogonal idempotents.\\ Then $S=T$ iff the $e_{(\xf{p})}$ are all in $S$. For if they are, and $t\in T$\!, fix a $\xf{p}$, and let $f\in R[X]$ be monic with $f(t)=0$. Then $f(t_\xf{p})=0$ in $C_\xf{p}$. Write $f=\prod_{1\leq i\leq n}(X-s_i)$ in $S[X]$, so that we have $f=\prod_{1\leq i\leq n}(X-(s_i)_{\xf{p}})$ in $C_\xf{p}[X]$. But then $t_\xf{p}=(s_i)_{\xf{p}}$ for some $i$, because $C_\xf{p}$ is a field. Put $s=s_i$. Since $e_{(\xf{p})}$ is in $S$, so is $e_{(\xf{p})}s$. Its $\xf{p}$-th component is $t_\xf{p}$, and it has zero for the other $\xf{q}$. So $e_{(\xf{p})}s$ is actually equal to $e_{(\xf{p})}t$. The sum of the $e_{(\xf{p})}$, taken over the $\xf{p}\in\xr{min}(R)$, is equal to $1$. So $t$, which is therefore the sum of the $e_{(\xf{p})}t$, must be in $S$.\\ To sum up: \begin{thm}\label{thm:1}With the hypotheses and notation of the current section, one has: \begin{enumerate}[label=\normalfont{(}\normalfont\arabic*)] \item $T=\prod_{\xf{p}\in\xr{min}(R)}(R/\xf{p})^+$ is the universal aic of $R$. \item An aic of $R$ is isomorphic to $T$ if and only if it contains precisely $2^{|\xr{min}(R)|}$ idempotents, that is, iff it has $|\xr{min}(R)|$ connected components.$\hfill\square$ \end{enumerate} \end{thm} Recall that when $R$ is a domain, so is $S$. Hence $S$ contains $2^{|\xr{min}(R)|}$ idempotents, namely just the trivial ones, so this confirms once again that domains have a unique aic. (And the theorem is also valid for $R=1$, in which case $T=R$.)\\ With regard to generalizing the above construction, we note that for reduced $R$, the total ring of fractions $K=Q(R)$ is VNR iff every ideal of $R$ contained in $\bigcup\xr{min}(R)$ (that is, every ideal consisting entirely of zero divisors) is contained in some minimal prime of $R$. For such $R$, the minimal prime spectrum $\xr{min}(R)$ is automatically compact wrt.\ the topology induced by the Zariski topology on $\xr{spec}(R)$ - although that in itself is not enough for $K$ to be VNR (cf. \cite{M}, Prop.~1.15, or various places in \cite{T}, such as Lemma 3.2 and Th.~3.4).\\ But a fundamental difficulty with the construction when $\xr{min}(R)$ is infinite is that $T$ no longer needs to be tight over $R$. For, look at the ideal $I$ generated by the idempotent $e_{(\xf{p})}\in T$ for a $\xf{p}\in\xr{min}(R)$. It consists of the $t\in T$ with $t_{\xf{p}}\in T_{\xf{p}}$, that is, $t_{\xf{p}}\in C_{\xf{p}}$ is integral over $R$, and $t_{\xf{q}}=0$ in $C_{\xf{q}}$ for the remaining $\xf{q}\in\xr{min}(R)$. For $t$ to be in $R$ and non-zero, you would need $t_{\xf{p}}$ to be in $R$ and in every $\xf{q}\in\xr{min}(R)$ except in $\xf{p}$.
And this must hold for each minimal $\xf{p}$ of $R$. This would appear to preclude any substantial generalization of the case that $\xr{min}(R)$ is finite. \section{Sample rings having non-unique aics}\label{sec:sample} We continue with the notation of the previous section. If $k$ is a field, then the ring $R=k[X,Y]/(XY)$ has non-isomorphic aics.\\ In this case, we have $\xr{min}(R)=\{\xf{p},\xf{q}\}$, with $\xf{p}=(X)$ and $\xf{q}=(Y)$, if we denote the images of $X$ and $Y$ in $R$ by the same symbols. And $K/\xf{p}K=k(Y)$, with algebraic closure $C_\xf{p}$. The image of $R$ in $C_\xf{p}$ is the polynomial ring $k[Y]$. Let $T_\xf{p}$ be the integral closure of $k[Y]$ in $C_\xf{p}$, and $T_\xf{q}=k[X]^+\subseteq C_\xf{q}\cong_kC_\xf{p}$ (under $X\mapsto Y$). We then have $T=T_\xf{p}\times T_\xf{q}$ in $C_\xf{p}\times C_\xf{q}=\overline{K}$.\\ If $\overline{k}$ is the algebraic closure of $k$, there are ring homomorphisms $T_\xf{p}\xrightarrow{\varphi}\overline{k} \xleftarrow{\psi}T_\xf{q}$ extending the maps $\varphi:k[Y]\to \overline{k}\gets k[X]:\psi$ defined by $Y\mapsto0\mapsfrom X$. Indeed, take $\xf{M}\in\xr{max}(T_\xf{p})$ lying over $(Y)\in\xr{max}(k[Y])$. Then $k[Y]/(Y)\subseteq T_\xf{p}/\xf{M}$ is an algebraic extension of fields, so $\varphi:k[Y]\to k[Y]/(Y)=k\subseteq\overline{k}$ can be extended to $\varphi:T_\xf{p}\to T_\xf{p}/\xf{M}\to\overline{k}$. Likewise for $T_\xf{q}$ and $\psi$.\\ Now let $T_0=\{(\alpha,\beta)\in T=T_\xf{p}\times T_\xf{q}\mid\varphi(\alpha)=\psi(\beta)\}$ be the pullback. It contains the image of $R$ in $T$\!. For, every $r\in R$ is of the form $\lambda+Xf(X)+Yg(Y)$ with $\lambda\in k$ and $f(Z),g(Z)\in k[Z]$. This maps to $(\lambda+Yg(Y),\lambda+Xf(X))$ in $R/\xf{p}\times R/\xf{q}=k[Y]\times k[X]\subseteq T$\!, and thence to $(\lambda,\lambda)$ in $\overline{k}\times\overline{k}$ under $\varphi\times\psi$, since $\varphi(Y)=0=\psi(X)$. And $T_0$ is integral and tight over $R$. For let $t_0=(\alpha,\beta)\in T_0-R$. Say $\alpha\ne0$. Then $Yt_0=(Y\alpha,0)$ is in $T_0-\{0\}$, and we have $f(\alpha)=0$ for some monic $f=f(Z)\in k[Y][Z]$ with $f(0)\in k[Y]$ non-zero, for example for the minimal polynomial $f$ of $\alpha$ over $k(Y)$. As $k[Y][Z]\subseteq R[Z]$, we may view $f$ as a polynomial over $R$. Then $g\coloneqq Y^{\xr{deg}(f)}f(Y^{-1}Z)\in R[Z]$ has $g(Y\alpha)=0$ in $T_\xf{p}$ and $g(0)=Y^{\xr{deg}(f)}f(0)\ne0$. But $Y=0$ in $T_\xf{q}$, so $g(0)$ vanishes in $T_\xf{q}$, hence $g(Yt_0)=0$ in $T_0$. This yields $0\ne g(0)\in Yt_0T_0\cap R\subseteq t_0T_0\cap R$.\\ Finally, $T_0$ is ai closed. For let $f\in T_0[Z]$ be monic, of degree $n$, say. Then $f=\prod_{1\leq i\leq n}(Z-(\alpha_i,\beta_i))$ in $T[Z]$ for suitable $\alpha_i\in T_\xf{p}$ and $\beta_i\in T_\xf{q}$, as $T$ is absolutely integrally closed. But then $f=\prod_{1\leq i\leq n}(Z-(\alpha_i,\beta_{\pi(i)}))$ in $T[Z]=(T_\xf{p}\times T_\xf{q})[Z]$ for every permutation $\pi$ of the indices $1,\cdots,n$ (!). For $i<n$, let $(\eta_i,\vartheta_i)\in T_0$ be the coefficient of $Z^i$ in $f$. So $(\eta_0,\vartheta_0)=(-1)^n\prod_{1\leq i\leq n}(\alpha_i,\beta_i)$, and so on, up to $(\eta_{n-1},\vartheta_{n-1})=-\sum_{1\leq i\leq n}(\alpha_i,\beta_i)$. Then we have $(-1)^n\prod_{1\leq i\leq n}\varphi(\alpha_i)=\varphi(\eta_0)=\psi(\vartheta_0)=(-1)^n\prod_{1\leq i\leq n}\psi(\beta_i)$, and so forth, up to $-\sum_{1\leq i\leq n}\varphi(\alpha_i)=\varphi(\eta_{n-1})=\psi(\vartheta_{n-1})=-\sum_{1\leq i\leq n}\psi(\beta_i)$.
It follows that in the ring $\overline{k}[Z]$ the equality $\prod_{1\leq i\leq n}(Z-\varphi(\alpha_i))=\prod_{1\leq i\leq n}(Z-\psi(\beta_i))$ holds, and hence $\varphi(\alpha_i)=\psi(\beta_{\pi(i)})$ for all $i$, for some permutation $\pi$.\\ So $T_0$ is an aic of $R$. But since $e_{(\xf{p})}=(1,0)$ is not in $T_0$ (as $\varphi(1)=1\ne0=\psi(0)$), the rings $T_0$ and $T$ cannot be isomorphic. One is connected, while the other is not. Since $2^{|\xr{min}(R)|}=4$ here, it is clear there cannot be any other aics.\\ This generalizes easily. \begin{thm}\label{thm:2}For $R$ reduced with $\xr{min}(R)$ finite, the following are equivalent. \begin{enumerate}[label=\normalfont{(}\normalfont\arabic*)] \item $R^{\,+}$\! exists, i.e., the absolute integral closure of $R$ is uniquely determined. \item $R$ is a finite direct product of integral domains. \end{enumerate} \begin{proof} One direction is given by (1) of Prop.~\ref{prop:2}. For the other, $R$ cannot be an infinite product $\prod_{a\in A}R_a$ of non-trivial rings, for if $\xf{p}_a\in\xr{min}(R_a)$, we would have $\xf{p}_a\times\prod_{a'\in A-\{a\}}R_{a'}\in\xr{min}(R)$. Since for finite products we can simply treat the factors individually, we may as well assume $R$ is connected. If $|\xr{min}(R)|\le1$, i.e.\ if $R$ is either a domain or the empty product $1$ of domains, we are done. In all other cases, $R=R/\bigcap\xr{min}(R) \rightarrowtail\prod_{\xf{p}\in\xr{min}(R)}R/\xf{p}$, the natural map, can't be an isomorphism (the product is disconnected, while $R$ is connected), so by the (contraposition of the) CRT there are minimals $\xf{p}\ne\xf{q}$ that are not comaximal. Then $\xf{p}\cup\xf{q}\subseteq\xf{m}$ for some $\xf{m}$ in $\xr{max}(R)$, and the maps $\varphi:R/\xf{p}\to R/\xf{m}\eqqcolon k\to\overline{k}\gets R/\xf{m}\gets R/\xf{q}:\psi$ can be extended to $T_\xf{p}\xrightarrow{\varphi}\overline{k}\xleftarrow{\psi}T_\xf{q}$, where $T=T_\xf{p}\times T_\xf{q}\times\prod_{\xf{r}\in\xr{min}(R)-\{\xf{p},\xf{q}\}}T_\xf{r}$ in $\prod_{\xf{r}\in\xr{min}(R)}C_\xf{r}=\overline{K}$, and $T_\xf{r}$ denotes the integral closure of $R$ in $C_\xf{r}$. Again, the pullback $T_0=\{(\alpha_\xf{r})_{\xf{r}\in\xr{min}(R)}\in\prod_{\xf{r}\in\xr{min}(R)}T_\xf{r}\mid\varphi(\alpha_\xf{p}) =\psi(\alpha_\xf{q})\}$ is an aic of $R$ that is not isomorphic to $T$ itself, as it has only half the required idempotents. We observe that the image of $R$ in $T$ is contained in $T_0$, as $\forall_{r\in R}\,\varphi(r)=r\text{ mod }\xf{m}=\psi(r)$. To see tightness of $T_0$ over $R$, let $0\ne t_0=(\alpha_\xf{r})_{\xf{r}}\in T_0$. Say $\alpha_\xf{r}\ne0$. This coordinate is a zero of a monic $f\in R[X]$, for which we may assume $f(0)\notin\xf{r}$ (as in the second proof of the tightness of $T$ over $R$ in section \ref{sec:redux}). Take $c\in(\bigcap_{\,\xf{s}\in\xr{min}(R)-\{\xf{r}\}}\xf{s})-\xf{r}$. Then $cf(t_0)=0$ in $T_0$, hence $cf(0)\in (t_0T_0\cap R)-\xf{r}$, because (the image of) $R\subseteq T_0$. Ai closedness of $T_0$ follows as above, looking at just the $\xf{p}$-th and $\xf{q}$-th coordinates. But this contradicts (1). \end{proof} \end{thm} \section{Using model theory}\label{sec:mt} The concepts of tightness and algebraicity (rather than integrality) of ring extensions can be represented in first-order logic as the omission of two 1-types, as we will see in a moment. Based on this, we present a case for the conjecture that ``most'' rings in fact possess at least two non-isomorphic aics.
For the benefit of the reader, a concise brush-up on the concepts is included a little further down.\\ We will adopt the terminology of Chang \& Keisler \cite{CK}, except that, as in the formulation and the very name of the Omitting Types Theorem, by a ``1-type'' we shall mean just a consistent set $\Sigma$ of $\xc{L}$-formulas $\eta(x)$ that have at most one free variable: $x$, rather than a maximal consistent such set.\\ To start off, given a ring $R$, let $\xc{L}$ be the first-order language consisting of the function symbols $+$ and $\times$, plus for every $r\in R$ an individual constant $r^\bullet$. And let $\xc{T}$ be the first-order theory with axioms: the usual ones for commutative unital rings, plus the sentences $r_1^\bullet+r_2^\bullet=(r_1+r_2)^\bullet$ and $r_1^\bullet\times r_2^\bullet=(r_1r_2)^\bullet$ for all $r_1,r_2\in R$, plus $r_1^\bullet\ne r_2^\bullet$ for all $r_1,r_2\in R$ with $r_1\ne r_2$. The models of $\xc{T}$ are just the commutative rings $S$ that contain $R$ as a subring. We suppress the multiplication symbol $\times$ in formulas, as customary, and for the bullets $^\bullet$ we do the same.\\ For a sentence $\sigma$ of $\xc{L}$, i.e. a formula without free variables (variables not bound by $\forall$ or $\exists$), one writes $\xc{T}\vdash\sigma$ to say that $\sigma$ follows from (the axioms of) $\xc{T}$ under the derivation rules of first-order logic. If $\eta(x)$ is an $\xc{L}$-formula, $S$ a $\xc{T}$\!-model and $s\in S$, we say that $s$ \textit{realizes} $\eta(x)$, and write $S\models\eta(s)$, if $s$ satisfies the property expressed by $\eta(x)$ when interpreted in $S$. If such $S$ and $s$ exist, $\eta(x)$ is called \textit{consistent} with $\xc{T}$\!. This can be generalized to sets $\Sigma$ of formulas in a single variable $x$, called 1-\textit{types}. $s$ realizes $\Sigma$ in $S$ when $s$ realizes every formula $\eta(x)\in\Sigma$. And if such $S$ and $s$ exist, $\Sigma$ is said to be consistent with $\xc{T}$\!. When there is no such $s$, $S$ \textit{omits} $\Sigma$. Here and there, we enclose formulas in corners. This allows ``$=$'' to be used both as the formal equality symbol in formulas and for assignment statements in the metatheory.\\ As an example, let $\Sigma$ be the collection of the formulas $\varphi_{\vec{r}}(x)=\ulcorner\!r_nx^n+r_{n-1}x^{n-1}+\cdots+r_{0}=0\to r_n=0\urcorner$ for all $\vec{r}=(r_n,\cdots,r_{0})\in R^{n+1}$ of (arbitrary) length $n+1$, and take $R=\xb{Q}$. A $\xb{Q}$-algebra $S$ that omits $\Sigma$ is algebraic over $\xb{Q}$. For if $s$ is in $S$, there must be an $n$ and an $\vec{r}=(r_n,\cdots,r_0)\in\xb{Q}^{n+1}$ such that $\varphi_{\vec{r}}(s)$ fails in $S$. Then we have $r_ns^n+r_{n-1}s^{n-1}+\cdots+r_{0}=0$, but $r_n\ne0$. And $S$ realizes $\Sigma$ iff it contains an element that satisfies none of the equations, i.e., that is transcendental over $\xb{Q}$.\\ Next, we add to $\xc{T}$ enough sentences to make models $S$ ai closed. One sentence is needed for each polynomial degree $n$ (or at least for infinitely many degrees). E.g., for the quadratic case it would be $\forall_x\forall_y\exists_u\exists_v(u+v=-x\wedge uv=y)$, so that every $Z^2+xZ+y$ will factor as $(Z-u)(Z-v)$ in $S[Z]$ for suitable $u,v\in S$.\\ We will denote the integral closure of $R$ in a $\xc{T}$\!-model $S$ by $S^\dagger$. By Lemma~\ref{lem:1}, the ring $S^\dagger$ is again ai closed.\\ For $S$ to be an aic of $R$, it needs to omit two 1-types of $\xc{L}$.
The first one consists of one formula $\eta_{\vec{r}}(x)$ for every finite sequence $\vec{r}=(r_{n-1},\cdots,r_{0})\in R^n$ of any length $n$, saying that $x^n+r_{n-1}x^{n-1}+\cdots+r_{0}\ne0$. When $S$ does not realize this type, $S$ is integral over $R$. However, in cases like $R=\xb{Z}$, $\xc{T}$ \textit{locally realizes} this type, as the saying goes, since $\gamma(x)=\ulcorner2x=1\urcorner$ is a formula consistent with $\xc{T}$, and $\xc{T}\vdash\forall_x(\gamma(x)\to\eta_{\vec{r}}(x))$ for every $\vec{r}$, because $\frac{1}{2}$ (or any $s\in S$ with $2s=1$) cannot be a zero of any monic $f\in\xb{Z}[X]$, hence realizes the type in $S$. The formula $\gamma(x)$ describes what an element $x$ realizing the type may look like.\\ So instead, we will use the $\varphi_{\vec{r}}(x)$ defined above. Let $\Sigma_1$ be the set of these formulas, for all $n\in\xb{N}$ and $\vec{r}=(r_n,\cdots,r_{0})\in R^{n+1}$. Models $S$ that omit this type are algebraic over $R$. Instead of $S$ itself, the subring $S^\dagger$ will be ai closed and integral over $R$. $\xc{T}$ \textit{locally omits} $\Sigma_1$, that is, there is no $\xc{L}$-formula $\gamma(x)$ having at most $x$ free and consistent with $\xc{T}$ such that $\xc{T}\vdash\forall_x(\gamma(x)\to\varphi_{\vec{r}}(x))$ for all applicable finite sequences $\vec{r}$. For $\gamma(x)$ would then imply that $x$ is transcendental over $R$, and no single formula can of course achieve that on its own (except when $R=1$, where $\gamma(x)=\ulcorner0=1\urcorner$ achieves this tour de force).\\ For the definition of the second 1-type $\Sigma_2$, we first specialize to the case where $R=P=\prod_{a\in A}R_a$ is a finite product of non-trivial rings, as before, but we no longer require that the $R_a$ are domains, merely that they are connected. If $h=2^{|A|}$, then $R$ will have precisely $h$ idempotents (including the two trivial ones), say $d_1,\cdots,d_h$. The $e_a$ (for $a\in A$), introduced in Section \ref{sec:first}, are among them. All $\xc{T}$-models $S$ contain the fundamental system of orthogonal idempotents $\{e_a\mid a\in A\}$, so can be written as $\prod_{a\in A}S_a$, with $S_a=e_aS$. But these $S$ may of course have additional idempotents.\\ Let $\delta(x)$ abbreviate the $h$-fold conjunction $\bigwedge_{i=1}^hx\ne d_i$.\\ For $\vec{r}=(r_n,\cdots,r_{0})\in R^{n+1}$, put $\vartheta_{\vec{r}}(y)=\ulcorner\!y^n+r_{n-1}y^{n-1}+\cdots+r_{0}=0\urcorner$ (in which $r_n$ is not referenced), and for $n,m\in\xb{N}$, $\vec{r}\in R^{n+1}$ and $\vec{r}\,'\in R^{m+1}$, put $\psi_{\vec{r},\vec{r}\,'}(x)=\ulcorner\delta(x)\,\wedge\,\forall_y\,\forall_z\,((\vartheta_{\vec{r}}(y)\wedge\vartheta_{\vec{r}\,'}(z)\wedge xy=r_n\wedge(1-x)z=r_m')\to\bigwedge_{a\in A}(r_ne_a=0\vee r_m'e_a=0))\urcorner$, and let $\Sigma_2$ be the collection of all formulas $\psi_{\vec{r},\vec{r}\,'}(x)$ in $x$.\\ If $S$ omits $\Sigma_2$, every $s\in S$ not among the $d_i$ is (in particular) $\ne0$ and $\ne1$, and there are $y,z\in S$ integral over $R$ plus an $a\in A$ such that $sy=r_n$ and $(1-s)z=r_m'$\! and $r_ne_a\ne0\ne r_m'e_a$ (for suitable $\vec{r}$ and $\vec{r}\,'$), meaning that if, in particular, $s\in S^\dagger$, both ideals $e_asS^\dagger$ and $e_a(1-s)S^\dagger$ have nonzero contraction to $R$. So the extension $S^\dagger/R$ is tight, and thus $S^\dagger$ is an aic of $R$.\\ If the $R_a$ are domains, then $\gamma(x)=\ulcorner\delta(x)\,\wedge\,x^2=x\!\urcorner$ implies $\psi_{\vec{r},\vec{r}\,'}(x)$ under $\xc{T}$ for all finite sequences $\vec{r}$ and $\vec{r}\,'$\! of this kind.
For, arguing in $\xc{T}$\!, if $x$ is an idempotent that differs from all the $d_i$, and $y$ and $z$ (satisfy the monic equations involved and) have $xy=r_n$ and $(1-x)z=r_m'$\!, and there is an $a\in A$ with $r_ne_a\ne0$ and $r_m'e_a\ne0$, then $r_nr_m'\!=x(1-x)yz=0$. But since the $a$-th coordinates of $r_n$ and $r_m'$\! are non-zero, the fact that $r_nr_m'\ne0$ is known to $\xc{T}$, so that is a contradiction. This $\gamma(x)$ is consistent with $\xc{T}$\!, because $R\hookrightarrow R\times R$ via $r\mapsto(r,r)$ and $R\times R$ has extra idempotents: the $(e,e')$ with $e$ and $e'$\! different idempotents of $R$. So any ai closed $S\supseteq R\times R$ is a $\xc{T}$-model that realizes $\gamma(x)$. Hence $\xc{T}$ locally realizes $\Sigma_2$ when every $R_a$ is a domain.\\ And $\gamma(x)=\ulcorner\delta(x)\,\wedge\,\bigwedge_{(r,r')\in(R-\{0\})^2}\forall_y\,\forall_z\,(xy\ne r\,\vee\,(1-x)z\ne r')\urcorner$ does the same job when $R$ is a \textit{finite} ring $\ne1$. Indeed, $R\subsetneq R\times R$ and, for $s=(1,0)$, $(s)\cap R=0=(1-s)\cap R$, so $R\times R\models\gamma(s)$. So here $\xc{T}$ locally realizes $\Sigma_2$ too.\\ But for a general $R$, an $\xc{L}$-formula $\gamma(x)$ that locally realizes $\Sigma_2$ over $\xc{T}$ must be consistent with $\xc{T}$\!. I.e., there must be an extension $R\subseteq S$, with $S$ ai closed, and an $s\in S$, such that $S\models\gamma(s)$. And for every such $S$ the existence of such an $s$ needs to imply $S\models\psi_{\vec{r},\vec{r}\,'}(s)$ for \textit{all} pairs $(\vec{r},\vec{r}\,')$. If $r_n=0$ or $r_m'\!=0$, then $\bigwedge_{a\in A}(r_ne_a=0\vee r_m'e_a=0)$ is true, hence, by definition, it is implied by any formula $\gamma(x)$ whatsoever. And, if $\gamma(x)$ implies $\delta(x)\,\wedge\,x^2=x$, when $r_nr_m'\ne0$ we can argue as in the case of a finite product of domains.\\ However, since $\vartheta_{\vec{r}}(y)$ expresses that $y$ is integral with a specific equation over $R$, and the notion of being an element integral over $R$ with unknown equation cannot be formalized using a single formula, in case $r_n\ne r_nr_m'\!=0\ne r_m'$\! the only chance $\gamma(x)$ has is to entail that $xy=r_n\wedge(1-x)z=r_m'$\! will be false for \textit{every} $y$ and $z$ in $S$, not just the integral ones. But that has proved to be a hard act - for this author at least, in view of the following remark.\\ \textit{Note.} Some prior versions of the paper claimed that $\xc{T}$\! locally realizes $\Sigma_2$ also when the zero-divisors of $R$ are easily surveyable. But the proof that the formula $\gamma(x)$ used there is consistent with $\xc{T}$\! was incorrect, and it doesn't appear to be easily mended.\\ All in all, in light of the above considerations it seems quite plausible that for any infinite ring which is not a finite product of domains, the corresponding theory $\xc{T}$\! must locally omit $\Sigma_2$.\\ We now produce the (un)desired non-isomorphic aics, for rings $R$ for which $\xc{T}$\! \textit{does} locally omit $\Sigma_2$. \begin{prop}\label{prop:8}Let $R$ be a finite product of connected rings, and put $\sigma_{\,\xr{extra}}=\ulcorner\,\exists_x(\delta(x)\,\wedge\,x^2=x)\urcorner$, a statement saying that extraneous idempotents exist (that are not already present in $R$ itself). If $\xc{T}_{\,\xr{extra}}=\xc{T}\cup\{\sigma_{\,\xr{extra}}\}$ and $\xc{T}_{\,\xr{nextra}}=\xc{T}\cup\{\neg\,\sigma_{\,\xr{extra}}\}$ locally omit $\Sigma_2$, then $R$ admits both aics with and aics without additional idempotents. Aics for $R$ are therefore \textit{not} unique.
\begin{proof} We have already noticed that $\xc{T}$ locally omits $\Sigma_1$. So do the two extended theories. If $R$ is countable as a set, so is the language $\xc{L}$, hence, by the Extended Omitting Types Theorem (Th.~2.2.15 in \cite{CK}), under the given circumstances each of $\xc{T}_{\,\xr{extra}}$ and $\xc{T}_{\,\xr{nextra}}$ has a model omitting both types $\Sigma_1$ and $\Sigma_2$. The integral closures of $R$ in such models are aics of $R$, as we saw earlier. But idempotents are integral over $R$, and the $\xc{T}_{\,\xr{nextra}}$-model will have the same (finite number of) idempotents as $R$, while the $\xc{T}_{\,\xr{extra}}$-model has more. So the aics of $R$ that live in these two models can't be isomorphic. For uncountable $R$, we can take $\kappa=|R|$, the cardinality of $R$, and instead use the $\kappa$-OTT (this is Th.~2.2.19 in Chang \& Keisler), by which every consistent theory $\xc{T}$ in a language $\xc{L}$ of power $\kappa$, which $\kappa$-omits an $n$-type $\Sigma$ of $\xc{L}$, has a model of cardinality $\leq\kappa$ that omits $\Sigma$. Here, $\xc{T}$ is said to $\kappa$\textit{-omit} $\Sigma$ if there is no set $\Gamma=\Gamma(x_1,\cdots,x_n)$ of formulas that have at most $x_1,\cdots,x_n$ free, with $|\Gamma|<\kappa$, and $\Gamma$ consistent with $\xc{T}$, and $\xc{T}\cup\Gamma(x_1,\cdots,x_n)\models\Sigma(x_1,\cdots,x_n)$. Now one cannot express transcendence over $R$ in fewer than $\kappa$ statements, and the same goes for \textit{looseness} (= untightness). The $\kappa$-OTT mentions only a single type $\Sigma$, while we use two. But this is easily overcome by letting $\Sigma$ be the $1$-type consisting of all disjunctions $\varphi_{\vec{r}\,''}(x)\vee\psi_{\vec{r},\vec{r}\,'}(x)$, where the $\psi_{\vec{r},\vec{r}\,'}$ are as above and $\vec{r}\,''$\! runs over all finite sequences in $R^{k+1}$, using a third (independent) natural number $k$. When a $\xc{T}$\!-model $S$ omits this $\Sigma$, for every $s$ in $S$ there are $n,m,k$ and vectors $\vec{r}$, $\vec{r}\,'$\!, $\vec{r}\,''$\! of lengths $n+1,m+1,k+1$, for which $\varphi_{\vec{r}\,''}(x)\vee\psi_{\vec{r},\vec{r}\,'}(x)$ fails for $x=s$, meaning that both disjuncts fail: the failure of the first makes $s$ algebraic over $R$, the failure of the second provides the contraction condition, and the argument concludes as in the countable case. \end{proof} \end{prop}
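We close the section with an observation added for illustration: the dichotomy in the conclusion of Prop.~\ref{prop:8} is realized unconditionally by the example of section \ref{sec:sample}. For the connected ring $R=k[X,Y]/(XY)$, the aic $T$ contains the extraneous idempotent $e_{(\xf{p})}=(1,0)$, while the aic $T_0$ is connected and so has only the trivial idempotents; thus $R$ admits aics both with and without additional idempotents.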
{ "timestamp": "2023-01-18T02:34:50", "yymm": "2212", "arxiv_id": "2212.06738", "language": "en", "url": "https://arxiv.org/abs/2212.06738", "abstract": "Absolute integral closures of general commutative unital rings are explored. All rings admit absolute integral closures, but in general they are not unique. Among the reduced rings with finitely many minimal prime ideals, finite products of domains are the only rings for which they are unique. Arguments using model theory suggest that the same holds for all infinite rings that are finite products of connected rings. Universal absolute integral closures, which contain every aic of a given ring, are shown to exist for certain subrings of products of domains.", "subjects": "Commutative Algebra (math.AC); Logic (math.LO)", "title": "Absolute integral closures of commutative rings", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759632491111, "lm_q2_score": 0.7217432182679956, "lm_q1q2_score": 0.7079405744371337 }
https://arxiv.org/abs/1506.07387
Cardinal Interpolation With General Multiquadrics: Convergence Rates
This article pertains to interpolation of Sobolev functions at shrinking lattices $h\mathbb{Z}^d$ from $L_p$ shift-invariant spaces associated with cardinal functions related to general multiquadrics, $\phi_{\alpha,c}(x):=(|x|^2+c^2)^\alpha$. The relation between the shift-invariant spaces generated by the cardinal functions and those generated by the multiquadrics themselves is considered. Additionally, $L_p$ error estimates in terms of the dilation $h$ are considered for the associated cardinal interpolation scheme. This analysis expands the range of $\alpha$ values which were previously known to give such convergence rates (i.e. $O(h^k)$ for functions with derivatives of order up to $k$ in $L_p$, $1<p<\infty$). Additionally, the analysis here demonstrates that some known best approximation rates for multiquadric approximation are obtained by their cardinal interpolants.
\section{Introduction} This article is primarily concerned with cardinal interpolation schemes associated with {\em general multiquadrics} in higher dimensions defined via two parameters as $\phi_{\alpha,c}(x):=(|x|^2+c^2)^\alpha$. The two main objects of study are the principal shift-invariant spaces associated with either the multiquadric {\em cardinal functions} or the multiquadrics themselves, and the performance of the interpolants from these spaces for recovery of functions in the classical $L_p$ Sobolev spaces of finite smoothness. Cardinal interpolation finds its origins in the work of Schoenberg (\cite{Schoenberg} and references therein), who studied interpolation at the integer lattice using splines. Many subsequent investigations involved cardinal functions associated with radial basis functions (RBFs), including much work by Buhmann \cite{BuhmannOdd, Buhmann, BuhmannBook}, Baxter, Riemenschneider, and Sivakumar \cite{Baxter, RiemSiva,rs1,rs2, rs3, siva}, and the authors \cite{HammLedford}. Some of the RBFs considered in those works are the thin plate spline, the Gaussian kernel, and the Hardy multiquadric. Recently, \cite{Ledford} provided a general framework for recovering bandlimited functions from their samples at the (multi-)integer lattice using so-called {\em regular families of cardinal interpolators}, of which certain families of Gaussians and multiquadrics are examples. This article is a continuation of the study done in \cite{HammLedford}, in which cardinal interpolation in one dimension using general multiquadrics was considered. There, detailed estimates on the univariate cardinal functions were given, the behavior of the interpolation operators acting on $\ell_p$ spaces was considered, and a method for recovery of multivariate bandlimited functions from their multiquadric interpolants via a limiting process was shown. One interesting problem for these interpolation schemes is to determine how quickly the interpolant of a function converges (globally) to the function based on its smoothness. Much work has been done on determining convergence rates for functions in the so-called {\em native space} of a given RBF (which for positive definite RBFs is the reproducing kernel Hilbert space with the RBF as the kernel). When interpolating functions in the native space, convergence is often exponentially fast \cite{MadychNelson,Wendland}; however, this space is often rather small. Indeed, for the Gaussian kernel, the native space consists of functions whose Fourier transform satisfies $\widehat{f}e^{|\cdot|^2}\in L_2(\R^d)$ \cite[Theorem 10.12]{Wendland}. Consequently, it is desirable to determine the rate of approximation for more general classes of smooth functions. Here, we consider interpolation of Sobolev ($W_p^k(\R^d)$) functions. The inspiration for our work is the article of Hangelbroek, Madych, Narcowich, and Ward \cite{hmnw}, which provided convergence rates for Gaussian interpolation. Often in the RBF literature, interpolants take the form $\sum_{j\in\Z^d}c_j\phi(\cdot-j)$, where $\phi$ is the given RBF. However, associated with many RBFs are cardinal functions $L_\phi$ satisfying $L_\phi(k)=\delta_{0,k}$, $k\in\Z^d$, and so another interpolant is $\sum_{j\in\Z^d}a_jL_\phi(\cdot-j)$, where $a_j=f(j)$ for a given function $f$. This brings up two natural questions: how do the series above converge, and in what sense are the two interpolants related?
In particular, for the Sobolev functions to be considered here, the coefficients typically lie in $\ell_p(\Z^d)$, and so natural objects of study are the $L_p$ principal shift-invariant spaces associated with $\phi$ and $L_\phi$ (see Section \ref{SECBasic} for the precise definition of these spaces). In many instances, it is easily shown that the shift-invariant spaces coincide (regardless of whether one defines them in terms of $\phi$ or $L_\phi$); however, for growing kernels, e.g. multiquadrics with positive $\alpha$, the matter is a bit more delicate since one of the spaces is not well-defined. Nevertheless, one often still has that $L_\phi = \sum_{j\in\Z^d}d_j\phi(\cdot-j)$, but with the series converging uniformly on compact subsets of $\R^d$. For general considerations of RBF approximation methods (not necessarily involving interpolation) and the associated shift-invariant space structure, see \cite{BuhmannRon,deBoorRon, DDR,Johnson,Johnson2,K}. Many of these references consider the best rates of approximation of smooth functions from shift-invariant spaces associated with different RBFs. Our study here demonstrates that in most cases, the optimal approximation rates using multiquadrics can be achieved by the associated cardinal interpolants. For more information on stable computation of multiquadric approximants, the interested reader is invited to consult the works of Driscoll, Fornberg, and Flyer (\cite{Fornberg,Fornberg2,FornbergFlyer} and references therein). Additionally, for a prolonged discussion of other methods in use with many references, the reader may consult \cite{HammLedford}.
The rest of the paper is laid out as follows. In Section \ref{SECBasic}, we provide some preliminaries such as notation and facts about the general multiquadrics and shift-invariant spaces. Section \ref{SECMain} details the statements of our main results and mentions some of the key ingredients; this section also contains the proof of the main theorem on approximation rates for Sobolev interpolation at the shrinking lattice $h\Z^d$ (Theorem \ref{THMmaintheorem}). Section \ref{SECMultiplier} begins by discussing the Fourier multiplier associated with the multiquadric cardinal function, whose operator norm governs the rest of the analysis. Pointwise and norm estimates are given for the multiplier operator as well. Section \ref{SECProofs} contains the lion's share of the proofs of the main theorems of Section \ref{SECMain}, while the Appendix contains the distributional proof of one of the driving equations. We conclude with some brief remarks and extensions in Section \ref{SECremark}. \section{Preliminaries}\label{SECBasic} Let $\Omega\subset\R^d$ be an open set. Then let $L_p(\Omega)$, $1\leq p\leq\infty$, be the usual space of $p$-integrable functions on $\Omega$ with its usual norm. If no set is specified, we mean $L_p(\R^d)$. Similarly, denote by $\ell_p$ the usual sequence spaces indexed by the (multi-)integers. Throughout, $\gamma$ and $\beta$ will be multi-indices, with $D^\gamma$ taking on its usual meaning as the differential operator. To mitigate confusion, the convention $[\gamma]:=\sum_{j=1}^d\gamma_j$ will be used to denote the length of the multi-index, since the symbol $|\cdot|$ is reserved exclusively for the Euclidean distance on $\R^d$. Define $W_p^k:=W_p^k(\R^d)$ to be the Sobolev space of functions in $L_p$ whose first $k$ weak derivatives are in $L_p$. The seminorm and norm on $W_p^k(\Omega)$ may be defined as \[|g|_{W_p^k(\Omega)}:=\underset{[\gamma]= k}\max\|D^\gamma g\|_{L_p(\Omega)},\quad\text{and}\quad \|g\|_{W_p^k(\Omega)}:=\|g\|_{L_p(\Omega)}+|g|_{W_p^k(\Omega)},\] respectively.
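To fix ideas with the simplest nontrivial case (an illustrative aside, not needed later): for $d=2$ and $k=1$, the seminorm above reads \[|g|_{W_p^1(\Omega)}=\max\left\{\left\|\tfrac{\partial g}{\partial x_1}\right\|_{L_p(\Omega)},\ \left\|\tfrac{\partial g}{\partial x_2}\right\|_{L_p(\Omega)}\right\}.\] Note that the maximum, rather than the sum, over multi-indices of length $k$ is used here; either choice gives an equivalent norm and so affects only the constants, not the convergence rates, in what follows.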
Let $\schwartz$ be the space of Schwartz functions on $\R^d$, that is, the collection of infinitely differentiable functions $\psi$ such that for all multi-indices $\gamma$ and $\beta$, $$\underset{x\in\R^d}\sup\left|x^\gamma D^\beta\psi(x)\right|<\infty\;.$$ The Fourier transform of a Schwartz function $\psi$ is given by \[ \widehat{\psi}(\xi):=\int_{\R^d} \psi(x)e^{-i\bracket{\xi, x}}dx,\quad \xi\in\R^d, \] whence the inversion formula is \[ \psi^\vee(x) = \dfrac{1}{(2\pi)^d}\dint_{\R^d}\psi(\xi)e^{i\bracket{x,\xi}}d\xi,\quad x\in\R^d. \] In the event that these formulas do not hold (for instance for $L_p$ functions with $p>2$), the Fourier transform should be interpreted in the sense of tempered distributions. Let $\schwartz'$ be the space of tempered distributions (that is, the continuous dual of $\schwartz$). Given $T\in\schwartz'$, its Fourier transform is the tempered distribution $\widehat{T}$ which satisfies $\bracket{\widehat{T},\phi}=\bracket{T,\widehat{\phi}},\;\phi\in\schwartz$. For basic facts about distributions, consult \cite{fjoshi}. The fact used most often in the subsequent analysis is that if $f\in L_p$, then it may be identified with its {\em induced distribution}, $T_f\in\schwartz'$, via $\bracket{T_f,\psi}:=\int_{\R^d} f(x)\psi(x)dx$, $\psi\in\schwartz$. Note that the integral is well-defined due to H\"{o}lder's inequality. Additionally, the following basic fact will be utilized implicitly throughout the sequel: \begin{lemma}\label{LEMDistribution} If $f,g\in L_p$ and $\widehat{T_f}=\widehat{T_g}$, then $f=g$ almost everywhere. \end{lemma} The proof of the lemma follows from the fact that an induced distribution is the 0 distribution precisely when the function is 0 almost everywhere. Finally, on account of Lemma \ref{LEMDistribution}, the common abuse of notation of writing $\widehat{f}$ for $\widehat{T_f}$ will be used. For $\sigma>0$, define $E_\sigma$ to be the class of entire functions of exponential type $\sigma$ whose restriction to $\R^d$ has at most polynomial growth. Namely, $f\in E_\sigma$ if and only if there is a constant $C$ and an $N\in\N$ such that $$|f(z)|\leq C(1+|z|)^Ne^{\sigma |\mathrm{Im}(z)|},\quad z\in\C^d.$$ Consequently, the restriction of $f$ to $\R^d$ is a tempered distribution, and the Paley-Wiener-Schwartz Theorem (see, for example, \cite[Theorem 7.23]{Rudin}) states that the distributional Fourier transform of $f$ has support (in the distributional sense) in the ball of radius $\sigma$ centered at the origin, which we denote $B(0,\sigma)$. The classes $E_\sigma$ are generalizations of the traditional Paley-Wiener spaces of bandlimited functions. Let $\alpha\in\R$ and $c>0$ be fixed; then define the \textit{general multiquadric} by \begin{equation}\label{EQgmcdef} \phi_{\alpha,c}(x):=\left(|x|^2+c^2\right)^\alpha,\quad x\in\R^d. \end{equation} The parameter $c$ is often called the {\em shape parameter} of the multiquadric. If $\alpha\in\R\setminus\N_0$ ($\N_0$ being the natural numbers including 0), the generalized Fourier transform of $\phi_{\alpha,c}$ is given by the following (see, for example, \cite[Theorem 8.15]{Wendland}): \[ \phica(\xi)=\dfrac{2^{1+\alpha}}{\Gamma(-\alpha)}\left(\dfrac{c}{|\xi|}\right)^{\alpha+\frac{d}{2}}K_{\alpha+\frac{d}{2}}(c|\xi|),\quad \xi\in\R^d\setminus\{0\}, \] where $K_\nu$ is the modified Bessel function of the second kind (see \cite[p.376]{AandS} for its precise definition).
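For readers who wish to experiment numerically, the following minimal sketch (ours, not part of the original analysis; it presumes NumPy and SciPy, and the name \texttt{phi\_hat} is our own) evaluates this generalized Fourier transform directly from the formula above.
\begin{verbatim}
import numpy as np
from scipy.special import kv, gamma

def phi_hat(xi_norm, alpha, c, d):
    # Generalized Fourier transform of phi_{alpha,c} for alpha not in N_0,
    # evaluated at |xi| = xi_norm > 0 (a direct transcription of the formula
    # above; it is singular at the origin and should be called with xi_norm != 0).
    nu = alpha + d / 2.0
    return 2.0 ** (1 + alpha) / gamma(-alpha) \
        * (c / xi_norm) ** nu * kv(nu, c * xi_norm)
\end{verbatim}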
A few properties germane to our analysis here are that $K_\nu$ has an algebraic singularity at the origin and exponential decay away from the origin. We note that $\phi_{\alpha,c}$, and consequently its Fourier transform, are radial functions (i.e. $\phi_{\alpha,c}(x)=\varphi_{\alpha,c}(|x|)$ for some univariate function $\varphi_{\alpha,c}$). Many of the results presented here pertain to {\em principal shift-invariant} spaces which are subspaces of $L_p$. Following \cite{AG}, these can be defined as $$V_p(\psi):=\left\{\sum_{j\in\Z^d}c_j\psi(\cdot-j):(c_j)\in\ell_p\right\},$$ where convergence of the series is taken to be in $L_p$. The function $\psi$ is often called the {\em generator} (or window, or kernel) of the shift-invariant space. Note also that in some of the literature, the space is defined to be the closed linear span of $\{\psi(\cdot-j):j\in\Z^d\}$ in $L_p$; however, for sufficiently nice generators, the definitions coincide. Additionally, shift-invariant spaces may be defined for more general lattices; specifically, we will make use of $V_p(\psi,h\Z^d):=\{\sum_{j\in\Z^d}c_j\psi(\cdot-hj):(c_j)\in\ell_p\}$ in the sequel. \section{Main Results}\label{SECMain} For ease of viewing, the main results are all contained in this section. We begin by setting some definitions. A function $L:\R^d\to\R$ is a {\em cardinal function} provided $L(k)=\delta_{0,k}$, for all $k\in\Z^d$ (these are also often called Lagrange functions or fundamental functions in the literature). There are cardinal functions associated with all manner of radial basis functions, and one common construction is to define them via their Fourier transforms. The primary concern here is the cardinal function associated with the general multiquadric; to wit, for a fixed $\alpha\in\R\setminus\N_0$ and $c>0$, define \[ \widehat{L_{\alpha,c}}(\xi):=\dfrac{\phica(\xi)}{\zsumd{j}\phica(\xi+2\pi j)},\quad\xi\in\R^d\setminus\{0\}. \] It was shown in \cite{HammLedford} that $\widehat{L_{\alpha,c}}\in L_1\cap L_2(\R^d)$; it follows that $L_{\alpha,c}:=\widehat{L_{\alpha,c}}^\vee$ is continuous, square-integrable, and a cardinal function (see Section 3 therein). Suppose $g\in W_p^k(\R^d)$ with $k>d/p$ (thus pointwise evaluation of $g$ is well-defined, since $g$ has a continuous representative by the Sobolev embedding theorem). Let $h\in(0,1]$, and fix $\alpha\in(-\infty,-d-1/2)\cup[1/2,\infty)\setminus\N$. Then formally define the multiquadric interpolant of $g$ via \[ I_\alpha^h g(x):=\sum_{j\in\Z^d}g(hj)L_{\alpha,\frac1h}\left(\frac{x}{h}-j\right),\quad x\in\R^d. \] Presuming the interpolant is well-defined, it is evident that it satisfies $I_\alpha^hg(hk)=g(hk)$, $k\in\Z^d$, since $L_{\alpha,\frac1h}$ is a cardinal function. In \cite[Theorem 8]{HammLedford}, it was shown that the univariate interpolation operator $I_\alpha^h$ is bounded from $\ell_p(\Z)$ to $L_p(\R)$ for every $1\leq p\leq\infty$ and $\alpha$ in the restricted range specified above. However, the argument for higher dimensions is identical; indeed, in the course of the proof there, the authors essentially use the techniques of Jia and Micchelli \cite{JM} (see also \cite{Johnson}). Consequently, since $g\in W_p^k(\R^d)$ implies that $(g(hj))_{j\in\Z^d}\in\ell_p$, it follows that $I_\alpha^hg\in L_p$, and particularly that $I_\alpha^h g\in V_p(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)$. It should be noted that approximation in such families of spaces has been studied extensively by Johnson and others (for example, \cite{Johnson}).
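As an illustration only (a sketch of ours, not taken from \cite{HammLedford}), the definition of $\widehat{L_{\alpha,c}}$ can be realized numerically in $d=1$ by truncating the periodization in the denominator; the code below assumes the function \texttt{phi\_hat} from the previous sketch, and the truncation level is our own choice of parameter.
\begin{verbatim}
import numpy as np

def L_hat(xi, alpha, c, truncation=50):
    # Fourier transform of the multiquadric cardinal function in d = 1:
    # the ratio of phi_hat to its 2*pi-periodization, with the sum truncated
    # at |j| <= truncation.  Evaluation points should avoid 2*pi*Z, where
    # phi_hat itself is singular (the ratio has removable singularities there).
    num = phi_hat(np.abs(xi), alpha, c, d=1)
    den = sum(phi_hat(np.abs(xi + 2 * np.pi * j), alpha, c, d=1)
              for j in range(-truncation, truncation + 1))
    return num / den
\end{verbatim}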
The family of subspaces $\{V_p(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)\}_{h>0}$ is termed in \cite{Johnson} a {\em nonstationary ladder} of principal shift-invariant spaces. In addition, the interested reader is referred to \cite{HoltzRon} for discussion of principal shift-invariant subspaces of $W_2^k(\R^d)$ and their approximation orders. \subsection{Structural Results} Before discussing the main interpolation results, we pause to mention some facts about the shift-invariant spaces associated with the multiquadric cardinal functions and how they relate to the spaces of translates of the multiquadrics themselves. Often, RBF interpolation schemes begin by trying to find interpolants to a given class of functions from the closed linear span of translates of the RBF itself (e.g. the multiquadric), where the closure is taken, for example, in the topology of uniform convergence on compact subsets of $\R^d$. In cardinal interpolation, the cardinal functions often serve as a change of basis in the spirit of classical Lagrange polynomial interpolation, and one has $L$ as an element of the closed linear span of $\{\phi(\cdot-j):j\in\Z^d\}$. The current analysis begins from the opposite vantage point: it considers interpolation from $V_p(L_{\alpha,c})$ and discusses how such spaces relate to their counterparts arising from $\phi_{\alpha,c}$. The techniques here allow for a conclusion to be made about the relation of these spaces in many instances, but are not strong enough in others (primarily for positive $\alpha$ which is not of the form $k-1/2$ for $k\in\N$). The following two theorems demonstrate that for sufficiently negative $\alpha$, we may classify the structure of the cardinal functions and the decay rate of their coefficients, as well as showing equality of the associated shift-invariant spaces $V_p(L_{\alpha,c})$ and $V_p(\phi_{\alpha,c})$. \begin{theorem}\label{L_coeff_bnd} Suppose that $\alpha<-d-1/2$ and $c>0$. Then $L_{\alpha,c}$ has a series representation of the form \[ L_{{\alpha,c}}(x)=\sum_{j\in\Z^d} a_j \phi_{\alpha,c}(x-j), \] where \[ |a_j|=\begin{cases} O(|j|^{-\lfloor 2|\alpha|-d\rfloor}) & \alpha\notin\Z, \\ O(|j|^{-2|\alpha|+d+1}) & \alpha\in\Z. \end{cases} \] In particular, $a\in\ell_1$ and the series is uniformly convergent. \end{theorem} \begin{theorem}\label{InterSpace} Suppose that $\alpha<-d-1/2$. Then for all $1\leq p\leq \infty$ and $c>0$, \[ V_p(L_{\alpha,c})= V_p(\phi_{\alpha,c}). \] Consequently, for all $h>0$, \[ V_p(L_{\alpha, 1/h}(\cdot/h),h\Z^d)= V_p(\phi_{\alpha,1/h}(\cdot/h),h\Z^d). \] \end{theorem} Note that if $\alpha>0$, the space $V_p(\phi_{\alpha,c})$ is not well-defined since $\phi_{\alpha,c}$ is unbounded. Nonetheless, in certain cases a similar result holds concerning the space \[ S(\phi):=\overline{\text{span}}\{ \phi(\cdot-j):j\in\mathbb{Z}^d\}, \] where the closure is taken in the topology of uniform convergence on compact sets. If we take the spatial dimension $d$ to be odd and $\alpha=(2k-1)/2$ for some $k\in\N$, then the estimates from \cite[Theorem 1]{BuhmannMichelli} imply that the cardinal function satisfies \[ L_{\alpha,c}(x)=O\left((1+|x|)^{-3d-4k+2}\right), \] which allows us to follow the outline of the proofs of the previous theorems to conclude that \[ V_p(L_{\alpha,c})\subseteq S(\phi_{\alpha,c}). \] However, the Fourier analytic arguments in the sequel seem insufficient to prove this containment for more general $\alpha>0$.
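As a numerical sanity check on Theorem \ref{L_coeff_bnd} (our sketch, under the assumptions that $d=1$, that $\alpha<-3/2$, and that \texttt{phi\_hat} from the earlier sketch is available; the name \texttt{cardinal\_coeffs} is our own), the coefficients $a_j$ can be approximated as Fourier coefficients of the periodic function $P_\alpha=\bigl(\sum_{j}\widehat{\phi_{\alpha,c}}(\cdot+2\pi j)\bigr)^{-1}$ appearing in the proof of Theorem \ref{L_coeff_bnd} below.
\begin{verbatim}
import numpy as np

def cardinal_coeffs(js, alpha, c, n_grid=4096, truncation=50):
    # Approximate a_j = (2*pi)^{-1} * int_{-pi}^{pi} P_alpha(xi) e^{i j xi} dxi
    # by a midpoint rule; the offset grid avoids the singular points 2*pi*Z
    # of phi_hat.  Intended for alpha < -3/2, c > 0, d = 1.
    xi = -np.pi + 2 * np.pi * (np.arange(n_grid) + 0.5) / n_grid
    den = sum(phi_hat(np.abs(xi + 2 * np.pi * k), alpha, c, d=1)
              for k in range(-truncation, truncation + 1))
    P = 1.0 / den
    # P is real and even, so the coefficients are real.
    return [np.mean(P * np.exp(1j * j * xi)).real for j in js]
\end{verbatim}
For instance, with $\alpha=-5/2$ and $c=1$, the computed $|a_j|$ should decay like $|j|^{-4}$, in agreement with the first case of Theorem \ref{L_coeff_bnd}.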
The proofs of the preceding theorems require some more detailed estimates of the decay of the cardinal functions, and so are postponed until Section \ref{SECProofs}. \begin{rem} It should be noted that the decay conditions on the cardinal functions (cf. Corollary \ref{CORmultiplierinversebounds}) imply that $\{L_{\alpha,\frac1h}(\cdot/h-j):j\in\Z^d\}$ is a Riesz basis for $V_2(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)$, so it follows on account of \cite[Theorem 2.4]{AG} that $\{L_{\alpha,\frac1h}(\cdot/h-j):j\in\Z^d\}$ is an unconditional basis for $V_p(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)$, and moreover that the latter is a closed subspace of $L_p$. Consequently, the same is true when $L$ is replaced by $\phi$, for $1\leq p\leq\infty$ and $\alpha<-d-1/2$, on account of Theorem \ref{InterSpace}. \end{rem} \subsection{Interpolation and Approximation Rates} To begin the discussion of interpolation of $g\in W_p^k(\R^d)$ by $I_\alpha^hg\in V_p(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)$, first note that existence of the interpolant is given by the boundedness of $I_\alpha^h$ as an operator from $\ell_p$ to $L_p$ (discussed above). Secondly, uniqueness follows from the definition of a cardinal function (i.e. $\sum_{j\in\Z^d}c_jL_{\alpha,\frac1h}(\frac{\cdot}{h}-j)=0$ if and only if $c_j=0$ for all $j$). From here on, for a given $g$ in the Sobolev space, $I_\alpha^h g$ is taken to be the unique interpolant in $V_p(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)$, where it is understood that for $\alpha<-d-1/2$ and $1\leq p\leq\infty$, this must be the same as the unique interpolant from $V_p(\phi_{\alpha,\frac1h}(\cdot/h),h\Z^d)$ by Theorem \ref{InterSpace}. In the interpolation results that follow, the constants $C$ will be independent of $h$ provided that we consider a range $0<h<h_0$ for some fixed, but arbitrary, $h_0$. However, as the behavior we are primarily interested in is that for small $h$, we state the results for $0<h\leq1$ without loss of generality, and simply alert the reader here that this is not strictly necessary. \begin{theorem}\label{THMmaintheorem} Suppose $\alpha\in(-\infty,-d-1/2)\cup[1/2,\infty)\setminus\N$ is fixed. Let $1<p<\infty$, $k>d/p$, and $0<h\leq1$. There exists a constant $C$, independent of $h$, so that for every $g\in W_p^k(\R^d)$, $$\|I_\alpha^hg-g\|_{L_p}\leq Ch^k\|g\|_{W_p^k}.$$ If $p=1$ and $k> d$, or $p=\infty$ and $k\in\N$, there is a constant $C$, independent of $h$, so that for every $g\in W_p^k(\R^d)$, $$\|I_\alpha^hg-g\|_{L_p}\leq C(1+|\ln h|)h^k\|g\|_{W_p^k}.$$ \end{theorem} The proof of the above theorem follows from an indirect argument which considers interpolation of bandlimited functions $f\in E_\frac{\pi+\eps}{h}\cap W_p^k(\R^d)$, which themselves interpolate the Sobolev functions at $h\Z^d$. This argument follows the insightful techniques of \cite{hmnw}. The following lemma shows that this interpolation of Sobolev functions by bandlimited ones is stable by providing Jackson and Bernstein type inequalities. \begin{lemma}[\cite{hmnw}, Lemma 2.2]\label{LEMhmnwapproxbandlimited} Let $0<\varepsilon<\pi$, $1\leq p\leq\infty$, and $k>d/p$. If $g\in W^k_p$, then given $0<h\leq1$, there is a function $f\in E_\frac{\pi+\eps}{h}\cap W_p^k$ satisfying \begin{equation}\label{EQbandlimitedinterpcondition} f(hj) = g(hj),\quad j\in\Z^d, \end{equation} \begin{equation}\label{EQBernstein} \|f-g\|_{L_p}\leq Ch^k|g|_{W_p^k}, \end{equation} and \begin{equation}\label{EQJackson} |f|_{W_p^k}\leq C|g|_{W_p^k}, \end{equation} where $C$ is a constant independent of $h$ and $g$.
\end{lemma} The next key ingredient to the proof of Theorem \ref{THMmaintheorem} is the following, which shows that the interpolation operators $(I_\alpha^h)_{h\in(0,1]}$ are uniformly bounded in the Sobolev seminorm. \begin{theorem}\label{THMstabilinterpolation} Let $\alpha\in(-\infty,-d-1/2)\cup[1/2,\infty)\setminus\N$ be fixed, and let $1<p<\infty$, $0<h\leq1$, and $k>d/p$. There exists a constant $C$ such that for every suitably small $\eps>0$, \begin{equation}\label{EQseminormuniformbound} |I_\alpha^hf|_{W_p^k}\leq C\|f\|_{W_p^k},\qquad f\in E_\frac{\pi+\eps}{h}\cap W_p^k(\R^d).\end{equation} For $p=1$ and $k> d$, or $p=\infty$ and $k\in\N$, there is a constant $C$ such that \begin{equation}\label{EQseminormuniformboundendpoints}|I_\alpha^hf|_{W_p^k}\leq C(1+|\ln h|)\|f\|_{W_p^k},\qquad f\in E_\frac{\pi+\eps}{h}\cap W_p^k(\R^d).\end{equation} \end{theorem} The constants in Theorems \ref{THMmaintheorem} and \ref{THMstabilinterpolation} will depend on $\alpha, p, k$, and $d$, but not on $h$. The proof of Theorem \ref{THMmaintheorem} is now immediate. \begin{proof}[Proof of Theorem \ref{THMmaintheorem}] Suppose $g\in W_p^k(\R^d)$ for $1<p<\infty$, and let $f\in E_\frac{\pi+\eps}{h}\cap W_p^k$ be the function provided by Lemma \ref{LEMhmnwapproxbandlimited}. Then on account of \eqref{EQbandlimitedinterpcondition}, $I_\alpha^hg = I_\alpha^hf$, and so $\|I_\alpha^hg-g\|_{L_p}\leq \|I_\alpha^hf-f\|_{L_p}+\|f-g\|_{L_p}$. The latter term is bounded by $Ch^k|g|_{W_p^k}$ due to \eqref{EQBernstein}. To estimate the first term, applying a bound due to Madych and Potter \cite[Corollary 1]{MadychPotter} on the norm of $W_p^k$ functions with closely spaced zeros provides the estimate $$\|I_\alpha^hf-f\|_{L_p}\leq Ch^k|I_\alpha^hf-f|_{W_p^k}\leq Ch^k\left(|I_\alpha^hf|_{W_p^k}+|f|_{W_p^k}\right),$$ whence applying \eqref{EQJackson} and \eqref{EQseminormuniformbound} and combining the above estimates yields the desired inequality. The proof for $p=1,\infty$ is identical but for applying \eqref{EQseminormuniformboundendpoints} in the final step rather than \eqref{EQseminormuniformbound}. \end{proof} The proof of Theorem \ref{THMstabilinterpolation} is technical and postponed to later sections. \section{The multiquadric multiplier}\label{SECMultiplier} \subsection{A Note on Fourier Multipliers} For the subsequent analysis, it is pertinent to stop for a moment and collect some properties of {\em Fourier multiplier operators}. Let $m$ be a measurable function. Then we define the linear multiplier operator $T_m$ by $$T_m f := (m\widehat{f})^\vee,$$ which, in the event that the convolution theorem holds, is $$T_m f = m^\vee\ast f.$$ Now {\em a priori}, it is not clear how this operator is even defined on $L_p$ for general $p$, so at the moment, consider $T_m f$ defined as above for Schwartz functions $f$. Supposing that there is a constant such that $$\|T_m f\|_{L_p}\leq C\|f\|_{L_p},\qquad f\in\schwartz,$$ then by density, $T_m$ extends to a bounded linear operator on $L_p$, and moreover $$\|T_m f\|_{L_p}\leq C\|f\|_{L_p},\qquad f\in L_p.$$ Since we will be considering the same definition for the multiplier and estimating its multiplier norm for different values of $p$, we define the $p$--multiplier norm of $T_m$ in the natural way: $$\|T_m\|_{\Mp}:=\|m\|_{\Mp}:=\underset{\|f\|_{L_p}=1}{\sup}\|T_m f\|_{L_p}.$$ Notice that if $m^\vee\in L_1$, then $T_m$ is a bounded linear operator on $L_p$. 
Indeed, by Young's Inequality, $$\|T_m f\|_{L_p} = \|m^\vee\ast f\|_{L_p}\leq\|m^\vee\|_{L_1}\|f\|_{L_p}.$$ It follows that \begin{equation}\label{EQmultiplieronenormbound} \|m\|_{\Mp}\leq\|m^\vee\|_{L_1}. \end{equation} However, we can (and will) also make use of the estimate \begin{equation}\label{EQmultiplierconvolutioninequality} \|m^\vee\ast f\|_{L_p}\leq\|m\|_{\Mp}\|f\|_{L_p}. \end{equation} \subsection{The Multiquadric Multiplier} We now define the Fourier multiplier whose operator norm will govern much of the analysis in the sequel. Let \begin{equation}\label{EQmultiplierdef} m_{\alpha,h}(\xi):=\widehat{L_{\alpha,\frac{1}{h}}}(h\xi),\quad \xi\in\R^d. \end{equation} Therefore, we have \begin{equation}\label{EQmultiplierinverseFTdef} m_{\alpha,h}^\vee(x) = \dfrac{1}{h^d}L_{\alpha,\frac{1}{h}}\left(\frac{x}{h}\right),\quad x\in\R^d. \end{equation} In what follows, we consider interpolation of functions $f\in E_\frac{\pi+\eps}{h}\cap W_p^k(\R^d)$ on account of Lemma \ref{LEMhmnwapproxbandlimited}. First, note that \eqref{EQmultiplierinverseFTdef} implies that the interpolant of a function $f$ may be expressed as $I_\alpha^h f(x)=h^d\sum_{j\in\Z^d}f(hj)m_{\alpha,h}^\vee(x-hj)$. Consider the following formal calculation of the Fourier transform of $I_\alpha^hf$: \begin{align}\label{EQIhFourierCalc} \widehat{I_\alpha^hf}(\xi) & = h^d\left[\zsumd{j}f(hj)m_{\alpha,h}^\vee(\cdot-hj)\right]^\wedge(\xi)\nonumber\\ & = h^d\zsumd{j}f(hj)e^{-ih\bracket{j,\xi}}m_{\alpha,h}(\xi)\nonumber\\ & = \zsumd{j}\widehat{f}\left(\xi-\frac{2\pi j}{h}\right)m_{\alpha,h}(\xi). \end{align} In the case $p=2$, the above calculation is completely justified by the Poisson summation formula since $f$ is in the classical Paley--Wiener space. However, for general $p\neq2$, the above formula needs to be taken distributionally. Indeed, denote the exponential function as $e_x:=e^{i\langle x, \cdot\rangle}$, and the translation operator on tempered distributions via $\bracket{\tau_xT,\psi}:=\bracket{T,\psi(\cdot-x)}$. Recalling that $I_\alpha^hf\in L_p$ and thus induces a well-defined tempered distribution, the formal calculation above is as follows: \begin{align}\label{EQIFourierDistribution} \widehat{I_\alpha^hf} & = h^d\left[\zsumd{j}f(hj)m_{\alpha,h}^\vee(\cdot-hj)\right]^\wedge\nonumber\\ & = h^d\zsumd{j}f(hj)e_{-hj}m_{\alpha,h}\nonumber\\ & = \zsumd{j}\tau_{\frac{2\pi j}{h}}\widehat{f}m_{\alpha,h}. \end{align} Justification of this identity is somewhat more complicated. The main idea is that the right-hand side of the second equality of \eqref{EQIFourierDistribution} is a periodic tempered distribution times an integrable (but not infinitely differentiable) function $m_{\alpha,h}$. The action of such an object on a test function is not well-defined; however, $m_{\alpha,h}$ may be convolved with a standard $C^\infty$ mollifier that is an $L_1$ approximate identity, and then the right-hand side defines a tempered distribution after taking a limit as the approximate identity parameter goes to $\infty$. Following this, the final equality stems from the Poisson summation formula for compactly supported tempered distributions \cite[Corollary 8.5.1]{fjoshi}, while the second equality is justified by the fact that the series in question converges in $L_p$, and hence in $\schwartz'$. A complete proof is given in the appendix.
In the sequel, we will use \eqref{EQIhFourierCalc} (which is an abuse of notation in the $p\neq2$ case but should cause no confusion), and consider $\mah$ as a Fourier multiplier acting on $L_p(\R^d)$. The multiplier norm of $\mah$ for different values of $p$ will determine the behavior of the seminorm of $I_\alpha^hf$ since \eqref{EQIFourierDistribution} implies that $I_\alpha^hf = T_{m_{\alpha,h}}(\sum_{j\in\Z^d}\tau_\frac{2\pi j}{h}\widehat{f})^\vee$ in the multiplier notation of the previous subsection. Use of Fourier multipliers to provide the seminorm estimates comes from the techniques of \cite{hmnw}. However, unlike the Gaussian and its associated multiplier, the general multiquadrics in several variables are not tensor products of their univariate counterparts. Consequently, to determine the properties of $m_{\alpha,h}$, it is not sufficient to consider the case $d=1$. Nonetheless, due to the radial nature of the multiquadrics, we may still find sufficient estimates on the decay of the multiplier to prove Theorem \ref{THMstabilinterpolation}.
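To make the multiplier concrete, the following sketch (ours; the names \texttt{m\_alpha\_h} and \texttt{apply\_multiplier} are our own, and the discrete FFT computation is only a surrogate for the operator on $\R^d$) evaluates $m_{\alpha,h}$ in $d=1$ through \texttt{L\_hat} from the earlier sketch and applies a multiplier on a periodic grid.
\begin{verbatim}
import numpy as np

def m_alpha_h(xi, alpha, h, truncation=50):
    # m_{alpha,h}(xi) = L_hat_{alpha,1/h}(h*xi) in d = 1, reusing L_hat from
    # the earlier sketch; xi should avoid the removable singularities of the
    # ratio on (2*pi/h)*Z.
    return L_hat(h * xi, alpha, 1.0 / h, truncation)

def apply_multiplier(f_samples, m, T):
    # Discrete surrogate for T_m f = (m * f_hat)^vee: samples of f on a uniform
    # grid over [0, T) are transformed, multiplied by m at the DFT frequencies,
    # and transformed back.  (Our discretization, for illustration only.)
    n = len(f_samples)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=T / n)
    xi[0] = 1e-9  # nudge the zero frequency off the singular set
    return np.fft.ifft(m(xi) * np.fft.fft(f_samples)).real
\end{verbatim}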
\subsection{Estimates for $m_{\alpha,h}$}\label{SECmestimates} To use \eqref{EQmultiplieronenormbound}, it suffices to obtain bounds on the function $m_{\alpha,h}$. The calculations closely resemble those used in \cite{HammLedford} to derive estimates for the cardinal functions $L_{\alpha,c}$. For now, we restrict our attention to positive values of $\alpha$, as the calculations are essentially the same for negative values. Throughout the rest of this section, our calculations will be helped by the fact that \eqref{EQmultiplierdef} may be rewritten as \[ m_{\alpha,h}(\xi)=\dfrac{\widehat{\phi_{\alpha,1}}(\xi) }{\sum_{j\in\mathbb{Z}^d} \widehat{\phi_{\alpha,1}}(\xi+\frac{2\pi j}{h})} = \dfrac{\widehat{\varphi_{\alpha,1}}(|\xi |)}{\sum_{j\in\mathbb{Z}^d} \widehat{\varphi_{\alpha,1}}(|\xi+\frac{2\pi j}{h}|) }, \] where $\varphi$ is the univariate function associated to $\phi$, i.e. $\phi(x)=\varphi(|x|)$. To estimate the behavior of $m_{\alpha,h}^\vee$, it suffices to estimate the $L_1$ norms of derivatives of $m_{\alpha,h}$, which, by use of the Leibniz rule in the formula above, requires estimates on derivatives of $\widehat{\varphi_{\alpha,1}}(|\cdot|)$ and its reciprocal. This study is taken up presently. Since the shape parameter is always 1 in the sequel, we drop the subscript from the subsequent estimates and remind the reader that the constant $C$ below will typically depend on $\alpha,d,$ and the multi-index $\gamma$, but not on $h$. As a matter of notation, we say that $\beta\leq\gamma$ for multi-indices $\beta$ and $\gamma$ provided $\beta_j\leq\gamma_j$ for all $j$. Now for any multi-index $\gamma$ with $\multi{\gamma} \geq 1$, \begin{equation}\label{chain} D^{\gamma}\widehat{\phi_{\alpha}}( \xi )= \sum_{ \{\beta: \beta\leq \gamma,\, \multi{\beta}\geq 1\}}a_\beta\widehat{\varphi_{\alpha}}^{(\multi{\beta})}(| \xi |) \Omega_{\multi{\beta}-\multi{\gamma}}(\xi), \end{equation} where $\Omega_l$ is a homogeneous function of degree $l$ (i.e. $\Omega_l(r\xi) = r^l\Omega_l(\xi)$ for any $r>0$). In fact, these homogeneous functions are simply combinations of (partial) derivatives of the function $\xi\mapsto| \xi |$. One may prove this by first noting that if $\multi{\gamma}=1$, then \eqref{chain} holds by the chain rule (for instance, $D^{e_i}\widehat{\phi_{\alpha}}(\xi)=\widehat{\varphi_{\alpha}}'(|\xi|)\,\xi_i/|\xi|$, where $\xi_i/|\xi|$ is homogeneous of degree $0$), and then proceeding by induction on $\multi{\gamma}$. By \eqref{chain} and the nature of $\Omega_l$, there exists a constant $C>0$ such that \begin{equation}\label{est1} | D^{\gamma}\widehat{\phi_{\alpha}}( \xi ) | \leq C | \xi |^{-\multi{\gamma}} \sum_{j=1}^{\multi{\gamma}}| \xi |^{j} \left|\widehat{\varphi_{\alpha}}^{(j)}(| \xi |)\right|. \end{equation} Having found an upper bound for the derivative in terms of a univariate function, we may now recycle the estimates from Section 7 of \cite{HammLedford} by replacing $\alpha$ with $\alpha+(d-1)/2$ and taking $c=1$. For instance, for $1\leq\multi{\gamma} < 2\alpha+d$, there is a constant $C>0$ such that \begin{equation}\label{est2} |D^\gamma \widehat{\phi_{\alpha}}(\xi)|\leq C | \xi |^{-2\alpha-d-\multi{\gamma}} e^{-|\xi|}. \end{equation} Similarly, for $\multi{\gamma}=2\alpha+d$, \begin{equation}\label{est3} |D^\gamma \widehat{\phi_{\alpha}}(\xi)|\leq C \left\{ e^{-|\xi|}|\xi|^{-2\alpha-d}\ln(1+|\xi|^{-1}) + e^{-| \xi |} | \xi |^{-4\alpha-2d} \right\}. \end{equation} The previous estimates are Equations (48) and (49) in \cite{HammLedford}. Now a calculation analogous to \eqref{chain} for the reciprocal yields (for $\multi{\gamma}\geq 1$) \[ D^{\gamma}\left( 1/\widehat{\phi_{\alpha}} \right)(\xi ) = \sum_{ \{\beta: \beta\leq \gamma,\, \multi{\beta}\geq 1\}}a_\beta \left( 1/\widehat{\varphi_{\alpha}} \right)^{(\multi{\beta})}(|\xi |)\Omega_{\multi{\beta}-\multi{\gamma}}(\xi), \] thus we obtain an estimate similar to \eqref{est1}: \[ \left| D^{\gamma}\left( 1/\widehat{\phi_{\alpha}} \right)(\xi )\right |\leq C| \xi |^{-\multi{\gamma}}\sum_{j=1}^{\multi{\gamma}} | \xi |^{j} \left\vert\left( 1/\widehat{\varphi_{\alpha}} \right)^{(j)}(|\xi |)\right\vert. \] For $1\leq \multi{\gamma}< 2\alpha+d$, we obtain \begin{equation}\label{est5} | D^{\gamma}\left( 1/\widehat{\phi_{\alpha}} \right)(\xi ) |\leq Ce^{|\xi|}| \xi |^{2\alpha+d -\multi{\gamma}}. \end{equation} When $\multi{\gamma}=2\alpha+d$, a logarithmic term appears in \eqref{est3} which must be handled separately. In this case, we obtain from \cite[Eqs. (50) and (51)]{HammLedford} \begin{equation}\label{est6} \left|\dfrac{D^{\gamma}\widehat{\phi_{\alpha}}(\xi)}{(\widehat{\phi_{\alpha}}(\xi))^2}\right| \leq C e^{|\xi |} \left\{ | \xi|^{2\alpha+d}\ln(1+|\xi|^{-1})+1\right\}\leq C e^{|\xi|}. \end{equation} The term on the left of \eqref{est6} appears after applying the Leibniz rule to the term $(1/\widehat{\varphi_{\alpha}})^{(\multi{\gamma})}(|\xi|)$. This term is the only one containing the logarithm from \eqref{est3}, while the other terms are estimated using \eqref{est5}. Note that these estimates rely only on the order of the derivative $\multi{\gamma}$ and on $\alpha$. Following Riemenschneider and Sivakumar \cite{RiemSiva}, for $j\neq 0$, define $a_j(\xi):= \widehat{\phi_{\alpha}}( \xi+\tfrac{2\pi j}{h} )/ \widehat{\phi_{\alpha}}(\xi )$ and $s(\xi):=\sum_{j\neq 0}a_j(\xi)$; then $m_{\alpha,h}=(1+s)^{-1}$. Our estimates once again rely on univariate estimates; however, we will also make use of the infinity norm $\|\xi \|_{\infty}:=\max\{|\xi_1|,\dots,|\xi_d|\}$. Recall that \begin{equation}\label{norm equiv} d^{-1/2}| \xi | \leq \| \xi \|_{\infty} \leq | \xi |. \end{equation}
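We also record, for convenience, a one-line consequence of the $\frac{2\pi}{h}\Z^d$-periodicity of the denominator of $m_{\alpha,h}$, which is used repeatedly below: writing $\xi=\frac{2\pi j}{h}+r$ with $r\in[-\pi/h,\pi/h]^d$, \[ m_{\alpha,h}(\xi)=\frac{\widehat{\phi_{\alpha}}\left(\frac{2\pi j}{h}+r\right)}{\sum_{k\in\Z^d}\widehat{\phi_{\alpha}}\left(r+\frac{2\pi k}{h}\right)}=a_j(r)\,m_{\alpha,h}(r). \]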
To estimate $\| D^\gamma m_{\alpha,h} \|_{L_1}$, we split $\mathbb{R}^d$ into three regions based on $j\in\mathbb{Z}^d$: \begin{enumerate} \item[I.] $\| \xi \|_\infty \leq \pi/h$, \item[II.] $\xi \in 2\pi j/h + [-\pi/h,\pi/h]^d$, $\quad | j |=1$, and \item[III.] $\xi \in 2\pi j/h + [-\pi/h,\pi/h]^d$, $\quad | j |>1$. \end{enumerate} These regions need further refinement to avoid the faces of the cube $[-\pi/h, \pi/h]^d$, on which we have no precise estimates on the cardinal functions except the trivial bound $|\widehat{L_{\alpha,c}}(\xi)|\leq1$, which holds for all $\xi$. The estimates for these regions are corollaries of the following lemma. \begin{lemma}\label{LEM_Daj_ests} Suppose that $\alpha>0$, $1\leq \multi{\gamma}\leq 2\alpha+d$, $0<h\leq1$, and $\varepsilon\in [0,1)$. If $\| \xi \|_\infty \leq (1-\varepsilon)\pi/h$, then \[ |D^{\gamma}a_j(\xi)|\leq C h^{\multi{\gamma}}e^{-\varepsilon\pi/(\sqrt{d}h)}\begin{cases}1, & |j|=1, \\ e^{-2\pi|j|/(3dh)}, & |j|>1,\end{cases} \] where $C>0$ is independent of $h$ and is uniformly bounded for $\varepsilon\in[0,1)$. \end{lemma} \begin{proof} By using \eqref{est2}, \eqref{est3}, \eqref{est5}, and \eqref{est6}, we have for $1\leq \multi{\gamma}\leq 2\alpha +d$, \begin{equation}\label{a_j} |D^{\gamma}a_j (\xi)| \leq C e^{| \xi |-| \xi+\frac{2\pi j}{h} |}\sum_{\{\beta: \beta\leq \gamma,\, \multi{\beta}\geq 1\}}\dfrac{| \xi |^{2\alpha+d-\multi{\beta}}}{\left| \xi+\frac{2\pi j}{h} \right|^{2\alpha+d-\multi{\beta}+\multi{\gamma}}}\;. \end{equation} Notice that, by \eqref{norm equiv}, the summands in \eqref{a_j} satisfy \begin{align*} \dfrac{| \xi |^{2\alpha+d-\multi{\beta}}}{\left| \xi+\frac{2\pi j}{h} \right|^{2\alpha+d-\multi{\beta}+\multi{\gamma}}} \leq & h^{\multi{\gamma}}\dfrac{\sqrt{d}((1-\varepsilon)\pi)^{2\alpha+d-\multi{\beta}}}{((1+\varepsilon)\pi)^{2\alpha+d-\multi{\beta}+\multi{\gamma}}} \\ \leq & Ch^{\multi{\gamma}}, \end{align*} where $C>0$ is a constant independent of $h$. This being the desired estimate for the sum, we turn our attention to the exponential term in \eqref{a_j}, noting that the exponent may be written as \begin{equation}\label{exponent} | \xi |-\left| \xi+\frac{2\pi j}{h} \right| = \dfrac{-4(\pi/h)\left((\pi/h)| j |^2+\langle \xi, j\rangle \right)}{| \xi|+|\xi+\frac{2\pi j}{h} | }. \end{equation} Next, separate the nonzero multi-integers into two cases: (i) $\| j \|_\infty =1$, and (ii) $\| j \|_\infty =M >1 $. In case (i), the expression on the right-hand side of \eqref{exponent} never changes sign because $\| \xi \|_\infty \leq (1-\varepsilon)\pi/h$. Thus, if $j$ has $1\leq k\leq d$ non-zero components, we obtain \[ \dfrac{-4(\pi/h)\left((\pi/h)| j |^2+\langle \xi, j\rangle \right)}{| \xi|+|\xi+\frac{2\pi j}{h} | } \leq \dfrac{-k\varepsilon \pi}{\sqrt{d}h} \leq \dfrac{-\varepsilon \pi}{\sqrt{d}h}. \] Hence, \begin{equation}\label{j=1} |D^\gamma a_j(\xi)|\leq C h^{\multi{\gamma}} e^{-(\varepsilon \pi)/(\sqrt{d}h)}. \end{equation} In case (ii), the following holds by similar reasoning to that in case (i): \begin{align*} \dfrac{-4(\pi/h)\left((\pi/h)| j |^2+\langle \xi, j\rangle \right)}{| \xi|+|\xi+\frac{2\pi j}{h} | } \leq &\dfrac{-2(\pi/h)\left( (M^2-M)+ \varepsilon M \right)}{\sqrt{d}(M+1-\varepsilon)} \\ \leq & \dfrac{-2\pi M}{3\sqrt{d}h}+ \dfrac{-\varepsilon\pi}{\sqrt{d}h}. \end{align*} Thus we have \begin{align*} |D^\gamma a_j(\xi)|\leq & C h^{\multi{\gamma}}e^{-(\varepsilon \pi)/(\sqrt{d}h)}e^{-(2\pi \| j \|_\infty)/(3\sqrt{d}h)} \\ \leq & C h^{\multi{\gamma}}e^{-(\varepsilon \pi)/(\sqrt{d}h)}e^{-(2\pi | j |)/(3dh)} .
\end{align*} \end{proof} \begin{corollary}\label{Dm_est1} Suppose that $\alpha>0$, $1\leq \multi{\gamma}\leq 2\alpha+d$, $0<h\leq1$, and $\varepsilon\in [0,1)$. If $\| \xi \|_\infty \leq (1-\varepsilon)\pi/h$, then \[ |D^{\gamma}m_{\alpha,h}(\xi)|\leq C h^{\multi{\gamma}}e^{-\varepsilon\pi/(\sqrt{d}h)}, \] where $C>0$ is independent of $h$. \end{corollary} \begin{proof} From Lemma \ref{LEM_Daj_ests}, we see that \begin{align*} |D^\gamma s(\xi)|\leq & C h^{\multi{\gamma}} e^{-(\varepsilon \pi)/(\sqrt{d}h)} \left[ (3^d-1)+ \sum_{|j|\geq 2} e^{-(2\pi | j|)/(3dh)} \right]\\ \leq & C h^{\multi{\gamma}}e^{-(\varepsilon\pi)/(\sqrt{d}h)}; \end{align*} by noting that $s(\xi)\geq 0$ and applying the Leibniz rule, we find that \[ |D^\gamma m_{\alpha,h}(\xi)|\leq C h^{\multi{\gamma}}e^{-(\varepsilon\pi)/(\sqrt{d}h)}. \] \end{proof} \begin{corollary}\label{Dm_est2} Suppose that $\alpha>0$, $1\leq \multi{\gamma}\leq 2\alpha+d$, and $0<h\leq1$. If $ \xi\in 2\pi j/h+[-\pi/h,\pi/h]^d$, where $|j|>1$, then \[ |D^{\gamma}m_{\alpha,h}(\xi)|\leq C h^{\multi{\gamma}}e^{-2\pi|j|/(3dh)}, \] where $C>0$ is independent of $h$. \end{corollary} \begin{proof} We can write $\xi=2\pi j/h + r$, where $r\in [-\pi/h,\pi/h]^d$, and we have \[ m_{\alpha,h}(\xi) = a_{j}(r)m_{\alpha,h}(r). \] Consequently, we can use the estimates in region I (Lemma \ref{LEM_Daj_ests}) with $\varepsilon=0$ and the Leibniz rule to obtain \[ |D^\gamma m_{\alpha,h}(\xi)|\leq C h^{\multi{\gamma}} e^{-(2\pi |j|)/(3dh)}. \] \end{proof} As mentioned before, on a face of the cube $[-\pi/h,\pi/h]^d$, we must be more careful due to the lack of bounds on the Fourier transform of the cardinal function there. To avoid these faces, we introduce the following regions when $|j|=1$: \begin{align*} R_{i+}&:=\left\{(\xi_1,\dots,\xi_d):-(1-\varepsilon)\pi/h\leq \xi_i \leq 3\pi/h,\ |\xi_k|\leq\pi/h,\ k\neq i \right\} \text { and }\\ R_{i-}&:=\left\{ (\xi_1,\dots,\xi_d):-3\pi/h\leq\xi_i\leq -(1-\varepsilon)\pi/h,\ |\xi_k|\leq\pi/h,\ k\neq i \right\}. \end{align*} Now, for $j$ with $|j|=1$ and non-zero $i^{th}$ component, we define $R_j$ to be $R_{i+}$ or $R_{i-}$ according to the sign of that component. \begin{corollary}\label{Dm_est3} Suppose that $\alpha>0$, $1\leq \multi{\gamma}\leq 2\alpha+d$, $0<h\leq1$, and $\varepsilon\in [0,1)$. If $|j|=1$ and $\xi \in R_{j}$, then \[ |D^{\gamma}m_{\alpha,h}(\xi)|\leq C h^{\multi{\gamma}}e^{-\varepsilon\pi/(\sqrt{d}h)}, \] where $C>0$ is independent of $h$. \end{corollary} \begin{proof} This estimate follows similarly to that for Corollary \ref{Dm_est2}, except that we must be careful when a component of $\xi$ approaches a face of the cube $[-\pi/h,\pi/h]^d$. We will outline what to do in the case that $j=e_1=(1,0,\dots,0)$ and omit the other cases since they are handled in a completely similar manner. Consider $\xi\in [(1+\varepsilon)\pi/h,3\pi/h]\times[-\pi/h,\pi/h]^{d-1}$. We may write $\xi=2\pi e_1/h +r $, where $r\in [-(1-\varepsilon)\pi/h,\pi/h]\times[-\pi/h,\pi/h]^{d-1}$, and again we have \[ m_{\alpha,h}(\xi)=a_{e_1}(r)m_{\alpha,h}(r). \] For this region, the estimate in \eqref{j=1} still holds, so we may apply the Leibniz rule to obtain \[ |D^\gamma m_{\alpha,h}(\xi)|\leq C h^{\multi{\gamma}}e^{-(\varepsilon\pi)/(\sqrt{d}h)}. \] \end{proof} Combining these estimates yields the main result in this section. \begin{theorem}\label{DmL1} Suppose that $\alpha>0$, $0<h\leq 1$, and $1\leq \multi{\gamma}\leq 2\alpha+d$. Then there exists a constant $C>0$, independent of $h$, such that \[ \| D^\gamma m_{\alpha,h} \|_{L_1(\mathbb{R}^d)}\leq C h^{\multi{\gamma}}.
\] \end{theorem} \begin{proof} For $1\leq \multi{\gamma} \leq 2\alpha+d$, by combining the estimates from Corollaries \ref{Dm_est1}, \ref{Dm_est2}, and \ref{Dm_est3}, and letting $\kappa:=\min\left\{\dfrac{\varepsilon\pi}{\sqrt{d}h},\dfrac{2\pi}{3dh} \right\} $, we have \[ \| D^\gamma m_{\alpha,h} \|_{L_1(\mathbb{R}^d)}\leq C\sum_{j\in\Z^d} h^{\multi{\gamma}-d}e^{-\kappa|j|}. \] Estimating this sum via the integral definition of the Gamma function yields the result. \end{proof} The corresponding result for sufficiently negative $\alpha$ is the following (the analogues of the univariate estimates from \cite{HammLedford} follow by replacing $\alpha$ with $|\alpha|-1$, as noted in Eq. (44) therein). \begin{theorem}\label{-DmL1} Suppose that $\alpha<-d-1/2$, $0<h\leq 1$, and $1\leq \multi{\gamma}<2|\alpha|-d$. Then there exists a constant $C>0$, independent of $h$, such that \[ \| D^\gamma m_{\alpha,h} \|_{L_1(\mathbb{R}^d)}\leq Ch^{\multi{\gamma}}. \] \end{theorem} Theorems \ref{DmL1} and \ref{-DmL1}, together with \eqref{EQmultiplierinverseFTdef} and the fact that $|L_{\alpha,c}(x)|\leq1$, provide the following estimates for $m_{\alpha,h}^\vee$. \begin{corollary}\label{CORmultiplierinversebounds} Suppose $\alpha\in(-\infty,-d-1/2)\cup[1/2,\infty)\setminus\N$ and $0<h\leq1$. There exists a constant $C>0$, independent of $h$, such that $$|m_{\alpha,h}^\vee(x)|\leq C\min\left\{\frac{1}{h^d},\frac{1}{|x|^d},\frac{1}{|x|^{d+1}}\right\}.$$ \end{corollary} \subsection{The Multiplier Norm of $m_{\alpha,h}$} We now consider the behavior of $\|m_{\alpha,h}\|_{\Mp}$ for different values of $p$. This will aid the analysis in proving the stability of the interpolation operator $I^h_\alpha$. \begin{theorem}\label{THMmultiplierbounded} Suppose that $\alpha\in (-\infty,-d-1/2)\cup [1/2,\infty)\setminus \mathbb{N} $ and $0<h\leq1$. Then for each $1<p<\infty$, there exists a constant $C>0$ such that $$\|m_{\alpha,h}\|_{\mathcal{M}_p}\leq C.$$ If $p=1,\infty$, then there exists a constant $C>0$ such that $$\|m_{\alpha,h}\|_{\Mp}\leq C\left(1+|\ln h|\right).$$ \end{theorem} \begin{proof} First, notice that $\|m_{\alpha,h}\|_{\mathcal{M}_1} = \|m_{\alpha,h}\|_{\mathcal{M}_\infty}\leq \|m_{\alpha,h}^\vee\|_{L_1(\R^d)}$. By Corollary \ref{CORmultiplierinversebounds}, the stated upper bound for $\| m_{\alpha,h}^\vee \|_{L_1(\mathbb{R}^d)}$ is obtained as follows: \begin{align*} \|m_{\alpha,h}^\vee\|_{L_1(\R^d)}\leq\; &C\left[\int_{0}^{h} h^{-d}r^{d-1}dr + \int_{h}^{1} r^{-1}dr + \int_{1}^{\infty} r^{-2}dr \right]\\ \leq\; & C\left( 1 + |\ln(h)| \right). \end{align*} If $1<p<\infty$, the conclusion of the theorem follows directly from the Mikhlin multiplier theorem \cite[Theorem 2, p. 232]{Mikhlin}, which states that if $\underset{x\in\R^d}\sup|x|^{[\gamma]}|D^\gamma m_{\alpha,h}(x)|\leq C$ for every $[\gamma]\leq d$, then $\|m_{\alpha,h}\|_{\mathcal{M}_p}\leq C$. This bound follows directly from the estimates in the previous subsections. \end{proof} \section{Proofs of Main Theorems}\label{SECProofs} In this section, we present the proofs of the main theorems in Section \ref{SECMain}, excepting Theorem \ref{THMmaintheorem}, which was already proven there. \subsection{Proofs of Structural Theorems} \begin{proof}[Proof of Theorem \ref{L_coeff_bnd}] Let $c>0$ be fixed; we drop the subscript for ease of notation. Begin by defining the periodic symbol \[ P_\alpha(\xi):=(\widehat{\phi_\alpha}(\xi))^{-1}\widehat{L_{\alpha}}(\xi)=\left(\sum_{j\in\Z^d} \widehat{\phi_\alpha}(\xi+2\pi j) \right)^{-1}.
\] From \eqref{est5}, \eqref{est6}, and Theorem \ref{-DmL1}, we see that $P_\alpha$ has derivatives of up to order $k$ in $L_1(\T^d)$, where $k<2|\alpha|-d$, and $\T^d$ is the $d$--dimensional torus, which may be identified with $[-\pi,\pi)^d$. Since $P_\alpha\in L_1(\T^d)$, it may be identified with its Fourier series, and we have $\widehat{L_\alpha}(\xi)=\widehat{\phi_\alpha}(\xi)\sum_{j\in\Z^d}a_je^{-i\bracket{j,\xi}},$ where \[ a_j=\frac{1}{(2\pi)^d}\int_{\T^d}P_\alpha(\xi)e^{i\langle \xi,j\rangle}d\xi. \] The series representation of $L_\alpha$ is immediate, and we can use integration by parts and the periodicity of $P_\alpha$ to see the decay rates of the coefficients $a_j$. These estimates together with the decay of $\phi_\alpha$ allow us to conclude that the series is uniformly convergent. \end{proof} The following lemma will be useful for the proof of Theorem \ref{InterSpace}. Before stating it, let us note that $A(\T^d)$ is the Wiener algebra of $L_1(\T^d)$ functions with absolutely summable Fourier coefficients. \begin{lemma}\label{LEMAT} Let $c>0$ and $\alpha<-d-1/2$ be fixed. Let $P_\alpha$ be the function defined in the proof of Theorem \ref{L_coeff_bnd}. Then $P_\alpha$ and $1/P_\alpha$ are in $A(\T^d)$. \end{lemma} \begin{proof} Note that Theorem \ref{L_coeff_bnd} and its proof imply that $P_\alpha\in A(\T^d)$. Consequently, if $P_\alpha(\xi)$ is bounded away from 0 on $\T^d$, then $P_\alpha$ satisfies the conditions of Wiener's $1/f$ Theorem, and $1/P_\alpha\in A(\T^d)$. Thus, it suffices to demonstrate that for fixed $\alpha<-d-1/2$ and $c>0$, $\sum_{j\in\Z^d}|\widehat{\phi_{\alpha,c}}(\xi+2\pi j)|\leq C$, uniformly for $\xi\in\T^d$. First, note that for this range of $\alpha$, $\phi_{\alpha,c}\in L_1\cap L_2(\R^d)$, which implies that $\vert\widehat{\phi_{\alpha,c}}(\xi)\vert\leq C$ for all $\xi$. Thus, it suffices to bound $\sum_{j\neq0}|\widehat{\phi_{\alpha,c}}(\xi+2\pi j)|$. Note that the quantity in question is majorized by $|\widehat{\phi_{\alpha,c}}(\xi)|\sum_{j\neq0}e^{-c(|\xi+2\pi j|-|\xi|)}$ on account of the estimates in \cite[Section 5.1]{Wendland} (see also \cite[Lemma 1]{HammLedford}; however, the rational term appearing there should be replaced by 1 due to a typographical error in the definition of $K_\nu$ therein). Finally, this series is uniformly bounded for $\xi\in\T^d$ (see, for example, the proof of \cite[Proposition 2.2]{Baxter}), whence the proof is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{InterSpace}] To begin, we show that $V_p(L_{\alpha,c})\subset V_p(\phi_{\alpha,c}).$ On account of Theorem \ref{L_coeff_bnd}, we may write $f\in V_p(L_{\alpha,c}) $ as \[ f(x)=\sum_{j\in\Z^d}b_j \sum_{k\in\Z^d}a_k\phi_{\alpha,c}(x-j-k), \] where $(a_k)\in\ell_1$ are the Fourier coefficients of $P_\alpha$, and $(b_j)\in\ell_p$. The following calculation justifies the use of Fubini's Theorem, allowing us to switch the order of summation: \begin{align*} &\sum_{j,k\in\Z^d}|b_j a_k||\phi_{\alpha,c}(x-j-k)|\\ &=\sum_{k\in\Z^d}|a_k|\sum_{j\in\Z^d}|b_j||\phi_{\alpha,c}(x-j-k)|\\ &\leq \| a\|_{\ell_1}\|b \|_{\ell_p} \| (\phi_{\alpha,c}(x-\cdot) ) \|_{\ell_q}, \end{align*} where we have used H\"{o}lder's inequality. The last term is bounded uniformly in $x$ on account of the decay of $\phi_{\alpha,c}$ and the fact that $ a \in\ell_1$. 
Then we have, by re-indexing, \begin{align*} f(x)&=\sum_{k\in\Z^d} \sum_{j\in\Z^d}a_kb_j\phi_{\alpha,c}(x-j-k)\\ &=\sum_{m\in\Z^d}\left(\sum_{j\in\Z^d}b_j a_{m-j}\right)\phi_{\alpha,c}(x-m)\\ &=\sum_{m\in\Z^d}(a*b)_m \phi_{\alpha,c}(x-m). \end{align*} Now the discrete version of Young's convolution inequality implies that $a*b\in\ell_p$, hence $f\in V_p(\phi_{\alpha,c})$. The statement about $V_p(L_{\alpha,\frac1h}(\cdot/h),h\Z^d)$ follows from a simple dilation argument. To see the reverse inclusion $V_p(\phi_{\alpha,c})\subset V_p(L_{\alpha,c})$, let $f(x) = \sum_{j\in\Z^d}b_j\phi_{\alpha,c}(x-j)$, and let $(d_k)$ be the Fourier coefficients of $1/P_\alpha$, which lie in $\ell_1$ on account of Lemma \ref{LEMAT}. The same calculation as above verifies that $$f(x) = \sum_{m\in\Z^d}(d\ast b)_mL_{\alpha,c}(x-m),$$ where the only change needed is that, in the justification of the use of Fubini's Theorem, we need $\|(L_{\alpha,c}(x-\cdot))\|_{\ell_q}$ to be finite. Note that it suffices to bound the $\ell_1$ norm of the sequence in question; on account of Corollary \ref{CORmultiplierinversebounds} and \eqref{EQmultiplierinverseFTdef}, we have $$\|(L_{\alpha,c}(x-\cdot))\|_{\ell_1} = O\left(\sum_{j\in\Z^d}\frac{1}{1+|x-j|^{d+1}}\right)=O(1),$$ where the implicit constant is independent of $x$. Note also that the univariate proof of this bound is \cite[Proposition 6]{HammLedford}. \end{proof} \subsection{Proof of Theorem \ref{THMstabilinterpolation} - Univariate Case}\label{SECstabilityunivariate} We may now prove Theorem \ref{THMstabilinterpolation}. Since the essence of the argument is contained in the univariate proof, we carefully consider the behavior of the interpolation operator $I^h_\alpha$ acting on functions in $E_\frac{\pi+\eps}{h}\cap W_p^k(\R).$ The changes necessary to complete the multivariate version follow in Section \ref{SECstabilitymultivariate}. We begin with a formula that follows similarly to \eqref{EQIhFourierCalc}: \begin{equation}\label{EQIhDerivativeFourierCalc} \widehat{\left(I_\alpha^hf\right)^{(k)}}(\xi) = (i\xi)^k\left[\zsum{j}\widehat{f}\left(\xi-\frac{2\pi j}{h}\right)\right]m_{\alpha,h}(\xi) =:\zsum{j}\widehat{G_{k,j}}(\xi).\end{equation} We define $G_{k,j}:=D^k[(e^{2\pi i(\cdot)j/h}f)\ast m_{\alpha,h}^\vee]$, and notice that the convolution theorem holds as $f$ is bandlimited (see, for example, \cite[Theorem 8.4.2]{fjoshi}), and so $\widehat{G_{k,j}}$ as defined by \eqref{EQIhDerivativeFourierCalc} (cf. \eqref{EQTemp}) is indeed the distributional Fourier transform of $G_{k,j}$. We will prove that $\zsum{j}\|G_{k,j}\|_{L_p}<\infty$. This implies that $\left(I_\alpha^hf\right)^{(k)} = \zsum{j}G_{k,j}$ in $\schwartz'$, and $|I_\alpha^hf|_{W_p^k}\leq\zsum{j}\|G_{k,j}\|_{L_p}.$ We break these estimates into three parts: $j=0$, $j=\pm1$, and $|j|>1$.
{\em Case $j=0$}: By \eqref{EQmultiplierconvolutioninequality}, $\|D^kf\ast m_{\alpha,h}^\vee\|_{L_p}\leq|f|_{W_p^k}\|m_{\alpha,h}\|_{\Mp}.$ So by Theorem \ref{THMmultiplierbounded}, $$\|G_{k,0}\|_{L_p}\leq\left\{ \begin{array}{ll} C|f|_{W_p^k} & 1<p<\infty\\ C(1+|\ln h|)|f|_{W_p^k} & p=1,\infty. \end{array}\right.$$ {\em Case $|j|>1$}: Let $\nu$ be a non-negative $C^\infty$ bump function such that $\nu(\xi)=0$ whenever $|\xi|>2\varepsilon$, $\nu(\xi)=1$ whenever $|\xi|<\varepsilon$, and $\nu(-\xi)=\nu(\xi)$ for all $\xi$. We use $\nu$ to construct another bump function, $\psi$, with support in $[-\pi-2\varepsilon,\pi+2\varepsilon]$ satisfying $\psi(t) = 1$ for $-\pi\leq t\leq\pi$, and $\psi(-t)=\psi(t)=\nu(t-\pi)$ for $t\geq\pi$. Now since $f$ is bandlimited with band $\frac{\pi+\varepsilon}{h}$, we may write $$\widehat{G_{k,j}}(\xi) = \widehat{f}\left(\xi-\frac{2\pi j}{h}\right)(i\xi)^k\psi\left(h\left(\xi-\frac{2\pi j}{h}\right)\right)m_{\alpha,h}(\xi),$$ because on the support of $\widehat{f}\left(\cdot-\frac{2\pi j}{h}\right)$, we have $\psi\left(h\left(\xi-\frac{2\pi j}{h}\right)\right) = 1$. Now define the function $$\rho(\xi):=\rho_{k,j}(\xi):=(i\xi)^k\psi\left(h\left(\xi-\frac{2\pi j}{h}\right)\right)m_{\alpha,h}(\xi).$$ Using \eqref{EQmultiplieronenormbound} and \eqref{EQmultiplierconvolutioninequality}, we may bound $\|G_{k,j}\|_{L_p}$ above by $\|\rho^\vee\|_{L_1}\left\|\left(\widehat{f}\left(\cdot-\frac{2\pi j}{h}\right)\right)^\vee\right\|_{L_p}$ by viewing $\rho_{k,j}$ as an $L_p$ multiplier operator. We now estimate $\|\rho^\vee\|_{L_1}$. \begin{lemma}\label{LEMtauestimate} Let $k\in\N$ and $\alpha\in[1/2,\infty)$. There is a constant $C$ depending on $\eps$ and $k$ such that the following hold: \begin{itemize} \item[(i)] If $|j|\geq3$, then $$\|\rho_{k,j}^\vee\|_{L_1}\leq C\left|\frac{\pi(2j+1)+2\eps}{h}\right|^{k+1}h^2e^{-\frac{2\pi}{3h}(|j|-1)}.$$ \item[(ii)] If $|j|=2$, then $$\|\rho_{k,j}^\vee\|_{L_1}\leq C\left|\frac{\pi(2j+1)+2\eps}{h}\right|^{k+1}h^2e^{-\frac{\pi}{2h}}.$$ \end{itemize} \end{lemma} \begin{proof} As $\rho$ is continuous and square-integrable, we may make use of the fact that if $$\max\left\{\dint_\R |\rho''(\xi)|d\xi,\dint_\R|\rho(\xi)|d\xi\right\}\leq M,$$ then $|\rho^\vee(x)|\leq\dfrac{M}{\left(1+|x|^2\right)}$, whence $\|\rho^\vee\|_{L_1}\leq \pi M$. Notice first that due to the support of $\psi$, $\rho_{k,j}(\xi)=0$ outside the interval $$I_j:=\left[\frac{(2j-1)\pi-2\eps}{h},\frac{(2j+1)\pi+2\eps}{h}\right].$$ Consequently, $$\dint_\R\left|\rho''_{k,j}(\xi)\right|d\xi \leq 2\left(\frac{\pi+2\eps}{h}\right)\left\|\dfrac{d^2}{d\xi^2}\left[\xi^k\psi\left(h\xi-2\pi j\right)m_{\alpha,h}(\xi)\right]\right\|_{L_\infty(I_j)}.$$ The second derivative term in the $L_\infty$ norm above is equal to $$\sum_{\multi{\gamma}=2}C_\gamma D^{\gamma_1}(\xi^k)D^{\gamma_2}\left(\psi\left(h\xi-2\pi j\right)\right)D^{\gamma_3}(m_{\alpha,h}(\xi)),$$ where $\gamma=(\gamma_1,\gamma_2,\gamma_3)$ is a multi-index, and $C_\gamma = \frac{2!}{\gamma_1!\gamma_2!\gamma_3!}$. Notice that $|D^{\gamma_1}(\xi^k)|\leq C|\xi|^{k-\gamma_1}\leq C\left|\frac{\pi(2j+1)+ 2\eps}{h}\right|^k$, and also that $$\left\|D^{\gamma_2}\left(\psi\left(h\xi-2\pi j\right)\right)\right\|_{L_\infty(I_j)}\leq Ch^{\gamma_2},$$ since derivatives of $\psi$ are again $C^\infty$ functions. Consequently, the dominating terms will be those of the form $\xi^k\psi(h\xi-2\pi j)m_{\alpha,h}''(\xi).$ It remains then to consider how large $\|m_{\alpha,h}''(\xi)\|_{L_\infty(I_j)}$ can be.
From Corollary \ref{Dm_est2}, we find that if $|j|\geq 3$, then \[ \|m_{\alpha,h}''\|_{L_\infty(I_j)}\leq C h^2 e^{-\frac{2\pi(|j|-1)}{3h}}. \] Similarly, if $|j|=2$, then Corollary \ref{Dm_est3} with $\eps=1/2$ implies \[ \|m_{\alpha,h}''\|_{L_\infty(I_j)}\leq C h^2e^{-\frac{\pi}{2h}}. \] The conclusion of the lemma follows from combining the above estimates. \end{proof} Now for the terms corresponding to $j=\pm1$, we must be more careful, because the estimates in Section \ref{SECmestimates} do not give uniform bounds (in terms of $h$) near the boundary points $\pm\pi/h$. Nonetheless, we may make a slight modification to the above argument to achieve our purposes. If $j=1$, then we decompose the bump function $\psi$ as the sum of two bump functions. If $\nu$ is the original bump function considered in the case of $|j|>1$, then let $\psi(\xi) = \omega(\xi)+\nu(\xi+\pi)$, where $\omega$ is a bump function whose support lies in $[-\pi+\eps,\pi+2\eps]$, and $\omega(\xi)=1$ whenever $\xi\in[-\pi+2\eps,\pi+\eps]$. Note that this requires $\omega(\xi)+\nu(\xi+\pi)=1$ on the small overlapping interval $[-\pi+\eps,-\pi+2\eps]$, but this is no problem. If $j=-1$, then make a simple modification and write $\psi(\xi)=\overset{\sim}\omega(\xi)+\nu(\xi-\pi)$, where $\overset{\sim}\omega$ is defined with asymmetric support in $[-\pi-2\eps,\pi+\eps]$. Since the proof is the same, we deal only with the case $j=1$. Similar to the previous case, we may write \begin{equation}\label{EQGk1} \widehat{G_{k,1}}(\xi) = \widehat{f}\left(\xi-\frac{2\pi }{h}\right)\sigma_{k}(\xi) + \widehat{f}\left(\xi-\frac{2\pi }{h}\right)\widetilde\sigma_{k}(\xi), \end{equation} where $$\sigma_{k}(\xi) := (i\xi)^k\omega\left(h\xi-2\pi \right)m_{\alpha,h}(\xi),$$ and $$\widetilde\sigma_{k}(\xi) := (i\xi)^k\nu\left(h\xi-\pi \right)m_{\alpha,h}(\xi).$$ Let us first analyze $\widetilde\sigma_{k}$ by expanding $\xi^k$ binomially about the point $\frac{2\pi}{h}$. We obtain \begin{align}\label{EQsigma} \widetilde\sigma_{k}(\xi) &= i^k\finsum{l}{0}{k}\binom{k}{l}\left(\frac{2\pi}{h}\right)^{k-l}\left(\xi-\frac{2\pi}{h}\right)^{l}\nu\left(h\xi-\pi\right)m_{\alpha,h}(\xi)\nonumber\\ & = i^k\left(\xi-\frac{2\pi}{h}\right)^km_{\alpha,h}(\xi)\finsum{l}{0}{k}\binom{k}{l}\left(\frac{2\pi}{h}\right)^{k-l}\frac{\nu(h\xi-\pi)}{(\xi-\frac{2\pi}{h})^{k-l}}\nonumber\\ & =: i^k\left(\xi-\frac{2\pi}{h}\right)^km_{\alpha,h}(\xi)\mu_k(\xi). \end{align} Consider $\mu_k$ as a Fourier multiplier, and note that \begin{equation}\label{EQMu}\|\mu_k\|_{\mathcal{M}_p}\leq\finsum{l}{0}{k}\binom{k}{l}(2\pi)^{k-l}\left\|\left(\dfrac{\nu(\cdot-\pi)}{(\cdot-2\pi)^{k-l}}\right)^\vee\right\|_{L_1}\leq C,\end{equation} where $C$ is a constant depending only on $k$. This follows because the function $\nu(\cdot-\pi)/(\cdot-2\pi)^{k-l}$ belongs to $\schwartz$, since the support of $\nu(\cdot-\pi)$ is bounded away from the point $2\pi$. Of course, the case $j=-1$ is essentially the same, and we obtain the same upper bound on the modified version of $\mu_k$. We now turn to the multiplier $\sigma_{k}$ for $j=\pm1$. \begin{lemma}\label{LEMgammaestimate} Let $k\in\N$ and $\alpha\in[1/2,\infty)$. If $j=\pm1$, then there is a constant $C$ depending on $\eps$ and $k$ such that the following holds.
$$\|\sigma_{k}^\vee\|_{L_1}\leq C\left|\frac{\pi(2j+1)+2\eps}{h}\right|^{k+1}h^2e^{-\frac{\pi\eps}{h}}.$$ Consequently, there is a constant $C$, independent of $h$, such that $$\|\sigma_{k}^\vee\|_{L_1}\leq C.$$ \end{lemma} \begin{proof} Mimic the proof of Lemma \ref{LEMtauestimate}, except note that in this case, $\supp(\omega)\subset[-\pi+\eps,\pi+2\eps]$, and thus we need to estimate $\|m''_{\alpha,h}\|_{L_\infty(\tilde{I}_j)}$, where $\tilde{I}_j:=[\frac{\pi+\eps}{h},\frac{3\pi+2\eps}{h}].$ This term is majorized by $Ch^2e^{-\frac{\eps\pi}{h}}$ on account of Corollary \ref{Dm_est3}, which provides the stated bound. \end{proof} We are now ready to supply the remainder of the proof. \begin{proof}[Proof of Theorem \ref{THMstabilinterpolation}] First let us consider the case when $1<p<\infty$ and $\alpha\in[1/2,\infty)$. By the discussion at the beginning of this section, we find that \begin{align}\label{EQIhseminormbound} |I_\alpha^hf|_{W_p^k} &\leq\zsum{j}\|G_{k,j}\|_{L_p}\\ & = \|G_{k,0}\|_{L_p}+\underset{|j|=1}\dsum\|G_{k,j}\|_{L_p}+\underset{|j|\geq2}\dsum\|G_{k,j}\|_{L_p}\nonumber\\ & =: \Sigma_1+\Sigma_2+\Sigma_3.\nonumber \end{align} We already saw that $\Sigma_1\leq C|f|_{W_p^k}$. It follows from Theorem \ref{THMmultiplierbounded}, Lemma \ref{LEMgammaestimate}, \eqref{EQGk1}, \eqref{EQsigma}, and \eqref{EQMu}, that \begin{align}\label{EQSigma2bound} \Sigma_2 &\leq2\left(\left\|\left(i^k\left(\cdot-\frac{2\pi}{h}\right)^k\widehat{f}\left(\cdot-\frac{2\pi}{h}\right)\right)^\vee\right\|_{L_p}\|m_{\alpha,h}\|_{\mathcal{M}_p}\|\mu_k\|_{\mathcal{M}_p}+\|f\|_{L_p}\|\sigma_{k}^\vee\|_{L_1}\right)\nonumber\\ & \leq C\|f\|_{W_p^k}. \end{align} Now for the final term, Lemma \ref{LEMtauestimate} implies the following: \begin{align*} \Sigma_3 &\leq \|f\|_{L_p}\left[\underset{|j|=2}\dsum\|\rho_{k,j}^\vee\|_{L_1}+\underset{|j|\geq3}\dsum\|\rho_{k,j}^\vee\|_{L_1}\right]\nonumber\\ & \leq C\|f\|_{L_p}+C\|f\|_{L_p}\underset{|j|\geq3}\dsum\left|\frac{\pi(2j+1)+2\eps}{h}\right|^{k+1}h^2e^{-\frac{2\pi}{3h}(|j|-1)}. \end{align*} The sum over $j$ on the right-hand side is majorized by a constant (depending on $\eps$ and $k$) times $h^{-k+1}e^{-\frac{2\pi}{3h}}$, which in turn is bounded by a constant depending only on $k$. Consequently, \begin{equation}\label{EQSigma3bound} \Sigma_3\leq C\|f\|_{L_p}. \end{equation} Combining \eqref{EQIhseminormbound}, \eqref{EQSigma2bound}, and \eqref{EQSigma3bound} provides the theorem. Whenever $\alpha\in(-\infty,-d-1/2)$, the only change is in the use of the analogues of Lemmas \ref{LEMtauestimate} and \ref{LEMgammaestimate}. But the bound will still be some power of $h$ times a decaying exponential, and so the corresponding series from \eqref{EQIhseminormbound} will satisfy the same upper bound up to a different constant $C$. The proof when $p=1,\infty$ for either range of $\alpha$ is the same except that $\Sigma_1,\Sigma_2\leq C(1+|\ln h|)\|f\|_{W_p^k}$ due to the estimate on $\|m_{\alpha,h}\|_{\mathcal{M}_1}=\|m_{\alpha,h}\|_{\mathcal{M}_\infty}$ (Theorem \ref{THMmultiplierbounded}).
\end{proof}
\subsection{Proof of Theorem \ref{THMstabilinterpolation} - Multivariate Case}\label{SECstabilitymultivariate}
Similar to the univariate case, if $f\in E_\frac{\pi+\eps}{h}\cap W_p^k(\R^d)$, then we have, for $\multi{\gamma}\leq k$,
\[
\widehat{D^\gamma I_\alpha^h f}(\xi) = i^{\multi{\gamma}}\xi^\gamma\left[\zsumd{j}\widehat{f}\left(\xi-\frac{2\pi j}{h}\right)\right]\mah(\xi) =:\zsumd{j}\widehat{G_{\gamma,j}}(\xi),
\]
with $G_{\gamma,j} = D^\gamma\left[\left(fe^{i\bracket{\frac{2\pi j}{h},\cdot}}\right)\ast\mah^\vee\right].$ Consequently, $|I_\alpha^hf|_{W_p^k}\leq\underset{\multi{\gamma}= k}\sum\;\underset{j\in\Z^d}\sum\|G_{\gamma,j}\|_{L_p}$. Thus we estimate the norms of $G_{\gamma,j}$ for the different values of the given parameters.

{\em Case} $j=0$: By \eqref{EQmultiplierconvolutioninequality} and Theorem \ref{THMmultiplierbounded}, we have
\begin{displaymath}
\|G_{\gamma,0}\|_{L_p}\leq \|\mah\|_{\Mp}|f|_{W_p^k}\leq \left\{ \begin{array}{lll}
C|f|_{W_p^k} & 1<p<\infty\\
C(1+|\ln h|)|f|_{W_p^k} & p=1,\infty.\\
\end{array}\right.
\end{displaymath}
{\em Case} $|j|>1$: Let $\nu$ and $\psi$ be the smooth bump functions from Section \ref{SECstabilityunivariate}, and let $\widetilde{\psi}(\xi) = \psi(|\xi|)$, which will be a smooth bump function with support in the ball $B(0,\pi+2\eps)$ taking the value 1 on $B(0,\pi+\eps)$. Since $\supp(\widehat{f})\subset B\left(0,\frac{\pi+\eps}{h}\right)$, we have
$$\widehat{G_{\gamma,j}}(\xi) = \widehat{f}\left(\xi-\frac{2\pi j}{h}\right)i^{\multi{\gamma}}\xi^\gamma\widetilde{\psi}\left(h\xi-2\pi j\right)\mah(\xi) =:\widehat{f}\left(\xi-\frac{2\pi j}{h}\right)\rho_{\gamma,j}(\xi).$$
By \eqref{EQmultiplieronenormbound} and \eqref{EQmultiplierconvolutioninequality}, $\|G_{\gamma,j}\|_{L_p}$ is majorized by $\|\rho_{\gamma,j}^\vee\|_{L_1}\left\|\left(\widehat{f}\left(\cdot-\frac{2\pi j}{h}\right)\right)^\vee\right\|_{L_p}$ by viewing $\rho_{\gamma,j}$ as an $L_p$ multiplier operator. We now estimate $\|\rho_{\gamma,j}^\vee\|_{L_1}$. Recall that if $\underset{\multi{\beta}\leq d+1}\sum\|D^\beta\rho_{\gamma,j}\|_{L_1}\leq M$, then $|\rho_{\gamma,j}^\vee(x)|\leq\frac{M}{1+|x|^{d+1}}$, and thus $\|\rho_{\gamma,j}^\vee\|_{L_1}\leq CM$. Since $\supp(\rho_{\gamma,j})\subset B_j:=B(\frac{2\pi j}{h},\frac{\pi+2\eps}{h})$, we have
$$\dint_{\R^d}\left|D^\beta\rho_{\gamma,j}(\xi)\right|d\xi\leq\left|\frac{\pi+2\eps}{h}\right|^d\left\|D^\beta\left(\xi^\gamma\widetilde{\psi}\left(h\xi-2\pi j\right)\mah(\xi)\right)\right\|_{L_\infty(B_j)}.$$
We may write the term on the right as the supremum of the absolute value of the following expression:
$$\underset{\beta_1+\beta_2+\beta_3 = \beta}\sum C_{\beta_1,\beta_2,\beta_3}D^{\beta_1}(\xi^\gamma)D^{\beta_2}\left(\widetilde{\psi}\left(h\xi-2\pi j\right)\right)D^{\beta_3}\mah(\xi).$$
Now we estimate each term in the series above.
First, there is a constant $C$ depending on $\beta_1$ and $\gamma$ such that $|D^{\beta_1}(\xi^\gamma)|\leq C|\xi|^{\multi{\gamma}-\multi{\beta_1}}\leq C\left|\frac{\pi(2|j|+1)+2\eps}{h}\right|^{\multi{\gamma}}.$ Since $\psi$ is $C^\infty,$ there is a constant $C$ depending only on $\multi{\beta_2}$ such that $|D^{\beta_2}\widetilde{\psi}(h(\xi-\frac{2\pi j}{h}))|\leq Ch^{\multi{\beta_2}}.$ Finally, by Corollary \ref{Dm_est2}, there is a constant $C$ such that
$$\|D^{\beta_3}\mah(\xi)\|_{L_\infty(B_j)}\leq Ch^{\multi{\beta_3}}e^{-\frac{2\pi(|j|-1)}{3dh}}.$$
The appearance of $|j|-1$ in the exponential term rather than $|j|$ is due to the fact that the support of $\widetilde{\psi}$ overlaps multiple cubes, and thus an overlap into a cube corresponding to a smaller value of $|j|$ is possible. We note that in some cases when $\|j\|_\infty=1$, this may cause an overlap of the support into cubes with $|j|=1$, but the final estimate above holds nonetheless by applying Corollary \ref{Dm_est3} with $\eps = \frac{2(|j|-1)}{3\sqrt{d}}$, which is indeed less than 1. Note too that we need the additional assumption that the ball $B_j$ does not extend outside of any face of codimension larger than 1 of the cube $\left[-\frac{\pi(2j+1)+2\eps}{h},\frac{\pi(2j+1)+2\eps}{h}\right]^d$. If this were not the case, then in $d=2$, for example, the ball corresponding to $j=(1,1)$ would have part of its support in the cube centered about the origin, which is undesirable since we have no uniform control on derivatives of $\mah$ on the boundary of this cube. Similarly, in $d=3$, the ball corresponding to $j=(1,1,0)$ could overlap the line segment connecting $B_j$ to the cube at the origin. However, if we assume that $\eps\leq \frac{(\sqrt{2}-1)\pi}{2}$, then such an overlap will not occur. We note that the assumption on $\eps$ being small is not strictly necessary, but without it the estimates break into many more cases, which would unnecessarily complicate the proof.

Combining the estimates above provides the bound
$$\|\rho_{\gamma,j}^\vee\|_{L_1}\leq C\left|\dfrac{\pi(2|j|+1)+2\eps}{h}\right|^{\multi{\gamma}+d}h^{d+1}e^{-\frac{2\pi(|j|-1)}{3dh}}.$$
{\em Case} $|j|=1$: Since the end result is the same for each multi-integer $j$ of length 1, we restrict our attention to the case $j=e_1=(1,0,\dots,0)$ without loss of generality. In this case, the ball $B_{e_1}$ containing the support of $\widehat{f}(\cdot-\frac{2\pi e_1}{h})$ overlaps one of the faces of the cube $[-\frac{\pi}{h},\frac{\pi}{h}]^d$, where we do not have uniform bounds (in $h$) for $\mah$. Consequently, we cannot execute an argument similar to that in the case $|j|>1$. However, let $\psi$, $\nu$, and $\omega$ be the smooth bump functions from Section \ref{SECstabilityunivariate}, and define $\widetilde{\psi}(\xi):=\psi(\xi_1)\dots\psi(\xi_d)$, $\nutilde(\xi) :=\nu(\xi_1+\pi)\psi(\xi_2)\dots\psi(\xi_d)$, and $\wtilde(\xi):=\omega(\xi_1)\psi(\xi_2)\dots\psi(\xi_d)$. Then $\supp(\nutilde)=[-\pi-2\eps,-\pi+2\eps]\times[-\pi-2\eps,\pi+2\eps]^{d-1}$ and $\supp(\wtilde)=[-\pi+\eps,\pi+2\eps]\times[-\pi-2\eps,\pi+2\eps]^{d-1}$. On the overlap of their supports, $\nutilde(\xi)+\wtilde(\xi)=1$, and for every $\xi$, $\nutilde(\xi)+\wtilde(\xi) = \widetilde{\psi}(\xi)$.
Thus we may write
$$\widehat{G_{\gamma,e_1}}(\xi) = \widehat{f}\left(\xi-\frac{2\pi e_1}{h}\right)\left[\sigma_{1,\gamma,e_1}(\xi)+\sigma_{2,\gamma,e_1}(\xi)\right],$$
where
$$\sigma_{1,\gamma,e_1}(\xi) := i^{\multi{\gamma}}\xi^\gamma\nutilde\left(h\xi-2\pi e_1\right)\mah(\xi),$$
and
$$\sigma_{2,\gamma,e_1}(\xi):=i^{\multi{\gamma}}\xi^\gamma\wtilde\left(h\xi-2\pi e_1\right)\mah(\xi).$$
Consider first that $\sigma_{1,\gamma,e_1}(\xi) = i^{\multi{\gamma}}\xi^\gamma\nu(h(\xi_1-\frac{\pi}{h}))\mah(\xi)$, since on the support of $\widehat{f}\left(\cdot-\frac{2\pi e_1}{h}\right)$ we have $\psi(h\xi_i)=1$ for $i=2,\dots,d$. Let $\gamma' = \gamma-\gamma_1 e_1$, and expand $\xi_1^{\gamma_1}$ binomially about the point $2\pi/h$ to obtain
\begin{align*}
\sigma_{1,\gamma,e_1}(\xi) &= i^{\multi{\gamma}}\xi^{\gamma'}\finsum{l}{0}{\gamma_1}\binom{\gamma_1}{l}\left(\frac{2\pi}{h}\right)^{\gamma_1-l}\left(\xi_1-\frac{2\pi}{h}\right)^{l}\nu\left(h\xi_1-\pi\right)m_{\alpha,h}(\xi)\nonumber\\
& = i^{\multi{\gamma}}\xi^{\gamma'}\left(\xi_1-\frac{2\pi}{h}\right)^{\gamma_1}m_{\alpha,h}(\xi)\finsum{l}{0}{\gamma_1}\binom{\gamma_1}{l}\left(\frac{2\pi}{h}\right)^{\gamma_1-l}\frac{\nu(h\xi_1-\pi)}{(\xi_1-\frac{2\pi}{h})^{\gamma_1-l}}\nonumber\\
& =: i^{\multi{\gamma}}\left(\xi-\frac{2\pi e_1}{h}\right)^\gamma m_{\alpha,h}(\xi)\mu_{\gamma_1,e_1}(\xi).
\end{align*}
Consider $\mu_{\gamma_1,e_1}$ as a univariate Fourier multiplier, and note that
$$\|\mu_{\gamma_1,e_1}\|_{\Mp}\leq\finsum{l}{0}{\gamma_1}\binom{\gamma_1}{l}(2\pi)^{\gamma_1-l}\left\|\left(\dfrac{\nu(\cdot-\pi)}{(\cdot-2\pi)^{\gamma_1-l}}\right)^\vee\right\|_{L_1(\R)}\leq C,$$
where $C$ is a constant depending only on $\gamma_1$, and hence only on $k$. Subsequently, notice that $\mu_{\gamma_1,e_1}$ may be viewed as a Fourier multiplier on $L_p(\R^d)$ via the tensor product representation $\mu_{\gamma_1,e_1}(\xi)= \mu_{\gamma_1,e_1}(\xi_1)I(\xi_2)\dots I(\xi_d)$, where $I$ is the constant 1 function, whose multiplier norm is 1. Consequently $\|\mu_{\gamma_1,e_1}\|_{\Mp}$ is majorized by the constant $C$ above.

Moving on, we use an estimate analogous to that of the case $|j|>1$ to find a bound on the multiplier $\sigma_{2,\gamma,e_1}$. All of the terms are estimated the same way, except that since $\wtilde(h\,\cdot-2\pi e_1)$ is supported in $\left[\frac{\pi+\eps}{h},\frac{3\pi+2\eps}{h}\right]\times\left[-\frac{\pi+2\eps}{h},\frac{\pi+2\eps}{h}\right]^{d-1}$, we use Corollary \ref{Dm_est3} to find that the maximum of $|D^{\beta}\mah(\xi)|$ on this region is a constant multiple of $h^{\multi{\beta}}e^{-\frac{\eps\pi}{\sqrt{d}h}}$. Now the proof of Theorem \ref{THMstabilinterpolation} follows by the same calculation as in Section \ref{SECstabilityunivariate}.

\section{Remark}\label{SECremark}
Since cardinal interpolation of smooth functions can be done using many of the classical RBFs, it is natural to ask to which RBFs the results of \cite{hmnw} and of this work can be extended. Cardinal interpolation using translates of thin plate splines, polyharmonic splines, and B-splines (see, for example, \cite{Buhmann,BuhmannBook}) has been considered. However, the method of proof here makes essential use of the fact that the Fourier transform of the cardinal function decays exponentially away from the origin. This is the case for the Gaussian and the general multiquadrics, but for each of the other RBFs mentioned, the Fourier transforms of their cardinal functions decay only algebraically away from the origin. Therefore, extending the results obtained here to these other natural RBFs would seem to require different techniques.
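To make the dichotomy concrete (a heuristic sketch on our part, with constants and normalizations suppressed): the polyharmonic splines have generalized Fourier transforms $\widehat{\phi}(\xi)=c|\xi|^{-2m}$, so the associated cardinal multiplier
$$m_h(\xi)=\dfrac{\widehat{\phi}(\xi)}{\zsum{j}\widehat{\phi}\left(\xi+\frac{2\pi j}{h}\right)},\quad\xi\in\R^d,$$
decays only algebraically, like $|\xi|^{-2m}$, away from the origin, whereas for the Gaussian and the general multiquadrics the numerator, and hence $m_h$ itself, decays exponentially.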
It is germane to mention the case of the Poisson kernel, which corresponds to $\alpha=-(d+1)/2$ in \eqref{EQgmcdef} above. For this example, the Fourier transform of the cardinal function indeed decays exponentially; however, the argument in Section \ref{SECstabilitymultivariate} cannot be used since it requires the boundedness of the derivatives of $m_{\alpha,h}$ near the origin. Thus these methods do not yield the corresponding results for the multivariate Poisson kernel. But when $d=1$, the derivatives near the origin are bounded, hence a straightforward modification of the proof given in Section \ref{SECstabilityunivariate} shows that the results of this paper may be extended to the univariate Poisson kernel, $(x^2+c^2)^{-1}$. In particular, the following holds.
\begin{theorem}
Let $\alpha=-1$, $1<p<\infty$, $k\in\N$, and $0<h\leq1$. There exists a constant $C$, independent of $h$, so that for every $g\in W_p^k(\R)$,
$$\|I_\alpha^hg-g\|_{L_p}\leq Ch^k\|g\|_{W_p^k}.$$
If $p=1$ and $k>1$, or $p=\infty$ and $k\in\N$, there is a constant $C$, independent of $h$, so that for every $g\in W_p^k(\R)$,
$$\|I_\alpha^hg-g\|_{L_p}\leq C(1+|\ln h|)h^k\|g\|_{W_p^k}.$$
\end{theorem}
\begin{comment}
It should also be pointed out that one main difference between the techniques used in this work and those of \cite{hmnw} is that the latter exploited the fact that the Gaussian cardinal function on $\R^d$ may be written as the $d$-fold tensor product of the univariate Gaussian cardinal function, whereas this is not true for general multiquadrics. However, the techniques here may be adapted to the Gaussian, since all that is required is a radial estimate on the decay of the multivariate cardinal function associated with the Gaussian. Indeed, if one carries out the estimates in Section \ref{SECmestimates} for the Gaussian multiplier given by
$$m_h(\xi)=\dfrac{e^{-\frac{1}{4}|\xi|^2}}{\zsum{j}e^{-\frac{1}{4}\left|\xi+\frac{2\pi j}{h}\right|^2}},\quad\xi\in\R^d,$$
one finds that $|m_h^\vee(x)|\leq C_h|x|^{-d-1}$ for some constant depending on $h$ (cf. Corollary \ref{CORmultiplierinversebounds}), and thus the following holds for Gaussian interpolation on $L_1$ and $L_\infty$ Sobolev functions (this is a slight strengthening of the second part of Theorem 2.1 in \cite{hmnw} in that obtaining radial estimates for $m_h^\vee$ allows us to get rid of the power of $d$ on the logarithmic term).
\begin{theorem}
Suppose either $p=1$ and $k> d$, or $p=\infty$ and $k\in\N$. Let $I^h$ be the Gaussian interpolation operator of \cite{hmnw}. Then there exists a constant $C$, independent of $h$, so that for every $g\in W_p^k(\R^d)$,
$$\|I^hg-g\|_{L_p}\leq C(1+|\ln h|)h^k\|g\|_{W_p^k}.$$
\end{theorem}
\end{comment}
For general results on RBF interpolation of bandlimited functions at nonuniform sequences of points, we refer the reader to \cite{HammZonotopes,Ledford_Scattered,Ledford_Poisson,ss}, while \cite{Hamm} contains a nonuniform analogue of Theorem \ref{THMmaintheorem} for Gaussian interpolation in one dimension. Also, a recent work of Buhmann and Dai \cite{BuhmannDai} examined pointwise error estimates for quasi-interpolation schemes involving RBFs. For a pleasant discussion of the computational aspects and asymptotic behavior of interpolation with the Hardy multiquadric and other RBFs, the reader is referred to \cite{Madych}.
{ "timestamp": "2017-05-15T02:00:58", "yymm": "1506", "arxiv_id": "1506.07387", "language": "en", "url": "https://arxiv.org/abs/1506.07387", "abstract": "This article pertains to interpolation of Sobolev functions at shrinking lattices $h\\mathbb{Z}^d$ from $L_p$ shift-invariant spaces associated with cardinal functions related to general multiquadrics, $\\phi_{\\alpha,c}(x):=(|x|^2+c^2)^\\alpha$. The relation between the shift-invariant spaces generated by the cardinal functions and those generated by the multiquadrics themselves is considered. Additionally, $L_p$ error estimates in terms of the dilation $h$ are considered for the associated cardinal interpolation scheme. This analysis expands the range of $\\alpha$ values which were previously known to give such convergence rates (i.e. $O(h^k)$ for functions with derivatives of order up to $k$ in $L_p$, $1<p<\\infty$). Additionally, the analysis here demonstrates that some known best approximation rates for multiquadric approximation are obtained by their cardinal interpolants.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "Cardinal Interpolation With General Multiquadrics: Convergence Rates", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759615719875, "lm_q2_score": 0.7217432182679956, "lm_q1q2_score": 0.707940573226681 }
https://arxiv.org/abs/1811.00517
Coexistence phenomena in the Hénon family
We study the classical Hénon family $f_{a,b}:(x,y)\mapsto(1-ax^2+y,bx)$, $0<a<2$, $0<b<1$, and prove that given an integer $k\geq 1$, there is a set of parameters $E_k$ of positive two-dimensional Lebesgue measure so that $f_{a,b}$, for $(a,b)\in E_k$, has at least $k$ attractive periodic orbits and one strange attractor. A corresponding statement also holds for the Hénon-like families. The final main result of the paper is the existence, within the classical Hénon family, of a positive Lebesgue measure set of parameters whose corresponding maps have two coexisting strange attractors.
\section{Introduction}
\subsection{History}
In $1976$, the French astronomer and applied mathematician M. H\'enon made a famous computer experiment in which he numerically detected, but did not rigorously prove, the existence of a non-trivial attractor for a two-dimensional perturbation of the one-dimensional quadratic map, $f_{a,b}:\mathbb{R}^2\to\mathbb{R}^2$ defined by
$$
f_{a,b}\left(\begin{matrix} x\\ y \end{matrix}\right)=\left(\begin{matrix} 1-ax^2+y\\ bx \end{matrix}\right)
$$
with $a=1.4$ and $b=0.3$, see \cite{Henon}. Since then, several studies, both numerical and theoretical, have been conducted with the aim of understanding this family of maps, which is now known as the \emph{H\'enon family}. The complete understanding of H\'enon maps is still quite far from being achieved. In his experiments H\'enon also verified that attractive periodic orbits do indeed occur for other parameter values from the same family. In view of this and of the result of S. Newhouse, \cite{Newhouse}, stating that periodic attractors are generic, there was no reason, at the time, to eliminate the possibility that the attractor observed by H\'enon was just a periodic orbit with a very high period. However in $1991$, L. Carleson and the first author proved the existence of the attractor observed by H\'enon for a positive Lebesgue measure set of parameter values near $a=2$ and $b=0$, see \cite{BC2}. More precisely, in the paper it was shown that if $b>0$ is small enough, then for a positive measure set of $a$-values near $a=2$, the corresponding maps $f_{a,b}$ exhibit a strange attractor.
\medskip

To define what we mean by a {\it strange attractor} we first recall that a {\it trapping region} for a map $f$ is an open set $U$ such that
$$
\overline{f(U)}\subset U.
$$
An {\it attractor} in the sense of Conley for a map $f$ which has a trapping region is the set
$$
\Lambda=\bigcap_{j=0}^\infty {f^j(U)}=\bigcap_{j=0}^\infty \overline{f^j(U)}.
$$
The attractor is {\it topologically transitive} if there is a point with a dense orbit. In \cite{BC2} it was proved for a positive two-dimensional Lebesgue measure set of parameters ${\mathcal A}$ in the $(a,b)$ space, that there is a point $z_0(a,b)$ such that $z_1=f_{a,b}(z_0)$ satisfies the Collet-Eckmann condition\footnote{A quadratic map $q_a(x)=1-ax^2$ satisfies the Collet-Eckmann condition if $|(q_a^j)'(1)|\geq Ce^{\kappa j}$ for all $j\geq0$ and some positive constants $\kappa$ and $C$.}, i.e. that there is a constant $\kappa>0$ such that
$$
\left|Df^n(z_1)\begin{brsm}1\\0\end{brsm}\right|\geq e^{\kappa n},\qquad\text{for all}\ n\geq 0.
$$
It is fairly easy to see that the attractor $\Lambda$ for this set of parameters can be identified as $\overline{W^u(\hat{z})}$, where $\hat{z}$ is the unique fixed point of $f_{a,b}$ in the first quadrant, \cite{BenedicksViana}. Moreover, the fact that the Collet-Eckmann conditions are satisfied leads to topological transitivity, see \cite{BC2}, and the combination of $\Lambda=\overline{W^u(\hat{z})}$ and topological transitivity makes it appropriate to call the attractor {\em strange}.
\medskip

The techniques used in \cite{BC2} are non-trivial generalizations of the ones presented in \cite{BC1} by the same authors for the one-dimensional quadratic family. Those techniques opened the way for the understanding of a new class of non-hyperbolic dynamical systems.
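For later reference we record an elementary computation (immediate from the defining equations): the fixed points of $f_{a,b}$ satisfy $y=bx$ and $ax^2+(1-b)x-1=0$, so the fixed point in the first quadrant is
$$
\hat{z}=(\hat{x},b\hat{x}),\qquad \hat{x}=\frac{(b-1)+\sqrt{(1-b)^2+4a}}{2a},
$$
and $\hat{x}\to\frac12$ as $(a,b)\to(2,0)$, consistent with the approximation $\hat{z}\approx(\tfrac12,0)$ appearing below.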
In \cite{MoraViana} the results of \cite{BC2} are obtained for a general perturbation of the family of quadratic maps on the real line, called the H\'enon-like family. Statistical properties, namely the existence of a Sinai-Ruelle-Bowen (SRB) measure, exponential decay of correlations and a central limit theorem, were studied in \cite{BenedicksYoung1} and \cite{BenedicksYoung2}. Furthermore the metric properties of the basin of attraction of the strange attractor were studied in \cite{BenedicksViana}. In that paper it was proven that Lebesgue almost all points in the topological basin for the attractor
$$
B=\bigcup_{j=0}^\infty f^{-j}(U),
$$
are generic for the SRB measure. Here $U$ is the trapping region as above. Other more recent approaches to generalizations of this class of dissipative attractors were given by Wang and Young in \cite{WangYoung1}, \cite{WangYoung2} and by Berger in \cite{Berger1}.

In the present paper we show that coexistence of periodic attractors and strange attractors occurs in the H\'enon family for a positive Lebesgue measure set of parameters. Our proof is mainly based on the techniques in \cite{BC2}. However the construction of the periodic attractors is inspired by \cite{Th}, where H. Thunberg proved the existence of attractive periodic orbits for one-dimensional quadratic maps for parameters that accumulate on the ones corresponding to the quadratic maps with absolutely continuous invariant measures of \cite{BC1} and \cite{BC2}. A similar result has been obtained for H\'enon maps in \cite{Ures}. Furthermore we prove the existence of a positive two-dimensional Lebesgue measure set of parameters in the H\'enon family for which there exist two coexisting strange attractors.
\medskip

The next section contains more details about our main results.
\subsection{Statement of the results}
We now present our main results. We first give the definition of H\'enon-like families as in \cite{MoraViana}.
\begin{defin}\label{henonlike}
An $a$-dependent one-parameter family of maps $F_a$ is called a {\it H\'enon-like} family if
$$
F_a(x,y;b)=\begin{pmatrix}1-ax^2\\0 \end{pmatrix}+\psi(a,x,y;b),
$$
and we have the following properties:
\begin{itemize}
\item[(i)] $\psi$ satisfies the condition
$$
||\psi||_{C^3}\leq K b^t.
$$
\item[(ii)] Let $A,B,C,D$ be the matrix elements of
$$
DF_a=\begin{pmatrix}A&B\\C&D \end{pmatrix},
$$
and assume that $A$, $B$, $C$, $D$ satisfy the conditions stated in Theorem 2.1 of \cite{MoraViana}:
\item[(a)] $|A|\leq K$, $\sqrt{b}/K\leq |B|\leq K \sqrt{b}$, $\sqrt{b}/K\leq |C|\leq K \sqrt{b}$, $b/K\leq |\det DF_a|\leq K b$, $||DF_a||\leq K$ and $||DF_a^{-1}||\leq K/b$.
\item[(b)] $||D_{(a,x,y)}A||\leq K$, $||D_{(a,x,y)}B||\leq Kb^{1/2+t}$, $||D_{(a,x,y)}C||\leq Kb^{1/2+t}$, $||D_{(a,x,y)}D||\leq Kb^{1+2t}$. Moreover $||D_{(a,x,y)}(\det DF_a)||\leq Kb^{1+t}$ and $||D^2F_a||\leq K$.
\item[(c)] $||D^2_{(a,x,y)}A||\leq Kb^t$, $||D^2_{(a,x,y)}B||\leq Kb^{1/2+2t}$, $||D^2_{(a,x,y)}C||\leq Kb^{1/2+2t}$, $||D^2_{(a,x,y)}D||\leq Kb^{1+3t}$. Finally $||D^2_{(a,x,y)}(\det DF_a)||\leq Kb^{1+2t}$ and $||D^3F_a||\leq Kb^t$.
\end{itemize}
\end{defin}
\begin{rem}
The original H\'enon family corresponds, after rescaling the second coordinate by $\sqrt{b}$, to
$$
\psi(x,y;b)=\sqrt{b}\begin{pmatrix}y\\x \end{pmatrix}.
$$
\end{rem}
\begin{theo}\label{A}
Suppose $F_a(.,.;b)$ is an $a$-dependent H\'enon-like family as in Definition \ref{henonlike}.
Then there is a $b_0>0$ so that for all $k\geq 1$ and all $0<b<b_0$, there is a set of $a$-parameters $A_{k,b}$ (with fixed $b$) which has positive one-dimensional Lebesgue measure, i.e. $|A_{k,b}|>0$, and such that for all $a\in A_{k,b}$, $F_a(.,.;b)$ has at least $k$ attractive periodic orbits and at least one strange attractor of the type constructed in \cite{BC2} and \cite{MoraViana}.
\end{theo}
The method introduced to prove Theorem \ref{A} also gives the following result.
\begin{theo}\label{B}
Suppose $F_a(.,.;b)$ is a H\'enon-like family as in Definition \ref{henonlike}. If $b_0>0$ is sufficiently small, then for all $0<b<b_0$ and for all $a$ in some set $A_{\infty,b}$, $F_a(.,.;b)$ has infinitely many coexisting attractive periodic orbits (the Newhouse phenomenon).
\end{theo}
Theorem \ref{A} and Theorem \ref{B} hold for the original H\'enon family.
\begin{theo}\label{C}
Consider the original H\'enon family $f_{a,b}$, $0<a<2$, $0<b<1$.
\begin{itemize}
\item[(a)] There is a set of positive two-dimensional Lebesgue measure of parameters with at least $k\geq1$ attractive periodic orbits and one H\'enon-like strange attractor.
\item[(b)] There are parameters in the H\'enon family for which there are infinitely many attractive periodic orbits.
\end{itemize}
\end{theo}
The existence of H\'enon and H\'enon-like maps in one-parameter families with infinitely many sinks has already been established in \cite{Ro}, \cite{GST} and \cite{GS}. In contrast to the previous approaches, the present methods of proof are completely constructive. In particular, the methods avoid Baire category arguments and the Newhouse thickness criterion, and the persistence of tangencies is not used. Our method also allows us to obtain a stronger result about the coexistence of two chaotic, non-periodic attractors. The following can be considered as the main theorem of the paper.
\begin{theo} \label{D}There is a positive two-dimensional Lebesgue measure set of parameters ${\mathcal A}$, such that for $(a,b)\in{\mathcal A}$, the maps of the H\'enon family $f_{a,b}$ have two coexisting strange attractors.
\end{theo}
Our results can be viewed as some steps in the Palis program, see \cite{Palis}, aiming to describe coexistence phenomena for dissipative surface maps. Other coexistence results have been obtained in e.g. \cite{BenedicksMartensPalmisano, Berger2, Palmisano}.
\paragraph{Acknowledgements.} The first author was supported by the Swedish Research Council Grant 2016-05482. The second author was supported by the Trygger Foundation, Project CTS 17:50, and the research was partially supported by the NSF grant 1600554 and the IMS at Stony Brook University. The authors would like to thank P. Berger, L. Carleson and J-P Eckmann for helpful discussions. The project was initiated at Institut Mittag-Leffler during the program Fractal Geometry and Dynamics, September 04 -- December 15, 2017.
\bigskip
\section{Overview of results and methods on H\'enon and H\'enon-like maps}
In this section we collect definitions and constructions from \cite{BC2} and \cite{MoraViana} which will be used in the sequel. We briefly review the construction of Collet-Eckmann maps in the quadratic family and the H\'enon family of \cite{BC1}, \cite{BC2}, and the corresponding construction in \cite{MoraViana}. For more details we refer to the original papers.
\subsection{The one-dimensional case}\label{onedimensional-case}
\medskip
Let us first consider the quadratic family $q_a(x)=1-ax^2$ and write $\xi_j(a)=q_a^j(0)$, $j\geq 0$.
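For orientation we recall some elementary facts: at the endpoint $a=2$ the critical orbit is $\xi_1=1$ and $\xi_j=-1$ for all $j\geq2$, since $q_2(1)=q_2(-1)=-1$, and $-1$ is an expanding fixed point with multiplier $q_2'(-1)=4$. The construction below takes place for $a$ slightly below $2$, where the early part of the critical orbit still follows this pattern; the expanding fixed point near $-1$ reappears in Subsection \ref{sec:long_escape_situation} as the saddle close to $(-1,0)$.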
We start with an interval $\omega_0=[a',a'']\subset (0,2)$ located very close to 2. We partition $(-\delta,\delta)=\bigcup_{|r|\geq r_\delta} I_r$, where $I_r=(e^{-r},e^{-r+1})$ for $r>0$, $I_{-r}=-I_r$, and $I_r=\bigcup_{\ell=0}^{r^2-1} I_{r,\ell}$, where the intervals $I_{r,\ell}$ are disjoint and of equal length; the definition is analogous for negative $r$. We do an explicit preliminary construction of the first {\it free return} so that it satisfies
$$
\xi_{n_1}(\omega)=I_{r_\delta,\ell},
$$
i.e. a parameter interval $\omega$ is mapped by the parameter dynamics $a\mapsto \xi_{n_1}(a)$ onto an interval of the partition $\{I_{r,\ell}\}$. Here $r$ is chosen so that $e^{-r}\geq e^{-\alpha n(\omega)}$, and therefore Assertion 4, (ii), in Subsection \ref{ss:twod} is satisfied. This condition is called the basic assumption (BA) in \cite{BC2}.

We give a brief description of the constructions in \cite{BC1}, \cite{BC2}. At the $n$th stage of the construction, we have a partition ${\mathcal P}_n$ and for $\omega\in{\mathcal P}_n$, when $n=n_k$ is a free return, we have
$$
\xi_n(\omega)\subset I_r\cup I_{r-1}\qquad\text{if}\ r>0.
$$
(The case $r<0$ is analogous.) We define the bound period at a free return as the maximum integer $p$ so that
\begin{equation}
|\xi_{n+j}(a)-\xi_j(a')|\leq e^{-\beta j} \qquad \forall a,a'\in\omega,\ \forall j\leq p.\label{bound-period}
\end{equation}
After the bound period there is a {\it free period} of length $L$, during which the corresponding iterates are called free, and at time $n+p+L$ we have a return, at which
$$
\xi_{n+p+L}(\omega)\cap (-\delta,\delta)\neq\emptyset.
$$
This corresponds to a new free return to an interval $I_r$, which can either be {\it essential}, i.e. the image covers a whole $I_{r,\ell}$-interval, or it is contained in the union of two adjacent such intervals. The latter case is called an {\it inessential} free return. If we have an essential return, the part of $\omega\in {\mathcal P}_{n-1}$ which is mapped into $(-e^{-\alpha n},e^{-\alpha n})$ is deleted, and we define the partition ${\mathcal P}_n$ by pulling back the intervals $\{I_{r,\ell}\}$ to the parts of $\omega$ that remain after the deletions. The union of the partition elements of the parameter space that remain at time $k$ is written as $A_k=\bigcup_{\omega\in{\mathcal P}_k}\omega$. The numbers $\alpha$ and $\beta$ are small and positive. In the one-dimensional case one can choose $\alpha=\frac1{400}$ and $\beta=\frac{1}{100}$.

Define $\rho_k=|r_k|$, $k=0,\dots,s$. Then $(\rho_0,\dots,\rho_s)$ is an itinerary, which essentially determines the derivative expansion: from the free return time $n_k$ to the next free return time $n_{k+1}$ it is always
\begin{equation}\label{eq:largedeviation}
\geq \frac{e^{-3\beta \rho_k}}{e^{- \rho_k}}=e^{(1-3\beta)\rho_k}.
\end{equation}
A combinatorial argument shows, see Section 2.2 in \cite{BC2}, that there are {\it escape situations} for partition elements $\omega$ at times $\tilde{E}(\omega)$. The definition of an escape situation is somewhat arbitrary, but let us define it as a pair $(\omega,\tilde{E})$, $\omega\in {\mathcal P}_{\tilde{E}}$, such that $\omega$, under the parameter dynamics, is mapped to an interval of size $\geq\frac{1}{10}$ at time $\tilde{E}$. The escape time $\tilde{E}$ has a distribution depending essentially on the itineraries $(\rho_0,\rho_1,\dots,\rho_s)$ of the subintervals of $\omega\in{\mathcal P}_{n_0}$.
By Section 2.2 of \cite{BC2} we have
\begin{itemize}
\item the total time $T$ spent in an itinerary $(\rho_0,\rho_1,\dots,\rho_s)$ satisfies
$$
T\sim \sum_{j=0}^{s} \rho_j,
$$
\item the positions $z_{n_i}$ at the return times $n_i$, $i=1,2,\dots,s$, can be viewed as almost independent random variables,
\item the distribution of the escape times after the parameter selection satisfies
$$
\left|\left\{a\in\omega_0 : E(a)>t\right\}\right|\leq C \left|\omega_0\right|e^{-\gamma t}
$$
with $\gamma,C>0$.
\end{itemize}
This is known as the large deviation argument.
\subsection{The two-dimensional case}\label{ss:twod}
By perturbing the quadratic family interpreted as an endomorphism $(x,y)\mapsto (1-ax^2,0)$, where $a$ is close to 2, we obtain a H\'enon-like map of the type given in Definition \ref{henonlike}. If the map is orientation reversing it has a fixed point $\hat{z}\approx (\frac12,0)$ in the first quadrant. For small $b$, the unstable eigenvalue $\lambda_u$ is approximately equal to $-2$, and the stable and unstable eigenvalues $\lambda_s$ and $\lambda_u$ satisfy $\lambda_u\cdot\lambda_s =\hat{d}$, where $\hat{d}=\det(DF_a(\hat{z}))$.

One of the main new ingredients in the two-dimensional theory is that the critical point 0 of the one-dimensional map in the $n$th stage of the induction is replaced by a critical set ${\mathcal C}_g$, $g\leq Cn/\log(1/b)$. There is also a special set of critical points $\Gamma_N \subset {\mathcal C}_g$ on which the induction is carried out, and which is increased as the induction index $n$ grows. (Note that the critical set $\Gamma_N$ in the construction is only changed for a special sequence $\{N_k\}$ of times $n$. The induction on $n$ is done for $n$ satisfying $N_k\leq n \leq N_{k+1}$.)

In the case of H\'enon-like maps it is most natural to define the critical value rather than the critical point. The unstable manifold $W^u(\hat{z})$ of the fixed point has a sharp turn close to $x=1$. The critical value $z_1$ has the property that there is $\kappa>0$ so that
\begin{equation}\label{CE-equation}
\left|DF^j(z_1)\begin{brsm}1\\0\end{brsm}\right|\geq e^{\kappa j}\qquad \text{\rm for all}\quad 0\leq j\leq n.
\end{equation}
The first approximation of $z_1$ is defined as the point close to $(1,0)$ where $W^u(\hat{z})$ is tangent to the vector field given by the most contracting direction of $DF(z)$. Successively the equation \eqref{CE-equation} is verified by induction for higher and higher $n$, and this allows most contracting directions of higher orders to be defined. This yields better and better approximations of the critical value. This allows us to define the image $z_2$ of the critical value $z_1$ under the map $F$, and also the critical point $z_0$ as $z_0=F^{-1}(z_1)$. The critical point $z_0$ will play a crucial role in our construction. Note that all this is defined for an interval $\omega\in{\mathcal P}_n$, and all points $a$ of $\omega$ have equivalent $z_0$, $z_1$ and $z_2$. An arbitrary point $a\in\omega$ can be used for the definitions.

We now define for $a\in\omega$ the first generation $G_1$ of $W^u(\hat{z})$ as the segment of $W^u(\hat{z})$ from $z_1$ to $z_2$. We also introduce the notation $W_1=G_1$ and inductively define $W_{k+1}=F_a(W_k)$ and then $G_k=W_{k+1}\setminus W_k$ for $k\geq 1$. The induction proceeds by using information about the critical points $\Gamma_N$ (and corresponding critical values) defined on segments of $W^u(\hat{z})$ of generation $\leq g=CN/\log(1/b)$, where $C$ is a numerical constant.
One can consider $\Gamma_N$ as the set of ``precritical points''. A successive modification procedure at the times $N_k$ will make the ``precritical points'' converge to the final critical points.
\medskip

We require the following:
\smallskip

Consider a free return time $n$ of the induction, and for all $\omega\in {\mathcal P}_n$ all critical values $z_1$ associated with $\Gamma_N$ satisfy
\bigskip

{\it Assertion 4 of \cite{BC2}} (equation (12b), p. 42, in \cite{MoraViana}): There is a constant $\kappa>0$ so that
\begin{itemize}
\item[(i)] $\left|DF_a^j(z_1) \begin{brsm}1\\0\end{brsm}\right|\geq e^{\kappa j}\qquad\forall j \leq n $;
\item[(ii)] $\text{dist}_h(F_a^j(z_1),\Gamma_N)\geq e^{-\alpha j}\qquad\forall j\leq n$.
\end{itemize}
The formal definition of $\text{dist}_h(F_a^i(z_0),\Gamma_N)$, denoted by $d_i$ in \cite{BC2}, is given in Assertion 1, p. 127, in that paper, and this quantity satisfies at returns
$$
3|z_i-\tilde{z}_0^{(i)}|\leq d_i(z_0)\leq 5|z_i-\tilde{z}_0^{(i)}|,
$$
where $z_i$ is, at returns, by construction located horizontally with respect to its {\it binding point} $\tilde{z}_0^{(i)}\in \Gamma_N$. The condition (ii) is called the Basic Assumption (BA) in \cite{BC1}, \cite{BC2}. Roughly speaking, a binding point is chosen at a suitable horizontal location so that the splitting argument and the bound period distortion estimates of the corresponding $w_\nu^*$-vectors will be valid, see Subsection \ref{ss:splitting} below.
\subsection{Splitting algorithm}\label{ss:splitting}
Now we recall the splitting algorithm for expanded vectors as in \cite{BC2} and \cite{MoraViana}, p. 40-41. Let $w_\nu=DF^\nu(z_0)\begin{brsm}1\\0\end{brsm}$, and write
$$
w_\nu=E_\nu+w_\nu^*.
$$
$E_\nu$ corresponds to the part of $w_\nu$ that is in a folding situation, i.e. there are various terms in $E_\nu$ that come from a splitting at a previous return. In particular, if $\nu$ is outside of all bound periods, then $w_\nu=w_\nu^*$.
\medskip

We now summarize an essential part of Assertion 4 concerning distortion of the vectors $w_\nu^*$ during the bound period, which has a definition analogous to that of the one-dimensional case given in \eqref{bound-period}.
\medskip

There are constants $C_0$ and $C$ such that for all critical points $z_0\in\Gamma_N$:
\begin{itemize}
\item[(a)] If $p$ is the binding time for $\zeta_0$ to $z_0$, then
$$
C^{-1}\leq \frac{||w_\nu^*(\zeta_0)||}{||w_\nu^*(z_0)||}\leq C, \qquad 0\leq \nu\leq p.
$$
\item[(b)] Let $z_0\in\Gamma_N$, let $\zeta_0$ and $\zeta_0'$ be two points bound to $z_0$ during the time interval $[0,p]$ and let $n$ be the first free return $n\geq p$. Furthermore let $w_\nu^*(\zeta_0)$ and $w_\nu^*(\zeta_0')$ be the associated vectors of the splitting algorithm. We write the vectors in polar coordinates, where $M_\nu(\cdot)$ denotes the absolute value and $\theta_\nu(\cdot)$ the argument, and measure the distance between the orbits using
$$
\Delta_i(\zeta_0,\zeta_0')=\max_{0\leq j\leq i}|\zeta_j-\zeta_j'|.
$$
Then there is a constant $C_0$ such that, if
$$
\sum_{j=1}^k \frac{\Delta_j}{d_j(z_0)}\leq \frac{1}{C_0},\qquad \text {and}\ k\leq \min(n,N),
$$
then if $\nu\leq k$
\begin{equation}\label{eq:modulus}
\frac{M_\nu(\zeta_0)}{M_\nu(\zeta_0')}\leq \exp\left\{C_0\sum_{j=1}^\nu \frac{\Delta_j}{d_j(z_0)}\right\},
\end{equation}
and
\begin{equation}\label{eq:angle2}
|\theta_\nu(\zeta_0)-\theta_\nu(\zeta_0')|\leq 2 b^{1/4}\Delta_\nu.
\end{equation}
\end{itemize}
Very similar estimates appear in Lemma 10.2 in \cite{MoraViana}.
Their estimate in the modulus equation \eqref{eq:modulus} is better, with the quantity
$$
\Theta_k=\Theta_k(\zeta_0,\zeta_0')=\sum_{s=1}^k b^{(s-k)/4} |\zeta_s-\zeta_s'|
$$
instead of $\Delta_i(\zeta_0,\zeta_0')=\max_{0\leq j\leq i}|\zeta_j-\zeta_j'|$. We have written \eqref{eq:angle2} with the constant $2b^{1/4}$ as in \cite{MoraViana} instead of $2b^{1/2}$ as in \cite{BC2}, since our estimates are required to work also in the more general setting of H\'enon-like maps.
\subsection{Derivative estimates and $C^2(b)$ curves for H\'enon-like maps}
We also need at several places that uniform expansion of the $x$-derivative of the $n$th iterate of a function $F(x;a)$ automatically gives a uniform comparison of the $a$- and $x$-derivatives of the iterated function. In the one-dimensional case this is formulated abstractly in Lemma 2.1 in \cite{BC2}. The corresponding estimate in the two-dimensional case is given in \cite{BC2}, Lemmas 8.1 and 8.4, and in \cite{MoraViana}, Lemma 11.3, which we formulate as a distortion result for the $w_\nu^*$ vectors of the splitting algorithm.
\begin{lem}\label{par-phase-dist1}
We consider the critical orbit $z_\nu(a)$ as a function of the parameter $a$. We denote its derivative with respect to $a$ by $\dot{z}_\nu(a)$. Then the following holds.
\medskip

For all $2\leq\nu\leq n$ and $a\in{\mathcal P}_{\nu-1}(\omega)\subset E_{\nu-1}(z_0)$ we have
\begin{itemize}
\item[(i)]
$$
\frac{1}{100}\leq \frac{||\dot{z}_\nu(a)||}{||w_\nu^*(a)||}\leq 100.
$$
Moreover if $\nu$ is a free iterate then
\item[(ii)] $|\text{\rm angle}(\dot{z}_\nu(a),w_\nu^*)|\leq b^{t/2}$.
\end{itemize}
\end{lem}
We also need a statement about distortion for the tangent vectors of the parameter dependent curves $a\mapsto z_\nu(a)$, which can be formulated as follows.
\begin{cor}\label{cor:pardist}
There is a constant $C(K,\alpha,\beta,\delta)$, so that if $\nu$ is a free return and $\omega\in{\mathcal P}_{\nu-1}(z_0)$, then for all $a,a'\in\omega$
$$
\frac{||\dot{z}_\nu(a')||}{||\dot{z}_\nu(a)||}\leq C \quad \text{\rm and}\quad \text{\rm angle}(\dot{z}_\nu(a'),\dot{z}_\nu(a))\leq 10 b^{1/4}.
$$
\end{cor}
For the construction of two coexisting strange attractors, Theorem \ref{D}, we also need the distortion control of the $b$-derivatives given in Lemma \ref{pardist} below. In several places, in particular for parameter dependent curves and pieces of unstable manifolds, it is relevant that the corresponding curve segments are $C^2(b)$-curves, which in the setting of the H\'enon-like maps of \cite{MoraViana} has the following definition.
\begin{defin}\label{C2bcurve}
A curve $\gamma(x)=(x,h(x))$, $x_1\leq x\leq x_2$ is called a $C^2(b)$-curve if the curve is $C^2$, and there is a constant $C$ so that $|h'(x)|\leq Cb^t$ and $|h''(x)|\leq Cb^t$ for $x_1\leq x \leq x_2$. The constant $t>0$ appears in the definition of the H\'enon-like maps.
\end{defin}
\subsection{Stable and unstable manifold}
We also need some geometric information on the attractor. A reference is \cite{MoraViana}, Section 4, but we will also need some quantitative statements on the stable and unstable manifolds of the fixed point, formulated in Lemmas \ref{stablemanifold}, \ref{unstable} and \ref{lem:equidistsm} below.
\begin{lem}\label{stablemanifold}
Let $\gamma^s_a$, $a\in\tilde\omega_0$, be the first leg of the stable manifold of $\hat z(a)$ pointing in the negative $y$ direction. Then $\gamma^s_a$ at all points has slope bounded below by $K/\sqrt{b}$, where $K$ is a numerical constant. Moreover $\gamma^s$ has a $C^1$ dependence on $a$.
Also the downwards pointing leg $\gamma^s_a$ of $W^s(\hat{z})$ intersects $W^u(\hat{z})$ at a homoclinic point $\hat{z}'$.
\end{lem}
\begin{proof}
We consider the orientation reversing case when the fixed point $(\hat{x},\hat{y})$ satisfies $\hat{y}>0$. By the $C^1$-version of the stable manifold theorem, there is a small segment of the $\gamma_s$-leg pointing down. Note that we do not have control of the size of this leg. It depends on $a_0$, the midpoint of $\tilde{\omega}_0$, and on $b$. By $C^1$ continuity of the stable manifold we can choose a sufficiently small segment $\Gamma_0$ so that its slope is close to the slope at the fixed point. As in \cite{MoraViana} the derivative of the map is defined as
$$
DF_a(x,y)=\left(\begin{matrix} A& B\\ C& D \end{matrix}\right)(a,x,y).$$
The stable direction at the fixed point has approximate slope $s_0$, where
$$s_0=\frac{-2a\hat{x}}{B},
$$
and by continuity this is true also for points of $\Gamma_0$. Now define inductively $\Gamma_{n+1}=F_a^{-1}(\Gamma_n)$ for $n \leq n_0$, where $n_0$ is determined so that every $(x,y)\in\Gamma_n$, $n\leq n_0$, satisfies $y\geq \frac{7}{8} \hat{y}$. Note that we have strong expansion of the inverse map $F_a^{-1}$, and $n_0$ is finite. Next we verify that the cone defined by
$$
|s-s_0|\leq\frac{1}{10}|s_0|
$$
is invariant under $DF_a^{-1}$. For this we use the derivative estimates of $A$, $B$, $C$, $D$ and the determinant $AD-BC$ in \cite{MoraViana}, Theorem 2.1. This will hold for the sequence of curve segments $\{\Gamma_n\}$, $n\leq n_0$. The length of $\Gamma_{n_0}$ will be greater than or equal to $\frac{1}{8}\hat{y}>0$. We now do two final iterates and conclude that $\Gamma_{n_0+2}$ has a subcurve with slope $\geq K/\sqrt{b}$ and length $\geq C\hat{y}b^{-1}$. It follows that we have the required homoclinic intersection $\hat{z}'$, compare Lemma \ref{homoclinic}.
\end{proof}
\begin{lem}\label{unstable}
Consider a family of H\'enon-like maps $F_{a}(.,.;b)$ which is orientation reversing. Let a time $\nu$ be given and let $\omega\in{\mathcal P}_\nu$ be a parameter interval of $a$-values. For $a\in\omega$ there is a critical point $z_0$ and a critical orbit $z_1$, $z_2$, $z_3$ located on $W^u(\hat{z})$. Let $\gamma_u$ be the segment of $W^u(\hat{z})$ from $z_2$ to $z_3$. Then for a suitable choice of $\delta_0$, the curve segment
$$
\gamma^u_1=\gamma_u\cap \{(x,y): x\geq -1 + \delta_0\}
$$
is an approximate parabola and the two segments
$$
\gamma^u_1\cap \{(x,y): x\leq 1-\delta_0\}
$$
are two $C^2(b)$ curves.
\end{lem}
\medskip
{\it Sketch of proof.} For the first part of the proof we follow \cite{MoraViana}, Section 7. In formula (2), p. 30, they state that the unstable manifold restricted to $G_0\cap \{|x|\leq 1-\delta_0\}$ can be viewed as the graph $y(x)=y_\varphi(a,x)$ with
$$
||y_\varphi||_{C^2}\leq \text{const}\, b^t.
$$
If we iterate the unstable manifold once, it follows that it folds into a parabola. From a curvature argument, see \cite{MoraViana} Lemma 9.3, it follows that the curve is $C^2(b)$.\demo

We will later need information on the structure of the stable manifold of the fixed point $\hat{z}$.
\begin{lem}\label{lem:equidistsm}
There is an approximate equidistribution of the legs of the stable manifold $W^s(\hat{z})$ with a definite slope $s$, $|s|\geq \text{\rm Const.}\ \delta$, that intersect $\{(x,y): |x|\geq \delta\}$. The interspacing of the legs of $W^s(\hat{z})$ is $\sim \frac{\pi}{2}\cdot\frac{1}{3\cdot 2^k}$.
\end{lem}
\begin{proof}
Consider the tent map $\xi\mapsto 1-2|\xi|$.
It has a fixed point $\xi=\frac13$. The preimages of this fixed point are located among the points
$$
\xi_{\nu,k}=\frac{\nu}{3\cdot 2^k},\qquad \nu=-3\cdot 2^k+1,\dots,3\cdot 2^k-1.
$$
The corresponding points for the quadratic map $x\mapsto 1-2x^2$ are given by $x_{\nu,k}=\sin\frac{\pi}{2}\xi_{\nu,k}$. By the mean value theorem,
$$
x_{\nu+1,k}-x_{\nu,k}=\frac{\pi}{2}\cos\left(\frac{\pi}{2}\xi^*\right)\cdot\frac{1}{3\cdot 2^k}
$$
for some intermediate point $\xi^*$, which shows that the interspacing of the legs of $W^s(\hat{z})$ is as required.
\end{proof}
\subsection{The Stable Foliation and its properties}\label{stablefoliation}
The stable foliation of order $n$, for different values of $n$, will play an important role in the following, in particular in the capturing argument in Section \ref{sec:capturing} and in the construction of the sink in Section \ref{sink}. This construction of the stable foliation appears in \cite{BC2}, but we will use the version in \cite{MoraViana}, Section 6.

We will need some lemmas about the expansion properties of the maps. Because of the dissipative properties of the maps, these will also lead to the existence of contractive vector fields and a corresponding stable foliation. Let $F$ be a H\'enon-like map and write $M^{\nu}(z)=DF^{\nu}(z)$. Let $u_0$ be a tangent vector of $W^{u}(\hat z)$ near $\hat z$. Let $\zeta_0=(\xi_0,\eta_0)$ be a point on the unstable manifold, satisfying $|\xi_0|\geq\delta$ and, for any $1\leq\nu\leq n$, $\left\|M^{\nu}(\zeta_0)u_0\right\|\geq\kappa^{\nu}$. We get an expansive behaviour of horizontal vectors, compare Corollary 6.2 in \cite{MoraViana}. Here $\kappa<1$ is allowed. We need a condition similar to partial hyperbolicity relating $b$ and $\kappa$, such as $\sqrt{b}\leq\left(\kappa/{10 K^2}\right)^4$, compare the hypothesis of Lemma \ref{lemma6.4} below.
\begin{lem} \label{lemma6.2}
Assume that $\zeta_0=(\xi_0,\eta_0)$ is a point on the unstable manifold satisfying $|\xi_0|\geq \delta$ and
\begin{equation}\label{eq:expansion}
\left\|M^{\nu}(\zeta_0)u_0\right\|\geq\kappa^{\nu},\qquad 1\leq\nu\leq n.
\end{equation}
Then for all $1\leq\nu\leq n$ and all unit vectors $v_0$ with $|\text{\rm slope}(v_0)|\leq\frac{1}{10}$,
$$
\left\|M^{\nu}(\zeta_0)v_0\right\|\geq \frac{1}{2}\left\|M^{\nu}(\zeta_0)\right\|.
$$
\end{lem}
We will also need Lemma $6.3$ in \cite{MoraViana}, which implies estimates of the norms and angles of the expanded vectors.
\begin{lem}\label{lemma6.3}
Let $\zeta'_0$ be a point and $u,v$ norm $1$ vectors satisfying
$$|\zeta_0-\zeta'_0|\leq\sigma^n\text{ and } \left\|u-v\right\|\leq\sigma^n
$$
with $\sigma\leq\left(\frac{\kappa}{10 K^2}\right)^2$. Then
\begin{itemize}
\item[(a)]$\frac{1}{2}\leq\frac{\left\|M^{\nu}(\zeta_0)u\right\|}{\left\|M^{\nu}(\zeta'_0)v\right\|}\leq 2$,
\item[(b)]$\left|\text{\rm angle}\left(M^{\nu}(\zeta_0)u, M^{\nu}(\zeta'_0)v\right)\right|\leq\left(\sqrt{\sigma}\right)^{2n-\nu}\leq \left(\sqrt{\sigma}\right)^{n} $.
\end{itemize}
\end{lem}
Observe that, by Lemma \ref{lemma6.2}, the conclusions of Lemma \ref{lemma6.3} are verified for all unit vectors $u,v$ such that $\left\|u-v\right\|\leq\sigma^n$ and $|\text{slope}(u)|\leq\frac{1}{10}$. Similarly, since by construction $\zeta_0=(\xi_0,\eta_0)$, with $|\xi_0|>\delta$, is $\kappa$-expanding up to time $n$, we can apply Lemma $6.4$ of \cite{MoraViana}, which in our setting becomes:
\begin{lem}\label{lemma6.4}
Let $\zeta'_0$ be such that $|\zeta_{\nu}-\zeta'_{\nu}|\leq\sigma^{\nu}$ for every $1\leq\nu\leq n$, with $\sqrt{b}\leq\sigma\leq\left(\kappa/{10 K^2}\right)^4$.
Then
\begin{itemize}
\item[(a)]$\frac{1}{2}\leq\frac{\left\|M^{\nu}(\zeta_0)u\right\|}{\left\|M^{\nu}(\zeta'_0)v\right\|}\leq 2$,
\item[(b)]$\left|\text{\rm angle}\left(M^{\nu}(\zeta_0)u, M^{\nu}(\zeta'_0)v\right)\right|\leq \left(\frac{K^2\sqrt{\sigma}}{\kappa}\right)^{\nu+1} $
\end{itemize}
for any $1\leq\nu\leq n$ and any norm $1$ vectors $u,v$ with $|\text{\rm slope}(u)|\leq\frac{1}{10}$ and $|\text{\rm slope}(v)|\leq\frac{1}{10}$.
\end{lem}
The above result, combined with results at the end of Section 6 and Section 7C in \cite{MoraViana}, gives the following lemma on the existence of the stable vector field $e^{(n)}$ and the corresponding stable foliation, which will be instrumental for the capture argument, Section \ref{sec:capturing}, and also for the construction of the sink, Section \ref{sink}.
\begin{lem}\label{stable-foliation}
Let $\zeta_0$ satisfy equation \eqref{eq:expansion} and let $s$ be a segment of $W^{u}(\hat z)$ centered at $\zeta_0$ of length $\sigma^{2n}$. The stable vector field $e^{(n)}$ through $s$ can be integrated from $s$ to $G_1=F(G_0)$. Let $s_1$ be the arc of endpoints obtained on $G_1$. Then
\begin{itemize}
\item[(a)]$\text{\rm dist}\left(F^n(s),F^n(s_1)\right)\leq K\kappa^n$,
\item[(b)]$\left|\text{\rm angle}\left(M^{n}(\zeta'_0)u, M^{n}(\zeta''_0)v\right)\right|\leq\left(\frac{K^2\sqrt\sigma}{\kappa}\right)^4$,
\end{itemize}
where $\zeta'_0\in s$, $\zeta''_0\in s_1$, $u=\tau(\zeta'_0)$ and $v=\tau(\zeta''_0)$.
\end{lem}
We also need Lemma 6.1 from \cite{MoraViana}.
\begin{lem}\label{contractive-field}
If $e_{\nu}(z)$ is the most contractive direction, then for $1\leq\mu\leq\nu\leq n$
\begin{itemize}
\item[(a)]$\left|\text{\rm angle}(e_{\mu}(z),e_{\nu}(z))\right|\leq\left(\frac{3K}{\kappa}\right)\left(\frac{Kb}{\kappa^2}\right)^{\mu}$,
\item[(b)] $\left\|DF^{\mu}(z) e_{\nu}(z)\right\|\leq\left(\frac{4K}{\kappa}\right)\left(\frac{K^2b}{\kappa^2}\right)^{\mu}$.
\end{itemize}
\end{lem}
We consider the integral curves of the vector field
$$
\left(\begin{matrix} \dot x\\\dot y \end{matrix}\right)=e_1(z).
$$
Since
$$DF(z)^{-1}=\frac{1}{\det DF(z)}\left(
\begin{matrix}
D &-B\\
-C& A
\end{matrix}
\right)$$
and $A=-2ax+O(b^t)$, $C_1\sqrt b\leq |B|\leq C_2\sqrt b$, it is easy to see that
$$
\text{slope }e_1(z)=-\frac{A}{B}\approx\frac{2ax}{\sqrt b}.
$$
As a conclusion we get that the integral curves of the stable vector field $e^{(1)}$ are approximate parabolas. At the critical value $z_1$, the expansive property \eqref{eq:expansion} is valid and we obtain the following result, see Figure \ref{Fig1New}.
\begin{lem}\label{lem:stablefoliation}
Suppose that $F$ satisfies the assumption of Lemma \ref{stable-foliation}. Then there is a quadrilateral containing the critical value which is completely foliated by leaves that are integral curves of $e_k(z)$, where $k=\left[\frac{n}{10}\right]$.
\end{lem}
\begin{proof}
This is a small variation of Lemma $5.8$ in \cite{BC2}, which we now carry out in more detail. The idea is to successively define smaller and smaller quadrilaterals $Q_n$ which are foliated by integral curves of the most contractive vector field $e_k(z)$ of $DF^k(z)$. We know that for the point $\tilde z_0=z_1$
$$
\left|DF^\nu(z_1)\begin{brsm}1\\0\end{brsm}\right|\geq e^{\tilde \kappa\nu},\qquad \nu=1,\dots, n.
$$
Moreover we will only use this estimate in the range $1\leq\nu\leq k$, $k=\left[\frac{n}{10}\right]$. We will inductively define a sequence $\left\{\gamma_{i}\right\}$ of integral curves of $e_{i}(z)$ through $z=z_1$.
We start by defining $\gamma_1$ as the integral curve of $e_1(z)$ through $z_1$. We now pick $\tilde z_0=z_1$. Suppose $\gamma_{i}$ is defined and stretches from $y=-1$ to $y=1$. Pick a point $\zeta_0\in\gamma_{i}$. Then by Lemma 6.1 $(b)$ in \cite{MoraViana},
$$
d(\zeta_j,\tilde z_j)\leq \left(\frac{4K}{\kappa}\right)\left(\frac{K^2b}{\kappa^2}\right)^j.
$$
Let $\zeta'_0$ be on the horizontal segment containing $\zeta_0$ at distance $\left(\frac{4K}{\kappa}\right)\left(\frac{Kb}{\kappa^2}\right)^{i}$. Then
\begin{eqnarray*}
d(\zeta'_j,\tilde z_j)&\leq & \left(\frac{4K}{\kappa}\right)\left(\frac{K^2b}{\kappa^2}\right)^j+5^j\left(\frac{Kb}{\kappa^2}\right)^{i}\\
&\leq &\left(\frac{8K}{\kappa}\right)\left(\frac{K^2b}{\kappa^2}\right)^j.
\end{eqnarray*}
Define
$$
\Omega_{i}=\left\{ z : \text{dist}_{\text{h}}(z,\gamma_{i})\leq 16 K\left(\frac{Kb}{\kappa^2}\right)^{i}\right\}.
$$
Then the integral curves of $e_{i+1}(z)$ are defined in $\Omega_{i}$ and do not leave $\Omega_{i}$. We define $\Omega_{i+1}$ by the restrictive condition
$$
\Omega_{i+1}=\left\{ z : \text{dist}_{\text{h}}(z,\gamma_{i+1})\leq 16 K\left(\frac{Kb}{\kappa^2}\right)^{i+1}\right\}.
$$
We proceed in this way by induction. Finally we can vary the point $\tilde z_0$ on a horizontal line segment $s$ through $z$, provided that $|s|\leq c^n$ (for a suitably chosen $c$).
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Fig1New}
\caption{Stable foliation at the critical value}
\label{Fig1New}
\end{figure}
\section{Construction of a sink}\label{sink}
In the following we work in the H\'enon-like setting. Let $z_0\in\Gamma_E$ be the critical point on the left leg of $W^u(\hat z)$, see Subsection \ref{ss:twod}. One can choose $z_0$ uniquely for all $a\in\omega_0\in\mathcal P_E$, see Section $5$ in \cite{MoraViana} or Section $6$ in \cite{BC2}. We now fix $E_0$ to be such that $z_{E_0}(\omega_0)$ is in an escape situation as defined at the end of Subsection \ref{onedimensional-case}.
\subsection{Construction of a long escape situation}\label{sec:long_escape_situation}
The aim of this section is to prove that long escape situations occur. In these situations we can guide the dynamics to behave in the direction we wish; in particular, we can create attractive periodic orbits.
\begin{defin}
We say that $z_{E}(\omega)$, $\omega\subset\mathcal P_E$, is in a long escape situation at time $E$ if $z_{E}(\omega)$ is a $C^2(b)$ curve\footnote{See Definition \ref{C2bcurve}} such that
$$\pi_1 z_{E}(\omega)\supset\left[\frac{3}{8},\frac{5}{8}\right],$$
where $\pi_1$ is the projection on the first coordinate, i.e. if $\gamma(t)=(\gamma_1(t),\gamma_2(t))$ then $\pi_1\gamma(t)=\gamma_1(t)$.
\end{defin}
\begin{lem}\label{lem:long_escape_sit}
There exist $\tilde\omega_0\subset\omega_0$ and a time $E$ such that $z_{E}(\tilde\omega_0)$ is in a long escape situation.
\end{lem}
\begin{proof}
This proof is purely one-dimensional, since $b$ is small and the dynamics is outside of $\left(-\delta,\delta\right)\times\mathbb{R}$. We use an argument very similar to that in \cite{Th}. By \cite{MoraViana}, there is a time $n$ and an interval $\omega_0\in\mathcal P_n$ so that $\pi_1 z_{n}(\omega_0)\cap (-\delta,\delta)\neq\emptyset$ and $\left|\pi_1 z_{n}(\omega_0)\right|\geq\sqrt{\delta}$. Consequently, one of the components, $L_n'$, of $\pi_1 z_{n}(\omega_0)\setminus\left(-\delta,\delta\right)$ has length bigger than $\sqrt{\delta}/3$.
Let $\omega'=\left[a_1,a_2\right]$ be defined by the relation
$$\pi_1 z_n(\omega')=L_n'=\left[\pi_1 u,\pi_1 v\right],$$
where $u$ and $v$ are the end points of the curve $z_n(\omega')$. Consider then the future iterates $z_{n+i}(\omega')$, $i=1,2,\dots,$ under the parameter dynamics. Observe that $\pi_1 z_{n+2}(\omega')$ is located at
\begin{eqnarray*}
\left(\pi_1 F^2_{a_1}(u),\pi_1 F^2_{a_2}(v)\right)&=& \left(1-a_1\left(1-a_1\delta^2\right)^2+O(b^t),\pi_1 F^2_{a_2}(v)\right)\\
&=& \left(1-a_1+O\left(\delta^2\right)+O(b^t),1-a_2+\Theta\left(\delta^{\frac{4}{3}}\right)\right),
\end{eqnarray*}
where the function $\Theta(x)$ satisfies $c_1 x\leq \Theta(x)\leq c_2 x$ for some numerical constants $c_1$ and $c_2$. Observe that $F^2_{a_1}(u)$ and $F^2_{a_2}(v)$, and consequently $\left(\pi_1 F^2_{a_1}(u),\pi_1 F^2_{a_2}(v)\right)$, are located near the saddle fixed point close to $(-1,0)$, where the dynamics is expanding in the $x$-direction by a factor bigger than $3$ as long as
\begin{equation}\label{closecritpoint}
\pi_1 F^{2+i}_{a_2}(v)\leq-\frac{3}{4}.
\end{equation}
Denote by $i_0$ the last $i$ for which \eqref{closecritpoint} is verified. Then $\pi_1 F^{2+i_0}_{a_1}(u)$ is still close to $-1$; its distance to $-1$ is of order $O\left(\delta^{2-\frac{4}{3}}\right)$. After two more iterates,
$$\left(\pi_1 F^{4+i_0}_{a_1}(u), \pi_1 F^{4+i_0}_{a_2}(v)\right)\supset\left[\frac{3}{4},\frac{5}{4}\right].$$
\end{proof}
To the fixed point $(\hat{x},\hat{y})$ there is a symmetric point on $W^u(\hat{x},\hat{y})$, $(\hat{x}_1,\hat{y}_1)$, located approximately at $(-\hat{x},\hat{y})$. The leg of $W^s(\hat{z})$ in the negative $y$-direction passes through this homoclinic point, and the slope $s$ of the curve segment of $\gamma_s$ joining the two points $(\hat{x},\hat{y})$ and $(\hat{x}_1,\hat{y}_1)$ satisfies $s\geq C/\sqrt{b}$ on all points of $\gamma_s$, see Lemma \ref{stablemanifold}. We choose the intersection with the preimage to ensure that at the next iterate, when the curve segment intersects the stable manifold, the distance to the fixed point $\hat{z}$ is determined with high accuracy and is very close to the width of the parabola at this $x$-coordinate. This is needed to make the time $E'$, which will appear later, well defined, see Lemma \ref{lambda2}.
\begin{lem}\label{homoclinic}
There is a subinterval $\tilde\omega_0'\subset\tilde\omega_0$ such that, for all $a\in\tilde\omega_0'$, the stable leg of $W^s(\hat{z})$ pointing downwards, denoted by $\gamma^s_a$, intersects the middle half of $z_E(\tilde\omega_0')$.
\end{lem}
\begin{proof}
Let $\tilde a_0$ be the midpoint of $\tilde\omega_0$ and let $p_1=\gamma^s_{\tilde a_0}\cap z_E(\tilde\omega_0)$. Let $\tilde a_0'\in\tilde\omega_0$ be the parameter for which $z_E(\tilde a_0')=p_1$. Observe that $\gamma^s_{\tilde a_0'}$ intersects $z_E(\tilde\omega_0)$ at a point $p_2$. By Lemma \ref{par-phase-dist1},
$$
|p_1-p_2|\leq K |\tilde\omega_0|\leq K e^{-cE},
$$
where $K$ is a positive constant. We now choose a subinterval $\tilde\omega_0'\subset\tilde\omega_0$ having midpoint $\tilde a_0'$ and such that $z_E(\tilde\omega_0')$ has length $e^{-c E}$. Then $\tilde\omega_0'$ has the required property, i.e. for all $a\in \tilde\omega_0'$, $\gamma^s_a$ intersects $z_E(\tilde\omega_0')$ in its middle half.
\end{proof}
The following lemma allows us to control the dynamics so that part of the parameter interval returns close to a critical point with a controlled geometry, see Figure \ref{Fig0}. This will create an attractive periodic orbit for all selected parameters.
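Before stating the lemma, let us sketch the underlying one-dimensional mechanism (a heuristic only; the precise two-dimensional construction occupies the rest of this section). Suppose an interval $J$ of length $\sim\frac{1}{100}D_N^{-1}$ around the critical point returns at time $N$ into a $\frac{2}{100}D_N^{-1}$-neighbourhood of the critical point; here $D_N=|w_N|$ measures the derivative growth along the critical orbit, as in Lemma \ref{returnlemma} below. For the quadratic map one has, heuristically, for $x\in J$,
$$
|(q_a^N)'(x)|=2a|x|\cdot|(q_a^{N-1})'(q_a(x))|\approx 2a|x|\,D_N\leq 2a\cdot\frac{2}{100}<1,
$$
so the return map contracts $J$ into itself and creates an attractive periodic orbit.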
\begin{lem}\label{returnlemma}
There is a subinterval $\tilde\omega_0''\subset\tilde\omega_0'$ with midpoint $\tilde a_0''$ and a time $N$ so that $z_N(\tilde\omega_0'')$ has the following properties:
\begin{itemize}
\item[(i)] $z_{N}(\tilde\omega_0'')$ is a ${{ C}^2}(b)$ curve,
\item[(ii)] $|z_{N}(\tilde\omega_0'')|=\frac{1}{100}\frac{1}{D_N}$,
\item [(iii)] $\text{\rm dist}\left(\pi_1 z_0(\tilde a_0''),z_N(\tilde\omega_0'')\right)\leq\frac{1}{50}\frac{1}{D_N}$,
\end{itemize}
where $D_N=|w_N|$.
\end{lem}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Fig0}
\caption{Stable foliation at the fixed point}
\label{Fig_stable_fixedpoint}
\end{figure}
\smallskip
The proof of Lemma \ref{returnlemma} consists of several steps, formulated in a sequence of lemmas. Consider the phase curve $\gamma=z_E(\tilde\omega_0')$ and denote by $\tilde a_0'$ the midpoint of $\tilde\omega_0'$. We recall the $\lambda$-lemma, see e.g. \cite{PalisdeMelo}, Lemma 7.1.
\begin{lem}\label{lambda1}
Let $0$ be a saddle fixed point of a $C^2$ map. Let $V=B^u\times B^s$ be the cartesian product of an unstable and a stable ball at the fixed point $0$, let $q\in W^s(0)\setminus \{0\} $ and let $D^u$ be a disk transverse to $W^s$ intersecting $W^s$ at $q$. Let $D^u_n$ be the connected component of $F^n(D^u)\cap V$ to which $F^n(q)$ belongs. Given $\varepsilon>0$ there exists $n_0\in{\mathbb N}$ such that if $n>n_0$, then $D^u_n$ is $\varepsilon$-close to $B^u$ in the $C^1$ topology.
\end{lem}
In our present setting we can obtain a quantitative version of the $\lambda$-lemma adapted to our situation. In the following we refer to Figure \ref{Fig_stable_fixedpoint}.
\begin{lem}\label{lambda2}
Suppose a $C^2(b)$-curve $\gamma$ of size $e^{-\kappa E}$ crosses the leg of $W^s(\hat{z})$ pointing in the negative $y$-direction. Then after $E'$ iterates, where $E'\sim E$, $F^{E'}_a(\gamma)$ will be a $C^2(b)$ curve stretching along $W^u(\hat{z})$ and across the $y$-axis $x=0$ to $x=-\frac{1}{4}$. Close to $x=0$ the vertical distance between $W^u(\hat{z})$ and $F^{E'}_a(\gamma)$ can be estimated as
\begin{equation}\label{eq:dist}
\leq \text{\rm const. } \left(\lambda_s\right)^{\frac{1}{10}E'},
\end{equation}
and the angle between the tangent directions at points with the same $x$-coordinate satisfies
\begin{equation}\label{eq:angle}
\leq \text{\rm const. } \left(\lambda_s\right)^{\frac{1}{40}E'}.
\end{equation}
\end{lem}
\begin{proof}
We apply the construction of the stable foliation in Lemmas \ref{stable-foliation} and \ref{lem:stablefoliation}. For each point $\zeta_0\in\gamma$ we connect it to a corresponding point $\zeta_0'$ on $W^u(\hat{z})$. It is then possible to apply Lemma \ref{lemma6.4} with $\tilde{z}_0=\zeta_0$, $\tilde{z}_0'=\zeta_0'$ and $\kappa=(1+\varepsilon)\lambda_s$, for a suitable $\varepsilon>0$. We conclude that the estimates \eqref{eq:dist} and \eqref{eq:angle} hold.
\end{proof}
\begin{rem}
Note that $\lambda_u\cdot \lambda_s=\det DF_a(\hat{z})$ and that the factor $\frac{1}{10}$ comes from the comparison between $\kappa$ and $\log |\lambda_u|$, where $\log 2-\varepsilon \leq \log |\lambda_u| \leq \log2$, and where $\varepsilon$ depends on $2-a$.
\end{rem}
\medskip
{\em Proof of Lemma \ref{returnlemma}}. For the following we refer to Figure \ref{Fig0}.
\begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{FigLemma} \caption{The capturing argument} \label{Fig0} \end{figure} \begin{itemize} \item[(i)] We apply Lemma \ref{lambda1}, together with Lemma \ref{stable-foliation}, (b), to $\gamma$ for the fixed parameter $\tilde a_0'\in\tilde{\omega}_0'$. At a certain time $E'\sim E$, $F_{\tilde a_0'}^{E'}(\tilde{\omega}_0')$ stretches along $W^u(\tilde a_0')$ covering its $x$-projection $\left[-\frac{1}{4},\frac{1}{4}\right]$. \item[(ii)] By the comparability of $x$ and $a$ derivatives, see Corollary \ref{cor:pardist}, during the time from $E$ to $E+E'$, and the fact that $|\tilde\omega_0'|\sim e^{-2cE}$, one can check that $z_{E+E'}(\tilde\omega_0')$ covers the $x$-projection $\left[-\frac{1}{8},\frac{1}{8}\right]$. Now restrict $\tilde\omega_0'$ to a subinterval $\tilde\omega_0''$ with midpoint $\tilde a_0''$ so that for $N=E+E'$, $|z_{N}(\tilde\omega_0'')|=\frac{1}{100}{D_N}^{-1}$. \item[(iii)] Note that, as in \cite{MoraViana}, Section 7, $z_{N}(\tilde\omega_0'')$ is a ${{ C}^2}(b)$ curve and $\text{\rm dist}\left(\pi_1 z_0(\tilde a_0''),z_N(\tilde\omega_0'')\right)\leq\frac{1}{50}D_N^{-1}$. By Lemma \ref{stable-foliation}, (b), and \eqref{eq:angle} we also obtain that the angle $\theta$ between the points of $z_N(\tilde\omega_0'')$ with the same $x$-coordinate on the first leg of $W^u(\hat{z})$ satisfies \begin{equation}\label{eq:angle1} \theta\leq \text{\rm const. } \left(\lambda_s\right)^{\frac{1}{40}E'}. \end{equation} Here we again have to use the comparison of parameter and phase derivatives, Lemma \ref{par-phase-dist1}, and the distortion of the $a$-derivative within a partition interval, see Corollary \ref{cor:pardist}. \end{itemize} \subsection{Construction of an invariant contractive region} In this section we prove the existence of an invariant contractive region around the critical point. We pick an arbitrary $a\in\tilde\omega_0''$, with $\tilde\omega_0''$ as in Lemma \ref{returnlemma}. We refer to Figure \ref{Fig2New}. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Fig2New} \caption{Stable foliation at the critical point} \label{Fig2New} \end{figure} Associated to $a$ there is a critical point $z_0(a)$ located on the first left leg of $W^u(\hat z)$, see Subsection \ref{ss:twod}. We now fix a curve $\gamma :(-\rho',\rho)\to\mathbb{R}^2$ on this left leg so that $\gamma(0)=z_0$, where $\rho=\frac{1}{10}D_N^{-1}$ and $\rho'$ will be chosen as follows. Close to the critical value $z_1$ there is, by Lemma \ref{lem:stablefoliation}, a quadrilateral foliated by leaves of the stable vector field $e_{[N/10]}$. The leaf $\gamma_3'$ of $e_{[N/10]}$ through $F(\gamma(\rho))$ hits $W^u(\hat{z})$ in another point $\zeta'$, and $\rho'$ is defined so that $F(\gamma(-\rho'))=\zeta'$. The pullback of the stable leaf $\gamma_3'$ by $F$ is denoted by $\gamma_3$. We define $\mathcal{D'_N}$ as the domain bounded by $F\left(\gamma_{|(-\rho',\rho)}\right)$ and the stable leaf $\gamma_3'$. Let $\mathcal D_N$ be the pullback under $F$, namely $\mathcal D_N=F^{-1}\left(\mathcal D'_N\right)$. We will prove that $\mathcal D_N'$, and hence also $\mathcal D_N$, is invariant under $F_a^N$ for all $a$ in $\tilde\omega_0''$. Consider the tangent vector $\tau_1(s)$ of $\gamma_1(s)=F_a(\gamma(s))$ and write it, following Lemma 9.6 in \cite{MoraViana}, as $$ \tau_1(s)=\alpha(s)e_{E-1}(s)+\beta(s)w_1, $$ with $\frac{3}{2}a|s|\leq \left|\beta(s)\right|\leq\frac{5}{2}a|s|$ and $w_1=\begin{brsm}1\\0\end{brsm}$.
Observe that, at time $E$, $$ \left\|DF^{E-1}_ae_{E-1}\right\|=O\left(b^{E-1}\right). $$ Denote by $\gamma_E^1$ and $\gamma_E^2$ the two sub-curves of $\gamma$ defined by restricting the arclength to $(-\rho',0)$ and $(0,\rho)$ respectively. For the image of these curves the tangent vector decomposes as $$ \tau_E(s)=\alpha(s)DF^{E-1}e_{E-1}(s)+\beta(s)w_{E-1}. $$ Since, by the induction, $\|w_E\|\geq e^{\kappa E}$, we conclude that $$ \left|\alpha(s)DF^{E-1}\left(e_{E-1}(s)\right)\right|\leq O(b^{E-1})\leq\frac{1}{2}|s|\|w_E\| $$ and since $\text{slope}(w_E)=O(b^t)$, it follows that $\gamma^1_E\setminus\tilde{\gamma}^1_E $ and $\gamma^2_E\setminus\tilde{\gamma}^2_E $ are $C^2(b)$ curves. The curves $\tilde{\gamma}^1_E$ and $\tilde{\gamma}^2_E$ correspond to the subsegments close to $z_E$ which are still in fold periods of the initial binding to $z_0$, and those segments are of size $(Cb)^E$. The curve $\gamma^3_E=F^E(\gamma_3)$ has, by Lemma \ref{contractive-field} (b), length $|\gamma^3_E|\leq (Cb)^E$. There is, by Lemma \ref{contractive-field}, a stable vector field $e_{E'}$ defined in a vertical region containing the curves $\gamma^1_E $, $\gamma^2_E$ and $\gamma^3_E$. By \cite{BC2} the curves $F^{E'}(\gamma^1_E)$, $F^{E'}(\gamma^2_E)$ and $F^{E'}(\gamma^3)$ are located below $\gamma$ and at distance $O(b^{E'})$. By the angle estimate \eqref{eq:angle1} it follows that, except for the points still in a fold period to $z_0$ at time $N=E+E'$, the slopes at points of the curves $\gamma'=F^{E'}(\gamma^1_E)$ and $\tilde{\gamma}'=F^{E'}(\gamma^2_E)$ with the same $x$-coordinates are $\leq (Cb)^{E'/40}$. The curve $F^{E'}(\gamma^3)$ has diameter $\leq 2\cdot 5^{E'}\cdot (Cb)^E$, and it is located close to $z_N$. At this point we choose $\rho'$ so that $F(\gamma(\rho))$ and $F(\gamma(-\rho'))$ are on the same stable leaf of $e_E$ close to $\hat z$. The curve segment $F^N(\gamma^1)$ has length \begin{eqnarray*} \text{length}(F^N(\gamma^1))&\leq &\int_{0}^{\rho}|\beta(s)|\|w_N(s)\| ds+\int_{0}^{\rho}O(b^N)\,ds\\ &\leq &\int_{0}^{\rho}4s D_N ds+O(\rho b^N)=2\rho ^2D_N+O(\rho b^N)\\ &\leq &3\left(\frac{1}{10}D_N^{-1}\right)^2\cdot D_N=\frac{3}{100}\frac{1}{D_N}. \end{eqnarray*} The length of $F^N(\gamma^2)$ is estimated similarly. Finally $$ \text{diam}(F^N(\gamma^3))\leq 5^{E'}(Cb)^E\leq\frac{2}{100}\frac{1}{D_N}. $$ It follows that $F^{N-1}\left(\mathcal D'_N\right)$ has diameter $\leq\frac{5}{100}D_N^{-1}$ and is at distance $O(b^{N-1})$ from $\gamma$. Since $\|DF\|_{{{\mathcal C}^1}}\leq 5$, it follows that $$ F^{N}\left(\mathcal D'_N\right)\subset \mathcal D'_N. $$ The discussion above can be summarized in the following lemma (see Figure \ref{Fig6}). \begin{lem} For all $a\in\tilde\omega_0''$, there exists a domain ${\mathcal D}_N(a)$ around the critical point $z_0(a)$ such that $$F_{a,b}^N\left(\mathcal D_N(a)\right)\subset\mathcal D_N(a).$$ A corresponding statement holds for the region $\mathcal D_N'(a)$ close to the critical value $F_a(z_0)$. \end{lem} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Fig6New1} \caption{The invariant region at the critical point} \label{Fig6} \end{figure} \begin{lem}\label{contraction} There exists an integer $k$ such that, for all $a\in\tilde\omega_0''$, $F_{a,b}^{Nk}$ contracts.
\end{lem} \begin{proof} Take an arbitrary point $z\in\mathcal D'_{N}(a)$ and, as in Subsection \ref{ss:splitting}, consider the unit vector $$v=\alpha_0 e_n(z)+\beta_0w_0,$$ where $w_0=\begin{brsm}1\\0\end{brsm}$ and $e_n(z)$ is the contracting direction of order $n=\left[\frac{N}{10}\right]$ at $z$. Consider the decomposition of $DF^N(z)v$ as $$DF^N(z)v=\alpha_0 DF^N(z)e_n(z)+\beta_0 DF^N(z)w_0.$$ Observe that, at the first return time $N$, $e_n(z)$ is mapped to $DF^{N}(z)e_n(z)$ with \begin{equation}\label{contres} \left\|DF^N(z)e_n(z)\right\|\leq 5^{N-n}b^n. \end{equation} Let us decompose $\alpha_0 DF^N(z) e_n(z)$ as $$\alpha_0 DF^N(z) e_n(z)=\alpha_1^s e_n\left(F^N(z)\right)+\beta_1^s w_0,$$ where, by (\ref{contres}), $|\alpha_1^s|,|\beta_1^s|\leq 5^{N-n}b^n |\alpha_0|$. Observe now that $\left\|DF^N(z)w_0\right\|= D_N$. As a consequence $$\beta_0 DF^N(z) w_0=\alpha_1^u e_n\left(F^N(z)\right)+\beta_1^u w_0,$$ where $|\alpha_1^u|\leq D_N|\beta_0|$ and $|\beta_1^u|\leq\frac{5}{10}\frac{1}{D_N}D_N |\beta_0|=\frac{1}{2}|\beta_0|$. Using the notation $\alpha_\nu=(\alpha_\nu^u,\alpha_\nu^s)$, $\beta_\nu=(\beta_\nu^u,\beta_\nu^s)$, it follows that $$\left\{ \begin{matrix} |\alpha_1|&\leq&|\alpha_1^s|+|\alpha_1^u|&\leq&5^{N-n}b^n|\alpha_0|+D_N|\beta_0|,\\ |\beta_1|&\leq&|\beta_1^s|+|\beta_1^u|&\leq& 5^{N-n}b^n|\alpha_0|+\frac{1}{2}|\beta_0|. \end{matrix} \right.$$ Let $A$ be the matrix $$A=\begin{pmatrix} 5^{N-n}b^n & D_N\\ 5^{N-n}b^n & \frac{1}{2}\end{pmatrix},$$ so that $(|\alpha_1|,|\beta_1|)$ is bounded componentwise by $A(|\alpha_0|,|\beta_0|)$. Observe that $A$ has spectral radius at most $\frac{1}{2}$. Finally we choose $k>0$ such that $\left(\frac{1}{2}\right)^kD_N^2<1$. Then $A^k$ is a contraction and therefore also $DF^{Nk}$ is a contraction. \end{proof} \section{Capturing of a new critical point}\label{sec:capturing} The next step in the construction is to create a new attractor for the same parameter values of maps with a sink, see Section \ref{sink}. This attractor can be another sink or a strange attractor. In order to do so, we need to select another critical point and follow its evolution for the same parameter values as those of the first sink constructed in the previous section. It is important that we can use the binding critical points of the initial critical point. By choosing its distance appropriately, $z_\nu(\omega)$ will follow the initial critical point, and the new critical point will still be bound to the first one at its first return time $N$. At this time there will be a secondary bound period, after which the secondary critical point is bound again. After the third bound period we will essentially be in a situation corresponding to the initial inductive situation in \cite{BC2}, \cite{MoraViana}. Using the machinery of \cite{BC2}, we will prove that the new critical point will also reach an escape situation. At this point we will be able to choose parameters which go through an unfolding of a homoclinic tangency. Following \cite{PalisTakens} and \cite{MoraViana}, this will allow us to create a new H\'enon-like family and consequently to set up the inductive procedure. More precisely, to this new H\'enon-like family one could apply Section \ref{sink} to create a new sink, or \cite{MoraViana} to create a strange attractor. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Fig3New1} \caption{Capturing of the second critical point} \label{Fig3New} \end{figure} Our aim is first to capture a new critical point $z'_0$ at a specific distance to $z_0$. We will show that the critical point $z_0$ and the segment $W^u(\hat z)$ are accumulated by leaves of $W^u(\hat z)$ which contain other critical points.
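Before proceeding, we note that the contraction mechanism of Lemma \ref{contraction} can be illustrated numerically. The following is a minimal sketch, assuming purely illustrative values of $b$, $N$, $n$ and $D_N$ (none taken from the actual construction); it builds the matrix $A$ from the proof, computes its spectral radius, and exhibits a $k$ with $\left(\frac{1}{2}\right)^k D_N^2<1$:

\begin{verbatim}
# Illustrative sketch of the contraction estimate in Lemma "contraction".
# All numerical values are demonstration values only.
import numpy as np

N = 50
n = N // 10                  # order of the contracting direction
b = 1e-14                    # small enough that 5^(N-n) b^n is negligible
D_N = np.exp(0.5 * N)        # stands in for ||w_N||

# Matrix bounding the evolution of (|alpha|, |beta|) over one return.
eps = 5.0 ** (N - n) * b ** n
A = np.array([[eps, D_N],
              [eps, 0.5]])
rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius of A:", rho)      # close to 1/2

# A k with (1/2)^k D_N^2 < 1, so A^k (hence DF^{Nk}) contracts.
k = int(np.ceil(2 * np.log(D_N) / np.log(2))) + 1
print("k =", k, "; (1/2)^k D_N^2 =", 0.5 ** k * D_N ** 2)
\end{verbatim}

The smallness of $b$ relative to $N$ in this sketch reflects the requirement, implicit in the proof, that $5^{N-n}b^n$ be negligible compared with $\frac{1}{2}$.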
Fix $a\in\omega=\tilde{\omega}_0''$ and let $z_0=z_0(\omega)$ be a critical point. We select a segment $L$ of the unstable manifold of length $2\sigma^{n_1}$ around $\hat{z}'$, see Lemma \ref{stablemanifold}, where $n_1$ is a prescribed integer. By Lemma \ref{stable-foliation} and Lemma \ref{contractive-field} it follows that the image $F^{n_2}(L)$ has length $\approx 2\sigma^{n_1}\cdot(2a)^{n_2}$. By adjusting $n_1$ and $n_2$, we obtain a sequence of long leaves $\gamma_j$ which accumulate on the first leg of $W^u(\hat{z})$ restricted to $-\frac12\leq x \leq \frac12$. This is formulated in the next lemma, where ${\rm dist}_{\rm v}(\hat{z}_0,z_0)$ denotes the vertical distance between the leaves of the unstable manifold containing the critical points $\hat{z}_0$ and $z_0$. \begin{lem}\label{lem:newcapture} There are constants $C_1$, $C_2$ such that for all $j\geq 16$ there is a critical point $\hat{z}_0$ and a corresponding segment $\hat{\gamma}^u$ containing $\hat{z}_0$ such that \begin{equation}\label{eq:spacingofwuleaves} C_1\left(\frac{\hat{d}}{2a}\right)^{j+1}\leq \text{\rm dist}_{\rm v}(\hat{z}_0,z_0)\leq C_2\left(\frac{\hat{d}}{2a}\right)^{j}, \end{equation} where $\hat{d}=\det DF(\hat z)$. \end{lem} \begin{proof} The exact estimates of \eqref{eq:spacingofwuleaves} are obtained since most of the time is spent in the linearization domain of the saddle point $\hat{z}$, where the eigenvalues are $\sim 2a$ and $\sim \hat{d}/(2a)$. \end{proof} \subsection{The new critical point} Observe that, for each $n$, $\gamma_n$ and $\mathcal F^s_p$ intersect in a unique point $z'_0$, and that $p$ depends on $n$. Pick $n$ so that the vertical distance $$d_v(\gamma_u,\gamma_n)=d_n=\frac{1}{D^{\eta}_N}$$ for a suitable $\eta$ satisfying $1<\eta<2$, to be chosen later. Moreover, by Lemma \ref{contractive-field}, $(b)$, there exists a constant $K$ close to $1$ so that $$ \frac{1}{K}\leq\frac{\max_{\pi_1\gamma_n}\left|h_u(x)-h_n(x)\right|}{\min_{\pi_1\gamma_n}\left|h_u(x)-h_n(x)\right|}\leq K, $$ where $h_u$ and $h_n$ are the graphs of $\gamma_u$ and $\gamma_n$ and $\pi_1\gamma_n$ is the projection of $h_n$ on the $x$-axis. \begin{lem} Suppose that the vertical distance satisfies $$ d_v(\gamma_u,\gamma_n)=d_n, $$ then $$ d_h(z_0,z'_0)\leq\sqrt{d_n}. $$ \end{lem} \begin{proof} This is a reformulation of Lemma $5$, Section $2.3.1$ of \cite{BenedicksYoung1} and the same proof applies also in our setting. \end{proof} \begin{lem}\label{lem:secondbd} At time $N$, $z'_N(\omega)=F^{N}(z'_0)$ is located in horizontal position to $z_0$. Moreover there exists a constant $K$ close to $1$ so that $$ \frac{1}{K}d_h(z_0,z'_N)\leq d_h(z_N,z'_N)\leq K d_h(z_0,z'_N). $$ Furthermore $$ \frac{1}{K_1}D_N^{1-\eta}\leq d_h(z_0,z'_N)\leq K_1 D_N^{1-\eta} $$ for some constant $K_1$ close to $1$. \end{lem} \begin{proof} Let $\Gamma_0$ be a curve joining $z_0$ and $z'_0$ and let $\Gamma_1$ be its image joining $z_1$ and $z'_1$ close to the critical value. On $\Gamma_0$, using Subsection \ref{ss:splitting}, we decompose the tangent vector as $$ \tau(z)=\alpha(y)e_N(z)+\beta(y)\begin{brsm}1\\0\end{brsm} $$ with $z=(x,y)\in\Gamma_0$. Consider now the vertical segment from $z_0$ to $\gamma_n$ and let $y_n,y'_n$ be the $y$-coordinates of its end points. Then $$ \frac{1}{K}d_n\leq\int^{y_n}_{y'_n}\beta(y)dy\leq K d_n, $$ with $K$ a constant close to $1$.
Use the notation $w_j=DF^j(z_0)\begin{brsm}1\\0\end{brsm}$ and apply the distortion estimates during the bound period for $w_j$, see Lemma $10.2$ in \cite{MoraViana}, which gives $$ \frac{1}{K}D_N\leq\left\|w_N\right\|\leq K D_N. $$ Furthermore $$ \frac{1}{K}\frac{1}{D^{\eta}_N}\leq d_n\leq K \frac{1}{D^{\eta}_N}. $$ This proves the last inequality of the lemma. \end{proof} \smallskip Observe now that, by Corollary $5.7$ in \cite{BC2}, $w_N$ and the tangent vector $\tau_N$ are aligned with $\gamma_u$, forming an angle smaller than $d^4_n$. Note that Lemma $5.5$ and Corollary $5.7$ in \cite{BC2} do not depend on the special form of the map and apply also in our context. As a final remark, one can notice that the distortion estimates during the bound period are stated for the phase space dynamics. They are, moreover, valid also in the parameter dependent setting because of the uniform comparison between the $x$- and $a$-derivatives, see Corollary \ref{cor:pardist}. \paragraph{The second bound period from time $N$ to time $2N$.} Note that, for $\eta$ close to $2$, $z'_{2N}(\omega)$ will still be bound to $z_N$ and that $z'_N(\omega)$ is located in horizontal position with respect to $z_0$. We repeat the same procedure as in Lemma \ref{lem:secondbd}. Join $z_0$ and $z'_N(\omega)$ by a curve $\Gamma'_0$ and decompose the tangent vector of $\Gamma'_1=F(\Gamma'_0)$ as $$ \tau(s)=A(s)e_N(s)+B(s)\begin{brsm}1\\0\end{brsm}, $$ where $B(s)$ satisfies $\frac{3a}{2}s\leq B(s)\leq\frac{5a}{2}s $, see Lemma $9.6$ in \cite{MoraViana} and Assertion $4(c)$ in \cite{BC2}. Again by the bound distortion lemma in \cite{MoraViana} (Lemma $10.2$), $d(z_N,z'_{2N}(\omega))$ and $d(z_0,z'_{2N}(\omega))$ can be estimated from below and above using $$ \frac{1}{K}s^2D_N\leq\left|\left(\int_0^sB(t)dt\right)w_N\right|\leq Ks^2 D_N, $$ where $s=d(z_0,z'_{2N}(\omega))$. A similar statement for points in horizontal position appears in \cite{BC2}, Assertion $4$, $(b)$ and $(c)$, and in \cite{MoraViana}, Corollary $10.7$. We conclude that \begin{itemize} \item[(a)] $d(z_0,z'_{2N}(\omega))$ is comparable, within a fixed constant, to $\left(D^{1-\eta}_N\right)^2D_N=D_N^{3-2\eta}$, \item[(b)] $|z'_{2N}(\omega)|$ is comparable to $|z'_{N}(\omega)|D^{1-\eta}_N D_N$, which is comparable to $D^{1-\eta}_N$. \end{itemize} \smallskip Let us now study the period when $z'_{2N+\nu}(\omega)$, $\nu\geq 0$, is bound to $z_{0}(\omega)$. We define the preliminary binding period $p_1$ as the maximal integer so that, for all $\nu\leq p_1$, $$ \left|z'_{2N+\nu}(\omega)-z_{\nu}\right|\leq e^{-\beta\nu}. $$ In principle $p_1$ could be infinite, but this is not the case. \begin{lem}\label{lem:binding} The preliminary binding period $p_1$ is finite. \end{lem} \begin{proof} The proof of this fact will follow after the proof of Lemma \ref{lem:quadratic}. \end{proof} \begin{lem}\label{lem:quadratic} Let $\rho=\left|z'_{2N}(\omega)-z_{0}\right|$. If $\nu\geq\nu_0$ is outside of all folding periods, then \begin{equation}\label{prelboundper} \frac{3a}{4}\rho^2\left\|w_{\nu}\right\|\leq \left|z'_{2N+\nu}(\omega)-z_{\nu}\right|\leq \frac{5a}{4}\rho^2\left\|w_{\nu}\right\|, \end{equation} where $w_{\nu}=DF^{\nu}(z_0)w_0$. \end{lem} \begin{proof} We introduce a horizontal curve $\Gamma_0$ joining $z_0$ and $z'_{2N}$ with tangent vector $\tau(s)$. The length of $\Gamma_{\nu}=F^{\nu}(\Gamma_0)$ is equal to $$ \int_0^{\rho}\left\|DF^{\nu}(\Gamma_0(s))\tau_0(s)\right\|ds.
$$ We decompose $$ \tau_1(s)=A(s)e_{\nu-1}+B(s)\begin{brsm}1\\0\end{brsm}, $$ and then $$ \tau_{\nu}(s)=A(s)DF^{\nu-1}(\Gamma_0(s))e_{\nu-1}+B(s)DF^{\nu-1}(\Gamma_0(s))\begin{brsm}1\\0\end{brsm}, $$ where \begin{equation}\label{Bsbounds} \frac{3a}{2}s\leq \left|B(s)\right|\leq \frac{5a}{2}s, \end{equation} see Section $8$ in \cite{MoraViana}. We apply the splitting algorithm from Section $8$, $(i)$--$(v)$ in \cite{MoraViana} to $DF^{\nu-1}(\Gamma_0(s))$. If $\nu$ is outside of all fold periods, it follows from \eqref{Bsbounds}, by integrating, that $$ \frac{3}{4}a\rho^2\left\|w_\nu\right\|\leq\int_0^{\rho}\left\|\tau_\nu(s)\right\|ds \leq \frac{5}{4}a\rho^2\left\|w_\nu\right\|. $$ We conclude that Lemma \ref{lem:quadratic} holds. \end{proof} {\em Proof of Lemma \ref{lem:binding}}. By the basic assumption, which is part of the induction, see Assertion 4 (ii) in Subsection \ref{ss:twod}, $$ d(z_\nu(a),\mathcal C)\geq e^{-\alpha \nu}, $$ and $\rho=d(z_\nu(a),\mathcal C)$. Since by the induction $||w_\nu||\geq e^{\kappa \nu}$, $\nu=1,2,\dots,n$, it follows that $p_1<\infty$. \demo Suppose now that at the time $p_1$ $$ \left\|z'_{2N+p_1+1}(\omega)-z_{p_1+1}\right\|\geq e^{-\beta(p_1+1)}. $$ We follow an argument from \cite{BC2}, Subsection 6.2. It follows from the basic assumption $d(z_\nu(a),\mathcal C)\geq e^{-\alpha \nu}$, see Assertion 4 (ii) in Subsection \ref{ss:twod}, that the deepest and longest bound period for $z_{p_1}$ satisfies $\tilde p_1\leq 4\alpha p_1$. The next level bound period satisfies $\tilde p_2\leq 4\alpha\tilde p_1$. As a consequence the length of the combined bound period of $z_{p_1}$ will be less than $$ \sum_\nu {\tilde p_{\nu}}\leq 4\alpha p_1+(4\alpha)^2p_1+\dots=\frac{4\alpha}{1-4\alpha}p_1. $$ This means that at the time $p$, $$ 3\rho^2\left\|w_p\right\|\geq e^{-\beta p_1}\frac{1}{4^{4\alpha p_1/(1-4\alpha)}}. $$ But $p_1\leq p\leq \left(1+\frac{4\alpha}{1-4\alpha}\right)p_1$. If we choose $\beta=10\alpha$ as in \cite{BC2} we obtain \begin{equation}\label{firstp} 3\rho^2\left\|w_p\right\|\geq e^{-\frac{3}{4}\beta p_1} \end{equation} and also \begin{equation}\label{firstp2} 3\rho^2\left\|w_p\right\|\geq \rho^2 e^{-\beta p}. \end{equation} We can choose $\beta_1$ satisfying $$ \frac{3}{4}\beta\leq\beta_1\leq\beta $$ so that we have the estimate $$ \rho^2\left\|w_p\right\|\geq C^{-1}e^{-\beta_1 p}. $$ Let us also denote $D_p=\left\|w_p\right\|$. This means that, with $p$ as in \eqref{firstp2}, $$ C^{-1}e^{-\beta_1 p}\leq D_p\left(D_N^{1-\eta}\right)^2\leq e^{-\beta_1 p}. $$ On the other hand $$ e^{(c_1-\alpha) p}\leq D_p\leq e^{c_1 p}, $$ so we obtain that $$ C^{-1}D_p^{-\beta_2}\leq D_p\left(D_N^{1-\eta}\right)^2\leq CD_p^{-\beta_2}, $$ where $\frac{\beta_1}{c_1}\leq\beta_2\leq \frac{\beta_1}{c_1-\alpha}$. Hence $$ C^{-1}D_N^{\frac{2(\eta-1)}{1+\beta_2}}\leq D_p\leq CD_N^{\frac{2(\eta-1)}{1+\beta_2}}. $$ Note that the estimate $$C^{-1}D_p^{-\beta_2}\leq\rho^2D_p\leq CD_p^{-\beta_2}$$ implies that $$C^{-1/2}D_p^{-\frac{1}{2}\beta_2}\leq \rho D_p^{\frac{1}{2}}\leq C^{1/2}D_p^{-\frac{1}{2}\beta_2},$$ and we obtain that $$ \left|z'_{2N+p}(\omega)\right|\sim \left|z'_{2N}(\omega)\right|2a\rho D_p\sim 2aD_{N}^{1-\eta}D_p^{\frac{1}{2}-\frac{1}{2}\beta_2}. $$ We now choose $\eta=\frac{3}{2}+\epsilon$. This means that $$ \left|z'_{2N+p}(\omega)\right|\geq 2aD_{N}^{-\frac{1}{2}-\epsilon}D_p^{\frac{1}{2}-\frac{1}{2}\beta_2}=2aD_{N}^{-\frac{1}{2}-\epsilon}D_N^{\left(\frac{1}{2}-\frac{1}{2}\beta_2\right)\frac{2\left(\frac{1}{2}+\epsilon\right)}{1+\beta_2}}.
$$ If $\epsilon=\frac{\beta_2}{2}$ we obtain that the right-hand side equals $2aD_{N}^{-\frac{\beta_2}{2}-\frac{\beta_2}{2}}=2aD_{N}^{-{\beta_2}}$. We then follow the segment until the next return $2N+p+\ell$, and $$ \left|z'_{2N+p+\ell}(\omega)\right|\geq \text{const}\,D_{N}^{-{\beta_2}}. $$ Since $D_{N}\geq e^{\kappa N}$, we obtain $$ \left|z'_{2N+p+\ell}(\omega)\right|\geq \text{const}\, e^{-\kappa \beta_2N} $$ and the free period satisfies $\ell\leq \beta_2\kappa\kappa_1^{-1}N$, where $\kappa_1$ is the Lyapunov exponent associated to the dynamics outside of $(-\delta,\delta)$. Moreover, the time $2N+p+\ell$ is less than or equal to $3N$. We can now relax the condition of the basic assumption, see Subsection \ref{ss:twod}, and apply the machinery to a subinterval $\omega'\subset\omega$ which is chosen so that $$ \left|z'_{2N+p+\ell}(\omega')\right|\geq\frac{1}{4} \left|z'_{2N+p+\ell}(\omega)\right|. $$ As a consequence $$ \left|z'_{2N+p+\ell}(\omega')\right|\geq \text{const}'\,e^{-\kappa \beta_2N}. $$ The corresponding bound period for a return time to a position at horizontal distance $e^{-r'}$ with $r'\leq \beta_2 N$ has length smaller than or equal to $4\beta_2N<N$. In particular, we can use that the induction is valid up to time $N$ and we can repeat the argument for $\left|z'_{2N+p+\ell}(\omega')\right|$. At the expiration time of the new bound period $p_1$, $\left|z'_{2N+p+\ell+p_1}(\omega')\right|$ satisfies $$ \left|z'_{2N+p+\ell+p_1}(\omega')\right|\geq \text{const}\,e^{r'(1-3\beta)}\left|z'_{2N+p+\ell}(\omega')\right|, $$ see \eqref{eq:largedeviation}. After a finite number of steps $s$, at time $n_s$ and for a parameter interval $\omega^{(s)}$, we have $$ \left|z_{n_s}\left(\omega^{(s)}\right)\right|\geq \frac{1}{10}. $$ We are then in an escape situation and the argument in Section \ref{sink} applies. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Fig4New} \caption{Long escape situation for the second critical point} \label{Fig4New} \end{figure} \section{Construction of a tangency}\label{sec:tangency} We aim to construct a non-degenerate quadratic tangency at the long escape time $\tilde N$. Pick $a\in\tilde\omega$ and consider the $\mathcal C^2(b)$ curve $\gamma_a$ containing the critical point $\tilde z_0(a)$. We will prove that a suitable subcurve $\tilde\gamma_a\subset F_{a}^{\tilde N}(\gamma_a)$ containing $F_{a}^{\tilde N} (\tilde z_0(a))$ has very high curvature at $F_{a}^{\tilde N}(\tilde z_0(a))$. We denote by $t(z)$ the tangent vector at $z$, by $e_{\tilde N}(z)$ the most contractive vector at time $\tilde N$, and by $w_{\tilde N}(z)=DF_a^{\tilde N-1}(F(z))\begin{brsm}1\\0\end{brsm}$. Let $u$ be the arclength of $\tilde\gamma_a$, which is $0$ at $\tilde z_0$. Denote by $$\left\{\begin{matrix} E_{\tilde N}(u)=e_{\tilde N}(z(u))\\ W_{\tilde N}(u)=w_{\tilde N}(z(u))\\ \tau(u)=t(z(u)) \end{matrix}\right.$$ We decompose the tangent vector $\tau(u)$ along $\tilde\gamma_a$ as $$\tau(u)=A(u)E_{\tilde N}(u)+B(u)W_{\tilde N}(u).$$ We have \begin{equation}\label{main} \zeta_{\tilde N}-\tilde z_{\tilde N}=\int_0^{\rho}\left(A(u)E_{\tilde N}(u)+B(u)W_{\tilde N}(u)\right)du, \end{equation} where $\zeta_0=\zeta_0(\rho)$ is an arbitrary point on $\tilde\gamma_a$ at arclength $\rho$ from $\tilde z_0$ and $\zeta_{\tilde N}=F_a^{\tilde N}\left(\zeta_0(\rho)\right)$.
Differentiating \eqref{main} twice, we get \begin{equation}\label{main1} \zeta'_{\tilde N}(\rho)=A(\rho)E_{\tilde N}(\rho)+B(\rho)W_{\tilde N}(\rho) \end{equation} and \begin{equation}\label{main2} \zeta''_{\tilde N}(\rho)=A'(\rho)E_{\tilde N}(\rho)+A(\rho)E'_{\tilde N}(\rho)+B'(\rho)W_{\tilde N}(\rho)+B(\rho)W'_{\tilde N}(\rho). \end{equation} \begin{lem}\label{wprime} For all $\rho>0$, $$\left|W'_{\tilde N}(\rho)\right|\leq 25^{\tilde N}.$$ \end{lem} \begin{proof} Observe that $$ W_{\tilde N}(\rho)=DF(x_{\tilde N-1},y_{\tilde N-1})\dots DF(x_{1},y_{1})\begin{brsm}1\\0\end{brsm}. $$ By differentiating with respect to $\rho$ and using the product rule and the triangle inequality, one gets $$ \left|W'_{\tilde N}(\rho)\right|\leq\sum_{i}\left(\prod_{j\neq i}\left\|DF(x_{j},y_{j})\right\|\right)\left\|P_i\right\|, $$ where $$P_i=\frac{d}{d\rho}\left[\begin{matrix} -2x_i+\partial_x\varphi_1 & \partial_y\varphi_1\\ \partial_x\varphi_2& \partial_y\varphi_2 \end{matrix}\right].$$ Since the $C^2$ norms of $\varphi_1$ and $\varphi_2$ are bounded by $Cb^{t/2}$, see \cite{MoraViana}, Section 7A, we get \begin{eqnarray*} \left|W'_{\tilde N}(\rho)\right|\leq \sum_{i}\left[\left(\frac{9}{2}\right)^{\tilde N-1}\cdot 3\left(\frac{9}{2}\right)^{i}\right]\leq 25^{\tilde N}, \end{eqnarray*} where we used that $\left\|W_i\right\|<\left(\frac{9}{2}\right)^{i}$ (since $\left\|DF\right\|<\frac{9}{2}$). \end{proof} \begin{prop}\label{bigcurvature} Let $ \rho_0=\frac{\left|E_{\tilde N}(0)\right|}{\left\|W_{\tilde N}(0)\right\|} $. Then for all $\rho\in[-\rho_0,\rho_0]$, the curvature $\kappa\left(\zeta_{\tilde N}(\rho)\right)$ of $\zeta_{\tilde N}(\rho)$ satisfies $$\kappa\left(\zeta_{\tilde N}(\rho)\right)\geq \frac{C_1}{2}\frac{|W_{\tilde N}(\rho)|}{|E_{\tilde N}(\rho)|^2},$$ with $2\leq C_1\leq 4$. \end{prop} \begin{rem} Observe that the factor $\frac{1}{2}$ appearing in the curvature estimate above can be replaced by any number less than $1$, if $b$ is sufficiently small. \end{rem} \begin{proof} Recall that $$\kappa(\rho)=\frac{\left|\zeta'_{\tilde N}(\rho)\times\zeta''_{\tilde N}(\rho) \right|}{\left|\zeta'_{\tilde N}(\rho)\right|^3}.$$ We start by computing $\zeta'_{\tilde N}(\rho)\times\zeta''_{\tilde N}(\rho)$. We get \begin{eqnarray*} \zeta'_{\tilde N}(\rho)\times\zeta''_{\tilde N}(\rho)&=&A(\rho)A'(\rho)E_{\tilde N}(\rho)\times E_{\tilde N}(\rho)+A(\rho)^2E_{\tilde N}(\rho)\times E'_{\tilde N}(\rho)\\ &+&A(\rho)B'(\rho)E_{\tilde N}(\rho)\times W_{\tilde N}(\rho)+A(\rho)B(\rho)E_{\tilde N}(\rho)\times W'_{\tilde N}(\rho)\\ &+&A'(\rho)B(\rho)W_{\tilde N}(\rho)\times E_{\tilde N}(\rho)+A(\rho)B(\rho)W_{\tilde N}(\rho)\times E'_{\tilde N}(\rho)\\ &+&B(\rho)B'(\rho)W_{\tilde N}(\rho)\times W_{\tilde N}(\rho)+B(\rho)^2W_{\tilde N}(\rho)\times W'_{\tilde N}(\rho), \end{eqnarray*} and since $E_{\tilde N}(\rho)\times E_{\tilde N}(\rho)=W_{\tilde N}(\rho)\times W_{\tilde N}(\rho)=0$, \begin{eqnarray*} \zeta'_{\tilde N}(\rho)\times\zeta''_{\tilde N}(\rho)&=& \left(A(\rho)B'(\rho)-A'(\rho)B(\rho)\right)E_{\tilde N}(\rho)\times W_{\tilde N}(\rho)\\ &+& A(\rho)^2E_{\tilde N}(\rho)\times E'_{\tilde N}(\rho)+B(\rho)^2W_{\tilde N}(\rho)\times W'_{\tilde N}(\rho)\\ &+&A(\rho)B(\rho)E_{\tilde N}(\rho)\times W'_{\tilde N}(\rho)+A(\rho)B(\rho)W_{\tilde N}(\rho)\times E'_{\tilde N}(\rho). \end{eqnarray*} Observe that, by \cite{MoraViana}, for all $\rho\geq 0$, \begin{eqnarray*} 2\rho\leq B(\rho)&\leq & 4\rho\\ B'(\rho)&=&2ax'+O(b)=C_1+O(b)\\ A(\rho)&=&1+O(\rho^2)\\ A'(\rho)&=&O(\rho)\\ \end{eqnarray*} with $2\leq C_1\leq 4$.
The following estimates hold. \begin{eqnarray*} \left|\left(A(\rho)B'(\rho)-A'(\rho)B(\rho)\right)E_{\tilde N}(\rho)\times W_{\tilde N}(\rho)\right|&\geq&\frac{3}{4}C_1\left|E_{\tilde N}(\rho)\times W_{\tilde N}(\rho)\right|\\ &\geq&\frac{C_1}{2}\left|E_{\tilde N}(\rho)\right|\left| W_{\tilde N}(\rho)\right|, \end{eqnarray*} where we used the angle estimate between $W_{\tilde N}$ and $E_{\tilde N}$, see formula $(9)$, Section $6$ in \cite{MoraViana}. By Lemma $6.8$ in \cite{MoraViana}, we get \begin{eqnarray*} \left|E_{\tilde N}(\rho)\times E'_{\tilde N}(\rho)\right|&\leq&\left|E_{\tilde N}(\rho)\right| \left| E'_{\tilde N}(\rho)\right| \\ &\leq&\left|E_{\tilde N}(\rho)\right| \left(K_1 b\right)^{\tilde N-3}, \end{eqnarray*} with $K_1>0$. By Lemma \ref{wprime} we have \begin{eqnarray*} \left|B(\rho)^2 W_{\tilde N}(\rho)\times W'_{\tilde N}(\rho)\right|&\leq&\left|W_{\tilde N}(\rho)\right|4^2\rho^2 25^{\tilde N} \\ &\leq&\left|W_{\tilde N}(\rho)\right|\left|E_{\tilde N}(\rho_0)\right|\cdot16\left(\frac{\left|E_{\tilde N}(\rho_0)\right|}{\left|W_{\tilde N}(\rho_0)\right|^2}25^{\tilde N}\right)\\ &\leq&\frac{1}{100}\left|W_{\tilde N}(\rho_0)\right|\left|E_{\tilde N}(\rho_0)\right|, \end{eqnarray*} where we used that $|\rho|^2\leq |\rho_0|^2=\frac{\left|E_{\tilde N}(0)\right|^2}{\left|W_{\tilde N}(0)\right|^2}$ and $\left|E_{\tilde N}(\rho_0)\right|<\left(\frac{Kb}{\kappa}\right)^{\tilde N}$, $K,\kappa>0$, see formula $(5)$ of Section $6$ in \cite{MoraViana}. By Lemma $6.8$ in \cite{MoraViana}, \begin{eqnarray*} \left|A(\rho)B(\rho) W_{\tilde N}(\rho)\times E'_{\tilde N}(\rho)\right|&\leq& 8 |\rho| \left| W_{\tilde N}(\rho)\right|\left(K_1 b\right)^{\tilde N-3} \\ &\leq& 8 \frac{\left| E_{\tilde N}(\rho_0)\right|}{\left| W_{\tilde N}(\rho_0)\right|}\left| W_{\tilde N}(\rho_0)\right|\left(K_1 b\right)^{\tilde N-1}\\ &\leq&\frac{1}{100} \left| E_{\tilde N}(\rho_0)\right|\left| W_{\tilde N}(\rho_0)\right|. \end{eqnarray*} By Lemma \ref{wprime} we have \begin{eqnarray*} \left|A(\rho) B(\rho) E_{\tilde N}(\rho)\times W'_{\tilde N}(\rho)\right|&\leq& 8 |\rho| \left|E_{\tilde N}(\rho)\right| 25^{\tilde N} \\ &\leq&8 \left|W_{\tilde N}(\rho_0)\right|\left|E_{\tilde N}(\rho_0)\right|\frac{\left|E_{\tilde N}(\rho_0)\right|}{\left|W_{\tilde N}(\rho_0)\right|^2} 25^{\tilde N}\\ &\leq&\frac{1}{100}\left|W_{\tilde N}(\rho_0)\right|\left|E_{\tilde N}(\rho_0)\right|, \end{eqnarray*} where we again used $|\rho|^2\leq |\rho_0|^2=\frac{\left|E_{\tilde N}(0)\right|^2}{\left|W_{\tilde N}(0)\right|^2}$ and $\left|E_{\tilde N}(\rho_0)\right|<\left(\frac{Kb}{\kappa}\right)^{\tilde N}$, $K,\kappa>0$, see formula $(5)$ of Section $6$ in \cite{MoraViana}. The proof of the proposition is concluded by combining the previous five estimates. \end{proof} \subsection{Quadratic Tangency} We prove that in a long escape situation a quadratic tangency appears. \begin{prop} Let $z_E(\omega)$ be a curve segment of critical values in an escape situation that intersects {$\gamma^s$}, the leg of $W^s(\hat{z})$ pointing downwards. Then there exists a unique $a_0\in\omega$ such that the tangency between {$\gamma^s_{a_0}$} and {$\gamma^u_{a_0}$} is quadratic. \end{prop} \begin{rem}\label{rem:curv} Actually, the curvature of {$\gamma^s_{a_0}$} is close to zero, while the curvature of $\gamma^u_{a_0}$ is close to its maximal value, which is $2\frac{|W_N|}{|E_N|^2}$ within a factor close to $1$.
\end{rem} \begin{proof} By Proposition \ref{bigcurvature}, the $\rho$ which makes the slope equal to $-{C}/{\sqrt b}$ is roughly $$\rho=-\frac{|E_N|}{2C|W_N|}\sqrt b.$$ Observe that this $\rho$ belongs to the interval $(-\rho_0,\rho_0)$, so Proposition \ref{bigcurvature} gives the required lower bound for the curvature. \end{proof} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Fig5New} \caption{Quadratic tangency} \label{Fig5New} \end{figure} \section{Proof of Theorems \ref{A}, \ref{B} and \ref{C}} The proof of Theorems \ref{A}, \ref{B} and \ref{C} is done by induction. In Sections \ref{sink} and \ref{sec:tangency} we selected maps with a sink and a new tangency. We now reapply Section \ref{sink} to get a second sink and Section \ref{sec:tangency} to get a new tangency. One could stop this process after $k$ steps; at this moment one would have $k$ sinks and a new tangency. This tangency is then used to create a strange attractor using \cite{MoraViana}, which gives the proof of Theorem \ref{A}. Alternatively, one could continue the process infinitely many times to get infinitely many sinks. This leads to the proof of Theorem \ref{B}. The inductive procedure is formulated in the next proposition. \begin{prop}\label{mainprop} There exists $K>0$ such that, for all $k=0,1,\dots , K$, there are parameter intervals $\omega_k$ with $\omega_k\subset\omega_{k-1}$, so that, for all $a\in\omega_k$, there is a $\mathcal C^2(b)$ curve $\gamma_k(a)\subset W^u(\hat z)$ with $z_k(a)\in\gamma_k(a)$. Moreover, for all $k=0,1,\dots, K$ there are regions $\mathcal D_{N_k}(a)$, with $\mathcal D_{N_j}(a)\cap \mathcal D_{N_i}(a)=\emptyset$ for all $i\neq j$, such that $\mathcal D_{N_k}(a)$ is bounded by $\gamma_k(a)$ and parabolic leaves of $W^s_{\text{loc}}$ and contains a unique sink. \end{prop} \begin{proof} We proceed by induction; the case of one sink appears in Section \ref{sink}. Assume that we have already constructed $k$ sinks and that a parameter interval $\omega^{(k)}$ corresponding to the critical point $z^{(k+1)}_0$ is in an escape situation and intersects $W^s(\hat{z})$. We now have an unfolding of a homoclinic tangency as in Palis--Takens \cite{PalisTakens} and \cite{MoraViana}. We can then do the renormalization procedure associated to this unfolding as in these papers and we obtain a new renormalized H\'enon-like family. This allows us to create a new sink as in Section \ref{sink}, and we also obtain a new escape situation following the argument in Section \ref{sec:capturing}. \end{proof} {\it Proof of Theorem \ref{A}.} The proof is a small modification of that of Proposition \ref{mainprop}. The only difference is that, at the time $k$, instead of constructing a new sink one creates a strange attractor as in \cite{MoraViana} at the homoclinic unfolding.\demo \medskip {\it Proof of Theorem \ref{B}.} The proof is a minor modification of that of {Theorem \ref{A}}. The only difference is that instead of switching to the construction of a strange attractor after $k$ steps, we continue to construct more and more sinks. In the limit we obtain Newhouse parameters. Note that the renormalizations take parameters of a specific H\'enon-like family linearly to new renormalized parameters of the corresponding H\'enon-like family. For each renormalization of order $k$, we get a set $A'_k$ of parameters in the renormalized H\'enon-like family of maps with $k$ sinks. We denote by $A_k$ the pullback of $A'_k$, containing parameters of the original H\'enon-like family.
Consider now a non-empty closed subset $B_k$ of $A_k$ and denote by $B'_k$ the push-forward of $B_k$. At this point we do another renormalization and we get a sequence of inclusions $$ A_1\supset B_1\supset\dots\supset A_k\supset B_k\supset A_{k+1}\supset\dots. $$ The intersection $$ \bigcap_{k=1}^{\infty}A_k $$ is then non-empty, being a decreasing intersection of non-empty compact sets (the sets $B_k$ are closed and bounded), and hence so is the set of maps with infinitely many sinks. \demo {\it Proof of Theorem \ref{C}.} This result is a direct consequence of Theorem \ref{A} and Theorem \ref{B}, since the H\'enon family is a special example of a H\'enon-like family.\demo \section{Construction of two coexisting strange attractors} In this section we prove the existence of two strange attractors for a parameter set of positive Lebesgue measure within the classical H\'enon family. We first outline the proof. The idea is to find parameters with two coexisting homoclinic tangencies. To do this we consider two very close critical points which are in an escape situation simultaneously. We must choose them very carefully so that their images are at a suitable distance at the escape situation. To do this we have to choose their initial distance and the time they spend in the hyperbolic region outside of $(-\delta,\delta)$ carefully. We also have to adjust $b$, and therefore we need a distortion estimate which includes a comparison of the $b$-derivative and the phase derivative. When we have the two tangencies we can follow \cite{MoraViana} and \cite{PalisTakens} to create two sets of large one-dimensional Lebesgue measure with strange attractors, which must intersect. Finally we can perturb in $b$ to get a parameter set of positive two-dimensional Lebesgue measure. We return to the construction of the first critical point $z_0$ and the corresponding long escape situation of Section \ref{sec:long_escape_situation}. We fix $b<b_0$ and by Lemma \ref{lem:long_escape_sit} we see that there is a subinterval $\tilde{\omega}_0$ such that $z_E(\tilde{\omega}_0)$ is in a long escape situation. We now construct a second critical point $\hat{z}_0$. The construction is similar to the corresponding one in Section \ref{sec:capturing}. The difference is that $\hat{z}_0$ will be chosen much closer to $z_0$ vertically than $\tilde{z}_0$ is to $z_0$, and its distance can be chosen exponentially well spaced, see \eqref{eq:spacingofwuleaves}. From Lemma \ref{lem:newcapture}, choose $j$ and the corresponding $\hat{z}_0$ so that $j$ is the minimal integer such that for all $a\in\tilde{\omega}_0$, at time $E$, $\hat{z}_E$ is still bound to $z_E$. \medskip \subsection{Comparison between $b$-derivatives and phase derivatives} The aim now is to obtain a simultaneous tangency for an image of $\gamma^u\ni z_0$ and an image of $\hat{\gamma}^u\ni \hat {z}_0$. To do this we need to understand the dependence on the parameter $b$. The analysis is similar to that of the $a$-dependence in \cite{BC2}, but because of the differences we carry out some details. We fix $a$ and study the $b$-dependent curves $$ b \mapsto z_\nu(a,b). $$ Write \begin{equation} \begin{cases} x_{\nu+1}=1-ax_\nu(b)^2+y_\nu(b)\\ y_{\nu+1}=bx_\nu(b). \end{cases} \end{equation} The tangent vectors satisfy \begin{equation}\label{tangent-iter} \tau_{\nu+1}=\begin{pmatrix} -2ax_\nu & 1\\ b & 0 \end{pmatrix}\tau_\nu+\begin{pmatrix} 0 \\ x_\nu\end{pmatrix}. \end{equation} A short numerical sketch of this recursion is given below; after it we state the analogue, for the $b$-dependence, of Lemma 8.1 on the $a$-dependence in \cite{BC2}.
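This is a minimal sketch of the recursion \eqref{tangent-iter}, assuming the classical H\'enon iteration and purely illustrative values of $a$, $b$ and of the initial data (none taken from the actual construction):

\begin{verbatim}
# Illustrative sketch: iterate the b-derivative tangent recursion
#   tau_{nu+1} = M_nu tau_nu + (0, x_nu),
# with M_nu = [[-2 a x_nu, 1], [b, 0]], along a Henon orbit.
# Parameter values and initial data are demonstration values only.
import numpy as np

a, b = 1.99, 1e-4
x, y = 0.0, 0.0                  # schematic starting point near x = 0
tau = np.array([0.0, 1.0])       # schematic initial tangent vector tau_0
for nu in range(20):
    M = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])
    tau = M @ tau + np.array([0.0, x])
    x, y = 1.0 - a * x * x + y, b * x
print("tau_20 =", tau)
\end{verbatim}

One checks, for instance, that the first steps reproduce $\tau_1=(1,0)+{\mathcal O}(b)$ and $\tau_2=(-2ax_1,1)+{\mathcal O}(b)$, in agreement with the expressions in the proof of Lemma \ref{pardist} below.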
\begin{lem}\label{pardist} Let $\theta=C/\log\left(1/b\right)$, with $C$ a positive constant, see Subsection \ref{ss:twod}. Let $z_0$ be a critical point of generation $g\leq 3\theta N$ with a free return at time $n$. Let $\tau_\nu$ be defined by \eqref{tangent-iter}. Then if $a>a_0$ and $0<b<b_0$, $$ \tau_n(z_0)=\lambda_n(z_0)w_n(z_0)+{\mathcal O}(1), $$ where $|\lambda_n(z_0)-\lambda(z_0)|\leq C_1e^{-c_0n}$ for all $z_0$ and free returns $n$, with constants $C_1$ and $c_0$ independent of $z_0$ and $n$, and $\lambda=\lambda(z_0)$ uniformly bounded and bounded away from zero. \end{lem} \begin{rem} In our case we will apply Lemma \ref{pardist} to the critical point $z_0$ on $G_1$ and to the captured critical point $\hat{z}_0$. Note that $\hat{z}_0$ does not satisfy the generation condition $g \leq 3\theta N$. However, the result and the proof work for $\hat{z}_0$ as well, since $\hat{z}_0$ will have the same binding points as $z_0$ and these binding points are of the correct generation. \end{rem} In the proof of Lemma \ref{pardist} we need the following lemma from \cite{BC2}. \begin{lem} There is a constant $C$ so that for any $m<n$ $$ ||Df^{n-m}(z_m)||\leq C e^{-c''m}||w_n||, \qquad c''=c-\frac{C}{\log (1/b)}. $$ Here $c$ is the exponent in the inductive lower bound $$ ||w_\nu^*||\geq e^{c\nu},\qquad \nu=1,\dots $$ \end{lem} {\it Proof of Lemma \ref{pardist}.} The proof is analogous to the corresponding proof of Lemma 8.1 in \cite{BC2}. We use the explicit formula of the H\'enon map for the dependence on $b$, so the argument does not extend to the H\'enon-like setting of \cite{MoraViana}. Consider the critical point $z_0$ and its iterates as $b$-dependent curves $$ b\mapsto z_\nu(a;b) $$ and denote the corresponding tangent vectors by $\tau_\nu$. We represent the first generation $G_1$ of the unstable manifold as $y=b\varphi(x,a,b)$. By \cite{BC2}, Lemma 4.1, $$ ||\varphi(x,a,b)||_{C^2(x,a,b)} \leq\text{Const.} $$ The critical point $(x_0,y_0)=(x_0,\varphi (x_0,a,b))$ is defined, using the notation of \cite{BC2}, as the solution of $$ b\frac{\partial\varphi(x_0,a,b)}{\partial x_0}=q(x_0,a,b)=2ax_0+H(x_0,a,b), $$ where (cf. \cite{BC2}, Lemma 5.1) $$ \left|\frac{\partial H}{\partial b}\right| < \text{\rm Const.} $$ for the ranges of $x_0$ considered. Hence, taking $$ \tau_0=\left(\frac{dx_0}{db},\frac{d}{db}(b\varphi(x_0,a,b))\right), $$ we have $\tau_0=({\mathcal O}(1),1)+{\mathcal O}(b)$, $\tau_1=(1,0)+{\mathcal O}(b)$, $\tau_2=\tau_2'+\varphi_2$, where $\tau_2'=(-2ax_1,0)+{\mathcal O}(b)$ and $\varphi_2=(0,1)$. Using the notation \begin{equation} M_\nu=\begin{pmatrix}-2ax_\nu & 1\\ b& 0 \end{pmatrix}, \end{equation} \begin{equation} \varphi_\nu=(0,x_\nu),\qquad \nu=2,3,\cdots, \end{equation} we can write \begin{equation} \tau_n=\sum_{\nu=2}^{n-2}(M_{n-1}M_{n-2}\cdots M_{\nu})\varphi_\nu +\varphi_{n-1}+ M_{n-1}M_{n-2}\cdots M_{2}\tau_2', \end{equation} or alternatively \begin{equation}\label{tau-rec} \tau_n=\sum_{\nu=1}^{n-2} Df^{n-\nu-1}(z_{\nu+1})\varphi_\nu+\varphi_{n-1}+Df^{n-2}(z_2)\tau_2'. \end{equation} We can essentially follow line by line the proof of Lemma 8.1 in \cite{BC2}. The main difference is that in our version the vectors are $\varphi_\nu=(0,x_\nu)$. These vectors will be mapped by $Df(z_\nu)$ to $\tilde{\varphi}_\nu=(x_\nu,0)$, and since $x_\nu<0$ for $\nu<\nu_0(a,b)$, we have essentially the same situation as in \cite{BC2}. We write $$ M_{n-1}M_{n-2}\cdots M_{\nu+1}\varphi_\nu= M_{n-1}M_{n-2}\cdots M_{\nu+2}\tilde{\varphi}_\nu.
$$ Let $C_\nu(n)w_n$ be the orthogonal projection of the vector $M_{n-1}M_{n-2}\cdots M_{\nu+1}\varphi_\nu$ on the line generated by $w_n$. We continue the proof of Lemma \ref{pardist}, stating and proving the following claim. {\it Claim.} We can write \begin{itemize} \item[(i)] $$ M_{n-1}M_{n-2}\cdots M_{\nu+1}\varphi_\nu=C_\nu(n)w_n+{\mathcal O}\left(b^{(n-\nu-1)/2}\right). $$ \item[(ii)] Here $$ |C_\nu(n)|\leq C\cdot e^{-c''\nu}. $$ \item[(iii)] There are $\{C_\nu\}_{\nu=0}^\infty$ such that $ |C_\nu(n)-C_\nu|\leq \text{\rm Const.}\, b^{(n-\nu)/2}$, where $n$ is a free return, $n\geq 2\nu$. \end{itemize} {\it Proof of claim.} Write $$ \tilde{\varphi}_\nu=\xi_\nu^{(n)}e_\nu^{(n)}+\eta_\nu^{(n)}f_\nu^{(n)} $$ and $$ w_\nu=x_\nu^{(n)}e_\nu^{(n)}+y_\nu^{(n)}f_\nu^{(n)}, $$ where $e_\nu^{(n)}$ and $f_\nu^{(n)}$ are the most contracting and most expanding directions, respectively, of $Df^{n-\nu}(z_\nu)$. By Lemma 6.1(a) in \cite{MoraViana}, $$ |\text{angle}(e_{\nu}^{(n')},e_{\nu}^{(n'')})|\leq \frac{4K}{\kappa}\left(K^2b/\kappa^2\right)^{n'} $$ holds if $n'<n''$ are two free returns. Since $e_\nu^{(n)}\perp f_\nu^{(n)}$, also $$ |\text{angle}(f_{\nu}^{(n')},f_{\nu}^{(n'')})|\leq \frac{4K}{\kappa}\left(K^2b/\kappa^2\right)^{n''}. $$ As a consequence, $\eta_\nu^{(n)}$ and $y_\nu^{(n)}$ converge as $n$ goes to infinity, and $$ Df^{n-\nu}(z_\nu)\tilde{\varphi}_\nu=\eta_\nu^{(n)}\left(Df^{n-\nu}f_\nu^{(n)}\right)+{\mathcal O}(b^{(n-\nu)/2}) =\frac{\eta_\nu^{(n)}}{||w_\nu||}\cdot w_n+ {\mathcal O}(b^{(n-\nu)/2}), $$ $$ w_n=y_\nu^{(n)}\left(Df^{n-\nu}f_\nu^{(n)}\right)+{\mathcal O}(b^{(n-\nu)/2}) =\frac{y_\nu^{(n)}}{||w_\nu||}\cdot w_n+ {\mathcal O}(b^{(n-\nu)/2}). $$ It follows that $$ y_\nu^{(n)}/||w_\nu||\to A_\nu $$ with error ${\mathcal O}(b^{(n-\nu)/2})$ as $n\to \infty $ through free returns, and also $\eta_\nu^{(n)}\to \eta_\nu^*$ with error ${\mathcal O}(b^{(n-\nu)/2})$. Part (i) of the claim now follows immediately. Parts (ii) and (iii) then follow with $C_\nu=\eta_\nu^*/A_\nu$. Using the claim again we conclude that $$ M_{n-1}M_{n-2}\cdots M_2\tau_2'= C_0(n)w_n+{\mathcal O}(1). $$ The term $M_{n-1}M_{n-2}\dots M_2\tau_2'$ is essentially directed in the direction $(-1,0)$. Therefore $\tau_n=\left(\sum_{\nu=0}^{n-1}C_\nu(n)\right)w_n+{\mathcal O}(1)$. Defining $\lambda_n= \sum_{\nu=0}^{n-1}C_\nu(n)$ and $\lambda=\sum_{\nu=0}^{\infty}C_\nu$, we conclude that $|\lambda_n(z_0)-\lambda(z_0)|\leq C_1e^{-c_0n}$, with $c_0=c''/2$, and that $\lambda=\lambda(z_0)$ is bounded from above independently of $z_0$. To see that $\lambda$ is also bounded from below, we observe that in the matrices $M_\nu$ we have $-2ax_\nu\geq 0 $ for $\nu\leq \nu_0(a,b)$, where $\nu_0\to\infty$ as $a\to 2$ and $b\to 0$. All vectors $$ M_{\nu_0}M_{\nu_0-1}\cdots M_{\nu+1}\varphi_\nu $$ are essentially proportional to $(-1,0)$ with positive constants and lie in the expanding direction for $M_{n-1}M_{n-2}\cdots M_{\nu_0+1}$. Hence the terms of $\left(\sum_{\nu=0}^n C_\nu\right)w_n$ do not cancel, and we deduce $|\lambda|=|\sum_{\nu=0}^\infty C_\nu|\geq \text{\rm Const.}$ {\demo} \begin{rem} Lemma \ref{pardist} clearly also holds if $n$ is a time of a free orbit in $|x|>\delta$.
\end{rem} We will also need the following lemma. \begin{lem}\label{w-star-b-dist} If $z_\nu(a,b)$ and $z_\nu(a,b')$ are bound up to time $p$, then \begin{itemize} \item[(i)] $$ \frac{1}{100}\leq \frac{||w_*^\nu(a,b) ||}{||w_*^\nu(a,b')||}\leq 100 \qquad \text{for all } \nu\leq p, $$ \item[(ii)] $$ \text{\rm angle}(w_*^\nu(a,b),w_*^\nu(a,b'))\leq b^{1/2} \max_{1\leq \nu\leq p}{|z_\nu(a,b)-z_\nu(a,b')|}. $$ \end{itemize} \end{lem} \begin{proof} This lemma is a consequence of Assertion 4 of \cite{BC2} and Lemma \ref{pardist}. \end{proof} \subsection{Proof of Theorem \ref{D}} In the proof of Theorem \ref{D} we need the following lemma, which gives a quantitative property of the startup set for the constructions of \cite{BC2} and \cite{MoraViana}. We formulate this in the following lemma, where we use $\mu$ as the parameter to be compatible with the notation in \cite{PalisTakens}. \begin{lem}\label{lem:startup} Suppose $F_\mu$ is a H\'enon-like map as in Definition \ref{henonlike}. Let $z_0$ be the critical point on the left leg of the unstable manifold. Let $\mu_0=2$ and let $\omega_0=[\mu_0-2^{-N},\mu_0- 2^{-N-1}]$ be a dyadic interval. Then there are $\varepsilon_0>0$ and $\kappa >0$, with $\kappa$ less than but close to $\log 2$, and a decomposition $$ \omega_0=\left(\bigcup_{\omega\in{\mathcal Q}}\omega\right)\cup{\mathcal E} $$ such that for each $\omega\in {\mathcal Q}$, $z_n(\omega)$ has a first free return at time $n=n(\omega)$ with the following properties: \begin{itemize} \item[(i)] $z_n(\omega)$ is a $C^2(b)$ curve; \item[(ii)] the projection $\pi_1 z_n(\omega)$ on the first coordinate satisfies $$ I_{r,\ell}\subset \pi_1 z_n(\omega)\subset I_{r,\ell}^+,\qquad e^{-r}\geq e^{-\beta n}; $$ \item[(iii)] $\left|DF^j(z_0)\begin{pmatrix} 0\\ 1\end{pmatrix}\right|\geq e^{\kappa j}$ \ \text{for }$\mu\in\omega\quad \forall j\leq n(\omega)$; \item[(iv)] $\text{\rm dist}(z_j(\mu),0)\geq e^{-\beta j}$ for all $j<n$. \end{itemize} The exceptional set ${\mathcal E}$ can be chosen to be of measure $<\varepsilon_0|\omega_0|$. \end{lem} \begin{proof} $z_2(\omega)$ is very close to the repelling fixed points of $1-\mu x^2$ and $F_\mu$. The expansive behaviour of $F_\mu$ in $|x|>\delta$ is given in Lemmas 1 and 2 in \cite{BC1}. The comparison between the parameter and phase derivatives is given in Lemma 2.1 in \cite{BC2}. The behaviour of the iterates of $z_\nu$ and the vectors $w_\nu$ in $|x|>\delta $ is described in Lemmas 4.5 and 4.6 in \cite{BC2}. In the H\'enon-like case these lemmas work with $b$ replaced by $b^t$, $t>0$, with $t$ as in the definition of the H\'enon-like maps. The construction goes as follows. Consider the iterates $z_j(\omega)$, $j\geq 2$. At some time $j_0$, $z_{j_0}(\omega)$ will intersect $(-\delta,\delta)$, and we delete the preimage in $\omega_0$ of $z_{j_0}(\omega_0)\cap (-e^{-\beta{j_0}},e^{-\beta{j_0}})$. $z_{j_0}(\omega)$ is then partitioned according to the $\pi_1$-projections to the intervals $I_{r,\ell}$, and the preimages $\omega_{r,\ell}$, for which $\pi_1(z_{j_0}(\omega_{r,\ell}))=I_{r,\ell}$, form elements of the partition ${\mathcal Q}$. The remaining parts of $z_{j_0}(\omega_0)$ continue to be iterated until they hit $(-\delta,\delta)$.
Since we have the estimate $$ |\{x: F_\mu^j(x)\notin (-\delta,\delta),\ j=0,1,\dots,n\}|\leq e^{-\eta n} $$ and also, by the uniform comparison between parameter and phase derivatives, $$ |\{\mu:\pi_1 z_j(\mu)\notin (-\delta,\delta),\ j=0,1,\dots,n\}|\leq e^{-\eta n}, $$ we can stop the construction at some time $j_1$, chosen so that the exceptional set ${\mathcal E}$ satisfies $|{\mathcal E}|\leq \varepsilon_0 |\omega_0|$. Note that if $b$ is sufficiently small we will not need additional binding points beyond $z_0$ in this procedure. \end{proof} \begin{prop} Suppose that $F_\mu$ is a H\'enon-like map and that $\omega_0$ is a parameter interval as in Lemma \ref{lem:startup}. Then there is $\varepsilon_0<\frac{1}{10}$ such that $F_\mu$ has a strange attractor for $\mu\in E$, where $|E|>(1-2\varepsilon_0)|\omega_0|$. \end{prop} \begin{proof} For each $\omega\in{\mathcal Q}$ we can start the inductive construction of \cite{MoraViana}. The induction assumptions are as described in Assertions 1--3 in \cite{BC2}. The deleted part $E(\omega)$ of each $\omega\in{\mathcal Q}$ satisfies $|E(\omega)|\leq \text{\rm Const.}\,|\omega| e^{-\alpha n(\omega)}/\delta$. There will be two deletions, one due to the basic assumption and one due to the large deviation argument, and since $n(\omega)$ is sufficiently large for all $\omega$, if $a_0$ is chosen sufficiently close to $2$, this deletion will be of size $\leq \text{Const.}\,|\omega|e^{-\alpha n(\omega)}/\delta$. \end{proof} \begin{prop}\label{prop:coexist} There is a set of parameters $A_1$ of positive one-dimensional Hausdorff measure such that, for $(a,b)\in A_1$, the maps $f_{a,b}$ of the H\'enon family have two coexisting strange attractors. \end{prop} The proof is a simple version of the general induction in \cite{BC2} and \cite{MoraViana}. \begin{proof} Consider the two critical points $z_0(a)$ and $\hat{z}_0(a)$, $a\in\tilde{\omega}_0$, and suppose that at the escape time $E$, $\hat{z}_0(a)$ is still bound to $z_0(a)$. For $a\in\tilde{\omega}_0$ and fixed $b$, write $$E_k=\{a\in\tilde{\omega}_0:\text{dist}(z_{E+k}(a),0)\geq \delta\}. $$ The corresponding dynamics takes place in the hyperbolic region, and we have the estimate $$ m(E_j)\leq C e^{-\eta j}\qquad \text{for all }j\geq 0. $$ However, $\{z_{E+j}(a,b):a\in E_j\}$ will contain long segments for all $j\geq 0$. In particular, parts of these segments remain outside of $(-\delta,\delta)$. We continue to iterate the parts of the intervals that remain outside of $(-\delta,\delta)$ until the time $j_0$ when the $z_{j_0}(a,b)$ are separated by more than $\frac{1}{3}\cdot 2^{-k}$. We can then choose a point $a^{(0)}$ so that the image of $\gamma^u$ associated with $E_{E+j}$ has a tangency. Then $\hat{z}_{E+j}$ has a tangency for $b=b^{(0)}$. The tangency associated with the image of $z_0$ is then lost, but we can change $a=a^{(0)}$ to $a=a^{(1)}$ to obtain a tangency associated with the image of $z_0$, and then change $b=b^{(0)}$ to $b=b^{(1)}$. This gives exponentially converging sequences $\{a^{(k)}\}_{k=0}^\infty$ and $\{b^{(k)}\}_{k=0}^\infty$. Finally we find parameters $(a_*,b_*)$ for which there is a common tangency. We are then in the situation of homoclinic tangencies of \cite{MoraViana}. Suppose that the common tangency occurs for a parameter $a_0$. We consider the normalization argument in \cite{PalisTakens}. Suppose that the tangencies have curvatures $Q_1$ and $Q_2$.
The curvature is given, for $i=1,2$, by $$ Q_i=\frac{|W_N^i(\rho)|}{|E_N^i|^2\left({1+\left(\frac{|B(\rho)||W_N^i|}{|E_N^i|^2}\right)^2}\right)^{3/2}}. $$ Note that since $$ \frac{|W_N^i(\rho)|}{|E_N^i|^2}= \frac{|W_N^i(\rho)|^3}{b^{2N}}, $$ it follows that the two curvatures at the tangencies are comparable within fixed constants. At the tangencies there are naturally defined renormalizations. We follow \cite{PalisTakens}, page 49. The maps $\varphi_\mu^N$ are written in coordinates $$ (1+x,y)\mapsto (0,1)+(H_1(\mu,x,y),H_2(\mu,x,y)) $$ with \begin{align*} H_1(\mu,x,y)&=v\cdot x^2+\mu+w y+\tilde{H}_1(\mu,x,y),\\ H_2(\mu,x,y)&=u\cdot y+\tilde{H}_2(\mu,x,y). \end{align*} They define an $n$-dependent reparametrization of the parameter $\mu$ and a $\mu$-dependent change of coordinates. The parameter renormalization is given by $$ \overline{\mu}=\sigma^{2n}\cdot \mu+w\cdot\kappa^n\cdot\sigma^{2n}-\sigma^n. $$ We have followed the notation of \cite{PalisTakens}; in our notation the Palis--Takens $\mu$ corresponds to $a$. A renormalization takes the maps close to the tangency to H\'enon-like maps. In our application $v=|W_N|$, $w=0$, $u=|E_N|$, $$ \begin{pmatrix} \overline{x}\\ \overline{y} \end{pmatrix} \mapsto \begin{pmatrix} v \overline{x}^2+\overline{\mu} \\ u \overline{x} \end{pmatrix}+R_1 $$ and $$ \begin{pmatrix} \overline{x}'\\ \overline{y}' \end{pmatrix} \mapsto \begin{pmatrix}v' \overline{x}'^2+\overline{\nu} \\ u' \overline{x}' \end{pmatrix}+R_2, $$ with $$ \overline{\mu}=\sigma_1^{2n}\cdot \mu-\sigma_1^n, \qquad \overline{\nu}=\sigma_2^{2n}\cdot \mu-\sigma_2^n. $$ In our application we have chosen $n$ so that $n=3N$, and since $|\sigma_1-\sigma_2|= {\mathcal O}\left(|W_N|^{-1}\right)$, the ratio $\sigma_1^{2n}/\sigma_2^{2n}$ is bounded above and below. The parameter transformations are linear. By Remark \ref{rem:curv} and Lemma \ref{w-star-b-dist} it follows that $Q_1=v/u^2$ and $Q_2=v'/u'^2$ are comparable within a fixed constant. It follows that two intervals $\omega'=[\mu_1,\mu_2]$ and $\omega''=[\nu_1,\nu_2]$ in the renormalized parameters correspond to an interval $\omega_0=[\mu_0-2^{-n_0},\mu_0-2^{-n_0-1}]$, where $C^{-1}\leq \frac{\mu_0-\mu_1}{\mu_0-\mu_2}\leq C$ and $C^{-1}\leq \frac{\nu_0-\nu_1}{\nu_0-\nu_2}\leq C$. In both $\omega'$ and $\omega''$ we apply Lemma \ref{lem:startup} and select parameter sets $E'\subset \omega'$ and $E''\subset \omega''$ of measure $\geq (1 -\varepsilon)|\omega'|$ and $\geq (1 -\varepsilon)|\omega''|$, respectively. Since the parameter maps $\psi_1:\omega_0\to \omega'$ and $\psi_2:\omega_0\to \omega''$ are linear, it follows that $\psi_1^{-1}(E')\cap \psi_2^{-1}(E'')\neq \emptyset$. We obtain two coexisting strange attractors for a set of parameters of positive one-dimensional Hausdorff measure. \end{proof} We can now finish the proof of the main theorem of this section. {\it Proof of Theorem \ref{D}.} Starting from Proposition \ref{prop:coexist}, it remains to prove that the set of $b$-values for which the corresponding maps have two simultaneous strange attractors is an open set. This follows by noticing that the initial conditions hold in an open neighborhood of $b=b_*$, see the proof of Proposition \ref{prop:coexist}.\demo
{ "timestamp": "2019-08-06T02:08:50", "yymm": "1811", "arxiv_id": "1811.00517", "language": "en", "url": "https://arxiv.org/abs/1811.00517", "abstract": "We study the classical Hénon family $f_{a,b}:(x,y)\\mapsto(1-ax^2+y,bx)$, $0<a<2$, $0<b<1$, and prove that given an integer $k\\geq 1$, there is a set of parameters $E_k$ of positive two-dimensional Lebesgue measure so that $f_{a,b}$, for $(a,b)\\in E_k$, has at least $k$ attractive periodic orbits and one strange attractor. A corresponding statement also holds for the Hénon-like families. The final main result of the paper is the existence, within the classical Hénon family, of a positive Lebesgue measure set of parameters whose corresponding maps have two coexisting strange attractors.", "subjects": "Dynamical Systems (math.DS)", "title": "Coexistence phenomena in the Hénon family", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759660443167, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405705838167 }
https://arxiv.org/abs/0910.0777
The Knapsack Problem with Neighbour Constraints
We study a constrained version of the knapsack problem in which dependencies between items are given by the adjacencies of a graph. In the 1-neighbour knapsack problem, an item can be selected only if at least one of its neighbours is also selected. In the all-neighbours knapsack problem, an item can be selected only if all its neighbours are also selected. We give approximation algorithms and hardness results when the nodes have both uniform and arbitrary weight and profit functions, and when the dependency graph is directed and undirected.
\section{Introduction} We consider the knapsack problem in the presence of constraints. The input is a graph $G = (V,E)$ where each vertex $v$ has a {\em weight} $w(v)$ and a {\em profit} $p(v)$, and a knapsack of size $k$. We start with the usual knapsack goal---find a set of vertices of maximum profit whose total weight does not exceed $k$---but consider two natural variations. In the {\em 1-neighbour knapsack problem}, a vertex can be selected only if {\em at least one} of its neighbours is also selected (vertices with no neighbours can always be selected). In the {\em all-neighbour knapsack problem} a vertex can be selected only if all its neighbours are also selected. We consider the problem with {\em general} (arbitrary) and {\em uniform} ($p(v) = w(v) = 1\ \forall v$) weights and profits, and with undirected and directed graphs. In the case of directed graphs, the constraints only apply to the {\em out}-neighbours of a vertex. Constrained knapsack problems have applications to scheduling, tool management, investment strategies and database storage~\cite{KPP,BFFS05,Johnson:1983p1256}. There are also applications to network formation. For example, suppose a set of customers $C \subset V$ in a network $G = (V,E)$ wish to connect to a server, represented by a single sink $s \in V$. The server may activate each edge at a cost and each customer would result in a certain profit. The server wishes to activate a subset of the edges with cost within the server's budget. By introducing a vertex mid-edge with zero-profit and weight equal to the cost of the edge and giving each customer zero-weight, we convert this problem to a 1-neighbour knapsack problem. \subsection{Results} We show that the eight resulting problems \[ \{\mbox{1-neighbour, all-neighbours}\} \times \{\mbox{general, uniform}\}\times\{\mbox{undirected, directed}\} \] vary in complexity but afford several algorithmic approaches. We summarize our results for the 1-neighbour knapsack problem in Table~\ref{tbl:results}. In addition, we show that uniform, directed all-neighbour knapsack has a PTAS but is NP-complete. The general, undirected all-neighbour knapsack problem reduces to 0-1 knapsack, so there is a fully-polynomial time approximation scheme. \begin{table}[tb] \begin{center} \begin{tabular}{cccc} \toprule & & Upper & Lower \\ \cmidrule(r){2-4} \multirow{2}{*}{\hspace{3mm}Uniform\hspace{3mm}} & \hspace{3mm}Undirected\hspace{3mm} & \multicolumn{2}{c}{linear-time exact} \\ \cmidrule(r){2-4} & Directed & PTAS & \hspace{3mm}NP-hard (strong sense) \hspace{3mm} \\ \cmidrule(r){2-4} \multirow{2}{*}{General} & Undirected & \hspace{3mm}$\frac{(1-\varepsilon)}{2} \cdot (1-1/e^{1-\varepsilon})$\hspace{3mm} & $1-1/e+\epsilon$ \\ \cmidrule(r){2-4} & Directed & {\em open} & $1/ \Omega(\log^{1-\varepsilon} n)$ \\ \bottomrule \end{tabular} \end{center} \caption{\label{tbl:results} 1-Neighbour Knapsack Problem results: upper and lower bounds on the approximation ratios for combinations of $\{\mbox{general, uniform}\} \times \{\mbox{undirected, directed}\}$. For uniform, undirected, the bounds are running-times of optimal algorithms.} \end{table} In Section~\ref{sec:g1n} we describe a greedy algorithm that applies to the general 1-neighbour problem for both directed and undirected dependency graphs. The algorithm requires two oracles: one for finding a set of vertices with high profit and another for finding a set of vertices with high profit-to-weight ratio. 
In both cases, the total weight of the set cannot exceed the knapsack capacity, and the subgraph defined by the vertices must adhere to a strict combinatorial structure which we define later. The algorithm achieves an approximation ratio of $(\alpha/2) \cdot (1-1/e^{\beta})$, where the approximation ratios of the oracles determine the $\alpha$ and $\beta$ terms, respectively. For the general, undirected 1-neighbour case, we give polynomial-time oracles that achieve $\alpha = \beta = (1-\varepsilon)$ for any $\varepsilon > 0$. This yields a polynomial time $((1-\varepsilon)/2) \cdot (1-1/e^{1-\varepsilon})$-approximation. We also show that no approximation ratio better than $1-1/e$ is possible (assuming P$\not=$NP). This matches the upper bound up to (almost) a factor of 2. These results appear in Section~\ref{sec:gu1n}. In Section~\ref{sec:gd1n}, we show that the general, directed 1-neighbour knapsack problem is $1/ \Omega(\log^{1-\varepsilon} n)$-hard to approximate, even in DAGs. In Section~\ref{sec:ud1n} we show that the uniform, directed 1-neighbour knapsack problem is NP-hard in the strong sense but that it has a polynomial-time approximation scheme (PTAS)\footnote{A PTAS is an algorithm that, given a fixed constant $\varepsilon < 1$, runs in polynomial time and returns a solution within $1-\varepsilon$ of optimal. The algorithm may be exponential in $1/\varepsilon$.}. Thus, as with the general, undirected 1-neighbour problem, our upper and lower bounds are essentially matching. In Section~\ref{sec:uu1n} we show that the uniform, undirected 1-neighbour knapsack problem affords a simple, linear-time solution. In Section~\ref{sec:all-neighbours} we show that the uniform, directed all-neighbours knapsack problem has a PTAS but is NP-hard. We also discuss the general, undirected all-neighbours problem. \subsection{Related work} \label{sec:related} There is a tremendous amount of work on maximizing submodular functions under a single knapsack constraint~\cite{Sviridenko:orl2004}, multiple knapsack constraints~\cite{Kulik:2009}, and both knapsack and matroid constraints~\cite{Lee:2009,groundan-schulz:prepreint2009}. While our profit function is submodular, the constraints given by the graph are not characterized by a matroid (our solutions, for example, are not closed downward). Thus, the 1-neighbour knapsack problem represents a class of knapsack problems with realistic constraints that are not captured by previous work. As we show in Section~\ref{sec:apx-hardness}, the general, undirected 1-neighbour knapsack problem generalizes several maximum coverage problems including the budgeted variant considered by Khuller, Moss, and Naor~\cite{kmn:ipl1999} which has a tight $(1-1/e)$-approximation unless P=NP. Our algorithm for the general 1-neighbour problem follows the approach taken by Khuller, Moss, and Naor but, because of the dependency graph, requires several new technical ideas. In particular, our analysis of the greedy step represents a non-trivial generalization of the standard greedy algorithm for submodular maximization. Johnson and Niemi~\cite{Johnson:1983p1256} give an FPTAS for knapsack problems on dependency graphs that are in-arborescences (directed trees in which every arc is directed toward a single root). In their problem formulation, the constraints are given as out-arborescences---directed trees in which every arc is directed away from a single root---and feasible solutions are subsets of vertices that are closed under the {\em predecessor} operation; reversing the arcs recovers the in-arborescence setting.
This problem can be viewed as an instance of the general, directed 1-neighbour knapsack problem. In the subset-union knapsack problem (SUKP)~\cite{KPP}, each item is a subset of a ground set of elements. Each element in the ground set has a weight, each item has a profit, and the goal is to find a maximum-profit collection of items such that the total weight of the union of their element sets fits in the knapsack. It is easy to see that this is a special case of the general, directed all-neighbours knapsack problem in which there is a vertex for each item and each element and an arc from an item to each element in the item's set. In~\cite{KPP}, Kellerer, Pferschy, and Pisinger show that SUKP is NP-hard and give an optimal, but exponential-time, algorithm. The precedence constrained knapsack problem~\cite{BFFS05} and partially-ordered knapsack problem~\cite{Kolliopoulos:2007p1242} are special cases of the general, directed all-neighbours knapsack problem in which the dependency graph is a DAG. Hajiaghayi et~al.\ show that the partially-ordered knapsack problem is hard to approximate within a $2^{\log^\delta n}$ factor unless 3SAT$\in$DTIME$(2^{n^{3/4+\varepsilon}})$~\cite{Hajiaghayi:2006p1244}. \subsection{Notation.} We consider graphs $G$ with $n$ vertices $V(G)$ and $m$ edges $E(G)$. Whether the graph is directed or undirected will be clear from context, and we refer to edges of directed graphs as arcs. For an undirected graph, $N_G(v)$ denotes the neighbours of a vertex $v$ in $G$. For a directed graph, $N_G(v)$ denotes the out-neighbours of $v$ in $G$, or, more formally, $N_G(v) = \{u : vu \in E(G)\}$. Given a set of nodes $X$, $N^{-}_{G}(X)$ is the set of nodes not in $X$ that have a neighbour (or out-neighbour in the directed case) in $X$. That is, $N^{-}_{G}(X)=\{u : uv \in E(G), u \not\in X, \mbox{ and } v \in X\}$. The degree (in undirected graphs) and out-degree (in directed graphs) of a vertex $v$ in $G$ is denoted $\delta_G(v)$. The subscript $G$ will be dropped when the graph is clear from context. For a set of vertices {\em or} edges $U$, $G[U]$ is the graph induced on $U$. For a directed graph $G$, $\mathcal{D}$ is the directed, acyclic graph (DAG) resulting from contracting the maximal strongly-connected components (SCCs) of $G$. For each node $u \in V(\mathcal{D})$, let $V(u)$ be the set of vertices of $G$ that are contracted to obtain $u$. For a vertex $u$, let $\ensuremath{\mathrm{desc}}_G(u)$ be the set of all descendants of $u$ in $G$, {\em i.e.},~all the vertices in $G$ that are reachable from $u$ (including $u$). A vertex is its own descendant, but not its own strict descendant. For convenience, extend any function $f$ defined on items in a set $X$ to any subset $A \subseteq X$ by letting $f(A) = \sum_{a \in A} f(a)$. If $f(a)$ is a set, then $f(A) = \bigcup_{a\in A} f(a)$. If $f$ is defined over vertices, then we extend it to edge sets: $f(E) = f(V(E))$. For any knapsack problem, \ensuremath{\mathrm{OPT}\xspace}~is the set of vertices/items in an optimal solution. \subsection{Viable Families and Viable Sets.} A set of nodes $U$ is a {\em 1-neighbour set} for $G$ if for every vertex $v \in U$, $|N_{G[U]}(v)| \geq \min\{\delta_{G}(v),1\}$. That is, a 1-neighbour set is feasible with respect to the dependency graph.
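For concreteness, this feasibility condition is easy to test directly. The following Python sketch is our illustration and not part of the algorithms in this paper; the adjacency-dict representation, mapping each vertex to a list of its neighbours (out-neighbours in the directed case), is an assumption of the sketch.
\begin{verbatim}
# A minimal sketch (ours): testing the 1-neighbour condition.
def is_one_neighbour_set(adj, U):
    """Return True iff U is a 1-neighbour set for the graph adj."""
    U = set(U)
    for v in U:
        # min{delta(v), 1} neighbours required: only vertices with at
        # least one (out-)neighbour in G need one inside G[U].
        if adj[v] and not any(u in U for u in adj[v]):
            return False
    return True

# Example: the directed path a -> b -> c.
adj = {'a': ['b'], 'b': ['c'], 'c': []}
assert is_one_neighbour_set(adj, {'b', 'c'})
assert not is_one_neighbour_set(adj, {'a', 'c'})
\end{verbatim}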
A family of graphs $\mathcal{H}$ is a {\em viable family} for $G$ if, for any subgraph $G'$ of $G$, there exists a partition $\mathcal{Y}_{\mathcal{H}}(G')$ of the vertices of $G'$ into 1-neighbour sets for $G'$, such that for every $Y \in \mathcal{Y}_{\mathcal{H}}(G')$, there is a graph $H \in \mathcal{H}$ spanning $G[Y]$. For directed graphs, we take {\em spanning} to mean that $H$ is a directed subgraph of $G[Y]$ and that $Y$ and $H$ contain the same number of nodes. For a graph $G$, we call $\mathcal{Y}_{\mathcal{H}}(G)$ a {\em viable partition} of $G$ with respect to $\mathcal{H}$. \begin{figure}[bt] \centering\includegraphics[scale=0.5]{figures/viable-partition} \caption{ \label{fig:viable-partition} An undirected graph. If $\mathcal{H}$ is the family of star graphs, then the shaded regions give the only viable partition of the nodes---no other partition yields 1-neighbour sets. However, every {\em edge} is viable with respect to $\mathcal{H}$. The singleton node is also viable since it is a 1-neighbour set for the graph.} \end{figure} \begin{figure}[tb] \centering \subfigure[]{\includegraphics[scale=0.5]{figures/A-B-Partition}} \subfigure[]{\includegraphics[scale=0.5]{figures/A-B-Partition-Directed}} \caption{ \label{fig:viable-partition-lemma} An undirected graph $G$ in (a) and a directed graph $G$ in (b) with 1-neighbour sets $A$ (dark shaded) and $B$ (dotted) marked in both. Similarly, in both (a) and (b) the lightly shaded regions give viable partitions for $G[A \setminus B]$ and the white nodes denote $N^{-}_{G}(B)$. In (a), $Y_{2}$ is viable for $G[A \setminus B]$, and since $|Y_{2}|=2$, it is viable for $G[V(G) \setminus B]$. $Y_{1}$ is not viable for $G[V(G) \setminus B]$ but its node is in $N^{-}_{G}(B)$. In (b), $Y_{3}$ is viable in $G[V(G) \setminus B]$, whereas $Y_{4}$ is viable because we consider $G[V(G) \setminus B]$ with the dotted arc removed.} \end{figure} In Section~\ref{sec:gu1n} we show that star graphs form a viable family for any undirected dependency graph. That is, we show that any undirected graph can be partitioned into 1-neighbour sets that are stars. Fig.~\ref{fig:viable-partition} gives an example. In contrast, edges do not form a viable family since, for example, a simple path with 3 nodes cannot be partitioned into 1-neighbour sets that are edges. For DAGs, in-arborescences are a viable family but directed paths are not (consider a directed graph with 3 nodes $u,v,w$ and two arcs $(u,v)$ and $(w,v)$). Note that any viable family must include the single-vertex graph, since an isolated vertex forms a 1-neighbour set on its own. A 1-neighbour set $U$ for $G$ is {\em viable} with respect to $\mathcal{H}$ if there is a graph $H \in \mathcal{H}$ spanning $G[U]$. Note that the 1-neighbour sets in $\mathcal{Y}_{\mathcal{H}}(G)$ are, by definition, viable for $G$, but a viable set for $G$ need not be in $\mathcal{Y}_{\mathcal{H}}(G)$. For example, if $\mathcal{H}$ is the family of stars and $G$ is the undirected graph in Fig.~\ref{fig:viable-partition}, then any edge is a viable set for $G$ but the only viable partition is given by the shaded regions. Note that if $U$ is a viable set for $G$ then it is also a viable set for any subgraph $G'$ of $G$ provided $U \subseteq V(G')$. Viable families and viable sets play an essential role in our greedy algorithm for the general 1-neighbour knapsack problem. Viable families establish a set of structures over which our oracles can search.
This restriction simplifies the design and analysis of efficient oracles and couples the oracles to a shared family of graphs which, as we show later, is essential to our analysis. In essence, viable families provide a mechanism to coordinate the oracles into returning sets with roughly similar structure. Viable sets capture the idea of an indivisible unit of choice in the greedy step. We formalize this with the following lemma, which is illustrated in Fig.~\ref{fig:viable-partition-lemma}. \begin{lemma} \label{lemma:viable-correct} Let $G$ be a graph and $\mathcal{H}$ be a viable family for $G$. Let $A$ and $B$ be 1-neighbour sets for $G$. If $\mathcal{Y}_{\mathcal{H}}(C)$ is a viable partition of $G[C]$ where $C=A \setminus B$ then every set $Y \in \mathcal{Y}_{\mathcal{H}}(C)$ is either (i) a singleton node $y$ such that $y \in N^{-}_{G}(B)$ (i.e., $y$ has a neighbour in $B$), or (ii) a viable set for $G'$, which is the subgraph obtained by deleting the vertices in $B$ and the arcs in $X$, where $X$ is empty if $G$ is undirected and $X$ is the set of arcs with tails in $N^{-}_{G}(B)$ if $G$ is directed. \end{lemma} \begin{proof} If $|Y|=1$ then let $Y=\{y\}$. If $\delta_{G}(y)=0$ then $Y$ is a viable set for $G$, so it is a viable set for $G'$. Otherwise, since $A$ is a 1-neighbour set for $G$, $y$ must have a neighbour in $B$, so $y \in N^{-}_{G}(B)$. If $|Y| >1$ then, provided $G$ is undirected, $Y$ is also a viable set in $G$ so it is a viable set in $G'$. If $G$ is directed and $Y$ contains a node $y$ that is in $N^{-}_{G}(B)$, an arc out of $y$ is not needed for feasibility since $y$ already has an out-neighbour in $B$. \end{proof} \section{The general 1-neighbour knapsack problem} \label{sec:g1n} Here we give a greedy algorithm {\sc Greedy-1-Neighbour} for the general 1-neighbour knapsack problem on both directed and undirected graphs. A formal description of our algorithm is available in Fig.~\ref{alg:greedy-1-neighbour}. {\sc Greedy-1-Neighbour} relies on two oracles {\sc Best-Profit-Viable} and {\sc Best-Ratio-Viable} which find viable sets of nodes with respect to a fixed viable family $\mathcal{H}$. In each iteration $i$, we call {\sc Best-Ratio-Viable} which, given the nodes not yet chosen by the algorithm, returns the viable set $S_{i}$ with the highest profit-to-weight ratio among those with weight not exceeding the remaining capacity. We also consider the set of nodes $Z$ not in the knapsack, but with at least one neighbour already in the knapsack. Let $s_{i}$ be the node in $Z$ with the highest profit-to-weight ratio whose weight does not exceed the remaining capacity. We greedily add either $s_{i}$ or $S_{i}$ to our knapsack $U$ depending on which has the higher profit-to-weight ratio. We continue until we can no longer add nodes to the knapsack. For a viable family $\mathcal{H}$, if we can efficiently approximate the highest profit-to-weight ratio viable set to within a factor of $\beta$ and if we can efficiently approximate the highest profit viable set to within a factor of $\alpha$, then our greedy algorithm yields a polynomial time $\frac{\alpha}{2}(1-1/e^\beta)$-approximation.
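As a complement to the formal description in Fig.~\ref{alg:greedy-1-neighbour}, the following Python sketch paraphrases the main loop for the undirected case with stars as the viable family. The sketch is ours, not the paper's implementation: the brute-force star oracle below enumerates all stars and is meant only for tiny examples (the paper replaces it with the FPTAS-based oracles of Section~\ref{sec:gu1n}), and weights are assumed positive.
\begin{verbatim}
from itertools import combinations

def stars(adj):
    for v in adj:
        if not adj[v]:
            yield {v}                     # isolated vertex: trivial star
        for r in range(1, len(adj[v]) + 1):
            for leaves in combinations(adj[v], r):
                yield {v} | set(leaves)   # non-trivial star centred at v

def best_star(adj, w, cap, key):
    """Brute-force oracle: best viable star of weight <= cap."""
    cands = [S for S in stars(adj) if sum(w[u] for u in S) <= cap]
    return max(cands, key=key, default=None)

def greedy_one_neighbour(adj, w, p, k):
    prof = lambda S: sum(p[u] for u in S)
    ratio = lambda S: prof(S) / sum(w[u] for u in S)
    s_max = best_star(adj, w, k, prof)
    U, K = set(), k
    while True:
        # Working graph: vertices not yet chosen, arcs into U removed.
        live = {v: [u for u in adj[v] if u not in U]
                for v in adj if v not in U}
        S = best_star(live, w, K, ratio)
        Z = [v for v in live if any(u in U for u in adj[v]) and w[v] <= K]
        z = max(Z, key=lambda v: p[v] / w[v], default=None)
        if z is not None and (S is None or p[z] / w[z] > ratio(S)):
            S = {z}
        if S is None:
            break
        U |= S
        K -= sum(w[u] for u in S)
    return max(U, s_max or set(), key=prof)
\end{verbatim}
Note that a vertex whose remaining neighbours all lie in $U$ is exactly a member of $Z$, so treating it as a candidate singleton never violates feasibility.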
\begin{figure}[tb] \begin{center} \fbox{ \begin{minipage}[h]{.9\linewidth} \noindent {\sc Greedy-1-Neighbour}$(G,k):$ \begin{tabbing} \qquad \= $S_{\max}$ = {\sc best-profit-viable}$(G,k)$ \\ \> $K = k$, $U = \emptyset$, $i = 1$, $G' = G$, $Z = \emptyset$ \\ \> WHILE there is either a viable set in $G'$ or a node in $Z$ with weight $\leq K$ \\ \> \qquad \= $S_i$ = {\sc best-ratio-viable}$(G', K)$ \\ \> \> $s_{i} = \arg\max \{p(v) / w(v) \,|\, v \in Z\}$ \\ \> \> IF $p(s_{i}) / w(s_{i}) \, > \, p(S_i) / w(S_i)$\\ \> \> \qquad $S_{i} = \{ s_{i} \}$\\ \> \> $G' = G[V(G') \setminus V(S_{i})]$\\ \> \> $U = U \cup V(S_i), \ K = K-w(S_i), \ i = i+1$\\ \> \> $Z = N^{-}_{G}(U)$\\ \> \> If $G$ is directed, remove any arc in $G'$ with a tail in $Z$\\ \> RETURN $\arg\max \{p(S_{\max}), p(U)\}$ \end{tabbing} \end{minipage} } \end{center} \caption{\label{alg:greedy-1-neighbour} The {\sc Greedy-1-Neighbour} algorithm. In each iteration $i$, we greedily add either the viable set $S_{i}$ or the node $s_{i}$ to our knapsack $U$ depending on which has the higher profit-to-weight ratio. This continues until we can no longer add nodes to the knapsack.} \vspace{-4mm} \end{figure} \begin{theorem} \label{thm:gd1n} {\sc Greedy-1-Neighbour} is a $\frac{\alpha}{2}(1-\frac{1}{e^\beta})$-approximation for the general 1-neighbour problem on directed and undirected graphs. \end{theorem} \begin{proof} Let $\ensuremath{\mathrm{OPT}\xspace}$ be the set of vertices in an optimal solution. In addition, let $U_i =\cup_{j = 1}^{i} V(S_j)$ correspond to $U$ after the first $i$ iterations, where $U_{0} = \emptyset$. Let $\ell+1$ be the first iteration in which there is either a node in $Z \cap \ensuremath{\mathrm{OPT}\xspace}$ or a viable set in $\ensuremath{\mathrm{OPT}\xspace} \setminus U_\ell$ whose profit-to-weight ratio is larger than that of $S_{\ell+1}$. Of these, let $\mathcal{S}_{\ell+1}$ be the node or set with the highest profit-to-weight ratio. For convenience, let $\mathcal{S}_{i} = S_{i}$ and $\mathcal{U}_{i} = U_{i}$ for $i = 1 \ldots \ell$, and $\mathcal{U}_{\ell+1} = \mathcal{U}_{\ell} \cup \mathcal{S}_{\ell+1}$. Notice that $\mathcal{U}_{\ell}$ is a feasible solution to our problem but that $\mathcal{U}_{\ell+1}$ is not, since it contains $\mathcal{S}_{\ell+1}$ whose weight exceeds the remaining capacity $K$. We analyze our algorithm with respect to $\mathcal{U}_{\ell+1}$. \begin{lemma} \label{lem:profit-inc-dag} For each iteration $i = 1, \ldots, \ell+1$, the following holds: $$p(\mathcal{S}_i) \geq \beta\frac{w(\mathcal{S}_i)}{k}\left(p(\ensuremath{\mathrm{OPT}\xspace})-p(\mathcal{U}_{i-1})\right)$$ \end{lemma} \begin{proof} Fix an iteration $i$ and let $I$ be the graph induced by $\ensuremath{\mathrm{OPT}\xspace} \setminus \mathcal{U}_{i-1}$. Since both $\ensuremath{\mathrm{OPT}\xspace}$ and $\mathcal{U}_{i-1}$ are 1-neighbour sets for $G$, by Lemma~\ref{lemma:viable-correct}, each $Y \in \mathcal{Y}_{\mathcal{H}}(I)$ is either a viable set for $G'$ (so it can be selected by {\sc best-ratio-viable}) or a singleton vertex in $N^{-}_{G}(\mathcal{U}_{i-1})$ (which {\sc Greedy-1-Neighbour} always considers). Thus, if $i \leq \ell$, then by the greedy choice of the algorithm and the approximation ratio of {\sc best-ratio-viable} we have \begin{equation} \label{eq:viable-lb} \frac{p(\mathcal{S}_i)}{w(\mathcal{S}_i)} \geq \beta \frac{p(Y)}{w(Y)} \ \mbox{for all} \ Y \in \mathcal{Y}_{\mathcal{H}}(I).
\end{equation} If $i=\ell+1$ then $p(\mathcal{S}_{\ell+1})/w(\mathcal{S}_{\ell+1})$ is, by definition, at least as large as the profit-to-weight ratio of any $Y \in \mathcal{Y}_{\mathcal{H}}(I)$. It follows that for $i=1, \ldots, \ell+1$: \begin{eqnarray*} p(\ensuremath{\mathrm{OPT}\xspace}) - p(\mathcal{U}_{i-1}) \leq \sum_{Y \in \mathcal{Y}_{\mathcal{H}}(I)} p(Y) & \leq & \frac{1}{\beta}\frac{p(\mathcal{S}_i)}{w(\mathcal{S}_i)} \sum_{Y \in \mathcal{Y}_{\mathcal{H}}(I)} w(Y),\mbox{ by Eq.~(\ref{eq:viable-lb}) }\\ & \leq & \frac{1}{\beta}\frac{p(\mathcal{S}_i)}{w(\mathcal{S}_i)} w(\ensuremath{\mathrm{OPT}\xspace}),\mbox{ since $I$ is induced by a subset of \ensuremath{\mathrm{OPT}\xspace}} \\ & \leq & \frac{1}{\beta}\frac{k}{w(\mathcal{S}_i)}p(\mathcal{S}_i),\mbox{ since $w(\ensuremath{\mathrm{OPT}\xspace}) \leq k$} \end{eqnarray*} where the first inequality holds because $\mathcal{U}_{i-1}$ may contain vertices outside of \ensuremath{\mathrm{OPT}\xspace}. Rearranging gives Lemma~\ref{lem:profit-inc-dag}. \hfill \end{proof} \begin{lemma} \label{lem:profit} For $i = 1, \ldots, \ell+1$, the following holds: $$p(\mathcal{U}_i) \geq \left[1-\prod_{j = 1}^i \left(1-\beta\frac{w(\mathcal{S}_j)}{k} \right) \right] p(\ensuremath{\mathrm{OPT}\xspace})$$ \end{lemma} \begin{proof} We prove the lemma by induction on $i$. For $i = 1$, we need to show that \begin{equation}\label{eq:induct-1} p(\mathcal{U}_1) \geq \beta\frac{w(\mathcal{S}_1)}{k} p(\ensuremath{\mathrm{OPT}\xspace}). \end{equation} This follows immediately from Lemma~\ref{lem:profit-inc-dag} since $p(\mathcal{U}_{0})=0$ and $\mathcal{U}_{1}=\mathcal{S}_1$. Suppose the lemma holds for iterations 1 through $i-1$. Then, by Lemma~\ref{lem:profit-inc-dag} and the inductive hypothesis, \begin{eqnarray*} p(\mathcal{U}_i) \; = \; p(\mathcal{U}_{i-1}) + p(\mathcal{S}_i) & \geq & \left(1-\beta\frac{w(\mathcal{S}_i)}{k}\right)p(\mathcal{U}_{i-1}) + \beta\frac{w(\mathcal{S}_i)}{k}\,p(\ensuremath{\mathrm{OPT}\xspace}) \\ & \geq & \left[1-\prod_{j = 1}^i \left(1-\beta\frac{w(\mathcal{S}_j)}{k} \right) \right] p(\ensuremath{\mathrm{OPT}\xspace}). \end{eqnarray*} This completes the proof of Lemma~\ref{lem:profit}. \hfill \end{proof} We are now ready to prove Theorem~\ref{thm:gd1n}. Starting with the inequality in Lemma~\ref{lem:profit} and using the fact that adding $\mathcal{S}_{\ell+1}$ violates the knapsack constraint (so $w(\mathcal{U}_{\ell+1}) > k$) we have \begin{eqnarray*} p(\mathcal{U}_{\ell+1}) & \geq & \left[1-\prod_{j = 1}^{\ell + 1} \left(1-\beta\frac{w(\mathcal{S}_j)}{k} \right) \right] p(\ensuremath{\mathrm{OPT}\xspace}) \\ & \geq &\left[1-\prod_{j = 1}^{\ell + 1} \left(1-\beta\frac{w(\mathcal{S}_j)}{w(\mathcal{U}_{\ell+1})} \right) \right] p(\ensuremath{\mathrm{OPT}\xspace}) \\ & \geq & \left[1-\left(1-\frac{\beta}{\ell+1}\right)^{\ell+1} \right] p(\ensuremath{\mathrm{OPT}\xspace}) \geq \left(1-\frac{1}{e^\beta}\right)p(\ensuremath{\mathrm{OPT}\xspace}) \end{eqnarray*} where the penultimate inequality follows because the product is maximized when all the $w(\mathcal{S}_j)$ are equal. Since $S_{\max}$ is within a factor of $\alpha$ of the maximum profit viable set of weight $\leq k$ and $\mathcal{S}_{\ell+1}$ is contained in \ensuremath{\mathrm{OPT}\xspace}, $p(S_{\max}) \geq \alpha \cdot p(\mathcal{S}_{\ell+1})$. Thus, we have $p(U) + p(S_{\max})/ \alpha \geq p(\mathcal{U}_\ell) + p(\mathcal{S}_{\ell+1}) = p(\mathcal{U}_{\ell+1}) \geq \left(1-\frac{1}{e^\beta}\right)p(\ensuremath{\mathrm{OPT}\xspace})$. Therefore $\max\{p(U), p(S_{\max})\} \geq \frac{\alpha}{2}\left(1-\frac{1}{e^\beta}\right)p(\ensuremath{\mathrm{OPT}\xspace})$. \hfill \end{proof} \subsection{The general, undirected 1-neighbour problem} \label{sec:gu1n} Here we formally show that stars are a viable family for undirected graphs and describe polynomial-time implementations of {\sc Best-Profit-Viable} and {\sc Best-Ratio-Viable} for the star family. Both oracles achieve an approximation ratio of $(1-\varepsilon)$ for any $\varepsilon > 0$.
Combined with {\sc Greedy-1-Neighbour} this yields a polynomial time $((1-\varepsilon)/2) \cdot (1-1/e^{1-\varepsilon})$-approximation for the general, undirected 1-neighbour problem. In addition, we show that this approximation is nearly tight by showing that the general, undirected 1-neighbour problem generalizes many coverage problems including max $k$-cover and budgeted maximum coverage, neither of which admits a $(1-1/e+\epsilon)$-approximation for any $\epsilon > 0$ unless P=NP. \subsubsection{Stars} \label{sec:stars} For the rest of this section, we assume $\mathcal{H}$ is the family of star graphs ({\em i.e.}, graphs composed of a center vertex $u$ and a (possibly empty) set of edges all of which have $u$ as an endpoint) so that given a graph $G$ and a capacity $k$, {\sc Best-Profit-Viable} returns the highest-profit viable star with weight at most $k$ and {\sc Best-Ratio-Viable} returns the highest profit-to-weight viable star with weight at most $k$. \begin{lemma}\label{lem:graphs-into-stars} The nodes of any undirected dependency graph $G$ can be partitioned into 1-neighbour sets that are stars. \end{lemma} \begin{proof} Let $G_{i}$ be an arbitrary connected component of $G$. If $|V(G_{i})|=1$ then $V(G_{i})$ is trivially a 1-neighbour set and the trivial star consisting of a single node is a spanning subgraph of $G_{i}$. If $G_{i}$ is non-trivial then let $T$ be any spanning tree of $G_{i}$ and consider the following construction: while $T$ contains a path $P$ with more than two edges, remove an interior edge of $P$ from $T$. When this process finishes, every maximal path in $T$ has at least one and at most two edges, so $T$ is a collection of non-trivial stars, each of which is a 1-neighbour set. \hfill \end{proof} \paragraph{{\sc Best-Profit-Viable}} Finding the maximum profit, viable star of a graph $G$ subject to a knapsack constraint $k$ reduces to the traditional unconstrained knapsack problem which has a well-known FPTAS that runs in $O(n^{3} / \varepsilon)$ time~\cite{ibarra-kim:jacm1975,vazirani}. Every vertex $v \in V(G)$ defines a knapsack problem: the items are $N_{G}(v)$ and the capacity is $k-w(v)$. Combining $v$ with the solution returned by the FPTAS yields a candidate star. We consider the candidate star for each vertex and return the one with the highest profit. Since we consider all possible star centers, {\sc Best-Profit-Viable} runs in $O(n^{4} / \varepsilon)$ time and returns a viable star within a factor of $(1-\varepsilon)$ of optimal, for any $\varepsilon > 0$. \paragraph{{\sc Best-Ratio-Viable}} We again turn to the FPTAS for the standard knapsack problem. Our goal is to find a high profit-to-weight star in $G$ with weight at most $k$. The standard FPTAS for the unconstrained knapsack problem builds a dynamic programming table $T$ with $n$ rows and $nP'$ columns where $n$ is the number of available items and $P'$ is the maximum adjusted profit over all the items. Given an item $v$, its adjusted profit is $p'(v) = \lfloor \frac{p(v)}{ (\varepsilon / n) \cdot P} \rfloor$ where $P$ is the true maximum profit over all the items. Each entry $T[i,p]$ gives the weight of the minimum weight subset of the first $i$ items achieving profit $p$. Notice that, for any fixed profit $p$, $p / T[n,p]$ is the highest profit-to-weight ratio for that $p$. Therefore, for $1 \leq p \leq nP'$, the $p$ maximizing $p / T[n,p]$ gives the highest profit-to-weight ratio of any feasible subset provided $T[n,p] \leq k$. Let $S$ be this subset.
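Before turning to the analysis, the table just described can be made concrete. The Python sketch below is ours: it builds the exact (pseudo-polynomial) table, with $T[q]$ equal to the minimum weight achieving profit exactly $q$, and omits the FPTAS profit-scaling step for clarity. Profits are assumed to be non-negative integers and weights positive.
\begin{verbatim}
def best_ratio_subset(items, cap):
    """items: list of (profit, weight). Returns the feasible subset
    (as a set of indices) maximising profit / weight."""
    P = sum(p for p, _ in items)
    INF = float('inf')
    T = [0.0] + [INF] * P          # T[q]: min weight with profit q
    choice = [set()] + [None] * P  # subsets realising each entry
    for i, (p, w) in enumerate(items):
        for q in range(P, p - 1, -1):   # each item used at most once
            if T[q - p] + w < T[q]:
                T[q] = T[q - p] + w
                choice[q] = choice[q - p] | {i}
    best = max((q for q in range(1, P + 1) if T[q] <= cap),
               key=lambda q: q / T[q], default=None)
    return choice[best] if best is not None else set()
\end{verbatim}
Wrapping this routine in a loop over candidate star centres $v$, with items $N_{G}(v)$ and capacity $k-w(v)$, yields the oracle described above.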
We will show that $p(S) / w(S)$ is within a factor of $(1-\varepsilon)$ of \ensuremath{\mathrm{OPT}\xspace}\ where \ensuremath{\mathrm{OPT}\xspace}\ is the profit-to-weight ratio of the highest profit-to-weight ratio feasible subset $S^{*}$. Letting $r(v) = p(v) / w(v)$ and $r'(v) = p'(v) / w(v)$, and following~\cite{vazirani}, we have \[ r(S^{*}) - ((\varepsilon / n) \cdot P) \cdot r'(S^{*}) \leq \varepsilon P / w(S^{*}) \] since, for any item $v$, the difference between $p(v)$ and $((\varepsilon / n) \cdot P) \cdot p'(v)$ is at most $(\varepsilon / n) \cdot P$ and we can fit at most $n$ items in our knapsack. Because $r'(S) \geq r'(S^{*})$ and \ensuremath{\mathrm{OPT}\xspace} \ is at least $P / w(S^{*})$ we have \[ r(S) \geq ((\varepsilon / n) \cdot P) \cdot r'(S) \geq ((\varepsilon / n) \cdot P) \cdot r'(S^{*}) \geq r(S^{*}) - \varepsilon P / w(S^{*}) \geq \ensuremath{\mathrm{OPT}\xspace} - \varepsilon \ensuremath{\mathrm{OPT}\xspace} = (1-\varepsilon)\ensuremath{\mathrm{OPT}\xspace}. \] Now, just as with {\sc Best-Profit-Viable}, every vertex $v \in V(G)$ defines a knapsack instance where $N_{G}(v)$ is the set of items and $k-w(v)$ is the capacity. We run the modified FPTAS for knapsack on the instance defined by each $v$ and add $v$ to the solution to produce a set of candidate stars. We return the star with the highest profit-to-weight ratio. Since we consider all possible star centers, {\sc Best-Ratio-Viable} runs in $O(n^{4} / \varepsilon)$ time and returns a viable star within a factor of $(1-\varepsilon)$ of optimal, for any $\varepsilon > 0$. \paragraph{Justifying Stars} Aside from some isolated vertices, our solution is a set of edges, but these edges are not necessarily vertex-disjoint. Analyzing our greedy algorithm in terms of edges risks counting vertices multiple times. Partitioning into stars allows us to charge increases in the profit from the greedy step without this risk. In fact, stars are essentially the {\em simplest} structure meeting this requirement, which is why we use them as our viable family. \paragraph{Improving the approximation ratio} Often this style of greedy algorithm can be augmented with an ``enumeration over triples'' step to improve the approximation ratio to $(1-\varepsilon)(1-1/e^{1-\varepsilon})$. However, such an enumeration would require enumerating over all possible triples of {\em stars} in our case, which cannot be done in polynomial time unless the graph has bounded degree. \subsubsection{General, undirected 1-neighbour knapsack is APX-complete} \label{sec:apx-hardness} Here we show that it is NP-hard to approximate the general, undirected 1-neighbour knapsack problem to within a factor better than $1-1/e+\epsilon$ for any $\epsilon > 0$ via an approximation-preserving reduction from max $k$-cover~\cite{feige:jacm1998}. An instance of max $k$-cover is a set cover instance $(S,{\mathcal R})$ where $S$ is a ground set of $n$ items and $\mathcal R$ is a collection of subsets of $S$. The goal is to cover as many items in $S$ as possible using at most $k$ subsets from $\mathcal R$. \begin{theorem} The general, undirected 1-neighbour knapsack problem has no $(1-1/e+\epsilon)$-approximation for any $\epsilon > 0$ unless P$=$NP. \end{theorem} \begin{proof} Given an instance $(S,{\mathcal R})$ of max $k$-cover, build a bipartite graph $G=(U \cup V, E)$ where $U$ has a node $u_{i}$ for each $s_i \in S$ and $V$ has a node $v_{j}$ for each set $R_{j} \in {\mathcal R}$. Add the edge $\{u_{i}, v_{j}\}$ to $E$ if and only if $s_{i} \in R_{j}$.
Assign profit $p(u_{i})=1$ and weight $w(u_{i})=0$ for each vertex $u_{i} \in U$, and profit $p(v_{j})=0$ and weight $w(v_{j})=1$ for each vertex $v_{j} \in V$. Since no two vertices in $U$ are adjacent and every vertex in $U$ has zero weight, our strategy is to pick vertices from $V$ and all of their neighbours in $U$. Since every vertex of $U$ has unit profit, we should choose the $k$ vertices from $V$ which collectively have the most neighbours. This is exactly the max $k$-cover problem. \end{proof} The max $k$-cover problem represents a class of {\em budgeted maximum coverage} ({BMC}\xspace) problems where the elements in the base set have unit profit (referred to as weights in~\cite{kmn:ipl1999}) and the cover sets have unit weight (referred to as costs in~\cite{kmn:ipl1999}). In fact, one can use the above reduction to represent an arbitrary {BMC}\xspace instance: form the same bipartite graph, assign the element weights in {BMC}\xspace as vertex profits in $U$, and finally assign the covering set costs in {BMC}\xspace as vertex weights in $V$. \subsection{General, directed 1-neighbour knapsack is hard to approximate} \label{sec:gd1n} Here we consider the 1-neighbour knapsack problem where $G$ is directed and has arbitrary profits and weights. We show via a reduction from {\em directed Steiner tree} ({DST}\xspace) that the general, directed 1-neighbour problem is hard to approximate within a factor of $1/ \Omega(\log^{1-\varepsilon} n)$. Our result holds even for DAGs. Because of this negative result, we also do not expect good approximations to exist for either {\sc Best-Profit-Viable} or {\sc Best-Ratio-Viable} for any viable family. In the {DST}\xspace problem on DAGs we are given a DAG $G=(V,E)$ where each arc has an associated cost, a subset of $t$ vertices called {\em terminals}, and a root vertex $r \in V$. The goal is to find a minimum cost set of arcs that together connect $r$ to all the terminals ({\em i.e.}, the arcs form an out-arborescence rooted at $r$). For all $\varepsilon >0$, {DST}\xspace admits no $\log^{2-\varepsilon} n$-approximation algorithm unless $NP\subseteq ZTIME[n^{\ensuremath{\mathrm{poly}}\log n}]$~\cite{HK}. This result holds even for very simple DAGs such as {\em leveled DAGs} in which $r$ is the only root, $r$ is at level 0, each arc goes from a vertex at level $i$ to a vertex at level $i+1$, and there are $O(\log n)$ levels. We use leveled DAGs in our proof of the following theorem. \begin{theorem} \label{thm:gd1nlb} The general, directed 1-neighbour knapsack problem is $1/\Omega(\log^{1-\varepsilon} n)$-hard to approximate unless $NP\subseteq ZTIME [n^{\ensuremath{\mathrm{poly}}\log n}]$. \end{theorem} \begin{proof} Let $D$ be an instance of {DST}\xspace where the underlying graph $G$ is a leveled DAG with a single root $r$. Suppose there is a solution to $D$ of cost $C$. \begin{claim} \label{claim:cover} If there is an $\alpha$-approximation algorithm for the general, directed 1-neighbour knapsack problem then a solution to $D$ with cost $O(\alpha \log t)\times C$ can be found, where $t$ is the number of terminals in $D$. \end{claim} \begin{proof} Let $G=(V,A)$ be the DAG in instance $D$. We modify it to $G'=(V',A')$ by splitting each arc $e\in A$, placing on $e$ a dummy vertex with weight equal to the cost of $e$ according to $D$ and profit 0. In addition, we also reverse the orientation of each arc.
Finally, all other vertices are given weight 0, and the terminals are assigned a profit of 1 while the non-terminal vertices of $G$ are given a profit of 0. We create an instance $N$ of the general, directed 1-neighbour knapsack problem consisting of $G'$ and a budget bound of $C$. By assumption, there is a solution to $N$ with weight at most $C$ and profit $t$. Therefore, given $N$, an $\alpha$-approximation algorithm would produce a solution whose weight is at most $C$ and which includes at least $t/\alpha$ terminals. That is, it has a profit of at least $t/\alpha$. Set the weights of the dummy nodes on the arcs used in the solution to 0. Then, for all terminals included in this solution, set their profit to 0 and repeat. Standard set-cover analysis shows that after $O(\alpha \log t)$ repetitions, each terminal will have been connected to the root in at least one of the solutions. Therefore the union of all the arcs in these solutions has cost at most $O(\alpha \log t)\times C$ and connects all terminals to the root. \hfill \end{proof} Using the above claim, we show that if there is an $\alpha$-approximation algorithm for the general, directed 1-neighbour problem then there is an $O(\alpha \log t)$-approximation algorithm for {DST}\xspace, which implies the theorem. Let $L$ be the total cost of the arcs in the instance of {DST}\xspace. For each $i$ with $2^i < L$, take $C=2^i$ and perform the procedure in the previous claim for $\alpha \log t$ iterations. If after these iterations all terminals are connected to the root then call the cost of the resulting arcs a valid cost. Finally, choose the smallest valid cost, say $C'$; then $C'$ is no more than $2C_{\ensuremath{\mathrm{OPT}\xspace}}$ where $C_{\ensuremath{\mathrm{OPT}\xspace}}$ is the optimal cost of a solution for the {DST}\xspace instance. By the previous claim we have a solution whose cost is at most $2C_{\ensuremath{\mathrm{OPT}\xspace}} \times O(\alpha \log t)$. \hfill \end{proof} \section{The uniform, directed 1-neighbour knapsack problem} \label{sec:ud1n} In this section, we give a PTAS for the uniform, directed 1-neighbour knapsack problem. We rule out an FPTAS by proving the following theorem. \begin{theorem} \label{thm:ud1n-hard} The uniform, directed 1-neighbour problem is strongly NP-hard. \end{theorem} \begin{proof} The proof is a reduction from set cover. Let the base set for an instance be $S=\{ s_1, s_2, \ldots, s_{n}\}$ and the collection of subsets of $S$ be ${\mathcal R}=\{R_1, R_2, \ldots, R_{m}\}$; the question is whether $S$ can be covered by at most $t$ of the sets. We build an instance of the 1-neighbour knapsack problem as follows. Let $M = n+1$. For each subset $R_i$ create a cycle $C_i$ of size $M$; the cycles are pairwise vertex disjoint. In each such cycle $C_i$ choose some node arbitrarily and denote it by $c_i$. For each $s_j\in S$, define a new node in $V$ and label it $v_j$. The arcs are the cycle arcs together with $A=\{(v_j,c_i)\; : \; s_j\in R_i\}$. Let the capacity of the knapsack be $k = tM+n$. Suppose ${\mathcal R}'$ is a solution to the set-cover instance. Since $1\le |{\mathcal R}'|\le t$, we can define $0\le p <t$ to be such that $|{\mathcal R}'|+p=t$. Let ${\mathcal R}''=\{R_{i(1)}, R_{i(2)}, \ldots, R_{i(p)}\}$ be a collection of $p$ elements of $\mathcal R$ not in ${\mathcal R}'$. Let $G'$ be the graph induced by the union of the nodes in $C_j$ for each $R_j\in {\mathcal R}' \cup {\mathcal R}''$, together with $\{v_1, v_2, \ldots, v_n\}$: $G'$ consists of exactly $tM+n$ nodes. Every vertex in the cycles of $G'$ has out-degree 1.
Since ${\mathcal R}'$ is a set cover, for every $s_j\in S$ there is some $R_i\in {\mathcal R}'$ with $s_j\in R_i$, and so the arc $(v_j, c_i)$ is in $G'$. It follows that $G'$ is a witness for a 1-neighbour set of size $k=tM+n$. Now suppose that the subgraph $G'$ of $G$ is a solution to the 1-neighbour knapsack instance with value $k$. Since $M>n$, it is straightforward to check that $G'$ must consist of a collection $\mathcal C$ of exactly $t$ cycles, say ${\mathcal C}=\{C_{a(1)}, C_{a(2)}, \ldots, C_{a(t)}\}$, and each node $v_i$, $1\le i\le n$, along with some arc $(v_i,c_{a(j_i)})$. But by the definition of $G$, that means that $s_i\in R_{a(j_i)}$ for $1\le i\le n$, and so $\{ R_{a(j_1)}, R_{a(j_2)},\ldots , R_{a(j_n)}\}$ is a solution to the set cover instance. \end{proof} \subsection{A PTAS for the uniform, directed 1-neighbour problem.} Let $U$ be a 1-neighbour set. Let $A_U$ be a minimal set of arcs of $G$ such that for every vertex $u \in U$, $\delta_{G[A_U]}(u) \geq \min \{\delta_G(u),1\}$. That is, $A_U$ is a {\em witness} to the feasibility of $U$ as a 1-neighbour set. Since each node of $U$ has out-degree 0 or 1 in $G[A_U]$, the structure of $A_U$ has the following form. \begin{property} \label{prop:structure} Each connected component of $G[A_U]$ is a cycle $C$ and a collection of vertex-disjoint in-arborescences, each rooted at a node of $C$. $C$ may be trivial, i.e.,~$C$ may be a single vertex $v$, in which case $\delta_G(v) = 0$. \end{property} For a strongly connected component $X$, let $c(X)$ be the size of the shortest directed cycle in $X$, with $c(X) = 1$ if and only if $|X| = 1$. \begin{lemma} \label{lem:scc-structure} There is an optimal 1-neighbour knapsack $U$ and a witness $A_U$ such that for each non-trivial, maximal SCC $K$ of $G$, there is at most one cycle of $A_U$ in $K$ and this cycle is a smallest cycle of $K$. \end{lemma} \begin{proof} First we modify $A_U$ so that it contains smallest cycles of maximal SCCs. We rely heavily on the structure of $A_U$ guaranteed by Property~\ref{prop:structure}. The idea is illustrated in Fig.~\ref{fig:1-neighbour-structure}. Let $C$ be a cycle of $A_U$ and let $K$ be the maximal SCC of $G$ that contains $C$. Suppose $C$ is not the smallest cycle of $K$ or there is more than one cycle of $A_U$ in $K$. Let $H$ be the connected component of $A_U$ containing $C$. Let $C'$ be a smallest cycle of $K$. Let $P$ be the shortest directed path from $C$ to $C'$. Since $C$ and $C'$ are in a common SCC, $P$ exists. Let $T$ be an in-arborescence in $G$ spanning $P$, $C$ and $H$, rooted at a vertex of $C'$. Some vertices of $C' \cup P$ might already be in the 1-neighbour set $U$: let $X$ be these vertices. Note that $X$ and $V(H)$ are disjoint because of Property~\ref{prop:structure}. Let $T'$ be a sub-arborescence of $T$ such that: \begin{itemize} \item $T'$ has the same root as $T$, and \item $|V(T' \cup C') \cup X| = |V(H)|+|X|$. \end{itemize} Since $|V(T \cup C')| = |V(P\cup H \cup C')| \geq |V(H)| + |X|$ and $T \cup C'$ is connected, such an in-arborescence exists. Let $B = (A_U \setminus H) \cup T' \cup C'$. Let $B'$ be a witness contained in $B$ that spans $V(B)$ and contains the arcs of $C'$. Then $B'$ spans $|U|$ vertices and contains a smallest cycle of $K$. We repeat this procedure as long as the witness contains a cycle that is not a smallest cycle of its maximal SCC in $G$, or contains two cycles within the same maximal SCC.
\hfill \end{proof} \begin{figure}[tb] \centering \subfigure[]{\includegraphics[scale=0.4]{figures/1-friend-structure-a}} \subfigure[]{\includegraphics[scale=0.4]{figures/1-friend-structure-b}} \caption{Construction of a witness containing the smallest cycle of an SCC. The shaded region highlights the vertices of an SCC (edges not in $C$, $C'$, or $P$ are not depicted). The edges of the witness are solid. (a) The smallest cycle $C'$ is not in the witness. (b) By removing an edge from $C$ and leaf edges from the in-arborescences rooted on $C$, we create a witness that includes the smallest cycle $C'$.} \label{fig:1-neighbour-structure} \end{figure} To describe the algorithm, let $\mathcal{D} = (S,F)$ be the DAG of maximal SCCs of $G$ and let $\varepsilon > 1/k$ be a fixed constant where $k$ is the knapsack bound. (If $\varepsilon \leq 1/k$ then the brute force algorithm which considers all subsets $V' \subseteq V(G)$ with $|V'| \leq k$ yields an acceptable bound for a PTAS.) We say that $u\in S$ is {\em large} if $c(u) > \varepsilon\, k$, {\em petite} if $1 < c(u) \leq \varepsilon\, k$, or {\em tiny} if $c(u)=1$. Let $L$, $P$, and $T$ be the set of all large, petite and tiny SCCs respectively. Note that since $\varepsilon > 1/k$, for every $u \in L$, $c(u)> \varepsilon\, k >1$. \begin{center} \fbox{ \begin{minipage}[h]{.9 \linewidth} \noindent {\sc uniform-directed-1-neighbour} \begin{tabbing} \qquad $B = \emptyset$ \\ \qquad For every subset $X\subseteq L$ such that $|X| \le1/\varepsilon$\\ \qquad\qquad \= $D_X = \mathcal{D}[P \cup X]$.\\ \> $Z = \{ \mbox{tiny sinks of $\mathcal{D}$} \} \cup \{ \mbox{petite sinks of $D_X$} \}$ \\ \> $P' = $ any maximal subset of $Z$ such that $c(P') + c(X) \leq k$.\\ \> $U = \bigcup_{K \in P' \cup X}\{V(C)\ :\ C\mbox{ is a smallest cycle of }K \}$\\ \> Greedily add vertices to $U$ such that $U$ remains a 1-neighbour \\ \> \qquad set until there are no more vertices to add or \\ \> \qquad $|U| = k$. (Via a backwards search rooted at $U$.)\\ \> $B = \arg\max \{|B|, |U|\}$ \\ \qquad Return $B$. \end{tabbing} \end{minipage} } \end{center} \begin{theorem} \label{thm:1-neighbour-ptas} {\sc uniform-directed-1-neighbour} is a PTAS for the uniform, directed 1-neighbour knapsack problem. \end{theorem} \begin{proof} Let $U^*$ be an optimal 1-neighbour knapsack and let $A_{U^*}$ be its witness as guaranteed by Lemma~\ref{lem:scc-structure}. Let $\mathcal{L}, \mathcal{P}$, and $\mathcal{T}$ be the sets of large, petite, and tiny cycles in $A_{U^*}$ respectively. By Lemma~\ref{lem:scc-structure}, each of these cycles is in a different maximal SCC and each cycle is a smallest cycle in its maximal SCC. Let $\mathcal{L}=\{L_{1}, \ldots, L_{\ell} \}$ and let $L^*$ be the set of large SCCs that intersect $L_1,\ldots, L_\ell$. Note that $|L^*| = \ell$. Since $k \geq |U^*| \geq \sum_{i=1}^\ell |L_{i}| > \ell\, \varepsilon\, k$ we have $\ell < 1/\varepsilon$. So, in some iteration of {\sc uniform-directed-1-neighbour}, $X = L^*$. We analyze this iteration of the algorithm. There are two cases: \begin{description} \item[$P'=Z$.] First we show that every vertex in $U^*$ has a descendant in $X \cup P'$. Clearly if a vertex of $U^*$ has a descendant in some $L_i \in \mathcal{L}$, it has a descendant in $X$. Suppose a vertex of $U^*$ has a descendant in some $P_i \in \mathcal{P}$. $P_i$ is within an SCC of $D_X$, and so it must have a descendant that is in a sink of $D_X$. Similarly, suppose a vertex of $U^{*}$ has a descendant in some $T_{i} \in \mathcal{T}$. 
$T_{i}$ is either a sink in $\mathcal{D}$ or has a descendant that is either a sink of $\mathcal{D}$ or a sink of $D_{X}$. All of these sinks are contained in $X \cup P'$. Since every vertex of $U^*$ can reach a vertex in $X \cup P'$, greedily adding to this set results in $|U| = |U^*|$ and the result of {\sc uniform-directed-1-neighbour} is optimal. \item[$P' \neq Z$.] For any sink $x \notin P'$, $c(P')+c(X)+c(x) > k$ but $c(x) \leq \varepsilon\, k$ by the definition of tiny and petite. So, $|U| \geq c(P')+c(X) > (1-\varepsilon) k$, and the resulting solution is within $(1-\varepsilon)$ of optimal. \end{description} The running time of {\sc uniform-directed-1-neighbour} is $n^{O(1/\varepsilon)}$. It is dominated by the number of iterations, each of which can be executed in polynomial time. \hfill \end{proof} \section{The uniform, undirected 1-neighbour problem} \label{sec:uu1n} We now consider the final case of the 1-neighbour problems, namely the uniform, undirected 1-neighbour problem. We note that there is a relatively straightforward linear-time algorithm for finding an optimal solution for instances of this problem. The algorithm essentially breaks the graph into connected components and then, using a counting argument, builds an optimal solution from the components. \begin{theorem} \label{thm:uu1n} The uniform, undirected 1-neighbour knapsack problem has a linear-time solution. \end{theorem} \begin{proof} Let $\mathcal{G}=(\mathcal{G}_{1}, \mathcal{G}_{2}, \ldots, \mathcal{G}_{t})$ be the connected components of the dependency graph $G$ in {\em decreasing} order by size (we can find such an ordering in linear time). Note that each connected component $\mathcal{G}_{j}$ constitutes a feasible set for the uniform, undirected 1-neighbour problem on $G$. If $k$ is odd and $|\mathcal{G}_{j}|=2$ for all $j$, then the optimal solution has size $k-1$ since no vertex can be included on its own. In this case the first $\lfloor k/2 \rfloor$ connected components constitute a feasible, optimal solution. Otherwise, let $i$ be the smallest index such that $\sum_{j=1}^{i} |\mathcal{G}_{j}| > k$. If $i=1$ then let $\mathcal{S}=0$. Otherwise, take $\mathcal{S}=\sum_{j=1}^{i-1} |\mathcal{G}_{j}|$. If $\mathcal{S}=k$ then the first $i-1$ components of $G$ have exactly $k$ nodes and constitute a feasible, optimal solution for $G$. Otherwise, by our choice of $i$, $\mathcal{S}<k$ and $|\mathcal{G}_{i}|>k-\mathcal{S}$. Let $u_{1},u_{2}, \ldots, u_{|\mathcal{G}_{i}|}$ be an ordering of the nodes in $\mathcal{G}_{i}$ given by a breadth-first search (started from an arbitrary node). Collect the first $k-\mathcal{S}$ nodes of this ordering in $U=\{u_{l} \,|\, l \leq k-\mathcal{S}\}$. We consider three cases: \begin{enumerate} \item If $|U|=1$ and $|\mathcal{G}_{t}|=1$, then the first $i-1$ connected components along with $\mathcal{G}_{t}$ constitute a feasible, optimal solution. \item If $|U|=1$ and $|\mathcal{G}_{t}| \neq 1$, then $|\mathcal{G}_{1}|>2$. If $k=1$ then return $\emptyset$ since there is no feasible solution; otherwise drop an appropriate node from $\mathcal{G}_{1}$ (one that keeps the rest of $\mathcal{G}_{1}$ connected) and add $u_{2}$ to $U$, which is possible since $|\mathcal{G}_{i}|>1$. Now the first $i-1$ connected components (without the dropped node of $\mathcal{G}_{1}$) along with $U$ constitute a feasible, optimal solution. \item If $|U|>1$, then the first $i-1$ connected components along with $U$ constitute a feasible, optimal solution.
\end{enumerate} \end{proof} \section{The all-neighbours knapsack problem} \label{sec:all-neighbours} In this section, we consider the all-neighbours knapsack problem. Our primary result is a PTAS for the uniform, directed all-neighbours problem. We also show that the uniform, directed all-neighbours problem is NP-hard in the strong sense, so it admits no FPTAS unless P=NP. In addition, we show that uniform, undirected all-neighbours knapsack reduces to the classic knapsack problem. A set of vertices $U$ is a {\em feasible} all-neighbours knapsack solution if, for every vertex $u \in U$, $N_G(u) \subseteq U$. Recall that an SCC $c \in V(\mathcal{D})$ is obtained by contracting $V(c) \subseteq V(G)$. For convenience, let $w(c) = w(V(c))$ and $p(c) = p(V(c))$. Let $\mathcal{S}=\{ \ensuremath{\mathrm{desc}}_\mathcal{D}(u) \,|\, u \in V(\mathcal{D})\}$ be the set of descendant sets for every node of $\mathcal{D}$. We now show that all feasible solutions to the all-neighbours knapsack problem can be decomposed into sets from $\mathcal{S}$. \begin{property} \label{prop:all-neighbours} Every feasible solution to a general, directed all-neighbours instance has the form $\cup_{u \in Q} V(u)$ where $Q \subseteq \mathcal{S}$. \end{property} \begin{proof} Let $U$ be a feasible solution for the dependency graph $G$. We claim that if $u \in U$ then there exists a set $S \in \mathcal{S}$ such that $u \in V(S)$ and $V(S) \subseteq U$. Notice that the all-neighbours constraint implies that if $b$ is a neighbour of $a$ in $G$ and $c$ is a neighbour of $b$ in $G$, then $a \in U$ implies $c \in U$. Thus, by transitivity, if $a \in U$ and $b$ is reachable from $a$ then $b \in U$. Let $u \in U$ and let $v$ be the node in $\mathcal{D}$ such that $u \in V(v)$. Suppose that $w \in \ensuremath{\mathrm{desc}}_{\mathcal{D}}(v)$. Then every node in $V(w)$ is reachable from $u$ in $G$, as is every node in $V(\ensuremath{\mathrm{desc}}_{\mathcal{D}}(v))$, so $V(\ensuremath{\mathrm{desc}}_{\mathcal{D}}(v)) \subseteq U$, which proves the claim since $\ensuremath{\mathrm{desc}}_\mathcal{D}(v) \in \mathcal{S}$. The property follows. \end{proof} Property~\ref{prop:all-neighbours} tells us that if $U$ is a feasible solution for $G$ and $u \in U$, then every node reachable from $u$ in $G$ must also be in $U$. We use this property extensively throughout the rest of Section~\ref{sec:all-neighbours}. \subsection{The uniform, directed all-neighbour knapsack problem} We show that {\sc uniform-directed-all-neighbour} (below) is a PTAS for the uniform, directed all-neighbours knapsack problem. The key ideas are to (a) identify a set $A$ of {\em heavy nodes} in $V(\mathcal{D})$, i.e., those nodes $v$ where $w(v)> \epsilon k$, and then (b) augment subsets of the heavy nodes with nodes from the set $B$ of {\em light nodes}, i.e., those nodes $v$ with $w(v)\leq \epsilon k$. We note that this algorithm works on the set of SCCs and can handle a case slightly more general than uniform: one in which the weight and profit of each vertex are equal, but different vertices may have different weights.
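The condensation DAG and descendant sets used throughout this section are straightforward to compute. The sketch below is ours and uses the \texttt{networkx} library, which we assume to be available; by Property~\ref{prop:all-neighbours}, feasible solutions are exactly unions of the vertex sets computed here.
\begin{verbatim}
import networkx as nx

def descendant_sets(G):
    """For each SCC-node u of the condensation D of directed graph G,
    return the set of original vertices in desc_D(u)."""
    D = nx.condensation(G)     # DAG; D.nodes[u]['members'] holds V(u)
    S = {}
    for u in D.nodes:
        closure = {u} | nx.descendants(D, u)   # desc_D(u), incl. u
        S[u] = set().union(*(D.nodes[c]['members'] for c in closure))
    return D, S

# Example: selecting anything in the SCC {a, b} must pull in c.
G = nx.DiGraph([('a', 'b'), ('b', 'a'), ('b', 'c')])
D, S = descendant_sets(G)
assert {'a', 'b', 'c'} in S.values()
\end{verbatim}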
\begin{center} \fbox{ \begin{minipage}[h]{.9\linewidth} \noindent {\sc uniform-directed-all-neighbour} \begin{tabbing} \qquad \=$A = \{v \in V(\mathcal{D}) \,|\, w(v) > \epsilon k\}$, $B = V(\mathcal{D}) \setminus A$, $X = \emptyset$ \\ \> For every subset $A'$ of $A$ such that $|A'| \leq 1/\epsilon$ \\ \> \qquad \=$T = \ensuremath{\mathrm{desc}}_\mathcal{D}(A')$ \\ \> \> Let $B' = \{v \,|\, v \in B \cap (V(\mathcal{D}) \setminus T)\mbox{ and } N_{\mathcal{D}}(v) \subseteq T\}$ \\ \> \> While there is an element $b \in B'$ with $w(T)+w(b) \leq k$ \\ \> \> \qquad \= Add $b$ to $T$. \\ \> \> \> Update $B' = \{v \,|\, v \in B \cap (V(\mathcal{D}) \setminus T)\mbox{ and } N_\mathcal{D}(v) \subseteq T\}$ \\ \> \> If $w(V(T)) > w(X)$ then $X = V(T)$ \\ \> Return $X$\\ \end{tabbing} \end{minipage} } \end{center} \begin{theorem} \label{thm:uniform-directed-all} {\sc uniform-directed-all-neighbour} is a PTAS for the uniform, directed all-neighbour knapsack problem. \end{theorem} \begin{proof} Let $U^*$ be a set of vertices of $G$ forming an optimal solution to the uniform, directed all-neighbours knapsack problem. By Property~\ref{prop:all-neighbours}, there is a subset of nodes $Q^{*} \subseteq V(\mathcal{D})$ such that $U^* = \cup_{u\in Q^*} V(u)$. Let $A^* = Q^* \cap A$. Since the weight of any node in $A$ exceeds $\epsilon k$ and the weight of $U^*$ is at most $k$, $|A^*| \leq 1/\epsilon$. Since all subsets of $A$ of size at most $1/\epsilon$ are considered in the for loop of {\sc uniform-directed-all-neighbour}, the set $A^*$ will be one such set. Let $D^* = \ensuremath{\mathrm{desc}}_\mathcal{D}(A^*)$. Let $\tilde{B}$ be all the nodes of $\mathcal{D}$ added to the solution in the iterations of the while loop. Let $T^*=D^* \cup \tilde{B}$. Since $A^* \subseteq Q^*$, $D^* \subseteq Q^*$ by Property~\ref{prop:all-neighbours}. Let $B^* = Q^* \setminus D^*$. $\tilde B$ and $B^*$ are not necessarily the same set of nodes. Suppose, for contradiction, that $w(T^*) < (1-\epsilon)w(U^*)$. Then there is a node $u \in B^* \setminus \tilde B$ all of whose neighbours are in $T^*$ (take $u$ to be a sink of the sub-DAG of $\mathcal{D}$ induced by $B^* \setminus \tilde B$). Since $w(u) \leq \epsilon k$ and $w(T^*) < (1-\epsilon)w(U^*) \leq (1-\epsilon)k$, the while loop would have added $u$ to $\tilde B$, a contradiction. Hence $w(T^*) \geq (1-\epsilon)w(U^*)$, and the returned solution is within $(1-\epsilon)$ of optimal. We now bound the running time of {\sc uniform-directed-all-neighbour}. Line 1, which finds the set of heavy nodes $A\subseteq V(\mathcal{D})$, computes a simple set difference, and initializes the return value, takes at most $O(n)$ time. Since $|A| \leq \frac{n}{\epsilon k}$ and $|A'| \leq 1/ \epsilon$ there are at most ${\frac{n}{ \epsilon k} \choose 1/ \epsilon} \leq (n / \epsilon k)^{1/\epsilon}$ subsets of $A$ considered in line 2, so line 2 executes at most $(n / \epsilon k)^{1/\epsilon}$ times. Since we will never execute line 4 more than $n$ times, we have an $O(n^{1+(1/\epsilon)})$-time algorithm. \end{proof} \begin{theorem} \label{thm:uniform-directed-all-hard} The uniform, directed all-neighbours problem is NP-hard. \end{theorem} \begin{proof} We reduce the subset-union knapsack problem (SUKP) to the uniform, directed all-neighbours knapsack problem. An instance of SUKP consists of a base set of elements $S=\{x_1, x_2,\ldots, x_n\}$ where each $x_i$ has an integer weight $w_i$, a positive integer capacity $c$, a target profit $d$, and a collection $C=\{S_1,S_2,\ldots, S_m\}$ of subsets $S_i\subseteq S$, where each subset $S_i$ has a non-negative profit $p_i$.
The question asked is: does there exist a sub-collection $C'=\{S_{i_1}, S_{i_2},\ldots, S_{i_t}\}$ of $C$ such that $\sum_{j=1}^t p_{i_j} \geq d$ and, for $T=\cup_{j=1}^t S_{i_j}$, $\sum_{x_s\in T} w_s \leq c$? This problem is known to be NP-hard in the strong sense even for the case where $w_i=p_i=1$ and $|S_i|=2$ for $1\le i\le m$~\cite{goldschmidt-etal:nrl1994}. We consider instances of SUKP where every subset $S_j$ in $C$ has cardinality 2 and profit $p_j=1$. Also, each element $x_i$ has weight $w_i=1$. Let $c$ be the capacity and $d$ be the target profit. Given such an instance of SUKP, we next define an instance of the uniform, directed all-neighbours problem that has a solution if and only if the SUKP instance has a solution. Let $G=(V,A)$ be a directed graph where for each element $x_i$ there is a strongly connected component $scc_i$ with $M=d+1$ nodes, one of which is labeled $z_i$. Let $U_i$ denote the set of nodes in $scc_i$. For each subset $S_j$ there is a node $v_j\in V$. For every $x_i\in S_j$ there is an arc $(v_j,z_i)\in A$, and these are the only other arcs. Let $k=cM+d$ be the knapsack capacity. Then we claim that there is a feasible all-neighbours solution of size $k$ if and only if there is a solution to the SUKP instance having weight at most $c$ and profit at least $d$. Suppose there is a solution $P$ of size $k$ to the uniform, directed all-neighbours instance. Since $k=cM+d$ and $M>d$, there must be some collection $K$ of node sets $U_i$ of strongly connected components such that $P$ contains the union of the nodes of the $U_i$'s in $K$, where $|K|\le c$. Hence $P$ must also contain a set $Z$ of at least $d$ nodes $v_j$. Since $P$ is a feasible solution, it must be that for every $v_j\in Z$, if $x_i\in S_j$ then $U_i \subseteq P$. It is straightforward then to check that the collection of sets $C'=\{S_j\; : \; v_j \in Z \}$ is a solution to the SUKP instance with profit $|Z|\ge d$, and since $\cup_{v_j\in Z} S_j \subseteq \{ x_i \; : \; U_i\in K\}$ it has weight at most $c$. Now suppose $C'=\{S_{j_1},S_{j_2},\ldots, S_{j_t}\}$ is a solution to the SUKP instance where $t\geq d$ and $|\cup_{r=1}^t S_{j_r}| \le c$. Let $N=\cup_{r=1}^t S_{j_r}$ and hence $|N|\leq c$. Arbitrarily choose some $K\subseteq C'$ where $|K|=d$. Then take $P'=\{v_j \,|\, S_j\in K\}$. Let $N'$ be a set of elements such that $N\subseteq N'$ and $|N'|=c$. Define $P''=\cup_{x_i \in N'} U_i$. Since $K\subseteq C'$, it must be that for every $v_j\in P'$, if $x_i\in S_j$ then $U_i \subseteq P''$. Therefore $P=P'\cup P''$ is a solution to the all-neighbours problem where $|P|=cM+d$. \end{proof} \subsection{The uniform, undirected all-neighbour knapsack problem} The uniform, undirected all-neighbours knapsack problem is solvable in polynomial time. In this case we just need to find a subset of the connected components of $G$ whose total size is as large as possible without exceeding $k$. But this is exactly the subset sum problem. Since $k \leq n$, the standard dynamic programming algorithm yields a truly polynomial-time $O(nk)$ solution. \subsection{The general, all-neighbour knapsack problem} As mentioned in Section~\ref{sec:related}, the general, directed all-neighbours knapsack problem is a generalization of the partially-ordered knapsack problem~\cite{Kolliopoulos:2007p1242}, which has been shown to be hard to approximate within a $2^{\log^\delta n}$ factor unless 3SAT$\in$DTIME$(2^{n^{3/4+\epsilon}})$~\cite{Hajiaghayi:2006p1244}. Hence the general, directed all-neighbours knapsack problem is hard to approximate within this factor under the same complexity assumption.
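Returning briefly to the uniform, undirected case above, the $O(nk)$ subset-sum dynamic program over component sizes is short enough to sketch. The version below is ours; it reports only the achievable size, and recovering the components themselves requires standard back-pointers.
\begin{verbatim}
def max_component_fill(sizes, k):
    """Largest total <= k obtainable as a sum of whole connected
    component sizes. Runs in O(n * k) time."""
    reach = [True] + [False] * k
    for s in sizes:
        for t in range(k, s - 1, -1):   # each component used at most once
            if reach[t - s]:
                reach[t] = True
    return max(t for t in range(k + 1) if reach[t])
\end{verbatim}
The general, undirected case, discussed next, replaces this subset-sum computation with a 0-1 knapsack over components.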
In the undirected case, i.e., the case where the dependency graph $G$ is undirected, $\mathcal{D}$ becomes a set of disjoint nodes, one for each connected component of $G$, and $\mathcal{S}$ consists of the singletons of $V(\mathcal{D})$. By Property~\ref{prop:all-neighbours}, we are left with the problem of finding a subset of nodes $Q \subseteq V(\mathcal{D})$ such that $p(Q)$ is maximal subject to $w(Q) \leq k$. But this is exactly the 0-1 knapsack problem, which has a well-known FPTAS. Thus, the general, undirected all-neighbours problem also has an FPTAS. Contrast this with the uniform, directed all-neighbours problem. There, the sets in $\mathcal{S}$ are not disjoint, so we cannot use the 0-1 knapsack ideas. \section{Future directions} There are several open problems to consider, including closing the gaps in our bounds, improving the running times of the PTASes, and giving approximation algorithms for the general, directed versions of both the 1-neighbour and all-neighbours problems. We believe that fully understanding these problems will lead to ideas for a much more general problem: maximizing a linear function with a submodular constraint. \paragraph{{\bf Acknowledgments}} We thank Anupam Gupta for helpful discussions in showing hardness of approximation for general, directed 1-neighbour knapsack. \bibliographystyle{plain}
{ "timestamp": "2011-09-28T02:01:17", "yymm": "0910", "arxiv_id": "0910.0777", "language": "en", "url": "https://arxiv.org/abs/0910.0777", "abstract": "We study a constrained version of the knapsack problem in which dependencies between items are given by the adjacencies of a graph. In the 1-neighbour knapsack problem, an item can be selected only if at least one of its neighbours is also selected. In the all-neighbours knapsack problem, an item can be selected only if all its neighbours are also selected. We give approximation algorithms and hardness results when the nodes have both uniform and arbitrary weight and profit functions, and when the dependency graph is directed and undirected.", "subjects": "Data Structures and Algorithms (cs.DS); Discrete Mathematics (cs.DM)", "title": "The Knapsack Problem with Neighbour Constraints", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759660443166, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405705838165 }
https://arxiv.org/abs/1409.1865
On the Identification of Symmetric Quadrature Rules for Finite Element Methods
In this paper we describe a methodology for the identification of symmetric quadrature rules inside of quadrilaterals, triangles, tetrahedra, prisms, pyramids, and hexahedra. The methodology is free from manual intervention and is capable of identifying an ensemble of rules with a given strength and a given number of points. We also present polyquad which is an implementation of our methodology. Using polyquad we proceed to derive a complete set of symmetric rules on the aforementioned domains. All rules possess purely positive weights and have all points inside the domain. Many of the rules appear to be new, and an improvement over those tabulated in the literature.
\section{Introduction} When using the finite element method to solve a system of partial differential equations it is often necessary to evaluate surface and volume integrals inside of a standardised domain $\vec{\Omega}$ \cite{hesthaven2008nodal, solin2003higher, karniadakis2013spectral}. A popular numerical integration technique is that of Gaussian quadrature in which \begin{equation} \label{eq:quad} \int_{\vec{\Omega}} f(\vec{x})\, \mathrm{d} \vec{x} \approx \sum_i^{N_p} \omega_i f(\vec{x}_i), \end{equation} where $f(\vec{x})$ is the function to be integrated, $\{\vec{x}_i\}$ are a set of $N_p$ points, and $\{\omega_i\}$ the set of associated weights. The points and weights are said to define a \emph{quadrature rule}. A rule is said to be of strength $\phi$ if it is capable of exactly integrating any polynomial of maximal degree $\phi$ over $\vec{\Omega}$. A degree $\phi$ polynomial $p(\vec{x})$ with $\vec{x} \in \vec{\Omega}$ can be expressed as a linear combination of basis polynomials \begin{equation} p(\vec{x}) = \sum_i^{|\mathcal{P}^\phi|} \alpha_i \mathcal{P}^{\phi}_i(\vec{x}), \qquad \alpha_i = \int_{\vec{\Omega}} p(\vec{x}) \mathcal{P}^{\phi}_i(\vec{x})\, \mathrm{d} \vec{x}, \end{equation} where $\mathcal{P}^\phi$ is the set of basis polynomials of degree $\leq \phi$. From the linearity of integration it therefore follows that a strength $\phi$ quadrature rule is one which can exactly integrate the basis. Taking $f \in \mathcal{P}^\phi$, the task of obtaining an $N_p$ point quadrature rule of strength $\phi$ is hence reduced to finding a solution to a system of $|\mathcal{P}^\phi|$ nonlinear equations. This system can be seen to possess $(N_D + 1)N_p$ degrees of freedom, where $N_D \geq 2$ corresponds to the number of spatial dimensions. In the case of $N_p \lesssim 10$ the above system can often be solved analytically using a computer algebra package. However, beyond this it is usually necessary to solve the above system---or a simplification thereof---numerically. Much of the research into multidimensional quadrature over the past five decades has been directed towards the development of such numerical methods. The traditional objective when constructing quadrature rules is to obtain a rule of strength $\phi$ inside of a domain $\vec{\Omega}$ using the fewest number of points. To this end efficient quadrature rules have been derived for a variety of domains: triangles \cite{lyness1975moderate, dunavant1985high, lyness1994survey, savage1996quadrature, wandzurat2003symmetric, zhang2009set, taylor2005several, xiao2010numerical, witherden2013analysis, williams2014symmetric}, quadrilaterals \cite{dunavant1985economical, cools1988another, xiao2010numerical}, tetrahedra \cite{savage1996quadrature, zhang2009set, shunn2012symmetric, keast1986moderate}, prisms \cite{kubatko2013pri}, pyramids \cite{kubatko2013pyr}, and hexahedra \cite{stroud1971approximate, dunavant1986efficient, cools2001rotation, xiao2010numerical}. For finite element applications it is desirable that (i) points are arranged symmetrically inside of the domain, (ii) all of the points are strictly inside of the domain, and (iii) all of the weights are positive. The consideration given to these criteria in the literature cited above depends strongly on the intended field of application---not all rules are derived with the finite element method in mind. Much of the existing literature is predicated on the assumption that the integrand sits in the space of $\mathcal{P}^\phi$.
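As a concrete illustration of rule strength, consider the following sketch (our own, assuming Python with numpy; it uses numpy's Gauss-Legendre points rather than any rule from the works cited above): an $N_p$ point Gauss-Legendre rule on $[-1,1]$ integrates every monomial of degree at most $2N_p - 1$ exactly, and fails at degree $2N_p$.
\begin{verbatim}
# Check the strength of numpy's 4-point Gauss-Legendre rule on [-1, 1].
import numpy as np

x, w = np.polynomial.legendre.leggauss(4)   # N_p = 4, strength phi = 7
for d in range(9):
    exact = 0.0 if d % 2 else 2.0 / (d + 1) # exact integral of x^d
    print(d, np.isclose(w @ x**d, exact))   # True for d <= 7, False at d = 8
\end{verbatim}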
Under the assumption that the integrand sits in $\mathcal{P}^\phi$ there is little, other than the criteria listed above, to distinguish two $N_p$ point rules of strength $\phi$; both can be expected to compute the integral exactly with the same number of functional evaluations. It is therefore common practice to terminate the rule discovery process as soon as a rule is found. However, there are cases when either the integrand is inherently non-polynomial in nature, e.g. the quotient of two polynomials, or of a high degree, e.g. a polynomial raised to a high power. In these circumstances the above assumption no longer holds and it is necessary to consider the truncation term associated with each rule. Hence, within this context it is no longer clear that the traditional objective of minimising the number of points required to obtain a rule of given strength is suitable: it is possible that the addition of an extra point will permit the integration of several of the basis functions of degree $\phi + 1$. Over the past five or so years there has also been an increased interest in numerical schemes where the same set of points is used for both integration and interpolation. One example of such a scheme is the flux reconstruction (FR) approach introduced by Huynh \cite{huynh2007flux}. In the FR approach there is a need for quadrature rules that (i) are symmetric, (ii) remain strictly inside of the domain, (iii) have a prescribed number of points, and (iv) are associated with a well conditioned nodal basis for polynomial interpolation. These last two requirements exclude many of the points tabulated in the literature. Consequently, there is a need for \emph{bespoke} or \emph{designer} quadrature rules with non-standard properties. This paper describes a methodology for the derivation of symmetric quadrature rules inside of a variety of computational domains. The method accepts both the number of points and the desired quadrature strength as free parameters and---if successful---yields an ensemble of rules. Traits, such as the positivity of the weights, can then be assessed and rules binned according to their suitability for various applications. The remainder of this paper is structured as follows. In \autoref{sec:shapes} we introduce the six reference domains and enumerate their symmetries. Our methodology is presented in \autoref{sec:meth}. Based on the approach of Witherden and Vincent \cite{witherden2013analysis} the methodology requires no manual intervention and avoids issues relating to ill-conditioning. In \autoref{sec:impl} we proceed to describe our open-source implementation, \emph{polyquad}. Using polyquad a variety of truncation-optimised rules, many of which appear to improve over those tabulated in the literature, are obtained and presented in \autoref{sec:rules}. Finally, conclusions are drawn in \autoref{sec:conclusions}. \section{Bases, Symmetries, and Domains} \label{sec:shapes} \subsection{Basis polynomials} The defining property of a quadrature rule for a domain $\vec{\Omega}$ is its ability to exactly integrate the set of basis polynomials, $\mathcal{P}^\phi$. This set has an infinite number of representations, the simplest of which being the monomials. In two dimensions we can express the monomials as \begin{equation} \mathcal{P}^\phi = \bigl\{x^iy^j \mid 0 \leq i \leq \phi,\; 0 \leq j \leq \phi - i \bigr\}, \end{equation} where $\phi$ is the maximal degree. Unfortunately, at higher degrees the monomials become extremely sensitive to small perturbations in the inputs.
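This sensitivity can be seen directly with a small illustration of our own (assuming Python with numpy; it is not drawn from the cited references): the condition number of the Vandermonde matrix of the monomials grows rapidly with the degree.
\begin{verbatim}
# Condition number of the monomial (Vandermonde) matrix on [-1, 1].
import numpy as np

for deg in (5, 10, 15, 20):
    x = np.linspace(-1.0, 1.0, deg + 1)
    V = np.vander(x, increasing=True)  # columns 1, x, x^2, ..., x^deg
    print(deg, np.linalg.cond(V))      # grows by orders of magnitude
\end{verbatim}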
The resulting polynomial systems are poorly conditioned and hence difficult to solve numerically \cite{zhang2009set, shunn2012symmetric}. A solution to this is to switch to an \emph{orthonormal basis set} defined in two dimensions as \begin{equation} \mathcal{P}^\phi = \bigl\{\psi_{ij}(\vec{x}) \mid 0 \leq i \leq \phi,\; 0 \leq j \leq \phi - i \bigr\}, \end{equation} where $\vec{x} = (x,y)^T$ and $\psi_{ij}(\vec{x})$ satisfies, for all $\mu$ and $\nu$, \begin{equation} \int_{\vec{\Omega}} \psi_{ij}(\vec{x}) \psi_{\mu\nu}(\vec{x})\, \mathrm{d}\vec{x} = \delta_{i\mu}\delta_{j\nu}, \end{equation} where $\delta_{i\mu}$ is the Kronecker delta. In addition to being exceptionally well conditioned, orthonormal polynomial bases have other useful properties. Taking the constant mode of the basis to be $\psi_{00}(\vec{x}) = 1/c$ we see that \begin{equation} \label{eq:obint} \int_{\vec{\Omega}} \psi_{ij}(\vec{x}) \, \mathrm{d}\vec{x} = c \int_{\vec{\Omega}} \psi_{00}(\vec{x}) \psi_{ij}(\vec{x}) \, \mathrm{d}\vec{x} = c\delta_{i0}\delta_{j0}, \end{equation} from which we conclude that all non-constant modes of the basis integrate up to zero. Following Witherden and Vincent \cite{witherden2013analysis} we will use this property to define the truncation error associated with an $N_p$ point rule \begin{equation} \label{eq:trunc} \xi^2(\phi) = \sum_{i,j} \bigg\{\sum_{k}^{N_{p}}\omega_k \psi_{ij}(\vec{x}_k) - c\delta_{i0}\delta_{j0}\bigg\}^2. \end{equation} This definition is convenient as it is free from both integrals and normalisation factors. The task of constructing an $N_p$ point quadrature rule of strength $\phi$ is synonymous with finding a set of points and weights that minimise $\xi(\phi)$. Although the above discussion has been presented primarily in two dimensions, all of the ideas and relations carry over into three dimensions. \subsection{Symmetry orbits} A symmetric arrangement of $N_p$ points inside of a reference domain can be decomposed into a linear combination of \emph{symmetry orbits}. This concept is best elucidated with an example. Consider a line segment defined by $[-1,1]$. The segment possesses two symmetries: an identity transformation and a reflection about the origin. For an arrangement of distinct points to be symmetric it follows that if there is a point at $\alpha$ where $0 < \alpha \leq 1$ there must also be a point at $-\alpha$. We can codify this by writing $\mathcal{S}_2(\alpha) = \pm\alpha$ with $\abs{\mathcal{S}_2} = 2$. The function $\mathcal{S}_2$ is an example of a \emph{symmetry orbit} that takes a single \emph{orbital parameter}, $\alpha$, and generates two distinct points. In the limit of $\alpha \rightarrow 0$ the two points become degenerate. We handle this degeneracy by introducing a second orbit, $\mathcal{S}_1 = 0$, with $\abs{\mathcal{S}_1} = 1$. Having identified the symmetries we may now decompose a symmetric arrangement of points as \[ N_p = n_1\abs{\mathcal{S}_1} + n_2\abs{\mathcal{S}_2} = n_1 + 2n_2, \] where $n_1 \in \{0, 1\}$ and $n_2 \geq 0$ with the constraint on $n_1$ being necessary to ensure uniqueness. This is a constrained linear Diophantine equation; albeit one that is trivially solvable and admits only a single solution. As a concrete example we take $N_p = 11$. Solving the above equation we find $n_1 = 1$ and $n_2 = 5$. The $n_1$ orbit does not take any arguments and so does not contribute any degrees of freedom. Each $n_2$ orbit takes a single parameter, $\alpha$, and so contributes one degree of freedom for a grand total of five.
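To make the preceding definitions concrete, here is a minimal one-dimensional sketch (our own, assuming Python with numpy) of the truncation measure $\xi(\phi)$ of \autoref{eq:trunc} on the segment just discussed, using normalised Legendre polynomials for which $c = \sqrt{2}$:
\begin{verbatim}
# xi(phi) for a rule on [-1, 1] with orthonormal Legendre modes.
import numpy as np
from numpy.polynomial import legendre

def xi(points, weights, phi):
    c, total = np.sqrt(2.0), 0.0
    for n in range(phi + 1):
        coef = np.zeros(n + 1)
        coef[n] = 1.0                              # select P_n
        norm = np.sqrt((2*n + 1) / 2.0)            # orthonormalisation
        s = norm * (weights @ legendre.legval(points, coef))
        total += (s - (c if n == 0 else 0.0))**2
    return np.sqrt(total)

x, w = legendre.leggauss(3)
print(xi(x, w, 5))  # ~1e-16: the 3-point rule has strength 5
print(xi(x, w, 6))  # O(1): degree-6 modes are not integrated exactly
\end{verbatim}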
These five degrees of freedom are fewer than half the number associated with the asymmetrical case. Hence, by parameterising the problem in terms of symmetry orbits it is possible to simultaneously reduce the number of degrees of freedom while guaranteeing a symmetric distribution of points. Symmetries also serve to reduce the number of basis polynomials that must be considered when computing $\xi(\phi)$. Consider the following two monomials \[ p_1(x, y) = x^iy^j, \qquad \text{and} \qquad p_2(x, y) = x^jy^i, \] defined inside of a square domain with vertices $(-1,-1)$ and $(1,1)$. We note that $p_1(x,y) = p_2(y,x)$. As this is a symmetry which is expressed by the domain it is clear that any symmetric quadrature rule capable of integrating $p_1$ is also capable of integrating $p_2$. Further, when the index $i$ is odd we have $p_1(x,y) = -p_1(-x,y)$. Similarly, when $j$ is odd we have $p_1(x,y) = -p_1(x, -y)$. In both cases it follows that the integral of $p_1$ is zero over the domain. More importantly, it also follows that \emph{any} symmetric set of points is also capable of obtaining this result. This is due to terms on the right hand side of \autoref{eq:quad} pairing up and cancelling out. A consequence of this is that not all of the equations in the system specified by \autoref{eq:quad} are independent. Having identified such polynomials for a given domain it is legitimate to exclude them from our definition of $\xi(\phi)$. Although this exclusion does change the value of $\xi(\phi)$ in the case of a non-zero truncation error, the effect is not significant. We shall refer to the set of basis polynomials which \emph{are} included as the \emph{objective basis}, and denote it by $\mathcal{\tilde{P}}^\phi$. \subsection{Reference domains} In the paragraphs which follow we will take $\hat{P}^{(\alpha,\beta)}_i(x)$ to refer to a \emph{normalised} Jacobi polynomial as specified in \S 18.3 of \cite{olver2010nist}. In two dimensions we take the coordinates to be $\vec{x} = (x,y)$, and in three dimensions $\vec{x} = (x,y,z)$. \paragraph{Triangle.} \begin{figure} \centering \begin{subfigure}[b]{.45\linewidth} \centering \includegraphics{figure0.pdf} \caption{Triangle.} \end{subfigure} \begin{subfigure}[b]{.45\linewidth} \centering \includegraphics{figure1.pdf} \caption{Quadrilateral.} \end{subfigure} \caption{\label{fig:2d-shapes}Reference domains in two dimensions.} \end{figure} Our reference triangle can be seen in \autoref{fig:2d-shapes} and has an area given by $\int_{-1}^{1}\int_{-1}^{-y}\mathrm{d}x\,\mathrm{d}y = 2$. A triangle has six symmetries: two rotations, three reflections, and the identity transformation. A simple means of realising these symmetries is to transform into barycentric coordinates \begin{equation} \bm{\lambda} = (\lambda_1,\lambda_2,\lambda_3)^T \quad 0 \leq \lambda_i \leq 1, \lambda_1 + \lambda_2 + \lambda_3 = 1, \end{equation} which are related to Cartesian coordinates via \begin{equation} \vec{x} = \begin{pmatrix} -1 & \hphantom{-}1 & -1\\ -1 & -1 & \hphantom{-}1\\ \end{pmatrix}\bm{\lambda}, \end{equation} where the columns of the matrix can be seen to be the vertices of our reference triangle. The utility of barycentric coordinates is that the symmetric counterparts to a point $\bm{\lambda}$ are given by its unique permutations.
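A small sketch of our own (assuming Python) makes this correspondence between unique permutations and orbit sizes explicit, using the vertex matrix above to map back to Cartesian coordinates:
\begin{verbatim}
# Expand a barycentric point into its symmetric Cartesian counterparts.
from itertools import permutations

V = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0)]  # triangle vertices

def expand(lam):
    pts = set(permutations(lam))              # unique permutations
    return [tuple(sum(l*v[d] for l, v in zip(p, V)) for d in range(2))
            for p in pts]

print(len(expand((1/3, 1/3, 1/3))))  # 1 point  (the centroid)
print(len(expand((0.2, 0.2, 0.6))))  # 3 points (two equal components)
print(len(expand((0.1, 0.3, 0.6))))  # 6 points (all distinct)
\end{verbatim}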
The number of unique permutations depends on the number of distinct components of $\bm{\lambda}$ and leads us to the following three symmetry orbits \begin{align*} \mathcal{S}_1 &= \big(\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3}\big), & \abs{\mathcal{S}_1} &= 1,\\ \mathcal{S}_2(\alpha) &= \Perm(\alpha,\alpha,1-2\alpha), & \abs{\mathcal{S}_2} &= 3,\\ \mathcal{S}_3(\alpha,\beta) &= \Perm(\alpha,\beta,1-\alpha-\beta), & \abs{\mathcal{S}_3} &= 6, \end{align*} where $\alpha$ and $\beta$ are suitably constrained as to ensure the validity of the resulting coordinates. It can be easily verified that the orthonormal polynomial basis inside of our reference triangle is given by \begin{equation} \psi_{ij}(\vec{x}) = \sqrt{2}\hat{P}_i(a)\hat{P}_j^{(2i+1,0)}(b)(1-b)^i, \end{equation} where $a = 2(1 + x)/(1 - y) - 1$, and $b = y$ with the objective basis being given by \begin{equation} \mathcal{\tilde{P}}^\phi = \bigl\{\psi_{ij}(\vec{x}) \mid 0 \leq i \leq \phi,\; i \leq j \leq \phi - i \bigr\}. \end{equation} In the asymptotic limit the cardinality of the objective basis is half that of the complete basis. However, the modes of this objective basis are known not to be completely independent. Several authors have investigated the derivation of an optimal quadrature basis on the triangle. Details can be found in the papers of Lyness \cite{lyness1975moderate} and Dunavant \cite{dunavant1985high}. \paragraph{Quadrilateral.} Our reference quadrilateral can be seen in \autoref{fig:2d-shapes}. The area is simply $\int_{-1}^{1}\int_{-1}^{1}\mathrm{d}x\,\mathrm{d}y = 4$. A square has eight symmetries: three rotations, four reflections and the identity transformation. Applying these symmetries to a point $(\alpha,\beta)$ with $0 \leq (\alpha,\beta) \leq 1$ will yield a set $\chi(\alpha,\beta)$ containing its counterparts. The cardinality of $\chi$ depends on whether any of the symmetries give rise to identical points. This can be seen to occur when either $\beta = \alpha$ or $\beta = 0$. Enumerating the possible combinations of the above conditions gives rise to the following four symmetry orbits \begin{align*} \mathcal{S}_1 &= (0, 0), & \abs{\mathcal{S}_1} &= 1,\\ \mathcal{S}_2(\alpha) &= \chi(\alpha, 0), & \abs{\mathcal{S}_2} &= 4,\\ \mathcal{S}_3(\alpha) &= \chi(\alpha, \alpha), & \abs{\mathcal{S}_3} &= 4,\\ \mathcal{S}_4(\alpha,\beta) &= \chi(\alpha, \beta), & \abs{\mathcal{S}_4} &= 8. \end{align*} Trivially, the orthonormal basis inside of our quadrilateral is given by \begin{equation} \psi_{ij}(\vec{x}) = \hat{P}_i(a)\hat{P}_j(b), \end{equation} where $a = x$, and $b = y$. The objective basis is found to be \begin{equation} \mathcal{\tilde{P}}^\phi = \bigl\{\psi_{ij}(\vec{x}) \mid 0 \leq i \leq \phi,\; i \leq j \leq \phi - i,\; (i, j) \text{ even} \bigr\}, \end{equation} with a cardinality one eighth that of the complete basis. \begin{figure} \centering \begin{subfigure}[b]{.49\linewidth} \centering \includegraphics{figure2.pdf} \caption{Tetrahedron.} \end{subfigure} \begin{subfigure}[b]{.49\linewidth} \centering \includegraphics{figure3.pdf} \caption{Prism.} \end{subfigure}\vskip12pt \begin{subfigure}[b]{.49\linewidth} \centering \includegraphics{figure4.pdf} \caption{Pyramid.} \end{subfigure} \begin{subfigure}[b]{.49\linewidth} \centering \includegraphics{figure5.pdf} \caption{Hexahedron.} \end{subfigure} \caption{\label{fig:3d-shapes}Reference domains in three dimensions.} \end{figure} \paragraph{Tetrahedron.} Our reference tetrahedron is a right-tetrahedron as depicted in \autoref{fig:3d-shapes}.
Integrating up the volume we find $\int_{-1}^{1}\int_{-1}^{-z}\int_{-1}^{-1-y-z} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = 4/3$. A tetrahedron has a total of 24 symmetries. Once again it is convenient to work in terms of barycentric coordinates which are specified for a tetrahedron as \begin{equation} \bm{\lambda} = (\lambda_1,\lambda_2,\lambda_3,\lambda_4)^T \quad 0 \leq \lambda_i \leq 1, \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1, \end{equation} and related to Cartesian coordinates via \begin{equation} \vec{x} = \begin{pmatrix} -1 & \hphantom{-}1 & -1 & -1\\ -1 & -1 & \hphantom{-}1 & -1 \\ -1 & -1 & -1 & \hphantom{-}1 \end{pmatrix}\bm{\lambda}, \end{equation} where, as with the triangle, the columns of the matrix correspond to vertices of the reference tetrahedron. Similarly, the symmetric counterparts of $\bm{\lambda}$ are given by its unique permutations. This leads us to the following five symmetry orbits \begin{align*} \mathcal{S}_1 &= \big(\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{4}\big), & \abs{\mathcal{S}_1} &= 1,\\ \mathcal{S}_2(\alpha) &= \Perm(\alpha,\alpha,\alpha,1-3\alpha), & \abs{\mathcal{S}_2} &= 4,\\ \mathcal{S}_3(\alpha) &= \Perm\big(\alpha,\alpha,\tfrac{1}{2} - \alpha, \tfrac{1}{2} - \alpha\big), & \abs{\mathcal{S}_3} &= 6,\\ \mathcal{S}_4(\alpha, \beta) &= \Perm(\alpha,\alpha,\beta,1-2\alpha-\beta), & \abs{\mathcal{S}_4} &= 12,\\ \mathcal{S}_5(\alpha,\beta,\gamma) &= \Perm(\alpha,\beta,\gamma, 1-\alpha-\beta-\gamma), & \abs{\mathcal{S}_5} &= 24, \end{align*} where $\alpha$, $\beta$, and $\gamma$ are constrained to ensure that $0 \leq \lambda_i \leq 1$ and $\sum_i \lambda_i = 1$. With some manipulation it can be verified that the orthonormal polynomial basis inside of our reference tetrahedron is given by \begin{equation} \psi_{ijk}(\vec{x}) = \sqrt{8}\hat{P}_i(a)\hat{P}_j^{(2i+1,0)}(b) \hat{P}_k^{(2i + 2j + 2, 0)}(c)(1 - b)^i(1 - c)^{i + j}, \end{equation} where $a = -2(1 + x)/(y + z) - 1$, $b = 2(1 + y)/(1 - z)$, and $c = z$. The objective basis is given by \begin{equation} \mathcal{\tilde{P}}^\phi = \bigl\{\psi_{ijk}(\vec{x}) \mid 0 \leq i \leq \phi, i \leq j \leq \phi - i, j \leq k \leq \phi - i - j \bigr\}. \end{equation} \paragraph{Prism.} Extruding the reference triangle along the $z$-axis gives our reference prism of \autoref{fig:3d-shapes}. It follows that the volume is $\int_{-1}^{1}\int_{-1}^{1}\int_{-1}^{-y} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = 4$. There are a total of 12 symmetries. On account of the extrusion the most natural coordinate system is a combination of barycentric and Cartesian coordinates: $(\lambda_1, \lambda_2, \lambda_3, z)$. Let $\Perm_3$ generate all of the unique permutations of its first three arguments. Using this the six symmetry orbits of the prism can be expressed as \begin{align*} \mathcal{S}_1 &= (\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3},0), & \abs{\mathcal{S}_1} &= 1,\\ \mathcal{S}_2(\gamma) &= (\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3},\pm\gamma), & \abs{\mathcal{S}_2} &= 2,\\ \mathcal{S}_3(\alpha) &= \Perm_3(\alpha,\alpha,1-2\alpha,0), & \abs{\mathcal{S}_3} &= 3,\\ \mathcal{S}_4(\alpha,\gamma) &= \Perm_3(\alpha,\alpha,1-2\alpha,\pm\gamma), & \abs{\mathcal{S}_4} &= 6,\\ \mathcal{S}_5(\alpha,\beta) &= \Perm_3(\alpha,\beta,1-\alpha-\beta,0), & \abs{\mathcal{S}_5} &= 6,\\ \mathcal{S}_6(\alpha,\beta,\gamma) &= \Perm_3(\alpha,\beta,1-\alpha-\beta, \pm\gamma), & \abs{\mathcal{S}_6} &= 12, \end{align*} where the constraints on $\alpha$ and $\beta$ are identical to those in a triangle and $0 < \gamma \leq 1$.
Combining the orthonormal polynomial bases for a right-triangle and line segment yields the orthonormal prism basis \begin{equation} \psi_{ijk}(\vec{x}) = \sqrt{2}\hat{P}_i(a)\hat{P}_j^{(2i + 1, 0)}(b) \hat{P}_k(c)(1 - b)^i, \end{equation} where $a = 2(1 + x)/(1 - y) - 1$, $b = y$, and $c = z$. The objective basis is given by \begin{equation} \mathcal{\tilde{P}}^\phi = \bigl\{\psi_{ijk}(\vec{x}) \mid 0 \leq i \leq \phi,\; i \leq j \leq \phi - i,\; 0 \leq k \leq \phi - i - j,\; k \text{ even}\bigr\}. \end{equation} \paragraph{Pyramid.} Our reference pyramid can be seen in \autoref{fig:3d-shapes} with a volume determined by $\int_{-1}^{1}\int_{(z-1)/2}^{(1-z)/2}\int_{(z-1)/2}^{(1-z)/2} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = 8/3$. The symmetries are identical to those of a quadrilateral. Extending the notation employed for the quadrilateral we obtain the following symmetry orbits \begin{align*} \mathcal{S}_1(\gamma) &= (0, 0, \gamma), & \abs{\mathcal{S}_1} &= 1,\\ \mathcal{S}_2(\alpha,\gamma) &= \chi(\alpha, 0, \gamma), & \abs{\mathcal{S}_2} &= 4,\\ \mathcal{S}_3(\alpha,\gamma) &= \chi(\alpha, \alpha, \gamma), & \abs{\mathcal{S}_3} &= 4,\\ \mathcal{S}_4(\alpha,\beta,\gamma) &= \chi(\alpha, \beta, \gamma), & \abs{\mathcal{S}_4} &= 8, \end{align*} subject to the constraints that $0 < (\alpha,\beta) \leq (1 - \gamma)/2$ and $-1 \leq \gamma \leq 1$. Inside of the reference pyramid the orthonormal polynomial basis is found to be \begin{equation} \psi_{ijk}(\vec{x}) = 2\hat{P}_i(a)\hat{P}_j(b) \hat{P}_k^{(2i + 2j + 2, 0)}(c)(1 - c)^{i + j}, \end{equation} where $a = 2x/(1 - z)$, $b = 2y/(1 - z)$, and $c = z$. The objective basis is \begin{equation} \mathcal{\tilde{P}}^\phi = \bigl\{\psi_{ijk}(\vec{x}) \mid 0 \leq i \leq \phi,\; i \leq j \leq \phi - i,\; 0 \leq k \leq \phi - i - j,\; (i,j) \text{ even}\bigr\}. \end{equation} \paragraph{Hexahedron.} Our choice of reference hexahedron can be seen in \autoref{fig:3d-shapes}. The volume is, trivially, $\int_{-1}^{1}\int_{-1}^{1}\int_{-1}^{1} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = 8$. A hexahedron exhibits octahedral symmetry with a symmetry number of 48. The procedure for determining the orbits is similar to that used for the quadrilateral. We consider applying these symmetries to a point $(\alpha, \beta, \gamma)$ with $0 \leq (\alpha, \beta, \gamma) \leq 1$ and let the resulting set of points be given by $\Xi(\alpha, \beta, \gamma)$. When $\alpha$, $\beta$, and $\gamma$ are all distinct and greater than zero the set has a cardinality of 48, as expected. However, when one or more parameters are either identical to one another or equal to zero some symmetries give rise to equivalent points. This reduces the cardinality of the set. Enumerating the various combinations we obtain seven symmetry orbits \begin{align*} \mathcal{S}_1 &= \Xi(0, 0, 0), & \abs{\mathcal{S}_1} &= 1,\\ \mathcal{S}_2(\alpha) &= \Xi(\alpha, 0, 0), & \abs{\mathcal{S}_2} &= 6,\\ \mathcal{S}_3(\alpha) &= \Xi(\alpha, \alpha, \alpha), & \abs{\mathcal{S}_3} &= 8,\\ \mathcal{S}_4(\alpha) &= \Xi(\alpha, \alpha, 0), & \abs{\mathcal{S}_4} &= 12,\\ \mathcal{S}_5(\alpha, \beta) &= \Xi(\alpha, \beta, 0), & \abs{\mathcal{S}_5} &= 24,\\ \mathcal{S}_6(\alpha, \beta) &= \Xi(\alpha, \alpha, \beta), & \abs{\mathcal{S}_6} &= 24,\\ \mathcal{S}_7(\alpha, \beta, \gamma) &= \Xi(\alpha, \beta, \gamma), & \abs{\mathcal{S}_7} &= 48.
\end{align*} Trivially, the orthonormal basis inside of our reference hexahedron is given by \begin{equation} \psi_{ijk}(\vec{x}) = \hat{P}_i(a)\hat{P}_j(b)\hat{P}_k(c), \end{equation} where $a = x$, $b = y$, and $c = z$. The objective basis is \begin{equation} \mathcal{\tilde{P}}^\phi = \bigl\{\psi_{ijk}(\vec{x}) \mid 0 \leq i \leq \phi,\; i \leq j \leq \phi - i,\; j \leq k \leq \phi - i - j,\; (i, j, k) \text{ even} \bigr\}. \end{equation} \section{Methodology} \label{sec:meth} Our methodology for identifying symmetric quadrature rules is a refinement of that described by Witherden and Vincent \cite{witherden2013analysis} for triangles. This method is, in turn, a refinement of that of Zhang et al. \cite{zhang2009set}. To derive a quadrature rule four input parameters are required: the reference domain $\vec{\Omega}$, the number of quadrature points $N_p$, the target rule strength $\phi$, and a desired runtime $t$. The algorithm begins by computing all of the possible symmetric decompositions of $N_p$. The result is a set of vectors satisfying the relation \begin{equation} N_p = \sum^{N_s}_{j = 1} n_{ij}|\mathcal{S}_j|, \end{equation} where $N_s$ is the number of symmetry orbits associated with the domain $\vec{\Omega}$, and $n_{ij}$ is the number of orbits of type $j$ in the $i$th decomposition. Finding these involves solving the constrained linear Diophantine equation outlined in \autoref{sec:shapes}. It is possible for this equation to have no solutions. As an example we consider the case when $N_p = 44$ for a triangular domain. From the symmetry orbits we have \[ N_p = n_1|\mathcal{S}_1| + n_2|\mathcal{S}_2| + n_3|\mathcal{S}_3| = n_1 + 3n_2 + 6n_3, \] subject to the constraint that $n_1 \in \{0, 1\}$. This restricts $N_p$ to be either a multiple of three or one greater. Since forty-four is neither of these we find the equation to have no solutions. Therefore, we conclude that there can be no symmetric quadrature rules inside of a triangle with forty-four points. Given a decomposition we are interested in finding a set of orbital parameters and weights that minimise the error associated with integrating the objective basis on $\vec{\Omega}$. This is an example of a nonlinear least squares problem. A suitable method for solving such problems is the Levenberg-Marquardt algorithm (LMA). The LMA is an iterative procedure for finding a set of parameters that correspond to a local minimum of a set of functions. The minimisation process is not always successful and is dependent on an initial guess of the parameters. Within the context of quadrature rule derivation minimisation can be regarded as successful if $\xi(\phi) \sim \epsilon$ where $\epsilon$ represents machine precision. Let us denote the number of parameters associated with symmetry orbit $\mathcal{S}_j$ as $\big\llbracket\mathcal{S}_j\big\rrbracket$. Using this we can express the total number of degrees of freedom associated with decomposition $i$ as \begin{equation} \sum^{N_s}_{j = 1} \big\{n_{ij}\big\llbracket\mathcal{S}_j\big\rrbracket + n_{ij}\big\}, \end{equation} with the second term accounting for the presence of one quadrature weight associated with each symmetry orbit. From the list of orbits given in \autoref{sec:shapes} we expect the weights to contribute approximately one third of the degrees of freedom. This is not an insignificant fraction. One way of eliminating the weights is to treat them as dependent variables.
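As an aside, the decomposition step is straightforward to mechanise. The following sketch (our own, assuming Python; it is not part of polyquad) reproduces the forty-four point conclusion above:
\begin{verbatim}
# Enumerate symmetric decompositions N_p = n1 + 3 n2 + 6 n3 for the
# triangle, with n1 in {0, 1} to ensure uniqueness of the centroid orbit.
def tri_decompositions(n_p):
    return [(n1, n2, (n_p - n1 - 3*n2) // 6)
            for n1 in (0, 1)
            for n2 in range((n_p - n1) // 3 + 1)
            if (n_p - n1 - 3*n2) % 6 == 0]

print(tri_decompositions(44))  # [] -- no symmetric 44-point arrangement
print(tri_decompositions(12))  # [(0, 0, 2), (0, 2, 1), (0, 4, 0)]
\end{verbatim}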
Returning to the elimination of the weights: when the points are prescribed, the right hand side of \autoref{eq:quad} becomes linear with respect to the unknowns---the weights. In general, however, the number of weights will be different from the number of polynomials in the objective basis. It is therefore necessary to obtain a least squares solution to the system. Linear least squares problems can be solved directly through a variety of techniques. Perhaps the most robust numerical scheme is that of singular value decomposition (SVD). Thus, at the cost of solving a small linear least squares problem at each LMA iteration we are able to reduce the number of free parameters to \begin{equation} \sum^{N_s}_{j = 1} n_{ij}\big\llbracket\mathcal{S}_j\big\rrbracket. \end{equation} Such a modification has been found to greatly reduce the number of iterations required for convergence. This reduction more than offsets the marginally greater computational cost associated with each iteration. Previous works \cite{zhang2009set, kubatko2013pri, taylor2005several} have emphasised the importance of picking a `good' initial guess to seed the LMA. To this end several methodologies for seeding orbital parameters have been proposed. The degree of complexity associated with such strategies is not insignificant. Further, it is necessary to devise a separate strategy for each symmetry orbit. Our experience, however, suggests that the choice of decomposition is far more important than the initial guess in determining whether minimisation will be successful. For larger values of $N_p$ we note that many decompositions---especially those for prisms and pyramids---are pathological. As an example of this we consider searching for an $N_p = 80$ point rule inside of a prism where there are $2\,380$ distinct symmetrical decompositions. One such decomposition is $N_p = 40\abs{\mathcal{S}_2}$ where all points lie in a single column down the middle of the prism. Since there is no variation in either $x$ or $y$ such an arrangement cannot integrate $x^2$ exactly, and so it is not possible to obtain a rule of strength $\phi \geq 2$. Hence, the decomposition can be dismissed without further consideration. A presentation of our method in pseudocode can be seen in \autoref{alg:meth}. When the objective basis functions in $\mathcal{\tilde{P}}^\phi$ are orthonormal \autoref{eq:obint} states that the integral of all non-constant modes is zero. We can exploit this to simplify the computation of $b_i$. The purpose of \textsc{ClampOrbit} is to enforce the constraints associated with a given orbit to ensure that all points remain inside of the domain.
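To illustrate the weight-determination step, here is a minimal sketch (our own, assuming Python with numpy; the names are ours) mirroring the weight solve in \textsc{RuleResid} below: one weight per orbit, obtained as the least squares solution of $A\omega = b$ via an SVD-backed solver.
\begin{verbatim}
# Solve for the weights given fixed (expanded) orbits; return weights
# and the residual A w - b.
import numpy as np

def rule_residual(orbits, basis, exact):
    A = np.array([[sum(p(x) for x in orb) for orb in orbits]
                  for p in basis])
    w, *_ = np.linalg.lstsq(A, exact, rcond=None)  # SVD-based solve
    return w, A @ w - exact

# e.g. the two-point Gauss rule on [-1, 1] seen as a single S_2 orbit:
orbits = [[-3**-0.5, 3**-0.5]]
basis  = [lambda x: 1.0, lambda x: x, lambda x: x*x]
exact  = np.array([2.0, 0.0, 2.0/3.0])
w, r = rule_residual(orbits, basis, exact)   # w ~ [1.0], r ~ 0
\end{verbatim}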
\begin{algorithm} \caption{\label{alg:meth}Procedure for generating symmetric quadrature rules of strength $\phi$ with $N_p$ points inside of a domain.} \begin{algorithmic}[1] \Procedure{FindRules}{$N_p,\phi,t$} \ForAll{decompositions of $N_p$} \State $t_0 \gets \Call{CurrentTime}{}$ \Repeat \State $\mathcal{R} \gets \Call{SeedOrbits}{}$ \Comment{Initial guess of points} \State $\xi \gets \Call{LMA}{\textsc{RuleResid}, \mathcal{R}}$ \If{$\xi \sim \epsilon$} \Comment{If minimisation was successful} \State \textbf{save} $\mathcal{R}$ \EndIf \Until{$\Call{CurrentTime}{} - t_0 > t$} \EndFor \EndProcedure \Statex \Function{RuleResid}{$\mathcal{R}$} \ForAll{$p_i \in \mathcal{\tilde{P}}^\phi$} \Comment{For each basis function} \State $b_i \gets \int_{\vec{\Omega}} p_i(\vec{x})\, \mathrm{d}\vec{x}$ \ForAll{$r_j \in \mathcal{R}$} \Comment{For each orbit} \State $r_j \gets \Call{ClampOrbit}{r_j}$ \Comment{Ensure orbital parameters are valid} \State $A_{ij} \gets 0$ \ForAll{$\vec{x}_k \in \Call{ExpandOrbit}{r_j}$} \State $A_{ij} \gets A_{ij} + p_i(\vec{x}_k)$ \EndFor \EndFor \EndFor \State $\omega \gets b / A$ \Comment{Use SVD to determine the weights} \State \textbf{return} $A\omega - b$ \Comment{Compute the residual} \EndFunction \end{algorithmic} \end{algorithm} \section{Implementation} \label{sec:impl} We have implemented the algorithms outlined above in a C++11 program called \emph{polyquad}. The program is built on top of the Eigen template library \cite{eigenweb} and is parallelised using MPI. It is capable of searching for quadrature rules on triangles, quadrilaterals, tetrahedra, prisms, pyramids, and hexahedra. All rules are guaranteed to be symmetric and to have all points inside of the domain. Polyquad can also, optionally, filter out rules possessing negative weights. Further, functionality exists, courtesy of MPFR \cite{fousse2007mpfr}, for refining rules to an arbitrary degree of numerical precision and for evaluating the truncation error of a ruleset. The source code for polyquad is available under the terms of the GNU General Public License v3.0 and can be downloaded from \url{https://github.com/vincentlab/Polyquad}. \section{Rules} \label{sec:rules} Using polyquad we have derived a set of quadrature rules for each of the reference domains in \autoref{sec:shapes}. All rules are completely symmetric, possess only positive weights, and have all points inside of the domain. It is customary in the literature to refer to quadratures with the last two attributes as being ``PI'' rules. As polyquad attempts to find an ensemble of rules it is necessary to have a means of differentiating between otherwise equivalent formulae. In constructing this collection the truncation term $\xi(\phi + 1)$ was employed, with the rule possessing the smallest such term being chosen. The number of points $N_p$ required for a rule of strength $\phi$ can be seen in \autoref{tab:rules}. The rules themselves are provided as electronic supplementary material and have been refined to 38 decimal places. \begin{table} \centering \caption{\label{tab:rules}Number of points $N_p$ required for a fully symmetric quadrature rule with positive weights of strength $\phi$ inside of the six reference domains.
Rules with underlines represent improvements over those found in the literature (see text).} \begin{tabular}{rrrrrrr} \toprule & \multicolumn{6}{c}{$N_p$} \\ \cmidrule{2-7} $\phi$ & Tri & Quad & Tet & Pri & Pyr & Hex\\ \midrule 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 3 & 4 & 4 & 5 & 5 & 6 \\ 3 & 6 & 4 & 8 & 8 & 6 & 6 \\ 4 & 6 & 8 & 14 & 11 & 10 & 14 \\ 5 & 7 & 8 & 14 & 16 & \underline{15} & 14 \\ 6 & 12 & 12 & 24 & \underline{28} & \underline{24} & \underline{34} \\ 7 & 15 & 12 & \underline{35} & \underline{35} & \underline{31} & \underline{34} \\ 8 & 16 & \underline{20} & 46 & \underline{46} & \underline{47} & \underline{58} \\ 9 & 19 & \underline{20} & \underline{59} & \underline{60} & \underline{62} & \underline{58} \\ 10 & 25 & \underline{28} & 81 & \underline{85} & \underline{83} & \underline{90} \\ 11 & 28 & \underline{28} & & & & \\ 12 & 33 & \underline{37} & & & & \\ 13 & 37 & \underline{37} & & & & \\ 14 & 42 & \underline{48} & & & & \\ 15 & 49 & \underline{48} & & & & \\ 16 & 55 & 60 & & & & \\ 17 & 60 & 60 & & & & \\ 18 & 67 & \underline{72} & & & & \\ 19 & 73 & \underline{72} & & & & \\ 20 & 79 & \underline{85} & & & & \\ \bottomrule \end{tabular} \end{table} From the table we note that several of the rules appear to improve over those in the literature. We consider a rule to be an improvement when it either requires fewer points than any symmetric rule described in the literature or when existing symmetric rules of this strength are not PI. We note that many of the rules presented by Dunavant for quadrilaterals \cite{dunavant1985economical} and hexahedra \cite{dunavant1986efficient} possess either negative weights or have points outside of the domain. Using polyquad in quadrilaterals we were able to identify PI rules with point counts less than or equal to those of Dunavant at strengths $\phi = 8,9,18,19,20$. In tetrahedra we were able to reduce the number of points required for $\phi = 7$ and $\phi = 9$ by one and two, respectively, compared with Zhang et al. \cite{zhang2009set}. Furthermore, in prisms and pyramids rules requiring significantly fewer points than those in the literature were identified. As an example, the $\phi = 9$ rule of \cite{kubatko2013pri} inside of a prism requires 71 points compared with just 60 for the rule identified by polyquad. Additionally, both of the $\phi = 10$ rules for prisms and pyramids appear to be new. \section{Conclusions} \label{sec:conclusions} We have presented a methodology for identifying symmetric quadrature rules on a variety of domains in two and three dimensions. Our scheme does not require any manual intervention and is not restricted to any particular topological configuration inside of a domain. Additionally, it is also capable of generating an ensemble of rules. We have further provided an open source implementation of our method in C++11, and used it to generate a complete set of symmetric quadrature rules that are suitable for use in finite element solvers. All rules possess purely positive weights and have all points inside the domain. Many of the rules appear to be new, and an improvement over those tabulated in the literature. \section*{Acknowledgements} The authors would like to thank the Engineering and Physical Sciences Research Council for their support via a Doctoral Training Grant and an Early Career Fellowship (EP/K027379/1).
{ "timestamp": "2014-09-08T02:11:25", "yymm": "1409", "arxiv_id": "1409.1865", "language": "en", "url": "https://arxiv.org/abs/1409.1865", "abstract": "In this paper we describe a methodology for the identification of symmetric quadrature rules inside of quadrilaterals, triangles, tetrahedra, prisms, pyramids, and hexahedra. The methodology is free from manual intervention and is capable of identifying an ensemble of rules with a given strength and a given number of points. We also present polyquad which is an implementation of our methodology. Using polyquad we proceed to derive a complete set of symmetric rules on the aforementioned domains. All rules possess purely positive weights and have all points inside the domain. Many of the rules appear to be new, and an improvement over those tabulated in the literature.", "subjects": "Numerical Analysis (math.NA)", "title": "On the Identification of Symmetric Quadrature Rules for Finite Element Methods", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759660443166, "lm_q2_score": 0.7217432122827967, "lm_q1q2_score": 0.7079405705838164 }
https://arxiv.org/abs/1707.02884
A homogenization theorem for Langevin systems with an application to Hamiltonian dynamics
This paper studies homogenization of stochastic differential systems. The standard example of this phenomenon is the small mass limit of Hamiltonian systems. We consider this case first from the heuristic point of view, stressing the role of detailed balance and presenting the heuristics based on a multiscale expansion. This is used to propose a physical interpretation of recent results by the authors, as well as to motivate a new theorem proven here. Its main content is a sufficient condition, expressed in terms of solvability of an associated partial differential equation ("the cell problem"), under which the homogenization limit of an SDE is calculated explicitly. The general theorem is applied to a class of systems, satisfying a generalized detailed balance condition with a position-dependent temperature.
\section{Introduction and background} This paper studies the small mass limit of a general class of Langevin equations. Langevin dynamics is defined in terms of canonical variables---positions and momenta---by adding damping and (It\^o) noise terms to Hamiltonian equations. In the limit when the mass, or masses, of the system's particles go to zero, the momenta homogenize, and one obtains a limiting equation for the position variables only. This is a great simplification which often allows one to see the nature of the dynamics more clearly. If the damping matrix of the original system depends on its state, a {\it noise-induced drift} arises in the limit. We analyze and interpret this term from several points of view. The paper consists of four parts. The first part contains general background on stochastic differential equations. In the second part, the small-mass limit of Langevin equations is studied using a multiscale expansion. This method requires making additional assumptions, but it leads to correct results in all cases in which rigorous proofs are known. The third part presents a new rigorous result about homogenization of a general class of singularly perturbed SDEs. The final part applies this result to prove a theorem about the homogenization of a large class of Langevin systems. \subsection{Stochastic differential equations} Let us start with some general background. The material presented here is not new, and its various versions can be found in many textbooks; see for example \cite{khasminskii2011stochastic}. We do not strive for complete precision or a listing of all necessary assumptions in our discussions here. The aim of the first two sections is to motivate and facilitate reading the remainder of the paper. Detailed technical considerations will be reserved for Sections 3 and 4, where we present our new results. Consider the stochastic differential equation \begin{align}\label{SDE} dy_t = b(y_t)\,dt + \sigma(y_t)\,dW_t. \end{align} The process $y_t$ takes values in $\mathbb{R}^m$, $b$ is a vector field in $\mathbb{R}^m$, $W$ is an $n$-dimensional Wiener process and $\sigma$ is an $m \times n$-matrix-valued function. Define an $m \times m$ matrix $\Sigma$ by $\Sigma = \sigma\sigma^T$. The equation \req{SDE} defines a Markov process with the infinitesimal operator \begin{align}\label{generator} (Lf)(y) = {1 \over 2}\Sigma_{ij}\partial_i\partial_j f + b_i\partial_i f \end{align} where we are writing $\partial_i$ for $\partial_{y_i}$ and suppressing the dependence of $\Sigma$, $b_i$ and $f$ on $y$ from the notation. Summation over repeated indices is implied. We assume that this process has a unique stationary probability measure with a $C^2$-density $h(y)$. Under this assumption $h$ satisfies the equation \begin{align} \label{stationaryFP} L^*h = 0 \end{align} where $L^*$ denotes the formal adjoint of $L$, \begin{align} \label{adjoint_generator} L^*f = {1 \over 2}\partial_i\partial_j\left(\Sigma_{ij}f\right) - \partial_i\left(b_i f\right). \end{align} That is, we have \begin{align}\label{stationaryFP_explicit} \partial_i\left({1 \over 2}\partial_j\left(\Sigma_{ij}h\right) - b_ih\right) = 0. \end{align} Consider the special case when $h$ solves the equation \begin{align}\label{inner} {1 \over 2}\partial_j\left(\Sigma_{ij}h\right) - b_ih = 0. \end{align} In this case the operator $L$ is symmetric on the space $L^2\left(\mathbb{R}^m, h\right)\equiv L^2_h$ of square-integrable functions with the weight $h$, as the following calculation shows.
Using the product formula, we have \begin{equation} \int\left(Lf\right)gh = \int fL^*\left(gh\right) = \int f\partial_i\left({1 \over 2}\partial_j\left(\Sigma_{ij}gh\right) - b_igh\right). \end{equation} The expression in parentheses equals \begin{equation} {1 \over 2}\partial_j g \left(\Sigma_{ij}h\right) + g{1 \over 2}\partial_j\left(\Sigma_{ij}h\right) - gb_ih = {1 \over 2}\partial_j g \left(\Sigma_{ij}h\right) \end{equation} by \req{inner}. Applying the product formula again, we obtain \begin{equation} \int\left(Lf\right)gh = \int f\left({1 \over 2}\Sigma_{ij}\left(\partial_i\partial_jg\right)h + {1 \over 2}\partial_i\left(\Sigma_{ij}h\right)\partial_jg\right) \end{equation} which, by another application of \req{inner}, equals \begin{equation} \int f\left({1 \over 2}\Sigma_{ij}\partial_i\partial_jg + b_j\partial_jg\right)h = \int f\left(Lg\right)h. \end{equation} Here is a more complete discussion: \subsection{Detailed balance condition and symmetry of the infinitesimal operator} We have \begin{align} \int\left(Lf\right)gh &= \int\left(\frac{1}{2}\Sigma_{ij}\partial_i\partial_jf + b_i\partial_if\right)gh \\ &=-\frac{1}{2}\int\partial_if\partial_j\left(\Sigma_{ij}gh\right) + \int\left(\partial_if\right)b_igh \notag\\ &= -\frac{1}{2}\int\partial_if\left[\partial_j\left(\Sigma_{ij}h\right)g + \Sigma_{ij}h\partial_jg\right] + \int\left(\partial_if\right)b_igh \notag\\ &=\int\partial_if\left[-\frac{1}{2}\partial_j\left(\Sigma_{ij}h\right) + b_ih\right]g - \frac{1}{2}\int \Sigma_{ij}\partial_if\partial_jgh. \notag \end{align} Interchanging the roles of $f$ and $g$ and canceling the term symmetric in $f$ and $g$, we obtain \begin{equation} \int\left(Lf\right)gh - \int f\left(Lg\right)h = \int\left[\left(\partial_if\right)g - \left(\partial_ig\right)f\right]\left(-{1 \over 2}\partial_j\left(\Sigma_{ij}h\right) + b_ih\right). \end{equation} If $h$ is a solution to the equation \begin{equation} -{1 \over 2}\partial_j\left(\Sigma_{ij}h\right) + b_ih = 0 \end{equation} then the above expression is zero, showing that the operator $L$ is symmetric on the space $L^2_h$. Conversely, for this symmetry to hold, the $\mathbb{R}^m$-valued function $-{1 \over 2}\partial_j\left(\Sigma_{ij}h\right) + b_ih$ has to be orthogonal to all elements of the space $L^2$ (of functions with values in $\mathbb{R}^m$) of the form $\left(\partial_if\right)g - \left(\partial_ig\right)f$. It is not hard to prove that every $C^1$ function with this property must vanish, and thus, that $\frac{1}{2}\partial_j\left(\Sigma_{ij}h\right) - b_ih = 0$. Here is a sketch of a proof: suppose $\phi$ is $C^1$ and orthogonal to all such functions. That is, for every $f$ and $g$, \begin{equation} \int \left[\phi_i \left(\partial_i f\right)g - \phi_i\left(\partial_i g\right)f\right] = 0. \end{equation} Integrating the first term by parts we obtain \begin{equation} \int\left[-\left(\partial_i\phi_i\right)g - 2\phi_i\partial_i g\right]f = 0. \end{equation} Since this holds for all $f$, it follows that \begin{equation} -\left(\partial_i\phi_i\right)g - 2\phi_i\partial_i g = 0 \end{equation} and thus also \begin{equation} \int\left[-\left(\partial_i\phi_i\right)g - 2\phi_i\partial_i g\right] = 0. \end{equation} Integrating the second term by parts, we get \begin{equation} \int\left(\partial_i\phi_i\right)g = 0 \end{equation} and, since this is true for every $g$, it follows that $\partial_i\phi_i$ vanishes. We thus have, for every $g$, \begin{equation} \phi_i \partial_ig = 0 \end{equation} and this implies that $\phi$ vanishes.
In summary: {\bf Proposition:} If the density $h$ of the stationary probability measure is $C^2$, then $h$ satisfies the stationary Fokker-Planck equation \begin{equation} \partial_i\left[-{1 \over 2}\partial_j\left(\Sigma_{ij}h\right) + b_ih\right] = 0. \end{equation} The stronger statement \begin{equation} -{1 \over 2}\partial_j\left(\Sigma_{ij}h\right) + b_ih = 0 \end{equation} is equivalent to symmetry of the operator $L$ on the space $L^2_h$. We are now going to relate the above symmetry statement to the detailed balance property of the stationary dynamics. First, it is clearly equivalent to the analogous property for the backward Kolmogorov semigroup: \begin{equation} \int\left(P_tf\right)gh = \int f\left(P_tg\right)h \end{equation} since $P_t = \exp\left(tL\right)$. Now, $\left(P_tf\right)(x)$ is the expected value of $f(x_t)$ for the process starting at $x$ at time $0$. In particular, for $f = \delta_y$, we obtain $P_tf(x) = p_t(x,y)$---the density of the transition probability from $x$ to $y$ in time $t$. Using the above symmetry of $P_t$ with $f = \delta_y$ and $g = \delta_x$, we obtain the detailed balance condition: \begin{equation} h(x) p_t(x,y) = h(y) p_t(y, x) \end{equation} which, conversely, implies the symmetry statement for arbitrary $f$ and $g$. \subsection{The case of a linear drift and constant noise} When both $b(y)$ and $\sigma(y)$ are constant or depend linearly on $y$, \req{SDE} can be solved explicitly \cite{arnold} and an explicit formula for its stationary distribution can be found, when it exists. We consider the special case $b(y) = - \gamma y$ and $\sigma(y) \equiv \sigma$, where $\gamma$ and $\sigma$ are constant matrices and the eigenvalues of $\gamma$ have positive real parts. The stationary Fokker-Planck equation, \req{stationaryFP_explicit}, reads \begin{equation} \nabla \cdot \left({1 \over 2}\Sigma\nabla h + (\gamma y)h\right) = 0 \end{equation} where $\Sigma = \sigma\sigma^T$. It has a Gaussian solution \begin{equation} h(y) = \left(2\pi\right)^{-{m \over 2}}\left(\det M\right)^{-{1 \over 2}}\exp\left(-{1 \over 2}\left(M^{-1}y, y\right)\right) \end{equation} with the covariance matrix $M$ which is the unique solution of the Lyapunov equation \begin{equation} \gamma M + M\gamma^T = \Sigma \end{equation} and can be written as (see, for example, \cite{ortega2013matrix}) \begin{equation} M = \int_0^{\infty}\exp\left(-t\gamma\right)\Sigma\exp\left(-t\gamma^T\right)dt. \end{equation} This result can be verified by a direct calculation. We emphasize that it holds without assuming the detailed balance condition. The latter condition is satisfied if and only if the above $h$ solves the equation \begin{equation} {1 \over 2}\Sigma\nabla h + (\gamma y)h = 0 \end{equation} which is equivalent to $M = {1 \over 2}\gamma^{-1}\Sigma$ or, in terms of the coefficients of the system, to \begin{align}\label{condition_forDB} \Sigma\gamma^T = \gamma \Sigma. \end{align} To see the physical significance of this condition, let us go back to the general case and write (adapting the discussion in \cite{Zwanzig} to our notation) \begin{equation} \gamma = {1 \over 2}\Sigma M^{-1} - i\Omega. \end{equation} $\Omega$ represents the ``oscillatory degrees of freedom'' of the diffusive system. The above calculations show that the detailed balance condition is equivalent to $\Omega = 0$, in agreement with the physical intuition that there are no macroscopic currents in the stationary state.
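These relations are easy to check numerically. The following sketch (our own, assuming Python with numpy and scipy; the matrices are arbitrary examples) solves the Lyapunov equation and tests the equivalence between \req{condition_forDB} and $M = {1 \over 2}\gamma^{-1}\Sigma$:
\begin{verbatim}
# Solve gamma M + M gamma^T = Sigma and test the detailed balance
# criterion; the two printed booleans always agree.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

gamma = np.array([[2.0, 0.5], [0.0, 1.0]])   # eigenvalues 2, 1 > 0
sigma = np.array([[1.0, 0.0], [0.3, 0.8]])
Sigma = sigma @ sigma.T

M = solve_continuous_lyapunov(gamma, Sigma)  # gamma M + M gamma^T = Sigma
print(np.allclose(Sigma @ gamma.T, gamma @ Sigma))         # detailed balance?
print(np.allclose(M, 0.5 * np.linalg.inv(gamma) @ Sigma))  # M = g^{-1} S / 2?
\end{verbatim}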
\section{Small mass limit---a perturbative approach}\label{sec:perturb} We are now going to apply the general facts about Langevin equations to a model of a mechanical system interacting with a noisy environment. The dynamical variables of this system are positions and momenta, and, in general, the Langevin equations which describe its time evolution are not linear. However, when investigating the small mass limit of the system by a perturbative method, we will encounter equations closely related to those studied above. This will be explained later, when we interpret the limiting equations. Consider a mechanical system with the Hamiltonian $\mathcal{H}(q, p)$ where $q, p \in \mathbb{R}^n$. We want to study a small mass limit of this system, coupled to a damping force and the noise. Therefore, we introduce the variable $z = {p \over \sqrt{m}}$ and assume the Hamiltonian can be written $\mathcal{H}(q,p) = H(q,z)$ where $H$ is independent of $m$. We thus have \begin{align}\label{Langevin} dq_t &= {1 \over \sqrt{m}}\nabla_zH(q_t, z_t)\,dt \\ dz_t &= -{1 \over \sqrt{m}}\nabla_qH(q_t, z_t)\,dt - {1 \over m}\gamma(q_t)\nabla_zH(q_t, z_t)\,dt + {1 \over \sqrt{m}}\sigma(q_t)\,dW_t.\notag \end{align} Here $\gamma$ is $n \times n$-matrix-valued, $\sigma$ is $n \times k$-matrix-valued and $W$ is a $k$-dimensional Wiener process. We emphasize that $\sigma$ does not play here the same role that it played in our discussion of the general Langevin equation, since the noise term enters only the equation for $dz_t$. The number $k$ of the components of the driving noise does not have to be related to the dimension of the system in any particular way. The corresponding backward Kolmogorov equation for a function $\rho(q, z, t)$ is \begin{equation} \partial_t \rho = L\rho \end{equation} where the differential operator $L$ equals \begin{equation} L = {1 \over m}L_1 + {1 \over \sqrt{m}}L_2 \end{equation} with \begin{align} L_1 &= {1 \over 2}\Sigma\nabla_z \cdot \nabla_z - \gamma \nabla_zH \cdot \nabla_z \\ L_2 &= \nabla_zH \cdot \nabla_q - \nabla_qH \cdot \nabla_z \notag \end{align} where $\Sigma(q) = \sigma(q)\sigma(q)^T$. We represent the solution of the Kolmogorov equation as a formal series \begin{equation} \rho = \rho_0 + \sqrt{m}\rho_1 + m\rho_2 + \dots \end{equation} Equating the expressions proportional to $m^{-1}$, $m^{-{1 \over 2}}$ and $m^0$, we obtain the equations: \begin{align} L_1 \rho_0 &= 0, \\ L_1 \rho_1 &= -L_2\rho_0, \notag\\ \partial_t \rho_0 &= L_1\rho_2 + L_2\rho_1. \notag \end{align} To satisfy the first equation it is sufficient to choose $\rho_0$ which does not depend on $z$: \begin{equation} \rho_0 = \rho_0(q,t). \end{equation} If we now search for $\rho_1$ which is linear in $z$, the second equation simplifies to \begin{equation} \gamma \nabla_z H \cdot \nabla_z \rho_1 = \nabla_z H \cdot \nabla_q \rho_0 \end{equation} which has a solution \begin{equation} \rho_1(q,z) = \left(\gamma^{-1}\right)^T\nabla_q \rho_0\cdot z = \nabla_q\rho_0 \cdot \gamma^{-1}z. \end{equation} Writing the third equation as \begin{equation} \partial_t \rho_0 - L_2\rho_1 = L_1\rho_2 \end{equation} and applying the identity \begin{equation} Ran L_1 = \left(Ker L_1^*\right)^{\perp} \end{equation} to the space $L^2$ with respect to the $z$ variable, we see that $\partial_t \rho_0 - L_2\rho_1$ must be orthogonal in this space to any function $h$ in the null space of $L_1^*$.
We have \begin{align}\label{nullspace} L_1^*h = \nabla_z\cdot\left({1 \over 2}\Sigma\nabla_zh + \left(\gamma\nabla_zH\right)h\right) \end{align} where $\Sigma = \sigma\sigma^T$. It is impossible to continue the analysis without further simplifying assumptions. We are first going to study the case of a general $H$, assuming a form of the detailed balance condition in the variable $z$, at fixed $q$. {\bf Assumption 1:} for every $q$ there exists a nonnegative solution of the equation \begin{align}\label{conditionalDB} {1 \over 2}\Sigma\nabla_zh + \left(\gamma\nabla_zH\right)h = 0 \end{align} of finite $L^1(dz)$-norm. We can thus choose \begin{equation} \int h(q,z)\,dz = 1. \end{equation} We will say in this case that the system satisfies the {\it conditional detailed balance property} in the variable $z$. Since $\rho_0$ does not depend on $z$, the orthogonality condition can be written as \begin{equation} \partial_t\rho_0 = \int L_2\rho_1(q,z)h(q,z)\,dz. \end{equation} We have the following explicit formula for $L_2\rho_1$ (summation over repeated indices is implied): \begin{align} L_2\rho_1=& \partial_{z_i}H\left(\partial_{q_i}\partial_{q_j}\rho_0\right)\left(\gamma^{-1}\right)_{jk}z_k + \partial_{z_i}H\left(\partial_{q_j}\rho_0\right)\partial_{q_i}\left(\left(\gamma^{-1}\right)_{jk}\right)z_k\\ & - \partial_{q_i}H\left(\partial_{q_j}\rho_0\right)\left(\gamma^{-1}\right)_{ji}. \notag \end{align} To integrate it against $h(q,z)$, we will use the following consequence of \req{conditionalDB}: \begin{align}\label{averaging} &\int\left(\partial_{z_i}H\right)z_kh(q,z)\,dz = -{1 \over 2}\int\left(\gamma^{-1}\Sigma\nabla_zh\right)_iz_k\,dz = -{1 \over 2}\left(\gamma^{-1}\Sigma\right)_{ij}\int\left(\partial_{z_j}h\right)z_k\,dz\\ =& {1 \over 2}\left(\gamma^{-1}\Sigma\right)_{ij}\int h\delta_{jk}\,dz = {1 \over 2}\left(\gamma^{-1}\Sigma\right)_{ik}.\notag \end{align} The orthogonality condition is thus \begin{align}\partial_t \rho_0 =& -\left(\gamma^{-1}\right)_{ji}\left<\partial_{q_i}H\right>\partial_{q_j}\rho_0 + {1 \over 2}\left(\gamma^{-1}\Sigma\right)_{ik}\partial_{q_i}\left(\left(\gamma^{-1}\right)_{jk}\right)\partial_{q_j}\rho_0\\ &+ {1 \over 2}\left(\gamma^{-1}\Sigma\right)_{ik}\left(\gamma^{-1}\right)_{jk}\left(\partial_{q_i}\partial_{q_j}\rho_0\right).\notag \end{align} In this formula, which is more general than the detailed-balance case of the rigorous result of \cite{Hottovy}, $\left<\,\cdot\,\right>$ denotes the average (i.e. the integral over $z$ with the density $h(q,z)$). This notation is used only in the term in which the average has not been calculated explicitly. Passing from the Kolmogorov equation to the corresponding SDE, we obtain the effective Langevin equation in the $m \to 0$ limit: \begin{align}\label{effective} dq_t = -\gamma(q_t)^{-1}\left<\nabla_qH\right>(q_t)\,dt + S(q_t)\,dt + \gamma^{-1}(q_t)\sigma(q_t)\, dW_t \end{align} where the components of the noise-induced drift, $S(q)$, are given by \begin{equation} S_i(q) = {1 \over 2}\left(\gamma^{-1}\Sigma\right)_{jk}\partial_{q_j}\left(\left(\gamma^{-1}\right)_{ik}\right) \end{equation} and we have used \begin{equation} \gamma^{-1}\sigma\left(\gamma^{-1}\sigma\right)^T = \gamma^{-1}\Sigma\left(\gamma^{-1}\right)^T.
\end{equation} We are now going to interpret the limiting equation \req{effective}, using the stationary probability measure $h(q,z)\,dz$, as follows: from the original equations for $q_t$ and $z_t$ we obtain \begin{equation} dq_t = -\gamma(q_t)^{-1}\nabla_qH\,dt + \gamma(q_t)^{-1}\sigma(q_t)\,dW_t -\sqrt{m}\gamma(q_t)^{-1}\,dz_t. \end{equation} Integrating the last term by parts, we obtain \begin{equation} \sqrt{m}\left(\gamma^{-1}(q_t)\right)_{ij}\,dz_t^j = d\left(\sqrt{m}\left(\gamma_{ij}^{-1}(q_t)\right)z_t^j\right)-\sqrt{m}\,d\left(\left(\gamma^{-1}\right)_{ij}\right)z_t^j. \end{equation} We leave the first term out since, under fairly general natural assumptions, it is of order $m^{1 \over 2}$ \cite{Hottovy}. The second term equals \begin{equation} -\partial_{q_k}\left(\left(\gamma^{-1}\right)_{ij}\right)\left(\partial_{z_k}H\right)z_j\,dt. \end{equation} We substitute this into the equation for $dq_t$ and average, multiplying by $h(q,z)$ and integrating over $z$. The calculation is as in \req{averaging} and the result is thus the same as the equation obtained by the multiscale expansion \req{effective}. This provides the following heuristic physical interpretation of the perturbative result: the smaller $m$ is, the faster the variation of $z$ becomes, and in the limit $m \to 0$, $z$ homogenizes instantaneously, with $q$ changing only infinitesimally. Let us now discuss conditions under which one may expect our conditional detailed balance assumption to hold. As seen above, at fixed $q$ this assumption is equivalent to existence of a non-negative, integrable solution of the equation \begin{align}\label{condDB} {1 \over 2}\Sigma \nabla_zh + \gamma\left(\nabla_zH\right)h = 0. \end{align} This equation can be rewritten as \begin{equation} {\nabla_zh \over h} = -2\Sigma^{-1}\gamma\nabla_zH. \end{equation} The left-hand side equals $\nabla_z \log h$. Letting $B = -2\Sigma^{-1}\gamma$ to simplify notation, we see that a necessary condition for existence of a solution is that $B\nabla_zH$ be a gradient. This requires \begin{equation} \partial_{z_k}\left(b_{ij}\partial_{z_j}H\right) = \partial_{z_i}\left(b_{kj}\partial_{z_j}H\right) \end{equation} for all $i, k$, where $b_{ij}$ are matrix elements of $B$. Introducing the matrix $R = \left(r_{ij}\right)$ of second derivatives of $H$, \begin{equation} r_{ij} = \partial_{z_i}\partial_{z_j}H \end{equation} we see that solvability of \req{condDB} is equivalent to symmetry of the product $BR$: \begin{equation} BR = RB^T. \end{equation} For the system to satisfy the conditional detailed balance property, this relation has to be satisfied for all $q$ and $z$. When $H$ is a quadratic function of $z$, the matrix $R$ is constant. Even though in this case we will derive the limiting equation without assuming conditional detailed balance, let us remark that the above approach provides a method of determining when that condition holds, different from that used earlier. Namely, let \begin{equation} H(q,z) = V(q) + {1 \over 2}Q(q)z\cdot z \end{equation} where $Q(q)$ is a symmetric matrix. We then have $R = Q$ and the solvability condition becomes \begin{equation} BQ = QB^T. \end{equation} In a still more special---but the most frequently considered---case when $Q$ is a multiple of the identity, this reduces to \begin{equation} B = B^T \end{equation} which is easily seen to be equivalent to the relation \begin{equation} \gamma \Sigma = \Sigma\gamma^T. \end{equation} We have derived this condition earlier by a different argument; see \req{condition_forDB}.
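The symmetry condition is easy to test in concrete cases. The following NumPy sketch, with hypothetical matrices chosen purely for illustration, checks the condition $BQ = QB^T$, $B = -2\Sigma^{-1}\gamma$, for a quadratic kinetic energy:
\begin{verbatim}
# Illustration with hypothetical matrices: testing the solvability
# condition B R = R B^T, with B = -2 Sigma^{-1} gamma and R = Q, for a
# quadratic kinetic energy H = V(q) + (1/2) Q z . z.
import numpy as np

def satisfies_condition(gamma, sigma, Q):
    Sigma = sigma @ sigma.T
    B = -2.0 * np.linalg.solve(Sigma, gamma)   # B = -2 Sigma^{-1} gamma
    return np.allclose(B @ Q, Q @ B.T)

Q = np.eye(2)

# diagonal gamma and sigma: gamma Sigma = Sigma gamma^T, condition holds
gamma1, sigma1 = np.diag([1.0, 2.0]), np.diag([0.5, 1.5])
print(satisfies_condition(gamma1, sigma1, Q))   # True

# a non-symmetric gamma with the same sigma: condition typically fails
gamma2 = np.array([[1.0, 0.7], [0.0, 2.0]])
print(satisfies_condition(gamma2, sigma1, Q))   # False
\end{verbatim}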
If $\gamma$ is symmetric, the condition $\gamma \Sigma = \Sigma\gamma^T$ becomes the commutation relation \begin{equation} \gamma \Sigma = \Sigma\gamma. \end{equation} Note that if $\gamma \Sigma = \Sigma\gamma^T$, the solution of the Lyapunov equation \begin{equation} J\gamma^T + \gamma J = \Sigma \end{equation} is given by $J = {1 \over 2}\gamma^{-1}\Sigma$. In this case the linear Langevin equation in the $z$ variable, whose conditional equilibrium at fixed value of $q$ we are studying, has no ``oscillatory degrees of freedom'', as discussed earlier (see also \cite{Zwanzig}). In the case when $H$ is not a quadratic function of $z$, the matrix $BR(q,z)$ has to be symmetric for all $q$ and $z$, which means satisfying a continuum of conditions for every fixed $q$. It is interesting to ask whether there exist physically natural examples in which this happens, without each $B(q)$ being a multiple of the identity. We are not going to pursue this question here. In the case when $B(q)$ is a multiple of the identity, we can write \begin{equation} \Sigma = 2\beta(q)^{-1}\gamma \end{equation} with $\beta(q)^{-1} = k_BT(q)$ and call the scalar function $T(q)$ the {\it generalized temperature}. The limiting Kolmogorov equation then reads \begin{equation} \partial_t \rho_0 = -\left(\gamma^{-1}\right)_{ji}\left<\partial_{q_i}H\right>\partial_{q_j}\rho_0 +k_BT\partial_{q_k}\left(\gamma^{-1}\right)_{jk}\partial_{q_j}\rho_0+ k_BT\left(\gamma^{-1}\right)_{ij}\left(\partial_{q_i}\partial_{q_j}\rho_0\right) \end{equation} and the components of the noise-induced drift are thus \begin{equation} S_j(q) = k_BT\partial_{q_k}\left(\gamma^{-1}\right)_{jk}. \end{equation} The above applies in particular in the one-dimensional case, in which $\sigma(q)^2$ and $\gamma(q)$ are scalars and hence one is always a ($q$-dependent) multiple of the other: \begin{align} k_BT(q) = {\sigma(q)^2 \over 2\gamma(q)}. \end{align} The limiting Langevin equation is in this case \begin{equation} dq_t = - {\left<\nabla_qH\right> \over \gamma}\,dt - {1 \over 2}{\nabla_q \gamma \over \gamma^3}\sigma^2\,dt + {\sigma \over \gamma}\,dW_t. \end{equation} For a Hamiltonian equal to a sum of potential and quadratic kinetic energy, $H = V(q) + {z^2 \over 2}$, the first term equals ${F \over \gamma}\,dt$, where $F = -\nabla_qV$ is the force, in agreement with earlier results. The second situation in which the perturbative treatment of the original system can be carried out explicitly is the quadratic kinetic energy case, without assuming conditional detailed balance. \medskip {\bf Assumption 2}: $H = V(q) + {z^2 \over 2}$. If we follow the singular perturbation method used above, we again need to find the integral \req{averaging}, where $\partial_{z_i}H = z_i$. In this case we know the solution of $L_1^*h = 0$ explicitly: \begin{equation} h(q,z) = \left(2\pi\right)^{-{n \over 2}}\left(\det M\right)^{-{1 \over 2}}\exp\left(-{1 \over 2}M^{-1}z\cdot z\right), \end{equation} where the covariance matrix $M = M(q)$ solves the Lyapunov equation $\gamma M + M\gamma^T = \Sigma$ (it is the stationary covariance of the Ornstein--Uhlenbeck process in $z$ obtained by freezing $q$). The integral in \req{averaging} is thus the mean of $z_iz_k$ in the Gaussian distribution with the covariance $M = \left(m_{ik}\right)$, that is, $m_{ik}$. The second-order term in the Kolmogorov equation is thus $m_{ik}\left(\gamma^{-1}\right)_{jk}\partial_{q_i}\partial_{q_j}\rho_0$. The corresponding Langevin equation, which has been derived rigorously in \cite{Hottovy}, is in this case \begin{align} dq_t = -\gamma^{-1}(q_t)\nabla_q V(q_t)\,dt + S(q_t)\,dt + \gamma^{-1}(q_t)\sigma(q_t)\,dW_t.
\end{align} The homogenization heuristics proposed under Assumption 1 applies here as well: the limiting Langevin equation can be interpreted as a result of averaging over the conditional stationary distribution of the $z$ variable. A rigorous result corroborating this picture has recently been proven in \cite{clt}. \section{A rigorous homogenization theorem} We now develop a framework for the homogenization of Langevin equations that is able to make many of the heuristic results from the previous two sections rigorous. Our results will concern Hamiltonians of the form \begin{align}\label{H_form} H(t,x)=K(t,q,p-\psi(t,q))+V(t,q) \end{align} where $x=(q,p)\in\mathbb{R}^n\times \mathbb{R}^n$, $K = K(t,q,z)$ and $V= V(t,q)$ are $C^2$, $\mathbb{R}$-valued functions, $K$ is non-negative, and $\psi$ is a $C^2$, $\mathbb{R}^n$-valued function. The splitting of $H$ into $K$ and $V$ does not have to correspond physically to any notion of kinetic and potential energy, although we will use those terms for convenience. The splitting is not unique; it will be constrained further as we continue. We now define the family of scaled Hamiltonians, parameterized by $\epsilon>0$ (generalizing the above mass parameter): \begin{align} H^\epsilon(t,q,p)\equiv K^\epsilon(t,q,p)+V(t,q)\equiv K(t,q,(p-\psi(t,q))/\sqrt{\epsilon})+V(t,q). \end{align} Consider the following family of SDEs: \begin{align} dq^\epsilon_t=&\nabla_p H^\epsilon(t,x^\epsilon_t)dt,\label{Hamiltonian_SDE_q}\\ d p^\epsilon_t=&(-\gamma(t,x^\epsilon_t)\nabla_p H^\epsilon(t,x^\epsilon_t)-\nabla_q H^\epsilon(t,x^\epsilon_t)+F(t,x^\epsilon_t))dt+\sigma(t,x^\epsilon_t) dW_t,\label{Hamiltonian_SDE_p} \end{align} where $\gamma:[0,\infty)\times\mathbb{R}^{2n}\rightarrow\mathbb{R}^{n\times n}$ and $\sigma:[0,\infty)\times\mathbb{R}^{2n}\rightarrow\mathbb{R}^{n\times k}$ are continuous, $\gamma$ is positive definite, and $W_t$ is an $\mathbb{R}^k$-valued Brownian motion on a filtered probability space $(\Omega,\mathcal{F},\mathcal{F}_t,P)$ satisfying the usual conditions \cite{karatzas2014brownian}. Our objective in this section is to develop a method for investigating the behavior of $x^\epsilon_t$ in the limit $\epsilon\rightarrow 0^+$; more precisely, we wish to prove the existence of a limiting ``position'' process $q_t$ and derive a homogenized SDE that it satisfies. In fact, the method we develop is applicable to a more general class of SDEs that share certain properties with \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p}. In the following subsection, we discuss some prior results concerning \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p}. This will help motivate the assumptions made in the development of our general homogenization method, starting in Subsection \ref{sec:gen_homog}. \subsection{Summary of prior results} Let $x_t^\epsilon$ be a family of solutions to the SDE \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} with initial condition $x_0^\epsilon=(q_0^\epsilon,p_0^\epsilon)$. We assume that a solution exists for all $t\geq 0$ (i.e., there are no explosions). See Appendix B in \cite{BirrellHomogenization} for assumptions that guarantee this.
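To fix ideas, here is a toy numerical illustration (not used anywhere in the arguments below): we simulate \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} by the Euler--Maruyama scheme for the hypothetical one-dimensional choice $K=z^2/2$, $V(q)=q^2/2$, $\psi=0$, $\gamma=\sigma=1$, $F=0$. The quantity $\sup_{t\in[0,T]}E[|p_t^\epsilon|^2]$ scales like $\epsilon$, in accordance with the attraction estimates recalled below.
\begin{verbatim}
# Toy Euler-Maruyama illustration: for H^eps = q^2/2 + p^2/(2 eps)
# (psi = 0, gamma = sigma = 1, F = 0) we expect sup_t E[|p_t|^2] = O(eps).
import numpy as np

rng = np.random.default_rng(0)

def sup_second_moment(eps, n_paths=2000, T=1.0, dt=1e-4):
    q = np.zeros(n_paths)
    p = np.zeros(n_paths)          # start on the surface p = psi = 0
    sup_m2 = 0.0
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        q, p = q + (p / eps) * dt, p + (-p / eps - q) * dt + dW
        sup_m2 = max(sup_m2, np.mean(p**2))
    return sup_m2

for eps in (0.1, 0.05, 0.025):
    # stationary variance of p is eps/2, so the ratio is ~0.5 for each eps
    print(eps, sup_second_moment(eps) / eps)
\end{verbatim}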
Under Assumptions 1-3 in \cite{BirrellHomogenization} (repeated as Assumptions \ref{assump1}-\ref{assump3} in \ref{app:assump}), we showed that for any $T>0$, $p>0$, $0<\beta<p/2$ we have \begin{align}\label{results_summary} \sup_{t\in[0,T]}E\left[\|p_t^\epsilon-\psi(t,q^\epsilon_t)\|^p\right]=O(\epsilon^{p/2}) \text{ and } E\left[\sup_{t\in[0,T]}\|p_t^\epsilon-\psi(t,q^\epsilon_t)\|^p\right]=O(\epsilon^{\beta}) \end{align} as $\epsilon\rightarrow 0^+$, i.e., the point $(q,p)$ is attracted to the surface defined by $p=\psi(t,q)$. Adding Assumption 4 (Assumption \ref{assump4} in the appendix), we also showed that \begin{align}\label{q_eq} d(q_t^\epsilon)^i=&(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)(-\partial_t\psi_j(t,q_t^\epsilon)-\partial_{q^j}V(t,q_t^\epsilon)+F_j(t,x^\epsilon_t))dt\\ &+(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)\sigma_{j\rho}(t,x_t^\epsilon)dW^\rho_t-(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)\partial_{q^j}K(t,q_t^\epsilon,z_t^\epsilon)dt\notag\\ &+(z_t^\epsilon)_j\partial_{q^l}(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)\partial_{z_l}K(t,q_t^\epsilon,z_t^\epsilon)dt- d((\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)(u^\epsilon_t)_j)\notag\\ &+(u_t^\epsilon)_j\partial_t(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)dt,\notag \end{align} where $u_t^\epsilon\equiv p_t^\epsilon-\psi(t,q^\epsilon_t)$, $z_t^\epsilon\equiv u_t^\epsilon/\sqrt{\epsilon}$, and \begin{align}\label{tilde_gamma_def} \tilde\gamma_{ik}(t,q)\equiv\gamma_{ik}(t,q) +\partial_{q^k}\psi_i(t,q)-\partial_{q^i}\psi_k(t,q). \end{align} We define the components of $\tilde\gamma^{-1}$ such that \begin{align}\label{tilde_gamma_inv_def} (\tilde\gamma^{-1})^{ij}\tilde\gamma_{jk}=\delta^i_k, \end{align} i.e., thinking of them as linear maps, we lower the second index on $\tilde\gamma^{-1}$ and raise the first index on $\tilde\gamma$; for any $v_i$ we define $(\tilde\gamma^{-1}v)^i=(\tilde\gamma^{-1})^{ij}v_j$. Under the additional Assumptions 5-7 in \cite{BirrellHomogenization}, which include further restrictions on the form of the Hamiltonian, we were then able to show that $q_t^\epsilon$ converges in an $L^p$-norm as $\epsilon\rightarrow 0^+$ to the solution of a lower dimensional SDE, \begin{align}\label{limit_eq} dq_t=&\tilde \gamma^{-1}(t,q_t)(-\partial_t\psi(t,q_t)-\nabla_{q}V(t,q_t)+F(t,q_t,\psi(t,q_t)))dt+S(t,q_t)dt\notag\\ &+\tilde\gamma^{-1}(t,q_t)\sigma(t,q_t,\psi(t,q_t)) dW_t. \end{align} The {\em noise-induced drift} term, $S(t,q)$, that arises in the limit is the term of greatest interest here. Its form is given in Eq. (3.26) in \cite{BirrellHomogenization}. The homogenization technique used in \cite{BirrellHomogenization} to arrive at \req{limit_eq} relies heavily on the specific structural assumptions on the form of the Hamiltonian. Those assumptions cover a wide variety of important systems, such as a particle in an electromagnetic field or motion on a Riemannian manifold, but it is desirable to search for a more generally applicable homogenization method. In this paper, we develop a significantly more general technique, adapted from the methods presented in \cite{pavliotis2008multiscale}, that is capable of homogenizing terms of the form $G(t,q_t^\epsilon,(p_t^\epsilon-\psi(t,q_t^\epsilon))/\sqrt{\epsilon})dt$ for a general class of SDEs that satisfy the property \req{results_summary}, as well as of proving convergence of $q_t^\epsilon$ to the solution of a limiting, homogenized SDE.
In particular, it will be capable of homogenizing $q_t^\epsilon$ from the Hamiltonian system \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} under less restrictive assumptions on the form of the Hamiltonian than those made in \cite{BirrellHomogenization}. We emphasize that the convergence statements are proven in the strong sense; see Section \ref{sec:gen_homog}. \subsection{General homogenization framework}\label{sec:gen_homog} Here we describe our homogenization technique in a more general context than the Hamiltonian setting from the previous section. This method is related to the cell problem method from \cite{pavliotis2008multiscale}, but our proof applies to a larger class of SDEs and establishes $L^p$-convergence rather than weak convergence. We will denote an element of $\mathbb{R}^{n}\times\mathbb{R}^m$ by $x=(q,p)$, where we no longer require the $q$ and $p$ degrees of freedom to have the same dimensionality, though we still employ the convention of writing $q$ indices with superscripts and $p$ indices with subscripts. We let $W_t$ be an $\mathbb{R}^k$-valued Wiener process, $\psi:[0,\infty)\times\mathbb{R}^n\rightarrow\mathbb{R}^m$ be $C^2$ and $G_1,F_1:[0,\infty)\times\mathbb{R}^{n+m}\times\mathbb{R}^m\rightarrow\mathbb{R}^n$, $G_2,F_2:[0,\infty)\times\mathbb{R}^{n+m}\times\mathbb{R}^m\rightarrow\mathbb{R}^m$, $\sigma_1:[0,\infty)\times\mathbb{R}^{n+m}\rightarrow\mathbb{R}^{n\times k}$, and $\sigma_2:[0,\infty)\times\mathbb{R}^{n+m}\rightarrow\mathbb{R}^{m\times k}$ be continuous. With these definitions, we consider the following family of SDEs, depending on a parameter $\epsilon>0$: \begin{align} dq^\epsilon_t=&\left(\frac{1}{\sqrt{\epsilon}}G_1(t,x_t^\epsilon,z_t^\epsilon)+F_1(t,x_t^\epsilon,z_t^\epsilon)\right)dt+\sigma_1(t,x_t^\epsilon)dW_t,\label{gen_SDE1}\\ d p^\epsilon_t=&\left(\frac{1}{\sqrt{\epsilon}}G_2(t,x^\epsilon_t,z_t^\epsilon)+F_2(t,x^\epsilon_t,z_t^\epsilon)\right)dt+\sigma_2(t,x^\epsilon_t) dW_t,\label{gen_SDE2} \end{align} where we define $z_t^\epsilon=(p_t^\epsilon-\psi(t,q_t^\epsilon))/\sqrt{\epsilon}$. We will assume, in analogy with \req{results_summary}, that: \begin{assumption}\label{homog_assump1} For any $T>0$, $p>0$, $0<\beta<p/2$ we have \begin{align} \sup_{t\in[0,T]}E\left[\|p_t^\epsilon-\psi(t,q^\epsilon_t)\|^p\right]=O(\epsilon^{p/2}) \text{ and } E\left[\sup_{t\in[0,T]}\|p_t^\epsilon-\psi(t,q^\epsilon_t)\|^p\right]=O(\epsilon^{\beta}) \end{align} as $\epsilon\rightarrow 0^+$. \end{assumption} In words, we assume that the $p$ degrees of freedom are attracted to the values defined by $p=\psi(t,q)$. This is an appropriate setting to expect some form of homogenization, as it suggests that the dynamics in the limit $\epsilon\rightarrow 0^+$ can be characterized by fewer degrees of freedom---the $q$-variables. \subsubsection{Homogenization of integral processes} In this section we derive a method capable of homogenizing processes of the form \begin{equation}\label{M_t_def} M^\epsilon_t\equiv \int_0^tG(s,x_s^\epsilon,z_s^\epsilon)ds \end{equation} in the limit $\epsilon\rightarrow 0^+$. More specifically, our aim is to find conditions under which there exists some function, $S(t,q)$, such that \begin{align}\label{homog_goal} \int_0^tG(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^tS(s,q_s^\epsilon)ds\rightarrow 0 \end{align} in some norm as $\epsilon\rightarrow 0^+$, i.e., only the $q$-degrees of freedom are needed to characterize $M_t^\epsilon$ in the limit.
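For orientation, consider the simplest instance, under simplifying assumptions: in the one-dimensional quadratic setting of Section \ref{sec:perturb} (constant $\gamma, \sigma > 0$, $\psi = 0$, $K = z^2/2$), the process $z_t^\epsilon = p_t^\epsilon/\sqrt{\epsilon}$ is, to leading order, a fast Ornstein--Uhlenbeck process with relaxation time $\epsilon/\gamma$ and stationary variance $\sigma^2/(2\gamma)$. Thus, for $G(t,x,z) = z^2$, the averaging suggests \begin{equation} \int_0^t (z_s^\epsilon)^2\,ds \rightarrow {\sigma^2 \over 2\gamma}\,t \quad \text{as } \epsilon \rightarrow 0^+, \end{equation} i.e., $S(t,q) = \sigma^2/(2\gamma) = k_BT$ in \req{homog_goal}.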
We will call a family of processes $S(t,q_t^\epsilon)\,dt$ satisfying such a limit a {\em homogenization} of $G(t,x_t^\epsilon,z_t^\epsilon)\,dt$. The technique we develop will also be useful for proving existence of a limiting process $q_s$ (i.e., $q_s^\epsilon\rightarrow q_s$) and showing that \begin{align} \int_0^tG(s,x_s^\epsilon,z_s^\epsilon)ds\rightarrow \int_0^tS(s,q_s)ds \end{align} as $\epsilon\rightarrow 0^+$. We will consider this second question in Section \ref{sec:limit_eq}. Here, our focus is on \req{homog_goal}. As a starting point, let $\chi(t,x,z):[0,\infty)\times\mathbb{R}^{n+m}\times\mathbb{R}^m\rightarrow\mathbb{R}$ be $C^{1,2}$, where $C^{1,2}$ is defined as follows: \begin{itemize} \item If $\sigma_1\neq 0$ then we take this to mean $\chi$ is $C^1$ and, for each $t$, $\chi(t,x,z)$ is $C^2$ in $(x,z)$ with second derivatives continuous jointly in all variables. \item If $\sigma_1=0$ then we take this to mean $\chi$ is $C^1$ and, for each $t,q$, $\chi(t,q,p,z)$ is $C^2$ in $(p,z)$ with second derivatives continuous jointly in all variables. \end{itemize} Eventually, we will need to carefully choose $\chi$ so that we achieve our aim, but for now we simply use It\^o's formula to compute $\chi(t,x_t^\epsilon,z_t^\epsilon)$. We defined $C^{1,2}$ precisely so that It\^o's formula is justified. For this computation, we will define $\chi^\epsilon(t,x)=\chi(t,x,(p-\psi(t,q))/\sqrt{\epsilon})$, and \begin{align}\label{sigma_defs} &\Sigma_{11}^{ij}= \sum_\rho (\sigma_1)^i_\rho(\sigma_1)^j_\rho,\hspace{1mm} (\Sigma_{12})^i_j= \sum_\rho (\sigma_1)^i_\rho(\sigma_2)_{j\rho}, \hspace{1mm}(\Sigma_{22})_{ij}= \sum_\rho (\sigma_2)_{i\rho}(\sigma_2)_{j\rho}. \end{align} It\^o's formula gives \begin{align} &\chi(t,x_t^\epsilon,z_t^\epsilon)=\chi(0,x_0^\epsilon,z_0^\epsilon)+\int_0^t\partial_s \chi(s,x_s^\epsilon,z_s^\epsilon)ds+\int_0^t\nabla_q\chi^\epsilon(s,x_s^\epsilon)\cdot dq_s^\epsilon\\ &+\int_0^t\nabla_p\chi^\epsilon(s,x_s^\epsilon)\cdot dp_s^\epsilon+\frac{1}{2}\int_0^t\partial_{q^i}\partial_{q^j}\chi^\epsilon(s,x_s^\epsilon) \Sigma_{11}^{ij}(s,x_s^\epsilon)ds\notag\\ &+\frac{1}{2}\int_0^t\partial_{q^i}\partial_{p_j}\chi^\epsilon(s,x_s^\epsilon) (\Sigma_{12})^i_j(s,x_s^\epsilon)ds+\frac{1}{2}\int_0^t\partial_{p_i}\partial_{q^j}\chi^\epsilon(s,x_s^\epsilon) (\Sigma_{12})_i^j(s,x_s^\epsilon)ds\notag\\ &+\frac{1}{2}\int_0^t\partial_{p_i}\partial_{p_j}\chi^\epsilon(s,x_s^\epsilon) (\Sigma_{22})_{ij}(s,x_s^\epsilon)ds.\notag \end{align} Note that if $\sigma_1=0$ then only the second derivatives that we have assumed to exist are involved in this computation.
We can compute these terms as follows: \begin{align} \partial_{q^i}\chi^\epsilon(t,x)=&(\partial_{q^i}\chi)(t,x,z)-\epsilon^{-1/2}\partial_{q^i}\psi_k(t,q)(\partial_{z_k}\chi)(t,x,z),\\ \partial_{p_i}\chi^\epsilon(t,x)=&(\partial_{p_i}\chi)(t,x,z)+\epsilon^{-1/2}(\partial_{z_i}\chi)(t,x,z),\\ \partial_{q^i}\partial_{q^j}\chi^\epsilon(t,x)=&(\partial_{q^i}\partial_{q^j}\chi)(t,x,z)+\epsilon^{-1/2}\left(-\partial_{q^j}\psi_k(t,q)(\partial_{q^i}\partial_{z_k}\chi)(t,x,z)\right.\\ &\left.-\partial_{q^i}\partial_{q^j}\psi_k(t,q)(\partial_{z_k}\chi)(t,x,z)-\partial_{q^i}\psi_k(t,q)(\partial_{q^j}\partial_{z_k}\chi)(t,x,z)\right)\notag\\ &+\epsilon^{-1}\partial_{q^i}\psi_k(t,q)\partial_{q^j}\psi_l(t,q)(\partial_{z_k}\partial_{z_l}\chi)(t,x,z),\notag\\ \partial_{p_i}\partial_{p_j}\chi^\epsilon(t,x)=&(\partial_{p_i}\partial_{p_j}\chi)(t,x,z)+\epsilon^{-1/2}\left((\partial_{z_j}\partial_{p_i}\chi)(t,x,z)\right.\\ &\left.+(\partial_{p_j}\partial_{z_i}\chi)(t,x,z)\right)+\epsilon^{-1}(\partial_{z_i}\partial_{z_j}\chi)(t,x,z),\notag\\ \partial_{q^i}\partial_{p_j}\chi^\epsilon(t,x)=&(\partial_{q^i}\partial_{p_j}\chi)(t,x,z)+\epsilon^{-1/2}\left((\partial_{q^i}\partial_{z_j}\chi)(t,x,z)\right.\\ &\left.-\partial_{q^i}\psi_k(t,q)(\partial_{p_j}\partial_{z_k}\chi)(t,x,z)\right)\notag\\ &-\epsilon^{-1}\partial_{q^i}\psi_k(t,q)(\partial_{z_j}\partial_{z_k}\chi)(t,x,z),\notag \end{align} where $z$ is evaluated at $z(t,x,\epsilon)=(p-\psi(t,q))/\sqrt{\epsilon}$ in each of the above formulae. Using these expressions, together with the SDE \req{gen_SDE1}-\req{gen_SDE2}, we find \begin{align} &\chi(t,x_t^\epsilon,z_t^\epsilon)=\chi(0,x_0^\epsilon,z_0^\epsilon)+\int_0^t\partial_s \chi(s,x_s^\epsilon,z_s^\epsilon)ds\\ &+\int_0^t\left((\partial_{q^i}\chi)(s,x_s^\epsilon,z_s^\epsilon)-\epsilon^{-1/2}\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right) \notag\\ &\hspace{10mm}\times\left[\left(\frac{1}{\sqrt{\epsilon}}G_1(s,x_s^\epsilon,z_s^\epsilon)+F_1(s,x_s^\epsilon,z_s^\epsilon)\right)ds+\sigma_1(s,x_s^\epsilon)dW_s\right]^i\notag\\ &+\int_0^t\left((\partial_{p_i}\chi)(s,x_s^\epsilon,z_s^\epsilon)+\epsilon^{-1/2}(\partial_{z_i}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right) \notag\\ &\hspace{10mm}\times\left [\left(\frac{1}{\sqrt{\epsilon}}G_2(s,x^\epsilon_s,z_s^\epsilon)+F_2(s,x^\epsilon_s,z_s^\epsilon)\right)ds+\sigma_2(s,x^\epsilon_s) dW_s\right]_i\notag\\ &+\frac{1}{2}\int_0^t\Sigma_{11}^{ij}(s,x_s^\epsilon)\bigg[(\partial_{q^i}\partial_{q^j}\chi)(s,x_s^\epsilon,z_s^\epsilon)+\epsilon^{-1/2}\left(-\partial_{q^j}\psi_k(s,q_s^\epsilon)(\partial_{q^i}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right.\notag\\ &\hspace{15mm}\left.-\partial_{q^i}\partial_{q^j}\psi_k(s,q_s^\epsilon)(\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)-\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{q^j}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right)\notag\\ &\hspace{15mm}+\epsilon^{-1}\partial_{q^i}\psi_k(s,q_s^\epsilon)\partial_{q^j}\psi_l(s,q_s^\epsilon)(\partial_{z_k}\partial_{z_l}\chi)(s,x_s^\epsilon,z_s^\epsilon)\bigg] ds\notag\\ &+\int_0^t(\Sigma_{12})^i_j(s,x_s^\epsilon)\bigg[(\partial_{q^i}\partial_{p_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)+\epsilon^{-1/2}\left((\partial_{q^i}\partial_{z_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right.\notag\\ &\hspace{10mm}\left.-\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{p_j}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right)-\epsilon^{-1}\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{z_j}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\bigg]ds\notag\\
&+\frac{1}{2}\int_0^t (\Sigma_{22})_{ij}(s,x_s^\epsilon)\bigg[(\partial_{p_i}\partial_{p_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)+\epsilon^{-1/2}\left((\partial_{z_j}\partial_{p_i}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right.\notag\\ &\hspace{15mm}\left.+(\partial_{p_j}\partial_{z_i}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right)+\epsilon^{-1}(\partial_{z_i}\partial_{z_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)\bigg]ds.\notag \end{align} Multiplying by $\epsilon$ and collecting powers, we arrive at \begin{align}\label{homog_eq} &\int_0^t(L\chi)(s,x_s^\epsilon,z_s^\epsilon)ds=\epsilon^{1/2} (R_1^\epsilon)_t+\epsilon\left(\chi(t,x_t^\epsilon,z_t^\epsilon)-\chi(0,x_0^\epsilon,z_0^\epsilon)+ (R^\epsilon_2)_t\right), \end{align} where we define \begin{align}\label{L_def} (L\chi)(t,x,z)=&\bigg(\frac{1}{2}\Sigma_{11}^{ij}(t,x)\partial_{q^i}\psi_k(t,q)\partial_{q^j}\psi_l(t,q)\\ &\hspace{5mm}-(\Sigma_{12})^i_l(t,x)\partial_{q^i}\psi_k(t,q)+\frac{1}{2} (\Sigma_{22})_{kl}(t,x)\bigg)(\partial_{z_k}\partial_{z_l}\chi)(t,x,z)\notag\\ &+\left((G_2)_k(t,x,z)-\partial_{q^i}\psi_k(t,q)G^i_1(t,x,z)\right)(\partial_{z_k}\chi)(t,x,z),\notag \end{align} \begin{align}\label{R1_def} &(R_1^\epsilon)_t\\ =&-\int_0^t(\partial_{q^i}\chi)(s,x_s^\epsilon,z_s^\epsilon)G^i_1(s,x_s^\epsilon,z_s^\epsilon)ds\notag\\ &+\int_0^t\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\left[F_1(s,x_s^\epsilon,z_s^\epsilon)ds+\sigma_1(s,x_s^\epsilon)dW_s\right]^i\notag\\ &-\int_0^t(\partial_{z_i}\chi)(s,x_s^\epsilon,z_s^\epsilon) \left [F_2(s,x^\epsilon_s,z_s^\epsilon)ds+\sigma_2(s,x^\epsilon_s) dW_s\right]_i\notag\\ &-\int_0^t(\partial_{p_i}\chi)(s,x_s^\epsilon,z_s^\epsilon)(G_2)_i(s,x^\epsilon_s,z_s^\epsilon)ds\notag\\ &-\frac{1}{2}\int_0^t\Sigma_{11}^{ij}(s,x_s^\epsilon)\left(-\partial_{q^j}\psi_k(s,q_s^\epsilon)(\partial_{q^i}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right.\notag\\ &\hspace{8.5mm}\left.-\partial_{q^i}\partial_{q^j}\psi_k(s,q_s^\epsilon)(\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)-\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{q^j}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right)ds\notag\\ &-\int_0^t(\Sigma_{12})^i_j(s,x_s^\epsilon)\left((\partial_{q^i}\partial_{z_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)-\partial_{q^i}\psi_k(s,q_s^\epsilon)(\partial_{p_j}\partial_{z_k}\chi)(s,x_s^\epsilon,z_s^\epsilon)\right)ds\notag\\ &-\int_0^t (\Sigma_{22})_{ij}(s,x_s^\epsilon)(\partial_{z_j}\partial_{p_i}\chi)(s,x_s^\epsilon,z_s^\epsilon)ds,\notag \end{align} and \begin{align}\label{R2_def} & (R^\epsilon_2)_t\\ =&-\int_0^t\partial_s \chi(s,x_s^\epsilon,z_s^\epsilon)ds\notag\\ &-\int_0^t(\partial_{q^i}\chi)(s,x_s^\epsilon,z_s^\epsilon)\left[F_1(s,x_s^\epsilon,z_s^\epsilon)ds+\sigma_1(s,x_s^\epsilon)dW_s\right]^i\notag\\ &-\int_0^t(\partial_{p_i}\chi)(s,x_s^\epsilon,z_s^\epsilon) \left [F_2(s,x^\epsilon_s,z_s^\epsilon)ds+\sigma_2(s,x^\epsilon_s) dW_s\right]_i\notag\\ &-\frac{1}{2}\int_0^t\Sigma_{11}^{ij}(s,x_s^\epsilon)(\partial_{q^i}\partial_{q^j}\chi)(s,x_s^\epsilon,z_s^\epsilon)ds\notag\\ &-\int_0^t(\Sigma_{12})^i_j(s,x_s^\epsilon)(\partial_{q^i}\partial_{p_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)ds\notag\\ &-\frac{1}{2}\int_0^t (\Sigma_{22})_{ij}(s,x_s^\epsilon)(\partial_{p_i}\partial_{p_j}\chi)(s,x_s^\epsilon,z_s^\epsilon)ds.\notag \end{align} First, consider simply homogenizing \req{M_t_def} to a quantity of the form $\int_0^t\tilde G(s,x_s^\epsilon)ds$. Suppose we have a candidate for $\tilde G$.
If we can find a $C^{1,2}$ solution, $\chi$, to the PDE \begin{align} (L\chi)(t,x,z)=G(t,x,z)-\tilde G(t,x) \end{align} then substituting this into \req{homog_eq} gives \begin{align}\label{homog_eq2} &\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,x_s^\epsilon)ds\\ =&\epsilon^{1/2}(R_1^\epsilon)_t+\epsilon\left(\chi(t,x_t^\epsilon,z_t^\epsilon)-\chi(0,x_0^\epsilon,z_0^\epsilon)+ (R^\epsilon_2)_t\right).\notag \end{align} Given sufficient growth bounds for $\chi$ and its derivatives, one anticipates that the right hand side of \req{homog_eq2} vanishes in the limit. If, in addition, $\tilde G$ is Lipschitz in $p$, uniformly in $(t,q)$, then, based on Assumption \ref{homog_assump1}, one expects \begin{align} \int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds- \int_0^t\tilde G(s,q_s^\epsilon,\psi(s,q_s^\epsilon))ds\rightarrow 0 \end{align} as $\epsilon\rightarrow 0^+$. We make this informal discussion precise in Theorem \ref{homog_thm} below. For this, we will need the following assumptions: \begin{assumption}\label{homog_assump2} For all $T>0$, the following quantities are polynomially bounded in $z$, with the bounds uniform on $[0,T]\times \mathbb{R}^{n+m}$:\\ $G_1$, $F_1$, $G_2$, $F_2$, $\sigma_{1}$, $\sigma_{2}$, $\partial_{q^i}\psi$, $\partial_{q^i}\partial_{q^j}\psi$. If $\sigma_1=0$ then we can remove the requirement on $\partial_{q^i}\partial_{q^j}\psi$. \end{assumption} Recall that an $\mathbb{R}^l$-valued function, $\phi(t,x,z)$, is called {\em polynomially bounded} in $z$, uniformly on $[0,T]\times \mathbb{R}^{n+m}$ if there exist $r,C>0$ such that \begin{equation} \|\phi(t,x,z)\|\leq C(1+\|z\|^r) \end{equation} for all $(t,x,z)\in[0,T]\times \mathbb{R}^{n+m}\times\mathbb{R}^m$. In particular, if $\phi$ is independent of $z$, this just means it is bounded on $[0,T]\times \mathbb{R}^{n+m}$. Applying this to $\partial_{q^i}\psi$, we note that Assumption \ref{homog_assump2} implies that $\psi$ is Lipschitz in $q$, uniformly in $t\in[0,T]$. \begin{assumption}\label{homog_assump3} Given a continuous $G:[0,\infty)\times\mathbb{R}^{n+m}\times\mathbb{R}^m\rightarrow\mathbb{R}$, assume that there exists a $C^{1,2}$ function $\chi:[0,\infty)\times\mathbb{R}^{n+m}\times\mathbb{R}^m\rightarrow\mathbb{R}$ and a continuous function $\tilde G(t,x):[0,\infty)\times\mathbb{R}^{n+m}\rightarrow\mathbb{R}$ that together satisfy the PDE \begin{align}\label{chi_eq} (L\chi)(t,x,z)= G(t,x,z)-\tilde G(t,x), \end{align} where the differential operator, $L$, is defined in \req{L_def}. Assume that, for a given $T>0$, $\tilde G$ is Lipschitz in $p$, uniformly for $(t,q)\in[0,T]\times\mathbb{R}^n$. Also suppose that $\chi$, its first derivatives, and the second derivatives $\partial_{q^i}\partial_{q^j}\chi$, $\partial_{q^i}\partial_{p_j}\chi$, $\partial_{q^i}\partial_{z_j}\chi$, $\partial_{p_i}\partial_{p_j}\chi$, and $\partial_{p_i}\partial_{z_j}\chi$ are polynomially bounded in $z$, uniformly for $(t,x)\in[0,T]\times\mathbb{R}^{n+m}$. If $\sigma_1=0$ then the only second derivatives that we require to be polynomially bounded are $\partial_{p_i}\partial_{p_j}\chi$ and $\partial_{p_i}\partial_{z_j}\chi$. \end{assumption} \begin{theorem}\label{homog_thm} Fix $T>0$. Let Assumptions \ref{homog_assump1}-\ref{homog_assump3} hold and $x_t^\epsilon=(q_t^\epsilon,p_t^\epsilon)$ satisfy the SDE \req{gen_SDE1}-\req{gen_SDE2}. Then for any $p>0$ we have \begin{align} E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,x_s^\epsilon)ds\right|^p\right]=O(\epsilon^{p/2})
\end{align} and \begin{align} E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,q_s^\epsilon,\psi(s,q_s^\epsilon))ds\right|^p\right]=O(\epsilon^{p/2}) \end{align} as $\epsilon\rightarrow 0^+$. \end{theorem} \begin{proof} Fix $T>0$ and first let $p\geq 2$. Then \req{homog_eq2} gives \begin{align} &E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,x_s^\epsilon)ds\right|^p\right]\\ \leq &3^{p-1}\left( \epsilon^{p/2}E\left[\sup_{t\in[0,T]}|(R_1^\epsilon)_t|^p\right]+2^p\epsilon^pE\left[\sup_{t\in[0,T]}|\chi(t,x_t^\epsilon,z_t^\epsilon)|^p\right]\right.\notag\\ &\left.\hspace{1cm}+\epsilon^p E\left[\sup_{t\in[0,T]}|(R^\epsilon_2)_t|^p\right]\right).\notag \end{align} From \req{R1_def} and \req{R2_def} we see that $R_1^\epsilon$ and $R_2^\epsilon$ have the forms \begin{align} (R_i^\epsilon)_t=\int_0^t V_i(s,x_s^\epsilon,z_s^\epsilon) ds+\int_0^t Q_{ij}(s,x_s^\epsilon,z_s^\epsilon) dW^j_s, \end{align} where $V_i$ and $Q_{ij}$ are linear combinations of products of (components of) one or more terms from the following list:\\ $G_1$, $F_1$, $G_2$, $F_2$, $\sigma_1$, $\sigma_2$, $\partial_{q^i}\psi$, $\partial_{q^i}\partial_{q^j}\psi$, $\partial_t\chi$, $\partial_{q^i}\chi$, $\partial_{z_i}\chi$, $\partial_{p_i}\chi$, $\partial_{q^i}\partial_{q^j}\chi$, $\partial_{q^i}\partial_{p_j}\chi$, $\partial_{q^i}\partial_{z_j}\chi$, $\partial_{p_i}\partial_{p_j}\chi$, $\partial_{p_i}\partial_{z_j}\chi$. Also note that if $\sigma_1=0$ then the only second derivative terms that are involved are $\partial_{p_i}\partial_{p_j}\chi$ and $\partial_{p_i}\partial_{z_j}\chi$. By assumption, these are all polynomially bounded in $z$, uniformly on $[0,T]\times \mathbb{R}^{n+m}$, as is $\chi$. Therefore, letting $\tilde C$ denote a constant that potentially varies line to line, there exists $r>0$ such that \begin{align} &E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,x_s^\epsilon)ds\right|^p\right]\\ \leq &\tilde C \epsilon^{p/2}\left(E\left[\left(\int_0^T |V_1(s,x_s^\epsilon,z_s^\epsilon)| ds\right)^p\right]+E\left[\sup_{t\in[0,T]}\left|\int_0^t Q_{1j}(s,x_s^\epsilon,z_s^\epsilon) dW^j_s\right|^p\right]\right)\notag\\ &+\tilde C\epsilon^p\left(E\left[\left(\int_0^T |V_2(s,x_s^\epsilon,z_s^\epsilon)| ds\right)^p\right]+E\left[\sup_{t\in[0,T]}\left|\int_0^t Q_{2j}(s,x_s^\epsilon,z_s^\epsilon) dW^j_s\right|^p\right]\right.\notag\\ &\left.\hspace{15mm}+1+E\left[\sup_{t\in[0,T]}\|z_t^\epsilon\|^{rp}\right]\right).\notag \end{align} H\"older's inequality and polynomial boundedness yield \begin{align} E\left[\left(\int_0^T |V_i(s,x_s^\epsilon,z_s^\epsilon)| ds\right)^p\right]\leq& T^{p-1} E\left[ \int_0^T |V_i(s,x_s^\epsilon,z_s^\epsilon)|^p ds\right]\\ \leq & \tilde C T^{p}\left( 1+\sup_{t\in[0,T]}E\left[\|z_t^\epsilon\|^{rp} \right]\right).\notag \end{align} Applying the Burkholder-Davis-Gundy inequality (as found in, for example, Theorem 3.28 in \cite{karatzas2014brownian}) to the terms involving $Q_{ij}$, and then H\"older's inequality, we obtain \begin{align} &E\left[\sup_{t\in[0,T]}\left|\int_0^t Q_{ij}(s,x_s^\epsilon,z_s^\epsilon) dW^j_s\right|^p\right]\\ \leq &\tilde CE\left[\left( \int_0^T\| Q_i(s,x_s^\epsilon,z_s^\epsilon)\|^2ds\right)^{p/2}\right]\notag\\ \leq &\tilde C T^{p/2-1}E\left[ \int_0^T\| Q_i(s,x_s^\epsilon,z_s^\epsilon)\|^pds\right]\notag\\ \leq &\tilde C T^{p/2}\left( 1+\sup_{t\in[0,T]}E[\|z_t^\epsilon\|^{rp}]\right).\notag \end{align} Combining these bounds and using Assumption
\ref{homog_assump1}, we find \begin{align} &E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,x_s^\epsilon)ds\right|^p\right]\\ \leq &\tilde C \epsilon^{p/2}\left(1+\sup_{t\in[0,T]}E[\|z_t^\epsilon\|^{rp}]\right)\notag\\ &+\tilde C\epsilon^p\left(1+\sup_{t\in[0,T]}E[\|z_t^\epsilon\|^{rp}]+E\left[\sup_{t\in[0,T]}\|z_t^\epsilon\|^{rp}\right]\right)\notag\\ \leq &\tilde C \epsilon^{p/2}(1+O(1))+\tilde C\epsilon^p\left(1+O(1)+O(\epsilon^{-\delta})\right)\notag \end{align} for any $\delta>0$. Letting $\delta=p/2$, we find \begin{align} E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,x_s^\epsilon)ds\right|^p\right]=O(\epsilon^{p/2}). \end{align} Now use H\"older's inequality, the uniform Lipschitz property of $\tilde G$, and Assumption \ref{homog_assump1} again to compute \begin{align} &E\left[\sup_{t\in[0,T]}\left|\int_0^t G(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t \tilde G(s,q_s^\epsilon,\psi(s,q_s^\epsilon))ds\right|^p\right]\\ \leq& O(\epsilon^{p/2})+\tilde C E\left[\sup_{t\in[0,T]}\left|\int_0^t \tilde G(s,x_s^\epsilon)-\tilde G(s,q_s^\epsilon,\psi(s,q_s^\epsilon))ds\right|^p\right]\notag\\ \leq& O(\epsilon^{p/2})+ \tilde CT^{p-1}E\left[\int_0^T |\tilde G(s,x_s^\epsilon)-\tilde G(s,q_s^\epsilon,\psi(s,q_s^\epsilon))|^p ds\right]\notag\\ \leq& O(\epsilon^{p/2})+\tilde C T^{p}\sup_{t\in[0,T]}E\left[ \|p_t^\epsilon-\psi(t,q_t^\epsilon)\|^p \right]\notag\\ =&O(\epsilon^{p/2}).\notag \end{align} This proves the claim for $p\geq 2$. The result for arbitrary $p>0$ then follows from an application of H\"older's inequality. \end{proof} \subsubsection{Formal derivation of $\tilde G$}\label{sec:formal_G_tilde} Formally applying the Fredholm alternative to \req{chi_eq} motivates the form that $\tilde G$ must have in order for $\chi$ and its derivatives to possess the growth bounds required by Theorem \ref{homog_thm}. The formal calculation is simple enough that we repeat it here:\\ Let $L^*$ be the formal adjoint to $L$ and suppose we have a solution, $h(t,x,z)$, to \begin{equation}\label{h_eq} L^*h=0, \hspace{2mm} \int h(t,x,z)dz=1. \end{equation} If $\chi$ and its derivatives grow slowly enough and $h$ and its derivatives decay quickly enough, then $\int hL\chi dz$ will exist, the boundary terms from integration by parts will vanish at infinity, and we find \begin{align} 0=\int (L^*h)\chi dz=\int h L(\chi) dz=\int h (G-\tilde G)dz=\int h Gdz-\tilde G. \end{align} Therefore we must have \begin{align} \tilde G(t,x)=\int h(t,x,z)G(t,x,z)dz. \end{align} In essence, the homogenized quantity is obtained by averaging over $h$, the instantaneous equilibrium distribution for the fast variables, $z$. This corroborates the heuristic discussion in Section \ref{sec:perturb}. \subsubsection{Limiting equation}\label{sec:limit_eq} We now apply the above framework to prove the existence of a limiting process $q_t$ (i.e., $q_t^\epsilon\rightarrow q_t$) and to derive an SDE satisfied by $q_t$.
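Before stating the convergence result, let us illustrate the averaging formula for $\tilde G$ from Section \ref{sec:formal_G_tilde} with a small numerical sketch. The model below is hypothetical and one-dimensional, and we assume, for illustration, that the density $h$ has the Gibbs form $h\propto e^{-\beta K}$ (as will be the case in Section \ref{sec:h_explicit_sol}).
\begin{verbatim}
# Numerical illustration of tilde_G = \int h G dz for a hypothetical 1D
# model: K(q,z) = d(q) z^4 with d(q) = 1 + q^2/2, gamma = 1, beta = 1, and
# G(q,z) = -d'(q) z^4.  Closed form: tilde_G(q) = -d'(q)/(4 d(q)).
import numpy as np

def tilde_G(q):
    d, dprime = 1.0 + 0.5 * q**2, q
    z = np.linspace(-6.0, 6.0, 20001)   # quadrature grid; tails negligible
    w = np.exp(-d * z**4)               # unnormalized density h
    G = -dprime * z**4
    return (G * w).sum() / w.sum()      # grid spacing cancels in the ratio

q = 0.7
print(tilde_G(q), -q / (4.0 * (1.0 + 0.5 * q**2)))   # values agree closely
\end{verbatim}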
Specifically, we have: \begin{theorem}\label{conv_thm} Let $T>0$, $p\geq 2$, and $0<\beta\leq p/2$, let $x_t^\epsilon=(q_t^\epsilon,p_t^\epsilon)$ satisfy the SDE \req{gen_SDE1}-\req{gen_SDE2}, and suppose that Assumptions \ref{homog_assump1}-\ref{homog_assump3} hold and that the SDE for $q_t^\epsilon$, \req{gen_SDE1}, can be rewritten in the form \begin{align}\label{con_thm_eq} q_t^\epsilon=q_0^\epsilon+\int_0^t\tilde F(s,x_s^\epsilon)ds+\int_0^tG(s,x_s^\epsilon,z_s^\epsilon)ds+\int_0^t\tilde\sigma(s,x_s^\epsilon)dW_s+ R^\epsilon_t \end{align} where the components of $G$ have the properties described in Assumption \ref{homog_assump3}, $\tilde F(t,x):[0,\infty)\times\mathbb{R}^{n+m}\rightarrow\mathbb{R}^n$, $\tilde \sigma(t,x):[0,\infty)\times\mathbb{R}^{n+m}\rightarrow\mathbb{R}^{n\times k}$ are continuous, Lipschitz in $x$, uniformly in $t\in[0,T]$, and $R_t^\epsilon$ are continuous semimartingales that satisfy \begin{align} E\left[\sup_{t\in[0,T]}\| R_t^\epsilon\|^p\right]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+. \end{align} Suppose $\tilde G$ (from Assumption \ref{homog_assump3}) is Lipschitz in $x$, uniformly in $t\in[0,T]$, and we have initial conditions $E[\|q^\epsilon_0\|^p]<\infty$, $E[\|q_0\|^p]<\infty$, and $E[\|q_0^\epsilon-q_0\|^p]=O(\epsilon^{p/2})$. Then \begin{align} E\left[\sup_{t\in[0,T]}\|q_t^\epsilon-q_t\|^p\right]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+ \end{align} where $q_t$ satisfies the SDE \begin{align}\label{gen_limit_eq} q_t=q_0+&\int_0^t\tilde F(s,q_s,\psi(s,q_s))ds+\int_0^t\tilde G(s,q_s,\psi(s,q_s))ds\\ &+\int_0^t\tilde\sigma(s,q_s,\psi(s,q_s))dW_s.\notag \end{align} \end{theorem} \begin{proof} We will prove this theorem by verifying all the hypotheses of Lemma \ref{conv_lemma}. Define \begin{align} \tilde R_t^\epsilon=R_t^\epsilon+\int_0^tG(s,x_s^\epsilon,z_s^\epsilon)ds-\int_0^t\tilde G(s,x_s^\epsilon)ds. \end{align} Then \begin{align} q_t^\epsilon=q_0^\epsilon+\int_0^t\tilde F(s,x_s^\epsilon)ds+\int_0^t\tilde G(s,x_s^\epsilon)ds+\int_0^t\tilde\sigma(s,x_s^\epsilon)dW_s+ \tilde R^\epsilon_t \end{align} where $\tilde F+\tilde G$ and $\tilde \sigma$ are Lipschitz in $x$, uniformly for $t\in[0,T]$ and \begin{align} E\left[\sup_{t\in[0,T]}\|\tilde R_t^\epsilon\|^p\right]=O(\epsilon^\beta) \end{align} by Theorem \ref{homog_thm}. $E[\|q_0^\epsilon-q_0\|^p]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+$ by assumption (since $\beta\leq p/2$), and \begin{equation} \sup_{t\in[0,T]}E[\|p_t^\epsilon-\psi(t,q_t^\epsilon)\|^p]=O(\epsilon^{p/2})\text{ as }\epsilon\rightarrow 0^+ \end{equation} by Assumption \ref{homog_assump1}. Note that the assumptions also imply that a solution $q_t$ to \req{gen_limit_eq} exists for all $t\geq 0$ \cite{khasminskii2011stochastic}. For any $\epsilon>0$, using the Burkholder-Davis-Gundy inequality and H\"older's inequality we obtain the bound \begin{align} &E\left[\sup_{t\in[0,T]}\|q_t^\epsilon\|^p\right]\\ \leq&4^{p-1}\bigg(E\left[\|q_0^\epsilon\|^p\right]+\epsilon^{-p/2}E\left[\left(\int_0^T \|G_1(s,x_s^\epsilon,z_s^\epsilon)\|ds\right)^p\right]\notag\\ &+E\left[\left(\int_0^T \|F_1(s,x_s^\epsilon,z_s^\epsilon)\|ds\right)^p\right]+E\left[\sup_{t\in[0,T]}\left\|\int_0^t\sigma_1(s,x_s^\epsilon)dW_s\right\|^p\right]\bigg)\notag\\ \leq&4^{p-1}\bigg(E\left[\|q_0^\epsilon\|^p\right]+\epsilon^{-p/2}T^{p-1}\int_0^T E\left[\|G_1(s,x_s^\epsilon,z_s^\epsilon)\|^p\right]ds\notag\\ &+T^{p-1}\int_0^T E\left[\|F_1(s,x_s^\epsilon,z_s^\epsilon)\|^p\right]ds+\tilde C T^{p/2-1}\int_0^TE\left[ \|\sigma_1(s,x_s^\epsilon)\|^p_F\right]ds\bigg)\notag.
\end{align} Polynomial boundedness (see Assumption \ref{homog_assump2}) gives \begin{align} &E\left[\sup_{t\in[0,T]}\|q_t^\epsilon\|^p\right]\leq4^{p-1}\bigg(E\left[\|q_0^\epsilon\|^p\right]+\tilde C \int_0^TE\left[ (1+\|z_s^\epsilon\|^r)^p\right]ds\bigg), \end{align} where we absorbed all factors of $T$ and $\epsilon$ into the constant $\tilde C$. Using Assumption \ref{homog_assump1} then gives \begin{align} &E\left[\sup_{t\in[0,T]}\|q_t^\epsilon\|^p\right]<\infty \end{align} for all $\epsilon$ sufficiently small. Finally, for $n>0$ define the stopping time $\tau_n=\inf\{t\geq 0:\|q_t\|\geq n\}$ and let $q^{\tau_n}_s\equiv q_{s\wedge\tau_n}$ denote the stopped process. Then for $0\leq t\leq T$ the Lipschitz properties together with the Burkholder-Davis-Gundy and H\"older's inequalities imply \begin{align} &E\left[\sup_{s\in[0,t]}\|q^{\tau_n}_s\|^p\right]\\ \leq&3^{p-1}\bigg(E[ \|q_0\|^p]+E\left[\left(\int_0^{t\wedge\tau_n} \|(\tilde F+\tilde G)(s,q^{\tau_n}_s,\psi(s,q^{\tau_n}_s))\|ds\right)^p\right]\notag\\ &\hspace{.75cm}+E\left[\sup_{s\in[0,t]}\left\|\int_0^{s\wedge\tau_n}\tilde\sigma(r,q_r^{\tau_n},\psi(r,q^{\tau_n}_r))dW_r\right\|^p\right]\bigg)\notag\\ \leq&3^{p-1}E[ \|q_0\|^p]+\tilde C\int_0^tE\left[\|q^{\tau_n}_s\|^p\right]ds\\ &+\tilde C\int_0^t\left(\|(\tilde F+\tilde G)(s,0,\psi(s,0))\|^p+\|\tilde\sigma(s,0,\psi(s,0))\|^p_F\right)ds\notag\\ \leq&\tilde C\left(1+ \int_0^tE\left[\sup_{r\in[0,s]}\|q^{\tau_n}_r\|^p\right]ds\right), \end{align} where $\tilde C$ changes line to line, and is independent of $t$. The definition of $\tau_n$, together with $E[\|q_0\|^p]<\infty$, implies that \begin{equation} \sup_{s\geq 0}E\left[\sup_{r\in[0,s]}\|q^{\tau_n}_r\|^p\right]<\infty. \end{equation} Therefore we can apply Gronwall's inequality to get \begin{align} E\left[\sup_{t\in[0,T]}\|q^{\tau_n}_t\|^p\right]\leq \tilde Ce^{\tilde C T}, \end{align} where the constant $\tilde C$ is independent of $n$. Hence, the monotone convergence theorem yields \begin{align} E\left[\sup_{t\in[0,T]}\|q_t\|^p\right]\leq \tilde Ce^{\tilde C T}<\infty. \end{align} This completes the verification that the hypotheses of Lemma \ref{conv_lemma} hold, allowing us to conclude that \begin{align} E\left[\sup_{t\in[0,T]}\|q_t^\epsilon-q_t\|^p\right]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+. \end{align} \end{proof} \section{Homogenization of Hamiltonian systems} In this final section, we apply the above framework to our original Hamiltonian system, \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} (in particular, $m=n$ in this section), in order to prove the existence of a limiting process $q_t^\epsilon\to q_t$ and derive a homogenized SDE for $q_t$. Specifically, in Sections \ref{sec:h_explicit_sol} and \ref{sec:chi_explicit_sol} we will study a class of Hamiltonian systems for which the PDEs \req{h_eq} for $h$ and \req{chi_eq} for $\chi$ that are needed to derive the limiting equation are explicitly solvable and the required bounds can be verified by elementary means.
The SDE \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} can be rewritten in the general form \req{gen_SDE1}-\req{gen_SDE2}: \begin{align} dq^\epsilon_t=&\frac{1}{\sqrt{\epsilon}}\nabla_z K(t,q^\epsilon_t,z_t^\epsilon)dt,\label{q_Hamil_eq2}\\ d p^\epsilon_t=&\left(-\frac{1}{\sqrt{\epsilon}}\left(\gamma_l(t,x^\epsilon_t)- \nabla_q\psi_l(t,q_t^\epsilon)\right)\partial_{z_l}K(t,q_t^\epsilon,z_t^\epsilon)-\nabla_q K(t,q^\epsilon_t,z_t^\epsilon)\right.\label{p_Hamil_eq2}\\ &\hspace{5mm}-\nabla_q V(t,q^\epsilon_t)+F(t,x^\epsilon_t)\bigg)dt+\sigma(t,x^\epsilon_t) dW_t,\notag \end{align} where $\gamma_l$ denotes the vector obtained by taking the $l$th column of $\gamma$. Specifically, \begin{align} &F_1=0, \hspace{2mm}\sigma_1=0, \hspace{2mm} \sigma_2=\sigma, \hspace{2mm} G_1(t,x,z)=\nabla_z K(t,q,z),\\ &F_2(t,x,z)=-\nabla_q K(t,q,z)-\nabla_q V(t,q)+F(t,x),\\ & G_2(t,x,z)=-\left(\gamma_l(t,x)- \nabla_q\psi_l(t,q)\right)\partial_{z_l}K(t,q,z). \end{align} In particular, $\sigma_1=0$, so below we use the definition of $C^{1,2}$ applicable to this case. The operator $L$, \req{L_def}, and its formal adjoint have the following form: \begin{align} (L\chi)(t,x,z)=&\frac{1}{2} \Sigma_{kl}(t,x)(\partial_{z_k}\partial_{z_l}\chi)(t,x,z)\label{Hamil_L}\\ &-\tilde\gamma_{kl}(t,x)\partial_{z_l}K(t,q,z)(\partial_{z_k}\chi)(t,x,z),\notag\\ (L^*h)(t,x,z)=&\partial_{z_k}\bigg(\frac{1}{2} \Sigma_{kl}(t,x)\partial_{z_l}h(t,x,z)\label{Hamil_L_star}\\ &+\tilde\gamma_{kl}(t,x)\partial_{z_l}K(t,q,z)h(t,x,z)\bigg),\notag \end{align} where \begin{equation}\label{hamil_Sigma_def} \Sigma_{ij}=\sum_{\rho}\sigma_{i\rho}\sigma_{j\rho} \end{equation} and $\tilde\gamma$ was defined in \req{tilde_gamma_def}. Here $\sigma$ and $\Sigma$ denote what were $\sigma_2$ and $\Sigma_{22}$ respectively in \req{sigma_defs} and $\sigma_1=0$. In particular, the indices on $\Sigma$ have the meaning $\Sigma_{ij}\equiv (\Sigma_{22})_{ij}$. \subsection{Computing the noise-induced drift}\label{sec:h_explicit_sol} In general, an explicit solution to $L^*h=0$ is not available, and so the homogenized equation can only be defined implicitly, as in Theorem \ref{conv_thm}. However, there are certain classes of systems where we can explicitly derive the form of the additional vector field, $\tilde G$, appearing in the homogenized equation. In \cite{BirrellHomogenization}, one such class was studied by a different method. Here, we explore the case where the noise and dissipation satisfy the fluctuation-dissipation relation pointwise for a time and state dependent generalized temperature $T(t,q)$, \begin{align}\label{fluc_dis} \Sigma_{ij}(t,q)=2k_BT(t,q) \gamma_{ij}(t,q), \end{align} where $\Sigma$ was defined in \req{hamil_Sigma_def}. We will make Assumptions \ref{assump1}-\ref{assump4}, but impose no further constraints on the form of the Hamiltonian here. As can be verified by a direct calculation, under the assumption \req{fluc_dis}, the equation $L^*h=0$, with $L^*$ given by \req{Hamil_L_star}, is solved by \begin{align}\label{h_formula} h(t,q,z)=\frac{1}{Z(t,q)} \exp[-\beta(t,q)K(t,q,z)], \end{align} where we define $\beta(t,q)=1/(k_BT(t,q))$ and $Z$, the ``partition function'', is chosen so that $\int h dz=1$. Note that Assumption \ref{assump3} ensures such a normalization exists. We also point out that in this case, the antisymmetric part of $\tilde \gamma$ does not contribute to the right hand side of \req{Hamil_L_star}.
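The direct calculation can also be reproduced symbolically. The following SymPy sketch (an illustration in dimension $n=2$, with a generic kinetic energy, a generic symmetric damping matrix, a generic antisymmetric part of $\tilde\gamma$, and all matrix entries constant in $z$) verifies that \req{h_formula} is annihilated by $L^*$ under \req{fluc_dis}; the normalization $Z$ depends only on $(t,q)$ and drops out.
\begin{verbatim}
# SymPy check (n = 2): under Sigma = 2 k_B T gamma, h = exp(-beta K(z))
# satisfies L* h = 0 for generic K and a generic antisymmetric part of
# tilde_gamma.
import sympy as sp

z1, z2, beta, a = sp.symbols('z1 z2 beta a', positive=True)
g11, g12, g22 = sp.symbols('g11 g12 g22')
z = [z1, z2]

K = sp.Function('K')(z1, z2)
h = sp.exp(-beta * K)

gamma = sp.Matrix([[g11, g12], [g12, g22]])          # symmetric damping
tilde_gamma = gamma + sp.Matrix([[0, a], [-a, 0]])   # plus antisymmetric part
Sigma = (2 / beta) * gamma                           # Sigma = 2 k_B T gamma

flux = [sp.Rational(1, 2) * sum(Sigma[k, l] * sp.diff(h, z[l]) for l in (0, 1))
        + sum(tilde_gamma[k, l] * sp.diff(K, z[l]) for l in (0, 1)) * h
        for k in (0, 1)]
Lstar_h = sum(sp.diff(flux[k], z[k]) for k in (0, 1))
print(sp.simplify(Lstar_h))                          # prints 0
\end{verbatim}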
An interesting point to note is that when the antisymmetric part of $\tilde\gamma$ vanishes (physically, for $K$ quadratic in $z$ this means a vanishing magnetic field), the vector field whose divergence is taken in \req{Hamil_L_star} vanishes identically. When $\tilde\gamma$ has a non-vanishing antisymmetric part, only after taking the divergence does the expression in \req{Hamil_L_star} vanish. From \req{q_eq}, we see that the terms that require homogenization are \begin{align}\label{G_gen_def} G^i(t,q_t^\epsilon,z_t^\epsilon)=&-(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)\partial_{q^j}K(t,q_t^\epsilon,z_t^\epsilon)\\ &+(z_t^\epsilon)_j\partial_{q^l}(\tilde\gamma^{-1})^{ij}(t,q_t^\epsilon)\partial_{z_l}K(t,q_t^\epsilon,z_t^\epsilon).\notag \end{align} Using \req{h_formula}, the formal calculation of Section \ref{sec:formal_G_tilde} gives \begin{align} \tilde G^i(t,q)= -(\tilde\gamma^{-1})^{ij}(t,q)\langle\partial_{q^j}K(t,q,z)\rangle+\frac{\partial_{q^l}(\tilde\gamma^{-1})^{il}(t,q)}{\beta(t,q)}, \end{align} where we define \begin{align}\label{h_avg_def} \langle\partial_{q^j}K(t,q,z)\rangle=\frac{1}{Z(t,q)} \int \partial_{q^j}K(t,q,z)\exp[-\beta(t,q)K(t,q,z)]dz. \end{align} Of course, this calculation is only formal. In the next section, we study a particular case where everything can be made rigorous. \subsection{Rigorous Homogenization of a class of Hamiltonian systems }\label{sec:chi_explicit_sol} In this section we explore a class of Hamiltonian systems for which Assumption \ref{homog_assump3} can be rigorously verified via an explicit solution to the PDE for $\chi$. We will work with Hamiltonian systems that satisfy Assumptions \ref{assump1}-\ref{assump5}, \ref{assump7}. In particular, we restrict to the class of Hamiltonians with \begin{align} \label{K_tilde_def} K(t,q,z)=\tilde K(t,q,A^{ij}(t,q)z_iz_j), \end{align} where $A(t,q)$ is valued in the space of positive-definite $n \times n$ matrices. We will write $\tilde K\equiv \tilde K(t,q,\zeta)$ and $\tilde K^\prime\equiv \partial_{\zeta}\tilde K$.
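For example, the standard quadratic kinetic energy $K(t,q,z) = {1 \over 2}A^{ij}(t,q)z_iz_j$ is of this form with $\tilde K(t,q,\zeta) = \zeta/2$, and a relativistic-type kinetic energy $K = \sqrt{1 + A^{ij}(t,q)z_iz_j}$ also fits the representation, with $\tilde K(t,q,\zeta) = \sqrt{1+\zeta}$. Whether a given choice satisfies the assumptions below has to be checked separately; e.g., in the relativistic example $\tilde K^\prime(\zeta) = {1 \over 2\sqrt{1+\zeta}}$ is not bounded below by a positive constant, so Assumption \ref{K_poly_bound_assump} below fails in that case.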
We will also need the following relations between $\Sigma$, $\gamma$, and $A$ to hold: \begin{assumption}\label{proportionality_assump} $\sigma$ is independent of $p$ and \begin{align} \Sigma(t,q)=b_1(t,q)A^{-1}(t,q) , \hspace{2mm} \gamma(t,q)=b_2(t,q)A^{-1}(t,q) \end{align} where, for every $T>0$, the $b_i$ are bounded, $C^2$ functions that have positive lower bounds and bounded first derivatives, both on $[0,T]\times\mathbb{R}^n$. \end{assumption} Note that these relations imply a fluctuation-dissipation relation with a time- and state-dependent generalized temperature $T=\frac{b_1}{2k_Bb_2}$. In \cite{BirrellHomogenization}, we showed that Assumptions \ref{assump1}-\ref{assump5} imply: \begin{align}\label{q_eq2} d(q_t^\epsilon)^i=&\tilde F^i(t,x_t^\epsilon)dt+\tilde\sigma^i_{\rho}(t,x_t^\epsilon)dW^\rho_t+G^i(t,x_t^\epsilon,z_t^\epsilon)dt+d(R^\epsilon_t)^i, \end{align} where \begin{align} \tilde F^i(t,x)=&(\tilde\gamma^{-1})^{ij}(t,q)(-\partial_t\psi_j(t,q)-\partial_{q^j}V(t,q)+F_j(t,x))+S^i(t,q),\\ G^i(t,q,z)=&-(\tilde\gamma^{-1})^{ij}(t,q)(\partial_{q^j}\tilde K)(t,q,A^{kl}(t,q)z_kz_l),\\ \tilde \sigma^i_\rho(t,x)=& (\tilde\gamma^{-1})^{ij}(t,q)\sigma_{j\rho}(t,x),\\ S^i(t,q)=& k_BT(t,q) \left(\partial_{q^j}(\tilde\gamma^{-1})^{ij}(t,q)-\frac{1}{2}(\tilde\gamma^{-1})^{ik}(t,q)A^{-1}_{jl}(t,q)\partial_{q^k} A^{jl}(t,q)\right),\label{S_def} \end{align} and $R_t^\epsilon$ is a family of continuous semimartingales. $S(t,q)$ is called the {\em noise-induced drift} (see Eq. 3.26 in \cite{BirrellHomogenization}). Note that, with $K(t,q,z)$ defined by \req{K_tilde_def}, the first term in \req{G_gen_def} consists of two contributions---one coming from the $q$-dependence of $\tilde{K}$ and one coming from the $q$-dependence of $A$. The $G$ defined here comprises only the first contribution. The method of \cite{BirrellHomogenization} is able to homogenize the second term in \req{G_gen_def}, as well as the first contribution of the first term, leading to the noise-induced drift $S$, but fails when $\tilde K$ depends explicitly on $q$. However, under certain circumstances, the method developed in Section \ref{sec:gen_homog} succeeds in homogenizing the system when $\tilde K$ has explicit $q$-dependence, as we now show. We will need one final assumption: \begin{assumption}\label{K_poly_bound_assump} For every $T>0$: \begin{enumerate} \item There exist $\zeta_0>0$ and $C>0$ such that $\tilde K^\prime(t,q,\zeta)\geq C$ for all $(t,q,\zeta)\in[0,T]\times\mathbb{R}^n\times[\zeta_0,\infty)$. \item $\tilde K(t,q,\zeta)$, $\partial_t\partial_{q^i}\tilde K(t,q,\zeta)$, $\partial_{q^i}\partial_{\zeta}\tilde K(t,q,\zeta)$, and $\partial_{q^i}\partial_{q^j}\tilde K(t,q,\zeta)$ are polynomially bounded in $\zeta$, uniformly in $(t,q)\in[0,T]\times\mathbb{R}^n$. \end{enumerate} \end{assumption} We are now prepared to prove the following homogenization result: \begin{theorem}\label{hamil_conv_thm} Let $x_t^\epsilon=(q_t^\epsilon,p_t^\epsilon)$ satisfy the Hamiltonian SDE \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} and suppose Assumptions \ref{assump1}-\ref{assump5}, \ref{assump7}, \ref{proportionality_assump}, and \ref{K_poly_bound_assump} hold. Let $p\geq 2$ and suppose we have initial conditions that satisfy $E[\|q^\epsilon_0\|^p]<\infty$, $E[\|q_0\|^p]<\infty$, and $E[\|q_0^\epsilon-q_0\|^p]=O(\epsilon^{p/2})$.
Then for any $T>0$, $0<\beta<p/2$ we have \begin{align} E\left[\sup_{t\in[0,T]}\|q_t^\epsilon-q_t\|^p\right]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+ \end{align} where $q_t$ is the solution to the SDE \begin{align}\label{hamil_limit_eq} dq_t^i=&(\tilde\gamma^{-1})^{ij}(t,q_t)(-\partial_t\psi_j(t,q_t)-\partial_{q^j}V(t,q_t)+F_j(t,q_t,\psi(t,q_t)))dt\\ &+S^i(t,q_t)dt+\tilde G^i(t,q_t)dt+(\tilde\gamma^{-1})^{ij}(t,q_t)\sigma_{j\rho}(t,q_t)dW_t^\rho\notag \end{align} with initial condition $q_0$. See \req{tilde_gamma_def}, \req{S_def}, and \req{hamil_tilde_G_def} for the definitions of $\tilde\gamma$, $S$, and $\tilde G$, respectively. \end{theorem} \begin{proof} From \cite{BirrellHomogenization}, Assumptions \ref{assump1}-\ref{assump5}, \ref{assump7} imply: \begin{enumerate} \item $q_t^\epsilon$ satisfies an equation of the form \req{con_thm_eq}, where, for every $T>0$, $\tilde F$, $\tilde\sigma$ are bounded, continuous, and Lipschitz in $x$, on $[0,T]\times\mathbb{R}^{2n}$, with Lipschitz constant uniform in $t$. \item Assumption \ref{homog_assump1} holds. \item For any $p>0$, $T>0$, $0<\beta<p/2$ we have \begin{align} E\left[\sup_{t\in[0,T]}\|R_t^\epsilon\|^p\right]=O(\epsilon^\beta) \text{ as } \epsilon \rightarrow 0^+. \end{align} \end{enumerate} Combined with polynomial boundedness of $\tilde K$ (Assumption \ref{K_poly_bound_assump}), we see that Assumption \ref{homog_assump2} also holds. Therefore, to apply Theorem \ref{conv_thm}, we have to verify Assumption \ref{homog_assump3} and that the $\tilde G$ referenced therein is Lipschitz in $x$, uniformly in $t\in [0,T]$. From Section \ref{sec:formal_G_tilde}, we expect that \begin{align}\label{hamil_tilde_G_def} \tilde G^i(t,q)=-(\tilde\gamma^{-1})^{ij}(t,q)\langle\partial_{q^j}\tilde K(t,q,\|z\|_A^2)\rangle \end{align} where, similarly to \req{h_avg_def}, \begin{align} \langle\partial_{q^j}\tilde K(t,q,\|z\|_A^2)\rangle=\frac{1}{Z(t,q)} \int\partial_{q^j}\tilde K(t,q,\|z\|^2_A)\exp[-\beta(t,q)\tilde K(t,q,\|z\|^2_A)]dz. \end{align} Here we use the shorthand $\|z\|^2_A\equiv A^{ij}(t,q)z_iz_j$ when the implied values of $t,q$ are apparent from the context. Using our assumptions, along with several applications of the dominated convergence theorem, one can see that $\tilde G$ is $C^{1}$ and, for every $T>0$, is bounded with bounded first derivatives on $[0,T]\times\mathbb{R}^n$. In particular, it is Lipschitz in $q$, uniformly in $t\in[0,T]$. We now turn to solving the equation \begin{align}\label{L_pde} L\chi=G-\tilde G. \end{align} Since $G-\tilde G$ is independent of $p$ and depends on $z$ only through $\|z\|^2_A$, we look for $\chi$ with the same behavior. Using the ansatz $\chi(t,q,z)=\tilde\chi(t,q,\|z\|^2_A)$ and defining $G^i(t,q,\zeta)=-(\tilde\gamma^{-1})^{ij}(t,q)(\partial_{q^j}\tilde K)(t,q,\zeta)$, we are led (on account of the antisymmetry of the matrix $\tilde{\gamma} - \gamma$) to the ODE in the variable $\zeta$, to be read componentwise in the free index $i$, which we suppress on $\tilde\chi$, $G$, and $\tilde G$: \begin{align} &\zeta\tilde\chi^{\prime\prime}(t,q,\zeta) +\left(\frac{n}{2}- \beta(t,q) \zeta \tilde K^\prime(t,q,\zeta)\right) \tilde\chi^\prime(t,q,\zeta)\\ =&\frac{1}{2 b_1(t,q)}(G(t,q,\zeta)-\tilde G(t,q)).\notag \end{align} This has the solution \begin{align}\label{tilde_chi_def} &\tilde\chi(t,q,\zeta)=\frac{1}{2b_1(t,q)}\int_0^\zeta \zeta_1^{-n/2}\exp[\beta(t,q) \tilde K(t,q,\zeta_1)]\\ &\times \int_0^{\zeta_1} \zeta_2^{(n-2)/2}\exp[-\beta(t,q) \tilde K(t,q,\zeta_2)]\left(G(t,q,\zeta_2)-\tilde G(t,q)\right)d\zeta_2\,d\zeta_1.\notag \end{align} Therefore $\chi(t,q,z)\equiv\tilde\chi(t,q,\|z\|_A^2)$ solves the PDE \req{L_pde}.
One can show that $\chi$ is $C^{1,2}$ and that $\chi$ and its first derivatives are polynomially bounded in $z$, uniformly for $(t,q)\in[0,T]\times\mathbb{R}^{n}$. As a representative example, in Appendix \ref{app:poly_bound} we outline the proof that $\tilde\chi(t,q,\zeta)$ is polynomially bounded in $\zeta$, uniformly in $(t,q)\in[0,T]\times\mathbb{R}^n$. The remaining computations are similar and we leave them to the reader. Since $\chi$ is independent of $p$, we have $\partial_{p_i}\partial_{p_j}\chi=0$ and $\partial_{p_i}\partial_{z_j}\chi=0$. This completes the verification of Assumption \ref{homog_assump3}, and we are justified in using Theorem \ref{conv_thm} to conclude \begin{align} E\left[\sup_{t\in[0,T]}\|q_t^\epsilon-q_t\|^p\right]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+, \end{align} where $q_t$ satisfies the SDE \begin{align}\label{gen_limit_eq} q_t=&q_0+\int_0^t\tilde F(s,q_s,\psi(s,q_s))ds+\int_0^t\tilde G(s,q_s)ds+\int_0^t\tilde\sigma(s,q_s)dW_s,\notag \end{align} as claimed. \end{proof} Lastly, we give an example of a general class of Hamiltonians that satisfy the hypotheses of Theorem \ref{hamil_conv_thm}. The proof of this corollary is straightforward, so we leave it to the reader. \begin{corollary} Consider the class of Hamiltonians of the form \begin{align} H(t,q,p)=\sum_{l=k_1}^{k_2} d_l(t,q) \left[A^{ij}(t,q) (p-\psi(t,q))_i(p-\psi(t,q))_j\right]^l+V(t,q) \end{align} where $1\leq k_1\leq k_2$ are integers and the following properties hold on $[0,T]\times\mathbb{R}^{n}$ for every $T>0$: \begin{enumerate} \item $V$ is $C^2$ and $\nabla_q V$ is bounded and Lipschitz in $q$, uniformly in $t\in[0,T]$. \item $\psi$ is $C^3$ and $\partial_t\psi$, $\partial_{q^i}\psi$, $\partial_t\partial_{q^i}\psi$, $\partial_{q^i}\partial_{q^j}\psi$, $\partial_t\partial_{q^j}\partial_{q^i}\psi$, and $\partial_{q^l}\partial_{q^j}\partial_{q^i}\psi$ are bounded. \item The $d_l$ are $C^2$, non-negative, bounded, and have bounded first and second derivatives. \item $d_{k_1}$ and $d_{k_2}$ are uniformly bounded below by a positive constant. \item $A$ is $C^2$, positive-definite, and $A$, $\partial_t A$, $\partial_{q^i} A$, $\partial_t \partial_{q^i}A$, and $\partial_{q^i}\partial_{q^j} A$ are bounded. \item The eigenvalues of $A$ are uniformly bounded below by a positive constant. \end{enumerate} Also suppose that \begin{enumerate} \item $\sigma$ is independent of $p$ and \begin{align} \Sigma(t,q)=b_1(t,q)A^{-1}(t,q) , \hspace{2mm} \gamma(t,q)=b_2(t,q)A^{-1}(t,q) \end{align} where, for every $T>0$, the $b_i$ are bounded, $C^2$ functions with positive lower bounds and bounded first derivatives on $[0,T]\times\mathbb{R}^n$. \item $\gamma$ is $C^2$, is independent of $p$, and $\partial_t\gamma$, $\partial_{q^i} \gamma$, $\partial_t\partial_{q^j}\gamma$, $\partial_{q^i}\partial_{q^j}\gamma$ are bounded on $[0,T]\times\mathbb{R}^{n}$. \item The eigenvalues of $\gamma$ are bounded below by some $\lambda>0$. \item $\gamma$, $F$, and $\sigma$ are bounded. \item $F$ and $\sigma$ are Lipschitz in $x$ uniformly in $t\in[0,T]$. \item There exists $C>0$ such that the (random) initial conditions satisfy $K^\epsilon(0,x^\epsilon_0)\leq C$ for all $\epsilon>0$ and all $\omega\in\Omega$. \item There is a $p\geq 2$ such that \begin{align} E[\|q^\epsilon_0\|^p]<\infty, \hspace{2mm} E[\|q_0\|^p]<\infty,\text{ and } E[\|q_0^\epsilon-q_0\|^p]=O(\epsilon^{p/2}).
\end{align} \end{enumerate} Then all the hypotheses of Theorem \ref{hamil_conv_thm} hold, in particular Assumptions \ref{assump1}-\ref{assump5}, \ref{assump7}, \ref{proportionality_assump}, and \ref{K_poly_bound_assump} hold, and hence, for any $\beta \in \left(0, {p \over 2}\right)$, \begin{align} E\left[\sup_{t\in[0,T]}\|q_t^\epsilon-q_t\|^p\right]=O(\epsilon^\beta)\text{ as }\epsilon\rightarrow 0^+, \end{align} where $x_t^\epsilon=(q_t^\epsilon,p_t^\epsilon)$ satisfies the Hamiltonian SDE \req{Hamiltonian_SDE_q}-\req{Hamiltonian_SDE_p} and $q_t$ satisfies the homogenized SDE \req{limit_eq}. \end{corollary}
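In the simplest one-dimensional setting, with constant $\gamma$ and $\sigma$, $A=1$, $\psi=0$, and $F=0$, the noise-induced drift $S$ and the correction $\tilde G$ vanish, and one expects the limit equation to reduce to the classical small-mass (Smoluchowski--Kramers) form $dq_t=-\gamma^{-1}V'(q_t)dt+\gamma^{-1}\sigma\, dW_t$. The following numerical sketch is our illustration only: the Euler--Maruyama discretization, the double-well potential, and all parameter values are ad hoc choices, not taken from the paper. It drives the underdamped equation and its homogenized limit with the same Brownian path and prints $\sup_{t\le T}|q_t^\epsilon-q_t|$ as $\epsilon$ decreases.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma = 1.0, 0.5               # constant friction and noise strength
Vp = lambda q: q**3 - q               # V'(q) for the double well V(q) = q^4/4 - q^2/2
T, dt = 2.0, 1e-5
n = int(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), n)  # one Brownian path driving all equations

def underdamped(eps):
    # Euler-Maruyama for  dq = v dt,  eps dv = (-gamma v - V'(q)) dt + sigma dW
    q = v = 0.0
    path = np.empty(n)
    for i in range(n):
        q += v * dt
        v += ((-gamma * v - Vp(q)) * dt + sigma * dW[i]) / eps
        path[i] = q
    return path

def homogenized():
    # Euler-Maruyama for the limit SDE  dq = -(V'(q)/gamma) dt + (sigma/gamma) dW
    q = 0.0
    path = np.empty(n)
    for i in range(n):
        q += -Vp(q) / gamma * dt + sigma / gamma * dW[i]
        path[i] = q
    return path

q_lim = homogenized()
for eps in (1e-1, 1e-2, 1e-3):
    err = np.abs(underdamped(eps) - q_lim).max()
    print(f"eps = {eps:.0e}:  sup_t |q^eps - q| = {err:.4f}")
\end{verbatim}
With this pathwise coupling one should observe the supremum error shrink as $\epsilon$ decreases, roughly like $\epsilon^{1/2}$ for a single sample, consistent with the moment bound $E[\sup_t\|q_t^\epsilon-q_t\|^p]=O(\epsilon^\beta)$, $\beta<p/2$; the step size $dt$ must be kept well below $\epsilon/\gamma$ for the explicit scheme to remain stable.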
{ "timestamp": "2017-07-11T02:13:34", "yymm": "1707", "arxiv_id": "1707.02884", "language": "en", "url": "https://arxiv.org/abs/1707.02884", "abstract": "This paper studies homogenization of stochastic differential systems. The standard example of this phenomenon is the small mass limit of Hamiltonian systems. We consider this case first from the heuristic point of view, stressing the role of detailed balance and presenting the heuristics based on a multiscale expansion. This is used to propose a physical interpretation of recent results by the authors, as well as to motivate a new theorem proven here. Its main content is a sufficient condition, expressed in terms of solvability of an associated partial differential equation (\"the cell problem\"), under which the homogenization limit of an SDE is calculated explicitly. The general theorem is applied to a class of systems, satisfying a generalized detailed balance condition with a position-dependent temperature.", "subjects": "Mathematical Physics (math-ph)", "title": "A homogenization theorem for Langevin systems with an application to Hamiltonian dynamics", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759654852756, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405701803325 }
https://arxiv.org/abs/1511.07073
1-domination of knots
We say that a knot $k_1$ in the $3$-sphere {\it $1$-dominates} another $k_2$ if there is a proper degree 1 map $E(k_1) \to E(k_2)$ between their exteriors, and write $k_1 \ge k_2$. When $k_1 \ge k_2$ but $k_1 \ne k_2$ we write $k_1 > k_2$. One expects in the latter eventuality that $k_1$ is more {\it complicated}. In this paper we produce various sorts of evidence to support this philosophy.
\subsection{Algebraic consequences of $1$-domination} Suppose that $k_1\ge k_2$ and $\sigma$ is a (set of) knot invariant(s). It is generally believed that ``$\sigma(k_1)\ge\sigma(k_2)$'' in some sense, and though this has been verified in various cases, the general case is unknown. See Section \ref{open problems} and \cite{Wan} for discussions. \begin{Proposition}\label{dominateunknot} Every knot $1$-dominates the unknot. \end{Proposition} \begin{proof} We need to show $k \ge O$ for each knot $k$, where $O$ is the unknot. Note that any compact manifold $M^n$ with spherical collared boundary, $\partial M \cong S^{n-1}$, can be mapped with degree $1$ onto the ball $B^n$ by pinching the complement of a collar of $\partial M$ to a point. If $E(k)$ is a knot exterior, this trick can be used to map a Seifert surface in $E(k)$ to a spanning disk in $E(O)$, and another pinch to map the remainder of $E(k)$ to the remainder of $E(O)$. \end{proof} A similar argument shows the following: \begin{Proposition}\label{sumdominates} A connected sum $k_1 \sharp k_2$ of knots $1$-dominates each summand. Moreover, if $k_1 \ge k_1'$ and $k_2 \ge k_2'$ then $k_1 \sharp k_2 \ge k_1' \sharp k_2'$. \end{Proposition} Next we consider some invariants which are known to behave well under $1$-domination. Proofs of the stated results will be sketched below. \begin{Proposition}\label{surjective} If $f: E(k_1) \to E(k_2)$ is a $1$-domination, then $f_* :\pi_1E(k_1) \to \pi_1E(k_2)$ is surjective. \end{Proposition} \begin{Proposition}\label{genus} If $g(k)$ denotes the genus of $k$, then $k_1 \ge k_2 \implies g(k_1)\ge g(k_2)$. \end{Proposition} \begin{Proposition}\label{volume} If $V(k)$ denotes the Gromov volume of $E(k)$, then $k_1 \ge k_2 \implies V(k_1)\ge V(k_2)$. \end{Proposition} \begin{Proposition}\label{Apoly} If $A_{k}$ denotes the $A$-polynomial of $k$, then $k_1 \ge k_2 \implies A_{k_2}\vert A_{k_1}$. \end{Proposition} \noindent Let $\Lambda_{k}$ denote the Alexander module associated with the knot $k$. That is, consider the infinite cyclic cover $\widetilde{E(k)} \to E(k)$ associated with the (kernel of the) Hurewicz map $\pi(k) = \pi_1(E(k)) \to H_1(E(k)) \cong {\mathbb Z}$. Then $\Lambda_{k}$ is $H_1(\widetilde{E(k)}; {\mathbb Z})$, considered as a ${\mathbb Z}[t^{\pm 1}]$-module, where $t$ corresponds to a generator of the deck transformation group ${\mathbb Z}$. \begin{Proposition}\label{module} $k_1 \ge k_2 \implies \Lambda_{k_1}=\Lambda_{k_2}\oplus \Lambda$; in particular $\Delta_{k_2}\vert\Delta_{k_1}$. \end{Proposition} \noindent More generally, let $\Delta(k, G) = \{\Delta_{\phi,k}\,|\,\phi : \pi_1(E(k)) \to G\}$ denote the set of all {\em twisted} Alexander polynomials for a given linear group $G$. \begin{Proposition}\label{twisted} $k_1 \ge k_2 \implies \Delta(k_2, G)\subseteq \Delta (k_1, G)$. \end{Proposition} The proof of the surjectivity of $f_*$ follows from well-known elementary facts and is left to the reader. Proposition \ref{genus} is a corollary of Gabai's result that the embedded and singular Thurston norms coincide \cite{Ga}. Proposition \ref{volume} is a basic property of Gromov volume; see \cite{Gr} or \cite{Th}. A sketch of the proof of Proposition \ref{Apoly} can be found in \cite{SWh}, and it was also discussed in a lecture of Boyer \cite{Boy}. The existence of splittings provided by a degree $1$ map as in Proposition \ref{module} is a classical fact. See \cite[Theorem 1.2.5]{Br} for the $\mathbb Z$-coefficient case and \cite[p.25]{Wal} for the local coefficient case.
We will present a rather concrete proof based on \cite{Mi} in Section \ref{Alexander}. For Proposition \ref{twisted} see \cite{KMW}, and also \cite{Z}. \subsection{Some open problems} \label{open problems} The behaviour of bridge numbers $b(k)$ under dominations is largely unknown, with only partial results currently available \cite{BNW}. For the crossing number $c(k)$, a positive answer to the question of whether $k_1 > k_2$ implies that $c(k_1) > c(k_2)$ would provide an alternative proof of the fact that any knot $1$-dominates at most finitely many knots \cite{BRuW}. Relatedly, Kitano asked whether $c(k_1)\ge c(k_2)$ if there is an epimorphism $\pi_1E(k_1)\to \pi_1E(k_2)$, which would provide an alternate proof of Simon's conjecture \cite{AL}. It would also support the additivity of crossing number under connected sum. Some flexibility in the interpretation of the ``reduces complexity'' philosophy is necessary. For instance, for Jones polynomials it is not true that $k_1 \geq k_2$ implies that $V_{k_2}\vert V_{k_1}$ (see the remark after Example 4 in Section \ref{Alexander}), but it is possible that $k_1 \geq k_2$ implies that the degree of $V_{k_1}$ is $\ge$ that of $V_{k_2}$. More problems will be raised below. \subsection{Outline of the paper}\label{outline} In Section \ref{Rigidity}, we will prove some rigidity results about $1$-domination between knots; that is, under certain conditions, $k \ge k'$ implies that $k=k'$. Some previously known conditions include: \begin{enumerate} \item both $k$ and $k'$ are hyperbolic knots and have the same Gromov volume (Gromov-Thurston's rigidity theorem \cite{Th}); \item $k$ and $k'$ have the same genus and $k$ is fibred \cite[Corollary 2.3]{BW1}. \end{enumerate} Theorem \ref{rigidity} states that $k\ge k'$ implies $k=k'$ if $k$ is a knot with no companion of winding number zero, and if $k$ and $k'$ have the same genus and the same Gromov volume. We also construct a strict $1$-domination $k> k'$ such that $k$ and $k'$ have the same genus and the same Gromov volume (and the same Alexander polynomial), to show that Theorem \ref{rigidity} is a best-possible rigidity result in terms of genus and Gromov volume. Other results in a similar spirit can be found in \cite{BNW} and \cite{De}. Section \ref{double-cover} is concerned with relations between domination and the double branched coverings $M_2(k)$ of $S^3$ over knots $k$. We show that if $k_1 \ge k_2$, then \begin{enumerate} \item[(1)] $M_2(k_1) \ge M_2(k_2)$ (i.e. there is a degree 1 map $M_2(k_1) \to M_2(k_2)$); \item[(2)] if $M_2(k_1) = M_2(k_2)$ then $k_1 = k_2$. \end{enumerate} Assertion $(2)$ can be thought of as an extension of the fact that there is no $1$-domination between distinct mutant knots with hyperbolic $2$-fold branched coverings \cite{Ru}. We use it to show that knots $1$-dominated by 2-bridge knots, respectively Montesinos knots, are 2-bridge, respectively Montesinos. (See \cite{ORS}, \cite{Li}, \cite{BB}, \cite{BBRW} for other results on $1$-domination between 2-bridge knots and Montesinos knots.) We also show in Section \ref{sec:AP} that any knot $1$-dominated by a toroidally alternating knot is a connected sum of simple knots. Assertion (1) suggests some interesting questions about the relations between $1$-domination among knots, the theory of left orderable groups, and Heegaard-Floer L-spaces. See \cite{BRoW}, \cite{BGW}, \cite{OS}.
In Section \ref{$1$-domination sequences} we study upper bounds on the length $n$ of $1$-domination sequences of knots $k_0> k_1>k_2>\cdots>k_n$ with given $k_0$, which is closely related to rigidity results. It is known that any sequence of $1$-dominations $M_0>M_1>\cdots>M_i>\cdots$ of compact orientable 3-manifolds has finite length \cite{Ro1}, and there is an a priori bound on this length given $M_0$ \cite{So2}. Corollary \ref{$1$-domination length in genus} states that if a knot $k_0$ is free (see Section \ref{$1$-domination sequences} for definitions), then the length of any $1$-domination sequence of knots $k_0> k_1>k_2>\cdots>k_n$ is bounded by the maximal genus $\hat g(k_0)$ of an incompressible Seifert surface for $k_0$ when $\hat g(k_0)$ is bounded. We point out that alternating knots, fibred knots and small knots are free with bounded $\hat g(k_0)$. If $k_0$ is either fibred or 2-bridge, then $\hat g(k_0)$ is equal to the genus $g(k_0)$ of $k_0$. One-dominations between small knots, fibred knots, and $2$-bridge knots have also been addressed in \cite{BW2}, \cite{ORS}, \cite{BB}, \cite{BBRW}. In Section \ref{Alexander} we present a proof that $\Lambda_{k_1}=\Lambda_{k_2}\oplus \Lambda$ when $k_1 \ge k_2$, along with some applications. We also point out that Gordon's approach to ribbon concordance \cite{Go}, based on Stallings' results about homology and central series of groups \cite{Sta}, provides some other rigidity results for $1$-domination of knots in terms of Alexander polynomials. Consequently, the length of a $1$-domination sequence $k_0> k_1 > k_2 > \cdots> k_n$ of alternating knots is bounded above by the degree of $\Delta_{k_0}$ when its leading coefficient is a prime power. It is known that any knot $1$-dominates at most finitely many knots \cite{BRuW}. (See also the stronger results of \cite{AL}, \cite{Liu}.) It is very hard to bound the number of knots $1$-dominated by a given knot in general. However the techniques of this paper provide many knots which are minimal in the sense that they only $1$-dominate the trivial knot and themselves. \section{Rigidity via genus and Gromov volume}\label{Rigidity} \subsection{Satellite knots and an example}\label{satellite} We recall the definition of satellite knots and fix some notation and terminology needed below. Suppose that $k_p$ is a knot contained in a solid torus $V$, where $V \subset S^3$ is unknotted and has longitude and meridian $l,m$. It is assumed that $k_p$ does not lie in a $3$-ball in $V$. Let $k_c$ be another knot in $S^3$, with regular neighbourhood $N(k_c)$, and let $h: V \to N(k_c)$ be a homeomorphism taking $l$ and $m$ respectively to the longitude and meridian of $k_c$. Then the knot $k_s := h(k_p)$ is called the {\it satellite} of $k_c$ with {\it pattern} knot $k_p$, the latter considered in $S^3$. One also calls $k_c$ a {\it companion} of $k_s$. \begin{Proposition}\label{satellitesdominate} Satellite knots $1$-dominate their pattern knots. \end{Proposition} \begin{proof} Suppose that $k_s$ is a satellite of $k_c$ with pattern $k_p$ as described above. Arguing as in Proposition \ref{dominateunknot}, there is a degree $1$ map of the exterior of $k_c$ to the exterior of $V$. Combining this with $h^{-1}$ on the closure of $N(k_c) \setminus N(k_s)$ gives the 1-domination $k_s \geq k_p$. \end{proof} \begin{Example}\label{same volume} {\rm We construct a non-trivial $1$-domination $k > k_1$ of knots with the same genus, the same Alexander polynomial, and the same Gromov volume.
Moreover all those invariants are non-vanishing. Let $k=h(k_1)$ be the satellite of the trefoil $k_2$ indicated by Figure 1. Here $h: V \to N(k_2)$ is a homeomorphism preserving the longitudes pictured; $k$ itself is not drawn. Then we have a $1$-domination $k\ge k_1$. Let ${\mathcal T}$ and ${\mathcal T}_1$ be the JSJ-tori of $E(k)$ and $E(k_1)$ respectively. Then $E(k)\setminus {\mathcal T}$ consists of three components: two Seifert pieces and one hyperbolic piece $H$, which is homeomorphic to the Whitehead link complement; and $E(k_1)\setminus {\mathcal T}_1$ consists of two components: one Seifert piece and one hyperbolic piece homeomorphic to $H$. Thus $k > k_1$. On the other hand, it is clear that both $k$ and $k_1$ are of genus 1, and have the same Gromov volume, which equals the hyperbolic volume of $H$. They also have the same Alexander polynomial, since $h$ is longitude preserving (see \cite{Rlf}, Chap.7) and $k_1$ is an untwisted double.} \end{Example} \begin{center} \includegraphics[totalheight=5cm]{fig3.eps} \begin{center} Figure 1 \end{center} \end{center} \bigskip \begin{Remark} {\rm By iterating the construction in Example \ref{same volume}, one can produce an arbitrarily long $1$-domination sequence of knots with the same genus, the same Alexander polynomial and the same Gromov volume.} \end{Remark} Suppose $k$ is a knot and $T$ is an essential torus in $E(k)$. By a theorem of Alexander, $T$ bounds a solid torus $V \cong S^1 \times D^2$, and as $T$ is incompressible in $E(k)$, we must have $k \subset V$. Thus $k$ represents some multiple of the generator of $\pi_1(V) \cong {\mathbb Z}$. We call the absolute value of this multiple the {\it winding number} of $T$ relative to $k$. In this setting, the core curve of $V$ is a companion of $k$. The essential feature permitting the construction of satellites with the same genus, Alexander polynomial and Gromov volume is that the winding number of $k$ in $N(k_2)$ is zero. This turns out to be necessary for the construction, as the following theorem demonstrates. \begin{Theorem}[Rigidity] \label{rigidity} Suppose that $k$ is a non-trivial knot such that every essential torus in $E(k)$ has non-zero winding number. If $k$ and $k'$ have the same Gromov volume and the same genus, and $k\ge k'$, then $k = k'$. \end{Theorem} \subsection{Proof of Theorem \ref{rigidity}} We prove Theorem \ref{rigidity} by establishing a sequence of claims. \begin{Claim}\label{seifert surface} Let $f: E(k)\to E(k')$ be a degree 1 map and let $(S, \partial S)\subset (E(k), \partial E(k))$ be a Seifert surface of minimal genus $g(k)$. Then the restriction $f|_* : \pi_1(S)\to \pi_1(E(k'))$ is injective. \end{Claim} \begin{proof} Otherwise there is an essential closed curve $c \subset S$ which is in the kernel of $f|_*: \pi_1(S)\to \pi_1(E(k'))$. Fix a finite covering $p: \tilde S \to S$ of degree $d$, say, so that $c$ lifts to a simple closed curve $\tilde c$ in $\tilde S$ (\cite{Sc}). Since $f(S)$ carries a generator $a$ of $H_2(E(k'), \partial E(k'); \mathbb Z)$ and $g(k)= g(k') > 0$, the Thurston norm of $a$ is $|\chi(S)|$. It follows that $(f \circ p)(\tilde S)$ carries $da$ and realizes its Thurston norm, which is $|\chi(\tilde S)|=d|\chi(S)|$. However, since the simple essential closed curve $\tilde c$ lies in the kernel of $(f\circ p)_*$, we can perform surgery on $\tilde S$ along $\tilde c$ to produce a new surface $\tilde S^*$ and a map $g: \tilde S^*\to E(k')$ which also represents $da$.
But then the singular Thurston norm of $da$ is bounded above by $|\chi(\tilde S^*)|$, which is strictly less than $|\chi(\tilde S)|$, contrary to Gabai's result that the Thurston norm and the singular Thurston norm coincide. \end{proof} \begin{Claim}\label{injective} If $T\subset E(k)$ is any essential torus, then the restriction $f|_* : \pi_1(T)\to \pi_1(E(k'))$ is injective. \end{Claim} \begin{proof} Let $k_T$ be the companion of $k$ such that $\partial E(k_T)=T$, and let $(m_T, \ell_T)$ be the meridian-longitude pair of $k_T$ on $\partial E(k_T)$. If $w_{T}$ denotes the winding number of $T$, one has $m_T= w_{T}m$ in $H_1(E(k); \mathbb Z)$, and so for any integers $p$ and $q$, $p\ell_T + qm_T= qw_{T} m$ in $H_1(E(k); \mathbb Z)$. Since $f_*: H_1(E(k); \mathbb Z)\to H_1(E(k'); \mathbb Z)$ is an isomorphism given by $f_*(m)=m'$ and $\pi_1(E(k'))$ is torsion free, it follows that if the kernel of $f|_* : \pi_1(T)\to \pi_1(E(k'))$ is non-trivial, then it is generated by the longitude $\ell_T$ on $T$. As argued by Schubert, any minimal genus Seifert surface $S$ for $k$ may be assumed to intersect $T$ in $w_T$ longitudes. Since by hypothesis $w_T\ne 0$, we may assume that $\ell_T\subset S$ and that it represents a nontrivial element of $\pi_1(S)$. But $f|_* :\pi_1(S)\to \pi_1(E(k'))$ is injective by Claim \ref{seifert surface}, so $f_*(\ell_T)\ne 1$. Claim \ref{injective} is proved. \end{proof} \begin{Claim}\label{seifert piece} If $N\subset E(k)$ is a Seifert piece of the JSJ-decomposition of $E(k)$, then the restriction $f|_* : \pi_1(N)\to \pi_1(E(k'))$ is injective. \end{Claim} \begin{proof} It follows from Seifert's classification of Seifert fibre structures on $S^3$ that $N$ is either a torus knot exterior, a cable space, or a composing space (the product of a planar surface and a circle) with at least three boundary components (see Lemma VI.3.4 of \cite{JS}). In particular, its base orbifold is orientable and therefore $N$ admits no separating horizontal surfaces. Let $T \subseteq \partial N$ be either $\partial E(k)$ or the torus which separates $N$ from $\partial E(k)$, and fix a minimal genus Seifert surface $S$ for $k$. Assume that $S$ has been isotoped to intersect $\partial N$ minimally and recall from the proof of the previous claim that $S \cap T$ consists of $w_T > 0$ copies of the longitude $\ell_T$. Fix a component $S_0$ of $S \cap N$ such that $S_0 \cap T \ne \emptyset$. Clearly $S_0$ is an essential surface in $N$ and so can be assumed to be either vertical or horizontal with respect to a fixed Seifert structure on $N$. Now $\ell_T$ cannot be isotopic in $T$ to a Seifert fibre of $N$ (this can be verified for each of the three types of possibilities for $N$), so $S_0$ is horizontal and therefore non-separating in $N$. It follows that $N$ fibres over the circle with fibre $S_0$. By Claim \ref{seifert surface}, $f_*|\pi_1(S_0)$ is injective and so $f_*(\pi_1(S_0))$ is a non-abelian free group. (It follows from the previous paragraph that $\chi(S_0) < 0$.) Recall that the class $\phi$ of a regular fibre of $N$ is central in $\pi_1(N)$ and let $H$ be the group generated by $\phi$ and $\pi_1(S_0)$. Then $H$ has finite index in $\pi_1(N)$ and since the latter is torsion free, it suffices to show that $f_*|H$ is injective. An element of $H$ can be written $\gamma \phi^n$ for some $\gamma \in \pi_1(S_0)$ and $n \in \mathbb Z$. Thus if $f_*(\gamma \phi^n) = 1$, then $f_*(\phi)^n = f_*(\gamma)^{-1}$ is an element of the non-abelian free group $f_*(\pi_1(S_0))$ which commutes with all of it (as $\phi$ is central in $\pi_1(N)$), hence lies in its trivial centre; that is, $f_*(\phi)^n = 1$.
But $f_*(\phi) \ne 1$ by Claim \ref{injective}, and since $\pi_1(E(k'))$ is torsion free we see that $n = 0$. Then $f_*(\gamma) = 1$, so $\gamma = 1$ by Claim \ref{seifert surface}, and hence $\gamma \phi^n = 1$. Thus the claim holds. \end{proof} \vskip 0.5 true cm Let $E(k) = H_k \cup S_k$ and $E(k') = H_{k'} \cup S_{k'}$ where $H_k, H_{k'}$ and $S_k, S_{k'}$ are the unions of the hyperbolic and Seifert pieces of $E(k)$ and $E(k')$. \begin{Claim}\label{hyperbolic piece} The map $f$ can be homotoped so that: \noindent $(1)$ $f|: (H_k, \partial H_k) \to (H_{k'}, \partial H_{k'})$ is a homeomorphism. \noindent $(2)$ $f(S_k) = S_{k'}$. \end{Claim} \begin{proof} Define $\Sigma_k$ to be the union of $S_k$ and regular neighbourhoods of the characteristic tori connecting two hyperbolic pieces in the JSJ decomposition of $E(k)$. Define $\Sigma_{k'}$ similarly. By Claim \ref{seifert piece} and the enclosing property of characteristic submanifold theory (\cite{JS}), we may homotope $f|\Sigma_k$ into $\Sigma_{k'}$. If $\partial E(k) \subset \Sigma_k$ we may suppose that the homotopy leaves $f|\partial E(k)$ invariant. Extend this homotopy to a homotopy of $f$ supported in a regular neighbourhood of $\Sigma_k$. Since the Gromov norms of $E(k)$ and $E(k')$ are the same, by Soma's result \cite{So1} one can further modify $f$ by a homotopy fixed on $S_k$ so that $f|H_k$ is a homeomorphism $(H_k, \partial H_k) \to (H_{k'}, \partial H_{k'})$. Then $f^{-1}(S_{k'}) \subset S_k$, and since $f$ is surjective we have $f(S_k) =S_{k'}$. \end{proof} \begin{Claim}\label{seifert preimage} Distinct neighbouring Seifert pieces of $E(k)$ are sent to distinct neighbouring Seifert pieces of $E(k')$ by $f$. Further, if $N' \subset S_{k'}$ is a Seifert piece of $E(k')$, then $f^{-1}(N')$ is a Seifert piece of $E(k)$. \end{Claim} \begin{proof} Suppose that there are distinct but non-disjoint Seifert pieces $N_1, N_2$ of $E(k)$ which are sent into $N'$ by $f$. Since $S^3$ is simply-connected, $N_1 \cap N_2$ is a torus $T$, and if $\phi_1, \phi_2 \in \pi_1(T) \cong \mathbb Z^2$ represent the fibre classes of $N_1, N_2$ respectively, they generate a $\mathbb Z^2$ subgroup of $\pi_1(T)$. Claim \ref{injective} shows that the latter statement also holds for $f_*(\phi_1), f_*(\phi_2)$. On the other hand, Claim \ref{seifert piece} implies that $f_*(\phi_j)$ has a non-abelian centralizer in $\pi_1(E(k'))$ and so Addendum VI.1.8 of Theorem VI.1.6 in \cite{JS} implies that $f_*(\phi_j)$ is a power of the fibre class of $N'$ ($j = 1, 2$). But then $f_*(\phi_1)$ and $f_*(\phi_2)$ lie in a $\mathbb Z$ subgroup of $\pi_1(E(k'))$, which we have seen is impossible. Thus distinct neighbouring Seifert pieces of $E(k)$ are sent to distinct neighbouring Seifert pieces of $E(k')$ by $f$. The dual graph $\Gamma(k)$ of the JSJ-decomposition of $E(k)$ is rooted at the vertex $v_0$ corresponding to the vertex manifold containing $\partial E(k)$. For each vertex $v$ of $\Gamma(k)$, we use $X_v$ to denote the corresponding vertex manifold. Define $\Gamma(k'), v_0'$, and $X_{v'}$ similarly. Since $S^3$ is simply-connected, both $\Gamma(k)$ and $\Gamma(k')$ are trees. Hence, Claim \ref{hyperbolic piece} and the conclusion of the previous paragraph imply that $f$ induces an isomorphism between these trees, which proves the claim. \end{proof} \medskip Claims \ref{hyperbolic piece} and \ref{seifert preimage} imply that for each piece $X'$ of $E(k')$, there is a unique piece $X$ of $E(k)$ such that $f: (X, \partial X) \to (X', \partial X')$.
If $v$ is the vertex of $\Gamma(k)$ corresponding to $X$, we let $f(v)$ denote the vertex of $\Gamma(k')$ corresponding to $X'$. Theorem \ref{rigidity} is a consequence of our final claim: \begin{Claim}\label{winding number non-zero} The map $f$ can be homotoped to a homeomorphism. \end{Claim} \begin{proof} Given the conclusions of Claims \ref{injective}, \ref{seifert piece} and \ref{hyperbolic piece}, classic work of Waldhausen shows that the restriction $f|: (X_{v_0}, \partial X_{v_0}) \to (X'_{v_0'}, \partial X_{v_0'}')$ is homotopic (rel $\partial E(k)$) to a covering map (see Theorem 13.6 of \cite{He}). Since $f| \partial E(k)$ has degree $1$, $f|: (X_{v_0}, \partial X_{v_0}) \to (X'_{v_0'}, \partial X_{v_0'}')$ can be homotoped to a homeomorphism. (This is automatic of course if $X_{v_0}$ is hyperbolic.) In particular $|\partial X_{v_0}| = |\partial X'_{v_0'}|$. Now suppose that the vertices of $\Gamma(k)$ (respectively $\Gamma (k')$) adjacent to $v_0$ (respectively $v'_0$) are $v_1,\dots,v_p$ (respectively $v_1',\dots,v_p'$). Let $T_i$ be the torus $X_{v_0} \cap X_{v_i}$, $1 \leq i \leq p$, and let $T_i'$ be its image under $f$. By Claim \ref{seifert preimage}, $f$ cannot send $X_{v_i}$ to $X'_{v'_0}$ for $i \ne 0$, so we may assume that $f(X_{v_i})\subset X'_{v'_i}$ for $i = 1, \dots , p$. Since $f|: T_i\to T'_i$ is a homeomorphism, the argument of the previous paragraph shows that for each $i$, $f: X_{v_i} \to X'_{v'_i}$ is homotopic to a homeomorphism (rel $T_i$). Proceeding by induction we see that for each vertex $v$ of $\Gamma(k)$, $f|: (X_v, \partial X_v) \to (X'_{f(v)}, \partial X'_{f(v)})$ is a homeomorphism. Since $f$ induces an isomorphism $\Gamma(k) \to \Gamma(k')$, the proof of the claim is complete. \end{proof} \section{Double branched covers}\label{double-cover} Let $M_q(k)$ denote the $q$-fold cyclic branched covering of $S^3$ over the knot $k$. \begin{Theorem}\label{double cover} Suppose $k\ge k'$. Then \noindent $(1)$ $M_2(k)\ge M_2(k')$, that is, a degree 1 map $M_2(k)\to M_2(k')$ exists; and \noindent $(2)$ $M_2(k)=M_2(k') \implies k=k'$. \end{Theorem} \begin{proof} Suppose that there is a degree $1$ map $f: E(k)\to E(k')$. We may assume that $f|: \partial E(k)\to \partial E(k')$ is a homeomorphism which sends the meridian $m$ of $k$ to the meridian $m'$ of $k'$. (1) Pick a Seifert surface $F'$ of $k'$. We may assume that $f$ has been homotoped relative to the boundary to be transverse to $F'$ and so that $F=f^{-1}(F')$ is connected. Then $F$ is a Seifert surface of $k$, and $f$ restricts to a proper degree $1$ map between $E(k)$ cut open along $F$, which we denote by $E(k)\setminus F$, and $E(k')\setminus F'$. This restriction can be assumed to be a homeomorphism from the two copies of $F$ in $\partial(E(k)\setminus F)$ to the two copies of $F'$ in $\partial(E(k')\setminus F')$. By gluing two copies of $E(k)\setminus F$, respectively of $E(k')\setminus F'$, along these copies of $F$, respectively of $F'$, we obtain a degree $1$ map from the $2$-fold cyclic covering of $E(k)$ to the $2$-fold cyclic covering of $E(k')$, which extends to a degree $1$ map $\hat f$ from $M_2(k)$ to $M_2(k')$. In other words, the degree $1$ map $f: E(k)\to E(k')$ lifts to a degree 1 map between the 2-fold cyclic coverings of the knot exteriors, which extends to a degree $1$ map $\hat f: M_2(k) \to M_2(k')$.
(2) The degree $1$ map $f$ induces an epimorphism $f_*: \pi_1E(k)\to \pi_1 E(k')$ such that $f_*(m) = m'$, hence it induces an epimorphism $$\bar f_*: \pi_1E(k)/m^2\to \pi_1 E(k')/m'^2$$ and we have the commutative diagram $$\xymatrix{ 1 \ar[r] & \pi_1( M_2(k)) \ar[d]_{\hat f_*}\ar[r] & \pi_1(E(k))/m^2\ar[d]_{\bar f_*} \ar[r] & {\mathbb Z}_2\ar[d]_{\cong}\ar[r] & 1\\ 1 \ar[r] & \pi_1(M_2(k')) \ar[r] & \pi_1(E(k'))/m'^2 \ar[r] & {\mathbb Z}_2\ar[r] & 1 } $$ Since $\hat f_*$ is induced by a degree $1$ map, it is surjective. Because $M_2(k)=M_2(k')$ and 3-manifold groups are Hopfian, $\hat f_*$ is in fact an isomorphism. By the Five Lemma, $\bar f_*$ is also an isomorphism. Assertion (2) now follows from the geometrization of 3-orbifolds with singular locus a link \cite{BP}, a theorem of Boileau-Zimmermann about $\pi$-orbifold groups \cite{BZ} when $\pi_1(M_2(k))$ is infinite, and the classification of spherical Montesinos knots when $\pi_1(M_2(k))$ is finite. \end{proof} Mutant knots have the same double branched covering, so Theorem \ref{double cover} implies the following. \begin{Corollary}\label{cor:mutant} There is no $1$-domination between distinct mutant knots. \end{Corollary} \noindent Ruberman has shown that if $k, k'$ are mutants, then $E(k)$ is hyperbolic if and only if $E(k')$ is, and in this case both have the same volume \cite{Ru}. Hence in this situation, Corollary \ref{cor:mutant} follows from the Gromov-Thurston rigidity theorem. Assertion $(1)$ of Theorem \ref{double cover} provides a connection between $1$-domination among knots, left orderable groups and L-spaces. \begin{Definition} $\;$ \\ {\rm $(1)$ A group is {\it left-orderable} if there is a total ordering $<$ of its elements which is left-invariant: $x < y$ if and only if $zx < zy$ for all $x, y$ and $z$. \noindent $(2)$ An {\it L-space} is a closed rational homology $3$-sphere whose Heegaard-Floer homology $\widehat {HF}(M)$ is a free abelian group of rank equal to $|H_1(M,{\mathbb Z})|$. } \end{Definition} \begin{Proposition}\label{left orderable} {\rm (\cite{BRoW})} Suppose $G$ and $G'$ are nontrivial fundamental groups of irreducible 3-manifolds and there is a surjection $G\to G'$. If $G'$ is left orderable, then $G$ is left orderable. \end{Proposition} \begin{Corollary} \label{lodomination} If $\pi_1M_2(k_1)$ is not left orderable but $\pi_1M_2(k_2)$ is, then $k_1$ does not $1$-dominate $k_2$. \end{Corollary} \noindent The left orderability of $\pi_1M_2(k)$ can be determined for certain families of knots. For instance, Boyer-Gordon-Watson showed that it is never left orderable for non-trivial alternating knots $k$ \cite{BGW}. For each Montesinos knot $k$, $M_2(k)$ is a Seifert manifold, and work of Boyer-Rolfsen-Wiest \cite{BRoW} combines with that of Jankins-Neumann \cite{JN} and Naimi \cite{Na} to determine exactly when such manifolds have left orderable fundamental groups in terms of the Seifert invariants. As a consequence, alternating knots cannot $1$-dominate certain classes of Montesinos knots. Another result, due to Ozsv\'ath-Szab\'o, states that $M_2(k)$ is an L-space for each alternating knot $k$ \cite{OS}. This and other evidence corroborates the following conjecture from \cite{BGW}, which is unsolved at this writing. \begin{Conjecture} \label{bgwconj} An irreducible $3$-manifold which is a rational homology sphere is an L-space if and only if its fundamental group is not left orderable.
\end{Conjecture} \noindent Ozsv\'ath-Szab\'o have conjectured that an irreducible $\mathbb Z$-homology $3$-sphere is an L-space if and only if it is the $3$-sphere or the Poincar\'e homology sphere (cf. \cite[Problem 11.4 and the remarks which follow it]{Sz}). This combines with Conjecture \ref{bgwconj} to yield the following conjecture: {\it An irreducible $\mathbb Z$-homology $3$-sphere other than $S^3$ and the Poincar\'e homology sphere has a left-orderable fundamental group}. Recall that the {\it determinant} of a knot $k$ is given by $|\Delta_{k}(-1)|$ and coincides with $|H_1(M_2(k),{\mathbb Z})|$. Thus $M_2(k)$ is a $\mathbb Z$-homology $3$-sphere if and only if the determinant of $k$ is $1$. The discussion above leads to the following question, whose expected answer is no. \begin{Question} Suppose that $k$ is alternating. Can $k$ $1$-dominate a nontrivial knot $k'$ with $|\Delta_{k'}(-1)|=1$? In particular, can it $1$-dominate a nontrivial knot with trivial Alexander polynomial? \end{Question} \noindent Here is a related question. \begin{Question} Suppose that $k$ is alternating and $k\ge k'$. Does $|\Delta_{k}(-1)|=|\Delta_{k'}(-1)|$ imply that $k=k'$? \end{Question} To state our next results, we need to recall some definitions: a {\it 2-string tangle} is a 3-ball $B^3$ together with two disjoint properly embedded arcs $a_1 \cup a_2$. A {\it trivial tangle} is a 2-string tangle where the arcs $a_1$ and $a_2$ bound disjoint disks together with arcs on the boundary of $B^3$. A {\it rational tangle} is the image of a trivial tangle by a homeomorphism of the ball fixing the end points of the arcs $a_1$ and $a_2$; a tangle is rational if and only if the $2$-fold covering of $B^3$ branched along the arcs $a_1 \cup a_2$ is a solid torus $S^1 \times D^2$. A well-known fact, using this double branched covering, is that rational tangles correspond to rational numbers, called the {\it slopes} of the rational tangles: the rational tangle $T(r)$ corresponding to the rational number $r$ is obtained by first drawing two strings of slope $r$ on the boundary $S^2(2,2,2,2)$ of the pillow-case $B$, and then pushing them into its interior. A {\it Montesinos tangle} is a tangle sum of rational tangles $T(r_1),\dots, T(r_n)$; by adding two arcs on the boundary of $B$, we get the so-called 2-bridge knots and Montesinos knots. The double branched covers of these knots are, respectively, lens spaces and Seifert manifolds. Further, the covering involution $\tau$ preserves the Seifert fibration of the $2$-fold covering and reverses the orientation of the fibres. The converse is true by the orbifold theorem \cite{BP} (see also \cite{BS}): if $M_2(k)$ is, respectively, a lens space or a Seifert fibered manifold such that $\tau$ reverses the orientation of the Seifert fibres, then $k$ is, respectively, a 2-bridge knot or a Montesinos knot. \begin{Proposition}\label{Montesinos} Suppose $k\ge k'$. \noindent $(1)$ If $k$ is a 2-bridge knot, so is $k'$. \noindent $(2)$ If $k$ is a Montesinos knot, so is $k'$. \end{Proposition} \begin{proof} Let $f: E(k)\to E(k')$ be a $1$-domination and let $\tau$, $\tau'$ be the covering involutions of the 2-fold branched coverings $M_2(k)$ and $M_2(k')$. Then we have a ${\mathbb Z}_2$-equivariant degree 1 map $\tilde f : M_2(k) \to M_2(k')$, i.e. $\tilde f\circ \tau= \tau' \circ \tilde f$, and a surjection $\tilde f_*: \pi_1M_2(k)\to \pi_1 M_2(k')$. If $k$ is a 2-bridge knot, $M_2(k)$ is a lens space and therefore $\pi_1(M_2(k))$ is a finite cyclic group.
Hence $\pi_1(M_2(k'))$ is finite cyclic, so by the orbifold theorem \cite{BP}, $M_2(k')$ is a lens space and $k'$ is $2$-bridge. Next suppose that $k$ is a Montesinos knot, so that $M_2(k)$ is an irreducible Seifert manifold. Again, it is known by the orbifold theorem \cite{BP} that $k'$ is Montesinos if $\pi_1M_2(k')$ is finite, so suppose otherwise. Then $\pi_1M_2(k)$ is infinite and non-cyclic, so $M_2(k)$ is a $K(\pi_1M_2(k), 1)$ space, finitely covered by a circle bundle $W$ over a closed orientable surface $F$ where the circle fibres of $W$ are the inverse images of the Seifert fibres of $M_2(k)$. The class $h$ of a regular fibre of $M_2(k)$ cannot be contained in the kernel of $\tilde f_*$, as otherwise the composition $W \to M_2(k) \to M_2(k')$ would factor through $F$, which is impossible for a non-zero degree map. Suppose that $M_2(k')$ is reducible and let $S'$ be an essential 2-sphere it contains. After a homotopy of $\tilde f$ we can suppose that the preimage $S$ of $S'$ in $M_2(k)$ is an essential surface. Now $S$ cannot be vertical, as this would imply that $h$ is contained in the kernel of $\tilde f_*$. On the other hand it cannot be horizontal in $M_2(k)$, since this would imply that the odd-order abelian group $H_1(M_2(k); {\mathbb Z})$ has a ${\mathbb Z}_2$ quotient. Thus $M_2(k')$ is irreducible. It follows that $\pi_1M_2(k')$ is torsion-free and therefore $\tilde f_*(h)$ has infinite order. But then $\pi_1M_2(k')$ has a non-trivial centre containing $\tilde f_*(h)$, so $M_2(k')$ is a Seifert manifold by \cite{CJ}, \cite{Ga2} with $\tilde f_*(h)$ a non-trivial power of the fibre class. It follows that $k'$ is either a Montesinos knot or a torus knot. The former case happens if $\tau'$ reverses the orientation of the fibres, and the latter case otherwise. Since $\tau'_*(\tilde f_*(h)) = \tilde f_*(\tau_*(h)) = \tilde f_*(h^{-1}) = \tilde f_*(h)^{-1}$, $k'$ is a Montesinos knot. \end{proof} \begin{Remark} {\rm Many Seifert manifolds are known to be minimal (see \cite{HWZ} for example), from which we can deduce that many Montesinos knots are minimal.} \end{Remark} \section{AP-property and 1-domination}\label{sec:AP} A knot $k$ has the {\em AP-property} if every closed incompressible surface embedded in its complement carries an essential closed curve homotopic to a peripheral element. (Here $AP$ refers to {\it accidental parabolic}.) Small knots (i.e.~knots whose exteriors contain no closed essential surfaces) are AP knots, but so are {\it toroidally alternating knots} \cite{Ad}, a large class which contains, for instance, all hyperbolic knots which are alternating, almost alternating, or Montesinos. A knot $k$ is {\em simple} if it is either a hyperbolic or a torus knot. This condition is equivalent to the requirement that $E(k)$ contain no essential tori. \begin{Proposition}\label{AP} Let $k$ be a knot with the AP-property. Then $k$ can dominate only a connected sum of simple knots or a cable of a simple knot. \end{Proposition} \begin{proof} Suppose that $k$ dominates a satellite knot $k'$ and let $T'$ be a JSJ torus in $E(k')$ which bounds a simple knot exterior (i.e. is innermost). Fix a degree $1$ map $f: E(k)\to E(k')$ which is transverse to $T'$. Since $E(k')$ is irreducible, $f$ can be homotoped so that each component $F$ of $f^{-1}(T')$ is incompressible in $E(k)$. The AP-property implies that some essential closed curve $\gamma \subset F$ is freely homotopic to an essential closed curve $\alpha \subset \partial E(k)$.
Up to replacing $F$ by another component of $f^{-1}(T')$, we can assume that the homotopy takes place in the outermost component of $\overline{E(k) \setminus f^{-1}(T')}$ (i.e. the component which contains $\partial E(k)$). Applying $f$ we obtain a homotopy in $W$, the outermost component of $\overline{E(k') \setminus T'}$. Since the restriction $f|: \partial E(k) \to \partial E(k')$ is a homeomorphism, $f(\alpha)$ is an essential closed curve on $\partial E(k')$, and therefore the annulus theorem \cite{JS} provides an essential annulus $A$ properly embedded in $W$ and cobounded by essential simple closed curves on $T'$ and $\partial E(k')$. We can assume that $A$ intersects the JSJ tori of $E(k')$ transversely and minimally. Then it intersects each of the JSJ pieces it passes through in essential, properly embedded annuli. It follows that these pieces are Seifert fibred. Further, since $S^3$ contains no Klein bottles, the annuli are vertical in their respective pieces, so their Seifert structures match up. Thus $W$ is Seifert fibred and hence a single JSJ piece of $E(k')$. Since $T'$ was an arbitrary innermost JSJ torus of $E(k')$, the result follows. \end{proof} \begin{Corollary}\label{connected sum} A toroidally alternating knot can dominate only a connected sum of simple knots. In particular this holds for alternating knots. \end{Corollary} \begin{proof} Let $k$ be a toroidally alternating knot which $1$-dominates a knot $k'$, say $f:E(k) \to E(k')$ is a degree $1$ map. We have assumed that $f$ restricts to a homeomorphism $\partial E(k) \to \partial E(k')$ and hence sends a meridian of $k$ to an essential simple closed curve on $\partial E(k')$ which normally generates $\pi_1E(k')$. Thus the Property P conjecture \cite{KM} shows that $f$ sends the meridian of $k$ to the meridian of $k'$. By Proposition \ref{AP}, $k'$ is either a connected sum of simple knots or a cable of a simple knot. Let $W$ denote the outermost JSJ piece of $E(k')$. Each closed incompressible surface embedded in $E(k)$ carries an essential simple closed curve homotopic to a meridian of $k$ by \cite[Corollary 3.3]{Ad}, so the proof of Proposition \ref{AP} shows that there is a homotopy in $W$ between a meridian of $k'$ and an essential loop on an innermost JSJ torus of $E(k')$. But this never occurs when $W$ is a cable space, so $k'$ is a connected sum of simple knots. \end{proof} For Montesinos knots, Corollary \ref{connected sum} follows from Proposition \ref{Montesinos}, since Montesinos knots are simple by \cite{Oe}. \section{Length of $1$-domination sequences via genus}\label{$1$-domination sequences} \begin{Definition} \noindent $\;$ \\ {\rm \noindent $(1)$ We recall that a knot is {\it small} if each incompressible closed surface in its exterior is boundary parallel. \noindent $(2)$ A Seifert surface $S$ of a knot $k$ is {\it free} if $\overline{E(k)\setminus T(S)}$ is a handlebody, where $T(S)$ is a tubular neighbourhood of $S$. A knot $k$ is {\it free} if all its incompressible Seifert surfaces are free. For example, a small knot is free. \noindent $(3)$ Define $\hat g (k)=\sup \{ g(S) \,|\, \text{$S$ is an incompressible Seifert surface for $k$}\}$, where $g(S)$ denotes the genus of the surface $S$.} \end{Definition} Classic pretzel knots with three branches are small \cite{Oe}, as are 2-bridge knots \cite{HT}. Fibred knots are free, which follows directly from the classical result that each incompressible Seifert surface of a fibred knot is isotopic to its fibre surface. In particular, $\hat g(k)=g(k)$ for fibred knots.
If $k$ has a companion of winding number zero, then $k$ is not free, since there is an incompressible Seifert surface for $k$ which is contained in the complement of the companion torus. Clearly $g(k) \le \hat g(k)\le \infty$ and it is possible that $\hat g(k)=\infty$ (see \cite{Ly}). Small knots satisfy $\hat g(k) < \infty$ by \cite{Wi}. Non-fibred examples of knots for which $\hat g(k)=g(k)$ include non-fibred 2-bridge knots (\cite{HT}). \begin{Question} Which knots $k$ in $S^3$ have bounded $\hat g(k)$ and which are free? \end{Question} \noindent Here is a construction which produces several interesting classes of free knots. \begin{Proposition} \label{small-free} A knot $k$ with the property that each closed, essential surface in $E(k)$ contains a loop which links $k$ homologically a non-zero number of times is free. \end{Proposition} \begin{proof} Suppose that $k$ is not free and choose an incompressible Seifert surface $S$ for $k$ whose complement is not a handlebody. Set $H= \overline{E(k)\setminus T(S)}$ and consider a maximal compression body $P$ for $\partial H$ in $H$. There is a decomposition $$H = P \cup V = (\partial H \times I) \cup \text{2-handles} \cup V$$ where $V$ is a not necessarily connected, compact $3$-manifold. Since $P$ is maximal, $\partial V$ either is a finite union of 2-spheres or has a component which is incompressible in $V$. In the former case, the incompressibility of $S$ and the irreducibility of $E(k)$ imply that $V$ is a finite union of $3$-balls. But then $H$ is a handlebody, contrary to our assumptions. Thus there is a component $F$ of $\partial V$ which is incompressible in $V$, and therefore in $H = V \cup (\text{1-handles})$ and in $E(k) = H \cup (S \times I)$. Since $S$ is contained in $\overline{E(k) \setminus V}$ and is not $\partial$-parallel in $E(k)$, $F$ is essential in $E(k)$. By construction, $F \cap S = \emptyset$ and so every loop on $F$ links $k$ zero times, contrary to our hypotheses. Thus $k$ must be free. \end{proof} \begin{Corollary} Small knots, alternating knots, and Montesinos knots are free. \end{Corollary} \begin{proof} Any such knot satisfies the hypotheses of Proposition \ref{small-free} and so is free. This is obvious for small knots. On the other hand, a closed essential surface in the exterior of either an alternating knot or a Montesinos knot $k$ contains a simple closed curve which is homotopic in $E(k)$ to a meridian of $k$ (\cite{Me}, \cite{Oe}), which implies the claim. \end{proof} \begin{Proposition}\label{free-free} Suppose that $k \ge k'$. \noindent $(1)$ If $k$ is free, then $k'$ is free. \noindent $(2)$ If $k$ is free and $f: E(k) \to E(k')$ is a degree 1 map such that $g(S) = g(S')$, where $S'$ and $S=f^{-1}(S')$ are incompressible Seifert surfaces for $k'$ and $k$, then $k = k'$. \noindent $(3)$ $\hat g (k) \ge \hat g (k')$, and if $k$ is free with bounded $\hat g(k)$, then $\hat g (k) = \hat g (k')$ if and only if $k = k'$. \end{Proposition} \begin{proof} (1) Let $S'$ be an incompressible Seifert surface of $k'$ with genus $g(k')$, and let $f: E(k)\to E(k')$ be a degree $1$ map, transverse to $S'$, which realizes the $1$-domination $k\ge k'$. Since $E(k')$ is irreducible, $f$ can be homotoped so that each component of $f^{-1}(S')$ is incompressible. Further, since $f$ has degree $1$, exactly one component $S$ of $f^{-1}(S')$ is a Seifert surface of $k$ and the remaining components are closed. Since $k$ is a free knot and a handlebody contains no closed incompressible surfaces, it follows that $S = f^{-1}(S')$. Let $T(S) \subset E(k)$ be a tubular neighbourhood of $S$.
Then $f$ induces a proper degree $1$ map $$f|: H = \overline{E(k) \setminus T(S)} \to \overline{E(k') \setminus T(S')} = H'.$$ \noindent Consider a maximal compression body $P'$ for $\partial H'$ in $H'$. There is a decomposition $$H' = P' \cup V' = (\partial H' \times I) \cup \text{2-handles} \cup V'$$ where $V'$ is a not necessarily connected, compact $3$-manifold. Since $P'$ is maximal, $\partial V'$ either has a component $F'$ which is incompressible in $V'$ or is a finite union of 2-spheres. In the former case, $H'$ contains a closed incompressible surface $F'$, and $f|H$ could be homotoped rel $\partial H$ to a map $g$ such that $g^{-1}(F')$ is closed and essential in $H$, contrary to the hypothesis that $H$ is a handlebody. Hence the latter case arises, and the incompressibility of $S'$ and the irreducibility of $E(k')$ imply that $V'$ is a finite union of $3$-balls. This shows that $H'$ is a handlebody, and completes the proof of (1). (2) By hypothesis, $f|: S \to S'$ is a proper degree $1$ map between homeomorphic surfaces, and as such is homotopic to a homeomorphism. Thus, after a homotopy, $f$ induces a proper degree $1$ map $$h = f|: H = \overline{E(k) \setminus T(S)} \to \overline{E(k') \setminus T(S')} = H',$$ where $H$ and $H'$ are handlebodies of genus $2g(S)$, such that $h|: \partial H \stackrel{\cong}{\longrightarrow} \partial H'$ is a homeomorphism. The latter implies that $h_*: \pi_1(H) \to \pi_1(H')$ is surjective, and as $\pi_1(H) \cong \pi_1(H')$ are free, and therefore Hopfian, $h_*$ is an isomorphism. Now apply Waldhausen's result (Theorem 13.6 of \cite{He}) to conclude that $h$ is homotopic rel $\partial H$ to a homeomorphism. Consequently, the same conclusion holds for $f: E(k)\to E(k')$. Thus $k=k'$. (3) The inequality $\hat g (k) \ge \hat g (k')$ follows from the equality of immersed and embedded genus (\cite[Corollary 6.18]{Ga}). Suppose then that $k$ is free and $\hat g (k) = \hat g (k') < \infty$. Fix a proper degree $1$ map $f: E(k) \to E(k')$ and an incompressible Seifert surface $S'$ for $k'$ with $g(S') = \hat g(k')$. The proof of part (1) shows that we can find an incompressible Seifert surface $S \subset E(k)$ for $k$ and homotope $f$ to be transverse to $S'$ with $S = f^{-1}(S')$. Then $f$ induces a proper degree $1$ map $S \to S'$ and hence $\hat g (k) \ge g(S) \geq g(S') = \hat g (k') = \hat g (k)$. Thus (2) implies that $k = k'$. \end{proof} \bigskip An immediate consequence is the following: \begin{Corollary}\label{$1$-domination length in genus} Suppose $k_0$ is a free knot and $\hat g(k_0)$ is bounded. Then for any $1$-domination sequence $k_0> k_1> \cdots > k_n$, we have $n +\hat g(k_n)\le \hat g(k_0)$. In particular the length of the sequence is at most $\hat g(k_0)$. \end{Corollary} \section{Alexander invariant}\label{Alexander} The proof of the following proposition is styled on classic arguments \cite{Br}. \begin{Proposition}\label{splitting} If $k_1\ge k_2 $, then $\Lambda_{k_1}=\Lambda_{k_2}\oplus \Lambda$ where $\Lambda$ is a $\mathbb Z[t^{\pm 1}]$-module. In particular $\Delta_{k_2}$ divides $\Delta_{k_1}$. \end{Proposition} \begin{proof} Let $\tilde E(k_i)$ be the infinite cyclic covering of $E(k_i)$ and let $t_i$ be a generator of its deck transformation group. Then $f: E(k_1)\to E(k_2)$ lifts to a proper degree $1$ map $\tilde f: \tilde E(k_1)\to \tilde E(k_2)$.
We have induced homomorphisms $\tilde f_*: H_1(\tilde E(k_1);\mathbb Q)\to H_1(\tilde E(k_2);\mathbb Q)$ and $\tilde f^*: H^1(\tilde E(k_2);\mathbb Q)\to H^1(\tilde E(k_1);\mathbb Q)$. Since knot complements have the homology of the circle, Assertion 5 of \cite{Mi} shows that $H_*(\tilde E(k_1);\mathbb Q)$ is finite dimensional over $\mathbb Q$. For each $i$ let $u_i$ be the fundamental class of $H_2(E(k_i), \partial E(k_i); \mathbb Q)$. There is a duality isomorphism $P_i=u_i\cap : H^1(\tilde E(k_i); \mathbb Q)\to H_1(\tilde E(k_i); \mathbb Q)$; see \cite[Assertion 9 and Section 4]{Mi}. Let $\alpha : H_1(\tilde E(k_2); \mathbb Q)\to H_1(\tilde E(k_1); \mathbb Q)$ be given by $\alpha(x)=u_1\cap \tilde f^*(P^{-1}_2(x))$ for each $x\in H_1(\tilde E(k_2); \mathbb Q)$. Then $$ \tilde f_*\alpha(x) = \tilde f_* \left(u_1\cap \tilde f^*(P^{-1}_2(x))\right) =\tilde f_* (u_1)\cap P^{-1}_2(x)=u_2 \cap P^{-1}_2(x)=x,$$ so $\tilde f_*\alpha$ is the identity on $H_1(\tilde E(k_2); \mathbb Q)$. It follows that $$ H_1(\tilde E(k_1);\mathbb Q)\cong H_1(\tilde E(k_2);\mathbb Q) \oplus \ker \tilde f_*.$$ Next we prove that an analogous splitting holds over $\mathbb Z$. Since $H_1(\tilde E(k_i);\mathbb Z)$ is torsion free, there is an inclusion $\tau_*: H_1(\tilde E(k_i);\mathbb Z) \to H_1(\tilde E(k_i);\mathbb Q)$, and since both $\tilde f_*$ and $\alpha$ preserve integer homology, the restriction $\tilde f_* \alpha|H_1(\tilde E(k_2); \mathbb Z)$ is the identity. It follows that $$ H_1(\tilde E(k_1);\mathbb Z)\cong H_1(\tilde E(k_2);\mathbb Z) \oplus \ker \tilde f_*|.$$ It is also easy to see that $\tilde f_* t_1 = t_2 \tilde f_*$ and $\alpha t_2 = t_1 \alpha$. Hence the splitting above gives the desired splitting of $\mathbb Z[t^{\pm 1}]$-modules. \end{proof} An immediate consequence of Proposition \ref{splitting} is that $k_1 \geq k_2$ implies that $\Delta_{k_2}$ divides $\Delta_{k_1}$. This follows from the fact that if $k$ is a knot, then $H_1(\tilde E(k);\mathbb Q)\cong \Gamma/(p_1(t)) \oplus \ldots \oplus \Gamma/(p_n(t))$ where $p_1(t), \ldots, p_n(t) \in \Gamma = \mathbb Q[t^{\pm 1}]$ and $p_1(t) \cdots p_n(t) = \Delta_{k}$. Thus if $\Delta_{k_{1}}$ and $\Delta_{k_{2}}$ have the same degree, then $\Delta_{k_{1}} = \pm \Delta_{k_{2}}$. One might hope to use band-connected sum and Murasugi sum to produce examples of 1-dominance; see \cite{Ka2} for definitions. The following direct application of Proposition \ref{splitting} shows that this fails in general. \begin{Example} {\rm Figure 2 shows a band connected sum $k$ of the trefoil knot $3_1$ and the trivial knot with $\Delta_k(t)=1-t^2+t^4$, which does not have $\Delta_{3_1}(t)=1-t+t^2$ as a factor. It follows that a band connected sum does not $1$-dominate its factors in general.} \end{Example} \begin{center}% \includegraphics[totalheight=5cm]{fig1.eps}% \begin{center}% Figure 2 \end{center} \end{center} \begin{Example} {\rm Figure 3 shows a Murasugi sum $k$ of $5_2$ and $4_1$ with $\Delta_k(t)=2-3t+3t^2-3t^3+2t^4$, which contains neither $\Delta_{4_1}(t)=1-3t+t^2$ nor $\Delta_{5_2}(t)=2-3t+2t^2$ as a factor.
It follows that Murasugi sum does not, in general, $1$-dominate its factors.} \end{Example} \begin{center}% \includegraphics[totalheight=7cm]{fig2.eps}% \begin{center}% Figure 3 \end{center} \end{center} Referring to the definition of satellite knots in Section \ref{satellite}, if the winding number of $k_p$ in $V$ is $\pm 1$, then there is a proper degree $1$ map $g: V\setminus N(k_p)\to S^1\times S^1\times [0,1]$ which is a homeomorphism on the boundaries (see \cite{Du}). This provides a $1$-domination $k_s \ge k_c$. The next example shows that $k_s \ge k_c$ need not hold without the assumption of winding number $\pm 1$, as $\Delta_{k_c}$ does not divide $\Delta_{k_s}$. \begin{Example}\label{cable} {\rm Let $k_s$ be the $(2,3)$ cable of the figure-eight knot $k_c =4_1$. That is, $k_s$ is the satellite of $k_c$ defined by the pattern knot $k_p = 3_1$, a trefoil, with winding number 2 in the solid torus $V$, as in the description above. The Alexander polynomials of these knots are $$ \Delta_{k_p} = 1 - t + t^2, \quad \Delta_{k_c} = 1 -3t +t^2,$$ $$ \Delta_{k_s} = (1 - t - t^2)(1 - t +t^2)(1 + t - t^2).$$ } \end{Example} \begin{Remark} {\rm In \cite[p.463]{Wan}, it is asked whether Jones polynomials provide an obstruction to $1$-domination. In Example \ref{cable}, the Jones polynomial is $$V_{k_s} = t^{-5} - t^{-4} +t + t^3 -t^4 - t^7 +t^8,$$ which is irreducible, and certainly does not have $V_{k_p}$ as a factor, despite the 1-domination $k_s \geq k_p$. Therefore the Jones polynomial does not reflect 1-dominance in a manner analogous to the Alexander or A-polynomials. The same may be said of the HOMFLYPT polynomial.} \end{Remark} As a final topic in this section, we apply Gordon's approach to ribbon concordance \cite{Go} to prove certain $1$-domination rigidity results in terms of Alexander polynomials. Let $G$ be a group and let $H\subset G$ be a subgroup. Assume that $p$ is a fixed integer either equal to $0$ or a prime number. Define $G \natural H$ to be the subgroup of $G$ generated by all the elements of the form $[x,y]z^p$ for $x \in G$ and $y, z \in H$. The {\it lower $p$-central series} of $G$ is defined as follows: $G_0 = G$, $G_{\alpha +1} = G \natural G_{\alpha}$, and $G_{\beta} = \cap_{\alpha <\beta}G_{\alpha}$ if $\beta$ is a limit ordinal. We say that $G$ is {\it transfinitely $p$-nilpotent} if $G_{\alpha} = \{1\}$ for some ordinal $\alpha$. In particular the group is residually $p$-nilpotent (or residually $p$ for short) if and only if $G_{\omega} = \{1\}$. \begin{Definition} {\rm (\cite{Go}) A knot $k \subset S^3$ is {\it transfinitely $p$-nilpotent} if its commutator subgroup $[\pi_1E(k),\pi_1E(k)]$ is transfinitely $p$-nilpotent.} \end{Definition} \medskip The class of transfinitely $p$-nilpotent knots contains $2$-bridge knots, fibred knots and, when $p > 0$, alternating knots $k$ for which the leading coefficient of $\Delta_{k}$ is a power of $p$. Moreover it has been observed by Gordon that the property of being transfinitely $p$-nilpotent is preserved by connected sum and cabling; see \cite{Go}. For a polynomial $P$, we use $d^o(P)$ to denote the degree of $P$. The following proposition is essentially \cite[Lemma 3.4]{Go}. \begin{Proposition}\label{nilpotent} Let $k_1$ and $k_2$ be two knots in $S^3$ such that $k_1 \geq k_2$. If $k_1$ is transfinitely $p$-nilpotent for some $p$ and $d^{o}(\Delta_{k_{1}}) = d^{o}(\Delta_{k_{2}})$, then $k_1 = k_2$.
\end{Proposition} \begin{proof} The proper degree $1$ map $f: E(k_1) \to E(k_2)$ induces an epimorphism $f_*: \pi_1E(k_1) \to \pi_1E(k_2)$. It induces an epimorphism $\hat f_* :[\pi_1E(k_1),\pi_1E(k_1)] \to [\pi_1E(k_2),\pi_1E(k_2)]$. For a knot $k \subset S^3$ it is well-known that $H_1([\pi_1E(k),\pi_1E(k)];{\mathbb Z})$ is torsion-free and $H_2([\pi_1E(k),\pi_1E(k)];{\mathbb Z}) = 0$. Thus $H_2([\pi_1E(k),\pi_1E(k)];{\mathbb F}_p) = \{0\}$ where ${\mathbb F}_p = \mathbb Q$ when $p = 0$ and ${\mathbb Z}/p{\mathbb Z}$ otherwise. It is also known that for a field $\mathbb F$, $\text{rank}\,(H_1([\pi_1E(k),\pi_1E(k)];{\mathbb F})) = d^{o}(\Delta_{k})$. Therefore our hypotheses imply that the epimorphism $\hat f_*$ induces an isomorphism $$\hat f_{\sharp}: H_1([\pi_1E(k_1),\pi_1E(k_1)];{\mathbb F}_p) \to H_1([\pi_1E(k_2),\pi_1E(k_2)];{\mathbb F}_p).$$ Stallings' theorem \cite[Theorem 3.4]{Sta} implies that for every ordinal $\alpha$, $\hat f_*$ induces an isomorphism {\small $$[\pi_1E(k_1),\pi_1E(k_1)]/[\pi_1E(k_1),\pi_1E(k_1)]_{\alpha} \to [\pi_1E(k_2),\pi_1E(k_2)]/[\pi_1E(k_2),\pi_1E(k_2)]_{\alpha}.$$} \noindent By hypothesis we have $[\pi_1E(k_1),\pi_1E(k_1)]_{\alpha} = \{1\}$ for some ordinal $\alpha$, so the epimorphism $\hat f_* :[\pi_1E(k_1),\pi_1E(k_1)] \to [\pi_1E(k_2),\pi_1E(k_2)]$ is in fact an isomorphism. Since $f$ induces an isomorphism $f_{\sharp} : H_1(\pi_1E(k_1);{\mathbb Z}) \to H_1(\pi_1E(k_2);{\mathbb Z})$, it follows that the epimorphism $f_*: \pi_1E(k_1) \to \pi_1E(k_2)$ is an isomorphism. Finally, since this isomorphism preserves the peripheral structures of the two knots, the two knots are the same by Waldhausen's theorem (see \cite[Chapter 13]{H}) and \cite{GL}. \end{proof} Since alternating knots $k$ for which the leading coefficient of $\Delta_{k}$ is a power of $p$ are transfinitely $p$-nilpotent, the following are straightforward consequences of Propositions \ref{splitting} and \ref{nilpotent}: \begin{Corollary}\label{1-domination alternating} Suppose that $k_1\ge k_2$ where $k_1$ is an alternating knot such that the leading coefficient of $\Delta_{k_{1}}$ is a power of a prime number and $d^{o}(\Delta_{k_{1}}) = d^{o}(\Delta_{k_{2}})$. Then $k_1 = k_2$. \end{Corollary} \begin{Corollary}\label{sequence alternating} Suppose $k_0$ is an alternating knot such that the leading coefficient of $\Delta_{k_{0}}$ is a power of a prime number. Then any $1$-domination sequence $k_0> k_1> \cdots > k_n > \cdots$ contains at most $d^{o}(\Delta_{k_{0}})$ alternating knots. \end{Corollary} \begin{Question}\label{Q-nilponent} Is each alternating knot transfinitely $p$-nilpotent? \end{Question} \begin{Question}\label{alternation} Suppose that $k$ is alternating. Does $k\ge k'$ imply that $k'$ is alternating? \end{Question} \begin{Remark} $\;$ \\ {\rm (1) A positive answer to Question \ref{Q-nilponent} implies that if $k_1$ is an alternating knot, $k_1\ge k_2$, and $d^{o}(\Delta_{k_{1}}) = d^{o}(\Delta_{k_{2}})$, then $k_1 = k_2$. \noindent (2) A positive answer to both Question \ref{Q-nilponent} and Question \ref{alternation} implies that any $1$-domination sequence of knots starting with an alternating knot $k$ has length at most $d^{o}(\Delta_{k})$.} \end{Remark}
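\medskip \noindent The divisibility obstruction from Proposition \ref{splitting} that drives the examples above is easy to check by computer algebra. The following is a minimal Python sketch (an illustration only, assuming the \texttt{sympy} library) verifying each of the claimed non-divisibilities of Alexander polynomials.
\begin{verbatim}
# Check the divisibility obstruction: if k_1 >= k_2 then Delta_{k_2}
# must divide Delta_{k_1}.  Each check below is expected to print False.
from sympy import symbols, rem, expand

t = symbols('t')

def divides(p, q):
    # True iff p divides q in Q[t] (zero remainder under division)
    return rem(expand(q), expand(p), t) == 0

# Band connected sum (Figure 2): 1 - t + t^2 vs 1 - t^2 + t^4
print(divides(1 - t + t**2, 1 - t**2 + t**4))

# Murasugi sum (Figure 3): 2 - 3t + 3t^2 - 3t^3 + 2t^4
print(divides(1 - 3*t + t**2, 2 - 3*t + 3*t**2 - 3*t**3 + 2*t**4))
print(divides(2 - 3*t + 2*t**2, 2 - 3*t + 3*t**2 - 3*t**3 + 2*t**4))

# Cable example: Delta_{k_c} = 1 - 3t + t^2 vs the product form of Delta_{k_s}
D_ks = expand((1 - t - t**2) * (1 - t + t**2) * (1 + t - t**2))
print(divides(1 - 3*t + t**2, D_ks))
\end{verbatim}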
{ "timestamp": "2015-11-24T02:13:02", "yymm": "1511", "arxiv_id": "1511.07073", "language": "en", "url": "https://arxiv.org/abs/1511.07073", "abstract": "We say that a knot $k_1$ in the $3$-sphere {\\it $1$-dominates} another $k_2$ if there is a proper degree 1 map $E(k_1) \\to E(k_2)$ between their exteriors, and write $k_1 \\ge k_2$. When $k_1 \\ge k_2$ but $k_1 \\ne k_2$ we write $k_1 > k_2$. One expects in the latter eventuality that $k_1$ is more {\\it complicated}. In this paper we produce various sorts of evidence to support this philosophy.", "subjects": "Algebraic Topology (math.AT); Geometric Topology (math.GT)", "title": "1-domination of knots", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759654852754, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405701803324 }
https://arxiv.org/abs/2101.08777
Limit Processes and Bifurcation Theory of Quasi-Diffusive Perturbations
The bifurcation theory of ordinary differential equations (ODEs), and its application to deterministic population models, are by now well established. In this article, we begin to develop a complementary theory for diffusion-like perturbations of dynamical systems, with the goal of understanding the space and time scales of fluctuations near bifurcation points of the underlying deterministic system. To do so we describe the limit processes that arise in the vicinity of the bifurcation point. In the present article we focus on the one-dimensional case.
\section{Introduction} A broad class of individual-based stochastic population models, under suitable mixing assumptions, can be interpreted as diffusion-like perturbations of an underlying smooth dynamical system. The general framework is known as density-dependent Markov chains \cite{ddmc}; examples include chemical reactions \cite{vk}, infection spread \cite{primer}, population genetics \cite{kimura} and evolutionary games \cite{popovic}. In each case, there is a system size parameter $N$ and functions $F$ and $G$ such that, letting $\epsilon=1/\sqrt{N}$, for small $\epsilon>0$, trajectories of the vector of population densities $x_\epsilon \in \mathbb{R}_+^d$ resemble solutions to a stochastic differential equation (SDE) of the form \begin{align}\label{eq:QDP} dx = F(x)\,dt + \epsilon\, \sqrt{G(x)}\,dB, \end{align} where $B$ is a $d$-dimensional standard Brownian motion and $\sqrt{G(x)}$ is the square root of the positive (semi)-definite matrix $G(x)$. The interpretation is made rigorous through limit theorems (see \cite{ddmc}) such as \begin{enumerate}[noitemsep,label={(\roman*)}] \item \textbf{the law of large numbers}: letting $\phi_t$ denote the solution flow of $x'=F(x)$, $$\text{if} \ \ x_\epsilon(0) \to x_0 \ \ \text{as} \ \ \epsilon \to 0 \ \ \text{then for fixed} \ \ T>0, \ \ \sup_{t \le T}|x_\epsilon(t)-\phi_t(x_0)| \stackrel{\text{p}}{\to} 0 \ \ \text{as} \ \ \epsilon \to 0.$$ \item \textbf{the central limit theorem}: letting $Y_\epsilon(t)= \epsilon^{-1}(x_\epsilon(t)-\phi_t(x_0))$ and $(Y(t))$ denote the solution to the initial-value problem $$Y(0) = Y_0 \ \ \text{and} \ \ dY = DF(\phi_t(x_0))Ydt + \sqrt{G(\phi_t(x_0))}dB,$$ if $x_\epsilon(0) \to x_0$ and $Y_\epsilon(0) \to Y_0$ as $\epsilon \to 0$ then $Y_\epsilon \stackrel{\text{d}}{\to} Y$ as $\epsilon \to 0$. \end{enumerate} Briefly, on the natural time scale and on the population density scale, such processes resemble solutions to the deterministic system $x'=F(x)$, with random fluctuations of size $O(\epsilon)$. When $F$ and $G$ are non-degenerate, this description gives an accurate sense of the typical behaviour of sample paths. For example, if $x_\star$ is a linearly stable equilibrium point of the deterministic system $x'=F(x)$, $G(x_\star) \ne 0$ and $x_\epsilon(0)=x_\star+O(\epsilon)$, then fluctuations in $x_\epsilon(t)-x_\star$ of size $\epsilon$ are observed on the natural time scale, and larger than $O(\epsilon)$ fluctuations occur only as the result of brief, rare excursions, as described by the theory of moderate to large deviations (see for example \cite{fw}).\\ On the other hand, when either $F$ or $G$ is degenerate the description given by (i)-(ii) above becomes uninformative. For example, if $x_\star$ is stable but non-hyperbolic in the sense that $DF(x_\star)$ is non-invertible, then as we will see, larger than $O(\epsilon)$ fluctuations are observed on a longer time scale, while if $G(x_\star)=0$, fluctuations can still be non-zero, and can be large or small depending on $F$. This has already been observed in particular cases, such as the SIS \cite{crit-scale} and SIR \cite{martinlof} models of infection spread. In both references, $\alpha_1,\alpha_2,\alpha_3>0$ are found such that if $x_\epsilon(0)=x_\star + \epsilon^{\alpha_1}$ and $\lambda=\lambda_\star + \epsilon^{\alpha_2}$, then $\epsilon^{-\alpha_1}(x_\epsilon(\epsilon^{-\alpha_3}t)-x_\star)$ converges in distribution as $\epsilon \to 0$ to a diffusion. 
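\noindent To make the preceding scaling statements concrete, here is a minimal simulation sketch (an illustration only, assuming the \texttt{numpy} library; the logistic choice $F(x)=x-x^2$, $G(x)=x$ is one convenient non-degenerate example): Euler--Maruyama paths of \eqref{eq:QDP} around the linearly stable equilibrium $x_\star=1$ exhibit empirical fluctuations of size proportional to $\epsilon$, as the central limit theorem above predicts.
\begin{verbatim}
# Minimal Euler-Maruyama sketch of dx = F(x)dt + eps*sqrt(G(x))dB in one
# dimension, illustrating O(eps) fluctuations near a stable equilibrium.
import numpy as np

def euler_maruyama(F, G, x0, eps, T=20.0, dt=1e-3, seed=0):
    """Simulate one path of dx = F(x)dt + eps*sqrt(G(x))dB on [0, T]."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        noise = eps * np.sqrt(max(G(x[k]), 0.0)) * rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = max(x[k] + F(x[k]) * dt + noise, 0.0)  # keep density >= 0
    return x

F = lambda x: x - x**2  # logistic drift: stable equilibrium at x_star = 1
G = lambda x: x         # demographic-noise-type diffusion coefficient

for eps in (0.1, 0.01):
    path = euler_maruyama(F, G, x0=1.0, eps=eps)
    # Discard the first half as burn-in; the empirical standard deviation
    # scales linearly in eps (for this example, roughly eps/sqrt(2)).
    print(eps, np.std(path[len(path) // 2:]))
\end{verbatim}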
Our aim is to develop a sufficiently general theory that we can accurately describe all limits of this kind, subject only to existence of a Taylor expansion for $F$ and $G$ near bifurcation points of the deterministic approximation.\\ We refer to the models under consideration as \emph{quasi-diffusive perturbations} or QDPs; a precise definition is given in Section \ref{sec:def}. In this article, we develop the theory of limit processes for QDPs, including both degenerate and parametrized systems. This enables us to accurately describe the spatial (i.e., population density) and temporal scales of fluctuations of $(x_\epsilon)$ in a neighbourhood of bifurcation points $(x_\star,\lambda_\star)$ of the deterministic approximation $x'=F(x,\lambda)$, as a function of the Taylor expansion of $F$ and $G$.\\ As a result of this theory we obtain enhanced versions of the usual deterministic bifurcation diagrams, since they also account for the effect of the stochastic terms. Because of the added complexity of this endeavour, here we mostly restrict our attention to the one-dimensional case, i.e., $x \in \mathbb{R}$. It should also be noted that, although it deals with stochastic processes and treats the same types of bifurcations, our theory bears no direct resemblance to the stochastic bifurcation theory of random dynamical systems \cite{rds}. In the next section we describe our approach and the main results, and give an overview of the rest of the article. \section{Overview and main results} In this section we give an informal overview of our approach and main results. Precise statements are deferred to the section in which they appear, as a good deal of exposition is required to state them.\\ \noindent\textbf{Assumption: strong stochasticity. }We begin by noting an important assumption of the theory, which is that locally, $F=O(G)$, i.e., $|F|$ is uniformly bounded by a multiple of $|G|$ in a neighbourhood of $(x_\star,\lambda_\star)$; we refer to this assumption as \emph{strong stochasticity}. This ensures that diffusion tends to dominate at small scales, i.e., small values of $x-x_\star$, while drift dominates at large scales. All density-dependent Markov chains are strongly stochastic, a fact that is not hard to show but is deferred to a later work.\\ \noindent\textbf{Approach. }Given a point $(x_\star,\lambda_\star)$, our approach is to consider all rescalings of the form \begin{align}\label{eq:Yresc} Y_\epsilon(t) = a_\epsilon (x_\epsilon(b_\epsilon t; \lambda_\epsilon)-x_\star) \end{align} with $a_\epsilon \to\infty$ and $\lambda_\epsilon \to \lambda_\star$ as $\epsilon \to 0$, then to find all choices of $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$ for which $(Y_\epsilon)$ converges to the solution of a non-trivial ordinary or stochastic differential equation (ODE/SDE). We refer to such sequences as \emph{limit scales} for the family $(x_\epsilon)$. We consider not only the case where $x_\star$ is a constant but also the case where $x_\star=x_\star(\lambda)$ is a non-constant equilibrium branch of $F$, i.e., is such that $F(x_\star(\lambda),\lambda)=0$.
To fix ideas, we begin with the former case.\\ When $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$ is a limit scale, the limiting equation for $(Y_\epsilon)$ has the form \begin{align}\label{eq:lim} dY = \tilde F(Y)dt + \tilde G(Y)dB \end{align} with \begin{align}\label{eq:limproc} \tilde F(x) &:= \lim_{\epsilon \to 0}a_\epsilon b_\epsilon F(x_\star + x/a_\epsilon,\lambda_\epsilon) \ \text{and} \nonumber \\ \tilde G(x) &:= \lim_{\epsilon \to 0}\epsilon^2 a_\epsilon^2 b_\epsilon G(x_\star + x/a_\epsilon,\lambda_\epsilon). \end{align} The multipliers $a_\epsilon b_\epsilon$ and $\epsilon^2 a_\epsilon^2 b_\epsilon$ arise from the fact that drift scales linearly in space while diffusion scales quadratically, and both scale linearly in time.\\ To simplify the discussion, assume $(x_\star,\lambda_\star)=(0,0)$ which can be achieved by translation, and focus on the rectangle $(x,\lambda)\in[0,1]^2$ in the first quadrant; behaviour on other quadrants follows in the same way.\\ \noindent\textbf{Requirements for a limit scale. } $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$ is a limit scale iff \begin{enumerate}[noitemsep,label={(\roman*)}] \item \textbf{shape:} The shape of $F(\cdot/a_\epsilon,\lambda_\epsilon)$ and $G(\cdot/a_\epsilon,\lambda_\epsilon)$ is stable as $\epsilon \to 0$, i.e., there are sequences $(f_\epsilon),(g_\epsilon)$ such that as $\epsilon \to 0$, we have locally uniform convergence in $x$ of the functions $$\frac{1}{f_\epsilon}\,F(x/a_\epsilon,\lambda_\epsilon)\quad \text{and} \quad \frac{1}{g_\epsilon}\, G(x/a_\epsilon,\lambda_\epsilon),$$ \item \textbf{ratio:} the rescaled drift to diffusion ratio \begin{align}\label{eq:rat} \frac{F(x/a_\epsilon,\lambda_\epsilon)}{\epsilon^2 a_\epsilon G(x/a_\epsilon,\lambda_\epsilon)} \end{align} converges either to $0$, a non-trivial function of $x$, or $\infty$, and\\ \item \textbf{time scale:} $(b_\epsilon)$ is chosen just large enough that one or both of $\tilde F$, $\tilde G$ is not identically zero. \end{enumerate} For sequences $(c_n),(d_n)$, say that $c_n \ll d_n$, $c_n \asymp d_n$ or $c_n \gg d_n$ if $\lim_{n\to\infty} c_n/d_n$ exists and is equal to $0$, a number in $(0,\infty)$, or $\infty$, respectively. Say that the ratio of $(c_n)$ to $(d_n)$ is stable if one of $c_n\ll d_n$, $c_n\asymp d_n$ or $c_n\gg d_n$ holds. Say that $(c_n)$ is stable if the ratio of $(c_n)$ to 1 is stable, i.e., if $\lim_{n\to\infty}c_n$ exists in $[0,\infty]$.\\ \noindent\textbf{Resolution of requirements. }Together, (i)-(ii) determine $(a_\epsilon,\lambda_\epsilon)$, while (iii) implies that $(b_\epsilon)$ is uniquely determined, modulo $\asymp$, from $(a_\epsilon,\lambda_\epsilon)$. (i)-(ii) are resolved as follows. \begin{enumerate}[noitemsep,label={(\roman*)}] \item As explained in Section \ref{sec:domterms}, there are finite sets $M_F,M_G \subset (0,\infty)\cap \mathbb{Q}$ such that on each quadrant of $\mathbb{R}^2$, the shape of $F(\cdot/a_\epsilon,\lambda_\epsilon)$ (respectively, $G(\cdot/a_\epsilon,\lambda_\epsilon$)) is stable as $\epsilon \to 0$ iff the ratio of $1/a_\epsilon$ to $\lambda_\epsilon^m$ is stable for every $m\in M_F$ ($m\in M_G$). 
If $(1/a_\epsilon,\lambda_\epsilon)$ are viewed as points in the $(x,\lambda)$ plane, regions of stability correspond to conditions such as, for example, $\lambda \ll x \ll \sqrt{\lambda}$, or $x\asymp \lambda$.\\ \item In Section \ref{sec:const-equil}, \eqref{eq:dd-fcn}, we define a drift-diffusion ratio function $r(x,\lambda)$ that is easy to write down and morally satisfies $r(x,\lambda) \asymp |xF(x,\lambda)|/|G(x,\lambda)|$ as $(x,\lambda)\to (0,0)$. More precisely, on regions of the form $\lambda^{m} \le x \le \lambda^{m'}$ for consecutive $m,m'\in M_F\cup M_G$, where $F(x,\lambda)\approx x^{\alpha_1}\lambda^{\alpha_2}$ and $G(x,\lambda)\approx x^{\beta_1}\lambda^{\beta_2}$ for some $\alpha,\beta \in \mathbb{N}^2$, $r(x,\lambda)$ is defined as $x^{1+\alpha_1-\beta_1}\lambda^{\alpha_2-\beta_2}$. \end{enumerate} One of our main results, expressed in various contexts by Theorems \ref{thm:iso-limits}, \ref{thm:prmtrzd-limits} and \ref{thm:eq-branch}, is that if $(b_\epsilon)$ is chosen correctly relative to $(a_\epsilon)$ and $(\lambda_\epsilon)$, then \eqref{eq:limproc} holds with \begin{enumerate}[noitemsep,label={(\alph*)}] \item $\tilde F=0$ and $\tilde G\ne 0$, if the shape of $F(\cdot/a_\epsilon,\lambda_\epsilon)$ is stable and $r(1/a_\epsilon,\lambda_\epsilon)\ll \epsilon^2$, \item $\tilde F\ne 0$ and $\tilde G\ne 0$, if the shape of $F(\cdot/a_\epsilon,\lambda_\epsilon)$ and $G(\cdot/a_\epsilon,\lambda_\epsilon)$ is stable and $r(1/a_\epsilon,\lambda_\epsilon) \asymp \epsilon^2$, \item $\tilde F \ne 0$ and $\tilde G=0$, if the shape of $G(\cdot/a_\epsilon,\lambda_\epsilon)$ is stable and $r(1/a_\epsilon,\lambda_\epsilon) \gg \epsilon^2$. \end{enumerate} Cases (a) and (c) are respectively called the \emph{pure diffusive range} and the \emph{deterministic range}, while the intermediate region, case (b), is called the \emph{drift-diffusion scale}, or dd scale for short.\\ \noindent\textbf{Summary of limit scales. }To summarize, $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$ is a limit scale if \begin{itemize}[noitemsep] \item the ratio of $r(1/a_\epsilon,\lambda_\epsilon)$ to $\epsilon^2$ is stable, \item the ratio of $1/a_\epsilon$ to $\lambda_\epsilon^m$ is stable for each relevant $m$, i.e., \begin{enumerate}[noitemsep,label={(\alph*)}] \item for every $m\in M_F$ if $\lim_{\epsilon \to 0}\epsilon^{-2} r(1/a_\epsilon,\lambda_\epsilon)>0$ and \item for every $m\in M_G$ if $\lim_{\epsilon \to 0}\epsilon^{-2} r(1/a_\epsilon,\lambda_\epsilon)<\infty$, and \end{enumerate} \item $(b_\epsilon)$ is chosen correctly relative to $(a_\epsilon)$ and $(\lambda_\epsilon)$. \end{itemize} \noindent\textbf{Partition into limit classes. }The above characterization partitions limit scales according to whether each of the relevant ratios (those of $r$ to $\epsilon^2$ and of $1/a_\epsilon$ to $\lambda_\epsilon^m$) is $\ll 1$, $\asymp 1$ or $\gg 1$. Moreover, the partition elements, that we call \emph{limit classes}, are also the equivalence classes for the relation $(\tilde F_1,\tilde G_1)\sim (\tilde F_2,\tilde G_2)$ if $\tilde F_1=c_1\tilde F_2$ and $\tilde G_1=c_2\tilde G_2$ for positive constants $c_1,c_2$. This is easily verified from the following two facts: \begin{itemize}[noitemsep] \item $\tilde F=0$ iff $r(1/a_\epsilon,\lambda_\epsilon) \ll \epsilon^2$ and $\tilde G=0$ iff $r(1/a_\epsilon,\lambda_\epsilon) \gg \epsilon^2$, and \item the iff statement above relating stable shape of $F,G$ to stable ratios. \end{itemize} \noindent\textbf{Graphical depiction of limit classes. 
}We can depict the above partition nicely in the $(x,\lambda)$ plane, by replacing $\ll,\asymp,\gg$ with $\le,=,\ge$. First, define the \emph{transition curves} and the \emph{drift-diffusion curve} as follows: \begin{itemize}[noitemsep] \item For $m\in(0,\infty)$ let $\Gamma_m=\{(x,\lambda)\colon x=\lambda^m\}$. The transition curves are then $\{\Gamma_m \colon m\in M_F\cup M_G\}$. \item The drift-diffusion curve is the family of curves $(\Phi_\epsilon)_\epsilon$ defined by $$\Phi_\epsilon = \{(x,\lambda)\colon r(x,\lambda)=\epsilon^2\}.$$ \end{itemize} To depict the partition, draw the following lines. \begin{enumerate}[noitemsep,label={\arabic*.}] \item Draw the dd curve. \item For each $m\in M_F$, add $\Gamma_m \cap \{(x,\lambda)\colon r(x,\lambda)>\epsilon^2\}$. \item Then, for each $m\in M_G$, add $\Gamma_m \cap \{(x,\lambda)\colon r(x,\lambda)<\epsilon^2\}$. \end{enumerate} The result (for each $\epsilon$) is a connected union of curves $\Psi_\epsilon$, consisting of the dd curve with some or all of each transition curve branching off from it. For each $\epsilon$, define a partition of $[0,1]^2$ by taking as its elements \begin{enumerate}[noitemsep,label={\arabic*.}] \item the branch points, i.e., each singleton set $p(m):=\Gamma_m \cap \Phi_\epsilon$, for $m\in M_F\cup M_G$, \item the edges, i.e., each connected component of $\Psi_\epsilon \setminus \bigcup_m p(m)$, and \item the regions, i.e., each connected component of $[0,1]^2 \setminus \Psi_\epsilon$. \end{enumerate} The set of partition elements is then in $1:1$ correspondence with the limit classes, restricted to $(1/a_\epsilon,\lambda_\epsilon)\in[0,1]^2$, in the manner described above. An example is shown in Figure \ref{fig:classex} for $F(x,\lambda)=\lambda x-x^2$ and $G(x,\lambda)=x$, corresponding to a transcritical bifurcation. The dd curve cuts transversely across the transition curve, which in this case coincides with the positive equilibrium branch of $F$, and separates the pure diffusive range from the deterministic range; as long as $F=O(G)$, the pure diffusive range is on the side nearest to $(0,0)$. In the above example the dd curve is the graph of a family of functions $\lambda\mapsto\phi_\epsilon(\lambda)$; as described in Theorem \ref{thm:dd-curve}, in some cases it can develop folds, as shown in Figure \ref{fig:uprex2}.\\ \begin{figure} \center \includegraphics[width=2.5in]{CMS1.png} \includegraphics[width=2.5in]{CMS2.png} \caption{\emph{Left:} Plot of equilibria (black) with transition curve in solid black, and drift-diffusion curve (blue) with $\epsilon=0.4$ on $[0,1]^2$ with $\lambda$ on the horizontal axis, for the case $F(x,\lambda)=\lambda x-x^2$ and $G(x)=x$. \emph{Right:} the union of curves $\Psi_\epsilon$, for the same $F,G$ and $\epsilon$.} \label{fig:classex} \end{figure} \noindent\textbf{Equilibrium branch. }In the example of Figure \ref{fig:classex}, it turns out there are fluctuations around the non-constant equilibrium branch that are not captured by the above analysis, as the latter takes $x=0$ as its point of reference. To capture these, we replace $x_\star$ with $x_\star(\lambda_\epsilon)$ in \eqref{eq:Yresc} and identify the limit scales, using the same method as above. For simplicity we assume that $F$ has a simple equilibrium branch, i.e., there is a function $x_\star(\lambda)$ such that $F(x_\star(\lambda),\lambda)=0$ and $x_\star(\lambda)$ is a simple root of $F(\cdot,\lambda)$ for each $\lambda$, which allows us to work with the linearization of $F$. 
The approximation is useful when $\lambda \gg \lambda_\star(\epsilon)$, the intersection point of the dd curve $\Phi_\epsilon$ with the equilibrium branch; this is also the range of $\lambda$ values for which the analysis around $x=0$ loses information about fluctuations around $x_\star(\lambda)$. As in the previous analysis, there is a pure diffusive range centered around $x_\star(\lambda)$ and a deterministic range further out, separated by a drift-diffusion scale whose width we denote by $\phi_\epsilon^\star(\lambda)$. A hybrid plot, showing the dd curves around both $0$ and $x_\star(\lambda)$, is given in Figure \ref{fig:transcrit}. This is complemented by numerical simulations of the logistic Markov chain \begin{align}\label{eq:logisMC} X \to \begin{cases} X+1 & \text{at rate} \ (1+\lambda)X, \\ X-1 & \text{at rate} \ X + X^2/N,\end{cases}\end{align} which demonstrate the limits observed across the diagram. For DDMCs \cite{ddmc}, the functions $F,G$ are given by \begin{align*} F(x)=\sum_\Delta \Delta q_\Delta(x) \ \ \text{and} \ \ G(x)=\sum_\Delta \Delta \Delta^{\top}q_\Delta(x) \end{align*} where $q_\Delta(x)=\lim_{N\to\infty} q_N(\lfloor Nx \rfloor ,\lfloor Nx\rfloor+\Delta)/N$ and $q_N(X,Y)$ is the transition rate from $X$ to $Y$ in the Markov chain, for a given value of $N$. The Markov chain \eqref{eq:logisMC} has $q_1(x)=(1+\lambda)x$ and $q_{-1}(x) = x + x^2$, so $F(x)=\lambda x - x^2$ and $G(x)=(2+\lambda)x + x^2$, which is asymptotic to $2x$ as $(x,\lambda)\to (0,0)$ and thus compatible with Figure \ref{fig:transcrit}, modulo $\asymp$.\\ \begin{figure} \begin{center} \includegraphics[width=3.5in]{transcrit.png}\\ \end{center} \caption{Bifurcation diagram for $F(x)=\lambda x- x^2$ and $G(x)=x$ including dd time scale, depicted with $\epsilon=0.04$. Equilibria (thick) and non-equilibrium transition curve (thin) outlined in black. Drift-diffusion curves $\phi_\epsilon$ and $x_\star \pm \phi_\epsilon^\star$ in blue, with pure diffusive regions shaded in blue. Time scale at dd curve in green (it is the same for both curves in this example) with right-hand axis indicating values. Red dots denote values of $\lambda$ and $x_\epsilon(0)$ used in Figure \ref{fig:sampaths}.} \label{fig:transcrit} \end{figure} \begin{figure} \centering \includegraphics[width=4in]{logispaths.png} \caption{Sample paths of the logistic Markov chain \eqref{eq:logisMC}, which corresponds to the example from Figure \ref{fig:transcrit}. At each value of $\lambda$, the time window is set to $b_\epsilon(\lambda)$, the time scale of fluctuations at the dd scale for that value of $\lambda$.} \label{fig:sampaths} \end{figure} \noindent\textbf{Bifurcations. }Combining these analyses, we study three types of bifurcations in one dimension: saddle-node, transcritical and pitchfork. We find it convenient to focus on the dd space and time scales. As long as the dd curve does not fold, these can be described using functions of $\lambda$, respectively $\phi_\epsilon$ and $b_\epsilon$ around $x=0$, and $\phi_\epsilon^\star$ and $b_\epsilon^\star$ around $x_\star(\lambda)$. 
The details are given in Section \ref{sec:bifurc}; some general findings are that (i) $\phi_\epsilon$ is constant for $\lambda$ near $0$, and may increase or decrease as $\lambda$ increases, and (ii) $b_\epsilon \gg 1$ for $\lambda$ near $0$, corresponding to slow fluctuations, and decreases (though not always monotonically) to $1$ as $\lambda \uparrow 1$ when $F(0,\lambda)=0$ for $\lambda$ near $0$ (as in the transcritical and pitchfork cases), and to $0$ when $F(0,\lambda)\ne 0$ for $\lambda \ne 0$ (as in the saddle-node case). $b_\epsilon^\star$ is similar but always decreases to $1$, which relies on the assumption that $F(x_\star(\lambda),\lambda)=0$.\\ \noindent\textbf{Scope of the article and later work.} For simplicity, in this article we only study bifurcations in one dimension, and we define QDPs in such a way that the required error estimates for convergence to a limit are satisfied. This suggests two directions for generalization. The first is to study other bifurcations; for example, the Hopf bifurcation can likely be tackled by combining the methods of this article with an averaging result. The second is to make the results applicable to density-dependent Markov chains (DDMCs). It appears that DDMCs whose transition rates have chemical mass-action form (see for example Chapter VII, Section 2 in \cite{vk}; briefly, the reaction rate is proportional to the product of the concentrations of the reactants, or to the same with combinatorial corrections), which also includes most well-mixed population and infection spread models, naturally satisfy the required error estimates, at least at the drift-diffusion scale, so we intend to discuss this in a companion paper. Towards this goal, in the present article we take care to specify precise conditions for convergence, formulated in the language of semimartingales. The reader who is uninterested in these technical details can effectively skip Section \ref{sec:def} and assume that \eqref{eq:QDP} is meant literally, i.e., that $(x_\epsilon)$ is a family of diffusions with small noise parameter.\\ \noindent\textbf{Layout. }The article is organized as follows. In Section \ref{sec:def} we define precisely what it means for a family of processes $(x_\epsilon)$ to resemble solutions to \eqref{eq:QDP}, and give conditions for \eqref{eq:Yresc} to converge to a diffusion limit. In Section \ref{sec:iso} we treat the unparametrized case, identifying limit scales and computing limit processes. This simpler case acts as a warm-up for Section \ref{sec:prmtzd}. In Section \ref{sec:prmtzd} we treat the parametrized case, establishing the results described in this section. A non-trivial effort is required in Sections \ref{sec:env}-\ref{sec:domterms} to understand the anatomy of bivariate Taylor expansions. In Section \ref{sec:bifurc} we treat the three types of bifurcations mentioned above. We obtain formulae for the space and time scale of fluctuations, paying special attention to the drift-diffusion scale. Since, corresponding to a given $F$, there are potentially several $G$ such that $F=O(G)$, we finish that section by arguing that a particular choice of $G$ is generic and show that in this case, the bifurcation diagrams are particularly easy to describe. \section{Definitions and basic limit theorems}\label{sec:def} To understand the suitable class of processes, we work backwards from the desired limit processes, which are diffusions, i.e., solutions of a martingale problem associated to a stochastic differential equation. 
Once we have formulated the martingale problem in suitable generality to allow for explosion, we move to quasi-diffusions, which are sequences of semimartingales that converge to diffusions. Finally we define quasi-diffusive perturbations, which are generalizations of \eqref{eq:QDP} satisfying estimates strong enough that, upon a rescaling of the form \eqref{eq:Yresc}, they are quasi-diffusions whenever the functions describing drift and diffusion (which we refer to as characteristics) converge. \subsection{Diffusions} Our definition of diffusion will be via the martingale problem, which is arguably the most flexible approach. For the sake of the unacquainted reader, we take a short detour to arrive at that formulation. The classical definition of a diffusion is a strong Markov process with continuous sample paths. Underlying this definition is the idea that a diffusion solves a stochastic differential equation (SDE), such as an initial value problem of the form \begin{align}\label{eq:SDE} X(0) = x, \quad dX = F(X)dt + \sigma(X)dB, \end{align} where $B$ is a $d$-dimensional standard Brownian motion, $F:\mathbb{R}^d\to\mathbb{R}^d$ is a vector field, and $\sigma:\mathbb{R}^d \to M_d(\mathbb{R})$ is a $d\times d$ matrix-valued function. The formal interpretation of \eqref{eq:SDE} is via the corresponding integral equation \begin{align}\label{eq:intSDE} X(t) = x + \int_0^t F(X(s))ds + \int_0^t \sigma(X(s))dB(s), \end{align} which leads to the notion of a strong solution: a \emph{strong solution} of \eqref{eq:intSDE} is an $\mathbb{R}^d$-valued process $X$, defined on the filtered probability space $(\Omega,\mathcal{F},P)$ of a $d$-dimensional Brownian motion $B$, where $\mathcal{F}=(\mathcal{F}(t))$ is the completion of the natural filtration of $B$, such that $X$ is adapted to $\mathcal{F}$ and \eqref{eq:intSDE} is satisfied $P$-almost surely, for all $t\ge 0$. The spirit of this definition is that one begins with a Brownian motion, and then constructs the solution $X$ directly from $B$.\\ A related notion is that of a \emph{weak solution}, which is a probability space $(\Omega,\mathcal{F},P)$ and a pair of $\mathcal{F}$-adapted and continuous processes $X$ and $B$, such that $B$ is a $d$-dimensional standard $\mathcal{F}$-Brownian motion (see Chapter 5, Section 1 of \cite{ethktz} for a precise definition of $\mathcal{F}$-Brownian motion), and \eqref{eq:intSDE} is satisfied $P$-almost surely for all $t\ge 0$. The spirit of this definition is that both $X$ and $B$ are constructed upon a probability space $(\Omega,\mathcal{F},P)$ which may be freely chosen.\\ The raison d'{\^e}tre of the weak solution notion is that in order to solve \eqref{eq:intSDE}, it should not be strictly necessary to start from a Brownian motion. Taking this a step further, if there is a characterization of $X$ in a weak solution to \eqref{eq:intSDE}, then we can forget about $B$ and focus on the distribution of $X$. The desired characterization is called the martingale problem. Define $G=\sigma \sigma^{\top}$ and the operator \begin{align}\label{eq:mg-op} Lf = \frac{1}{2}\sum_{i,j=1}^d G_{ij} \frac{\partial^2}{\partial x_i \partial x_j} f + \sum_i F_i \, \frac{\partial}{\partial x_i} f \end{align} on $C^2$ functions $f:\mathbb{R}^d\to\mathbb{R}$. 
A solution to the martingale problem for \eqref{eq:SDE} is a probability measure on the space $C([0,\infty),\mathbb{R}^d)$ such that for all $f \in C_0^2(\mathbb{R}^d)$, the corresponding random variable $X$ satisfies $X(0)=x$ and has the property that $$f(X(t)) - \int_0^t Lf(X(s))ds$$ is a martingale with respect to the natural filtration $\mathcal{F}(t) := \sigma(\{X(s) \colon s\le t\})$ of $X$. Using the fact that $[B](t) = t$, together with some basic properties of the stochastic integral as well as It{\^o}'s formula (Chapter 1, Theorem 4.57 in \cite{jacod}), it is not hard to show that the distribution of $X$ in a weak solution of \eqref{eq:intSDE} solves the martingale problem. It is also possible to construct a weak solution from a solution to the martingale problem (Chapter 5, Theorem 3.3 in \cite{ethktz}). From now on we shall focus on the martingale problem, so we won't further discuss these connections rigorously.\\ Since population values are typically non-negative, it will be useful to formulate a version of the martingale problem that allows both the domain of $F,G$, and the state space of the process, to be restricted to an open, connected set $U\subset \mathbb{R}^d$. Fortunately, this has already been mostly treated; we record the definitions and the existence and uniqueness results below, following $\S$1.12-1.13 of \cite{pinsky}. Let $M_+(\mathbb{R},d)$ denote the set of $d\times d$ positive semidefinite matrices with values in $\mathbb{R}$. For $A\subset \mathbb{R}^d$ let $\hat A$ denote the one-point compactification of $A$, and let $\mathfrak{c}$ denote the one point such that $\hat A = A \cup \{\mathfrak{c}\}$. If $A=\mathbb{R}^d$ then the resulting topology on $\hat A=\widehat{\mathbb{R}^d}$ is generated by the metric $\rho$ given by identifying $\widehat{\mathbb{R}^d}$ with the sphere $S^d$ and using the standard Riemannian metric $\rho$ on $S^d$. If $A\ne \mathbb{R}^d$ we can equip $\hat A$ with the metric $\rho_A$ defined by $$\rho_A(x,y) = \begin{cases}\inf_{z \in \partial A \cup \{\mathfrak{c}\}} \rho(x,z) \ \text{if} \ y=\mathfrak{c},\\ \min(\rho(x,y),\rho_A(x,\mathfrak{c}) + \rho_A(y,\mathfrak{c})) \ \text{if} \ x,y \ne \mathfrak{c}.\end{cases}$$ The resulting topology on $\hat A$ is such that $x_n \to_{\rho_A} x \in \hat A$ if (i) $x \in A$ and $d(x_n,x) \to 0$ or (ii) $x = \mathfrak{c}$ and one of (a) $|x_n|\to\infty$ or (b) $d(x_n,\partial A) \to 0$ holds, where $d$ is the Euclidean distance. Using $\rho_A$ to define a compatible metric in the manner of equation (8.1) in Ch.~1 of \cite{pinsky}, the space $C([0,\infty),\hat A)$, with the topology of uniform $\rho_A$-convergence on bounded intervals, is shown to be a Polish space. Below, a \emph{domain} refers to an open and connected set, and $A\subset\subset B$ if $A$ is bounded and $\overline A\subset B$, where $\overline A$ denotes the closure of $A$. \begin{definition}[Martingale problem and generalized martingale problem, Chapter 1, Sections 12-13 \cite{pinsky}]\label{def:mp} Let $U\subset \mathbb{R}^d$ be open and connected and let $F:U \to \mathbb{R}^d$ and $G:U\to M_+(\mathbb{R},d)$ be measurable. 
For $X \in C([0,\infty),\hat U)$ and $V\subset U$ define $\tau(V) = \inf\{t \colon X(t) \notin V\}$, and let $$\hat\Omega_U = \{X \in C([0,\infty),\hat U)\colon \tau(U)=\infty \ \text{or} \ X(\tau(U)+t)=\mathfrak{c} \ \text{for all} \ t>0\}.$$ Define the filtration $\hat\mathcal{F}_U(t)$ on $\hat \Omega_U$ by $\hat \mathcal{F}_U(t) = \sigma(X(s), 0 \le s \le t)$ and let $\hat \mathcal{F}_U=\hat \mathcal{F}_U(\infty)$.\\ Let $(P_x)_{x \in \hat U}$ on $(\hat \Omega_U,\hat \mathcal{F}_U)$ be a family of probability measures with $P_x(X(0)=x)=1$ for each $x$. $(P_x)$ solves the generalized martingale problem for $F,G$ if there is a sequence of domains $(D_n)$, with $D_n \subset\subset D_{n+1}$ for each $n$ and $\bigcup_n D_n=U$, such that for each $x \in \hat U$, $f \in C^2(U)$ and $n>0$, \begin{align}\label{eq:mp-mg} f(X(t\wedge \tau(D_n))) - \int_0^{t \wedge \tau(D_n)}(Lf)(X(s))ds\end{align} is an $\hat\mathcal{F}_U$-martingale with respect to $P_x$. $(P_x)$ solves the martingale problem for $F,G$ if $P_x(\tau(U)=\infty)=1$ for each $x \in U$, and for each $f\in C_0^2(U)$, \eqref{eq:mp-mg} is a martingale with $t$ in place of $t \wedge \tau(D_n)$. \end{definition} \begin{lemma}[\cite{pinsky}]\label{lem:SDExist} Suppose $F:U\to \mathbb{R}^d$ and $G:U\to M_+(\mathbb{R},d)$ are bounded on each compact $K\subset U$, and that $G$ is continuous and invertible on $U$. Then there is a unique solution $(P_x)_{x \in \hat U}$ to the generalized martingale problem for $F,G$. Moreover, $(P_x)_{x \in U}$ has the Feller property and $(P_x)_{x \in \hat U}$ has the strong Markov property. If, in addition, $F$ and $G$ are bounded on bounded subsets of $U$, then for each $x \in U$, $P_x$-a.s., if $\tau(U)<\infty$ then as $t \to \tau(U)^-$ either $|X(t)| \to \infty$ or $d(X(t),z)\to 0$ for some $z \in \partial U$, where $d$ is Euclidean distance. \end{lemma} \begin{proof} Everything except the last statement belongs to Theorem 13.1 in Chapter 1 of \cite{pinsky}; the last statement is proved in the Appendix. \end{proof} \subsection{Quasi-diffusions} Next we define quasi-diffusions (QDs), motivated by the following fact: the solution to the generalized martingale problem given in Lemma \ref{lem:SDExist} is uniquely characterized by the following properties: \begin{enumerate}[noitemsep,label={(\roman*)}] \item for each $x \in U$, $P_x(X(0)=x)=1$, \item $t\mapsto X(t)$ is a.s.~continuous with respect to Euclidean distance for all $t\in [0,\tau(U))$, and \item the following processes are local martingales: \begin{align*} & X^m(t) := X(t) - X(0) - \int_0^t F(X(s))ds \ \ \text{and} \\ & X^m(t)(X^m(t))^{\top} - \int_0^t G(X(s))ds.\end{align*} \end{enumerate} In (iii), ``local'' refers to a localizing sequence that increases to $\tau(U)$, as for example $(\tau(D_n))$ in Definition \ref{def:mp}. The forward implication -- that the solutions described by Lemma \ref{lem:SDExist} have these properties -- follows by choosing $f(x)=x_i$, $i=1,\dots, d$ for the first expression in (iii) and $f(x)=x_ix_j$, $i,j=1,\dots,d$ for the second. The interested reader may deduce the reverse implication with the help of It{\^o}'s formula (Chapter 1, Theorem 4.57 in \cite{jacod}), although we will not need the rigorous result. 
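For instance, when $d=1$ this computation reads as follows (a sketch, in the notation above): taking $f(x)=x$ gives $Lf=F$, so $$X(t) - X(0) - \int_0^t F(X(s))\,ds = X^m(t)$$ is a local martingale, while taking $f(x)=x^2$ gives $Lf(x)=G(x)+2xF(x)$, so $$X(t)^2 - \int_0^t \big(G(X(s)) + 2X(s)F(X(s))\big)\,ds$$ is also a local martingale; substituting $X = X(0) + X^m + \int_0^{\cdot} F(X(s))\,ds$ into the second display and integrating by parts then recovers the second local martingale in (iii), namely $X^m(t)^2 - \int_0^t G(X(s))\,ds$.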
Using this idea, roughly speaking, a family of processes $(x_\epsilon)$ should be a QD with coefficients $F$ and $G$ if, for small $\epsilon>0$, sample paths are nearly continuous, i.e., have only small jumps, and the processes $x_\epsilon^m(t):= x_\epsilon(t)-\int_0^t F(x_\epsilon(s))ds$ and $x_\epsilon^m(t)(x_\epsilon^m(t))^{\top} - \int_0^t G(x_\epsilon(s))ds$ are nearly martingales. For a QDP the definition will be similar, except with $\epsilon^2 \int_0^t G(x_\epsilon(s))ds$ in place of $\int_0^t G(x_\epsilon(s))ds$.\\ In order to be precise about ``nearly martingales'', a QD and QDP should have first- and second-order martingales similar in appearance to those in property (iii) above. As is traditional in the theory of stochastic processes, we shall generally assume processes are defined on a filtered probability space satisfying the usual conditions, and adapted to the filtration. Processes are defined on a time interval $[0,\zeta)$, where $\zeta \in (0,\infty]$ is a predictable time that we call the terminal time, and may be finite. For a particular $X$, $\zeta(X)$ denotes the terminal time of $X$. To describe QDPs, we will work with the class of semimartingales. A semimartingale (SM) is a c{\`a}dl{\`a}g~(right continuous with left limits) process $X$ that can be written $$X=X(0) + M+A,$$ where $M,A$ are c{\`a}dl{\`a}g, $M$ is a local martingale and $A$ has locally finite variation. For example, the solutions described by Lemma \ref{lem:SDExist} are semimartingales, based on the decomposition $X(t) = X(0) + X^m(t) + \int_0^t F(X(s))ds$ given by (iii) above. A semimartingale is said to be special if it has a decomposition of the above type for which $A$ is predictable. In this case, the decomposition is unique (see \cite[I.4.22]{jacod}) and we denote $M$ and $A$ by $X^m$ and $X^p$, and refer to them as the martingale part and compensator, respectively, in agreement with the standard definition of compensator. For a locally square-integrable martingale $M$, we denote by $[M]$ and $\langle M \rangle$ the quadratic variation (qv) and predictable quadratic variation (pqv). We will say that a SM is locally $L^2$ if it is special and its martingale part $X^m$ is locally square-integrable.\\ For a c{\`a}dl{\`a}g~process $X$ and $c>0$, define the operators $J_c$ and $J^c$ by $$J^cX(t) = \sum_{s \le t} \Delta X(s)\mathbf{1}(|\Delta X(s)|>c)$$ and $J_cX = X - J^cX$. Since a c{\`a}dl{\`a}g~function has only finitely many jumps of a given minimum size on any finite time interval, $J^cX$ makes sense. Since $J^cX$ is c{\`a}dl{\`a}g~and has finite variation, if $X$ is a SM then so are $J^cX$ and $J_cX$; moreover $|\Delta J_cX| \le c$. As shown in \cite[I.4.24]{jacod}, if $Y$ is a SM with $|\Delta Y| \le c$ then $Y$ is special and $|\Delta Y^p| \le c$, $|\Delta Y^m| \le 2c$. So, $J_cX$ is special and $|\Delta (J_cX)^p| \le c$, $|\Delta (J_cX)^m| \le 2c$. As noted in \cite[I.4.1]{jacod}, martingales with bounded jumps are locally $L^2$, so $J_cX$ is locally $L^2$.\\ We are now ready to define QD and QDP, and to relate the notions to each other and to diffusions. We begin with the definition of a QD, and a result that shows a QD converges to a diffusion. \begin{definition}[Quasi-diffusion]\label{def:qd} Let $U \subset \mathbb{R}^d$ be an open set and let $F:U\to\mathbb{R}^d$ and $G:U \to M_+(\mathbb{R},d)$ be measurable. Let $\mathcal{E}\subset \mathbb{R}_+$ be a countable set that accumulates at $0$. 
A family of semimartingales $(X_\epsilon)_{\epsilon \in \mathcal{E}}$ is a quasi-diffusion on $U$ with characteristics $F,G$ if for each domain $D\subset\subset U$ and fixed $c,T>0$, with $\tau(D,\epsilon) := \inf\{t \colon X_\epsilon(t) \notin D \ \text{or} \ X_\epsilon(t^-) \notin D\}$, we have $\zeta(X_\epsilon) > \tau(D,\epsilon)$ and \begin{enumerate}[noitemsep,label={(\roman*)}] \item $P(J^c X_\epsilon(t)=0 \ \text{for all} \ t \le \tau(D,\epsilon)\wedge T) \to 1$ as $\epsilon \to 0$, \item $\sup_{t \le \tau(D,\epsilon)\wedge T} \left| \, (J_cX_\epsilon)^p(t) - \int_0^t F(X_\epsilon(s))ds \, \right| \stackrel{\text{p}}{\to} 0 \ \ \text{as} \ \ \epsilon \to 0$, and \item $\sup_{t \le \tau(D,\epsilon) \wedge T} \left| \, \langle (J_cX_\epsilon)^m \rangle(t) - \int_0^t G(X_\epsilon(s))ds \, \right| \stackrel{\text{p}}{\to} 0 \ \ \text{as} \ \ \epsilon \to 0$, \end{enumerate} with $\stackrel{\text{p}}{\to}$ meaning convergence in probability. \end{definition} $F$ is called the drift and $G$, the diffusion. As shown in the next result, a quasi-diffusion with characteristics $F,G$ converges to the corresponding diffusion process. Since we allow for processes with explosion, the mode of convergence we'll use is a localized version of convergence in distribution. \begin{definition}[Local convergence in distribution]\label{def:lcd} Let $U\subset \mathbb{R}^d$ be an open and connected set, and suppose $X_n, n=1,2,\dots$ and $X$ are c{\`a}dl{\`a}g~processes taking values in $\mathbb{R}^d$. For a bounded domain $D\subset \subset U$ define $\tau(D,n) = \inf\{t \colon X_n(t^-) \notin D \ \text{or} \ X_n(t) \notin D\}$, similarly define $\tau(D)$ for $X$, and assume that $\zeta(X_n) > \tau(D,n)$ and $\zeta(X)>\tau(D)$ for all $n$ and $D\subset\subset U$, where $\zeta(X_n),\zeta(X)$ are the terminal times. Say that $(X_n)$ converges locally in distribution to $X$ on $U$ as $n\to\infty$, writing $X_n \smash{\stackrel{\text{ld}}{\to}} X$ on $U$, if there is a sequence of domains $D_1 \subset\subset D_2 \subset\subset \dots$ increasing to $U$ such that $$X_n(\cdot \wedge \tau(D_i,n)) \stackrel{\text{d}}{\to} X(\cdot \wedge \tau(D_i))$$ for each $i$, where $\stackrel{\text{d}}{\to}$ is convergence in distribution with respect to the Skorohod topology. \end{definition} In the above, we assume the Euclidean metric on $\mathbb{R}^d$ is used for the Skorohod topology. Note that the solution $X$ described by Lemma \ref{lem:SDExist} takes values in $\hat U = U\cup\{\mathfrak{c}\}$. Defining $\zeta(X)=\inf\{t \colon X(t)=\mathfrak{c}\}$, the restriction of $X$ to the time interval $[0,\zeta(X))$ fits the context of Definition \ref{def:lcd}. \begin{lemma}[Quasi-diffusions converge to diffusions]\label{lem:diff-limit} Let $U,F,G$ satisfy the assumptions of Lemma \ref{lem:SDExist}, and given $x \in U$ let $X$ denote the corresponding diffusion with $X(0)=x$. If $(X_\epsilon)$ is a QD with characteristics $F,G$ and $X_\epsilon(0) \stackrel{\text{p}}{\to} x$ as $\epsilon \to 0$ then $X_\epsilon \smash{\stackrel{\text{ld}}{\to}} X$ as $\epsilon \to 0$. \end{lemma} \begin{proof} This is done in the Appendix. \end{proof} \subsection{Quasi-diffusive perturbations} Now we give the definition of a QDP, and a result giving conditions for a rescaled QDP to be a QD. 
The definition of a QDP is made deliberately so that if $(x_\epsilon)$ is a QDP then, provided that the rescaled characteristics converge (see Lemma \ref{lem:QDP-QD}), rescaled processes of the form \begin{align}\label{eq:rescale} X_\epsilon(t) :=a_\epsilon(x_\epsilon(b_\epsilon t)-x_\star), \end{align} with $x_\star \in \mathbb{R}^d$ and $(a_\epsilon),(b_\epsilon)$ sequences of positive numbers, will be QDs. \begin{definition}[Quasi-diffusive perturbation, isotropic case]\label{def:QDP} Let $U \subset \mathbb{R}^d$ be an open set and let $F:U\to\mathbb{R}^d$ and $G:U \to M_+(\mathbb{R},d)$ be measurable. Let $\mathcal{E}\subset \mathbb{R}_+$ be a countable set that accumulates at $0$, let $(a_\epsilon)_{\epsilon\in \mathcal{E}}$, $(b_\epsilon)_{\epsilon\in \mathcal{E}}$ be families of positive real numbers, and let $(D_\epsilon)_{\epsilon \in \mathcal{E}}$ be a collection of domains with $D_\epsilon \subset \subset U$ for each $\epsilon$. A family of semimartingales $(x_\epsilon)_{\epsilon \in \mathcal{E}}$ is a quasi-diffusive perturbation (QDP) on $(D_\epsilon)$ to scale $(a_\epsilon),(b_\epsilon)$ with characteristics $F,G$, if for fixed $c,T>0$, with $\tau_\epsilon := \inf\{t \colon x_\epsilon(t) \notin D_\epsilon \ \text{or} \ x_\epsilon(t^-) \notin D_\epsilon\}$, we have $\zeta(x_\epsilon) > \tau_\epsilon$ and \begin{enumerate}[noitemsep,label={(\roman*)}] \item $P(J^{c/a_\epsilon}x_\epsilon(t)=0 \ \text{for all} \ t \le \tau_\epsilon\wedge b_\epsilon T) \to 1$ as $\epsilon \to 0$, \item $a_\epsilon \sup_{t \le \tau_\epsilon\wedge b_\epsilon T} \left| \, (J_{c/a_\epsilon}x_\epsilon)^p(t) - \int_0^t F(x_\epsilon(s))ds \, \right| \stackrel{\text{p}}{\to} 0$, and \item $ a_\epsilon^2 \sup_{t \le \tau_\epsilon \wedge b_\epsilon T} \left| \, \langle (J_{c/a_\epsilon}x_\epsilon)^m \rangle(t) - \epsilon^2\int_0^t G(x_\epsilon(s))ds \, \right| \stackrel{\text{p}}{\to} 0$, \end{enumerate} where $\stackrel{\text{p}}{\to}$ denotes convergence in probability.\\ $(x_\epsilon)$ is a QDP on $U$ if $(D_\epsilon)$ can be chosen such that $\lim_{\epsilon \to 0} \bigcup_{\epsilon' \le \epsilon}D_{\epsilon'}=U$. \end{definition} As with QD, $F$ is referred to as drift and $G$, as diffusion. The first condition says that larger than $c/a_\epsilon$ jumps contribute nothing on the time interval $[0,b_\epsilon T]$, while conditions (ii) and (iii) say that the compensator and pqv are well approximated, to order $a_\epsilon$ and $a_\epsilon^2$, by the pathwise integrals of $F(x_\epsilon)$ and $ \epsilon^2 G(x_\epsilon)$, respectively -- the reason for this choice of error bounds becomes clear in Lemma \ref{lem:QDP-QD}. We record a few notes on the definition. \begin{itemize}[noitemsep] \item Restricting $D_\epsilon$ or decreasing $b_\epsilon$ can lead to better estimates on drift and diffusion error. \item By definition, $J_{c/a_\epsilon}x_\epsilon$ has bounded jumps, so is locally $L^2$, so we need not assume the same of $x_\epsilon$. \item If $F,G$ are locally bounded, then (i) implies that $x_\epsilon$ can be replaced with $J_{c/a_\epsilon} x_\epsilon$ in (ii)-(iii). \item Even if the processes $(x_\epsilon)$ are locally $L^2$, (i) does not imply that $J_{c/a_\epsilon}x_\epsilon$ can be replaced with $x_\epsilon$ in (ii)-(iii), if $x_\epsilon$ has increasingly large and infrequent jumps as $\epsilon \to 0$. 
\item For given $F,G$ satisfying the conditions of Lemma \ref{lem:SDExist}, the solution to the generalized martingale problem, trivially parametrized by $\epsilon>0$, is a QDP on $U$ to any scale $(a_\epsilon),(b_\epsilon)$. \end{itemize} Now we give a result stating conditions for a rescaled QDP to be a QD. It is clear from this result that the definition of QDP is tailored to minimize the requirements for convergence. \begin{lemma}\label{lem:QDP-QD} Let $(x_\epsilon)$ be a family of semimartingales, let $x_\star \in \mathbb{R}^d$, and let $\tilde U \subset \mathbb{R}^d$ be an open set whose closure contains the origin. Suppose that for every domain $D\subset\subset \tilde U$, $(x_\epsilon)$ is a QDP on $(D_\epsilon):=(\{x_\star + x/a_\epsilon\colon x \in D\})$ to scale $(a_\epsilon),(b_\epsilon)$ with characteristics $F,G$, and that the following limits exist for every $x \in \tilde U$: \begin{align}\label{eq:QD-coeff} \tilde F(x) &:= \lim_{\epsilon \to 0}a_\epsilon b_\epsilon F(x_\star + x/a_\epsilon) \ \text{and} \nonumber \\ \tilde G(x) &:= \lim_{\epsilon \to 0}\epsilon^2 a_\epsilon^2 b_\epsilon G(x_\star + x/a_\epsilon), \end{align} with uniform convergence on compact subsets of $\tilde U$. Then $(X_\epsilon)$ defined by \eqref{eq:rescale} is a QD on $\tilde U$ with characteristics $\tilde F,\tilde G$ given by \eqref{eq:QD-coeff}. \end{lemma} \begin{proof} For $c>0$ we have $$J^c X_\epsilon(t) = a_\epsilon J^{c/a_\epsilon} x_\epsilon(b_\epsilon t),$$ so condition (i) of Definition \ref{def:qd} follows from condition (i) of Definition \ref{def:QDP}. Next, the map $X\mapsto X^p$ on special semimartingales is linear, and the map $X\mapsto \langle X^m\rangle$ on locally $L^2$ semimartingales is homogeneous of degree 2. Thus, it follows from \eqref{eq:rescale} that $$(J_cX_\epsilon)^p(t) = a_\epsilon \,(J_{c/a_\epsilon}x_\epsilon)^p(b_\epsilon t) \quad \text{and} \quad \langle (J_cX_\epsilon)^m\rangle(t) = a_\epsilon^2 \langle (J_{c/a_\epsilon}x_\epsilon)^m \rangle(b_\epsilon t).$$ Moreover, \begin{align*} &\int_0^{b_\epsilon t}F(x_\epsilon(s))ds = \int_0^t F(x_\epsilon(b_\epsilon s))d(b_\epsilon s) = b_\epsilon \int_0^t F(x_\star + X_\epsilon(s)/a_\epsilon)ds, \\ & \text{similarly} \ \ \int_0^{b_\epsilon t}G(x_\epsilon(s))ds = b_\epsilon \int_0^t G(x_\star + X_\epsilon(s)/a_\epsilon)ds. \end{align*} Therefore, \begin{align}\label{eq:rescale-error} \left | \, (J_{c/a_\epsilon}x_\epsilon)^p(b_\epsilon t) - \int_0^{b_\epsilon t} F(x_\epsilon(s))ds \, \right| = \frac{1}{a_\epsilon}\left | \, (J_cX_\epsilon)^p(t) - a_\epsilon b_\epsilon \int_0^t F(x_\star + X_\epsilon(s)/a_\epsilon)ds \, \right|. \end{align} Since convergence in \eqref{eq:QD-coeff} is uniform on compact sets, with $\tau(D,\epsilon)$ as in Definition \ref{def:qd}, $$\sup_{t \le \tau(D,\epsilon) \wedge T}\left |\int_0^t \tilde F(X_\epsilon(s))ds - a_\epsilon b_\epsilon \int_0^t F(x_\star+X_\epsilon(s)/a_\epsilon)ds \right| \stackrel{\text{p}}{\to} 0$$ as $\epsilon \to 0$, so using \eqref{eq:rescale-error}, condition (ii) in Definition \ref{def:qd} follows from condition (ii) in Definition \ref{def:QDP}. 
Next, observe that $$\left| \, \langle (J_{c/a_\epsilon}x_\epsilon)^m \rangle(b_\epsilon t) - \epsilon^2\int_0^{b_\epsilon t} G(x_\epsilon(s))ds \, \right| = \frac{1}{a_\epsilon^2} \left| \, \langle (J_c X_\epsilon)^m \rangle (t) - \epsilon^2 \, a_\epsilon^2 b_\epsilon \int_0^t G(x_\star + X_\epsilon(s)/a_\epsilon)ds \right|.$$ Arguing as before, condition (iii) in Definition \ref{def:qd} follows from condition (iii) in Definition \ref{def:QDP}. \end{proof} \section{Isotropic limit scales}\label{sec:iso} In this section we describe the possible limits of $Y_\epsilon(t) = a_\epsilon(x_\epsilon(b_\epsilon t)-x_\star)$ where $(a_\epsilon),(b_\epsilon)$ are families of scalars, which we call isotropic scaling. We'll use the following asymptotic notation: for positive functions $f,g$, use $f(x) \asymp g(x)$ as $x\to a$ to mean that $\lim_{x \to a}f(x)/g(x)$ exists and takes its value in $(0,\infty)$, and $f(x) \ll g(x)$ to mean the same thing as $f=o(g)$, i.e., that $\lim_{x \to a} f(x)/g(x)=0$, and $f\gg g$ if $g\ll f$.\\ Lemma \ref{lem:QDP-QD} gives conditions under which a QDP, rescaled around a point $x_\star$, is a QD, and thus has a diffusion limit. The main requirement is the convergence condition \eqref{eq:QD-coeff}, which we recall: for some open set $\tilde U$, uniformly over $x$ in any compact $K\subset \tilde U$, the following limits exist: \begin{align*} \tilde F(x) &:= \lim_{\epsilon \to 0}a_\epsilon b_\epsilon F(x_\star + x/a_\epsilon) \ \text{and} \\ \tilde G(x) &:= \lim_{\epsilon \to 0}\epsilon^2 a_\epsilon^2 b_\epsilon G(x_\star + x/a_\epsilon).\nonumber \end{align*} These conditions are a bit opaque, so let's come up with a more intuitive and equivalent description. Convergence can be tidily broken down into three parts: shape, drift to diffusion ratio, and time scale. We begin with a brief description, then work back from \eqref{eq:QD-coeff} to flesh it out. \begin{enumerate}[noitemsep,label={(\roman*)}] \item The functions $x\mapsto F(x_\star+x/a_\epsilon)$ and $x\mapsto G(x_\star+x/a_\epsilon)$ have a limiting shape as $\epsilon \to 0$. \item The rescaled drift to diffusion ratio converges to $0$, $\infty$, or to a non-zero function of $x$. \item The time scale is just long enough for either drift or diffusion to be non-vanishing. \end{enumerate} In particular, we ignore the trivial case $\tilde F \equiv 0$ and $\tilde G \equiv 0$. Let's now use \eqref{eq:QD-coeff} to frame these concepts. \begin{enumerate}[noitemsep,label={(\roman*)}] \item \textit{Shape.} Working back from \eqref{eq:QD-coeff} and defining $f_\epsilon=1/(a_\epsilon b_\epsilon)$ and $g_\epsilon = 1/(\epsilon^2 a_\epsilon^2 b_\epsilon)$, for each $x$, as $\epsilon \to 0$, \begin{align}\label{eq:FG-shape} F(x_\star+x/a_\epsilon) &\sim f_\epsilon \,\tilde F(x) \quad \text{and} \nonumber \\ G(x_\star+x/a_\epsilon) &\sim g_\epsilon \,\tilde G(x). \end{align} In other words, $F$ and $G$ are asymptotically a product of a function of $\epsilon$ with a function of $x$. This is true if $F$ and $G$ are locally homogeneous around $x_\star$, i.e., if there exist $\alpha,\beta \ge 0$ and homogeneous functions $Q,V$ of degrees $\alpha,\beta$ respectively (i.e., such that $Q(rx)=r^\alpha Q(x)$ and $V(rx)=r^\beta V(x)$ for real $r$), such that as $x\to 0$, \begin{align}\label{eq:FG-hom} F(x_\star+x) \sim Q(x) \quad \text{and} \quad G(x_\star+x) \sim V(x). \end{align} \eqref{eq:FG-hom} holds, for example, if $F,G$ have a Taylor expansion around $x_\star$, in which case $\alpha,\beta$ are integers. 
When \eqref{eq:FG-hom} holds, if $a_\epsilon \to \infty$ as $\epsilon \to 0$ then for each $x$, as $\epsilon \to 0$, \begin{align}\label{eq:FG-hom-shape} F(x_\star+x/a_\epsilon) \sim (1/a_\epsilon)^\alpha Q(x) \quad \text{and} \quad G(x_\star+x/a_\epsilon)\sim (1/a_\epsilon)^\beta V(x). \end{align} Let $h_\epsilon=a_\epsilon^{1-\alpha}b_\epsilon$ and $\ell_\epsilon=\epsilon^2 a_\epsilon^{2-\beta} b_\epsilon$. Combining \eqref{eq:FG-shape} and \eqref{eq:FG-hom-shape}, for each $x$, as $\epsilon \to 0$, \begin{align}\label{eq:tF-tG-ratio} \tilde F(x) \sim h_\epsilon Q(x) \quad \text{and} \quad \tilde G(x) \sim \ell_\epsilon V(x). \end{align} If $Q,V$ are not identically zero, then each expression gives a dichotomy. \begin{itemize}[noitemsep] \item Either $h_\epsilon \to 0$, in which case $\tilde F \equiv 0$, or $h_\epsilon \asymp 1$, in which case $\tilde F \propto Q$. \item Either $\ell_\epsilon\to 0$, in which case $\tilde G \equiv 0$, or $\ell_\epsilon \asymp 1$, in which case $\tilde G \propto V$. \end{itemize} The converse is also true: if \eqref{eq:FG-hom} holds and $(h_\epsilon),(\ell_\epsilon)$ each satisfy one of the two above conditions then \eqref{eq:QD-coeff} holds with $\tilde F,\tilde G$ as described, and convergence is locally uniform.\\ \item \textit{Drift to diffusion ratio.} The ratio of the terms in \eqref{eq:QD-coeff} is \begin{align}\label{eq:rsc-dd-ratio} \frac{|a_\epsilon b_\epsilon F(x_\star+x/a_\epsilon)|}{|\epsilon^2 a_\epsilon^2 b_\epsilon G(x_\star + x/a_\epsilon)|} = \frac{|F(x_\star+x/a_\epsilon)|}{\epsilon^2 a_\epsilon |G(x_\star + x/a_\epsilon)|}. \end{align} In particular, the ratio depends on $a_\epsilon$ but not on $b_\epsilon$. It can also be written $\epsilon^{-2}\, \theta(1/a_\epsilon,x)$, where the drift-to-diffusion ratio function $\theta$ is defined by \begin{align}\label{eq:dd-th} \theta(u,x) = \frac{ u \, |F(x_\star + u\,x)|}{|G(x_\star+u\,x)|}. \end{align} If \eqref{eq:FG-hom} holds and $x$ is such that $Q(x)\ne 0$ and $V(x) \ne 0$, then as $u\to 0$ $$\theta(u,x) \sim \frac{u |Q(ux)|}{|V(ux)|} = \frac{u^{1+\alpha}|Q(x)|}{u^\beta |V(x)|} = u^{1+\alpha-\beta} \frac{|Q(x)|}{|V(x)|}.$$ In particular, $\theta(1/a_\epsilon,x) \asymp (1/a_\epsilon)^{1+\alpha-\beta}$ for such $x$. Using this as inspiration and referring to (i), we see that $\epsilon^{-2}(1/a_\epsilon)^{1+\alpha-\beta} = h_\epsilon / \ell_\epsilon$, so if both $h_\epsilon \to 0$ or $h_\epsilon \asymp 1$, and $\ell_\epsilon \to 0$ or $\ell_\epsilon \asymp 1$, then ignoring the trivial case where both tend to $0$, there are three relevant cases, which we refer to collectively as limit scales for the drift to diffusion ratio. \begin{itemize}[noitemsep] \item If $(1/a_\epsilon)^{1+\alpha-\beta}\ll \epsilon^2$ then $h_\epsilon \ll \ell_\epsilon$ (diffusion dominates). \item If $(1/a_\epsilon)^{1+\alpha-\beta} \gg \epsilon^2$ then $\ell_\epsilon \ll h_\epsilon$ (drift dominates). \item If $(1/a_\epsilon)^{1+\alpha-\beta} \asymp \epsilon^2$ then $\ell_\epsilon \asymp h_\epsilon$ (drift matches diffusion). \end{itemize} \item \textit{Time scale.} Given $(a_\epsilon)$ satisfying one of the above three cases, $(b_\epsilon)$ should be chosen just large enough that at least one of $\tilde F,\tilde G$ is non-zero. If \eqref{eq:QD-coeff} and \eqref{eq:FG-hom} hold then from \eqref{eq:tF-tG-ratio}, $h_\epsilon \asymp 1 \Leftrightarrow \tilde F\propto Q$ and $\ell_\epsilon \asymp 1 \Leftrightarrow \tilde G\propto V$. 
Recalling that $h_\epsilon=a_\epsilon^{1-\alpha}b_\epsilon$ and $\ell_\epsilon = \epsilon^2 a_\epsilon^{2-\beta} b_\epsilon$, \begin{itemize}[noitemsep] \item $h_\epsilon \asymp 1 \Leftrightarrow b_\epsilon \asymp a_\epsilon^{\alpha-1}$ and \item $\ell_\epsilon \asymp 1 \Leftrightarrow b_\epsilon \asymp \epsilon^{-2}a_\epsilon^{\beta-2}$. \end{itemize} We refer to this choice of $(b_\epsilon)$ in each case as the visible time scale corresponding to $(a_\epsilon)$, since it is the time scale at which changes are visible (not too slow, and not too fast). \end{enumerate} Combining the observations of (i)-(iii), if \eqref{eq:FG-hom} holds, $(1/a_\epsilon)$ is a limit scale for the drift to diffusion ratio, and $(b_\epsilon)$ is the visible time scale corresponding to $(a_\epsilon)$, then \eqref{eq:QD-coeff} holds with $\tilde F,\tilde G$ determined as follows. \begin{itemize}[noitemsep] \item If $(1/a_\epsilon)^{1+\alpha-\beta } \ll \epsilon^2$ and $b_\epsilon \asymp \epsilon^{-2} a_\epsilon^{\beta-2}$ then $\tilde F=0$ and $\tilde G \propto V$. \item If $(1/a_\epsilon)^{1+\alpha-\beta } \gg \epsilon^2$ and $b_\epsilon \asymp a_\epsilon^{\alpha-1}$ then $\tilde F \propto Q$ and $\tilde G=0$. \item If $(1/a_\epsilon)^{1+\alpha-\beta } \asymp \epsilon^2$ and $b_\epsilon \asymp a_\epsilon^{\alpha-1}$ (which is then $\asymp \epsilon^{-2}a_\epsilon^{\beta-2}$), then $\tilde F \propto Q$ and $\tilde G \propto V$. \end{itemize} To resolve the inequality on $(1/a_\epsilon)$, we need an assumption on the sign of $1+\alpha-\beta$. To study the effect of the sign, we'll consider $(1/a_\epsilon)^{1+\alpha-\beta} = O(\epsilon^2)$, which corresponds to non-vanishing diffusion. \begin{itemize}[noitemsep] \item If $1+\alpha-\beta >0$ then $(1/a_\epsilon)^{1+\alpha-\beta }=O(\epsilon^2)$ iff $1/a_\epsilon = O(\epsilon^{2/(1+\alpha-\beta )})$, i.e., iff $a_\epsilon$ is at least of order $\epsilon^{-2/(1+\alpha-\beta )} \gg 1$. \item If $1+\alpha-\beta =0$ then $(1/a_\epsilon)^{1+\alpha-\beta }=1$ is not $O(\epsilon^2)$ as $\epsilon \to 0$. \item If $1+\alpha-\beta <0$ then $(1/a_\epsilon)^{1+\alpha-\beta }=O(\epsilon^2)$ iff $a_\epsilon=O(\epsilon^{2/(\beta-\alpha-1)})$, and $\epsilon^{2/(\beta-\alpha-1)} \ll 1$. \end{itemize} In particular, if $1+\alpha-\beta \le 0$ and $a_\epsilon \to \infty$ as $\epsilon \to 0$ then diffusion does not occur, i.e., the only limit possible for $\tilde G$ is $0$. This case can be excluded if $F=O(G)$, i.e., $|F(x)| \le C|G(x)|$ for some $C>0$ and all $x$, since then $\alpha \ge \beta$, so $1+\alpha-\beta$ is positive. Let us give this property a name. \begin{definition}\label{def:ssQDP} A QDP is \emph{strongly stochastic} if its characteristics $F,G$ satisfy $F=O(G)$, i.e., if there exists $C>0$ such that $|F(x)| \le C\, |G(x)|$ for every $x \in U$. \end{definition} From the above discussion and Lemma \ref{lem:QDP-QD} we obtain the following result. \begin{theorem}[Limit processes for QDPs under isotropic scaling]\label{thm:iso-limits} Let $U,F,G$ be as in Definition \ref{def:QDP}. Let $(x_\epsilon)$ be a family of semimartingales, and let $x_\star \in \mathbb{R}^d$. Let $\tilde U$ be a non-empty open convex set whose closure contains $0$. Suppose that for each domain $D\subset\subset \tilde U$, $(x_\epsilon)$ is a strongly stochastic QDP on $(D_\epsilon):=(\{x_\star + x/a_\epsilon\colon x \in D\})$ to scale $(a_\epsilon),(b_\epsilon)$, with characteristics $F,G$ that are locally homogeneous around $x_\star$, i.e., that satisfy \eqref{eq:FG-hom}.\\ Let $Y_\epsilon(t) = a_\epsilon(x_\epsilon(b_\epsilon t)-x_\star)$ and suppose that $a_\epsilon \to \infty$.
Suppose $(1/a_\epsilon)$ is a limit scale for the drift to diffusion ratio, and that $(b_\epsilon)$ is the visible time scale corresponding to $(a_\epsilon)$, as described above. Then one of the three cases below holds, and $(Y_\epsilon)$ is a QD with characteristics $\tilde F,\tilde G$ as given. \begin{enumerate}[noitemsep,label={(\roman*)}] \item If $1/a_\epsilon \ll \epsilon^{2/(1+\alpha-\beta )}$ and $b_\epsilon \asymp \epsilon^{-2}a_\epsilon^{\beta-2}$, then $\tilde F = 0$ and $\tilde G \propto V$. \item If $1/a_\epsilon \gg \epsilon^{2/(1+\alpha-\beta )}$ and $b_\epsilon \asymp a_\epsilon^{\alpha-1}$, then $\tilde F \propto Q$ and $\tilde G = 0$. \item If $1/a_\epsilon \asymp \epsilon^{2/(1+\alpha-\beta )}$ and $b_\epsilon \asymp a_\epsilon^{\alpha-1} \asymp \epsilon^{-2} a_\epsilon^{\beta-2}$, then $\tilde F \propto Q$ and $\tilde G \propto V$. \end{enumerate} \end{theorem} We take a moment to name the different cases in Theorem \ref{thm:iso-limits}. \begin{definition}[limit ranges]\label{def:lim-scales} In the context of Theorem \ref{thm:iso-limits}, say that $(a_\epsilon),(b_\epsilon)$ is a \emph{QD limit scale} for $(x_\epsilon)$ if it satisfies one of the three sets of conditions above. Define the three \emph{limit ranges} as follows: \begin{enumerate}[noitemsep,label={(\roman*)}] \item $1/a_\epsilon \ll \epsilon^{2/(1+\alpha-\beta)}$ is the \emph{pure diffusive range}, \item $1/a_\epsilon \gg \epsilon^{2/(1+\alpha-\beta)}$ is the \emph{deterministic range}, and \item $1/a_\epsilon \asymp \epsilon^{2/(1+\alpha-\beta)}$ is the \emph{drift-diffusion (dd) scale}. \end{enumerate} \end{definition} Theorem \ref{thm:iso-limits} tells a nice story. When we assume strong stochasticity, characteristics scale in such a way that diffusion dominates at smaller scales, while drift dominates at larger scales. The dd scale separates the two regimes and is also the largest scale at which, from the fixed vantage point $x_\star$, random fluctuations are not drowned out by the deterministic flow. Moreover, once the spatial scale $(1/a_\epsilon)$ has been chosen, there is a unique choice of time scale $(b_\epsilon)$, modulo multiplication by a non-zero constant, that leads to a non-trivial limit process; on any faster (shorter) time scale, no change would be observed, while on any slower (longer) time scale the change would be instantaneous, as either the drift or diffusion coefficient would be divergent. \section{Limit scales for parametrized QDPs}\label{sec:prmtzd} In this section we extend the analysis of Section \ref{sec:iso} to the case where $F,G$ also depend on a parameter $\lambda\in \mathbb{R}$. In other words, we consider a family of processes $(x_\epsilon(t;\lambda))$ indexed by $\epsilon$ and $\lambda$, with corresponding characteristics $F(\cdot,\lambda),G(\cdot,\lambda)$, that we wish to study in the neighbourhood of a point $(x_\star,\lambda_\star)$. We will study two kinds of limits: \begin{enumerate}[noitemsep,label={(\roman*)}] \item \textit{Constant branch:} $Y_\epsilon(t;\lambda_\epsilon):= a_\epsilon( x_\epsilon(b_\epsilon t;\lambda_\epsilon)-x_\star)$, and \item \textit{Non-constant branch:} $Y_\epsilon(t;\lambda_\epsilon):= a_\epsilon( x_\epsilon(b_\epsilon t;\lambda_\epsilon)-x_\star(\lambda_\epsilon))$, \\ for some function $x_\star(\lambda)$ satisfying $x_\star(\lambda_\star)=x_\star$.
\end{enumerate} In both cases the goal is to find $(a_\epsilon),(b_\epsilon)$ and $(\lambda_\epsilon)$ with $a_\epsilon \to \infty$ and $\lambda_\epsilon \to \lambda_\star$ such that the rescaled process is a quasi-diffusion. For simplicity, we consider only a single spatial dimension, i.e., $x \in \mathbb{R}$. Instead of the homogeneous assumption \eqref{eq:FG-hom} on $F,G$, we'll assume existence of a Taylor expansion in $x$ and $\lambda$ around $(x_\star,\lambda_\star)$, then partition the $o(1)$ neighbourhood of $(x_\star,\lambda_\star)$ into regions where $F,G$ have a limiting shape with respect to $x$. We begin by precisely defining a parametrized QDP, and translate Lemma \ref{lem:QDP-QD} to fit that context. \begin{definition}[Parametrized QDP]\label{def:prmtrzdQDP} Let $\lambda$ be a parameter taking values in an interval $I\subset \mathbb{R}$. For each $\lambda \in I$, let $U,F(\,\cdot\,,\lambda),G(\,\cdot\,,\lambda),\mathcal{E},(a_\epsilon),(b_\epsilon),(D_\epsilon)$ be as in Definition \ref{def:QDP}, let $(x_\epsilon(t;\lambda))_{\epsilon \in \mathcal{E},\,\lambda \in I,\,t\ge 0}$ be a collection of semimartingales, and fix a sequence $(\lambda_\epsilon)$. Then $(x_\epsilon(t;\lambda_\epsilon))$ is a parametrized QDP on $(D_\epsilon)$ to order $(a_\epsilon)$ on time scale $(b_\epsilon)$, with characteristics $F,G$, if the estimates of Definition \ref{def:QDP} hold with $x_\epsilon(t;\lambda_\epsilon)$, $F(\,\cdot,\,\lambda_\epsilon)$ and $G(\,\cdot,\,\lambda_\epsilon)$ in place of $x_\epsilon$, $F$ and $G$. \end{definition} The following trivial corollary of Lemma \ref{lem:QDP-QD} gives conditions for a parametrized QDP to be a QD. \begin{corollary}\label{cor:QDP-QD} Let $x_\star,\tilde U$ be as in Lemma \ref{lem:QDP-QD} and suppose that for each domain $D\subset\subset \tilde U$, $(x_\epsilon(t;\lambda_\epsilon))$ is a parametrized QDP on $(D_\epsilon):=(\{x_\star+x/a_\epsilon\colon x \in D\})$ to scale $(a_\epsilon),(b_\epsilon)$, with characteristics $F,G$, and that the following limits exist for every $x \in \tilde U$: \begin{align}\label{eq:FG-par-lim} \tilde F(x) &:= \lim_{\epsilon \to 0}a_\epsilon b_\epsilon F(x_\star + x/a_\epsilon,\lambda_\epsilon) \ \text{and} \nonumber \\ \tilde G(x) &:= \lim_{\epsilon \to 0}\epsilon^2 a_\epsilon^2 b_\epsilon G(x_\star + x/a_\epsilon,\lambda_\epsilon), \end{align} with uniform convergence on compact subsets of $\tilde U$. Then the process $(X_\epsilon)$ defined below is a QD with characteristics $\tilde F,\tilde G$: \begin{align*} X_\epsilon(t) =a_\epsilon(x_\epsilon(b_\epsilon t;\lambda_\epsilon)-x_\star). \end{align*} \end{corollary} \begin{proof} The proof is identical to the proof of Lemma \ref{lem:QDP-QD}, except with $x_\epsilon(\cdot;\lambda_\epsilon),F(\cdot,\lambda_\epsilon)$ and $G(\cdot,\lambda_\epsilon)$ in place of $x_\epsilon,F$ and $G$. \end{proof} To find $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$ such that $F,G$ can satisfy \eqref{eq:FG-par-lim}, as in Section \ref{sec:iso} we use the three-step philosophy of shape, drift to diffusion ratio, and time scale. We begin with case (i), the constant branch, then treat case (ii) in Section \ref{sec:float-equil}. Since $F,G$ now depend on two variables, the step that will occupy the most effort is shape. We shall begin the discussion in the same manner as Section \ref{sec:iso}, then take a theoretical detour to address shape before returning to address the remaining two steps.
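Before turning to shape, let us fix ideas with a small hypothetical example of the two kinds of limits above (the choices of $F,G$ here are purely illustrative). Suppose the drift is logistic-like, $$F(x,\lambda) = x(\lambda - x) = \lambda x - x^2, \qquad G(x,\lambda) = x,$$ so that for each $\lambda>0$ the deterministic flow has equilibria at $x=0$ and at $x=\lambda$. Rescaling around $x_\star=0$, which is an equilibrium for every $\lambda$, is a constant branch limit, while rescaling around $x_\star(\lambda)=\lambda$, which satisfies $x_\star(0)=0$, is a non-constant branch limit. We will return to this example after Table \ref{tab:QVhl}.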
Since the notation gets more complex, we'll mostly use $(x_\star,\lambda_\star)=(0,0)$ to formulate results.\\ \noindent\textit{Shape.} Working back from \eqref{eq:FG-par-lim} with $(x_\star,\lambda_\star)=(0,0)$ and defining $f_\epsilon=1/(a_\epsilon b_\epsilon)$ and $g_\epsilon = 1/(\epsilon^2 a_\epsilon^2 b_\epsilon)$, \begin{align}\label{eq:FG-flip-lim} F(x/a_\epsilon, \lambda_\epsilon) &\sim f_\epsilon \,\tilde F(x) \quad \text{and} \nonumber \\ G(x/a_\epsilon, \lambda_\epsilon) &\sim g_\epsilon \,\tilde G(x). \end{align} In other words, the space and parameter scales $(1/a_\epsilon,\lambda_\epsilon)$ are such that on that scale, $F,G$ are asymptotically the product of a function of $\epsilon$ with a function of $x$. Since we now have a free variable $\lambda$, it is no longer necessary that $F,G$ be locally homogeneous. Instead, the correct generalization to this context is that $(1/a_\epsilon,\lambda_\epsilon)$ be a limit scale for $F$ and $G$, in the following sense. \begin{definition}[limit scale]\label{def:homscale} Let $f$ be defined in a neighbourhood of $(0,0)$. Say that a sequence $(x_n,\lambda_n)$ with limit $(0,0)$ is a limit scale for $f$ if there are functions $v,w$, called the scale functions, such that $f(u\,x_n,\lambda_n)\sim v(x_n,\lambda_n)\,w(u)$ as $n\to\infty$, locally uniformly with respect to $u \in (0,\infty)$. \end{definition} If $(1/a_\epsilon,\lambda_\epsilon)$ is a limit scale for both $F$ and $G$, then letting $v_F,w_F$ and $v_G,w_G$ denote the respective scale functions and rearranging, \eqref{eq:FG-flip-lim} gives \begin{align}\label{eq:tF-tG-ratio2} \tilde F(x) \sim h_\epsilon Q(x) \quad \text{and} \quad \tilde G(x) \sim \ell_\epsilon V(x), \end{align} where, using $f_\epsilon=1/(a_\epsilon b_\epsilon)$ and $g_\epsilon=1/(\epsilon^2 a_\epsilon^2 b_\epsilon)$, \begin{align}\label{eq:h-ell} &Q(x)=w_F(x), \quad V(x)=w_G(x), \nonumber \\ &h_\epsilon = a_\epsilon b_\epsilon v_F(1/a_\epsilon,\lambda_\epsilon) \ \ \text{and} \ \ \ell_\epsilon = \epsilon^2 a_\epsilon^2 b_\epsilon v_G(1/a_\epsilon,\lambda_\epsilon). \end{align} We will return to these expressions after laying out a method for identifying limit scales and scale functions. To get warmed up, suppose $f$ has a Taylor expansion around $(0,0)$ in the sense that \begin{align}\label{eq:loTaylor} f(x,\lambda) \sim \sum_{\alpha \in A}c_\alpha x^{\alpha_1}\lambda^{\alpha_2} \ \ \text{as} \ \ |x|+|\lambda| \to 0 \end{align} for some finite set of powers $A\subset \mathbb{N}^2$ and non-zero coefficients $(c_\alpha)_{\alpha \in A}$. Then any sequence $(x_n,\lambda_n)$ tending to $(0,0)$ along a curve of the form $x=\lambda^m$ for some $m>0$ is a limit scale for $f$, since if $x=\lambda^m$ then letting $s(\alpha,m) = m \alpha_1+\alpha_2$, $s(A,m) = \min \{m\alpha_1+\alpha_2\colon \alpha \in A\}$ and $A(m) = \{\alpha \in A\colon s(\alpha,m) = s(A,m)\}$, for $u\in (0,\infty)$, as $\lambda\to 0$, $$f(ux,\lambda) \sim \sum_{\alpha \in A}c_\alpha u^{\alpha_1}\lambda^{m \alpha_1 + \alpha_2} \sim \lambda^{s(A,m)}\sum_{\alpha \in A(m)}c_\alpha u^{\alpha_1},$$ with uniform convergence over $u$ in compact subsets of $(0,\infty)$. By studying the structure of the sets $(A(m))_{m\in (0,\infty)}$ we can identify all the limit scales for $f$ satisfying \eqref{eq:loTaylor}. We begin with some simple theory of partially ordered sets. \subsection{Envelope of a partially ordered subset of $\mathbb{N}^2$}\label{sec:env} Let $(S,\le)$ be a partially ordered set.
Elements $a,b$ are incomparable if neither $a\le b$ nor $b\le a$ is true, or equivalently, if $a \ne b$ and neither $a<b$ nor $b<a$ is true. A subset $A\subset S$ is an anti-chain (I prefer the term disordered) if $a,b \in A$ and $a\le b$ implies $a=b$, or equivalently, if all distinct pairs of elements in $A$ are incomparable. An element $x \in A$ is minimal in $A$ if $y<x$ implies $y\notin A$. For any $A$, the set $\min(A):= \{x\in A\colon x \ \text{is minimal in} \ A \}$ is disordered. If $\min(A)\subset B\subset A$ then $\min(B)=\min(A)$. If $A$ is disordered then $A=\min(A)$. If $A$ is disordered and $B\subset A$ then $B$ is disordered.\\ Define the partially ordered set $(\mathbb{N}^2,\le)$ by $(i,j) \le (k,\ell)$ if $i\le k$ and $j \le \ell$. \begin{lemma} If $A\subset \mathbb{N}^2$ is disordered, then it is finite. \end{lemma} \begin{proof} Suppose $(j,k) \in A$. If $(\ell,m)\in A$ and $(\ell,m) \ne (j,k)$ then either $\ell<j$ or $m<k$. Moreover, if $(\ell,m) \in A$ then $(\ell,m') \notin A$ for all $m' \ne m$ and $(\ell',m) \notin A$ for all $\ell' \ne \ell$, so for each $\ell \in \{0,\dots,j-1\}$ there is at most one $m$ such that $(\ell,m) \in A$, and for each $m \in \{0,\dots,k-1\}$ there is at most one $\ell$ such that $(\ell,m) \in A$. In other words, in addition to $(j,k)$, $A$ contains at most $j+k$ other elements. \end{proof} If $A \subset \mathbb{N}^2$ is disordered then, since $(\ell,m) \in A \Rightarrow (\ell',m),(\ell,m') \notin A$ for any $\ell'\ne \ell$ or $m' \ne m$, $A$ can be arranged in increasing or decreasing order of either the first or second coordinate. If $\alpha,\alpha'$ are incomparable then $\mathrm{sgn}(\alpha_1-\alpha_1')=-\mathrm{sgn}(\alpha_2-\alpha_2')$, so if $A$ is arranged as $\alpha^{(1)},\dots,\alpha^{(n)}$ with $\alpha_1^{(1)} < \dots < \alpha_1^{(n)}$, then $\alpha_2^{(1)} > \dots > \alpha_2^{(n)}$.\\ For $(x,y)\in \mathbb{R}^2$ and $m\in (0,\infty)$ let $s((x,y),m) = mx + y$. A set $A\subset \mathbb{N}^2$ is an envelope if, for each $\alpha \in A$, there is $m\in (0,\infty)$ such that $s(\alpha,m) \le s(\alpha',m)$ for all $\alpha' \in A$. Envelopes are disordered: if $(k,\ell) \le (i,j)$ then $mk+\ell \le mi+j$ for every $m$, while letting $m$ correspond to $(i,j)$ gives $mi+j\le mk+\ell$; together these imply that $(k,\ell)=(i,j)$. Not all disordered sets are envelopes. For example, take $\{(3,0),(2,2),(0,3)\}$, which is disordered. Then $s((3,0),m)<s((2,2),m)$ for $m<2$ while $s((0,3),m)<s((2,2),m)$ for $m>1/2$, so $(2,2)$ never attains the minimum.\\ For any $A\subset \mathbb{N}^2$ and $m\in (0,\infty)$ let $s(A,m) = \min\{s(\alpha,m)\colon \alpha \in A\}$ and let $A(m) = \{\alpha\in A \colon s(\alpha,m)=s(A,m)\}$. Then $\mathrm{env}(A) := \bigcup_{m\in(0,\infty)} A(m) =\{\alpha \in A\colon s(\alpha,m)=s(A,m) \ \text{for some} \ m \in (0,\infty)\}$ is an envelope, called the envelope of $A$. If $A$ is an envelope then $A=\mathrm{env}(A)$. If $\mathrm{env}(A) \subset B \subset A$ then $\mathrm{env}(B)=\mathrm{env}(A)$. If $\alpha' < \alpha$ then $s(\alpha',m) < s(\alpha,m)$, so if $s(\alpha,m) = s(A,m)$ then $\alpha \in \min(A)$. In particular, $\mathrm{env}(A) \subset \min(A)$. Since $\min(A) \subset A$ it follows that $\mathrm{env}(\min(A)) = \mathrm{env}(A)$. A visual example of a set $A$ together with $\min(A)$ and $\mathrm{env}(A)$ is given in Figure \ref{fig:set-min-env}.
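As a worked check on the set of Figure \ref{fig:set-min-env}, take $A=\{(3,0),(2,2),(0,3),(1,3),(3,3)\}$. Since $(2,2)\le(3,3)$ and $(0,3)\le(1,3)$, neither $(3,3)$ nor $(1,3)$ is minimal, so $$\min(A) = \{(3,0),(2,2),(0,3)\}.$$ On $\min(A)$ we have $s((3,0),m)=3m$, $s((2,2),m)=2m+2$ and $s((0,3),m)=3$, and as in the example above $(2,2)$ never attains the minimum; the minimum is attained by $(3,0)$ for $m\le 1$ and by $(0,3)$ for $m\ge 1$, so $$\mathrm{env}(A) = \{(3,0),(0,3)\}.$$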
\begin{figure} \centering \includegraphics[width=1.5in]{setA.png} \hspace{.4in} \includegraphics[width=1.5in]{minA.png} \hspace{.4in} \includegraphics[width=1.5in]{envA.png} \caption{The set $A=\{(3,0),(2,2),(0,3),(1,3),(3,3)\}$ in blue, with $\min(A)$ in yellow and $\mathrm{env}(A)$ in red.} \label{fig:set-min-env} \end{figure} To understand $\mathrm{env}(A)$ we examine the sets $(A(m))_{m\in (0,\infty)}$, which are explained by the following lemma. \begin{lemma}\label{lem:envelope} Let $A\subset \mathbb{N}^2$ and let $(A(m))_{m\in (0,\infty)}$ be as above. Define the pivot slopes and pivots of $A$ by $$M(A):= \{m \in (0,\infty) \colon |A(m)| \ge 2\}$$ and $$\mathrm{piv}(A):= \{\alpha \in A\colon A(m)=\{\alpha\} \ \text{for some} \ m \in (0,\infty)\}.$$ Then $M(A)$ is a finite set $0<m_1<\dots<m_n<\infty$, and letting $m_0:=0$ and $m_{n+1}:=\infty$, $\mathrm{piv}(A) = \{\alpha(i)\}_{i=0}^n$, where $A(m) = \{\alpha(i)\}$ for all $m\in (m_i,m_{i+1})$, and $\alpha(i-1),\alpha(i) \in A(m_i)$ for each $i$. In addition, \begin{itemize}[noitemsep] \item $i\mapsto \alpha_1(i)$ is decreasing, \item $\alpha_1(i) \le \alpha_1 \le \alpha_1(i-1)$ for $\alpha \in A(m_i)$, \item $\alpha(0) = \arg\max\{\alpha_1\colon (\alpha_1,\alpha_2) \in \min(A)\}$, and \item $\alpha(n) = \arg\min\{\alpha_1\colon (\alpha_1,\alpha_2) \in \min(A)\}$. \end{itemize} Finally, for each $m\in (0,\infty)$ there is $\alpha \in \mathrm{piv}(A)$ such that $s(A,m)=s(\alpha,m)$. \end{lemma} \begin{proof} If $\alpha,\alpha'$ are incomparable then $m(\alpha,\alpha'):=(\alpha_2'-\alpha_2)/(\alpha_1-\alpha_1') \in (0,\infty)$, so \begin{enumerate}[noitemsep,label={(\roman*)}] \item $s(\alpha,m)=s(\alpha',m) \Leftrightarrow m(\alpha_1-\alpha_1') = \alpha_2'-\alpha_2 \Leftrightarrow m=m(\alpha,\alpha')$, and \item if $\alpha_1<\alpha_1'$ then $s(\alpha,m)<s(\alpha',m) \Leftrightarrow m>m(\alpha,\alpha')$. \end{enumerate} Using (ii), $M(A) \subset \{m(\alpha,\alpha') \colon \alpha,\alpha' \in A, \ \alpha \ne \alpha'\}$, so $M(A)$ is finite. Since $m\mapsto s(\alpha,m)$ is continuous for each $\alpha$ and $\mathrm{env}(A)$ is finite, using (i), $$\{m\in (0,\infty)\colon A(m)=\{\alpha\}\} = \bigcap_{\alpha' \in \mathrm{env}(A)\setminus \{\alpha\}}\{m\in (0,\infty)\colon s(\alpha,m)<s(\alpha',m)\}$$ is an open interval and similarly, using (i)-(ii), $\{m\in (0,\infty)\colon \alpha \in A(m)\}$ is a relatively closed interval in $(0,\infty)$. Let $\alpha$ be such that $A(m)=\{\alpha\}$ for some $m$, and let $(\underline m, \overline m)=\{m\colon A(m)=\{\alpha\}\}$. Then \begin{itemize}[noitemsep] \item either $\underline m=0$, or $\alpha \in A(\underline m)$ and $\underline m \in M(A)$, and \item either $\overline m=\infty$, or $\alpha \in A(\overline m)$ and $\overline m \in M(A)$. \end{itemize} In all cases $(\underline m, \overline m ) = (m_i,m_{i+1})$ for some $i \in \{0,\dots,n\}$. In particular, $A(m)$ is constant on each set $(m_i,m_{i+1})$, and letting $\alpha(i)$ be its unique element, $\alpha(i-1),\alpha(i) \in A(m_i)$. If $\alpha,\alpha' \in A(m_i)$ and $\alpha \ne \alpha'$ then $m(\alpha,\alpha')=m_i$. If moreover $\alpha_1<\alpha_1'$ then from (ii) above, \begin{itemize}[noitemsep] \item $s(\alpha,m)<s(\alpha',m) \Leftrightarrow m>m_i$ so $\alpha' \ne \alpha(i)$, and \item $s(\alpha',m)<s(\alpha,m) \Leftrightarrow m<m_i$ so $\alpha \ne \alpha(i-1)$.
\end{itemize} Since $\alpha(i-1),\alpha(i) \in A(m_i)$, it follows that \begin{itemize}[noitemsep] \item $\alpha(i-1) = \arg\max\{\alpha_1\colon (\alpha_1,\alpha_2)\in A(m_i)\}$ and \item $\alpha(i) = \arg\min\{\alpha_1\colon (\alpha_1,\alpha_2) \in A(m_i)\}$. \end{itemize} In particular, $\alpha(i-1) \ne \alpha(i)$, and $\alpha_1(i-1)>\alpha_1(i)$. Let $\alpha = \arg\max\{\alpha_1\colon (\alpha_1,\alpha_2)\in \min(A)\}$. Then for $\alpha' \in \min(A)\setminus \{\alpha\}$ and small $m$, $s(\alpha,m)<s(\alpha',m)$, so $s(\alpha,m)=s(A,m)$ for small $m$ which means that $\alpha=\alpha(0)$. Similarly one can show that $\alpha(n)=\arg\min\{\alpha_1\colon(\alpha_1,\alpha_2)\in\min(A)\}$. To prove the last statement -- that for each $m$ there is $\alpha\in \mathrm{piv}(A)$ such that $s(\alpha,m)=s(A,m)$ -- simply note that for each $m$, $\mathrm{piv}(A)\cap A(m) \ne \emptyset$. \end{proof} There is a nice visual interpretation of Lemma \ref{lem:envelope}. Define the set of lines $(L(A,m))_{m\in [0,\infty]}$ as follows: \begin{itemize}[noitemsep] \item $L(A,0) := \{(x,y) \in \mathbb{R}^2 \colon y = \min\{\alpha_2\colon(\alpha_1,\alpha_2) \in A\}\}$, \item $L(A,\infty) := \{(x,y) \in \mathbb{R}^2 \colon x = \min\{\alpha_1\colon(\alpha_1,\alpha_2) \in A\}\}$, and \item for $m\in (0,\infty)$, $L(A,m):= \{(x,y) \in \mathbb{R}^2\colon s((x,y),m) = s(A,m)\}$. \end{itemize} Then each line $L(A,m)$ is a lower bound and a supporting line of $A$, in the sense that \begin{itemize}[noitemsep] \item $L(A,m) \cap A$ is non-empty, \item if $\alpha \in A$ then $\alpha \ge \alpha'$ for some $\alpha' \in L(A,m)$, and \item if $\alpha \in L(A,m)$ and $\alpha'<\alpha$ then $\alpha' \notin A$. \end{itemize} By definition, $A(m)= A \cap L(A,m)$ for $m\in (0,\infty)$. As a function of $m$, $L(A,m)$ begins at $m=0$ as a horizontal line through the lowest points of $A$, finishes at $m=\infty$ as a vertical line through the leftmost points of $A$, and as $m$ increases from $0$ to $\infty$, rotates clockwise around the pivots of $A$, changing pivots when $m$ is equal to a pivot slope. The contour followed by the lines $(L(A,m))$ is the set $L(A)$ defined by \begin{align}\label{eq:LofA} L_+(A,m) &= \{(x,y) \in \mathbb{R}^2\colon s((x,y),m) \ge s(A,m)\} \ \text{and} \nonumber \\ L(A) &= \bigcap_{m\in (0,\infty)} L_+(A,m)\cap \bigcup_{m\in [0,\infty]}L(A,m). \end{align} Geometrically, $L(A)$ is made up of $n$ line segments, connecting $\alpha(i-1)$ to $\alpha(i)$ for $i=1,\dots,n$, together with the portion of $L(A,0)$ to the right of $\alpha(0)$, and the portion of $L(A,\infty)$ above $\alpha(n)$. It is the boundary, as well as a kind of set-wise greatest lower bound, of the set $S(A)$ defined by \begin{align}\label{eq:SofA} S(A) &= \bigcap_{m\in (0,\infty)}L_+(A,m). \end{align} An example is shown in Figure \ref{fig:set-slopes-contour}.
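To see Lemma \ref{lem:envelope} in action on the set of Figure \ref{fig:set-slopes-contour}, take $A=\{(4,0),(2,1),(1,2),(2,2),(3,3),(1,3),(0,4)\}$, for which $\min(A)=\{(4,0),(2,1),(1,2),(0,4)\}$. On $\min(A)$, $$s((4,0),m)=4m, \quad s((2,1),m)=2m+1, \quad s((1,2),m)=m+2, \quad s((0,4),m)=4,$$ and comparing consecutive pairs gives the pivot slopes $M(A)=\{1/2,1,2\}$ and pivots $\alpha(0)=(4,0)$, $\alpha(1)=(2,1)$, $\alpha(2)=(1,2)$, $\alpha(3)=(0,4)$; for instance, $A(m)=\{(2,1)\}$ for $m\in(1/2,1)$ and $A(1)=\{(2,1),(1,2)\}$. Here $\mathrm{env}(A)=\mathrm{piv}(A)=\min(A)$, and $L(A)$ consists of the three segments joining consecutive pivots, the horizontal ray to the right of $(4,0)$, and the vertical ray above $(0,4)$.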
\begin{figure} \centering \includegraphics[width=1.5in]{setB.png} \hspace{.4in} \includegraphics[width=1.5in]{slopesB.png} \hspace{.4in} \includegraphics[width=1.5in]{contourB.png} \caption{The set $A=\{(4,0),(2,1),(1,2),(2,2),(3,3),(1,3),(0,4)\}$ in blue (left), with the lines $L(A,m_i)$ and shaded regions $L_+(A,m_i)$ evaluated at the pivot slopes $m_1=1/2, m_2=1$ and $m_3=2$ (center) and the contour $L(A)$ with the shaded region $S(A)$ and $\mathrm{env}(A)$ highlighted in red (right).} \label{fig:set-slopes-contour} \end{figure} \subsection{Dominant terms and limit scales for $f(x,\lambda)$}\label{sec:domterms} Next, we show that if $f$ has a Taylor expansion in the sense of \eqref{eq:loTaylor} then we can sum over $\mathrm{env}(A)$ instead of $A$, and we identify the limit scales for $f$ in the sense of Definition \ref{def:homscale}, and the form of the product expansion $v(x,\lambda)w(u)$. Note that if $f$ is analytic at $(0,0)$ then letting $(c_\alpha)_{\alpha \in \mathbb{N}^2}$ be the coefficients in its power series, it satisfies \eqref{eq:loTaylor} with $A=\min\{\alpha\colon c_\alpha \ne 0\}$. First we show that only the terms in $\mathrm{env}(A)$ are important. \begin{lemma}\label{lem:envelope-sum} Suppose $f$ has a Taylor expansion around $(0,0)$ as in \eqref{eq:loTaylor}, with powers $A$ and non-zero coefficients $(c_{\alpha})_{\alpha \in A}$. Then \begin{align}\label{eq:envelope-sum} f(x,\lambda) \sim \sum_{\alpha \in \mathrm{env}(A)}c_\alpha x^{\alpha_1}\lambda^{\alpha_2} \ \ \text{as} \ \ |x|+|\lambda| \to 0. \end{align} \end{lemma} \begin{proof} The result is clear if $A=\emptyset$. We will assume $x,\lambda \ge 0$, as the other cases follow in the same way with a change of sign. Since $\mathrm{env}(\min(A)) = \mathrm{env}(A)$ and $\min(A)$ is disordered, it suffices to show that \begin{itemize}[noitemsep] \item \eqref{eq:loTaylor} holds with $\min(A)$ in place of $A$, and \item \eqref{eq:envelope-sum} holds assuming $A$ is disordered. \end{itemize} The first step is easy: since $A$ is finite, for any $\alpha \in A$ with $c_\alpha \ne 0$, $$\sum_{\{\alpha' \in A \colon \alpha' \ge \alpha\}} c_{\alpha'}x^{\alpha_1'}\lambda^{\alpha_2'} = (c_\alpha+o(1))x^{\alpha_1}\lambda^{\alpha_2},$$ and since $A \subset \bigcup_{\alpha \in \min(A)}\{\alpha'\in \mathbb{N}^2 \colon \alpha' \ge \alpha\}$, it follows that $f(x,\lambda) \sim \sum_{\alpha \in \min(A)}c_\alpha x^{\alpha_1} \lambda^{\alpha_2}$. Now suppose $A$ is disordered. Since $A$ is finite, it suffices to show that for $\alpha=(\alpha_1,\alpha_2) \in A\setminus \mathrm{env}(A)$, \begin{align}\label{eq:env-sum} x^{\alpha_1}\lambda^{\alpha_2} \ll \sum_{\alpha'\in \mathrm{env}(A)} x^{\alpha_1'}\lambda^{\alpha_2'}. \end{align} Let $\{\alpha(i)\}_{i=0}^n \subset \mathrm{env}(A)$ denote the pivots from Lemma \ref{lem:envelope}. Since $A$ is disordered, $A=\min(A)$, so $\alpha(0)=\arg\max\{\alpha_1\colon \alpha \in A\}$ and $\alpha(n) =\arg\min\{\alpha_1\colon \alpha \in A\}$. Thus if $\alpha \in A\setminus\mathrm{env}(A)$ then for some $i$, $\alpha_1(i)<\alpha_1 < \alpha_1(i-1)$, and $s(\alpha(i-1),m_i) = s(\alpha(i),m_i)=s(A,m_i)<s(\alpha,m_i)$.
\begin{itemize}[noitemsep] \item If $(x,\lambda) \in [0,1]^2$ and $x = z \lambda^{m_i}$ with $z\le 1$ then, since $\alpha_1>\alpha_1(i)$, $z^{\alpha_1} \le z^{\alpha_1(i)}$, so as $\lambda \to 0$, $$x^{\alpha_1}\lambda^{\alpha_2} = z^{\alpha_1}\lambda^{m_i \alpha_1 + \alpha_2} \ll z^{\alpha_1(i)}\lambda^{m_i\alpha_1(i) + \alpha_2(i)} = x^{\alpha_1(i)}\lambda^{\alpha_2(i)}.$$ \item If $(x,\lambda) \in [0,1]^2$ and $x= z\lambda^{m_i}$ with $z \ge 1$ then, since $\alpha_1<\alpha_1(i-1)$, $z^{\alpha_1} \le z^{\alpha_1(i-1)}$, so as $\lambda \to 0$, $$x^{\alpha_1}\lambda^{\alpha_2} = z^{\alpha_1}\lambda^{m_i\alpha_1 + \alpha_2} \ll z^{\alpha_1(i-1)}\lambda^{m_i\alpha_1(i-1)+\alpha_2(i-1)} = x^{\alpha_1(i-1)}\lambda^{\alpha_2(i-1)}.$$ \end{itemize} It follows that $x^{\alpha_1}\lambda^{\alpha_2} \ll x^{\alpha_1(i-1)}\lambda^{\alpha_2(i-1)} + x^{\alpha_1(i)}\lambda^{\alpha_2(i)}$, which implies \eqref{eq:env-sum}. \end{proof} The next step is to refine Lemma \ref{lem:envelope-sum} by partitioning the vicinity of $(0,0)$ into regions where specific terms in the sum \eqref{eq:envelope-sum} are dominant. Conveniently, these regions also determine the limit scales for $f$ in the sense of Definition \ref{def:homscale}. It suffices to work on $[0,1]^2$, as the results map to other quadrants by mapping that quadrant into $[0,1]^2$ by a change of sign. Just a note that I am picturing $[0,1]^2$ in $\mathbb{R}^2$ with $\lambda$ on the horizontal axis and $x$ on the vertical axis, as is tradition for bifurcation diagrams.\\ The usual notion of partition is a bit too strong for sets of sequences, so we replace it with the following partition-like notion that we call a subpartition. \begin{definition}[subpartition]\label{def:subpart} Let $S$ be a set of sequences, let $\mathcal{P}$ be a collection of subsets of $S$ and define the domain of $\mathcal{P}$ by $\mathrm{dom}(\mathcal{P}) := \bigcup_{A \in \mathcal{P}}A$. Then $\mathcal{P}$ is a subpartition of $S$ if \begin{itemize}[noitemsep] \item each $A\in \mathcal{P}$ is closed under the taking of subsequences, \item $A,B \in \mathcal{P}$ and $A \ne B$ implies $A\cap B=\emptyset$, and \item every element of $S$ has a subsequence belonging to $\mathrm{dom}(\mathcal{P})$. \end{itemize} \end{definition} A compelling example is the following: let $K$ be a compact set and $S$ the set of sequences with values in $K$, then let $\mathcal{P} = \{A_x\}_{x \in K}$ where $A_x$ is the set of sequences in $S$ with limit $x$. Another example, which is the one we'll use for limit scales, is given by the following definition. \begin{definition}[subpartition induced by a finite set]\label{def:subpartM} Let $M=\{m_i\}_{i=1}^n \subset (0,\infty)$ be a non-empty finite set with $0<m_1<\dots <m_n<\infty$. Abusing notation somewhat, let $(x,\lambda)$ denote a sequence in $[0,1]^2$ that converges to $(0,0)$ and define $(R_i)_{i=0}^{2n}$ by \begin{align}\label{eq:asym-regions} & R_0 = \{(x,\lambda) \colon x \gg \lambda^{m_1}\}, \nonumber \\ & R_{2i-1} = \{(x,\lambda) \colon x \asymp \lambda^{m_i}\}, \ i\in\{1,\dots,n\}, \nonumber \\ & R_{2i} = \{(x,\lambda) \colon \lambda^{m_i} \gg x \gg \lambda^{m_{i+1}}\}, \ i \in \{1,\dots,n-1\}, \ \text{and} \nonumber \\ & R_{2n} = \{(x,\lambda)\colon \lambda^{m_n} \gg x\}. \end{align} The collection $\{R_i\}_{i=0}^{2n}$ is called the subpartition induced by $M$ and denoted $\mathrm{sub}(M)$. \end{definition} The following is not hard to show, so we'll omit the proof. \begin{lemma}\label{lem:subpartM} Let $M$ and $\mathrm{sub}(M)$ be as in Definition \ref{def:subpartM}.
\begin{itemize}[noitemsep] \item $\mathrm{sub}(M)$ is a subpartition of sequences in $[0,1]^2$ that converge to $(0,0)$. \item If $M\supset M'$ and $(x,\lambda)\in \mathrm{dom}(\mathrm{sub}(M))$, then $(x,\lambda) \in \mathrm{dom}(\mathrm{sub}(M'))$. \end{itemize} \end{lemma} In the next result we show that if $f$ has a Taylor expansion around $(0,0)$ with powers $A$ and $M\supset M(A)$ then on each element of the subpartition induced by $M$, $f$ is asymptotic to the sum of the dominant terms on that element. The reason we formulate it in terms of $M\supset M(A)$ and not just $M(A)$ is because, once we deal with both $F$ and $G$, we need to join the corresponding subpartitions, which refines each one. \begin{lemma}\label{lem:dom-terms} Suppose $f$ has a Taylor expansion around $(0,0)$ with powers $A$ as in \eqref{eq:loTaylor}, and that $A \ne \emptyset$, and let $(A(m))_{m\in(0,\infty)}$ and $M(A)$ be as in Lemma \ref{lem:envelope}. Suppose $M = \{m_i\}_{i=1}^n\subset (0,\infty)$ is a finite set with $0<m_1<\dots<m_n<\infty$, and that $M(A) \subset M$. Let $m_0=0$ and $m_{n+1}=\infty$ and for $i\in \{0,\dots,n\}$ let $\alpha(i)$ be such that $A(m)=\{\alpha(i)\}$ for $m\in (m_i,m_{i+1})$. Let $\{R_i\}_{i=0}^{2n}$ denote the subpartition induced by $M$, as in Definition \ref{def:subpartM}. \begin{enumerate}[noitemsep,label={(\roman*)}] \item If $(x,\lambda) \in R_{2i}$ for $i\in \{0,\dots,n\}$ then $f(x,\lambda) \sim c_{\alpha(i)} x^{\alpha_1(i)}\lambda^{\alpha_2(i)}$. \item If $(x,\lambda) \in R_{2i-1}$ for $i \in \{1,\dots,n\}$ then $$f(x,\lambda) \sim \sum_{\alpha \in A(m_i)} c_\alpha x^{\alpha_1}\lambda^{\alpha_2}.$$ \end{enumerate} Letting $A_{2i} = \{\alpha(i)\}$ and $A_{2i-1} = A(m_i)$, for $i\in \{0,\dots,2n\}$ and $(x,\lambda) \in R_i$, \begin{enumerate}[noitemsep,label={(\roman*)}] \item[(iii)] $f(x,\lambda) \sim \sum_{\alpha \in A_i}c_\alpha x^{\alpha_1}\lambda^{\alpha_2}$, and \item[(iv)] for any $\alpha \in A_i$, $f(x,\lambda) = O(x^{\alpha_1}\lambda^{\alpha_2})$. \end{enumerate} \end{lemma} For the sake of disambiguation, it should be noted that the definition of $\{\alpha(i)\}_{i=0}^n$ given here agrees with the one in Lemma \ref{lem:envelope} iff $M=M(A)$. \begin{proof} We first prove (iii)-(iv). (iii) is simply a summary of (i)-(ii). (iv) is trivial if $i$ is even, since $A_i$ is a singleton. For odd $i=2j-1$, if $(x,\lambda)\in R_i$ then $x\sim z\lambda^{m_j}$ for some constant $z\in(0,\infty)$, and if $\alpha \in A_i$ then $s(\alpha,m_j)=s(A,m_j)$, so $$x^{\alpha_1}\lambda^{\alpha_2} = z^{\alpha_1}\lambda^{\alpha_1 m_j + \alpha_2} = z^{\alpha_1} \lambda^{s(\alpha,m_j)} \asymp \lambda^{s(A,m_j)}.$$ Since $A_i$ is finite, $f(x,\lambda) = O(\lambda^{s(A,m_j)})$, and (iv) follows.\\ We now prove (i)-(ii). Referring to Lemma \ref{lem:envelope}, since $M(A) \subset M$ and $M$ is arranged in increasing order, \begin{itemize}[noitemsep] \item $\{\alpha(i)\}_{i=0}^n = \mathrm{piv}(A)$, \item $\alpha(0)=\arg\max\{\alpha_1\colon \alpha \in \min(A)\}$, \item $\alpha(n)=\arg\min\{\alpha_1\colon \alpha \in \min(A)\}$, \item $\alpha_1(i-1)>\alpha_1(i)$ and $\alpha(i-1),\alpha(i) \in A(m_i)$ if $m_i \in M(A)$, and \item $\alpha(i-1)=\alpha(i)$ and $A(m_i)=\{\alpha(i-1)\}=\{\alpha(i)\}$ if $m_i \in M\setminus M(A)$. \end{itemize} Suppose $(x,\lambda) \in R_{2i}$ and $\alpha \in \mathrm{env}(A)\setminus \{\alpha(i)\}$.
If $\alpha_1 \le \alpha_1(i)$, then since $\mathrm{env}(A)$ is disordered, $\alpha_1<\alpha_1(i)$ and since $\alpha(n)=\arg\min\{\alpha_1\colon \alpha\in \min(A)\}$, $i<n$ and $m_{i+1}<\infty$. Since $\alpha(i)\in A(m_{i+1})$, $s(\alpha(i),m_{i+1}) \le s(\alpha,m_{i+1})$, so if $z$ satisfies $x=z\lambda^{m_{i+1}}$ then since $(x,\lambda) \in R_{2i}$, $\lambda \ll 1$ and $z\gg 1$, so $$x^{\alpha_1}\lambda^{\alpha_2} = z^{\alpha_1}\lambda^{s(\alpha,m_{i+1})} \ll z^{\alpha_1(i)}\lambda^{s(\alpha(i),m_{i+1})} = x^{\alpha_1(i)}\lambda^{\alpha_2(i)}.$$ If instead $\alpha_1 > \alpha_1(i)$ then since $\alpha(0) = \arg\max\{\alpha_1\colon \alpha\in \min(A)\}$, $i>0$ so $m_i>0$. Since $\alpha(i) \in A(m_i)$, $s(\alpha(i),m_i) \le s(\alpha,m_i)$, so if $z$ satisfies $x=z\lambda^{m_i}$, then since $(x,\lambda) \in R_{2i}$, $\lambda \ll 1$ and $z \ll 1$, so $$x^{\alpha_1}\lambda^{\alpha_2} = z^{\alpha_1}\lambda^{s(\alpha,m_i)} \ll z^{\alpha_1(i)} \lambda^{s(\alpha(i),m_i)} = x^{\alpha_1(i)}\lambda^{\alpha_2(i)},$$ which implies (i). If $(x,\lambda) \in R_{2i-1}$ and $x=z\lambda^{m_i}$ then $z \asymp 1$, so for any $\alpha,\alpha' \in A$, $$x^{\alpha_1'}\lambda^{\alpha_2'} = z^{\alpha_1'} \lambda^{s(\alpha',m_i)} \asymp \lambda^{s(\alpha',m_i)} \begin{cases} = \lambda^{s(\alpha,m_i)} & \text{if} \ \ s(\alpha',m_i)=s(\alpha,m_i) \\ \ll \lambda^{s(\alpha,m_i)} & \text{if} \ \ s(\alpha',m_i) > s(\alpha,m_i),\end{cases}$$ which implies (ii). \end{proof} Using Lemma \ref{lem:dom-terms} and a bit of extra work, we show the limit scales for $f$ satisfying \eqref{eq:loTaylor} are exactly the sequences in $\mathrm{dom}(\mathrm{sub}(M(A)))$, and identify the corresponding scale functions on each element of $\mathrm{sub}(M(A))$. \begin{theorem}\label{thm:limscale} Suppose $f$ has a Taylor expansion around $(0,0)$ with non-empty set of powers $A$ as in \eqref{eq:loTaylor}, and let $M(A)$ be as in Lemma \ref{lem:envelope}, $\mathrm{sub}(\dots)$ as in Definition \ref{def:subpartM} and $\mathrm{dom}(\dots)$ as in Definition \ref{def:subpart}. Then $(x,\lambda)$ is a limit scale for $f$ if and only if $(x,\lambda) \in \mathrm{dom}(\mathrm{sub}(M(A)))$.\\ In particular, if $M\supset M(A)$ and $(x,\lambda) \in \mathrm{dom}(\mathrm{sub}(M))$ then $(x,\lambda)$ is a limit scale for $f$, and defining $(R_i)_{i=0}^{2n}$ using $M$ as in Definition \ref{def:subpartM}, in the notation of Lemma \ref{lem:dom-terms}, the corresponding scale functions are given by \begin{itemize}[noitemsep] \item $v_{2i}(x,\lambda) := c_{\alpha(i)}x^{\alpha_1(i)}\lambda^{\alpha_2(i)}$ and $w_{2i}(u) := u^{\alpha_1(i)}$ on $(R_{2i})_{i=0}^n$, and by \item $v_{2i-1}(x,\lambda) := \lambda^{s(A,m_i)}$ and $w_{2i-1}(u) := \sum_{\alpha \in A(m_i)} c_\alpha z^{\alpha_1} u^{\alpha_1}$ on $(R_{2i-1})_{i=1}^n$, \end{itemize} where for each $(x,\lambda) \in R_{2i-1}$, $z$ is the unique value such that $x \sim z \lambda^{m_i}$. \end{theorem} \begin{proof} We first show that if $M\supset M(A)$ and $(x,\lambda) \in \mathrm{dom}(\mathrm{sub}(M))$ then $(x,\lambda)$ is a limit scale for $f$, and identify the given scale functions. We then show that if $(x,\lambda) \notin \mathrm{dom}(\mathrm{sub}(M(A)))$ then $(x,\lambda)$ is not a limit scale for $f$. If $M\subset (0,\infty)$ is any finite set, then elements of $\mathrm{sub}(M)$ are closed under positive scaling of either variable. That is, defining $(m_i)$, $(R_i)$ as in Definition \ref{def:subpartM}, if $(x,\lambda) \in R_i$ and $u\in (0,\infty)$ then $(ux,\lambda),(x,u\lambda)\in R_i$, for $i\in \{0,\dots,2n\}$.
Suppose $M\supset M(A)$ and $(x,\lambda) \in \mathrm{dom}(\mathrm{sub}(M))$, so that $(x,\lambda) \in R_i$ for some $i$. If $(x,\lambda) \in R_{2i}$ then in the notation of Lemma \ref{lem:dom-terms}, $$f(ux,\lambda) \sim c_{\alpha(i)}(ux)^{\alpha_1(i)}\lambda^{\alpha_2(i)}$$ which has the desired form with $v_{2i}(x,\lambda)=c_{\alpha(i)}x^{\alpha_1(i)}\lambda^{\alpha_2(i)}$ and $w_{2i}(u) = u^{\alpha_1(i)}$. If $(x,\lambda) \in R_{2i-1}$, letting $z$ be the constant such that $x\sim z\lambda^{m_i}$, then using Lemma \ref{lem:dom-terms} and recalling that $A(m_i):= \{\alpha \in A\colon m_i\alpha_1+\alpha_2=s(A,m_i)\}$, $$f(ux,\lambda) \sim \sum_{\alpha \in A(m_i)}c_\alpha (ux)^{\alpha_1}\lambda^{\alpha_2} = \lambda^{s(A,m_i)}\sum_{\alpha \in A(m_i)}c_\alpha z^{\alpha_1}u^{\alpha_1},$$ which has the desired form with $v_{2i-1}(x,\lambda) = \lambda^{s(A,m_i)}$ and $w_{2i-1}(u) = \sum_{\alpha \in A(m_i)} c_\alpha z^{\alpha_1}u^{\alpha_1}$. In both cases, it is clear, by examining the proof of Lemma \ref{lem:dom-terms}, that for each $R_i$ and any fixed sequence $(x,\lambda)\in R_i$, convergence of $f(ux,\lambda)/(v_i(x,\lambda)w_i(u))$ to $1$ as $|x|+|\lambda|\to 0$ is uniform over $u$ in compact subsets of $(0,\infty)$. It remains to show that if $(x,\lambda) \notin \mathrm{dom}(\mathrm{sub}(M(A)))$ then $(x,\lambda)$ is not a limit scale for $f$. If $(x,\lambda) \notin \mathrm{dom}(\mathrm{sub}(M(A)))$ then it has subsequences $(x',\lambda')$ and $(x''',\lambda''')$ belonging to distinct elements of $\mathrm{sub}(M(A))$. This is verified as follows. \begin{itemize}[noitemsep] \item By Lemma \ref{lem:subpartM}, $(x,\lambda)$ has a subsequence $(x',\lambda') \in \mathcal{A}$ for some $\mathcal{A} \in \mathrm{sub}(M(A))$, and \item Since $(x,\lambda) \notin \mathcal{A}$, by definition of $(R_i)$, $(x,\lambda)$ has a subsequence $(x'',\lambda'')$ that has no subsequence in $\mathcal{A}$; for example, if $\mathcal{A}=R_{2i-1}$, take $(n_k)$ so that $x_{n_k}/\lambda^{m_i}_{n_k}$ tends to $0$ or $\infty$. Using Lemma \ref{lem:subpartM} again, $(x'',\lambda'')$ has a subsequence $(x''',\lambda''')$ in $\mathcal{A}'$ for some $\mathcal{A}' \in \mathrm{sub}(M(A))$ with $\mathcal{A}' \ne \mathcal{A}$. \end{itemize} Let $i \ne j$ be such that $\mathcal{A}=R_i$ and $\mathcal{A}'=R_j$ and let $v_i,w_i$ and $v_j,w_j$ be the limit scale functions given above. Then $w_i,w_j$ are polynomials and with $(A_i)$ as in Lemma \ref{lem:dom-terms}, since $A_i \ne A_j$ for $i\ne j$, $w_i$ is not a multiple of $w_j$. Suppose $(x,\lambda)$ is a limit scale and let $v,w$ be the corresponding scale functions. Then $$f(ux',\lambda') \sim v_i(x',\lambda')w_i(u) \sim v(x',\lambda')w(u)$$ and $$f(ux''',\lambda''') \sim v_j(x''',\lambda''')w_j(u) \sim v(x''',\lambda''')w(u).$$ If $u$ is such that $w_i(u),w_j(u),w(u) \ne 0$ then $$w_i(u)/w(u) \sim v(x',\lambda')/v_i(x',\lambda') \ \text{and} \ w_j(u)/w(u) \sim v(x''',\lambda''')/v_j(x''',\lambda''').$$ In particular, on $\{u\in(0,\infty)\colon w_i(u)w_j(u)w(u) \ne 0\}$, $w_i(u)/w(u)$ and $w_j(u)/w(u)$ are constant, so $w_i(u)/w_j(u)$ is constant, i.e., $w_i$ is a multiple of $w_j$, which it is not. \end{proof} \subsection{Limit scales around a constant branch}\label{sec:const-equil} We now return to the discussion of shape that was interrupted just before Section \ref{sec:env}.
Recall that $F(x,\lambda),G(x,\lambda)$ are the characteristics of a parametrized QDP $(x_\epsilon(t;\lambda_\epsilon))_{t\ge 0}$ in the sense of Definition \ref{def:prmtrzdQDP}, and we want $(1/a_\epsilon,\lambda_\epsilon)$ to be a limit scale for both $F$ and $G$. Assume that $F,G$ have a Taylor expansion around $(0,0)$ with respective powers $A,B$ and non-zero coefficients $(c_\alpha)_{\alpha \in A}$, $(d_\beta)_{\beta \in B}$, in the sense of \eqref{eq:loTaylor}. Then by Theorem \ref{thm:limscale}, $(1/a_\epsilon,\lambda_\epsilon)$ is a limit scale for both $F$ and $G$ iff it is in the domain of the subpartitions defined by both $A$ and $B$. This can be expressed concisely as follows. Let $A(m),B(m)$ and $M(A),M(B)$ be as in Lemma \ref{lem:envelope}, and let $M=M(A)\cup M(B)$. It is easy to check that $$\mathrm{sub}(M) = \mathrm{sub}(M(A)) \vee \mathrm{sub}(M(B)) = \{S \cap S'\colon S\in \mathrm{sub}(M(A)), \ S' \in \mathrm{sub}(M(B))\}.$$ It follows that $(1/a_\epsilon,\lambda_\epsilon)$ is a limit scale for both $F$ and $G$ iff it is in $\mathrm{dom}(\mathrm{sub}(M))$. Label $\mathrm{sub}(M)$ by $(R_i)_{i=0}^{2n}$ as in Definition \ref{def:subpartM}, and let $(\alpha(i))_{i=0}^n$ and $(\beta(i))_{i=0}^n$ be as in Lemma \ref{lem:dom-terms}, corresponding to $A,B$ respectively. Referring back to \eqref{eq:tF-tG-ratio2}-\eqref{eq:h-ell} and using the scale functions of Theorem \ref{thm:limscale}, if $(1/a_\epsilon,\lambda_\epsilon) \in R_i$ then $Q,V,h_\epsilon,\ell_\epsilon$ are as given in Table \ref{tab:QVhl}. Note that in all cases, \begin{align}\label{eq:h-ell-Gma} h_\epsilon &\asymp a_\epsilon^{1-\alpha_1(i)}b_\epsilon \lambda_\epsilon^{\alpha_2(i)} \quad \text{and} \nonumber \\ \ell_\epsilon &\asymp \epsilon^2 a_\epsilon^{2-\beta_1(i)}b_\epsilon \lambda_\epsilon^{\beta_2(i)}. \end{align} On $R_{2i}$, this is immediate; on $R_{2i-1}$, use $1/a_\epsilon\sim z\lambda_\epsilon^{m_i}$ and the fact that $s(\alpha,m_i) = s(A,m_i)$ for all $\alpha \in A(m_i)$ and $\alpha(i) \in A(m_i)$, similarly for $B(m_i)$, to obtain \eqref{eq:h-ell-Gma}.\\ \begin{table} \bgroup \def\arraystretch{2} \begin{tabular}{c | c | c | c | c} $(1/a_\epsilon,\lambda_\epsilon) \in \bullet$ & $Q$ & $V$ & $h_\epsilon$ & $\ell_\epsilon$ \\ \hline \hline $R_{2i}$ & $c_{\alpha(i)}x^{\alpha_1(i)}$ & $d_{\beta(i)}x^{\beta_1(i)}$ & $a_\epsilon^{1-\alpha_1(i)} b_\epsilon \lambda_\epsilon^{\alpha_2(i)}$ & $\epsilon^2 a_\epsilon^{2-\beta_1(i)} b_\epsilon \lambda_\epsilon^{\beta_2(i)}$ \\ \hline $R_{2i-1}$ & $\sum_{\alpha \in A(m_i)}c_\alpha (zx)^{\alpha_1}$ & $\sum_{\beta \in B(m_i)} d_\beta (zx)^{\beta_1}$ & $a_\epsilon b_\epsilon \lambda_\epsilon^{s(A,m_i)}$ & $\epsilon^2 a_\epsilon^2 b_\epsilon \lambda_\epsilon^{s(B,m_i)}$ \end{tabular} \egroup \caption{Shape and scale of characteristics, in the notation of Lemma \ref{lem:dom-terms}. \\ On $R_{2i-1}$, $z$ is the constant such that $1/a_\epsilon \sim z \lambda_\epsilon^{m_i}$.} \label{tab:QVhl} \end{table} \noindent\textit{Drift to diffusion ratio.} Let $\delta(i) = \alpha(i)-\beta(i)$ and define $r$ on $[0,1]^2$ piecewise by \begin{align}\label{eq:dd-fcn} r(x,\lambda) = x^{1+\delta_1(i)}\lambda^{\delta_2(i)} \ \ \text{for} \ \ \lambda^{m_{i+1}} \le x \le \lambda^{m_i}, \end{align} with $m_0:=0$ and $m_{n+1}:=\infty$ (so $\lambda^{m_0}=1$ and $\lambda^{m_{n+1}}=0$). Since $\alpha(i)$ and $\alpha(i+1)$ both minimize $s(\alpha, m_{i+1})$ over $\alpha \in A$, and similarly for $\beta$, $r$ is well-defined on the curves $x=\lambda^{m_{i+1}}$ and continuous on $[0,1]^2$.
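To illustrate Table \ref{tab:QVhl} and \eqref{eq:dd-fcn}, let us return to the hypothetical example $F(x,\lambda)=\lambda x - x^2$, $G(x,\lambda)=x$ introduced at the beginning of Section \ref{sec:prmtzd}. Here $A=\{(2,0),(1,1)\}$ with $c_{(2,0)}=-1$ and $c_{(1,1)}=1$, and $B=\{(1,0)\}$ with $d_{(1,0)}=1$, so $M(A)=\{1\}$, $M(B)=\emptyset$ and $M=\{1\}$, giving the regions $R_0=\{x\gg\lambda\}$, $R_1=\{x\asymp\lambda\}$ and $R_2=\{x\ll\lambda\}$, with $\alpha(0)=(2,0)$, $\alpha(1)=(1,1)$ and $\beta(0)=\beta(1)=(1,0)$; on $R_0$, for instance, the table gives $Q(x)=-x^2$ and $V(x)=x$. Then $\delta(0)=(1,0)$ and $\delta(1)=(0,1)$, so \eqref{eq:dd-fcn} reads $$r(x,\lambda) = \begin{cases} x^2 & \text{if} \ x\ge \lambda, \\ x\lambda & \text{if} \ x \le \lambda, \end{cases}$$ and the two expressions indeed agree on the curve $x=\lambda$. Note that $F=O(G)$ here, since $|\lambda x - x^2| \le 2x$ on $[0,1]^2$.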
The function $r$ gives the approximate ratio of drift to diffusion terms: if $(1/a_\epsilon,\lambda_\epsilon) \in R_i$ for some $i$ then it follows from \eqref{eq:h-ell-Gma} that $$\epsilon^{-2} r(1/a_\epsilon,\lambda_\epsilon) \asymp h_\epsilon/\ell_\epsilon.$$ As in Section \ref{sec:iso}, there are three pertinent cases, which we call limit scales for the drift to diffusion ratio. \begin{itemize}[noitemsep] \item If $r(1/a_\epsilon,\lambda_\epsilon) \ll \epsilon^2$ then $h_\epsilon \ll \ell_\epsilon$ (diffusion dominates). \item If $r(1/a_\epsilon,\lambda_\epsilon) \gg \epsilon^2$ then $\ell_\epsilon \ll h_\epsilon$ (drift dominates). \item If $r(1/a_\epsilon,\lambda_\epsilon) \asymp \epsilon^2$ then $\ell_\epsilon \asymp h_\epsilon$ (drift matches diffusion). \end{itemize} \noindent\textit{Time scale.} As in Section \ref{sec:iso}, $(b_\epsilon)$ should be chosen just large enough that at least one of $\tilde F,\tilde G$ is non-zero. Referring back to \eqref{eq:tF-tG-ratio2}, $h_\epsilon \asymp 1 \Leftrightarrow \tilde F\propto Q$ and $\ell_\epsilon \asymp 1 \Leftrightarrow \tilde G\propto V$. Using \eqref{eq:h-ell-Gma}, \begin{itemize}[noitemsep] \item $h_\epsilon \asymp 1 \Leftrightarrow b_\epsilon \asymp a_\epsilon^{\alpha_1(i)-1}\lambda_\epsilon^{-\alpha_2(i)}$ and \item $\ell_\epsilon \asymp 1 \Leftrightarrow b_\epsilon \asymp \epsilon^{-2}a_\epsilon^{\beta_1(i)-2}\lambda_\epsilon^{-\beta_2(i)}$. \end{itemize} As in Section \ref{sec:iso}, in each case we say that $(b_\epsilon)$ is the visible time scale corresponding to $(1/a_\epsilon,\lambda_\epsilon)$. We now combine these observations to state a limit theorem for QDPs. As in Section \ref{sec:iso}, we'll find that when $F=O(G)$, diffusion dominates when $(1/a_\epsilon,\lambda_\epsilon)$ is small, and drift dominates when $(1/a_\epsilon,\lambda_\epsilon)$ is large, but since the details are more complex, we'll first state the result just in terms of $r$ itself, then afterward study the geometry of the separatrix $\{(x,\lambda) \colon r(x,\lambda)=\epsilon^2\}$. \begin{theorem}[Limit scales for parametrized QDPs]\label{thm:prmtrzd-limits} Let $\tilde U$ be a non-empty open convex set whose closure contains $0$, and let $\lambda$ be a parameter taking values in an interval $I$ that contains $0$. Suppose $F=F(x,\lambda)$ and $G=G(x,\lambda)$ have a Taylor expansion around $(0,0)$ with respective powers $A,B$, as in \eqref{eq:loTaylor}. Let $(x_\epsilon(t;\lambda)\colon \epsilon \in \mathcal{E},\,\lambda \in I,\,t\ge 0)$ be a collection of semimartingales, and given $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$, suppose that for each domain $D\subset\subset \tilde U$, $(x_\epsilon(t ;\lambda_\epsilon))_{t\ge 0}$ is a parametrized QDP on $(D_\epsilon):=(\{x/a_\epsilon\colon x \in D\})$ to scale $(a_\epsilon),(b_\epsilon)$ with characteristics $F,G$. Let $M(A),M(B)$ be as in Lemma \ref{lem:envelope} and let $M=M(A)\cup M(B)$, and let $\mathrm{sub}(M)$ be the subpartition induced by $M$, as in Definition \ref{def:subpartM}. Label $\mathrm{sub}(M)$ by $(R_i)$ as in Definition \ref{def:subpartM}, with $(\alpha(i)),(\beta(i))$ as in Lemma \ref{lem:dom-terms}.\\ Let $Y_\epsilon(t) = a_\epsilon x_\epsilon(b_\epsilon t;\lambda_\epsilon)$ and suppose that $a_\epsilon \to \infty$. Suppose that $(1/a_\epsilon,\lambda_\epsilon)$ is a limit scale for both $F$ and $G$, and for the drift to diffusion ratio, and that $(b_\epsilon)$ is the corresponding visible time scale.
Then $(1/a_\epsilon,\lambda_\epsilon) \in R_i$ for some $i$, $(a_\epsilon),(b_\epsilon)$ and $(\lambda_\epsilon)$ satisfy one of the sets of conditions below, and $(Y_\epsilon)$ is a QD with characteristics $\tilde F,\tilde G$ as described below, with $Q,V$ as given in Table \ref{tab:QVhl}. \begin{itemize}[noitemsep] \item If $r(1/a_\epsilon,\lambda_\epsilon) \ll \epsilon^2$ and $b_\epsilon \asymp \epsilon^{-2}a_\epsilon^{\beta_1(i)-2}\lambda_\epsilon^{-\beta_2(i)}$ then $\tilde F=0$ and $\tilde G \propto V$. \item If $r(1/a_\epsilon,\lambda_\epsilon) \gg \epsilon^2$ and $b_\epsilon \asymp a_\epsilon^{\alpha_1(i)-1}\lambda_\epsilon^{-\alpha_2(i)}$ then $\tilde F \propto Q$ and $\tilde G=0$. \item If $r(1/a_\epsilon,\lambda_\epsilon) \asymp \epsilon^2$ and $b_\epsilon \asymp a_\epsilon^{\alpha_1(i)-1}\lambda_\epsilon^{-\alpha_2(i)} \asymp \epsilon^{-2}a_\epsilon^{\beta_1(i)-2}\lambda_\epsilon^{-\beta_2(i)}$, then $\tilde F \propto Q$ and $\tilde G \propto V$. \end{itemize} \end{theorem} \begin{proof} In each case, the result follows from Corollary \ref{cor:QDP-QD}, whose hypothesis of locally uniform convergence in \eqref{eq:FG-par-lim} follows from the discussion above. \end{proof} We now study the regions defined by the three conditions on $r$. Before doing so, we take a moment to relate the condition $F=O(G)$ to the Taylor series of $F$ and $G$. Condition (iii) below is included for its visual appeal. \begin{lemma}\label{lem:FOGenv} Suppose $F,G$ have a Taylor expansion to leading order around $(0,0)$ as in \eqref{eq:loTaylor} with powers $A ,B\subset \mathbb{N}^2$ respectively, and let $\mathrm{piv}(A),\mathrm{piv}(B)$ be as in Lemma \ref{lem:envelope} and $S(A),S(B)$ as in \eqref{eq:SofA}. Among the following statements, (i) $\Rightarrow$ (ii), (ii) $\Leftrightarrow$ (iii), and if $\mathrm{piv}(B)=\mathrm{env}(B)$ and $G\ge 0$ then (ii) $\Rightarrow$ (i). \begin{enumerate}[noitemsep,label={(\roman*)}] \item $F(x,\lambda)=O(G(x,\lambda))$ as $|x|+|\lambda| \to 0$. \item $s(B,m)\le s(A,m)$ for all $m\in(0,\infty)$. \item $\mathrm{piv}(A)\subset S(B)$. \end{enumerate} Using $M=M(A)\cup M(B)$ in Lemma \ref{lem:dom-terms}, let $(\alpha(i))$, $(\beta(i))$ correspond to $A,B$, respectively. Then any of (i)-(iii) above implies that \begin{itemize}[noitemsep] \item $\alpha_2(0)\ge\beta_2(0)$ and if $\alpha_2(0)=\beta_2(0)$ then $\alpha_1(0) \ge \beta_1(0)$, and \item $\alpha_1(n)\ge \beta_1(n)$ and if $\alpha_1(n) =\beta_1(n)$ then $\alpha_2(n)\ge \beta_2(n)$. \end{itemize} \end{lemma} Before giving the proof, let us note a counterexample that illustrates why the implication (ii) $\Rightarrow$ (i) needs to be qualified. Let $F(x,\lambda) = \lambda^2+x^2$ and $G(x,\lambda)=(\lambda-x)^2$, which even has $G\ge 0$. Then $A=\mathrm{env}(A)=\{(0,2),(2,0)\}$ and $B=\mathrm{env}(B)=\{(0,2),(1,1),(2,0)\}$, so $M(A)=M(B)=\{1\}$ and $\mathrm{piv}(A)=\mathrm{piv}(B)=\{(2,0),(0,2)\}$. Since $s(A,m)=s(\mathrm{piv}(A),m)$, and similarly for $B$, $s(A,m)=s(B,m)$ for each $m\in(0,\infty)$. On the other hand, along the line $x=\lambda$ we have $F(\lambda,\lambda) = 2\lambda^2$ while $G(\lambda,\lambda) = 0$. Note that, in this example, $\mathrm{env}(B)\setminus \mathrm{piv}(B)=\{(1,1)\}\ne\emptyset$. \begin{proof}[Proof of Lemma \ref{lem:FOGenv}] We proceed in the following order: \begin{enumerate}[noitemsep,label={\arabic*.}] \item (ii) implies the last two statements. \item (ii) $\Leftrightarrow$ (iii). \item (i) $\Rightarrow$ (ii). \item If $\mathrm{piv}(B)=\mathrm{env}(B)$ and $G\ge 0$ then (ii) $\Rightarrow$ (i).
\end{enumerate} \noindent\textit{Last two statements.} If $m\in (0,m_1)$ then using (ii), $s(\alpha(0),m)=s(A,m)\ge s(B,m) = s(\beta(0),m)$. Letting $m\to 0$, $\alpha_2(0)\ge \beta_2(0)$, and if $\alpha_2(0)=\beta_2(0)$ then $\alpha_1(0)\ge \beta_1(0)$. Using a similar argument for $m\in (m_n,\infty)$ gives the last statement.\\ \noindent\textit{(ii) $\Rightarrow$ (iii).} Since $\mathrm{piv}(A)\subset A$ it's enough to show that $A\subset S(B)$. By definition, $S(B) = \{(x,y) \in \mathbb{R}^2\colon s((x,y),m) \ge s(B,m) \ \text{for all} \ m \in (0,\infty)\}$. If $\alpha \in A$ then by definition $s(\alpha,m) \ge s(A,m)$ for all $m\in (0,\infty)$. Using (ii), $s(\alpha,m) \ge s(B,m)$ for all $m\in (0,\infty)$, so $\alpha \in S(B)$.\\ \noindent\textit{(iii) $\Rightarrow$ (ii).} For $m \in (0,\infty)$, from Lemma \ref{lem:envelope} there is $\alpha \in \mathrm{piv}(A)$ with $s(\alpha,m)=s(A,m)$. If (iii) holds then $\alpha \in S(B)$ so in particular, $s(\alpha,m) \ge s(B,m)$. Combining gives (ii).\\ \noindent\textit{(i) $\Rightarrow$ (ii).} Denote the coefficients of $F,G$ as in \eqref{eq:loTaylor} by $(c_\alpha)$ and $(d_\beta)$ respectively. If $x = z\lambda^{m_i}$ with constant $z>0$ and $\lambda \to0$ then $(x,\lambda) \in R_{2i-1}$ and $$F(x,\lambda)=\sum_{\alpha \in A(m_i)}(c_\alpha + o(1))x^{\alpha_1}\lambda^{\alpha_2} = \lambda^{s(A,m_i)}\sum_{\alpha \in A(m_i)} (c_\alpha+o(1))z^{\alpha_1},$$ where $c_\alpha \ne 0$ for every $\alpha \in A(m_i)$. Since $\alpha(i)$ minimizes $\alpha_1$ over $(\alpha_1,\alpha_2)\in A(m_i)$, if $z>0$ is small then $$\left |\sum_{\alpha \in A(m_i)}(c_\alpha+o(1))z^{\alpha_1}\right| \ge \frac{1}{2}\, |c_{\alpha(i)}|z^{\alpha_1(i)}>0$$ and $|F(z\lambda^{m_i},\lambda)| \asymp \lambda^{s(A,m_i)}$. Similarly $|G(z\lambda^{m_i},\lambda)|\asymp \lambda^{s(B,m_i)}$ for small $z>0$. If $F=O(G)$ it follows that $s(A,m_i) \ge s(B,m_i)$. If $m\in (m_i,m_{i+1})$ then $(\lambda^m,\lambda)\in R_{2i}$ and $F(\lambda^m,\lambda)\sim c_{\alpha(i)}\lambda^{s(\alpha(i),m)}$ and, similarly, $G(\lambda^m,\lambda)\sim d_{\beta(i)}\lambda^{s(\beta(i),m)}$. Since $s(\alpha(i),m)=s(A,m)$ and $s(\beta(i),m) = s(B,m)$, if $F=O(G)$ then $s(A,m)\ge s(B,m)$.\\ \noindent\textit{If $\mathrm{piv}(B)=\mathrm{env}(B)$ and $G\ge 0$ then (ii) $\Rightarrow$ (i).} We show by contradiction that if (i) is false then (ii) cannot be true. With $M(A),M(B)$ as in Lemma \ref{lem:envelope}, let $M=M(A)\cup M(B)$ and label $\mathrm{sub}(M)$ by $(R_i)_{i=0}^{2n}$ as in Definition \ref{def:subpartM}. If (i) is false, there is a sequence $(x,\lambda)$ with limit $(0,0)$ such that $|G(x,\lambda)| / |F(x,\lambda)| \to 0$. By Lemma \ref{lem:subpartM} we can assume $(x,\lambda) \in \mathrm{dom}(\mathrm{sub}(M))=\bigcup_{i=0}^{2n} R_i$. There are two cases -- note that the assumptions $\mathrm{piv}(B)=\mathrm{env}(B)$ and $G\ge 0$ will only be needed in Case 2.\\ \noindent\textit{Case 1: $(x,\lambda) \in R_{2i}$ for some $i$.} By Lemma \ref{lem:dom-terms}, $F(x,\lambda) \sim c_{\alpha(i)}x^{\alpha_1(i)}\lambda^{\alpha_2(i)}$ and $G(x,\lambda)\sim d_{\beta(i)}x^{\beta_1(i)}\lambda^{\beta_2(i)}$, so \begin{align}\label{eq:GoverF} |G(x,\lambda)|/|F(x,\lambda)| \asymp x^{\beta_1(i)-\alpha_1(i)}\lambda^{\beta_2(i)-\alpha_2(i)} \to 0, \end{align} moreover $s(\alpha(i),m) = s(A,m)$ and $s(\beta(i),m) = s(B,m)$ for $m\in (m_i,m_{i+1})$, so if (ii) holds then $s(\alpha(i),m) \ge s(\beta(i),m)$ for $m\in (m_i,m_{i+1})$.
If $i<n$ then $m_{i+1}<\infty$, so by definition of $(x,\lambda) \in R_{2i}$, $x \gg \lambda^{m_{i+1}}$, and by continuity of $m\mapsto s((x,y),m)$, $s(\alpha(i),m_{i+1}) \ge s(\beta(i),m_{i+1})$. Plugging $x\gg \lambda^{m_{i+1}}$ into \eqref{eq:GoverF}, $$\lambda^{m_{i+1}(\beta_1(i)-\alpha_1(i))} \ll \lambda^{\alpha_2(i)-\beta_2(i)}$$ and rearranging gives $$\lambda^{s(\beta(i),m_{i+1})} \ll \lambda^{s(\alpha(i),m_{i+1})}$$ which, since $\lambda \to 0$, contradicts $s(\alpha(i),m_{i+1})\ge s(\beta(i),m_{i+1})$. The same argument is applicable if $i=n$, as long as $x \gg \lambda^m$ for some $m\in (m_n,\infty)$. If instead $i=n$ and $x/\lambda^m \to 0$ for all $m<\infty$ then supposing again that (ii) holds, we showed above that $\alpha_1(n) \ge \beta_1(n)$ and that if $\alpha_1(n)=\beta_1(n)$ then $\alpha_2(n) \ge \beta_2(n)$. The latter is ruled out, since plugging into \eqref{eq:GoverF} gives $\lambda^a \to 0$ with $a=\beta_2(n)-\alpha_2(n) \le 0$. The remaining case is $\alpha_1(n)>\beta_1(n)$, which in \eqref{eq:GoverF} gives $x^a \lambda^b$ for some $a,b$ with $a<0$. By assumption, $\lambda^m/x \to \infty$ for every $m$, so taking $m=b/|a|$, $x^a \lambda^b = (\lambda^{b/|a|}/x)^{|a|}\to \infty$, contradicting \eqref{eq:GoverF}.\\ \noindent\textit{Case 2: $(x,\lambda) \in R_{2i-1}$ for some $i$.} We first prove the result assuming $d_\beta>0$ for all $\beta \in B(m_i)$, then show this assumption is implied by $G\ge 0$ and $\mathrm{piv}(B)=\mathrm{env}(B)$. Let $z \in (0,\infty)$ be such that $x\sim \lambda^{m_i}z$. Then $$G(x,\lambda) \sim \sum_{\beta \in B(m_i)} d_\beta x^{\beta_1}\lambda^{\beta_2} \sim \lambda^{s(B,m_i)}\sum_{\beta \in B(m_i)}d_\beta z^{\beta_1}.$$ Using (iv) of Lemma \ref{lem:dom-terms} and noting $x^{\alpha_1}\lambda^{\alpha_2} \sim z^{\alpha_1}\lambda^{s(A,m_i)}$ for $\alpha \in A(m_i)$, $$F(x,\lambda) = O(\lambda^{s(A,m_i)}).$$ If $d_\beta>0$ for every $\beta \in B(m_i)$ then $G(x,\lambda) \asymp \lambda^{s(B,m_i)}$, so if $|G(x,\lambda)|/|F(x,\lambda)| \to 0$ then $\lambda^{s(B,m_i)} \ll \lambda^{s(A,m_i)}$, implying $s(B,m_i) > s(A,m_i)$ and contradicting (ii). Lastly we show that if $G\ge 0$ and $\mathrm{piv}(B)=\mathrm{env}(B)$ then $d_\beta>0$ for all $\beta \in B(m_i)$ and each $i \in \{1,\dots,n\}$. Recall that $\mathrm{piv}(B) = \{\beta(i)\}_{i=0}^n$. If $\mathrm{piv}(B)=\mathrm{env}(B)$ then since $B(m_i)\subset \mathrm{env}(B)$, $B(m_i) = B(m_i) \cap \mathrm{piv}(B)$ and, since from Lemma \ref{lem:envelope}, $\beta(i-1),\beta(i) \in B(m_i)$, it follows that $B(m_i) = \{\beta(i-1),\beta(i)\}$, so for fixed $z \in (0,\infty)$, $$G(z\lambda^{m_i},\lambda) \sim \lambda^{s(B,m_i)} (d_{\beta(i-1)} z^{\beta_1(i-1)} + d_{\beta(i)}z^{\beta_1(i)}).$$ By Lemma \ref{lem:envelope}, $i\mapsto \beta_1(i)$ is decreasing, so $\beta_1(i)<\beta_1(i-1)$. By assumption, $d_{\beta(i-1)}\ne 0$ and $d_{\beta(i)}\ne 0$. If $z$ is small then $|d_{\beta(i-1)} z^{\beta_1(i-1)}| \le |d_{\beta(i)}z^{\beta_1(i)}|/2$, so if $G\ge 0$ then since $z>0$, $d_{\beta(i)}$ must be positive. Similarly, if $z$ is large then $|d_{\beta(i)} z^{\beta_1(i)}| \le |d_{\beta(i-1)}z^{\beta_1(i-1)}|/2$, so if $G\ge 0$ then since $z>0$, $d_{\beta(i-1)}$ must be positive. \end{proof} Next we define the drift-diffusion separatrix curve $\Phi$, which is actually a family of curves in $[0,1]^2$, parametrized by $\epsilon>0$. We will define it as a set, then show it is a piecewise smooth curve.
Define $\Phi(\epsilon)$ to be the $\epsilon^2$ level set of $r$, that is, \begin{align}\label{eq:dd-set} \Phi(\epsilon) = \{(x,\lambda) \in [0,1]^2\colon r(x,\lambda) = \epsilon^2\}. \end{align} As is clear from Theorem \ref{thm:prmtrzd-limits}, $\Phi$ delineates the boundary between stochastic and deterministic limits. In the previous section we found that if $F=O(G)$ then diffusion dominates at small scales while drift dominates at larger scales, with the two regimes separated by the dd scale. The following result generalizes that observation. A Jordan arc is the image of an injective, continuous function with domain $[0,1]$. \begin{theorem}\label{thm:dd-curve} Suppose $F,G$ have a Taylor expansion around $(0,0)$ with respective powers $A,B$ as in \eqref{eq:loTaylor}, and that $A$ has an element of the form $(\alpha_1,0)$. Suppose in addition that $F=O(G)$ on $[0,1]^2$, and let $\Phi$ be as in \eqref{eq:dd-set}. If $\epsilon>0$ is small, then $\Phi(\epsilon)$ is a piecewise smooth Jordan arc that connects $(0,1) \times \{0\}$ to $(0,1) \times \{1\}$ through the set $(0,1)^2$, and $\sup\{x \colon (x,\lambda) \in \Phi(\epsilon)\} \to 0$ as $\epsilon \to 0$.\\ $\Phi(\epsilon)$ is the graph of a function $\lambda \mapsto \phi_\epsilon(\lambda)$ for small $\epsilon>0$ iff $\alpha_1(i)\ge \beta_1(i)$ for every $i \in \{0,\dots,n\}$. When this holds, for given sequences $(a_\epsilon),(\lambda_\epsilon)$, $r(1/a_\epsilon,\lambda_\epsilon)$ is $\ll \epsilon^2, \asymp \epsilon^2$ or $\gg \epsilon^2$ iff $1/a_\epsilon$ is $\ll \phi_\epsilon(\lambda_\epsilon), \asymp \phi_\epsilon(\lambda_\epsilon)$ or $\gg \phi_\epsilon(\lambda_\epsilon)$, respectively. \end{theorem} Before proving this result, which takes a bit of effort, we record the definition that it justifies. We reserve the word ``upright'' to refer to the case where $\Phi(\epsilon)$ is the graph of a function $\lambda \mapsto\phi_\epsilon(\lambda)$. \begin{definition}[parametrized limit ranges]\label{def:prmtrzd-lim-scales} In the context of Theorem \ref{thm:prmtrzd-limits}, and assuming $F=O(G)$ around $(0,0)$, define the following \emph{limit ranges} for $(x_\epsilon)$: \begin{enumerate}[noitemsep,label={(\roman*)}] \item $r(1/a_\epsilon,\lambda_\epsilon) \ll \epsilon^2$ is the \emph{pure diffusive range}, \item $r(1/a_\epsilon,\lambda_\epsilon) \gg \epsilon^2$ is the \emph{deterministic range}, and \item $r(1/a_\epsilon,\lambda_\epsilon) \asymp \epsilon^2$ is the \emph{drift-diffusion (dd) scale}. \end{enumerate} The set $\Phi(\epsilon)$ from \eqref{eq:dd-set} is the \emph{dd curve}. Say that the dd curve is \emph{upright} if $\alpha_1(i)\ge\beta_1(i)$ for every $i\in \{0,\dots,n\}$. \end{definition} \begin{proof}[Proof of Theorem \ref{thm:dd-curve}] For $i\in \{1,\dots,n\}$ let $L_i = \{(x,\lambda)\in[0,1]^2 \colon x=\lambda^{m_i}\}$, and let $L_0=[0,1]\times\{0\}$ and $L_{n+1} = [0,1]\times\{1\}$. In addition, for $i\in \{0,\dots,n\}$ let $S_i = \{(x,\lambda)\in [0,1]^2 \colon \lambda^{m_{i+1}} \le x \le \lambda^{m_i}\}$. The steps to the proof are as follows: for each $i$ and $\epsilon \in (0,1)$, we show that \begin{enumerate}[noitemsep,label={\arabic*.}] \item $\Phi(\epsilon) \cap L_i$ is a singleton $p_i(\epsilon)=(x_i(\epsilon),\lambda_i(\epsilon))$ with $x_i(\epsilon)>0$, and $x_i(\epsilon) \to 0$ as $\epsilon \to 0$. \item $\Phi(\epsilon)\cap S_i$ is a smooth Jordan arc connecting $p_i(\epsilon)$ to $p_{i+1}(\epsilon)$. \item $\Phi(\epsilon) \cap S_i$ is contained in the rectangle with corners $p_i(\epsilon)$ and $p_{i+1}(\epsilon)$. 
\item Moving from $p_i(\epsilon)$ to $p_{i+1}(\epsilon)$ along $\Phi(\epsilon) \cap S_i$, $\lambda$ increases iff $\alpha_1(i) \ge \beta_1(i)$. \end{enumerate} Once these are established, the proof is completed as follows. Connecting the arcs $\Phi(\epsilon) \cap S_i$ from step 2 in increasing order of $i$, we obtain an injective continuous function $\psi_\epsilon:[0,1] \to [0,1]^2$ with $\psi_\epsilon(0)=p_0(\epsilon)\in (0,1)\times \{0\}$ and $\psi_\epsilon(1)=p_{n+1}(\epsilon) \in (0,1)\times \{1\}$. Using the second half of step 1 together with step 3, we find that $\sup\{x\colon (x,\lambda) \in \Phi(\epsilon)\} = \max_i x_i(\epsilon) \to 0$ as $\epsilon \to 0$. Letting $\pi_2(x,\lambda)=\lambda$ denote projection onto the $\lambda$ coordinate, since $\psi_\epsilon$ is continuous and has the given endpoints it follows that $\pi_2(\Phi(\epsilon)) = \pi_2(\psi_\epsilon([0,1])) = [0,1]$, so $\Phi(\epsilon)$ is the graph of a function $\lambda\mapsto \phi_\epsilon(\lambda)$ if and only if $u\mapsto \pi_2(\psi_\epsilon(u))$ is increasing, i.e., iff for each $i$, the value of $\lambda$ increases as we move from $p_i(\epsilon)$ to $p_{i+1}(\epsilon)$ along $\Phi(\epsilon)\cap S_i$, which by step 4 holds iff $\alpha_1(i)\ge \beta_1(i)$ for every $i$. Lastly, if $\alpha_1(i)\ge \beta_1(i)$ for every $i$ then $\delta_1(i)\ge 0$ for each $i$, and since $r(x,\lambda)=x^{1+\delta_1(i)}\lambda^{\delta_2(i)}$ on each $S_i$ with the expressions matching along $L_i$, it follows that for $x'\ge x$ and fixed $\lambda$, $r(x',\lambda)/r(x,\lambda) \ge x'/x$. Since $r(\phi_\epsilon(\lambda_\epsilon),\lambda_\epsilon) = \epsilon^2$ by definition, the last statement concerning $\phi_\epsilon$ follows directly.\\ \noindent\textbf{Step 1.} We begin with $L_1,\dots, L_n$. If $(x,\lambda) \in L_i$ and $i\in\{1,\dots,n\}$ then noting that $s(\alpha(i),m_i)=s(A,m_i)$ and $s(\beta(i),m_i)=s(B,m_i)$, \begin{align}\label{eq:rLi} r(x,\lambda)=\lambda^{m_i(1+\alpha_1(i)-\beta_1(i))}\lambda^{\alpha_2(i)-\beta_2(i)}=\lambda^{m_i + s(A,m_i) - s(B,m_i)}. \end{align} If $F=O(G)$ then by Lemma \ref{lem:FOGenv}, $s(A,m_i)\ge s(B,m_i)$. Therefore $m_i + s(A,m_i)-s(B,m_i)>0$, so $\lambda \mapsto \gamma_i(\lambda) := r(\lambda^{m_i},\lambda)$ is strictly increasing. Since $\gamma_i(0)=0$ and $\gamma_i(1)=1$, if $\epsilon \in (0,1)$ then $\Phi(\epsilon) \cap L_i= \{p_i\}$ where $p_i=(x_i,\lambda_i) = (\lambda_i^{m_i},\lambda_i)$ and $\lambda_i$ is the unique solution to $\gamma_i(\lambda) = \epsilon^2$. Denoting it $p_i(\epsilon)$, it follows that $p_i(\epsilon) \in (0,1)^2$ for each $\epsilon\in(0,1)$ and $p_i(\epsilon)\to 0$ as $\epsilon \to 0$.\\ We now treat $L_0$ and $L_{n+1}$. If $(x,\lambda) \in L_0$ then $\lambda=0$ so $x \ge \lambda^m$ for any $m>0$, which means that $r(x,0) = x^{1+\alpha_1(0)-\beta_1(0)}0^{\delta_2(0)}$. By assumption, $A$ is an envelope, so is disordered; since $\alpha(0)=\arg\max\{\alpha_1 \colon \alpha \in A\}$, also $\alpha(0)=\arg\min\{\alpha_2\colon \alpha \in A\}$. Since $A$ has an element of the form $(\alpha_1,0)$, $\alpha_2(0)=0$. Since $F=O(G)$, by Lemma \ref{lem:FOGenv}, $\alpha_2(0) \ge \beta_2(0)$ so $\beta_2(0)=\alpha_2(0)=0$, which implies $\alpha_1(0)\ge \beta_1(0)$ and $x\mapsto r(x,0)$ is strictly increasing, and as above we obtain $p_0(\epsilon)$ with the stated properties. If $(x,\lambda) \in L_{n+1}$ then $\lambda=1$ so $x\le \lambda^m$ for any $m<\infty$, which means that $r(x,1) = x^{1+\alpha_1(n)-\beta_1(n)}1^{\delta_2(n)}$. 
Since $F=O(G)$, by Lemma \ref{lem:FOGenv} $\alpha_1(n)\ge \beta_1(n)$, so $x\mapsto r(x,1)$ is increasing and as above we obtain $p_{n+1}(\epsilon)$ with the stated properties.\\ \noindent\textbf{Step 2.} Recall that $\delta(i)=\alpha(i)-\beta(i)$. If $1\le i \le n$, then since $s(\alpha(i),m_i)=s(A,m_i)\ge s(B,m_i)=s(\beta(i),m_i)$ from Lemma \ref{lem:FOGenv}, if $\delta_1(i)\le 0$ then $\delta_2(i)\ge 0$ and if $\delta_2(i) \le 0$ then $\delta_1(i)\ge 0$, and the same is true with $<,>$ in place of $\le,\ge$. Since as noted above, $\alpha_1(i)\ge \beta_1(i)$ for $i\in\{0,n\}$, $\delta_1(i)\ge 0$ for $i\in \{0,n\}$.\\ First note that $p_i(\epsilon),p_{i+1}(\epsilon) \in \Phi(\epsilon) \cap S_i$. Let $r_i(x,\lambda) = x^{1+\delta_1(i)}\lambda^{\delta_2(i)}$ so that $r=r_i$ on $S_i$, and let $\Phi_i(\epsilon) = \{(x,\lambda)\in [0,1]^2\colon r_i(x,\lambda)=\epsilon^2\}$, so that $\Phi_i(\epsilon)\cap S_i = \Phi(\epsilon) \cap S_i$. In particular, $\Phi_i(\epsilon)\cap L_i = \{p_i(\epsilon)\}$ and $\Phi_i(\epsilon) \cap L_{i+1} = \{p_{i+1}(\epsilon)\}$. We break into cases according to the sign of $1+\delta_1(i)$. \begin{enumerate}[noitemsep,label={(\roman*)}] \item \textit{Case 1: $1+\delta_1(i)=0$}. Then, $\delta_1(i)<0$ so $\delta_2(i)>0$, and the condition $r_i(x,\lambda)=\epsilon^2$ gives $\lambda = \epsilon^{2/\delta_2(i)}$ which is a vertical line. It follows that $\Phi(\epsilon)\cap S_i$ is a vertical line segment connecting $p_i(\epsilon)$ to $p_{i+1}(\epsilon)$.\\ \item \textit{Case 2: $1+\delta_1(i) \ne 0$.} The condition $r_i(x,\lambda)=\epsilon^2$ gives \begin{align}\label{eq:dd-xvalue} x = \epsilon^{2/(1+\delta_1(i))}\lambda^{-\delta_2(i)/(1+\delta_1(i))}, \end{align} so $\Phi_i(\epsilon)$ is the graph of a function of $\lambda$. Moreover, for fixed $\lambda$, $x\mapsto r_i(x,\lambda)$ is increasing if $1+\delta_1(i)>0$, and is decreasing if $1+\delta_1(i)<0$. We consider each subcase separately. \begin{enumerate}[noitemsep,label={(\alph*)}] \item $1+\delta_1(i)>0$. In this case $x\mapsto r_i(x,\lambda)$ is increasing. As shown above, $r$ (and thus $r_i$) increases with $\lambda$ along each $L_i$, $i \in \{1,\dots,n\}$. Denoting $p_i(\epsilon)$ by $(x_i(\epsilon),\lambda_i(\epsilon))$, it follows that $\lambda_{i+1}(\epsilon) > \lambda_i(\epsilon)$ for all $i$; for $i=0$ and $i=n$ this follows since $\lambda_0(\epsilon)=0$ and $\lambda_{n+1}(\epsilon)=1$, and for $i\in\{1,\dots,n-1\}$ it's because, by moving vertically down from $p_i(\epsilon)$ onto $L_{i+1}$, $r_i$ decreases below $\epsilon^2$, so for $r_i$ to reach $\epsilon^2$ along $L_{i+1}$, $\lambda$ must be increased. Using monotonicity of $r_i$ on both $S_i$ and $L_i$, if $x\le \lambda^{m_i}$ and $\lambda<\lambda_i(\epsilon)$ then $r(x,\lambda)<\epsilon^2$, and if $x\ge \lambda^{m_{i+1}}$ and $\lambda>\lambda_{i+1}(\epsilon)$ then $r(x,\lambda)>\epsilon^2$. It follows that $\Phi_i(\epsilon)\cap S_i$ is contained in the strip $\Lambda_i:=\{(x,\lambda)\colon \lambda_i(\epsilon) \le \lambda \le \lambda_{i+1}(\epsilon)\}$. Since $\Phi_i(\epsilon)\cap L_i = \{p_i(\epsilon)\}$ and $\Phi_i(\epsilon) \cap L_{i+1} = \{p_{i+1}(\epsilon)\}$, it follows from the intermediate value theorem that $\Phi_i(\epsilon) \cap S_i = \Phi_i(\epsilon) \cap \Lambda_i$. Since $\Phi_i(\epsilon) \cap \Lambda_i$ is the graph of the restriction of a smooth function to a closed interval, it is a Jordan arc. \item $1+\delta_1(i)<0$. 
Analogous arguments show that $\lambda_{i+1}(\epsilon)<\lambda_i(\epsilon)$, that $\Phi_i(\epsilon) \cap S_i = \Phi_i(\epsilon) \cap \Lambda_i$ where $\Lambda_i = \{(x,\lambda)\colon \lambda_{i+1}(\epsilon) \le \lambda \le \lambda_i(\epsilon)\}$, and that $\Phi_i(\epsilon) \cap \Lambda_i$ is a Jordan arc. \end{enumerate} \end{enumerate} \noindent\textbf{Step 3.} Referring to the above three cases, when $1+\delta_1(i)=0$ the result is trivial. When $1+\delta_1(i) \ne 0$, the result follows from $\Phi_i(\epsilon)\cap S_i \subset \Lambda_i$ and the fact that $\Phi_i(\epsilon)$ is the graph of a monotone function of $\lambda$.\\ \noindent\textbf{Step 4.} Referring to the above three cases, moving from $p_i(\epsilon)$ to $p_{i+1}(\epsilon)$ along $\Phi(\epsilon)$, \begin{itemize}[noitemsep] \item if $1+\delta_1(i)=0$ then $\lambda$ is constant, \item if $1+\delta_1(i)>0$ then $\lambda$ increases, and \item if $1+\delta_1(i)<0$ then $\lambda$ decreases. \end{itemize} In particular, $\lambda$ increases iff $1+\delta_1(i) = 1+\alpha_1(i)-\beta_1(i)>0$, i.e., $\alpha_1(i)> \beta_1(i)-1$, which since both are integer-valued is equivalent to $\alpha_1(i)\ge \beta_1(i)$. \end{proof} We close this section with a simple example showing the possibility of both upright and non-upright dd curves, in particular the transition from upright, to vertical, to folded dd curve. \begin{example}\label{ex:uprex} Consider the case with $A = \{(4,0),(1,2)\}$ and $B=\{(k,0),(0,k)\}$ where $k\in \{1,2,3\}$ is fixed, which has $A=\mathrm{piv}(A)$ and $B=\mathrm{piv}(B)$. One easily verifies from Figure \ref{fig:uprex1} that $\mathrm{piv}(A)\subset S(B)$ as required by Lemma \ref{lem:FOGenv}. Then $M(A)= \{2/3\}$ and $M(B)=\{1\}$, so $M:=M(A)\cup M(B)=\{m_1,m_2\}$, with $m_1=2/3$ and $m_2=1$. We have $\alpha(0)=(4,0)$ and $\alpha(1)=\alpha(2)=(1,2)$, and $\beta(0)=\beta(1)=(k,0)$ and $\beta(2)=(0,k)$, so $\delta(0) = (4-k,0)$, $\delta(1) = (1-k,2)$ and $\delta(2) = (1,2-k)$. In particular, the dd curve is upright, i.e., $\delta_1(i)\ge 0$ for all $i$, iff $k=1$. The dd curve is depicted for each case in Figure \ref{fig:uprex2}. \end{example} \begin{figure} \begin{center} \includegraphics[width=2in]{uprex.png}\hspace{.05in} \end{center} \caption{A graph depicting $A$ (blue dots), $S(A)$ (blue region) and the three options for $B$ (red dots of darkening shade) in Example \ref{ex:uprex}, with a portion of the contour $L(B)$ (red segments) in each case.} \label{fig:uprex1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.8in]{bifurc11.png}\hspace{.1in} \includegraphics[width=1.8in]{bifurc12.png}\hspace{.1in} \includegraphics[width=1.8in]{bifurc13.png} \end{center} \caption{The dd curve $\Phi(\epsilon)$ of \eqref{eq:dd-set} (bold) drawn with $\epsilon=0.2$ and the lines $L_i=\{(x,\lambda)\colon x=\lambda^{m_i}\}$ (dotted) for Example \ref{ex:uprex} with $k=1,2,3$ from left to right, plotted with vertical $x$ and horizontal $\lambda$.} \label{fig:uprex2} \end{figure}
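Since the pieces of $\Phi(\epsilon)$ are monomial level sets, Example \ref{ex:uprex} is easy to reproduce numerically. The Python sketch below is purely illustrative: it assumes the global formula $r(x,\lambda) = x\,\max_{\alpha\in A} x^{\alpha_1}\lambda^{\alpha_2}\,/\,\max_{\beta\in B} x^{\beta_1}\lambda^{\beta_2}$, which for $\lambda<1$ agrees with the piecewise expression $r=x^{1+\delta_1(i)}\lambda^{\delta_2(i)}$ used in the proof (the maxima select the dominant exponents), and the grids are arbitrary choices; $\epsilon=0.2$ matches Figure \ref{fig:uprex2}. Counting, for each $\lambda$, the solutions $x$ of $r(x,\lambda)=\epsilon^2$ separates the upright case $k=1$ (one solution per $\lambda$) from the folded case $k=3$ (three solutions for some $\lambda$); for $k=2$ the vertical segment occupies a single value of $\lambda$, so it is invisible to this crude count.
\begin{verbatim}
import numpy as np

def r(x, lam, A, B):
    # assumed global form of the drift-to-diffusion ratio; on each S_i it
    # reduces to x**(1 + delta_1(i)) * lam**delta_2(i)
    num = max(x**a1 * lam**a2 for (a1, a2) in A)
    den = max(x**b1 * lam**b2 for (b1, b2) in B)
    return x * num / den

def crossings(A, B, eps, lam, xs):
    # all sign changes of r(., lam) - eps^2 on the grid xs
    vals = np.array([r(x, lam, A, B) for x in xs]) - eps**2
    idx = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return [np.sqrt(xs[i] * xs[i + 1]) for i in idx]  # geometric midpoints

A, eps = [(4, 0), (1, 2)], 0.2
xs = np.logspace(-12, 0, 4000)
for k in (1, 2, 3):  # upright, vertical, folded, as in the example
    B = [(k, 0), (0, k)]
    counts = [len(crossings(A, B, eps, lam, xs))
              for lam in np.linspace(0.01, 0.99, 49)]
    print(f"k={k}: max solutions per lambda = {max(counts)}")
\end{verbatim}
For $k=3$ the fold is visible for $\lambda$ between $\epsilon^2$ and $\epsilon^{3/2}$, where the local extrema $\lambda^{4/3}$ and $\lambda$ of $x\mapsto r(x,\lambda)$ straddle $\epsilon^2$.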
\subsection{Limit scales around a simple equilibrium branch}\label{sec:float-equil} We now treat case (ii) from the top of Section \ref{sec:prmtzd}: limits around a non-constant branch $x_\star(\lambda)$. Since the goal is to use this for bifurcation theory, we will focus on the case where $x_\star(\lambda)$ is a non-constant simple branch of the zeroset of $F$, i.e., $\lambda \mapsto x_\star(\lambda)$ is non-constant and for each $\lambda$, $x_\star(\lambda)$ is a simple root of $F$. To do so, in the context of Section \ref{sec:const-equil}, suppose there is $\star \in \{1,\dots,n\}$ and $z_\star>0$ such that \begin{align}\label{eq:Feqbr} \sum_{\alpha \in A(m_\star)}c_\alpha z_\star^{\alpha_1}=0 \quad \text{and} \quad \frac{d}{dz}\sum_{\alpha \in A(m_\star)}c_\alpha z^{\alpha_1}\big|_{z=z_\star} \ne 0. \end{align} For $\lambda \in (0,1)$ define $F_\star(z,\lambda) = \lambda^{-s(A,m_\star)}F(z\lambda^{m_\star},\lambda)$; then, examining the scale functions for $F$ on $R_{2i-1}$ from Lemma \ref{lem:dom-terms}, we have $$F_\star(z,\lambda) \sim \sum_{\alpha \in A(m_\star)}c_\alpha z^{\alpha_1}$$ locally uniformly in $z$, as $\lambda \to 0$. In particular, $F_\star$ can be smoothly extended to $\lambda=0$ by defining $F_\star(z,0) = \sum_{\alpha \in A(m_\star)}c_\alpha z^{\alpha_1}$. By assumption, $F_\star(z_\star,0)=0$ and $\partial_z F_\star(z_\star,0) \ne 0$. By the implicit function theorem, there is a continuous function $z_\star(\lambda)$ with $z_\star(0)=z_\star$, defined on some interval $[0,\lambda_0)$, such that $F_\star(z_\star(\lambda),\lambda)=0$, and $\partial_z F_\star(z_\star(\lambda),\lambda) = \partial_z F_\star(z_\star,0) + o(1) \ne 0$. Letting $x_\star(\lambda) = \lambda^{m_\star}z_\star(\lambda)$, $F(x_\star(\lambda),\lambda)=0$ for $0 \le \lambda <\lambda_0$. \\ We are interested in the scaling of $F,G$ around $x_\star(\lambda)$. Specifically, we seek limits of the form \begin{align}\label{eq:FG-branch-lim} \tilde F(x) &:= \lim_{\epsilon \to 0}a_\epsilon b_\epsilon F(x_\star(\lambda_\epsilon) + x/a_\epsilon,\lambda_\epsilon) \ \text{and} \nonumber \\ \tilde G(x) &:= \lim_{\epsilon \to 0}\epsilon^2 a_\epsilon^2 b_\epsilon G(x_\star(\lambda_\epsilon) + x/a_\epsilon,\lambda_\epsilon). \end{align} The cases $1/a_\epsilon \asymp x_\star(\lambda_\epsilon)$ and $1/a_\epsilon \gg x_\star(\lambda_\epsilon)$ are covered by Theorem \ref{thm:prmtrzd-limits}, since if $1/a_\epsilon \asymp x_\star(\lambda_\epsilon)$ then \eqref{eq:FG-par-lim} holds iff \eqref{eq:FG-branch-lim} holds, with the argument of $\tilde F,\tilde G$ shifted by $\lim_{\epsilon \to 0}a_\epsilon x_\star(\lambda_\epsilon)$, while if $1/a_\epsilon \gg x_\star(\lambda_\epsilon)$ then \eqref{eq:FG-par-lim} holds iff \eqref{eq:FG-branch-lim} holds, with the same $\tilde F,\tilde G$. Thus, it remains to consider $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$. To keep things simple, we shall assume that $F=O(G)$, which constrains the behaviour of $G$ rather nicely. We'll again follow the three steps of Section \ref{sec:iso}: shape, drift to diffusion ratio and time scale.\\ \noindent\textit{Shape.} The notion of limit scales is again applicable with the obvious adjustments. Specifically, we seek to decompose the right-hand side of the equations in \eqref{eq:FG-branch-lim} in such a way that $\tilde F,\tilde G$ satisfy \eqref{eq:tF-tG-ratio2}, which we recall here for convenience: $$\tilde F(x) \sim h_\epsilon Q(x) \quad \text{and} \quad \tilde G(x) \sim \ell_\epsilon V(x)$$ for some $h_\epsilon,\ell_\epsilon$ and $Q,V$. 
By assumption, $$F_\star(z_\star(\lambda) + z,\lambda) \sim \partial_zF_\star(z_\star,0)\, z \ \ \text{as} \ \ |z|+|\lambda| \to 0,$$ and by definition, $$F(x_\star(\lambda) + x,\lambda)=\lambda^{s(A,m_\star)}F_\star(z_\star(\lambda) + x/\lambda^{m_\star},\lambda),$$ so if $x\ll \lambda^{m_\star}$ then $$F(x_\star(\lambda)+x,\lambda) \sim \lambda^{s(A,m_\star)-m_\star}\partial_z F_\star(z_\star,0)\, x \ \ \text{as} \ \ \lambda \to 0.$$ Since $x_\star(\lambda) \sim \lambda^{m_\star}z_\star$ as $\lambda \to 0$, if $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$ and $x=O(1)$ then $x/a_\epsilon \ll \lambda_\epsilon^{m_\star}$ and \begin{align}\label{eq:Fbr-scale} F(x_\star(\lambda_\epsilon) + x/a_\epsilon,\lambda_\epsilon) \sim (1/a_\epsilon)\lambda_\epsilon^{s(A,m_\star)-m_\star}\partial_z F_\star(z_\star,0)\, x. \end{align} In particular, $x \mapsto F(x_\star(\lambda_\epsilon)+x/a_\epsilon,\lambda_\epsilon)$ is asymptotically linear as $\epsilon \to 0$, and in \eqref{eq:tF-tG-ratio2} we can take $$h_\epsilon = b_\epsilon \lambda_\epsilon^{s(A,m_\star)-m_\star} \ \ \text{and} \ \ Q(x) = \partial_z F_\star(z_\star,0) \, x.$$ For $G$, define $G_\star(z,\lambda) = \lambda^{-s(B,m_\star)}G(z\lambda^{m_\star},\lambda)$, then similarly to $F_\star$, $$G_\star(z,\lambda) \sim \sum_{\beta \in B(m_\star)}d_\beta z^{\beta_1},$$ so extend to $\lambda=0$ by letting $G_\star(z,0) = \sum_{\beta \in B(m_\star)}d_\beta z^{\beta_1}$. By assumption, $G$ is non-negative, so $G_\star(z,\lambda) \ge 0$ for all $z,\lambda$, including $\lambda=0$ by continuity. In particular, any zero of the function $z \mapsto G_\star(z,0)$ must have even multiplicity. If $G_\star(z_\star,0)=0$ then since $\partial_z F_\star(z_\star,0) \ne 0$ and $F=O(G)$ by assumption, $\partial_z G_\star(z_\star,0) \ne 0$ which implies $G_\star(z,0)<0$ for some $z$ near $z_\star$, a contradiction. It follows that $G_\star(z_\star,0)>0$, and consequently that $$G_\star(z_\star(\lambda)+z,\lambda) = G_\star(z_\star,0) + o(1) \ \ \text{as} \ \ |z|+|\lambda| \to 0.$$ Thus if $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$ and $x=O(1)$ then $x/a_\epsilon \ll \lambda_\epsilon^{m_\star}$ and \begin{align}\label{eq:Gbr-scale} G(x_\star(\lambda_\epsilon)+x/a_\epsilon,\lambda_\epsilon) \sim \lambda_\epsilon^{s(B,m_\star)} G_\star(z_\star,0), \end{align} so in \eqref{eq:tF-tG-ratio2} we can take $$\ell_\epsilon = \epsilon^2 a_\epsilon^2 b_\epsilon \lambda_\epsilon^{s(B,m_\star)} \ \ \text{and} \ \ V(x) = G_\star(z_\star,0).$$ \noindent\textit{Drift to diffusion ratio.} Following the recipe of Sections \ref{sec:iso} and \ref{sec:const-equil}, $$\frac{h_\epsilon}{\ell_\epsilon} = \frac{\lambda_\epsilon^{s(A,m_\star)-s(B,m_\star)-m_\star}}{\epsilon^2 a_\epsilon^2},$$ so $h_\epsilon \ll \ell_\epsilon \Leftrightarrow 1/a_\epsilon \ll \epsilon \lambda_\epsilon^{\gamma_\star}$, where $\gamma_\star = (m_\star + s(B,m_\star) - s(A,m_\star))/2$, similarly with $\asymp,\gg$ in place of $\ll$.\\ \noindent\textit{Time scale.} $(b_\epsilon)$ is again determined by setting $h_\epsilon \asymp 1$ or $\ell_\epsilon \asymp 1$, so we'll just state the result. \begin{theorem}\label{thm:eq-branch} Let $\tilde U \subset \mathbb{R}^d$ be a non-empty open convex set whose closure contains $0$, and let $\lambda$ be a parameter taking values in an interval $I$ containing $0$. 
Assume that for each domain $D\subset\subset \tilde U$, $(x_\epsilon(t ;\lambda_\epsilon))_{t\ge 0}$ is a strongly stochastic parametrized QDP on $(D_\epsilon):=(\{x_\star(\lambda_\epsilon) + x/a_\epsilon\colon x \in D\})$ to scale $(a_\epsilon),(b_\epsilon)$ with characteristics $F,G$ that have a Taylor expansion around $(0,0)$ as in \eqref{eq:loTaylor}. Suppose in addition that $F$ satisfies \eqref{eq:Feqbr} for some $\star \in \{1,\dots,n\}$ and $z_\star>0$. Let $x_\star(\lambda)$ denote the corresponding branch and let $\gamma_\star = (m_\star + s(B,m_\star) - s(A,m_\star))/2$.\\ Let $Y_\epsilon(t) = a_\epsilon(x_\epsilon(b_\epsilon t;\lambda_\epsilon)-x_\star(\lambda_\epsilon))$ and suppose that $a_\epsilon \to \infty$. If $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$ and $(a_\epsilon),(b_\epsilon),(\lambda_\epsilon)$ satisfy one of the sets of conditions below, then $(Y_\epsilon)$ is a QD with characteristics $\tilde F,\tilde G$ as described below, where $$Q(x) = \partial_z F_\star(z_\star,0)\, x \ \ \text{and} \ \ V(x) = G_\star(z_\star,0)$$ and $F_\star,G_\star$ are as given in the discussion above. \begin{itemize}[noitemsep] \item If $1/a_\epsilon \ll \epsilon \lambda_\epsilon^{\gamma_\star}$ and $b_\epsilon \asymp \epsilon^{-2}a_\epsilon^{-2}\lambda_\epsilon^{-s(B,m_\star)}$ then $\tilde F=0$ and $\tilde G \propto V$. \item If $1/a_\epsilon \asymp \epsilon \lambda_\epsilon^{\gamma_\star}$ and $b_\epsilon$ satisfies either condition above (at this scale the two agree up to $\asymp$) then $\tilde F \propto Q$ and $\tilde G \propto V$. \item If $1/a_\epsilon \gg \epsilon \lambda_\epsilon^{\gamma_\star}$ and $b_\epsilon \asymp \lambda_\epsilon^{m_\star-s(A,m_\star)}$ then $\tilde F \propto Q$ and $\tilde G=0$. \end{itemize} \end{theorem} \begin{proof} It is trivial to check that Corollary \ref{cor:QDP-QD} holds with $x_\star(\lambda_\epsilon)$ in place of $x_\star$. The result then follows from locally uniform convergence in \eqref{eq:FG-branch-lim}, which follows from the discussion. \end{proof} As we did before, we define the relevant limit ranges. In this case, it is important that we restrict to $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$. \begin{definition}[limit ranges for a simple equilibrium branch]\label{def:branch-lim-scales} In the context of Theorem \ref{thm:eq-branch} (in particular, assuming that $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$), define the following \emph{limit ranges} for $(x_\epsilon)$: \begin{enumerate}[noitemsep,label={(\roman*)}] \item $1/a_\epsilon \ll \epsilon \lambda_\epsilon^{\gamma_\star}$ is the \emph{pure diffusive range}, \item $1/a_\epsilon \gg \epsilon \lambda_\epsilon^{\gamma_\star}$ is the \emph{deterministic range}, and \item $1/a_\epsilon \asymp \epsilon \lambda_\epsilon^{\gamma_\star}$ is the \emph{drift-diffusion (dd) scale}. \end{enumerate} \end{definition} In cases where both $0$ and $x_\star(\lambda_\epsilon)$ are equilibria (zeros of $F$), or when there are two branches $\pm x_\star(\lambda_\epsilon)$ as in the case of a saddle-node, we could say that a bifurcation has occurred, in the sense of the equilibria becoming distinguishable, once the distance between equilibria exceeds the drift-diffusion scale around $x_\star(\lambda)$. Since that distance is of order $x_\star(\lambda_\epsilon)$, and by assumption, $x_\star(\lambda) \asymp \lambda^{m_\star}$ as $\lambda\to 0$, this occurs when $\epsilon \lambda^{\gamma_\star} \asymp \lambda^{m_\star}$. 
Setting the two equal to each other, $$\epsilon\,\lambda^{(m_\star+s(B,m_\star)-s(A,m_\star))/2} = \lambda^{m_\star},$$ and squaring and re-arranging gives $\epsilon^2 = \lambda^{m_\star+s(A,m_\star)-s(B,m_\star)}$ or \begin{align}\label{eq:lbdtranspt} \lambda = \epsilon^{\nu_\star} \quad \text{where} \quad \nu_\star = \dfrac{2}{m_\star+s(A,m_\star)-s(B,m_\star)}, \end{align} which is also equivalent to $\epsilon^2 = r(\lambda^{m_\star},\lambda)$, the crossing point of $\Phi(\epsilon)$ (from Theorem \ref{thm:dd-curve}) with the curve $x=\lambda^{m_\star}$. The condition $1/a_\epsilon \ll x_\star(\lambda_\epsilon)$ for validity of the limit scale translates to $\epsilon\lambda^{\gamma_\star} \ll \lambda^{m_\star}$, which corresponds to $\lambda \gg \epsilon^{\nu_\star}$. \section{Bifurcations in one dimension}\label{sec:bifurc} We now apply the theory developed in Section \ref{sec:prmtzd} to study some one-dimensional bifurcations: saddle-node, transcritical and pitchfork. Our goal is to understand how the dd scale changes, in both space and time, as the value of $\lambda$ sweeps across the bifurcation. We'll begin by giving formulae for the objects of study, then conduct the bifurcation analysis. \subsection{Generalities}\label{sec:gener} Let's first write down the dd space and time scales, as a function of $\lambda$, around both $x=0$ and $x=x_\star$, assuming in the former case that the dd curve is upright, which will mostly be the case in what follows, and in the latter case that we are in the context of Section \ref{sec:float-equil}.\\ \noindent\textbf{Scales around $x=0$. }As in the proof of Theorem \ref{thm:dd-curve}, the dd curve intersects the lines $L_i=\{(x,\lambda)\colon x=\lambda^{m_i}\}$ at the points $p_i(\epsilon)=(x_i(\epsilon),\lambda_i(\epsilon))$ such that $r(p_i(\epsilon))=\epsilon^2$, which, using \eqref{eq:rLi}, for $i=1,\dots,n$ satisfy $$\lambda_i(\epsilon) = \epsilon^{\nu_i} \ \ \text{where} \ \ \nu_i = \frac{2}{m_i+s(A,m_i)-s(B,m_i)}.$$ If the dd curve is upright then $\lambda_i(\epsilon)<\lambda_{i+1}(\epsilon)$ for each $i$. Referring to \eqref{eq:dd-xvalue} and letting $\lambda_0(\epsilon)=0$ and $\lambda_{n+1}(\epsilon)=1$, the dd curve around $x=0$ has the form $$\phi_\epsilon(\lambda) = \epsilon^{2/(1+\delta_1(i))}\lambda^{-\delta_2(i)/(1+\delta_1(i))}\quad \text{for} \quad \lambda_i(\epsilon)\le \lambda \le \lambda_{i+1}(\epsilon).$$ Since, for a given sequence $(\lambda_\epsilon)$ and subpartition element $R_i$, the dd scale has $1/a_\epsilon \asymp \phi_\epsilon(\lambda_\epsilon)$, we can sensibly define the dd time scale around $x=0$, modulo $\asymp$, piecewise by the function $$b_\epsilon(\lambda) = \phi_\epsilon(\lambda)^{1-\alpha_1(i)}\lambda^{-\alpha_2(i)} \quad \text{for} \quad \lambda_i(\epsilon) \le \lambda \le \lambda_{i+1}(\epsilon),$$ which is obtained from the form of $b_\epsilon$ given in Theorem \ref{thm:prmtrzd-limits} by substituting $\phi_\epsilon$ for $1/a_\epsilon$.\\ From Theorem \ref{thm:dd-curve} we already know that $\phi_\epsilon(\lambda)\ll 1$ uniformly over $\lambda$ as $\epsilon \to 0$, and that $\lambda_i(\epsilon) \ll 1$ as $\epsilon \to 0$ for $i\in \{1,\dots,n\}$, since $\phi_\epsilon(\lambda_i(\epsilon)) = \lambda_i(\epsilon)^{m_i}$ and $m_i\in (0,\infty)$. Using the formulae above we can read off a good deal more information. Uprightness is still assumed, so $1+\delta_1(i) >0$. 
\begin{enumerate}[noitemsep,label={(\roman*)}] \item For each $i$ and $\lambda \in [\lambda_i(\epsilon),\lambda_{i+1}(\epsilon)]$, \begin{enumerate}[noitemsep,label={(\alph*)}] \item $\phi_\epsilon$ and $b_\epsilon$ each have the form $\epsilon^{q_1}\lambda^{q_2}$ for some $q_1,q_2 \in \mathbb{Q}$, so each is monotone in both $\epsilon$ and $\lambda$. \item $\lambda \mapsto \phi_\epsilon(\lambda)$ is increasing if $\delta_2(i)<0$, constant if $\delta_2(i)=0$ and decreasing if $\delta_2(i)>0$. \item since $\phi_\epsilon \ll 1$, if $\alpha_1(i)>1$ then $b_\epsilon(\lambda) \gg 1$ as $\epsilon \to 0$, uniformly over $\lambda$. \item if $\alpha_1(i)=1$ and $i<n$ then since $\lambda_n(\epsilon)\ll 1$, $b_\epsilon(\lambda) \gg 1$ as $\epsilon \to 0$, uniformly over $\lambda$. \end{enumerate} \item If $\alpha_1(n)=1$ then either \begin{enumerate}[noitemsep,label={(\alph*)}] \item $b_\epsilon(\lambda)=1$ on $[\lambda_n(\epsilon),1]$ (if $\alpha_2(n)=0$), or \item $b_\epsilon(\lambda) \downarrow 1$ as $\lambda \uparrow 1$ (if $\alpha_2(n)>0$). \end{enumerate} \end{enumerate} \noindent\textbf{Scales around $x=x_\star$. }Referring to the dd scale in Theorem \ref{thm:eq-branch}, we have $x_\star(\lambda) \asymp \lambda^{m_\star}$, $\gamma_\star=(m_\star+s(B,m_\star)-s(A,m_\star))/2$ and $\nu_\star=2/(m_\star+s(A,m_\star)-s(B,m_\star))$, and we can define a dd curve around $x_\star$ with half-width $$\phi_\epsilon^\star(\lambda) := \epsilon \lambda^{\gamma_\star} \quad \text{for} \quad \lambda\ge \lambda_\star(\epsilon):=\epsilon^{\nu_\star}.$$ Similarly to the above, referring to Theorem \ref{thm:eq-branch} the dd time scale around $x_\star$ can be defined by $$b_\epsilon^\star(\lambda) = \lambda^{m_\star-s(A,m_\star)}\quad \text{for} \quad \lambda\ge \lambda_\star(\epsilon).$$ From the definition of $\lambda_\star(\epsilon)$ (see \eqref{eq:lbdtranspt}), it follows that $\phi_\epsilon^\star(\lambda_\star(\epsilon)) \asymp \phi_\epsilon(\lambda_\star(\epsilon))$ as $\epsilon \to 0$. Similarly it can be verified from the formulae that $b_\epsilon^\star(\lambda_\star(\epsilon)) \asymp b_\epsilon(\lambda_\star(\epsilon))$ as $\epsilon \to 0$. As before we will make some general observations, this time for $\lambda \in [\lambda_\star(\epsilon),1]$. \begin{enumerate}[noitemsep,label={(\alph*)}] \item $\phi_\epsilon^\star$ and $b_\epsilon^\star$ each have the form $\epsilon^{q_1}\lambda^{q_2}$ for some $q_1,q_2 \in \mathbb{Q}$, so each is monotone in $\epsilon$ and $\lambda$. \item $\phi_\epsilon^\star(\lambda) \ll 1$ as $\epsilon \to 0$, uniformly in $\lambda$. This follows from monotonicity of $\lambda\mapsto \phi_\epsilon^\star(\lambda)$ together with $\phi_\epsilon^\star(\lambda_\star(\epsilon)) \asymp \phi_\epsilon(\lambda_\star(\epsilon))\ll 1$ and $\phi_\epsilon^\star(1) = \epsilon \ll 1$. \item $\phi_\epsilon^\star(\lambda) \to \epsilon$ and $b_\epsilon^\star(\lambda) \to 1$ as $\lambda \to 1$, for each $\epsilon>0$. \item Since $b_\epsilon^\star(1)=1$, if $b_\epsilon(\lambda_\star(\epsilon))\gg 1$ as $\epsilon \to 0$ then for small $\epsilon>0$, $b_\epsilon^\star(\lambda)\downarrow 1$ as $\lambda \uparrow 1$. \end{enumerate}
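Since all four scale functions are determined by elementary monomial data, they can be evaluated mechanically. The following Python sketch is again purely illustrative: it reuses the assumed global formula for $r$ from the earlier sketch, assumes the upright case so that the bisection in $x$ is valid, and the test values of $A$, $B$, $m_\star$ and $\epsilon$ are arbitrary choices.
\begin{verbatim}
import numpy as np

def s(S, m):
    # scale function: s(S, m) = min over (p1, p2) in S of m*p1 + p2
    return min(m * p1 + p2 for (p1, p2) in S)

def r(x, lam, A, B):
    num = max(x**a1 * lam**a2 for (a1, a2) in A)
    den = max(x**b1 * lam**b2 for (b1, b2) in B)
    return x * num / den

def phi(lam, eps, A, B):
    # dd half-width around x = 0: solve r(x, lam) = eps^2 by geometric
    # bisection; valid when x -> r(x, lam) is increasing (upright case)
    lo, hi = 1e-16, 1.0
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if r(mid, lam, A, B) < eps**2 else (lo, mid)
    return np.sqrt(lo * hi)

def b(lam, eps, A, B):
    # dd time scale around x = 0: phi^(1 - a1) * lam^(-a2), with (a1, a2)
    # the dominant exponent of F at the point (phi(lam), lam)
    x = phi(lam, eps, A, B)
    a1, a2 = max(A, key=lambda p: x**p[0] * lam**p[1])
    return x**(1 - a1) * lam**(-a2)

def branch_scales(lam, eps, A, B, m_star):
    # dd half-width and time scale around x_star, as above
    gamma = (m_star + s(B, m_star) - s(A, m_star)) / 2
    return eps * lam**gamma, lam**(m_star - s(A, m_star))

# transcritical F with the admissible choice B = {(1,0),(0,1)}
A, B, m_star, eps = [(2, 0), (1, 1)], [(1, 0), (0, 1)], 1.0, 0.1
for lam in (0.05, 0.2, 0.8):
    print(lam, phi(lam, eps, A, B), b(lam, eps, A, B),
          *branch_scales(lam, eps, A, B, m_star))
\end{verbatim}
For this test choice $m_B=m_A$, so $\phi_\epsilon\equiv\epsilon$ and, for $\lambda\ge\lambda_1(\epsilon)=\epsilon$, $b_\epsilon(\lambda)=1/\lambda$, which the printed values reflect.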
\subsection{Bifurcations} We now study bifurcations. Since, in principle, we can use the above formulae to describe exactly the dd space and time scales, we shall be more concerned with the following qualitative properties: \begin{enumerate}[noitemsep,label={\arabic*.}] \item The general shape of $\phi_\epsilon$ and $\phi_\epsilon^\star$, i.e., the intervals $[\lambda_i(\epsilon),\lambda_{i+1}(\epsilon)]$ on which each one increases, decreases or remains constant, and \item The values of $\lambda$ for which $b_\epsilon(\lambda)$ and $b_\epsilon^\star(\lambda)$ are $\gg 1$, $\asymp 1$ or $\ll 1$ as $\epsilon\to 0$, corresponding respectively to regions where the diffusion limit is slow, fast, or irrelevant (i.e., too short to observe on the original time scale). \end{enumerate} The context is a strongly stochastic ($F=O(G)$) parametrized QDP, as in Definitions \ref{def:ssQDP} and \ref{def:prmtrzdQDP}, whose characteristics $F,G$ have a Taylor expansion to leading order around $(0,0)$ as in \eqref{eq:loTaylor}, and for which $F(0,0)=\partial_x F(0,0)=0$. Letting $A,B$ denote the powers of $F,G$ from \eqref{eq:loTaylor}, by Lemma \ref{lem:envelope-sum}, we can, and will, assume in this section that $A,B$ are envelopes.\\ \noindent\textbf{Bifurcation types. }The set $A$ determines the bifurcation type, and from the assumption $F=O(G)$ and Lemma \ref{lem:FOGenv}, $B$ is constrained by the condition $\mathrm{piv}(A)\subset S(B)$. The bifurcations that we'll consider correspond to the following choices for $A$ and equilibria: \begin{enumerate}[noitemsep,label={\arabic*.}] \item Saddle-node: $A=\{(2,0),(0,1)\}$ with equilibria at $\pm \, x_\star(\lambda)$ where $x_\star(\lambda) \asymp \sqrt{\lambda}$. \item Transcritical: $A=\{(2,0),(1,1)\}$ with equilibria at $0$ and $x_\star(\lambda) \asymp \lambda$. \item Pitchfork: $A=\{(3,0),(1,1)\}$ with equilibria at $0$ and $\pm \, x_\star(\lambda)$ where $x_\star(\lambda) \asymp \sqrt{\lambda}$. \end{enumerate} In all three cases, $A$ has the form $\{(j_1,0),(j_2,1)\}$ for some $j_2\le 1<j_1$, which has $\mathrm{piv}(A)=A$ and $M(A)=\{m_A\}$ with $m_A:=1/(j_1-j_2)$, and $F$ has a non-constant equilibrium branch $x_\star(\lambda) \asymp \lambda^{m_\star}$ with $m_\star=m_A$. For simplicity we'll assume $B$ has no elements $(\beta_1,\beta_2)$ with $\beta_2 \ge 2$, i.e., $G$ has no relevant $\lambda$ dependence above $\lambda^1$, which is reasonable for most applications. Using this and the constraint $A \subset S(B)$, in each case either \begin{enumerate}[noitemsep,label={(\roman*)}] \item $B=\{(k,0)\}$ for some $k\le j_2$ which has $M(B)=\emptyset$, or \item $B=\{(k_1,0),(k_2,1)\}$ for some $k_2<k_1\le j_1$ with $k_2 \le j_2$ which has $M(B)=\{1/(k_1-k_2)\}$. \end{enumerate} \noindent\textbf{Uprightness. }For the above bifurcations and any compatible choice of $B$, the dd curve around $x=0$ is upright, with two exceptions: the saddle-node with $B=\{(1,0),(0,1)\}$ and the pitchfork with $B=\{(2,0),(1,1)\}$, where it is vertical for $\lambda \le x \le \sqrt{\lambda}$. This is clear if $B=\{(k,0)\}$ since $\alpha_1(i)\in \{j_1,j_2\}$ for each $i$ while $\beta_1(i)=k \le \min(j_1,j_2)$. Otherwise, since $k_1\le j_1$ and $k_2\le j_2$, $\alpha_1(i) < \beta_1(i)$ only occurs if for some $i$, $\alpha(i) = (j_2,1)$ and $\beta(i)=(k_1,0)$ with $k_1>j_2$. Since $A(m)=\{(j_2,1)\}$ iff $m>m_A$ and $B(m)=\{(k_1,0)\}$ iff $m<m_B:=1/(k_1-k_2)$, this is possible only if $m_A<m_B$, i.e., $k_1-k_2<j_1-j_2$, which, checking the admissible pairs $(k_1,k_2)$, occurs only in the two exceptional cases above; in both, $k_1=j_2+1$, so $1+\delta_1(i)=0$ there and the curve is vertical rather than folded.\\ \noindent\textbf{Regions. }As usual, we restrict to $(x,\lambda)\in [0,1]^2$. 
The behaviour in the other three quadrants can be inferred by symmetry: limit scales around $x=0$ are unchanged modulo reflection, and the same goes for scales around $x=x_\star$, when there is an equilibrium branch in the given quadrant, such as quadrant IV for saddle-node and pitchfork, and quadrant III for transcritical. Note that limit scales do not depend on whether the equilibrium is stable or unstable.\\ Using what we learned in Section \ref{sec:gener}, we give a qualitative picture of the dd space and time scales around $x=0$ and $x=x_\star$. We have $M(A)=\{m_A\}$ with $m_A=1/(j_1-j_2)$ and if $M(B) \ne \emptyset$ then $M(B)=\{m_B\}$ with $m_B=1/(k_1-k_2)$. In addition, $m_\star=m_A$. There are four cases to consider: \begin{enumerate}[noitemsep,label={(\roman*)}] \item if $B=\{(k,0)\}$ then $M(B)=\emptyset$, \item if $j_1-j_2=k_1-k_2$ then $m_B=m_A$, \item if $j_1-j_2>k_1-k_2$ then $m_B<m_A$ and \item if $j_1-j_2<k_1-k_2$ then $m_B>m_A$. \end{enumerate} This gives $$M:=M(A)\cup M(B) = \begin{cases} \{m_A\} & \text{in cases (i)-(ii)}, \\ \{m_A,m_B\} & \text{in cases (iii)-(iv)}.\end{cases}$$ When $|M|=1$, with $m_1=m_A$ the intervals to consider are $[0,\lambda_\star(\epsilon)]$ and $[\lambda_\star(\epsilon),1]$. When $|M|=2$, with $m_1=\min(m_A,m_B)$ and $m_2=\max(m_A,m_B)$, the intervals are $[0,\lambda_1(\epsilon)]$, $[\lambda_1(\epsilon),\lambda_2(\epsilon)]$ and $[\lambda_2(\epsilon),1]$. A table of $\alpha,\beta$ and $\delta$ values is given in Table \ref{tab:bifurcases}.\\ \begin{table} \centering \begin{tabular}{ c | c | c | c } Case & $(\alpha(i))_{i=0}^n$ & $(\beta(i))_{i=0}^n$ & $(\delta(i))_{i=0}^n$ \\ \hline $M(B)=\emptyset$ & $((j_1,0),(j_2,1))$ & $((k,0),(k,0))$ & $((j_1-k,0),(j_2-k,1))$ \\ $m_B=m_A$ & $((j_1,0),(j_2,1))$ & $((k_1,0),(k_2,1))$ & $((j_1-k_1,0),(j_2-k_2,0))$ \\ $m_B<m_A$ & $((j_1,0),(j_1,0),(j_2,1))$ & $((k_1,0),(k_2,1),(k_2,1))$ & $((j_1-k_1,0),(j_1-k_2,-1),(j_2-k_2,0))$ \\ $m_B>m_A$ & $((j_1,0),(j_2,1),(j_2,1))$ & $((k_1,0),(k_1,0),(k_2,1))$ & $((j_1-k_1,0),(j_2-k_1,1),(j_2-k_2,0))$ \end{tabular} \caption{The pivot sequences $(\alpha(i))$, $(\beta(i))$ and their differences $(\delta(i))$ in each of the four cases.} \label{tab:bifurcases} \end{table} \noindent\textbf{Shape of $\phi_\epsilon$ and $\phi_\epsilon^\star$. }As noted in Section \ref{sec:gener}, $\phi_\epsilon$ increases ($\nearrow$), decreases ($\searrow$) or is constant ($\rightarrow$) if $\delta_2(i)$ is negative, positive or zero respectively. Reading $\delta_2(i)$ from Table \ref{tab:bifurcases}, from left to right over the two or three regions, \begin{enumerate}[noitemsep,label={(\roman*)}] \item if $M(B)=\emptyset$ then $\phi_\epsilon$ is $\rightarrow$, then $\searrow$, \item if $m_B=m_A$ then $\phi_\epsilon$ is constant on $[0,1]$, \item if $m_B<m_A$ then $\phi_\epsilon$ is $\rightarrow$, then $\nearrow$, then $\rightarrow$ again, and \item if $m_B>m_A$ then $\phi_\epsilon$ is $\rightarrow$, then $\searrow$, then $\rightarrow$ again. \end{enumerate} In the exceptional cases (see uprightness above), which belong to case (iv), $\phi_\epsilon$ is vertical instead of decreasing in the second region. In each case, $\phi_\epsilon(\lambda) \to \epsilon^{2/(1+\delta_1(n))} = \epsilon^{2/(1+j_2-k_2)}$ ($k$ in place of $k_2$ in case (i)) as $\lambda \to 1$. Since $k_2 \le j_2\le 1$, $j_2-k_2 \in \{0,1\}$. For the saddle-node, $j_2=0$ requiring $k_2=0$ and giving limit $\epsilon^2$, while for the other two bifurcations it depends on the value of $k_2$.\\ Similarly, $\phi_\epsilon^\star$ is $\nearrow$, $\searrow$ or $\rightarrow$ on $[\lambda_\star(\epsilon),1]$ if $\gamma_\star$ is $>0$, $<0$ or $=0$. 
We leave it to the interested reader to determine its shape in each case. As noted in Section \ref{sec:gener}, $\phi_\epsilon^\star(\lambda_\star(\epsilon)) \asymp \phi_\epsilon(\lambda_\star(\epsilon))$ and $\phi_\epsilon^\star(\lambda) \to \epsilon$ as $\lambda \to 1$, so one way to infer its shape is to compute $\phi_\epsilon(\lambda_\star(\epsilon))$ and compare it to $\epsilon$.\\ \noindent\textbf{Scale of $b_\epsilon$ and $b_\epsilon^\star$. }Here is a brief summary: for $\lambda$ near $0$, time scales are $\gg 1$, while for $\lambda$ near $1$, the time scale around equilibrium points $\to 1$ following the curve $1/\lambda$, and around non-equilibrium points is eventually $\ll 1$ as $\lambda \to 1$. In other words, diffusion is slow near the bifurcation point, and moving away from the bifurcation point, becomes fast around equilibria and irrelevant around non-equilibrium points. We now give and demonstrate the precise statements.\\ In all cases, $b_\epsilon(\lambda)\gg 1$ uniformly over $\lambda \in [0,\lambda_n(\epsilon)]$ as $\epsilon \to 0$, i.e., the dd time scale around $0$ is slow in the leftmost $n$ (of $n+1$) regions. Since $\lambda_\star(\epsilon)\le \lambda_n(\epsilon)$, in particular the time scale is $\gg 1$ at least up to the point where equilibria become distinguishable. Using observations (i)(c)-(d) of the first half of Section \ref{sec:gener}, the above is true provided $\alpha_1(i) \ge 1$ for all $i<n$. This is obvious for transcritical and pitchfork as $\alpha_1\ge 1$ for all $\alpha\in A$, while for saddle-node, $n=1$ and $\alpha_1(0)=2$. If, instead, $\lambda \in [\lambda_n(\epsilon),1]$ then from the formula for $b_\epsilon$ and the fact that in all cases, $\alpha_1(n)=j_2$ and $\alpha_2(n)=1$, $$b_\epsilon(\lambda) = \begin{cases} 1/\lambda & \text{if} \ j_2=1, \\ \phi_\epsilon(\lambda)/\lambda & \text{if} \ j_2=0.\end{cases}$$ In particular, as $\lambda \to 1$, since $\phi_\epsilon(\lambda) \to \epsilon^{2/(1+j_2-k_2)}$, $$b_\epsilon \to \begin{cases} 1 & \text{if} \ \ j_2=1,\\ \epsilon^{2/(1+j_2-k_2)} & \text{if} \ \ j_2=0\end{cases}$$ (with $k$ in place of $k_2$ if $B=\{(k,0)\}$). If $j_2=0$ then, writing out $\phi_\epsilon$ on the last region, $b_\epsilon(\lambda)\le 1$ iff $\lambda \ge \epsilon^{2/(1+\delta_1(n)+\delta_2(n))}$, so if $\lambda \gg \epsilon^{2/(1+\delta_1(n)+\delta_2(n))}$ then $b_\epsilon(\lambda)\ll 1$ and it is sensible to declare the dd scale irrelevant at that point, as drift has completely washed out diffusion. For the time scale around $x_\star$, since we've shown that $b_\epsilon(\lambda_\star(\epsilon))\gg 1$ as $\epsilon \to 0$, using observation (d) from the second half of Section \ref{sec:gener} it follows that $b_\epsilon^\star(\lambda) \downarrow 1$ as $\lambda \uparrow 1$, for each $\epsilon>0$. In particular, if $\lambda_\star(\epsilon) \le \lambda \ll 1$ then $b_\epsilon^\star(\lambda) \gg 1$. 
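The qualitative picture can also be sanity-checked by direct simulation. The sketch below is illustrative only and is not tied to the precise definitions of a QDP: it simulates an Euler--Maruyama discretization of the hypothetical SDE $dX=(\lambda X-X^2)\,dt+\epsilon\sqrt{X}\,dW$ (a transcritical drift together with the admissible diffusion choice $B=\{(1,0)\}$ from the list above, i.e.\ $G(x,\lambda)=x$; the step size, horizon and parameter values are arbitrary), started on the branch $x_\star(\lambda)=\lambda$. Linearizing around $x_\star$ gives an Ornstein--Uhlenbeck approximation with stationary standard deviation $\epsilon\sqrt{\lambda}/\sqrt{2\lambda}=\epsilon/\sqrt{2}$, consistent with the prediction $\phi_\epsilon^\star(\lambda)\asymp\epsilon\lambda^{\gamma_\star}$, where $\gamma_\star=0$ for this choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(lam, eps, T=200.0, dt=1e-3):
    # Euler-Maruyama for dX = (lam*X - X^2) dt + eps*sqrt(X) dW,
    # started at the equilibrium branch x_star(lam) = lam
    n = int(T / dt)
    x, burn, samples = lam, n // 2, []
    for i in range(n):
        dw = rng.normal() * np.sqrt(dt)
        x += (lam * x - x * x) * dt + eps * np.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)  # crude truncation to keep the state nonnegative
        if i >= burn:
            samples.append(x)
    return np.array(samples)

eps = 0.01
for lam in (0.1, 0.3, 0.9):
    sd = simulate(lam, eps).std()
    print(f"lam={lam}: empirical sd = {sd:.4f}, "
          f"eps/sqrt(2) = {eps/np.sqrt(2):.4f}")
\end{verbatim}
The empirical fluctuation size is essentially independent of $\lambda$ on this range, as the constant half-width $\phi_\epsilon^\star \asymp \epsilon$ predicts.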
\subsection{Canonical form of $G$} With the multiplicity of functions $G$ satisfying $F=O(G)$ for a given $F$, the question arises whether there is a canonical or generic choice of $G$, one to be expected most often in applications. I wish to argue that for the above examples, where $A=\{(j_1,0),(j_2,1)\}$, an obvious choice is to take \begin{align}\label{eq:canonB} B=\{(k,0)\} \quad \text{where} \quad k=\min(j_1,j_2)=j_2. \end{align} Equivalently, $B=\{\beta\}$ where $\beta = (j_1,0)\wedge (j_2,1)$. To understand this claim, note that a bifurcation can arise from two competing mechanisms, when their contributions to the drift are equal and opposite: for example, varying the reproduction rate in a population growth model, a transcritical bifurcation occurs when birth and death rates are equal. When each mechanism occurs randomly at specified rates, the diffusion contributed from each one is additive, even if the drift is cancellative. This explains the assumption $F=O(G)$. The particular form \eqref{eq:canonB} is obtained if, apart from the terms that cause the bifurcation, no other cancellation occurs.\\ Happily, \eqref{eq:canonB} leads to the simplest diagrams, since $M(B)=\emptyset$. There are only two regions: $[0,\lambda_\star(\epsilon)]$ and $[\lambda_\star(\epsilon),1]$. The first region can be viewed as the ``critical window'', where the dd scale is $\ge$ the separation distance between equilibria, and (borrowing terminology from \cite{luczak}) the second region is the ``barely subcritical/supercritical'' region, where the equilibria have separated but $\lambda$ is still generally $\ll 1$ as $\epsilon\to 0$.\\ Let's begin by computing $\lambda_\star(\epsilon)$. Since $m_\star=m_A=m_1=1/(j_1-j_2)$ and $\alpha(0)=(j_1,0),\beta(0)=(k,0)=(j_2,0)$, $s(A,m_\star)-s(B,m_\star)=m_\star (j_1-j_2)=1$ and so $$\nu_\star = 2/(m_\star+s(A,m_\star)-s(B,m_\star)) = 2(j_1-j_2)/(1 + j_1-j_2)$$ and, noting $j_1>1\ge j_2$, $$\lambda_\star(\epsilon)=\epsilon^{2(j_1-j_2)/(1+j_1-j_2)}.$$ \noindent\textbf{Critical window. }For $\lambda \in [0,\lambda_\star(\epsilon)]$, both $\phi_\epsilon$ and $b_\epsilon$ are constant, and $b_\epsilon \gg 1$ as $\epsilon \to 0$. Most of this has already been noted, except that $b_\epsilon$ is constant, which using the formula for $b_\epsilon$ follows from $\phi_\epsilon$ being constant, and the fact that $\alpha_2(0)=0$. The precise formulae are as follows: since $k=j_2$, referring to Table \ref{tab:bifurcases}, we have $\alpha(0) = (j_1,0)$ and $\delta(0)=(j_1-j_2,0)$ and, noting $j_1>1\ge j_2$, $$\phi_\epsilon = \epsilon^{2/(1+j_1-j_2)} \quad\text{and} \quad b_\epsilon = \epsilon^{-2(j_1-1)/(1+j_1-j_2)}.$$ \noindent\textbf{Barely non-critical region. }For $\lambda \in [\lambda_\star(\epsilon),1]$ we first treat scales around $x=0$, then around $x=x_\star$. We have $\alpha(1)=(j_2,1)$ and since $k=j_2$, $\delta(1)=(0,1)$ and so $$\phi_\epsilon(\lambda) = \epsilon^2/\lambda, \quad b_\epsilon(\lambda) = \begin{cases} 1/\lambda & \text{if} \ j_2=1, \\ \epsilon^2/\lambda^2 & \text{if} \ j_2=0.\end{cases}$$ In particular, $\phi_\epsilon(\lambda)\to \epsilon^2$ as $\lambda \to 1$, and for the saddle-node, diffusion is irrelevant once $\lambda \gg \epsilon$, while in the other cases, diffusion around $x=0$ persists as $\lambda \to 1$, approaching fast diffusion. For scales around $x=x_\star$, recall $m_\star=1/(j_1-j_2)$ and $s(A,m_\star)-s(B,m_\star)=1$, so $\gamma_\star = (m_\star+s(B,m_\star)-s(A,m_\star))/2 = (1/(j_1-j_2)-1)/2 \le 0$, and since $j_1>1\ge j_2$, $$\phi_\epsilon^\star(\lambda) = \begin{cases} \epsilon & \text{if} \ j_1=2, \ j_2=1,\\ \epsilon\,\lambda^{(1/(j_1-j_2)-1)/2} & \text{otherwise},\end{cases}$$ with $\phi_\epsilon^\star(\lambda) \downarrow \epsilon$ as $\lambda \uparrow 1$ in the second case. 
Then, since $m_\star-s(A,m_\star)=m_\star(1-j_1) = -(j_1-1)/(j_1-j_2)$, $$b_\epsilon^\star(\lambda) = \lambda^{-(j_1-1)/(j_1-j_2)}.$$ Since $(j_1-1)/(j_1-j_2)>0$, this supports what we already showed: that $b_\epsilon^\star(\lambda) \gg 1$ for $\lambda_\star(\epsilon) \le \lambda \ll 1$ and $b_\epsilon^\star(\lambda)\downarrow 1$ as $\lambda \uparrow 1$.\\
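For reference, the canonical-form exponents derived in this subsection can be tabulated mechanically. The following sketch (illustrative; exact arithmetic via Python's \texttt{fractions} module) prints, for each of the three bifurcations with the canonical choice $k=j_2$, the exponent $\nu_\star$ of $\lambda_\star(\epsilon)$, the critical-window exponents of $\phi_\epsilon$ and $b_\epsilon$, and the exponent of $b_\epsilon^\star(\lambda)$.
\begin{verbatim}
from fractions import Fraction as Fr

# exponents from the formulae above, with canonical B (k = j2)
for name, (j1, j2) in {"saddle-node": (2, 0),
                       "transcritical": (2, 1),
                       "pitchfork": (3, 1)}.items():
    d = j1 - j2
    nu_star = Fr(2 * d, 1 + d)        # lambda_star(eps) = eps**nu_star
    phi_win = Fr(2, 1 + d)            # phi_eps = eps**phi_win in the window
    b_win = Fr(-2 * (j1 - 1), 1 + d)  # b_eps = eps**b_win in the window
    b_star = Fr(-(j1 - 1), d)         # b_star_eps(lam) = lam**b_star
    print(f"{name}: nu*={nu_star}, phi=eps^({phi_win}), "
          f"b=eps^({b_win}), b*(lam)=lam^({b_star})")
\end{verbatim}
For the saddle-node, for instance, this prints $\nu_\star=4/3$, $\phi_\epsilon=\epsilon^{2/3}$, $b_\epsilon=\epsilon^{-2/3}$ and $b_\epsilon^\star(\lambda)=\lambda^{-1/2}$.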
\begin{comment} The more exotic possibilities tend not to occur in simple bifurcation scenarios.\\ \subsection{Saddle-node} The saddle-node bifurcation is characterized by the conditions $$F(0,0)=0, \ \ \partial_x F(0,0)=0, \ \ \partial_\lambda F(0,0) \ne 0 \ \ \text{and} \ \ \partial_x^2 F(0,0)\ne 0.$$ If $F$ is $C^2$, it follows that for $|x|+|\lambda| \ll 1$, $F(x,\lambda) \sim c_{20}x^2 + c_{01}\lambda$, i.e., $F$ has powers $A=\{(2,0),(0,1)\}$. We have $M(A)=\{1/2\}$ and $\mathrm{piv}(A)=A$. If $B$ is an envelope, then the requirement $A\subset S(B)$ leaves only three options: \begin{enumerate}[noitemsep,label={(\roman*)}] \item $B = \{(0,0)\}$, which has $M(B)=\emptyset$, \item $B = \{(1,0),(0,1)\}$, which has $M(B)=\{1\}$, and \item $B = \{(2,0),(0,1)\}$, which has $M(B)=\{1/2\}$. \end{enumerate} In all cases $\mathrm{piv}(B)=B$.\\ \subsection{Transcritical} The transcritical bifurcation is characterized by the conditions \begin{align*} &F(0,\lambda)=0 \ \ \ \text{for all} \ \ \lambda \ \ \text{in a neighbourhood of} \ \ 0,\\ &\partial_x F(0,0)=0, \ \ \partial_{x\lambda}F(0,0)\ne 0 \ \ \text{and} \ \ \partial_x^2 F(0,0) \ne 0. \end{align*} If $F$ is $C^2$, it follows that for $|x|+|\lambda| \ll 1$, $F(x,\lambda) \sim c_{20}x^2 + c_{11}x\lambda$, i.e., $F$ has powers $A=\{(2,0),(1,1)\}$, so $M(A)=\{1\}$ and $\mathrm{piv}(A)=A$. If $B$ is an envelope and $A\subset S(B)$, then either \begin{enumerate}[noitemsep,label={(\roman*)}] \item $B = \{(0,0)\}$, which has $M(B)=\emptyset$, \item $B = \{(1,0)\}$, which has $M(B)=\emptyset$, \item $B = \{(1,0),(0,k)\}$ for some $k\ge 1$, which has $M(B)=\{k\}$, \item $B = \{(2,0),(0,k)\}$ for $k\in\{1,2\}$, which has $M(B) = \{k/2\}$, \item $B = \{(2,0),(1,1)\}$, which has $M(B)=\{1\}$, \item $B = \{(2,0),(1,1),(0,2)\}$, which has $M(B)=\{1\}$, or \item $B=\{(2,0)(1,1),(0,k)\}$ for some $k\ge 3$, which has $M(B) = \{1,k-1\}$. \end{enumerate} In case (vi), $\mathrm{piv}(B) = \{(2,0),(0,2)\}$, otherwise $\mathrm{piv}(B)=B$. Taking $M=M(A)\cup M(B)$, we obtain the following diagrams. \\ \subsection{Pitchfork} The pitchfork bifurcation is characterized by the conditions \begin{align*} &F(0,\lambda)=0 \ \ \ \text{for all} \ \ \lambda \ \ \text{in a neighbourhood of} \ \ 0,\\ &\partial_x F(0,0)=0, \ \ \partial_x^2 F(0,0)=0, \ \ \partial_{x\lambda}F(0,0)\ne 0 \ \ \text{and} \ \ \partial_x^3 F(0,0) \ne 0. \end{align*} If $F$ is $C^3$, it follows that $F(x,\lambda) \sim c_{30}x^3 + c_{11}x \lambda $ as $|x|+|\lambda| \to 0$, i.e., $F$ has powers $A=\{(3,0),(1,1)\}$, so $M(A)=\{1/2\}$ and $\mathrm{piv}(A)=A$. If $B$ is an envelope and $A\subset S(B)$ then either \begin{enumerate}[noitemsep,label={(\roman*)}] \item $B = \{(0,0)\}$, which has $M(B)=\emptyset$, \item $B = \{(1,0)\}$, which has $M(B)=\emptyset$, \item $B = \{(1,0),(0,k)\}$ for some $k\ge 1$, which has $M(B)=\{k\}$, \item $B = \{(2,0),(0,k)\}$ for $k\in\{1,2\}$, which has $M(B) = \{k/2\}$, \item $B = \{(2,0),(1,1)\}$, which has $M(B)=\{1\}$, \item $B = \{(2,0),(1,1),(0,2)\}$, which has $M(B)=\{1\}$, \item $B = \{(2,0),(1,1),(0,k)\}$ for $k\ge 3$, which has $M(B) = \{1,k-1\}$, or \item $B = \{(3,0),(k,1)\}$ for $k\in \{0,1\}$, which has $M(B) = \{1/(3-k)\}$. \end{enumerate} In case (vi), $\mathrm{piv}(B) = \{(0,2),(2,0)\}$, otherwise $\mathrm{piv}(B)=B$. Taking $M=M(A)\cup M(B)$, we obtain the following diagrams. \end{comment} \section*{Acknowledgements} The author is grateful for support from an NSERC Discovery Grant. \section*{Appendix} \subsection*{Proof of Lemma \ref{lem:SDExist}} \begin{proof} The existence and uniqueness of $(P_x)_{x \in \hat U}$, and the Feller and strong Markov properties, are given in Theorem 13.1 in Chapter 1 of \cite{pinsky}. It remains to show that if $\tau(U)<\infty$ and $X(0)\in U$ then as $t\to \tau(U)^-$ either $|X(t)|\to \infty$ or $d(X(t),z)\to 0$ for some $z \in \partial U$, where $d$ is Euclidean distance. By definition of $ \smash{ \hat \Omega_U}$, $X(t)=\mathfrak{c}$ iff $t\ge \tau(U)$ and $X(t) \to_{\rho_U} \mathfrak{c}$ as $t \to \tau(U)^-$. So, by definition of $\rho_U$, one of $|X(t)| \to \infty$ or $d(X(t),\partial U) \to 0$ holds as $t\to \tau(U)^-$. If $\liminf_{t \to \tau(U)^-}|X(t)|<\infty$ then $\mathcal{A}\cap \partial U \ne \emptyset$, where $\mathcal{A}$ is the limit set of $\{X(t)\colon t<\tau(U)\}$ with respect to $d$, so it is enough to show that if $\tau(U)<\infty$ and $\liminf_{t\to\tau(U)^-}|X(t)|<\infty$ then $X(t)$ converges as $t\to \tau(U)^-$.\\ For $x,z \in U$ define $f_z(x)=|x-z|^2 = \sum_i (x_i-z_i)^2$ and with $L$ as in \eqref{eq:mg-op}, for $t<\tau(U)$ define $$M(t;z)=f_z(X(t)) - \int_0^t (Lf_z)(X(s))ds,$$ so that $t\mapsto M(t\wedge \tau(D_n);z)$ is a martingale, by \eqref{eq:mp-mg}. We compute $$(Lf_z)(x) = \sum_i (G_{ii}(x) + 2F_i(x)(x_i-z_i)).$$ For $r>0$ let $B_r=\{x\in\mathbb{R}^d \colon |x|\le r\}$. Since, by assumption, $F,G$ are bounded on bounded subsets of $U$, \begin{align}\label{eq:Lbnd} A:=\sup\{|(Lf_z)(x)| \colon x,z \in B_r \cap U\}<\infty. \end{align} If $\tau$ is a stopping time with $\tau\le \tau(D_n)$, then since stopping preserves the martingale property, $t\mapsto M(t\wedge \tau;z)$ is a martingale, and if moreover $\sup_{t <\tau} |X(t)| \le r$, then using \eqref{eq:Lbnd}, for all $t\ge 0$ $$M(t\wedge \tau;z) \ge f_z(X(t\wedge \tau)) - (t \wedge \tau) A.$$ Using the strong Markov property and the fact that $M(0;X(0))=0$, it follows that if $\tau,\tau'$ are stopping times with $\tau \le \tau' \le \tau(D_n)$ and $M(t)$ is defined by \begin{align}\label{eq:stop-mg} M(t) = \mathbf{1}(\tau<t)\left(f_{X(\tau)}(X(t \wedge \tau')) - \int_{\tau}^{t\wedge \tau'} (Lf_{X(\tau)})(X(s))ds \right), \end{align} then $M$ is a martingale, and if moreover $\sup_{t \in [\tau,\tau')}|X(t)| \le r$, then \begin{enumerate}[noitemsep,label={(\roman*)}] \item $M(t) \ge -(t\wedge \tau'- t\wedge \tau)A$ for all $t \ge 0$, and \item $M(t) \ge |X(\tau')-X(\tau)|^2 - (\tau'-\tau )A$ for all $t\ge \tau'$. 
\end{enumerate} Fix $\epsilon \in (0,1)$ and $T>0$ and define the times $\tau_0=0$ and \begin{align}\label{eq:ret-time} &\tau_{2i+1} = \tau(D_n) \wedge T\wedge \inf\{t\ge \tau_{2i}\colon |X(t)| \le r-1\},\nonumber \\ &\tau_{2i+2} = \tau(D_n) \wedge T\wedge \inf\{t \ge \tau_{2i+1}\colon |X(t)-X(\tau_{2i+1})|^2 > \epsilon\}. \end{align} Then for each $i$, the stopping times $\tau_{2i+1},\tau_{2i+2}$ fulfill the conditions described for $\tau,\tau'$ above. Let $M_i(t)$ denote the process from \eqref{eq:stop-mg} with $\tau_{2i+1},\tau_{2i+2}$ in place of $\tau,\tau'$, and let $S_i(t) = \sum_{j=1}^i M_j(t)$. Summing over $j\le i$ in (i) above, $S_i(t) \ge - (t\wedge \tau_{2i+2})A \ge - TA$ for $t\ge 0$. Using (ii) above, on the event $\{\tau_{2i+2}<\tau(D_n)\wedge T\}$, $M_i(t) \ge \epsilon - (\tau_{2i+2}-\tau_{2i+1})A$ for all $t \ge \tau_{2i+2}$, so on $\{\tau_{2i+2}<\tau(D_n)\wedge T\}$, $S_i(t) \ge i\,\epsilon -\tau_{2i+2}\,A \ge i\,\epsilon - TA$ for all $t\ge \tau_{2i+2}$. We record these two facts: \begin{enumerate}[noitemsep,label={(\roman*)}] \item $S_i(t) \ge - T A$ for all $t\ge 0$, and \item $S_i(t) \ge i\,\epsilon - T A$ for all $t \ge \tau_{2i+2}$, on $\{\tau_{2i+2}<\tau(D_n)\wedge T\}$. \end{enumerate} Since each $M_j$ is a martingale, each $S_i$ is a martingale. In addition, $S_i(0)=0$, so together with (i), $S_i + TA$ is a non-negative martingale, with $S_i(0)+TA = TA$. Using Doob's inequality, for $C>TA$, $$P(\sup_{t\ge 0}S_i(t)\ge C-TA) \le TA/C,$$ which also holds for $C\le TA$ since then $TA/C\ge 1$. Write the times $\tau_i$ defined by \eqref{eq:ret-time} as $\tau_i^n$ to emphasize the dependence on $n$. Combining with (ii), \begin{align}\label{eq:tau-bound} P(\tau_{2i+2}^n < \tau(D_n)\wedge T) \le TA/(i\,\epsilon). \end{align} Defining $(\tau_i)_{i \ge 1}$ as in \eqref{eq:ret-time} but with $\tau(U)$ in place of $\tau(D_n)$, clearly $\tau_i^n = \tau_i \wedge \tau(D_n)$ for all $i,n$. If $\tau_{2i+2}^n = \tau(D_n)\wedge T$ for all $n$, then since $\tau(D_n) \to \tau(U)$, $\tau_{2i+2}=\tau(U)\wedge T$, so if $\tau_{2i+2}<\tau(U)\wedge T$ then $\tau_{2i+2}^n < \tau(D_n)\wedge T$ for some $n$. Using \eqref{eq:tau-bound} and continuity of probability, it follows that \begin{align}\label{eq:tau-bound2} P(\tau_{2i+2} < \tau(U) \wedge T) \le TA/(i\,\epsilon). \end{align}\\ Define the oscillation of $X$ as $t\to\tau(U)^-$ by $$\text{Osc} = \lim_{t\to\tau(U)^-} \sup_{u,v \in [t,\tau(U))}|X(u)-X(v)|.$$ Then by completeness of $\mathbb{R}^d$, $X(t)$ converges as $t\to\tau(U)^-$ iff $\text{Osc}=0$. Suppose $\liminf_{t\to \tau(U)^-}|X(t)| \le r-1$, $\text{Osc} > 2\sqrt{\epsilon}$ and $\tau(U)\le T$; note that $\text{Osc} > 2\sqrt{\epsilon}$ guarantees, for each $i$, a time $u<\tau(U)$ with $|X(u)-X(\tau_{2i+1})|^2>\epsilon$. Then, $\tau_{2i+2}<\tau(U) \wedge T$ for every $i\ge 1$. Since $\{\tau_{2i+2}<\tau(U)\wedge T\}$ is a decreasing sequence in $i$, by \eqref{eq:tau-bound2} and continuity of probability, $P(\tau_{2i+2} < \tau(U)\wedge T \ \text{for all} \ i\ge 1) =0$. Thus, $$P(\liminf_{t\to \tau(U)^-}|X(t)| \le r-1, \ \text{Osc} > 2\sqrt{\epsilon} \ \text{and} \ \tau(U)\le T)=0.$$ Since $\epsilon,T,r>0$ are arbitrary, taking a sequence $\epsilon_m,r_m,T_m$ with $\epsilon_m \to 0$ and $r_m,T_m\to\infty$ as $m\to\infty$, $$P(\liminf_{t\to\tau(U)^-}|X(t)| <\infty , \ \text{Osc} > 0 \ \text{and} \ \tau(U) <\infty)=0.$$ In other words, if $\liminf_{t\to\tau(U)^-}|X(t)| <\infty$ and $\tau(U)<\infty$, then $\text{Osc}=0$ which implies $X(t)$ converges as $t\to\tau(U)^-$, which is what we needed to show. 
\end{proof} \subsection*{Proof of Lemma \ref{lem:diff-limit}} As in the statement of Lemma \ref{lem:diff-limit}, given $U,F,G,x$ let $X$ denote the solution supplied by Lemma \ref{lem:SDExist} with $X(0)=x$. The goal is to show that if $(X_\epsilon)$ is a QD with characteristics $F,G$ defined on $U$ and $X_\epsilon(0)\to x$ as $\epsilon \to 0$ then $X_\epsilon \smash \stackrel{\text{ld}}{\to} X$ on $U$ as $\epsilon\to 0$. \begin{proof} We will need Theorem 4.1 from Chapter 7 of \cite{ethktz}. First, we define the stopped martingale problem and give an existence and uniqueness result.\\ \noindent\textit{Definition.} In the context of Definition \ref{def:mp}, say that $(P_x)$ solves the stopped martingale problem for $F,G$ on $D$ if $P_x(X(t)=X(t\wedge \tau(D)) \ \text{for all} \ t)=1$ for all $x$ and if \eqref{eq:mp-mg} is a martingale for $f \in C^2(U)$, with $\tau(D)$ in place of $\tau(D_n)$.\\ \noindent\textit{Existence and uniqueness.} Say that a problem is well-posed if it has a unique solution. Suppose $F,G$ satisfy the conditions of Lemma \ref{lem:SDExist}. Then, as in Theorem 13.1 of Chapter 1 in \cite{pinsky}, for domains $D\subset\subset U$ we can define $F_D,G_D$ on $\mathbb{R}^d$ that coincide with $F,G$ on $D$ and are such that (i) the martingale problem for $F_D,G_D$ on $\mathbb{R}^d$ is well-posed, and (ii) the solution for $F_D,G_D$ coincides with the solution for $F,G$ up to time $\tau(D)$, i.e., the distributions of the stopped processes coincide. It follows from (i) and Theorem 6.1 in Chapter 4 of \cite{ethktz} that the stopped martingale problem for $F,G$ on $D$ is well-posed, and from (ii) that its distribution is given by the solution for $F,G$, stopped at time $\tau(D)$.\\ \noindent\textit{Convergence.} We adapt Theorem 4.1 of Chapter 7 in \cite{ethktz} to the present context. Suppose the generalized martingale problem for $F,G$ is well-posed, and fixing $x \in U$ let $X$ denote the process with distribution $P_x$. As explained above, for $D\subset\subset U$ the stopped martingale problem for $F,G$ on $D$ is well-posed, and its unique solution with initial value $x$ is given by $X(\cdot \wedge \tau(D))$. Suppose we have c{\`a}dl{\`a}g~$\mathbb{R}^d$-valued processes $X_n$ and $B_n$ and an $M_d(\mathbb{R})$-valued process $A_n$ such that $A_n(t)-A_n(s)$ is positive semidefinite for $t>s\ge 0$. Let $\mathcal{F}_t^n = \sigma(X_n(s),B_n(s),A_n(s) \colon s\le t)$. Let $\tau(D,n) = \inf\{t \colon X_n(t^-) \notin D \ \text{or} \ X_n(t) \notin D\}$. Suppose that $M_n := X_n-B_n$ and $M_nM_n^{\top} - A_n$ are $\mathcal{F}^n$-local martingales, and that for $D\subset\subset U$ and $T>0$, \begin{enumerate}[noitemsep,label={(\roman*)}] \item $\lim_{n\to\infty} E\left(\sup_{t \le \tau(D,n) \wedge T} |\Delta X_n(t)|^2\right)=0$, \item $\lim_{n\to\infty} E\left(\sup_{t \le \tau(D,n) \wedge T} |\Delta B_n(t)|^2\right)=0$, \item $\lim_{n\to\infty} E\left(\sup_{t \le \tau(D,n) \wedge T} |\Delta A_n(t)|\right)=0$, \item $\sup_{t \le \tau(D,n) \wedge T}|B_n(t) - \int_0^t F(X_n(s))ds| \stackrel{\text{p}}{\to} 0$ and \item $\sup_{t \le \tau(D,n) \wedge T}|A_n(t) - \int_0^t G(X_n(s))ds| \stackrel{\text{p}}{\to} 0$. 
\end{enumerate} If these conditions are satisfied and $X_n(0) \to x$ as $n\to\infty$, then defining the increasing family $$D_r = \{x \in U\colon |x| < r \ \text{and} \ d(x,U^c) > 1/r \}$$ where $|\cdot|$ is any norm and, for a point $y$ and set $A$, $d(y,A) = \inf\{d(y,z)\colon z\in A\}$ with $d$ the Euclidean distance, using the same approach as the proof of Theorem 4.1 of Chapter 7 in \cite{ethktz} it follows that for all but countably many $r$, $X_n(\cdot \wedge \tau(D_r,n))\smash \stackrel{\text{d}}{\to} X(\cdot \wedge \tau(D_r))$. So, taking a strictly increasing sequence $r_i\to \infty$ such that the above convergence holds with $r=r_i$ for each $i$ and letting $D_i = D_{r_i}$, it follows that $X_n\smash \stackrel{\text{ld}}{\to} X$ as $n\to\infty$.\\ \noindent\textit{Application to QD.} We now apply this result to a QD $(X_\epsilon)$. We'll refer to (i)-(v) above as conditions and (i)-(iii) from Definition \ref{def:qd} as assumptions. Since $F,G$ are locally uniformly continuous, assumption (i) implies that $X_\epsilon$ can be replaced with $J_c X_\epsilon$ in assumptions (ii)-(iii). By a standard argument, there exist $a_\epsilon \to 0$ as $\epsilon \to 0$ such that (i)-(iii) are true with $a_\epsilon$ in place of $a$. Let $Z_\epsilon = J_{a_\epsilon}X_\epsilon$. Noting that $\tau(D,\epsilon) < \zeta(X_\epsilon)=\zeta(Z_\epsilon)$, from assumptions (i)-(iii) we find that \begin{enumerate}[noitemsep,label={(\roman*)}] \item $\sup_{t \le \tau(D,\epsilon)\wedge T}|X_\epsilon(t)-Z_\epsilon(t)| \stackrel{\text{p}}{\to} 0$ as $\epsilon \to 0$, \item $\sup_{t \le \tau(D,\epsilon)\wedge T} \left| \, (Z_\epsilon)^p(t) - \int_0^t F(Z_\epsilon(s))ds \, \right| \stackrel{\text{p}}{\to} 0 \ \ \text{as} \ \ \epsilon \to 0$, and \item $\sup_{t \le \tau(D,\epsilon)\wedge T} \left| \, \langle (Z_\epsilon)^m \rangle(t) - \int_0^t G(Z_\epsilon(s))ds \, \right| \stackrel{\text{p}}{\to} 0 \ \ \text{as} \ \ \epsilon \to 0$. \end{enumerate} From now on, properties (i)-(iii) refer to the above points. Let $X$ denote the solution to the generalized martingale problem for $F,G$. If we can show that $Z_\epsilon \smash \stackrel{\text{ld}}{\to} X$ as $\epsilon \to 0$, then property (i) implies $X_\epsilon \smash \stackrel{\text{ld}}{\to} X$. Let $(\epsilon_n)$ be a sequence with $\epsilon_n \to 0$ as $n\to\infty$ and let $X_n = Z_{\epsilon_n}$, $B_n = Z_{\epsilon_n}^p$ and $A_n = \langle Z_{\epsilon_n}^m \rangle$, noting that $M_n:= X_n-B_n= Z_{\epsilon_n}^m$ and $M_nM_n^{\top}-A_n$ are local martingales, as required. Properties (ii) and (iii) are equivalent to conditions (iv) and (v), so we need only establish conditions (i)-(iii). By definition, $|\Delta Z_\epsilon| \le a_\epsilon$ a.s., so since $a_{\epsilon_n} \to 0$ as $n\to\infty$, condition (i) holds. From [I.4.24], $|\Delta Z_\epsilon| \le a_\epsilon$ implies $|\Delta Z_\epsilon^p| \le a_\epsilon$ and $|\Delta Z_\epsilon^m| \le 2a_\epsilon$. $|\Delta Z_\epsilon^p| \le a_\epsilon$ and $a_{\epsilon_n}\to 0$ imply condition (ii). $|\Delta Z_\epsilon^m| \le 2a_\epsilon $ is equivalent to $|\Delta M_n| \le 2 a_{\epsilon_n}$, and condition (iii) is obtained from it, as follows.\\ For a jointly measurable (with respect to the underlying $\sigma$-algebra and time) process $X$, let $^p X$ denote the predictable projection of $X$, which, as defined and proved in [I.2.28], is the unique predictable process that satisfies $^pX(\tau) = E(X(\tau) \mid \mathcal{F}(\tau^-))$ on $\{\tau<\infty\}$, for all predictable $\tau$. Clearly, $^p(\,\cdot\,)$ is linear. 
If $X$ is c{\`a}dl{\`a}g~and $\tau$ is predictable then \begin{align}\label{eq:pp-jump} |(\,^p(\Delta X))(\tau)| \le E(\, | \, (\Delta X)(\tau)| \ \mid \mathcal{F}(\tau^-))\,. \end{align} If $X$ is a (c{\`a}dl{\`a}g) local martingale, then by [I.2.31], $^p(\Delta X)=0$. If $X$ is c{\`a}dl{\`a}g~and predictable, it follows easily from [I.2.4] and [I.2.24] that $^p(\Delta X)=\Delta X$. Since $M_nM_n^{\top}-A_n$ is a local martingale and $A_n$ is c{\`a}dl{\`a}g~and predictable, $^p(\Delta (M_n M_n^{\top} - A_n))=0$ and $\Delta A_n = \,^p(\Delta A_n)$, so $\Delta A_n = \,^p(\Delta (M_nM_n^{\top}))$. Since $M_nM_n^{\top} - [M_n]$ is a local martingale, $^p(\Delta (M_n M_n^{\top})) = \,^p(\Delta [M_n])$, so $\Delta A_n = \,^p (\Delta [M_n])$. From [I.4.47(c)], $\Delta [M_n] = \Delta M_n \Delta (M_n^{\top})$, and using the earlier estimate $|\Delta M_n| \le 2a_{\epsilon_n}$ we get $|\Delta [M_n]| \le 4a_{\epsilon_n}^2$ a.s.; hence if $\tau$ is predictable then using \eqref{eq:pp-jump}, $$|(\,^p(\Delta[M_n]))(\tau)| \le E(|\Delta [M_n](\tau)| \mid \mathcal{F}(\tau^-)) \le 4a_{\epsilon_n}^2.$$ Since $A_n$ is predictable, by [I.2.24] there is a sequence $(T_m)$ of predictable times such that $\{t\colon \Delta A_n(t)\ne 0\} \subset \bigcup_m T_m$. Using the above display with $\tau=T_m$, it follows that a.s.~$|\Delta A_n| \le 4a_{\epsilon_n}^2$. Since $a_{\epsilon_n} \to 0$ as $n\to\infty$, condition (iii) holds. \end{proof}
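As a purely illustrative companion to the convergence statement (and not taken from the paper), note that an Euler scheme is a standard example of a sequence satisfying conditions (i)-(v), with $B_n$ the accumulated drift and $A_n$ the accumulated squared diffusion; as the step size shrinks it converges in distribution to the diffusion $dX = F(X)dt + G(X)dB$. A minimal sketch:
\begin{verbatim}
# Minimal sketch (illustration only): Euler-Maruyama endpoints for the
# 1-d Ornstein-Uhlenbeck diffusion dX = -X dt + dB on U = R, whose exact
# mean and variance at T = 1 started from X(0) = 1 are e^{-1} ~ 0.368
# and (1 - e^{-2})/2 ~ 0.432.
import numpy as np

def euler_endpoints(F, G, x0, T, h, n_paths, rng):
    X = np.full(n_paths, float(x0))
    for _ in range(int(T / h)):
        X = X + F(X) * h + G(X) * np.sqrt(h) * rng.normal(size=n_paths)
    return X

rng = np.random.default_rng(1)
F = lambda x: -x
G = lambda x: np.ones_like(x)
for h in (0.1, 0.01, 0.001):
    X_T = euler_endpoints(F, G, 1.0, 1.0, h, 50000, rng)
    print(h, X_T.mean(), X_T.var())
\end{verbatim}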
{ "timestamp": "2021-01-22T02:23:38", "yymm": "2101", "arxiv_id": "2101.08777", "language": "en", "url": "https://arxiv.org/abs/2101.08777", "abstract": "The bifurcation theory of ordinary differential equations (ODEs), and its application to deterministic population models, are by now well established. In this article, we begin to develop a complementary theory for diffusion-like perturbations of dynamical systems, with the goal of understanding the space and time scales of fluctuations near bifurcation points of the underlying deterministic system. To do so we describe the limit processes that arise in the vicinity of the bifurcation point. In the present article we focus on the one-dimensional case.", "subjects": "Dynamical Systems (math.DS); Probability (math.PR)", "title": "Limit Processes and Bifurcation Theory of Quasi-Diffusive Perturbations", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759649262345, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405697768485 }
https://arxiv.org/abs/0910.4647
Elementary proof techniques for the maximum number of islands
Islands are combinatorial objects that can be intuitively defined on a board consisting of a finite number of cells. Based on the neighbor relation of the cells, it is a fundamental property that two islands are either containing or disjoint. Recently, numerous extremal questions have been answered using different methods. We show elementary techniques unifying these approaches. Our building parts are based on rooted binary trees and discrete geometry. Among other things, we show the maximum cardinality of islands on a toroidal board and in a hypercube. We also strengthen a previous result by rarefying the neighborhood relation.
\section{Introduction, preliminaries}\label{intro} We start with an intuitive notion. Let a rectangular $m\times n$ board be given. We associate a number (real or integer) to each cell of the board. We can think of this number as a height above sea level. A rectangular part of the board is called a {\it rectangular island} if and only if there is a possible water level such that the rectangle is an island in the usual sense. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{island.eps} \end{center} \caption{Rectangular landscape with heights} \label{island} \end{figure} The notion of an island turned up recently in information theory. The characterization of the lexicographical length sequences of binary maximal instantaneous codes in \cite{FS} uses the notion of {\it full segments}, which are one-dimensional islands. Several generalizations led to interesting combinatorial problems. G. Cz\'edli discovered a connection between islands and weakly independent subsets of finite distributive lattices. He determined the maximum number of rectangular islands on a rectangular board \cite{Cz}. Cz\'edli's method is based on weak bases of a finite distributive lattice \cite{CzHSch}. G. Pluh\'ar \cite{P} gave upper and lower bounds in higher dimensions. E. K. Horv\'ath, Z. N\'emeth and G. Pluh\'ar \cite{HNP} gave upper and lower bounds for the maximum number of triangular islands on a triangular board. In \cite{L} the minimal size of a maximal system of islands and related problems are presented. In the present paper, we list related problems with exact formulae. In each case, we present the proof which we believe to be the shortest. In full generality, we denote the set of all cells of some board by $\mathcal C$. A {\it height function} is a mapping $h: \mathcal C \to \mathbb R$, $c \mapsto h(c)$. We have to specify a neighborhood relation on the cells. If not otherwise stated, two cells are {\it neighbors} if they share a point. Let $R$ be a subset of cells. The neighbors of $R$ can be defined naturally as the set of cells not in $R$ but having a neighbor in $R$. A connected subset $R$ of cells is called an {\it island} if the minimum height in $R$ is greater than the maximum height on the neighbors of $R$. In our applications, we define the islands to have a geometric shape, therefore the definition of connectivity does not play a role here. If $h$ is a height function, then we denote the induced set of islands by ${\mathcal I}(h)$. Let us consider rectangular islands. We say that rectangles $R$ and $S$ are \it far from each other\rm, if no cell of $R$ is the neighbor of any cell of $S$. We denote by $P(\mathcal C)$ the power set of $\mathcal C$, that is the set of all subsets of $\mathcal C$. The following statement in a different form was proved in \cite{Cz}. \begin{lemma} \label{egy} Let $\mathcal C$ be the set of all cells of some board, and let $\mathcal W$ denote the entire board as an island. Let ${\mathcal I}$ be a set of islands. The following two conditions are equivalent: \item{\rm(i)} there exists a mapping $h: \mathcal C \to \mathbb R$, $c \mapsto h(c)$ such that ${\mathcal I} = {\mathcal I}(h)$. \item{\rm (ii)} $\mathcal W \in {\mathcal I}$, and for any $R_1\neq R_2 \in {\mathcal I}$ either $R_1\subset R_2$, or $R_2\subset R_1$, or $R_1$ and $R_2$ are far from each other. \end{lemma} A subset of $P(\mathcal C)$ satisfying the equivalent conditions of Lemma~\ref{egy} is called \it a system of islands\rm. 
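Condition (ii) of Lemma~\ref{egy} can be tested mechanically. The following sketch is our own illustration (the coordinate convention is hypothetical: a rectangle is stored as (top, bottom, left, right) with inclusive cell indices); two rectangles are far exactly when at least one empty row strip or one empty column strip separates them:
\begin{verbatim}
# Illustration: checking condition (ii) of Lemma 1 for a family of
# rectangles given as (top, bottom, left, right), inclusive cell indices.
def contains(r, s):
    return r[0] <= s[0] and s[1] <= r[1] and r[2] <= s[2] and s[3] <= r[3]

def far(r, s):
    # No cell of r touches a cell of s, where touching = sharing a point.
    return (r[1] + 1 < s[0] or s[1] + 1 < r[0] or
            r[3] + 1 < s[2] or s[3] + 1 < r[2])

def is_island_system(rects, board):
    if board not in rects:
        return False
    return all(contains(r, s) or contains(s, r) or far(r, s)
               for i, r in enumerate(rects) for s in rects[i + 1:])

# A 3 x 5 board with one maximal island and two islands far from each other:
board = (0, 2, 0, 4)
print(is_island_system([board, (0, 2, 0, 2), (0, 0, 0, 0), (2, 2, 1, 2)],
                       board))   # True
\end{verbatim}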
The set of maximal elements of $\mathcal I\setminus \{\mathcal W\}$ is denoted by $\max \mathcal I$. \section{Methods}\label{meth} We list three effective proof techniques for island problems. We give a detailed demonstration of the latter two; the first and original method can be read in \cite{Cz}. We recall the following \begin{lemma}[\cite{Cz}] The maximum number of rectangular islands of an $m \times n $ rectangular board is $$f(m,n)= \left[\frac{(m+1)(n+1)}{2}\right]-1.$$ \end{lemma} Let $\mathcal C$ be the set of unit squares of the $m \times n $ board. The proof in \cite{Cz} exploits that the islands form a weakly independent set in the distributive lattice $P(\mathcal C)$. In a distributive lattice, maximal weakly independent subsets are called \it weak bases\rm . By the main theorem of \cite{CzHSch}, any two weak bases have the same cardinality. We ask the reader to consult \cite{Cz} for the details. For the second method, we need basic graph theory \cite{bondy}. To be self-contained, we recall the definitions that are crucial for our purposes. A graph without a cycle is called a {\it forest}. Any component of a forest is a connected cycle-free graph, that is, a {\it tree}. A forest with a distinguished node (root) in each component is called a {\it rooted forest}. For any node $u$, there is a unique path from $u$ to the root of its component. If $u$ is not a root, then this path has more than one vertex. Let $u^+$ be the node following $u$ on the path to the root. It is called the {\it father of $u$}. If $v=u^+$, then we say that $u$ is a son of $v$. If $v$ is on the path from $u$ to a root, then $v$ is an {\it ancestor} of $u$ and $u$ is a {\it descendant} of $v$. Any non-root vertex has exactly one father, but a father might have several sons. The descendants of $v$ are the sons of $v$, the sons of sons (grandsons), and so on. For any $v$ the vertex $v$ and its descendants span $T_v$, a {\it rooted subtree}. Therefore, a rooted forest can be described recursively: it contains a set of roots; each root has a set of sons; and there are vertex disjoint rooted trees rooted at the sons. A vertex is a {\it leaf} if and only if it has no son. A rooted tree is {\it binary} if and only if any non-leaf node has two sons. Consider any base set. In the present paper, it is the set $\mathcal C$ of all cells. Fix certain shapes (e.g.\ rectangles) to be allowed for islands, and a function $h$ defined on $\mathcal C$. Let $\mathcal I$ be the set of islands of the fixed shape. \begin{fact} Let $S$ be a subset of $\mathcal C$. The maximal islands contained in $S$ are disjoint. \end{fact} Based on this observation, we define a rooted forest $T_0({\mathcal I})$ describing a hierarchy of the islands. Let the maximal islands $R_1, R_2,\ldots, R_t$ of $\mathcal I$ be the roots of the forest. The islands contained in $R$ form $P(R)$ ($R\in P(R)$), the part of the partition connected to $R$. The maximal islands of $P(R)-\{R\}$ are the sons of $R$. The description of the rooted forest is completed by iterating the above step. \begin{remark} In the specific cases we consider, the base set is always an island itself, therefore it is the unique maximal island. In this case, the rooted forest is a rooted tree. \end{remark} So $T_0({\mathcal I})$ denotes the rooted forest just defined based on $\mathcal I$. The islands are exactly the vertices of $T_0({\mathcal I})$, hence the number of islands is $|V(T_0({\mathcal I}))|$. The leaves of $T_0({\mathcal I})$ are the minimal islands. 
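For small boards, ${\mathcal I}(h)$, i.e.\ the vertex set of $T_0({\mathcal I})$, can be computed by brute force. The sketch below is our own illustration (8-neighborhood, as above): it scans all rectangles of a height matrix and keeps those whose minimum exceeds the maximum over the bordering frame of cells:
\begin{verbatim}
# Illustration: brute-force extraction of the rectangular islands I(h)
# of a small height matrix h (neighbors share at least a corner point).
import itertools
import numpy as np

def rectangular_islands(h):
    m, n = h.shape
    islands = []
    for r1, r2 in itertools.combinations_with_replacement(range(m), 2):
        for c1, c2 in itertools.combinations_with_replacement(range(n), 2):
            inner = np.zeros((m, n), dtype=bool)
            inner[r1:r2 + 1, c1:c2 + 1] = True
            frame = np.zeros((m, n), dtype=bool)
            frame[max(r1 - 1, 0):r2 + 2, max(c1 - 1, 0):c2 + 2] = True
            frame &= ~inner
            if not frame.any() or h[inner].min() > h[frame].max():
                islands.append((r1, r2, c1, c2))
    return islands

h = np.array([[2, 1, 3],
              [1, 1, 1],
              [4, 1, 2]])
print(rectangular_islands(h))   # the whole board and the four corner peaks
\end{verbatim}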
We can visualize this description. The function $h$ can be viewed as a height function, describing a geographic part of Earth. We start to pour water into this place. We see the birth of a few islands, these are the roots. Often water level zero corresponds to the case when we see the first island: the whole area we considered. As the water level increases, we see islands divide into smaller islands (sons) or disappear (leaves of our forest). Sometimes an island/vertex has only one son. This means that as the water level increases, the island merely shrinks. In this case, it will be useful to modify our rooted forest. We interpret the decline of the island as a division into a smaller island (its only son) and a dummy part. This dummy part of the island will be a second son of the shrinking vertex, a leaf. Let $T({\mathcal I})$ be the rooted forest we obtain this way. In $T({\mathcal I})$ any non-leaf vertex has at least two sons. The number of islands is $|V(T({\mathcal I}))|-|D|$, where $D$ is the set of dummy nodes added to $T_0({\mathcal I})$. The leaves of $T({\mathcal I})$ are the minimal islands and the dummy islands. We demonstrate the above notation in Figure~\ref{tree}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{tree2.eps} \end{center} \caption{Hasse diagram of islands with respect to containment} \label{tree} \end{figure} In order to bound the number of islands, the following lemma (folklore, or an easy exercise on rooted trees) is very useful. \begin{lemma} \label{ketto} \item{(i)} Let $T$ be a binary tree with $\ell$ leaves. Then the number of vertices of $T$ depends only on $\ell$, namely $|V|=2\ell-1$. \item{(ii)} Let $T$ be a rooted tree such that any non-leaf node has at least two sons. Let $\ell$ be the number of leaves in $T$. Then $|V|\leq 2\ell -1$. \end{lemma} Our simple strategy is the following: if we know how to express the number of islands by the number of vertices and dummy nodes, then we apply Lemma~\ref{ketto}. \begin{proof}[A proof example] Let $B_{m,n}$ denote the set of $mn$ unit squares of the $m\times n$ rectangular board. Let the island shape be rectangular. We call the vertices of the unit squares grid points. Let $\mathcal I$ be a system of islands with $s$ minimal islands and $d$ dummy islands. Any island covers at least four grid points. In the case of a shrinking island there is a loss of at least two grid points. The set of these lost grid points can be assigned to the corresponding dummy node. We assign grid points to the leaves of $T({\mathcal I})$: four points to the minimal islands, two points to the dummy leaves. These assigned sets of grid points are disjoint in the set of all $(m+1)(n+1)$ grid points. Therefore, $4s+2d\leq (m+1)(n+1)$. The number of leaves of $T({\mathcal I})$ is $\ell=s+d$. By Lemma~\ref{ketto}, the number of islands is $|V|-d\leq (2\ell-1)-d=2s+d-1\leq \frac{1}{2}(m+1)(n+1)-1$. \end{proof} This proof is very suggestive, clear and short. Still, it needed some technical preparation. As it turns out, we can make the proof even more elementary. The iterative description of $T_0({\mathcal I})$ or $T({\mathcal I})$ suggests a recursive proof technique: mathematical induction. Actually, all known upper bounds on the number of islands \cite{Cz, HNP, P} can be proved by induction. \begin{proof}[A proof example] Let $f(B_{m,n})=f(m,n)$ be the maximum number of islands on the $m\times n$ rectangular board. We claim that $f(B_{m,n})\leq \frac{1}{2}(m+1)(n+1)-1$. 
Let $\|S\|$ denote the number of grid points covered by a sub-board $S$; in particular, $\|B_{m,n}\|=(m+1)(n+1)$. For pairwise far sub-boards $S_1, S_2, \ldots, S_k$ of $B_{m,n}$ we know that $\|B_{m,n}\|\geq \|S_1\|+\|S_2\|+\ldots +\|S_k\|$ holds. We prove the claim by induction. The case of small boards can be easily checked. Let $\mathcal I^*$ be a system of islands realizing the number $f(m,n)$. \begin{multline*} f(m,n)= 1+\sum_{R\in{\max \mathcal I^*}} f(R)\leq 1+\sum_{R\in{\max \mathcal I^*}}\left(\frac{1}{2}\|R\|-1\right)=\\ = 1+\frac{1}{2}\sum_{R\in{\max \mathcal I^*}}\|R\|-|{\max \mathcal I^*}| \leq \frac{1}{2}\|B_{m,n}\|+1-|{\max \mathcal I^*}|. \end{multline*} If $|{\max \mathcal I^*}|\geq 2$, then the induction is complete. If $|{\max \mathcal I^*}|=1$, then one needs a minor technical remark to finish the proof. \end{proof} \section{Applications} \label{app} \subsection{Peninsulas} We show that the maximum cardinality of rectangular islands in the $ m \times n $ rectangular board can be attained such that each island reaches at least one side of the board. This is a slight strengthening of the result in \cite{Cz}. Also, the proof gives a recursive algorithm constructing a system of maximum cardinality. For brevity, we call a rectangular island $P$ a {\it peninsula} if it reaches at least one side of the board. We denote the maximum number of peninsulas in an $m \times n$ board by $p(m,n)$. \begin{thm}\label{pmn} In a rectangular $m\times n$ board, the maximum number of rectangular islands is equal to the maximum number of peninsulas, that is $p(m,n)=f(m,n)$. \end{thm} {\it Proof.} Since peninsulas are islands, $p(m,n)\leq f(m,n)$. To prove $p(m,n)\geq f(m,n)$, we show by induction on the number of cells that the maximum number of peninsulas reaching the eastern side of the board is at least $f(m,n)$. We use the notation $p'(m,n)$ for the maximum number of peninsulas reaching the eastern side of the board. For $m,n\in \{1,2\}$, the statement is clear. To see the induction step, notice the following: Let the first row of the board be a peninsula. It contains $m$ different peninsulas, obtained by deleting the squares one by one from west; together with the whole board this gives $$p'(m,n)\geq p'(m,n-2)+m+1=\left[\frac{m(n-2)+m+n-2-1}{2}\right] +m+1= \left[\frac{mn+m+n-1}{2}\right].\qed $$ \subsection{Cylindric board, rectangular islands}\label{cyli1} In this section, we put a square grid on the surface of a cylinder with height $m$ and circumference of the base circle $n$. We get the same object by identifying the sides of length $m$ of an $m\times n$ rectangle. We denote by $c_1(m,n)$ the maximum number of rectangular islands on this cylinder, supposing that the whole cylinder is an island, but no other cylinders are islands. \begin{thm}\label{c1mn} If $n\geq 2$, then $c_1(m,n)=\left[\frac{(m+1)n}{2}\right].$ \end{thm} {\it Proof.} By deleting a column of the cylinder, we get an $m\times (n-1)$ rectangle. Therefore, $$c_1(m,n)\geq f(m, n-1)+1=\left[\frac{(m+1)n}{2}\right].$$ Let $\mathcal I^*$ be a set of rectangular islands of maximum cardinality. Then \begin{multline*} c_1(m,n)= 1+\sum_{R\in{\max \mathcal I^*}} f(R)= 1+\sum_{R\in{\max\mathcal I^*}}\left( \left[\frac{(u+1)(v+1)}{2} \right]-1 \right)=\\ =1 - | \max ({\mathcal I^*})| +\sum_{R\in \max \mathcal I^*} \left[\frac{(u+1)(v+1)}{2} \right]\le\\ \le 1 - 1 + \left[\frac{(m+1)n}{2} \right] = \left[\frac{(m+1)n}{2} \right] . 
\end{multline*} Here $u\times v$ denotes the size of the maximal rectangle $R$ in the sums. We applied that $-| \max ({\mathcal I^*})|$ can be bounded above by $-1$ if $| \max ({\mathcal I^*})| \geq 1$; and also that $$\sum_{R\in \max \mathcal I^*} \left[\frac{(u+1)(v+1)}{2} \right]\le \left[\frac{(m+1)n}{2} \right].$$ To see this, we magnify the maximal rectangles by half a unit, see Figure~\ref{hengernagyit}. The sum of the area of the magnified maximal rectangles is at most the area of the magnified cylinder. \qed \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{cyl1.eps} \caption{Magnified rectangles of a cylindric board} \label{hengernagyit} \end{center} \end{figure} \subsection{Cylindric board, cylindric and rectangular islands} Living on a cylindric board, it is natural to consider cylindric islands as well. In this section, we allow two shapes for the islands, cylindric and rectangular. We denote by $c_2(m,n)$ the maximum cardinality of such a system of islands on the cylindric $m\times n$ board. \begin{thm}\label{c2mn} If $n\geq 2$, then $c_2(m,n)=\left[\frac{(m+1)n}{2}\right]+\left[\frac{(m-1)}{2}\right].$ \end{thm} {\it Proof.} We show by induction on $m$ that $c_2(m,n)\geq \left[\frac{(m+1)n}{2}\right]+\left[\frac{m-1}{2}\right]$. Notice that $c_2(1,n)=n$ and $c_2(2,n)\geq f(2,n-1)+1=\left[\frac{3n}{2}\right].$ Let $m>2$. For the induction step, we remove a cylinder of height one such that a cylindric board of size $(m-2)\times n$ remains. Therefore, $$c_2(m,n)\geq c_2(m-2,n)+n+1= \left[\frac{(m-1)n}{2}\right]+\left[\frac{m-3}{2}\right]+n+1=\left[\frac{(m+1)n}{2}\right]+\left[\frac{m-1}{2}\right].$$ Now we show that $c_2(m,n)\leq \left[\frac{(m+1)n}{2}\right]+\left[\frac{m-1}{2}\right]$. Assume a maximum cardinality system is given. By Theorem~\ref{c1mn}, it must contain a proper cylindric island $Y$, since otherwise its cardinality would be at most $c_1(m,n)$ and the bound holds. Then $Y$ is included in a maximal cylindric island $M$. Then $M$ is bordered with water from one side, and in the maximal case the width of this water is one. Also, the remaining part of the board is a maximal cylindric island. Therefore, there exist $a,b\in \mathbb N_0$ in such a way that $a+b+1=m$ and \begin{multline*} c_2(m,n)= c_2(a,n)+c_2(b,n)+1 = \left[\frac{(a+1)n}{2}\right]+\left[\frac{a-1}{2}\right]+ \left[\frac{(b+1)n}{2}\right]+\left[\frac{b-1}{2}\right]+ 1 \leq \\ \leq \left[\frac{(a+b+1+1)n}{2}\right]+\left[\frac{a+b-3+1}{2}\right] +1 =\left[\frac{(m+1)n}{2}\right]+\left[\frac{m-1}{2}\right]. \qed \end{multline*} \subsection{Toroidal board, rectangular islands} With respect to the neighborhood relation, the most symmetric case is the toroidal board. It is not a surprise that we get the most compact result of all. Assume there is an $m \times n$ board on the torus, which is also known as $C_m\times C_n$. The island shape is fixed as rectangular, but we consider the whole torus as an island. We denote by $t(m,n)$ the maximum number of rectangular islands on the torus. \begin{thm}\label{tmn} If $m, n\geq 2$, then $t(m,n)=\left[\frac{mn}{2}\right].$ \end{thm} {\it Proof.} We can cut off a horizontal and a vertical line to get an $(m-1)\times (n-1)$ rectangle. Therefore, $$t(m,n)\geq f(m-1,n-1)+1=\left[\frac{mn}{2}\right]. $$ On the other hand, we denote again by $\mathcal I^*$ a set of rectangular islands realizing the maximum cardinality. 
\begin{multline*} t(m,n)= 1+\sum_{R\in \max \mathcal I^*} f(R) = 1+\sum_{R\in \max \mathcal I^*} \left( \left[\frac{(u+1)(v+1)}{2} \right]-1 \right) =\\ = 1 - | \max ({\mathcal I^*})| +\sum_{R\in \max \mathcal I^*} \left[\frac{(u+1)(v+1)}{2} \right] \leq 1 - 1 + \left[\frac{mn}{2}\right] = \left[\frac{mn}{2}\right]. \end{multline*} As in Section~\ref{cyli1}, we applied that $-| \max ({\mathcal I^*})|$ can be bounded above by $-1$ if $| \max ({\mathcal I^*})| \geq 1$; and also that $$\sum_{R\in \max \mathcal I^*} \left[\frac{(u+1)(v+1)}{2} \right]\leq \left[\frac{mn}{2}\right], $$ by counting the grid points covered by maximal islands. \qed \subsection{Heuristic} The results of this section are based on counting grid points. Therefore, it is convenient to modify the parameters in the above cases such that the number of grid points of the board is the same. This yields: $$p(m-1,n-1)=f(m-1,n-1)=c_1(m-1,n)-1=t(m,n)-1.$$ We could drop the $-1$'s if the islands were induced by the grid points instead of the squares of the board. We could short-cut to this result as follows: The most restricted set of islands is that of the eastern peninsulas and the broadest is the toroidal definition. Therefore, $p(m-1,n-1)\le f(m-1,n-1)\le c_1(m-1,n)\le t(m,n)$. Then we observe that a peninsula construction coincides with the toroidal upper bound. That is, equality must hold everywhere. \section{Changing the neighborhood relation} So far, two cells were neighbors if they had a point in common. Therefore, in the corresponding neighborhood graph, the typical degree was 8. It is somewhat natural to rarefy this structure such that the neighborhood graph is also a grid. In this case, two cells are neighbors if and only if they have a side in common. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{szom.eps} \caption{Common-side neighborhood} \label{szomi} \end{center} \end{figure} Let a rectangular board of size $m \times n$ be given. We denote the maximum cardinality of a system of rectangular islands by $\hat{f} (m,n)$. We denote the set of rectangular islands induced by a height function $h$ by ${\hat{\mathcal I}}(h)$. \begin{lemma} Let $\mathcal C$ be the set of cells of an $m \times n$ board, and let $\mathcal W$ denote the entire board as an island. Let ${\mathcal I}$ be a subset of\/ $P(\mathcal C)$. Two cells are neighbors if and only if they have a side in common. The following two conditions are equivalent: \item{\rm(i)} there exists a mapping $h: \mathcal C \to \mathbb R$, $c \mapsto h(c)$ such that ${\mathcal I} = {\hat{\mathcal I}}(h)$. \item{\rm (ii)} $\mathcal W \in {\mathcal I}$, and for any $I_1\neq I_2 \in {\mathcal I}$ either $I_1\subset I_2$, or $I_2\subset I_1$, or $I_1$ and $I_2$ have zero or one point in common. \end{lemma} Despite the rarefied structure, surprisingly, the maximum value has not changed. \begin{thm}\label{hatf} $\hat{f} (m,n)=f(m,n)$. \end{thm} {\it Proof.} Clearly $\hat{f} (m,n)\geq f(m,n)$. We show $\hat{f} (m,n)\leq f(m,n)$ via induction on $mn$. If $m,n\in \{1,2\}$, then the statement holds. We denote by $\mathcal I^*$ a maximum cardinality system of islands. For the induction step, we magnify the members of $\max \mathcal I^*$ as shown in Figure~\ref{szu}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{nagyit.eps} \caption{Magnification of the maximal islands} \label{szu} \end{center} \end{figure} Let the side lengths of a rectangle $R$ be $u$ and $v$. We define $\mu(R)=\mu(u,v):=(u+1)(v+1)-2$, which is the area of the magnified $R$. 
The induction step goes as follows: \begin{multline*} \hat{f}(m,n) = 1+\sum_{R\in \max \mathcal I^*} \hat{f}(R)= 1+\sum_{R\in \max \mathcal I^*} \left( \left[\frac{(u+1)(v+1)}{2}\right]-1 \right) = \\ = 1 + \sum_{R\in \max \mathcal I^*} \left( \left[\frac{\mu(u,v)}{2}\right]\right) \leq 1 + \left[ \frac{\mu(\mathcal C)}{2} \right] -1. \end{multline*} In the last step we applied that $$\sum_{R\in \max \mathcal I^*} {\mu(u,v)} \leq \mu (\mathcal C) -2, $$ since the magnified maximal islands do not overlap, and there is an area of size at least 2 which is not covered by the magnified maximal islands. \qed \section{Islands in hypercubes} We give an exact formula for the maximum number of hypercubic islands in a big hypercube. The board consists of all vertices of a hypercube, or in other words the elements of a Boolean algebra $\mathcal A=\{0,1\}^n$. Two cells are neighbors if their Hamming distance is 1. We denote the maximum number of islands in $\mathcal A=\{0,1\}^n$ by $b(n)$. \begin{thm}\label{bn} $b(n)= 1 + 2^{n-1}.$ \end{thm} {\it Proof.} Consider the vertices with an odd number of 1's. They correspond to independent cellular islands. Therefore, $b(n)\geq 1+2^{n-1}$, if we consider the whole space as an island. We prove the opposite direction by induction on $n$. For $n=1$ the statement is easy to check. For $n\geq 2$, we cut the hypercube into two half-hypercubes of size $2^{n-1}$. If one of them is an island, then the other part cannot contain an island. If neither of them is an island, then by the induction hypothesis, in both half-hypercubes, the maximum cardinality of a system of islands is at most $2^{n-2}$. This implies the claim: $b(n)\leq 1+2^{n-1}$. \qed \section*{Epilogue} The inductive argument worked easily when we found a row or column containing no island. This phenomenon helped us also when we traced back the toroidal case to the planar one. Let an empty row or column be called a {\it blast}. The maximum cardinality of a blast-free system of islands can be of interest. (Finding blast-free domino tilings of a rectangular board is a classical Olympiad question.) As this maximum is strongly related to the area uncovered by the maximal islands, it is tempting to ask the following \begin{prob} Let us consider the $m\times n$ rectangular, cylindric or toroidal board. What is the minimum of the uncovered area in a blast-free configuration of maximal islands? \end{prob} We dare to conjecture $m+n+1$ in the plane, $3m+2n-7$ on the cylinder and $4m+2n-9$ on the torus, where $m\le n$. We may assume the maximal islands to be on the same level, height one say. This is the first level also in the rooted tree defined in Section~\ref{meth}. We imagine the islands corresponding to vertices of the second level of the rooted tree to have height two. This building process can be continued down the rooted tree. In this way, we build a characteristic example of a class of island systems corresponding to the same rooted tree. This can also be formulated in the language of the height function. In this sense, the previous question is posed about the section of a landscape at height one. We may require the blast-free property at each level. \begin{prob} Let $H=\max_{c\in \mathcal C} h(c)$. For which height functions $h$ are the sections blast-free at each height $1,2,\dots, H$? \end{prob} Another natural question is the following \begin{prob} Characterize the maximum cardinality island systems on any fixed board. 
\end{prob} The philosophy of Section~\ref{app} can be applied to grid-like drawings of orientable surfaces of higher genus as well as non-orientable surfaces. We plan to report on these problems in a forthcoming paper.
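Finally, for experimentation, the closed formulas proved in this paper fit in a few lines (a convenience sketch of ours; the floor brackets $[\,\cdot\,]$ become integer division):
\begin{verbatim}
# Convenience sketch: the exact formulas of this paper.
def f(m, n):  return (m + 1) * (n + 1) // 2 - 1       # rectangular board
def c1(m, n): return (m + 1) * n // 2                 # cylinder, rectangles
def c2(m, n): return (m + 1) * n // 2 + (m - 1) // 2  # cylinder, + cylindric
def t(m, n):  return m * n // 2                       # torus
def b(n):     return 1 + 2 ** (n - 1)                 # hypercube

# The identity from the Heuristic subsection, boards with equal grid counts:
m, n = 5, 7
assert f(m - 1, n - 1) == c1(m - 1, n) - 1 == t(m, n) - 1
\end{verbatim}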
{ "timestamp": "2009-10-24T14:35:07", "yymm": "0910", "arxiv_id": "0910.4647", "language": "en", "url": "https://arxiv.org/abs/0910.4647", "abstract": "Islands are combinatorial objects that can be intuitively defined on a board consisting of a finite number of cells. Based on the neighbor relation of the cells, it is a fundamental property that two islands are either containing or disjoint. Recently, numerous extremal questions have been answered using different methods. We show elementary techniques unifying these approaches. Our building parts are based on rooted binary trees and discrete geometry.Among other things, we show the maximum cardinality of islands on a toroidal board and in a hypercube. We also strengthen a previous result by rarefying the neighborhood relation.", "subjects": "Combinatorics (math.CO)", "title": "Elementary proof techniques for the maximum number of islands", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759638081522, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405689698801 }
https://arxiv.org/abs/1601.06330
Novel PT-invariant Kink and Pulse Solutions For a Large Number of Real Nonlinear Equations
For a large number of real nonlinear equations, either continuous or discrete, integrable or nonintegrable, uncoupled or coupled, we show that whenever a real nonlinear equation admits kink solutions in terms of $\tanh \beta x$, where $\beta$ is the inverse of the kink width, it also admits solutions in terms of the PT-invariant combinations $\tanh 2\beta x \pm i \sech 2 \beta x$, i.e. the width of the PT-invariant kink is half that of the real kink solution. We show that both the kink and the PT-invariant kink are linearly stable and obtain expressions for the zero mode in the case of several PT-invariant kink solutions. Further, for a number of real nonlinear equations we show that whenever a nonlinear equation admits periodic kink solutions in terms of $\sn(x,m)$, it also admits periodic solutions in terms of the PT-invariant combinations $\sn(x,m) \pm i \cn(x,m)$ as well as $\sn(x,m)\pm i \dn(x,m)$. Finally, for coupled equations we show that one can not only have complex PT-invariant solutions with PT eigenvalue $+1$ or $-1$ in both the fields but also solutions with PT eigenvalue $+1$ in one field and $-1$ in the other field.
\section{Introduction} Nonlinear equations are playing an increasingly important role in several areas of science in general and physics in particular \cite{book}. One of the major problems with these equations is the lack of a superposition principle. In particular, even if one can find two solutions, say $\phi_1$ and $\phi_2$, of a given nonlinear equation, unlike the linear case, a linear combination of $\phi_1$ and $\phi_2$ is usually not a solution of that nonlinear equation. Thus, any general results about the existence of solutions to nonlinear equations would be invaluable. In this context it is worth recalling that some time ago we \cite{ks1,ks2} showed (through a number of examples) that if a nonlinear equation admits periodic solutions in terms of Jacobi elliptic functions ${\rm dn}(x,m)$ and ${\rm cn}(x,m)$, then it will also admit solutions in terms of ${\rm dn}(x,m) \pm \sqrt{m} {\rm cn}(x,m)$, where $m$ is the modulus of the elliptic function \cite{as}. Further, in the same papers \cite{ks1,ks2}, we also showed (again through several examples) that if a nonlinear equation admits solutions in terms of ${\rm dn}^2(x,m)$, then it will also admit solutions in terms of ${\rm dn}^2(x,m) \pm {\rm cn}(x,m) {\rm dn}(x,m)$. The purpose of this paper is to propose general results about the existence of new solutions to real nonlinear equations, integrable or nonintegrable, continuous or discrete, through the idea of parity-time reversal or PT symmetry. It may be noted here that in the last 15 years or so the idea of PT symmetry \cite{ben} has given us new insights. In quantum mechanics it has been shown that even if a Hamiltonian is not Hermitian but is PT-invariant, its energy eigenvalues are still real in case the PT symmetry is not broken spontaneously. Further, there has been a tremendous growth in the number of studies of PT-invariant open systems with balanced loss and gain \cite{sch,ben1,peng}. In particular, many researchers have obtained soliton solutions which have been shown to be stable within a certain parameter range \cite{mak,pan,jes}. It is worth specifying what exactly we mean by $P$ and $T$ and hence the PT symmetry. By $P$ one means parity symmetry, i.e. $x \rightarrow -x$, $t \rightarrow t$, while by $T$ one means time-reversal symmetry, i.e. $t \rightarrow -t$, $i \rightarrow -i$, $x \rightarrow x$. Thus by the combined PT symmetry we mean $x \rightarrow -x$, $t \rightarrow -t$, $i \rightarrow -i$. In this context, recently we have highlighted one more novel aspect of PT symmetry. Specifically, we obtained new PT-invariant solutions of several real nonlinear equations with PT-eigenvalue $+1$. In particular, we showed \cite{ks} that if a real nonlinear equation admits soliton solutions in terms of ${\rm sech} x$ then it also admits PT-invariant solutions ${\rm sech} x \pm i \tanh x$ with PT-eigenvalue $+1$. We also showed that if a real nonlinear equation admits solutions in terms of ${\rm sech}^2 x$ then it also admits PT-invariant solutions ${\rm sech}^2 x \pm i {\rm sech} x \tanh x$ with PT-eigenvalue $+1$. In addition, we considered the periodic generalization of these results. It is worth pointing out that in all these cases the PT-invariant combinations (such as ${\rm sech} x \pm i \tanh x$) are eigenfunctions of the PT operator with eigenvalue $+1$. 
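The action of PT on such combinations is easy to verify symbolically; the sketch below (our illustration) implements $(PT f)(x) = \overline{f(-x)}$ and confirms the eigenvalue $+1$ for ${\rm sech}\, x + i\tanh x$, as well as the eigenvalue $-1$ for the combination $\tanh x + i\,{\rm sech}\, x$ that will play the central role below:
\begin{verbatim}
# Symbolic check (illustration): PT acts on a function f(x) by x -> -x
# combined with complex conjugation, (PT f)(x) = conjugate(f(-x)).
import sympy as sp

x = sp.symbols('x', real=True)
PT = lambda f: sp.conjugate(f.subs(x, -x))

f = sp.sech(x) + sp.I * sp.tanh(x)   # PT eigenvalue +1
g = sp.tanh(x) + sp.I * sp.sech(x)   # PT eigenvalue -1
print(sp.simplify(PT(f) - f))        # 0
print(sp.simplify(PT(g) + g))        # 0
\end{verbatim}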
It is then natural to inquire whether there are PT-invariant solutions with PT-eigenvalue $-1$; further, in the case of coupled field theories, whether there are PT-invariant solutions with PT-eigenvalue $-1$ in both the fields, and whether there are mixed PT-invariant solutions with PT-eigenvalue $+1$ in one field and $-1$ in the other field. One of the aims of this paper is to provide answers to these questions. Our strategy will be to start with known real solutions and then make Ans\"atze for complex PT-invariant solutions and obtain conditions under which the Ans\"atze are valid. We show, through several examples (such as $\phi^4$, $\phi^6$, sine-Gordon, double sine-Gordon, double sine-hyperbolic-Gordon equations), that whenever a real nonlinear equation, either continuous or discrete, integrable or nonintegrable, admits a solution in terms of $\tanh \beta x$, then it will necessarily also admit solutions in terms of the PT-invariant combinations $\tanh 2\beta x \pm i {\rm sech} 2\beta x$ with PT-eigenvalue $-1$ (i.e. with the PT-kink width being half of that of the corresponding real kink). Remarkably, in all these cases, the kink solution as well as the PT-invariant kink are solutions of the first order self-dual equation. Further, in all these cases, the kink as well as the PT-invariant kink have the same topological charge and the same kink energy, and both the solutions can be shown to be linearly stable. In view of this, we believe that the PT-invariant kink solutions may also find physical realization in coupled optical waveguides among other applications \cite{ruter}. It is worth pointing out that some of the equations considered here have also been considered in their PT-symmetric deformed version \cite{beb}. We also generalize these results to the periodic case and show that whenever a nonlinear equation admits a solution in terms of ${\rm sn}(x,m)$, then it will necessarily also admit solutions in terms of the PT-invariant combinations ${\rm sn} (x,m)\pm i{\rm cn} (x,m)$ as well as ${\rm sn}(x,m) \pm i {\rm dn}(x,m)$ with PT-eigenvalue $-1$. Further, we also consider coupled field theory models and obtain PT-invariant solutions with PT-eigenvalue $-1$ in both the fields (in addition to PT-eigenvalue $+1$ in both the fields) and mixed solutions with PT-eigenvalue $+1$ in one field and $-1$ in the other field. The plan of the paper is the following. In Sec. II we consider several self-dual first order equations which are known to admit topological kink solutions of the form $\tanh \beta x$ and in all these cases show the existence of PT-invariant complex kink solutions of the form $\tanh 2\beta x \pm i {\rm sech} 2\beta x$ with PT-eigenvalue $-1$ and kink width being half of that of the corresponding real kink. We show that a given kink solution and the corresponding PT-invariant kink solution have the same topological charge and the same kink energy, and further, both are linearly stable. For all the PT-invariant kink solutions we give explicit expressions for the zero mode. In Sec. III we consider four continuum and two discrete field theory models and show the existence of PT-invariant periodic kink solutions of the form ${\rm sn}(x,m) \pm i {\rm cn}(x,m)$ and ${\rm sn}(x,m) \pm i {\rm dn}(x,m)$ with PT-eigenvalue $-1$ and also the corresponding hyperbolic PT-invariant kink solutions. In Sec. 
IV we discuss three coupled field theory models and show that these models not only admit PT-invariant solutions with PT-eigenvalue $+1$ or $-1$ for both the fields, but also mixed PT-invariant solutions with PT-eigenvalue $+1$ in one field and $-1$ in the other field. Section V is reserved for a summary of the main results obtained, where we also discuss some of the open problems. \section{PT-invariant Kink Solutions} We now discuss several examples from continuum field theories where a kink solution like $\tanh \beta x$ is a solution of the first order self-dual equation. We consider several self-dual first order equations with known kink solutions and in all the cases obtain new PT-invariant solutions in terms of $\tanh 2\beta x \pm i {\rm sech} 2\beta x$ with PT-eigenvalue $-1$. As is well known, typically, the self-dual equations with kink solutions are of the form \begin{equation}\label{2.0} \frac{d\phi}{dx} = \pm \sqrt{2V(\phi)}\,, \end{equation} where $V(\phi)$ has multiple degenerate minima and is positive semidefinite with its minimum value being zero at the degenerate minima. We show in general that both the kink topological charge and the kink energy remain unaltered, i.e. the usual kink solution and the new PT-invariant kink solutions have the same topological charge and the same kink energy. Further, we show that both the usual kink as well as the PT-invariant kink solutions are linearly stable and give explicit expressions for the nodeless zero mode in all the cases. \subsection{$\phi^4$ Kink} The $\phi^4$ field theory arises in several areas of physics. It has the celebrated kink solution which is the solution of the first order self-dual equation \begin{equation}\label{1.14} \frac{d\phi}{dx} = \pm \sqrt{\frac{b}{2}}\left(\frac{a}{b}-\phi^2\right)\,. \end{equation} Here, and in the rest of the examples, we consider one of the two self-dual equations; exactly the same arguments are also valid for the other one. It is well known that the kink solution to Eq. (\ref{1.14}) is \cite{raj} \begin{equation}\label{1.15} \phi_{k} = A \tanh \beta x\,, \end{equation} provided \begin{equation}\label{1.16} A = \sqrt{\frac{a}{b}}\,,~~\beta = \sqrt{\frac{a}{2}}\,. \end{equation} The corresponding topological charge is \begin{equation}\label{1.17} Q = \phi (x=\infty) - \phi (x=-\infty)= 2\sqrt{\frac{a}{b}}\,, \end{equation} while the corresponding kink energy is \begin{eqnarray}\label{1.18} &&E_k = \int_{-\infty}^{+\infty} dx\, \left[\frac{1}{2}\left(\frac{d\phi}{dx}\right)^2 +V(\phi)\right] \nonumber \\ &&= \int_{-\sqrt{\frac{a}{b}}}^{\sqrt{\frac{a}{b}}} d\phi\, \sqrt{2V(\phi)} = \frac{4}{3}\sqrt{\frac{a^3}{b^3}}\,. \end{eqnarray} Remarkably, even \begin{equation}\label{1.19} \phi_{ptk} = A \tanh \beta_1 x +i B{\rm sech} \beta_1 x\,, \end{equation} is an exact PT-invariant kink solution (with PT eigenvalue $-1$) of the self-dual Eq. (\ref{1.14}) provided \begin{equation}\label{1.20} B = \pm A\,,~~ A = \sqrt{\frac{a}{b}}\,,~~ \beta_1 = \sqrt{2a} = 2 \beta\,, \end{equation} where $\beta$ is as given by Eq. (\ref{1.16}). Note that the corresponding topological charge as defined by Eq. (\ref{1.17}) is the same for the kink solution (\ref{1.15}) and the complex PT-invariant kink solution (\ref{1.19}). This is because $\phi(\pm \infty)$ remains unchanged. Further, even the kink energy is the same for the kink solution (\ref{1.15}) and the PT-invariant kink solution (\ref{1.19}) since as is clear from Eq. 
(\ref{1.18}), the answer for the kink energy depends on $V(\phi)$ and $\phi(\pm \infty)$, both of which are the same for the kink solution (\ref{1.15}) as well as for the PT-invariant kink solution (\ref{1.19}). The arguments given above are rather general and hence it is clear that the topological charge and the kink energy are the same for any (real) kink solution and the corresponding complex PT-invariant kink solution. We have also explicitly checked it for all the cases mentioned below. Hence, from now onwards, for the remaining examples, we shall not discuss the kink topological charge and the kink energy for the PT-invariant kink solutions. Let us now discuss the question of linear stability of the complex PT-invariant kink solution (\ref{1.19}). In this context it is worth recalling that in the case of the standard kink solution as given by Eq. (\ref{1.15}), the linear stability issue was discussed a while ago \cite{raj} and it has been shown that on assuming \begin{equation}\label{1.14a} \phi = \phi_{k} + \eta(x) e^{i\omega t}\,, \end{equation} and substituting it in the corresponding (second-order) field equation, one gets the stability equation \begin{equation}\label{1.15a} -\frac{d^2 \eta(x)}{dx^2} + \frac{d^2 V}{d\phi^2}\bigg|_{\phi_{k}} \eta(x) = \omega^2 \eta(x)\,. \end{equation} Further, it is well known that the corresponding zero mode, i.e. the unnormalized ground state wave function $\eta_0$ with $\omega^2 =0$, is given by \begin{equation}\label{1.15d} \eta_{0}(x) = \frac{d\phi_{k}}{dx}\,. \end{equation} This discussion is easily generalized to the PT-invariant kink and in that case one gets the stability equation which is similar to (\ref{1.15a}) except that $\frac{d^2V}{d\phi^2}\big|_{\phi_{k}}$ is replaced by $\frac{d^2V}{d\phi^2}\big|_{\phi_{ptk}}$ and the corresponding zero-mode is given by \begin{equation}\label{1.15e} \eta_{0,pt}(x) = \frac{d\phi_{ptk}}{dx}\,. \end{equation} For the usual $\phi^4$ kink, using Eq. (\ref{1.15}), one gets the stability equation \begin{equation}\label{1.16a} -\frac{d^2\eta(x)}{dx^2}+[2a-3a {\rm sech}^2(\beta x)]\eta(x) = \omega^2 \eta(x)\,, ~~\beta = \sqrt{\frac{a}{2}}\,. \end{equation} As is well known, this Schr\"odinger-like equation has two bound states with the corresponding eigenvalues and eigenfunctions being \begin{eqnarray}\label{1.17a} &&\eta_{0}(x) = {\rm sech}^2 (\beta x)\,,~~ \omega^{2}_{0} = 0\,, \nonumber \\ &&\eta_{1}(x) = {\rm sech}(\beta x) \tanh(\beta x)\,,~~\omega^{2}_{1} = \frac{3a}{2}\,. \end{eqnarray} Note that while $\eta_{0}$ is nodeless, $\eta_{1}$ has one node. Let us now discuss the stability of the PT-invariant kink solution (\ref{1.19}). On using Eq. (\ref{1.19}) in the stability Eq. (\ref{1.15a}), one gets a Schr\"odinger-like equation for the PT-invariant potential \begin{equation}\label{1.18a} -\frac{d^2\eta(x)}{dx^2}+[2a-6a {\rm sech}^2(\beta_1 x) \pm 6ia {\rm sech}(\beta_1 x) \tanh(\beta_1 x)]\eta(x) = \omega^2 \eta(x)\,, ~~\beta_1 = \sqrt{2a}\,. \end{equation} Remarkably, this Schr\"odinger-like PT-invariant equation too has two discrete states but, unlike the real kink case, both the discrete states are nodeless but with different energies. 
In particular, the corresponding eigenvalues and eigenfunctions are \begin{eqnarray}\label{1.19a} &&\eta_{0,pt}(x) = {\rm sech} (\beta_1 x)[{\rm sech}(\beta_1 x)\mp i \tanh(\beta_1 x)] = {\rm sech}(\beta_1 x) e^{\mp i\tan^{-1}[\sinh(\beta_1 x)]}\,, ~~ \omega^{2}_{0} = 0\,, \nonumber \\ &&\eta_{1,pt}(x) = {\rm sech}^{1/2}(\beta_1 x) e^{\mp (3i/2) \tan^{-1}[\sinh(\beta_1 x)]} \,,~~\omega^{2}_{1} = \frac{3a}{2}\,. \end{eqnarray} It is worth noting that the eigenenergies of the two bound states are identical (i.e. $\omega^2 = 0, 3a/2$) for the usual and the PT-invariant kink solution. Because of the linear stability of the PT-invariant kink solution, we suspect that the PT-invariant kink solution too can have physical relevance. It would be interesting to explore physical situations where such a kink can indeed be relevant. We now show that a similar discussion is valid in the case of the other PT-invariant kink solutions, and in all the cases the zero mode is nodeless, thereby ensuring the linear stability of the PT-invariant kink solution. \subsection{$\phi^6$ Kinks} Unlike the $\phi^4$ case, in the $\phi^6$ case there are two different types of kink solutions depending on whether one is at the first order transition point or below it. We now show that in both the cases we have PT-invariant kink solutions. {\bf Case I: $T = T_{c1}$} In this case the kink solution satisfies the self-dual equation \begin{equation}\label{1.21} \frac{d\phi}{dx} = \pm \sqrt{\lambda} \phi (a^2-\phi^2)\,. \end{equation} At this point there are two distinct kink solutions of the form $\sqrt{1\pm \tanh \beta x}$ and in both the cases the PT-invariant kink solutions also exist. For simplicity we only discuss one case; exactly the same arguments are also valid in the other case. One of the well known kink solutions to Eq. (\ref{1.21}) is \cite{kh} \begin{equation}\label{1.22} \phi_{k} = A \sqrt{1+\tanh \beta x}\,, \end{equation} provided \begin{equation}\label{1.23} A = a/\sqrt{2}\,,~~\beta = \sqrt{\lambda} a^2\,. \end{equation} The self-dual Eq. (\ref{1.21}) also admits the PT-invariant kink solution \begin{equation}\label{1.24} \phi_{ptk} = A \sqrt{1+\tanh \beta_1 x \pm i {\rm sech} \beta_1 x}\,, \end{equation} provided $A$ is again as given by Eq. (\ref{1.23}) while the inverse width $\beta$ is again doubled, i.e. \begin{equation}\label{1.25} \beta_1 = 2\sqrt{\lambda} a^2 = 2 \beta\,, \end{equation} where $\beta$ is given by Eq. (\ref{1.23}). Let us now show that this PT-invariant kink solution is linearly stable. From Eq. (\ref{1.21}) it is clear that \begin{equation}\label{1.21a} V(\phi) = \frac{\lambda}{2}\phi^2(a^2-\phi^2)^2\,. \end{equation} It is then easy to calculate $\frac{d^2V(\phi_{ptk})}{d\phi^2}$ and set up the stability equation for the PT-invariant kink solution (\ref{1.24}). In particular, on setting $y = \beta_1 x$, the stability Eq. (\ref{1.15a}) takes the form \begin{equation}\label{1.21f} -\eta''(y)+(1/8)[5-15 {\rm sech}^2 y+3\tanh y\pm 3i{\rm sech} y \pm 15 i {\rm sech} y \tanh y]\eta(y) = \frac{\omega^2}{4\lambda a^4} \eta (y)\,. \end{equation} On using Eq. (\ref{1.15e}) the corresponding zero-mode turns out to be \begin{equation}\label{1.23a} \eta_{0,pt} \propto ({\rm sech} y)^{1/2}(1-\tanh y)^{1/4} e^{\mp(3i/4)\tan^{-1}[\sinh y]}\,,~~\omega_{0} =0\,. \end{equation} {\bf Case II: $T < T_{c1}$} In this case the self-dual equation is of the form \begin{equation}\label{1.26} \frac{d\phi}{dx} = \pm \sqrt{\lambda} (a^2-\phi^2)\sqrt{\phi^2+b^2}\,. 
\end{equation} The well known kink solution to this equation is \cite{chle,ks4} \begin{equation}\label{1.27a} \phi = \frac{A\tanh \beta x}{\sqrt{1-B \tanh^2 \beta x}}\,, \end{equation} provided \begin{equation}\label{1.28} A = \frac{ab}{\sqrt{a^2+b^2}}\,,~~B = \frac{a^2}{a^2+b^2}\,,~~ \beta = a \sqrt{\lambda (a^2+b^2)}\,. \end{equation} The solution (\ref{1.27a}) can be put in the form \begin{equation}\label{1.27} \frac{\phi}{\sqrt{A^2+B\phi^2}} = \tanh \beta x\,. \end{equation} The self-dual Eq. (\ref{1.26}) also admits the PT-invariant kink solution \begin{equation}\label{1.29} \frac{\phi}{\sqrt{A^2+B\phi^2}} = \tanh \beta_1 x \pm i {\rm sech} \beta_1 x\,, \end{equation} provided $A$ and $B$ are as given by Eq. (\ref{1.28}) while $\beta$ is again doubled and given by \begin{equation}\label{1.30} \beta_1 = 2a\sqrt{\lambda(a^2+b^2)} = 2\beta\,, \end{equation} where $\beta$ is given by Eq. (\ref{1.28}). Let us now show that this PT-invariant kink solution is linearly stable. From Eq. (\ref{1.26}) it is clear that \begin{equation}\label{1.26a} V(\phi) = \frac{\lambda}{2}(b^2+\phi^2)(a^2-\phi^2)^2\,. \end{equation} It is then easy to calculate $\frac{d^2V(\phi_{ptk})}{d\phi^2}$ and set up the stability equation for the PT-invariant kink solution (\ref{1.29}). In particular, on putting $y = \beta_1 x$, the stability Eq. (\ref{1.15a}) takes the form \begin{eqnarray}\label{1.26f} &&-\eta''(y)+\frac{1}{4(a^2+b^2)} \Bigg(a^2-2b^2 +\frac{3b^2(b^2-2a^2)}{2a^2}\left[\frac{b^2/(b^2+a^2)-2{\rm sech}^2 y \pm 2i {\rm sech} y \tanh y}{{\rm sech}^2 y + p^2}\right] \nonumber \\ &&+\frac{15b^4}{16a^2}\left[\frac{b^2/(b^2+a^2)-2{\rm sech}^2 y \pm 2i {\rm sech} y \tanh y}{{\rm sech}^2 y + p^2}\right]^2 \Bigg) \eta(y) = \frac{\omega^2}{4a^2 \lambda (a^2+b^2)}\eta(y)\,, \end{eqnarray} where $p^2 = \frac{b^4}{4a^2(a^2+b^2)}$. On using Eq. (\ref{1.15e}), we obtain the zero-mode for the PT-invariant kink solution (\ref{1.29}) \begin{eqnarray}\label{1.28a} &&\eta_{0,pt} \propto {\rm sech} y \sqrt{\frac{b^2}{a^2+b^2}-2{\rm sech}^2 y \mp 2i {\rm sech} y \tanh y} \nonumber \\ && \times \frac{[b^2/(a^2+b^2)](1+\frac{b^2}{2a^2}) {\rm sech} y \tanh y \pm i(1+2p^2){\rm sech}^2 y \mp i p^2}{[{\rm sech}^2 y+p^2]^2}\,. \end{eqnarray} \subsection{Sine-Gordon Kink} The self-dual sine-Gordon equation is \cite{raj} \begin{equation}\label{1.31} \frac{d\phi}{dx} = \pm 2 \sin \frac{\phi}{2}\,. \end{equation} One of the well known kink solutions is \begin{equation}\label{1.32a} \phi = 4 \tan^{-1}(e^{-x})\,, \end{equation} which can be put in the form \begin{equation}\label{1.32} \cos(\phi/2) = \tanh \beta x\,,~~\beta =1\,. \end{equation} The self-dual Eq. (\ref{1.31}) also admits the PT-invariant kink solution \begin{equation}\label{1.33} \cos(\phi/2) = \tanh \beta_1 x \pm i {\rm sech} \beta_1 x\,, \end{equation} provided $\beta$ is again doubled, i.e. $\beta_1 =2 = 2\beta$, where $\beta$ is given by Eq. (\ref{1.32}). Let us now show that this PT-invariant kink solution is linearly stable. From Eq. (\ref{1.31}) it is clear that \begin{equation}\label{1.31a} V(\phi) = 1- \cos(\phi)\,. \end{equation} It is then easy to calculate $\frac{d^2V(\phi_{ptk})}{d\phi^2}$ and set up the stability equation for the PT-invariant kink solution (\ref{1.33}). In particular, on substituting $y = \beta_1 x$ in Eq. (\ref{1.15a}) we obtain the stability equation \begin{equation}\label{1.31f} -\eta''(y)+\left[\frac{1}{4} - {\rm sech}^2 y \pm i {\rm sech} y \tanh y\right]\eta(y) = \frac{\omega^2}{4} \eta(y)\,. \end{equation} Using Eq. 
(\ref{1.15e}), the zero-mode for the PT-invariant kink solution (\ref{1.33}) turns out to be \begin{equation}\label{1.33a} \eta_{0,pt} \propto \sqrt{{\rm sech} y}\, e^{\mp (i/2)\tan^{-1}(\sinh y)}\,. \end{equation} \subsection{Double sine-hyperbolic-Gordon Kink} The self-dual equation for the double sine-hyperbolic-Gordon (DSHG) equation is \begin{equation}\label{1.34} \frac{d\phi}{dx} = \pm \sqrt{2}(\zeta \cosh 2\phi -n)\,. \end{equation} The well known kink solution in this case is \cite{raz,bk,hks} \begin{equation}\label{1.35} \phi = \tanh^{-1}\left[\sqrt{\frac{n-\zeta}{n+\zeta}}\tanh \beta x\right]\,,~~\beta = \sqrt{n^2-\zeta^2}\,, \end{equation} which can be put in the form \begin{equation}\label{1.36} \tanh \phi = \sqrt{\frac{n-\zeta}{n+\zeta}} \tanh \beta x\,. \end{equation} The same self-dual Eq. (\ref{1.34}) also admits the PT-invariant kink solution \begin{equation}\label{1.37} \tanh \phi = \sqrt{\frac{n-\zeta}{n+\zeta}}\, [\tanh \beta_1 x \pm i{\rm sech} \beta_1 x]\,, \end{equation} provided $\beta$ is doubled, i.e. $\beta_1 = 2\sqrt{n^2-\zeta^2} = 2\beta$, where $\beta$ is given by Eq. (\ref{1.35}). Let us now show that this PT-invariant kink solution is linearly stable. From Eq. (\ref{1.34}) it is clear that \begin{equation}\label{1.31d} V(\phi) = (\zeta \cosh 2\phi -n)^2\,. \end{equation} It is then easy to calculate $\frac{d^2V(\phi_{ptk})}{d\phi^2}$ and set up the stability equation for the PT-invariant kink solution (\ref{1.37}). In particular, on substituting $y = \beta_1 x$ in Eq. (\ref{1.15a}) we obtain the stability equation \begin{eqnarray}\label{1.37f} &&-\eta''(y)+\frac{1}{n-\zeta}\bigg (2\zeta -\frac{2\zeta} {(n^2 {\rm sech}^2 y+\zeta^2 \tanh^2 y)}[2\zeta +(n+4\zeta)(n{\rm sech}^2 y+\zeta \tanh^2 y \pm i(n-\zeta){\rm sech}\, y \tanh y)] \nonumber \\ &&+\frac{8\zeta^2(n+\zeta)(n{\rm sech}^2 y + \zeta \tanh^2 y)} {(n^2{\rm sech}^2 y+\zeta^2 \tanh^2 y)^2}[n{\rm sech}^2 y+\zeta \tanh^2 y \pm i (n-\zeta){\rm sech} y \tanh y] \bigg ) \eta (y) = \frac{\omega^2}{4(n^2-\zeta^2)} \eta(y)\,. \nonumber \\ \end{eqnarray} Using Eq. (\ref{1.15e}), the zero-mode for the PT-invariant kink solution (\ref{1.37}) turns out to be \begin{equation}\label{1.33d} \eta_{0,pt} \propto \frac{{\rm sech} y (n{\rm sech} y \mp i \zeta \tanh y)} {n^2 {\rm sech}^2 y+\zeta^2 \tanh^2 y}\,. \end{equation} \subsection{Double Sine-Gordon Kink} Consider the following self-dual equation for the double sine-Gordon case \begin{equation}\label{1.38} \frac{d\phi}{dx} = \pm \sqrt{2\lambda} \left(\sin \phi - \frac{\mu}{\lambda}\right)\,, ~~\mu < \lambda\,. \end{equation} In this case the well known kink solution is \cite{leung} \begin{equation}\label{1.39a} \phi = 2\tan^{-1}\left(\sqrt{\frac{\lambda - \mu}{\lambda + \mu}} \tanh \beta x\right) +\frac{\pi}{2}\,, \end{equation} which can be put in the form \begin{equation}\label{1.39} \tan\left(\frac{\phi}{2}- \frac{\pi}{4}\right) = \sqrt{\frac{\lambda - \mu} {\lambda + \mu}}\tanh(\beta x)\,, \end{equation} provided \begin{equation}\label{1.40} \beta = \sqrt{\frac{\lambda(1-\frac{\mu^2}{\lambda^2})}{2}}\,. \end{equation} The same self-dual Eq. (\ref{1.38}) admits the PT-invariant kink solution \begin{equation}\label{1.41} \tan\left(\frac{\phi}{2}- \frac{\pi}{4}\right) = \sqrt{\frac{\lambda - \mu} {\lambda + \mu}}\,[\tanh \beta_1 x \pm i {\rm sech} \beta_1 x]\,, \end{equation} provided $\beta$ is doubled, i.e. \begin{equation}\label{1.42} \beta_1 = \sqrt{2\lambda\left(1-\frac{\mu^2}{\lambda^2}\right)} = 2\beta\,, \end{equation} where $\beta$ is given by Eq. (\ref{1.40}). 
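Solutions of this type are easy to spot-check numerically. The sketch below (our illustration, with arbitrarily chosen $\lambda,\mu$) writes the PT-invariant kink (\ref{1.41})-(\ref{1.42}) explicitly as $\phi = \pi/2 + 2\tan^{-1}[k(\tanh\beta_1 x + i\,{\rm sech}\,\beta_1 x)]$, with $k=\sqrt{(\lambda-\mu)/(\lambda+\mu)}$, and verifies the self-dual Eq. (\ref{1.38}) up to finite-difference error:
\begin{verbatim}
# Numerical spot-check (illustration, arbitrary parameter values) of the
# PT-invariant double sine-Gordon kink (1.41)-(1.42) against the self-dual
# equation dphi/dx = sqrt(2*lam)*(sin(phi) - mu/lam).
import numpy as np

lam, mu = 2.0, 0.5
k = np.sqrt((lam - mu) / (lam + mu))
b1 = np.sqrt(2 * lam * (1 - (mu / lam) ** 2))    # beta_1 = 2*beta

def phi(x):
    w = np.tanh(b1 * x) + 1j / np.cosh(b1 * x)   # |w| = 1, so k*w stays
    return np.pi / 2 + 2 * np.arctan(k * w)      # clear of arctan's cuts

xs = np.linspace(-3.0, 3.0, 13)
eps = 1e-6
lhs = (phi(xs + eps) - phi(xs - eps)) / (2 * eps)   # dphi/dx, central diff
rhs = np.sqrt(2 * lam) * (np.sin(phi(xs)) - mu / lam)
print(np.max(np.abs(lhs - rhs)))   # tiny (~1e-9): zero up to round-off
\end{verbatim}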
Let us now show that this PT-invariant kink solution is linearly stable. From Eq. (\ref{1.38}) it is clear that \begin{equation}\label{1.31g} V(\phi) = \lambda \left(\sin \phi - \frac{\mu}{\lambda}\right)^2\,. \end{equation} It is then easy to calculate $\frac{d^2V(\phi_{ptk})}{d\phi^2}$ and set up the stability equation for the PT-invariant kink solution (\ref{1.41}). In particular, on substituting $y = \beta_1 x$ in Eq. (\ref{1.15a}) we obtain the stability equation \begin{eqnarray}\label{1.41f} &&-\eta''(y)+\frac{\lambda^2}{\lambda^2-\mu^2} \bigg [1 +\frac{\mu[\mu \lambda \mp i (\lambda^2 - \mu^2){\rm sech} y]} {\lambda(\mu^2 {\rm sech}^2 y+\lambda^2 \tanh^2 y)} \nonumber \\ &&-2\frac{\mu^2 \lambda^2 -(\lambda^2-\mu^2)^2 {\rm sech}^2 y \mp 2i\mu \lambda (\lambda^2-\mu^2) {\rm sech} y} {(\mu^2 {\rm sech}^2 y + \lambda^2 \tanh^2 y)^2} \bigg ]\eta(y) = \frac{\omega^2 \lambda}{2(\lambda^2-\mu^2)}\eta(y)\,. \end{eqnarray} Using Eq. (\ref{1.15e}), the zero-mode for the PT-invariant kink solution (\ref{1.41}) turns out to be \begin{equation}\label{1.41a} \eta_{0,pt} \propto \frac{{\rm sech} y (\mu {\rm sech} y \mp i \lambda \tanh y)} {\mu^2 {\rm sech}^2 y+\lambda^2 \tanh^2 y}\,. \end{equation} \section{PT-Invariant Periodic Kink Solutions} We now discuss a few examples from both the continuum and the discrete field theories where both periodic and hyperbolic kink-like solutions are known, and show that in all these cases one also has complex PT-invariant periodic as well as hyperbolic kink solutions. \subsection{mKdV Equation} We first discuss the celebrated mKdV equation \begin{equation}\label{2.1} u_t+u_{xxx}- 6 u^2 u_{x} =0\,, \end{equation} which is a well known integrable equation having applications in several areas \cite{dj}. One of the exact, periodic solutions to the mKdV Eq. (\ref{2.1}) is \begin{equation}\label{2.2} u = A \sqrt{m} {\rm sn}[\beta(x-vt),m]\,, \end{equation} provided \begin{equation}\label{2.3} A^2 = \beta^2\,,~~v= -(1+m) \beta^2\,. \end{equation} In the limit $m=1$, the solution (\ref{2.2}) goes over to the hyperbolic kink solution \begin{equation}\label{2.4} u = A \tanh[\beta(x-vt)]\,, \end{equation} and in this case $v = -2\beta^2$. Remarkably, even the complex PT-invariant combination (with PT eigenvalue $-1$) \begin{equation}\label{2.5} u = A\sqrt{m} {\rm sn}[\beta(x-vt),m] + i B \sqrt{m} {\rm cn}[\beta(x-vt),m]\,, \end{equation} is an exact solution to the mKdV Eq. (\ref{2.1}) provided \begin{equation}\label{2.6} B = \pm A\,,~~\beta^2 = 4 A^2\,,~~v= -\frac{(2-m)}{2}\beta^2\,. \end{equation} Yet another PT-invariant solution (with PT eigenvalue $-1$) is \begin{equation}\label{2.7} u = A \sqrt{m} {\rm sn}[\beta(x-vt),m] +i B {\rm dn}[\beta(x-vt),m]\,, \end{equation} provided \begin{equation}\label{2.8} B = \pm A\,,~~\beta^2 = 4 A^2\,,~~v = -\frac{(2m-1)}{2} \beta^2\,. \end{equation} We thus have two new periodic solutions of the mKdV Eq. (\ref{2.1}). In the limit $m=1$, both these solutions go over to the hyperbolic PT-invariant solution \begin{equation}\label{2.9} u = A \tanh[\beta(x-vt)] \pm i B {\rm sech}[\beta(x-vt)]\,, \end{equation} provided \begin{equation}\label{2.10} B = \pm A\,,~~\beta^2 = 4 A^2\,,~~v= -(1/2) \beta^2\, . \end{equation} There is also a complex PT-invariant solution to the mKdV Eq. (\ref{2.1}) with PT-eigenvalue +$1$. Let us first note that the mKdV Eq.
(\ref{2.1}) has an exact solution \begin{equation}\label{2.11} u = \frac{A\sqrt{m}{\rm cn}[\beta(x-vt),m]}{{\rm dn}[\beta(x-vt),m]}\,, \end{equation} provided \begin{equation}\label{2.12} A^2 = \beta^2\,,~~v = -(1+m)\beta^2\, . \end{equation} It is easily checked that the same Eq. (\ref{2.1}) also has the complex PT invariant solution with PT-eigenvalue +$1$ \begin{equation}\label{2.13} u = A \sqrt{m}\,\frac{{\rm cn}[\beta(x-vt),m]}{{\rm dn}[\beta(x-vt),m]} +iB \sqrt{m(1-m)} \frac{{\rm sn}[\beta(x-vt),m]}{{\rm dn}[\beta(x-vt),m]}\,, \end{equation} provided \begin{equation}\label{2.14c} B = \pm A\,,~~\beta^2 = 4A^2\,,~~v = -\frac{(2-m)}{2}\beta^2\,. \end{equation} Before completing this subsection, we would like to note that in our recent paper \cite{ks} we had considered the other mKdV equation, i.e. \begin{equation}\label{2.15} u_t+u_{xxx}+ 6 u^2 u_{x} =0\,, \end{equation} and had shown that in that case one has complex PT-invariant solutions of the form ${\rm cn}(x,m) \pm i{\rm sn}(x,m)$ and ${\rm dn}(x,m) \pm i{\rm sn}(x,m)$ with PT-eigenvalue $+1$. We now show that Eq. (\ref{2.15}) also has a PT-invariant solution with PT-eigenvalue $-1$. Let us first note that \begin{equation}\label{2.16} u = A \sqrt{m(1-m)} \frac{{\rm sn}[\beta(x-vt),m]}{{\rm dn}[\beta(x-vt),m]}\,, \end{equation} is an exact solution to Eq. (\ref{2.15}) provided \begin{equation}\label{2.17} A^2 = \beta^2\,,~~v = (2m-1)\beta^2\, . \end{equation} It is easily checked that the same Eq. (\ref{2.15}) also has the complex PT invariant solution with PT-eigenvalue $-1$ \begin{equation}\label{2.18} u = A \sqrt{m(1-m)} \frac{{\rm sn}[\beta(x-vt),m]}{{\rm dn}[\beta(x-vt),m]} +iB \sqrt{m} \frac{{\rm cn}[\beta(x-vt),m]}{{\rm dn}[\beta(x-vt),m]}\,, \end{equation} provided \begin{equation}\label{2.14} B = \pm A\,,~~4A^2 = \beta^2\,,~~v = -\frac{(2-m)}{2}\beta^2\,. \end{equation} \subsection{$\phi^4$ Field Theory} The field equation in this case is \begin{equation}\label{2.20} \phi_{xx} = a \phi + b \phi^3\,, \end{equation} which also follows from the self-dual first order Eq. (\ref{1.14}). We now show that in this case too one has complex PT-invariant solutions with PT eigenvalue $-1$. One of the exact, periodic solutions to the $\phi^4$ Eq. (\ref{2.20}) is \cite{aubry} \begin{equation}\label{2.21} \phi = A \sqrt{m} {\rm sn}(\beta x,m)\,, \end{equation} provided \begin{equation}\label{2.22} b A^2 = 2\beta^2\,,~~a= -(1+m) \beta^2\,. \end{equation} In the limit $m=1$, the solution (\ref{2.21}) goes over to the hyperbolic kink solution discussed in the previous section. Remarkably, even the complex PT-invariant combination (with PT eigenvalue $-1$) \begin{equation}\label{2.23} \phi = A\sqrt{m} {\rm sn}(\beta x,m) + i B \sqrt{m} {\rm cn}(\beta x,m)\,, \end{equation} is an exact solution to Eq. (\ref{2.20}) provided \begin{equation}\label{2.24} B = \pm A\,,~~\beta^2 = 2b A^2\,,~~a= -\frac{(2-m)}{2}\beta^2\,. \end{equation} Yet another PT-invariant solution (with PT eigenvalue $-1$) is \begin{equation}\label{2.25} \phi = A \sqrt{m} {\rm sn}(\beta x,m) +i B {\rm dn}(\beta x,m)\,, \end{equation} provided \begin{equation}\label{2.26} B = \pm A\,,~~\beta^2 = 2b A^2\,,~~a = -\frac{(2m-1)}{2} \beta^2\,. \end{equation} In the limit $m=1$, both solutions (\ref{2.23}) and (\ref{2.25}) go over to the hyperbolic PT-invariant kink solution discussed in the previous section. Another periodic solution to Eq.
(\ref{2.20}) is \begin{equation}\label{2.27} \phi = A \sqrt{m(1-m)}\frac{{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \end{equation} provided \begin{equation}\label{2.28} b A^2 = - 2\beta^2\,,~~a = (2m-1)\beta^2\,. \end{equation} The complex PT-invariant combination (with PT-eigenvalue $-1$) \begin{equation}\label{2.29} \phi = A \sqrt{m(1-m)}\frac{{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m)} +iB \sqrt{m} \frac{{\rm cn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \end{equation} is also an exact solution to Eq. (\ref{2.20}) provided \begin{equation}\label{2.30} B = \pm A\,,~~2b A^2 = -\beta^2\,,~~a = - \frac{(2-m)}{2} \beta^2\,. \end{equation} So far we have discussed complex PT-invariant solutions (with PT-eigenvalue $-1$) of the $\phi^4$ field Eq. (\ref{2.20}). Further, in a recent paper \cite{ks} we have already obtained complex PT-invariant periodic solutions of the $\phi^4$ field Eq. (\ref{2.20}) with PT-eigenvalue $+1$. They were of the form ${\rm cn}(x,m) \pm i{\rm sn}(x,m)$ and ${\rm dn}(x,m) \pm i{\rm sn}(x,m)$. We now show that the same Eq. (\ref{2.20}) also has another periodic PT-invariant solution with PT-eigenvalue $+1$. Let us first note that one of the exact solutions to Eq. (\ref{2.20}) is \begin{equation}\label{2.31} \phi = A \sqrt{m}\frac{{\rm cn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \end{equation} provided \begin{equation}\label{2.32} b A^2 = 2\beta^2\,,~~a = -(1+m)\beta^2\,. \end{equation} The complex PT-invariant combination (with PT-eigenvalue $+1$) \begin{equation}\label{2.33} \phi = A \sqrt{m}\frac{{\rm cn}(\beta x,m)}{{\rm dn}(\beta x,m)} +iB \sqrt{m(1-m)} \frac{{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \end{equation} is also an exact solution to Eq. (\ref{2.20}) provided \begin{equation}\label{2.34} B = \pm A\,,~~2b A^2 = \beta^2\,,~~a = - \frac{2-m}{2} \beta^2\,. \end{equation} \subsection{KdV Equation} In our recent publication \cite{ks} we have also obtained complex PT-invariant solutions of the KdV equation \begin{equation}\label{2.60} u_t+u_{xxx}- 6 u u_{x} =0\,, \end{equation} with PT eigenvalue +$1$. We now discuss one more complex PT-invariant periodic solution of this equation with PT-eigenvalue $+1$. To begin with, it is easy to check that one of the exact periodic solutions of the KdV Eq. (\ref{2.60}) is \begin{equation}\label{2.61} u = \frac{A(1-m)}{{\rm dn}^2[\beta(x-vt),m]}\,, \end{equation} provided \begin{equation}\label{2.61a} A = -2\beta^2\,,~~v= 4 (2-m) \beta^2\,. \end{equation} The same equation also admits the complex, PT-invariant, periodic solution \begin{equation}\label{2.62} u = \frac{A(1-m)}{{\rm dn}^2[\beta(x-vt),m]} +i B \sqrt{1-m} \frac{m{\rm sn}[\beta(x-vt),m] {\rm cn}[\beta(x-vt),m]}{{\rm dn}^2[\beta(x-vt),m]}\,, \end{equation} provided \begin{equation}\label{2.63} B = \pm A\,,~~A = -\beta^2\,,~~v= (2-m)\beta^2\,. \end{equation} It may be noted that (\ref{2.62}) is a nonsingular, periodic solution which vanishes in the hyperbolic limit $m=1$. \subsection{$\phi^3$ Field Theory} This field theory arises in the context of third order phase transitions \cite{phi3} and is also relevant to tachyon condensation \cite{tachyon}. In our recent publication \cite{ks} we also discussed complex PT-invariant periodic solutions of the $\phi^3$ field theory with PT-eigenvalue $+1$. In this subsection, we discuss one more complex, PT-invariant, periodic solution with PT-eigenvalue $+1$.
We first notice that one of the exact solutions of the $\phi^3$ field theory \begin{equation}\label{2.64} \phi_{xx} = a\phi+b\phi^2\,, \end{equation} is \begin{equation}\label{2.65} \phi = \frac{A(1-m)}{{\rm dn}^2[\beta x,m]}+ Ay\,, \end{equation} provided \begin{equation}\label{2.66} bA = -6\beta^2\,,~~a=4[2-m+3y]\beta^2\,,~~y = \frac{-(2-m)\pm \sqrt{1-m+m^2}}{3}\,. \end{equation} The same Eq. (\ref{2.64}) also admits the complex, PT-invariant periodic solution (with PT-eigenvalue $+1$) \begin{equation}\label{2.67} \phi = \frac{A(1-m)}{{\rm dn}^2[\beta x,m]}+A y+ i B m\sqrt{1-m} \frac{{\rm cn}[\beta x,m] {\rm sn}[\beta x,m]}{{\rm dn}^2[\beta x,m]}\,, \end{equation} provided \begin{equation}\label{2.68} B = \pm A\,,~~bA = -3\beta^2\,,~~a = (2-m+6y)\beta^2\,,~~ y = \frac{-(2-m)\pm \sqrt{16-16m+m^2}}{6}\,. \end{equation} Note that this is a nonsingular, complex, PT-invariant solution which vanishes in the hyperbolic limit $m = 1$. \subsection{Discrete $\phi^4$ Equation} We now discuss two discrete models and show that both these models also admit PT-invariant periodic and hyperbolic kink solutions. Let us consider the discrete $\phi^4$ equation \begin{equation}\label{2.40} \frac{1}{h^2}[\phi_{n+1}+\phi_{n-1}-2\phi_n]+ a \phi_n -\frac{\lambda}{2} \phi_{n}^{2}[\phi_{n+1}+\phi_{n-1}] = 0\,. \end{equation} It is easy to check that Eq. (\ref{2.40}) admits an exact periodic kink solution \cite{cooper} \begin{equation}\label{2.41} \phi_n = A \sqrt{m} {\rm sn}(\beta n,m)\,, \end{equation} provided \begin{equation}\label{2.42} A^2 = \frac{2 {\rm sn}^2(\beta,m)}{h^2 \lambda}\,,~~ a h^2 = 2[1-{\rm cn}(\beta,m) {\rm dn}(\beta,m)]\,. \end{equation} The same model also admits a PT-invariant periodic kink solution \begin{equation}\label{2.43} \phi_n = A\sqrt{m} {\rm sn}(\beta n,m) +i B\sqrt{m} {\rm cn}(\beta n,m)\,, \end{equation} provided \begin{equation}\label{2.44} B = \pm A\,,~~A^2 = \frac{2{\rm sn}^2 (\beta,m)}{h^2 \lambda [1+{\rm dn}(\beta,m)]^2} \,,~~a h^2 = 2\left[1-\frac{2{\rm cn}(\beta,m)}{1+{\rm dn}(\beta,m)}\right]\,. \end{equation} Further, the model also admits yet another PT-invariant periodic kink solution \begin{equation}\label{2.45} \phi_n = A\sqrt{m} {\rm sn}(\beta n,m) +i B {\rm dn}(\beta n,m)\,, \end{equation} provided \begin{equation}\label{2.46} B = \pm A\,,~~A^2 = \frac{2{\rm sn}^2 (\beta,m)}{h^2 \lambda [1+{\rm cn}(\beta,m)]^2} \,,~~a h^2 = 2\left[1-\frac{2{\rm dn}(\beta,m)}{1+{\rm cn}(\beta,m)}\right]\,. \end{equation} In the limit $m=1$, both the solutions (\ref{2.43}) and (\ref{2.45}) go over to the hyperbolic PT-invariant kink solution \begin{equation}\label{2.47} \phi_n = A \tanh(\beta n) +i B {\rm sech}(\beta n)\,, \end{equation} provided \begin{equation}\label{2.48} B = \pm A\,,~~A^2 = \frac{2\tanh^2 (\frac{\beta}{2})} {h^2 \lambda}\,,~~a h^2 = 2\tanh^2\left(\frac{\beta}{2}\right)\,. \end{equation} While deriving results in this and the next subsection, we have made use of several not so well known identities satisfied by the Jacobi elliptic functions \cite{kls}. \subsection{Discrete mKdV Equation} Let us consider the discrete mKdV equation \begin{equation}\label{2.49} \frac{du_n}{dt}+ \alpha (u_{n+1} -u_{n-1}) -\lambda u_{n}^{2}(u_{n+1} -u_{n-1}) =0\,. \end{equation} It is easily checked that this model admits the periodic kink solution \cite{dmkdv} \begin{equation}\label{2.50} u_n = A\sqrt{m} {\rm sn}[\beta (n-vt),m]\,, \end{equation} provided \begin{equation}\label{2.51} \lambda A^2 = \alpha {\rm sn}^2(\beta, m)\,, ~~\beta v = 2\alpha {\rm sn}(\beta,m)\,.
\end{equation} Remarkably, the same model (\ref{2.49}) also admits a complex PT-invariant periodic kink solution \begin{equation}\label{2.52} u_n = A \sqrt{m} {\rm sn}[\beta (n-vt),m] +iB \sqrt{m} {\rm cn}[\beta (n-vt),m]\,, \end{equation} provided \begin{equation}\label{2.53} B = \pm A\,,~~\lambda A^2 = \frac{\alpha {\rm sn}^2(\beta, m)} {[1+{\rm dn}(\beta,m)]^2}\,,~~\beta v = \frac{4\alpha {\rm sn}(\beta,m)} {1+{\rm dn}(\beta,m)}\,. \end{equation} Further, the same model also admits yet another complex PT-invariant periodic kink solution \begin{equation}\label{2.54} u_n = A \sqrt{m} {\rm sn}[\beta (n-vt),m] +iB {\rm dn}[\beta (n-vt),m]\,, \end{equation} provided \begin{equation}\label{2.55} B = \pm A\,,~~\lambda A^2 = \frac{\alpha {\rm sn}^2(\beta, m)} {[1+{\rm cn}(\beta,m)]^2}\,,~~\beta v = \frac{4\alpha {\rm sn}(\beta,m)} {1+{\rm cn}(\beta,m)}\,. \end{equation} In the limit $m=1$, both the complex PT-invariant solutions (\ref{2.52}) and (\ref{2.54}) go over to the complex PT-invariant hyperbolic kink solution \begin{equation}\label{2.56} u_n = A \tanh(\beta n) + i B{\rm sech}(\beta n)\,, \end{equation} provided \begin{equation}\label{2.57} B = \pm A\,,~~\lambda A^2 = \alpha \tanh^2(\beta/2)\,, ~~\beta v = 4 \alpha \tanh(\beta/2) \,. \end{equation} \section{PT-Invariant Solutions in Three Coupled Models} We now consider three coupled models and show that in all these cases one has PT-invariant solutions for the coupled fields. In particular, we show that these models admit three different types of complex, PT-invariant periodic as well as hyperbolic solutions: solutions with PT eigenvalue $+1$ or $-1$ in both the fields, and solutions with PT eigenvalue $+1$ in one field and $-1$ in the other field. \subsection{Coupled $\phi^4$ Model} We first consider a coupled $\phi^4$ model \begin{equation}\label{3.1} \phi_{xx} = a_1 \phi + b_1 \phi^3 + \alpha \phi \psi^2\,, \end{equation} \begin{equation}\label{3.2} \psi_{xx} = a_2 \psi + b_2 \psi^3 + \alpha \psi \phi^2\,, \end{equation} and show that in this case all three types (i.e. those with PT-eigenvalue $+1$ or $-1$ in both the fields or $+1$ in one field and $-1$ in the other field) of PT-invariant periodic as well as hyperbolic solutions are allowed. We shall see that there are solutions not only in terms of Lam\'e polynomials of order one but also in terms of Lam\'e polynomials of order two. \subsubsection{Solutions in Terms of Lam\'e Polynomials of Order One} Let us first discuss solutions in terms of Lam\'e polynomials of order one. {\bf Case I: Solutions With PT Eigenvalue $-1$ in Both The Fields} We first discuss PT-invariant solutions with PT eigenvalue $-1$ in both the fields. It is easy to check that one of the exact solutions to Eqs. (\ref{3.1}), (\ref{3.2}) is \cite{ks3} \begin{equation}\label{3.3} \phi = A\sqrt{m} {\rm sn}[\beta x,m]\,,~~\psi = B \sqrt{m} {\rm sn}[\beta x, m]\,, \end{equation} provided \begin{equation}\label{3.4} b_1 A^2 +\alpha B^2 = b_2 B^2 + \alpha A^2 = 2\beta^2\,, ~~a_1 = a_2 = -(1+m)\beta^2\,.
\end{equation} The same coupled model also admits the PT-invariant periodic solution \begin{eqnarray}\label{3.5} &&\phi = A \sqrt{m} {\rm sn}[\beta x,m]+ i D\sqrt{m} {\rm cn}[\beta x,m]\,, \nonumber \\ &&\psi = B \sqrt{m} {\rm sn}[\beta x,m]+ i F\sqrt{m} {\rm cn}[\beta x,m]\,, \end{eqnarray} provided \begin{equation}\label{3.6} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{(2-m)\beta^2}{2}\,, \end{equation} and further \begin{equation}\label{3.7} 2(b_1 A^2+\alpha B^2) = 2(b_2 B^2+\alpha A^2) = \beta^2\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. Further, the same model also admits another PT-invariant periodic solution \begin{eqnarray}\label{3.8} &&\phi = A \sqrt{m} {\rm sn}[\beta x,m]+ i D {\rm dn}[\beta x,m]\,, \nonumber \\ &&\psi = B \sqrt{m} {\rm sn}[\beta x,m]+ i F {\rm dn}[\beta x,m]\,, \end{eqnarray} provided \begin{equation}\label{3.9} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{(2m-1)\beta^2}{2}\,, \end{equation} and if Eq. (\ref{3.7}) is satisfied. Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. In the limit $m=1$, both the periodic PT-invariant solutions (\ref{3.5}) and (\ref{3.8}) go over to the coupled hyperbolic PT-invariant solution \begin{eqnarray}\label{3.10} &&\phi = A \tanh(\beta x)+ i D {\rm sech}(\beta x)\,, \nonumber \\ &&\psi = B \tanh(\beta x)+ i F {\rm sech}(\beta x)\,, \end{eqnarray} provided Eq. (\ref{3.7}) is satisfied and if further \begin{equation}\label{3.11} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{\beta^2}{2}\,. \end{equation} On solving Eq. (\ref{3.7}) we find that \begin{equation}\label{3.12} A^2 = \frac{(b_2 -\alpha)\beta^2}{2(b_1 b_2 -\alpha^2)}\,, ~~B^2 = \frac{(b_1 -\alpha)\beta^2}{2(b_1 b_2 -\alpha^2)}\,, \end{equation} provided $b_1 b_2 \ne \alpha^2$. In the special case when $b_1 b_2 = \alpha^2$ which implies $b_1 = b_2 = \alpha$, instead of the two relations of Eq. (\ref{3.7}), we only have a single relation \begin{equation}\label{3.13} 2b_1 (A^2+B^2) = \beta^2\,, \end{equation} and hence, in this case, $A,B$ are indeterminate except that they satisfy the constraint (\ref{3.13}). Yet another exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) is \begin{eqnarray}\label{3.3a} &&\phi = A\sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ &&\psi = B\sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \end{eqnarray} provided \begin{equation}\label{3.4a} b_1 A^2 +\alpha B^2 = b_2 B^2 + \alpha A^2 = -2\beta^2\,, ~~a_1 = a_2 = (2m-1)\beta^2\,. \end{equation} Remarkably, we find that the same coupled model also admits the PT-invariant periodic solution \begin{eqnarray}\label{3.5a} &&\phi = A \sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)} + i D\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ &&\psi = B \sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)} + i F\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \end{eqnarray} with PT-eigenvalue $-1$ in both the fields provided \begin{equation}\label{3.6a} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{(2-m)\beta^2}{2}\,, \end{equation} and further \begin{equation}\label{3.7a} 2(b_1 A^2+\alpha B^2) = 2(b_2 B^2+\alpha A^2) = -\beta^2\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. On solving Eq. 
(\ref{3.7a}) we find that \begin{equation}\label{3.12a} A^2 = -\frac{(b_2 -\alpha)\beta^2}{2(b_1 b_2 -\alpha^2)}\,, ~~B^2 = -\frac{(b_1 -\alpha)\beta^2}{2(b_1 b_2 -\alpha^2)}\,, \end{equation} provided $b_1 b_2 \ne \alpha^2$. In the special case when $b_1 b_2 = \alpha^2$ which implies $b_1 = b_2 = \alpha$, instead of the two relations of Eq. (\ref{3.7a}), we only have a single relation \begin{equation}\label{3.13a} 2b_1 (A^2+B^2) = -\beta^2\,, \end{equation} and hence, in this case, $A,B$ are indeterminate except that they satisfy the constraint (\ref{3.13a}). \vskip 0.1truein \noindent{\bf Case II: Solutions with Mixed PT Eigenvalues} We now discuss mixed PT-invariant solutions, i.e. PT-invariant solutions with PT eigenvalue +$1$ in one field and $-1$ in the other field. It is easy to check that one of the exact solutions to Eqs. (\ref{3.1}), (\ref{3.2}) is \begin{equation}\label{3.14} \phi = A\sqrt{m} {\rm sn}[\beta x,m]\,,~~\psi = B \sqrt{m} {\rm cn}[\beta x, m]\,, \end{equation} provided \begin{equation}\label{3.15} b_1 A^2 -\alpha B^2 = \alpha A^2 -b_2 B^2 = 2\beta^2\,, ~~a_1 +m \alpha B^2 = -(1+m)\beta^2\,, ~~a_2 +m \alpha A^2 = (2m-1)\beta^2\, . \end{equation} We find that the same coupled model also admits the mixed PT-invariant periodic solution \begin{eqnarray}\label{3.16} &&\phi = A \sqrt{m} {\rm sn}[\beta x,m]+ i D\sqrt{m} {\rm cn}[\beta x,m]\,, \nonumber \\ &&\psi = B \sqrt{m} {\rm cn}[\beta x,m]+ i F\sqrt{m} {\rm sn}[\beta x,m]\,, \end{eqnarray} provided \begin{equation}\label{3.17} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{(2-m)\beta^2}{2}\,, \end{equation} and further \begin{equation}\label{3.18} 2(b_1 A^2-\alpha B^2) = 2(\alpha A^2 -b_2 B^2) = \beta^2\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. It is easy to check that one of the exact solutions to Eqs. (\ref{3.1}), (\ref{3.2}) is \begin{equation}\label{3.14a} \phi = A\sqrt{m} {\rm sn}[\beta x,m]\,,~~\psi = B {\rm dn}[\beta x, m]\,, \end{equation} provided \begin{equation}\label{3.15a} b_1 A^2 -\alpha B^2 = \alpha A^2 -b_2 B^2 = 2\beta^2\,, ~~a_1 + \alpha B^2 = -(1+m)\beta^2\,, ~~a_2 + \alpha A^2 = (2-m)\beta^2\, . \end{equation} The same model also admits another PT-invariant periodic solution \begin{eqnarray}\label{3.19} &&\phi = A \sqrt{m} {\rm sn}[\beta x,m]+ i D {\rm dn}[\beta x,m]\,, \nonumber \\ &&\psi = B {\rm dn}[\beta x,m]+ i F \sqrt{m} {\rm sn}[\beta x,m]\,, \end{eqnarray} provided \begin{equation}\label{3.20} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{(2m-1)\beta^2}{2}\,, \end{equation} and further if Eq. (\ref{3.18}) is satisfied. Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. In the limit $m=1$, both the periodic PT-invariant solutions (\ref{3.16}) and (\ref{3.19}) go over to the coupled, hyperbolic, mixed PT-invariant solution \begin{eqnarray}\label{3.21} &&\phi = A \tanh(\beta x)+ i D {\rm sech}(\beta x)\,, \nonumber \\ &&\psi = B {\rm sech}(\beta x)+ i F \tanh(\beta x)\,, \end{eqnarray} provided Eq. (\ref{3.18}) is satisfied and if further \begin{equation}\label{3.22} D = \pm A\,,~~F = \pm B\,,~~a_1 = a_2 = -\frac{\beta^2}{2}\,. \end{equation} On solving Eq. (\ref{3.18}) we find that \begin{equation}\label{3.23} A^2 = \frac{(b_2 -\alpha)\beta^2}{2(b_1 b_2 -\alpha^2)}\,, ~~~ B^2 = \frac{(\alpha-b_1)\beta^2}{2(b_1 b_2 -\alpha^2)}\,, \end{equation} provided $b_1 b_2 \ne \alpha^2$. In the special case when $b_1 b_2 = \alpha^2$ which implies $b_1 = b_2 = \alpha$, instead of the two relations of Eq.
(\ref{3.18}), we only have a single relation \begin{equation}\label{3.24} 2b_1 (A^2-B^2) = \beta^2\,. \end{equation} Thus $A,B$ are indeterminate in this case except that they must satisfy the constraint (\ref{3.24}). Yet another periodic solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) is \begin{eqnarray}\label{3.3b} &&\phi = A\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ &&\psi = B\sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \end{eqnarray} provided Eq. (\ref{3.15}) is satisfied. We find that the same coupled model also admits the PT-invariant periodic solution \begin{eqnarray}\label{3.5b} &&\phi = A \sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)} + i D\sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ &&\psi = B \sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)} + i F\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ \end{eqnarray} provided Eq. (\ref{3.18}) is satisfied and if further \begin{equation}\label{3.6b} D = \pm A\,,~~F = \mp B\,,~~a_1 = a_2 = -\frac{(2-m)\beta^2}{2}\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \mp B$ are correlated. \vskip 0.1truein \noindent{\bf Case III: Solutions With PT Eigenvalue +$1$ in Both The Fields} We now discuss complex PT-invariant periodic solutions with PT-eigenvalue $+1$ in both the fields. In our earlier paper we have already discussed PT-invariant solutions of the form ${\rm cn}(x,m) \pm i{\rm sn}(x,m)$ and ${\rm dn}(x,m) \pm i {\rm sn}(x,m)$ with PT-eigenvalue $+1$. We now show that there is another PT-invariant solution with PT-eigenvalue $+1$ in both the fields. One exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) is \begin{equation}\label{3.25} \phi = A\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, ~~\psi = B\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \end{equation} provided Eq. (\ref{3.4}) is satisfied. We find that the same coupled model also admits the PT-invariant periodic solution with PT-eigenvalue $+1$ in both the fields \begin{eqnarray}\label{3.27} &&\phi = A \sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)} + i D\sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ &&\psi = B \sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}(\beta x, m)} + i F\sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}(\beta x, m)}\,, \nonumber \\ \end{eqnarray} provided Eqs. (\ref{3.6}) and (\ref{3.7}) are satisfied. \subsubsection{Solutions in Terms of Lam\'e Polynomials of Order Two} We now show that for the coupled $\phi^4$ model (\ref{3.1}), (\ref{3.2}) all three types (i.e. those with PT-eigenvalue $+1$ or $-1$ for both the fields or $+1$ in one field and $-1$ in the other field) of PT-invariant solutions are allowed in terms of Lam\'e polynomials of order two. \vskip 0.1truein \noindent {\bf Case I: Solutions With PT Eigenvalue $-1$ in Both The Fields} It is easy to check that \begin{equation}\label{3.29} \phi = A m \frac{{\rm cn}(\beta x, m){\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)}\,, ~~\psi = B \sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)}\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.30} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~b_1 A^2 = -6(1-m)\beta^2\,,~~ a_1 = (5m-4)\beta^2\,,~~a_2 = (5m-1)\beta^2\,.
\end{equation} Remarkably, the PT-invariant combination with PT-eigenvalue $-1$ \begin{eqnarray}\label{3.31} &&\phi = A m \frac{{\rm cn}(\beta x, m){\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)} +iD\left[\frac{1-m}{{\rm dn}^2(\beta x, m)}+y\right]\,, \nonumber \\ &&\psi = B \sqrt{m(1-m)} \frac{{\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)} +iF\sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}^2(\beta x, m)}\,, \end{eqnarray} is also an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.32} b_1 = b_2 = \alpha\,,~~D = \pm A\,,~~F = \mp B\,,~~2 b_1 A^2 y = 3 \beta^2\,, \end{equation} \begin{equation}\label{3.34} a_1 = b_1 A^2 +[2-m+(9/2)y]\beta^2\,,~~ a_2 = b_1 A^2 +[2-m+(3/2)y]\beta^2\,, \end{equation} and $y$ is given by Eq. (\ref{2.68}). \vskip 0.1truein \noindent{\bf Case II: Solutions With Mixed PT Eigenvalues} Now let us discuss the so called mixed PT-invariant solutions, i.e. those with PT-eigenvalue $+1$ in one field and $-1$ in the other field. It is easy to check that \begin{equation}\label{3.35} \phi = A [{\rm dn}^2(\beta x, m)+y]\,,~~\psi = B m {\rm sn}(\beta x, m) {\rm cn}(\beta x, m)\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.36} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~(2y+2-m) b_1 A^2 = -6 \beta^2\,, \end{equation} \begin{equation}\label{3.37} a_1 = [4(2-m)+6y]\beta^2 -[y^2-(1-m)]b_1 A^2\,,~~ a_2 = (2-m)\beta^2 -[y^2-(1-m)]b_1 A^2\,, \end{equation} and $y$ is given by Eq. (\ref{2.66}). There is a related PT-invariant solution with PT-eigenvalue $-1$ in one field and +$1$ in the other field. In particular, \begin{eqnarray}\label{3.39} &&\phi = A [{\rm dn}^2(\beta x, m)+y] +iD m {\rm cn}(\beta x, m) {\rm sn}(\beta x, m)\,, \nonumber \\ &&\psi = B m {\rm cn}(\beta x, m) {\rm sn}(\beta x, m) +iF [{\rm dn}^2(\beta x, m)+z]\,, \end{eqnarray} is an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.40} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~F = \mp B\,,~~ y \ne z\,,~~ (y-z) b_1 A^2 = -(3/2) \beta^2\,, \end{equation} \begin{equation}\label{3.41} a_1 = [2-m+(3/2)(3y+z)]\beta^2\,,~~ a_2 = [2-m+(3/2)(3z+y)]\beta^2\, , \end{equation} and both $y$ and $z$ are different; they are given by the two roots of Eq. (\ref{2.68}). There is another PT-invariant solution with PT-eigenvalue $-1$ in one field and $+1$ in the other field which is related to the solution (\ref{3.35}). In particular, \begin{eqnarray}\label{3.39a} &&\phi = A [{\rm dn}^2(\beta x, m)+y] +iD \sqrt{m} {\rm dn}(\beta x, m) {\rm sn}(\beta x, m)\,, \nonumber \\ &&\psi = B m {\rm cn}(\beta x, m) {\rm sn}(\beta x, m) +iF \sqrt{m} {\rm cn}(\beta x,m) {\rm dn}(\beta x, m)\,, \end{eqnarray} is an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.40a} b_1 = b_2 = \alpha\,,~~D = \pm A\,,~~F = \mp B\,,~~ (y+1-m) b_1 A^2 = -(3/2) \beta^2\,, \end{equation} \begin{equation}\label{3.41a} a_1 = [5-4m+(9/2)y]\beta^2+(1-m)(y+1)b_1 A^2\,,~~ a_2 = [2-m+(3/2)y]\beta^2 +(1-m)(y+1)b_1 A^2\,, \end{equation} and $y$ is given by \begin{equation}\label{3.42b} y = \frac{-(5-4m)\pm\sqrt{1-16m+16 m^2}}{6}\, .
\end{equation} In the limit $m=1$, both the solutions (\ref{3.39}) and (\ref{3.39a}) go over to the corresponding hyperbolic PT-invariant solution \begin{eqnarray}\label{3.39d} &&\phi = A [{\rm sech}^2(\beta x)+y] +iD {\rm sech}(\beta x) \tanh(\beta x)\,, \nonumber \\ &&\psi = B {\rm sech}(\beta x) \tanh(\beta x) +iF {\rm sech}^2(\beta x)\,, \end{eqnarray} provided \begin{equation}\label{3.40d} b_1 = b_2 = \alpha\,,~~D = \pm A\,,~~F = \mp B\,,~~ y= -1/3,~~ z=0\,,~~b_1 A^2 = (9/2) \beta^2\,, \end{equation} \begin{equation}\label{3.41d} a_1 = -(1/2)\beta^2\,,~~a_2 = (1/2)\beta^2\,. \end{equation} It is easy to check that \begin{equation}\label{3.43} \phi = A [{\rm dn}^2(\beta x, m)+y]\,,~~\psi = B \sqrt{m} {\rm sn}(\beta x, m) {\rm dn}(\beta x, m)\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.44} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~(2y+1) b_1 A^2 = -6 \beta^2\,, \end{equation} \begin{equation}\label{3.45} a_1 = [4(2-m)+6y]\beta^2 - b_1 A^2 y^2\,,~~ a_2 = (5-4m)\beta^2 - b_1 A^2 y^2\,, \end{equation} while $y$ is as given by Eq. (\ref{2.66}). There is a related PT-invariant solution with PT-eigenvalue $-1$ in one field and $+1$ in the other field. In particular, \begin{eqnarray}\label{3.46} &&\phi = A [{\rm dn}^2(\beta x, m)+y] +iD \sqrt{m} {\rm dn}(\beta x, m) {\rm sn}(\beta x, m)\,, \nonumber \\ &&\psi = B \sqrt{m} {\rm dn}(\beta x, m) {\rm sn}(\beta x, m) +iF [{\rm dn}^2(\beta x, m)+z]\,, \end{eqnarray} is an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.47} b_1 = b_2 = \alpha\,,~~B= \pm A\,,~~F = \mp B\,,~~ y \ne z\,,~~ (y-z) b_1 A^2 = -(3/2) \beta^2\,, \end{equation} \begin{equation}\label{3.48} a_1 = [5-4m+(3/2)(3y+z)]\beta^2\,,~~ a_2 = [5-4m+(3/2)(3z+y)]\beta^2\, , \end{equation} and both $y$ and $z$ are different; they are given by the two roots of Eq. (\ref{3.42b}). There is another PT-invariant solution with PT-eigenvalue $-1$ in one field and $+1$ in the other field which is related to solution (\ref{3.43}). In particular, \begin{eqnarray}\label{3.46a} &&\phi = A [{\rm dn}^2(\beta x, m)+y] +iD m {\rm cn}(\beta x, m) {\rm sn}(\beta x, m)\,, \nonumber \\ &&\psi = B \sqrt{m} {\rm dn}(\beta x, m) {\rm sn}(\beta x, m) +iF \sqrt{m} {\rm cn}(\beta x,m) {\rm dn}(\beta x, m)\,, \end{eqnarray} is an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.47a} b_1 = b_2 = \alpha\,,~~B= \pm A\,,~~F = \mp B\,,~~ b_1 A^2 y = -(3/2) \beta^2\,, \end{equation} \begin{equation}\label{3.48a} a_1 = [2-m+(9/2)y]\beta^2 -(1-m) b_1 A^2\,,~~ a_2 = [2-m+(3/2)y]\beta^2 -(1-m) b_1 A^2\, , \end{equation} and $y$ is given by Eq. (\ref{2.68}). In the limit $m=1$, both the solutions (\ref{3.46}) and (\ref{3.46a}) go over to the hyperbolic PT-invariant solution (\ref{3.39d}). It is easy to check that \begin{equation}\label{3.50} \phi = A \sqrt{m} {\rm cn}(\beta x, m) {\rm dn}(\beta x, m)\,,~~ \psi = B \sqrt{m} {\rm sn}(\beta x, m) {\rm dn}(\beta x, m)\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.51} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~m b_1 A^2 = -6 \beta^2\,,~~ a_1 = (5-m)\beta^2\,,~~a_2 = (5-4m)\beta^2\,. \end{equation} There is a related PT-invariant solution with PT-eigenvalue $-1$ in one field and $+1$ in the other field.
In particular, \begin{eqnarray}\label{3.52} &&\phi = A \sqrt{m} {\rm cn}(\beta x, m) {\rm dn}(\beta x, m) +iD m {\rm cn}(\beta x, m) {\rm sn}(\beta x, m)\,, \nonumber \\ &&\psi = B \sqrt{m} {\rm dn}(\beta x, m) {\rm sn}(\beta x, m) +iF [{\rm dn}^2(\beta x, m)+y]\,, \end{eqnarray} is an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.53} b_1 = b_2 = \alpha\,,~~B= \pm A\,,~~F = \mp B\,,~~ (y+1-m) b_1 A^2 = (3/2) \beta^2\,, \end{equation} \begin{equation}\label{3.54} a_1 = (2-m)\beta^2 +[y^2-(1-m)]b_1 A^2\,,~~ a_2 = (5-4m+3y)\beta^2 +[y^2-(1-m)]b_1 A^2\,, \end{equation} while $y$ is given by Eq. (\ref{3.42b}). It is easy to check that \begin{equation}\label{3.55} \phi = A \sqrt{m} {\rm cn}(\beta x, m) {\rm dn}(\beta x, m)\,,~~ \psi = B m {\rm sn}(\beta x, m) {\rm cn}(\beta x, m)\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.56} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~ b_1 A^2 = -6 \beta^2\,,~~ a_1 = (5m-1)\beta^2\,,~~a_2 = (5m-4)\beta^2\,. \end{equation} There is a related PT-invariant solution with PT-eigenvalue $-1$ in one field and $+1$ in the other field. In particular, \begin{eqnarray}\label{3.57} &&\phi = A \sqrt{m} {\rm cn}(\beta x, m) {\rm dn}(\beta x, m) +iD \sqrt{m} {\rm dn}(\beta x, m) {\rm sn}(\beta x, m)\,, \nonumber \\ &&\psi = B m {\rm cn}(\beta x, m) {\rm sn}(\beta x, m) +iF [{\rm dn}^2(\beta x, m)+y]\,, \end{eqnarray} is an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.58} b_1 = b_2 = \alpha\,,~~B= \pm A\,,~~F = \mp B\,,~~ y b_1 A^2 = (3/2) \beta^2\,, \end{equation} \begin{equation}\label{3.59} a_1 = [2-m+(3/2)y]\beta^2 +(1-m) b_1 A^2\,,~~ a_2 = [2-m+(9/2)y]\beta^2 +(1-m) b_1 A^2\,, \end{equation} while $y$ is given by Eq. (\ref{2.68}). In the limit $m=1$, both the solutions (\ref{3.52}) and (\ref{3.57}) go over to the hyperbolic PT-invariant solution (\ref{3.39d}). It is easy to check that \begin{equation}\label{3.60} \phi = A\left[\frac{(1-m)}{{\rm dn}^2(\beta x, m)}+y\right]\,, ~~\psi = B m \frac{{\rm sn}(\beta x, m){\rm cn}(\beta x, m)}{{\rm dn}^2(\beta x, m)}\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{eqnarray}\label{3.61} &&b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~(2y+2-m)b_1 A^2 = -6\beta^2\,, \nonumber \\ &&a_1 = [4(2-m)+6y]\beta^2 -[y^2-(1-m)]b_1 A^2\,,~~ a_2 = (2-m)\beta^2 -[y^2-(1-m)]b_1 A^2\,, \end{eqnarray} while $y$ is given by Eq. (\ref{2.66}). The PT-invariant combination \begin{eqnarray}\label{3.62} &&\phi = A \left[\frac{(1-m)}{{\rm dn}^2(\beta x, m)}+y\right] +iD m \frac{{\rm cn}(\beta x, m){\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)}\,, \nonumber \\ &&\psi = B m \frac{{\rm cn}(\beta x, m){\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)} +iF \left[\frac{(1-m)}{{\rm dn}^2(\beta x, m)}+z\right]\,, \end{eqnarray} is also an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.63} b_1 = b_2 = \alpha\,,~~D = \pm A\,,~~F = \mp B\,,~~2 b_1 A^2 (y-z) = -3 \beta^2\,, \end{equation} \begin{equation}\label{3.65} a_1 = [2-m+(3/2)(3y+z)]\beta^2\,,~~ a_2 = [2-m+(3/2)(y+3z)]\beta^2\,, \end{equation} and $y$ and $z$ are unequal; they are given by the two roots of Eq. (\ref{2.68}). \vskip 0.1truein \noindent{\bf Case III: Solutions With PT Eigenvalue $+1$ in Both The Fields} Finally, let us discuss PT-invariant solutions in terms of Lam\'e polynomials of order two with PT-eigenvalue $+1$ in both the fields.
In this context we first note that \begin{equation}\label{3.66} \phi = A\left[\frac{(1-m)}{{\rm dn}^2(\beta x, m)}+y\right]\,, ~~\psi = B \sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}^2(\beta x, m)}\,, \end{equation} is an exact solution to the coupled Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.67} b_1 = b_2 = \alpha\,,~~B = \pm A\,,~~(2y+1)b_1 A^2 = -6\beta^2\,,~~ a_1 = [4(2-m)+6y]\beta^2 - b_1 A^2 y^2\,,~~ a_2 = (5-4m)\beta^2 - b_1 A^2 y^2\,, \end{equation} while $y$ is given by Eq. (\ref{2.66}). The PT-invariant combination \begin{eqnarray}\label{3.68} &&\phi = A \left[\frac{(1-m)}{{\rm dn}^2(\beta x, m)}+y\right] +iD m \frac{{\rm cn}(\beta x, m){\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)} \nonumber \\ &&\psi = B \sqrt{m} \frac{{\rm cn}(\beta x, m)}{{\rm dn}^2(\beta x, m)} +iF \sqrt{m(1-m)}\frac{{\rm sn}(\beta x, m)}{{\rm dn}^2(\beta x, m)} \end{eqnarray} is also an exact solution to Eqs. (\ref{3.1}), (\ref{3.2}) provided \begin{equation}\label{3.69} b_1 = b_2 = \alpha\,,~~D = \pm A\,,~~F = \mp B\,,~~2 b_1 A^2 = -3 \beta^2\,, \end{equation} \begin{equation}\label{3.71} a_1 = [2-m+(9/2)y]\beta^2 -(1-m) b_1 A^2\,,~~ a_2 = [2-m+(3/2)y]\beta^2 -(1-m) b_1 A^2\,, \end{equation} while $y$ is given by Eq. (\ref{2.68}). \subsection{Coupled KdV Equations} We now consider a coupled KdV model \cite{zhou} which has received some attention in the literature. In our recent paper \cite{ks}, we have already obtained two PT-invariant solutions with PT-eigenvalue $+1$ in both the fields. We now show that the same model has another novel PT-invariant periodic solution with PT-eigenvalue $+1$ in both the fields. The coupled KdV equations are \begin{eqnarray}\label{4.1} &&u_t + \alpha u u_x + \eta v v_{x} + u_{xxx} = 0\,, \nonumber \\ &&v_t + \delta u v_x + v_{xxx} = 0\,. \end{eqnarray} It is easy to check that the coupled Eqs. (\ref{4.1}) have the periodic solution \begin{equation}\label{4.2} u = \frac{A(1-m)}{{\rm dn}^2[\beta(x-ct), m]}\,,~~ v = \frac{B(1-m)}{{\rm dn}^2[\beta(x-ct), m]}\,, \end{equation} provided \begin{equation}\label{4.3} \delta A = 12 \beta^2\,,~~\eta B^2 = (\delta - \alpha)A^2\,,~~ c = 4(2-m) \beta^2\,. \end{equation} Remarkably, the same model also admits the PT-invariant periodic solution \begin{eqnarray}\label{4.4} &&u = \frac{A(1-m)}{{\rm dn}^2[\beta(x-ct),m]} +iD m\sqrt{1-m} \frac{{\rm sn}[\beta(x-ct),m]{\rm cn}[\beta(x-ct),m]} {{\rm dn}^2[\beta(x-ct), m]}\,, \nonumber \\ &&v = \frac{B(1-m)}{{\rm dn}^2[\beta(x-ct), m]} +iF m\sqrt{1-m} \frac{{\rm sn}[\beta(x-ct),m]{\rm cn}[\beta(x-ct),m]} {{\rm dn}^2[\beta(x-ct), m]}\,, \end{eqnarray} provided \begin{equation}\label{4.5} D = \pm A\,,~~F = \pm B\,,~~\delta A = 6 \beta^2\,,~~\eta B^2 = (\delta - \alpha)A^2\,,~~c = (2-m) \beta^2\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. It is worth pointing out that this nonsingular solution vanishes in the hyperbolic limit of $m=1$. \subsection{Coupled KdV-mKdV Model} Recently we had considered \cite{ks} a coupled KdV-mKdV model \cite{coupled} \begin{eqnarray}\label{5.1} &&u_t + u_{xxx} + 6u u_x+2\alpha u v v_x = 0\,, \nonumber \\ &&v_t + v_{xxx} + 6v^2 v_x+\gamma v u_x = 0\,, \end{eqnarray} and obtained PT-invariant solutions with PT-eigenvalue $+1$ in both the fields. We now show that the same model also admits PT-invariant solutions with PT-eigenvalue $+1$ in one field and $-1$ in the other field. 
Let us first note that \begin{equation}\label{5.2} u = A{\rm dn}^2[\beta(x-ct),m]+A y\,,~~v = B\sqrt{m} {\rm sn}[\beta(x-ct),m]\,, \end{equation} is an exact solution of the coupled Eqs. (\ref{5.1}) provided \begin{equation}\label{5.3} 4\gamma A -12 B^2 = 12 \beta^2 = 6A -\alpha B^2\,,~~ c = -(1+m) \beta^2\,,~~ y = -\frac{(3-m)}{4}\,. \end{equation} It is easy to check that the same model also admits the PT-invariant solution \begin{eqnarray}\label{5.4} &&u = A[{\rm dn}^2[\beta(x-ct),m]+y]+iD \sqrt{m} {\rm sn}[\beta(x-ct),m]{\rm dn}[\beta(x-ct),m] \,, \nonumber \\ &&v = B\sqrt{m} {\rm sn}[\beta(x-ct),m]+iF {\rm dn}[\beta(x-ct),m]\,, \end{eqnarray} provided \begin{equation}\label{5.5} D = \pm A\,,~~F = \mp B\,,~~2\gamma A - 12 B^2 = 3 \beta^2 = 3A -\alpha B^2\,,~~ c = -\frac{(2m-1)}{2} \beta^2\,,~~y = -\frac{(3-2m)}{4}\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \mp B$ are correlated. Further, the same model also admits another PT-invariant solution \begin{eqnarray}\label{5.6} &&u = A[{\rm dn}^2[\beta(x-ct),m]+ y] +iD m {\rm sn}[\beta(x-ct),m]{\rm cn}[\beta(x-ct),m]\,, \nonumber \\ &&v = B \sqrt{m} {\rm sn}[\beta(x-ct),m] +iF \sqrt{m} {\rm cn}[\beta(x-ct),m]\,, \end{eqnarray} provided \begin{equation}\label{5.7} D = \pm A\,,~~F = \mp B\,,~~2\gamma A -12B^2 = 3 \beta^2 = 3A -\alpha B^2\,,~~ c = -\frac{(2-m)}{2} \beta^2\,,~~y = -\frac{(2-m)}{4}\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \mp B$ are correlated. In the limit $m=1$, both the solutions (\ref{5.4}) and (\ref{5.6}) go over to the hyperbolic mixed PT-invariant solution \begin{eqnarray}\label{5.4a} &&u = A[{\rm sech}^2 \beta(x-ct)+y]+iD {\rm sech} \beta(x-ct)\tanh \beta(x-ct) \,, \nonumber \\ &&v = B \tanh \beta(x-ct)+iF {\rm sech} \beta(x-ct)\,, \end{eqnarray} provided \begin{equation}\label{5.5a} D = \pm A\,,~~F = \mp B\,,~~2\gamma A - 12 B^2 = 3 \beta^2 = 3A -\alpha B^2\,,~~c = -(1/2) \beta^2\,,~~y = -1/4\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \mp B$ are correlated. Yet another solution to the coupled Eqs. (\ref{5.1}) is \begin{equation}\label{5.8} u = \frac{A(1-m)}{{\rm dn}^2[\beta(x-ct),m]}+A y\,,~~v = B\sqrt{m(1-m)} \frac{{\rm sn}[\beta(x-ct),m]}{{\rm dn}[\beta(x-ct), m]}\,, \end{equation} provided \begin{equation}\label{5.9} 4\gamma A +12 B^2 = 12 \beta^2 = 6A +\alpha B^2\,,~~ c = (2m-1) \beta^2\,,~~ y = -\frac{(3-2m)}{4}\,. \end{equation} It is easy to check that the same model also admits the PT-invariant solution \begin{eqnarray}\label{5.10} &&u = \frac{A(1-m)}{{\rm dn}^2[\beta(x-ct),m]}+Ay+iD \sqrt{m(1-m)} \frac{{\rm cn}[\beta(x-ct), m]{\rm sn}[\beta(x-ct),m]}{{\rm dn}^2[\beta(x-ct),m]} \,, \nonumber \\ &&v = B\sqrt{m(1-m)} \frac{{\rm sn}[\beta(x-ct),m]}{{\rm dn}[\beta(x-ct), m]} +iF\sqrt{m}\frac{{\rm cn}[\beta(x-ct), m]}{{\rm dn}[\beta(x-ct),m]}\,, \end{eqnarray} with mixed PT-eigenvalues provided \begin{equation}\label{5.11} D = \pm A\,,~~F = \pm B\,,~~2\gamma A - 12 B^2 = 3 \beta^2 = 3A -\alpha B^2\,,~~ c = -\frac{(2-m)}{2} \beta^2\,,~~y = -\frac{(2-m)}{4}\,. \end{equation} Note that the signs of $D = \pm A$ and $F = \pm B$ are correlated. \section{Summary and Conclusions} In this paper we have in a sense completed the program which we had initiated recently.
In particular, in a recent publication \cite{ks} we have shown through several examples that whenever a real nonlinear equation admits solutions in terms of ${\rm sech}\, x$ (${\rm sech}^2 x$), then the same equation also admits complex PT-invariant solutions with PT-eigenvalue $+1$ in terms of ${\rm sech}\, x \pm i \tanh x$ (${\rm sech}^2 x \pm i\, {\rm sech}\, x \tanh x$). Further, we have also shown that such PT-invariant solutions also exist in the corresponding periodic case. On the other hand, in this paper we have shown through several examples that whenever a real nonlinear equation admits kink solutions in terms of $\tanh x$, then the same equation also admits complex PT-invariant kink solutions with PT-eigenvalue $-1$ in terms of $\tanh x \pm i {\rm sech} x$. We have also shown that both the kink and the PT-invariant kink solutions have the same topological charge as well as the same kink energy. In addition, for several kink-bearing equations we have explicitly shown that even the PT-invariant kink solution is linearly stable. In the case of the $\phi^4$ kink we have shown that, like the usual $\phi^4$ kink, the PT-invariant $\phi^4$ kink too has two modes, and the corresponding eigenenergies are in fact identical for the usual and the PT-invariant kinks. We believe this is quite significant and the PT-invariant kink can have some physical relevance. It would be worthwhile to examine this issue in detail. Further, we have shown that such PT-invariant solutions also exist in the corresponding periodic case. In particular, we have shown through several examples that whenever a nonlinear equation admits periodic solutions in terms of the Jacobi elliptic function ${\rm sn}(x,m)$, then the same equation will also admit complex PT-invariant periodic solutions with PT-eigenvalue $-1$ in terms of ${\rm sn}(x,m) \pm i {\rm cn}(x,m)$ as well as ${\rm sn}(x,m) \pm i {\rm dn}(x,m)$. Moreover, in a few coupled models we have also shown the existence of PT-invariant periodic solutions in terms of Lam\'e polynomials of order one and two, with PT-eigenvalue being either $+1$ or $-1$ in both the fields or $+1$ in one field and $-1$ in the other field. These results raise several important questions. The obvious open question is whether these results are true in general. It would be nice if one can prove this in general, both in the hyperbolic as well as in the periodic case. In the absence of a general proof, it is worthwhile to look at more examples and see if this observation is true in additional cases or if there are some exceptions. The other obvious question is: what is the deeper reason why such solutions exist? Another question is about the significance of such solutions for real nonlinear equations. In this context we would like to remark that the symmetry of solutions of a nonlinear equation need not be the same as that of the equation itself but can be smaller. We believe that auto-B\"acklund transformations may also provide the solutions considered here in the case of both integrable \cite{backlund} and non-integrable models \cite{back2}. Normally, the complex solutions of a real nonlinear equation are not of direct relevance. However, being PT-invariant complex solutions, we believe they could have some physical significance including in coupled optical waveguides \cite{book,peng,ruter}.
One pointer in this direction is the following fact: for both the KdV and the mKdV equations, which are integrable, we have checked that the first three constants of motion evaluated on the PT-invariant complex solutions are in fact real, although their values differ from those for the usual hyperbolic solutions (and we suspect that in fact all the constants of motion are real and differ from those for the real hyperbolic solutions). This suggests that such solutions could be physically interesting. Thus, it would be worthwhile to study the stability of such PT-invariant solutions, which may shed some light on the possible significance of these solutions. We hope to address some of these issues in the near future. \section{Acknowledgments} One of us (AK) is grateful to the Indian National Science Academy (INSA) for the award of an INSA Senior Scientist position at Savitribai Phule Pune University. This work was supported in part by the U.S. Department of Energy.
{ "timestamp": "2016-01-26T02:07:18", "yymm": "1601", "arxiv_id": "1601.06330", "language": "en", "url": "https://arxiv.org/abs/1601.06330", "abstract": "For a large number of real nonlinear equations, either continuous or discrete, integrable or nonintegrable, uncoupled or coupled, we show that whenever a real nonlinear equation admits kink solutions in terms of $\\tanh \\beta x$, where $\\beta$ is the inverse of the kink width, it also admits solutions in terms of the PT-invariant combinations $\\tanh 2\\beta x \\pm i \\sech 2 \\beta x$, i.e. the kink width is reduced by half to that of the real kink solution. We show that both the kink and the PT-invariant kink are linearly stable and obtain expressions for the zero mode in the case of several PT-invariant kink solutions. Further, for a number of real nonlinear equations we show that whenever a nonlinear equation admits periodic kink solutions in terms of $\\sn(x,m)$, it also admits periodic solutions in terms of the PT-invariant combinations $\\sn(x,m) \\pm i \\cn(x,m)$ as well as $\\sn(x,m)\\pm i \\dn(x,m)$. Finally, for coupled equations we show that one cannot only have complex PT-invariant solutions with PT eigenvalue $+1$ or $-1$ in both the fields but one can also have solutions with PT eigenvalue $+1$ in one field and $-1$ in the other field.", "subjects": "Pattern Formation and Solitons (nlin.PS)", "title": "Novel PT-invariant Kink and Pulse Solutions For a Large Number of Real Nonlinear Equations", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759632491112, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405685663961 }
https://arxiv.org/abs/1606.00099
On A Subclass of Close-to-Convex Functions
In this paper, we introduce a subclass of close-to-convex functions defined in the open unit disk. We obtain the inclusion relationships, coefficient estimates and the Fekete-Szegő inequality. The results presented here provide extensions of those given in earlier works.
\section{Introduction} We begin by introducing the important classes of functions considered in this article. Let $\mathcal{A}$ denote the class of functions $f(z)$ \textit{normalized} by \[f(z)=z+\sum^\infty_{n=2}a_nz^n,\] which are analytic in the \textit{open} unit disk: \[\mathcal{U}:=\{z \in \mathbb{C}: |z| <1\}.\] Also, let $\mathcal{S}$ be the class of functions in $\mathcal{A}$ which are univalent in $\mathcal{U}$ and let $\mathcal{P}$ denote the class of analytic functions $p$ in $\mathcal{U}$ of the form \[p(z)=1+\sum^\infty_{n=1}p_nz^n,\] such that $p(0)=1$ and $\Re\{p(z)\}>0.$ Any function in $\mathcal{P}$ is called a function with \textit{positive real part} in $\mathcal{U}.$ A set $\mathcal{D}$ in the complex plane is said to be convex if the line segment joining any two points in $\mathcal{D}$ lies entirely in $\mathcal{D}$, and starlike if the line segment joining $w_0=0$ to every other point $w \in \mathcal{D}$ lies inside $\mathcal{D}$. If a function $f \in \mathcal{A}$ maps $\mathcal{U}$ onto a starlike (convex) domain, we say that $f$ is a starlike (convex) function. The equivalent analytic conditions for starlikeness and convexity are as follows: \[\Re\Big(\tfrac{\displaystyle zf'(z)}{\displaystyle f(z)}\Big)>0 \quad \text{and} \quad \Re\Big(1+\tfrac{\displaystyle zf''(z)}{\displaystyle f'(z)}\Big)>0,\] respectively. The classes consisting of starlike and convex functions are denoted by $\mathcal{S^*}$ and $\mathcal{C}$ respectively. It is well known that $f \in \mathcal{C}$ if and only if $zf'(z)\in \mathcal{S^*}.$ A function $f(z) \in \mathcal{S}$ is said to be \textit{starlike of order $\alpha$} if and only if \[\Re\Big(\tfrac{\displaystyle zf'(z)}{\displaystyle f(z)}\Big)>\alpha\] for some $\alpha$ ($0 \leq \alpha <1$). We denote by $\mathcal{S}^*(\alpha)$ the class of all functions in $\mathcal{S}$ which are starlike of order $\alpha$ in $\mathcal{U}$. Clearly, we have \[\mathcal{S}^*(\alpha) \subset \mathcal{S}^*(0)=\mathcal{S}^*.\] It is well known that if $f \in \mathcal{C},$ then $f \in \mathcal{S}^*(1/2).$ The converse is false as shown by the function $f(z)=z-\tfrac{1}{3}z^2.$ In 1952, Wilfred Kaplan [7] generalized the concept of a starlike function to that of a close-to-convex function. An analytic function $f$ is said to be close-to-convex if there exists a univalent starlike function $g$ such that for any $z \in \mathcal{U}$, the inequality \[\Re\Big(\tfrac{\displaystyle zf'(z)}{\displaystyle g(z)}\Big)>0\] holds. We let $\mathcal{K}$ denote the set of all functions that are normalized and close-to-convex in $\mathcal{U}$. All close-to-convex functions are univalent and their coefficients $a_n$ satisfy the Bieberbach inequality $|a_n| \leq n.$ Since convex and starlike functions are close-to-convex, the inclusion relationships \[\mathcal{C} \subset \mathcal{S^*} \subset \mathcal{K} \subset \mathcal{S}\] hold true. A function $f \in \mathcal{A}$ is said to be starlike with respect to symmetrical points in $\mathcal{U}$ if it satisfies \[\Re\Big(\tfrac{\displaystyle zf'(z)}{\displaystyle f(z)-f(-z)}\Big)>0.\] This class, denoted by $\mathcal{SSP}$, was introduced and studied by Sakaguchi in 1959 [13]. Since $\big(f(z)-f(-z)\big)/2$ is a starlike function [3] in $\mathcal{U}$, Sakaguchi's class $\mathcal{SSP}$ also belongs to $\mathcal{K}.$ Motivated by the class of starlike functions with respect to symmetric points, Gao and Zhou [4] discussed a class $\mathcal{K}_s$ of close-to-convex functions. \begin{definition} \normalfont[4] Let $f(z)$ be analytic in $\mathcal{U}$.
We say $f \in \mathcal{K}_s$ if there exists a function $g(z)\in \mathcal{S}^{*} ({1/2})$ such that \[\Re\Big(-\tfrac{\displaystyle z^2f'(z)}{\displaystyle g(z)g(-z)}\Big)>0.\] \end{definition} \begin{remark} Note that if $g(z)\in \mathcal{S^{*}} ({1/2})$, then $(-g(z)g(-z))/z \in \mathcal{S^*}$ [3]. \end{remark} Here, we recall the concept of subordination between analytic functions. Let $f(z)$ and $g(z)$ be two functions which are analytic in $\mathcal{U}$. The function $f(z)$ is \textit{subordinate} to $g(z)$, written as $f(z) \prec g(z)$, if there exists an analytic function $w(z)$ defined in $\mathcal{U}$ with \[w(0)=0 \quad \text{and} \quad |w(z)| < 1\] such that \[f(z)=g(w(z)).\] In particular, if $g$ is univalent in $\mathcal{U}$, then the subordination $f(z) \prec g(z)$ is equivalent to \[f(0)=g(0) \quad \text{and} \quad f(\mathcal{U}) \subset g(\mathcal{U}).\] Using the concept of subordination, Wang et al. [15] introduced a general class $\mathcal{K}_s(\varphi)$. \begin{definition} \normalfont[15] For a function $\varphi$ with positive real part, the class $\mathcal{K}_s(\varphi)$ consists of functions $f \in \mathcal{A}$ satisfying \[-\tfrac{\displaystyle z^2f'(z)}{\displaystyle g(z)g(-z)} \prec \displaystyle \varphi(z)\] for some function $g(z)\in \mathcal{S}^{*} ({1/2}).$ \end{definition} Recently, Goyal and Singh [6] introduced and studied the following subclass of analytic functions: \begin{definition} \normalfont[6] For a function $\varphi$ with positive real part, a function $f\in\mathcal{A}$ is said to be in the class $\mathcal{K}_s(\lambda,\mu,\varphi)$ if it satisfies the following subordination condition: \[\frac{\displaystyle z^2f'(z)+ z^{3}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{4}f'''(z)}{\displaystyle -g(z)g(-z)}\prec \varphi(z)\] where $0\leq\mu\leq\lambda\leq1$ and $g(z)\in \mathcal{S}^{*} ({1/2}).$ \end{definition} Motivated by the aforementioned works, we now introduce the following subclass of analytic functions: \begin{definition} Suppose $\varphi\in\mathcal{P}.$ A function $f\in\mathcal{A}$ is said to be in the class $\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$ if it satisfies the following subordination condition: \[\frac{z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{g_k(z)}\prec \varphi(z)\] where $0\leq\mu\leq\lambda\leq1$, $g(z) = z+\sum^\infty_{n=2}b_nz^n \in \mathcal{S}^*{(\tfrac{k-1}{k})}$, $k\geq 1$ is a fixed positive integer and $g_k(z)$ is defined by the following equality \begin{equation} g_k(z)= \prod_{v=0}^{k-1} \varepsilon^{-v}g(\varepsilon^v z) \end{equation} with $\varepsilon= e^{2\pi i/ k}.$ \end{definition} For $\varphi(z)=(1+Az)/(1+Bz)$, we get the class \begin{definition} A function $f\in\mathcal{A}$ is said to be in the class $\mathcal{K}_s^{(k)}(\lambda,\mu,A,B)$ if it satisfies the following subordination condition: \begin{equation} \frac{z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{g_k(z)}\prec \frac{1+Az}{1+Bz} \end{equation} where $0\leq\mu\leq\lambda\leq1$, $g(z) = z+\sum^\infty_{n=2}b_nz^n \in \mathcal{S}^*{(\tfrac{k-1}{k})}$, $k\geq 1$ is a fixed positive integer and $g_k(z)$ is defined by the following equality \[g_k(z)= \prod_{v=0}^{k-1} \varepsilon^{-v}g(\varepsilon^v z)\] with $\varepsilon= e^{2\pi i/ k}.$ \end{definition} The condition in (1.2) is equivalent to \begin{multline} \Big|\frac{z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{g_k(z)}-1\Big| \\<\Big|A+\frac{B(z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z))}{g_k(z)} \Big|.
\nonumber \end{multline} \begin{remark} (a) For $\mu=0$ and $k=2$, we have the class $\mathcal{K}_s(\lambda,A,B)$ [17]. \\(b) When $A=1-2\gamma$, $B=-1$ and $\lambda=\mu=0$, we obtain the class $\mathcal{K}_s^{(k)}(\gamma)$ [15]. In addition, if $k=2$, then we obtain the class $\mathcal{K}_s(\gamma)$ [11]. \\(c) When $A=\beta$, $B=-\alpha\beta$ and $\lambda=\mu=0$, we obtain the class $\mathcal{K}_s^{(k)}(\alpha,\beta)$ in [18]. In addition, if $k=2$, then we obtain the class $\mathcal{K}_s(\alpha,\beta)$ [16]. \end{remark} The following lemmas are needed in order to prove our main results: \begin{lemma}\normalfont[16] If $g(z)=z+\sum^\infty_{n=2}b_nz^n$ $\in \mathcal{S^{*}} (\tfrac{k-1}{k})$, then \begin{equation} G_k(z)=\tfrac{\displaystyle g_k(z)}{\displaystyle z^{k-1}}=z+\sum^\infty_{n=2}B_nz^n \in \mathcal{S^{*}} \subset \mathcal{S}. \end{equation} \end{lemma} \begin{lemma}\normalfont[12] Let $f(z)=1+\sum^\infty_{k=1}c_kz^k$ be analytic in $\mathcal{U}$ and $g(z)=1+\sum^\infty_{k=1}d_kz^k$ be analytic and convex in $\mathcal{U}$. If $f \prec g$, then \[|c_k| \leq |d_1| \quad \text{where} \quad k\in\mathbb{N}:=\{1,2,3,\ldots\}.\] \end{lemma} \begin{lemma}\normalfont[17] Let $\gamma \geq 0$ and $f\in \mathcal{K}$. Then \[F(z)=\tfrac{\displaystyle 1+\gamma}{\displaystyle z^{\gamma}}\int_0^z t^{\gamma -1 }f(t) dt \in \mathcal{K}. \] \end{lemma} \section{Main Results} We first prove the inclusion relationship for the class $\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$. \begin{theorem} Let $0\leq\mu\leq\lambda\leq1$. Then we have \[\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)\subset \mathcal{K} \subset \mathcal{S}. \] \end{theorem} \begin{proof} Consider $f \in \mathcal{K}_s^{(k)}(\lambda,\mu,\varphi).$ By Definition 1.4, we have \[\tfrac{\displaystyle z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{\displaystyle g_k(z)}\prec \varphi(z),\] which can be written as \[\tfrac{\displaystyle zF'(z)}{\displaystyle G_k(z)}\prec \varphi(z), \] where \begin{equation} F'(z)=f'(z)+zf''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{2}f'''(z) \end{equation} and $G_k(z)$ is defined in (1.3). A simple computation on (2.1) gives \[F(z)=(1-\lambda+ \mu) f(z)+ (\lambda - \mu) zf'(z) + \lambda\mu z^2f''(z).\] Since $\Re\varphi(z) >0$, we have \[\Re\tfrac{\displaystyle zF'(z)}{\displaystyle G_k(z)} >0. \] Also, since $G_k(z) \in \mathcal{S}^*$ (by Lemma 1.1), by the definition of a close-to-convex function we deduce that \[F(z)=(1-\lambda+ \mu) f(z)+ (\lambda - \mu) zf'(z) + \lambda\mu z^2f''(z) \in \mathcal{K}. \] In order to show $f\in \mathcal{K}$, we consider three cases: \\\textit{Case 1:} $\mu = \lambda =0$. It is then obvious that $f=F\in \mathcal{K}.$ \\\textit{Case 2:} $\mu =0, \lambda \not=0$. Then we obtain \[F(z)=(1-\lambda) f(z)+ \lambda zf'(z).\] By using the integrating factor $z^{\tfrac{1}{\lambda}-1}$, we get \[f(z)=\tfrac{\displaystyle1}{\displaystyle \lambda} z^{ 1-\tfrac{ 1}{ \lambda}}\int_0^z \displaystyle t^{\tfrac{1}{ \lambda}-2}F(t) dt. \] Taking $\gamma= (1/\lambda)-1$ in Lemma 1.3, we conclude that $f(z) \in \mathcal{K}.$ \\\textit{Case 3:} $\mu \not=0, \lambda \not=0$. Then we have \[F(z)=(1-\lambda+ \mu) f(z)+ (\lambda - \mu) zf'(z) + \lambda\mu z^2f''(z).
\] Let $G(z)=\tfrac{1}{(1-\lambda+ \mu)}F(z)$, so $G(z)\in \mathcal{K}.$ Then \begin{equation} G(z)=f(z)+\alpha zf'(z)+\beta z^2 f''(z), \end{equation} where $\alpha= \tfrac{\lambda-\mu}{1-\lambda+ \mu}$ and $\beta=\tfrac{\lambda\mu}{1-\lambda+ \mu}.$ Let $\delta$ and $\nu$ satisfy \[\delta+\nu=\alpha -\beta \quad \text{and} \quad \delta\nu=\beta.\] Then, (2.2) can be written as \[G(z)=f(z)+(\delta+\nu+\delta\nu) zf'(z)+\delta\nu z^2 f''(z).\] Let $p(z)=f(z)+\delta zf'(z)$; then \[p(z)+\nu zp'(z)=f(z)+(\delta+\nu+\delta\nu) zf'(z)+\delta\nu z^2 f''(z)=G(z).\] On the other hand, $p(z)+\nu zp'(z)=\nu z^{1-1/\nu}\Big(z^{1/\nu}p(z)\Big)'$. So, \[G(z)=\nu z^{1-1/\nu} \Big[\delta z^{1+1/\nu-1/\delta} \Big(z^{1/\delta} f(z)\Big)'\Big]'.\] Hence \[\delta z^{1+1/\nu -1/\delta} \Big(z^{1/\delta} f(z)\Big)'= \dfrac{1}{\nu}\int_0^z w^{1/\nu-1}G(w) dw.\] Multiplying both sides by $(1+\nu)$ and dividing by $z^{1/\nu}$, we get \[(1+\nu)\delta z^{1-1/\delta}\Big(z^{1/\delta}f(z)\Big)'=\dfrac{1+1/\nu}{z^{1/\nu}}\int_0^z w^{1/\nu-1}G(w) dw. \] Since $\gamma=1/\nu\geq 0$, by Lemma 1.3 we have \[H(z)=\dfrac{1+1/\nu}{z^{1/\nu}}\int_0^z w^{1/\nu-1}G(w) dw \in \mathcal{K}.\] Further, \[(1+\nu)z^{1/\delta}f(z)=\dfrac{1}{\delta}\int_0^z t^{1/\delta-1}H(t) dt.\] Multiplying both sides by $(1+\delta)$ and dividing by $z^{1/\delta}$, we get \[(1+\delta)(1+\nu)f(z)=\dfrac{1+1/\delta}{z^{1/\delta}}\int_0^z t^{1/\delta-1}H(t) dt. \] Since $\gamma=1/\delta\geq 0$, by Lemma 1.3 we have $f \in \mathcal{K}.$ This completes the proof of the theorem. \end{proof} Next, we give the coefficient estimates for functions belonging to the class $\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$. \begin{theorem} Let $0\leq\mu\leq\lambda\leq1$. If $f\in \mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$, then \[|a_n|\leq\tfrac{\displaystyle1}{\displaystyle1+(n-1)(\lambda-\mu+n\lambda\mu)}\Big(1+\tfrac{\displaystyle |\varphi'(0)|(n-1)}{\displaystyle2} \Big) \quad (n\in\mathbb{N}).\] \end{theorem} \begin{proof} From the definition of $\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$, we know that there exists a function with positive real part \[p(z) = 1+\sum^\infty_{n=1}p_nz^n\] such that \[p(z)=\frac{z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{g_k(z)} = \frac{zF'(z)}{G_k(z)}\] or \begin{equation}\label{eq:solve} zf'(z)+ z^2f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^3f'''(z)= p(z)G_k(z). \end{equation} By expanding both sides and equating the coefficients in (2.3), we get \begin{equation} na_n\big[1+(n-1)(\lambda-\mu+n\lambda\mu)\big]= B_n+p_{n-1}+p_1B_{n-1}+\cdots+p_{n-2}B_2. \end{equation} \\Since $G_k(z)$ is starlike, we have \begin{equation} |B_n|\leq n. \end{equation} Also, by Lemma 1.2, we know that \begin{equation} |p_n| = \Big|\tfrac{\displaystyle p^{(n)}(0)}{\displaystyle n!}\Big|\leq|\varphi'(0)| \quad (n\in\mathbb{N}). \end{equation} \\Combining (2.4), (2.5) and (2.6), we obtain \[n|a_n|\big[1+(n-1)(\lambda-\mu+n\lambda\mu)\big] \leq n + |\varphi'(0)| + |\varphi'(0)| \sum^{n-1}_{j=2}j, \] that is, \[n|a_n|\big[1+(n-1)(\lambda-\mu+n\lambda\mu)\big] \leq n\bigg(1+\tfrac{\displaystyle |\varphi'(0)|(n-1)}{\displaystyle 2}\bigg). \] \\This completes the proof.
\end{proof} Setting $\mu=0$ in Theorem 2.2, we obtain the following corollary. \begin{corollary} If $f\in \mathcal{K}_s^{(k)}(\lambda,\varphi)$, then \[|a_n|\leq\tfrac{\displaystyle1}{\displaystyle1+\lambda(n-1)}\Big(1+\tfrac{\displaystyle |\varphi'(0)|(n-1)}{\displaystyle2} \Big) \quad (n\in\mathbb{N}).\] \end{corollary} Furthermore, setting $\lambda=0$ in Corollary 2.1, we have \begin{corollary} If $f\in \mathcal{K}_s^{(k)}(\varphi)$, then \[|a_n|\leq\Big(1+\tfrac{\displaystyle |\varphi'(0)|(n-1)}{\displaystyle2} \Big) \quad (n\in\mathbb{N}).\] \end{corollary} Next, we obtain the Fekete-Szeg\"{o} inequality for the class $\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$. To prove our result, we need the following lemmas: \begin{lemma}\normalfont[8] If $p(z)=1+c_1z+c_2z^2+c_3z^3+...$ is a function with positive real part, then for any complex number $\mu$ \[|c_2-\mu c_1^2|\leq 2 \max\{1,|2\mu-1|\}, \] and the result is sharp for the functions given by $p(z)=\textstyle\frac{1+z^2}{1-z^2}$ and $p(z)=\frac{1+z}{1-z}.$ \end{lemma} \begin{lemma}\normalfont[8] Let $G(z)=z+b_2z^2+\cdots$ be in $\mathcal{S}^*.$ Then \[|b_3-\lambda b_2^2| \leq \max\{1,|3-4\lambda|\},\] which is sharp for the Koebe function $k(z)=z/(1-z)^2$ if $|\lambda-\tfrac{3}{4}|\geq \tfrac{1}{4}$ and for $(k(z^2))^{\tfrac{1}{2}} = \tfrac{\displaystyle z}{\displaystyle 1-z^2}$ if $|\lambda-\tfrac{3}{4}|\leq \frac{1}{4}.$ \end{lemma} \begin{theorem} Let $\varphi(z)=1+Q_1z+Q_2z^2+Q_3z^3+...$, where $\varphi$ is analytic in $\mathcal{U}$ and $\varphi'(0)=Q_1>0$. For a function $f(z)=z+a_2z^2+a_3z^3+...$ belonging to the class $\mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$ and $\delta \in \mathbb{C}$, the following sharp estimate holds: \begin{multline} |a_3-\delta a_2^2|\leq \tfrac{\displaystyle 1}{\displaystyle 3(1+2\lambda-2\mu+6\lambda\mu)}\max\{1,|3-4\alpha|\} \: + \: \tfrac{\displaystyle Q_1}{\displaystyle 3(1+2\lambda-2\mu+6\lambda\mu)}\max\{1,|2\beta-1|\} \: + \\ 2Q_1\Big|\tfrac{\displaystyle 1}{\displaystyle 3(1+2\lambda-2\mu+6\lambda\mu)}-\frac{\delta}{2(1+\lambda-\mu+2\lambda\mu)^2}\Big|, \end{multline} where \[\alpha=\tfrac{\displaystyle 3\delta(1+2\lambda-2\mu+6\lambda\mu)}{\displaystyle 4(1+\lambda-\mu+2\lambda\mu)}\] and \[\beta=\tfrac{\displaystyle 1}{\displaystyle 2}\Bigg(1-\tfrac{\displaystyle Q_2}{\displaystyle Q_1}-\tfrac{\displaystyle 3\delta Q_2^2d_1^2(1+2\lambda-2\mu+6\lambda\mu)}{\displaystyle 4(1+\lambda-\mu+2\lambda\mu)^2}\Bigg).\] \end{theorem} \begin{proof} If $f\in \mathcal{K}_s^{(k)}(\lambda,\mu,\varphi)$, then there exists an analytic function $w$ in $\mathcal{U}$ with $w(0)=0$ and $|w(z)|<1$ such that \begin{equation} \frac{z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{g_k(z)}=\varphi(w(z)).
\end{equation} The series expansion of \[\frac{z^kf'(z)+ z^{k+1}f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{k+2}f'''(z)}{g_k(z)} \] is given by \[1+(2a_2(1+\lambda+2\lambda\mu-\mu)-B_2)z+(3a_3(1+2\lambda+6\lambda\mu-2\mu)-2a_2(1+\lambda+2\lambda\mu-\mu)B_2+B_2^2-B_3)z^2+\cdots.\] Define the function $h$ by \begin{equation} h(z)=\tfrac{\displaystyle 1+w(z)}{\displaystyle 1-w(z)}=1+d_1z+d_2z^2+\cdots; \end{equation} then $\Re h(z)>0$ and $h(0)=1.$ Since \begin{align*} \varphi(w(z))&= \varphi\Bigg(\tfrac{\displaystyle h(z)-1}{\displaystyle h(z)+1}\Bigg)\\ &= 1+ \tfrac{\displaystyle 1}{\displaystyle 2}Q_1d_1z+\tfrac{\displaystyle 1}{\displaystyle 2}Q_1\Big(d_2-\tfrac{\displaystyle d_1^2}{\displaystyle 2}\Big)z^2+\tfrac{\displaystyle 1}{\displaystyle 4}Q_2d_1^2z^2+\cdots, \end{align*} it follows from (2.8) that \[a_2=\tfrac{\displaystyle 2B_2+Q_1d_1}{\displaystyle 4(1+\lambda-\mu+2\lambda\mu)}, \quad a_3=\tfrac{\displaystyle 2B_2Q_1d_1+2Q_1\Big(d_2-\tfrac{\displaystyle d_1^2}{\displaystyle 2}\Big)+Q_2d_1^2+4B_3}{\displaystyle 12(1+2 \lambda-2 \mu+6\lambda \mu)}.\] Therefore, we have \begin{multline} a_3-\delta a_2^2= \tfrac{\displaystyle 1}{\displaystyle 3(1+2\lambda-2\mu+6\lambda\mu)}(B_3-\alpha B_2^2) + \tfrac{\displaystyle Q_1}{\displaystyle 6(1+2\lambda-2\mu+6\lambda\mu)}(d_2-\beta d_1^2) \\+\tfrac{\displaystyle B_2Q_1d_1}{2}\Big(\tfrac{\displaystyle 1}{\displaystyle 3(1+2\lambda-2\mu+6\lambda\mu)}-\tfrac{\displaystyle \delta}{\displaystyle 2(1+\lambda-\mu+2\lambda\mu)^2}\Big), \end{multline} where \[\alpha=\tfrac{\displaystyle 3\delta(1+2\lambda-2\mu+6\lambda\mu)}{\displaystyle 4(1+\lambda-\mu+2\lambda\mu)}\] and \[\beta=\tfrac{\displaystyle 1}{\displaystyle 2}\Bigg(1-\tfrac{\displaystyle Q_2}{\displaystyle Q_1}-\tfrac{\displaystyle 3\delta Q_2^2d_1^2(1+2\lambda-2\mu+6\lambda\mu)}{\displaystyle 4(1+\lambda-\mu+2\lambda\mu)^2}\Bigg).\] The result now follows by applying Lemma 2.1 and Lemma 2.2. \end{proof} Lastly, we prove a sufficient condition for functions to belong to the class $\mathcal{K}_s^{(k)}(\lambda,\mu,A,B).$ \begin{theorem} Let $g(z)=z+\sum^\infty_{n=2}b_nz^n \in \mathcal{S}^{*}(\tfrac{k-1}{k})$ and $-1 \leq B < A \leq 1.$ If $f(z)\in \mathcal{A}$ satisfies the inequality \begin{equation} (1+|B|) \displaystyle\sum^\infty_{n=2}n[1+(n-1)(\lambda-\mu+n\lambda\mu)] |a_n|+(1+|A|)\displaystyle\sum^\infty_{n=2}|B_n| \leq A-B, \end{equation} where the coefficients $B_n$ $(n=2,3,\dots)$ are given by \normalfont(1.3), \textit{then $f(z) \in \mathcal{K}_s^{(k)}(\lambda,\mu,A,B).$} \end{theorem} \begin{proof} Let $F'$ and $G_k$ be given by (2.1) and (1.3), respectively.
Now let $M$ be defined by \\ \begin{align*} M &=\Big|zF'(z)- \tfrac{\displaystyle g_k(z)}{\displaystyle z^{k-1}}\Big|-\Big|\tfrac{\displaystyle Ag_k(z)}{\displaystyle z^{k-1}}-BzF'(z)\Big| \\ \nonumber & = \Big|zf'(z)+z^2f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{3}f'''(z)-\tfrac{\displaystyle g_k(z)}{\displaystyle z^{k-1}}\Big| \\ & \quad - \Big|A\tfrac{\displaystyle g_k(z)}{\displaystyle z^{k-1}}-B[zf'(z)+z^2f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{3}f'''(z)]\Big| \nonumber \\ &= \Big|z+\displaystyle\sum^\infty_{n=2}na_nz^n + \displaystyle(\lambda-\mu+2\lambda\mu)\sum^\infty_{n=2}n(n-1)a_nz^n +\displaystyle\lambda\mu\sum^\infty_{n=2}n(n-1)(n-2)a_nz^n-z \\ & \quad - \displaystyle\sum^\infty_{n=2}B_nz^n\Big| \\ & \quad - \Big|Az+A\displaystyle\sum^\infty_{n=2}B_nz^n-B[zf'(z)+z^2f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{3}f'''(z)]\Big| \\ &=\Big|\displaystyle\sum^\infty_{n=2}na_nz^n[1+(n-1)(\lambda-\mu+n\lambda\mu)]- \displaystyle\sum^\infty_{n=2}B_nz^n\Big| \\ & \quad - \Big|(A-B)z+A\displaystyle\sum^\infty_{n=2}B_nz^n-B\displaystyle\sum^\infty_{n=2}na_nz^n[1+(n-1)(\lambda-\mu+n\lambda\mu)] \Big| \end{align*} Then, for $|z|=r<1$, we have \begin{align*} M &\leq \displaystyle\sum^\infty_{n=2}n[1+(n-1)(\lambda-\mu+n\lambda\mu)]|a_n||z^n|+\displaystyle\sum^\infty_{n=2}|B_n||z|^n \\ & \quad - \Bigg[(A-B)|z|-|A|\displaystyle\sum^\infty_{n=2}|B_n||z^n|-|B|\displaystyle\sum^\infty_{n=2}n[1+(n-1)(\lambda-\mu+n\lambda\mu)] |a_n||z|^n\Bigg] \\& = (1+|B|) \displaystyle\sum^\infty_{n=2}n[1+(n-1)(\lambda-\mu+n\lambda\mu)] |a_n||z|^n-(A-B)|z|+(1+|A|)\displaystyle\sum^\infty_{n=2}|B_n||z|^n \\ &< \Bigg[-(A-B)+(1+|B|) \displaystyle\sum^\infty_{n=2}n[1+(n-1)(\lambda-\mu+n\lambda\mu)] |a_n|+(1+|A|)\displaystyle\sum^\infty_{n=2}|B_n|\Bigg]|z| \\ & \leq 0. \end{align*} From the above calculation, we obtain $M<0$. Thus, we have \begin{multline} \Big|zf'(z)+z^2f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{3}f'''(z)-\tfrac{\displaystyle g_k(z)}{\displaystyle z^{k-1}}\Big| \\< \Big|A\tfrac{\displaystyle g_k(z)}{\displaystyle z^{k-1}}-B[zf'(z)+z^2f''(z)(\lambda-\mu+2\lambda\mu)+\lambda\mu z^{3}f'''(z)]\Big|. \nonumber \end{multline} \\Therefore, $f \in \mathcal{K}_s^{(k)}(\lambda,\mu,A,B).$ \end{proof} Setting $\mu= 0$ in Theorem 2.4, we get \begin{corollary} Let $f(z)=z+\sum^\infty_{n=2}a_nz^n \in \mathcal{A}$ and $g(z)=z+\sum^\infty_{n=2}b_nz^n \in \mathcal{S}^{*}(\tfrac{k-1}{k})$, and let $-1 \leq B < A \leq 1.$ If \[(1+|B|) \displaystyle\sum^\infty_{n=2}n[1+\lambda(n-1)] |a_n|+(1+|A|)\displaystyle\sum^\infty_{n=2}|B_n| \leq A-B,\] where $B_n$ is given by \normalfont(1.3), then $f(z) \in \mathcal{K}_s^{(k)}(\lambda,A,B).$ \end{corollary} Further setting $\lambda=0$ in Corollary 2.3, we obtain \begin{corollary} Let $f(z)=z+\sum^\infty_{n=2}a_nz^n \in \mathcal{A}$ and $g(z)=z+\sum^\infty_{n=2}b_nz^n \in \mathcal{S}^{*}(\tfrac{k-1}{k})$, and let $-1 \leq B < A \leq 1.$ If \[(1+|B|) \displaystyle\sum^\infty_{n=2}n|a_n|+(1+|A|)\displaystyle\sum^\infty_{n=2}|B_n| \leq A-B,\] where $B_n$ is given by \normalfont(1.3), then $f(z) \in \mathcal{K}_s^{(k)}(A,B).$ \end{corollary} \begin{remark} By taking $A=\beta, B=-\alpha\beta$ in Corollary 2.4, we get the result obtained in [15, Theorem 5]. In addition, by taking $A=1-2\gamma, B=-1$, we get the result obtained in [13, Theorem 2]. \end{remark}
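To make the construction of $g_k$ in (1.1) concrete, consider the following OCaml snippet (our illustration, added here; it is not part of the original development). For $k=2$ we pick $g(z)=z/(1-z)$, which lies in $\mathcal{S}^*(1/2)$; then $\varepsilon=-1$, so $g_2(z)=-g(z)g(-z)$ and $G_2(z)=g_2(z)/z=z/(1-z^2)$, and $zG_2'(z)/G_2(z)=(1+z^2)/(1-z^2)$. The code samples the real part of this quotient over the unit disk and confirms that it stays positive, as starlikeness of $G_2$ in Lemma 1.1 requires.
\begin{verbatim}
(* A numerical spot-check of Lemma 1.1 for k = 2 with g(z) = z/(1-z);
   it is a sanity check only, not a proof. *)
open Complex

let quotient z =
  (* z G_2'(z)/G_2(z) for G_2(z) = z/(1 - z^2), simplified by hand. *)
  let one = { re = 1.; im = 0. } in
  let z2 = mul z z in
  div (add one z2) (sub one z2)

let () =
  let ok = ref true in
  for i = 1 to 99 do                    (* radii 0.01 .. 0.99 *)
    for j = 0 to 63 do                  (* 64 angles *)
      let r = float_of_int i /. 100. in
      let t = 2. *. Float.pi *. float_of_int j /. 64. in
      let z = { re = r *. cos t; im = r *. sin t } in
      if (quotient z).re <= 0. then ok := false
    done
  done;
  Printf.printf "Re(z G_2'/G_2) > 0 on all samples: %b\n" !ok
\end{verbatim}
Sampling is of course only a plausibility check; here positivity also follows analytically, since $(1+z^2)/(1-z^2)$ maps $\mathcal{U}$ onto the right half-plane.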
{ "timestamp": "2016-06-02T02:03:55", "yymm": "1606", "arxiv_id": "1606.00099", "language": "en", "url": "https://arxiv.org/abs/1606.00099", "abstract": "In this paper, we introduce a subclass of close-to-convex functions defined in the open unit disk. We obtain the inclusion relationships, coefficient estimates and Fekete-Szego inequality. The results presented here would provide extension of those given in earlier works.", "subjects": "Complex Variables (math.CV)", "title": "On A Subclass of Close-to-Convex Functions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759610129465, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405669524593 }
https://arxiv.org/abs/1301.1702
Formal Verification of Nonlinear Inequalities with Taylor Interval Approximations
We present a formal tool for verification of multivariate nonlinear inequalities. Our verification method is based on interval arithmetic with Taylor approximations. Our tool is implemented in the HOL Light proof assistant and it is capable of verifying multivariate nonlinear polynomial and non-polynomial inequalities on rectangular domains. One of the main features of our work is an efficient implementation of the verification procedure which can prove non-trivial high-dimensional inequalities in several seconds. We developed the verification tool as a part of the Flyspeck project (a formal proof of the Kepler conjecture). The Flyspeck project includes about 1000 nonlinear inequalities. We successfully tested our method on more than 100 Flyspeck inequalities and estimated that the formal verification procedure is about 3000 times slower than an informal verification method implemented in C++. We also describe future work and prospective optimizations for our method.
\section{Introduction} In this paper, we present a tool for formal verification of nonlinear inequalities in HOL Light~\cite{HOLL}. Our tool can verify multivariate polynomial and non-polynomial inequalities on rectangular domains. The verification technique is based on interval arithmetic with Taylor approximations. A short user manual describing our tool is available~\cite{nonlinear-manual}. Solovyev's thesis~\cite{solovyev-thesis} contains additional information about the verification tool and the corresponding formal techniques. Our work is an integral part of the Flyspeck project~\cite{hales:DSP:2006:432,website:FlyspeckProject}. This project was launched in 2003 by T.~Hales to produce a complete formal verification of Hales' proof of the Kepler conjecture~\cite{Hales:2006:DCG,DSP}. There are several major computationally extensive verification problems in the Flyspeck project. One of these problems is a formal verification of about 1000 multivariate nonlinear inequalities. We have successfully tested our formal verification tool on several simple Flyspeck nonlinear inequalities (we have verified 130 inequalities). In theory, almost all Flyspeck inequalities can be verified with our formal verification procedure. A rough estimate shows that the current formal procedure is about 3000 times slower than the corresponding informal verification algorithm in C++~\cite{hales-algorithm}. With this estimate, it will take more than 4 years to verify all Flyspeck nonlinear inequalities formally on a single computer (the informal procedure requires about 9 hours). There exist other formal methods for verification of nonlinear inequalities. First of all, general quantifier elimination procedures may be used to solve some polynomial inequalities~\cite{tarski-decision,collins,mclaughlin-harrison}. Another method for proving polynomial inequalities is known as the sums-of-squares (SOS) method~\cite{harrison-sos}. A tool called MetiTarski~\cite{metitarski-prover,metitarski-future} is capable of verifying multivariate polynomial and non-polynomial inequalities on unbounded domains. It approximates non-polynomial functions by suitable polynomial bounds and then applies quantifier elimination procedures to the resulting polynomials. The Bernstein polynomial technique~\cite{roland-thesis} allows one to verify multivariate polynomial inequalities. Each polynomial can be written as a sum of polynomials in the Bernstein polynomial basis. Coefficients of this representation give bounds of the polynomial itself. A complete formal implementation of this method is done in PVS~\cite{MN12}. Non-polynomial inequalities must first be converted into polynomial inequalities by finding polynomial bounds. One way to find polynomial bounds is to use Taylor model approximations~\cite{roland-taylor}. R.~Zumkeller's thesis describes this method in detail~\cite{roland-thesis}. He also implemented an informal global optimization tool based on Bernstein polynomials~\cite{website:sergei} in Haskell. There exists a tool in the PVS proof assistant which uses the same technique as our tool (interval arithmetic with Taylor approximations)~\cite{DLM09}, but this tool works only with univariate functions. Methods based on quantifier elimination procedures do not scale well when the number of variables grows and when inequalities become more complicated. The Bernstein polynomial technique works well for polynomial inequalities but does not show very good results for inequalities involving special functions in high dimensions.
\section{Verification of Nonlinear Inequalities} \subsection{Nonlinear Inequalities and Interval Taylor Approximations} Consider the problem: prove that \[ \forall {\bf x} \in \mathbb{R}^n, {\bf x} \in D \implies f({\bf x}) < 0. \] $D$ is assumed to be a rectangle given by $D = \{(x_1, \ldots, x_n)\ |\ a_i \le x_i \le b_i\} = [{\bf a}, {\bf b}]$. We also assume that $f({\bf x})$ is twice continuously differentiable in an open domain $U \supset D$. One way to solve the problem is to consider a finite partition of $D = \bigcup_j D^{j}$ such that each $D^{j}$ is rectangular. Also, we assume that $\bar{f}(D^{j}) < 0$ where $\bar{f}$ is an interval approximation of $f$ (that is, $\bar{f}(D^{j})$ is the interval corresponding to the interval evaluation of $f(x_1,\ldots,x_n)$ for input intervals $x_i \in [a_i^{j}, b_i^{j}]$; clearly, $\bar{f}(D) < 0 \implies f(D) < 0$). It is easy to see that such a partition always exists if $f$ is continuous, $f(D) < 0$, and $f$ can be arbitrarily well approximated by $\bar{f}$ on sufficiently small domains. (It follows by the compactness argument: for each point $x \in D$ there is a small rectangle $D^{j}$ such that $x \in \rm{interior}(D^{j})$ and $\bar{f}(D^{j}) < 0$; $D$ is compact, so there are finitely many rectangles $D^{j}$ such that $D = \bigcup_j D^{j}$.) The main difficulty is finding a suitable partition $\{D^{j}\}$. The easiest way is the following. Let $D^{0} = D$ and compute $\bar{f}(D^{0})$. If this value is less than $0$ (in the interval sense), then we are done. Otherwise divide $D^{0}$ into two regions $D^{0} = D^{1}_1 \cup D^{1}_2$. Then repeat the procedure for regions with upper index $1$. In general, either $\bar{f}(D^k_j) < 0$ or we get $D^k_j = D^{k+1}_{2j-1} \cup D^{k+1}_{2j}$. If we divide each region such that sizes of new regions become arbitrarily small in all dimensions, then the process will eventually stop and a suitable partition of $D$ will be found. An easy way to achieve this goal is to divide each region in half along the coordinate for which its size is maximal, i.e., if $D^{k}_j = \{a_i \le x_i \le b_i\} = [{\bf a}, {\bf b}]$ and $b_m - a_m = \max_i \{b_i - a_i\}$, then set $D^{k+1}_{2j-1} = [{\bf a}, {\bf b}^{(m,y)}]$ and $D^{k+1}_{2j} = [{\bf a}^{(m,y)}, {\bf b}]$. Here, $y = (a_m + b_m) / 2$ and ${\bf a}^{(m,y)}$ equals ${\bf a}$ with the $m$-th component replaced by~$y$. As a result of the procedure above, we get a finite set of subregions $S = \{D^k_i\}$ with the property: for each $D^k_i \in S$ either $\bar{f}(D^k_i) < 0$ or $D^k_i = D^{k+1}_{i_1} \cup D^{k+1}_{i_2}$. In the last case, the verification relies on a trivial theorem \[ D = D_1 \cup D_2 \,\mathrel{{\mathlarger{\wedge}}}\, f(D_1) < 0 \,\mathrel{{\mathlarger{\wedge}}}\, f(D_2) < 0 \implies f(D) < 0. \] Interval arithmetic works for any continuous function (at least in theory where numerical errors are not considered) but it is not very efficient in general. This is due to the dependency problem when even a simple function could require a lot of subdivisions in order to get the result on the full domain. Even a trivial inequality $f(x) = x - x < 1$ will require subdivisions for the domain $x \in [0,1]$. Indeed, $\bar{f}([0,1]) = [0,1] - [0,1] = [-1,1]$. Of course, we can simplify $x - x = 0$ but it is not possible to do for a function $f(x) = x - \arctan(x)$ which has similar behaviour near $0$. For this function, $\bar{f}([0,1]) = [0,1] - [0, \pi/4] = [-\pi/4, 1]$ and we don't get $f(x) < 1$.
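To illustrate the subdivision procedure just described, here is a minimal OCaml sketch (our own simplification, not the actual verification tool: it uses native floating-point arithmetic and ignores rounding errors, so it proves nothing). It bounds a univariate function by naive interval evaluation and bisects the domain whenever the bound is inconclusive:
\begin{verbatim}
(* Intervals with native floats; outward rounding is ignored here. *)
type interval = { lo : float; hi : float }

let ( +: ) a b = { lo = a.lo +. b.lo; hi = a.hi +. b.hi }
let ( -: ) a b = { lo = a.lo -. b.hi; hi = a.hi -. b.lo }

(* atan is increasing, so its interval extension is immediate. *)
let atan_i a = { lo = atan a.lo; hi = atan a.hi }

(* Naive interval evaluation of f(x) = x - atan x; the dependency
   problem makes this bound pessimistic on wide intervals. *)
let f_bar x = x -: atan_i x

(* Check f(x) < c on [lo, hi] by bisection along the domain. *)
let rec verify ?(depth = 30) lo hi c =
  let b = f_bar { lo; hi } in
  if b.hi < c then true
  else if depth = 0 then false
  else
    let m = 0.5 *. (lo +. hi) in
    verify ~depth:(depth - 1) lo m c && verify ~depth:(depth - 1) m hi c

let () = Printf.printf "%b\n" (verify 0.0 1.0 1.0)
\end{verbatim}
For $f(x)=x-\arctan x$ on $[0,1]$ the top-level interval bound is inconclusive, and the check succeeds after a single subdivision, exactly as in the discussion above.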
One way to decrease the dependency problem is to use Taylor approximations for computing bounds of $f$ on a given domain $D$. Fix ${\bf y} \in D = [{\bf a}, {\bf b}]$, then we can write \[ f({\bf x}) = f({\bf y}) + \sum_{i = 1}^n \partd{f}{x_i}({\bf y}) (x_i - y_i) + \frac{1}{2}\sum_{i, j = 1}^n \partd{^2 f}{x_i \partial x_j}({\bf p}) (x_i - y_i) (x_j - y_j) \] where ${\bf p} \in [{\bf a}, {\bf b}]$. Let ${\bf w} = \max\{{\bf y} - {\bf a}, {\bf b} - {\bf y}\}$ (all operations are componentwise). Suppose we have interval bounds for $f({\bf y}) \in [f_0^{lo}, f_0^{hi}]$, $\partd{f}{x_i}({\bf y}) \in [f_i^{lo}, f_i^{hi}]$ and $\partd{^2 f}{x_i \partial x_j}({\bf t}) \in [f_{ij}^{lo}, f_{ij}^{hi}]$ for all ${\bf t} \in D$. We can write \begin{multline*} \forall {\bf x} \in D,\ f({\bf x}) \le f({\bf y}) + \sum_{i = 1}^n \Abs{\partd{f}{x_i}({\bf y})} w_i + \frac{1}{2}\sum_{i, j = 1}^n \Abs{\partd{^2 f}{x_i \partial x_j}({\bf p})} w_i w_j \\ \le f_0^{hi} + \sum_{i = 1}^n \Abs{[f_i^{lo}, f_i^{hi}]} w_i + \frac{1}{2} \sum_{i, j = 1}^n \Abs{[f_{ij}^{lo}, f_{ij}^{hi}]} w_i w_j. \end{multline*} Absolute values of intervals are defined by $\Abs{[a, b]} = \max\{-a, b\}$. Let's see how well this approximation works on examples. Again, take $f(x) = x - x$ and $D = [0,1]$. We compute $f'(x) = 1 - 1 = 0$ and $f''(x) = 0$. Set $y = 0.5$ and $w = 0.5$. Suppose $\bar{f}(0.5) = [0.4, 0.6] - [0.4, 0.6] = [-0.2, 0.2]$ (we deliberately take a very poor interval approximation), then \[ \forall x \in [0,1],\ f(x) \le \bar{f}(0.5)^u + \sum_{i=1}^1 0 \times 0.5 + \sum_{i, j = 1}^1 0 \times 0.5 \times 0.5 = 0.2 < 1. \] In the same way, for $f(x) = x - \arctan x$ we get $f'(x) = 1 - \frac{1}{1 + x^2}$, $f''(x) = \frac{-2x}{(1 + x^2)^2}$. If $x \in [0,1]$, then $f''(x) \in [-2, 0] = [f_{11}^{lo}, f_{11}^{hi}]$ and hence $\abs{f''(x)} \le 2$. We compute \[ \forall x \in [0,1],\ f(x) \le 0.04 + 0.21 \times 0.5 + 2 \times 0.5^3 \le 0.4. \] We see that interval arithmetic with Taylor approximations works much better. Moreover, we don't need to abandon direct interval approximations completely: every time we have to verify whether $f(D_i) < 0$ we can first find an interval approximation $\bar{f}(D_i)$ and then compute a Taylor approximation. If we don't get the inequality in both cases, then we subdivide the domain. One simple trick which can be done with both interval and Taylor interval approximations is estimation of partial derivatives on a given domain. If it happens that $f_j(D_k) = \partd{f}{x_j}(D_k) \le 0$ or $f_j(D_k) \ge 0$ then it will be immediately possible to restrict further verifications to the boundary of $D_k = [{\bf a}, {\bf b}]$. Indeed, if $f_j(D_k) \le 0$ and $f(D_k|_{x_j = a_j}) < 0$ then $f(D_k) < 0$ since the function is decreasing along the $j$-th coordinate and its maximal value is attained at $x_j = a_j$. The same is true for increasing functions (consider $D_k|_{x_j = b_j}$). Moreover, if $\{x_j = a_j\}$ ($\{x_j = b_j\}$) is not on the boundary of the original domain $D$, then it is possible to completely ignore any further verifications for the region $D_k$. Indeed, if the restriction of $D_k$ is not on the boundary of the original domain, then there is another subdomain $D_j$ such that the restriction of $D_k$ is a subset of $D_j$ and the inequality is true on $D_j$. However, we need to be careful. Consider an example. Suppose $f(x) = -x^2 - 1$ and $D = [-1,1]$. Assume that we have $D_1 = [-1, 0]$ and $D_2 = [0,1]$. We get $f'(x) = -2x \ge 0$ on $[-1,0]$.
Hence, the function is increasing and we can consider the restricted domain $\{0\}$ which is not on the boundary of $[-1,1]$. Also, $f'(x) = -2x \le 0$ on $[0,1]$ and we again get $\{0\}$ as the restriction of $[0,1]$. If we don't continue verifications in both cases, then we will not be able to verify the inequality. In order to avoid this problem, we always check a strict inequality for decreasing functions, that is, we test if $f_j({\bf x}) \ge 0$ or $f_j({\bf x}) < 0$. Another trick is to check convexity of a function before subdividing a domain $D_k$. If we need to subdivide $D_k$ and find that $f_{jj}(D_k) = \partd{^2 f}{x_j \partial x_j}(D_k) \ge 0$, then it is enough to verify $f(D_k|_{x_j = a_j}) < 0$ and $f(D_k|_{x_j = b_j}) < 0$. By convexity of $f$ (i.e., $f$ attains its maximum on the boundary), we get $f(D_k) < 0$ from these two inequalities. \subsection{Solution Certificate Search Procedure} \label{nonlinear-certificate} A procedure based on the ideas presented above has been developed in C++ for informal verification of Flyspeck nonlinear inequalities~\cite{hales-algorithm}. The starting point of our implementation of a formal procedure for verification of nonlinear inequalities is a port of this original C++ program into OCaml. This OCaml program informally verifies a given nonlinear inequality on a rectangular domain by finding Taylor interval approximations and subdividing domains if necessary. The result of this program is just a boolean value: whether the inequality is true or false (there is a third option: verification could fail due to numerical instability or when subdomains become very small without any definite result). We have modified the OCaml informal verification procedure such that it returns a partition of the original domain in a special tree-like structure which also contains all necessary information about verification steps for each subdomain. We call this structure a solution certificate for a given nonlinear inequality. The informal procedure is called the solution certificate search procedure. A solution certificate is defined with the following OCaml type
\begin{verbatim}
type result_tree =
  | Result_false
  | Result_pass
  | Result_mono of mono_status list * result_tree
  | Result_glue of (int * bool * result_tree * result_tree)
  | Result_pass_mono of mono_status
  | Result_ref of int
\end{verbatim}
The record \verb'mono_status' contains monotonicity information (i.e., whether some first-order partial derivative is negative or positive). A simplified solution certificate search algorithm is given below in OCaml-like pseudo code.
\begin{verbatim}
let search f dom =
  let taylor_interval = {find Taylor approximation of f on dom}
  let bounds = {taylor_interval bounds}
  if bounds >= 0 then Result_false
  else if bounds < 0 then Result_pass
  else
    let d_bounds = {find bounds of partial derivatives}
    let mono = {list of negative and positive partial derivatives}
    if {mono is not empty} then
      let r_dom = {restrict dom using information from mono}
      Result_mono mono (search f r_dom)
    else
      let dd_bounds = {find bounds of second partial derivatives}
      if {the j-th second partial derivative is non-negative} then
        let dom1, dom2 = {restrict dom along j}
        let c1 = search f dom1
        let c2 = search f dom2
        Result_glue (j, true, c1, c2)
      else
        let j = {find j such that b_i - a_i is maximal}
        let dom1, dom2 = {split dom along j}
        let c1 = search f dom1
        let c2 = search f dom2
        Result_glue (j, false, c1, c2)
\end{verbatim}
If the inequality $f(x) < 0$ holds on $D$, then the algorithm (applied to $f$ and $D$) will return a solution certificate which does not contain \verb'Result_false' nodes (of course, the real algorithm could fail due to numerical instabilities and rounding errors). A solution certificate does not contain any explicit information about subdomains for which verification must be performed. All subdomains can be restored from a solution certificate and the initial domain $D$. For each \verb'Result_glue(j, false, c1, c2)' node, it is necessary to split the domain into two halves along the $j$-th coordinate. The second argument is the convexity flag. If it is true, then the current domain must be restricted to its left and right boundaries along the $j$-th coordinate. For new subdomains, the node contains their solution certificates: \verb'c1' and \verb'c2'. The domain also has to be modified for \verb'Result_mono' nodes. Each node of this type contains a list of indices and boolean parameters (packed in the \verb'mono_status' record) which indicate for which partial derivatives the monotonicity argument should be applied; boolean parameters determine if the corresponding partial derivatives are positive or negative. The simplified algorithm never returns nodes of type \verb'Result_pass_mono'. The real solution certificate search algorithm is a little more complicated. Every time the monotonicity argument is applied, it checks whether the restricted domain is on the boundary of the original domain or not (the original domain is an argument of the algorithm). If the restricted domain is not on the boundary of the original domain, then \verb'Result_pass_mono' will be returned. If a solution certificate contains nodes of type \verb'Result_pass_mono', then it is necessary to transform such a certificate to get new certificates which can be formally verified. Indeed, suppose we have a \verb'Result_pass_mono' node and the corresponding domain is $D_k$. \verb'Result_pass_mono' requires to apply the monotonicity argument to $D_k$, that is, to restrict this domain to its boundary along some coordinate. But it doesn't contain any information on how to verify the inequality on the restricted subdomain. We can only claim that there is another subdomain $D_j$ (corresponding to some other node of a solution certificate) such that the restriction of $D_k$ is a subset of $D_j$. In other words, to verify the inequality on $D_k$, we first need to find $D_j$ such that the restriction of $D_k$ is a subset of $D_j$ and such that the inequality can be verified on $D_j$.
To solve this problem, we transform a given solution certificate into a list of solution certificates and subdomains for which these new solution certificates work. Each solution certificate in the list may refer to previous solution certificates with \verb'Result_ref'. The last solution certificate in the list corresponds to the original domain. The transformation algorithm is the following:
\begin{verbatim}
let transform certificate acc =
  let sub_certs = {find all maximal sub-certificates which do not
                   contain Result_pass_mono}
  if {sub_certs contains certificate} then
    {add certificate to acc and return acc}
  else
    let sub_certs = {remove certificates consisting of a single
                     Result_ref from sub_certs}
    let paths = {find paths to sub-certificates in sub_certs}
    let _ = {add sub_certs and the corresponding paths to acc}
    let new_cert1 = {replace all sub_certs in certificate with references}
    let new_cert2 = {replace Result_pass_mono nodes in new_cert1 if they
                     can be verified using subdomains defined by paths in acc}
    transform new_cert2 acc
\end{verbatim}
This algorithm maintains a list \verb'acc' of solution certificates which do not contain nodes of type \verb'Result_pass_mono'. The list also contains paths to subdomains corresponding to certificates. Each path is a list of pairs and it can be used to construct the corresponding subdomain starting from the original domain. Each pair is one of \verb'("l", i)', \verb'("r", i)', \verb'("ml", i)', or \verb'("mr", i)' where $i$ is an index. \verb'"l"' and \verb'"r"' labels correspond to left and right subdomains after splitting. \verb'"ml"' and \verb'"mr"' correspond to left and right restricted subdomains. The index $i$ specifies the coordinate along which the operation must be performed. When a reference node \verb'Result_ref' is generated for a sub-certificate at the $j$-th position in the accumulator list \verb'acc', then the argument of \verb'Result_ref' is $j$. \section{Formal Verification} \label{formal} The first step of developing a formal verification procedure is formalization of all necessary theories involving the multivariate Taylor theorem and related topics. Standard HOL Light libraries contain a formalization of Euclidean vector space~\cite{harrison-euclidean} and define general Fr\'echet derivatives and Jacobian matrices for working with first-order partial derivatives. Also, HOL Light contains the general univariate Taylor theorem. We formalized all other important results including the theory of partial derivatives, the equality of second-order mixed partial derivatives, and the multivariate Taylor formula with the second-order error term. The main formal verification step is to compute a formal Taylor interval approximation for a function $f:\mathbb{R}^n \to \mathbb{R}$ on a given domain $D = [{\bf a}, {\bf b}]$. Each formal Taylor approximation includes the following data: a point ${\bf y} = ({\bf a} + {\bf b}) / 2 \in D$, a vector ${\bf w}$ which estimates the width of the domain and has the property ${\bf w} \ge \max\{{\bf b} - {\bf y}, {\bf y} - {\bf a}\}$ (all operations are componentwise), an interval bound of $f({\bf y}) \in [f^{lo}, f^{hi}]$, interval bounds of partial derivatives $f_i({\bf y}) \in [f_i^{lo}, f_i^{hi}] = d_i$ for all $i = 1,\ldots,n$, interval bounds of second-order partial derivatives on the full domain $f_{ij}({\bf x}) \in [f_{ij}^{lo}, f_{ij}^{hi}] = d_{ij}$ for all $i = 1,\ldots,n$, $j \le i$, and ${\bf x} \in D$.
Based on this data, an interval approximation of $f({\bf x})$ and its partial derivatives on $D$ can be computed. For instance, the following theorem gives an interval approximation of $f({\bf x})$ when $n = 2$: \begin{align*} w_1 \abs{d_1} + w_2 \abs{d_2} \le b &\,\mathrel{{\mathlarger{\wedge}}}\, w_1(w_1 \abs{d_{1,1}}) + w_2(w_2\abs{d_{2,2}} + 2 w_1 \abs{d_{2,1}}) \le e\\ &\,\mathrel{{\mathlarger{\wedge}}}\, b + 2^{-1} e \le a \,\mathrel{{\mathlarger{\wedge}}}\, l \le f^{lo} - a \,\mathrel{{\mathlarger{\wedge}}}\, f^{hi} + a \le h\\ &\implies \bigl(\forall {\bf x},\ {\bf x} \in [{\bf a}, {\bf b}] \implies f({\bf x}) \in [l,h]\bigr). \end{align*} (Here, $\abs{d_i} = \abs{[f_i^{lo}, f_i^{hi}]} = \max\{-f_i^{lo}, f_i^{hi}\}$.) Formal computations of Taylor interval approximations require a lot of basic arithmetic operations. We implemented efficient procedures for working with natural numbers and real numbers in HOL Light. Our implementation of formal natural number arithmetic works with numerals in an arbitrary fixed base. Our implementation improves the performance of standard HOL Light arithmetic operations with natural numbers by the factor $\log_2 b$ (where $b$ is a fixed base constant) for linear operations (in the size of input arguments) and by the factor $(\log_2 b)^2$ for quadratic operations. We approximate real numbers with floating-point numbers which have a fixed precision of the mantissa. This precision is controlled by an informal parameter which specifies the maximal number of digits in results of formal floating-point operations. All formal floating-point operations yield inequality theorems which approximate real results from above or below. Formal verification procedures are based on our implementation of interval arithmetic which works with formal floating-point numbers. We also cache results of all basic arithmetic operations to improve the performance of formal computations. A description of our formal verification procedure is technical and it can be found in~\cite{solovyev-thesis}. Here we give an example which demonstrates how the formal verification procedure works. Let $f(x) = x - 2$ and suppose we want to prove $f(x) < 0$ for $x \in [-1, 1]$. Suppose that we have the following solution certificate:
\begin{verbatim}
Result_glue {1, false,
  Result_pass_mono {[1, incr]},
  Result_mono {[1, incr], Result_pass}}
\end{verbatim}
This certificate indicates that the inequality may be verified by first splitting the domain into two subdomains along the first (and the only) variable; then the left branch follows from some other formal verification result by monotonicity (\verb'Result_pass_mono'); the right branch follows by the monotonicity argument and by a direct verification. This certificate cannot be used directly for a formal verification since we don't know how the left branch is proved. The first step is to transform this certificate into a list of certificates such that each certificate can be verified on subdomains specified by the corresponding paths. We get the following list of certificates:
\begin{verbatim}
[ ["r", 1], Result_mono {[1], Result_pass};
  ["l", 1], Result_mono {[1], Result_ref {0}};
  [], Result_glue {1, false, Result_ref {1}, Result_ref {0}} ]
\end{verbatim}
The first element corresponds to the right branch of the original \verb'Result_glue' (hence, the path is \verb'["r", 1]' which means subdivision along the first variable and taking the right subdomain). A formal verification of the first certificate yields $\vdash x \in [0,1] \implies f(x) < 0$.
The second result is the transformed left branch of the original certificate. This transformed result explicitly refers to the first proved result (\verb'Result_ref {0}'). Now it can be verified. Indeed, \verb'Result_ref {0}' yields $\vdash x \in [0,0] \implies f(x) < 0$ (since $[0, 0] \subset [0,1]$ and we have the theorem for $[0,1]$ which we use in the reference). Then the monotonicity argument \begin{align*} (\forall x,\ x \in [-1,0] \implies 0 \le f'(x)) &\,\mathrel{{\mathlarger{\wedge}}}\, (\forall x, x \in [0,0] \implies f(x) < 0) \\ &\implies (\forall x, x \in [-1,0] \implies f(x) < 0) \end{align*} yields $\vdash x \in [-1, 0] \implies f(x) < 0$. The last entry of the list refers to two proved results and glues them together in the right order: \begin{align*} (\forall x,\ x \in [-1,0] \implies f(x) < 0) &\,\mathrel{{\mathlarger{\wedge}}}\, (\forall x,\ x \in [0,1] \implies f(x) < 0)\\ &\implies (\forall x,\ x \in [-1,1] \implies f(x) < 0). \end{align*} \section{Optimization Techniques and Future Work} \label{nonlinear-optimization} \subsection{Implemented Optimization Techniques} There are several optimization techniques for formal verification of nonlinear inequalities. One of the basic ideas is to compute extra information for solution certificates which helps to increase the performance of the formal verification procedures. The first optimization technique is to try out direct interval evaluations without Taylor approximations. If a direct interval evaluation yields a desired result (verification of an inequality on a domain or verification of a monotonicity property), then a special flag is added to the corresponding certificate node. This flag indicates that it is not necessary to compute a full formal Taylor interval and it is enough to evaluate the function directly with interval arithmetic (which is faster). These flags are added to \verb'Result_pass' and \verb'Result_mono' nodes. An important optimization procedure is to find the best (minimal) precision which is sufficient for verifying an inequality on each subdomain. We have a special informal implementation of all arithmetic, Taylor interval evaluation, and verification functions which compute results in the same way as the corresponding formal functions. This informal implementation is much simpler and faster, since it does not prove anything and all basic arithmetic is done by native machine arithmetic. For a given solution certificate, we run a modified informal verification procedure which tests different precision parameter values for each certificate node. It finds the smallest value of the precision parameter for each certificate node such that the verification result is correct. Then a modified solution certificate is created where each node contains information about the best precision parameter. A special version of the formal verification procedure accepts this new certificate and verifies the inequality with the computed precision parameters. This adaptive precision technique increases the performance of formal arithmetic computations. \subsection{Future Work} There are some optimization ideas which are not implemented yet. The first idea is to stop computations of bounds of second-order partial derivatives for Taylor intervals at some point and reuse bounds computed for larger domains. The error term in a Taylor approximation depends quadratically on the size of a domain.
When domains are sufficiently small, good approximations of bounds of second-order partial derivatives are not very important. This strategy could save quite a lot of verification time since formal evaluation of second-order partial derivative bounds is expensive for many functions. Another unimplemented optimization is verification of sets of similar inequalities on the same domain. The idea is to reuse results of formal computations as much as possible for inequalities which have a similar structure and which are verified on the same domains. The basic strategy is to find a subdivision of the domain into subdomains such that each inequality in the set can be completely verified on each subdomain. If inequalities in the set share a lot of similar computations, then the verification of all inequalities in the set could be almost as fast as the verification of the most difficult inequality in the set. This approach should work well for Flyspeck inequalities where many inequalities share the same sub-expressions and domains. An important unimplemented feature is verification of disjunctions of inequalities. That is, we want to verify inequalities in the form \[ \forall {\bf x} \in D \implies f_1({\bf x}) < 0 \,\mathrel{\mathlarger{\vee}}\, f_2({\bf x}) < 0 \,\mathrel{\mathlarger{\vee}}\, \ldots \,\mathrel{\mathlarger{\vee}}\, f_k({\bf x}) < 0. \] This form is equivalent to an inequality on a non-rectangular domain since \[ (P({\bf x}) \implies f({\bf x}) < 0 \,\mathrel{\mathlarger{\vee}}\, g({\bf x}) < 0) \Longleftrightarrow (P({\bf x}) \,\mathrel{{\mathlarger{\wedge}}}\, 0 \le g({\bf x}) \implies f({\bf x}) < 0). \] Many Flyspeck inequalities are in this form. A formal verification of these inequalities is simple. It is enough to add indices of functions for which the inequality is satisfied to the corresponding nodes of solution certificates. Then it will only be necessary to modify the formal gluing procedure. It should be able to combine inequalities for different functions with disjunctions. \section{Results and Tests} \label{nonlinear-tests} This section briefly introduces the implemented verification tool and presents some test results for several polynomial and non-polynomial inequalities. We also compare the performance of the formal verification tool and the informal C++ verification procedure for Flyspeck nonlinear inequalities. All tests were performed on an Intel Core i5, 2.67GHz running Ubuntu 9.10 inside Virtual Box 4.2.0 on a Windows 7 host; the OCaml version was 3.09.3; the base of arithmetic was 200. \subsection{Overview of the Formal Verification Tool} A user manual which contains information about the tool and installation instructions is available at~\cite{nonlinear-manual}. Here, we briefly describe how the tool can be used. Suppose we want to verify a polynomial inequality \begin{multline*} -\frac{1}{\sqrt{3}} \le x \le \sqrt{2} \,\mathrel{{\mathlarger{\wedge}}}\, -\sqrt{\pi} \le y \le 1 \implies x^2 y - x y^4 + y^6 + x^4 - 7 > -7.17995. \end{multline*} The following HOL Light script solves this problem:
\begin{verbatim}
needs "verifier/m_verifier_main.hl";;
open M_verifier_main;;

let ineq = `-- &1 / sqrt(&3) <= x /\ x <= sqrt(&2) /\
            -- sqrt(pi) <= y /\ y <= &1
            ==> x pow 2 * y - x * y pow 4 + y pow 6 - &7
                + x pow 4 > -- #7.17995`;;

let th, stats = verify_ineq default_params 5 ineq;;
\end{verbatim}
The first two lines of the script load the verification tool. The main verification function is called \verb'verify_ineq'. It takes three arguments.
The first argument contains verification options. In most cases, it is enough to provide the default options \verb|default_params|. The second parameter specifies the precision of formal floating-point operations. The third parameter is the inequality itself given as a HOL Light term. The format of this term is simple: it is an implication with bounds of variables in the antecedent and an inequality in the consequent. The bounds of all variables should be in the form $\text{\it a constant expression} \le x$ or $x \le \text{\it a constant expression}$. For each variable, upper and lower bounds must be given. The inequality must be a strict inequality ($<$ or $>$). The inequality may include \verb'sqrt' ($\sqrt{}$), \verb|atn| ($\arctan$), and \verb|acs| ($\arccos$) functions. The constant \verb|pi| ($\pi$) is also allowed. The verification function returns a HOL Light theorem and a record with some verification information which includes verification time. \subsection{Polynomial Inequalities} Here is a list of test polynomial inequalities taken from~\cite{MN12}. \begin{itemize} \item {\bf schwefel} \begin{align*} \langle &x_1, x_2, x_3 \rangle \in [\avec{-10,-10,-10},\avec{10,10,10}]\\ &\implies -5.8806 \times 10^{-10} < (x_1 - x_2^2)^2 + (x_2 - 1)^2 + (x_1 - x_3^2)^2 + (x_3 - 1)^2. \end{align*} \item {\bf caprasse} \begin{align*} \avec{x_1, x_2, x_3, x_4} &\in [\avec{-0.5,-0.5,-0.5,-0.5},\avec{0.5,0.5,0.5,0.5}]\\ \implies & -3.1801 < -x_1 x_3^3 + 4 x_2 x_3^2 x_4 + 4 x_1 x_3 x_4^2 + 2 x_2 x_4^3 \\ & \phantom{-3.1801 < x} + 4 x_1 x_3 + 4 x_3^2 - 10 x_2 x_4 - 10 x_4^2 + 2. \end{align*} \item {\bf magnetism} \begin{align*} \langle x_1,x_2,&x_3,x_4,x_5,x_6,x_7\rangle \in [\avec{-1,-1,-1,-1,-1,-1,-1}, \avec{1,1,1,1,1,1,1}]\\ &\implies -0.25001 < x_1^2 + 2 x_2^2 + 2 x_3^2 + 2 x_4^2 + 2 x_5^2 + 2 x_6^2 + 2 x_7^2 - x_1. \end{align*} \item {\bf heart} \begin{align*} \langle x_1,x_2,x_3,x_4,x_5,&x_6,x_7,x_8\rangle \in [\avec{-0.1, 0.4, -0.7, -0.7, 0.1, -0.1, -0.3, -1.1},\\ &\phantom{x_6,x_7,x_8\rangle \in [ } \avec{0.4, 1, -0.4, 0.4, 0.2, 0.2, 1.1, -0.3}]\\ \implies & -1.7435 < -x_1 x_6^3 + 3 x_1 x_6 x_7^2 - x_3 x_7^3 + 3 x_3 x_7 x_6^2 - x_2 x_5^3 \\ &\phantom{\text{aaaaaaaaaaa}} + 3 x_2 x_5 x_8^2 - x_4 x_8^3 + 3 x_4 x_8 x_5^2 - 0.9563453. \end{align*} \end{itemize} Performance test results are given in Table~\ref{table-poly}. The column {\it total time} contains total verification time, the column {\it formal} contains time of the formal verification only. The formal verification excludes all preliminary processes: computations of partial derivatives, search of solution certificates, and adaptive precision search procedures. The last two columns show the corresponding verification time for the PVS procedure which is based on the Bernstein polynomial technique and described in~\cite{MN12}. Test results show that our procedure is faster than the Bernstein polynomial procedure in PVS in most cases. On the other hand, there still exist cases where our tool is slower.
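As an informal cross-check of one of these benchmark instances (a sketch we add for illustration; it is unrelated to the formal tool and proves nothing), the following OCaml loop samples the caprasse polynomial on a uniform grid over $[-0.5,0.5]^4$. The minimum it reports stays above the verified bound $-3.1801$; grid sampling can miss the true minimum, so this is only a plausibility check:
\begin{verbatim}
let sq t = t *. t
let cube t = t *. t *. t

(* The "caprasse" polynomial from the list above. *)
let caprasse x1 x2 x3 x4 =
  (-1.) *. x1 *. cube x3 +. 4. *. x2 *. sq x3 *. x4
  +. 4. *. x1 *. x3 *. sq x4 +. 2. *. x2 *. cube x4
  +. 4. *. x1 *. x3 +. 4. *. sq x3
  -. 10. *. x2 *. x4 -. 10. *. sq x4 +. 2.

let () =
  let n = 10 in
  let coord i = (-0.5) +. float_of_int i /. float_of_int n in
  let m = ref infinity in
  for i1 = 0 to n do
    for i2 = 0 to n do
      for i3 = 0 to n do
        for i4 = 0 to n do
          let v = caprasse (coord i1) (coord i2) (coord i3) (coord i4) in
          if v < !m then m := v
        done
      done
    done
  done;
  Printf.printf "grid minimum: %f\n" !m  (* should exceed -3.1801 *)
\end{verbatim}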
\begin{table}[t] \caption{Polynomial inequalities} \begin{center} \begin{tabular}{l@{\quad} r r r r} \hline \multicolumn{1}{l}{Inequality ID}& \multicolumn{1}{l}{\phantom{x}total time (s)}& \multicolumn{1}{l}{\phantom{x}formal (s)} & \multicolumn{1}{l}{\phantom{x}total PVS (s)} & \multicolumn{1}{l}{\phantom{x}formal PVS (s)}\\ \hline\rule{0pt}{12pt}% schwefel & 26.33 & 19.15 & 10.23 & 3.18 \\ caprasse & 8.06 & 1.29 & 11.44 & 1.25 \\ magnetism & 7.01 & 1.35 & 160.44 & 82.87 \\ heart & 17.30 & 1.28 & 79.68 & 26.14 \\ \hline \end{tabular} \end{center} \label{table-poly} \end{table} \subsection{Flyspeck Inequalities} The Flyspeck project contains 985 nonlinear inequalities. The informal verification program written in C++ can verify all these inequalities in about 10 hours. Most inequalities (683) can be informally verified in less than 10 seconds. Almost all inequalities (911) can be informally verified in less than 100 seconds. We tested our formal verification procedure on several simple Flyspeck inequalities. Some of these inequalities are listed below. Table~\ref{table-flyspeck} contains performance test results for these inequalities. The column {\it total time} contains total formal verification time, the column {\it formal} contains time of the formal verification only (excluding all preliminary processes), the column {\it informal} contains informal verification time by the C++ program. \begin{eqnarray*} \Delta(x_1,\ldots,x_6) &= &x_1 x_4(-x_1 + x_2 + x_3 - x_4 + x_5 + x_6)\\ && + x_2 x_5(x_1 - x_2 + x_3 + x_4 - x_5 + x_6)\\ && + x_3 x_6(x_1 + x_2 - x_3 + x_4 + x_5 - x_6)\\ && - x_2 x_3 x_4 - x_1 x_3 x_5 - x_1 x_2 x_6 - x_4 x_5 x_6,\\[10pt] \Delta_4 &=& \partd{\Delta}{x_4}, \end{eqnarray*} \begin{eqnarray*} \mathrel{\rm dih}_x(x_1,\ldots,x_6) &=& \frac{\pi}{2} - \arctan\left(\frac{-\Delta_4(x_1,\ldots,x_6)}{\sqrt{4 x_1 \Delta(x_1,\ldots,x_6)}}\right),\\[6pt] \mathrel{\rm dih}_y(y_1,\ldots,y_6) &=& \mathrel{\rm dih}_x(y_1^2, \ldots, y_6^2). \end{eqnarray*} \begin{itemize} \item {\bf 4717061266} \begin{align*} 4 \le x_i \le 6.3504 \implies \Delta(x_1, x_2, x_3, x_4, x_5, x_6) > 0. \end{align*} \item {\bf 7067938795} \begin{align*} 4 \le x_{1,2,3} &\le 6.3504,\ x_4 = 4,\ 3.01^2 \le x_{5,6} \le 3.24^2\\ &\implies \mathrel{\rm dih}_x (x_1, \ldots, x_6) - \pi/2 + 0.46 < 0. \end{align*} \item {\bf 3318775219} \begin{align*} 2 \le y_i \le 2.52 \implies 0 < &\mathrel{\rm dih}_y (y_1, \ldots, y_6) - 1.629 - 0.763 (y_4 - 2.52) \\ &- 0.315 (y_1 - 2.0) + 0.414 (y_2 + y_3 + y_5 + y_6 - 8.0). \end{align*} \end{itemize} \begin{table}[t] \caption{Flyspeck inequalities} \begin{center} \begin{tabular}{l@{\quad} r r r} \hline \multicolumn{1}{l}{Inequality ID}& \multicolumn{1}{l}{\phantom{x}total time (s)}& \multicolumn{1}{l}{\phantom{x}formal (s)}& \multicolumn{1}{l}{\phantom{x}informal (s)}\\ \hline\rule{0pt}{12pt}% 2485876245a & 5.530 & 0.058 & 0 \\ 4559601669b & 4.679 & 0.048 & 0 \\ 4717061266 & 27.1 & 0.250 & 0 \\ 5512912661 & 8.860 & 0.086 & 0.002 \\ 6096597438a & 0.071 & 0.071 & 0 \\ 6843920790 & 2.824 & 0.076 & 0.002 \\ SDCCMGA b & 9.012 & 0.949 & 0.006 \\ 7067938795 & 431 & 387 & 0.070 \\ 5490182221 & 1726 & 1533 & 0.375 \\ 3318775219 & 17091 & 15226 & 8.000 \\ \hline \end{tabular} \end{center} \label{table-flyspeck} \end{table} We also found formal verification time of all Flyspeck inequalities which can be informally verified in less than one second and which do not contain disjunctions of inequalities. Table~\ref{table-flyspeck-1} summarizes test results.
The columns {\it total time} and {\it formal} show total formal verification time and formal verification time without preliminary processes for the corresponding sets of inequalities. The column {\it informal} contains informal verification time for the same sets of inequalities. Test results show that our formal verification procedure is about 2000--4000 times slower than the informal verification program. \begin{table}[t] \caption{Flyspeck inequalities which can be informally verified in 1 second} \begin{center} \begin{tabular}{l@{\quad} r r r r} \hline \multicolumn{1}{l}{time interval (ms)}& \multicolumn{1}{l}{\phantom{x}\# inequalities}& \multicolumn{1}{l}{\phantom{x}total time (s)}& \multicolumn{1}{l}{\phantom{x}formal (s)}& \multicolumn{1}{l}{\phantom{x}informal (s)}\\ \hline\rule{0pt}{12pt}% 0 & 57 & 423 & 2.159 & 0 \\ 1--100 & 35 & 5546 & 3854 & 1.134 \\ 101--500 & 11 & 12098 & 10451 & 3.944 \\ 501--700 & 14 & 32065 & 28705 & 8.423 \\ 701--1000 & 9 & 19040 & 16688 & 7.274 \\ \hline \end{tabular} \end{center} \label{table-flyspeck-1} \end{table} \bibliographystyle{splncs}
{ "timestamp": "2013-01-10T02:00:18", "yymm": "1301", "arxiv_id": "1301.1702", "language": "en", "url": "https://arxiv.org/abs/1301.1702", "abstract": "We present a formal tool for verification of multivariate nonlinear inequalities. Our verification method is based on interval arithmetic with Taylor approximations. Our tool is implemented in the HOL Light proof assistant and it is capable to verify multivariate nonlinear polynomial and non-polynomial inequalities on rectangular domains. One of the main features of our work is an efficient implementation of the verification procedure which can prove non-trivial high-dimensional inequalities in several seconds. We developed the verification tool as a part of the Flyspeck project (a formal proof of the Kepler conjecture). The Flyspeck project includes about 1000 nonlinear inequalities. We successfully tested our method on more than 100 Flyspeck inequalities and estimated that the formal verification procedure is about 3000 times slower than an informal verification method implemented in C++. We also describe future work and prospective optimizations for our method.", "subjects": "Logic in Computer Science (cs.LO); Logic (math.LO)", "title": "Formal Verification of Nonlinear Inequalities with Taylor Interval Approximations", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759604539052, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405665489751 }
https://arxiv.org/abs/1508.07590
New Classes of Permutation Binomials and Permutation Trinomials over Finite Fields
Permutation polynomials over finite fields play important roles in finite fields theory. They also have wide applications in many areas of science and engineering such as coding theory, cryptography, combinatorial design, communication theory and so on. Permutation binomials and trinomials attract people's interest due to their simple algebraic form and additional extraordinary properties. In this paper, several new classes of permutation binomials and permutation trinomials are constructed. Some of these permutation polynomials are generalizations of known ones.
\section{Introduction}
A polynomial $f \in \mathbb{F}_{q}[x]$ is called a permutation polynomial of $\mathbb{F}_{q}$ if the associated polynomial function $f\colon c\to f(c)$ from $\mathbb{F}_{q}$ into $\mathbb{F}_{q}$ is a permutation of $\mathbb{F}_{q}$. Permutation polynomials over finite fields play important roles in finite field theory. They also have wide applications in coding theory, cryptography, combinatorial design and communication theory. The study of permutation polynomials dates back to Hermite \cite{Hermite} and Dickson \cite{Dic}. There are numerous books and survey papers on the subject covering different periods in the development of this active area \cite[Ch.18]{22}, \cite{XH}, \cite[Ch.7]{LN}, \cite[Ch.8]{64}, \cite{XH1}, of which the survey by X. Hou \cite{XH1} from 2015 is the most recent.

Permutation binomials and trinomials attract particular interest due to their simple algebraic form and additional extraordinary properties. For instance, a certain permutation trinomial of $\gf_{2^{2m+1}}$ was a major ingredient in the proof of the Welch conjecture \cite{HD}. The discovery of another class of permutation trinomials by Ball and Zieve \cite{SM} provided a way to prove the construction of the Ree-Tits symplectic spreads of $\mathbf{PG}(3,q)$. For more recent progress, readers may consult the survey \cite{XH} by X. Hou from 2013 on permutation binomials and trinomials.

Although quite a few permutation binomials and permutation trinomials have been found, an explicit and unified characterization of them is still missing and seems to be elusive today. Therefore, it is both interesting and important to find more explicit classes of them and to look for the hidden reason for their permutation property. In this paper, we present several new classes of permutation binomials over finite fields with even extensions and permutation trinomials over finite fields of even characteristic. For the convenience of the reader, we review the known permutation binomials over $\mathbb{F}_{q^2}$ and the known permutation trinomials over $\mathbb{F}_{2^m}$ in Section \ref{kpb} and Section \ref{kpt}, respectively. For other classes of permutation binomials or permutation trinomials, please refer to \cite{XH}.

In Section \ref{sec1}, we introduce the definition of multiplicative equivalence of permutation polynomials and some lemmas. In Section \ref{sec2}, we provide new classes of permutation binomials through Hermite's criterion \cite{Hermite} and by considering the number of solutions of special equations. In Section \ref{sec3}, we give five new classes of permutation trinomials. Section \ref{sec4} concludes the paper.

Throughout the paper, ${\operatorname{Tr}}_{k}(x)$ denotes the absolute trace function on $\mathbb{F}_{2^{k}}$. The algebraic closure of $\mathbb{F}_{q}$ is denoted by $\bar{\mathbb{{F}}}_{q}$. For an integer $d>0$, $\mu_{d}=\{x \in \bar{\mathbb{{F}}}_{q}:x^{d}=1\}$. Other symbols follow the standard notation of finite fields.

\section{Preparation}
\label{sec1}
Let $f$ and $g$ be two polynomials in $\mathbb{F}_{q}[x]$ satisfying $f(x)=g(x^d)$, where $1 \le d \le q-1$ is an integer such that $\mathrm{gcd}(d,q-1)=1$. Then the number of terms of $f$ is equal to that of $g$, and $f$ is a permutation polynomial if and only if $g$ is. In particular, $f$ is a permutation binomial (resp.\ permutation trinomial) if and only if $g$ is.
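The permutation property is easy to test by brute force over small fields. The following Python helpers (a minimal sketch added for illustration only; the names \texttt{POLY}, \texttt{gf\_mul}, \texttt{gf\_pow}, \texttt{is\_perm} and the chosen reduction polynomials are our own and not part of the paper) implement $\gf_{2^m}$ arithmetic for $m\le 8$; they are reused in the later examples.
\begin{verbatim}
# Irreducible reduction polynomials for GF(2^m), m = 1..8, as bitmasks.
POLY = {1: 0b11, 2: 0b111, 3: 0b1011, 4: 0b10011,
        5: 0b100101, 6: 0b1000011, 7: 0b10000011, 8: 0b100011011}

def gf_mul(a, b, m):
    # Multiply a*b in GF(2^m) = F_2[x]/(POLY[m]); elements are ints 0..2^m-1.
    r = 0
    for _ in range(m):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= POLY[m]
    return r

def gf_pow(a, e, m):
    # Square-and-multiply exponentiation in GF(2^m).
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, m)
        a = gf_mul(a, a, m)
        e >>= 1
    return r

def is_perm(f, m):
    # f permutes GF(2^m) iff it takes 2^m distinct values.
    return len({f(x) for x in range(1 << m)}) == 1 << m

# x -> x^3 permutes GF(8) since gcd(3,7) = 1, but not GF(16): gcd(3,15) = 3.
assert is_perm(lambda x: gf_pow(x, 3, 3), 3)
assert not is_perm(lambda x: gf_pow(x, 3, 4), 4)
\end{verbatim}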
This observation motivates the following definition of multiplicative equivalence.
\begin{Def} Two permutation polynomials $f(x)$ and $g(x)$ in $\mathbb{F}_{q}[x]$ are called multiplicatively equivalent if there exists an integer $1 \le d \le q-1$ such that $\mathrm{gcd}(d,q-1)=1$ and $f(x)=g(x^d)$.
\end{Def}
It should be noted that in \cite{XH1} and many earlier references, two permutation polynomials $f$ and $g$ of $\gf_q$ are called equivalent if $f(x)=cg(ax+b)+d$, where $a, c\in \gf_q^*$ and $b, d\in \gf_q$. In our opinion, this type of equivalence can be called linear equivalence.

The following lemmas will be useful in our later discussion. The first one is a corollary following directly from \cite[Lemma 2.1]{Zieve}.
\begin{Lemma} Let $f(x)=x^{r_{0}}(x^{(q-1)/d}+a)\in \mathbb{F}_{q}[x]$, where $d \mid q-1$ and $a \in \mathbb{F}_{q}^{*}$. If $f(x)$ is a permutation polynomial over $\mathbb{F}_{q}$ and $\mathrm{gcd}(r_{0}+d,(q-1)/d)=1$, then $g(x)=x^{r_{0}+d}(x^{(q-1)/d}+a)$ is also a permutation polynomial over $\mathbb{F}_{q}$. \label{ky1}
\end{Lemma}
\begin{Lemma} \emph{\cite{LN} (Hermite's Criterion)} Let $\mathbb{F}_{q}$ be of characteristic $p$. Then $f \in \mathbb{F}_{q}[x]$ is a permutation polynomial of $\mathbb{F}_{q}$ if and only if the following two conditions hold:
\begin{enumerate}[(i)]
\item $f$ has exactly one root in $\mathbb{F}_{q}$;
\item for each integer $t$ with $1 \le t \le q-2$ and $t \not\equiv 0 \pmod{p}$, the reduction of $f(x)^{t} \bmod (x^{q}-x)$ has degree $\le q-2$.
\end{enumerate}
\end{Lemma}
\begin{Lemma} \emph{(Lucas' formula)} Let $n$ and $i$ be positive integers and let $p$ be a prime. Write $n={{a}_{m}}{{p}^{m}}+\cdots +{{a}_{1}}p+{{a}_{0}}$ and $i={{b}_{m}}{{p}^{m}}+\cdots +{{b}_{1}}p+{{b}_{0}}$. Then
$\binom{n}{i}\equiv \binom{a_m}{b_m}\binom{a_{m-1}}{b_{m-1}}\cdots\binom{a_0}{b_0} \pmod p$.
\end{Lemma}
\begin{Lemma} Let $q=2^k$ with $k>0$ an integer. Then $f(x)=x+x^2+x^4$ is a permutation polynomial over $\mathbb{F}_{q}$ if and only if $k \not\equiv 0 \pmod 3$. \label{lem1}
\end{Lemma}
\begin{proof} The map $f$ is $\mathbb{F}_2$-linear, so it permutes $\mathbb{F}_{q}$ if and only if its kernel is trivial. Since $f(x)=x(x^{3}+x+1)$, a nonzero element of the kernel is a root of $x^{3}+x+1$, which is irreducible over $\mathbb{F}_{2}$; its roots generate $\mathbb{F}_{2^{3}}$ and hence lie in $\mathbb{F}_{2^{k}}$ if and only if $3 \mid k$.
\end{proof}
\begin{Lemma} \emph{\cite{SZM}} \label{san1} Let $a \in \mathbb{F}_{2^k}^\ast$. Then the cubic equation $x^3+x=a$ has a unique solution in $\mathbb{F}_{2^k}$ if and only if ${\operatorname{Tr}_k}\left(a^{-1}\right) \neq {\operatorname{Tr}_k}(1)$.
\end{Lemma}
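Both lemmas are easy to confirm numerically with the $\gf_{2^m}$ helpers sketched above (again an illustrative sketch; \texttt{gf\_tr} is our own helper computing the absolute trace):
\begin{verbatim}
# Lemma "lem1": x + x^2 + x^4 permutes GF(2^m) iff m is not divisible by 3.
for m in range(1, 9):
    f = lambda x: x ^ gf_mul(x, x, m) ^ gf_pow(x, 4, m)
    assert is_perm(f, m) == (m % 3 != 0)

def gf_tr(a, m):
    # Absolute trace Tr(a) = a + a^2 + a^4 + ... + a^(2^(m-1)) in GF(2^m).
    t = 0
    for _ in range(m):
        t ^= a
        a = gf_mul(a, a, m)
    return t

# Lemma "san1": x^3 + x = a has a unique root iff Tr(1/a) != Tr(1).
for m in range(1, 9):
    for a in range(1, 1 << m):
        roots = sum(1 for x in range(1 << m) if (gf_pow(x, 3, m) ^ x) == a)
        inv_a = gf_pow(a, (1 << m) - 2, m)
        assert (roots == 1) == (gf_tr(inv_a, m) != gf_tr(1, m))
\end{verbatim}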
\section{Permutation binomials}
\label{sec2}
Given $f(x)=x^{m}+ax^{n}\in \mathbb{F}_{q}[x]$, where $0<n<m<q$ and $a \in \mathbb{F}_{q}^{*}$, there exist integers $r,t,d>0$ with $\mathrm{gcd}(t,q-1)=1$ and $d \mid q-1$ such that $f({{x}^{t}})\equiv {{x}^{r}}({{x}^{(q-1)/d}}+a)\pmod{{{x}^{q}}-x}$ \cite{XH1}. Therefore, without loss of generality, we only consider permutation binomials of the form $x^{r}(x^{(q-1)/d}+a)$. Inspired by Hou and Lappano \cite{XHSD}, in this section we consider permutation binomials over finite fields with even extensions, that is, over $\gf_{q^2}$.
\subsection{Known permutation binomials over $\gf_{q^2}$}
\label{kpb}
First, we review the known permutation binomials over finite fields with even extensions.
\begin{Th} \emph{\cite{Zieve}} Let $r,d>0$ be integers such that $d\mid q^2-1$ and let $a \in \mathbb{F}_{q^2}^{*}$. Further assume that $\eta +\frac{a}{\eta }\in {{\mu }_{(q^2-1)/d}}$ for all $\eta \in {{\mu }_{2d}}$. Then $x^{r}(x^{(q^2-1)/d}+a)$ is a permutation polynomial of $\mathbb{F}_{q^2}$ if and only if $-a\notin {{\mu }_{d}}$ and $\mathrm{gcd}(2d,2r+(q^2-1)/d) \le 2$.
\end{Th}
\begin{Th} \emph{\cite{Zieve1}} Let $r$ and $d$ be positive integers, and let $\beta \in \mathbb{F}_{q^{2}}$ be such that $\beta^{q+1}=1$. Then $x^{r+d(q-1)}+\beta^{-1}x^{r}$ is a permutation polynomial of $\mathbb{F}_{q^2}$ if and only if all of the following hold:
\begin{enumerate}[(i)]
\item $\mathrm{gcd}(r,q-1)=1$;
\item $\mathrm{gcd}(r-d,q+1)=1$;
\item $(-\beta)^{(q+1)/\mathrm{gcd}(q+1,d)} \neq 1$.
\end{enumerate}
\end{Th}
\begin{Th} \emph{\cite{XHSD}} Let $f=ax+x^{3q-2}\in \mathbb{F}_{q^2}[x]$, where $a \in \mathbb{F}_{q^2}^{*}$. Then $f$ is a permutation polynomial of $\mathbb{F}_{q^2}$ if one of the following occurs.
\begin{enumerate}[(i)]
\item $q=2^{2k+1}$ and $a^{\frac{q+1}{3}}$ is a primitive $3$rd root of unity.
\item $q=5$ and $a^2$ is a root of $(x+1)(x+2)(x-2)(x^2-x+1)$.
\item $q=2^3$ and $a^3$ is a root of $x^3+x+1$.
\item $q=11$ and $a^4$ is a root of $(x-5)(x+2)(x^2-x+1)$.
\item $q=17$ and $a^6=4,5$.
\item $q=23$ and $a^8=-1$.
\item $q=29$ and $a^{10}=-3$.
\end{enumerate}
\end{Th}
\begin{Th} \emph{\cite{SD}} Let $f=ax+x^{5q-4}\in \mathbb{F}_{q^2}[x]$, where $a \in \mathbb{F}_{q^2}^{*}$. Then $f$ is a permutation polynomial of $\mathbb{F}_{q^2}$ if one of the following occurs.
\begin{enumerate}[(i)]
\item $q=2^{4k+2}$ and $a^{\frac{q+1}{5}}$ is a primitive $5$th root of unity.
\item $q=3^2$ and $a^2$ is a root of $(x+1)(x^2+1)(x^2+x+2)(x^2+2x+2)(x^4+x^2+x+1)(x^4+x^3+x^2+1)(x^4+2x^3+x^2+2x+1)$.
\item $q=19$ and $a^4$ is a root of $(x+1)(x+2)(x+3)(x+4)(x+5)(x+9)(x+10)(x+13)(x+17)(x^2+3x+16)(x^2+4x+1)(x^2+18x+6)$.
\item $q=29$ and $a^6\in \{15,18,22,23\}$.
\item $q=7^2$ and $a^{10}$ is a root of $x^2+4x+1$.
\item $q=59$ and $a^{12}$ is a root of $(x^2+x+1)(x^3+x+1)$.
\item $q=2^6$ and $a^{13}$ is a root of $(x^2+x+1)(x^3+x+1)$.
\end{enumerate}
\end{Th}
\subsection{New classes of permutation binomials}
\begin{Th} Let $f(x)={{x}}({{x}^{q+1}}+a) \in \mathbb{F}_{q^{2}}[x]$. Then $f(x)$ is a permutation binomial over $\mathbb{F}_{q^{2}}$ if and only if $q \not\equiv 1 \pmod 3$ and $a^{2(q-1)}-a^{q-1}+1=0$. \label{th_BiPP1}
\end{Th}
\begin{proof} First of all, we show that when $q \equiv 1 \pmod 3$, $f(x)$ is not a permutation polynomial over $\mathbb{F}_{q^{2}}$. Let $0\le \alpha ,\beta \le q-1$ with $(\alpha ,\beta) \ne (0,0)$. Then we have
\begin{eqnarray*}
\sum\limits_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{f{{(x)}^{\alpha +q\beta }}}
&=& \sum\limits_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{{{(ax+{{x}^{q+2}})}^{\alpha }}{{({{a}^{q}}{{x}^{q}}+{{x}^{1+2q}})}^{\beta }}} \\
&=& \sum\limits_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{\sum\limits_{0\le i \le \alpha,\, 0\le j \le \beta}{\binom{\alpha}{i}{{a}^{\alpha -i}}{{x}^{\alpha -i}}{{x}^{(q+2)i}}\binom{\beta}{j}{{a}^{q(\beta -j)}}{{x}^{q(\beta -j)}}{{x}^{(1+2q)j}}}}\\
&=& {{a}^{\alpha +q\beta }}\sum\limits_{0\le i \le \alpha,\, 0\le j \le \beta}{\binom{\alpha}{i}\binom{\beta}{j}}{{a}^{-i-qj}}\sum\limits_{x\in \mathbb{F}_{{{q}^{2}}}^{*}}{{{x}^{\alpha +q\beta +(1+q)(i+j)}}}.
\end{eqnarray*}
The inner sum is $0$ unless $\alpha +q\beta +(1+q)(i+j)\equiv 0 \pmod{q^2-1}$, which can happen only if $\alpha +\beta q\equiv 0 \pmod{q+1}$, or equivalently, $\alpha=\beta$.
Then we have
\begin{equation*}
\sum\limits_{x\in \mathbb{F}_{{{q}^{2}}}^{*}}{f{{(x)}^{\alpha +q\beta }}}=-{{a}^{(1+q)\alpha }}\sum\limits_{\substack{\alpha +i+j\equiv 0 \pmod{q-1} \\ 0\le i,\, j \le \alpha}}{\binom{\alpha}{i}\binom{\alpha}{j}}{{a}^{-i-qj}}.
\end{equation*}
Let $\alpha =\frac{q-1}{3}$. Then
\begin{equation*}
\sum\limits_{x\in {\mathbb{F}_{{{q}^{2}}}}}{f{{(x)}^{\alpha +q\alpha }}}=-{{a}^{(q+1)\alpha }}{{a}^{-\alpha -\alpha q}}=-1\ne 0.
\end{equation*}
Hence, in this case, $f(x)$ is not a permutation polynomial over $\mathbb{F}_{q^{2}}$.

In the following, we show that when $q \not\equiv 1 \pmod{3}$, $f(x)$ is a permutation polynomial over $\mathbb{F}_{q^{2}}$ if and only if $a^{2(q-1)}-a^{q-1}+1=0$. First, it is easy to see that zero is the only root of $f(x)=0$ in $\gf_{q^2}$ if and only if ${{a}^{q-1}}\ne 1$. Next we prove that $f(x)=c$ has at most one solution in $\mathbb{F}_{q^{2}}$ for any $c \in \mathbb{F}_{q^{2}}^{*}$ if and only if $a^{2(q-1)}-a^{q-1}+1=0$. It is clear that $0$ is not a solution. Therefore we consider the equation
\begin{equation}
{{x}^{q+1}}+a=\frac{c}{x}. \label{ky3}
\end{equation}
Let $u=\frac{c}{x}-a=x^{q+1}$. Then $u\in \mathbb{F}_{q}^{*}$ and $x=\frac{c}{a+u}$. Plugging this into (\ref{ky3}), one has
\begin{equation*}
\left(\frac{c}{a+u}\right)^{q+1}=u.
\end{equation*}
After simplifying and rearranging the terms, we get
\begin{equation}
{{u}^{3}}+({{a}^{q}}+a){{u}^{2}}+{{a}^{q+1}}u={{c}^{q+1}}. \label{ky4}
\end{equation}
Let ${{a}_{2}}={{a}^{q}}+a$, ${{a}_{1}}={{a}^{q+1}}$ and ${{a}_{0}}=-{{c}^{q+1}}$. Then $f(x)=c$ has at most one solution in $\mathbb{F}_{q^{2}}$ for every $c\in\mathbb{F}_{q^2}^*$ if and only if $g(u)={{u}^{3}}+{{a}_{2}}{{u}^{2}}+{{a}_{1}}u$ is a permutation polynomial over $\mathbb{F}_{q}$.

In the following we use the table of normalized permutation polynomials over $\gf_q$ (Table 7.1 in \cite{LN}). Recall that a polynomial $\phi$ is in normalized form if $\phi$ is monic, $\phi(0)=0$, and, when the degree $n$ of $\phi$ is not divisible by the characteristic of $\gf_q$, the coefficient of $x^{n-1}$ is $0$. The rest of the proof is split into two cases.

Case 1: $q\equiv 2 \pmod 3$. Let $u=v-\frac{{{a}_{2}}}{3}$. Plugging this into (\ref{ky4}), we get
$$v^3+\mu_{1}v+\mu_{2}=0,$$
where ${{\mu }_{1}}=-\frac{1}{3}a_{2}^{2}+{{a}_{1}}$ and ${{\mu }_{2}}=\frac{2}{27}a_{2}^{3}-\frac{1}{3}{{a}_{1}}{{a}_{2}}+{{a}_{0}}$. Then ${{\mu }_{1}},{{\mu }_{2}}\in \mathbb{F}_{q}$. Now $f(x)=c$ has at most one solution in $\mathbb{F}_{q^{2}}$ for every $c$ if and only if $h(v)={{v}^{3}}+{{\mu }_{1}}v$ is a permutation polynomial over ${{\mathbb{F}}_{q}}$, which holds if and only if ${{\mu }_{1}}=0$ by \cite[Table 7.1]{LN}. Recalling that ${{\mu }_{1}}=-\frac{1}{3}{{a}^{2}}({{a}^{2(q-1)}}-{{a}^{q-1}}+1)$, we conclude that $f(x)=c$ has at most one solution in $\mathbb{F}_{q^{2}}$ if and only if ${{a}^{2(q-1)}}-{{a}^{q-1}}+1=0$.

Case 2: $q\equiv 0 \pmod 3$. In this case, $g(u)={{u}^{3}}+{{a}_{2}}{{u}^{2}}+{{a}_{1}}u$ is already a normalized polynomial. According to \cite[Table 7.1]{LN}, $x^{3}-ax$ (with $a$ a non-square) and $x^3$ are the only two normalized permutation polynomials of degree three over $\mathbb{F}_{q}$. On the one hand, if $g$ permutes $\gf_q$, then ${{a}_{2}}=0$. Thus ${a}^{q-1}+1=0$, or equivalently, $0=({a}^{q-1}+1)^2={{a}^{2(q-1)}}-{{a}^{q-1}}+1$. On the other hand, if ${a}^{q-1}+1=0$, then $g(u)=u^3-a^2u$.
Clearly, $a^2$ is a non-square in $\mathbb{F}_{q}$ since $a \in \gf_{q^2}\setminus\mathbb{F}_{q}$. Hence $g(u)$ permutes $\gf_q$. This finishes the proof.
\end{proof}
We briefly discuss the multiplicative inequivalence of the new permutation binomial in Theorem \ref{th_BiPP1} with the known ones. Due to the special and specific conditions in Theorem 3.5 and those in the last subsection, the permutation binomial $f(x)$ of Theorem \ref{th_BiPP1} is not multiplicatively equivalent to any known permutation binomial. According to Lemma \ref{ky1}, if $\mathrm{gcd}(1+l(q-1),q+1)=1$, where $l$ is an integer, then the polynomial $x^{1+l(q-1)}(x^{q+1}+a)$ is also a permutation binomial over $\mathbb{F}_{q^{2}}$.

Theorem \ref{th_BiPP1} treats permutation binomials of $\mathbb{F}_{q^{2}}$ of the form ${{x}}({{x}^{q+1}}+a)$. Next, we consider permutation binomials of the form ${{x}^{r}}({{x}^{q-1}}+a)$, where $r\in [1, q+1]$.
\begin{Th} \label{th_BiPP2} Let $f(x)={{x}^{r}}({{x}^{q-1}}+a)\in \mathbb{F}_{{{q}^{2}}}[x]$ with $r\in [1,q+1]$. Then $f(x)$ is a permutation binomial over $\mathbb{F}_{{{q}^{2}}}$ if and only if $r=1$ and ${{a}^{q+1}}\ne 1$.
\end{Th}
\begin{proof} First, it is clear that $f(x)={{x}^{r}}({{x}^{q-1}}+a)$ has only one root in $\mathbb{F}_{{{q}^{2}}}$ if and only if ${{a}^{q+1}}\ne 1$. Next, let $0\le \alpha ,\beta \le q-1$ with $(\alpha ,\beta) \ne (0,0),(q-1,q-1)$. We have
\begin{eqnarray*}
\sum\limits_{x\in {\mathbb{F}_{{{q}^{2}}}}}{f{{(x)}^{\alpha +\beta q}}} &=& \sum\limits_{x\in {\mathbb{F}_{{{q}^{2}}}}}{{{x}^{r(\alpha +\beta q)}}{{({{x}^{q-1}}+a)}^{\alpha +\beta q}}}\\
&=& \sum\limits_{x\in \mathbb{F}_{{{q}^{2}}}^{*}}{{{x}^{r(\alpha +\beta q)}}{{({{x}^{q-1}}+a)}^{\alpha }}{{({{x}^{1-q}}+{{a}^{q}})}^{\beta }}}\\
&=& \sum\limits_{x\in \mathbb{F}_{{{q}^{2}}}^{*}}{{{x}^{r(\alpha +\beta q)}}\sum\limits_{i=0}^{\alpha }{\binom{\alpha}{i}{{x}^{i(q-1)}}{{a}^{\alpha -i}}}\sum\limits_{j=0}^{\beta }{\binom{\beta}{j}{{x}^{j(1-q)}}{{(a^q)}^{\beta -j}}}}\\
&=& {{a}^{\alpha +\beta q}}\sum\limits_{0\le i\le \alpha,\, 0 \le j \le \beta}{\binom{\alpha}{i}\binom{\beta}{j}}{{a}^{-i-qj}}\sum\limits_{x\in \mathbb{F}_{{{q}^{2}}}^{*}}{{{x}^{r(\alpha +\beta q)+(i-j)(q-1)}}}.
\end{eqnarray*}
The inner sum is $0$ unless $r(\alpha+\beta q)+(i-j)(q-1)\equiv 0 \pmod{q^2-1}$, which can happen only if $\alpha +\beta q\equiv 0 \pmod{q-1}$, or equivalently, $\alpha +\beta =q-1$. Let $\beta =q-1-\alpha$. Then we have
\begin{eqnarray*}
\sum\limits_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{f{{(x)}^{\alpha +(q-1-\alpha )q}}} &=& {{a}^{(\alpha +1)(1-q)}}\sum\limits_{0 \le i \le \alpha,\, 0 \le j \le q-1-\alpha}{\binom{\alpha}{i}\binom{q-1-\alpha}{j}}{{a}^{-i-qj}}\sum\limits_{x\in \mathbb{F}_{{{q}^{2}}}^{*}}{{{x}^{(q-1)(-r(\alpha +1)+i-j)}}} \\
&=& -{{a}^{(\alpha +1)(1-q)}}\sum\limits_{-r(\alpha +1)+i-j\equiv 0 \pmod{q+1}}{\binom{\alpha}{i}\binom{q-1-\alpha}{j}}{{a}^{-i-qj}}.
\end{eqnarray*}
As $i$ runs over the interval $[0,\alpha]$ and $j$ over the interval $[0,q-1-\alpha]$, the quantity $-r(\alpha +1)+i-j$ runs over ${{S}_{\alpha ,r}}$, where
$${{S}_{\alpha ,r}}=[-r(\alpha +1)-(q-1-\alpha ),\,-r(\alpha +1)+\alpha ].$$
If $r=1$, then ${{S}_{\alpha ,r}}=[-q,-1]$, and the congruence $-(\alpha +1)+i-j\equiv 0 \pmod{q+1}$ has no solution. Therefore, $\sum_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{f{{(x)}^{\alpha +(q-1-\alpha )q}}}=0$. Hence, $f(x)$ is a permutation polynomial over ${\mathbb{F}}_{{{q}^{2}}}$.

If $r \in [2,q+1]$, let $\alpha=0$. Then ${{S}_{0,r}}=[-r-q+1,-r]$. In this case, the only multiple of $q+1$ in ${S}_{0,r}$ is $-(q+1)$. Therefore, we have
\begin{equation*}
\sum\limits_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{f{{(x)}^{(q-1)q}}}=-{{a}^{(1-q)}}\binom{q-1}{q+1-r}{{a}^{-1-q+rq}}=-\binom{q-1}{q+1-r}{{a}^{(r-2)q}}.
\end{equation*}
It follows from the Lucas formula that $\binom{q-1}{q+1-r}\not\equiv 0 \pmod p$ for $r\in [2,q+1]$. Therefore, $\sum_{x\in {{\mathbb{F}}_{{{q}^{2}}}}}{f{{(x)}^{(q-1)q}}}\ne 0$. Hence, when $r \in [2,q+1]$, $f(x)$ is not a permutation polynomial over $\mathbb{F}_{q^{2}}$. The proof is finished.
\end{proof}
In Theorem \ref{th_BiPP2}, letting $r=1$ we get $f(x)={{x}}({{x}^{q-1}}+a)=x^q+ax$, which is a linearized polynomial. Hence Theorem \ref{th_BiPP2} does not provide a new permutation binomial. However, to the authors' best knowledge, the characterization of the sufficient and necessary condition for a polynomial of this form to be a permutation is new.
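Both theorems are easy to confirm by exhaustive search over a small field, reusing the $\gf_{2^m}$ helpers sketched in Section~\ref{sec1} (an illustrative check only; we exercise one small field per theorem):
\begin{verbatim}
# Theorem "th_BiPP1" with q = 8 (q % 3 == 2), so F_{q^2} = GF(2^6):
# f is a PP exactly when a^(2(q-1)) - a^(q-1) + 1 = 0, i.e. a^14 ^ a^7 == 1.
m, q = 6, 8
for a in range(1, 1 << m):
    cond = (gf_pow(a, 14, m) ^ gf_pow(a, 7, m)) == 1
    f = lambda x: gf_mul(x, gf_pow(x, q + 1, m) ^ a, m)
    assert is_perm(f, m) == cond

# Theorem "th_BiPP2" with q = 4, F_{q^2} = GF(2^4): PP iff r = 1, a^(q+1) != 1.
m, q = 4, 4
for r in range(1, q + 2):
    for a in range(1, 1 << m):
        f = lambda x: gf_mul(gf_pow(x, r, m), gf_pow(x, q - 1, m) ^ a, m)
        assert is_perm(f, m) == (r == 1 and gf_pow(a, q + 1, m) != 1)
\end{verbatim}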
\section{Permutation trinomials}
\label{sec3}
\subsection{Known permutation trinomials}
\label{kpt}
First, we review the known permutation trinomials over $\mathbb{F}_{2^m}$. The results before 2014 were summarized in \cite{DQ}; we copy their list into the following theorem for the reader's convenience.
\begin{Th}\label{th_TriPPList}
\begin{enumerate}
\item Some linearized permutation trinomials described in \cite{LN}.
\item $x+x^3+x^5$ over $\gf_{2^m}$, where $m$ is odd (the Dickson polynomial of degree 5).
\item $x+x^5+x^7$ over $\gf_{2^m}$, where $m \not\equiv 0 \pmod{3}$ (the Dickson polynomial of degree 7).
\item $x+x^3+x^{2^{(m+1)/2}+1}$ over $\gf_{2^m}$, where $m$ is odd \cite{HD}.
\item $x^{2^{2k}+1} +(ax)^{2^k+1} + ax^2$ over $\gf_{2^m}$, where $m=3k$ and $a^{(2^m-1)/(2^k-1)} \ne 1$ \cite{BCHO}.
\item $x^{3 \cdot 2^{(m+1)/2}+4}+x^{2^{(m+1)/2}+2}+x^{2^{(m+1)/2}}$ over $\gf_{2^m}$, where $m$ is odd (\cite{Ch} or \cite[Theorem 4]{HD1}).
\item $x^{2^{2k}+1}+x^{2^k+1}+vx$ over $\gf_{2^m}$, where $m=3k$ and $v \in \gf_{2^k}^*$ \cite{TZH}.
\item \cite{LeePark}\label{thm-LeePark} Let $q \equiv 1 \pmod{3}$ be a prime power. Let $\alpha$ be a generator of $\gf_{q}^*$, $s=(q-1)/3$, and let $\omega=\alpha^s$. Define $f(x)=ax^2+bx+c \in \gf_{q}[x]$. Then
$$ h(x):=x^rf(x^s) $$
is a permutation polynomial over $\gf_q$ if and only if the following conditions are satisfied:
\begin{enumerate}
\item $\gcd(r, s)=1$,
\item $f(\omega^i) \ne 0$ for $0 \le i \le 2$,
\item $\log_{\alpha}(f(1)/f(\omega)) \equiv \log_{\alpha}(f(\omega)/f(\omega^2)) \not\equiv r \pmod{3}$.
\end{enumerate}
\end{enumerate}
\end{Th}
The following results are new classes of permutation trinomials constructed in \cite{DQ}.
\begin{Th}\label{th_DQ1} \emph{\cite{DQ}} Let $m>1$ be an odd integer. Then both $x+x^{2^{(m+1)/2}-1}+x^{2^{m}-2^{(m+1)/2}+1}$ and $x+x^{3}+x^{2^{m}-2^{(m+3)/2}+2}$ are permutation polynomials over $\mathbb{F}_{2^{m}}$.
\end{Th}
\begin{Th} \emph{\cite{DQ}} Let $m$ be a positive even integer. Then $x+x^{2^{(m+2)/2}-1}+x^{2^{m}-2^{m/2}+1}$ is a permutation polynomial over $\mathbb{F}_{2^{m}}$.
\end{Th}
\begin{Th} \emph{\cite{DQ}} Let $k$ be a positive integer and let $q$ be a prime power with $q \not\equiv 0 \pmod 3$. Let $m$ be a positive even integer. Then $x+x^{kq^{m/2}-(k-1)}+x^{k+1-kq^{m/2}}$ is a permutation polynomial over $\mathbb{F}_{q^{m}}$ if and only if one of the following three conditions holds:
\begin{enumerate}[(i)]
\item $m \equiv 0 \pmod 4$;
\item $q \equiv 1 \pmod 4$;
\item $m \equiv 2 \pmod 4$, $q \equiv 2 \pmod 3$, and $\mathrm{exp}_{3}(q^{m/2}+1)$, where $\mathrm{exp}_{3}(i)$ denotes the exponent of $3$ in the canonical factorization of $i$.
\end{enumerate}
\end{Th}
Recently, some particular types of permutation trinomials have been determined by Hou. We list the following such permutation trinomials in even characteristic.
\begin{Th} \label{TPPHou1} \emph{\cite{XH4}} Let $f=ax+bx^q+x^{2q-1} \in \mathbb{F}_{q^2}[x]$, where $q$ is even. Then $f$ is a permutation polynomial over $\mathbb{F}_{q^2}$ if and only if one of the following is satisfied.
\begin{enumerate}[(i)]
\item $a=b=0$, $q=2^{2k}$.
\item $ab\neq 0$, $a=b^{1-q}$, $\mathrm{Tr}_{q/2}(b^{-1-q})=0$.
\item $ab(a-b^{1-q})\neq 0$, $\frac{a}{b^2}\in \mathbb{F}_q$, $\mathrm{Tr}_{q/2}(\frac{a}{b^2})=0$, $b^2+a^2b^{q-1}+a=0$.
\end{enumerate}
\end{Th}
\begin{Th} \label{TPPHou2} \emph{\cite{XH2}} Let $q>2$ be even and $f=x+tx^q+x^{2q-1}\in \mathbb{F}_q[x]$, where $t\in \mathbb{F}_q^*$. Then $f$ is a permutation polynomial of $\mathbb{F}_{q^2}$ if and only if $\mathrm{Tr}_{q/2}(\frac{1}{t})=0$.
\end{Th}
\subsection{New classes of permutation trinomials}
In this subsection, we introduce five new classes of permutation trinomials over $\mathbb{F}_{2^m}$. Motivated by \cite{DQ}, we first consider permutation trinomials with trivial coefficients, that is, those whose nonzero coefficients are all $1$.
\begin{Th} \label{theorem1} Let $q=2^{2k}$, where $k$ is a positive integer. Then $f(x)=x+x^{2^{k}}+x^{2^{2k-1}-2^{k-1}+1}$ is a permutation trinomial over $\mathbb{F}_{q}$ if and only if $k \not\equiv 0 \pmod 3$.
\end{Th}
\begin{proof} First of all, we show that $x=0$ is the only solution of $f(x)=0$ in $\mathbb{F}_{q}$ when $k \not\equiv 0 \pmod 3$. If $f(x)=0$, then either $x=0$ or $1+x^{2^{k}-1}+x^{2^{2k-1}-2^{k-1}}=0$. Therefore, we only need to prove that the equation
\begin{equation}
1+x^{2^{k}-1}+x^{2^{2k-1}-2^{k-1}}=0 \label{key18}
\end{equation}
has no solution in $\mathbb{F}_{q}$. Let $y=x^{2^{k}-1}$. Multiplying (\ref{key18}) by $y$, squaring, and simplifying via $y^{2^k+1}=1$, we get
\begin{equation}
y+y^2+y^4=0. \label{key20}
\end{equation}
It follows from Lemma \ref{lem1} that (\ref{key20}) has no nonzero solution in $\mathbb{F}_{q}$ when $k \not\equiv 0 \pmod 3$. Since $x=0$ is not a solution of (\ref{key18}), $f(x)=0$ has only one solution in $\mathbb{F}_{q}$ when $k \not\equiv 0 \pmod 3$.

Next we prove that $f(x)=a$ has at most one solution in $\mathbb{F}_{q}$ for any $a \in \mathbb{F}_{q}^{*}$ if and only if $k \not\equiv 0 \pmod 3$. Let $d=2^{2k-1}-2^{k-1}+1$, and let $s$ be an integer satisfying $1\leq s \leq 2^{2k}-2$ and $4s \equiv 2^k+3 \pmod{2^{2k}-1}$.
Then it is easy to verify that $ds \equiv 1 \pmod{2^{2k}-1}$. From $f(x)=a$ we obtain the equation
\begin{equation}
x+x^{2^{k}}+x^{d}=a. \label{key21}
\end{equation}
Let $u=x^{d}+a=x+x^{2^k}$. Then $u\in \mathbb{F}_{2^{k}}$ and $x=(a+u)^{s}$. Plugging this into (\ref{key21}), we have
\begin{equation*}
(a+u)^s+(a+u)^{2^k s}=u.
\end{equation*}
Raising the above equation to the $4$-th power and simplifying, we get
\begin{equation}
u^4+(a^{2^{k+1}}+a^2)u^2+(a^{2^{k}}+a)^{3}u+a^{2^{k}+1}(a^{2}+a^{2^{k+1}})=0.
\end{equation}
Let $b=a^{2^{k}}+a$ and $c=a^{2^{k}+1}$. Then $b,c \in \mathbb{F}_{2^{k}}$, and the above equation reduces to
\begin{equation*}
u^{4}+b^{2}u^{2}+b^{3}u+b^{2}c=0.
\end{equation*}
If $b=0$, then $u=0$ is the only solution of the above equation, and hence $f(x)=a$ has at most one solution $x=a^{s}$ in $\mathbb{F}_{q}$ for any $a \in \mathbb{F}_{q}^{*}$. If $b \neq 0$, let $u=bv$. Rewriting the above equation, we get
\begin{equation}\label{eqLv}
v^{4}+v^{2}+v=\frac{c}{b^{2}}=\frac{a^{2^k+1}}{(a^{2^k}+a)^2}.
\end{equation}
Since $x=(a+u)^{s}=(a+bv)^s$, it suffices to show that \eqref{eqLv} has at most one solution in $\mathbb{F}_{2^k}$ for any $a \in \mathbb{F}_{q}^{*}$, and by Lemma \ref{lem1} this is the case if and only if $k \not\equiv 0 \pmod 3$. This finishes the proof.
\end{proof}
\begin{Th} \label{theorem2} Let $q=2^{2k}$, where $k>0$ is an odd integer. Then $f(x)=x+x^{2^{k}+2}+x^{2^{2k-1}+2^{k-1}+1}$ is a permutation trinomial over $\mathbb{F}_{q}$.
\end{Th}
\begin{proof} First of all, we show that $x=0$ is the only solution of $f(x)=0$ in $\mathbb{F}_{q}$. If $f(x)=0$, then either $x=0$ or $1+{{x}^{{{2}^{k}}+1}}+{{x}^{{{2}^{2k-1}}+{{2}^{k-1}}}}=0$. Therefore, we only need to prove that the equation
\begin{equation}
1+{{x}^{{{2}^{k}}+1}}+{{x}^{{{2}^{2k-1}}+{{2}^{k-1}}}}=0 \label{key12}
\end{equation}
has no solution in $\mathbb{F}_{q}$. Squaring (\ref{key12}), we have
\begin{equation*}
1+x^{2^{k+1}+2}+x^{2^{k}+1}=0.
\end{equation*}
Let $y=x^{2^{k}+1}$. Then $y \in \mathbb{F}_{2^{k}}$, and we get
\begin{equation}
1+y+y^2=0. \label{key13}
\end{equation}
It is clear that $y=1$ is not a solution of (\ref{key13}). Multiplying both sides by $1+y$, we obtain $y^3=1$. Since $k$ is odd, $\mathrm{gcd}(2^{k}-1,3)=1$, so $y^3=1$ forces $y=1$, which has been excluded. Therefore (\ref{key13}) has no solution in $\mathbb{F}_{2^{k}}$, and $x=0$ is the only solution of $f(x)=0$ in $\gf_q$.

Next, we prove that $f(x)=a$ has at most one solution in $\mathbb{F}_{q}$ for any $a \in \mathbb{F}_{q}^{*}$. Consider the equation
\begin{equation}
1+x^{2^{k}+1}+x^{2^{2k-1}+2^{k-1}}=\frac{a}{x}. \label{key14}
\end{equation}
Let $u=\frac{a}{x}$. Then $u=1+x^{2^{k}+1}+x^{2^{2k-1}+2^{k-1}}\in \mathbb{F}^{*}_{2^k}$. Plugging $x=\frac{a}{u}$ into the square of (\ref{key14}), we have
\begin{equation}
1+\frac{{{a}^{{{2}^{k+1}}+2}}}{{{u}^{4}}}+\frac{{{a}^{{{2}^{k}}+1}}}{{{u}^{2}}}={{u}^{2}}. \label{key17}
\end{equation}
Let $y=\frac{a^{2^{k}+1}}{u^{2}}$. Then $y \in \mathbb{F}_{2^{k}}$. Plugging this into (\ref{key17}), we get $y^{3}+y^{2}+y=a^{2^{k}+1}$. Since $g(y)=y^{3}+y^{2}+y=(y+1)^{3}+1$ is a permutation polynomial over $\mathbb{F}_{2^{k}}$ when $k>0$ is odd, we conclude that $f(x)=a$ has at most one solution in $\mathbb{F}_q$ for any $a \in \mathbb{F}_{q}^{*}$. This completes the proof.
\end{proof}
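Both theorems can be verified exhaustively for small $k$ with the helpers from Section~\ref{sec1} (again an illustrative sketch only):
\begin{verbatim}
# Theorem "theorem1": f permutes GF(2^{2k}) iff k % 3 != 0.
for k in (1, 2, 3, 4):
    m = 2 * k
    e = 2**(2*k - 1) - 2**(k - 1) + 1
    f = lambda x: x ^ gf_pow(x, 2**k, m) ^ gf_pow(x, e, m)
    assert is_perm(f, m) == (k % 3 != 0)

# Theorem "theorem2" (k odd): check k = 1 and k = 3.
for k in (1, 3):
    m = 2 * k
    f = lambda x: (x ^ gf_pow(x, 2**k + 2, m)
                     ^ gf_pow(x, 2**(2*k - 1) + 2**(k - 1) + 1, m))
    assert is_perm(f, m)
\end{verbatim}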
Next we introduce three new classes of permutation trinomials of the form $f(x)=x+ax^{\alpha}+bx^{\beta}\in \gf_{2^m}[x]$, where $a, b\in \gf_{2^m}^\ast$.
\begin{Th} \label{th_Tripp3} Let $q=2^{2k}$ with $k>0$ an integer, and let $f(x)=x+ax^{2^{k+1}-1}+a^{2^{k-1}}x^{2^{2k}-2^{k}+1}$, where $a\in \mathbb{F}_{q}$ has order $2^{k}+1$. Then $f(x)$ is a permutation trinomial over $\mathbb{F}_{q}$.
\end{Th}
\begin{proof} First of all, we show that $x=0$ is the only solution of $f(x)=0$. If $f(x)=0$, then either $x=0$ or $1+ax^{2^{k+1}-2}+a^{2^{k-1}}x^{2^{2k}-2^{k}}=0$. Therefore, we only need to prove that the equation
\begin{equation}
1+ax^{2^{k+1}-2}+a^{2^{k-1}}x^{2^{2k}-2^{k}}=0 \label{key23}
\end{equation}
has no solution in $\mathbb{F}_{q}$. Adding $(\ref{key23})$ to $(\ref{key23})^{2^{k+1}}$, we have
\begin{equation*}
x^{3\cdot 2^{k}-3}=a^{3\cdot 2^{k-1}}.
\end{equation*}
Therefore, $x^{2^{k}-1}=a^{2^{k-1}}$ or $x^{2^{k}-1}=a^{2^{k-1}}\omega$, where $\omega^{2}+\omega+1=0$. If $x^{2^{k}-1}=a^{2^{k-1}}$, then plugging this into (\ref{key23}) gives $1+a^{2^k+1}+a^{2^{2k-1}+2^{k-1}}=0$. Recalling that the order of $a$ is $2^k+1$, we obtain $1=0$, a contradiction. If $x^{2^{k}-1}=a^{2^{k-1}}\omega$, then plugging this into (\ref{key23}) gives $\omega^{2^{k}}+\omega^{2}+1=0$, hence $\omega^{2^{k}}=\omega$, which means that $k$ is even. However, in this case $(a^{2^{k-1}}\omega)^{2^k+1}=\omega^{2^k+1}=\omega^2\neq 1$, so the equation $x^{2^{k}-1}=a^{2^{k-1}}\omega$ has no solution in $\mathbb{F}_q$ either, again a contradiction. Hence, $x=0$ is the only solution of $f(x)=0$ in $\gf_q$.

Next we prove that $f(x)=c$ has at most one solution in $\mathbb{F}_q$ for any $c \in \mathbb{F}_{q}^{*}$. Consider the equation
\begin{equation*}
x+ax^{2^{k+1}-1}+a^{2^{k-1}}x^{2-2^{k}}=c.
\end{equation*}
Squaring, we have
\begin{equation}
x^{2}+a^{2}x^{2^{k+2}-2}+a^{2^{k}}x^{4-2^{k+1}}=c^{2}. \label{key25}
\end{equation}
Computing $(\ref{key25})+a\cdot(\ref{key25})^{2^k}$ and simplifying via $a^{2^{k}+1}=1$, we have
\begin{equation*}
x^{2}+ax^{2^{k+1}}=c^{2}+ac^{2^{k+1}}.
\end{equation*}
Let $u=x^{2}+c^{2}$. Then $u=au^{2^{k}}$. Plugging $x^2=u+c^2$ into (\ref{key25}), we get
\begin{equation}
u+a^2(u+c^2)^{2^{k+1}-1}+a^{2^k}(u+c^2)^{2-2^{k}}=0.
\end{equation}
Multiplying both sides of the above equation by $(u+c^2)^{2^{k}+1}$ and then substituting $u^{2^{k}}=u/a$ and $a^{2^k+1}=1$, one obtains, after simplification,
\begin{equation}
u^3+\alpha u+\beta=0, \label{key27}
\end{equation}
where $\alpha=ac^{2^{k+1}+2}+c^{4}+a^{2}c^{2^{k+2}}$ and $\beta=a^{3}c^{3\cdot 2^{k+1}}+c^{6}$. Now let $\gamma=ac^{2^{k+1}}+c^{2}$. Then $\beta=\gamma\alpha$. We distinguish two cases.

{\bf Case $\gamma=0$.} Then $\alpha=c^{4}$ and $\beta=0$. Therefore, the solutions of (\ref{key27}) are $u=0$ and $u=c^{2}$, so only $x=c$ and $x=0$ can be solutions of the equation $f(x)=c$. However, $f(0)=0 \neq c$. Hence, in this case, $f(x)=c$ has at most one solution, namely $x=c$, for each nonzero $c \in \mathbb{F}_{q}$.

{\bf Case $\gamma\neq 0$.} If $\alpha=0$, then $u=0$ is the only solution of (\ref{key27}). If $\alpha \neq 0$, let $u=\alpha^{1/2} \varepsilon$. Moreover, $\alpha^{2^k}=\frac{\alpha}{a^2}$ and $\varepsilon^{2^k}=\frac{u^{2^k}}{(\alpha^{1/2})^{2^k}}=\frac{u}{a}\cdot\frac{a}{\alpha^{1/2}}=\varepsilon$, so $\varepsilon \in \mathbb{F}_{2^{k}}$. Dividing (\ref{key27}) by $\alpha^{\frac{3}{2}}$ yields
\begin{equation*}
\varepsilon^{3}+\varepsilon+\frac{\gamma}{\alpha^{1/2}}=0.
\end{equation*}
It is routine to verify that $\frac{\gamma}{\alpha^{1/2}} \in \mathbb{F}_{2^{k}}$.
Hence $h(\varepsilon)= \varepsilon^{3}+\varepsilon+\frac{\gamma}{\alpha^{1/2}} \in \mathbb{F}_{2^{k}}[\varepsilon]$. According to Lemma \ref{san1}, $h(\varepsilon)=0$ has exactly one solution in $\mathbb{F}_{2^{k}}$ if and only if ${{\operatorname{Tr}}_{k}}\left(\frac{\alpha }{{{\gamma }^{2}}}\right)={\operatorname{Tr}_k}(1)+1$. Next, we show that ${{\operatorname{Tr}}_{k}}\left(\frac{\alpha }{{{\gamma }^{2}}}\right)={\operatorname{Tr}_k}(1)+1$ always holds. We have
\begin{eqnarray*}
{{\operatorname{Tr}}_{k}}\left(\frac{\alpha }{{{\gamma }^{2}}}\right) &=& {{\operatorname{Tr}}_{k}}\left(\frac{a{{c}^{{{2}^{k+1}}+2}}+{{c}^{4}}+{{a}^{2}}{{c}^{{{2}^{k+2}}}}}{{{c}^{4}}+{{a}^{2}}{{c}^{{{2}^{k+2}}}}}\right)\\
&=& {{\operatorname{Tr}}_{k}}\left(\frac{a{{c}^{{{2}^{k+1}}+2}}}{{{c}^{4}}+{{a}^{2}}{{c}^{{{2}^{k+2}}}}}+1\right)\\
&=& {\operatorname{Tr}_k}(1)+{{\operatorname{Tr}}_{k}}\left(\frac{a{{c}^{{{2}^{k+1}}+2}}}{{{c}^{4}}+{{a}^{2}}{{c}^{{{2}^{k+2}}}}}\right)
\end{eqnarray*}
and
\begin{equation*}
{{\operatorname{Tr}}_{k}}\left(\frac{a{{c}^{{{2}^{k+1}}+2}}}{{{c}^{4}}+{{a}^{2}}{{c}^{{{2}^{k+2}}}}}\right)= {{\operatorname{Tr}}_{k}}\left(\frac{a{{c}^{{{2}^{k+1}}-2}}}{1+{{a}^{2}}{{c}^{{{2}^{k+2}}-4}}}\right)= {{\operatorname{Tr}}_{k}}\left(\frac{1}{1+a{{c}^{{{2}^{k+1}}-{2}}}}+\frac{1}{{{(1+a{{c}^{{{2}^{k+1}}-{2}}})}^{2}}}\right).
\end{equation*}
Let $v=\frac{1}{1+ac^{2^{{k+1}}-2}}$. Then $v^{2^{k}}=\frac{1}{1+a^{2^{k}}c^{2-2^{k+1}}}=v+1$. Therefore we have
\begin{equation*}
{{\operatorname{Tr}}_{k}}\left(\frac{a{{c}^{{{2}^{k+1}}+2}}}{{{c}^{4}}+{{a}^{2}}{{c}^{{{2}^{k+2}}}}}\right)={{\operatorname{Tr}}_{k}}\left(v+{{v}^{2}}\right)=v+{{v}^{{{2}^{k}}}}=1.
\end{equation*}
Hence, ${{\operatorname{Tr}}_{k}}\left(\frac{\alpha }{{{\gamma }^{2}}}\right)={\operatorname{Tr}_k}(1)+1$, and consequently ${\operatorname{Tr}_k}\left(\frac{\alpha^{1/2}}{\gamma}\right)=\left({{\operatorname{Tr}}_{k}}\left(\frac{\alpha }{{{\gamma }^{2}}}\right)\right)^{\frac{1}{2}} \neq {\operatorname{Tr}_k}(1)$. Hence, $h(\varepsilon)=0$ has exactly one solution in $\mathbb{F}_{2^{k}}$ by Lemma \ref{san1}. Therefore $f(x)=c$ has at most one solution in $\mathbb{F}_q$ for any $c \in \mathbb{F}_{q}^{*}$, and the proof is complete.
\end{proof}
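Theorem \ref{th_Tripp3} can be confirmed for small parameters with the helpers from Section~\ref{sec1} (an illustrative sketch; we take $k=2$, so $q=16$ and $a$ has multiplicative order $2^k+1=5$):
\begin{verbatim}
# Theorem "th_Tripp3" with k = 2: q = 16, exponents 2^{k+1}-1 = 7 and
# 2^{2k}-2^k+1 = 13, coefficient a^{2^{k-1}} = a^2.
m = 4
for a in range(2, 1 << m):
    if gf_pow(a, 5, m) == 1:          # a != 1 and a^5 = 1, so ord(a) = 5
        f = lambda x: (x ^ gf_mul(a, gf_pow(x, 7, m), m)
                         ^ gf_mul(gf_mul(a, a, m), gf_pow(x, 13, m), m))
        assert is_perm(f, m)
\end{verbatim}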
\begin{Th} \label{th_Tripp4} Let $q=2^{2k+1}$ with $k>0$ an integer, and let $f(x)=x+ax^{2^{k+1}-1}+a^{2^{2k+1}-2^{k+1}-2}x^{2^{k+1}+1}$, where $a \in \mathbb{F}_{q}$. Then $f(x)$ is a permutation trinomial over $\mathbb{F}_{q}$.
\end{Th}
\begin{proof} The case $a=0$ is trivial. Therefore, we assume $a\neq 0$ in the following. First of all, we show that $x=0$ is the only solution of $f(x)=0$ in $\mathbb{F}_q$. If $f(x)=0$, then either $x=0$ or $1+ax^{2^{k+1}-2}+a^{-2^{k+1}-1}x^{2^{k+1}}=0$. Therefore, we only need to prove that the equation
\begin{equation}
1+ax^{2^{k+1}-2}+a^{-2^{k+1}-1}x^{2^{k+1}}=0 \label{key1}
\end{equation}
has no solution in $\mathbb{F}_{q}$. Raising (\ref{key1}) to the $2^{k}$-th power, we have
\begin{equation}
1+a^{2^k}x^{1-2^{k+1}}+a^{-2^k-1}x=0. \label{key2}
\end{equation}
Multiplying both sides of (\ref{key2}) by $a^{2^k+1}x^{2^{k+1}}$, we have
\begin{equation}
(x+a^{2^k+1})^{2^{k+1}+1}=a^{2^{k+1}+2^{k}+2}. \label{key3}
\end{equation}
It is clear that $\mathrm{gcd}(2^{k+1}+1,2^{2k+1}-1)=1$, hence $x=0$ is the only solution of (\ref{key3}) in $\gf_q$. However, $x=0$ is not a solution of (\ref{key1}). Hence, (\ref{key1}) has no solution in $\mathbb{F}_{q}$.

Then we prove that $f(x)=c$ has at most one solution in $\mathbb{F}_{q}$ for any $c \in \mathbb{F}_{q}^{*}$. Consider the equation
\begin{equation}
x+ax^{2^{k+1}-1}+a^{-2^{k+1}-1}x^{2^{k+1}+1}+c=0. \label{key4}
\end{equation}
Computing $a^{2^k}\cdot(\ref{key4})+x^{2^k}\cdot(\ref{key4})^{2^k}$, we have
\begin{equation}
a^{2^k+1}x^{2^{k+1}-1}+x^{2^{k+1}}+c^{2^k}x^{2^k}+a^{2^k}c=0. \label{key9}
\end{equation}
Then, computing $(c+x)x^{2^{k+1}-1}\cdot(\ref{key9})^{2^{k+1}}+a^{2^{k+1}+1}x\cdot(\ref{key4})$ and simplifying, we get
\begin{equation}
(a^{2^{k+1}+2}+ac^{2^{k+1}}+c^2)x=ac^{2^{k+1}+1}. \label{key11}
\end{equation}
Since $a, c\in \gf_q^\ast$, equation (\ref{key11}) has at most one solution in $\gf_q$. Therefore $f(x)=c$ has at most one solution in $\mathbb{F}_q$ for any $c \in \mathbb{F}_{q}^{*}$, and we are done.
\end{proof}
Letting $a=1$ in the above theorem gives $f(x)=x+x^{2^{k+1}-1}+x^{2^{k+1}+1}$ and $f(x^{2^{k+1}+2})=x^{2^{k+1}}+x^{2^{k+1}+2}+x^{3\cdot2^{k+1}+4}$, which is the permutation trinomial in Theorem \ref{th_TriPPList}(6). Hence Theorem \ref{th_Tripp4} is a generalization of a known permutation trinomial.

A further comment about Theorem \ref{th_Tripp4} is as follows. Just after our submission, we noticed that this class of permutation trinomials was also introduced in a very recent paper. In \cite{MZFG}, Ma et al. presented several classes of new permutation polynomials, including two classes of permutation trinomials, namely \cite[Theorem 5.1 and Theorem 5.2]{MZFG}. It can be readily verified that Theorem \ref{th_Tripp4} is equivalent to \cite[Theorem 5.1]{MZFG}, though our proof is different and a bit shorter.
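Theorem \ref{th_Tripp4} is also easy to check exhaustively for small $k$ (illustrative sketch, same helpers as before):
\begin{verbatim}
# Theorem "th_Tripp4" over GF(2^{2k+1}) for k = 1, 2 and all a != 0.
for k in (1, 2):
    m = 2*k + 1
    ea = 2**m - 2**(k + 1) - 2        # exponent of a in the third coefficient
    for a in range(1, 1 << m):
        f = lambda x: (x ^ gf_mul(a, gf_pow(x, 2**(k+1) - 1, m), m)
                         ^ gf_mul(gf_pow(a, ea, m), gf_pow(x, 2**(k+1) + 1, m), m))
        assert is_perm(f, m)
\end{verbatim}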
The last class of permutation trinomials is a generalization of the second permutation trinomial in Theorem \ref{th_DQ1} (\cite[Theorem 2.2]{DQ}).
\begin{Th} \label{th_Tripp5} Let $q=2^{2k+1}$ and $f(x)=x+ax^3+a^{2^{2k+1}-2^{k+1}}x^{2^{2k+1}-2^{k+2}+2}$, where $a \in \mathbb{F}_{q}$. Then $f(x)$ is a permutation trinomial over $\mathbb{F}_{q}$. \label{key29}
\end{Th}
\begin{proof} The case $a=0$ is trivial. If $a=1$, the statement reduces to \cite[Theorem 2.2]{DQ}. Therefore, we assume $a \neq 0, 1$ in the following. Let $y=x^{2^{k+1}}$ and $b=a^{2^{k+1}}$. Then $y^{2^{k+1}}=x^{2}$ and $b^{2^{k+1}}=a^{2}$. For all $x \in \mathbb{F}_{q}^{*}$, we have
\begin{equation}
f(x)=x+ax^3+\frac{a}{b}\cdot\frac{x^3}{y^2}=\frac{x(abx^2y^2+by^2+ax^2)}{by^2}. \label{key30}
\end{equation}
First, we show that $x=0$ is the only solution of $f(x)=0$ in $\mathbb{F}_q$. From (\ref{key30}), we only need to prove that the equation
\begin{equation}
abx^2y^2+by^2+ax^2=0 \label{key31}
\end{equation}
has no solution in $\mathbb{F}_{q}^\ast$. Raising (\ref{key31}) to the $2^{k+1}$-th power, we have
\begin{equation}
ba^{2}x^4y^2+a^{2}x^4+by^2=0. \label{key32}
\end{equation}
Computing $(\ref{key31})^{2}+(\ref{key32})$, we have
\begin{equation*}
(1+ax^{2})^{2^{k+1}+2}=0.
\end{equation*}
Then $x^{2}=\frac{1}{a}$ and $y^{2}=\frac{1}{b}$. Plugging these into (\ref{key31}), we get $1=0$, a contradiction.

If $f(x)$ is not a permutation polynomial of $\mathbb{F}_{q}$, then there exist $x \in \mathbb{F}_{q}^{*}$ and $c \in \mathbb{F}_{q}^{*}$ such that $f(x)=f((1+c)x)$. Let $d=c^{2^{k+1}}$. It is clear that $c,d\neq 0,1$ and $c\neq d$. Then
$$\frac{x(b{{y}^{2}}+ab{{x}^{2}}{{y}^{2}}+a{{x}^{2}})}{b{{y}^{2}}}= \frac{(1+c)x(b{{(1+d)}^{2}}{{y}^{2}}+ab{{(1+c)}^{2}}{{(1+d)}^{2}}{{x}^{2}}{{y}^{2}}+a{{(1+c)}^{2}}{{x}^{2}})}{b{{(1+d)}^{2}}{{y}^{2}}}.$$
After simplification, we get
\begin{equation}
A_1x^2y^2+A_2y^2+A_3x^2=0, \label{key33}
\end{equation}
where
\begin{eqnarray*}
A_1 &=& abc(1+d)^2(c^2+c+1), \\
A_2 &=& bc(1+d)^2, \\
A_3 &=& a[(1+d)^2+(1+c)^3].
\end{eqnarray*}
Raising (\ref{key33}) to the $2^{k+1}$-th power, we have
\begin{equation}
A_1^{2^{k+1}}x^4y^2+A_3^{2^{k+1}}y^2 + A_2^{2^{k+1}}x^4=0. \label{key34}
\end{equation}
Computing $(\ref{key33})\cdot(A_{1}^{2^{k+1}}x^{4}+A_3^{2^{k+1}})+(\ref{key34})\cdot(A_{1}x^{2}+A_{2})$, we get
\begin{equation} \label{key35}
B_1x^4+B_2x^2 + B_3=0,
\end{equation}
where
\begin{eqnarray*}
B_1 &=& A_3A_1^{2^{k+1}}+A_1A_2^{2^{k+1}}=a^3bd^2(1+c)^4[(1+c)^3+(1+d)^3], \\
B_2 &=& A_2^{2^{k+1}+1}=a^2bcd(1+c)^4(1+d)^2, \\
B_3 &=& A_3^{2^{k+1}+1}.
\end{eqnarray*}
It is routine to verify that $A_1,A_2,B_1,B_2 \neq 0$. Let ${{x}^{2}}=\frac{{{B}_{2}}}{{{B}_{1}}}\varepsilon$. Plugging this into (\ref{key35}), we have
\begin{equation}
{{\varepsilon }^{2}}+\varepsilon +D=0, \label{key36}
\end{equation}
where $D=\frac{{{B}_{1}}{{B}_{3}}}{B_{2}^{2}}$. As for $D$, we have
$$D=\frac{{{B}_{1}}{{B}_{3}}}{B_{2}^{2}}=\frac{A_{3}^{{{2}^{k+1}}+1}({{A}_{3}}A_{1}^{{{2}^{k+1}}}+{{A}_{1}}A_{2}^{{{2}^{k+1}}})}{A_{2}^{{{2}^{k+2}}+2}}=\frac{{{A}_{1}}A_{3}^{{{2}^{k+1}}+1}}{A_{2}^{{{2}^{k+1}}+2}}+\frac{A_{1}^{{{2}^{k+1}}}A_{3}^{{{2}^{k+1}}+2}}{A_{2}^{{{2}^{k+2}}+2}}={{D}_{1}}+D_{1}^{{{2}^{k+1}}},$$
where
$${{D}_{1}}=\frac{{{A}_{1}}A_{3}^{{{2}^{k+1}}+1}}{A_{2}^{{{2}^{k+1}}+2}}=\frac{{{A}_{1}}{{B}_{3}}}{{{A}_{2}}{{B}_{2}}}=\frac{(c^2+c+1)[(1+c)^4+(1+d)^{3}][(1+d)^2+(1+c)^3]}{cd(1+d)^2(1+c)^4}.$$
Now we claim that ${{\operatorname{Tr}}_{2k+1}}({{D}_{1}})=1$. This is the same claim that appeared in the proof of \cite[Theorem 2.2]{DQ}; its proof is somewhat long and intricate, so we omit it here and refer the interested reader to \cite{DQ} for details. Raising (\ref{key36}) to the $2^{i}$-th power for $i=0,1,\cdots,k$ and summing, we get
$${{\varepsilon }^{{{2}^{k+1}}}}=\varepsilon +\sum\limits_{i=0}^{k}{{{({{D}_{1}}+D_{1}^{{{2}^{k+1}}})}^{{{2}^{i}}}}}={\varepsilon} +\sum\limits_{i=0}^{2k+1}{D_{1}^{{{2}^{i}}}}={\varepsilon} +{{D}_{1}}+{{\operatorname{Tr}}_{2k+1}}({{D}_{1}})={\varepsilon} +{{D}_{1}}+1$$
and
$${{\varepsilon }^{{{2}^{k+1}}+1}}=\varepsilon (\varepsilon +{{D}_{1}}+1)={{D}_{1}}\varepsilon +D.$$
Substituting ${{x}^{2}}=\frac{{{B}_{2}}}{{{B}_{1}}}\varepsilon$, ${{y}^{2}}=\left(\frac{{{B}_{2}}}{{{B}_{1}}}\right)^{2^{k+1}}\varepsilon^{2^{k+1}}$ and the above two equations into (\ref{key33}), we obtain
$$\frac{{{A}_{1}}B_{2}^{{{2}^{k+1}}+1}}{B_{1}^{{{2}^{k+1}}+1}}({{D}_{1}}\varepsilon +D)+\frac{{{A}_{2}}B_{2}^{{{2}^{k+1}}}}{B_{1}^{{{2}^{k+1}}}}(\varepsilon +{{D}_{1}}+1)+\frac{{{A}_{3}}{{B}_{2}}}{{{B}_{1}}}\varepsilon =0.$$
Multiplying both sides by $B_1^{2^{k+1}+1}$ and using $B_2^{2^{k+1}}=A_2^{2^{k+1}+2}=A_2B_2$, we have
$${{C}_{1}}\varepsilon +{{C}_{2}}=0,$$
where
\begin{eqnarray*}
C_1 &=& A_1A_2B_2D_1+A_2^2B_1+A_3B_1^{2^{k+1}}=A_1A_2^{2^{k+1}+2}, \\
C_2 &=& A_1A_2B_2D+A_2^2B_1(D_1+1)=A_2^2B_1.
\end{eqnarray*}
Therefore, $\varepsilon =\frac{{{C}_{2}}}{{{C}_{1}}}=\frac{{{B}_{1}}}{{{A}_{1}}A_{2}^{{{2}^{k+1}}}}= \frac{{{A}_{2}}{{B}_{1}}}{{{A}_{1}}{{B}_{2}}}$.
Plugging this into (\ref{key36}), we have
$${{B}_{1}}A_{2}^{2}+{{A}_{1}}{{A}_{2}}{{B}_{2}}=A_{1}^{2}{{B}_{3}},$$
that is, $A_{1}^{{{2}^{k+1}}}A_{2}^{2}=A_{1}^{2}A_{3}^{{{2}^{k+1}}}$. Substituting the definitions of $A_1$, $A_2$, $A_3$ into the above equation, we get
$$a^2bd(1+c)^4(d^2+d+1)\cdot b^2c^2(1+d)^4=a^2b^2c^2(1+d)^4(c^2+c+1)^2\cdot b[(1+c)^4+(1+d)^3].$$
Simplifying, we obtain ${{(1+d)}^{3}}={{(1+c)}^{6}}$. Since $\mathrm{gcd}(3,2^{2k+1}-1)=1$, we get $d=c^{2}$. Raising this to the $2^{k+1}$-th power, we obtain $c^2=d^2=d$, which means $d=0, 1$. This contradicts $d\neq 0,1$. Hence, the proof is complete.
\end{proof}
\subsection{Multiplicative inequivalence of the new permutation trinomials with known ones}
In this subsection, we briefly discuss the multiplicative inequivalence of the new permutation trinomials with the known ones. First, the last three classes of permutation trinomials do not have trivial coefficients. As far as the authors know, there exist only five such classes of permutation trinomials over $\gf_{2^m}$: the functions in Theorem \ref{th_TriPPList} (5), (7), (8) and in Theorems \ref{TPPHou1} and \ref{TPPHou2}. None of our newly constructed permutation trinomials is multiplicatively equivalent to those in Theorem \ref{th_TriPPList} (5), (7), since the latter permutations are defined over $\gf_{2^{3k}}$. The three conditions in Theorem \ref{th_TriPPList} (8) are further investigated in \cite{LeePark}; in general, no simple conditions can make $h(x)$ a permutation trinomial, hence it is multiplicatively inequivalent to any of our newly constructed permutation trinomials. The permutations in Theorems \ref{th_Tripp4} and \ref{th_Tripp5} are multiplicatively inequivalent to those in Theorems \ref{TPPHou1} and \ref{TPPHou2}, since they are defined over fields of odd dimension, while the latter permutations are defined over fields of even dimension. It can be easily verified that the function in Theorem \ref{th_Tripp3} is multiplicatively inequivalent to those in Theorems \ref{TPPHou1} and \ref{TPPHou2}. Hence the permutation trinomials in Theorems \ref{th_Tripp3}, \ref{th_Tripp4} and \ref{th_Tripp5} are indeed multiplicatively inequivalent to the known permutations. Furthermore, they are pairwise multiplicatively inequivalent.

In \cite{MZFG}, two classes of permutation trinomials were introduced. The first one is equivalent to Theorem \ref{th_Tripp4}. The second one is as follows.
\begin{Th}\cite[Theorem 5.2]{MZFG}\label{th_MZF2} Let $m>1$ be an odd integer such that $m=2k-1$. Then $f(x) = x + ux^{2^k-1} + u^{2^k} x^{2^m-2^{k+1}+2}$, $u\in \gf_{2^m}$, is a permutation polynomial over $\gf_{2^m}$.
\end{Th}
We only need to check the multiplicative equivalence between the permutation trinomials in Theorems \ref{th_Tripp5} and \ref{th_MZF2}. It can be easily verified that they are indeed multiplicatively inequivalent.

Now let us discuss the first two classes of permutation trinomials, those with trivial coefficients. We need to show that they are multiplicatively inequivalent to all the known classes. To this end, we used a Magma program to confirm this conclusion on at least one small field $\gf_{2^m}$, $4 \le m \le 10$. Consequently, these two classes of permutation trinomials are also new. Furthermore, these two classes are multiplicatively inequivalent to each other.
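Theorem \ref{th_Tripp5} can likewise be confirmed exhaustively for small parameters (an illustrative sketch using the helpers from Section~\ref{sec1}):
\begin{verbatim}
# Theorem "th_Tripp5" over GF(2^{2k+1}) for k = 1, 2 and all a.
for k in (1, 2):
    m = 2*k + 1
    ea = 2**m - 2**(k + 1)        # exponent of a in the third coefficient
    ex = 2**m - 2**(k + 2) + 2    # exponent of x in the third term
    for a in range(1, 1 << m):
        f = lambda x: (x ^ gf_mul(a, gf_pow(x, 3, m), m)
                         ^ gf_mul(gf_pow(a, ea, m), gf_pow(x, ex, m), m))
        assert is_perm(f, m)
\end{verbatim}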
\section{Conclusions}
\label{sec4}
Permutation binomials and permutation trinomials over finite fields are both interesting and important, in theory and in many applications. In this paper, we presented several new classes of permutation binomials and permutation trinomials. These functions extend the list of known such permutations. However, computer experiments suggest that there should be many more classes of permutation binomials and permutation trinomials. A complete determination of all permutation binomials or all permutation trinomials over finite fields seems to be out of reach for the time being. The functions constructed here lay a foundation for further research. Finally, we would like to mention that the constructed functions have many applications. For instance, they can be employed in linear codes \cite{CCJ} and cyclic codes \cite{CD}, and they can also be used to construct highly nonlinear functions such as bent and semi-bent functions. For more details, please refer to the last paragraph of \cite{DQ} and the references therein.
{ "timestamp": "2015-09-01T02:09:21", "yymm": "1508", "arxiv_id": "1508.07590", "language": "en", "url": "https://arxiv.org/abs/1508.07590", "abstract": "Permutation polynomials over finite fields play important roles in finite fields theory. They also have wide applications in many areas of science and engineering such as coding theory, cryptography, combinatorial design, communication theory and so on. Permutation binomials and trinomials attract people's interest due to their simple algebraic form and additional extraordinary properties. In this paper, several new classes of permutation binomials and permutation trinomials are constructed. Some of these permutation polynomials are generalizations of known ones.", "subjects": "Information Theory (cs.IT); Number Theory (math.NT)", "title": "New Classes of Permutation Binomials and Permutation Trinomials over Finite Fields", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.980875959894864, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.707940566145491 }
https://arxiv.org/abs/math/0501523
Cohomological dimension theory of compact metric spaces
This is a detailed introductory survey of the cohomological dimension theory of compact metric spaces.
\section{Introduction}
The cohomological dimension theory has connections with many different areas of mathematics: dimension theory, topology of manifolds, group theory, functional rings and others. It was founded by P.S.~Alexandroff in the late 1920s. Many famous topologists have contributed to the theory; among them are Hopf, Pontryagin, Bockstein, Borsuk, Dyer, Boltyanskij, Kodama, Kuzminov and Sitnikov. There are only a few introductory and survey texts on the theory. The book by Alexandroff, `Introduction to homological dimension theory and general combinatorial topology' \cite{A1}, is written in old-fashioned language and is hardly readable. There are surveys by Kodama (Appendix in \cite{Na}) and by Kuzminov \cite{Ku}. The first part of Kuzminov's paper is devoted to compact metric spaces and is excellent reading. We do not consider noncompact spaces in this paper, since the cohomological dimension of noncompact spaces behaves differently and that part of the theory is not completely developed. A very special case of the cohomological dimension theory is the case of integer coefficients. An excellent survey on this case was written by Walsh \cite{WA}, where a detailed proof of the Edwards resolution theorem was first published. In 1988, twenty years after Kuzminov's survey, I wrote a sequel to it \cite{Dr1}. Since then ten years have passed, new results have appeared, and a new understanding of the old results has ripened. So the time has come for an updated survey. A new compressed survey was given by Dydak \cite{Dy3}, where the main applications of the cohomological dimensions are discussed. Here we present a detailed introductory survey of the theory. This survey and Dydak's have the same origin: they appeared as notes for the joint book that we planned to write \cite{D-D-W}. We still hope that someday we will accomplish that.

In this paper we assume that the reader is familiar with the basic elements of homotopy theory and with homology and cohomology theories, including \v{C}ech cohomology, Steenrod homology and extraordinary (co)homologies. Some knowledge of dimension theory and the theory of absolute neighborhood retracts will be useful. Also, we do not discuss here any applications of the cohomological dimension theory, even to dimension theory; the interested reader can find a discussion of some applications in \cite{Dy3}.

I am thankful to Topology Atlas for inviting me to write this survey. I am also thankful to NSF, DMS-9971709, for the support.

\section{General properties of the cohomological dimension}
We define {\it the cohomological dimension} of a topological space $X$ with respect to an abelian group $G$ as the largest number $n$ such that there exists a closed subset $A\subset X$ with $\check H^n(X,A;G)\neq 0$. We denote it by $\operatorname{dim}_G X=n$. If there is no such number, we set $\operatorname{dim}_G X=\infty$. This definition makes sense for any space; we restrict ourselves to compact metric spaces (we call them compacta). Actually, everywhere in this paper one can replace compact spaces by $\sigma$-compact ones, i.e.\ countable unions of compacta.
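As a quick illustration of the definition (a standard computation, added here for concreteness; it uses only the strong excision property of \v{C}ech cohomology for compact pairs, $\check H^{*}(X,A;G)\cong \tilde H^{*}(X/A;G)$), take $X=[0,1]$ and $A=\{0,1\}$. Then
$$\check H^{1}([0,1],\{0,1\};G)\cong \tilde H^{1}([0,1]/\{0,1\};G)=\tilde H^{1}(S^{1};G)=G\neq 0,$$
while $\check H^{k}(X,A;G)=0$ for all $k\geq 2$ and all closed $A\subset[0,1]$. Hence $\operatorname{dim}_{G}[0,1]=1$ for every nontrivial group $G$, in agreement with Example 1.3(4) below.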
\begin{theorem} For any compactum $X$ and any abelian group $G$ the following conditions are equivalent:
\begin{enumerate}
\item $\operatorname{dim}_GX\leq n$;
\item $\check H^{n+1}(X,A;G)=0$ for all closed $A\subset X$;
\item $H_c^{n+1}(U;G)=0$ for all open $U\subset X$;
\item for every closed subset $A\subset X$ the inclusion homomorphism $\check H^n(X;G)\to\check H^n(A;G)$ is an epimorphism;
\item $K(G,n)$ is an absolute extensor for $X$, $K(G,n)\in AE(X)$, i.e.\ every continuous map $f\colon A\to K(G,n)$ of a closed subset $A\subset X$ has a continuous extension over $X$.
\end{enumerate}
\end{theorem}
\begin{proof} The implication (1) $\Rightarrow$ (2) follows from the definition. Condition (3) is equivalent to (2) by virtue of the equality $\check H^k(X,A;G)=H_c^k(X\setminus A;G)$. The implication (2) $\Rightarrow$ (4) follows from the long exact sequence of the pair $(X,A)$. Conditions (4) and (5) are equivalent, since (5) is (4) formulated in homotopy language. To show (5) $\Rightarrow$ (1), we first prove that $(5)=(5)_n$ implies $(5)_k$ for all $k\geq n$. Consider a Serre fibration $p\colon E\to K(G,n+1)$, where $E$ is contractible and $K(G,n+1)$ is a simplicial complex representing the Eilenberg-MacLane space. Then the homotopy fiber of $p$ is $K(G,n)$, and $p^{-1}(\Delta)\in AE(X)$ for any simplex $\Delta$ (see Chp~1). Then $p^{-1}(\Delta)\in AE(A)$ for any closed $A\subset X$. Therefore, for any map $g\colon A\to K(G,n+1)$ there is a homotopy lift $\bar g\colon A\to E$. Since $E$ is contractible, the map $\bar g$, and hence $g$, is homotopically trivial. Therefore $g$ extends over $X$. Thus we have proved that $(5)_n$ implies $(5)_{n+1}$, and by induction we obtain $(5)_k$ for all $k\geq n$.

If $K(G,k)\in AE(X)$, then $K(G,k)\in AE(X/A)$ for any closed subset $A\subset X$. Let $k>n$. Then any map $f\colon X/A\to K(G,k)$ can be lifted to a map $\bar f\colon X/A\to E$. Since $\bar f$ is null-homotopic, the map $f$ is null-homotopic. Since $f$ is arbitrary, we have $\check H^{k}(X/A;G)=0$. As this holds for all $k>n$, we get $\operatorname{dim}_GX<n+1$, and (1) is proven.
\end{proof}
Property (5) immediately implies:
\begin{corollary} For every closed subset $A\subset X$ the inequality $\operatorname{dim}_GA\leq \operatorname{dim}_GX$ holds for any $G$.
\end{corollary}
\begin{example} \mbox{}
\begin{enumerate}
\item $\operatorname{dim}_GX\leq \operatorname{dim}_{\mathbb{Z}}X\leq \operatorname{dim} X$ for any abelian group $G$ and any compact space $X$.
\item $\operatorname{dim}_GX=0$ if and only if $\operatorname{dim} X=0$, for any nontrivial $G$.
\item $\operatorname{dim}_{\mathbb{Z}}X=1$ if and only if $\operatorname{dim} X=1$.
\item $\operatorname{dim}_GK=n$ for every $n$-dimensional polyhedron $K$ and any $G\neq 0$.
\end{enumerate}
\end{example}
\begin{proof} (1) The first inequality follows from the Universal Coefficient Formula. The second inequality can be rewritten as the implication $S^n\in AE(X) \Rightarrow K(\mathbb{Z},n)\in AE(X)$, which follows from the fact that $S^n$ is an $n$-skeleton of $K(\mathbb{Z},n)$ and standard homotopy theory.

(2) The space $K(G,0)$ contains $S^0$ as a retract.

(3) $S^1$ is a $K(\mathbb{Z},1)$.

(4) By (1), $\operatorname{dim}_GK\leq n$. Since $K$ contains an open set $U$ homeomorphic to $\mathbb{R}^n$, we have $H^n_c(U;G)=G\neq 0$, and Theorem 1.1 implies the inequality $\operatorname{dim}_GK\geq n$.
\end{proof}
\begin{theorem}[Alexandroff Theorem] For finite dimensional compacta there is the equality $\operatorname{dim}_{\mathbb{Z}}X=\operatorname{dim} X$.
\end{theorem}
\begin{proof} In view of Example 1.3(1) it suffices to show that $\operatorname{dim}_{\mathbb{Z}}X\geq \operatorname{dim} X$. Assume the contrary: $\operatorname{dim} X=n$ and $\operatorname{dim}_{\mathbb{Z}}X\leq n-1$. Take an Eilenberg-MacLane complex $K=K(\mathbb{Z},n-1)$ such that its $n$-dimensional skeleton $K^{(n)}$ is an $(n{-}1)$-sphere $S^{n-1}$. We show that $S^{n-1}\in AE(X)$. Take a continuous map $f\colon A\to S^{n-1}=K^{(n)}$ of a closed subset $A\subset X$. By Theorem 1.1 there is a continuous extension $\bar f\colon X\to K$. Since the dimension of $X$ is $\leq n$, by the Cellular Approximation Theorem there is a homotopy $H_t\colon X\to K$ such that
\begin{itemize}
\item $H_0=\bar f$,
\item $H_1(X)\subset K^{(n)}$, and
\item $H_t{\restriction}_A=f$ for all $t\in[0,1]$.
\end{itemize}
Hence, $H_1\colon X\to S^{n-1}$ is an extension of $f$. Thus $S^{n-1}\in AE(X)$ and, hence, $\operatorname{dim} X\leq n-1$, a contradiction.
\end{proof}
\begin{theorem}[Countable Union Theorem] Suppose $X = \bigcup X_i$ and each $X_i$ is a compactum. Then $\operatorname{dim}_GX = \sup\{\operatorname{dim}_GX_i\}$.
\end{theorem}
\begin{proof} Since each $X_i$ is closed in $X$, the inequality $\operatorname{dim}_GX \geq \sup\{\operatorname{dim}_GX_i\}$ holds, so if the family $\{\operatorname{dim}_GX_i\}$ is unbounded, the formula holds trivially. Now we show that if all $\operatorname{dim}_GX_i\leq n$, then $\operatorname{dim}_GX\leq n$. We show that $K(G,n)\in AE(X)$. Although $X$ is not compact, this condition implies the inequality $\operatorname{dim}_GX\leq n$.

Let $f\colon A\to K(G,n)$ be a continuous map of a closed subset $A\subset X$. We define a nested increasing sequence of open (in $X$) sets $U_1\subset \operatorname{Cl}(U_1)\subset U_2\subset \operatorname{Cl}(U_2)\subset\cdots$ and a sequence of maps $f_i\colon \operatorname{Cl}(U_i)\to K(G,n)$ such that
\begin{itemize}
\item $X = \bigcup_{i=1}^{\infty}U_i$,
\item $A\subset U_1$ and $f_1{\restriction}_A=f$,
\item $f_{i+1}{\restriction}_{U_i}=f_i$ for all $i$.
\end{itemize}
Such a sequence defines a continuous map $\bigcup_{i=1}^{\infty}f_i\colon X\to K(G,n)$ which is an extension of $f$. We construct the sequence by induction on $i$. Extend the map $f$ over an open neighborhood $V\supset A$ to a map $f_1'\colon V\to K(G,n)$. Take an open set $U_1$ such that $A\subset U_1\subset \operatorname{Cl}(U_1)\subset V$ and define $f_1 = f_1'{\restriction}_{\operatorname{Cl}(U_1)}$. To define $U_{k+1}$ and $f_{k+1}$, we extend the restriction of $f_k$ to $\operatorname{Cl}(U_k)\cap X_k$ over the space $X_k$ to a map $g_k\colon X_k\to K(G,n)$; this is possible since $\operatorname{dim}_GX_k\leq n$. Then the union of $f_k$ and $g_k$ defines a continuous map $q_k\colon \operatorname{Cl}(U_k) \cup X_k\to K(G,n)$. Extend that map over a neighborhood $V_{k+1}$ to a map $f_{k+1}'$, choose an open set $U_{k+1}\supset \operatorname{Cl}(U_k) \cup X_k$ whose closure lies in $V_{k+1}$, and define $f_{k+1}$ as the restriction of $f_{k+1}'$ to $\operatorname{Cl}(U_{k+1})$.
\end{proof}
\begin{theorem} Let $G=\varinjlim G_i$ and $\operatorname{dim}_{G_i}X\leq n$. Then $\operatorname{dim}_GX\leq n$.
\end{theorem}
\begin{proof} The formula $\varinjlim H^n_c(U;G_i)=H^n_c(U;\varinjlim G_i)$ yields the proof.
\end{proof}
\begin{corollary} If $G = \bigoplus G_s$, then for every compactum $X$ the following formula holds: $\operatorname{dim}_GX = \sup\{\operatorname{dim}_{G_s}X\}$.
\end{corollary}
\begin{proof} Since $H^n_c(U;G_s \oplus G') = H^n_c(U;G_s) \oplus H^n_c(U;G')$, the inequality $\operatorname{dim}_G X \geq \operatorname{dim}_{G_s} X$ holds. Hence, $\operatorname{dim}_G X \geq \sup\{\operatorname{dim}_{G_s} X\}$.
The opposite inequality follows from Theorem~1.6 applied to $G=\varinjLim \bigoplus_{s=1}^i G_s$ and the fact that $\sup_i\{\operatorname{dim}_{\bigoplus_{s=1}^i\! G_s} X\} = \sup_s\{\operatorname{dim}_{G_s} X\}$ imply the proof. \end{proof} \begin{definition} A compactum $X$ has an {\it $r$-dimensional obstruction} at its point $x$ with respect to a coefficient group $G$ if there is a neighborhood $U$ of $x$ such that for every smaller neighborhood $V$ of $x$ the image of the inclusion homomorphism $i_{V,U}\colon H^r_c(V;G)\to H^r_c(U;G)$ is nonzero. \end{definition} \begin{theorem} Let $X$ be a compact with $\operatorname{dim}_GX=r$ then $X$ contains a compact subset $Y$ of $\operatorname{dim}_GY=r$ such that at every point $x\in Y$ the compact $X$ has an $r$-dimensional obstruction with respect to $G$. \end{theorem} \begin{proof} Let $W$ be an open subset of $X$ with $H_c^r(W;G)\neq 0$. Because of the continuity of cohomology there is a closed in $U$ set $Z$ minimal with respect the property: the inclusion homomorphism $H^r_c(W;G)\to H^r_c(Z;G)$ is nonzero. Then $\operatorname{dim}_GZ=r$ and by the Countable Union Theorem there exists a compact subset $Y\subset Z$ with $\operatorname{dim}_GY=r$. For every $x\in Y$ we take $U=W$. Let $V\subset U$ be a neighborhood of $x$. Consider the diagram generated by exact sequence of pairs $(U,U\setminus V$ and $(Y,Y\setminus V)$. $$ \begin{CD} H^r_c(V;G) @>>i_{V,U}> H_c^r(U;G) @>>> \cdots\\ @VVj_{V,V\cap Y}V @VVj_{U,Y}V @.\\ H^r_c(V\cap Y;G)@>>i_{V\cap Y,Y}> H^r_c(Y;G) @>>j_{Y,Y\setminus V}> H^r_c(Y\setminus V;G)\\ \end{CD} $$ Let $\alpha\in H^r_c(U;G)$ such that $j_{U,Y}(\alpha)\neq 0$. Since $Y$ is minimal, $j_{Y,Y\setminus V}(j_{U,Y}(\alpha))=0$. The exactness of the bottom row implies that there is $\beta\in H^r_c(Y\cap V;G)$ such that $i_{Y\cap V,Y}(\beta)= j_{U,Y}(\alpha)$. Since $\operatorname{dim}_GV\leq r$, the homomorphism $j_{V,Y\cap V}$ is an epimorphism and hence there is $\gamma\in H^r_c(V;G)$ with $j_{V,Y\cap V}(\gamma)=\beta$. Therefore $j_{U,Y}i_{V,U}(\gamma)\neq 0$ and hence $i_{V,U}(\gamma)\neq 0$. \end{proof} \begin{definition} A compactum $X$ is called {\it dimensionally full-valued} if $\operatorname{dim}_GX=\operatorname{dim}_{\mathbb{Z}}X$ for all abelian groups $G$. It is clear that every $n$-dimensional manifold or $n$-dimensional polyhedron is dimensionally full-valued. The following are examples of dimensionally nonfull-valued compacta. \end{definition} \begin{example}[Pontryagin surfaces] There are 2-dimensional compacta $\Pi_p$ indexed by prime numbers having the following cohomological dimensions: $\operatorname{dim}_{\mathbb{Q}}\Pi_p=\operatorname{dim}_{\mathbb{Z}_q}\Pi_p=1$ for prime $q\neq p$ and $\operatorname{dim}_{\mathbb{Z}_p}\Pi_p=2$. \end{example} \begin{proof} Denote by $M_p$ the mapping cylinder of $p$-to-one covering map of the circle to itself $f_p\colon S^1\to S^1$. Denote by $\partial M_p$ the domain of the map $f_p$. We construct $\Pi_p$ as the limit space of an inverse sequence of polyhedra $\{L_k;q^{k+1}_k\}$ where $L_1$ is a 2-dimensional sphere and every $L_{k+1}$ is obtained from $L_k$ and a triangulation $\tau_k$ on $L_k$ by replacing all 2-simplexes $\Delta$ in $L_k$ by $M_p$ identifying the boundary of simplex $\partial\Delta$ with $\partial M_p$. A bonding map $q^{k+1}_k$ is defined by collapsing the image $\operatorname{Im}(f_p)=S^1\subset M_p$ to a point for all $M_p$ participating in the construction of $L_{k+1}$. 
We note that $M_p$ with $\operatorname{Im}(f_p)$ collapsed to a point is homeomorphic to a 2-simplex $\Delta$. Denote by $\xi\colon M_p\to\Delta$ the corresponding quotient map. In the above construction we chose triangulations $\tau_k$ such that preimages $(q^{\infty}_k)^{-1}(\Delta)$ of 2-dimensional simplexes form a basis of topology on $\Pi_p$. We note that \begin{enumerate} \item $H^2(M_p,\partial M_p;\mathbb{Q})=H^2(M_p,\partial M_p;\mathbb{Z}_q)=0$, \item $\xi^*\colon H^2(\Delta,\partial\Delta;\mathbb{Z}_p)\to H^2(M_p,\partial M_p;\mathbb{Z}_p)$ is an isomorphism. \end{enumerate} To observe (1), (2) we suggest to use the simplicial homology with coefficients $\mathbb{Q}$, $\mathbb{Z}_q$ and $\mathbb{Z}_p$. The cohomological results follow from the Universal Coefficient Theorem. By the property (1), $H^2_c\left((q^{k+1}_k)^{-1}(\operatorname{Int}\Delta);F\right)=0$ for any 2-simplex $\Delta$ in $L_k$ and for $F=\mathbb{Q},\mathbb{Z}_q$, $q\neq p$. By the Mayer-Vietoris sequence we can get the equality $H^2_c\left((q^{k+1}_k)^{-1}(\operatorname{Int} A);F\right)=0$ for any subcomplex $A$ in $L_k$ for the same coefficients. Therefore \begin{align*} H^2_c\left((q^{\infty}_k)^{-1}(\operatorname{Int} A);F\right)& = \varinjLim H^2_c\left((q^{i+1}_i)^{-1}(q^i_k)^{-1}(\operatorname{Int} A);F\right)\\ & = \varinjLim H^2_c\left((q^{i+1}_i)^{-1}(\operatorname{Int}(q^i_k(A)));F\right)\\ & =0 \end{align*} for any subcomplex $A\subset L_k$. Since every open set $U\subset\Pi_p$ can be presented as an increasing union of sets of the type $(q^{\infty}_k)^{-1}(\operatorname{Int} A)$, the formula $H^*_c(\varinjlim U_j;F)=\varinjLim H^*_c(U_j;F)$ implies that $H^2_c(U;F)=0$ for every open set $U$ and $F=\mathbb{Q},\mathbb{Z}_q$. Hence, $\operatorname{dim}_{\mathbb{Q}}\Pi_p\leq 1$ and $\operatorname{dim}_{\mathbb{Z}_q}\Pi_p\leq 1$. The equality holds since $\Pi_p$ is not 0-dimensional. Similarly the Mayer-Vietoris sequence implies that $$(q^{k+1}_k)^*\colon H^2(L_k;\mathbb{Z}_p)\to H^2(L_{k+1};\mathbb{Z}_p)$$ is an isomorphism for all $k$. Hence, $\check H^2(\Pi_p;\mathbb{Z}_p)\neq 0$ and, hence, $\operatorname{dim}_{\mathbb{Z}_p}\Pi_p=2$. \end{proof} According to the following theorem a Pontryagin compactum $\Pi_p$ cannot be imbedded in $\mathbb{R}^3$. \begin{theorem} Every $n{-}1$-dimensional compact subset $X$ of the Euclidean space $\mathbb{R}^n$ is dimensionally full-valued. \end{theorem} \begin{proof} By Alexandroff Theorem $\operatorname{dim}_{\mathbb{Z}}X=n-1$. According to Theorem 1.8 there is a point $x\in X$ having $n{-}1$-dimensional obstruction in $X$ with respect to $\mathbb{Z}$. Consider a small ball $U$ in $\mathbb{R}^n$ centered at $x$. Then $H_c^{n-1}(X\cap U;\mathbb{Z})\neq 0$. By the Alexander duality $H_0(U\setminus X;\mathbb{Z})\neq 0$. Since the singular 0-dimensional homology is always a free group, it follows that the group $H_c^{n-1}(X\cap U;\mathbb{Z})$ is free abelian and nontrivial. The Universal coefficient formula completes the proof. \end{proof} A family of subsets $\mathcal{U}$ of a given set $X$ we call {\it multiplicative} If $U,V\in\mathcal{U}$ implies $U\cap V\in\mathcal{U}$. \begin{proposition} Suppose that a compactum $X$ has a multiplicative basis $\mathcal{U}$ having the property $H^k_c(U;G)=0$ for all $k>n$ and for all $U\in\mathcal{U}$. Then $\operatorname{dim}_GX\leq n$. \end{proposition} \begin{proof} Consider a family of open sets $\mathcal{V} = \{V\subset X \mid H^k_c(V;G)=0\ \text{for all}\ k>n\}$. 
The Mayer-Vietoris exact sequence $$ \cdots \to H^k_c(U;G) \oplus H^k_c(V;G) \to H^k_c(U \cup V;G) \to H^{k+1}_c(U\cap V) \to \cdots $$ implies that $U \cup V \in \mathcal{V}$ provided $U,V \in \mathcal{V}$. Since $\mathcal{V}$ contains a basis $\mathcal{U}$, it follows that every open set in $X$ is an increasing union of sets from $\mathcal{V}$. The continuity of the cohomology implies that every open set in $X$ lies in $\mathcal{V}$. \end{proof} \begin{proposition} If $\operatorname{dim}_GX<\infty$, then the multiplicativity of the basis $\mathcal{U}$ in Proposition 1.11 can be omitted. \end{proposition} \begin{proof} If $\operatorname{dim}_GX=r>n$, then according to Theorem 1.8 there is an r-dimensional obstruction at some point $x$, which contradicts with the property of the basis $\mathcal{U}$. \end{proof} According to Theorem 1.1 for a compactum $X$ to be cohomologically at most $n$-dimensional with respect to a coefficient group $G$ it suffices to have the property that $H^k_c(U;G)=0$ not for all $k>n$ but just for $k=n+1$ and for all open sets $U\subset X$. If instead of all open sets we consider only a basis $\mathcal{U}$, then that property is insufficient even if $\mathcal{U}$ is multiplicative. For example, the unit cube $I^n$ has a multiplicative basis $\mathcal{U}$ consisting of open `rectangles' $U=I_1\times\dots\times I_n\subset I^n$ of diameter less than one. Since every $I_j$ is homeomorphic to an open interval or a half interval, every $U$ is homeomorphic to Euclidean space $\mathbb{R}^n$ or half space $\mathbb{R}^n_+$. In both cases $H^1_c(U;G)=0$. Thus, $H^1_c(U;G)=0$ for all $U\in\mathcal{U}$ but $I^n$ is far from being 0-dimensional. \section{Bockstein theory} As we have seen in \S1 the cohomological dimension of a given compactum depends on coefficient group. Any abelian group can be the coefficient group of a cohomology theory and there are uncountably many of them. It turns out to be that in the case of compacta it suffices to consider only countably many groups. Solving Alexandroff's problem \cite{A2}, M.F. Bockstein found a countable family of abelian groups $\sigma$ and an algorithm for computation of the cohomological dimension with respect to a given abelian group by means of cohomological dimensions with coefficients taken from $\sigma$. The Bockstein basis $\sigma$ consists of the following groups: rationals $\mathbb{Q}$, $p$-cyclic groups $\mathbb{Z}_p=\mathbb{Z}/p\mathbb{Z}$, $p$-adic circles $\mathbb{Z}_{p^{\infty}}=\mathbb{Q}_p/\mathbb{A}_p$, $p$-adic field factored out by $p$-adic integers, and $p$-localizations of integers $\mathbb{Z}_{(p)}=\{\frac{m}{n}\in\mathbb{Q} \mid \text{$n$ is not divisible by $p$}\}$ where $p$ runs over all primes. The set of all $p$-related groups in $\sigma$ we denote by $\sigma_p = \{\mathbb{Z}_p,\mathbb{Z}_{p^{\infty}},\mathbb{Z}_{(p)}\}$. Thus, $\sigma = \bigcup_p\sigma_p \cup \mathbb{Q}$. We note that the $p$-adic circle $\mathbb{Z}_{p^{\infty}}$ is the direct limit of groups $\mathbb{Z}_{p^k}$. 
\begin{definition} Given an abelian group $G\neq 0$ its Bockstein family $\sigma(G)\subset\sigma$ is defined by the following rule: \begin{enumerate} \item $\mathbb{Z}_{(p)}\in\sigma(G)$ if and only if $G/\operatorname{Tor} G$ is not divisible by $p$, \item $\mathbb{Z}_p\in\sigma(G)$ if and only if $p-\operatorname{Tor} G$ is not divisible by $p$, \item $\mathbb{Z}_{p^{\infty}}\in\sigma(G)$ if and only if $p-\operatorname{Tor} G\neq 0$ is divisible by $p$, \item $\mathbb{Q}\in\sigma(G)$ if and only if $G/\operatorname{Tor} G\neq 0$ is divisible by all $p$. \end{enumerate} \end{definition} \begin{example*} \mbox{} \begin{enumerate} \item $\sigma(\mathbb{Z})=\{\mathbb{Z}_{(p)} \mid \text{$p$ is prime}\}$, \item If $G\in\sigma$, then $\sigma(G)=\{G\}$, \item $\sigma(G) = \sigma(\operatorname{Tor} G) \cup \sigma(G/\operatorname{Tor} G)$ for any abelian group $G$. \end{enumerate} \end{example*} \begin{theorem}[Bockstein Theorem] For any compactum $X$ and for any abelian group $G$, $\operatorname{dim}_GX=\sup\{\operatorname{dim}_H X \mid H\in\sigma(G)\}$. \end{theorem} \begin{lemma} For any short exact sequence of abelian groups $0\to G\to E\to\Pi\to 0$ and for any compactum $X$ the following inequalities hold: \begin{itemize} \item[(a)] $\operatorname{dim}_EX\leq\max\{\operatorname{dim}_GX,\operatorname{dim}_{\Pi}X\}$, \item[(b)] $\operatorname{dim}_GX\leq\max\{\operatorname{dim}_EX,\operatorname{dim}_{\Pi}X+1\}$, \item[(c)] $\operatorname{dim}_{\Pi}X\leq\max\{\operatorname{dim}_EX,\operatorname{dim}_GX-1\}$. \end{itemize} \end{lemma} \begin{proof} (a). Let $n=\max\{\operatorname{dim}_GX,\operatorname{dim}_{\Pi}X\}$. The epimorphism $E\to\Pi$ defines a map $K(E,n)\to K(\Pi,n)$. Turn this map into a Serre fibration $p$, then the exact sequence of fibration implies that the homotopy fiber of $p$ is $K(G,n)$. By Theorem 1.1 we have $K(G,n)\in AE(X)$ and $K(\Pi,n)\in AE(X)$. Then the extension theory implies (\cite{Dr4}) that $K(E,n)\in AE(X)$, i.e.\ $\operatorname{dim}_EX\leq n$. (b). Let $m=\max\{\operatorname{dim}_EX,\operatorname{dim}_{\Pi}X+1\}$. Here we realize the monomorphism $G\to E$ by fibration $K(G,m)\to K(E,m)$. The homotopy fiber of that is $K(\Pi,m-1)$. Then the result follows. (c). The fibration $p\colon K(E,n)\to K(\Pi,n)$ of a. for $n=\max\{\operatorname{dim}_EX+1,\operatorname{dim}_GX\}$ as any other fibration defines a map $f\colon \Omega K(\Pi,n)=K(\Pi,n-1)\to p^{-1}(x_0)=K(G,n)$. The Serre construction turns $f$ into a fibration with a fiber $K(E,n-1)$. Note that $K(G,n)\in AE(X)$ and $K(E,n-1)\in AE(X)$. Then the extension theory implies that $K(\Pi,n-1)\in AE(X)$, i.e.\ $\operatorname{dim}_{\Pi}X\leq n-1=\max\{\operatorname{dim}_EX,\operatorname{dim}_GX-1\}$. \end{proof} \begin{proposition} Every compactum $X$ satisfies the equality $\operatorname{dim}_{\mathbb{Z}_p}X=\operatorname{dim}_{\mathbb{Z}_{p^k}}X$ for any $k$ and any prime $p$. \end{proposition} \begin{proof} Induction on $k$. Lemma 2.2 (a) applied to the sequence $0\to\mathbb{Z}_p\to\mathbb{Z}_{p^{k+1}}\to\mathbb{Z}_{p^k}\to 0$ and the induction assumption establish the inequality $\operatorname{dim}_{\mathbb{Z}_{p^{k+1}}}X\leq \operatorname{dim}_{\mathbb{Z}_p}X$. Lemma 2.2 (c) together with the induction assumption give an opposite inequality. 
\end{proof} \enlargethispage{\baselineskip} \begin{theorem}[Bockstein Inequalities] For any compactum $X$ the following inequalities hold: \begin{itemize} \item[BI1] $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X\leq \operatorname{dim}_{\mathbb{Z}_p}X$, \item[BI2] $\operatorname{dim}_{\mathbb{Z}_p}X\leq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1$, \item[BI3] $\operatorname{dim}_{\mathbb{Z}_p}X\leq \operatorname{dim}_{\mathbb{Z}_{(p)}}X$, \item[BI4] $\operatorname{dim}_{\mathbb{Q}}X\leq \operatorname{dim}_{\mathbb{Z}_{(p)}}X$, \item[BI5] $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq\max\{\operatorname{dim}_{\mathbb{Q}}X,\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1\}$, \item[BI6] $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X\leq\max\{\operatorname{dim}_{\mathbb{Q}}X,\operatorname{dim}_{\mathbb{Z}_{(p)}}X-1\}$. \end{itemize} \end{theorem} \begin{proof} Since the $p$-adic circle can be presented as the direct limit of groups $\mathbb{Z}_{p^k}$, Lemma 2.2 and Theorem 1.6 imply BI1. Lemma 2.2 (b) applied to the sequence $0\to\mathbb{Z}_p\to\mathbb{Z}_{p^{\infty}}\to\mathbb{Z}_{p^{\infty}}\to 0$ implies BI2. Lemma 2.2 (c) applied to the sequence $0\to\mathbb{Z}_{(p)}\to\mathbb{Z}_{(p)}\to\mathbb{Z}_p\to 0$ implies BI3. Lemma 2.2 (a) applied to the sequence $0\to\mathbb{Z}_{(p)}\to\mathbb{Q}\to \mathbb{Z}_{p^{\infty}}\to 0$, BI1 and BI3 imply BI4. Lemma 2.2 (b) applied to the above sequence gives BI5. Lemma 2.2 (c) applied to the same sequence gives BI6. \end{proof} \begin{lemma} Let $G$ be an abelian group, then $$\operatorname{dim}_GX=\max\{\operatorname{dim}_{\operatorname{Tor} G}X,\operatorname{dim}_{G/\operatorname{Tor} G}X\}$$ for every compactum $X$. \end{lemma} \begin{proof} Since $H^{k+1}(K(\operatorname{Tor} G,k);\mathbb{Q})=0$, it follows that the Bockstein long exact sequence generated by $0\to \operatorname{Tor} G\to G\to G/\operatorname{Tor} G\to 0$ is split into short exact sequences $$ 0\to \check H^k(Y;\operatorname{Tor} G)\to \check H^k(Y;G)\to \check H^k(Y;G/\operatorname{Tor} G)\to 0. $$ Then the result follows. \end{proof} \begin{proof}[Proof of Bockstein Theorem] First we consider the case when $G$ is a torsion group. Then $G = \operatorname{Tor} G = \bigoplus_p p-\operatorname{Tor} G$. By 1.7 it follows that $\operatorname{dim}_GX=\sup\{\operatorname{dim}_{p-\operatorname{Tor} G}X\}$. Since $\sigma(\operatorname{Tor} G) = \bigcup\sigma (p-\operatorname{Tor} G)$, it suffices to show that $$\operatorname{dim}_{p-\operatorname{Tor} G}X=\sup\{\operatorname{dim}_H X \mid H\in\sigma(p-\operatorname{Tor})\}.$$ Indeed, then \begin{align*} \operatorname{dim}_G X& = \sup_p\{\operatorname{dim}_{p-\operatorname{Tor} G}X\}\\ & = \sup_p\sup\{\operatorname{dim}_H X \mid H\in\sigma(p-\operatorname{Tor} G)\}\\ & = \sup\{\operatorname{dim}_H X \mid H\in\bigcup_p\sigma(p-\operatorname{Tor})\}\\ & = \sup\{\operatorname{dim}_H X \mid H\in\sigma(G)\}. \end{align*} If the group $p-\operatorname{Tor} G$ is not divisible by $p$, then it contains $\mathbb{Z}_{p^k}$ as a direct summand of $G$ for some $k\geq 1$. In that case $\sigma(p-\operatorname{Tor} G) = \{\mathbb{Z}_p\}$. By 1.7 we have $$\operatorname{dim}_{p-\operatorname{Tor} G}X \geq \operatorname{dim}_{\mathbb{Z}_{p^k}}X = \operatorname{dim}_{\mathbb{Z}_p}X = \sup\{\operatorname{dim}_H X \mid H\in\sigma(p-\operatorname{Tor} G)\}. $$ Here we applied Proposition 2.3 to obtain the second equality. 
On the other hand, $p-\operatorname{Tor} G$ is a direct limit of finite abelian $p$-groups which are direct sums of groups isomorphic to $\mathbb{Z}_{p^m}$ for some $m$. Thus, by Theorem 1.6, 1.7 and Proposition 2.3, $$\operatorname{dim}_{p-\operatorname{Tor} G}X \leq \operatorname{dim}_{\mathbb{Z}_p}X = \sup\{\operatorname{dim}_H X \mid H\in\sigma(p-\operatorname{Tor} G)\}. $$ Now we consider the case when $G$ is a torsion free group. By the Universal Coefficient Formula $\check H^{n+1}(X,A;G)\neq 0$ if and only if $\check H^{n+1}(X,A)\otimes G\neq 0$ which is equivalent to $\check H^{n+1}(X,A)\otimes \mathbb{Z}_{(p)}\neq 0$ for all $p$ such that $\mathbb{Z}_{(p)}\in\sigma(G)$. By the Universal Coefficient Formula the latter is equivalent to $\check H^{n+1}(X,A;\mathbb{Z}_{(p)})\neq 0$ for all $p$ such that $\mathbb{Z}_{(p)}\in\sigma(G)$. Now the result follows from Theorem 1.1. If $G$ is an arbitrary abelian group, then by Lemma 2.5, \begin{align*} \operatorname{dim}_GX& = \max\{\operatorname{dim}_{\operatorname{Tor} G}X, \operatorname{dim}_{G/\operatorname{Tor} G}X\}\\ & = \sup\{\operatorname{dim}_H X \mid H\in\sigma(\operatorname{Tor} G)\cup\sigma(G/\operatorname{Tor} G)\}\\ & = \sup\{\operatorname{dim}_H X \mid H\in\sigma(G)\}. \end{align*} \end{proof} \begin{definition} A compactum $X$ is $p$-{\it regular} if all its $p$-dimensions agree and coincide with the rational dimension: $$\operatorname{dim}_{\mathbb{Z}_p}X = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X = \operatorname{dim}_{\mathbb{Z}_{(p)}}X = \operatorname{dim}_{\mathbb{Q}}X.$$ Otherwise we call a compactum $X$ $p$-{\it singular}. \end{definition} \begin{lemma} A compact $X$ is $p$-regular if and only if $\operatorname{dim}_{\mathbb{Z}_{(p)}}X = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X$. \end{lemma} \begin{proof} Bockstein inequalities BI1 and BI3 imply that $\operatorname{dim}_{\mathbb{Z}_p}X = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X = \operatorname{dim}_{\mathbb{Z}_{(p)}}X$, The inequalities BI4 and BI6 imply that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=\operatorname{dim}_{\mathbb{Q}}X$. \end{proof} The following theorem we call the Bockstein Alternative (BA). \begin{theorem}[Bockstein Alternative (BA)] For any compactum $X$ there is an alternative: either \begin{align*} \operatorname{dim}_{\mathbb{Z}_{(p)}}X& =\operatorname{dim}_{\mathbb{Q}}X \quad\text{or}\\ \operatorname{dim}_{\mathbb{Z}_{(p)}}X& =\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1. \end{align*} \end{theorem} \begin{proof} It is clear that BA holds when $X$ is $p$-regular. Consider $p$-singular $X$ and assume that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\neq \operatorname{dim}_{\mathbb{Q}}X$. Then by BI4, $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\geq \operatorname{dim}_{\mathbb{Q}}X$. Then BI5 implies that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1$. Since $X$ is $p$-singular and in the view of BI1, BI3, Lemma 2.6 implies that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1$. \end{proof} \begin{remark*} In the case of $p$-singular $X$, $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=\max\{\operatorname{dim}_{\mathbb{Q}}X,\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1\}$. \end{remark*} \begin{definition} $p$-{\it deficiency} $\epsilon_X(p)$ of a compactum $X$ is the difference $\operatorname{dim}_{\mathbb{Z}_p}X - \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X$. The inequalities BI1, BI2 imply that $\epsilon_X(p)\in\{0,1\}$. 
\end{definition} Let $\mathcal{P}$ be the set of all prime numbers. For every compactum $X$ by $\mathcal{S}_X\subset\mathcal{P}$ we denote the set of $p$ for which $X$ is $p$-singular and by $\mathcal{D}_X\subset\mathcal{P}$ the set of all $P$ for which $X$ is $p$-deficient. It is clear that $\mathcal{D}_X\subset\mathcal{S}_X$. Then the deficiency function $\epsilon_X(\ )$ is just the characteristic function of the set $\mathcal{D}_X$. Additionally we introduce {\it the field dimensional function} $d_X\colon \mathcal{P}\cup\{0\}\to\mathbb{N}\cup\{\infty\}$ by the formulas: $d_X(p)=\operatorname{dim}_{\mathbb{Z}_p}X$ and $d_X(0)=\operatorname{dim}_{\mathbb{Q}}X$. \begin{lemma} The family $(\mathcal{S}_X,\mathcal{D}_X;d_X)$ consisting of the pair of the singularity set and the deficiency set $\mathcal{D}_X\subset\mathcal{S}_X\subset\mathcal{P}$ together with the field dimensional function $d_X$ completely determine cohomological dimensions of a given compactum $X$. Moreover for the groups from the basis $\sigma$ there are formulas: \begin{enumerate} \item $\operatorname{dim}_{\mathbb{Q}} X = d_X(0)$, \item $\operatorname{dim}_{\mathbb{Z}_p} X = d_X(p)$, \item $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X = d_X(p) - \chi_{\mathcal{D}_X}(p)$ and \item $\operatorname{dim}_{\mathbb{Z}_{(p)}}X = \left(\max\{d_X(0), d_X(p) - \chi_{\mathcal{D}_X}(p) + 1\}\right) \chi_{\mathcal{S}}(p) + d_X(0)\chi_{\mathcal{P}\setminus\mathcal{S}}(p)$ \end{enumerate} where $\chi_A$ denotes the characteristic function of a set $A$. \end{lemma} \begin{proof} In view of Bockstein Theorem it is sufficient to prove the formulas. The first formula is obvious. If $p\in\mathcal{P}\setminus\mathcal{S}$, then $X$ is $p$-regular and the formula (2) holds. If $p\in\mathcal{S}$, then by BI5 $$\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq\max\{\operatorname{dim}_{\mathbb{Q}}X,\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1\}$$ and $$\operatorname{dim}_{\mathbb{Z}_{(p)}}X\geq\max\{\operatorname{dim}_{\mathbb{Q}}X,\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1\}$$ by BI4, BI1, BI3 and Lemma 2.6. \end{proof} \begin{lemma} For every compactum $X$ there is an additive group of a field $F\in\sigma$ such that $\operatorname{dim}_{\mathbb{Z}}X\leq \operatorname{dim}_FX+1$. \end{lemma} \begin{proof} By the Bockstein Theorem (2.1), $\operatorname{dim}_{\mathbb{Z}} X =\operatorname{dim}_{\mathbb{Z}_{(p)}} X$ for some $p$. By the Bockstein Alternative (Theorem 2.7), either $\operatorname{dim}_{\mathbb{Z}_{(p)}} X=\operatorname{dim}_{\mathbb{Q}}X$ or $\operatorname{dim}_{\mathbb{Z}{(p)}} X=\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1$. In the first case we take $F=\mathbb{Q}$, in the second case $F=\mathbb{Z}_p$. The inequality BI1 completes the proof in the second case. \end{proof} \begin{example} A Pontryagin surface $\Pi_p$ has the following cohomological dimensions with respect to Bockstein groups $G\in\sigma$: $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}} \Pi_p = \operatorname{dim}_{\mathbb{Q}}\Pi_p = \operatorname{dim}_{\mathbb{Z}_q} \Pi_p = \operatorname{dim}_{\mathbb{Z}_{q^{\infty}}} \Pi_p = \operatorname{dim}_{\mathbb{Z}_{(q)}} \Pi_p = 1$ for $q\neq p$ and $\operatorname{dim}_{\mathbb{Z}_p} \Pi_p = \operatorname{dim}_{\mathbb{Z}_{(p)}} \Pi_p = 2$. \end{example} \begin{proof} First we note that a compactum $\Pi_p$ is $q$-regular for $q\neq p$. Since it is 2-dimensional, by the Bockstein theorem $\operatorname{dim}_{\mathbb{Z}_{(p)}} \Pi_p=2$. By BA we have $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}} \Pi_p = 1$. 
\end{proof} \section{Cohomological dimension of Cartesian product} Theorem 1.5 allows to compute easily the cohomological dimension of the union of two compacta: $\operatorname{dim}_GX\cup Y=\max\{ \operatorname{dim}_GX, \operatorname{dim}_GY\}$. Unfortunately there is no easy way to compute the cohomological dimension of the product of two compacta. The natural formula $\operatorname{dim}_G(X\times Y)=\operatorname{dim}_GX+\operatorname{dim}_GY$ can be violated in both directions. \begin{proposition} Let $X$ and $Y$ be compacta and $G$ an abelian group. Then the following conditions are equivalent: \begin{enumerate} \item $\operatorname{dim}_G(X\times Y)\leq n$, \item $H^k_c(U\times V;G)=0$ for all $k>n$ and all open subsets $U$ of $X$ and $V$ of $Y$, \item $\check H^k((X/A)\wedge (Y/B);G)=0$ for all $k>n$ and all closed subsets $A$ of $X$ and $B$ of $Y$. \end{enumerate} \end{proposition} \begin{proof} (1) $\Rightarrow$ (2). It follows from Theorem 1.1. (2) $\Rightarrow$ (1). Note that the family $\mathcal{U} = \{U\times V \mid \text{$U$ is open in $X$}, \text{$V$ is open in $Y$}\}$ forms a multiplicative basis in $X\times Y$. Now Proposition 1.11 implies the proof. (2) $\Leftrightarrow$ (3). Denote $A=X\setminus U$ and $B=Y\setminus V$. Then \begin{align*} H^k_c(U\times V;G)& =\check H^k(X\times Y,X\times Y\setminus U\times V;G)\\ & =\check H^k(X\times Y,X\times B\cup A\times Y;G)\\ & =\check H^k(X\times Y/X\times B\cup A\times Y;G)\\ & =\check H^k((X/A)\wedge (Y/B);G) \end{align*} and the result follows. \end{proof} \begin{proposition} Let $X$ and $Y$ be compacta and let $G\neq 0$ be an abelian group. \begin{enumerate} \item If $k\geq dim_GY$ is a number such that $\operatorname{dim}_{H^{k-i}_c(V;G)}X\leq i$ for all $i\geq 0$ and all open subsets $V$ of $Y$, then $\operatorname{dim}_G(X\times Y)\leq k$, \item If $\operatorname{dim}_{H^n_c(V;G)}X\geq m$, then $\operatorname{dim}_G(X\times Y)\geq n+m$. \end{enumerate} \end{proposition} \begin{proof} (1). Since $\operatorname{dim}_{H^{k-i}_c(V;G)}X\leq i$, we have $H^{i+l}_c(U;H^{k-i}_c(V;G))=0$ for any $l>0$ and any open subset $U\subset X$ for all $i\geq 0$. By the Kunneth formula we have \begin{align*} H^{k+l}_c(U\times V;G)& = \bigoplus_{j=0}^{k+l}H^j_c(U;H^{k+l-j}_c(V;G))\\ & = \bigoplus_{j=0}^{l-1}H^j_c(U;H^{k+l-j}_c(V;G)) \oplus \bigoplus_{i=0}^kH^{i+l}_c(U;H^{k-i}_c(V;G))\\ & = 0 \end{align*} The first sum is zero by the assumption $k\geq \operatorname{dim}_GY$ and the second part is zero by the above formula. Proposition 1.11 completes the proof. (2). Since $\operatorname{dim}_{H^n_c(V;G)}X\geq m$, by virtue of Theorem 1.1 there exists an open subset $U\subset X$ such that $H^m_c(U;H^n_c(V;G))\neq 0$. By the Kunneth formula we have $H^{n+m}_c(U\times V;G)\neq 0$. Hence, $\operatorname{dim}_G(X\times Y)\geq n+m$. \end{proof} \begin{proposition} For an additive group of a field $F$ the formula $\operatorname{dim}_F(X\times Y)=\operatorname{dim}_FX+\operatorname{dim}_FY$ holds for all compacta. \end{proposition} \begin{proof} Let $\operatorname{dim}_FX=m$ and $\operatorname{dim}_FY=n$. Note that $\operatorname{dim}_{H^{n+m-i}_c(V;F)}X=0$ if $i<m$ and $\operatorname{dim}_{H^{n+m-i}_c(V;F)}X=\operatorname{dim}_{\bigoplus F}X\leq \operatorname{dim}_FX =m$ if $i\geq m$. In both cases $\operatorname{dim}_{H^{n+m-i}_c(V;F)}X\leq i$ and by Proposition 3.2 (1) it follows $\operatorname{dim}_F(X\times Y)\leq n+m$. Let $V$ be an open subset of $Y$ with $H^n_c(V;F)\neq 0$. Then $H^n_c(V;F)=\bigoplus F\neq 0$. 
Then $\operatorname{dim}_{H^n_c(V;F)}X\geq m$ and by Proposition 3.2 (2) we have $\operatorname{dim}_F(X\times Y)\geq n+m$. Therefore $\operatorname{dim}_F(X\times Y)=\operatorname{dim}_FX+\operatorname{dim}_FY$. \end{proof} \begin{proposition} Suppose $X$ and $Y$ are compacta and $G$ is an abelian group. \begin{enumerate} \item $\operatorname{dim}_G(X\times Y)\leq \operatorname{dim}_GX+\operatorname{dim}_GY$ if $G$ is torsion free, \item $\operatorname{dim}_G(X\times Y)\leq \operatorname{dim}_GX+\operatorname{dim}_GY+1$ in general case. \end{enumerate} \end{proposition} \begin{proof} (1). Let $\operatorname{dim}_GX=m$ and $\operatorname{dim}_GY=n$. Since $G$ is torsion free, $H^l_c(V;G)=H^l_c(V)\otimes G$ by virtue of the Universal Coefficient Formula. We note that if $\mathbb{Z}_{(p)}\in\sigma(H\otimes G)$, then $H\otimes G$ is not divisible by $p$ and, hence, $G$ is not divisible by $p$, therefore, $\mathbb{Z}_{(p)}\in\sigma(G)$. Then \begin{align*} \operatorname{dim}_{H^l_c(V;G)}X& = \operatorname{dim}_{H^l_c(V)\otimes G}X\\ & = \sup\{\operatorname{dim}_{\mathbb{Z}_{(p)}}X \mid \mathbb{Z}_{(p)}\in\sigma(H^l_c(V)\otimes G)\}\\ & \leq \sup\{\operatorname{dim}_{\mathbb{Z}_{(p)}}X \mid \mathbb{Z}_{(p)}\in\sigma(G)\}\\ & = \operatorname{dim}_GX\\ & =m. \end{align*} Therefore $\operatorname{dim}_{H^{n+m-i}_c(V;G)}X\leq i$ for all $i\geq 0$. Then by Proposition 2.2 (1), $\operatorname{dim}_G(X\times Y)\leq n+m$. (2). First, we prove the inequality for the $p$-adic circle $G=\mathbb{Z}_{p^{\infty}}$. We note that $\sigma(H\otimes\mathbb{Z}_{p^{\infty}})=\{\mathbb{Z}_{p^{\infty}}\}$ or $\emptyset$ and $\sigma(H\ast \mathbb{Z}_{p^{\infty}})\subset\{\mathbb{Z}_p,\mathbb{Z}_{p^{\infty}}\}$. Then it follows that $\operatorname{dim}_{H^l_c(V;\mathbb{Z}_{p^{\infty}})}X \leq \operatorname{dim}_{\mathbb{Z}_p}X \leq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1 = m+1$. Therefore $\operatorname{dim}_{H^{n+m-i}_c(V;\mathbb{Z}_{p^{\infty}})}X\leq i+1$ for all $i\geq 0$ and hence, $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)\leq n+m+1$. Now we have proven the inequality (2) for all groups from Bockstein basis (additionally to an above see also Proposition 3.3, Proposition 3.4 (1)). If $G$ is an arbitrary group, then by the Bockstein Theorem, \begin{align*} \operatorname{dim}_G(X\times Y)& = \sup\{\operatorname{dim}_H(X\times Y) \mid H\in\sigma(G)\}\\ & \leq \sup\{\operatorname{dim}_HX+\operatorname{dim}_H Y+1 \mid H\in\sigma(G)\}\\ & \leq \sup\{\operatorname{dim}_H X \mid H\in\sigma(G)\} + \sup\{\operatorname{dim}_H Y \mid H\in\sigma(G)\} + 1\\ & = \operatorname{dim}_GX + \operatorname{dim}_GY + 1. \end{align*} \end{proof} \begin{proposition} Suppose that compactum $X$ is not $p$-deficient, then $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y) = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y$ for every compactum $Y$. \end{proposition} \begin{proof} Denote $\operatorname{dim}_{\mathbb{Z}_p}X=m$ and $\operatorname{dim}_{\mathbb{Z}_p}Y=n$. Since $X$ is not $p$-deficient, we have $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=m$. According to Bockstein inequalities BI1, BI2 there are two possibilities for $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y$: (a) to be equal $n$ and (b) to be equal $n-1$. In the case of (a) we can find an open set $V\subset Y$ with $H^n_c(V;\mathbb{Z}_{p^{\infty}})\neq 0$. 
Since $H^n_c(V;\mathbb{Z}_{p^{\infty}})$ is $p$-torsion group, $\sigma(H^n_c(V;\mathbb{Z}_{p^{\infty}}))\subset\{\mathbb{Z}_p,\mathbb{Z}_{p^{\infty}}\}$. By the Bockstein Theorem and BI1, $\operatorname{dim}_{H^n_c(V;\mathbb{Z}_{p^{\infty}})}X=m$. Proposition 3.2 (2) implies that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)\geq n+m$. The inequality $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)\leq n+m$ follows from Proposition 3.3 and BI1. In the case of (b) one can show that $\operatorname{dim}_{H^{m+n-i-1}_c(V;\mathbb{Z}_{p^{\infty}})}X\leq i$ for all $i\geq 0$ and every open subset $V\subset Y$. For $i<m$ it is due to an obvious reason: $H^{m+n-i-1}_c(V;\mathbb{Z}_{p^{\infty}})=0$. For $i\geq m$ the inequality holds because of the inclusion $\sigma(H^{m+n-i-1}_c(V;\mathbb{Z}_{p^{\infty}})) \subset \{\mathbb{Z}_p,\mathbb{Z}_{p^{\infty}}\}$ and the equality $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=m$. Then Proposition 3.2 (1) implies that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)\leq n+m-1$. The opposite inequality $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)\geq n+m-1$ follows from Proposition 3.3 and BI2. \end{proof} \begin{theorem} Suppose that a compactum $X$ is $p$-regular for some prime $p$, then $\operatorname{dim}_G (X\times Y) = \operatorname{dim}_G X + \operatorname{dim}_G Y$ for all $G\in\sigma_p=\{\mathbb{Z}_p,\mathbb{Z}_{p^{\infty}},\mathbb{Z}_{(p)}\}$ and any other compactum $Y$. \end{theorem} \begin{proof} Obviously theorem is true for $G=\mathbb{Z}_p$. Since $p$-regularity does not admit $p$-deficiency, the case $G=\mathbb{Z}_{p^{\infty}}$ follows from Proposition 3.5. In the case of $G=\mathbb{Z}_{(p)}$ we denote by $n=\operatorname{dim}_{\mathbb{Z}_{(p)}}Y$ and $m=\operatorname{dim}_{\mathbb{Z}_{(p)}}X$. Let $V$ be an open subset of $Y$ such that $A=H^n_c(V;\mathbb{Z}_{(p)})\neq 0$. If $A$ is not a torsion group, then by the Bockstein Theorem $\operatorname{dim}_AX\geq \operatorname{dim}_{\mathbb{Z}_{(q)}}X$. By BI4 we have $\operatorname{dim}_AX\geq \operatorname{dim}_{\mathbb{Q}}X=m$. Proposition 3.2 (2) implies that $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y)\geq \operatorname{dim}_{\mathbb{Z}_{(p)}}X+\operatorname{dim}_{\mathbb{Z}_{(p)}}Y$. In the other direction the inequality follows by Proposition 3.4 (1). If $A$ is a torsion group, then $A$ is a $p$-torsion group, since $A = H^n_c(V)\otimes {\mathbb{Z}_{(p)}}$ by the Universal Coefficient Formula. Therefore $\operatorname{dim}_AX\geq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=m$ and the result follows. \end{proof} \begin{corollary} Suppose $X$ is a dimensionally full-valued compactum. Then\linebreak[5] $\operatorname{dim}_G (X\times Y) = \operatorname{dim}_G X + \operatorname{dim}_G Y$ for any group $G$. \end{corollary} \begin{proof} A compactum $X$ is $p$-regular for all $p$. Hence $\operatorname{dim}_GX=\operatorname{dim}_{\mathbb{Z}}X$ for any group $G$. Theorem 3.6 and Proposition 3.3 imply that the above formula holds for all $G\in\sigma$. If $G$ is an arbitrary abelian group, the Bockstein Theorem states that $\operatorname{dim}_G(X\times Y) = \sup\{\operatorname{dim}_H(X\times Y) \mid H \in\sigma(G)\} = \sup\{\operatorname{dim}_HX+\operatorname{dim}_HY \mid H\in\sigma(G)\} = \operatorname{dim}_GX+\sup\{\operatorname{dim}_HY \mid H\in\sigma(G)\} = \operatorname{dim}_GX+\operatorname{dim}_GY$. \end{proof} \begin{corollary} \mbox{} \begin{enumerate} \item The product of two $p$-regular compacta is $p$-regular. 
\item The product of $p$-regular and $p$-singular compacta is $p$-singular. \end{enumerate} \end{corollary} \begin{example*} Let $p\neq q$, then $\operatorname{dim}(\Pi_p\times\Pi_q)=3$ for different Pontryagin surfaces. Indeed, by theorems of Alexandroff and Bockstein, $$\operatorname{dim}(\Pi_p\times\Pi_q) = \max\{\operatorname{dim}_{\mathbb{Z}_{(r)}}(\Pi_p\times\Pi_q) \mid r\in\mathcal{P}\}.$$ Since for every $r\in\mathcal{P}$ one of the factors $\Pi_p$ or $\Pi_q$ is $r$-regular, by Theorem 3.6, $$\operatorname{dim}_{\mathbb{Z}_{(r)}}(\Pi_p\times\Pi_q) = \operatorname{dim}_{\mathbb{Z}_{(r)}}\Pi_p+\operatorname{dim}_{\mathbb{Z}_{(r)}}\Pi_q.$$ Then $\operatorname{dim}_{\mathbb{Z}_{(r)}}(\Pi_p\times\Pi_q)=3$ if $r=p$ or $r=q$ and it equals $2$ if $r\neq p$ and $r\neq q$. \end{example*} \begin{lemma} The deficiency set of the product is the union of deficiency sets of factors: $\mathcal{D}_{X\times Y}=\mathcal{D}_X\cup\mathcal{D}_Y$. \end{lemma} \begin{proof} By Propositions 3.3 and 3.5 the product $X\times Y$ cannot be $p$-deficient if both factors are not $p$-deficient. This implies the inclusion $\mathcal{D}_{X\times Y}\subset\mathcal{D}_X\cup\mathcal{D}_Y$. If $p\in\mathcal{D}_X\setminus\mathcal{D}_Y$, the $p$-deficiency of the product $X\times Y$ equals one by Propositions 3.3 and 3.5, and hence $p\in\mathcal{D}_{X\times Y}$. Similarly if $p\in\mathcal{D}_Y\setminus\mathcal{D}_X$. If $p\in\mathcal{D}_X\cap\mathcal{D}_Y$, then by Proposition 3.4, $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y) \leq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X + \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y + 1 = \operatorname{dim}_{\mathbb{Z}_p}X - 1 + \operatorname{dim}_{\mathbb{Z}_p}Y - 1 + 1 = \operatorname{dim}_{\mathbb{Z}_p}(X\times Y) - 1$. Then BI2 implies that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y) = \operatorname{dim}_{\mathbb{Z}_p}(X\times Y) - 1$. It means that $p\in\mathcal{D}_{X\times Y}$ in that case too. Thus, $\mathcal{D}_{X\times Y}\supset\mathcal{D}_X\cup\mathcal{D}_Y$. \end{proof} \begin{corollary} The $p$-deficiency of the product of two compacta can be computed by the following formula: $\epsilon_{X\times Y}(p) = \epsilon_X(p) + \epsilon_Y(p) - \epsilon_X(p)\epsilon_Y(p)$. \end{corollary} \begin{proof} The formula follows from the union formula for characteristic functions $\chi_{A\cup B} = 1 - (1 - \chi_{A})(1 - \chi_{B}) = \chi_{A} + \chi_{B} - \chi_{A}\chi_{B}$, Lemma 3.9 and the equality $\epsilon_X = \chi_{\mathcal{D}_X}$. \end{proof} \begin{lemma} The inequality $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y) \geq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X + \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y + 1$ holds for all $p$ and all $p$-singular compacta $X$ and $Y$. \end{lemma} \begin{proof} Let $k = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X<\operatorname{dim}_{\mathbb{Z}_{(p)}}X$ and $l = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y < \operatorname{dim}_{\mathbb{Z}{(p)}}Y$. Consider a group $G=H^{l+1}_c(V;\mathbb{Z}_{(p)}) = H^{l+1}_c(V)\otimes\mathbb{Z}_{(p)}$ for an open subset $V\subset Y$ such that $H^{l+1}_c(V)\neq 0$. Such a set $V$ exists because of Theorem 1.1 and the inequality $\operatorname{dim}_{\mathbb{Z}_{(p)}}Y\geq l+1$. If the group $G$ has $p$-torsion, then $\mathbb{Z}_p$ or $\mathbb{Z}_{p^{\infty}}$ belongs to $\sigma(G)$. In both cases $\operatorname{dim}_GX\geq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=k$. By Proposition 3.2 (2), $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y)\geq k+l+1$. 
If the group $G$ has no $p$-torsion, then $H^{l+1}_c(V)\otimes\mathbb{Q}\neq 0$ and hence, $\operatorname{dim}_{\mathbb{Q}}Y\geq l+1$. Similarly, consider a group $G'=H^{k+1}_c(U;\mathbb{Z}_{(p)})$ and derive $\operatorname{dim}_{\mathbb{Q}}X\geq k+1$ or the required inequality $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y)\geq k+l+1$. In the first case according to BI4 we have $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y) \geq \operatorname{dim}_{\mathbb{Q}}(X\times Y) \geq k+l+1$. \end{proof} \begin{corollary} The product $X\times Y$ of two $p$-singular compacta is $p$-singular. \end{corollary} \begin{proof} If one of the compacta is $p$-deficient, then by Lemma 3.8 the product is also $p$-deficient and, hence, $p$-singular. If both compacta are not $p$-deficient, then Lemma 3.11 implies $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y) \geq \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X + \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y + 1 = \operatorname{dim}_{\mathbb{Z}_p}X + \operatorname{dim}_{\mathbb{Z}_p}Y + 1 = \operatorname{dim}_{\mathbb{Z}_p}(X\times Y) + 1$ and, hence, $X\times Y$ is $p$-singular. \end{proof} \begin{lemma} The singularity set of the product of two compacta is the union of their singularity sets: $\mathcal{S}_{X\times Y}=\mathcal{S}_X\cup\mathcal{S}_Y$. \end{lemma} \begin{proof} Corollaries 3.8 and 3.12 imply the proof. \end{proof} The results of Lemmas 3.9, 3.13 and Proposition 3.3 can be summarize into the following. \begin{theorem} For any two compacta $X$ and $Y$ and their product $X\times Y$ there is the formula: $(\mathcal{S}_{X\times Y},\mathcal{D}_{X\times Y}, d_{X\times Y}) = (\mathcal{S}_X\cup\mathcal{S}_Y,\mathcal{D}_X\cup\mathcal{D}_Y,d_X+d_Y)$. \end{theorem} If one of the factors is $p$-regular, then according to Theorem 3.6, the logarithmic law for the dimension of the product holds. If both factors are $p$-singular then the following deviation takes place for coefficient groups from Bockstein basis $\sigma$. \begin{lemma} Suppose $X$ and $Y$ are $p$-singular compacta. Then \begin{enumerate} \item $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y) = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X + \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y + \epsilon_X(p)\epsilon_Y(p)$ \item $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times Y) = \max\{\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)+1, \operatorname{dim}_{\mathbb{Q}}(X\times Y)\}$. \end{enumerate} \end{lemma} \begin{proof} Proposition 3.3 and Corollary 3.10 imply that \begin{align*} \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times Y)& = \operatorname{dim}_{\mathbb{Z}_p}(X\times Y)-\epsilon_{X\times Y}(p)\\ & = \operatorname{dim}_{\mathbb{Z}_p}X + \operatorname{dim}_{\mathbb{Z}_p}Y - \epsilon_X(p) - \epsilon_Y(p) + \epsilon_X(p)\epsilon_Y(p)\\ & = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X + \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}Y + \epsilon_X(p)\epsilon_Y(p). \end{align*} Corollary 3.12 and Lemma 2.8(2) imply the second part of the theorem. \end{proof} \begin{theorem} Let $X$ be a compactum, then \begin{itemize} \item[(a)] $\operatorname{dim}_{\mathbb{Z}}(X\times X)=2\operatorname{dim}_{\mathbb{Z}}X$ or\/ $2\operatorname{dim}_{\mathbb{Z}}X-1$, \item[(b)] $\operatorname{dim}_{\mathbb{Z}}X^n=n\operatorname{dim}_{\mathbb{Z}}X$ or\/ $n\operatorname{dim}_{\mathbb{Z}}X-n+1$. \end{itemize} \end{theorem} \begin{proof} If there is a field $F$ such that $\operatorname{dim}_FX=\operatorname{dim}_{\mathbb{Z}}X$ then by Propositions 3.3 and 3.4 we have the first case. 
Now assume that there is no such a field. Then by Bockstein Theorem $\operatorname{dim}_{\mathbb{Z}}X=\operatorname{dim}_{\mathbb{Z}_{(p)}}X$ for some $p$. Our assumption implies that $X$ is $p$-singular and $\operatorname{dim}_{\mathbb{Z}_{(p)}}X>\operatorname{dim}_{\mathbb{Q}}X$. Lemma 3.15(1) states that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}(X\times X) = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+\epsilon^2_X(p)$. By Lemma 3.15(2), we have $\operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times X) = 2\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+\epsilon_X(p)+1$. Bockstein inequality BI1 and the assumption imply that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=\operatorname{dim}_{\mathbb{Z}_p}X$ and hence $\epsilon_X(p)=0$. By Lemma 2.9 there is a field $F'$ such that $\operatorname{dim}_{\mathbb{Z}}(X\times X)\leq \operatorname{dim}_{F'}(X\times X)+1$. Since $\operatorname{dim}_{F'}X\leq \operatorname{dim}_{\mathbb{Z}}X-1$, we have \begin{align*} 2\operatorname{dim}_{\mathbb{Z}}X-1& = \operatorname{dim}_{\mathbb{Z}_{(p)}}(X\times X)\\ & \leq \operatorname{dim}_{\mathbb{Z}}(X\times X)\\ & \leq \operatorname{dim}_{F'}(X\times X) + 1\\ & = 2\operatorname{dim}_{F'}X + 1\\ & \leq 2(\operatorname{dim}_{\mathbb{Z}}X-1) + 1\\ & = 2\operatorname{dim}_{\mathbb{Z}}X - 1. \end{align*} Hence, $\operatorname{dim}_{\mathbb{Z}}(X\times X) = 2\operatorname{dim}_{\mathbb{Z}}X-1$. Induction on $n$ implies part (b). \end{proof} \begin{definition} A compactum $X$ is of the {\it basic type} if $\operatorname{dim} X^2=2\operatorname{dim} X$ and it is called having the {\it exceptional type} if $\operatorname{dim} X^2=2\operatorname{dim} X-1$. \end{definition} This definition makes sense only for finite dimensional compacta. In that case $\operatorname{dim} X=\operatorname{dim}_{\mathbb{Z}}X$ by virtue of Alexandroff Theorem. Theorem 3.16 proves that all compacta are split into these two classes. Moreover, the dimension of the $n$-th power of $X$ equals $\operatorname{dim} X^n=n\operatorname{dim} X$ for compacta of the basic type and $\operatorname{dim} X^n=n\operatorname{dim} X-n+1$ for compacta of the exceptional type. The proof of Theorem 3.16 suggests the following: \begin{criterion} A compactum $X$ is of the basic type if and only if there is a field $F\in\sigma$ such that $\operatorname{dim}_FX=\operatorname{dim} X$. \end{criterion} \section{Dimension type algebra} Every compactum $X$ of positive dimension defines a function $\phi_X\colon \sigma\to\mathbb{N}\cup\{\infty\}$ by the formula $\phi_X(G)=\operatorname{dim}_GX$. This function $\phi_X$ satisfies the Bockstein Inequalities BI1--6. In Lemma 2.8 we defined a set $F=(\mathcal{S}_X,\mathcal{D}_X;d_X)$ where $\mathcal{D}_X\subset\mathcal{S}_X\subset\mathcal{P}$ is a pair of subsets of primes and $d_X\colon \mathcal{P}\cup\{0\}\to\mathbb{N}\cup\{\infty\}$ is the field dimensional function. The function $d_X$ has the property $d(\mathcal{P}\setminus\mathcal{S}) = d(0)$. The set $(\mathcal{S}_X,\mathcal{D}_X;d_X)$ completely defines $\phi_X$. Now if we forget that the function $\phi_X$ came from a compactum $X$, we can reformulate the results of \S2 in more abstract way. 
For every abstract function $\phi\colon \sigma\to\mathbb{N}\cup\{\infty\}$ one can define a regularity set $$\mathcal{R} = \{p\in\mathcal{P} \mid \phi(\mathbb{Z}_{p^{\infty}}) = \phi(\mathbb{Z}_p) = \phi(\mathbb{Z}_{(p)}) = \phi(\mathbb{Q})\},$$ a singularity set $\mathcal{S} = \mathcal{P}\setminus\mathcal{R}$ and a deficiency set $\mathcal{D} = \{p\in\mathcal{P} \mid \phi(\mathbb{Z}_p) \neq \phi(\mathbb{Z}_{p^{\infty}})$. The field dimensional function can be defined as $d(p)=\phi(\mathbb{Z}_p)$ and $d(0)=\phi(\mathbb{Q})$. Thus the set $F_{\phi}=(\mathcal{S},\mathcal{D};d)$ is well defined. On the other hand if we have a set $F=(\mathcal{S},\mathcal{D};d)$ where $\mathcal{D}\subset\mathcal{S}\subset\mathcal{P}$ and $d\colon \mathcal{P}\cup\{0\}\to\mathbb{N}\cup\{\infty\}$, we can define a function $\phi_F\colon \sigma\to\mathbb{N}\cup\{\infty\} $ by formulas: $\phi(\mathbb{Z}_p)=d(p)$, $\phi(\mathbb{Q})=d(0)$, $\phi(\mathbb{Z}_{p^{\infty}})=d(p)-\chi_{\mathcal{D}}(p)$ and $\phi(\mathbb{Z}_{(p)}) = (\max\{d(0),d(p) - \chi_{\mathcal{D}}(p)+1\}) \chi_{\mathcal{S}}(p)+d(0)\chi_{\mathcal{P}\setminus\mathcal{S}}(p)$, where $\chi_A$ denotes the characteristic function of a set $A\subset\mathcal{P}$. \begin{proposition} The correspondence $\phi \to F_{\phi}$ defines a bijection between all functions $\phi\colon \sigma\to\mathbb{N}\cup\{\infty\}$ satisfying the Bockstein Inequalities BI1--BI6 and triples $F=(\mathcal{S},\mathcal{D};d)$ with $d(\mathcal{P}\setminus\mathcal{S})=d(0)$. Its inverse is defined by the above correspondence $F \mapsto \phi_F$. \end{proposition} We denote the set of functions $\phi\colon \sigma\to\mathbb{N}\cup\{\infty\}$ satisfying the Bockstein inequalities by $\mathcal{B}_+$ and the set of triples $(\mathcal{S},\mathcal{D};d)$ with the constrain $d(\mathcal{P}\setminus\mathcal{S})=d(0)$ by $\mathcal{F}_+$ On the class of all compacta we consider the following equivalence relation: $X\sim Y$ if and only if $\operatorname{dim}_GX=\operatorname{dim}_GY$ for all abelian groups $G$. An equivalence class under that relation is called a cohomological dimension type or briefly {\it cd-type}. We define zero cd-type as the type of 0-dimensional compacta. Every nonzero cd-type can be described by an element of $\mathcal{F}_+$ as well as by an element of $\mathcal{B}_+$. \begin{definition} We define two binary operations $\mathbin{[+]}$ and $\mathbin{[\times]}$ on $\mathcal{F}_+$ by the formulas: $$(\mathcal{S}_1,\mathcal{D}_1;d_1) \mathbin{[+]} (\mathcal{S}_2,\mathcal{D}_2;d_2) = (\mathcal{S}_1\cup\mathcal{S}_2,\mathcal{D}_1\cup\mathcal{D}_2;d_1+d_2)$$ $$(\mathcal{S}_1,\mathcal{D}_1;d_1) \mathbin{[\times]} (\mathcal{S}_2,\mathcal{D}_2;d_2) = (\mathcal{S}_1\cap\mathcal{S}_2,\mathcal{D}_1\cap\mathcal{D}_2;(d_1-d_1(0))(d_2-d_2(0))+d_1(0)d_2(0))$$ \end{definition} \begin{proposition} $F_1 \mathbin{[+]} F_2\in\mathcal{F}_+$ and $F_1 \mathbin{[\times]} F_2\in\mathcal{F}_+$ for $F_1,F_2\in\mathcal{F}_+$. \end{proposition} \begin{proof} First $(d_1+d_2)(\mathcal{P}\setminus(\mathcal{S}_1\cup\mathcal{S}_2)) = (d_1+d_2)((\mathcal{P}\setminus\mathcal{S}_1)\cap(\mathcal{P}\setminus\mathcal{S}_2)) = d_1(0)+d_2(0)$. Second, since $(d_1-d_1(0))(d_2-d_2(0))=0$ on $(\mathcal{P}\setminus \mathcal{S}_1)\cup(\mathcal{P}\setminus\mathcal{S}_2)=\mathcal{P}\setminus(\mathcal{S}_1\cap\mathcal{S}_2)$, $d(\mathcal{P}\setminus(\mathcal{S}_1\cap\mathcal{S}_2)) = d_1(0)d_2(0) = d(0)$. \end{proof} \begin{proposition} The distributivity law holds for operations $\mathbin{[+]}$ and $\mathbin{[\times]}$. 
\end{proposition} \begin{proof} It is known that the distributivity law holds for $\cup$ and $\cap$. We omit an easy verification of the distributivity law for functions $d$. \end{proof} \begin{proposition} The natural numbers $\mathbb{N}$ are imbedded into $\mathcal{F}_+$ by homomorphism taking a number $n$ to $(\emptyset,\emptyset;n)$ where $n$ also denotes the corresponding constant function. \end{proposition} \begin{proof} The proof is trivial. \end{proof} \begin{definition} {\it The norm} of cd-type $F=(\mathcal{S},\mathcal{D};d)\in\mathcal{F}_+$ is defined as $$\| F\|=\sup_{\mathcal{P}\cup\{0\}}\{d+\chi_{\mathcal{S}\setminus\mathcal{D}}\}.$$ \end{definition} \begin{proposition} Let $F\in\mathcal{F}_+$ represent the cd-type of a compactum $X$, then $\| X\|=\operatorname{dim}_{\mathbb{Z}}X$. \end{proposition} \begin{proof} By Bockstein Theorem $\operatorname{dim}_{\mathbb{Z}}X=\sup\{\operatorname{dim}_{\mathbb{Z}_{(p)}}X \mid p\in\mathcal{P}\}$. By Lemma 2.8 $$\operatorname{dim}_{\mathbb{Z}_{(p)}}X = \begin{cases} d(0)& \text{if $p\in\mathcal{P}\setminus\mathcal{S}$,}\\ \max\{d(0),d(p)+1\}& \text{if $p\in\mathcal{S}\setminus\mathcal{D}$,}\\ \max\{d(0),d(p)\}& \text{if $p\in\mathcal{D}$.} \end{cases} $$ Therefore, $\sup\{\operatorname{dim}_{\mathbb{Z}_{(p)}}X \mid p\in\mathcal{P}\} = \sup\{(d+\chi_{\mathcal{S}\setminus\mathcal{D}})(x) \mid x\in\mathcal{P}\cup\{0\}\}$. \end{proof} On the set of functions $\mathcal{B}_+$ there is the natural partial order $\leq$: $$ \text{$\phi_1\leq\phi_2$ if and only if $\phi_1(G)\leq\phi_2(G)$ for all $G\in\sigma$.} $$ Thus the bijection of Proposition 4.1 defines a partial order $\preceq$ on cd-types. \begin{proposition} Let $\phi_1,\phi_2\in\mathcal{B}_+$, then $\phi$ defined as $\phi(G) = \max\{\phi_1(G), \phi_2(G)\}$ satisfies the Bockstein inequalities, i.e.\ $\phi\in\mathcal{B}_+$. \end{proposition} \begin{proof} Trivial. \end{proof} \begin{definition} Let $F_1$ and $F_2$ be two cd-types, then we define the wedge $F_1\vee F_2$ as the cd-type corresponding to the function $\phi(G)=\max\{\phi_{F_1}(G),\phi_{F_2}(G)\}$. \end{definition} Proposition 4.6 is valid if one replaces the maximum by a supremum over an arbitrary index set. Thus an operation $\bigvee_{i\in J}F_i$ can be defined for any family $\{F_i \mid i\in J\}\subset\mathcal{F}_+$. \begin{proposition} The distributivity law holds for $\vee$ and $\mathbin{[+]}$. \end{proposition} \begin{proposition} For every family $\{F_i \mid i\in J\}\subset\mathcal{F}_+$ there is a countable subset $J'\subset J$ such that $\bigvee_{i\in J}F_i = \bigvee_{i\in J'}F_i$. \end{proposition} \begin{proof} Take an arbitrary group $G\in\sigma$. If the maximum of $\phi_{F_i}(G)$ is attained on some $i_G\in J$, we define $L_G=\{i_G\}$. If not, then there is a sequence $\{i^k_G\}$ such that $\operatorname{lim}_{k\to\infty}\phi_{F_{i^k_G}}(G)=\infty$. In that case we define $L_G=\{i^k_G\}_{k\in\mathbb{N}}$. We do this for all groups $G\in\sigma$. Then we define $J'=\bigcup_{G\in\sigma}L_G$. Since $\sigma$ is a countable set and $L_G$ is countable for every $G\in\sigma$, the set $J'$ is countable. \end{proof} By $\delta_x$ we denote a characteristic function of one point set $\{x\}$, i.e. 
$$\delta_x(t) = \begin{cases} 1& \text{if $t=x$,}\\ 0& \text{if $t\neq x$.} \end{cases} $$ We define {\it Kuzminov's basis} as the set of the following cd-types: \begin{align*} \Phi(\mathbb{Q},n)& = (\mathcal{P},\emptyset; (n-1)\delta_0+1),\\ \Phi(\mathbb{Z}_{(p)},n)& = (\mathcal{P}\setminus\{p\},\emptyset;(n-1)(\delta_0+\delta_p)+1),\\ \Phi(\mathbb{Z}_p,n)& =(\mathcal{P},\{p\};(n-1)\delta_p+1),\\ \Phi(\mathbb{Z}_{p^{\infty}},n)& = (\mathcal{P},\emptyset; (n-2)\delta_p+1). \end{align*} Here we assume that $n>1$. Since all 1-dimensional compacta define the same cd-type, we let $\Phi(G,1)$ equal the cd-type of one-dimensional compacta for all $G\in\sigma$. For $G\neq\mathbb{Z}_p$ the singularity set consists of whole $\mathcal{P}$ and hence all these $\Phi(G,n)$ belong to $\mathcal{F}_+$. For $\Phi(\mathbb{Z}_p,n)$ the condition $d(\mathcal{P}\setminus\mathcal{S})=d(0)$ turns into $d(p)=d(0)$ and it is easy to check that it holds. Hence, $\Phi(\mathbb{Z}_p,n)\in\mathcal{F}_+$ too. \begin{proposition} For all $G\in\sigma$ and every $n$, $\| \Phi(G,n)\| = n$. \end{proposition} \begin{proof} $\| \Phi(\mathbb{Q},n)\| = \sup\{d(x) + \chi_{\mathcal{S}\setminus\mathcal{D}}(x) \mid x \in \mathcal{P} \cup \{0\}\} = \sup\{(n-1)\delta_0 + 1 + \chi_{\mathcal{P}}\} = \max\{n,2\} = n$. $\| \Phi(\mathbb{Z}_{(p)},n)\| = \sup\{(n-1)(\delta_0+\delta_p) + 1 + \chi_{\mathcal{P}\setminus\{p\}}\} = \max\{n,2\} = n$. $\| \Phi(\mathbb{Z}_p,n)\| = \sup\{(n-1)\delta_p + 1 + \chi_{\mathcal{P}\setminus\{p\}}\} = \max\{n,2\} = n$. $\| \Phi(\mathbb{Z}_{p^{\infty}},n)\| = \sup\{(n-2)\delta_p + 1 + \chi_{\mathcal{P}}\} = \max\{n,2,1\} = n$. \end{proof} \begin{proposition} The field dimensional function $d$ has its maximum at $p$ for cd-types $\Phi(\mathbb{Z}_{(p)},n)$, $\Phi(\mathbb{Z}_p,n)$ and $\Phi(\mathbb{Z}_{p^{\infty}},n)$. It has its maximum at 0 for $\Phi(\mathbb{Q},n)$. \end{proposition} The proof is an easy observation. \begin{theorem} For any cd-type $F\in\mathcal{F}_+$ there is a representation $F = \bigvee\{\Phi(G,k_G) \mid G\in\sigma\}$. \end{theorem} \begin{proof} Let $F=(\mathcal{S},\mathcal{D});d)$. If the norm of $F$ equals one, then we take $k_G=1$ for all $G\in\sigma$. If the norm is greater than one, we take $k_{\mathbb{Q}}=d(0)$, $$ k_{\mathbb{Z}_{(p)}} = \begin{cases} d(p)& \text{if $p\in\mathcal{P}\setminus\mathcal{S}$,}\\ 1& \text{otherwise} \end{cases} $$ $$k_{\mathbb{Z}_p} = \begin{cases} d(p)& \text{if $p\in\mathcal{D},$}\\ 1& \text{otherwise} \end{cases} $$ and $$ k_{\mathbb{Z}_{p^{\infty}}} = \begin{cases} d(p)+1& \text{if $p\in\mathcal{S}\setminus\mathcal{D}$,}\\ 1& \text{otherwise.} \end{cases} $$ Then we consider a cd-type \begin{align*} F'& = \bigvee\{\Phi(G,k_G) \mid G\in\sigma\}\\ & = \Phi(\mathbb{Q},d(0)) \vee \bigvee_{p\in\mathcal{P}\setminus\mathcal{S}}\!\! \Phi(\mathbb{Z}_{(p)},d(p)) \vee \bigvee_{p\in\mathcal{D}}\! \Phi(\mathbb{Z}_p,d(p)) \vee \bigvee_{p\in\mathcal{S}\setminus\mathcal{D}}\!\! \Phi(\mathbb{Z}_{p^{\infty}},d(p)+1). \end{align*} If $F'=(\mathcal{S}',\mathcal{D}';d')$, then in the view of Proposition 4.10, \begin{align*} d'(0)& = \max\{d(0),d(0),1,1\} = d(0) \quad\text{and}\\ d'(p)& = \max\{1,d(p),d(p),d(p)+1-1\} = d(p). \end{align*} Therefore $d'=d$. It is easy to verify that $\mathcal{D}'=\mathcal{D}$ and $\mathcal{S}'=\mathcal{S}$. Hence $F=F'$. 
\end{proof} \begin{definition} {\it The inferior norm} $|F|$ of cd-type $F=(\mathcal{S},\mathcal{D};d)$ is defined as $$\min\{d(x) - \chi_{\mathcal{D}}(x) \mid x\in\mathcal{P}\cup\{0\}\}.$$ \end{definition} \begin{proposition} Let $F\in\mathcal{F}_+$ and let $\phi_F\in\mathcal{B}_+$ be its representative. Then $\| F\| = \sup\{\phi_F(G) \mid G\in\sigma\}$ and $|F| = \inf\{\phi_F(G) \mid G\in\sigma\}$. \end{proposition} \begin{proof} Note that \begin{align*} \sup\{\phi_F(G) \mid G\in\sigma\}& = \sup\{\phi_F(\mathbb{Z}_{(p)}) \mid p\in\mathcal{P}\}\\ & = \max\left\{ \sup\{\max\{d(0), d(p)+\chi_{\mathcal{S}\setminus\mathcal{D}}(p)\} \mid p\in\mathcal{S}\}, d(0) \right\}\\ & = \sup\{ d(x)+\chi_{\mathcal{S}\setminus\mathcal{D}}(x) \mid x\in\mathcal{P}\cup\{0\} \}\\ & = \|F\|. \end{align*} By virtue of Bockstein's inequalities, $$ \inf\{\phi_F(G) \mid G\in\sigma\} = \inf\{\phi_F(\mathbb{Q}),\phi_F(\mathbb{Z}_{p^{\infty}})\} = \min\{d(0),d(x) - \chi_{\mathcal{D}}(x)\} = |F|. $$ \end{proof} \begin{lemma} For any two cd-types $F_1$ and $F_2$ there are inequalities: $| F_1 | + \| F_2 \| \leq \| F_1 \mathbin{[+]} F_2 \| \leq \| F_1 \| +\| F_2 \|$. \end{lemma} \begin{proof} We may assume that all norms are finite. Suppose that the norm $\| F_2 \| = \sup\{d_2(y)+\chi_{\mathcal{S}_2\setminus\mathcal{D}_2}(y)\}$ is achieved at $x$. Then it suffices to show that $$ d_1(x) - \chi_{\mathcal{D}_1}(x) + d_2(x)+\chi_{\mathcal{S}_2\setminus\mathcal{D}_2}(x) \leq d_1(x) + d_2(x) + \chi_{(\mathcal{S}_1\cup\mathcal{S}_2)\setminus(\mathcal{D}_1\cup\mathcal{D}_2)}(x). $$ This is equal to the inequality $\chi_{\mathcal{S}_2\setminus\mathcal{D}_2} \leq \chi_{_{(\mathcal{S}_1\cup\mathcal{S}_2)\setminus(\mathcal{D}_1\cup\mathcal{D}_2)}} + \chi_{_{\mathcal{D}_1}}$ which follows from the fact that $\mathcal{S}_2 \setminus \mathcal{D}_2 \subset \mathcal{S}_2 \setminus (\mathcal{D}_2\cup\mathcal{D}_1) \cup \mathcal{D}_1 \subset \left( (\mathcal{S}_2\cup\mathcal{S}_1) \setminus (\mathcal{D}_2\cup\mathcal{D}_1) \right) \cup \mathcal{D}_1$. Assume that $\|F_1 \mathbin{[+]} F_2\| $ is achieved at $x$, then $\|F_1 \mathbin{[+]} F_2\| = d_1(x) + d_2(x) + \chi_{(\mathcal{S}_1\cup\mathcal{S}_2) \setminus (\mathcal{D}_1\cup\mathcal{D}_2)}(x)$. Because of the inclusion $(\mathcal{S}_1\cup\mathcal{S}_2)\setminus(\mathcal{D}_1\cup\mathcal{D}_2) \subset (\mathcal{S}_1\setminus\mathcal{D}_1)\cup(\mathcal{S}_2\setminus\mathcal{D}_2)$, we have the inequality $\chi_{(\mathcal{S}_1\cup\mathcal{S}_2)\setminus(\mathcal{D}_1\cup\mathcal{D}_2)} \leq \chi_{\mathcal{S}_1\setminus\mathcal{D}_1} + \chi_{\mathcal{S}_2)\setminus\mathcal{D}_2)}$. Then $$\|F_1 \mathbin{[+]} F_2\| \leq d_1(x) + \chi_{\mathcal{S}_1\setminus\mathcal{D}_1}(x) + d_2(x) + \chi_{\mathcal{S}_2)\setminus\mathcal{D}_2)}(x) \leq \|F_1\| + \|F_2\|.$$ \end{proof} By $\mathcal{F}$ we denote the set of all triples $(\mathcal{S},\mathcal{D};d)$, where $\mathcal{D}\subset\mathcal{S}\subset\mathcal{P}$ are subsets of primes and $d\colon \mathcal{P}\cup\{0\}\to\mathbb{Z}\cup\{-\infty,+\infty\}$ has the property $d(\mathcal{P}\setminus\mathcal{S})=d(0)$. As one can see $\mathcal{F}$ is the natural extension of $\mathcal{F}_+$. All operations $\vee$, $\mathbin{[+]}$, $\mathbin{[\times]}$ as well as the partial order $\preceq$ can be extended to $\mathcal{F}$. The notions of the norm $\|\ \|$ and the inferior norm $|\ |$ can be defined on $\mathcal{F}$ without changes. All propositions proven for $\mathcal{F}_+$ can be repeated without changes for $\mathcal{F}$. 
Similarly one can extend $\mathcal{B}_+$ to $\mathcal{B}$ together with the bijection $\mathcal{F} \to \mathcal{B}$. \begin{definition} The {\it conjugate} $\bar F$ of a cd-type $F=(\mathcal{S},\mathcal{D};d)\in\mathcal{F}$ is defined as $$(\mathcal{S},\mathcal{S}\setminus\mathcal{D};-d).$$ \end{definition} It is clear that $\bar F\in\mathcal{F}$. \begin{proposition} For every $F=(\mathcal{S},\mathcal{D};d)\in\mathcal{F}$, \begin{enumerate} \item $\bar{\bar F}=F$, \item $F \mathbin{[+]} \bar F=(\mathcal{S},\mathcal{S};0)$, \item $\|F \mathbin{[+]} \bar F\|=0$. \end{enumerate} \end{proposition} \begin{proof} The statements (1) and (2) are obvious; (3) follows from (2) and the definition of the norm. \end{proof} \begin{lemma} The conjugate $\bar F$ of $F$ is the maximal element with respect to the order $\preceq$ in the set $\{F'\in\mathcal{F} \mid \| F \mathbin{[+]} F'\|\leq 0\}$. \end{lemma} \begin{proof} First, from Proposition 4.14(3) we can see that $\bar F \preceq \bigvee\{F'\in\mathcal{F} \mid \| F \mathbin{[+]} F' \| \leq 0\}$. Then we show that $\bar F\succeq F'$ for every $F'$ having the property $\|F \mathbin{[+]} F'\|\leq 0$. Let $F=(\mathcal{S},\mathcal{D};d)$ and $F'=(\mathcal{S}',\mathcal{D}';d')$. The inequality $\|F \mathbin{[+]} F'\|\leq 0$ implies that \begin{equation*}\tag{$\ast$} d' + \chi_{(\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}')} \leq -d. \end{equation*} Then $d'\leq -d$ and hence $\phi_{F'}(\mathbb{Z}_p)\leq\phi_{\bar F}(\mathbb{Z}_p)$ and $\phi_{F'}(\mathbb{Q})\leq\phi_{\bar F}(\mathbb{Q})$. We recall that $\phi_F$ stands for the function from $\mathcal{B}$ corresponding to $F$ under the extended bijection from Proposition 4.1. We note that $\mathcal{S}\setminus\mathcal{D} \subset \left( \mathcal{S}\setminus(\mathcal{D}\cup\mathcal{D}') \right)\cup\mathcal{D}' \subset \left( (\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}') \right)\cup\mathcal{D}'$. Hence, $\chi_{\mathcal{S}\setminus\mathcal{D}} \leq \chi_{(\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}')} + \chi_{\mathcal{D}'}$. Therefore by this and ($\ast$) we have the following inequality: \begin{equation*}\tag{$\ast\ast$} d'- \chi_{\mathcal{D}'} \leq -d - \chi_{(\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}')} - \chi_{\mathcal{D}'} \leq -d-\chi_{\mathcal{S}\setminus\mathcal{D}}. \end{equation*} Hence, $\phi_{F'}(\mathbb{Z}_{p^{\infty}})\leq\phi_{\bar F}(\mathbb{Z}_{p^{\infty}})$. To treat the group $\mathbb{Z}_{(p)}$ we consider three cases. (1) If $p\in\mathcal{P}\setminus\mathcal{S}'$, then $\phi_{F'}(\mathbb{Z}_{(p)})=d'(0)\leq -d(0)\leq\phi_{\bar F}(\mathbb{Z}_{(p)})$. Here we applied ($\ast$) and BI4. (2) If $p\in\mathcal{S}'\cap\mathcal{S}$, then \begin{align*} \phi_{F'}(\mathbb{Z}_{(p)})& = \max\{d'(0),d'(p)-\chi_{\mathcal{D}'}(p)+1\}\\ & \leq \max\{-d(0),-d(p)-\chi_{\mathcal{S} \setminus\mathcal{D}}(p)+1\}\\ & = \phi_{\bar F}(\mathbb{Z}_{(p)}). \end{align*} Here we applied both ($\ast$) and ($\ast\ast$). (3) Finally, if $p\in\mathcal{S}'\setminus\mathcal{S}$, then the inclusion $\mathcal{S}'\setminus\mathcal{S} \subset \left( (\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}') \right)\cup\mathcal{D}'$ implies that $\chi_{(\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}')}(p) + \chi_{\mathcal{D}'}(p) \geq 1$. Then, $d'(p) - \chi_{\mathcal{D}'}(p)+1 \leq d'(p) + \chi_{(\mathcal{S}\cup\mathcal{S}')\setminus(\mathcal{D}\cup\mathcal{D}')}(p) \leq -d(p) = -d(0) = \phi_{\bar F}(\mathbb{Z}_{(p)})$.
Because of this and ($\ast$) we have that $\phi_{F'}(\mathbb{Z}_{(p)}) = \max\{d'(0),d'(p)-\chi_{\mathcal{D}'}(p)+1\} \leq \phi_{\bar F}(\mathbb{Z}_{(p)})$. Thus, $\bar F = \bigvee\{F'\in\mathcal{F} \mid \| F \mathbin{[+]} F' \| \leq 0\}$. \end{proof} \section{Realization theorem} The main result of this section is the following: \begin{theorem}[Realization Theorem] For every cd-type $F=(\mathcal{S},\mathcal{D};d)\in\mathcal{F}_+$ there exists a compactum $X$ such that $F_X=(\mathcal{S}_X,\mathcal{D}_X;d_X)=(\mathcal{S},\mathcal{D};d)$. Moreover, $F_X \mathbin{[+]} F_Y=F_{X\times Y}$ and $F_X \vee F_Y = F_{X\vee Y}$. \end{theorem} Thus the name `cd-types' for elements of $\mathcal{F}_+$ is justified. A compactum representing a fundamental cd-type $\Phi(G,n)$ is called a {\it fundamental compactum} of type $(G,n)$. The notation for this is $X\in F(G,n)$. Fundamental compacta have the following cohomological dimensions with respect to the groups from $\sigma$: \begin{figure}[h] \begin{tabular}{r|c|c|c|c|c|c|c} & $\mathbb{Z}_{(p)}$& $\mathbb{Z}_p$& $\mathbb{Z}_{p^\infty}$& $\mathbb{Q}$& $\mathbb{Z}_{(q)}$& $\mathbb{Z}_q$& $\mathbb{Z}_{q^\infty}$\\ \hline $F(\mathbb{Q},n)$& $n$& $1$& $1$& $n$& $n$& $1$& $1$\\ \hline $F(\mathbb{Z}_{(p)},n)$& $n$& $n$& $n$& $n$& $n$& $1$& $1$\\ \hline $F(\mathbb{Z}_p,n)$& $n$& $n$& $n{-}1$& $1$& $1$& $1$& $1$\\ \hline $F(\mathbb{Z}_{p^\infty},n)$& $n$& $n{-}1$& $n{-}1$& $1$& $1$& $1$& $1$\\ \end{tabular} \caption{Cohomological dimension w.r.t.\ groups from $\sigma$} \end{figure} Here $p$ and $q$ are primes and $q$ runs over all primes $\ne p$. Let $h$ be a reduced homology (or cohomology) theory. A map between two topological spaces is called $h_*$-essential (or $h^*$-essential) if it induces a nonzero homomorphism in $h$-homology (or $h$-cohomology). If one of the spaces is not a CW-complex, then we consider the \v{C}ech extension $\check h$. We recall that a cohomology theory $h^*$ is called {\it continuous} if for every direct limit $L=\varinjlim \{L_i;\lambda^i_{i+1}\}$ of finite CW-complexes the formula $h^*(L)=\varprojlim h^*(L_i)$ holds. We note that a cohomology $h^*(\ ;F)$ with coefficients in a field $F$ is continuous. In this section we give a proof of the Realization Theorem based on the following general theorem. \begin{theorem} Let $P$ and $K$ be simplicial complexes and assume that $K$ is a countable complex. Let $h_*$ ($h^*$) be a reduced generalized homology (continuous cohomology) theory. If $h_n(P)\neq 0$ ($h^n(P)\neq 0$) and $h_k(K)=0$ ($h^k(K)=0$) for all $k<n$, then there exist a compactum $X$, having the property $K\in AE(X)$, and an $h_n$-essential ($h^n$-essential) map $f\colon X\to P$. \end{theorem} \begin{corollary} For every $n\in\mathbb{N}$ and every prime $p$ there are $n$-dimensional fundamental compacta of types $(\mathbb{Q},n)$, $(\mathbb{Z}_p,n)$, $(\mathbb{Z}_{(p)},n)$ and $(\mathbb{Z}_{p^{\infty}},n)$. \end{corollary} \begin{proof} To realize the type $(\mathbb{Q},n)$ we take $P=S^n$, for $K$ we take the wedge $K=K(\bigoplus_{p\in\mathcal{P}}\mathbb{Z}_p,1) \vee S^n$ of an Eilenberg-MacLane complex and the $n$-sphere, and we consider the continuous cohomology $h^*=H^*(\ ;\mathbb{Q})$. We note that $h^k(K)=0$ for $k<n$. By Theorem 5.2 there exists a compactum $X$ having nontrivial $n$-dimensional rational \v{C}ech cohomology. The property $K\in AE(X)$ implies that $K(\bigoplus_{p\in\mathcal{P}}\mathbb{Z}_p,1)\in AE(X)$ and $S^n\in AE(X)$. The second condition implies that $\operatorname{dim} X\leq n$.
The first implies the inequality $\operatorname{dim}_{\bigoplus\mathbb{Z}_p}X\leq 1$ by virtue of Theorem 1.1. Corollary 1.7 implies that $\operatorname{dim}_{\mathbb{Z}_p}X\leq 1$ for all $p$. Since $X$ is not 0-dimensional (see 1.3), $\operatorname{dim}_{\mathbb{Z}_p}X=1$ for all $p$. Therefore by BI1, $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=1$. The equality $\operatorname{dim}_{\mathbb{Q}}X=n$ follows from the $n$-dimensionality of $X$ and the fact that $X$ has nontrivial rational $n$-dimensional cohomology. The Bockstein inequality BI4 implies that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=n$ for all $p$. Now according to the above table $X$ has the same cohomological dimensions as $F(\mathbb{Q},n)$, hence $X$ is of type $(\mathbb{Q},n)$. For the type $(\mathbb{Z}_p,n)$ we take $P=S^n$, $K=K(\mathbb{Z}[\frac{1}{p}],1) \vee S^n$ and $h^*=H^*(\ ;\mathbb{Z}_p)$. Then the compactum $X$ of Theorem 5.2 has the cohomological dimension $\operatorname{dim}_{\mathbb{Z}[\frac{1}{p}]}X=1$ and the covering dimension $\operatorname{dim} X\leq n$. By virtue of the Bockstein Theorem, $\operatorname{dim}_{\mathbb{Z}_{(q)}}X=1$ for every prime $q\neq p$. By BI1,3,4 we have $\operatorname{dim}_{\mathbb{Z}_{q^{\infty}}}X= \operatorname{dim}_{\mathbb{Z}_q}X=\operatorname{dim}_{\mathbb{Q}}X=1$. Since $X$ has nontrivial $n$-dimensional cohomology with $\mathbb{Z}_p$-coefficients, we have $\operatorname{dim}_{\mathbb{Z}_p}X\geq n$. The equality holds, since $X$ is $n$-dimensional. Since $\operatorname{dim} X\leq n$, by BI3 $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=n$. We may assume that $n>1$. Then $X$ is $p$-singular. Since $\operatorname{dim}_{\mathbb{Z}_{(p)}}X= \max\{\operatorname{dim}_{\mathbb{Q}}X,\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X+1\}$ for $p$-singular compacta, we have that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=n-1$. Thus, according to the above table, $X\in F(\mathbb{Z}_p,n)$. For the type $(\mathbb{Z}_{(p)},n)$ we take $P=K(\mathbb{Z}_{p^{\infty}},n)$, $K=K(\bigoplus_{q\neq p}\mathbb{Z}_q,1) \vee S^n$, and $h_*=H_*(\ ;\mathbb{Z}_{(p)})$. Since $\mathbb{Z}_{p^{\infty}}\otimes\mathbb{Z}_{(p)}\neq 0$, by the Hurewicz theorem and the Universal Coefficient Formula, $H_n(P;\mathbb{Z}_{(p)})\neq 0$. Note that $h_k(K)=0$ for all $k<n$. Apply Theorem 5.2 to obtain a compactum $X$ and a map $f\colon X\to P$ with the properties stated there. The property $K\in AE(X)$ implies, by virtue of Theorem 1.1, the equalities $\operatorname{dim}_{\mathbb{Z}_q}X=\operatorname{dim}_{\mathbb{Z}_{q^{\infty}}}X=1$ and the inequality $\operatorname{dim} X\leq n$. The essentiality of the map $f\colon X\to K(\mathbb{Z}_{p^{\infty}},n)$ gives a nontrivial element in the cohomology $\check H^n(X;\mathbb{Z}_{p^{\infty}})$. Hence $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=n$. The inequalities BI1 and BI3 imply that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=n$. Hence, by Lemma 2.6, $X$ is $p$-regular. Therefore, $\operatorname{dim}_{\mathbb{Q}}X=n$. We assume that $n>1$, since any 1-dimensional compactum can serve as $F(\mathbb{Z}_{(p)},1)$. Then $X$ is $q$-singular for every prime $q\neq p$. Then by Lemma 2.8 it follows that $\operatorname{dim}_{\mathbb{Z}_{(q)}}X=\operatorname{dim}_{\mathbb{Q}}X=n$. Thus, $X$ has cohomological dimensions with respect to the groups from $\sigma$ as prescribed for $F(\mathbb{Z}_{(p)},n)$ by the table at the beginning of this section. For the type $(\mathbb{Z}_{p^{\infty}},n)$ we take $P=S^n$, $K = K(\mathbb{Z}[\frac{1}{p}],1) \vee K(\mathbb{Z}_p,n-1) \vee S^n$ and $h_*=H_*(\ ;\mathbb{Z}_{p^{\infty}})$.
Note that $ H_k(K;\mathbb{Z}_{p^{\infty}})=0$ for $k<n$. We apply Theorem 5.2 to obtain a compactum $X$ having the property $K\in AE(X)$ and an essential map onto the $n$-dimensional sphere. These properties imply that $\operatorname{dim} X=n$, $\operatorname{dim}_{\mathbb{Z}_{(q)}}X = \operatorname{dim}_{\mathbb{Z}_q}X = \operatorname{dim}_{\mathbb{Z}_{q^{\infty}}}X = \operatorname{dim}_{\mathbb{Q}}X=1$ and $\operatorname{dim}_{\mathbb{Z}_p}X\leq n-1$. Since $\operatorname{dim}_{\mathbb{Z}}X=\operatorname{dim} X=n$ while the dimensions of $X$ with respect to all the other groups from $\sigma$ do not exceed $n-1$, we have $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=n$. Hence by the Bockstein Alternative it follows that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X=n-1$. Then by BI1, $\operatorname{dim}_{\mathbb{Z}_p}X=n-1$. Then $X\in F(\mathbb{Z}_{p^{\infty}},n)$. \end{proof} \begin{definition} An {\it extension problem} $(A,\alpha)$ on a topological space $X$ is a map $\alpha\colon A\to K$ defined on a closed subset $A\subset X$ with the range a CW-complex (or ANE). A {\it solution} of an extension problem $(A,\alpha)$ is a continuous extension $\bar\alpha\colon X\to K$ of the map $\alpha$. A {\it resolution} of an extension problem $(A,\alpha)$ is a map $f\colon Y\to X$ such that the induced extension problem $f^{-1}(A,\alpha) = (f^{-1}(A), \alpha\circ f{\restriction}_{f^{-1}(A)})$ on $Y$ has a solution. \end{definition} Because of the Homotopy Extension Theorem the solvability of an extension problem $(A,\alpha)$ is an invariant of the homotopy class of $\alpha$. A family of extension problems $\{(A_i,\alpha_i)\}_{i\in J}$ forms a {\it basis} if for every extension problem $(B,\beta)$ there is $i\in J$ such that $B\subset A_i$ and the restriction $\alpha_i{\restriction}_B$ is homotopic to $\beta$. In that case we say that $(A_i,\alpha_i)$ contains $(B,\beta)$. In view of the Homotopy Extension Theorem the following Proposition is obvious. \begin{proposition} Suppose that a map $f\colon Y\to X$ resolves extension problems on $X$ from a given basis $\{(A_i,\alpha_i)\}_{i\in J}$. Then $f$ resolves all extension problems on $X$. \end{proposition} \begin{proposition} Let $K$ be fixed. Let $X$ be the limit space of an inverse sequence of compacta $\{X_k,q_k^{k+1}\}$ and let $\{(A^k_i,\alpha^k_i)\}_{i\in J_k}$ be a basis of extension problems for every $k$. Then $\{(q_k^{\infty})^{-1}(A^k_i,\alpha^k_i)\mid k\in\mathbb{N},i\in J_k\}$ is a basis of extension problems on $X$, where $q^{\infty}_k\colon X\to X_k$ denotes the limit projection of the inverse sequence. \end{proposition} \begin{proof} Since $K\in ANE$, for every extension problem $(A,\alpha)$ on $X$ there is a number $k$ and a map $\beta\colon q^{\infty}_k(A)\to K$ such that $\beta\circ q^{\infty}_k{\restriction}_A$ is homotopic to $\alpha$. Take a problem $(A^k_i,\alpha^k_i)$ containing $(q^{\infty}_k(A),\beta)$. Then $\alpha^k_i{\restriction}_{q^{\infty}_k(A)} \sim\beta$. The extension problem $(q^{\infty}_k)^{-1}(A^k_i,\alpha^k_i)$ contains the problem $(A,\alpha)$. \end{proof} \begin{lemma} For any extension problem $(A,\alpha\colon A\to K)$ on $X$ there is a resolution $g\colon Y\to X$ of it such that every preimage $g^{-1}(x)$ is a point or is homeomorphic to $K$. If additionally $X$ and $K$ are simplicial complexes, $A$ is a subcomplex and $\alpha$ is a simplicial map, then the resolving map $g$ can be chosen simplicial. \end{lemma} \begin{proof} Let $\pi\colon K\times I\to \operatorname{cone}(K)$ be the standard projection onto the cone. So, the preimage $\pi^{-1}(x)$ is a one-point set if $x$ is not the cone vertex, and it is homeomorphic to $K$ if $x$ is the cone vertex.
We identify $K$ with the bottom of the cone $\operatorname{cone}(K)$. Since $\operatorname{cone}(K)\in ANE$, there is an extension $\bar\alpha\colon X\to \operatorname{cone}(K)$ of $\alpha$. We define $Y$ as a pullback of the diagram: $$\begin{CD} Y @>>\gamma> K\times I\\ @VVgV @VV\pi V\\ X @>>\bar\alpha> \operatorname{cone}(K)\\ \end{CD} $$ Then $\operatorname{pr}\circ\gamma\colon Y\to K$ is a solution of the extension problem $g^{-1}(A,\alpha)$, where $\operatorname{pr}\colon K\times I\to K$ is the projection. Thus, the map $g\colon Y\to X$ resolves the problem $(A,\alpha)$. Since $g$ is parallel to $\pi$ in the pullback diagram, $g$ has the same set of topological types of point preimages, i.e.\ the set consisting of the one-point space and $K$. If $\alpha$ is simplicial and $A\subset X$ is a subcomplex, then we consider the natural structure of a simplicial complex on the cone $\operatorname{cone}(K)$. Send all vertices of $X$ which do not belong to $A$ to the cone vertex and thus define a simplicial extension $\bar\alpha$ of $\alpha$. Consider the product simplicial structures on $K\times I$ and $X\times(K\times I)$. Then the projection $\pi\colon K\times I\to \operatorname{cone}(K)$ is a simplicial map. Consider the induced triangulation on the pullback $Y\subset X\times(K\times I)$. The map $g$ is simplicial with respect to that triangulation. \end{proof} \begin{proposition} Let $X$ be the limit space of an inverse sequence $\{X_k;q^{k+1}_k\}$ and let $\{(A^k_i,\alpha^k_i)\}_{i\in J_k}$ be a basis of extension problems for each $k$. Assume that $q^{\infty}_k$ resolves all problems $(A^k_i,\alpha^k_i)$ for all $k$. Then $K\in AE(X)$. \end{proposition} \begin{proof} According to Proposition 5.5, $X$ has a basis of solvable extension problems. Then by Proposition 5.4 all extension problems on $X$ have solutions. It means that $K\in AE(X)$. \end{proof} \begin{remark*} If a map $f\colon Y\to X$ resolves some extension problem $(A,\alpha)$ on $X$, then for any map $g\colon Z\to Y$ the composition $f\circ g$ resolves $(A,\alpha)$. \end{remark*} \begin{lemma} Let $g\colon L\to M$ be a simplicial map onto a finite dimensional complex $M$ and let $h_*$ be a reduced homology theory such that $h_k(g^{-1}(x))=0$ for all $k<n$ ($k\in\mathbb{Z}$). Then $g$ induces an isomorphism $g_*\colon h_k(L)\to h_k(M)$ for $k<n$ and an epimorphism for $k=n$. \end{lemma} \begin{proof} We prove it by induction on $m=\operatorname{dim} M$. If $\operatorname{dim} M=0$, then the Lemma holds. Let $\operatorname{dim} M=m>0$. We denote by $A$ a regular neighborhood in $M$ of the $(m-1)$-dimensional skeleton $M^{(m-1)}$. Since the map $g\colon L\to M$ is simplicial, $g^{-1}(A)$ has a deformation retraction onto $g^{-1}(M^{(m-1)})$. By the induction assumption the Lemma holds for $g{\restriction}_{g^{-1}(M^{(m-1)})}\colon g^{-1}(M^{(m-1)})\to M^{(m-1)}$. Hence, the conclusion of the Lemma holds for $g{\restriction}_{g^{-1}(A)}\colon g^{-1}(A)\to A$. We define $B=M\setminus \operatorname{Int} A$, i.e.\ $B$ is a union of disjoint $m$-dimensional PL-cells $B=\bigcup B_i$. Since $g$ is simplicial, $g^{-1}(B_i)\simeq g^{-1}(c_i)\times B_i$ where $c_i\in B_i$. Therefore the conclusion of the Lemma holds for $g{\restriction}_{g^{-1}(B)}\colon g^{-1}(B)\to B$. Note that $\operatorname{dim}(A\cap B)=m-1$ and, hence, the Lemma holds for $g{\restriction}_{g^{-1}(A\cap B)}\colon g^{-1}(A\cap B)\to A\cap B$.
The Mayer-Vietoris sequence for the triad $(A,B,M)$ produces the following diagram: $$ \begin{CD} h_k(A'\cap B') @>>> h_k(A')\oplus h_k(B') @>>> h_k(L) @>>> h_{k-1}(A'\cap B') @>>>\\ @VVV @VVV @Vg_*VV @VVV \\ h_k(A\cap B) @>>> h_k(A)\oplus h_k(B) @>>> h_k(M) @>>> h_{k-1}(A\cap B) @>>>\\ \end{CD} $$ Here $A'=g^{-1}(A)$ and $B'=g^{-1}(B)$. The Five Lemma implies that $g_*$ is an isomorphism for $k<n$. The epimorphism version of the Five Lemma implies that $g_*$ is an epimorphism for $k=n$. \end{proof} \begin{lemma} Let $g\colon L\to M$ be a simplicial map onto a finite dimensional complex $M$ and let $h^*$ be a reduced cohomology theory such that $h^k(g^{-1}(x))=0$ for all $k<n$ ($k\in\mathbb{Z}$). Then $g$ induces an isomorphism $g^*\colon h^k(M)\to h^k(L)$ for $k<n$ and a monomorphism for $k=n$. \end{lemma} \begin{proof} We can apply the argument of Lemma 5.8 with the only difference that at the very end we should apply the monomorphism version of the Five Lemma. \end{proof} \begin{proof}[Proof of Theorem 5.2] Since $h_n(P)\neq 0$ ($h^n(P)\neq 0$), there exists a finite subcomplex $P_1\subset P$ such that the inclusion is $h_n$-essential ($h^n$-essential). For cohomology this follows from the continuity of $h^*$; for homology it follows from the fact that every homology class has compact support. We construct $X$ as the limit space of an inverse sequence of polyhedra $\{P_k;q_k^{k+1}\}$, where $f\colon X\to P$ will be the composition of $q^{\infty}_1$ and the inclusion $P_1\subset P$. We construct this sequence by induction on $k$ such that \begin{enumerate} \item for every $k$ there is fixed some countable basis of extension problems $\mathcal{A}^k=\{(A^k_i,\alpha^k_i)\}$ on $P_k$, \item for every $k$ some nonzero element $a_k\in h_n(P_k)$ ($a_k\in h^n(P_k)$) is fixed such that $(q^{k+1}_k)_*(a_{k+1})=a_k$ ($(q^{k+1}_k)^*(a_k)=a_{k+1}$) for all $k$, \item for every problem $(A^k_i,\alpha^k_i)\in\mathcal{A}^k$ there is $j>k$ such that $q^j_k$ resolves it. \end{enumerate} If we manage to construct such a sequence, then by Proposition 5.7 $K\in AE(X)$. The property (2) would imply that $f$ is $h_n$-essential ($h^n$-essential). Thus, Theorem 5.2 would be proven. Enumerate all prime numbers $2=p_1<p_2<p_3<\dots<p_k<\dots$. We are going to work with homology first. We fix some element $a_1\in h_n(P_1)$ which goes to a nonzero element $a\in h_n(P)$. Denote by $\tau$ a triangulation of $P_1$ and by $\beta^k\tau$ the $k$-th barycentric subdivision of $\tau$. There are only countably many subpolyhedra in $P_1$ with respect to all subdivisions $\beta^k\tau$. Since the set of homotopy classes $[L,K]$ is countable for every compact $L$, we have only countably many different extension problems $(A,\alpha)$ defined on those subpolyhedra. Denote by $\mathcal{A}^1$ the set of all these extension problems on $P_1$ with simplicial maps $\alpha$. Since $K\in ANE$, it is easy to show that $\mathcal{A}^1$ forms a basis of extension problems on $P_1$. We enumerate the elements of $\mathcal{A}^1$ by all powers of 2. Let $N\colon \mathcal{A}^1\to\mathbb{N}$ be the enumeration function. Then we consider the extension problem from $\mathcal{A}^1$ having number one in our list and resolve it by a simplicial map $g\colon L\to P_1$ by means of Lemma 5.6. By Lemma 5.8, $g_*\colon h_n(L)\to h_n(P_1)$ is an epimorphism. Take $a_2'\in h_n(L)$ such that $g_*(a_2')=a_1$.
Since the homology class $a_2'$ has compact support, there is a finite subcomplex $P_2\subset L$ and an element $a_2\in h_n(P_2)$ which goes to $a_2'$ under the inclusion homomorphism. We define the bonding map $q^2_1\colon P_2\to P_1$ as the restriction $g{\restriction}_{P_2}$ of $g$ to $P_2$. Then the condition (2) holds: $(q^2_1)_*(a_2)=a_1$. Then we define a countable basis $\mathcal{A}^2=\{(A^2_i,\alpha^2_i)\}$ of extension problems such that every $A^2_i$ is a subcomplex of $P_2$ with respect to an iterated barycentric subdivision of the triangulation on $P_2$. Enumerate the elements of $\mathcal{A}^2$ by all numbers of the form $2^k3^l$ with $k\geq 0$ and $l>0$. Lift all the problems from the list $\mathcal{A}^1$ to the space $P_2$, i.e.\ consider $(q^2_1)^{-1}(\mathcal{A}^1)$. Thus the family $(q^2_1)^{-1}(\mathcal{A}^1)\cup\mathcal{A}^2$ is enumerated by all numbers of the form $2^k3^l$; let $N\colon (q^2_1)^{-1}(\mathcal{A}^1)\cup\mathcal{A}^2\to\mathbb{N}$ be the enumeration function. Now consider the extension problem having number 2 in the updated list and repeat the whole procedure above to obtain $P_3$, and so on. Thus, all problems in $\mathcal{A}^k$ will be enumerated by numbers of the form $p_1^{l_1}p_2^{l_2} \cdots p_k^{l_k}$ with $l_k>0$. Since $k\leq p_k$, we have $k\in N((q^k_1)^{-1}(\mathcal{A}^1)\cup (q^k_2)^{-1}(\mathcal{A}^2)\cup \cdots \cup\mathcal{A}^k)$. Hence we can keep going for any $k$. As a result of this construction we have that if a problem $(A^l_i,\alpha^l_i)$ has number $k$, then $l\leq k$ and the problem is resolved by $q^{k+1}_l$. Thus, the conditions (1)--(3) hold. If we consider a continuous cohomology $h^*$ instead of homology, we apply Lemma 5.9 instead of Lemma 5.8. Then we apply the continuity to get a finite subcomplex $P_2$. The rest is the same. \end{proof} \begin{proof}[Proof of Theorem 5.1] By Corollary 5.3 we can realize all fundamental cd-types by compacta. According to Theorem 4.11 an arbitrary cd-type $F\in\mathcal{F}_+$ can be represented as $\bigvee\{\Phi(G,k_G)\mid G\in\sigma\}$. Then the one-point compactification of the disjoint union of fundamental compacta $\bigcup\{F(G,k_G)\mid G\in\sigma\}$ realizes the cd-type $F$. The property $F_X \mathbin{[+]} F_Y=F_{X\times Y}$ follows from the definition of the operation $\mathbin{[+]}$ and Lemmas 3.3, 3.9 and 3.13. The equality $F_X\vee F_Y=F_{X\vee Y}$ follows from the formula $\operatorname{dim}_GX\vee Y=\max\{\operatorname{dim}_GX,\operatorname{dim}_GY\}$ which is a consequence of Theorem 1.5. \end{proof} \section{Test spaces} Given an abelian group $G$, a compactum $X$ is said to be a $G$-{\it testing space} for some class of compacta $\mathcal{C}$ if for all spaces $Y\in\mathcal{C}$ the following equality holds: $$\operatorname{dim}_GY=\operatorname{dim}(X\times Y)-\operatorname{dim} X.$$ \begin{theorem} For any abelian group $G$ and any natural number $n$, there exists an $n$-dimensional compactum $T_n(G)$ which is a $G$-testing space for the class of compacta $Y$ satisfying the inequality $\operatorname{dim} Y-\operatorname{dim}_GY<n$.
\end{theorem} The following is the table of the dimension of the product of two fundamental compacta with $n\geq m$: \begin{figure}[h] \begin{tabular}{r|c|c|c|c|c|c|c} & $(\mathbb{Z}_{(p)},n)$& $(\mathbb{Z}_p,n))$& $(\mathbb{Z}_{p^\infty},n))$& $(\mathbb{Q},n))$& $(\mathbb{Z}_{(q)},n))$& $(\mathbb{Z}_q,n))$& $(\mathbb{Z}_{q^\infty},n)$\\ \hline $F(\mathbb{Q},m)$& $m{+}n$& $n{+}1$& $n{+}1$& $m{+}n$& $m{+}n$& $n{+}1$& $n{+}1$\\ \hline $F(\mathbb{Z}_{(p)},n)$& $m{+}n$& $n{+}1$& $n{+}1$& $m{+}n$& $m{+}n$& $n{+}1$& $n{+}1$\\ \hline $F(\mathbb{Z}_p,n)$& $m{+}n$& $m{+}n$& $n{+}1$& $n{+}1$& $m{+}n$& $n{+}1$& $n{+}1$\\ \hline $F(\mathbb{Z}_{p^\infty},n)$& $m{+}n$& $m{+}n{-}1$& $m{+}n{-}1$& $n{+}1$& $m{+}n$& $m{+}n{-}1$& $n{+}1$\\ \end{tabular} \caption{Dimension of the product of two fundamental compacta with $n\geq m$} \end{figure} Here $q\neq p$. We leave to the reader the computations in this table. They are based on Proposition 4.5 and the formula $F_X \mathbin{[+]} F_Y = F_{X\times Y}$. The result of calculations, presented in the table, can be summarized in the following formula ($n\geq m$): $$\operatorname{dim} (F(G,n)\times F(G',m))=\operatorname{dim}_GF(G',m)+n.$$ \begin{proposition} For any fundamental cd-type $\Phi(G,n)$ and any other cd-type $F$ there is the formula: $$\|F \mathbin{[+]} \Phi(G,n)\| = \begin{cases} \max\{\|F\|+1, n+\phi_F(G)\}& \text{if $\| F \| \geq n$,}\\ n+\phi_F(G)& \text{if $\| F \| \leq n$.}\\ \end{cases} $$ \end{proposition} \begin{proof} The function $\phi_F$ was defined in the beginning of \S4. The fundamental cd-types can be given via functions $\phi_F$ by means of the table of \S5. If $F$ is a fundamental cd-type, then the result follows from the table. In general case by Theorem 4.11 $F = \bigvee\{\Phi(G',k_{G'}) \mid G'\in\sigma\}$. Then \begin{align*} \| F \mathbin{[+]} \Phi(G,n)\|& = \sup\left\{ \| \Phi(G',k_{G'}) \mathbin{[+]} \Phi(G,n) \| \right\}\\ & = \sup\left\{ \max\{k_{G'}\}+1, n+\phi_{\Phi(G',k_{G'})}(G)\} \right\}\\ & = \max\left\{ \sup\{k_{G'}\}+1,n+\sup\{\phi_{\Phi(G',k_{G'})}(G)\} \right\}\\ & = \max\{\|F\|+1,n+\phi_F(G)\}. \end{align*} \end{proof} \begin{proof}[Proof of Theorem 6.1] We define $T_n(G)$ as a compactum representing the following cd-type $\bigvee\{\Phi(h,n) \mid H\in\sigma(G)\}$. Let us consider a compactum $X$ with $\operatorname{dim} X-\operatorname{dim}_GX<n$. If $\|F_X\|<n$, then by Proposition 6.2, \begin{align*} \operatorname{dim}(X\times T_n(G))& = \| F_X \mathbin{[+]} \bigvee\{\Phi(H,n) \mid H\in\sigma(G)\} \|\\ & = \sup\{\| F_X \mathbin{[+]} \Phi(H,n) \| \mid H\in\sigma(G)\}\\ & = n + \phi_{F_X}(G)\\ & = n + \operatorname{dim}_GX. \end{align*} So, the testing formula holds. If $\|F_X\|\geq n$, then by Proposition 6.2, \begin{align*} \operatorname{dim}(X\times T_n(G))& = \|F_X \mathbin{[+]} \bigvee\{\Phi(H,n) \mid H\in\sigma(G)\}\|\\ & = \sup\{\| F_X \mathbin{[+]} \Phi(H,n)\| \mid H\in\sigma(G)\}\\ & = \max\{\| F_X \| + 1, n+\phi_{F_X}(G)\}\\ & = \max\{\operatorname{dim} X+1, n+\operatorname{dim}_GX\}. \end{align*} Since $\operatorname{dim} X-\operatorname{dim}_G\leq n-1$, we have that $\operatorname{dim} X+1\leq n+\operatorname{dim}_GX$ and hence, $\operatorname{dim}(X\times T_n(G))=n+\operatorname{dim}_GX$. \end{proof} \begin{theorem} For two finite dimensional compacta $X$ and $Y$ the following conditions are equivalent: \begin{enumerate} \item $X$ and $Y$ have the same cd-type: $F_X=F_Y$, \item for every compactum $Z$ there is the equality $\operatorname{dim}(X\times Z)=\operatorname{dim}(Y\times Z)$. 
\end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow$(2): The cd-type $F_{X\times Z}$ equals $F_X \mathbin{[+]} F_Z$ and hence depends only on the cd-type of $X$. Therefore, $\operatorname{dim}(X\times Z) = \operatorname{dim}_{\mathbb{Z}}(X\times Z) = \|F_{X\times Z}\| = \|F_X \mathbin{[+]} F_Z\| = \|F_Y \mathbin{[+]} F_Z\| = \operatorname{dim}(Y\times Z)$. (2)$\Rightarrow$(1): Given a group $G$, we take $Z=T_n(G)$ with $n>\max\{\operatorname{dim} X,\operatorname{dim} Y\}$. Then by the testing equality we obtain: $\operatorname{dim}_GX+n = \operatorname{dim}(X\times T_n(G)) = \operatorname{dim}(Y\times T_n(G)) = \operatorname{dim}_GY+n$. Hence, $\operatorname{dim}_GX=\operatorname{dim}_GY$. \end{proof} \begin{corollary} A finite dimensional compactum $X$ is dimensionally full-valued if and only if $\operatorname{dim}(X\times Z)=\operatorname{dim} X+\operatorname{dim} Z$ for all compacta $Z$. \end{corollary} \begin{proof} Let $n=\operatorname{dim} X$ and take $Y=I^n$. If $X$ is dimensionally full-valued, then it has the same cd-type as the $n$-cube $Y$. Since $\operatorname{dim}(Y\times Z)=n+\operatorname{dim} Z$, Theorem 6.3 gives $\operatorname{dim}(X\times Z)=n+\operatorname{dim} Z=\operatorname{dim} X+\operatorname{dim} Z$. Conversely, if $\operatorname{dim}(X\times Z)=\operatorname{dim} X+\operatorname{dim} Z=n+\operatorname{dim} Z$, then $\operatorname{dim}(X\times Z)=\operatorname{dim}(Y\times Z)$ for all compacta $Z$. Hence $\operatorname{dim}_GX=\operatorname{dim}_GI^n=n$ for all $G$. Therefore, $X$ is dimensionally full-valued. \end{proof} The test spaces are very useful for extending some results of Dimension Theory to cohomological dimension. \begin{theorem} Let $f\colon X\to Y$ be a map between compacta and let $G$ be an abelian group. \begin{enumerate} \item If $f$ is a $(k{+}1)$-to-1 map, i.e.\ $|f^{-1}(y)|\leq k+1$ for every $y\in Y$, then $\operatorname{dim}_GX\geq \operatorname{dim}_GY-k$, \item If $f$ is open and all point preimages are countable, then $\operatorname{dim}_GX=\operatorname{dim}_GY$. \end{enumerate} \end{theorem} \begin{proof} (1). Consider the map $f\times \operatorname{id}\colon X\times T_n(G)\to Y\times T_n(G)$ for large enough $n$ and apply the Hurewicz Theorem to obtain $\operatorname{dim}(X\times T_n(G))\geq \operatorname{dim}(Y\times T_n(G))-k$. Then the inequality $\operatorname{dim}_GX\geq \operatorname{dim}_GY-k$ follows from the $G$-testing formula. (2). Consider the same map as in (1) and apply the Alexandroff Theorem to obtain the result. \end{proof} Let $F\in\mathcal{F}$ be a cd-type; denote by $kF$ the sum $\mathbin{[+]}_{i=1}^kF$. We recall that the integers $\mathbb{Z}$ are naturally embedded in $\mathcal{F}$. For every $n\in\mathbb{Z}$ we denote by $\tilde n$ the image of $n$ in $\mathcal{F}$ under this embedding. \begin{proposition} Let $G\in\sigma$. Then \begin{enumerate} \item $2\Phi(G,n)=\Phi(G,2n)\vee \tilde 2$ and in the general case $k\Phi(G,n) = \Phi(G,kn) \vee \tilde k$ if $G\neq\mathbb{Z}_{p^{\infty}}$, \item $2\Phi(G,n) = \Phi(G,2n-1) \vee \tilde 2$ and $k\Phi(G,n) = \Phi(G,kn-k+1) \vee \tilde k$ if $G=\mathbb{Z}_{p^{\infty}}$. \end{enumerate} \end{proposition} \begin{proof} Let $\Phi(G,n)=(\mathcal{S}_n,\mathcal{D}_n;d_n)$; then $2\Phi(G,n) = (\mathcal{S}_n,\mathcal{D}_n;2d_n) = (\mathcal{S}_{2n},\mathcal{D}_{2n};2d_n)$. Let $\Phi(G,2n)\vee\tilde 2=(\mathcal{S}',\mathcal{D}';d')$. Then the field function $d'$ of $\Phi(G,2n)\vee\tilde 2$ is defined by the formula $d'(x)=\max\{d_{2n}(x),2\}=2d_n(x)$.
If $\Phi(G,2n)$ is $p$-regular, then $\Phi(G,2n)\vee\tilde 2$ is $p$-regular. If $\Phi(G,2n)$ is $p$-singular, then $\Phi(G,2n)\vee\tilde 2$ is $p$-singular provided $2n>2$. Hence, $\mathcal{S}'=\mathcal{S}_{2n}$. Similarly, $\mathcal{D}'=\mathcal{D}_{2n}$. The proof in the case $k>2$ is no more difficult. In the case (2) the only difference is that the formula for $d'$ reads $d'(x)=\max\{d_{2n-1}(x),2\}=2d_n(x)$. The rest of the argument is the same. \end{proof} \begin{lemma} Let $X$ be a fundamental compactum of the type $(G,n)$, $G\in\sigma$. Then for every $k$, the $k$-th power $X^k$ is a $G$-testing space for the class of compacta $Y$ with $\operatorname{dim} Y-\operatorname{dim}_GY<n$. \end{lemma} \begin{proof} By Proposition 6.6 the cd-type of the compactum $X^k$ is the same as the cd-type of the disjoint union $Z\coprod I^k$, where $Z$ is a fundamental compactum of the cd-type $(G,m)$ and $m=\operatorname{dim} X^k$. Hence, $\operatorname{dim}(Y\times X^k)=\max\{\operatorname{dim}(Y\times Z),\operatorname{dim} Y+k\}$. By Theorem 6.1 we can continue $=\max\{\operatorname{dim}_GY+m, \operatorname{dim} Y+k\}$. Since for $k>1$ the inequality $m-k\geq kn-k+1-k\geq n-1\geq \operatorname{dim} Y-\operatorname{dim}_GY$ holds, we get $\operatorname{dim}_GY+m\geq \operatorname{dim} Y+k$. Hence, $\operatorname{dim}(Y\times X^k)=\operatorname{dim}_GY+m$. \end{proof} \begin{proposition} Let $R$ be a principal ideal domain with unity $1\in R$. Then $\mathbb{Z}_{p^{\infty}}\notin\sigma(R)$ for every prime $p$. \end{proposition} \begin{proof} Assume that $\mathbb{Z}_{p^{\infty}}\in\sigma(R)$. This means that the $p$-torsion subgroup $T=p\text{-}\operatorname{Tor}(R)$ is nontrivial and $p$-divisible. Note that $T$ is an ideal in $R$. Therefore, $T=uR$ for some $u\in R$. Since $1\in R$, it follows that $u\in T$. Let $p^k$ be the order of $u$. Since $T$ is $p$-divisible, there is an element $u/p^k\in T$ with $p^k(u/p^k)=u$. Then $u/p^k=uv$ for some $v\in R$. Hence, $0=(p^ku)v=p^k(uv)=p^k(u/p^k)=u$. Contradiction. \end{proof} \begin{theorem} Let $f\colon X\to Y$ be a continuous map between finite dimensional compacta. Then \begin{enumerate} \item $\operatorname{dim}_GX\leq \operatorname{dim}_GY+\max\{\operatorname{dim} f^{-1}(y) \mid y\in Y\}$ for any abelian group $G$, \item $\operatorname{dim}_GX\leq \operatorname{dim} Y+\max\{\operatorname{dim}_Gf^{-1}(y) \mid y\in Y\}$ for any abelian group $G$, \item $\operatorname{dim}_GX\leq \operatorname{dim}_GY+\max\{\operatorname{dim}_Gf^{-1}(y) \mid y\in Y\}$ if $G$ is a principal ideal domain with unity, \item $\operatorname{dim}_GX\leq \operatorname{dim}_GY+\max\{\operatorname{dim}_Gf^{-1}(y) \mid y\in Y\}+1$ for any abelian group $G$. \end{enumerate} \end{theorem} \begin{proof} Let $n>\max\{\operatorname{dim} X,\operatorname{dim} Y\}$. (1). We consider the map $f\times \operatorname{id} \colon X\times T_n(G)\to Y\times T_n(G)$. The Hurewicz Theorem from Dimension Theory implies that \begin{align*} \operatorname{dim}(X\times T_n(G))& \leq \operatorname{dim}(Y\times T_n(G))\\ & \quad + \max\{\operatorname{dim}(f\times \operatorname{id})^{-1}(y,t) \mid (y,t)\in Y\times T_n(G)\}. \end{align*} Since $(f\times \operatorname{id})^{-1}(y,t) = f^{-1}(y)$ for all $t$, we obtain $\operatorname{dim}_GX+n \leq \operatorname{dim}_GY+n+\max\{\operatorname{dim} f^{-1}(y) \mid y\in Y\}$. (2). For that case we consider the map $f\circ\pi\colon X\times T_n(G)\to Y$, where $\pi\colon X\times T_n(G)\to X$ is the projection.
By the Hurewicz theorem, we have $ \operatorname{dim}(X\times T_n(G)) \leq \operatorname{dim} Y + \max\{\operatorname{dim}(f\circ\pi)^{-1}(y) \mid y\in Y\}$. Then \begin{align*} \operatorname{dim}_G X + n& \leq \operatorname{dim} Y + \max\{\operatorname{dim}(f^{-1}(y)\times T_n(G)) \mid y\in Y\}\\ & = \operatorname{dim} Y + \max\{\operatorname{dim}_G(f^{-1}(y))\} + n, \end{align*} where the last equality uses the $G$-testing property of $T_n(G)$. (3). Let $G\in\sigma$. We consider the map $(f\circ\pi)\times \operatorname{id}\colon X\times T_n(G)\times T_n(G)\to Y\times T_n(G)$. Note that $((f\circ\pi)\times \operatorname{id})^{-1}(y,t)=f^{-1}(y)\times T_n(G)$. By Lemma 6.7 $T_n(G)\times T_n(G)$ is a $G$-testing space. This together with the Hurewicz theorem gives \begin{equation*}\tag{$\ast$} \operatorname{dim}_G X + \operatorname{dim}(T_n(G)\times T_n(G)) \leq \operatorname{dim}_G Y + n + \max\{\operatorname{dim}_Gf^{-1}(y)\} + n. \end{equation*} If $G\neq\mathbb{Z}_{p^{\infty}}$, then $\operatorname{dim}(T_n(G)\times T_n(G))\leq 2n$ and hence, $\operatorname{dim}_GX\leq \operatorname{dim}_GY+\max\{\operatorname{dim}_Gf^{-1}(y)\}$. Let $G$ be a PID with unity. Then by Proposition 6.8 no $\mathbb{Z}_{p^{\infty}}$ belongs to $\sigma(G)$. By the Bockstein theorem $\operatorname{dim}_GX=\operatorname{dim}_HX$ for some $H\in\sigma(G)$. Then \begin{align*} \operatorname{dim}_G X& = \operatorname{dim}_H X\\ & \leq \operatorname{dim}_H Y + \max\{\operatorname{dim}_H f^{-1}(y) \mid y\in Y\}\\ & \leq \operatorname{dim}_G Y + \max\{\operatorname{dim}_G f^{-1}(y) \mid y\in Y\}. \end{align*} (4). Apply Proposition 3.4(2) to ($\ast$) to obtain the required inequality. \end{proof} We recall that a map $f\colon X\to \mathbb{R}^n$ of a subset $X\subset\mathbb{R}^n$ is called an $\epsilon$-move if $\|f(x)-x\|<\epsilon$ for all $x\in X$. \begin{theorem} For any compactum $X\subset\mathbb{R}^n$ and for any abelian group $G$ there is an $\epsilon>0$ such that the inequality $\operatorname{dim}_Gf(X)\geq \operatorname{dim}_GX$ holds for every $\epsilon$-move $f\colon X\to\mathbb{R}^n$. \end{theorem} \begin{proof} Since the test space $T_n(G)$ is $n$-dimensional, we may assume that $T_n(G)\subset\mathbb{R}^{2n+1}$. The Alexandroff Theorem says that for the compactum $X\times T_n(G)\subset\mathbb{R}^n\times\mathbb{R}^{2n+1}$ there is a positive $\epsilon$ such that for every $\epsilon$-move $g\colon X\times T_n(G)\to\mathbb{R}^{3n+1}$ one has the inequality $\operatorname{dim}(g(X\times T_n(G)))\geq \operatorname{dim}(X\times T_n(G))$. Given an $\epsilon$-move $f\colon X\to\mathbb{R}^n$ we define another $\epsilon$-move $g\colon X\times T_n(G)\to\mathbb{R}^n\times\mathbb{R}^{2n+1}$ as $f\times \operatorname{id}$. We note that $g(X\times T_n(G))=f(X)\times T_n(G)$. Then \begin{align*} \operatorname{dim}_G f(X) + n& = \operatorname{dim}(f(X)\times T_n(G))\\ & = \operatorname{dim}(g(X\times T_n(G)))\\ & \geq \operatorname{dim}(X\times T_n(G))\\ & = \operatorname{dim}_G X + n. \end{align*} Hence, $\operatorname{dim}_Gf(X) \geq \operatorname{dim}_GX$. \end{proof} \section{Infinite-dimensional compacta of finite cohomological dimension} According to the Realization Theorem (5.1), for any abelian group $G\in\sigma$ and any number $n\in\mathbb{N}$ there is an $n$-dimensional compactum $X_{n,G}$ with the cohomological dimension $\operatorname{dim}_GX_{n,G}=1$. Using these data it is easy to construct an infinite dimensional compactum $X$ with $\operatorname{dim}_GX=1$. It suffices to consider the one-point compactification $a(\bigcup_{n=1}^{\infty}X_{n,G})$ of the disjoint union of the compacta $X_{n,G}$.
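As an illustration, this construction can be made explicit for $G=\mathbb{Z}_p$, using only the table of \S5 and the countable sum theorem for cohomological dimension.

\begin{example*} For $G=\mathbb{Z}_p$ one can take $X_{n,\mathbb{Z}_p}\in F(\mathbb{Q},n)$: according to the table of \S5, $\operatorname{dim}_{\mathbb{Z}_p}X_{n,\mathbb{Z}_p}=1$ while $\operatorname{dim} X_{n,\mathbb{Z}_p}=n$. Then for $X=a(\bigcup_{n=1}^{\infty}X_{n,\mathbb{Z}_p})$ the countable sum theorem gives $\operatorname{dim}_{\mathbb{Z}_p}X=\sup_n\operatorname{dim}_{\mathbb{Z}_p}X_{n,\mathbb{Z}_p}=1$, whereas $\operatorname{dim} X\geq\operatorname{dim} X_{n,\mathbb{Z}_p}=n$ for every $n$, so $X$ is infinite dimensional. \end{example*}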
As it follows from 1.3(3), there is no such compactum for $G=\mathbb{Z}$. By the Alexandroff Theorem any $n$-dimensional compactum $X$ has $\operatorname{dim}_{\mathbb{Z}}X=n$. Nevertheless one can prove the following: \begin{theorem} There is an infinite-dimensional compactum $X$ having $\operatorname{dim}_{\mathbb{Z}}X\leq 3$. \end{theorem} \begin{proof} The proof is based on the following result in K-theory: $$\tilde K_{\mathbb{C}}^*(K(\mathbb{Z},n);\mathbb{Z}_p)=0 \quad \text{for $n\geq 3$ \cite{B-M},\cite{A-H}.}$$ Here $h^*=K_{\mathbb{C}}^*(\ ;\mathbb{Z}_p)$ is the reduced complex K-theory with $\mathbb{Z}_p$ coefficients, i.e.\ $h^*$ is the generalized cohomology theory defined by the spectrum $E_{2n}=BU^{M(\mathbb{Z}_p,1)}$ and $E_{2n+1}=U^{M(\mathbb{Z}_p,1)}$. This cohomology theory is continuous, since $h^k(L)$ is a finite group for every compact polyhedron $L$. We apply Theorem 5.2 to $P=S^4$, $K=K(\mathbb{Z},3)$ and $h^*$ for $n=0$ to obtain an essential map $f\colon X\to S^4$ of a compactum $X$ having $\operatorname{dim}_{\mathbb{Z}}X\leq 3$. If we assume for a moment that the dimension of $X$ is finite, then by the Alexandroff Theorem, $\operatorname{dim} X\leq 3$. But a map of a 3-dimensional compactum to a 4-dimensional sphere cannot be essential. Hence $\operatorname{dim} X=\infty$. \end{proof} We note that one can use K-homology instead of K-cohomology here, since $$\tilde K^{\mathbb{C}}_*(K(\mathbb{Z},3);\mathbb{Z}_p)=0$$ as well. In that case a compactum $X$ has an $h_*$-essential map $f\colon X\to S^4$. Moreover, by the proof of Theorem 5.2 one can assume that any given element $a\in h_*(P)$ lies in the image $\operatorname{Im}(f_*)$. For applications we need a relative version of this. \begin{theorem} Let $h_*$ be a reduced generalized homology theory with $$h_*(K(G,n))=0,\quad n\in\mathbb{N}.$$ Then for every compact polyhedral pair $(P,L)$ and any element $a\in h_*(P,L)$ there is a compactum $X\supset L$ and a map $f\colon (X,L)\to (P,L)$ such that \begin{enumerate} \item $\operatorname{dim}_G(X\setminus L)\leq n$, \item $a\in \operatorname{Im}(f_*)$ and \item $f{\restriction}_L = \operatorname{id}_L$. \end{enumerate} \end{theorem} This theorem is a relative version of Theorem 5.2 for $K=K(G,n)$. If one applies this theorem to the pair $(B^4,\partial B^4)$ with $G=\mathbb{Z}$, $n=3$ and $h_*=\tilde K_*(\ ;\mathbb{Z}_p)$ for odd $p$ and some nontrivial element in $h_*(B^4,\partial B^4)$, one gets a compactum $X\supset S^3$ with $\operatorname{dim}_{\mathbb{Z}}X=3$ and an essential map onto $B^4$. Hence, $X$ is infinite-dimensional as in Theorem 7.1. Using more advanced algebraic topology we are going to prove the following: \begin{theorem}[\cite{D-W2}] There is an infinite dimensional compactum $X$ with $\operatorname{dim}_{\mathbb{Z}}(X\times X)=3$. \end{theorem} We recall that a truncated spectrum is a sequence of pointed spaces $\mathbb{E}=\{E_i\}$, $i\leq 0$, such that $E_{i-1}=\Omega E_i$. Thus, any truncated spectrum is generated by one space $E_0$. The lower half of every $\Omega$-spectrum is an example of a truncated spectrum. The reduced truncated cohomology $T^i(X;\mathbb{E})$ of a given space $X$ with coefficients in a given truncated spectrum $\mathbb{E}$ is the set of pointed homotopy classes of mappings of $X$ to $E_i$. Note that $T^i(X)$ is a group for $i<0$ and it is an abelian group for $i<-1$. Truncated cohomologies possess many features of generalized cohomology. For every map $f\colon X\to Y$ there is the induced homomorphism ($i<0$) $f^*\colon T^i(Y)\to T^i(X)$.
Homotopic maps induce the same homomorphism and a null-homotopic map induces the zero homomorphism. There is the natural Mayer-Vietoris exact sequence $$ \cdots \to T^r(A\cup B) \to T^r(A)\times T^r(B) \to T^r(A\cap B) \to T^{r+1}(A\cup B) \to \cdots $$ of groups for $r\leq -1$ and abelian groups for $r\leq -2$. Therefore Lemma 5.9 holds for a truncated cohomology for $n\leq -2$. We call a truncated cohomology $T^*$ continuous if for every direct limit of finite CW-complexes $L=\varinjlim \{L_i;\lambda^i_{i+1}\}$ the following formula holds: $T^k(L)=\varprojlim T^k(L_i)$ for $k<0$. We note that the Milnor Theorem holds for truncated cohomologies: $$ 0 \to {\varprojlim}^1 \{T^{k-1}(L_i)\} \to T^k(L) \to \varprojlim \{T^k(L_i)\} \to 0. $$ Hence, if $T^k(M)$ is a finite group for every finite complex $M$ and every $k<-1$, then by the Mittag-Leffler condition $T^*$ is continuous. We consider the truncated cohomology $T^*$ generated by the mapping space $E_0=(S^7)^M$, where $M=M(\mathbb{Z}_2,1)=\mathbb{R} P^2$ is a Moore space of the type $(\mathbb{Z}_2,1)$ and $S^7$ is the 7-dimensional sphere. \begin{lemma} The truncated cohomology theory $T^*$ is continuous. \end{lemma} For the proof we need the following \begin{proposition} Let $\nu_2\colon S^1\to S^1$ be a map of degree two. Then the map $\nu_2\wedge \operatorname{id}\colon S^1\wedge\mathbb{R} P^2\to S^1\wedge\mathbb{R} P^2$ is null-homotopic. \end{proposition} \begin{proof} The space $S^1\wedge\mathbb{R} P^2$ is the suspension $\Sigma M$ over the projective space, and there is a natural quotient map $p\colon B^3\to \Sigma M$. Temporarily we denote by 2 a fixed map of degree 2 between 2-spheres and by 1 the identity map of the 2-sphere. Let $C_q$ denote the mapping cone of a map $q\colon X\to Y$, i.e.\ $C_q=\operatorname{cone}(X) \cup_q Y$. Consider the following commutative diagram: $$ \begin{CD} S^2 @>>1> S^2 @>>> C_1\\ @VV1V @VV2V @VVpV\\ S^2 @>>2> S^2 @>>> C_2\\ @AA2A @AA2A @AAgA\\ S^2 @>>2> S^2 @>>> C_2\\ \end{CD} $$ Here the mapping cone $C_1$ is homeomorphic to the 3-ball $B^3$ and $C_2$ is homeomorphic to $\Sigma M$. First we note that the map $g$ is homotopic to the map $\nu_2\wedge \operatorname{id}$. Then we show that $g$ has a lift $g'\colon \Sigma M\to B^3$ with respect to $p$. In fact $g'$ is defined by the following diagram: $$ \begin{CD} S^2 @>>1> S^2 @>>> C_1\\ @AA2A @AA1A @AAg'A\\ S^2 @>>2> S^2 @>>> C_2\\ \end{CD} $$ Since $B^3$ is contractible, $g'$ is null-homotopic and, hence, $g$ is null-homotopic. \end{proof} \begin{proof}[Proof of Lemma 7.4] We show that every element of the group $T^k(L)$ has order 2 for $k<0$. Indeed, $T^k(L) = [L,\Omega^{-k}(S^7)^M] = [\Sigma M,(S^7)^{\Sigma^{-k-1}L}]$. For any space $N$ and for any element $a\in[\Sigma M,N]$ represented by a map $f\colon \Sigma M\to N$, the element $2a$ is represented by the map $f\circ(\nu_2\wedge \operatorname{id})$, which is null-homotopic by virtue of Proposition 7.5. Note that $T^k(L)=[S^{-k}\wedge L\wedge M,S^7]$. When a complex $L$ is finite this group is finitely generated. Hence in the case of $k<-1$, the group $T^k(L)$ of any finite complex $L$ is finite. As we know, this suffices for continuity. \end{proof} \begin{proposition} For every $k<0$ we have $T^k(K(\mathbb{Z}[\frac{1}{2}],1))=0$.
\end{proposition} \begin{proof} We can present $K(\mathbb{Z}[\frac{1}{2}],1)$ as the direct limit of complexes $M_i$ where each $M_i$ is homotopy equivalent to the circle $S^1$ and every bonding map $\xi_i\colon M_i\to M_{i+1}$ is homotopy equivalent to a map $S^1\to S^1$ of degree two. Then $T^k(K(\mathbb{Z}[\frac{1}{2}],1)) = [\varinjlim \{M_i,\xi_i\},\Omega^{-k}(S^7)^M] = [(\varinjlim \{M_i,\xi_i\})\wedge \Sigma^{-k}M,S^7] = [\varinjlim \{M_i\wedge \Sigma^{-k}M,\xi_i\wedge \operatorname{id}\},S^7]$. Consider a bonding map $\xi_i\wedge \operatorname{id}\colon M_i\wedge \Sigma^{-k}M\to M_{i+1}\wedge \Sigma^{-k}M$. This map is homotopy equivalent to a suspension of the map $\nu_2\wedge \operatorname{id}$ and hence, by Proposition 7.5, it is homotopically trivial. Therefore the space $\varinjlim \{M_i\wedge \Sigma^{-k}M,\xi_i\wedge \operatorname{id}\}$ is homotopically trivial. Hence, $T^k(K(\mathbb{Z}[\frac{1}{2}],1))=0$. \end{proof} We also need the following result. \begin{theorem}[Miller Theorem (Sullivan Conjecture) \cite{Mi}] Let $K$ be a CW-complex of finite dimension and $\pi$ be a finite group. Then the mapping space $K^{K(\pi,1)}$ is weakly homotopy equivalent to a point. \end{theorem} \begin{proposition} For every $k$ we have $T^k(K(\mathbb{Z}_2,1))=0$. \end{proposition} \begin{proof} We note that $T^k(K(\mathbb{Z}_2,1)) = [K(\mathbb{Z}_2,1),(S^7)^{\Sigma^{-k}M}] = [\Sigma^{-k}M,(S^7)^{K(\mathbb{Z}_2,1)}] = 0$ by Theorem 7.7. \end{proof} The following Proposition is a version of Theorem 5.2 for a truncated cohomology. \begin{proposition} Let $P$ and $K$ be simplicial complexes and assume that $K$ is a countable complex. Let $T^*$ be a reduced truncated continuous cohomology theory. If, for some $n<-1$, $T^n(P)\neq 0$ and $T^k(K)=0$ for all $k<n$, then there exist a compactum $X$, having the property $K\in AE(X)$, and a $T^n$-essential map $f\colon X\to P$. \end{proposition} The proof is the same. \begin{proof}[Proof of Theorem 7.3] We take $P=S^3$, $K = K(\mathbb{Z}[\frac{1}{2}],1)\vee K(\mathbb{Z}_2,1)$ and $T^*$ as above. Note that $T^{-2}(S^3) = [S^3,\Omega^2(S^7)^M]=[S^3\wedge S^2\wedge M, S^7] = [\Sigma^5M,S^7] = [M(\mathbb{Z}_2,6),S^7] = H^7(M(\mathbb{Z}_2,6)) \cong H_6(M(\mathbb{Z}_2,6)) = \mathbb{Z}_2 \neq 0$. By Propositions 7.6 and 7.8 we have $T^k(K)=0$ for $k\leq-2$. Proposition 7.9 gives us a compactum $X$ with $\operatorname{dim}_{\mathbb{Z}_2}X\leq 1$ and $\operatorname{dim}_{\mathbb{Z}[\frac{1}{2}]}X\leq 1$ and a $T^{-2}$-essential map $f\colon X\to S^3$. By the Bockstein Theorem $\operatorname{dim}_{\mathbb{Z}_{(q)}}X\leq 1$ for all primes $q\neq 2$. Hence, the cohomological dimensions of $X$ with respect to all fields from the Bockstein basis $\sigma$ do not exceed one. Hence by Theorem 3.15 $\operatorname{dim}_{\mathbb{Z}}(X\times X)\leq 3$. Hence (see also Lemma 2.9), $\operatorname{dim}_{\mathbb{Z}}X\leq 2$. Since $X$ admits an essential map onto $S^3$, the dimension of $X$ cannot be $\leq 2$. Therefore by the Alexandroff Theorem $\operatorname{dim} X=\infty$. \end{proof} We recall that a space $X$ is strongly infinite dimensional provided that there exists an essential map $f\colon X\to I^{\infty}$ of $X$ onto the Hilbert cube. A map $f\colon X\to I^{\infty}$ is essential provided that $p\circ f\colon X\to I^n$ is essential for each coordinate projection $p\colon I^{\infty}\to I^n$. It is known that this definition does not depend on the product structure on the Hilbert cube $I^{\infty}$.
Finally we recall that a map $f\colon X\to I^n$ is essential provided the extension problem $(f^{-1}(\partial I^n),f{\restriction}_{f^{-1}(\partial I^n)})$ on $X$ for mappings to $\partial I^n$ has no solution. \begin{theorem} Let $h^*$ be a reduced continuous cohomology theory such that $h^*(K)=0$ for some countable simplicial complex $K$. Then there exists a strongly infinite dimensional compactum $X$ having the property $K\in AE(X)$. \end{theorem} \begin{corollary} There exists a strongly infinite dimensional compactum $X$ with $\operatorname{dim}_{\mathbb{Z}}X\leq 3$. \end{corollary} \begin{proof} Take $K=K(\mathbb{Z},3)$ and $h^*=\tilde K_{\mathbb{C}}^*(\ ;\mathbb{Z}_p)$. \end{proof} \begin{corollary} For every prime $p$ there is a strongly infinite dimensional compactum $X$ with $\operatorname{dim}_{\mathbb{Z}[\frac{1}{p}]}X=1$. \end{corollary} \begin{proof} Take $K=K(\mathbb{Z}[\frac{1}{p}],1)$ and $h^*=\tilde H^*(\ ;\mathbb{Z}_p)$. \end{proof} \begin{proof}[Proof of Theorem 7.10] By induction we construct two inverse sequences $\{P_k,q^{k+1}_k\}$ and $\{I^k,\omega^{k+1}_k\}$ and a morphism between them, i.e.\ a sequence of maps $\{f_k\colon P_k\to I^k\}$ such that all squares are commutative. The first sequence consists of polyhedra and the second sequence consists of $k$-cubes, $k=1,2,\dots$, with bonding maps $\omega^{k+1}_k\colon I^{k+1}\to I^k$ defined as projections on factors. For every $k$ we define by the same induction an element $\mu_k\in h^*(I^k,\partial I^k)$ and a countable basis $\mathcal{A}^k$ of extension problems on $P_k$ with respect to the complex $K$ and consisting of simplicial problems. We construct the sequences in such a way that \begin{enumerate} \item $(q^n_k)^*(f^*_k(\mu_k))\neq 0$ for any $k$ and every $n>k$, \item every extension problem $(A^k_i,\alpha^k_i)\in\mathcal{A}^k$ is resolved by $q^j_k$ for some $j$. \end{enumerate} First, assume that we can construct such sequences. Then by Proposition 5.7 the limit space $X=\varprojlim \{P_k,q^{k+1}_k\}$ has the property $K\in AE(X)$. Since $(\omega^{\infty}_k\circ f)^*(\mu_k) = (q^{\infty}_k)^*(f_k^*(\mu_k)) \neq 0$ by the continuity of $h^*$ and the condition (1), the map $\omega^{\infty}_k\circ f\colon X\to I^k$ is essential for every $k$. Therefore the limit map $f\colon X\to I^{\infty}$ is essential and hence $X$ is strongly infinite dimensional. Now we present the induction. We define $P_1=I^1$ and $f_1=\operatorname{id}$. Take a nonzero element $\mu_1\in h^*(I,\partial I)$ and fix a basis $\mathcal{A}^1$. Assume that the commutative diagram $$ \begin{CD} P_1 @<q^2_1<< P_2 @<q^3_2<< \cdots @<q^k_{k-1}<< P_k\\ @Vf_1VV @Vf_2VV @. @Vf_kVV\\ I^1 @<\omega^2_1<< I^2 @<\omega^3_2<< \cdots @<\omega^k_{k-1}<< I^k\\ \end{CD} $$ is already constructed, elements $\mu_i\in h^*(I^i,\partial I^i)$ are defined for $i\leq k$ and extension problem bases $\mathcal{A}^i$, $i\leq k$, are fixed such that \begin{enumerate} \item $(q^k_i)^*(f^*_i(\mu_i))\neq 0$ for all $i\leq k$, \item all problems in $\bigcup_{i=1}^k\mathcal{A}^i$ are enumerated by all numbers of the form $p_1^{l_1}p_2^{l_2}\dots p_k^{l_k}$ where $p_1,p_2,\dots, p_k$ are the first $k$ prime numbers, \item for every $i< k$ the problem having the number $i$ is resolved by some map $q^k_j$. \end{enumerate} To make the induction step we note that the number $k$ has the form $p_1^{l_1}\dots p_k^{l_k}$. Hence there is an extension problem $(A^r_i,\alpha^r_i)\in\mathcal{A}^r$ for some $r\leq k$ having the number $k$ in our list.
We lift that problem to the $k$-th level and apply Lemma 5.6 to resolve that lift by a simplicial (with respect to some subdivisions) map $g\colon L\to P_k$ having point preimages homeomorphic to $K$ or to the one-point space. By virtue of Lemma 5.9, the induced homomorphism $g^*\colon h^*(P_k)\to h^*(L)$ is an isomorphism. Since $h^*$ is continuous, there exists a compact subcomplex $L'\subset L$ such that $g_1^*((f_i\circ q^k_i)^*(\mu_i))\neq 0$ for all $i\leq k$, where $g_1$ is the restriction of $g$ onto $L'$. We define the complex $P_{k+1}=L'\times I$ and the bonding map $q^{k+1}_k\colon P_{k+1}\to P_k$ as the composition $g_1\circ\omega$ where $\omega\colon L'\times I\to L'$ is the projection. We define $f_{k+1}\colon P_{k+1}\to I^{k+1}$ as the product $(f_k\circ g_1)\times \operatorname{id}\colon L'\times I\to I^k\times I$. We let $\mu_{k+1}$ be the suspension $\Sigma\mu_k$. Then we define a countable basis $\mathcal{A}^{k+1}$ of extension problems on $P_{k+1}$ consisting of simplicial problems. Enumerate all the problems in the list $\mathcal{A}^{k+1}$ by all numbers of the form $p_1^{l_1}\dots p_k^{l_k}p_{k+1}^{l_{k+1}}$ with $l_{k+1}>0$. Let us verify the properties (1)--(3) for $k+1$. It is clear that the conditions (2)--(3) hold. By the construction the property (1) holds for $i\leq k$. Then all we need is to check that $f_{k+1}^*(\mu_{k+1})\neq 0$. We note that the homomorphism $f_{k+1}^*\colon h^*(I^{k+1},\partial I^{k+1}) \to h^*(P_{k+1},(f_{k+1})^{-1}(\partial I^{k+1}))$ is generated by the map $\operatorname{id}_{S^1} \wedge (f_k\circ g_1) \colon S^1 \wedge \left( L'/(f_k\circ g_1)^{-1}(\partial I^k) \right) \to S^1 \wedge (I^k/\partial I^k)$, which is the suspension $\Sigma(f_k\circ g_1)$. Since $(f_k\circ g_1)^*(\mu_k)\neq 0$, we have that $f^*_{k+1}(\mu_{k+1})\neq 0$. Thus, the induction step is completed. \end{proof} \section{Resolution theorems} In this section we prove some resolution theorems for the cohomological dimension. We start with resolutions of polyhedra. First we describe Williams' construction. \begin{definition} A simplicial complex over the $n$-simplex $\Delta^n$ is a pair $(L,\xi)$ where $L$ is a simplicial complex and $\xi\colon L\to\Delta^n$ is a nondegenerate simplicial map (no edge goes to a vertex). \end{definition} \begin{example*} The first barycentric subdivision of any simplicial $n$-dimensional complex $K$ defines the natural complex over $\Delta^n$. The map $\xi\colon \beta^1K\to\Delta^n=\{0,1,\dots,n\}$ assigns to every barycenter $c_{\sigma}\in\beta^1K$ the dimension of the corresponding simplex $\sigma$. \end{example*} Now for every resolution $f\colon X\to \Delta^n$ of a simplex $\Delta^n$ we can define a resolution of a simplicial complex $(L,\xi)$ over $\Delta^n$ by taking the pullback: $$ \begin{CD} X\Delta L @>>\xi'> X\\ @VVf'V @VVfV\\ |L| @>>\xi> \Delta^n\\ \end{CD} $$ For example, Pontryagin surfaces (Example 1.9) were constructed by taking resolutions of some triangulations of 2-dimensional polyhedra which are induced by a resolution $\xi\colon M_p\to\Delta^2$. Recall that $\xi$ is a simplicial map of $M_p$ onto a 2-simplex $\Delta^2$. Here $M_p$ is the mapping cylinder of a map of degree $p$ between two circles. \begin{definition} Let $G$ be an abelian group and $L$ be a simplicial complex.
An {\it Edwards-Walsh resolution} of $L$ in dimension $n$ is a pair $(EW(L,G,n),\omega)$ consisting of a CW-complex $EW(L,G,n)$ and a map $\omega\colon EW(L,G,n)\to |L|$ onto a geometric realization of $L$ such that \begin{enumerate} \item $\omega$ is 1-to-1 over the $n$-skeleton $L^{(n)}$, hence it defines an inclusion $j\colon L^{(n)}\subset EW(L,G,n)$, \item for every simplex $\Delta$ of $L$, $\omega^{-1}(\Delta)$ is a subcomplex of $EW(L,G,n)$ having the type of an Eilenberg-MacLane space $K(\bigoplus G,n)$, \item for every simplex $\Delta$ of $L$ the inclusion $\omega^{-1}(\partial\Delta)\subset\omega^{-1}(\Delta)$ induces an epimorphism $H^n(\omega^{-1}(\Delta);G)\to H^n(\omega^{-1}(\partial\Delta);G)$. \end{enumerate} Here we regard a contractible space as $K(\bigoplus G,n)$ with zero summands $G$. We recall that $\mathbb{Z}_{(\mathcal{L})}$ denotes the localization of the integers at a set of primes $\mathcal{L}\subset\mathcal{P}$. We say that an abelian group $G$ is {\it $\mathcal{L}$-local modulo torsion} if $G/\operatorname{Tor}(G)=G\otimes\mathbb{Z}_{(\mathcal{L})}$. \end{definition} \begin{lemma} For any of the groups $\mathbb{Z}$, $\mathbb{Z}_{(\mathcal{L})}$, $\mathbb{Z}_p$, for any $n\in\mathbb{N}$ and for any simplicial complex $L$ over a simplex $\Delta^m$ there is an Edwards-Walsh resolution $\omega\colon EW(L,G,n)\to |L|$ with the following additional property for $n>1$: \begin{itemize} \item[(4-$\mathbb{Z}$)] the $(n+1)$-skeleton of $EW(L,\mathbb{Z},n)$ is isomorphic to $L^{(n)}$, \item[(4-$\mathbb{Z}_p$)] the $(n+1)$-skeleton of $EW(L,\mathbb{Z}_p,n)$ is obtained from $L^{(n)}$ by attaching, for every $(n+1)$-dimensional simplex $\Delta^{n+1}$ of $L$, an $(n+1)$-cell by a map of degree $p$ onto the boundary $\partial\Delta^{n+1}$, \item[(4-$\mathbb{Z}_{(\mathcal{L})}$)] for every subcomplex $N\subset L$ the homomorphism $j_*\colon H_n(N^{(n)};\mathbb{Z}_{(\mathcal{L})})\to H_n(\omega^{-1}(N);\mathbb{Z}_{(\mathcal{L})})$ generated by the inclusion of the $n$-skeleton of $N$ in~$\omega^{-1}(N)$ is an isomorphism and the kernel of the homomorphism $\omega_*\colon H_n(\omega^{-1}(N))\to H_n(N)$ is $\mathcal{L}$-local modulo torsion. \end{itemize} \end{lemma} \begin{proof} First we consider the case when $n>1$. We consider three different cases. ($\mathbb{Z}$). Induction on $m$. If $m\leq n$, we define $EW(L,\mathbb{Z},n) = |L|$ and $\omega=\operatorname{id}_L$. Assume that there is a resolution with the properties (1)--(4) for every $m$-dimensional complex $L$. Consider a simplex $\Delta^{m+1}$ of dimension $m+1$. The barycentric subdivision of its boundary $K=\beta^1\partial\Delta^{m+1}$ is a complex over $\Delta^m$ and, hence, we can apply the induction assumption. The $n$-dimensional homotopy group $\pi_n(EW(K,\mathbb{Z},n))$ is equal to $\pi_n(K^{(n)})$ by the property (4-$\mathbb{Z}$). Since $K^{(n)}$ is homotopy equivalent to a wedge of $n$-spheres, the $n$-th homotopy group equals $\bigoplus \mathbb{Z}$. Therefore there exists a complex $\bar K\supset EW(K,\mathbb{Z},n)$ of the type $K(\bigoplus\mathbb{Z},n)$ such that its $(n{+}1)$-skeleton coincides with the $(n{+}1)$-skeleton of $EW(K,\mathbb{Z},n)$ and, hence, coincides with $K^{(n)}$. We define a map $\bar\omega\colon \bar K\to\Delta^{m+1}$ such that $\bar\omega$ is an extension of $\omega\colon EW(K,\mathbb{Z},n)\to K$ and $\omega^{-1}(t)=\bar\omega^{-1}(t)$ for every $t\in K$. We note that $\bar\omega$ has all the properties (1)--(4) and, hence, is a resolution of the $(m{+}1)$-simplex.
Then we apply the Williams construction to obtain a resolution of an arbitrary complex over $\Delta^{m+1}$. All properties (1)--(4) are easy to verify. ($\mathbb{Z}_p$). The same induction on $m$. Now we apply the property (4-$\mathbb{Z}_p$) to compute the $n$-th homotopy group $\pi_n(EW(K,\mathbb{Z}_p,n))$. The result is $\bigoplus\mathbb{Z}_p$. Similarly we construct $\bar K$ from $EW(K,\mathbb{Z}_p,n)$ by attaching cells in dimensions $n+2$ and higher. Then we apply the Williams construction. ($\mathbb{Z}_{(\mathcal{L})}$). We apply induction on $m$. If $m\leq n$, we define $EW(L,\mathbb{Z}_{(\mathcal{L})},n)= |L|$ and $\omega=\operatorname{id}_L$. Let $m\ge n+1$ and let $K$ be as in ($\mathbb{Z}$). If $m=n+1$ we attach to the $n$-sphere $K$ a complex $K(\mathbb{Z}_{(\mathcal{L})},n)$ having a $(\mathcal{P}\setminus\mathcal{L})$-telescope as the $(n{+}1)$-skeleton to obtain a complex $\bar K$. If $m>n+1$, then $K$ is $n$-connected and hence the condition (4-$\mathbb{Z}_{(\mathcal{L})}$) and the induction assumption imply that the group $H_n(\omega^{-1}(K))$ is $\mathcal{L}$-local modulo torsion. Hence there is a short exact sequence $$0\to G\to H_n(\omega^{-1}(K))\to H_n(\omega^{-1}(K))\otimes\mathbb{Z}_{(\mathcal{L})}\to 0$$ where $G$ is a $(\mathcal{P}\setminus\mathcal{L})$-torsion group. We note that $$H_n(\omega^{-1}(K))\otimes\mathbb{Z}_{(\mathcal{L})} = H_n(\omega^{-1}(K);\mathbb{Z}_{(\mathcal{L})}) = H_n(K^{(n)};\mathbb{Z}_{(\mathcal{L})}) = \bigoplus\mathbb{Z}_{(\mathcal{L})}$$ by the induction assumption. Since $\omega^{-1}(K)$ is $(n{-}1)$-connected, by the Hurewicz theorem we have a short exact sequence $0\to G\to\pi_n(\omega^{-1}(K))\to\bigoplus\mathbb{Z}_{(\mathcal{L})}\to 0$. We attach $(n{+}1)$-cells to $\omega^{-1}(K)$ along generators of the group $G$ and then we attach cells of higher dimension to obtain a complex $\bar K$ of the type $K(\bigoplus\mathbb{Z}_{(\mathcal{L})},n)$. As above we define an extension $\bar\omega\colon \bar K\to\Delta^{m+1}$ of $\omega\colon \omega^{-1}(K)\to K$ such that $\bar\omega(\bar K\setminus\omega^{-1}(K))\subset \operatorname{Int}\Delta^{m+1}$. Then we apply Williams' construction to obtain a resolution of an arbitrary complex over $\Delta^{m+1}$. The conditions (1)--(2) of an EW-resolution hold automatically. To verify (3) we show that every map $f\colon \omega^{-1}(K)\to K(\mathbb{Z}_{(\mathcal{L})},n)$ admits an extension $\bar f\colon \bar K\to K(\mathbb{Z}_{(\mathcal{L})},n)$. It holds true when $m=n+1$. For $m>n+1$ we note that $f_*(G)=0$ where the homomorphism $f_*\colon \pi_n(\omega^{-1}(K))\to\pi_n(K(\mathbb{Z}_{(\mathcal{L})},n))$ is induced by $f$. This means that $f$ can be extended to the $(n{+}1)$-dimensional skeleton of $\bar K$. Since there is no obstruction to extending this map over higher dimensional cells, the required extension exists. Now we check the property (4-$\mathbb{Z}_{(\mathcal{L})}$) by induction on the number of $(m{+}1)$-simplices in $N$. If that number is zero, the condition holds by the induction assumption. Let $N=N_1\cup\Delta$ where $\Delta$ is an $(m{+}1)$-simplex with $\Delta\cap N_1=\partial\Delta$. We consider the diagram generated by the inclusion $j\colon N^{(n)}\subset\omega^{-1}(N)$ and the homology Mayer-Vietoris sequences with $\mathbb{Z}_{(\mathcal{L})}$-coefficients for the triads $(N^{(n)},N^{(n)}_1,\Delta^{(n)})$ and $(\omega^{-1}(N), \omega^{-1}(N_1),\omega^{-1}(\Delta))$.
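Schematically (with all homology groups taken with $\mathbb{Z}_{(\mathcal{L})}$-coefficients), the middle portion of this ladder reads: $$ \begin{CD} H_n(\partial\Delta^{(n)}) @>>> H_n(N_1^{(n)})\oplus H_n(\Delta^{(n)}) @>>> H_n(N^{(n)}) @>>> H_{n-1}(\partial\Delta^{(n)})\\ @VVV @VVV @Vj_*VV @VVV\\ H_n(\omega^{-1}(\partial\Delta)) @>>> H_n(\omega^{-1}(N_1))\oplus H_n(\omega^{-1}(\Delta)) @>>> H_n(\omega^{-1}(N)) @>>> H_{n-1}(\omega^{-1}(\partial\Delta))\\ \end{CD} $$ where the vertical homomorphisms are induced by the inclusions of the $n$-skeleta.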
We note that the spaces $\partial\Delta^{(n)}$ and $\omega^{-1}(\partial\Delta)$ are $(n{-}1)$-connected. Then the induction assumption and the five lemma imply that $j_*$ is an isomorphism. Since the homomorphism $H_n(N^{(n)})\to H_n(N)$ is an epimorphism which factors through the homomorphism $\omega_*\colon H_n(\omega^{-1}(N))\to H_n(N)$, the latter is also an epimorphism. We tensor the short exact sequence $0\to K_N\to H_n(\omega^{-1}(N))\to H_n(N)\to 0$, where $K_N$ is the corresponding kernel, with $\mathbb{Z}_{(\mathcal{L})}$. We expand that diagram by taking the Mayer-Vietoris sequence for the triad $(N,N_1,\Delta)$ and its preimage $\omega^{-1}(N,N_1,\Delta)$. For the kernels we obtain the diagram: $$ \begin{CD} K_{N_1}\oplus K_{\Delta} @>>> K_N @>>> 0\\ @VVV @V\phi VV @VVV\\ K_{N_1}\otimes\mathbb{Z}_{(\mathcal{L})}\oplus K_{\Delta}\otimes\mathbb{Z}_{(\mathcal{L})} @>>> K_N\otimes\mathbb{Z}_{(\mathcal{L})} @>>> 0\\ \end{CD} $$ Then the induction assumption implies that the homomorphism $\phi$ is an epimorphism. Since $H_n(N^{(n)})$ is torsion free, by the already proven part of the property (4-$\mathbb{Z}_{(\mathcal{L})}$) we have that all torsions of the group $H_n(\omega^{-1}(N))$ are $(\mathcal{P}\setminus\mathcal{L})$-torsions. Therefore all torsions of $K_N$ are of that type. Then we can conclude that the group $K_N$ is $\mathcal{L}$-local modulo torsion. By an abelianization of a finite complex $L$ we understand a finite complex $\operatorname{ab}(L)$ obtained from $L$ by attaching 2-dimensional cells killing all nontrivial commutators of a finite set of generators of the fundamental group $\pi_1(L)$. If $L^{(1)}$ is the 1-dimensional skeleton of a simplicial complex $L$, then by $\operatorname{ab}_L(L^{(1)})$ we denote an inductively constructed complex $\operatorname{ab}_L(L^{(1)}) = L_{\operatorname{dim} L-1} \supset \dots \supset L_3 \supset L_2 \supset L_1 = L^{(1)}$, where $L_2$ is the union of the abelianizations $\operatorname{ab}(\sigma^{(1)})$ of the 1-skeletons of 3-simplexes $\sigma\in L$. To construct $L_3$ we consider a 4-dimensional simplex $\delta\in L$, consider $\operatorname{ab}_{\delta^{(3)}}\delta^{(1)}\subset L_2$, and take its abelianization. Doing this for all 4-simplexes, $L_3$ will be the union of all those abelianizations, and so on. If $n=1$ the property (4) for the group $\mathbb{Z}$ takes the following form: $$EW(L,\mathbb{Z},1)^{[2]}=\operatorname{ab}_L(L^{(1)}).$$ Here by $Y^{[k]}$ we denote the $k$-dimensional skeleton of a CW-complex $Y$. For the group $\mathbb{Z}_p$ the property (4) becomes the following: $$EW(L,\mathbb{Z}_p,1)^{[2]} = \operatorname{ab}_L(L^{(1)}) \cup_p \{B^2 \mid \sigma\in L, \operatorname{dim}\sigma=2\}.$$ For the group $\mathbb{Z}_{(\mathcal{L})}$ the property (4) remains the same. Then the argument is basically the same as in the case $n>1$. \end{proof} \begin{lemma} Assume that a compactum $X$ has cohomological dimension $\operatorname{dim}_GX\leq n$. Then for every Edwards-Walsh resolution $\omega\colon EW(L,G,n)\to L$ and for every map $f\colon X\to L$ there is a map $f'\colon X\to EW(L,G,n)$ such that $\omega f'(x)$ lies in the same simplex of $L$ as $f(x)$ for every point $x\in X$. \end{lemma} \begin{proof} The result follows from the property (2) of Edwards-Walsh resolutions and the fact that $K(\bigoplus G,n)\in AE(X)$. \end{proof} Suppose that $\{X_i,p^{i+1}_i\}$ is an inverse sequence of pointed spaces and base point preserving bonding maps.
Then for every $m$ there is a natural embedding of the product $X_1\times\dots\times X_m$ into the infinite product $\prod_{i=1}^{\infty}X_i$. The sequence $$\begin{CD} X_1 @<p^2_1<< X_2 @<p^3_2<< \cdots @<p^m_{m-1}<< X_m \end{CD}$$ defines an embedding of $X_m$ into the product $\prod_{i=1}^mX_i\subset\prod_{i=1}^{\infty}X_i$. The inverse sequence $\{X_i,p^{i+1}_i\}$ defines an embedding of the limit space $X$ in $\prod_{i=1}^{\infty}X_i$. The projection in the inverse sequence $p^{\infty}_m\colon X\to X_m$ coincides with the restriction to $X$ of the projection $\prod_{i=1}^{\infty}X_i\to\prod_{i=1}^mX_i$ onto the factor. This system of embeddings in $\prod_{i=1}^{\infty}X_i$ we call a {\it realization of the inverse sequence} $\{X_i,p^{i+1}_i\}$ in the product $\prod_{i=1}^{\infty}X_i$. Let $\rho_i$ be a metric on $X_i$ and let $\delta_i$ denote the diameter of $X_i$. We assume that $\sum_{i=1}^{\infty}\delta_i <\infty$. Then the formula $\rho(x,y)=\sum_{i=1}^{\infty}\rho_i(x_i,y_i)$, where $x_i$ and $y_i$ are the $i$-th coordinates of $x$ and $y$, defines a metric $\rho$ on the product $\prod_{i=1}^{\infty}X_i$. Let $\mathcal{M}$ be a (finite) cover of a compact space $X$ with a given metric $\rho$. By $d(\mathcal{M})$ we denote the diameter of $\mathcal{M}$, $d(\mathcal{M})=\max\{\operatorname{diam} M \mid M\in\mathcal{M}\}$, and by $\lambda(\mathcal{M})$ we denote the Lebesgue number of $\mathcal{M}$: $$\lambda(\mathcal{M}) = \max\{r \mid \text{for any $r$-ball $O_r(x)$ there is $M\in\mathcal{M}$ with $O_r(x)\subset M$}\}. $$ Here $O_r(x)$ is the ball in $X$ of radius $r$ with respect to $\rho$, centered at $x\in X$. Let $M_x$ denote an arbitrary $M\in\mathcal{M}$ with the property $x\in O_{\lambda(\mathcal{M})}(x) \subset \operatorname{Cl}(M)$. \begin{lemma} Let $X=\varprojlim \{K_i,f^{i+1}_i\}$ and $Z=\varprojlim \{L_i,g^{i+1}_i\}$ be limit spaces of inverse systems of compacta. Suppose the first sequence is realized in $\prod_{i=1}^{\infty}K_i$ and for every $i$ a finite cover $\mathcal{M}^i$ of $K_i$, with the diameter $d_i$ and the Lebesgue number $\lambda_i$, and a mapping $\alpha_i\colon L_i\to K_i$ are defined such that \begin{enumerate} \item $\alpha_i(L_i)\cap M\neq\emptyset$ for every $M\in\mathcal{M}^i$, \item $d_i<\lambda_{i-1}/4$, \item the diagram $$ \begin{CD} L_{i+1} @>\alpha_{i+1}>>K_{i+1}\\ @Vg^{i+1}_iVV @Vf^{i+1}_iVV\\ L_i @>\alpha_i>> K_i\\ \end{CD} $$ is $\lambda_i/4$-commutative. \end{enumerate} Then there exists a continuous map $\alpha\colon Z\to X$ onto $X$ such that the point preimage $\alpha^{-1}(x)$ is the limit space $\varprojlim \{\alpha^{-1}_i(M_{x_i}),q^{i+1}_i\}$ where $x_i=f^{\infty}_i(x)$ and $q^{i+1}_i$ is the restriction of $g^{i+1}_i$ to $\alpha_i^{-1}(M_{x_i})$. \end{lemma} \begin{proof} (A). First we show that for any $i$ and $k$ the diagram $$ \begin{CD} L_{i+k} @>\alpha_{i+k}>>K_{i+k}\\ @Vg^{i+k}_iVV @Vf^{i+k}_iVV\\ L_i @>\alpha_i>> K_i\\ \end{CD} $$ is $\lambda_i/2$-commutative, i.e.\ $\rho(\alpha_i(g^{i+k}_i(z)),f^{i+k}_i(\alpha_{i+k}(z))) < \lambda_i/2$ for all $z\in L_{i+k}$. We apply induction on $k$. For $k=1$ it follows by the condition (3) of the lemma. For $k>1$ we apply the triangle inequality to obtain \begin{multline*} \rho(\alpha_i(g^{i+k}_i(z)),f^{i+k}_i(\alpha_{i+k}(z)))\\ \leq \rho(\alpha_i(g^{i+k}_i(z)), f^{i+1}_i\alpha_{i+1}g^{i+k}_{i+1}(z)) + \rho(f^{i+1}_i\alpha_{i+1}g^{i+k}_{i+1}(z), f^{i+k}_i(\alpha_{i+k}(z))).
\end{multline*} By the condition (3) of the lemma $$ \rho(\alpha_i g^{i+1}_i(g^{i+k}_{i+1}(z)), f^{i+1}_i\alpha_{i+1}(g^{i+k}_{i+1}(z))) < \lambda_{i}/4 $$ and by the induction assumption $$ \rho(\alpha_{i+1}g^{i+k}_{i+1}(z),f^{i+k}_{i+1} \alpha_{i+k}(z)) < \lambda_{i+1}/2. $$ By the definition of the metric $\rho$ the map $f^{i+1}_i$ is a contraction, hence $$ \rho(f^{i+1}_i\alpha_{i+1}g^{i+k}_{i+1}(z),f^{i+k}_i(\alpha_{i+k}(z))) < \lambda_{i+1}/2. $$ Since $\lambda_{i+1}\leq d_{i+1}<\lambda_i/4$ by the condition (2) of the lemma, we obtain the desired inequality $$ \rho(\alpha_i(g^{i+k}_i(z)),f^{i+k}_i(\alpha_{i+k}(z))) < \lambda_i/4+\lambda_i/8 < \lambda_i/2 $$ for all $z\in L_{i+k}$. (B). Then we prove that the sequence of maps $\alpha_ig^{\infty}_i\colon Z\to\prod_{i=1}^{\infty}K_i$ has a limit. Denote by $s_k$ the sum $\sum_{i=k}^{\infty}\delta_i$ where $\delta_i$ is the diameter of $K_i$. Then for any point $z\in Z$ the triangle inequality \begin{multline*} \rho(\alpha_ig^{\infty}_i(z), \alpha_{i+k}g^{\infty}_{i+k}(z))\\ \leq \rho(\alpha_ig^{\infty}_i(z), f^{i+k}_i\alpha_{i+k}(g^{\infty}_{i+k}(z))) + \rho(f^{i+k}_i\alpha_{i+k}(g^{\infty}_{i+k}(z)), \alpha_{i+k}g^{\infty}_{i+k}(z)) \end{multline*} and the property (A) imply that $$ \rho(\alpha_ig^{\infty}_i(z), \alpha_{i+k}g^{\infty}_{i+k}(z)) \leq \lambda_i/2+s_i. $$ Then the convergence follows from the Cauchy criterion. Denote the limit map by $\alpha$. (C). We show that $\alpha(Z)\subset X$. Indeed, for every $z\in Z$ the distance from $\alpha_ig^{\infty}_i(z)$ to the preimage $(f^{\infty}_i)^{-1}(\alpha_ig^{\infty}_i(z))$ does not exceed $s_i$. Hence $\operatorname{lim}_{i\to\infty}\rho(\alpha_ig^{\infty}_i(z),X)=0$. (D). Then we show that the inverse sequence $\varprojlim \{\alpha^{-1}_i(M_{x_i}),q^{i+1}_i\}$ is well defined for any $x\in X$, i.e.\ we show that $g^{i+1}_i(\alpha_{i+1}^{-1}(M_{x_{i+1}})) \subset\alpha_i^{-1}(M_{x_i})$. Take an arbitrary point $y\in \alpha_{i+1}^{-1}(M_{x_{i+1}})$ and show that $\alpha_i(g^{i+1}_i(y))\in M_{x_i}$. By the triangle inequality we have \begin{align*} \rho(\alpha_ig^{i+1}_i(y),x_i)& \leq \rho(\alpha_ig^{i+1}_i(y), f^{i+1}_i\alpha_{i+1}(y)) + \rho(f^{i+1}_i\alpha_{i+1}(y), f^{i+1}_i(x_{i+1}))\\ & \leq \lambda_i/4+d_{i+1}. \end{align*} By the condition (2) it does not exceed $\lambda_i/2$. Hence $\alpha_ig^{i+1}_i(y)\in O_{\lambda_i/2}(x_i)\subset M_{x_i}$. (E). Show that $\alpha^{-1}(x) \supset\varprojlim \{\alpha^{-1}_i(M_{x_i})\}$. Let $z\in\varprojlim \{\alpha^{-1}_i(M_{x_i})\}$. Since $\rho(\alpha_ig^{\infty}_i(z),x_i) \leq d_i$, then $\rho(\alpha_ig^{\infty}_i(z),x) \leq d_i+s_i\to 0$. Hence $\alpha(z)=x$ and $z\in\alpha^{-1}(x)$. (F). Then we show that $\alpha^{-1}(x) \subset \varprojlim \{\alpha^{-1}_i(M_{x_i})\}$. Let $z\in\alpha^{-1}(x)$ and suppose that $z$ does not belong to $\varprojlim \{\alpha^{-1}_i(M_{x_i})\}$. Then there exists a number $i$ such that $\alpha_ig^{\infty}_i(z)$ does not belong to $M_{x_i}$. Therefore $\rho(\alpha_ig^{\infty}_i(z),x_i) \geq \lambda_i$. The property of the metric $\rho$ and the triangle inequality imply that \begin{align*} \rho(\alpha_{i+k}g^{\infty}_{i+k}(z),x)& \geq \rho(f^{i+k}_i\alpha_{i+k} g^{\infty}_{i+k}(z),x_i)\\ & \geq \rho(\alpha_ig^{\infty}_i(z),x_i) - \rho(f^{i+k}_i\alpha_{i+k} g^{\infty}_{i+k}(z),\alpha_ig^{\infty}_i(z))\\ & \geq \lambda_i - \lambda_i/2 \end{align*} by (A). Hence, $\rho(\alpha(z),x)\geq \lambda_i/2$. Contradiction. Thus, (E) and (F) imply that $\alpha^{-1}(x) = \varprojlim \{\alpha^{-1}_i(M_{x_i})\}$. (G). Finally, $\alpha$ is a map onto $X$: by the condition (1) every set $\alpha_i^{-1}(M_{x_i})$ is a nonempty compactum, hence the inverse limit $\alpha^{-1}(x)=\varprojlim \{\alpha^{-1}_i(M_{x_i})\}$ is nonempty for every $x\in X$.
\end{proof} We recall that a compact space $X$ is called {\it cell-like} if every map $f\colon X\to K$ of $X$ to a CW-complex is null homotopic. In that case $X$ can be embedded in the Hilbert cube as the intersection of a nested sequence of sets homeomorphic to the Hilbert cube. If $X$ is finite dimensional, then it can be embedded in the Euclidean space $\mathbb{R}^n$ as the intersection of a nested sequence of topological $n$-dimensional cells. This property of $X$ explains the name `cell-like'. \begin{proposition} If a compactum $X$ is the limit space of an inverse sequence of compact spaces with homotopy trivial bonding maps, then $X$ is cell-like. \end{proposition} \begin{proof} Let $X=\varprojlim \{X_i,p^{i+1}_i\}$. We assume that the spaces $X_i$ are pointed and the bonding maps are base point preserving. Then the system is realized in $\prod_{i=1}^{\infty}X_i$. Then $X=\bigcap_{k=1}^{\infty} (X_k\times\prod_{i=k+1}^{\infty}X_i)$. Given a map $f\colon X\to K$ there is an extension $\bar f$ over an open neighborhood $O$ of $X$ in $\prod_{i=1}^{\infty}X_i$. Because of compactness there is a number $k$ such that $X_k\times\prod_{i=k+1}^{\infty}X_i\subset O$. For a CW-complex $K$ there is an $\epsilon>0$ such that every map $g\colon X\to K$ that is $\epsilon$-close to $f$ is homotopic to $f$. For large enough $k$ the diameter of the set $\bar f(x\times\prod_{i=k+1}^{\infty}X_i)$ is less than this $\epsilon$ for all $x\in X_k$; take such $k$. Then the two maps $f$ and $\bar f\circ\pi_k$ are homotopic. Here $\pi_k\colon \prod_{i=1}^{\infty}X_i\to\prod_{i=1}^kX_i$ is the projection of the product onto the factor. Note that $\bar f\circ\pi_k = \bar f\circ p^{\infty}_k = \bar f {\restriction}_{X_k}\circ p^{k+1}_k\circ p^{\infty}_{k+1}$. Since the map $p^{k+1}_k$ is homotopically trivial, the map $\bar f {\restriction}_{X_k}\circ p^{k+1}_k\circ p^{\infty}_{k+1}$ is null homotopic. Hence the map $f$ is null homotopic. \end{proof} A map between spaces $f\colon X\to Y$ is called cell-like if $f^{-1}(y)$ is a cell-like set for every $y\in Y$. Since the empty set is not cell-like, a cell-like map is always a map onto. \begin{theorem}[Edwards Resolution Theorem] Let $X$ be a compactum of cohomological dimension $\operatorname{dim}_{\mathbb{Z}}X=n$. Then there is a compactum $Z$ of dimension $\operatorname{dim} Z\leq n$ and a cell-like map $\alpha\colon Z\to X$. \end{theorem} \begin{proof} Let $X=\varprojlim \{P_i,p^{i+1}_i\}$ be a limit space of an inverse sequence of compact polyhedra. We construct inverse systems $\{K_i,f^{i+1}_i\}$ and $\{L_i,g^{i+1}_i\}$ as in Lemma 8.3 with $X=\varprojlim \{K_i,f^{i+1}_i\}$. In order to obtain a cell-like map $\alpha$, in view of Proposition 8.4 we add one more condition on the sequences: \begin{itemize} \item[(4)] the map $q^{i+1}_i\colon \alpha^{-1}_{i+1}(M_{x_{i+1}})\to\alpha^{-1}_i(M_{x_i})$ is null homotopic. \end{itemize} We construct the sequences by induction on $i$. Let $K_1=P_1$ and let $\tau_1$ be a triangulation on $K_1$. We define $L_1$ as the $n$-dimensional skeleton $K_1^{(n)}$ of $K_1$ with respect to the triangulation $\tau_1$ and let $\alpha_1\colon L_1\to K_1$ be the inclusion. We define a cover $\mathcal{M}^1$ of $K_1$ by closed subsets as the collection of the closed stars of all vertices $\{\operatorname{Star}(v) \mid v\in\tau^{(0)}_1\}$. Also we fix a metric $\rho_1$ on $K_1$.
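As a simple illustration of the quantities involved (it is not used in the construction): if $K_1$ is a circle triangulated by $\tau_1$ with at least four edges, all of length $\ell$, then each closed star $\operatorname{Star}(v)$ is the union of the two edges adjacent to $v$, so $d(\mathcal{M}^1)=2\ell$, while every ball of radius $\ell/2$ lies in the closed star of a vertex nearest to its center, whence $\lambda(\mathcal{M}^1)\geq\ell/2$.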
Now assume that we have constructed sequences $\{K_i,f^i_{i-1}\}$, $\{L_i,g^i_{i-1}\}$, $\alpha_i\colon L_i\to K_i$ together with metrics $\rho_i$, triangulations $\tau_i$ on $K_i$ and covers $\mathcal{M}^i$ for all $i\leq m$, satisfying the properties (1)--(3) of Lemma 8.3 together with the condition (4), and additionally $K_i=P_{r_i}$ for some $r_i$, a complex $L_i$ is the $n$-dimensional skeleton of $K_i$ with respect to a subdivision $\tau'_i$ of the triangulation $\tau_i$ with the mesh $<\lambda_i/8$, and $\alpha_i$ is the inclusion map for all $i$. Also assume that a cover $\mathcal{M}^i$ is defined as $\{\operatorname{Star}(v) \mid v\in\tau^{(0)}_i\}$. Moreover we assume that all spaces $K_i$ are pointed and, hence, naturally embedded in the product $\prod_{j=1}^mK_j$, and we assume that a metric $\rho_i$ on each $K_i$ is the metric induced from a metric $\rho^m$ on the product. We consider the Edwards-Walsh resolution $\omega\colon EW(\tau_m',\mathbb{Z},n)\to K_m$ and apply Lemma 8.2 to the map $f=p^{\infty}_{r_m}\colon X\to P_{r_m}=K_m$ to obtain a lift $f'\colon X\to EW(\tau'_m,\mathbb{Z},n)$. Since an Edwards-Walsh space is an ANR, there is a number $k>r_m$ and a map $\tilde f\colon P_k\to EW(\tau'_m,\mathbb{Z},n)$ such that $\rho^m(\omega\tilde f(x),p^k_{r_m}(x))<\lambda_m/8$ for all $x\in P_k$. We define $K_{m+1}=P_k$, $f^{m+1}_m=p^k_{r_m}$. We define a metric $\rho^{m+1}$ on the product $\prod_{i=1}^{m+1}K_i$ as the sum of the metric $\rho^m$ on $\prod_{i=1}^mK_i$ and a metric on $K_{m+1}$ bounded from above by $\frac{1}{2^m}$. Then we embed $K_{m+1}$ into the product $\prod_{i=1}^{m+1}K_i$ by the map $(f^{m+1}_1,f^{m+1}_2,\dots, f^{m+1}_m, \operatorname{id}_{K_{m+1}})$. Fix a base point in $K_{m+1}$ to get a canonical embedding of $\prod_{i=1}^mK_i$ in $\prod_{i=1}^{m+1}K_i$. Consider a triangulation $\tau_{m+1}$ on $K_{m+1}$ such that $d_{m+1} = d(\mathcal{M}^{m+1}) = d(\{\operatorname{Star}(v) \mid v\in\tau_{m+1}^{(0)}\}) < \lambda_m/4$ with respect to the metric $\rho^{m+1}$. Then the condition (2) of Lemma 8.3 is satisfied. We define $L_{m+1}$ as the $n$-dimensional skeleton of a subdivision $\tau'_{m+1}$ of $\tau_{m+1}$ with the mesh $<\lambda_{m+1}/8$ and $\alpha_{m+1}$ as the inclusion. Then the condition (1) holds. We define $g^{m+1}_m=\omega\circ\bar f {\restriction}_{L_{m+1}}$ where $\bar f$ is a cellular approximation of $\tilde f$. Then $\omega\circ\tilde f(x)$ and $g^{m+1}_m(x)$ lie in a common simplex of $\tau'_m$ for any $x\in L_{m+1}$. By the triangle inequality we have \begin{align*} \rho^m(g^{m+1}_m(x),f^{m+1}_m(x))& \leq \rho^m(g^{m+1}_m(x),\omega\tilde f(x)) + \rho^m(\omega\tilde f(x), p^k_{r_m}(x))\\ & \leq \operatorname{mesh}\tau'_m + \lambda_m/8\\ & \leq \lambda_m/4. \end{align*} Hence (3) also holds for $i=m$. We note that by the construction $X=\varprojlim \{K_i,f^{i+1}_i\}$. This means that according to Lemma 8.3 there is a map $\alpha\colon Z\to X$ where $Z=\varprojlim \{L_i,g^{i+1}_i\}$ with $\alpha^{-1}(x) = \varprojlim \{M^{(n)}_{x_i},g^{i+1}_i {\restriction}_{M^{(n)}_{x_{i+1}}}\}$. Note that $Z$ is at most $n$-dimensional as a limit space of $n$-dimensional complexes. If in addition we have the property (4), then by Proposition 8.4 the map $\alpha$ will be cell-like. Let us show that the condition (4) holds. For that we prove the inclusion $$g^{m+1}_m(M^{(n+1)}_{x_{m+1}})\subset M^{(n)}_{x_m}.$$ Let $\Delta$ be an $(n{+}1)$-dimensional simplex from $M_{x_{m+1}}$.
Then the image of $\Delta$ under the cellular map $\bar f$ lies in the $(n{+}1)$-dimensional skeleton with respect to the CW-structure on the Edwards-Walsh complex: $\bar f(\Delta)\subset EW(\tau'_m,\mathbb{Z},n)^{[n+1]}$. By the property (4-$\mathbb{Z}$) of the Edwards-Walsh resolution the $(n{+}1)$-skeleton $EW(\tau'_m,\mathbb{Z},n)^{[n+1]}$ is equal to $|(\tau'_m)^{(n)}|$. From the construction of the Edwards-Walsh complex it follows that $\bar f(\Delta)\subset\sigma^{(n)}$ for some simplex $\sigma\in\tau'_m$ containing $\omega\tilde f(\Delta)$. In the proof of Lemma 8.3, part (D), it was shown that $\alpha_ig^{i+1}_i(\alpha^{-1}_{i+1}(M_{x_{i+1}})) \subset O_{\lambda_i/2}(x_i)$. In our case it means that $g^{m+1}_m(M^{(n)}_{x_{m+1}})\subset O_{\lambda_m/2}(x_m)$. Hence $g^{m+1}_m(\partial\Delta) \subset O_{\lambda_m/2}(x_m)$. Hence, $\sigma\cap O_{\lambda_m/2}(x_m) \neq \emptyset$. Since $\operatorname{diam}\sigma<\lambda_m/8$, we have $\sigma \subset O_{\lambda_m}(x_m)$. Therefore $g^{m+1}_m(\Delta) \subset O_{\lambda_m}(x_m)\subset M_{x_m}$. Since $g^{m+1}_m(\Delta)\subset |(\tau'_m)^{(n)}|$, we have the desired inclusion $g^{m+1}_m(\Delta)\subset M_{x_m}^{(n)}$. Since $M_x$ is contractible, the inclusion $M_x^{(n)}\subset M_x^{(n+1)}$ is homotopy trivial for any $x$: a null-homotopy of $M_x^{(n)}$ in $M_x$ can be pushed into $M_x^{(n+1)}$ by cellular approximation. Hence, the map $g^{m+1}_m {\restriction}_{M^{(n)}_{x_{m+1}}}\colon M_{x_{m+1}}^{(n)}\to M_{x_m}^{(n)}$ is null homotopic. The condition (4) is checked. \end{proof} The following is a relative version of the Edwards Resolution theorem. \begin{theorem} Let $(X,A)$ be a compact pair with $\operatorname{dim}_{\mathbb{Z}}(X\setminus A)\leq n$. Then there exists a pair $(Z,A)$ with $\operatorname{dim}(Z\setminus A)\leq n$ and a cell-like map $\alpha\colon (Z,A)\to (X,A)$ which is the identity on $A$. \end{theorem} \begin{proof} The proof is exactly the same as in Theorem 8.5 with the only difference that we present $(X,A)$ as the limit space of relative polyhedra $(P_i,A)$ with triangulations on $P_i\setminus A$ having simplices with sizes tending to zero when one approaches the subset $A$. \end{proof} A map between compacta $f\colon Y\to X$ is called a $UV^n$-map if every fiber $f^{-1}(y)$ is approximately $n$-connected. We call a compactum $Z$ approximately $n$-connected if it has the $UV^n$-property, i.e.\ for any embedding of $Z$ into an ANR and for every neighborhood $U\supset Z$ there is a smaller neighborhood $V\supset Z$ such that the inclusion $V\subset U$ induces zero homomorphisms of the homotopy groups $\pi_k(V)\to\pi_k(U)$ for $k\leq n$. \begin{theorem} Let $X$ be a compactum of cohomological dimension $\operatorname{dim}_{\mathbb{Z}_p}X=n$. Then there is a compactum $Z$ of dimension $\operatorname{dim} Z\leq n$ and a $\mathbb{Z}_p$-acyclic $UV^{n-1}$-map $\alpha\colon Z\to X$ onto $X$. \end{theorem} \begin{proof} As in the proof of Theorem 8.5 we start from an inverse system of polyhedra $\{P_i,p^{i+1}_i\}$ with the limit space $X$ and construct two inverse sequences $\mathcal{S}_1=\{K_i,f^{i+1}_i\}$ and $\mathcal{S}_2=\{L_i,g^{i+1}_i\}$ with limits $X=\varprojlim \mathcal{S}_1$ and $Z=\varprojlim \mathcal{S}_2$, satisfying the conditions (1)--(3) of Lemma 8.3. In order to get the above properties of the limit map $\alpha$ we add the following condition: \begin{itemize} \item[(4)] the map $q^{i+1}_i\colon M^{(n)}_{x_{i+1}}\to M^{(n)}_{x_i}$ induces the zero homomorphism in cohomology with $\mathbb{Z}_p$-coefficients.
\end{itemize} The construction of $\mathcal{S}_1$ and $\mathcal{S}_2$ is the same as in the proof of Theorem 8.5 with the only difference that we consider the Edwards-Walsh resolutions for the group $\mathbb{Z}_p$ instead of $\mathbb{Z}$. We recall that $M_x$ is the star of some vertex. Hence $M_x^{(n)}$ is $(n{-}1)$-connected. Hence the fibers of the limit map $\alpha$ are approximately $(n{-}1)$-connected, i.e.\ $\alpha$ is a $UV^{n-1}$-map. All we have to show is that $(q^{i+1}_i)^*\colon H^n(M^{(n)}_{x_i};\mathbb{Z}_p) \to H^n(M^{(n)}_{x_{i+1}};\mathbb{Z}_p)$ is the zero homomorphism. By the Universal Coefficient Theorem it suffices to show that $q^{i+1}_i$ induces the zero homomorphism in $\mathbb{Z}_p$-homology. By the argument of Theorem 8.5 we know that $q^{i+1}_i = \omega \circ \bar f{\restriction}_{M^{(n)}_{x_{i+1}}}$. Denote by $h$ the restriction to $M^{(n+1)}_{x_{i+1}}$ of the map $\bar f\colon K_{i+1}\to EW(M_{x_i},\mathbb{Z}_p,n)$ defined in the proof of Theorem 8.5. We recall that $\bar f$ is a cellular map and $\bar f\circ i = j\circ\omega\circ\bar f = j\circ q^{i+1}_i$ where $i\colon M^{(n)}_{x_{i+1}}\to M^{(n+1)}_{x_{i+1}}$ and $j\colon M^{(n)}_{x_i}\to EW(M_{x_i},\mathbb{Z}_p,n)$ are the inclusions. Hence $\bar f(M^{(n+1)}_{x_{i+1}})\subset EW(M_{x_i},\mathbb{Z}_p,n)^{[n+1]}$. The property (4-$\mathbb{Z}_p$) of the Edwards-Walsh resolution implies that the inclusion $M_x^{(n)}\subset EW(M_x,\mathbb{Z}_p,n)^{[n+1]}$ induces a monomorphism of homology with $\mathbb{Z}_p$-coefficients. Then the commutative diagram $$ \begin{CD} H_n(M^{(n)}_{x_{i+1}};\mathbb{Z}_p) @>i_*>> H_n(M^{(n+1)}_{x_{i+1}};\mathbb{Z}_p)\\ @V(q^{i+1}_i)_*VV @Vh_*VV\\ H_n(M^{(n)}_{x_i};\mathbb{Z}_p) @>j_*>> H_n(EW(M_{x_i},\mathbb{Z}_p,n);\mathbb{Z}_p)\\ \end{CD} $$ implies that $j_*\circ (q^{i+1}_i)_*$ is the zero homomorphism: since $M_{x_{i+1}}$ is contractible, $H_n(M^{(n+1)}_{x_{i+1}};\mathbb{Z}_p)=H_n(M_{x_{i+1}};\mathbb{Z}_p)=0$ and hence $i_*=0$. Since $j_*$ is a monomorphism, $(q^{i+1}_i)_*$ is the zero homomorphism. \end{proof} \begin{remark} Let $\mathcal{L}\subset\mathcal{P}$ be a family of prime numbers and let $\operatorname{dim}_{\mathbb{Z}_p}X\leq n$ for all $p\in\mathcal{L}$. Then there exists a $\mathbb{Z}_p$-acyclic, $p\in\mathcal{L}$, $UV^{n-1}$-map $f\colon Z\to X$ of an $n$-dimensional compactum $Z$ onto $X$. \end{remark} \begin{proof} In the construction of the inverse sequences $\mathcal{S}_1$ and $\mathcal{S}_2$ we apply the Edwards-Walsh resolutions with different $p\in\mathcal{L}$, using every $p$ infinitely many times. Then the result follows. \end{proof} The following is a relative version of Theorem 8.7. \begin{theorem} Let $(X,A)$ be a compact pair with $\operatorname{dim}_{\mathbb{Z}_p}(X\setminus A)\leq n$ for prime $p\in\mathcal{L}$. Then there exists a compact pair $(Z,A)$ with $\operatorname{dim}(Z\setminus A)\leq n$ and a $\mathbb{Z}_p$-acyclic, $p\in\mathcal{L}$, $UV^{n-1}$-map $\alpha\colon (Z,A)\to (X,A)$ which is the identity on $A$. \end{theorem} \section{Resolutions preserving cohomological dimensions} \begin{notation*} Let $g\colon X\to K$ be a map onto a simplicial complex $K$ with a triangulation $\tau$. By $\operatorname{dim}_G(g,\tau)\leq n$ we denote the following property of $g$: \begin{quote} Every extension problem $\phi\colon L\to K(G,n)$, where $L\subset K$ is a subcomplex with respect to $\tau$, is resolved by $g$. \end{quote} \end{notation*} We note that $\operatorname{dim}_G(g,\tau)$ is not a number. We consider the inequality $\operatorname{dim}_G(g,\tau)\leq n$ as one symbol. \begin{proposition} An Edwards-Walsh resolution $\omega\colon EW(\tau,G,n)\to K$ of a finite complex $K$ with a triangulation $\tau$ has the property $\operatorname{dim}_G(\omega,\tau)\leq n$.
\end{proposition} \begin{proof} Consider a map $\phi\colon L\to K(G,n)$. It can be extended without obstruction over the $n$-dimensional skeleton $K^{(n)}$ of $K$ to a map $\psi\colon K^{(n)}\cup L\to K(G,n)$. Then by induction on $m\geq n$ we can show that the map $w_n = \psi\circ\omega {\restriction}_{\omega^{-1}(K^{(n)}\cup L)}\colon \omega^{-1}(K^{(n)}\cup L) \to K(G,n)$ admits successive extensions $w_{m+1}$ of $w_m$ over $\omega^{-1}(K^{(m+1)}\cup L)$. This follows from the property (3) of the Edwards-Walsh resolution. The union $w$ of the maps $w_m$ will be a solution of the extension problem $\phi\circ\omega {\restriction}_{\omega^{-1}(L)}$. \end{proof} A map $f\colon K\to L$ between two simplicial complexes is called {\it combinatorial} if the preimage $f^{-1}(M)$ of every subcomplex $M \subset L$ is a subcomplex of $K$. \begin{lemma} Let $X$ be the limit space of an inverse sequence of polyhedra $\{K_i,q^{i+1}_i\}$ with fixed metrics $\rho_i$ and fixed triangulations $\tau_i$ on $K_i$ such that $$\operatorname{lim}_{i\to\infty}\operatorname{mesh}(q^{k+i}_k(\tau_{k+i}))=0$$ for all $k$. Assume that all bonding maps $q^{i+1}_i$ are combinatorial with respect to $\tau_{i+1}$ and $\tau_i$ and $\operatorname{dim}_G(q^{i+1}_i,\tau_i)\leq n$ for infinitely many $i$. Then $\operatorname{dim}_GX\leq n$. \end{lemma} \begin{proof} First we note that extension problems on $X$ of the type $(q^{\infty}_i)^{-1}(L,\phi)$, where $L$ is a subcomplex of $K_i$ with respect to the triangulation $\tau_i$, form a basis of extension problems for the mappings to $K(G,n)$. By the assumption of the lemma, every such problem $(L,\phi)$ is resolved by a map $q^j_i$ for some $j$. Indeed, take $j>i$ with $\operatorname{dim}_G(q^j_{j-1},\tau_{j-1})\leq n$. Since the map $q^{j-1}_i$ is combinatorial, the problem $(q^{j-1}_i)^{-1}(L,\phi)$ is resolved by $q^j_{j-1}$. Hence, the original problem $(L,\phi)$ is resolved by $q^j_i$. Therefore every basic problem $(q^{\infty}_i)^{-1}(L,\phi)$ is solvable and, hence, by Proposition 5.4, all extension problems on $X$ to $K(G,n)$ have a solution. Hence, by Theorem 1.1, $\operatorname{dim}_GX\leq n$. \end{proof} The proof of Lemma 8.1 can be applied to prove the following. \begin{proposition} Let $G$ be one of the groups $\mathbb{Z}$, $\mathbb{Z}_p$ or $\mathbb{Z}_{(\mathcal{L})}$ where $\mathcal{L}\subset\mathcal{P}$ is a set of primes. Let $K\subset L$ be a subcomplex of a simplicial complex $L$ and let $\omega\colon EW(K,G,n)\to K$ be an Edwards-Walsh resolution. Then there exists an Edwards-Walsh resolution $\bar\omega\colon EW(L,G,n)\to L$ with $\bar\omega {\restriction}_{\bar\omega^{-1}(K)}=\omega$. \end{proposition} \begin{theorem} Let $\mathcal{L}$ be a set of prime numbers and let $n\geq 2$. Then for every compactum $X$ of cohomological dimension $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}X\leq n$ there exists a compactum $Y$ having dimensions $\operatorname{dim} Y\leq n+1$ and $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}Y\leq n$ and a $\mathbb{Z}_{(\mathcal{L})}$-acyclic map $\alpha\colon Y\to X$ of $Y$ onto $X$. \end{theorem} \begin{proof} We construct inverse sequences $\{K_i,f^{i+1}_i\}$, $\{L_i,g^{i+1}_i\}$ and a sequence of maps $\{\alpha_i\colon L_i\to K_i\}$ having the properties (1)--(3) of Lemma 8.3 with $X = \varprojlim \{K_i,f^{i+1}_i\}$ and $\operatorname{dim} L_i= n+1$ for all $i$. Then the compactum $Z$ of Lemma 8.3 will be at most $(n{+}1)$-dimensional.
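Let us indicate how the acyclicity of $\alpha$ will be achieved (a standard continuity argument). By Lemma 8.3 the fibers of $\alpha$ are the inverse limits $\alpha^{-1}(x)=\varprojlim\{\alpha^{-1}_i(M_{x_i}),q^{i+1}_i\}$ of compact polyhedra, so by the continuity of \v{C}ech cohomology $$\tilde H^*(\alpha^{-1}(x);\mathbb{Z}_{(\mathcal{L})}) = \varinjlim \tilde H^*(\alpha^{-1}_i(M_{x_i});\mathbb{Z}_{(\mathcal{L})})$$ (reduced \v{C}ech cohomology on the left), and the direct limit vanishes as soon as the bonding homomorphisms $(q^{i+1}_i)^*$ are trivial.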
In order to obtain the acyclicity of the map $\alpha$, we require the following condition: \begin{itemize} \item[(4)] the homomorphism $(q^{i+1}_i)^*\colon \tilde H^*(\alpha^{-1}_i(M_{x_i});\mathbb{Z}_{(\mathcal{L})}) \to \tilde H^*(\alpha^{-1}_{i+1}(M_{x_{i+1}});\mathbb{Z}_{(\mathcal{L})})$ is trivial. \end{itemize} To obtain the inequality $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}Z\leq n$ we want to apply Lemma 9.2 and, hence, we require the existence of metrics $\hat\rho_i$ and triangulations $\kappa_i$ on $L_i$ such that each map $g^{i+1}_i$ is combinatorial with $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}(g^{i+1}_i,\kappa_i)\leq n$ and with $$ \operatorname{lim}_{k\to\infty}\operatorname{mesh} (g^{i+k}_i(\kappa_{i+k}))=0 \quad \text{for any $i$.} $$ We construct all this by induction. First we fix an inverse sequence of compact polyhedra $\{P_r,p^{r+1}_r\}$ with limit space $X$. By induction on $m$ we construct the following diagram: $$ \begin{CD} L_1 @<g^2_1<< L_2 @<<< \cdots @<g^m_{m-1}<< L_m\\ @V\alpha_1VV @V\alpha_2VV @. @V\alpha_mVV\\ K_1 @<f^2_1<< K_2 @<<< \cdots @<f^m_{m-1}<< K_m\\ \end{CD} $$ On each $K_i$ we define a cover $\mathcal{M}^i$ with the diameter $d_i$ and the Lebesgue number $\lambda_i$, a triangulation $\tau_i$, a metric $\rho_i$ and a base point $x^*_i$. On each $L_i$ we define a triangulation $\kappa_i$ and a metric $\hat\rho_i$ having the following properties: \begin{enumerate} \renewcommand{\labelenumi}{{\normalfont (\roman{enumi})}} \item (1) of Lemma 8.3, i.e.\ $\alpha_i(L_i)\cap M\neq\emptyset$ for every $M\in\mathcal{M}^i$, \item (2) of Lemma 8.3, i.e.\ $d_i<\lambda_{i-1}/4$, \item (3) of Lemma 8.3, i.e.\ the diagram $$ \begin{CD} L_{i+1} @>\alpha_{i+1}>>K_{i+1}\\ @Vg^{i+1}_iVV @Vf^{i+1}_iVV\\ L_i @>\alpha_i>> K_i\\ \end{CD} $$ is $\lambda_i/4$-commutative, \item the homomorphism $(q^{i+1}_i)^*\colon \tilde H^*(\alpha^{-1}_i(M_{x_i});\mathbb{Z}_{(\mathcal{L})}) \to \tilde H^*(\alpha^{-1}_{i+1}(M_{x_{i+1}});\mathbb{Z}_{(\mathcal{L})})$ is trivial, where $M_{x_i}\in\mathcal{M}^i$ and $x_i\in O_{\lambda_i}(x_i)\subset \operatorname{Cl}(M_{x_i})$, \item all spaces $K_i$ are embedded into the product $\prod_{j=1}^mK_j$ by the mapping $(f^i_1,f^i_2,\dots,f^i_{i-1},\operatorname{id}_{K_i},x^*_{i+1},\dots,x^*_m)$ and the metric $\rho_i$ is induced from the brick metric $\rho^1+\cdots+\rho^m$ on the product. Also we assume that $\operatorname{diam}_{\rho^i}K_i\leq 1/2^i$, \item $\operatorname{mesh}_{\rho_i}(\tau_i)<\lambda_i/16$, \item for every $M\in\mathcal{M}^i$, $M$ is a contractible subcomplex of $K_i$ with respect to $\tau_i$, \item for every $i$ there is $r(i)$ such that $K_i=P_{r(i)}$ and $f^{i+1}_i=p^{r(i+1)}_{r(i)}$, \item a complex $L_i$ has the following CW-structure: take the $(n{+}1)$-skeleton $K^{(n+1)}_i$ of $\tau_i$, subdivide some of its $(n{+}1)$-cells into finite unions of $(n{+}1)$-cells, and replace some of the smaller $(n{+}1)$-cells by $(n{+}1)$-cells attached to the same boundary by maps of degrees having all prime factors in $\mathcal{P}\setminus\mathcal{L}$.
Then $\alpha_i$ is the natural projection of $L_i$ onto $K^{(n+1)}_i$ taking the new $(n{+}1)$-cells to the original ones, \item the cellular structure on $L_i$ agrees with the triangulation $\kappa_i$, i.e.\ every CW-subcomplex is a simplicial complex with respect to $\kappa_i$, \item every complex $L_i$ is supplied with a metric $\hat\rho_i$ and $\operatorname{mesh}_{\hat\rho_j}(g^i_j(\kappa_i))\leq 1/2^i$ for every $j\leq i$, \item $g^{i+1}_i$ is combinatorial and $g^{i+1}_i=\omega_i\circ\tilde f_i$ where $\omega_i\colon EW(\kappa_i, \mathbb{Z}_{(\mathcal{L})},n)\to L_i$ is an Edwards-Walsh resolution. \end{enumerate} The beginning of the induction: let $K_1=P_1$, let $\tau_1'$ be a triangulation on $K_1$ and let $\rho^1$ be a metric on $K_1$ of diameter $\leq 1/2$. Let $\mathcal{M}^1$ be the cover of $K_1$ by the stars $\mathcal{M}^1=\{\operatorname{Star}(v) \mid v\in(\tau_1')^{(0)}\}$. We define $\rho_1=\rho^1$ and consider a subdivision $\tau_1$ of $\tau_1'$ with $\operatorname{mesh}_{\rho_1}(\tau_1)<\lambda_1/16$ where $\lambda_1$ is the Lebesgue number of $\mathcal{M}^1$ with respect to the metric $\rho_1$. Define $L_1$ to be the $(n{+}1)$-skeleton of $K_1$ with respect to the triangulation $\tau_1$ and define $\alpha_1\colon L_1\to K_1$ as the inclusion. Take any metric $\hat\rho_1$ on $L_1$ and fix a triangulation $\kappa_1$ on $L_1$ with $\operatorname{mesh}_{\hat\rho_1}(\kappa_1)<1/2$. Fix a point $x^*_1\in K_1$. All the conditions (i)--(xii) are satisfied. Now we assume that the diagram $$ \begin{CD} L_1 @<g^2_1<< L_2 @<<< \dots @<g^m_{m-1}<< L_m\\ @V\alpha_1VV @V\alpha_2VV @. @V\alpha_mVV\\ K_1 @<f^2_1<< K_2 @<<< \dots @<f^m_{m-1}<< K_m\\ \end{CD} $$ is constructed satisfying the properties (i)--(xii). We consider the map $$ \alpha_m\circ\omega_m\colon EW(\kappa_m,\mathbb{Z}_{(\mathcal{L})},n) \to |\tau_m^{(n+1)}|. $$ According to the condition (ix) the homology group $H_n(\alpha_m^{-1}(\sigma))$ is a finite $(\mathcal{P}\setminus\mathcal{L})$-torsion group for every $(n{+}1)$-dimensional simplex $\sigma\in\tau_m$. The same holds true for every $n$-connected subcomplex $N\subset K_m$. By the property (4-$\mathbb{Z}_{(\mathcal{L})}$) for every simplex $\sigma$ of dimension $\ge n+1$ there is a short exact sequence $$ 0 \to K \to \pi_n(\omega_m^{-1}(\alpha_m^{-1}(\sigma^{(n+1)}))) \to H_n(\alpha_m^{-1}(\sigma^{(n+1)})) \to 0 $$ where $K$ is $\mathcal{L}$-local modulo torsion. Hence $K/\operatorname{Tor}(K)=\bigoplus\mathbb{Z}_{(\mathcal{L})}$ and $\operatorname{Tor}(K)$ consists of $(\mathcal{P}\setminus\mathcal{L})$-torsions. We consider an exact sequence $$ 0 \to K/\operatorname{Tor}(K) \to \pi_n(\omega_m^{-1}(\alpha_m^{-1}(\sigma^{(n+1)})))/\operatorname{Tor}(K) \to H_n(\alpha_m^{-1}(\sigma^{(n+1)})) \to 0. $$ Since $\operatorname{Ext}(G,\bigoplus\mathbb{Z}_{(\mathcal{L})})=0$ for any finite $(\mathcal{P}\setminus\mathcal{L})$-torsion group $G$, we can present the group $\pi_n(\omega_m^{-1}(\alpha_m^{-1}(\sigma^{(n+1)})))/\operatorname{Tor}(K)$ as the direct sum of $\bigoplus\mathbb{Z}_{(\mathcal{L})}$ and some $(\mathcal{P}\setminus\mathcal{L})$-torsion group $G_{\sigma}$. Thus, we have an epimorphism $$\pi_n(\omega_m^{-1}(\alpha_m^{-1}(\sigma^{(n+1)})))\to\bigoplus\mathbb{Z}_{(\mathcal{L})}$$ with a $(\mathcal{P}\setminus\mathcal{L})$-torsion kernel $U_{\sigma}$. Now for every $\sigma\in\tau_m$ of dimension $\ge n+1$ we kill the elements of the above group $U_{\sigma}$ by attaching $(n{+}1)$-cells.
Then by attaching cells of higher dimensions we turn the space $EW(\kappa_m,\mathbb{Z}_{(\mathcal{L})},n)$ into an EW-resolution $w_m\colon EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)\to K_m$ of $\tau_m$. Here the projection $w_m$ takes the new open cells to the interiors of the corresponding simplices $\sigma$. Since $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}X\leq n$, by Lemma 8.2 there is a combinatorial lift $p'_m\colon X\to EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)$ of $p^{\infty}_{r(m)}\colon X\to P_{r(m)}=K_m$ (see (viii)). Since $EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)$ is an absolute neighborhood extensor, there is a number $k$ and a map $f'_m\colon P_k\to EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)$ such that \begin{equation*} \rho_m(w_mf'_m,p^k_{r(m)})<\lambda_m/16.\tag{$\ast$} \end{equation*} We define $r(m+1)=k$, $K_{m+1}=P_k$ and $f^{m+1}_m=p^k_{r(m)}$. Take a metric $\rho^{m+1}$ on $K_{m+1}$ of diameter $\leq 1/2^{m+1}$ and define a metric $\rho_{m+1}$ on the product $\prod_{i=1}^{m+1}K_i$ as the sum $\rho_m+\rho^{m+1}$. Fix a point $x^*_{m+1}\in(f^{m+1}_m)^{-1}(x^*_m)$. The properties (v) and (viii) are satisfied. Consider a triangulation $\tau_{m+1}'$ of $K_{m+1}$ with $$d_{m+1} = d(\{\operatorname{Star}(v) \mid v\in(\tau_{m+1}')^{(0)}\})<\lambda_m/4$$ and define $$\mathcal{M}^{m+1} = \{\operatorname{Star}(v) \mid v\in(\tau_{m+1}')^{(0)}\}.$$ Then (ii) and (vii) are satisfied. Let $\tau_{m+1}$ be a subdivision of $\tau_{m+1}'$ with $\operatorname{mesh}_{\rho_{m+1}}(\tau_{m+1})<\lambda_{m+1}/16$, where $\lambda_{m+1}$ is the Lebesgue number of $\mathcal{M}^{m+1}$ with respect to $\rho_{m+1}$. Then (vi) holds. Let $\bar f_m\colon K_{m+1}\to EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)$ be a cellular approximation of $f'_m$ with respect to $\tau_{m+1}$ and the standard CW-structure on $EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)$. By the construction the $(n{+}1)$-dimensional skeleton of $EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)$ admits the following description: $$EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)^{[n+1]} = EW(\kappa_m,\mathbb{Z}_{(\mathcal{L})},n)^{[n+1]}\cup_{\beta_i}B_i^{n+1},$$ where $\beta_i\colon \partial B^{n+1}_i \to EW(\kappa_m,\mathbb{Z}_{(\mathcal{L})},n)^{[n+1]}$ defines a $(\mathcal{P}\setminus\mathcal{L})$-torsion element $(\beta_i)_*$ in the homotopy group $\pi_n(EW(\kappa_m,\mathbb{Z}_{(\mathcal{L})},n))$. Now we construct a finite CW-complex $L_{m+1}$ as follows. Consider the $(n{+}1)$-skeleton $K^{(n+1)}_{m+1} = |\tau^{(n+1)}_{m+1}|$ and the restriction of $\bar f_m$ to it. We may assume that for every $(n{+}1)$-simplex $\Delta$ in $\tau_{m+1}$ there is a partition of $\Delta$ into finitely many PL cells $\Delta=D^{n+1}_1 \cup \dots \cup D^{n+1}_s$ such that the image $\bar f_m(D^{n+1}_i)$ is an $(n{+}1)$-cell in $EW(\tau_m,\mathbb{Z}_{(\mathcal{L})},n)^{[n+1]}$. If $\bar f_m(D^{n+1}_i)=B_j^{n+1}$ for some $j$, we delete the interior of $D_i^{n+1}$ and attach an $(n{+}1)$-cell $\bar D_i^{n+1}$ by means of a map $\partial\bar D_i^{n+1}\to\partial D_i^{n+1}$ of degree equal to the order of the element $(\beta_j)_*$. We define $\alpha_{m+1}\colon L_{m+1} \to |\tau^{(n+1)}_{m+1}| \subset K_{m+1}$ by taking every cell $\bar D_i^{n+1}$ to $D_i^{n+1}$. Then the properties (i), (ix) hold. Denote $N_m=EW(\kappa_m,\mathbb{Z}_{(\mathcal{L})},n)^{[n+1]}$. Now the map $\bar f_m {\restriction}_{\bar f_m^{-1}(N_m)}\colon \bar f_m^{-1}(N_m)\to N_m$ has an extension $\tilde f_m\colon L_{m+1}\to N_m$. We define $g^{m+1}_m=\omega_m\circ\tilde f_m$. Then (xii) holds. Fix a metric $\hat\rho_{m+1}$ on $L_{m+1}$.
We may assume that $L_{m+1}$ is a polyhedron and we take a triangulation $\kappa_{m+1}$ on it with $\operatorname{mesh}_{\hat\rho_j}(g^{m+1}_j(\kappa_{m+1}))<1/2^{m+1}$ for all $j\leq m+1$. Then (x) and (xi) hold. In order to verify (iii) we have to show that $$\rho_m(\alpha_mg^{m+1}_m(x),f^{m+1}_m\alpha_{m+1}(x)) < \lambda_m/4.$$ Indeed, \begin{align*} \rho_m(\alpha_mg^{m+1}_m(x),f^{m+1}_m\alpha_{m+1}(x))& \leq \rho_m(\alpha_mg^{m+1}_m(x), w_m\bar f_m\alpha_{m+1}(x))\\ & \qquad + \rho_m(w_m\bar f_m(\alpha_{m+1}(x)),f^{m+1}_m(\alpha_{m+1}(x)))\\ \text{by (vi) and ($\ast$)\qquad}& < \rho_m(\alpha_m\omega_m\tilde f_m(x), w_m\bar f_m\alpha_{m+1}(x)) + \lambda_m/8\\ & = \rho_m(w_m\tilde f_m(x), w_m\bar f_m\alpha_{m+1}(x)) + \lambda_m/8\\ & < \lambda_m/8 + \lambda_m/8\\ & = \lambda_m/4. \end{align*} Now we check (iv). Since the complex $M_{x_{m+1}}$ is contractible, its $(n{+}1)$-skeleton $M^{(n+1)}_{x_{m+1}}$ is $n$-connected. Hence by the construction the preimage $\alpha^{-1}_{m+1}(M_{x_{m+1}}) = \alpha^{-1}_{m+1}(M^{(n+1)}_{x_{m+1}})$ is $(n{-}1)$-connected. Note that $$H^n(\alpha^{-1}_{m+1}(M^{(n+1)}_{x_{m+1}});\mathbb{Z}_{(\mathcal{L})}) = H^n(M^{(n+1)}_{x_{m+1}};\mathbb{Z}_{(\mathcal{L})})=0.$$ Since $\alpha^{-1}_{m+1}(M^{(n+1)}_{x_{m+1}})$ is $(n{+}1)$-dimensional, it suffices to check (iv) in dimension $n+1$. Note that $H^{n+1}(EW(L,\mathbb{Z}_{(\mathcal{L})},n);\mathbb{Z}_{(\mathcal{L})})=0$. Then by (xii) $(q^{m+1}_m)^*$ is the zero homomorphism in dimension $n+1$. \end{proof} We note that if $\mathcal{L}=\emptyset$, then $\mathbb{Z}_{(\mathcal{L})}=\mathbb{Q}$. There is a relative version of Theorem 9.4. \begin{theorem} Let $\mathcal{L}$ be a set of primes and let $n\geq 2$. Let $(X,A)$ be a compact pair with $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}(X\setminus A)\leq n$. Then there exists a compact pair $(Z,A)$ with $\operatorname{dim}(Z\setminus A)\leq n+1$ and $\operatorname{dim}_{\mathbb{Z}_{(\mathcal{L})}}(Z\setminus A)\leq n$ and a $\mathbb{Z}_{(\mathcal{L})}$-acyclic map $\alpha\colon (Z,A)\to (X,A)$ onto $X$ which is the identity on $A$. \end{theorem} \begin{proposition} For every finite simplicial complex $L$ there is the equality $$\pi_i(EW(L,\mathbb{Z}_p,k))=0\quad \text{for $k<i<2k-1$.}$$ \end{proposition} \begin{proof} Induction on $\operatorname{dim} L$. If $\operatorname{dim} L=0$, then the proposition holds. Assume that it holds for all $m$-dimensional complexes and let $L$ be $(m{+}1)$-dimensional. We apply induction on the number of simplices in $L$. If $L$ consists of one simplex, then $EW(L,\mathbb{Z}_p,k)=K(\bigoplus\mathbb{Z}_p,k)$ and hence the proposition holds. Let $L=K\cup\Delta$ where $\Delta$ is a simplex of dimension $m+1$ and $C=K\cap\Delta$. Since $i<2k-1$ and $\omega^{-1}(K')$ is $(k{-}1)$-connected for any subcomplex $K'\subset L$, where $\omega\colon EW(L,\mathbb{Z}_p,k)\to L$ is the Edwards-Walsh resolution, the Mayer-Vietoris sequence holds for homotopy groups: $$\pi_i(\tilde\Delta)\oplus\pi_i(\tilde K) \to \pi_i(\tilde L) \to \pi_{i-1}(\tilde C) \to \pi_{i-1}(\tilde\Delta)\oplus\pi_{i-1}(\tilde K).$$ Here by $\tilde A$ we denote the preimage $\omega^{-1}(A)$ for a subcomplex $A\subset L$. Note that $\tilde A$ is an Edwards-Walsh resolution of $A$. The induction assumption implies that $\pi_i(\tilde L)=0$ for $k+1<i<2k-1$. Note that $\pi_k(\tilde C)\to\pi_k(\tilde\Delta)$ is a monomorphism. Then the exactness of the Mayer-Vietoris sequence implies that $\pi_{k+1}(\tilde L)=0$.
\end{proof} \begin{proposition} For any $p$, $k$ and any simplicial complex $L$ there exists an Edwards-Walsh resolution $\omega\colon EW(L,\mathbb{Z}_p,k)\to L$ such that $\omega(EW(L,\mathbb{Z}_p,k)^{[n]})\subset L^{(k+1)}$ for all $n<2k-1$. Moreover, any such resolution $\omega\colon EW(L,\mathbb{Z}_p,k)\to L$ given over a subcomplex $L\subset K$ can be extended to $\bar\omega\colon EW(K,\mathbb{Z}_p,k)\to K$ with the same property $\bar\omega(EW(K,\mathbb{Z}_p,k)^{[n]})\subset K^{(k+1)}$ for any $n<2k-1$. \end{proposition} \begin{proof} Induction on $m=\operatorname{dim} K$. If $m=k+1$ the statement is clear. Let $\partial\Delta$ be the boundary of an $m$-simplex with some triangulation $\tau$ and let $\omega\colon EW(\tau,\mathbb{Z}_p,k)\to\partial\Delta$ be an Edwards-Walsh resolution with the above property. By Proposition 9.6, $\pi_i(EW(\tau,\mathbb{Z}_p,k))=0$ for $k<i<2k-1$. Note that $\pi_i(EW(\tau,\mathbb{Z}_p,k))=0$ for $i<k$. By the property of Edwards-Walsh resolutions, $\pi_k(EW(\tau,\mathbb{Z}_p,k))=\bigoplus\mathbb{Z}_p$. Therefore we can construct $EW(\Delta,\mathbb{Z}_p,k)$ out of $EW(\tau,\mathbb{Z}_p,k)$ by attaching cells in dimensions $2k-1$ and higher. Hence $\omega(EW(\Delta,\mathbb{Z}_p,k)^{[n]}) = \omega(EW(\tau,\mathbb{Z}_p,k)^{[n]}) \subset (\partial\Delta)^{(k+1)} \subset \Delta^{(k+1)}$ for $n<2k-1$. Then by induction on the number of simplices in a complex $K$ we can construct the required Edwards-Walsh resolution. \end{proof} \begin{theorem} For any set of primes $\mathcal{L}\subset\mathcal{P}$ and for every compactum $X$ with $\operatorname{dim}_{\mathbb{Z}}X\leq n$ and $\operatorname{dim}_{\mathbb{Z}_p}X\leq k$ for $p\in\mathcal{L}$ with $n<2k-1$ there exists a compactum $Y$ with $\operatorname{dim} Y\leq n$ and $\operatorname{dim}_{\mathbb{Z}_p}Y\leq k$ for all $p\in\mathcal{L}$ and a cell-like map $f\colon Y\to X$. \end{theorem} \begin{proof} Define a sequence $\{p(i)\}$ of primes such that each $p\in\mathcal{L}$ enters the sequence infinitely many times. We construct inverse sequences of polyhedra $$ \begin{CD} L_1 @<g^2_1<< L_2 @<<< \cdots @<g^m_{m-1}<< L_m @<<< \cdots\\ @V\alpha_1VV @V\alpha_2VV @. @V\alpha_mVV @.\\ K_1 @<f^2_1<< K_2 @<<< \cdots @<f^m_{m-1}<< K_m @<<< \cdots\\ \end{CD} $$ as in the proof of Theorem 9.4 with the properties \begin{enumerate} \renewcommand{\labelenumi}{{\normalfont (\alph{enumi})}} \item (1) of Lemma 8.3, \item (2) of Lemma 8.3, \item (3) of Lemma 8.3, \item $L_i$ is the $n$-skeleton of $K_i$ with respect to $\tau_i$ and $\alpha_i$ is the inclusion $K_i^{(n)}\subset K_i$, \item the map $q^{i+1}_i\colon M^{(n)}_{x_{i+1}}\to M^{(n)}_{x_i}$ is null-homotopic for odd $i$, \item (v) of Theorem 9.4, \item (vi) of Theorem 9.4, \item (vii) of Theorem 9.4, \item (viii) of Theorem 9.4, \item (x) of Theorem 9.4, \item (xi) of Theorem 9.4, \item $g^{i+1}_i$ is combinatorial and $g^{i+1}_i=\omega_i\circ\tilde f_i$, where $\omega_i\colon EW(\kappa_i,\mathbb{Z}_{p(\frac{i+1}{2})},k)\to L_i$ is an Edwards-Walsh resolution, for every odd $i$. \end{enumerate} Then it yields a cell-like map $\alpha\colon Z\to X$. Since $\operatorname{dim} L_i=n$, the compactum $Z$ is at most $n$-dimensional. Proposition 9.1 and Lemma 9.2 imply that $\operatorname{dim}_{\mathbb{Z}_p}Z\leq k$ for all $p\in\mathcal{L}$. We construct the sequences above by induction. If $m$ is even, we construct $\alpha_{m+1}\colon L_{m+1}\to K_{m+1}$, $g^{m+1}_m$ and $f^{m+1}_m$ as in the proof of Theorem 8.5.
If $m$ is odd, we consider an Edwards-Walsh resolution $$\omega_m\colon EW(\kappa_m,\mathbb{Z}_{p(\frac{m+1}{2})},k) \to L_m = K^{(n)}_m$$ as in Proposition 9.7. Again, by Proposition 9.7 there exists an extension $$w_m\colon EW(\tau_m,\mathbb{Z}_{p(\frac{m+1}{2})},k)\to K_m.$$ We construct $K_{m+1}$, $f^{m+1}_m$ and $L_{m+1}$ together with a cellular map $\bar f_m\colon K_{m+1}\to EW(\tau_m,\mathbb{Z}_{p(\frac{m+1}{2})},k)$ as in Theorem 9.4. Then by the property of this Edwards-Walsh resolution, stated in Proposition 9.7, $w_m\circ\bar f_m(L_{m+1}) \subset K_m^{(k+1)} \subset K_m^{(n)} = L_m$. Define $g^{m+1}_m=w_m\circ\bar f_m{\restriction}_{L_{m+1}}$. \end{proof} \section{Imbedding and approximation} According to a classical theorem every $n$-dimensional compactum can be embedded in $\mathbb{R}^{2n+1}$. In this section we study the following question: given a cd-type $F$, find the least possible number $m$ such that $F$ has a representative $X$ embeddable in $\mathbb{R}^m$. This question makes sense for cd-types with finite norm $\| F\|<\infty$. The main result in this section is the following. \begin{theorem} For every cd-type $F$ of norm $\| F\|=n$ there is a compactum $X\subset\mathbb{R}^{n+2}$ having cd-type $F$. \end{theorem} The proof of this theorem gives an independent proof of the Realization Theorem. We recall that $M(G,n)$ denotes a Moore space, i.e.\ a CW-complex with trivial homology groups in dimensions $i\neq n$ and with $H_n(M(G,n))=G$. \begin{proposition} Suppose that the join product $L\ast M(G,1)$ is $(n{+}1)$-connected for some countable complex $L$ and some abelian group $G$. Then there exists an $n$-dimensional compactum $Y\subset\mathbb{R}^{n+2}$ with nontrivial Steenrod homology group $H_n(Y;G)\neq 0$ and with $L\in AE(Y)$. \end{proposition} \begin{proof} Let $A=S^1\subset S^{n+2}$ be a circle in the $(n+2)$-dimensional sphere and let $g\colon A\to M(G,1)$ represent a nontrivial element of the fundamental group $\pi_1(M(G,1))$. Since $L\ast M(G,1)$ is $(n{+}1)$-connected, we have $L\ast M(G,1)\in AE(S^{n+2})$. By the Generalized Eilenberg-Borsuk theorem \cite{Dr3} there exists a compactum $Y\subset S^{n+2}$ with $L\in AE(Y)$ and an extension $\bar g\colon S^{n+2}\setminus Y\to M(G,1)$. Since the natural inclusion $i\colon M(G,1)\to K(G,1)$ induces an isomorphism of the fundamental groups, the composition $i\circ g$ is a homotopically nontrivial map. Therefore $i\circ \bar g$ is a homotopically nontrivial map. The map $i\circ\bar g$ represents some nontrivial element $\alpha \in \check H^1(S^{n+2}\setminus Y;G)$. By the Sitnikov duality there is a dual nontrivial element $\beta \in H_n(Y;G)$. This implies that $\operatorname{dim} Y\ge n$. It is possible to show that $\operatorname{dim} Y=n$, but probably the easiest way to complete the proof is by taking an $n$-dimensional subset of $Y$. \end{proof} \begin{proposition} The suspension of the smash product of two CW-complexes is homotopy equivalent to their join: $\Sigma (K\wedge L)\sim K\ast L$. \end{proposition} \begin{proof} Since by the definition $\Sigma (\varinjlim M_{\alpha})= \varinjlim \Sigma M_{\alpha}$, \begin{align*} K\wedge L& = \varinjlim \{L_{\alpha}\wedge K_{\beta} \mid \text{$L_{\alpha} \subset L$}, \text{$K_{\beta}\subset K$ are finite subcomplexes}\} \quad\text{and}\\ K\ast L& = \varinjlim \{L_{\alpha}\ast K_{\beta} \mid \text{$L_{\alpha}\subset L$, $K_{\beta}\subset K$ are finite subcomplexes}\}, \end{align*} it suffices to show that $\Sigma (K\wedge L)\sim K\ast L$ for compact CW-complexes.
For any pair of compact based spaces $(X,x_0)$ and $(Y,y_0)$ there is a closed contractible set $C=X\ast \{y_0\}\cup\{x_0\}\ast Y$ lying in $X\ast Y$ such that the quotient space $X\ast Y/C$ is homeomorphic to the reduced suspension of the smash product $X\wedge Y$. We note that the quotient map is a homotopy equivalence. \end{proof} \begin{lemma} Suppose that two countable abelian groups $H$ and $G$ have the properties $H\otimes G=0$ and $\operatorname{Tor}(H,G)=0$ ($\operatorname{Tor}$ means the torsion product). Then for every $n$ there exists an $n$-dimensional compactum $Y\subset\mathbb{R}^{n+2}$ with $\operatorname{dim}_HY\le 1$ and with nontrivial Steenrod homology group $H_n(Y;G)\ne 0$. \end{lemma} \begin{proof} By virtue of Proposition 10.3, we may compute the homology groups $H_i(M(H,1)\ast M(G,1))$ via the homology groups of the smash product. The homology of the smash product $X\wedge Y$ is equal to the homology of the pair $(X\times Y,X \vee Y)$. Now the homology exact sequence of the pair $(M(H,1)\times M(G,1),M(H,1)\vee M(G,1))$ and the K\"unneth formula imply that $H_i(M(H,1)\ast M(G,1))=0$ for all $i>0$. Since $\pi_1(M(H,1)\ast M(G,1))=0$, the space $M(H,1)\ast M(G,1)$ is $n$-connected for all $n$ by the Hurewicz theorem. Proposition 10.2 yields an $n$-dimensional compactum $Y\subset S^{n+2}$ with $M(H,1)\in AE(Y)$ and with $H_n(Y;G)\neq 0$. By Theorem 6 of \cite{Dr4} (see also Theorem 11.4) the property $M\in AE(Y)$ implies the property $SP^\infty M\in AE(Y)$ where $SP^\infty$ is the infinite symmetric power. According to the Dold-Thom theorem \cite{D-T} $SP^\infty M(H,1)=K(H,1)$. So, we have the property $K(H,1)\in AE(Y)$ and hence $\operatorname{dim}_HY\le 1$. \end{proof} \begin{theorem} For every $n$ and every $G\in\sigma$ there is a fundamental compactum $X$ of the type $F(G,n)$ lying in $\mathbb{R}^{n+2}$. \end{theorem} \begin{proof} We have four series of fundamental compacta. So, let us consider four cases. (1) $F(\mathbb{Q},n)$. We define $H=\bigoplus_{\text{all $p$}}\mathbb{Z}_p$ and $G=\mathbb{Q}$. Then the properties $G\otimes H=\operatorname{Tor}(G,H)=0$ hold. Apply Lemma 10.4 to obtain an $n$-dimensional compactum $Y\subset \mathbb{R}^{n+2}$ with $\operatorname{dim}_HY\le 1$. Then it follows that $\operatorname{dim}_{\mathbb{Z}_p}Y\le 1$ for all primes $p$. The Bockstein inequality BI2 implies that $\operatorname{dim}_{\mathbb{Z}_{p^\infty}}Y\le 1$. The Bockstein inequality BI5 implies $\operatorname{dim}_{\mathbb{Z}_{(p)}}Y \le \operatorname{dim}_{\mathbb{Q}} Y$ provided $\operatorname{dim}_{\mathbb{Q}} Y\ge 2$. According to Lemma 10.4, $H_n(Y;\mathbb{Q})\ne 0$. That implies $\check H^n(Y;\mathbb{Q})\neq 0$ and hence $\operatorname{dim}_{\mathbb{Q}} Y\ge n\ge 2$. Since $Y$ is $n$-dimensional, $\operatorname{dim}_{\mathbb{Q}} Y\le n$ and hence $\operatorname{dim}_{\mathbb{Q}} Y=n$. The Bockstein inequality BI4: $\operatorname{dim}_{\mathbb{Q}}Y \le \operatorname{dim}_{\mathbb{Z}_{(q)}}Y$ completes the proof in the first case. (2) $F(\mathbb{Z}_{(p)},n)$. Define $H=\bigoplus_{q\ne p}\mathbb{Z}_q$ and $G=\mathbb{Z}_{(p)}$. Then we obtain an $n$-dimensional $Y\subset \mathbb{R}^{n+2}$ which is one-dimensional with respect to $\mathbb{Z}_{q^\infty}$ and $\mathbb{Z}_q$ for all $q\ne p$. In view of the Bockstein inequality BI6: $\operatorname{dim}_{\mathbb{Z}_{p^\infty}}Y \le \max \lbrace \operatorname{dim}_{\mathbb{Q}} Y, \operatorname{dim}_{\mathbb{Z}_{(p)}}Y-1\rbrace$, it is sufficient to show that $\operatorname{dim}_{\mathbb{Z}_{p^\infty}}Y=n$. Lemma 10.4 implies that $H_n(Y;G)\ne 0$.
Therefore $\operatorname{Hom}(\check H^n(Y),G)\ne 0$. Hence the group $\check H^n(Y)$ cannot be divisible by $p$. This means that $\check H^n(Y)\otimes \mathbb{Z}_{p^\infty}\ne 0$ and hence $\check H^n(Y;\mathbb{Z}_{p^\infty})\ne 0$. (3) $F(\mathbb{Z}_p,n)$. Define $H=\mathbb{Z}\lbrack \frac{1}{p}\rbrack$ and $G=\mathbb{Z}_p$. By Lemma 10.4 we obtain an $n$-dimensional compactum $Y\subset \mathbb{R}^{n+2}$ which is one-dimensional with respect to the groups $\mathbb{Q}$, $\mathbb{Z}_{(q)}$, $\mathbb{Z}_q$, $\mathbb{Z}_{q^\infty}$ ($q\ne p$) and $H_n(Y;\mathbb{Z}_p)\ne 0$. Since $\operatorname{Hom}(\check H^n(Y),\mathbb{Z}_p)$ is nontrivial, the product $\check H^n(Y)\otimes \mathbb{Z}_p$ is nontrivial and hence $\operatorname{dim}_{\mathbb{Z}_p}Y=n$. The equality $\operatorname{dim}_{\mathbb{Z}_{(p)}}Y=n$ follows by the Bockstein theorem, which claims that for a finite dimensional compact space $Y$ there is a prime $p$ such that $\operatorname{dim} Y = \operatorname{dim}_{\mathbb{Z}_{(p)}}Y$, and the equality $\operatorname{dim}_{\mathbb{Z}_{p^\infty}}Y=n-1$ follows from the Bockstein inequalities. (4) $F(\mathbb{Z}_{p^\infty},n)$. Consider $L=M(\mathbb{Z}\lbrack \frac{1}{p}\rbrack,1)\vee M(\mathbb{Z}_p,n-1)$. First we show that $L\ast M(\mathbb{Z}_{p^\infty},1)$ is an $(n{+}1)$-connected space. We have \begin{align*} H_i(L\ast M(\mathbb{Z}_{p^\infty},1))& = H_{i-1}(L\wedge M(\mathbb{Z}_{p^\infty},1))\\ & = H_{i-1}(M(\mathbb{Z}\lbrack \tfrac{1}{p}\rbrack,1)\wedge M(\mathbb{Z}_{p^\infty},1))\\ & \qquad \oplus H_{i-1}(M(\mathbb{Z}_p,n-1)\wedge M(\mathbb{Z}_{p^\infty},1)). \end{align*} Since $\mathbb{Z}\lbrack \frac{1}{p}\rbrack \otimes \mathbb{Z}_{p^\infty}=0$ and $\operatorname{Tor}(\mathbb{Z}\lbrack \frac{1}{p}\rbrack, \mathbb{Z}_{p^\infty})=0$, it follows that the space $M(\mathbb{Z}\lbrack \frac{1}{p}\rbrack,1)\wedge M(\mathbb{Z}_{p^\infty},1)$ is contractible. Notice that $H_{i-1}(M(\mathbb{Z}_p,n-1)\wedge M(\mathbb{Z}_{p^\infty},1))=0$ for $i-1\le n$. Then the Hurewicz theorem implies that $L\ast M(\mathbb{Z}_{p^\infty},1)$ is $(n{+}1)$-connected. Proposition 10.2 implies that there exists an $n$-dimensional compactum $Y\subset\mathbb{R}^{n+2}$ with the property $L\in AE(Y)$. Hence we have $M(\mathbb{Z}\lbrack \frac{1}{p}\rbrack,1)\in AE(Y)$ and $M(\mathbb{Z}_p,n-1)\in AE(Y)$. Therefore $\operatorname{dim}_{\mathbb{Z}\lbrack \frac{1}{p}\rbrack}Y\le 1$ and $\operatorname{dim}_{\mathbb{Z}_p}Y\le n-1$. These inequalities completely determine the type $F(\mathbb{Z}_{p^\infty},n)$. \end{proof} \begin{proof}[Proof of Theorem 10.1] By Theorem 4.11 we have $F=\bigvee\Phi(G,k_G)$. Since $k_G\le n$, by Theorem 10.5 every fundamental type can be realized by a compactum $X_G\subset\mathbb{R}^{n+2}$. The countable wedge $X=\bigvee X_G$ can be embedded in $\mathbb{R}^{n+2}$. Note that $D_X=F$. \end{proof} \begin{proposition} Let $X\subset\mathbb{R}^n$ be an arbitrary compactum. Then every map $f\colon X\to\mathbb{R}^n$ can be approximated by maps which do not change the cd-type. \end{proposition} \begin{proof} Let $f\colon X\to\mathbb{R}^n$ be given. Take a compact $n$-dimensional polyhedron $P\subset\mathbb{R}^n$ such that $X\subset P$ and extend $f$ over $P$, i.e.\ get a map $\bar f\colon P\to\mathbb{R}^n$ such that $\bar f {\restriction}_X=f$. Approximate $\bar f$ by a simplicial general position map $g\colon P\to\mathbb{R}^n$. Then $g {\restriction}_{\Delta}\colon \Delta\to\mathbb{R}^n$ is an embedding for every simplex $\Delta$ in $P$. Consider $f'=g {\restriction}_X$.
Since $X = \bigcup\{X\cap\Delta \mid \Delta\subset P\}$, it follows that $f'(X) = \bigcup\{f'(X\cap\Delta) \mid \Delta\subset P\}$. Since $g$ embeds every simplex, $D_{f'(X\cap\Delta)}=D_{X\cap\Delta}$, and hence $D_{f'(X)} = \bigvee D_{f'(X\cap\Delta)} = \bigvee D_{X\cap\Delta} = D_X$. \end{proof}

\begin{theorem} For every compactum $X$ with $\operatorname{dim} X<n-2$, every map $f\colon X\to\mathbb{R}^n$ can be approximated by maps $f'$ with $D_X \preceq D_{f'(X)} \preceq D_X \vee 2$. \end{theorem}

\begin{proof} Let $Z\subset\mathbb{R}^n$ be a realization of the cd-type of $X$ in $\mathbb{R}^n$ given by Theorem 10.1. Set $X'=Z\vee I^2\subset\mathbb{R}^n$. Let $C\subset C(X',\mathbb{R}^n)$ be a dense countable subset such that $D_{g(X')}=D_{X'}$ for all $g\in C$. The existence of such a $C$ follows from Proposition 10.6. Denote by $N$ the union of images $\bigcup_{g\in C}g(X')$. By the Completion theorem \cite{Ol} there is a $G_{\delta}$ set $W\supset N$ such that $\operatorname{dim}_GW=\operatorname{dim}_GN$ for all $G\in\sigma$. Then every compactum $Z'\subset W$ has cd-type $\preceq$ the cd-type of $X'$. The complement of $W$ in $\mathbb{R}^n$ is a countable union of compacta $\bigcup Y_i$. Note that every map $q\colon X'\to\mathbb{R}^n$ can be approximated by maps avoiding $Y_i$ for every $i$. Then by the main theorem of \cite{Dr5}, $\operatorname{dim}(Y_i\times X')<n$. It implies that $\operatorname{dim}(Y_i\times X)<n$ and $\operatorname{dim}(Y_i\times I^2)<n$ for all $i$. The last inequality means that $\operatorname{dim} Y_i<n-2$. Then by the Disjoining Theorem \cite{D-R-S1}, $f$ can be approximated by maps $f'$ whose images are disjoint from $\bigcup Y_i$. Since $f'(X)\subset W$, $D_{f'(X)}\preceq D_{X'}=D_X\vee 2$. We may assume that $f'$ is a light map; then $D_X\preceq D_{f'(X)}$. \end{proof}

\begin{corollary} For every compactum $X$ with dimensions $\operatorname{dim}_GX\geq 2$ and $\operatorname{dim} X<n-2$, every map $f\colon X\to\mathbb{R}^n$ can be approximated by maps $f'$ with $\operatorname{dim}_Gf'(X)=\operatorname{dim}_GX$. \end{corollary}

\begin{remark*} If $X$ is a compactum with $\operatorname{dim} X=2$ which is not dimensionally full-valued, say a Pontryagin surface, then according to Theorem 1.10 a map $f\colon X\to\mathbb{R}^3$ cannot be approximated by maps $f'$ preserving the cd-type. \end{remark*}

\section{Classifying spaces for cohomological dimension}

\begin{proposition} The following conditions on an abelian group $G$ are equivalent:
\begin{enumerate}
\item $G$ is $p$-divisible,
\item $\operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, G)=0$,
\item $\operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, G)$ is $p$-divisible.
\end{enumerate}
\end{proposition}

\begin{proof} This is a direct consequence of the short exact sequence
$$ 0 \to {\operatorname{lim}}^1\operatorname{Hom}(\mathbb{Z}_{p^n},G) \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, G) \to \hat{G}_p \to 0, $$
where $\hat{G}_p=\varprojlim G/p^nG$ is the $p$-adic completion of $G$. \end{proof}

We recall that a space $M$ is called {\it simple} if the action of the fundamental group $\pi_1(M)$ on all homotopy groups is trivial. In particular this implies that $\pi_1(M)$ is abelian.

\begin{lemma} Let $M$ be a simple CW-complex and let $X$ be a compactum. If $\operatorname{dim}_{H_i(M)}X\leq i$ for all $i$, then $\operatorname{dim}_{\pi_i(M)}X\leq i$ for all $i$. \end{lemma}

\begin{proof} Let $\pi_n=\pi_n(M)$ and $H_n=H_n(M)$. We prove $\operatorname{dim}_{\pi_n}X\leq n$ by induction on $n$. Since $H_1(M)=\pi_1(M)$, the claim holds for $n=1$.
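(Indeed, $H_1(M)$ is the abelianization of $\pi_1(M)$, and $\pi_1(M)$ is abelian because $M$ is simple, so $H_1(M)=\pi_1(M)$.)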
Let $\operatorname{dim}_{\pi_i(M)}X\leq i$ hold for all $i<n$. For the group $\pi_n$ there is a short exact sequence $ 0 \to \left(\bigoplus_{\text{$p$ prime}} G^n_p \right) \to \pi_n \to F(\pi_n) \to 0, $ where $G^n_p$ is the Sylow $p$-subgroup of $\pi_n$ and $F(\pi_n)$ is torsion-free. By Lemma 2.2 it suffices to show $\operatorname{dim}_{F(\pi_n)}X\leq n$ and $\operatorname{dim}_{G^n_p}X\leq n$.

Let us first show that $F(\pi_n)\neq 0$ implies $\operatorname{dim}_{\mathbb{Q}}X\leq n$. If the $\pi_i$, $i<n$, are torsion groups, the Hurewicz theorem modulo the generalized Serre class of torsion groups implies $F(H_n)\neq 0$ and hence $\operatorname{dim}_{\mathbb{Q}}X\leq n$. If, however, at least one of the groups $\pi_i$ is not a torsion group, then by the same Hurewicz theorem we obtain $F(H_j)\ne 0$ for some $j<n$. Therefore, $\mathbb{Q}\in\sigma(F(H_j))$ and ${\operatorname{dim}}_{\mathbb{Q}}X \le \operatorname{dim}_{H_j}X \le j<n$.

Let $p$ be a prime number. We consider the case when $F(\pi_n)$ is not $p$-divisible. In that case $\mathbb{Z}_{(p)}\in\sigma(F(\pi_n))$. We show that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq n$. We may assume that all groups $H_i$, $i<n$, are $p$-divisible and without $p$-torsion. Otherwise, $\mathbb{Z}_p\in\sigma(H_i)$ or $\mathbb{Z}_{p^{\infty}}\in\sigma(H_i)$ and we have ${\operatorname{dim}}_{\mathbb{Z}_p}X \le {\operatorname{dim}}_{H_i}X \le i < n$ or ${\operatorname{dim}}_{\mathbb{Z}_{p^{\infty}}}X \le {\operatorname{dim}}_{H_i}X \le i < n$. In view of the inequality BI2, in both cases we have ${\operatorname{dim}}_{\mathbb{Z}_{p^{\infty}}}X+1 \le n$. Then the inequality $\operatorname{dim}_{\mathbb{Q}}X\leq n$ and the Bockstein Alternative (Theorem 2.7) imply that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq n$. By the induction assumption we may similarly assume that all groups $\pi_i$, $i<n$, are $p$-divisible and without $p$-torsion. Since $M$ is a simple CW-complex, its $p$-completion $\hat{M}_p$ exists \cite{Bo-Ka}. Our assumptions, Proposition 11.1 and the exact sequence
$$ 0 \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, \pi_i) \to \pi_i(\hat{M}_p) \to \operatorname{Hom}(\mathbb{Z}_{p^{\infty}}, \pi_{i-1}) \to 0 $$
imply $\pi_i(\hat{M}_p)=0$ for $i<n$. From the Hurewicz theorem we obtain $\pi_n(\hat{M}_p)=H_n(\hat{M}_p)$. This group is $\pi_n(\hat{M}_p)=\operatorname{Ext}(\mathbb{Z}_{p^{\infty}},\pi_n)$, and its $p$-divisibility would imply that it is the trivial group. Since $F(\pi_n)$ is not $p$-divisible, by Proposition 11.1 $\operatorname{Ext}(\mathbb{Z}_{p^{\infty}},F(\pi_n))$ is not $p$-divisible. Note that the $p$-adic completion $\operatorname{Ext}(\mathbb{Z}_{p^{\infty}},F(\pi_n))$ of the torsion-free group $F(\pi_n)$ is torsion-free. The exactness of the sequence
$$ \operatorname{Ext}(\mathbb{Z}_{p^{\infty}},\pi_n) \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}},F(\pi_n)) \to 0 $$
implies that $\operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, \pi_n) = \pi_n(\hat{M}_p) = H_n(\hat{M}_p)$ is not a $p$-torsion group and is not $p$-divisible. Therefore $H_n(\hat{M}_p)\otimes \mathbb{Z}_{p^{\infty}}\neq 0$ and by the universal coefficient theorem $H_n(\hat{M}_p;\mathbb{Z}_{p^{\infty}})\neq 0$. One of the main properties of the $p$-completion $M\mapsto\hat{M}_p$ is that it induces an isomorphism of homology with coefficients in $\mathbb{Z}_p$ \cite{Bo-Ka}.
Using the exact sequences
$$ 0 \to \mathbb{Z}_{p^k} \to \mathbb{Z}_{p^{k+1}} \to \mathbb{Z}_p \to 0 $$
and induction, one proves that the $p$-completion induces an isomorphism in homology with coefficients in $\mathbb{Z}_{p^n}$ for arbitrary $n$. Since the tensor product and homology commute with the direct limit, the $p$-completion also induces an isomorphism in homology with coefficients in $\mathbb{Z}_{p^{\infty}}$. Therefore $H_n(M;\mathbb{Z}_{p^{\infty}})\neq 0$. Since $H_{n-1}$ has no $p$-torsion, this implies $H_n\otimes\mathbb{Z}_{p^{\infty}}\neq 0$. Thus $\mathbb{Z}_{(p)}\in\sigma(H_n)$ and $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq n$.

Thus, we proved the inequality $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq n$ for all $p$ for which $F(\pi_n)$ is not $p$-divisible. Since the Bockstein family $\sigma(F(\pi_n))$ consists of all such $p$'s, we proved the inequality $\operatorname{dim}_{F(\pi_n)}X\le n$.

To perform the induction step we still have to prove the inequalities $\operatorname{dim}_{G^n_p}X\le n$ for all $p$. When $F(\pi_n)$ is not $p$-divisible we have shown $\operatorname{dim}_{G^n_p}X \leq \operatorname{dim}_{\mathbb{Z}_p}X \leq \operatorname{dim}_{\mathbb{Z}_{(p)}}X \leq n$. Assume now that $F(\pi_n)$ is $p$-divisible. We consider two cases:

(1) $G^n_p$ is not $p$-divisible. In this case $\sigma(G^n_p)=\{\mathbb{Z}_p\}$ and we have to show the inequality $\operatorname{dim}_{\mathbb{Z}_p}X\leq n$. As above we can assume that all groups $\pi_i$, $H_i$, $i\leq n-1$, have no $p$-torsion and are $p$-divisible. From the exact sequence
$$ 0 \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, \pi_i) \to \pi_i(\hat{M}_p) \to \operatorname{Hom}(\mathbb{Z}_{p^{\infty}}, \pi_{i-1}) \to 0 $$
and Proposition 11.1 we obtain $\pi_i(\hat{M}_p)=0$ for $0<i<n$. Since $G^n_p$ is not $p$-divisible, Proposition 11.1 and the exactness of the sequence
$$ 0 = \operatorname{Hom}(\mathbb{Z}_{p^{\infty}},F(\pi_n)) \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}},G^n_p) \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}},\pi_n) \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}},F(\pi_n)) = 0 $$
imply that the group $\pi_n(\hat{M}_p) = \operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, \pi_n) = \operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, G^n_p)$ is not trivial and is not $p$-divisible. Thus the Hurewicz theorem implies $H_i(\hat{M}_p)=0$ for $0<i<n$ and the group $H_n(\hat{M}_p)$ is not $p$-divisible. Therefore $H_n(\hat{M}_p)\otimes\mathbb{Z}_p\neq 0$ and $H_n(\hat{M}_p;\mathbb{Z}_p)\neq 0$. From the main properties of the $p$-completion we obtain $H_n(M;\mathbb{Z}_p)\neq 0$ and, since $H_{n-1}$ is without $p$-torsion, $H_n\otimes\mathbb{Z}_p\neq 0$. Therefore $\mathbb{Z}_p\in\sigma(H_n)$ or $\mathbb{Z}_{(p)}\in\sigma(H_n)$. By virtue of Bockstein's inequality BI3, in both cases we have $\operatorname{dim}_{\mathbb{Z}_p}X\leq n$ and $\operatorname{dim}_{G^n_p}X\leq n$.

(2) $G^n_p\neq 0$ is $p$-divisible. Then the group $\pi_n$ is $p$-divisible. Since $\sigma(G^n_p)=\{\mathbb{Z}_{p^{\infty}}\}$, we have to show that $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X\leq n$. We obtain this directly if $H_n$ has $p$-torsion elements, so assume $H_n$ has no $p$-torsion. Again we can also assume that all the groups $\pi_i$, $H_i$, $1\leq i\leq n-1$, are without $p$-torsion.
Therefore the exact sequence
$$ 0 \to \operatorname{Ext}(\mathbb{Z}_{p^{\infty}}, \pi_i) \to \pi_i(\hat{M}_p) \to \operatorname{Hom}(\mathbb{Z}_{p^{\infty}}, \pi_{i-1}) \to 0 $$
implies $\pi_n(\hat{M}_p)=0$ and that the group $\pi_{n+1}(\hat{M}_p)$ maps epimorphically onto $\operatorname{Hom}(\mathbb{Z}_{p^{\infty}}, \pi_n)$. The latter group contains the $p$-adic integers $\hat{\mathbb{Z}}_p=\varprojlim \mathbb{Z}_{p^n}$, since $\operatorname{Hom}(\mathbb{Z}_{p^{\infty}},\mathbb{Z}_{p^{\infty}})\cong\hat{\mathbb{Z}}_p$. Therefore $\operatorname{Hom}(\mathbb{Z}_{p^{\infty}}, \pi_n)$ is not a $p$-torsion group and, since the divisible subgroup $\mathbb{Z}_{p^{\infty}}\subset\pi_n$ splits off as a direct summand, the group $\operatorname{Hom}(\mathbb{Z}_{p^{\infty}}, \pi_n)$ contains $\operatorname{Hom}(\mathbb{Z}_{p^{\infty}},\mathbb{Z}_{p^{\infty}})$, which is not $p$-divisible, as a direct summand. Thus the group $\pi_{n+1}(\hat{M}_p)=H_{n+1}(\hat{M}_p)$ is neither a $p$-torsion group nor $p$-divisible. Therefore $H_{n+1}(\hat{M}_p)\otimes\mathbb{Z}_{p^{\infty}}\neq 0$ and $H_{n+1}(\hat{M}_p;\mathbb{Z}_{p^{\infty}})\neq 0$. This implies $H_{n+1}(M;\mathbb{Z}_{p^{\infty}})\neq 0$ and, since by assumption $H_n$ has no $p$-torsion elements, the universal coefficient theorem gives $H_{n+1}\otimes\mathbb{Z}_{p^{\infty}}\neq 0$, which in turn implies $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\leq n+1$. If all the groups $\pi_i$, $1\leq i\leq n-1$, are torsion groups, the Hurewicz theorem modulo the generalized Serre class of torsion groups without $p$-torsion implies that $H_n$ has $p$-torsion and thus $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X\leq n$. If, however, $F(\pi_i)\neq 0$ for some $i$, $1\leq i\leq n-1$, we obtain $\operatorname{dim}_{\mathbb{Q}}X\leq i\leq n-1$. Bockstein's inequality BI6 then implies $\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X\leq n$. \end{proof}

We recall that the $n$-th symmetric power $SP^nX$ of a space $X$ is the orbit space $X^n/S_n$ of the action of the symmetric group $S_n$ of degree $n$ by permutations of coordinates on the $n$-th power $X^n$. For a pointed space $X$ the inclusion $X^n\times\{x_0\}\subset X^{n+1}$ induces an embedding $SP^nX\to SP^{n+1}X$. The infinite symmetric power $SP^{\infty}X$ is the direct limit $\varinjlim SP^nX$.

\begin{lemma} The infinite symmetric power $SP^{\infty}M$ of a CW-complex $M$ is homotopy equivalent to the direct limit $\varinjlim \{\prod_{i=1}^nK(H_i(M),i) \mid n\in\mathbb{N}\}$. \end{lemma}

\begin{proof} By the Dold-Thom theorem we have $\pi_i(SP^{\infty}(M))=H_i(M)$. Therefore there is a map of a Moore space $f_i\colon M(H_i(M),i)\to SP^{\infty}M$ which induces an isomorphism of $i$-dimensional homotopy groups. Note that the natural inclusion $\xi_i\colon M(H_i(M),i)\to SP^{\infty}M(H_i(M),i)$ induces an isomorphism of $i$-dimensional homotopy groups. Consider a map $g_i\colon SP^{\infty}(M(H_i(M),i))\to SP^{\infty}M$ generated by $f_i$: regard $SP^{\infty}Y$ as the free abelian monoid over a space $Y$; then
$$g_i(n_1x_1+n_2x_2+\cdots+n_mx_m) = n_1f_i(x_1)+n_2f_i(x_2)+\cdots+n_mf_i(x_m),$$
where the $x_j$ are points in $M(H_i(M),i)$ and $n_j\in\mathbb{N}$. Then $f_i=g_i\circ\xi_i$. Therefore $g_i$ induces an isomorphism of $i$-dimensional homotopy groups.
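(As a consistency check of the lemma: for $M=S^n$ we have $H_n(M)=\mathbb{Z}$ and $\tilde H_i(M)=0$ for $i\ne n$, so the lemma reduces to the classical Dold-Thom homotopy equivalence $SP^{\infty}S^n\simeq K(\mathbb{Z},n)$.)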
We define a map $\mu_n\colon \prod_{i=1}^nSP^{\infty}M(H_i(M),i)\to SP^{\infty}M$ by the formula
$$\mu_n((w_1,\dots,w_n)) = g_1(w_1)+\cdots+g_n(w_n).$$
Note that the base point in $SP^{\infty}M(H_{n+1}(M),n+1)$ defines the natural imbedding $\prod_{i=1}^nSP^{\infty}M(H_i(M),i) \subset \prod_{i=1}^{n+1}SP^{\infty}M(H_i(M),i)$. Then $\mu_{n+1}$ restricted to $\prod_{i=1}^nSP^{\infty}M(H_i(M),i)$ coincides with $\mu_n$. Note that for $n\ge i$ the map $g_i$ can be naturally factored through $\mu_n$: $\mu_n\circ\gamma=g_i$ for the natural embedding $\gamma$ of the $i$-th factor. It implies that $\mu_n$ induces an isomorphism of homotopy groups in dimensions $i\le n$. Hence
$$\mu = \bigcup\mu_n \colon \varinjlim \{\prod_{i=1}^n SP^{\infty}M(H_i(M),i) \mid n\in\mathbb{N}\} \to SP^{\infty}M$$
is a weak homotopy equivalence. Since both spaces are CW-complexes, $\mu$ is a homotopy equivalence. \end{proof}

\begin{theorem} Let $M$ be a simple CW complex and let $X$ be a finite dimensional compactum. Then the following are equivalent:
\begin{enumerate}
\item $M\in AE(X)$,
\item $SP^{\infty}M\in AE(X)$,
\item $\operatorname{dim}_{H_k(M)}X\le k$ for all $k$,
\item $\operatorname{dim}_{\pi_k(M)}X\le k$ for all $k$.
\end{enumerate}
\end{theorem}

\begin{proof} (1) $\Rightarrow$ (2). Since $X$ is compact, it suffices to show that $SP^nM\in AE(X)$ for all $n$. We recall that the support ${\operatorname{support}}(\mu)$ of an element $\mu\in SP^nY$ is the unordered set of coordinates of $\mu$, a finite subset of $Y$. We may assume that $M$ is a subcomplex of a contractible complex $C$. Then there is a natural embedding $SP^nM\subset SP^nC$, and $SP^nC$ is an absolute extensor for compact metric spaces. Let $\phi\colon A\to SP^nM$ be a continuous map of a closed subset $A\subset X$. Then there exists an extension $\psi\colon X\to SP^nC$. Let $\Gamma_{\psi} = \{(x,y)\in X\times C \mid y\in{\operatorname{support}}(\psi(x))\} \subset X\times C$ and let $F=\Gamma_{\psi}\cap(X\times M)$. Assume that we can prove the property $M\in AE(\Gamma_{\psi})$. Then the map $\pi\colon F\to M$, defined by the projection $\pi(x,c)=c$, admits an extension $\xi\colon \Gamma_{\psi}\to M$. Consider the map
$$ \bar\phi = SP^n(\xi)\circ SP^n(j)^{-1}\circ i\circ(\operatorname{id}_X,\psi) \colon X \to SP^nM, $$
where $j\colon \Gamma_{\psi}\to X\times C$ and $i\colon X\times SP^n(C)\to SP^n(X\times C)$ are the natural embeddings. It is easily seen that $\bar\phi$ is an extension of $\phi$ over $X$.

Now we prove the property $M\in AE(\Gamma_{\psi})$. We consider the following filtration on $X$: $X_1\subset X_2\subset\dots\subset X_n$, where $X_k=\{x\in X \mid |{\operatorname{support}(\psi(x))}|\le k\}$. Observe that the sets $X_k$ are closed for all $k$. Let $p\colon \Gamma_{\psi}\to X$ be the restriction to $\Gamma_{\psi}$ of the projection $X\times C\to X$. Put $\Gamma_k=p^{-1}(X_k)$. In view of the Finite Union Theorem (see \cite{Dr4}), it suffices to show that $M\in AE(\Gamma_k)$ for all $k$. Since $\Gamma_1=X_1$, the condition (1) implies $M\in AE(\Gamma_1)$. Assume that $M\in AE(\Gamma_k)$. The space $\Gamma_{k+1}\setminus\Gamma_k$ admits a locally trivial fibration over the space $X_{k+1}\setminus X_k$ with $(k{+}1)$-point fibers. This implies that $M\in AE(\Gamma_{k+1}\setminus\Gamma_k)$. Therefore, $M\in AE(\Gamma_{k+1})$ \cite{Dr4}.

(2) $\Rightarrow$ (3). By Lemma 11.3 we may conclude that $\varinjlim \prod_{i=1}^nK(H_i(M),i)\in AE(X)$. Since $X$ is finite dimensional, we have $\prod_{i=1}^nK(H_i(M),i)\in AE(X)$, where $n=\operatorname{dim} X$. Hence $K(H_k(M),k)\in AE(X)$ for all $k\le n$.
Since $X$ is $n$-dimensional, this property holds for all $k$. Theorem 1.1 implies (3).

(3) $\Rightarrow$ (4). Apply Lemma 11.2.

(4) $\Rightarrow$ (1). By Theorem 1.1 we have $\check H^{k+1}(X,A;\pi_k(M))=0$ for every closed subset $A\subset X$. It follows that all obstructions to an extension of a map $f\colon A\to M$ vanish. Since $X$ is finite dimensional, there is an extension $\bar f\colon X\to M$. Hence $M\in AE(X)$. \end{proof}

\begin{corollary} For finite dimensional compacta and for $k>1$ the following conditions are equivalent:
\begin{enumerate}
\item $\operatorname{dim}_GX\leq k$,
\item $M(G,k)\in AE(X)$.
\end{enumerate}
\end{corollary}

This corollary is a generalization of the Alexandroff theorem (Theorem 1.4) to all abelian groups. Thus, for finite dimensional compacta Moore spaces are classifying spaces for cohomological dimension just as Eilenberg-MacLane spaces are. The only possible exception is in dimension one.

\begin{problem*} Does the property $RP^2\in AE(X)$ hold for a finite dimensional compactum $X$ with $\operatorname{dim}_{\mathbb{Z}_2}X=1$? \end{problem*}

\begin{theorem} For any compactum $X$ of dimension $\operatorname{dim} X= n$ and any abelian group $G$ such that $\operatorname{dim}_GX\le k$ and $k\ge 2$ there exists a closed subset $Y\subset X$ with $\operatorname{dim} Y= n-1$ and $\operatorname{dim}_GY\le k-1$. \end{theorem}

\begin{proof} By virtue of the Bockstein theorem it suffices to prove the statement for $G\in\sigma$. Since $k\ge 2$, the join product $M(G,k-1) \ast S^0$ is a Moore space $M(G,k)$. By Corollary 11.5, $M(G,k)\in AE(X)$. There exist two closed subsets $Z^+,Z^-\subset X$ such that every separator $C\subset X$ between $Z^+$ and $Z^-$ has dimension $\ge n-1$. Let $f\colon Z^+\cup Z^-\to S^0$ be the separating map. By the Generalized Eilenberg-Borsuk theorem there is a compactum $Y\subset X$ with $M(G,k-1)\in AE(Y)$ such that $f$ is extendible over $X\setminus Y$. Hence $Y$ is a separator, and hence $\operatorname{dim} Y\ge n-1$. By Corollary 11.5, $\operatorname{dim}_GY\le k-1$. We may always assume that $\operatorname{dim} Y=n-1$. \end{proof}

\begin{theorem} For any ring $R$, any $k\le n$, and any finite dimensional compactum $X$ the following conditions are equivalent:
\begin{enumerate}
\item $\operatorname{dim}_R X \le n$,
\item every map $f\colon A \to K(R,k)$ given on a closed subset $A \subset X$ can be extended over the complement $X\setminus Y$ of a compact set $Y$ with $\operatorname{dim}_R Y \le n-k-1$.
\end{enumerate}
\end{theorem}

\begin{proof} It is sufficient to prove this theorem for rings $R\in\sigma$.

(1) $\Rightarrow$ (2). Let $M=M(R,n-k-1) \ast K(R,k)$ be the join product. It is easy to verify that $\operatorname{dim}_{H_k(M)}X\le\operatorname{dim}_RX$. Then Theorem 11.4 yields the property $M\in AE(X)$. Then by the Generalized Eilenberg-Borsuk theorem \cite{Dr3} every partial map $f\colon A\to K(R,k)$ can be extended over the complement of a compactum $Y$ with $M(R,n-k-1)\in AE(Y)$. By Corollary 11.5, $\operatorname{dim}_R Y\le n-k-1$.

(2) $\Rightarrow$ (1). Let $\{f_i\colon A_i\to K(R,k)\}$ be a countable basis of extension problems. The condition (2) gives us a compactum $Y_i$ with $\operatorname{dim}_R Y_i \le n-k-1$ and an extension $\bar f_i\colon X\setminus Y_i\to K(R,k)$. By the Countable Union theorem $\operatorname{dim}_R\bigcup Y_i\le n-k-1$. By the Completion Theorem, there is a $G_{\delta}$ set $Z\supset\bigcup Y_i$ with $\operatorname{dim}_RZ\le n-k-1$. Note that every compactum $C\subset X\setminus Z$ has the property $K(R,k)\in AE(C)$.
Hence, by Theorem 1.1 and the Countable Union theorem, we have $\operatorname{dim}_R(X\setminus Z)\le k$. The Urysohn-Menger formula for cohomological dimension \cite{Dy2} implies that $\operatorname{dim}_R X \le \operatorname{dim}_R Z + \operatorname{dim}_R (X\setminus Z) + 1 \le (n-k-1)+k+1 = n$. \end{proof}

We note that when $k=n$ the above theorem is contained in Theorem 1.1.

\section{Cohomological dimension of ANR compacta}

Absolute neighborhood retracts are locally contractible. This condition imposes a strict restriction on cohomological dimension. Surprisingly enough, locally contractible compacta can nevertheless be dimensionally non-full-valued.

\begin{lemma} Let $X$ be an ANR-compactum and let $Y\in AE(X)$. Then $K\in AE(X)$ for any CW-complex $K$ weakly homotopy equivalent to $Y$. \end{lemma}

\begin{proof} Let $h\colon K\to Y$ be a weak homotopy equivalence. The important property of $h$ is that $h_*\colon [Z,K]\to[Z,Y]$ is a bijection for all spaces $Z$ which are homotopy equivalent to CW-complexes. Let $f\colon A\to K$ be a map of a closed subset $A\subset X$. Extend $f$ to $f'\colon V\to K$, where $V$ is a closed neighborhood of $A$ in $X$. Let $\bar f\colon X\to Y$ be an extension of $h\circ f'\colon V\to Y$. Take a homotopy lift $\tilde f$ of $\bar f$. Since $h\circ\tilde f{\restriction}_{\operatorname{Int} V}$ is homotopic to $\bar f{\restriction}_{\operatorname{Int} V} = h\circ f'{\restriction}_{\operatorname{Int} V}$ and $\operatorname{Int} V$ is homotopy equivalent to a CW-complex, it follows that $\tilde f{\restriction}_{\operatorname{Int} V}$ is homotopic to $f'{\restriction}_{\operatorname{Int} V}$. Hence, $\tilde f{\restriction}_A\sim f'{\restriction}_A=f$. Thus, $f$ extends over $X$ up to homotopy, so it extends over $X$ by the Homotopy Extension Theorem. \end{proof}

\begin{theorem} Let $G=\prod_{s\in S}G_s$ be the direct product of abelian groups. Then $\operatorname{dim}_GX = \max\{\operatorname{dim}_{G_s}X \mid s\in S\}$ for any compactum $X$. \end{theorem}

\begin{proof} Since each $G_s$ is a direct summand of $G$, Corollary 1.7 implies that $\operatorname{dim}_{G_s}X\le\operatorname{dim}_GX$. Hence, $\max\{\operatorname{dim}_{G_s}X \mid s\in S\}\le\operatorname{dim}_GX$. Suppose that $\max\{\operatorname{dim}_{G_s}X \mid s\in S\}=n$. Note that $Y=\prod_{s\in S}K(G_s,n)\in AE(X)$ and that $Y$ is weakly homotopy equivalent to $K(G,n)$. By Lemma 12.1, $K(G,n)\in AE(X)$; hence, by Theorem 1.1, $\operatorname{dim}_GX \le n = \max\{\operatorname{dim}_{G_s}X \mid s\in S\}$. \end{proof}

\begin{theorem} Let $X$ be an ANR compactum. Then
\begin{enumerate}
\item $\operatorname{dim}_{\mathbb{Z}_{(p)}}X=\operatorname{dim}_{\mathbb{Z}_p}X$ for every prime $p$,
\item $\operatorname{dim}_GX\ge\operatorname{dim}_{\mathbb{Q}}X$ for any abelian group $G\neq 0$.
\end{enumerate}
\end{theorem}

\begin{proof} (1). In view of the Bockstein inequality BI3 it suffices to show that $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\le \operatorname{dim}_{\mathbb{Z}_p}X$. Consider $G=\prod_k\mathbb{Z}_{p^k}$. Then by Theorem 12.2 and Proposition 2.3, $\operatorname{dim}_GX=\max\{\operatorname{dim}_{\mathbb{Z}_{p^k}}X\}=\operatorname{dim}_{\mathbb{Z}_p}X$. Since $G$ contains an element of infinite order which is not divisible by $p$, we have $\mathbb{Z}_{(p)}\in\sigma(G)$. By the Bockstein theorem (Theorem 2.1) $\operatorname{dim}_{\mathbb{Z}_{(p)}}X\le\operatorname{dim}_GX=\operatorname{dim}_{\mathbb{Z}_p}X$.

(2). If $\mathbb{Q}\in\sigma(G)$, then the inequality follows from Theorem 2.1.
If $\mathbb{Z}_{(p)}\in\sigma(G)$, then the inequality BI4 implies the required inequality. If $\mathbb{Z}_p\in\sigma(G)$, then we apply BI4, (1) and Theorem 2.1 to obtain $\operatorname{dim}_{\mathbb{Q}}X \le \operatorname{dim}_{\mathbb{Z}_{(p)}}X = \operatorname{dim}_{\mathbb{Z}_p}X \le \operatorname{dim}_GX$. If $\mathbb{Z}_{p^{\infty}}\in\sigma(G)$, then we consider the group $A = \mathbb{Z}_{p^{\infty}}\times\mathbb{Z}_{p^{\infty}}\times \cdots$. By Theorem 12.2, $\operatorname{dim}_AX=\operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}X\le\operatorname{dim}_GX$. Since $A$ is not a torsion group and is divisible by all primes, by definition $\mathbb{Q}\in\sigma(A)$. By Theorem 2.1, $\operatorname{dim}_{\mathbb{Q}} X \le \operatorname{dim}_A X \le \operatorname{dim}_G X$. \end{proof}

\begin{corollary} Every ANR-compactum $X$ is of the basic type, i.e.\ the formula $\operatorname{dim}(X\times X) = 2 \operatorname{dim} X$ holds for $X$. \end{corollary}

\begin{proof} We may consider a finite dimensional ANR compactum $X$. By Theorem 1.4 and Theorem 2.1, $\operatorname{dim} X=\operatorname{dim}_{\mathbb{Z}}X=\operatorname{dim}_{\mathbb{Z}_{(p)}}X$ for some prime $p$. By Theorem 12.3, $\operatorname{dim} X=\operatorname{dim}_{\mathbb{Z}_p}X$. Then Criterion 3.17 completes the proof. \end{proof}

\begin{theorem} Every 2-dimensional ANR compactum $X$ is dimensionally full-valued. \end{theorem}

\begin{proof} By the Universal Coefficient Theorem the simplicial 1-dimensional cohomology is torsion-free. Therefore the \v{C}ech 1-dimensional cohomology is a torsion-free group. Hence by the Universal Coefficient Formula $\check H^1(A)\ne 0$ implies $\check H^1(A;\mathbb{Q})\ne 0$ for any compactum $A$. Take a closed neighborhood $U\subset X$ which is contractible in $X$ and with $\operatorname{dim} U=2$. Then there is a compact subset $A\subset U$ with $\check H^2(U,A)\ne 0$. The homomorphism $\gamma\colon \check H^2(U,A)\to \check H^2(U)$ is trivial, since $\phi$ is trivial in the following diagram and $\alpha$ is surjective because of the 2-dimensionality of $X$.
$$
\begin{CD}
\check H^2(U) @<\gamma<< \check H^2(U,A) @<<< \check H^1(A)\\
@A{\phi}AA @A{\alpha}AA @.\\
\check H^2(X) @<\nu<< \check H^2(X,A)\\
\end{CD}
$$
Therefore $\check H^1(A)\ne 0$ and hence $\check H^1(A;\mathbb{Q})\ne 0$. Since the inclusion $A\subset X$ is homotopically trivial, the induced homomorphism in rational cohomology is trivial. Hence $\check H^2(X,A;\mathbb{Q})\ne 0$. Now by Theorem 12.3, $X$ is dimensionally full-valued. \end{proof}

\begin{theorem} For any prime $p$ there exists an AR compactum $M_p$ having dimensions $\operatorname{dim} M_p = \operatorname{dim}_{\mathbb{Z}_{(p)}}M_p = \operatorname{dim}_{\mathbb{Z}_p}M_p = 4$ and $\operatorname{dim}_{\mathbb{Q}}M_p = \operatorname{dim}_{\mathbb{Z}_{p^{\infty}}}M_p = \operatorname{dim}_{\mathbb{Z}_q}M_p = 3$, where $q\ne p$ is prime. \end{theorem}

For a map $f\colon A\to B$ we denote by $S_f=\{x\in A \mid f^{-1}f(x)\ne \{x\}\}\subset A$ the singularity set of $f$. We use the following theorem, which generalizes Borsuk's ANR pasting theorem.

\begin{theorem} Let $A,B,X$ be ANR compacta and let $\alpha\colon A\to X$ and $f\colon A\to B$ have the following property: $\alpha$ restricted to the singularity set $S_f$ is one-to-one. Then the pushout $Y$ of the diagram
$$
\begin{CD}
A @>f>> B\\
@V{\alpha}VV @.\\
X\\
\end{CD}
$$
is an ANR compactum provided it is finite dimensional.
\end{theorem}

\begin{proof} Consider the diagram:
$$
\begin{CD}
A @>f>> B \\
@V{\alpha}VV @V{\beta}VV\\
X @>\phi>> Y\\
\end{CD}
$$
Since $\alpha$ is injective on $S_f$, the map $\phi$ is defined by the decomposition $\mathcal{F}$ consisting of the sets $\alpha f^{-1}(y)$, $y\in f(S_f)$, together with singletons. It is clear that this decomposition is upper semicontinuous. Hence $Y$ is a compact metric space. There is the natural map $q\colon DM_{\alpha,f}\to Y$ of the double mapping cylinder onto the pushout. By Borsuk's ANR pasting theorem $DM_{\alpha,f}$ is an ANR. We show that the map $q$ is cell-like; then the result follows. We consider three cases.

(1) $y\in Y\setminus\phi\alpha A$. In that case $q^{-1}(y)$ is a singleton, i.e.\ cell-like.

(2) $y\in \phi\alpha A\setminus\phi\alpha S_f$. In this case $q^{-1}(y)$ is homeomorphic to the cone over $\alpha^{-1}(x)$, where $\phi(x)=y$. Hence it is cell-like.

(3) $y\in\phi\alpha S_f$. In this case the restriction of $\alpha$ to $\alpha^{-1}\phi^{-1}(y)$ is a retraction $r$ onto $\phi^{-1}(y)$. Let $S=\alpha^{-1}\phi^{-1}(y)\cap S_f$. Then $q^{-1}(y)$ is homeomorphic to the union of the mapping cylinder of $r$ and the cone over $S\subset\alpha^{-1}\phi^{-1}(y)$. We can define a contraction of this union to a point as follows. First we deform the mapping cylinder $M_r$ to the image space $\phi^{-1}(y)\cong S$. This deformation can be extended to a homotopy of the whole $q^{-1}(y)$. As a result we obtain a deformation of the space onto the union of the mapping cylinder of $\alpha$ restricted over $S$ and the cone over $S$. Since this union is homeomorphic to the cone over $S$, we can contract it to a point. \end{proof}

\begin{lemma} There is an imbedding of an infinite tree $T=\bigcup T_i$ in a four-dimensional cube $I^4$ such that there is a sequence of regular neighborhoods $N_1\subset N_2\subset\cdots$ of the finite trees $T_1\subset T_2\subset\cdots$ with the properties:
\begin{enumerate}
\item The union $\bigcup N_i=N$ is dense in $I^4$,
\item For every $i$ there is an $\epsilon_i$-retraction $h_i\colon N_{i+1}\setminus \operatorname{Int}(N_i) \to \operatorname{Cl}(\partial N_{i+1}\setminus\partial N_i)$,
\item $\sum \epsilon_i < \infty$,
\item The restriction $h_i{\restriction}_{\operatorname{Cl}(\partial N_i\setminus\partial N_{i+1})}$ is an imbedding.
\end{enumerate}
\end{lemma}

\begin{proof} We construct $T$ and $N$ by induction. Assume that $\operatorname{diam}(I^4)=1$ and choose a point $x_0\in\partial I^4$. We define $T_1$ as the segment from $x_0$ to the center $c$ of the cube $I^4$. Take $\epsilon_1=2$ and let $N_1$ be a regular neighborhood of $T_1$ in $I^4$. There is an $\epsilon_1$-retraction $h_1\colon N_1\to \operatorname{Cl}(\partial N_1\setminus\partial I^4)$. Consider a finite $1/2$-net in $\operatorname{Int}(I^4\setminus N_1)$. Then we join the points of the net with $c$ by smooth arcs in $I^4$ of length $\le 1$. We may assume that all arcs are disjoint and transversal to $\partial N_1$. The union of these arcs with $T_1$ gives $T_2$. Then we consider a regular neighborhood $N'_2$ of $T_2$ such that there is an $\epsilon_2$-retraction $h_2\colon N'_2\setminus \operatorname{Int}(N_1)\to \operatorname{Cl}(\partial N'_2\setminus N_1)$ with $\epsilon_2=1$. Define $N_2=N'_2\cup N_1$. Consider a $1/4$-net in $\operatorname{Int}(I^4\setminus N_2)$ and join every point of the net with one of the closest points of the previous net by an arc of length $\le 1/2$ transversal to $\partial N_2$, and so on.
\end{proof}

\begin{proof}[Proof of Theorem 12.6] Let $N$ and $T$ be as above. Let $A=B=N\cap\partial I^4=D$ be a 3-dimensional disk. Define $X=I^4\setminus \operatorname{Int}(N)$. Since $\sum \epsilon_i < \infty$, the composition $\bar h = \dots \circ h_2 \circ h_1$ is a retraction of $I^4$ onto $X$. Hence $X\in AR$. We define $\alpha=\bar h{\restriction}_D$. We define $f\colon D\to D$ as follows. Denote $D_k=\alpha^{-1}\bigl(\bigcup_{i=1}^k\operatorname{Cl}(\partial N_i\setminus N_{i+1})\bigr)$. Then we define $f_0\colon \partial D\to\partial D$ as a map of degree $p$. Since the second homotopy group is abelian, we can extend $f_0$ to $f_1\colon D_1\to D_1$ in such a way that the restriction of $f_1$ to every component of the boundary $\partial D_1$ is a map of the component to itself of degree $p$. Then we can extend $f_1$ to $f_2\colon D_2\to D_2$ in a similar fashion, and so on. Let $U=\bigcup_{i=1}^{\infty}\alpha^{-1}\operatorname{Cl}(\partial N_i\setminus N_{i+1})$. Then $D\setminus U=C$ is a Cantor set. We define $f$ on $U$ as the union of the $f_i$ and set $f{\restriction}_C=\operatorname{id}_C$. We note that $\alpha{\restriction}_U$ is injective and $S_f\subset U$. Also it is easy to see that the pushout in this case is at most 4-dimensional. Then Theorem 12.7 yields an $AR$-space $M_p=Y$.

Note that $Z=\phi(\operatorname{Cl}(\partial I^4\setminus D))$ is homeomorphic to the Moore space $M(\mathbb{Z}_p,2)$. Since $H^3(Z;\mathbb{Z}_p)\ne 0$ and $Z\subset Y\in AR$, the exact sequence of the pair $(Y,Z)$ implies that $\operatorname{dim}_{\mathbb{Z}_p}Y\ge 4$. Therefore the Bockstein Theorem and the Alexandroff Theorem together with BI3 imply that
$$\operatorname{dim} M_p = \operatorname{dim}_{\mathbb{Z}_{(p)}}M_p = \operatorname{dim}_{\mathbb{Z}_p}M_p = 4.$$
We show that for every closed subset $F\subset Y$ the equality $\check H^3(F;\mathbb{Z}_q)=0$ holds for all primes $q\ne p$. Then Theorem 12.3 and the Bockstein Alternative imply that
$$\operatorname{dim}_{\mathbb{Q}}M_p = \operatorname{dim}_{\mathbb{Z}_q}M_p = \operatorname{dim}_{\mathbb{Z}_{p^\infty}}M_p = 3.$$
Let $K=\beta^{-1}(F)$. There is a sequence of open 3-balls $\{B_i\}$ in $D$ such that
\begin{enumerate}
\item each ball is a component of the complement to $D_l$ for some $l$,
\item $C\setminus K\subset\bigcup_{i=1}^{\infty}B_i$,
\item $B_i\cap K=\emptyset$.
\end{enumerate}
Denote $D'=D\setminus\bigcup_{i=1}^{\infty}B_i$ and consider $F'=\beta(D')$. We show that the inclusion $F\subset F'$ induces an epimorphism in 3-dimensional cohomology. Let $g\colon F\to K(G,3)$ be a map to an Eilenberg-MacLane complex. Since $\operatorname{dim} D'=3$, there is an extension $\nu\colon D'\to K(G,3)$ of the map $g\circ\beta{\restriction}_{\beta^{-1}(F)}$. We define $\bar g\colon F'\to K(G,3)$ by the formula $\bar g(z)=\nu\beta^{-1}{\restriction}_{D'}(z)$ for $z\in F'$. Since $\bar g$ is an upper semi-continuous multi-valued map, it suffices to show that $\nu\beta^{-1}(z)$ consists of one point for all $z\in F'$. By definition this holds for $z\in F$. Let $z\in F'\setminus F$. Then by the definition of $D'$ we have that $\beta^{-1}(z)\cap D' \subset U = f(U) \subset S_f$. Since $\alpha{\restriction}_{S_f}$ is injective, $|f(f^{-1}\beta^{-1}(z))\cap S_f| \le 1$. This implies that $\beta^{-1}(z)\cap D'$ consists of one point.

Next, we show that $\check H^3(F';\mathbb{Z}_q)=0$. We consider the map $\gamma\colon M_{\alpha}\to DM_{\alpha,f}$ generated by the map $f$. Let $\bar q\colon DM_{\alpha,f}\to Y$ be the quotient map of Theorem 12.7.
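Recall (as an aside) that the double mapping cylinder $DM_{\alpha,f}$ is the quotient of the disjoint union $X\sqcup(A\times[0,1])\sqcup B$ by the identifications $(a,0)\sim\alpha(a)$ and $(a,1)\sim f(a)$; the mapping cylinder $M_{\alpha}$ of $\alpha$ sits inside $X\sqcup(A\times[0,1])$, and $\gamma$ is the composition of this inclusion with the quotient map, so that on the free end of $M_{\alpha}$ it acts by $f$.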
Consider the diagram generated by the map $\gamma$ restricted to the pairs $(\bar q^{-1}(F'),D')$ and $(\gamma^{-1}\bar q^{-1}(F'),D')$, where $D'$ is considered as a subset of $D=B$ and of $D=A$ respectively:
$$
\begin{CD}
0 @<<< \check H^3(\bar q^{-1}(F');\mathbb{Z}_q) @<<< \check H^3(\bar q^{-1}(F'),D';\mathbb{Z}_q) @<<< \check H^2(D';\mathbb{Z}_q)\\
@. @V{\gamma^*}VV @V{\phi_2}VV @V{\phi_3}VV\\
0 @<<< \check H^3(\gamma^{-1}\bar q^{-1}(F');\mathbb{Z}_q) @<<< \check H^3(\gamma^{-1}\bar q^{-1}(F'),D';\mathbb{Z}_q) @<<< \check H^2(D';\mathbb{Z}_q)\\
\end{CD}
$$
The homomorphism $\phi_2$ is induced by a relative homeomorphism and, hence, is an isomorphism. The homomorphism $\phi_3$ is induced by the restriction $f{\restriction}_{D'}$, which is a map of degree $p$ of an infinite wedge of spheres to itself. Hence it induces an isomorphism of cohomology with coefficients in $\mathbb{Z}_q$ for $q$ relatively prime to $p$. The Five lemma implies that $\gamma^*$ is an isomorphism. Let $\bar \alpha\colon M_{\alpha}\to X$ be the natural projection to the range. The diagram
$$
\begin{CD}
DM_{\alpha,f} @>{\bar q}>> Y\\
@A{\bar\gamma}AA @A{\phi}AA\\
M_{\alpha} @>{\bar\alpha}>> X\\
\end{CD}
$$
restricted to $F'\subset Y$ produces a diagram for 3-dimensional cohomology whose horizontal homomorphisms are isomorphisms:
$$
\begin{CD}
\check H^3(\bar q^{-1}(F');\mathbb{Z}_q) @<<< \check H^3(F';\mathbb{Z}_q)\\
@V{\gamma^*}VV @VVV\\
\check H^3(\gamma^{-1}\bar q^{-1}(F');\mathbb{Z}_q) @<<< \check H^3(\phi^{-1}(F');\mathbb{Z}_q)\\
\end{CD}
$$
Since $X$ is a 3-dimensional $AR$-space, $\check H^3(\phi^{-1}(F');\mathbb{Z}_q)=0$. Hence $\check H^3(F';\mathbb{Z}_q)=0$ and $\check H^3(F;\mathbb{Z}_q)=0$. \end{proof}

\begin{remark} For distinct primes $p$ and $q$ the dimension of the product $M_p\times M_q$ does not obey the logarithmic law: $\operatorname{dim}(M_p\times M_q)=7<8=\operatorname{dim} M_p+\operatorname{dim} M_q$. \end{remark}

\begin{proof} By the Alexandroff and Bockstein theorems we have
\begin{align*}
\operatorname{dim}(M_p\times M_q)& = \operatorname{dim}_{\mathbb{Z}}(M_p\times M_q)\\
& = \max\{\operatorname{dim}_{\mathbb{Z}_{(r)}}(M_p\times M_q)\}\\
& = \max\{\operatorname{dim}_{\mathbb{Z}_r}(M_p\times M_q)\} \quad \text{by Theorem 12.3}\\
& = \max\{\operatorname{dim}_{\mathbb{Z}_r}M_p+\operatorname{dim}_{\mathbb{Z}_r}M_q\}\\
& = 7.
\end{align*}
Indeed, by Theorem 12.6 the value of the last maximum is $4+3=7$ for $r=p$, $3+4=7$ for $r=q$, and $3+3=6$ for $r\ne p,q$. \end{proof}
{ "timestamp": "2005-01-28T23:18:19", "yymm": "0501", "arxiv_id": "math/0501523", "language": "en", "url": "https://arxiv.org/abs/math/0501523", "abstract": "This is a detailed introductory survey of the cohomological dimension theory of compact metric spaces.", "subjects": "General Topology (math.GN); Geometric Topology (math.GT)", "title": "Cohomological dimension theory of compact metric spaces", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759587767815, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7079405653385225 }
https://arxiv.org/abs/0905.1430
Strong rational connectedness of toric varieties
In this paper, we prove the following: for any given finitely many distinct points $P_1,\ldots,P_r$ and any closed subvariety $S$ of codimension $\geq 2$ in a complete toric variety $X$ over an uncountable algebraically closed field of characteristic 0, there exists a rational curve $f:\mathbb{P}^1\to X$ passing through $P_1,\ldots,P_r$ and disjoint from $S\setminus \{P_1,\ldots,P_r\}$ (see Main Theorem). As a corollary, we prove that the smooth loci of complete toric varieties are strongly rationally connected.
\section{Introduction} \footnotetext{Both authors were partially supported by NSF grant DMS-0701465.} The concept of rationally connected varieties was invented independently by Koll\'{a}r-Miyaoka-Mori (\cite{kmm92b}) and Campana (\cite{ca92}). This kind of variety has interesting arithmetic and geometric properties. A class of proper rationally connected varieties comes from the smooth Fano varieties (\cite{ca92}, \cite{kmm92a} or \cite{kol96}). Shokurov (\cite{sh00}), Zhang (\cite{zh06}), and Hacon and McKernan (\cite{hm07}) proved that FT (Fano type) varieties are rationally connected.

An interesting question is whether the smooth locus of a rationally connected variety is rationally connected. In general the answer is no. However, in the case of FT (or log del Pezzo) surfaces, Keel and McKernan gave an affirmative answer: if $(S,\Delta)$ is a log del Pezzo surface, then its smooth locus $S^{sm}$ is rationally connected (\cite{km99}). This, however, does not imply strong rational connectedness. The concept of strongly rationally connected varieties (see Definition \ref{D:SRC}) was first introduced by Hassett and Tschinkel (\cite{ht08}). A proper and smooth separably rationally connected variety $X$ over an algebraically closed field is strongly rationally connected (\cite{kmm92b} 2.1, or \cite{kol96} IV.3.9). Xu (\cite{xu08}) announced that the smooth loci of log del Pezzo surfaces are not only rationally connected but also strongly rationally connected, which confirms a conjecture of Hassett and Tschinkel (\cite{ht08}, Conjecture 20). It is expected that the smooth locus of an FT variety is strongly rationally connected (cf.\ Example \ref{E:ToricFT} and Main Theorem).

Throughout the paper, we work over an uncountable algebraically closed field of characteristic 0. It is an interesting question whether Main Theorem holds over an arbitrary algebraically closed (or perfect) field.

\begin{main theorem} Let $X$ be a complete toric variety. Let $P_1,\ldots,P_r$ be finitely many distinct points in $X$ ($P_i$ possibly singular). Then there is a geometrically free rational curve $f:\mathbb{P}^1\rightarrow X$ over $P_i,1\leq i\leq r$ (see Definition \ref{D:GeoFree}). Moreover, $f$ is free over $P_i$ if all points $P_i$ are smooth. \end{main theorem}

Main Theorem can be rephrased as follows: Let $X$ be a complete toric variety. For any given distinct points $P_1,\ldots,P_r\in X$ (possibly singular) and any given codimension $\geq 2$ subvariety $S\subseteq X$, there is a rational curve $f:\mathbb{P}^1\rightarrow X$ passing through $P_1,\ldots,P_r$ and disjoint from $S\setminus\{P_1,\ldots,P_r\}$. If all points $P_i$ are smooth, then we get the following corollary.

\begin{corollary} The smooth locus of a complete toric variety is strongly rationally connected. \end{corollary}

\section{Preliminaries} When we say that $x$ is a point of a variety $X$, we mean that $x$ is a closed point in $X$. A \emph{rational curve} is a nonconstant morphism $f:\mathbb{P}^1\rightarrow X$. \label{D:FT} A normal projective variety $X$ is called \emph{FT} (\emph{Fano Type}) if there exists an effective $\mathbb{Q}$-divisor $D$ such that $(X,D)$ is klt and $-(K_X+D)$ is ample. See \cite{psh09} Lemma-Definition 2.6 for other equivalent definitions. Let $N\cong \mathbb{Z}^n$ be a lattice of rank $n$. A \emph{toric variety} $X(\Delta)$ is associated to a fan $\Delta$, a finite collection of convex cones $\sigma\subset N_{\mathbb{R}}:=N\otimes_{\mathbb{Z}}\mathbb{R}$ (see \cite{fu93} or \cite{od88}).
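For instance (a standard example, recalled only for orientation), take $N=\mathbb{Z}^2$ with standard basis $e_1,e_2$, and let $\Delta$ be the fan whose maximal cones are
$$\sigma_0=\operatorname{cone}(e_1,e_2),\qquad \sigma_1=\operatorname{cone}(e_2,-e_1-e_2),\qquad \sigma_2=\operatorname{cone}(-e_1-e_2,e_1),$$
together with all their faces. Then $X(\Delta)\cong\mathbb{P}^2$, and the three rays of $\Delta$ correspond to the three invariant divisors, namely the coordinate lines; their sum is the divisor $\Sigma=X\setminus T$ appearing in the next example.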
\begin{example} \label{E:ToricFT} Projective toric varieties are FT. Let $K$ be the canonical divisor of the projective toric variety $X(\Delta)$, let $T$ be the torus of $X$, and let $\Sigma=X\setminus T=\sum D_i$ be the complement of $T$ in $X$. Then $K$ is linearly equivalent to $-\Sigma$. Since $X$ is projective, there is an ample invariant divisor $L$. Suppose that $L=\sum d_iD_i$. Consider the polytope $\Box_L=\{m\in M\mid\langle m,e_i\rangle+d_i\geq 0,\ \forall e_i\in \Delta(1)\}$, where $M$ is the dual lattice of $N$ and $\Delta(1)$ is the set consisting of 1-dimensional cones in $\Delta$. Let $u$ be an element in the interior of $\Box_L$. Let $\chi^u$ be the rational function corresponding to $u\in M$ (see \cite{fu93} section 1.3), and let div $\chi^u$ be the divisor of $\chi^u$. Then $D=$div $\chi^u+L$ is effective and ample and has support $\Sigma$. That is, $D=\sum d'_iD_i$ with all $d_i'>0$. Let $\epsilon$ be a positive rational number such that all coefficients of prime divisors in $\epsilon D$ are strictly less than $1$. Then $\Sigma-\epsilon D$ is effective. It is easy to check that $(X,\Sigma-\epsilon D)$ is klt and that $-(K+\Sigma-\epsilon D)\sim \epsilon D$ is ample. Hence $X$ is FT. \end{example}

\begin{definition} An \emph{isogeny} of toric varieties is a finite surjective toric morphism. Toric varieties $X$ and $Y$ are said to be \emph{isogenous} if there exists an isogeny $X\rightarrow Y$. The \emph{isogeny class} of a toric variety $X$ is the set consisting of all toric varieties $Y$ such that $X$ and $Y$ are isogenous. \end{definition}

\begin{theorem} \label{T:isogeny}Let $f:X\rightarrow Y$ be a finite surjective toric morphism. Then there exists a finite surjective toric morphism $g:Y\rightarrow X$. \end{theorem}

\begin{proof} Let $f:X\rightarrow Y$ be a finite surjective toric morphism of toric varieties and let $\varphi:(N',\Delta')\rightarrow (N,\Delta)$ be the corresponding map of lattices and fans. Then we can identify $N'$ with a sublattice of $N$ and $\Delta'=\Delta$. There is a positive integer $r$ such that $rN$ is a sublattice of $N'$. Let $g$ be the toric morphism corresponding to $(rN,\Delta)\rightarrow (N',\Delta)$. Since $(rN,\Delta)$ and $(N,\Delta)$ induce isomorphic toric varieties, we get that $g:Y\rightarrow X$ is a finite surjective toric morphism. \end{proof}

The properties of isogeny: 1) Isogeny is an equivalence relation. 2) If a toric variety $Y$ is in the isogeny class of $X$ and $\mu:X\rightarrow Y$ is the isogeny, then there is a one-to-one correspondence between the set of orbits $\{O_i^X\}$ of $X$ and the set of orbits $\{O_i^Y=\mu(O_i^X)\}$ of $Y$. Hence $\dim O_i^X=\dim O_i^Y$ for all $i$, and the number of orbits is independent of the choice of toric varieties in an isogeny class of $X$.

\smallskip A variety $X$ over a field of characteristic 0 is rationally connected if any two general points $x_1,x_2\in X$ can be connected by a rational curve in $X$ from a bounded family.

\smallskip \begin{definition} \label{D:SRC}(\cite{ht08} Definition 14.)
A smooth rationally connected variety $Y$ is \emph{strongly rationally connected} if any of the following conditions hold:

(1) for each point $y\in Y$, there exists a rational curve $f:\mathbb{P}^1\rightarrow Y$ joining $y$ and a generic point in $Y$;

(2) for each point $y\in Y$, there exists a free rational curve containing $y$;

(3) for any finite collection of points $y_1,\ldots,y_m\in Y$, there exists a very free rational curve containing the $y_j$ as smooth points;

(4) for any finite collection of jets
$$\text{Spec }k[\epsilon]/\langle\epsilon^{N+1}\rangle\subset Y,\qquad i=1,\ldots,m,$$
supported at distinct points $y_1,\ldots,y_m$, there exists a very free rational curve smooth at $y_1,\ldots,y_m$ and containing the prescribed jets.
\end{definition}

\begin{definition} \label{D:WeakFree} Let $X$ be a complete normal variety, let $B$ be a set of finitely many closed points in $\mathbb{P}^1$, and let $g:B\rightarrow X$ be a morphism. A rational curve $f:\mathbb{P}^1\rightarrow X$ is called \emph{weakly free} over $g$ if there exist an irreducible family of rational curves $T$ and an evaluation morphism ev: $\mathbb{P}^1\times T\rightarrow X$ such that

1) $f=f_{t_0}=$ ev$|_{\mathbb{P}^1\times t_0}$ for some $t_0\in T$,

2) for any $t\in T$, $f_t=$ ev$|_{\mathbb{P}^1\times t}$ is a rational curve and $f_t|_B=g$,

3) the evaluation morphism ev: $\mathbb{P}^1\times T\rightarrow X$, given by ev$(x,t)=f_t(x)$, is dominant.

We say that a \emph{rational curve} $f':\mathbb{P}^1\rightarrow X$ is a \emph{general deformation} of $f$, or that $f'$ is a \emph{sufficiently general weakly free rational curve}, if $f'=f_t$ for some $t$ in an open dense subset $U$ of $T$. We say that a \emph{weakly free rational curve} $g:\mathbb{P}^1\rightarrow X$ is a \emph{general deformation of} $f$ if there is an irreducible family $T'$ such that $T\cap T'$ contains an open dense subset of $T$, $g=g_{t'}$ for some $t'\in T'$, and $g$ is weakly free in its own family. \end{definition}

\begin{definition} \label{D:GeoFree} Let $X$ be a complete normal variety, let $B$ be a set of finitely many closed points in $\mathbb{P}^1$, and let $g:B\rightarrow X$ be a morphism. A rational curve $f:\mathbb{P}^1\rightarrow X$ is called \emph{geometrically free} over $g$ if there exist an irreducible family of rational curves $T$ and an evaluation morphism ev: $\mathbb{P}^1\times T\rightarrow X$ such that

1) $f=f_{t_0}=$ ev$|_{\mathbb{P}^1\times{t_0}}$ for some $t_0\in T$,

2) for any $t\in T$, $f_t=$ ev$|_{\mathbb{P}^1\times t}$ is a rational curve and $f_t|_B=g$,

3) for any codimension 2 subvariety $Z$ in $X$, $f_t(\mathbb{P}^1)\cap Z\subseteq g(B)$ for general $t\in T$ (``general'' meaning that $t$ belongs to a dense open subset of $T$, depending on $Z$).

\smallskip If $X$ is smooth over an uncountable field of characteristic 0, then weak freeness over $g$ is equivalent to usual freeness over $g$ if $|B|\leq 2$. \end{definition}

\begin{remark} In our applications, we usually assume that $g$ is one-to-one. Let $P_i=g(Q_i)$, where $B=\{Q_i\}$. When no confusion can arise, we say that $f$ is geometrically free over $\{P_i\}$ (resp.\ weakly free over $\{P_i\}$) instead of saying that $f$ is geometrically free over $g$ (resp.\ weakly free over $g$). \end{remark}

Weak freeness and geometric freeness generalize usual freeness (see \cite{kol96} II.3.1 Definition) to the case where the curve passes through singularities. To handle weakly free or geometrically free rational curves, we think of them as general members of a certain family.
In particular, we may suppose that the morphism ev is flat.

\begin{example} Let $X$ be a projective cone over a conic. Let $T$ be the family of all lines through the vertex $O$. Then a line $l\in T$ is not free. However, $l$ is weakly free and geometrically free over $O$ by construction. \end{example}

\smallskip We need a resolution as follows.

\begin{theorem}\label{L:specialresolution} Let $X$ be a toric variety. Let $\Sigma$ be the invariant locus of $X$. Let $P_1,\ldots,P_r\in X$ be $r$ points. Let $f:\mathbb{P}^1\rightarrow X$ be a sufficiently general weakly free rational curve over $P_1,\ldots,P_r$. Then there exists a resolution $\pi:\tilde{X}\rightarrow X$ such that

1) $\pi^{-1}(\Sigma\cup\{P_i\})$ is a divisor with simple normal crossings;

2) $\pi^{-1}(P_j)\subseteq \pi^{-1}(\Sigma\cup\{P_i\})$ is a divisor for each point $P_j$;

3) $\pi:\tilde{X}\rightarrow X$ is an isomorphism over $X\setminus(\emph{Sing }X\cup \{P_i\})$;

4) a sufficiently general $\tilde{f}(\mathbb{P}^1)$ intersects $\pi^{-1}(\Sigma\cup\{P_i\})$ over each $P_j$ only in divisorial points of $\pi^{-1}(\Sigma\cup\{P_i\})$, where $\tilde{f}:\mathbb{P}^1\rightarrow\tilde{X}$ is the proper transform of a general deformation of $f$ and is a (weakly) free rational curve.

More generally, let $f_j:\mathbb{P}^1\rightarrow X$, $1\leq j\leq m$, be finitely many sufficiently general weakly free rational curves over a subset of $\{P_i\}$, where $\{P_i\}$ is a set of finitely many distinct points in $X$. Then there exists a resolution $\pi:\tilde{X}\rightarrow X$ such that

1') $\pi^{-1}(\Sigma\cup \{P_i\})$ is a divisor with simple normal crossings;

2') $\pi^{-1}(P_i)\subseteq \pi^{-1}(\Sigma\cup\{P_i\})$ is a divisor for each point $P_i$;

3') $\pi:\tilde{X}\rightarrow X$ is an isomorphism over $X\setminus(\emph{Sing }X\cup \{P_i\})$;

4') for each $j$, a sufficiently general $\tilde{f}_j(\mathbb{P}^1)$ intersects $\pi^{-1}(\Sigma\cup\{P_i\})$ over each $P_i$ only in divisorial points of $\pi^{-1}(\Sigma\cup\{P_i\})$, where $\tilde{f}_j:\mathbb{P}^1\rightarrow\tilde{X}$ is the proper transform of a general deformation of $f_j$ and is a (weakly) free rational curve.
\end{theorem}

\begin{proof} When the ground field is of characteristic 0, 1)-3) follow from standard facts of resolution theory, e.g.\ see \cite{km98} Theorem 0.2. However, in the toric or toroidal case, the same result holds over any field. More precisely, if all $P_i$ are invariant, we can use a toric resolution. If some $P_i$ are not invariant, they can be converted into toroidal invariant points after a toroidalization.

\smallskip We say that $\tilde{f}(\mathbb{P}^1)$ intersects $\pi^{-1}(\Sigma\cup\{P_i\})$ over $P_i$ in a divisorial point $x$ if $x$ belongs to only one prime divisor of $\pi^{-1}(\Sigma\cup \{P_i\})$ for some $i$ and this prime divisor lies over $P_i$. To fulfill 4), we need extra resolution over the intersections of the divisorial components of $\pi^{-1}(\Sigma\cup\{P_i\})$ through which a general $\tilde{f}$ passes over $P_i$. Termination of such a resolution follows from an estimate involving the intersection multiplicities of $f(\mathbb{P}^1)$ with $\Sigma$. The last resolution is independent of the choice of a general rational curve by Lemma \ref{L:sufficentlygeneral} below. However, it depends on the choice of intersections of divisorial components. For more details, see the proof of Lemma 4.3.4 in \cite{ch09}. For the general statement, we can obtain 1')-3') in a similar manner.
To fulfill 4'), we just need extra resolutions over each point $P_i$.\end{proof}

We discuss some examples of rational curves on projective spaces and on quotients of projective spaces.

\begin{example} \label{E:RCProj} For any given subvariety $S$ of codimension $\geq 2$ in $\mathbb{P}^n$, any points $P_1,\ldots,P_r\in \mathbb{P}^n\setminus S$, and any integer $d\geq r$, there exists a rational curve $C$ of degree $d$ such that each $P_i\in C$ and $C\cap S=\emptyset$. Indeed, we can construct a tree $T$ with $r$ branches such that each $P_i$ is a smooth point on a unique branch and $T$ is disjoint from $S$. The tree can be smoothed into a rational curve $C$ passing through $P_1,\ldots,P_r$ and disjoint from $S$. The rational curve $C$ has degree $r$. For $d\geq r$, we can attach $d-r$ rational curves to the tree $T$ and smooth it. \end{example}

Applying Example \ref{E:RCProj}, we get

\begin{example} Let $\pi:\mathbb{P}^n\rightarrow X$ be a finite morphism, let $S$ be a codimension $\geq 2$ subvariety in $X$, and let $\{P_i\}_{i=1}^m$ be a set of $m$ points outside $S$. Then there exists a rational curve $C$ such that each $P_i\in C$ and $C\cap S=\emptyset$. \end{example}

In particular, the same result holds if $X$ is a quotient space $\mathbb{P}^n/G$, where $G$ is a finite group, for example, if $X$ is a weighted projective space. It is well known that if $X$ is a complete $\mathbb{Q}$-factorial toric variety with Picard number one, then there exist a weighted projective space $Y$ and a finite toric morphism $\pi:Y\rightarrow X$. So the same result holds for rational curves on a complete $\mathbb{Q}$-factorial toric variety with Picard number one. This is a very special case of our Main Theorem.

\section{Proof of Main Theorem} In this section we prove Main Theorem. Let us first prove Main Lemma, which is a special, weaker case of Main Theorem.

\begin{main lemma} Let $X$ be a complete toric variety. Let $P,Q\in X$ be two distinct points ($P,Q$ possibly singular). Let $S\subseteq X$ be a closed subvariety of codimension $\geq 2$. Then there exists a weakly free rational curve on $X$ over $P,Q$, disjoint from $S\setminus\{P,Q\}$. \end{main lemma}

To prove Main Lemma, we need some preliminaries.

\begin{lemma}\label{L:sufficentlygeneral} Let $f$ be a weakly free rational curve on $X$, and let $F_1,\ldots,F_s\subseteq X$ be $s$ proper irreducible subvarieties of $X$. Then there exist $s'$, $0\leq s'\leq s$, subvarieties among $\{F_j\}$ (after renumbering we assume they are $F_1,\ldots,F_{s'}$) such that a general deformation of $f$ intersects $F_1,\ldots,F_{s'}$ and is disjoint from $F_{s'+1},\ldots, F_s$. \end{lemma}

The proof of this lemma is a standard exercise in incidence relations. See \cite{ch09} Lemma 4.3.2 for a detailed proof.

\begin{lemma} \label{L:movefromsmvar} Let $X$ be a complete toric variety. Let $P,Q\in X$ be two points (possibly singular), and let $S$ be a closed subvariety of codimension $\geq 2$. Let $F_1,\ldots,F_{s}$ be all the irreducible components of \emph{Sing} $X$. Let $f:\mathbb{P}^1\rightarrow X$ be a sufficiently general weakly free rational curve over $P,Q$. Suppose $f(\mathbb{P}^1)$ intersects $F_1\setminus\{P,Q\},\ldots,F_{s'}\setminus\{P,Q\}$ and is disjoint from $F_{s'+1}\setminus\{P,Q\},\ldots,F_s\setminus\{P,Q\}$. Then there exists a weakly free rational curve $f'$ over $\{P,Q\}$, which is a general deformation of $f$, such that $f'(\mathbb{P}^1)$ is disjoint from $((S\setminus\text{\emph{Sing} }X) \cup F_{s'+1}\cup\ldots\cup F_{s})\setminus\{P,Q\}$.
Moreover, for any fixed closed subvariety $Z$ of $X$, if $f(\mathbb{P}^1)\cap (Z\setminus\{P,Q\})=\emptyset$, then $f'(\mathbb{P}^1)\cap (Z\setminus\{P,Q\})=\emptyset$. \end{lemma}

\begin{proof} Applying Theorem \ref{L:specialresolution} to the toric variety $X$ and the two points $\{P,Q\}$, we get a resolution $\pi:\tilde{X}\rightarrow X$ satisfying 1)-3) of the theorem and a weakly free rational curve $\tilde{f}:\mathbb{P}^1\rightarrow \tilde{X}$ satisfying 4) of the theorem. A general deformation $\tilde{f}'$ of $\tilde{f}$ is weakly free, so $\tilde{f}'$ is free by \cite{kol96} II.3.11 (here we need the assumption that the ground field is uncountable and of characteristic 0). Moreover, we can assume that $\tilde{f}'$ is disjoint from $\pi^{-1}(S\setminus\text{Sing }X)\setminus \pi^{-1}\{P,Q\}$ by \cite{kol96} II.3.7. On the other hand, let $\Sigma$ be the invariant locus of $X$; notice that Sing $X\subseteq \Sigma$. Then by Theorem \ref{L:specialresolution}, $\tilde{f}(\mathbb{P}^1)$ intersects $\pi^{-1}(\Sigma \cup\{P,Q\})$ divisorially over $P,Q$, and $\tilde{f}(\mathbb{P}^1)$ is disjoint from the closures of $\pi^{-1}(F_{s'+1}\setminus\{P,Q\}),\ldots,\pi^{-1}(F_s\setminus\{P,Q\})$. So the general deformation $\tilde{f}'$ of $\tilde{f}$ intersects open subsets of the divisors $\pi^{-1}(P)$ and $\pi^{-1}(Q)$ and is disjoint from the closure of $(\pi^{-1}(S\setminus \text{Sing }X)\setminus\pi^{-1}\{P,Q\})\cup \pi^{-1}(F_{s'+1}\setminus\{P,Q\})\cup\cdots\cup \pi^{-1}(F_s\setminus\{P,Q\})$. We apply Lemma \ref{L:wfreegowfree}, replacing $f'$ by $\tilde{f}'$, the dominant morphism $\mu$ by $\pi:\tilde{X}\rightarrow X$, $\{P_i\}$ by $\{P,Q\}$, and $S$ by $(S\setminus$ Sing $X)\cup F_{s'+1}\cup\cdots\cup F_s$. Then the weakly free rational curve $f'=\pi\circ \tilde{f'}:\mathbb{P}^1\rightarrow X$ is a general deformation of $f$ (see Definition \ref{D:WeakFree}), passing through the points $P,Q$ and disjoint from $((S\setminus\text{Sing }X)\cup F_{s'+1}\cup\cdots\cup F_s)\setminus\{P,Q\}$. Moreover, we can assume that $f'$ is a weakly free rational curve over $P,Q$ after a base change of the family to which $f'$ belongs (for details, see the proof of Lemma 4.3.1 in \cite{ch09}). The last statement can be proved similarly. \end{proof}

\begin{lemma} \label{L:wfreegowfree} Let $X,X'$ be two complete varieties with $\dim X>0$. Let $\mu:X'\rightarrow X$ be a dominant morphism. Then the image of a weakly free rational curve on $X'$ is weakly free on $X$ in the following sense. Let $P_1,P_2,\ldots,P_r\in \mu(X')$ be $r$ distinct points, and let $S\subseteq X$ be a closed subvariety. Let $S'=\mu^{-1}S$, and let $P_1',P_2',\ldots,P_r'\in X'$ be points such that $\mu(P_i')=P_i$ for $i=1,\ldots,r$. If $f':\mathbb{P}^1\rightarrow X'$ is a weakly free rational curve over $P_1',P_2',\ldots,P_r'$, disjoint from $S'\setminus\{P_1',P_2',\ldots,P_r'\}$, then $f=\mu\circ f''$ is a weakly free rational curve on $X$ over $P_1,P_2,\ldots,P_r$, disjoint from $S\setminus\{P_1,P_2,\ldots,P_r\}$, where $f''$ is a general deformation of $f'$. \end{lemma}

\begin{proof} Since $f'$ is weakly free, ev: $\mathbb{P}^1\times T'\rightarrow X'$ is dominant, where $T'$ is the family associated to $f'$. Since $\mu:X'\rightarrow X$ is dominant, the composition ev: $\mathbb{P}^1\times T'\rightarrow X'\rightarrow X$ is dominant. Hence for a general deformation $f''\in T'$ of $f'$, $f=\mu\circ f''$ is a weakly free rational curve on $X$. \end{proof}

\begin{lemma} \label{L:smoothisogeny} Let $X$ be a $\mathbb{Q}$-factorial toric variety, and let $O$ be a singular orbit of $X$.
Then there exists an isogeny $\mu:Y\rightarrow X$ such that $\mu^{-1}(O)$ is smooth. \end{lemma} \begin{proof} Let $(N,\Delta)$ be the lattice and fan associated to $X$. Let $\sigma$ be the simplicial cone such that $O$ is contained in the affine open subset corresponding to $\sigma$, and let $N'$ be the sublattice generated by the primitive elements of $\sigma$. Let $Y$ be the toric variety corresponding to $(N',\Delta)$ and $\mu$ be the natural finite dominant morphism corresponding to $(N',\Delta)\rightarrow (N,\Delta)$. By the construction of $\mu$, $\mu^{-1}(O)$ is smooth. \end{proof} \begin{proof}[Proof of Main Lemma] \underline{Step 1.} After a $\mathbb{Q}$-factorization $q:X'\rightarrow X$, we can assume that $X$ is a complete $\mathbb{Q}$-factorial toric variety (\cite{fj03} Corollary 3.6). Indeed, a weakly free rational curve on $X'$ gives a weakly free rational curve on $X$ by Lemma \ref{L:wfreegowfree}. \smallskip \underline{Step 2.} A weakly free rational curve can be moved away from any smooth subvariety of codimension $\geq 2$ in the sense of Lemma \ref{L:movefromsmvar}. So we can reduce the proof of Main Lemma to the case $S=I(X)$, where $I(X)$ denotes the union of orbits of $X$ of codimension $\geq 2$. Since $X$ is a toric variety, Sing $X\subseteq I(X)$. Indeed, for any subvariety $S\subseteq X$ of codimension $\geq 2$, suppose there is a sufficiently general weakly free rational curve $f:\mathbb{P}^1\rightarrow X$ over $P,Q\in X$, disjoint from $I(X)\setminus\{P,Q\}$. Apply Lemma \ref{L:movefromsmvar} to the subvariety $S$ and the weakly free rational curve $f$. Since Sing $X\subseteq I(X)$, we have $s'=0$ in Lemma \ref{L:movefromsmvar}, that is, $f(\mathbb{P}^1)$ is disjoint from $F_1\setminus\{P,Q\},\ldots,F_s\setminus\{P,Q\}$. Then there exists a weakly free rational curve $f'$, which is a general deformation of $f$, such that $f'(\mathbb{P}^1)$ is disjoint from $((S\setminus\text{Sing }X)\cup F_1\cup\ldots\cup F_s)\setminus\{P,Q\}=((S\setminus\text{Sing }X)\cup\text{ Sing }X)\setminus\{P,Q\}=S\setminus\{P,Q\}$. \smallskip \underline{Step 3.} Suppose that $I(X)$ consists of $\tilde{s}$ distinct orbits $O_1,\ldots,O_{\tilde{s}}$. Let $f:\mathbb{P}^1\rightarrow X$ be a sufficiently general weakly free rational curve over $P,Q$. By Lemma \ref{L:sufficentlygeneral}, we can assume that $f(\mathbb{P}^1)$ intersects $O_1\setminus\{P,Q\},\ldots,O_{s'}\setminus\{P,Q\}$, and is disjoint from $O_{s'+1}\setminus\{P,Q\},\ldots,O_{\tilde{s}}\setminus\{P,Q\}$ for some $s'$. Notice that $s'$ depends on the points $P,Q$ and the variety $X$. However, since $s'$ is bounded by $\tilde{s}$, and $\tilde{s}$ is independent of the choice of $X$ within an isogeny class, there exists an $\bar{s}$ such that for any toric variety $Y$ in the isogeny class of $X$, and any two distinct points $P',Q'\in Y$, there exists a weakly free rational curve $f'_{\bar{s}}:\mathbb{P}^1\rightarrow Y$ over $P',Q'$, such that $f'_{\bar{s}}(\mathbb{P}^1)$ intersects at most $O_1^Y\setminus\{P',Q'\},\ldots,O_{\bar{s}}^Y\setminus\{P',Q'\}$, and is disjoint from $O_{\bar{s}+1}^Y\setminus\{P',Q'\},\ldots,O_{\tilde{s}}^Y\setminus\{P',Q'\}$, where $O_i^Y$ are the orbits of $Y$ of codimension $\geq 2$. Furthermore, we can assume that $\dim O_1^Y\geq \dim O_2^Y\geq \cdots\geq\dim O_{\tilde{s}}^Y$. This ordering is convenient for us, because $\cup_{j\geq s}O_{j}^Y$ is closed for any $s$. \smallskip We fix a complete toric variety $X$, two points $P,Q$ and a weakly free rational curve $f_{\bar{s}}$ over $P,Q$.
By Lemmas \ref{L:wfreegowfree} and \ref{L:smoothisogeny}, we can suppose that the orbit $O_{\bar{s}}$ is smooth. Indeed, by Lemma \ref{L:smoothisogeny}, there is an isogeny $\mu:Y\rightarrow X$ such that $O_{\bar{s}}^Y=\mu^{-1}(O_{\bar{s}})$ is smooth. Let $P',Q'\in Y$ be such that $\mu(P')=P$ and $\mu(Q')=Q$. Then the existence of a weakly free rational curve $f':\mathbb{P}^1\rightarrow Y$ over $P',Q'$, disjoint from $O_{\bar{s}}^Y\cup \cdots \cup O_{\tilde{s}}^Y$, implies the existence of a weakly free rational curve $f:\mathbb{P}^1\rightarrow X$ over $P,Q$, disjoint from $O_{\bar{s}}\cup\cdots\cup O_{\tilde{s}}$, by Lemma \ref{L:wfreegowfree} with $X'=Y$, $\{P_i\}=\{P,Q\}$ and $S=O_{\bar{s}}^Y\cup O_{\bar{s}+1}^Y\cup\cdots\cup O_{\tilde{s}}^Y$. \smallskip \underline{Step 4.} Now, we prove that there is a weakly free rational curve $f_{\bar{s}-1}$ over $P,Q$ such that $f_{\bar{s}-1}(\mathbb{P}^1)$ intersects at most $O_1\setminus\{P,Q\},\ldots,O_{\bar{s}-1}\setminus\{P,Q\}$, and is disjoint from $O_{\bar{s}}\setminus\{P,Q\},\ldots,O_{\tilde{s}}\setminus\{P,Q\}$. Indeed, we have the following two cases: 1) If $f_{\bar{s}}(\mathbb{P}^1)$ is disjoint from $O_{\bar{s}}\setminus\{P,Q\}$, then let $f_{\bar{s}-1}=f_{\bar{s}}$. 2) If $f_{\bar{s}}(\mathbb{P}^1)$ intersects $O_{\bar{s}}\setminus\{P,Q\}$, we apply Lemma \ref{L:movefromsmvar} with $Z=O_{\bar{s}+1}\cup\cdots\cup O_{\tilde{s}}$ and $S=O_{\bar{s}}\cup Z$. Notice that $S$ and $Z$ are closed subvarieties of $X$ of codimension $\geq 2$, and $O_{\bar{s}}$ is smooth. In particular, $S\setminus$ Sing $X\supseteq O_{\bar{s}}$. By assumption, $f_{\bar{s}}(\mathbb{P}^1)\cap (Z\setminus\{P,Q\})=\emptyset$. Therefore, by the lemma, there exists a weakly free rational curve $f_{\bar{s}-1}$ on $X$, which is a general deformation of $f_{\bar{s}}$, such that $f_{\bar{s}-1}(\mathbb{P}^1)$ intersects at most $O_1\setminus\{P,Q\},\ldots,O_{\bar{s}-1}\setminus\{P,Q\}$, and is disjoint from $(O_{\bar{s}}\cup Z)\setminus\{P,Q\}=(O_{\bar{s}}\setminus\{P,Q\})\cup(O_{\bar{s}+1}\setminus\{P,Q\})\cup\cdots\cup(O_{\tilde{s}}\setminus\{P,Q\})$. \smallskip \underline{Step 5.} By descending induction on $\bar{s}$, there is a weakly free rational curve $f_0$ over $P,Q$, disjoint from $I(X)\setminus\{P,Q\}$. \end{proof} \begin{proof}[Proof of Main Theorem] \underline{Step 1.} First, let us consider $S=$ Sing $X$. There is a free rational curve $f_0:C_0\cong\mathbb{P}^1\rightarrow X$ disjoint from $\{P_i\}\cup S$. Indeed, we can apply Main Lemma to the subvariety $\{P_i\}\cup S$ and any two smooth points $P,Q\not\in\{P_i\}\cup S$ in $X$. Since $f_0(\mathbb{P}^1)$ lies in the smooth locus of $X$, $f_0$ is free and disjoint from $\{P_i\}\cup S$. \begin{center} \begin{overpic}[scale=0.8]{deform1} \end{overpic} \end{center} We construct a comb of smooth rational curves $C$ and a morphism $f:C\rightarrow X'$ as follows. \textbf{I.} Assume that $P_1,\ldots,P_{r'}$ are smooth points for some $r'$, $1\leq r'\leq r$, and $P_{r'+1},\ldots,P_r$ are singular points of $X$. Choose points $t_1,\ldots,t_r\in C_0$ such that the points $P_i'=f_0(t_i)\in X$ are distinct. For each $1\leq j\leq r$, applying Main Lemma to $S=$ Sing $X\cup\{P_i\}$ and the points $P=P_j$, $Q=P_j'$, there is a weakly free rational curve $f_j:C_j\cong\mathbb{P}^1 \rightarrow X$ over $P_j,P_j'$, disjoint from $S\setminus\{P_j,P_j'\}$.
\begin{center} \begin{overpic}[scale=0.8]{deform2} \end{overpic} \end{center} Applying the general statement of Theorem \ref{L:specialresolution} to the weakly free rational curves $f_0,f_1,\ldots,f_r$ and the set $\{P_i\}=\{P_i\}_{i\geq r'+1}$, we get a resolution $\pi:X'\rightarrow X$. For each $1\leq i\leq r'$, since $P_i$ and $P_i'$ are smooth points, $f_i(\mathbb{P}^1)$ is contained in the smooth locus of $X$. Therefore $f_i$ is free for each $1\leq i\leq r'$ by \cite{kol96} II.3.11. We identify the curve $f_i:C_i\cong \mathbb{P}^1\rightarrow X$ birationally with a free rational curve $f_i:C_i\cong\mathbb{P}^1\rightarrow X'$. We also identify $P_i\in X$ with $P_i\in X'$ for $1\leq i\leq r'$, and $P_i'\in X$ with $P_i'\in X'$ for $1\leq i\leq r$. More precisely, $f_i(0_i)=P_i$, where $0_i\in C_i$, $1\leq i\leq r'$, and $f_i(\infty_i)=P_i'$, where $\infty_i\in C_i$, $1\leq i\leq r$. For each $r'+1\leq j\leq r$, $P_j$ is singular. Let $f_j':C_j\cong\mathbb{P}^1\rightarrow X'$ be the proper birational transform of a sufficiently general deformation of $f_j$. Since $\pi:X'\rightarrow X$ is a resolution as in Theorem \ref{L:specialresolution}, $f_j'(C_j)$ intersects $\pi^{-1}P_j$ divisorially over $P_j$ for $r'+1\leq j\leq r$, and is disjoint from the closure of $\pi^{-1}(S\setminus\{P_i\})$. Let $Q_j$ be a point in $f_j'(C_j)\cap \pi^{-1}P_j$ over $P_j$ for $r'+1\leq j\leq r$. We can suppose that $f_i$ is very free for $1\leq i\leq r'$ and $f_j'$ is very free for $r'+1\leq j\leq r$ by \cite{kmm92a} 1.1 or \cite{kol96} II.3.11. By the construction of $f_i$, $1\leq i\leq r'$, and $f_j'$, $r'+1\leq j\leq r$, the curves $f_i(C_i)$ and $f_j'(C_j)$ are disjoint from the closure of $\pi^{-1}(S\setminus\{P_1,\ldots,P_r\})=\pi^{-1}(S\setminus\{P_{r'+1},\ldots,P_r\})$. \begin{center} \begin{overpic}[scale=0.8]{deform3} \end{overpic} \end{center} \textbf{II.} Gluing $\cup_{i=0}^r C_i$, we get a comb of smooth rational curves $C=\sum_{i=0}^r C_i$ and a morphism $f:C\rightarrow X'$. Indeed, we identify the point $\infty_i\in C_i$ with $t_i\in C_0$ for each $1\leq i\leq r$. Then we have a comb of smooth rational curves $C=\sum_{i=0}^rC_i$ and a morphism $f:C\rightarrow X'$ because $f_0(t_i)=f_i(\infty_i)=P_i'$. Notice that $f(C)$ is disjoint from the closure of $\pi^{-1}(S\setminus\{P_1,\ldots,P_r\})$. In the end, $f:C\rightarrow X'$ can be smoothed into a rational curve $f':\mathbb{P}^1\rightarrow X'$ such that $f'$ is free over $P_i$, $1\leq i\leq r'$, and $Q_j$, $r'+1\leq j\leq r$, and is disjoint from the closure of $\pi^{-1}(S\setminus\{P_1,\ldots,P_r\})$. (We can generalize the proof of \cite{kol96} II.7.6 for combs to get that $f'$ is a free rational curve over $\{P_1,\ldots,P_{r'},Q_{r'+1},\ldots,Q_{r}\}$, not only with $\{P_1,\ldots,P_{r'},Q_{r'+1},\ldots,Q_{r}\}$ fixed, as stated in \cite{kol96} II.7.6. Alternatively, we can attach additional rational curves to enlarge the family of $f'$, such that $f'$ is a free rational curve over $\{P_1,\ldots,P_{r'},Q_{r'+1},\ldots,Q_{r}\}$ after a base change.) \begin{center} \begin{overpic}[scale=0.8]{deform4} \end{overpic} \end{center} \underline{Step 2.} Now we consider any closed subvariety $S$ of codimension $\geq 2$. By Step 1, there is a free rational curve $f':\mathbb{P}^1\rightarrow X'$ over $P_1,\ldots,P_{r'}$, $Q_{r'+1},\ldots,Q_r$, disjoint from the closure of $\pi^{-1}(\text{Sing }X\setminus\{P_1,\ldots,P_r\})$, where $\pi:X'\rightarrow X$ is the resolution from Step 1.
On the other hand, $\pi^{-1}((S\setminus\text{Sing }X) \setminus\{P_1,\ldots,P_r\})$ is a codimension $\geq 2$ subvariety in $X'$ by Theorem \ref{L:specialresolution} 3'). So a general deformation $f''$ of $f'$ is free over $P_1,\ldots,P_{r'},Q_{r'+1},\ldots,Q_r$ and disjoint from $\pi^{-1}((S\setminus\text{Sing }X) \setminus\{P_1,\ldots,P_r\})$ by \cite{kol96} II.3.7. Since $f'$ is disjoint from the closure of $\pi^{-1}(\text{Sing }X\setminus\{P_1,\ldots,P_r\})$, $f''$ is disjoint from $\pi^{-1}(\text{Sing }X\setminus\{P_1,\ldots,P_r\})$. Hence $f''$ is disjoint from $\pi^{-1}(\text{Sing }X\setminus\{P_1,\ldots,P_r\})\cup \pi^{-1}((S\setminus\text{Sing }X) \setminus\{P_1,\ldots,P_r\})=\pi^{-1}(S\setminus\{P_1,\ldots,P_r\})$. Therefore, $\pi\circ f''$ is a general deformation of $\pi\circ f'$ over $P_1,\ldots,P_r$, disjoint from $S\setminus\{P_1,\ldots,P_r\}$, and thus $\pi\circ f'$ is a geometrically free rational curve over $P_1,\ldots,P_r$ on $X$. \end{proof}
{ "timestamp": "2009-05-09T21:20:05", "yymm": "0905", "arxiv_id": "0905.1430", "language": "en", "url": "https://arxiv.org/abs/0905.1430", "abstract": "In this paper, we prove that: For any given finitely many distinct points $P_1,...,P_r$ and a closed subvariety $S$ of codimension $\\geq 2$ in a complete toric variety over a uncountable (characteristic 0) algebraically closed field, there exists a rational curve $f:\\mathbb{P}^1\\to X$ passing through $P_1,...,P_r$, disjoint from $S\\setminus \\{P_1,...,P_r\\}$ (see Main Theorem). As a corollary, we prove that the smooth loci of complete toric varieties are strongly rationally connected.", "subjects": "Algebraic Geometry (math.AG)", "title": "Strong rational connectedness of toric varieties", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759666033577, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.707940565116563 }
https://arxiv.org/abs/1507.08095
Approximation properties of isogeometric function spaces on singularly parameterized domains
We study approximation error bounds of isogeometric function spaces on a specific type of singularly parameterized domains. In this context an isogeometric function is the composition of a piecewise rational function with the inverse of a piecewise rational geometry parameterization. We consider domains where one edge of the parameter domain is mapped onto one point in physical space. To be more precise, in our configuration the singular patch is derived from a reparameterization of a regular triangular patch. On such a domain one can define an isogeometric function space fulfilling certain regularity criteria that guarantee optimal convergence. The main contribution of this paper is to prove approximation error bounds for the previously defined class of isogeometric discretizations.
\section{Introduction} Isogeometric analysis as presented by \cite{Hughes2005} starts from a discretization based on a B-spline or NURBS geometry parameterization. The standard theory, as developed in \cite{Bazilevs2006}, relies on the regularity of the geometry mapping. If the geometry mapping is singular, the standard theory does not apply. Therefore, in the standard setting, the underlying geometry has to be diffeomorphic to a rectangle. Singular parameterizations have been studied in the context of isogeometric analysis in e.g. \cite{Lu2009,Cohen2010,Lipton2010}. The regularity theory has already been partially extended to singularly parameterized domains in \cite{Takacs2011,Takacs2012-1}. However, the study of approximation properties of isogeometric spaces over singular patches still remains an open problem, which we study in more detail in this paper. In the context of finite elements there exist several related studies concerning degenerate $Q^1$ isoparametric elements, such as \cite{Jamet1977,Acosta2006}. In Section \ref{sec:2} we present the notation and the underlying setting that we consider throughout this paper. In Section \ref{sec:spaces-operators} we introduce the hierarchical mesh as well as the corresponding function space on it. In Section \ref{subsec:operator} we define an $L^2$-stable projection operator onto the hierarchical function space. We briefly mention how the construction can be generalized to locally quasi-uniform refinements in Section \ref{subsec:generalization-quasi-uniform} and present approximation error bounds in Section \ref{sec:appr-estimates}. Finally, we conclude the paper in Section \ref{sec:conclusion}. \section{Preliminaries} \label{sec:2} Isogeometric function spaces $\mathcal{V}$, as they are present in isogeometric analysis introduced in \cite{Hughes2005}, are built from an underlying B-spline or NURBS space. Hence, to introduce the notation needed, we start this preliminary section by recalling the notion of B-splines and NURBS. We do not give detailed definitions here but refer to the standard literature for further reading, see \cite{PieglTiller1995,Prautzsch2002}. \subsection{B-splines, NURBS and isogeometric functions} Throughout this paper we consider the simple configuration of an isogeometric function space based on a single NURBS patch with uniform knots. Note that this simplification is not necessary, as we point out in Section \ref{subsec:generalization-quasi-uniform}. For simplicity we consider uniform B-splines over the parameter domain $]0,1[$. Given a degree $p\in\mathbb Z^+$ and a mesh size $h = 1/2^n$, with $n \in \mathbb Z^+_0$, the $i$-th B-spline, for $i=1,\ldots,2^n+p$, is denoted by $b^n_{i}(s)$. Here $n$ refers to the level of (dyadic) refinement. We assume that the knot vector is open and has uniformly distributed interior knots. We moreover denote by $b^n_{i,j}(s,t)$ the product of $b^n_{i}(s)$ and $b^n_{j}(t)$. Let $\mathcal{S}_n$ be the tensor-product B-spline space spanned by $b^n_{i,j}(s,t)$. We are given a weight function $w \in \mathcal{S}_{n^*}$ on some coarse level $n^*$, with $w>0$ uniformly. For all $n\geq n^*$ we define the NURBS space as $\mathcal{N}_n = \{ \frac{v_n}{w} : v_n \in \mathcal{S}_n \}$. Moreover, let $\f G \in (\mathcal{N}_{n^*})^2$ be a geometry parameterization, with \begin{equation*} \f G : \f B = \left]0,1\right[^2 \rightarrow \Omega \subset \mathbb R^2. \end{equation*}
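Since the basis functions $b^n_{i}$ are used throughout, we note that they are straightforward to evaluate. The following Python sketch (ours, purely illustrative; all function names are chosen here) implements the Cox-de Boor recursion for an open knot vector with $2^n$ uniform interior spans and checks that the $2^n+p$ functions sum to one (partition of unity).
\begin{verbatim}
import numpy as np

def open_uniform_knots(p, n):
    # Open knot vector on [0,1] with 2^n uniform interior spans (h = 1/2^n).
    interior = np.linspace(0.0, 1.0, 2 ** n + 1)
    return np.concatenate([np.zeros(p), interior, np.ones(p)])

def bspline(i, p, knots, s):
    # Value at s of the (0-based) i-th B-spline of degree p, via Cox-de Boor.
    if p == 0:
        if knots[i] >= knots[i + 1]:            # zero-width span
            return 0.0
        inside = knots[i] <= s < knots[i + 1]
        at_right_end = (s == knots[i + 1] == knots[-1])   # close the last span
        return 1.0 if inside or at_right_end else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (s - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, s)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - s) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline(i + 1, p - 1, knots, s)
    return val

# The tensor-product space S_n, spanned by b^n_{i,j}(s,t) = b^n_i(s) b^n_j(t),
# has (2^n + p)^2 basis functions; here we check partition of unity in 1D.
p_deg, lvl = 2, 3
knots = open_uniform_knots(p_deg, lvl)
total = sum(bspline(i, p_deg, knots, 0.37) for i in range(2 ** lvl + p_deg))
assert abs(total - 1.0) < 1e-12
\end{verbatim}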
For $n\geq n^*$, the h-dependent spaces $\mathcal{V}_n$ of \emph{isogeometric functions} over the open domain $\Omega = \f G (\f B)$ are defined by \begin{equation*} \mathcal{V}_n = \left\{ \varphi_n: \Omega \rightarrow \mathbb R \; | \; \varphi_n = f_n \circ \f G^{-1}, \mbox{ with } f_n \in\mathcal{N}_n \right\}. \end{equation*} The goal of this paper is to prove approximation error bounds for h-refined isogeometric functions over singularly parameterized domains. In the following section we introduce a simple configuration of singularly parameterized domains. \subsection{Singular tensor-product patches derived from triangular patches} In this section we construct smooth isogeometric function spaces on singular patches, where the singular patches are derived from triangular B\'ezier patches. The configuration presented here is developed in more detail in \cite{Takacs2014}, which is based on \cite{Hu2001}. Let $\f u$ be the mapping \begin{eqnarray*} \f u : \;\; ]0,1[^2 & \; \rightarrow \; & \Delta = \{ (u,v): 0< u < 1 , \; 0< v< u \} \\ (s,t)^T & \; \mapsto \; & \left(s, s \, t\right)^T, \end{eqnarray*} and let $\f F: \Delta \rightarrow \mathbb R^2$ be a regular mapping. We assume that the parameterization $\f G$ is given as \begin{equation*} \f G = \f F \circ \f u. \end{equation*} All mappings $\f u$, $\f F$ and $\f G$ are defined on an open parameter domain and can be extended continuously to the boundary. Here, $\f G$ is singular at a part of the boundary, i.e. $\det \nabla \f G (s,t) = 0$ for all $(s,t) \in \{0\}\times [0,1]$. Let $\mathcal{W}_n$ be the isogeometric B-spline space on $\Delta$, i.e. \begin{equation*} \mathcal{W}_n = \left\{ \varphi_n: \Delta \rightarrow \mathbb R \; | \; \varphi_n = f_n \circ \f u^{-1}, \mbox{ with } f_n \in\mathcal{S}_n \right\}, \end{equation*} where the inverse of the singular mapping $\f u$ is equal to \begin{equation*} \f u^{-1}(u,v) = \left(u,\frac{v}{u}\right)^T. \end{equation*} The various introduced mappings and domains are depicted in Figure \ref{figureMapping}. \begin{figure}[!ht] \centering \begin{picture}(170,120) \put(0,0){\includegraphics[width=0.5\textwidth]{mapping.pdf}} \put(63,50){$s$} \put(175,50){$u$} \put(3,112){$t$} \put(112,112){$v$} \put(115,33){$\f F$} \put(44,33){$\f G$} \put(80,85){$\f u$} \put(105,4){$\Omega$} \put(15,70){$]0,1[^2$} \put(140,70){$\Delta$} \end{picture} \caption{Mappings $\f F$, $\f u$ and $\f G$ for an example domain $\Omega$}\label{figureMapping} \end{figure} \subsection{Regularity conditions} To prove approximation properties on $\Omega$, we need the following. \begin{assumption}\label{assu:equiv-norms} There exists a constant $C_F$, depending only on $\f F$ and on the degree $p$, such that \begin{equation*} \frac{1}{C_F} \| \varphi \|_{{H}^{k}(\Omega)} \leq \| \varphi \circ \f F \|_{{H}^{k}(\Delta)} \leq C_F \| \varphi \|_{{H}^{k}(\Omega)} \end{equation*} for all $0\leq k\leq p+1$ and for all $\varphi \in {H}^{p+1}(\Omega)$. \end{assumption} For a mapping that is continuous of order $p$ on the closure of the open domain, we have the following. \begin{proposition}\label{prop:equiv-norms} If $\f F \in (\mathscr{C}^{p}(\overline{\Delta}))^2$ and $\det\nabla\f F > \underline c > 0$, then Assumption \ref{assu:equiv-norms} is fulfilled. \end{proposition} The proof of this proposition can be carried out similarly to the proof of Lemma 3.5 in \cite{Bazilevs2006}. Here, the space $\mathscr{C}^{k}$ is defined as follows. \begin{definition} Let $D$ be an open domain.
The space $\mathscr{C}^{k}(\overline{D})$ of $\mathscr{C}^{k}$-continuous functions on the closure of $D$ is defined as the space of functions $\varphi:D\rightarrow\mathbb R$ with $\varphi\in C^k(D)$ such that the limit \begin{equation*} \lim_{\substack{\f y \rightarrow \f x\\ \f y \in D}} \frac{\partial^{|\alpha|} \varphi(\f y)}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2}} \end{equation*} exists for all $\f x \in \partial D = \overline{D} \backslash D$ and for all $|\alpha| = \alpha_1 + \alpha_2 \leq k$; this limit then defines the value $\frac{\partial^{|\alpha|} \varphi(\f x)}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2}}$ on the boundary. Here $C^k(D)$ is the standard space of $k$-times continuously differentiable functions on the open domain $D$. \end{definition} \section{Spaces and operators on the triangle} \label{sec:spaces-operators} In this section we introduce a hierarchical mesh refinement and corresponding spline spaces $\widehat{\mathcal{W}}_n$ on the triangle $\Delta$. In this configuration $\widehat{\mathcal{W}}_n$ is a subspace of $\mathcal{W}_n$. Then we introduce a projection operator $\Pi_{\widehat{\mathcal{W}}_n} : L^2(\Delta) \rightarrow \widehat{\mathcal{W}}_n$ which is locally bounded in $L^2$. \subsection{Hierarchical mesh and refinement} \label{subsec:mesh} Let $\rm{T}_n$ be the B\'ezier mesh corresponding to the function space $\mathcal{W}_n$ on $\Delta$, i.e. the image under $\f u$ of the uniform tensor-product mesh of size $h=1/2^n$ on $]0,1[^2$. We consider a sequence of meshes $\widehat{\rm{T}}_n$, such that $\widehat{\rm{T}}_0 = \{ \Delta \}$ and \begin{equation*} \widehat{\rm{T}}_n = \{ \delta : \delta \in \rm{T}_n \cap (\Delta \backslash \Delta_{1/2}) \} \cup \{ \delta/2 : \delta \in \widehat{\rm{T}}_{n-1}\}, \end{equation*} where \begin{equation*} \Delta_{\gamma} = \{ (u,v): 0< u < \gamma , \; 0< v< u \} \end{equation*} and $\delta/2 = \{(\frac{u}{2},\frac{v}{2})\,:\, (u,v)\in\delta\}$. Here, with a slight abuse of notation, we have \begin{equation*} {\rm{T}}_n \cap (\Delta \backslash \Delta_{1/2}) = \{ \delta : \delta \in {\rm{T}}_{n} \mbox{ and } \delta \subseteq \Delta \backslash \Delta_{1/2} \}. \end{equation*} Note that all elements in $\widehat{\rm{T}}_n$ are shape regular, i.e. the radius of the largest inscribed circle of each element $\delta$ is of the same order as the diameter of $\delta$. Figure \ref{figure:Mesh-hat-Tn} depicts the locally refined mesh for $n=1,2,3$. The meshes are defined in such a way that the left half of the mesh $\widehat{\rm{T}}_n$ is a scaled version of the mesh $\widehat{\rm{T}}_{n-1}$. \begin{figure}[!ht] \centering \includegraphics[width=0.2\textwidth]{level1.pdf}\qquad \includegraphics[width=0.2\textwidth]{level2.pdf}\qquad \includegraphics[width=0.2\textwidth]{level3.pdf} \caption{Meshes $\widehat{\rm{T}}_1$, $\widehat{\rm{T}}_2$ and $\widehat{\rm{T}}_3$}\label{figure:Mesh-hat-Tn} \end{figure} This refinement scheme is presented in more detail in \cite{Takacs2012-2}.
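The recursive definition is easy to instantiate. The following Python sketch (ours, for illustration only; the function names are not from the original exposition) generates the elements of $\widehat{\rm{T}}_n$ as corner lists in the $(u,v)$-plane: the cells of ${\rm{T}}_n$ contained in $\Delta\backslash\Delta_{1/2}$ are kept, and a half-scaled copy of $\widehat{\rm{T}}_{n-1}$ fills $\Delta_{1/2}$.
\begin{verbatim}
def right_half_cells(n):
    # Cells of the Bezier mesh T_n contained in Delta minus Delta_{1/2}.
    # Each cell is the image under u(s, t) = (s, s*t) of a grid square,
    # i.e. a trapezoid; we return its four corners in the (u, v)-plane.
    h = 1.0 / 2 ** n
    cells = []
    for i in range(2 ** (n - 1), 2 ** n):   # s in [i*h, (i+1)*h], i*h >= 1/2
        for j in range(2 ** n):             # t in [j*h, (j+1)*h]
            s0, s1, t0, t1 = i * h, (i + 1) * h, j * h, (j + 1) * h
            cells.append([(s0, s0 * t0), (s1, s1 * t0),
                          (s1, s1 * t1), (s0, s0 * t1)])
    return cells

def hierarchical_mesh(n):
    # T-hat_n = (T_n restricted to Delta minus Delta_{1/2})
    #           union (T-hat_{n-1} scaled by 1/2).
    if n == 0:
        return [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]]   # T-hat_0 = {Delta}
    scaled = [[(x / 2, y / 2) for (x, y) in cell]
              for cell in hierarchical_mesh(n - 1)]
    return right_half_cells(n) + scaled

for n in range(4):
    print(n, len(hierarchical_mesh(n)))     # 1, 3, 11, 43
\end{verbatim}
The left half of each mesh is, by construction, a half-scaled copy of the previous mesh, so the elements become geometrically graded towards the singular point $(0,0)$, matching Figure \ref{figure:Mesh-hat-Tn}.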
\subsection{Hierarchical function space} \label{subsec:space} Now we can define a (hierarchical) mapped B-spline space over this mesh. The following definition is based on the construction presented in \cite{Takacs2014}. \begin{definition} The basis $\widehat{\mathbb{S}}_n$ over $]0,1[^2$ is defined via \begin{equation*} \begin{array}{rcl} \widehat{\mathbb{S}}_n & = & \left\{ \hat{b}^n_{i,j} = b^n_i(s) B^i_{j}(t): 1\leq i\leq p+1 \mbox{ and } 1\leq j\leq i+1 \right\} \\ & \cup & \left\{ \hat{b}^n_{i,j} = b^n_i(s)b^1_j(t): p+2^{0}+1\leq i \leq p+2^{1} \mbox{ and } 1\leq j\leq p+2^{1} \right\} \\ & \cup & \left\{ \hat{b}^n_{i,j} = b^n_i(s)b^2_j(t): p+2^{1}+1\leq i \leq p+2^{2} \mbox{ and } 1\leq j\leq p+2^{2} \right\}\\ & & \ldots \\ & \cup & \left\{\hat{b}^n_{i,j} = b^n_i(s)b^n_j(t): p+2^{n-1}+1\leq i\leq p+2^{n} \mbox{ and } 1\leq j\leq p+2^{n} \right\}, \end{array} \end{equation*} where $B^i_{j}(t)$ is the $j$-th Bernstein polynomial of degree $i$. This defines a function space $\widehat{\mathcal{W}}_n$ over $\Delta$ via \begin{equation*} \widehat{\mathcal{W}}_n = \mbox{span}\left\{ \beta^n_{i,j} = \hat{b}^n_{i,j}\circ{\f u}^{-1} \; : \; \hat{b}^n_{i,j} \in \widehat{\mathbb{S}}_n \right\}. \end{equation*} \end{definition} \begin{lemma}\label{lem:hatW} The space $\widehat{\mathcal{W}}_n$ and the corresponding basis fulfill the following properties. \begin{itemize} \item[(A)] The space $\widehat{\mathcal{W}}_n$ is a subspace of ${\mathcal{W}}_n \cap \mathscr{C}^{p}(\overline{\Delta})$. \item[(B)] For all $\delta\in\widehat{\rm{T}}_n$ the space fulfills $\mathbb{P}^p \subseteq \widehat{\mathcal{W}}_n \;|_\delta \subseteq \mathbb{Q}^{p}$, where $\mathbb{P}^p$ is the space of bivariate polynomials of total degree $\leq p$ and $\mathbb{Q}^{p}$ is the space of bivariate polynomials of maximum degree $\leq p$. \item[(C)] The basis functions fulfill \begin{equation*} \beta^n_{i,j} (u,v) = \beta^{n-1}_{i,j} (2u,2v) \end{equation*} for all $1 \leq i \leq 2^{n-1}$ and for all $j$. \item[(D)] The basis forms a partition of unity. \end{itemize} \end{lemma} We omit the simple proof of this lemma. \subsection{Stable projection operator} \label{subsec:operator} Let $n_0 = \left\lceil \log_2(8p) \right\rceil$ and set ${h_0} = 1/2^{n_0}$. In the following we consider $n\geq n_0$. This is necessary for the proofs of Lemma \ref{lem:regular-part} and Theorem \ref{thm:L2-stability}, in order to properly separate the contributions from the singular point at the left and the regular part at the right of the parameter domain. We consider the standard dual basis for univariate B-splines, see e.g. \cite{Schumaker2007,beirao2014actanumerica} for more details. \begin{definition}\label{standard-dual-basis} Let $\lambda^{n}_k : L^2(0,1)\rightarrow\mathbb R$ be the dual basis for the B-splines $b^n_i$, i.e. $\lambda^{n}_k (b^n_i) = \delta^{k}_{i}$ for $i,k\in \{1,\ldots,2^n+p\}$. \end{definition} Moreover, consider a dual basis for polynomials on $\Delta_{h_0}$. \begin{definition}\label{n0-dual-basis} Let $\mu^{n_0}_{k,l} : L^2(\Delta_{h_0})\rightarrow\mathbb R$ be dual functionals, with \begin{equation*} \mu^{n_0}_{k,l}( \beta^{n_0}_{i,j} ) = \delta^{k}_{i}\delta^{l}_{j}, \end{equation*} for $k=1,\ldots,p+1$ and $l=1,\ldots,k$. \end{definition} We assume that all the functionals in Definitions \ref{standard-dual-basis} and \ref{n0-dual-basis} are bounded in $L^2$. This is no restriction, see \cite{Schumaker2007}. Let $c\f{I}$ be the mapping from $\mathbb R^2 \rightarrow \mathbb R^2$ with $(u,v)\mapsto (c\,u,c\,v)$. Then we can define a global dual basis for all $n>n_0$. \begin{definition} Let $n>n_0$.
Given $\varphi \in L^2(\Delta)$, we set \begin{equation*} \Lambda^{n}_{k,l} (\varphi) = \mu^{n_0}_{k,l} (\varphi \circ (2^{n_0-n})\f{I}) \end{equation*} for all $k=1,\ldots,p+1$ and all $l=1,\ldots,k$. Moreover, we set \begin{equation*} \Lambda^{n}_{k,l} (\varphi) = \lambda^{n}_{k}\otimes\lambda^{m(k)}_l(\varphi \circ \f u) \end{equation*} for all $k=p+2,\ldots,2^n+p$ and all $l=1,\ldots,2^{m(k)}+p$. Here $m(k) = \lceil \log_2(k-p) \rceil$. \end{definition} It is easy to see that $\Lambda^{n}_{k,l}$ is in fact a dual basis for the basis $\beta^{n}_{i,j}$ on $\Delta$. \begin{definition} The dual functionals naturally define a projection operator from $L^2(\Delta)$ onto $\widehat{\mathcal{W}}_n$, with \begin{equation*} \Pi_{\widehat{\mathcal{W}}_n} (\varphi) = \sum^{2^n+p}_{k=1}\sum_{l} \Lambda^{n}_{k,l}(\varphi) \beta^{n}_{k,l}, \end{equation*} where we sum over the appropriate range of $l$. \end{definition} Before we prove our main stability result, Theorem \ref{thm:L2-stability}, we recall two preliminary results. \begin{lemma}\label{lem:regular-part} Let $n\geq n_0$. There exists a constant $C_R>0$, depending only on $p$, such that for all $\varphi \in L^2(\Delta\backslash\Delta_{h_0})$ and for all $\delta \in \widehat{\rm{T}}_n \cap (\Delta \backslash \Delta_{3/8})$ we have \begin{equation} \left\| \Pi_{\widehat{\mathcal{W}}_n} \varphi \right\|_{L^2(\delta)} \leq C_R \left\| \varphi \right\|_{L^2(\tilde\delta)}, \end{equation} where $\tilde\delta$ is the support extension of $\delta$ as presented in \cite{Bazilevs2006}. \end{lemma} \begin{proof} A proof of this lemma follows directly from the standard theory for regularly parameterized domains (see e.g. \cite{Bazilevs2006,beirao2014actanumerica}), since $\Delta\backslash\Delta_{h_0}$ is parameterized via $\f u_0: \left] h_0,1 \right[ \times \left] 0,1 \right[ \rightarrow \Delta\backslash\Delta_{h_0}$ and the support extension of any $\delta \subseteq \Delta \backslash \Delta_{3/8}$ remains in $\Delta\backslash\Delta_{(3/8 - ph)}$. Since $n_0$ is chosen large enough, we have $\frac{3}{8}-ph \geq h_0$ for all $h\leq h_0$, and therefore $\tilde\delta \subset \Delta\backslash\Delta_{h_0}$. In that case, the constant $C_R$ depends on the degree $p$ and on the Jacobian determinant of the mapping $\f u_0$, which depends on $n_0$ only. Since $n_0$ is fully determined by $p$, the constant $C_R$ depends on $p$ only. \end{proof} On the refinement level $n_0$ it is obvious that the operator is bounded. \begin{lemma} \label{lem:initial-level} There exists a constant $C_0>0$, depending only on $p$, such that for all $\varphi \in L^2(\Delta)$ and for all $\delta \in \widehat{\rm{T}}_{n_0}$ we have \begin{equation} \left\| \Pi_{\widehat{\mathcal{W}}_{n_0}} \varphi \right\|_{L^2(\delta)} \leq C_0 \left\| \varphi \right\|_{L^2(\tilde\delta)}. \end{equation} \end{lemma} \begin{proof} This lemma follows directly from the boundedness of the dual functionals. No condition on the scaling is needed, since we consider the coarse level $n_0$ only. The constant $C_0$ depends on $p$ and $n_0$ only. Again, $n_0$ is fully determined by $p$, which concludes the proof. \end{proof} Now we can prove the main result, the stability of the projection operator. \begin{theorem}\label{thm:L2-stability} Let $n\geq n_0$.
There exists a constant $C_S>0$, depending only on $p$, such that for all $\varphi \in L^2(\Delta)$ and for all $\delta \in \widehat{\rm{T}}_n$ we have \begin{equation} \left\| \Pi_{\widehat{\mathcal{W}}_n} \varphi \right\|_{L^2(\delta)} \leq C_S \left\| \varphi \right\|_{L^2(\tilde\delta)}. \end{equation} \end{theorem} \begin{proof} We split the proof into three parts. \begin{itemize} \item[(A)] Let $\delta \subset \Delta \backslash \Delta_{3/8}$. Then the statement follows directly from Lemma \ref{lem:regular-part}. \item[(B)] Let $\delta \subset \Delta_{3/8}$ and suppose there exists an $m \in \mathbb{Z}^+$, with $m\leq n-n_0$, such that $2^m\f I (\delta) \subset \Delta_{3/4} \backslash \Delta_{3/8}$. Due to Lemma \ref{lem:hatW} (C) we have $$\widehat{\mathcal{W}}_n |_{\Delta_{3/8}} = \widehat{\mathcal{W}}_{n-1} |_{\Delta_{3/4}} \circ 2\f I$$ for all $n \geq \log_2(8 p)$. Hence we conclude \begin{equation*} \left\| \Pi_{\widehat{\mathcal{W}}_n} \varphi \right\|^2_{L^2(\delta)} = 2^{-m}\left\| \Pi_{\widehat{\mathcal{W}}_{n-m}} (\varphi\circ (2^{-m}\f I)) \right\|^2_{L^2(2^m\f I (\delta))}. \end{equation*} The desired result follows from Lemma \ref{lem:regular-part} together with the identity \begin{equation*} 2^{-m} \left\|\varphi\circ ((2^{-m})\f I) \right\|^2_{L^2(\widetilde{2^m\f I (\delta)})} = \left\| \varphi \right\|^2_{L^2(\tilde\delta)}. \end{equation*} \item[(C)] In the remaining case, $2^{n-n_0}\f I(\delta)$ is an element of $\widehat{\rm{T}}_{n_0}$, and we have \begin{equation*} \left\| \Pi_{\widehat{\mathcal{W}}_n} \varphi \right\|^2_{L^2(\delta)} = 2^{n_0-n}\left\| \Pi_{\widehat{\mathcal{W}}_{n_0}} (\varphi\circ (2^{n_0-n}\f I)) \right\|^2_{L^2(2^{n-n_0}\f I (\delta))}. \end{equation*} Here, the bound is fulfilled because of Lemma \ref{lem:initial-level}. \end{itemize} This concludes the proof with $C_S = \max(C_R,C_0)$. \end{proof} Having a stable projection operator, we can prove approximation error bounds on $\Omega$ for a special class of geometry mappings $\f G$. But before we present the proof, we discuss a possible generalization. \subsection{Generalization to locally quasi-uniform knot vectors} \label{subsec:generalization-quasi-uniform} The results can be generalized to certain configurations of non-uniform refinements. Let ${\rm{T}}_n$ be a sequence of locally quasi-uniform meshes with ${\rm{T}}_0 = \{ \Delta \}$. In that case one can also show that there exists a bounded operator, projecting onto the piecewise polynomial function space over the locally quasi-uniform mesh. The stability constant then depends on the degree $p$ and on the quasi-uniformity constants, i.e. the maximum ratio between the diameter of an element and the diameter of its inscribed circle, as well as the maximum ratio between the diameters of neighboring elements. Figure \ref{fig:qu-meshes} depicts a sequence of three locally quasi-uniform meshes. \begin{figure}[!ht] \centering \includegraphics[width=0.2\textwidth]{qu-level1.pdf}\qquad \includegraphics[width=0.2\textwidth]{qu-level2.pdf}\qquad \includegraphics[width=0.2\textwidth]{qu-level3.pdf} \caption{Meshes $\widehat{\rm{T}}_1$, $\widehat{\rm{T}}_2$ and $\widehat{\rm{T}}_3$ for a locally quasi-uniform refinement}\label{fig:qu-meshes} \end{figure} This generalization is based on the fact that Lemma \ref{lem:regular-part} is actually valid for any mesh on $\Delta \backslash \Delta_{3/8}$ (see \cite{beirao2014actanumerica}). Moreover, Lemma \ref{lem:initial-level} can be generalized to arbitrary initial meshes. Then the constant depends only on the degree and on the quasi-uniformity of the initial mesh.
We do not go into the details of this generalization here, but continue with the proof of approximation properties in the uniform case. \section{Approximation error bounds} \label{sec:appr-estimates} We first prove approximation error bounds on the triangle $\Delta$ and then show bounds on $\Omega$, using the equivalence of norms presented in Assumption \ref{assu:equiv-norms}. \subsection{Approximation on $\Delta$} On $\Delta$ we have the following approximation error bound. \begin{theorem}\label{thm:approx-delta} Let $n \geq n_0$. Then there exists a constant $C>0$, depending only on the degree $p$, such that for all $\varphi \in \mathcal{H}^{p+1}(\Delta)$, for all $q\leq p+1$, and for all $\delta \in \widehat{\rm{T}}_n$ we have \begin{equation*} \left| \varphi - \Pi_{\widehat{\mathcal{W}}_n} \varphi \right|_{H^q(\delta)} \leq C h^{p-q+1} |\varphi |_{\mathcal{H}^{p+1}(\tilde\delta)}. \end{equation*} \end{theorem} \begin{proof} We show this result only for $q=0$. The statement for general $q$ follows immediately. According to \cite{BrambleHilbert1970}, there exists a constant $C_A$, depending only on $p$, and a polynomial $\psi \in \mathbb P^p$, such that \begin{equation*} \left\| \varphi - \psi \right\|_{L^2(\tilde\delta)} \leq C_A h^{p+1} |\varphi |_{\mathcal{H}^{p+1}(\tilde\delta)}. \end{equation*} The constant is uniformly bounded, due to the quasi-uniformity of the support extension $\tilde\delta$ of the element $\delta$. Due to Lemma \ref{lem:hatW} (B) we have $\psi \in \widehat{\mathcal{W}}_n$, hence $\Pi_{\widehat{\mathcal{W}}_n} \psi = \psi$ and, by the triangle inequality, \begin{equation*} \left\| \varphi - \Pi_{\widehat{\mathcal{W}}_n} \varphi \right\|_{L^2(\delta)} \leq \left\| \varphi - \psi \right\|_{L^2(\delta)} + \left\| \Pi_{\widehat{\mathcal{W}}_n} ( \varphi - \psi ) \right\|_{L^2(\delta)}. \end{equation*} From Theorem \ref{thm:L2-stability} it follows that \begin{equation*} \left\| \Pi_{\widehat{\mathcal{W}}_n} ( \varphi - \psi ) \right\|_{L^2(\delta)} \leq C_S \left\| \varphi - \psi \right\|_{L^2(\tilde\delta)}. \end{equation*} Hence, we conclude \begin{equation*} \left\| \varphi - \Pi_{\widehat{\mathcal{W}}_n} \varphi \right\|_{L^2(\delta)} \leq (1+C_S) \left\| \varphi - \psi \right\|_{L^2(\tilde\delta)} \leq (1+C_S) C_A h^{p+1} |\varphi |_{\mathcal{H}^{p+1}(\tilde\delta)}, \end{equation*} which concludes the proof with $C = (1+C_S) C_A$. \end{proof} We can now apply this theorem to prove bounds on the mapped physical domain $\Omega$. \subsection{Approximation on $\Omega$} This bound can be extended directly to geometry mappings $\f G$ fulfilling additional regularity criteria. \begin{theorem} Let $n\geq \max(n_0,n^*)$, let $\f G = \f F \circ \f u$, where $\f F = \frac{1}{F_0}(F_1,F_2)^T$ with $F_0,F_1,F_2 \in \widehat{\mathcal{W}}_{n^*}$ and $\det\nabla\f F > \underline c > 0$, and let \begin{equation*} \Pi_{{\mathcal{V}}_n} (\varphi) = \frac{\Pi_{\widehat{\mathcal{W}}_n} ((\varphi \circ \f F)\cdot F_0)}{F_0}\circ \f F^{-1}. \end{equation*} Then there exists a constant $C>0$, depending only on the degree $p$ and on the geometry parameterization $\f G$, such that for all $\varphi \in {H}^{p+1}(\Omega)$, for all $q\leq p+1$, and for all $\omega = \f F(\delta)$, with $\delta \in \widehat{\rm{T}}_n$, we have \begin{equation*} \left| \varphi - \Pi_{{\mathcal{V}}_n} \varphi \right|_{H^q(\omega)} \leq C h^{p-q+1} \|\varphi \|_{{H}^{p+1}(\tilde\omega)}.
\end{equation*} \end{theorem} \begin{proof} The proof of this theorem is a direct consequence of Theorem \ref{thm:approx-delta} and the equivalence of norms as stated in Assumption \ref{assu:equiv-norms}. The assumption is fulfilled because of Lemma \ref{lem:hatW} (A) and Proposition \ref{prop:equiv-norms}. \end{proof} Note that the condition on the geometry mapping $\f G$ is a real restriction. A general NURBS mapping $\f G$ cannot be represented as a composition of the singular bilinear mapping $\f u$ with a regular mapping $\f F= \frac{1}{F_0}(F_1,F_2)^T$ which fulfills $F_0,F_1,F_2 \in \widehat{\mathcal{W}}_n$. Hence, for general NURBS parameterizations, the equivalence of norms in Assumption \ref{assu:equiv-norms} is not fulfilled. However, numerical evidence shows that in many cases the equivalence of norms is not necessary for an optimal order of approximation. This phenomenon remains to be studied in more detail. \section{Conclusion} \label{sec:conclusion} In this paper we proved approximation error bounds for isogeometric function spaces over singularly parameterized domains, thus extending known results in isogeometric analysis to singular patches. We showed these bounds for a hierarchically refined subspace of the full tensor-product spline space. The refinement is assumed to be uniform in $h$, which is not a necessary condition, as we pointed out in Section \ref{subsec:generalization-quasi-uniform}. We restricted our study, however, to a certain type of singular patches that are derived from regularly mapped triangular domains. In the framework we presented, this is necessary in order to guarantee the equivalence of norms between the triangle $\Delta$ and the mapped domain $\Omega$. It is not yet clear how these results extend to more general configurations where this equivalence is not satisfied near the singularity. \input{takacs-arxiv2015.bbl} \end{document}
{ "timestamp": "2015-07-30T02:08:39", "yymm": "1507", "arxiv_id": "1507.08095", "language": "en", "url": "https://arxiv.org/abs/1507.08095", "abstract": "We study approximation error bounds of isogeometric function spaces on a specific type of singularly parameterized domains. In this context an isogeometric function is the composition of a piecewise rational function with the inverse of a piecewise rational geometry parameterization. We consider domains where one edge of the parameter domain is mapped onto one point in physical space. To be more precise, in our configuration the singular patch is derived from a reparameterization of a regular triangular patch. On such a domain one can define an isogeometric function space fulfilling certain regularity criteria that guarantee optimal convergence. The main contribution of this paper is to prove approximation error bounds for the previously defined class of isogeometric discretizations.", "subjects": "Numerical Analysis (math.NA)", "title": "Approximation properties of isogeometric function spaces on singularly parameterized domains", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759666033576, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405651165629 }
https://arxiv.org/abs/1607.02381
On the Optimal Boolean Function for Prediction under Quadratic Loss
Suppose $Y^{n}$ is obtained by observing a uniform Bernoulli random vector $X^{n}$ through a binary symmetric channel. Courtade and Kumar asked how large the mutual information between $Y^{n}$ and a Boolean function $\mathsf{b}(X^{n})$ could be, and conjectured that the maximum is attained by a dictator function. An equivalent formulation of this conjecture is that dictator minimizes the prediction cost in a sequential prediction of $Y^{n}$ under logarithmic loss, given $\mathsf{b}(X^{n})$. In this paper, we study the question of minimizing the sequential prediction cost under a different (proper) loss function - the quadratic loss. In the noiseless case, we show that majority asymptotically minimizes this prediction cost among all Boolean functions. We further show that for weak noise, majority is better than dictator, and that for strong noise dictator outperforms majority. We conjecture that for quadratic loss, there is no single sequence of Boolean functions that is simultaneously (asymptotically) optimal at all noise levels.
\section{Introduction and Problem Statement} Let $X^{n}\in\{0,1\}^{n}$ be a uniform Bernoulli random vector \footnote{As customary, upper case letters will denote random variables/vectors, and their lower case counterparts will denote specific values that they take.} and let $Y^{n}$ be the result of passing $X^{n}$ through a memoryless binary symmetric channel (BSC) with crossover probability $\alpha\in[0,\frac{1}{2}]$. Recently, Courtade and Kumar conjectured the following: \begin{conjecture}[\cite{Boolean_conjecture}] \label{conj: Boolean function conjecture} For any Boolean function $\b(X^{n}):\{0,1\}^{n}\to\{0,1\}$ \begin{equation} I(\b(X^{n});Y^{n})=H(Y^{n})-H(Y^{n}|\b(X^{n}))\leq1-\binent(\alpha)\label{eq: conjecture inequality} \end{equation} where $\binent(\alpha)\dfn-\alpha\log\alpha-(1-\alpha)\log(1-\alpha)$ is the binary entropy function.\footnote{Throughout, the logarithm $\log(t)$ is to base $2$, while $\ln(t)$ is the natural logarithm.} \end{conjecture} Since the \emph{dictator} function $\dic(x^{n})\dfn x_{1}$ (or any other coordinate) achieves this upper bound with equality, then loosely stated, Conjecture \ref{conj: Boolean function conjecture} claims that dictator is the most ``informative'' one-bit quantization of $X^{n}$ in terms of reducing the entropy of $Y^{n}$. Despite considerable effort in several directions (e.g. \cite{Boolean_conjecture,anantharam,Chandar_Tcham,Ordentlich_Shayevitz_Weinstein}), Conjecture \ref{conj: Boolean function conjecture} remains generally unsettled. Recently, it was shown in \cite{Alex} that Conjecture \ref{conj: Boolean function conjecture} holds for very noisy channels, to wit, for all $\alpha\geq\frac{1}{2}-\alpha^{*}$, for some absolute constant $\alpha^{*}>0$. From a different perspective, defining $Q_{k}\dfn\P[Y_{k}=1|Y^{k-1},\b(X^{n})]$, and using the chain rule, we can write \begin{align} H(Y^{n}|\b(X^{n})) & =\sum_{k=1}^{n}H(Y_{k}|Y^{k-1},\b(X^{n}))\nonumber \\ & =\sum_{k=1}^{n}\E\left[\ell_{\st[log]}(Y_{k},Q_{k})\right] \end{align} where $\ell_{\st[log]}(b,q)\dfn-\log[1-q-b(1-2q)]$ is the \emph{binary logarithmic loss} function.\footnote{The first argument of $\ell_{\st[log]}(b,q)$ represents the outcome of the next bit, and the second argument is the probability assignment for the bit being $1$.} Thus, the most informative Boolean function $\b(x^{n})$ can also be interpreted as the one that minimizes the (expected) \emph{sequential prediction cost} incurred when predicting the sequence $\{Y_{k}\}$ from its past, under logarithmic loss, and given $\b(X^{n})$. It is important to note that the logarithmic loss function is \textit{proper}, i.e., corresponds to a \emph{proper scoring rule} \cite{gneiting2007strictly}.\footnote{Scoring rules are typically defined in the literature as a quantity to maximize, hence are the negative of cost functions.} This means that using the true conditional distribution $Q_{k}$ as the predictor for $Y_{k}$ is guaranteed to minimize the expected prediction cost at time $k$. Given the above interpretation, it seems natural to ask the same question for other loss functions. Namely, what is the minimal sequential prediction cost of $\{Y_{k}\}$ incurred under a general loss function $\ell:\{0,1\}\times[0,1]\to\mathbb{R}_{+}$, \[ \l(Y^{n}|\b(X^{n}))\dfn\sum_{k=1}^{n}\E\left[\ell(Y_{k},Q_{k})\right], \] and what is the associated optimal Boolean function $\b(x^{n})$?
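For small $n$, the cost $\l(Y^{n}|\b(X^{n}))$ can be evaluated exactly, for any loss function, by enumerating the joint distribution of $(\b(X^{n}),Y^{n})$. The following minimal Python sketch does this (it is ours and purely illustrative; all function names are chosen here); for the logarithmic loss it recovers $H(Y^{n}|\b(X^{n}))$, so that, e.g., dictator indeed attains $I(\b(X^{n});Y^{n})=1-\binent(\alpha)$.
\begin{verbatim}
import itertools
import numpy as np

def joint(n, alpha, b):
    # Joint pmf p[v, y] of (b(X^n), Y^n): X^n uniform, Y^n = BSC(alpha) output.
    # y is encoded as an integer with y_1 as the most significant bit.
    p = np.zeros((2, 2 ** n))
    for x in itertools.product((0, 1), repeat=n):
        for yi, y in enumerate(itertools.product((0, 1), repeat=n)):
            d = sum(xb != yb for xb, yb in zip(x, y))
            p[b(x), yi] += 2.0 ** (-n) * alpha ** d * (1 - alpha) ** (n - d)
    return p

def seq_cost(n, alpha, b, loss):
    # sum_k E[loss(Y_k, Q_k)] with Q_k = P[Y_k = 1 | Y^{k-1}, b(X^n)].
    p, total = joint(n, alpha, b), 0.0
    for k in range(n):                     # predicting Y_{k+1}
        for v in (0, 1):                   # value of b(X^n)
            for pre in range(2 ** k):      # realization of the past Y^k
                idx = [yi for yi in range(2 ** n) if yi >> (n - k) == pre]
                mass = p[v, idx].sum()     # P[Y^k = pre, b(X^n) = v]
                if mass == 0.0:
                    continue
                ones = [yi for yi in idx if (yi >> (n - 1 - k)) & 1]
                q = p[v, ones].sum() / mass
                if q > 0.0:
                    total += mass * q * loss(1, q)
                if q < 1.0:
                    total += mass * (1 - q) * loss(0, q)
    return total

log_loss = lambda bit, q: -np.log2(q if bit else 1 - q)
quad_loss = lambda bit, q: (bit - q) ** 2
dic = lambda x: x[0]
maj = lambda x: int(sum(x) > len(x) / 2)   # unambiguous for odd n

print(3 - seq_cost(3, 0.1, dic, log_loss))  # = 1 - h(0.1) = 0.5310...
print(seq_cost(3, 0.1, dic, quad_loss))     # = (3 - (1 - 0.2)**2)/4 = 0.59
\end{verbatim}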
Specifically, it makes sense to consider proper loss functions, as for such functions the optimal prediction strategy is ``honest''. The family of proper loss functions contains many members besides the logarithmic loss; in fact, the exact characterization of this family is well known \cite{gneiting2007strictly}. In this work we focus on another prominent member of this family, the \emph{quadratic loss function}. This loss function is simply the squared distance between the probability assignment and the outcome. In the binary case, it is given by $\ell_{\st[quad]}(b,q)\dfn(b-q)^{2}$. Following that, we can define the \emph{sequential mean squared error} (SMSE) to be the (expected) sequential prediction cost of $Y^{n}$ incurred under quadratic loss given $\b(X^{n})$, namely \begin{align} \m(Y^{n}|\b(X^{n})) & \dfn\sum_{k=1}^{n}\E\left[\ell_{\st[quad]}(Y_{k},Q_{k})\right]\nonumber \\ & =\sum_{k=1}^{n}\E\left[Q_{k}(1-Q_{k})\right]\nonumber \\ & \dfn\sum_{k=1}^{n}\m(Y_{k}|Y^{k-1},\b(X^{n})).\label{eq: SMSE quadratic def} \end{align} In what follows, we show that for $\alpha=0$ (noiseless channel) the SMSE is asymptotically minimized by the majority function.\footnote{In fact, for balanced functions, it is trivially maximized by the dictator.} We further show that majority is better than dictator for small $\alpha$. This might tempt one to conjecture that majority is always asymptotically optimal for SMSE. However, we show that dictator is in fact better than majority for $\alpha$ close to $\frac{1}{2}$. Intuitively, it would seem that dictator is in some sense the function ``least affected'' by noise, and hence while majority is better at weak noise, dictator ``catches up'' with it as the noise increases. This intuition sits well with Conjecture \ref{conj: Boolean function conjecture}, since for logarithmic loss all (balanced) functions are equally good at $\alpha=0$. We conjecture that the optimal function under quadratic loss must be close to majority for $\alpha\approx0$, and close to dictator for $\alpha\approx\frac{1}{2}$. The validity of this conjecture would imply, in particular, that in contrast to the common belief in the logarithmic loss case, for quadratic loss there is no single sequence of Boolean functions that is simultaneously (asymptotically) optimal at all noise levels. \section{Results} Let $\Hamw(x_{k}^{m})$ be the Hamming weight of $x_{k}^{m}$. We denote the majority function by $\maj(x^{n})$, which is equal to $1$ whenever $\Hamw(x^{n})>\frac{n}{2}$, and to $0$ whenever $\Hamw(x^{n})<\frac{n}{2}$. When $n$ is odd this definition is unambiguous, but when $n$ is even, the values of $\maj(x^{n})$ when $\Hamw(x^{n})=\frac{n}{2}$ are not defined, and any arbitrary assignment of values to $\maj(x^{n})$ in this case is suitable for our needs. In the noiseless case ($\alpha=0$), the assertion in Conjecture \ref{conj: Boolean function conjecture} for the logarithmic loss is trivial, and equality is obtained for any \emph{balanced} function ($\P[\b(X^{n})=1]=\frac{1}{2}$), and specifically, for the dictator function. By contrast, for quadratic loss, finding the optimal function seems far from trivial even for $\alpha=0$.
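Before stating the general result, it is instructive to work out the smallest nontrivial case $n=3$ by hand (this short computation is ours, added for illustration). Conditioned on $\maj(X^{3})=1$, the vector $X^{3}$ is uniformly distributed over $\{110,101,011,111\}$, and by symmetry the case $\maj(X^{3})=0$ contributes the same cost. Here $Q_{1}=\frac{3}{4}$; $Q_{2}=\frac{2}{3}$ when $X_{1}=1$ (which happens with probability $\frac{3}{4}$) and $Q_{2}=1$ otherwise; and $Q_{3}=\frac{1}{2}$ when $X_{1}=X_{2}=1$ (probability $\frac{1}{2}$), while $Q_{3}\in\{0,1\}$ otherwise. Hence \begin{align*} \m(X^{3}|\maj(X^{3})) & =\sum_{k=1}^{3}\E\left[Q_{k}(1-Q_{k})\right]\\ & =\frac{3}{4}\cdot\frac{1}{4}+\frac{3}{4}\cdot\frac{2}{3}\cdot\frac{1}{3}+\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2}\\ & =\frac{3}{16}+\frac{1}{6}+\frac{1}{8}=\frac{23}{48}\approx0.4792, \end{align*} which is already smaller than the dictator's cost $\frac{n-1}{4}=\frac{1}{2}$.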
In the next theorem we provide a lower bound on the noiseless SMSE for any Boolean function, and show that the majority function asymptotically achieves it. \begin{thm}[Noiseless case] \label{thm: noiseless}For any Boolean function $\b(X^{n})$ \begin{equation} \m(X^{n}|\b(X^{n}))\geq\frac{n-2\ln2}{4},\label{eq: noiseless lower bound} \end{equation} and for majority \begin{equation} \m(X^{n}|\maj(X^{n}))\leq\frac{n-2\ln2}{4}+o(1).\label{eq: noiseless majority} \end{equation} \end{thm} Clearly, for dictator \[ \m(X^{n}|\dic(X^{n}))=\frac{n-1}{4} \] which is strictly worse than the SMSE of the majority function. In fact, it is easy to see that dictator maximizes the SMSE among all balanced functions. The minimal SMSE for moderate values of $n$ can be found efficiently. The idea is to trace, for each $n$, the optimal functions $\{\b_{w}^{(n)}\}_{w\in\{0,1,\ldots,2^{n}\}}$ under a weight constraint \[ \b_{w}^{(n)}\dfn\argmin_{\b(\cdot):\;\left|\left\{ x^{n}:\;\b(x^{n})=1\right\} \right|=w}\m(X^{n}|\b(X^{n})). \] The optimal function $\b^{(n)}$ is then given by optimizing over $w$, i.e., \[ \b^{(n)}\dfn\argmin_{w\in\{0,1,\ldots,2^{n}\}}\m(X^{n}|\b_{w}^{(n)}(X^{n})). \] Now, assuming that $\{\b_{w}^{(n)}\}$ have been found for input size $n$, the function $\b_{w}^{(n+1)}$ can be found by partitioning its input into two functions of input size $n$: one pertaining to $x_{1}=0$ and the other to $x_{1}=1$. Indeed, observing (\ref{eq: SMSE quadratic def}) for any given function $\b(\cdot)$ of input size $n+1$, it can be noted that the SMSE at the first time point, i.e., $\m(X_{1}|\b(X^{n+1}))$, depends only on the weights $w_{0}=\left|\left\{ x_{2}^{n+1}:\;\b(0,x_{2}^{n+1})=1\right\} \right|$ and $w_{1}=\left|\left\{ x_{2}^{n+1}:\;\b(1,x_{2}^{n+1})=1\right\} \right|$. Further, for any given $(w_{0},w_{1})$ with $w=w_{0}+w_{1}$, the SMSE of all other time points, i.e. $\sum_{k=2}^{n+1}\m(X_{k}|X^{k-1},\b(X^{n+1}))$, is minimized by setting \[ \b(0,x_{2}^{n+1})=\b_{w_{0}}^{(n)}(x_{2}^{n+1}) \] and \[ \b(1,x_{2}^{n+1})=\b_{w_{1}}^{(n)}(x_{2}^{n+1}). \] Hence, given $\{\b_{w}^{(n)}\}$ for all $n$, we can find $\b_{w}^{(n+1)}$ by simply going over all possible allocations of weights $(w_{0},w_{1})$ with $w=w_{0}+w_{1}$. The output of such an algorithm is shown in Table \ref{tab:SMSE} for moderate input sizes. It can be seen that majority is optimal for $n=3$, but not for $n=5,7,9,11$. However, Theorem \ref{thm: noiseless} states that the difference tends to $0$ as $n\to\infty$. For $n=5$, the optimal function disagrees with majority on $4$ inputs. \begin{table} \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline & $\m(X^{n}|\maj(X^{n}))$ & $\min_{\b(\cdot)}\m(X^{n}|\b(X^{n}))$ & Excess SMSE of majority & Lower bound (\ref{eq: noiseless lower bound})\tabularnewline \hline \hline $n=3$ & $0.4792$ & $0.4792$ & $0$ & $0.4034$\tabularnewline \hline $n=5$ & $0.9686$ & $0.9676$ & $0.0010$ & $0.9034$\tabularnewline \hline $n=7$ & $1.4618$ & $1.4552$ & $0.0066$ & $1.4034$\tabularnewline \hline $n=9$ & $1.9569$ & $1.9483$ & $0.0086$ & $1.9034$\tabularnewline \hline $n=11$ & $2.4532$ & $2.4435$ & $0.0097$ & $2.4034$\tabularnewline \hline \end{tabular}\protect\caption{SMSE of majority, the minimal SMSE over all Boolean functions, and the lower bound (\ref{eq: noiseless lower bound}). \label{tab:SMSE}} \par\end{centering} \end{table}
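The recursion above translates directly into a dynamic program over the weights. The following Python sketch (ours, purely illustrative) computes only the optimal values $\m(X^{n}|\b_{w}^{(n)}(X^{n}))$, not the optimizing functions themselves; its per-level minimum should reproduce the column $\min_{\b(\cdot)}\m(X^{n}|\b(X^{n}))$ of Table \ref{tab:SMSE}, e.g. $23/48\approx0.4792$ for $n=3$.
\begin{verbatim}
import numpy as np

def min_smse(n_max):
    # M[w] = minimal noiseless SMSE over Boolean functions on n bits
    # with |{x^n : b(x^n) = 1}| = w; built up from n = 0.
    M = np.zeros(2)                   # n = 0: both constant functions cost 0
    for n in range(n_max):
        half, full = 2 ** n, 2 ** (n + 1)
        M_new = np.full(full + 1, np.inf)
        for w in range(full + 1):
            for w1 in range(max(0, w - half), min(half, w) + 1):
                w0 = w - w1
                c = 0.0               # expected quadratic loss of the first bit
                if w > 0:             # branch b = 1, probability w / full
                    q = w1 / w        # P[X_1 = 1 | b = 1]
                    c += (w / full) * q * (1 - q)
                if w < full:          # branch b = 0
                    q = (half - w1) / (full - w)
                    c += ((full - w) / full) * q * (1 - q)
                # remaining bits: optimal weight-constrained sub-functions
                M_new[w] = min(M_new[w], c + (M[w0] + M[w1]) / 2)
        M = M_new
        print(n + 1, M.min())         # minimal SMSE over all b on n + 1 bits
    return M

min_smse(5)                           # n = 3 prints 23/48 = 0.479166...
\end{verbatim}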
Next, we consider the noisy case $\alpha\in(0,\frac{1}{2}]$, and derive a simple lower bound on the noisy SMSE for any Boolean function. Then, we provide an upper bound and a lower bound for the SMSE of majority.\footnote{Eqs. (\ref{eq: noiseless lower bound}) and (\ref{eq: noiseless majority}) of Theorem \ref{thm: noiseless} can be obtained as special cases of (\ref{eq: noisy lower bound}) and (\ref{eq: noisy majority upper bound}) of Theorem \ref{thm: noisy}, by setting $\alpha=0$, but since the proof of the noisy case is based on Theorem \ref{thm: noiseless}, we have separated the results on the noiseless and noisy cases into two different theorems.} \begin{thm}[Noisy case] \label{thm: noisy}For any Boolean function $\b(X^{n})$ \begin{equation} \m(Y^{n}|\b(X^{n}))\geq\frac{n-2\ln2\cdot(1-2\alpha)^{2}}{4}.\label{eq: noisy lower bound} \end{equation} Furthermore, for majority \begin{equation} \m(Y^{n}|\maj(X^{n}))\leq\frac{n-2\ln2\cdot(1-2\alpha)^{2}\cdot[1-\mu(\alpha)]}{4}+o(1),\label{eq: noisy majority upper bound} \end{equation} where \begin{equation} \mu(\alpha)\dfn\binent\left(\frac{\arccos(1-2\alpha)}{\pi}\right),\label{eq: mu definition} \end{equation} and \begin{equation} \m(Y^{n}|\maj(X^{n}))\geq\frac{n-\frac{1}{2\pi\alpha(1-\alpha)}(1-2\alpha)^{2}}{4}-O\left(\left(1-2\alpha\right)^{4}\right)+o(1).\label{eq: noisy majority lower bound} \end{equation} \end{thm} A straightforward derivation shows that for the dictator function \[ \m(Y^{n}|\dic(X^{n}))=\frac{n-(1-2\alpha)^{2}}{4}; \] indeed, knowing $X_{1}$ improves only the prediction of $Y_{1}$, whose cost drops from $\frac{1}{4}$ to $\alpha(1-\alpha)=\frac{1}{4}-\frac{(1-2\alpha)^{2}}{4}$, while each of the remaining $n-1$ bits of $Y^{n}$ remains uniform given all the available information, costing $\frac{1}{4}$. Hence, the above theorem implies that majority is asymptotically better than dictator for all $\alpha\in[0,\underline{\alpha}]$ where $\underline{\alpha}\approx0.0057$, but that on the other hand, there exists $\overline{\alpha}<\frac{1}{2}$ such that dictator is better than majority for all $\alpha\in[\overline{\alpha},\frac{1}{2})$. \begin{rem} To improve the SMSE, unbalanced majority functions $\maj_{q}(\cdot)$ may be proposed, which assign $1$ to a set of $q\cdot2^{n}$ vectors of maximal Hamming weight, $q\in(0,1)$. In the noiseless case, such functions cannot asymptotically improve the SMSE, since the lower bound is achieved by ordinary majority functions ($q=\frac{1}{2}$). Furthermore, it can be shown that they offer no improvement even in the noisy case. Indeed, the noiseless SMSE of such functions is \[ \m(X^{n}|\maj_{q}(X^{n}))\leq\frac{n-2\ln2\cdot\binent(q)}{4}+o(1), \] which is minimized for $q=\frac{1}{2}$. In addition, the effect of the noise on the SMSE is related to the size of the boundary between vectors with $\maj_{q}(x^{n})=1$ and vectors with $\maj_{q}(x^{n})=0$. For any fixed $q\in(0,1)$, the value of $1$ will be assigned by $\maj_{q}(\cdot)$ to vectors whose Hamming weight exceeds a threshold lying between $\frac{n}{2}-O(n^{\nicefrac{1}{2}+\rho})$ and $\frac{n}{2}+O(n^{\nicefrac{1}{2}+\rho})$, which is asymptotically the same as for ordinary majority with $q=\frac{1}{2}$. So, the boundary size of $\maj_{q}(\cdot)$ is roughly the same as the boundary size of $\maj(\cdot)$, and the effect of the noise on the SMSE is asymptotically the same for all $q\in(0,1)$. Since the noiseless SMSE for $q=\frac{1}{2}$ is minimal, this seems to be the optimal choice even in the presence of noise ($\alpha\in(0,\frac{1}{2})$). \end{rem} The proofs of Theorems \ref{thm: noiseless} and \ref{thm: noisy} appear in Sections \ref{sec:The-Noiseless-Case} and \ref{sec:The-Noisy-Case}, respectively, and will be outlined shortly. Throughout the proofs, we will only consider positive sequences in $n$, and so Landau notation should be interpreted with a positive sign. For example, if $a_{n}=\Theta(n)$ then $a_{n}$ is a positive sequence, increasing approximately linearly.
In addition, we will denote the \emph{binary divergence} by $\bindiv(\alpha||\beta)\dfn\alpha\log\frac{\alpha}{\beta}+(1-\alpha)\log\frac{(1-\alpha)}{(1-\beta)}$, and the support of a random vector $X^{n}$ by ${\cal S}_{X^{n}}\dfn\{x^{n}:\;\P(X^{n}=x^{n})>0\}$. For brevity, we ignore integer constraints throughout the paper, as they do not affect the results. \section{Proof of the Noiseless Case Theorem\label{sec:The-Noiseless-Case}} In this section, we consider the noiseless case $\alpha=0$, namely where $X^{n}=Y^{n}$ with probability $1$, and prove Theorem \ref{thm: noiseless}. The outline of the proof is as follows. To prove the lower bound (\ref{eq: noiseless lower bound}) on the SMSE, we use the binary Pinsker inequality to upper bound the quadratic loss using the binary divergence. To prove that majority asymptotically achieves this lower bound, we first note that since $\maj(X^{n})$ is a balanced function of many bits, its value helps in predicting $X_{1}$ only marginally, and similarly, the gain in SMSE from knowing $\maj(X^{n})$ at the first few time points is negligible. In the same spirit, at the last time point, the value of $\maj(X^{n})$ is only useful if $\Hamw(x^{n-1})=\frac{n-1}{2}$ (assuming odd $n$), which occurs with negligible probability, and similarly, the gain at the last few time points due to the value of $\maj(X^{n})$ is also negligible. Hence, the gain in prediction cost from knowing $\maj(X^{n})$ is mainly obtained in the ``middle'' time points. However, even at those time points, the gain is moderate, and the probability of the next bit, given the past and $\maj(X^{n})$, is still close to $\frac{1}{2}$ with high probability. So, as Pinsker's inequality is tight around $\frac{1}{2}$, the quadratic loss function can be replaced with a function of the binary divergence. In turn, the binary divergence is related to the entropy, conditioned on $\maj(X^{n})$. The entropy is simpler to handle, since conditioning on $\maj(X^{n})$ reduces the entropy of $X^{n}$ by exactly $1$ bit, and this leads directly to (\ref{eq: noiseless majority}). It should be noted that while the above intuition is fairly simple, a careful analysis is required for the proof, since a constant deviation $\frac{2\ln2}{4}$ from $\frac{n}{4}$ is sought, which does not depend on $n$. We begin with proving the lower bound (\ref{eq: noiseless lower bound}) using Pinsker's inequality. \begin{IEEEproof}[Proof of (\ref{eq: noiseless lower bound})] Suppose that $\P[\b(X^{n})=1]=q$, and let $P_{k}\dfn\P[X_{k}=1|X^{k-1},\b(X^{n})=1]$. Conditioned on $\b(X^{n})=1$, $X^{n}$ is distributed uniformly over a set of size $q\cdot2^{n}$, and thus \begin{align} \m(X^{n}|\b(X^{n})=1) & =\sum_{k=1}^{n}\E\left[P_{k}(1-P_{k})\right]\nonumber \\ & =\frac{n}{4}-\sum_{k=1}^{n}\E\left[\left(P_{k}-\frac{1}{2}\right)^{2}\right]\nonumber \\ & \trre[\geq,a]\frac{n}{4}-\frac{2\ln2}{4}\sum_{k=1}^{n}\E\left[\bindiv(P_{k}||\nicefrac{1}{2})\right]\nonumber \\ & =\frac{n}{4}-\frac{2\ln2}{4}\sum_{k=1}^{n}\E\left[1-\binent(P_{k})\right]\nonumber \\ & =\frac{n}{4}-\frac{2\ln2}{4}\left[n-H(X^{n}|\b(X^{n})=1)\right]\nonumber \\ & =\frac{n}{4}+\frac{2\ln2\log(q)}{4} \end{align} where $(a)$ uses a binary version of Pinsker's inequality \cite[p. 370, Eq. (11.139)]{Cover:2006:EIT:1146355} \begin{equation} \bindiv(\alpha||\beta)\geq\frac{4}{2\ln2}(\alpha-\beta)^{2}\label{eq: Pinsker inequality binary} \end{equation} (where equality is achieved iff $\alpha=\beta$). Deriving a similar bound for the event $\b(X^{n})=0$, we obtain (\ref{eq: noiseless lower bound}) from \begin{align} \m(X^{n}|\b(X^{n})) & =q\cdot\m(X^{n}|\b(X^{n})=1)+(1-q)\cdot\m(X^{n}|\b(X^{n})=0)\nonumber \\ & \geq\frac{n}{4}-\frac{2\ln2\cdot\binent(q)}{4}\nonumber \\ & \geq\frac{n}{4}-\frac{2\ln2}{4}.\label{eq: averaging over b} \end{align} \end{IEEEproof}
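As a numerical sanity check of (\ref{eq: noiseless lower bound}), the exact SMSE of majority can be compared against the bound for small odd $n$; a possible snippet (reusing seq_cost, maj and quad_loss from the enumeration sketch in the introduction, which are our illustrative names):
\begin{verbatim}
import math

for n in (3, 5):   # noiseless case: alpha = 0
    print(n, seq_cost(n, 0.0, maj, quad_loss), (n - 2 * math.log(2)) / 4)
# n = 3: 0.479166... (= 23/48) versus the bound 0.403426...,
# matching the table above.
\end{verbatim}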
Deriving a similar bound for the event $\b(X^{n})=0$, we obtain (\ref{eq: noiseless lower bound}) from \begin{align} \m(X^{n}|\b(X^{n})) & =q\cdot\m(X^{n}|\b(X^{n})=1)+(1-q)\cdot\m(X^{n}|\b(X^{n})=0)\nonumber \\ & \geq\frac{n}{4}-\frac{2\ln2\cdot\binent(q)}{4}\nonumber \\ & \geq\frac{n}{4}-\frac{2\ln2}{4}.\label{eq: averaging over b} \end{align} \end{IEEEproof} Proving the asymptotic achievability of the lower bound (\ref{eq: noiseless lower bound}) by the majority function is more intricate, and is based on the asymptotic achievability of equality in Pinsker's inequality (\ref{eq: Pinsker inequality binary}). We will need several definitions and lemmas. \begin{defn} A vector $v^{n}\in\{0,1\}^{n}$ is termed \emph{$t$-majority vector} if $\Hamw(v^{n})\geq tn$, where $t\in[0,1]$ is referred to as the \emph{threshold}. A random vector $V^{n}$ will be termed\emph{ $t$-majority random vector} if it is uniformly distributed over all \emph{$t$-}majority vectors of length $n$\emph{.} Let $\zeta_{n}(t)$ be the minimal integer larger or equal to $tn$. A random vector $V^{n}$ will be termed\emph{ pseudo $t$-majority random vector} if it is uniformly distributed over all \emph{$t$-}majority vectors of length $n$\emph{, }except possibly for some set ${\cal D}_{n}$, such that $\Hamw(v^{n})=\zeta_{n}(t)$ for all $v^{n}\in{\cal D}^{n}$, and there exists $v^{n}\in{\cal S}_{V^{n}}$ such that $\Hamw(v^{n})=\zeta_{n}(t)$. For brevity, we will sometime omit the parameter $t$ when $t=\frac{1}{2}$. The first lemma provides an approximation for the marginal distributions of a $t$-majority random vector. \end{defn} \begin{lem} \label{lem:marginal of t-majority}Let $\eta\in[0,\frac{1}{2})$ be given. Then, if $V^{n}$ is a\textcolor{red}{{} }pseudo $t$-majority random vector, \[ \max\left[\frac{1}{2},t\right]\leq\P[V_{k}=1]\leq\max\left[\frac{1}{2},t\right]+O_{\eta}\left(\frac{1}{n^{\nicefrac{1}{2}-\eta}}\right) \] for all $k\in[n]$. \end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} Before we continue, we shortly comment on notation conventions. There is obviously a difference between a majority random vector of length $k$, and the first $k$ coordinates of a majority random vector of length $n$, when $k<n$. Nonetheless, to avoid double indexing, we will assume that $n$ is large enough but fixed, and the indices of $V^{n}$ will denote the corresponding components, e.g. $V_{k}^{k+m}$ are the components $(V_{k},\ldots,V_{k+m})$ of the majority random vector $V^{n}$. The following lemma shows that if $m_{n}$ increases slowly enough, then the entropy loss of $1$ bit of a majority random vector $V^{n}$, compared to the entropy of a uniform binary i.i.d. random vector, is mainly due to the entropy of the middle part of the vector $V_{m_{n}}^{n-m_{n}}$. In other words, the conditional entropies of the beginning and end parts are close to their maximal values, given by their length. \begin{lem} \label{lem: entropy of partial vector}Let $\rho\in(0,\frac{1}{4})$ and $m_{n}=O(n^{\nicefrac{1}{4}-\rho})$. Then, for a majority random vector $V^{n}$\textbf{ \[ H(V_{m_{n}+1}^{n-m_{n}})\leq n-1-2m_{n}+o(1). \] }\end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} The following corollary is a weakening of lemma \ref{lem: entropy of partial vector}. \begin{cor} \label{cor: entropy of partial vector}Let $\rho\in(0,\frac{1}{4})$ and $m_{n}=O(n^{\nicefrac{1}{4}-\rho})$. Then, for a majority random vector $V^{n}$\textbf{ \[ H(V_{1}^{n-m_{n}})\leq n-1-m_{n}+o(1). 
\] } \end{cor} Now, consider a time index $k$ which is sufficiently far from the last index $n$. In the next lemma, we bound the probability that at time $k$, the number of ones in the vector is still significantly less than the minimal weight $\frac{k}{2}$ of vectors in the support of a majority random vector of length $k$. \begin{lem} \label{lem: exp small probability}Let $m_{n}$ be an increasing positive sequence, and let $\rho\in(0,1)$ be given. Then, for all majority random vectors $V^{n}$ with sufficiently large $n$, \[ \P\left[\Hamw(V_{1}^{k})\leq\frac{k-1}{2}-(n-k+1)^{\nicefrac{1}{2}+\rho}\right]\leq2^{-\Omega(m_{n}^{2\rho})}, \] for all $k\in[n-m_{n}]$.\end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} We are now ready to prove that majority functions are asymptotically optimal. \begin{IEEEproof}[Proof of (\ref{eq: noiseless majority})] Let $\rho\in(0,\nicefrac{1}{8})$ be given, and define $m_{n}\dfn n^{\nicefrac{1}{4}-\rho}$. Let us define $V^{n}$ as the random vector distributed as $X^{n}$ conditioned on $\maj(X^{n})=1$. Clearly, $V^{n}$ is a majority random vector. For any given $k\in[n-m_{n}]$ let us define the events \begin{align} {\cal A}_{k} & \dfn\left\{ \Hamw(V_{1}^{k})\geq\frac{k-1}{2}-(n-k+1)^{\nicefrac{1}{2}+\rho}\right\} \nonumber \\ & =\left\{ \Hamw(V_{1}^{k})\geq\frac{n}{2}-r_{k}+1\right\} \label{eq: large probability set} \end{align} where $r_{k}\dfn\frac{(n-k+1)}{2}+(n-k+1)^{\nicefrac{1}{2}+\rho}$. Now, letting $P_{k}\dfn\P[V_{k}=1|V^{k-1}]$ we have \begin{align} \m(X^{n}|\maj(X^{n})=1) & =\sum_{k=1}^{n}\E\left[P_{k}(1-P_{k})\right]\nonumber \\ & =\frac{n}{4}-\sum_{k=1}^{n}\E\left[\left(P_{k}-\frac{1}{2}\right)^{2}\right]\nonumber \\ & \leq\frac{n}{4}-\sum_{k=1}^{n-m_{n}}\E\left[\left(P_{k}-\frac{1}{2}\right)^{2}\right]\nonumber \\ & \leq\frac{n}{4}-\sum_{k=1}^{n-m_{n}}\sum_{v^{k-1}\in{\cal A}_{k-1}}\P\left[V^{k-1}=v^{k-1}\right]\E\left[\left(P_{k}-\frac{1}{2}\right)^{2}\vert V^{k-1}=v^{k-1}\right]. \end{align} Now, let $v^{k-1}\in{\cal A}_{k-1}$. Conditioning on $V^{k-1}=v^{k-1}$, we have that $V_{k}^{n}$ is a $t$-majority random vector of length $n-k+1\geq m_{n}$, and its threshold $t$ is less than \begin{align} t\leq\frac{r_{k}}{n-k+1} & =\frac{1}{2}+\frac{1}{(n-k+1)^{\nicefrac{1}{2}-\rho}}\nonumber \\ & \leq\frac{1}{2}+\frac{1}{m_{n}^{\nicefrac{1}{2}-\rho}}. \end{align} So, assuming that $n$ is large enough, Lemma \ref{lem:marginal of t-majority} (with $\eta<\rho$) implies that conditioned on the event $V^{k-1}=v^{k-1}$ with $v^{k-1}\in{\cal A}_{k}$ \[ \frac{1}{2}\leq P_{k}\leq\frac{1}{2}+\frac{1}{m_{n}^{\nicefrac{1}{2}-\rho}}+\frac{1}{m_{n}^{\nicefrac{1}{2}-\eta}}\leq\frac{1}{2}+O_{\eta}\left(\frac{1}{n^{\nicefrac{1}{8}-\rho}}\right), \] for all $k\in[n-m_{n}]$. Consequently, as Pinsker's inequality is tight around $\frac{1}{2}$, \[ \left(P_{k}-\frac{1}{2}\right)^{2}\geq\left[1-o(1)\right]\frac{\ln2}{2}\bindiv(P_{k}||\nicefrac{1}{2}) \] and so \begin{align} \m(X^{n}|\maj(X^{n})=1) & \leq\frac{n}{4}-\frac{2\ln2}{4}\left[1-o(1)\right]\times\nonumber \\ & \hphantom{\leq}\sum_{k=1}^{n-m_{n}}\sum_{v^{k-1}\in{\cal A}_{k-1}}\P\left[V^{k-1}=v^{k-1}\right]\E\left[\bindiv(P_{k}||\nicefrac{1}{2})\vert V^{k-1}=v^{k-1}\right]. 
\end{align} Denoting $\tau_{k}\dfn\P\left[V^{k}\not\in{\cal A}_{k}\right]$, we have \begin{align} \E\left[\bindiv(P_{k}||\nicefrac{1}{2})\right] & =\sum_{v^{k-1}\in{\cal A}_{k-1}}\P\left[V^{k-1}=v^{k-1}\right]\E\left[\bindiv(P_{k}||\nicefrac{1}{2})\vert V^{k-1}=v^{k-1}\right]\nonumber \\ & \hphantom{=}+\sum_{v^{k-1}\not\in{\cal A}_{k-1}}\P\left[V^{k-1}=v^{k-1}\right]\E\left[\bindiv(P_{k}||\nicefrac{1}{2})\vert V^{k-1}=v^{k-1}\right]\nonumber \\ & \leq\sum_{v^{k-1}\in{\cal A}_{k-1}}\P\left[V^{k-1}=v^{k-1}\right]\E\left[\bindiv(P_{k}||\nicefrac{1}{2})\vert V^{k-1}=v^{k-1}\right]+\tau_{k-1},\label{eq: conditional and unconditional divergence} \end{align} because $\bindiv(P_{k}||\nicefrac{1}{2})=1-\binent(P_{k})\leq1$. Hence, \begin{align} \m(X^{n}|\maj(X^{n})=1) & \leq\frac{n}{4}-\frac{2\ln2}{4}\left[1-o(1)\right]\sum_{k=1}^{n-m_{n}}\left\{ \E\left[\bindiv(P_{k}||\nicefrac{1}{2})\right]-\tau_{k-1}\right\} \nonumber \\ & \trre[\leq,a]\frac{n}{4}-\frac{2\ln2}{4}\left[1-o(1)\right]\left[n-m_{n}-H(V_{1}^{n-m_{n}})\right]\nonumber \\ & \hphantom{=}+\left[1-o(1)\right]\sum_{k=1}^{n-m_{n}}2^{-cm_{n}^{2\rho}}\nonumber \\ & \trre[\leq,b]\frac{n}{4}-\left[1-o(1)\right]\frac{2\ln2}{4}+o(1)+n2^{-cm_{n}^{2\rho}}\nonumber \\ & =\frac{n}{4}-\frac{2\ln2}{4}+o(1),\label{eq: last bound on MMSE for majority} \end{align} where $(a)$ is using the chain rule, $\bindiv(P_{k}||\nicefrac{1}{2})=1-\binent(P_{k})$, and since from Lemma \ref{lem: exp small probability}, for some $c>0$ we have $\tau_{k}\leq2^{-cm_{n}^{2\rho}}$ for all $k\in[n-m_{n}]$, and $(b)$ is using Corollary \ref{cor: entropy of partial vector}. Finally, from symmetry, conditioning on $\maj(X^{n})=0$ we have \[ \m(X^{n}|\maj(X^{n})=0)\leq\frac{n}{4}-\frac{2\ln2}{4}+o(1) \] and so (\ref{eq: noiseless majority}) is obtained by averaging over $\maj(X^{n})$ (as in (\ref{eq: averaging over b})). \end{IEEEproof} \section{Proof of the Noisy Case Theorem \label{sec:The-Noisy-Case}} In this section, we consider the noisy case, and prove Theorem \ref{thm: noisy}. The outline of the proof is as follows. The lower bound of (\ref{eq: noisy lower bound}) is based on the the result of the noiseless case (\ref{eq: noiseless lower bound}), while taking into account that a noisy bit $Y_{k}$ is to be predicted rather than $X_{k}$. To prove (\ref{eq: noisy majority upper bound}) we use the noiseless SMSE of majority (\ref{eq: noiseless majority}), and quantify the loss in the SMSE conditioned on majority, due to the fact that noisy past bits $Y^{k-1}$ are observed, rather than the noiseless $X^{k-1}$. As in the noiseless case, the ``middle'' time points contain most of the loss. In addition, we use a bound on $H(Y^{n}|\maj(X^{n}))$ based on the \emph{stability} of majority. Finally, to prove (\ref{eq: noisy majority lower bound}) we use a different asymptotic lower bound on $H(\maj(X^{n})|Y^{n})$, which is based on the Gaussian approximation of a binomial random variable, resulting from the Berry-Essen central limit theorem. We then apply Pinsker's inequality, as in the noiseless case, to bound the SMSE via that entropy. To prove (\ref{eq: noisy lower bound}) begin with the next lemma, which states a bound on SMSE of a channel output in terms of the input's SMSE, for any input distribution. \begin{lem} \label{lem: single sample mmse over a channel}For $V\sim\mbox{Bern}(\beta),$ $Z\sim\mbox{Bern}(\alpha)$ independent of $V$, and $W=V+Z$ (modulo-$2$ sum), \[ \m(W)=\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\m(V). 
\] \end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}.\end{IEEEproof} \begin{lem} \label{lem: mmse over a channel}Let $V^{n}\in\{0,1\}^{n}$ be a random vector, and $W^{n}$ be the output of a BSC with crossover $\alpha$ fed by $V^{n}$, i.e. $W^{n}=V^{n}+Z^{n}$, where $Z^{n}\sim\mbox{Bern}(\alpha)$, independent of $V^{n}$. Then, \[ \m(W^{n})\geq\alpha(1-\alpha)\cdot n+(1-2\alpha)^{2}\cdot\m(V^{n}) \] with equality if $V^{n}$ is a memoryless random vector.\end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} Using the above, we can prove (\ref{eq: noisy lower bound}). \begin{IEEEproof}[Proof of (\ref{eq: noisy lower bound})] Consider any Boolean function $\b(X^{n})$ and suppose that $\P\left[\b(X^{n})=1\right]=q$. Then, \begin{align} \m(Y^{n}|\b(X^{n})) & \trre[\geq,a]\alpha(1-\alpha)\cdot n+(1-2\alpha)^{2}\cdot\m(X^{n}|\b(X^{n}))\nonumber \\ & =\alpha(1-\alpha)\cdot n+q(1-2\alpha)^{2}\cdot\m(X^{n}|\b(X^{n})=1)+(1-q)(1-2\alpha)^{2}\cdot\m(X^{n}|\b(X^{n})=0)\nonumber \\ & \trre[\geq,b]\alpha(1-\alpha)\cdot n+(1-2\alpha)^{2}\cdot\frac{(n-2\ln2)}{4}\nonumber \\ & \geq\frac{n-(1-2\alpha)^{2}\cdot2\ln2}{4},\label{eq: lower bound on MMSE conditional} \end{align} where $(a)$ follows from Lemma \ref{lem: mmse over a channel}, and $(b)$ follows from (\ref{eq: noiseless lower bound}). \end{IEEEproof} To prove (\ref{eq: noisy majority upper bound}), we analyze, in the next two lemmas, the SMSE of a majority random vector $V^{n}$, and show that the quadratic loss in the beginning and end of the vector is close to its maximal value of $\frac{1}{4}$ per bit. \begin{lem} \label{lem: MMSE of beginning of vector}Let $m_{n}=O(n^{1-\rho})$ for some $\rho\in(0,1)$. Then, for a majority random vector $V^{n}$\textbf{ \[ \m(V_{1}^{m_{n}})=\sum_{k=1}^{m_{n}}\m(V_{k}|V_{1}^{k-1})\geq\frac{m_{n}}{4}-o(1). \] }\end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}.\end{IEEEproof} \begin{lem} \label{lem: MMSE of end of vector}Let $\rho\in(0,\frac{1}{8})$ and $m_{n}=O(n^{\nicefrac{1}{4}-\rho})$. Then, for a majority random vector $V^{n}$\textbf{ \[ \sum_{k=n-m_{n}+1}^{n}\m(V_{k}|V_{1}^{k-1})\geq\frac{m_{n}}{4}-o(1). \] }\end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} We also need the following bound on the conditional entropy of the output, given a value of the majority of the input. \begin{lem} \label{lem: output entropy for majority}Let $\mu(\cdot)$ be as defined in (\ref{eq: mu definition}). Then, \[ H(Y^{n}|\maj(X^{n})=1)\leq n-1+\mu(\alpha)+o(1). \] \end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} We can now prove (\ref{eq: noisy majority upper bound}). \begin{IEEEproof}[Proof of (\ref{eq: noisy majority upper bound})] In (\ref{eq: lower bound on MMSE conditional}), it may be observed that due to (\ref{eq: noiseless majority}), inequality $(b)$ is in fact an asymptotic equality, up to an $o(1)$ term. So, it remains to bound the loss in the inequality $(a)$ of (\ref{eq: lower bound on MMSE conditional}), which we denote by $\Phi$. Let us also denote $m_{n}=n^{\nicefrac{1}{4}-\rho}$ for some given $\rho\in(0,\frac{1}{4})$. 
Then, due to symmetry of the majority function, we may condition on the event $\maj(X^{n})=1$, and the loss of inequality $(a)$ of (\ref{eq: lower bound on MMSE conditional}) is \begin{align} \Phi & \dfn\m(Y^{n}|\maj(X^{n})=1)-\alpha(1-\alpha)\cdot n-(1-2\alpha)^{2}\cdot\m(X^{n}|\maj(X^{n})=1)\nonumber \\ & =\sum_{k=1}^{n}\m(Y_{k}|Y^{k-1},\maj(X^{n})=1)-\alpha(1-\alpha)\cdot n-(1-2\alpha)^{2}\cdot\sum_{k=1}^{n}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \trre[=,a](1-2\alpha)^{2}\cdot\left\{ \sum_{k=1}^{n}\m(X_{k}|Y^{k-1},\maj(X^{n})=1)-\m(X_{k}|X^{k-1},\maj(X^{n})=1)\right\} ,\label{eq: Phi loss} \end{align} where $(a)$ is using a derivation similar to (\ref{eq: mmse one sample}). First, using Lemma \ref{lem: MMSE of beginning of vector} \begin{align} & \hphantom{\leq}\sum_{k=1}^{m_{n}}\m(X_{k}|Y^{k-1},\maj(X^{n})=1)-\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \leq\frac{m_{n}}{4}-\sum_{k=1}^{m_{n}}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \leq o(1),\label{eq: Phi loss start of vector} \end{align} and similarly, using Lemma \ref{lem: MMSE of end of vector} \begin{align} & \hphantom{\leq}\sum_{k=m_{n}+1}^{n}\m(X_{k}|Y^{k-1},\maj(X^{n})=1)-\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \leq\frac{m_{n}}{4}-\sum_{k=m_{n}+1}^{n}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \leq o(1).\label{eq: Phi loss end of vector} \end{align} Then, from (\ref{eq: noiseless lower bound}) of Theorem \ref{thm: noiseless}, and the symmetry of conditioning $\maj(X^{n})=0$ and $\maj(X^{n})=1$, we have \[ \sum_{k=1}^{n}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\geq\frac{n-2\ln(2)}{4}, \] and \begin{align} & \hphantom{=}\sum_{k=m_{n}+1}^{n-m_{n}}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & =\sum_{k=1}^{n}\m(X_{k}|X^{k-1},\maj(X^{n})=1)-\sum_{k=1}^{m_{n}}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \hphantom{=}-\sum_{k=n-m_{n}+1}^{n}\m(X_{k}|X^{k-1},\maj(X^{n})=1)\nonumber \\ & \geq\sum_{k=1}^{n}\m(X_{k}|X^{k-1},\maj(X^{n})=1)-\frac{m_{n}}{4}-\frac{m_{n}}{4}\nonumber \\ & \geq\frac{n-2m_{n}-2\ln(2)}{4}.\label{eq: MMSE loss middle of vector} \end{align} So it remains to upper bound the first term in the sum of (\ref{eq: Phi loss}), viz. \[ \sum_{k=m_{n}+1}^{n-m_{n}}\m(X_{k}|Y^{k-1},\maj(X^{n})=1). \] We follow the outline of the proof of (\ref{eq: noiseless majority}) from Theorem \ref{thm: noiseless}. Let us denote the random variables $P_{k}(X^{k-1})\dfn\P(X_{k}=1|X^{k-1},\maj(X^{n})=1)$, and $R_{k}(Y^{k-1})\dfn\P(X_{k}=1|Y^{k-1},\maj(X^{n})=1)$, where their arguments will be sometimes omitted for brevity. In what follows, we will prove the existence of sets ${\cal B}_{k}\subset\{0,1\}^{k}$ such that $\upsilon_{k}\dfn\P\left[Y^{k}\not\in{\cal B}_{k}\right]\leq2^{-\frac{c}{2}m_{n}^{2\rho}}$ for some $c>0$ and for all $k\in\{m_{n}+1,\ldots,n-m_{n}\}$, and \[ \frac{1}{2}\leq R_{k}(y^{k-1})\leq\frac{1}{2}+O_{\eta}\left(\frac{1}{n^{\nicefrac{1}{8}-\rho}}\right) \] for all $y^{k-1}\in{\cal B}_{k-1}$. For $y^{k-1}\in{\cal B}_{k-1}$ Pinsker's inequality is tight and so \[ \left(R_{k}(y^{k-1})-\frac{1}{2}\right)^{2}\geq\left[1-o(1)\right]\frac{\ln2}{2}\bindiv(R_{k}(y^{k-1})||\nicefrac{1}{2}). 
\] Hence, \begin{align} & \hphantom{=}\sum_{k=m_{n}+1}^{n-m_{n}}\m(X_{k}|Y^{k-1},\maj(X^{n})=1)\nonumber \\ & =\sum_{k=m_{n}+1}^{n-m_{n}}\E\left[R_{k}(1-R_{k})\right]\nonumber \\ & =\frac{n-2m_{n}}{4}-\sum_{k=m_{n}+1}^{n-m_{n}}\E\left[\left(R_{k}-\frac{1}{2}\right)^{2}\right]\nonumber \\ & \leq\frac{n-2m_{n}}{4}-\sum_{k=m_{n}+1}^{n-m_{n}}\sum_{y_{1}^{k-1}\in{\cal B}_{k-1}}\P\left[Y^{k-1}=y_{1}^{k-1}\right]\E\left[\left(R_{k}-\frac{1}{2}\right)^{2}|Y^{k-1}=y_{1}^{k-1}\right]\nonumber \\ & \leq\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\sum_{k=m_{n}+1}^{n-m_{n}}\sum_{y_{1}^{k-1}\in{\cal B}_{k-1}}\P\left[Y^{k-1}=y_{1}^{k-1}\right]\E\left[\bindiv(R_{k}||\nicefrac{1}{2})|Y^{k-1}=y_{1}^{k-1}\right]\nonumber \\ & \trre[\leq,a]\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\sum_{k=m_{n}+1}^{n-m_{n}}\left\{ \E\left[\bindiv(R_{k}||\nicefrac{1}{2})\right]-\upsilon_{k}\right\} \nonumber \\ & \trre[\leq,b]\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\sum_{k=m_{n}+1}^{n-m_{n}}\E\left[\bindiv(R_{k}||\nicefrac{1}{2})\right]+o(1)\nonumber \\ & =\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\left[n-2m_{n}-\sum_{k=m_{n}+1}^{n-m_{n}}H(X_{k}|Y^{k-1},\maj(X^{n})=1)\right]+o(1)\nonumber \\ & \trre[\leq,c]\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\left[n-2m_{n}-\sum_{k=m_{n}+1}^{n-m_{n}}H(Y_{k}|Y^{k-1},\maj(X^{n})=1)\right]+o(1)\nonumber \\ & =\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\left[n-2m_{n}-H(Y_{m_{n}+1}^{n-m_{n}}|Y^{m_{n}},\maj(X^{n})=1)\right]+o(1)\nonumber \\ & \trre[\leq,d]\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1-o(1)\right]\left[n-H(Y^{n}|\maj(X^{n})=1)\right]+o(1)\nonumber \\ & \trre[\leq,e]\frac{n-2m_{n}}{4}-\frac{2\ln(2)}{4}\left[1+\mu(\alpha)\right]+o(1),\label{eq: Phi loss Y given X middle} \end{align} $(a)$ is since, just as in (\ref{eq: conditional and unconditional divergence}), \[ \E\left[\bindiv(R_{k}||\nicefrac{1}{2})\right]\leq\sum_{y_{1}^{k-1}\in{\cal B}_{k-1}}\P\left[Y^{k-1}=y_{1}^{k-1}\right]\E\left[\bindiv(R_{k}||\nicefrac{1}{2})|Y^{k-1}=y_{1}^{k-1}\right]+\upsilon_{k}, \] $(b)$ is since $\upsilon_{k}\leq2^{-\frac{c}{2}m_{n}^{2\rho}}$, $(c)$ is using \begin{align} H(Y_{k}|Y^{k-1},\maj(X^{n})=1) & =H(X_{k}+Z_{k}|Y^{k-1},\maj(X^{n})=1)\nonumber \\ & \geq H(X_{k}+Z_{k}|Y^{k-1},Z_{k},\maj(X^{n})=1)\nonumber \\ & =H(X_{k}|Y^{k-1},Z_{k},\maj(X^{n})=1)\nonumber \\ & =H(X_{k}|Y^{k-1},\maj(X^{n})=1), \end{align} where the last equality is since $Z_{k}$ is independent of $(X_{k},Y^{k-1})$. Transition $(d)$ in (\ref{eq: Phi loss Y given X middle}) follows from \begin{align} H(Y_{n-m_{m}+1}^{n}|Y_{1}^{n-m_{n}},\maj(X^{n})=1) & \trre[\geq,i]H(Y_{n-m_{m}+1}^{n}|X_{1}^{n-m_{n}},\maj(X^{n})=1)\nonumber \\ & =H(X_{n-m_{m}+1}^{n}+Z_{n-m_{m}+1}^{n}|X_{1}^{n-m_{n}},\maj(X^{n})=1)\nonumber \\ & \geq H(X_{n-m_{m}+1}^{n}+Z_{n-m_{m}+1}^{n}|X_{1}^{n-m_{n}},Z_{n-m_{m}+1}^{n},\maj(X^{n})=1)\nonumber \\ & =H(X_{n-m_{m}+1}^{n}|X_{1}^{n-m_{n}},\maj(X^{n})=1)\nonumber \\ & \trre[\geq,ii]m_{n}-o(1), \end{align} where here $(i)$ follows from the data processing theorem and the fact that $Y_{1}^{n-m_{n}}-X_{1}^{n-m_{n}}-Y_{n-m_{n}+1}^{n}$, and $(ii)$ follows from (\ref{eq: bound on conditional entropy}) (proof of Lemma \ref{lem: entropy of partial vector}), and using a similar bound to $H(Y_{1}^{m_{n}}|Y_{m_{n}+1}^{n},\maj(X^{n})=1).$ Transition $(e)$ in (\ref{eq: Phi loss Y given X middle}) follows from Lemma \ref{lem: output entropy for majority}. 
To conclude, combining (\ref{eq: Phi loss}),(\ref{eq: Phi loss start of vector}), (\ref{eq: Phi loss end of vector}), (\ref{eq: MMSE loss middle of vector}) and (\ref{eq: Phi loss Y given X middle}) implies that \[ \Phi\leq(1-2\alpha)^{2}\cdot\frac{2\ln2}{4}\mu(\alpha)+o(1), \] which, together with (\ref{eq: lower bound on MMSE conditional}) implies (\ref{eq: noisy majority upper bound}). To complete the proof, it remains to assert the existence of the sets ${\cal B}_{k}$. To this end, recall that in the proof of (\ref{eq: noiseless majority}) in Section \ref{sec:The-Noiseless-Case}, we have defined the sets \[ {\cal A}_{k}\dfn\left\{ \Hamw(V_{1}^{k})\geq\frac{k-1}{2}-(n-k+1)^{\nicefrac{1}{2}+\rho}\right\} \] (cf. (\ref{eq: large probability set})) and showed that $\frac{1}{2}\leq P_{k}(x^{k-1})\leq\frac{1}{2}+O\left(\nicefrac{1}{n^{\nicefrac{1}{8}-\rho}}\right)$ for all $x^{k-1}\in{\cal A}_{k-1}$. In addition, Lemma \ref{lem: exp small probability} implied that there that there exists $c>0$ such that $\P\left[X^{k}\not\in{\cal A}_{k}\right]\leq2^{-cm_{n}^{2\rho}}$ for all $k\in\{m_{n}+1,\ldots,n-m_{n}\}$. Now, note that \begin{align} R_{k}(Y^{k-1}) & =\P(X_{k}=1|Y^{k-1},\maj(X^{n})=1)\nonumber \\ & =\sum_{x^{k-1}}\P\left(X^{k-1}=x^{k-1}|Y^{k-1},\maj(X^{n})=1\right)\cdot\P\left(X_{k}=1|X^{k-1}=x^{k-1},Y^{k-1},\maj(X^{n})=1\right)\nonumber \\ & =\sum_{x^{k-1}}\P\left(X^{k-1}=x^{k-1}|Y^{k-1},\maj(X^{n})=1\right)\cdot P_{k}(x^{k-1}), \end{align} so $R_{k}(Y^{k-1})$ is just an averaging of $P_{k}(x^{k-1})$. Since $P_{k}(x^{k-1})\geq\frac{1}{2}$ for all $x^{k-1}$, this immediately implies $R_{k}(y^{k-1})\geq\frac{1}{2}$. On the other hand \begin{align} R_{k}(Y^{k-1}) & =\sum_{x^{k-1}\in{\cal A}_{k-1}}\P\left(X^{k-1}=x^{k-1}|Y^{k-1},\maj(X^{n})=1\right)\cdot P_{k}(x^{k-1})\nonumber \\ & \hphantom{=}+\sum_{x^{k-1}\not\in{\cal A}_{k-1}}\P\left(X^{k-1}=x^{k-1}|Y^{k-1},\maj(X^{n})=1\right)\cdot P_{k}(x^{k-1})\nonumber \\ & \leq\frac{1}{2}+O\left(\frac{1}{n^{\nicefrac{1}{8}-\rho}}\right)+\P\left(X^{k-1}\not\in{\cal A}_{k-1}|Y^{k-1},\maj(X^{n})=1\right), \end{align} where we have bounded the first term using $P_{k}(x^{k-1})\leq\frac{1}{2}+O\left(\nicefrac{1}{n^{\nicefrac{1}{8}-\rho}}\right)$ for all $x^{k-1}\in{\cal A}_{k-1}$, and we have bounded the second term simply by using $P_{k}(x^{k-1})\leq1$. Let us inspect the random variable $\P[X^{k-1}\not\in{\cal A}_{k-1}|Y^{k-1},\maj(X^{n})=1]$. We know that its expected value satisfies \[ \E\left[\P\left(X^{k-1}\not\in{\cal A}_{k-1}|Y^{k-1},\maj(X^{n})=1\right)\right]=\P\left(X^{k-1}\not\in{\cal A}_{k-1}|\maj(X^{n})=1\right)\leq2^{-cm_{n}^{2\rho}}. \] So, for any given $\eta>0$ Markov's inequality implies that \[ \P\left[\P\left(X^{k-1}\not\in{\cal A}_{k-1}|Y^{k-1},\maj(X^{n})=1\right)\geq2^{\eta m_{n}^{2\rho}}2^{-cm_{n}^{2\rho}}\right]\leq2^{-\eta m_{n}^{2\rho}}. \] Choosing, e.g., $\eta=\frac{c}{2}$ we get that there exists a set ${\cal B}_{k}$ whose probability is larger than $1-2^{-\frac{c}{2}m_{n}^{2\rho}}$ such that \[ \P\left(X^{k-1}\not\in{\cal A}_{k-1}|Y^{k-1},\maj(X^{n})=1\right)\leq2^{-\frac{c}{2}m_{n}^{2\rho}} \] for all $y^{k-1}\in{\cal B}_{k}$. For this set, we have \[ R_{k}(Y^{k-1})\leq\frac{1}{2}+O\left(\frac{1}{n^{\nicefrac{1}{8}-\rho}}\right)+2^{-\frac{c}{2}m_{n}^{2\rho}}=\frac{1}{2}+O\left(\frac{1}{n^{\nicefrac{1}{8}-\rho}}\right), \] as required. \end{IEEEproof} To prove (\ref{eq: noisy majority lower bound}) we first need the following approximation to the entropy of majority functions. 
\begin{lem}[\cite{Or_private}] \label{lem: exact entropy of majority}We have \begin{equation} H(\maj(X^{n})|Y^{n})=\E\left\{ \binent\left[Q\left(\frac{\left|G(1-2\alpha)\right|}{\sqrt{4\alpha(1-\alpha)}}\right)\right]\right\} +o(1)\label{eq: Gaussian approximation} \end{equation} where $G\sim{\cal N}(0,1)$ is a standard Gaussian random variable, and $Q(\cdot)$ is the Q-function (the tail probability of the standard normal distribution).\end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} \begin{rem} If we replace Lemma \ref{lem: output entropy for majority} in the proof of (\ref{eq: noisy majority upper bound}) with Lemma \ref{lem: exact entropy of majority}, we can get a sharper bound than (\ref{eq: noisy majority upper bound}), yet less explicit. \end{rem} In the next lemma, we evaluate $H(\maj(X^{n})|Y^{n})$ for $\alpha\approx\frac{1}{2}$. \begin{lem} \label{lem:approximated entropy of majority}We have \[ H(\maj(X^{n})|Y^{n})\geq1-\frac{1}{\pi\cdot\ln2}\left(\frac{(1-2\alpha)^{2}}{4\alpha(1-\alpha)}\right)-O\left((1-2\alpha)^{4}\right)+o(1). \] \end{lem} \begin{IEEEproof} See Appendix \ref{sec:Miscellaneous-Proofs}. \end{IEEEproof} We can now prove the lower bound on the SMSE of majority functions (\ref{eq: noisy majority lower bound}). \begin{IEEEproof}[Proof of (\ref{eq: noisy majority lower bound})] Using Lemma \ref{lem:approximated entropy of majority} and a derivation similar to (\ref{eq: entropy of majority}), for some $c>0$, and all $\alpha$ sufficiently close to $\frac{1}{2}$ \begin{align} H(Y^{n}|\maj(X^{n})) & =n-1+H(\maj(X^{n})|Y^{n})\nonumber \\ & \geq n-\frac{1}{\pi\cdot\ln2}\left(\frac{(1-2\alpha)^{2}}{4\alpha(1-\alpha)}\right)-c\left(1-2\alpha\right)^{4}+o(1). \end{align} Hence, as in the proof of (\ref{eq: noiseless lower bound}) in Section \ref{sec:The-Noiseless-Case} \begin{align} \m(Y^{n}|\maj(X^{n})) & \geq\frac{n}{4}-\frac{\ln2}{2}\left[n-H(Y^{n}|\maj(X^{n}))\right]\nonumber \\ & \geq\frac{n}{4}-\frac{1}{2\pi\alpha(1-\alpha)}\left(\frac{(1-2\alpha)^{2}}{4}\right)-c\left(1-2\alpha\right)^{4}+o(1) \end{align} for all sufficiently large $n$.\end{IEEEproof} \begin{rem} For the sake of proving (\ref{eq: noisy majority lower bound}), we only needed the second-order approximation, given by Lemma \ref{lem:approximated entropy of majority}. However, we note that the expression on the left-hand side of (\ref{eq: Gaussian approximation}) can be evaluated numerically to an arbitrary precision, e.g., via a power series expansion of the analytic function $\binent\left[Q(t)\right]$. \end{rem} \section{Discussion and Open Problems} The question addressed by Conjecture 1 can be equivalently cast as an optimal sequential prediction problem, seeking the Boolean function $\b(X^{n})$ that minimizes the cost in sequentially predicting the channel output sequence $Y^{n}$, under logarithmic loss. Adopting this point of view, it is natural to consider the same sequential prediction problem under other proper loss functions. In this paper, we have focused on the quadratic loss function. We began by considering the noiseless case $Y^{n}=X^{n}$, which is trivial under logarithmic loss but quite subtle under quadratic loss, and showed that majority asymptotically achieves the minimal prediction cost among all Boolean functions. For the case of noisy observations, we derived bounds on the cost achievable by general Boolean functions, as well as specifically by majority. 
Using these bounds, we showed that majority is better than dictator for weak noise, but that dictator catches up and outperforms majority for strong noise. This should be contrasted with Conjecture \ref{conj: Boolean function conjecture}, which surmises that dictator minimizes the sequential prediction cost under logarithmic loss, simultaneously at all noise levels. Thus, viewed through the lens of sequential prediction, the validity of Conjecture \ref{conj: Boolean function conjecture} appears to possibly hinge on the unique property of logarithmic loss, namely the fact that in the noiseless case all (balanced) Boolean functions result in the exact same prediction cost. The discussion above leads us to conjecture that under quadratic loss, there is no single sequence of functions $\{\b_{n}(X^{n})\}$ that asymptotically minimizes the prediction cost simultaneously at all noise levels. Moreover, it seems plausible that the optimal function must be close to majority for weak noise, and close to dictator for high noise. While it appears that characterizing the optimal function at a given noise level may be difficult, it would be interesting to understand its structural properties, e.g., whether it is monotone, balanced, odd, etc. For logarithmic loss, it is known that the optimal function is monotone \cite{Boolean_conjecture}. This fact can be easily established by first switching any non-monotone coordinate with the last coordinate (losing nothing due to the entropy chain rule), and then \textquotedbl{}shifting\textquotedbl{} \cite{alon1983density} the last coordinate (which can only decrease the cost, as there are no subsequent coordinates). However, monotonicity seems more difficult to establish under quadratic loss, even in the noiseless case; for example, the switching/shifting technique above fails due to the lack of a chain rule under quadratic loss. Finally, it would be interesting to extend this study to non-Boolean functions as well as to other proper loss functions. For example, our results readily indicate that majority is asymptotically optimal in the noiseless case for any loss function that behaves similarly to quadratic loss around $\frac{1}{2}$ (e.g., logarithmic loss). What is the family of proper loss functions for which majority is asymptotically optimal? \section*{Acknowledgment} We are grateful to Or Ordentlich for asking the question in the noiseless case that led to this research. We would also like to thank Or Ordentlich and Omri Weinstein for helpful discussions, and Uri Hadar for pointing out reference \cite{gneiting2007strictly}. \appendices{} \section{Miscellaneous Proofs\label{sec:Miscellaneous-Proofs}} \subsection{Noiseless case} \begin{IEEEproof}[Proof of Lemma \ref{lem:marginal of t-majority}] First assume that $V^{n}$ is a $t$-majority random vector (and not a pseudo $t$-majority random vector). From symmetry of $t$-majority random vector, $\P\left(V_{k}=1\right)=\P\left(V_{1}=1\right)$ for all $k\in[n]$, and so it remains to prove the statement for $k=1$. Let us begin with the case $t\leq\frac{1}{2}$. For $t=0$ we clearly have $P_{k}=\frac{1}{2}$. For $t=\frac{1}{2}$, the number $M_{1}$ of $\frac{1}{2}$-majority vectors such that $v_{1}=1$ ($M_{0}$ for $v_{1}=0$, respectively) is \[ M_{1}=\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}, \] and \[ M_{0}=\sum_{m=\frac{n}{2}}^{n-1}{n-1 \choose m}, \] where the index $m$ in the summation above counts the number of allowed ones in the vector $v_{2}^{n}$. 
So, as $M_{1}>M_{0}$, \[ \P\left(V_{k}=1\right)=\frac{M_{1}}{M_{0}+M_{1}}\geq\frac{1}{2}. \] Moreover, for all $n$ sufficiently large, \begin{align} \frac{{n-1 \choose \frac{n}{2}-1}}{\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}} & \leq\frac{{n-1 \choose \frac{n}{2}-1}}{2^{n-1}}\nonumber \\ & \trre[\leq,a]\sqrt{\frac{2}{\pi}}\cdot\frac{1}{\sqrt{n}}\cdot2^{(n-1)\left[\binent(\frac{1}{2}-\frac{1}{2(n-1)})-1\right]}\nonumber \\ & \leq\sqrt{\frac{2}{\pi}}\cdot\frac{1}{\sqrt{n}}, \end{align} where $(a)$ is using Lemma \ref{lem: binomial and entropy}. So \begin{align} \P\left(V_{k}=1\right) & =\frac{M_{1}}{M_{0}+M_{1}}\nonumber \\ & =\frac{\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}}{\sum_{m=\frac{n}{2}}^{n-1}{n-1 \choose m}+\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}}\nonumber \\ & =\frac{\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}}{2\cdot\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}-{n-1 \choose \frac{n}{2}-1}}\nonumber \\ & =\frac{1}{2-\frac{{n-1 \choose \frac{n}{2}-1}}{\sum_{m=\frac{n}{2}-1}^{n-1}{n-1 \choose m}}}\nonumber \\ & \leq\frac{1}{2}+\sqrt{\frac{2}{\pi}}\cdot\frac{1}{\sqrt{n}}, \end{align} where in the last inequality we have used $\frac{1}{2-s}\leq\frac{1}{2}+s$, valid for small $s$. Now, since $P_{k}$ is monotonic in $t$, then clearly \[ P_{k}\leq\frac{1}{2}+\sqrt{\frac{2}{\pi}}\cdot\frac{1}{\sqrt{n}}, \] for all $0\leq t\leq\frac{1}{2}$. Now for the case $t\geq\frac{1}{2}$. Using symmetry, the probability that $V_{k}=1$ is equal to the total number of ones in the support of $V^{n}$, divided by the total number of zeros and ones in the support of $V^{n}$ . So, \[ \P\left(V_{k}=1\right)=\frac{\sum_{m=tn}^{n}{n \choose m}\cdot m}{\sum_{m=tn}^{n}{n \choose m}\cdot n}\geq\frac{\sum_{m=tn}^{n}{n \choose m}\cdot tn}{\sum_{m=tn}^{n}{n \choose m}\cdot n}\geq t. \] On the other hand, denoting $l_{n}\dfn n^{\nicefrac{1}{2}+\eta}$, for all $n$ sufficiently large, \begin{align} \P\left(V_{k}=1\right) & =\frac{\sum_{m=tn}^{n}{n \choose m}\cdot\frac{m}{n}}{\sum_{m=tn}^{n}{n \choose m}}\nonumber \\ & =\frac{\sum_{m=tn}^{tn+l_{n}}{n \choose m}\cdot\frac{m}{n}}{\sum_{m=tn}^{n}{n \choose m}}+\frac{\sum_{m=tn+l_{n}+1}^{n}{n \choose m}\cdot\frac{m}{n}}{\sum_{m=tn}^{n}{n \choose m}}\nonumber \\ & \leq\frac{\sum_{m=tn}^{tn+l_{n}}{n \choose m}\cdot\left(t+\frac{l_{n}}{n}\right)}{\sum_{m=tn}^{tn+l_{n}}{n \choose m}}+\frac{\sum_{m=tn+l_{n}+1}^{n}{n \choose m}}{\sum_{m=tn}^{n}{n \choose m}}\nonumber \\ & =t+\frac{n^{\eta}}{\sqrt{n}}+\frac{\sum_{m=tn+l_{n}+1}^{n}{n \choose m}}{\sum_{m=tn}^{n}{n \choose m}}\nonumber \\ & \leq t+O_{\eta}\left(\frac{n^{\eta}}{\sqrt{n}}\right). 
\end{align} The last inequality follows from \begin{align} \frac{\sum_{m=tn+l_{n}+1}^{n}{n \choose m}}{\sum_{m=tn}^{n}{n \choose m}} & \trre[=,a]\frac{\sum_{m=tn}^{n}{n \choose m+l_{n}+1}}{\sum_{m=tn}^{n}{n \choose m}}\nonumber \\ & \trre[\leq,b]\max_{tn\leq m\leq n}\frac{{n \choose m+l_{n}+1}}{{n \choose m}}\nonumber \\ & \trre[\leq,c]\max_{tn\leq m\leq n-l_{n}-1}\frac{\sqrt{8n\frac{m}{n}(1-\frac{m}{n})}}{\sqrt{\pi n\cdot\frac{m+l_{n}+1}{n}(1-\frac{m+l_{n}+1}{n})}}\cdot\frac{2^{n\binent\left(\frac{m+l_{n}+1}{n}\right)}}{2^{n\binent\left(\frac{m}{n}\right)}}\nonumber \\ & =\left[1+o(1)\right]\sqrt{\frac{8}{\pi}}\max_{tn\leq m\leq n-l_{n}-1}2^{n\left[\binent\left(\frac{m}{n}+\frac{n^{\nicefrac{\eta}{2}}}{\sqrt{n}}\right)-\binent\left(\frac{m}{n}\right)\right]}\nonumber \\ & \trre[\leq,d]\left[1+o(1)\right]\sqrt{\frac{8}{\pi}}\max_{\frac{n}{2}\leq m\leq n-l_{n}-1}2^{n\left[\binent\left(\frac{m}{n}+\frac{n^{\eta}}{\sqrt{n}}\right)-\binent\left(\frac{m}{n}\right)\right]}\nonumber \\ & \trre[\leq,e]\sqrt{\frac{8}{\pi}}\cdot2^{n\left[\binent\left(\frac{1}{2}+\frac{n^{\eta}}{\sqrt{n}}\right)-\binent\left(\frac{1}{2}\right)\right]}\nonumber \\ & \trre[\leq,f]\sqrt{\frac{8}{\pi}}\cdot2^{-\frac{2}{\ln2}n^{\eta}}, \end{align} where $(a)$ is using the convention ${n \choose m}=0$ for $m>n$, $(b)$ is using Lemma \ref{lem: max bound on fraction}, $(c)$ is using Lemma \ref{lem: binomial and entropy}, $(d)$ is as $t\geq\frac{1}{2}$, $(e)$ is because the maximum is obtained at the minimal value of the feasible set, due the concavity of $\binent(\cdot)$, and $(f)$ is using the inequality $\binent\left(\frac{1}{2}+s\right)\leq1-\frac{2}{\ln2}s^{2}$. Finally, the marginal probability of $1$ for a pseudo $t$-majority random vector is only larger than for ordinary $t$-majority random vector, and smaller than the same marginal probability of a $(t+\frac{1}{n})$-majority random vector. So, the asymptotic upper bound does not change for pseudo $t$-majority random vectors. \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: entropy of partial vector}] From the chain rule for entropies and as conditioning reduces entropy \begin{align} n-1 & =H(V_{1}^{n})\nonumber \\ & =H(V_{m_{n}+1}^{n-m_{n}})+H(V_{1}^{m_{n}}|V_{m_{n}}^{n-m_{n}})+H(V_{n-m_{n}+1}^{n}|V_{1}^{n-m_{n}})\nonumber \\ & \geq H(V_{m_{n}+1}^{n-m_{n}})+H(V_{1}^{m_{n}}|V_{m_{n}}^{n})+H(V_{n-m_{n}+1}^{n}|V_{1}^{n-m_{n}}).\label{eq: lower bound using chain rule} \end{align} Now, for any vector $v^{n-m_{n}}$ such that $\Hamw(v_{1}^{n-m_{n}})\geq\frac{n}{2}+1$, it is assured that $v^{n}\in{\cal S}_{V^{n}}$, no matter what its suffix $v_{n-m_{n}+1}^{n}$ is. Thus, conditioning on this event, the suffix is distributed uniformly over $\{0,1\}^{m_{n}}$. This implies that \[ H(V_{n-m_{n}+1}^{n}|V_{1}^{n-m_{n}})\geq\P\left[\Hamw(V_{1}^{n-m_{n}})\geq\frac{n}{2}+1\right]\cdot m_{n}. 
\] Now, for all sufficiently large $n$ \begin{align} \P\left[\Hamw(V_{1}^{n-m_{n}})\geq\frac{n}{2}+1\right] & =\frac{\sum_{k=\frac{n}{2}+1}^{n-m_{n}}{n-m_{n} \choose k}\cdot2^{m_{n}}}{2^{n-1}}\nonumber \\ & =\frac{2\sum_{k=\frac{n}{2}+1}^{n-m_{n}}{n-m_{n} \choose k}}{2^{n-m_{n}}}\nonumber \\ & =\frac{2\sum_{k=\frac{n-m_{n}}{2}}^{n-m_{n}}{n-m_{n} \choose k}-2\sum_{k=\frac{n-m_{n}}{2}}^{\frac{n}{2}}{n-m_{n} \choose k}}{2^{n-m_{n}}}\nonumber \\ & \geq1-\frac{2\sum_{k=\frac{n-m_{n}}{2}}^{\frac{n}{2}}{n-m_{n} \choose k}}{2^{n-m_{n}}}\nonumber \\ & \geq1-2\left(\frac{m_{n}}{2}+1\right)\frac{{n-m_{n} \choose \frac{n-m_{n}}{2}}}{2^{n-m_{n}}}\nonumber \\ & \geq1-2m_{n}\frac{{n-m_{n} \choose \frac{n-m_{n}}{2}}}{2^{n-m_{n}}}\nonumber \\ & \geq1-2\sqrt{\frac{4}{\pi(n-m_{n})}}m_{n}, \end{align} where the last inequality is from Lemma \ref{lem: binomial and entropy}. Recalling that $m_{n}=O(n^{\nicefrac{1}{4}-\rho})$ \begin{align} H(V_{n-m_{n}+1}^{n}|V_{1}^{n-m_{n}}) & \geq m_{n}-\frac{4m_{n}^{2}}{\sqrt{\pi(n-m_{n})}}\nonumber \\ & =m_{n}-o(1).\label{eq: bound on conditional entropy} \end{align} From symmetry, $H(V_{1}^{m_{n}}|V_{m_{n}}^{n-m_{n}})$ can be evaluated to the exact same expression, and this leads to the required result. \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: exp small probability}] Let \[ r_{k}\dfn\frac{(n-k+1)}{2}+(n-k+1)^{\nicefrac{1}{2}+\rho}. \] Then, for some $c,c'>0$ \begin{align} \P\left[\Hamw(V_{1}^{k})\leq\frac{n}{2}-r_{k}\right] & =\P\left[\left\{ \Hamw(V_{1}^{k})\leq\frac{n}{2}-r_{k}\right\} \cap\left\{ \Hamw(V_{k+1}^{n})\geq r_{k}\right\} \right]\nonumber \\ & \hphantom{=}+\P\left[\left\{ \Hamw(V_{1}^{k})\leq\frac{n}{2}-r_{k}\right\} \cap\left\{ \Hamw(V_{k+1}^{n})<r_{k}\right\} \right]\nonumber \\ & =\P\left[\left\{ \Hamw(V_{1}^{k})\leq\frac{n}{2}-r_{k}\right\} \cap\left\{ \Hamw(V_{k+1}^{n})\geq r_{k}\right\} \right]\nonumber \\ & \leq\P\left[\Hamw(V_{k+1}^{n})\geq r_{k}\right]\nonumber \\ & \leq\frac{\sum_{l=r_{k}}^{n-k}{n-k \choose l}\cdot2^{k}}{2^{n-1}}\nonumber \\ & \trre[\leq,a]\frac{n}{2^{n-k-1}}{n-k \choose r_{k}}\nonumber \\ & \trre[\le,b]\frac{n}{2^{n-k-1}}2^{(n-k)\binent\left(\frac{r_{k}}{n-k}\right)}\nonumber \\ & \trre[\le,c]2n\cdot2^{-c'(n-k)^{2\rho}}\nonumber \\ & \leq2n\cdot2^{-c'\cdot m_{n}^{2\rho}}\nonumber \\ & \leq2^{-c\cdot m_{n}^{2\rho}}, \end{align} where $(a)$ is since $r_{k}\geq\frac{n-k}{2}$, $(b)$ is using Lemma \ref{lem: binomial and entropy}, and $(c)$ is using Taylor expansion of the binary entropy function at $\frac{1}{2}$. \end{IEEEproof} \subsection{Noisy case} \begin{IEEEproof}[Proof of Lemma \ref{lem: single sample mmse over a channel}] We have \begin{align} \m(W) & =\m(V+Z)\nonumber \\ & =\m(\beta*\alpha)\nonumber \\ & =\left[\beta(1-\alpha)+(1-\beta)\alpha\right]\cdot\left[\beta\alpha+(1-\beta)(1-\alpha)\right]\nonumber \\ & =\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\beta(1-\beta)\nonumber \\ & =\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\m(V).\label{eq: mmse one sample} \end{align} \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: mmse over a channel}] We will prove by induction. The relation holds (with equality) for $n=1$ from Lemma \ref{lem: single sample mmse over a channel}. We assume that the property hold up to $n-1$. 
Now, \begin{align} \m(W^{n}) & =\sum_{i=1}^{n-1}\m(W_{i}|W_{1}^{i-1})+\m(W_{n}|W_{1}^{n-1})\nonumber \\ & \geq\sum_{i=1}^{n-1}\m(W_{i}|W_{1}^{i-1})+\m(W_{n}|W_{1}^{n-1},Z_{1}^{n-1})\nonumber \\ & =\sum_{i=1}^{n-1}\m(W_{i}|W_{1}^{i-1})+\m(V_{n}+Z_{n}|V_{1}^{n-1},Z_{1}^{n-1})\nonumber \\ & \trre[=,a]\sum_{i=1}^{n-1}\m(W_{i}|W_{1}^{i-1})+\m(V_{n}+Z_{n}|V_{1}^{n-1})\nonumber \\ & \trre[=,b]\sum_{i=1}^{n-1}\m(W_{i}|W_{1}^{i-1})+\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\m(V_{n}|V_{1}^{n-1})\nonumber \\ & \trre[\geq,c](n-1)\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\m(V_{1}^{n-1})+\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\m(V_{n}|V_{1}^{n-1})\nonumber \\ & =n\alpha(1-\alpha)+(1-2\alpha)^{2}\cdot\m(V^{n}),\label{eq: mmse n samples} \end{align} where $(a)$ is since $(V_{n},Z_{n})-V_{1}^{n-1}-Z_{1}^{n-1}$, $(b)$ is using a conditional version of (\ref{eq: mmse one sample}) (which holds since the pointwise relation holds), and $(c)$ is using the induction assumption. Equality clearly holds when $V^{n}$ is a memoryless random vector. \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: MMSE of beginning of vector}] The proof is quite similar to the proof of (\ref{eq: noiseless majority}) in Section \ref{sec:The-Noiseless-Case}. Let $\rho\in(0,\nicefrac{1}{2})$ and $\eta\in[0,\frac{1}{2})$ be given. For any given $k\in[n-m_{n}]$ let us define the events \begin{align} {\cal A}_{k} & \dfn\left\{ \Hamw(V_{1}^{k})\geq\frac{k-1}{2}-(n-k+1)^{\nicefrac{1}{2}+\nicefrac{\rho}{3}}\right\} \nonumber \\ & =\left\{ \Hamw(V_{1}^{k})\geq\frac{n}{2}-r_{k}+1\right\} , \end{align} where $r_{k}\dfn\frac{(n-k+1)}{2}+(n-k+1)^{\nicefrac{1}{2}+\nicefrac{\rho}{3}}$. Let us analyze $\m(V_{k}|V_{1}^{k-1}=v_{1}^{k-1})$ for $1\leq k\leq m_{n}$ when $v_{1}^{k-1}\in{\cal A}_{k-1}$. Conditioning on $v_{1}^{k-1}\in{\cal A}_{k-1}$, we have that $V_{k}^{n}$ is a $t$-majority vector of length $n-k+1\geq n-m_{n}+1$, and its threshold is less than \[ t\leq\frac{r_{k}}{n-k+1}=\frac{1}{2}+\frac{1}{(n-k+1)^{\nicefrac{1}{2}-\nicefrac{\rho}{3}}}. \] Let $P_{k}\dfn\P[V_{k}=1|V_{1}^{k-1}]$. Assuming that $n$ is sufficiently large, Lemma \ref{lem:marginal of t-majority} (with $\eta<\frac{\rho}{3}$) implies that conditioned on the event $V^{k-1}\in{\cal A}_{k}$ \begin{align} \frac{1}{2}\leq P_{k} & \leq\frac{1}{2}+\frac{1}{(n-m_{n}+1)^{\nicefrac{1}{2}-\nicefrac{\rho}{3}}}+O_{\eta}\left(\frac{1}{(n-m_{n}+1)^{\nicefrac{1}{2}-\eta}}\right)\nonumber \\ & \leq\frac{1}{2}+O_{\eta}\left(\frac{1}{n^{\nicefrac{1}{2}-\nicefrac{\rho}{3}}}\right) \end{align} for all $k\in[n-m_{n}]$, and $n$ sufficiently large. Consequently, \[ \m(V_{k}|V_{1}^{k-1}=v_{1}^{k-1})=P_{k}(1-P_{k})\geq\frac{1}{4}-O_{\eta}\left(\frac{1}{n^{1-\nicefrac{2\rho}{3}}}\right). \] As in Lemma \ref{lem: exp small probability} (when replacing $m_{n}$, the maximal value of $k$, with a maximal value of $n-m_{n}$), there exists $c>0$ such that \[ \P\left[V^{k-1}\not\in{\cal A}_{k-1}\right]\leq2^{-c(n-m_{n})^{\nicefrac{2\rho}{3}}} \] for all $k\in[m_{n}]$, and then \begin{align} \sum_{k=1}^{m_{n}}\m(V_{k}|V_{1}^{k-1}) & \geq\sum_{k=1}^{m_{n}}\sum_{v^{k-1}\in{\cal A}_{k-1}}\P\left[V^{k-1}=v^{k-1}\right]\m(V_{k}|V_{1}^{k-1}=v^{k-1})\nonumber \\ & \geq\sum_{k=1}^{m_{n}}\left[1-2^{-c(n-m_{n})^{\nicefrac{2\rho}{3}}}\right]\left[\frac{1}{4}-O_{\eta}\left(\frac{1}{n^{1-\nicefrac{2\rho}{3}}}\right)\right]\nonumber \\ & \geq\frac{m_{n}}{4}-o_{\eta}(1). 
\end{align} \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: MMSE of end of vector}] Let us define the event \[ {\cal B}_{k}\dfn\left\{ \Hamw(V_{1}^{k})\geq\frac{n}{2}+1\right\} . \] As in the proof of Lemma \ref{lem: entropy of partial vector}, \begin{align} \P\left[V^{k}\in{\cal B}_{k}\right] & \geq\P\left[\Hamw(V_{1}^{n-m_{n}})\geq\frac{n}{2}+1\right]\nonumber \\ & \geq1-2\sqrt{\frac{4}{\pi(n-m_{n})}}m_{n}\nonumber \\ & =1-O\left(n^{-\nicefrac{1}{4}-\rho}\right) \end{align} for all $k\in\{n-m_{n}+1,\ldots,n\}$. Conditioned on $v_{1}^{k-1}\in{\cal B}_{k}$, all the suffixes $v_{k}^{n}$ are possible in order to obtain a majority vector, and hence $\P[V_{k}=1|V_{1}^{k-1}=v_{1}^{k-1}]=\frac{1}{2}$. Then, \begin{align} \sum_{k=n-m_{n}+1}^{n}\m(V_{k}|V_{1}^{k-1}) & \geq\sum_{k=n-m_{n}+1}^{n}\sum_{v_{1}^{k-1}\in{\cal B}_{k-1}}\P\left[V_{1}^{k-1}=v_{1}^{k-1}\right]\m(V_{k}|V_{1}^{k-1}=v_{1}^{k-1})\nonumber \\ & =\sum_{k=n-m_{n}+1}^{n}\left[1-O\left(n^{-\nicefrac{1}{4}-\rho}\right)\right]\cdot\frac{1}{4}\nonumber \\ & \geq\frac{m_{n}}{4}-O\left(\frac{1}{n^{2\rho}}\right)\nonumber \\ & \geq\frac{m_{n}}{4}-o(1). \end{align} \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: output entropy for majority}] The entropy is bounded as \begin{align} H(Y^{n}|\maj(X^{n})=1) & \trre[=,a]H(Y^{n}|\maj(X^{n}))\nonumber \\ & =H(\maj(X^{n})|Y^{n})+H(Y^{n})-H(\maj(X^{n}))\nonumber \\ & =H(\maj(X^{n})|Y^{n})+n-1\nonumber \\ & \trre[\leq,b]H(\maj(X^{n})|\maj(Y^{n}))+n-1\nonumber \\ & \trre[\leq,c]\binent\left[\P\left(\maj(X^{n})=\maj(Y^{n})\right)\right]+n-1\nonumber \\ & \trre[\leq,d]\mu(\alpha)+n-1+o(1),\label{eq: entropy of majority} \end{align} where $(a)$ follows from symmetry, $(b)$ from the data processing theorem, $(c)$ is from Fano's inequality, and $(d)$ is from \cite[Theorem 2.45]{Bool_book}. \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem: exact entropy of majority}] The proof of is based on the Gaussian approximation of the binomial distribution using the Berry-Essen central limit theorem. For simplicity, we assume that $n$ is odd, but the proof can be easily generalized to any $n$. We begin by denoting \[ a(y^{n})\dfn\P[\maj(X^{n})=1|Y^{n}=y^{n}], \] we then writing \[ H(\maj(X^{n})|Y^{n})=\E\left\{ \binent\left[a(Y^{n})\right]\right\} . \] Since $Y^{n}$ is the output of a uniform Bernoulli random vector $X^{n}$ through a BSC with crossover probability $\alpha$, then $Y^{n}=X^{n}+Z^{n}$ where $Z^{n}\sim\mbox{Bern(\ensuremath{\alpha})}$. Equivalently, we also have $X^{n}=Y^{n}+Z^{n}$, where $Y^{n}$ is a uniform Bernoulli random vector, and $Z^{n}$ and $Y^{n}$ are independent. We next use the Berry-Essen central limit theorem \cite[Chapter XVI.5, Theorem 2]{Feller} to evaluate $a(y^{n})$. To this end, note that $\E[Z_{i}-\alpha]=0$, $\E[(Z_{i}-\alpha)^{2}]=\alpha(1-\alpha)$, and $\E[|Z_{i}-\alpha|^{3}]=\alpha(1-\alpha)\left[\alpha^{2}+(1-\alpha)^{2}\right]<\infty$. 
Then, \begin{align} a(y^{n}) & =\P\left[\Hamw(y^{n}+Z^{n})>\frac{n}{2}\right]\nonumber \\ & =\P\left[\sum_{i\in[n]:\; y_{i}=0}Z_{i}+\sum_{i\in[n]:\; y_{i}=1}(1-Z_{i})>\frac{n}{2}\right]\nonumber \\ & =\P\left\{ \sum_{i\in[n]:\; y_{i}=0}(Z_{i}-\alpha)+\sum_{i\in[n]:\; y_{i}=1}(\alpha-Z_{i})>(1-2\alpha)\left[\frac{n}{2}-\Hamw(y^{n})\right]\right\} \nonumber \\ & =\P\left\{ \frac{1}{\sqrt{n\alpha(1-\alpha)}}\left(\sum_{i\in[n]:\; y_{i}=0}(Z_{i}-\alpha)+\sum_{i\in[n]:\; y_{i}=1}(\alpha-Z_{i})\right)>\frac{(1-2\alpha)}{\sqrt{n\alpha(1-\alpha)}}\cdot\left[\frac{n}{2}-\Hamw(y^{n})\right]\right\} \nonumber \\ & \dfn\P\left\{ S_{n}>\frac{(1-2\alpha)}{\sqrt{n\alpha(1-\alpha)}}\cdot\left[\frac{n}{2}-\Hamw(y^{n})\right]\right\} , \end{align} where $S_{n}$ was implicitly defined. Now, the Berry-Essen central limit theorem implies that for some $C_{\alpha}$ \[ \sup_{s\in\mathbb{R}}\left|\P\left[S_{n}>s\right]-\P\left[G>s\right]\right|\leq\frac{C_{\alpha}}{\sqrt{n}}, \] where $G\sim{\cal N}(0,1)$. Further, \cite[Lemma 2.7]{csiszar2011information} provides a bound on the difference in the entropy of two probability distributions, in terms of the total variation distance between them. In our case, this implies that for all $n$ sufficiently large, \[ \sup_{s\in\mathbb{R}}\left|\binent\left(\P\left[S_{n}>s\right]\right)-\binent\left(\P\left[G>s\right]\right)\right|\leq-\frac{2C_{\alpha}}{\sqrt{n}}\ln\left(\frac{C_{\alpha}}{\sqrt{n}}\right)=o(1). \] Then, denoting \[ H_{n}\dfn\frac{(1-2\alpha)}{\sqrt{n\alpha(1-\alpha)}}\cdot\left[\frac{n}{2}-\Hamw(y^{n})\right] \] we have \begin{align} H(\maj(X^{n})|Y^{n}) & =\E\left\{ \binent\left[a(Y^{n})\right]\right\} \nonumber \\ & =\E\left\{ \binent\left(\P\left[S_{n}>H_{n}\right]\right)\right\} \nonumber \\ & =\E\left\{ \binent\left(\P\left[G>H_{n}\right]\right)\right\} +o(1)\nonumber \\ & =\E\left\{ \binent\left[Q(|H_{n}|)\right]\right\} +o(1) \end{align} where $Q(\cdot)$ is the Gaussian Q-function, and in the last equality we have used the facts that $Q(t)=1-Q(|t|)$ for $t<0$, and $\binent(p)=\binent(1-p)$. Now, applying the central limit theorem once again, we have that $H_{n}\Rightarrow\frac{(1-2\alpha)}{\sqrt{4\alpha(1-\alpha)}}\cdot G$, as $n\to\infty$, in distribution. To complete the proof, we note that since $\binent\left[Q(|t|)\right]$ is a bounded and continuous function of $t$, Portmanteau's lemma (e.g. \cite[Chapter VIII.1, Theorem 1]{Feller}) implies that \[ \E\left\{ \binent\left[Q(|H_{n}|)\right]\right\} \to\E\left\{ \binent\left[Q\left(\frac{|(1-2\alpha)G|}{\sqrt{4\alpha(1-\alpha)}}\right)\right]\right\} , \] as $n\to\infty$, concluding the proof. \end{IEEEproof} \begin{IEEEproof}[Proof of Lemma \ref{lem:approximated entropy of majority}] Let us denote $\alpha=\frac{1}{2}-\gamma$ for $\gamma\in(0,\frac{1}{2})$, and then let us inspect \[ \E\left\{ \binent\left[Q(\Gamma)\right]\right\} \dfn\E\left\{ \binent\left[Q\left(\frac{\left|G\right|\gamma}{\sqrt{(\frac{1}{2}-\gamma)(\frac{1}{2}+\gamma)}}\right)\right]\right\} \] as $\gamma\downarrow0$. Using Leibniz's integral rule, we obtain $Q'(t)=-\frac{1}{\sqrt{2\pi}}e^{-\nicefrac{t^{2}}{2}}$, $Q''(t)=\frac{t}{\sqrt{2\pi}}\cdot e^{-\nicefrac{t^{2}}{2}}$ and so, there exists $\overline{c}>0$ such that for all $t\geq0$ \[ Q(t)\geq\frac{1}{2}-\frac{t}{\sqrt{2\pi}}. \] Similarly, there exists $\tilde{c},s_{1}>0$ such that for all $s\in(0,s_{1})$ \[ \binent\left(\frac{1}{2}-s\right)\geq1-\frac{2}{\ln2}s^{2}-\tilde{c}s^{4}. 
\] Hence, for all sufficiently small $t>0$ \begin{align} \binent[Q(t)] & =\binent\left[\frac{1}{2}-\left(\frac{1}{2}-Q(t)\right)\right]\nonumber \\ & \geq1-\frac{2}{\ln2}\left(\frac{1}{2}-Q(t)\right)^{2}-\tilde{c}\left(\frac{1}{2}-Q(t)\right)^{4}\nonumber \\ & \geq1-\frac{1}{\pi\cdot\ln2}t^{2}-\frac{\tilde{c}}{4\pi^{2}}t^{4}. \end{align} So, there exists $\hat{c}>0$ such that for all sufficiently small $\gamma$, \begin{align} & \hphantom{=}\E\left\{ \binent\left[Q\left(\Gamma\right)\right]\right\} \nonumber \\ & \geq\P\left[\left|G\right|\leq\gamma^{-1+\rho}\right]\cdot\E\left\{ \binent\left[Q\left(\Gamma\right)\right]\vert\left|G\right|\leq\gamma^{-1+\rho}\right\} \nonumber \\ & \geq\P\left[\left|G\right|\leq\gamma^{-1+\rho}\right]\cdot\E\left\{ 1-\frac{1}{\pi\cdot\ln2}\Gamma^{2}-\frac{\tilde{c}}{4\pi^{2}}\Gamma^{4}\vert\left|G\right|\leq\gamma^{-1+\rho}\right\} \nonumber \\ & =\int_{-\gamma^{-1+\rho}}^{\gamma^{-1+\rho}}\frac{1}{\sqrt{2\pi}}e^{-\nicefrac{t^{2}}{2}}\cdot\left[1-\frac{1}{\pi\cdot\ln2}\left(\frac{\gamma^{2}t^{2}}{(\frac{1}{2}-\gamma)(\frac{1}{2}+\gamma)}\right)-\frac{\tilde{c}}{4\pi^{2}}\left(\frac{\gamma^{4}t^{4}}{(\frac{1}{2}-\gamma)^{2}(\frac{1}{2}+\gamma)^{2}}\right)\right]\cdot dt\nonumber \\ & =1-2Q(\gamma^{-1+\rho})-\int_{-\gamma^{-1+\rho}}^{\gamma^{-1+\rho}}\frac{1}{\sqrt{2\pi}}e^{-\nicefrac{t^{2}}{2}}\cdot\left[\frac{1}{\pi\cdot\ln2}\left(\frac{\gamma^{2}t^{2}}{(\frac{1}{2}-\gamma)(\frac{1}{2}+\gamma)}\right)+\frac{\tilde{c}}{4\pi^{2}}\left(\frac{\gamma^{4}t^{4}}{(\frac{1}{2}-\gamma)^{2}(\frac{1}{2}+\gamma)^{2}}\right)\right]\cdot dt\nonumber \\ & \geq1-2Q(\gamma^{-1+\rho})-\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\nicefrac{t^{2}}{2}}\cdot\left[\frac{1}{\pi\cdot\ln2}\left(\frac{\gamma^{2}t^{2}}{(\frac{1}{2}-\gamma)(\frac{1}{2}+\gamma)}\right)+\frac{\tilde{c}}{4\pi^{2}}\left(\frac{\gamma^{4}t^{4}}{(\frac{1}{2}-\gamma)^{2}(\frac{1}{2}+\gamma)^{2}}\right)\right]\cdot dt\nonumber \\ & =1-2Q(\gamma^{-1+\rho})-\frac{1}{\pi\cdot\ln2}\left(\frac{\gamma^{2}}{(\frac{1}{2}-\gamma)(\frac{1}{2}+\gamma)}\right)-\frac{\tilde{c}}{4\pi^{2}}\left(\frac{3\gamma^{4}}{(\frac{1}{2}-\gamma)^{2}(\frac{1}{2}+\gamma)^{2}}\right)\nonumber \\ & \trre[\geq,a]1-\frac{1}{\pi\cdot\ln2}\left(\frac{\gamma^{2}}{(\frac{1}{2}-\gamma)(\frac{1}{2}+\gamma)}\right)-\hat{c}\gamma^{4}, \end{align} where $(a)$ is since for any $\rho\in(0,1)$, using $Q(t)\leq\frac{1}{t}\cdot e^{-\nicefrac{t^{2}}{2}}$ we have \begin{equation} \P\left[\left|G\right|\geq\gamma^{-1+\rho}\right]=2Q(\gamma^{-1+\rho})\leq2\gamma^{1-\rho}\cdot\exp\left(-\frac{1}{2\gamma^{2-2\rho}}\right).\label{eq: upper bound on Q function} \end{equation} \end{IEEEproof} \section{Useful Results} \begin{lem}[{\cite[Lemma 17.5.1]{Cover:2006:EIT:1146355}}] \label{lem: binomial and entropy} For $0<\alpha<1$ such that $n\alpha$ is integer \[ \frac{2^{n\binent(\alpha)}}{\sqrt{8n\alpha(1-\alpha)}}\leq{n \choose n\alpha}\leq\frac{2^{n\binent(\alpha)}}{\sqrt{\pi n\alpha(1-\alpha)}}. \] \end{lem} \begin{lem}[{\cite[Lemma 1]{cover1996universal}}] \label{lem: max bound on fraction}If $\{a_{i}\}_{i=1}^{n}$ and $\{b_{i}\}_{i=1}^{n}$ are all non-negative numbers, then \[ \frac{\sum_{i=1}^{n}a_{i}}{\sum_{i=1}^{n}b_{i}}\leq\max_{1\leq i\leq n}\frac{a_{i}}{b_{i}}. \] \end{lem} \begin{cor} Under the conditions above and for any integer $l>0$, \[ \frac{\sum_{i=1}^{n-l}a_{i}}{\sum_{i=1}^{n}b_{i}}\leq\max_{1\leq i\leq n-l}\frac{a_{i}}{b_{i}}. \] This can be obtained by replacing $a_{i}$ with $0$ for $n-l+1\leq i\leq n$. \end{cor} \bibliographystyle{plain}
{ "timestamp": "2016-07-11T02:08:20", "yymm": "1607", "arxiv_id": "1607.02381", "language": "en", "url": "https://arxiv.org/abs/1607.02381", "abstract": "Suppose $Y^{n}$ is obtained by observing a uniform Bernoulli random vector $X^{n}$ through a binary symmetric channel. Courtade and Kumar asked how large the mutual information between $Y^{n}$ and a Boolean function $\\mathsf{b}(X^{n})$ could be, and conjectured that the maximum is attained by a dictator function. An equivalent formulation of this conjecture is that dictator minimizes the prediction cost in a sequential prediction of $Y^{n}$ under logarithmic loss, given $\\mathsf{b}(X^{n})$. In this paper, we study the question of minimizing the sequential prediction cost under a different (proper) loss function - the quadratic loss. In the noiseless case, we show that majority asymptotically minimizes this prediction cost among all Boolean functions. We further show that for weak noise, majority is better than dictator, and that for strong noise dictator outperforms majority. We conjecture that for quadratic loss, there is no single sequence of Boolean functions that is simultaneously (asymptotically) optimal at all noise levels.", "subjects": "Information Theory (cs.IT)", "title": "On the Optimal Boolean Function for Prediction under Quadratic Loss", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759660443166, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405647130789 }
https://arxiv.org/abs/1703.02415
Counting Permutations that Avoid Many Patterns
This paper presents a collection of experimental results regarding permutation pattern avoidance, focusing on cases where there are "many" patterns to be avoided.
\section{Preamble} Excellent introductions to the subject of {\it permutation patterns} can be found in \cite{B}, as well as in the wikipedia entry. In order to make the present article self-contained, let's review the basic definitions. A permutation $\pi \in \mathfrak{S}_n$ is said to {\itshape contain a copy} of $\sigma \in \mathfrak{S}_k$ if there is a subsequence of $\pi$ that is order isomorphic to $\sigma$. For example, the permutation $\pi=219378645$ contains a copy of $\sigma=1432$, because the subsequence $2975$ is order isomorphic to $1432$. We call $\sigma$ a {\itshape pattern}, we say that $\pi$ {\itshape avoids} the pattern $\sigma$ if no such subsequence exists, and we define the {\itshape permutation avoidance class} to be $Av_n(\sigma)=\{\pi \in \mathfrak{S}_n \text{ $|$ } \pi \text{ avoids } \sigma \}$. In the case where we wish to avoid an entire set of patterns of arbitrary lengths, say $\Sigma$, we define $Av_n(\Sigma)=\cap_{\sigma \in \Sigma}Av_n(\sigma)$. Finally, we say that two sets of patterns $\Sigma_1$ and $\Sigma_2$ are {\itshape Wilf-equivalent} provided that $|Av_n(\Sigma_1)|=|Av_n(\Sigma_2)|$ for all $n\geq 0$. \vspace{.75pc} \section{Pattern Avoidance via Templates} \vspace{.75pc} One approach to finding the sizes of permutation avoidance classes is to construct easily enumerated sets and then see if these sets avoid any interesting patterns. In this section, we develop a method of generating sets of permutations using {\bf templates} which both avoid certain patterns, and grow quickly as the lengths of the permutations increase. We will define two kinds of templates, but first will try to motivate their definition with a well-known proof of the well-known fact that the number of permutations of length $n$ which avoid the pattern 132, a quantity which we will call $B_n$, is equal to $C_n$, then $n^{th}$ Catalan number. \begin{theorem}\label{123-avoid} The number of $132$-avoiding permutations of length $n$ is given by $C_n$. \end{theorem} \begin{proof} The proof is by induction. When $n=0$, it is clear that $B_n=1$, so suppose that $B_m=C_m$ for all $m<n$. Consider a length-$n$ permutation $\pi$, and suppose that $n$ appears in position $i$. If $\pi$ avoids 132, it follows that the $i-1$ numbers which proceed $n$ must all be greater than all the $n-i$ numbers which follow $n$, and, moreover, the prefix of $\pi$ formed by the first $i-1$ numbers and the suffix formed by the last $n-i$ numbers must both avoid 132 themselves. Conversely, if these two conditions are met, then $\pi$ avoids 132. Any instance of 132 cannot have the 1 and the 2 on opposite sides of the number $n$ because every number preceding $n$ is greater than every number following it, but any instance of 132 also cannot have the 1 and the 2 on the same side of $n$ because both the prefix preceding $n$ and the suffix following $n$ avoid 132 (and, obviously, neither 1 nor 2 can be represented by $n$). It follows by induction that $B_n=\sum_{i=1}^n B_{i-1}\cdot B_{n-i}$ for all $n\ge 0$; since $B_n$ has the same initial condition as $C_n$ and follows the same recurrence, we conclude that $B_n=C_n$ for all $n\ge 0$. \end{proof} In this proof, we showed that every 132-avoiding permutation of length $n$ has the form $LnS$ where $L$ and $S$ are 132-avoiding permutations such that every number in $L$ is larger than every number in $S$. We will generalize this idea in the following definition. \begin{definition} A {\bf template} of length $t\ge 1$ is a pair of strings $P$ and $B$ of length $t$. 
We require that $P$ be a permutation of length $t$ and $B$ be a binary string of length $t$. We will denote the $i^{th}$ element of $P$ by $p_i$ and the $i^{th}$ element of $B$ by $b_i$. \end{definition} For every positive integer $n$ and template $T=(P, B)$, we define a set of permutations of length $n$, which we will call $R_{n,T}$, as follows. First, $R_{0,T}$ contains only the empty permutation and $R_{1,T}=\{1\}$ regardless of $T$. Then, $R_{n,T}$ is the set of permutations $\pi$ of length $n$ which can be divided into subwords (i.e. strings of consecutive elements of $\pi$) called $W_1,...,W_t$ (with $t=|P|=|B|$) such that if $p_i > p_j$, then every element of $W_i$ is greater than every element of $W_j$. Moreover, we require that each $W_i$ of length $l$ be an element of $R_{l,T}$, and, if $b_i=0$, then $W_i$ must have exactly one element. If these conditions are met, we say that $W_1,...,W_t$ {\bf fit} the template $T$, so a permutation of length $n$ is an element of $R_{n,T}$ if it can be decomposed into subwords which fit $T$. We now provide an example of the sets generated by a template. \begin{example} Let $T=(231,101)$; then $R_{1,T}=\{1\}, R_{2,T}=\{12,21\},$ and $R_{3,T}=\{123,213,231,312,321\}$. To find the elements of $R_{3,T}$ we consider a permutation $\pi$ of length 3 and divide it up into subwords $W_1,W_2,W_3$. We know that $W_2$ is the string 3, and so we can choose $W_1 \in R_{2,T}$ and $W_3$ empty, $W_3 \in R_{2,T}$ and $W_1$ empty, or $W_1,W_3 \in R_{1,T}$. Because $|R_{2,T}|=2$, each of the first two options gives two distinct permutations in $R_{3,T}$ (123, 213, 312, and 321), while the last option gives one permutation (231). Note that $R_{3,T}$ is exactly the set of length 3 permutations which avoid 132. In fact, for every $n$, $R_{n,T}$ is the set of length $n$ permutations which avoid 132; this fact can be checked by reviewing the proof of Theorem \ref{123-avoid}. Therefore, considering sets corresponding to templates does generalize the argument of Theorem \ref{123-avoid}. \end{example} \begin{example} Let $T=(21354,10101)$; then $R_{1,T}=\{1\}, R_{2,T}=\{12\}, R_{3,T}=\{123,132,213\}, R_{4,T}=\{1234,1243,1423,2134,2143,2314\}$. \end{example}
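These sets are easy to generate by machine. The following short Python sketch (ours, purely for illustration; the scripts accompanying this article are in Maple) implements the recursive definition of $R_{n,T}$ and reproduces both examples above. We read the definition as excluding the trivial decomposition that places the whole permutation in a single subword, since that decomposition contributes nothing new.
\begin{verbatim}
from functools import lru_cache
from itertools import product

def make_R(P, B):
    """R(n) = all length-n permutations (as tuples) fitting T = (P, B)."""
    t = len(P)

    def length_choices(i, rem):
        # all ways to assign lengths to W_1..W_t (singletons where b_i = 0)
        if i == t:
            if rem == 0:
                yield ()
            return
        opts = [1] if B[i] == "0" else range(rem + 1)
        for l in opts:
            if l <= rem:
                for rest in length_choices(i + 1, rem - l):
                    yield (l,) + rest

    @lru_cache(maxsize=None)
    def R(n):
        if n <= 1:
            return frozenset({()} if n == 0 else {(1,)})
        out = set()
        for ls in length_choices(0, n):
            if n in ls:   # skip the trivial one-subword decomposition
                continue
            # the i-th block holds the values just above all blocks whose
            # P-value is smaller (each block holds consecutive values)
            off = [sum(l for j, l in enumerate(ls) if P[j] < P[i])
                   for i in range(t)]
            shifted = [[tuple(v + off[i] for v in w) for w in R(ls[i])]
                       for i in range(t)]
            for combo in product(*shifted):
                out.add(sum(combo, ()))
        return frozenset(out)

    return R

R = make_R((2, 3, 1), "101")
print(sorted(R(3)))   # [(1,2,3), (2,1,3), (2,3,1), (3,1,2), (3,2,1)]
R = make_R((2, 1, 3, 5, 4), "10101")
print(sorted(R(4)))   # the six permutations of the second example
\end{verbatim}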
Once we begin looking at permutations with length greater than 3 it becomes much harder (and likely impossible) to find templates which produce entire pattern avoidance classes. However, it is not too difficult to find templates which produce only permutations avoiding some set of patterns, which is to say subsets of pattern avoidance classes. Therefore, looking at templates lets us find lower bounds on the size of certain avoidance classes. The following proposition shows an application of this method. \begin{prop} Let $Q_n$ be the set of all permutations of $n$ which avoid every element of $\{2143, 2413, 3142\}$ and let $q_n=|Q_n|$. Then, if the sequence $(r_n)_{n=0}^\infty$ is defined by $r_0=r_1=1$, and $r_n= \sum_{i=1}^{n-1}\sum_{j=i+1}^n r_{i-1}r_{j-i-1}r_{n-j}$ for $n>1$, it holds that $q_n \ge r_n$ for all $n$. \end{prop} \begin{proof} The proof is complicated and not especially enlightening, and Theorem \ref{single} will allow a computer to quickly prove the proposition (the last paragraph of this proof, which is simple and straightforward, is still necessary). This proof is included to illustrate the headache that Theorem \ref{single} will help alleviate. The main step of the proof is to show that $Q_n$ contains $R_{n, T}$ where $T=(45312,10101)$. We will show that every permutation in $R_{n,T}$ avoids 2143, 2413, and 3142. First, note that in 2413 and 3142, no proper subword of length at least 2 consists of consecutive numbers. When we divide a permutation into subwords to fit the template, each subword must contain only consecutive numbers. Thus we can conclude that if one of these patterns is present in a permutation in $R_{n,T}$, then it is contained entirely in a single subword or each of its elements is in a different subword. The second case cannot occur because the permutation 45312 avoids both patterns. To see that the first case cannot occur, suppose by way of contradiction that it does, and pick $n$ minimally so that a permutation of $R_{n,T}$ contains one of the two patterns under consideration. When we divide up this permutation into subwords so that it fits into $T$, we must choose some subword to contain the pattern, but then this subword is a shorter permutation which contains the pattern, providing a contradiction. This shows that every permutation in $R_{n,T}$ avoids 2413 and 3142. Next we will see that every permutation also avoids 2143. Again suppose by way of contradiction that there is a permutation in $R_{n,T}$ which contains 2143, and pick $n$ minimally so that this occurs. Then, if we divide up the permutation into 5 subwords, $W_1,...,W_5$, which fit the template $T$, the occurrence of 2143 cannot be contained entirely in any one subword. Therefore, $W_1$ either contains no part of the occurrence, contains the 2, or contains the 21. In the first two cases $W_2$ must not contain any part of the occurrence either; it cannot contain the 2 or the 1 because it is the largest element of the permutation. If $W_1$ was empty, then we must fit 2143 into $W_3W_4W_5$, which is impossible because either the 2 will go in $W_3$ even though each element of $W_3$ must be greater than each element of $W_4$ and $W_5$, or else we would need to fit 143 into $W_5$, which can't happen because they are not consecutive integers (the 2 is missing). If $W_1$ contained the 2, then $W_2$ must contain both the 4 and the 3, because all the elements of every other subword must be less than the elements of $W_1$; this is impossible since $W_2$ has exactly one element. The case where $W_1$ contains the 21 is identical: the 4 and the 3 would again have to go in $W_2$. Therefore, the permutations in $R_{n,T}$ avoid 2143, and so $R_{n,T} \subseteq Q_n$. Now, we just need to show that $|R_{n,T}|=r_n$. First, it follows from the definition of $R_{n,T}$ that $|R_{0,T}|=|R_{1,T}|=1$. Then, for a permutation in $R_{n,T}$, say that $n$ occurs at position $i$ and 1 at position $j$; the template forces $1\le i \le n-1$ and $i+1 \le j \le n$. Then, $W_1$ can be any of the $r_{i-1}$ elements of $R_{i-1,T}$, $W_3$ can be any of the $r_{j-i-1}$ elements of $R_{j-i-1,T}$, and $W_5$ can be any of the $r_{n-j}$ elements of $R_{n-j,T}$. Therefore, $|R_{n,T}|$ satisfies $|R_{n,T}|= \sum_{i=1}^{n-1}\sum_{j=i+1}^n r_{i-1}r_{j-i-1}r_{n-j}$ for $n>1$, which is the same recurrence and initial condition as $(r_n)$, so $|R_{n,T}|=r_n$ for all $n$. \end{proof} While this recurrence for $(r_n)$ is reminiscent of the Catalan recurrence, it does not appear to have a similarly nice closed form solution. Fortunately, it is possible to prove results of this kind experimentally without the need for detailed write-ups.
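For instance, both claims in the proof above can be checked mechanically for small $n$. The following Python sketch (again ours, and assuming the function \texttt{make\_R} from the previous sketch is in scope) compares the recurrence $(r_n)$ against a brute-force enumeration of $R_{n,T}$ for $T=(45312,10101)$, and verifies avoidance of 2143, 2413, and 3142.
\begin{verbatim}
from itertools import combinations

def contains(pi, sigma):
    """Brute-force pattern containment test."""
    k = len(sigma)
    return any(all((sub[a] < sub[b]) == (sigma[a] < sigma[b])
                   for a in range(k) for b in range(a + 1, k))
               for sub in combinations(pi, k))

PATTERNS = [(2, 1, 4, 3), (2, 4, 1, 3), (3, 1, 4, 2)]

def r(n, memo={0: 1, 1: 1}):  # the recurrence from the proposition
    if n not in memo:
        memo[n] = sum(r(i - 1) * r(j - i - 1) * r(n - j)
                      for i in range(1, n) for j in range(i + 1, n + 1))
    return memo[n]

R = make_R((4, 5, 3, 1, 2), "10101")  # from the previous sketch
for n in range(9):
    perms = R(n)
    assert len(perms) == r(n)
    assert not any(contains(p, s) for p in perms for s in PATTERNS)
    print(n, r(n))
\end{verbatim}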
The following theorem establishes a sufficient condition for $R_{n,T}$ to avoid a set of patterns which is independent of $n$, and so can be tested for all $n$ at once using a computer. \begin{theorem}\label{single} Let $T=(P,B)$ be a template, let $B$ have $k$ 0's, and let $\sigma$ be a pattern of length $l>0$. Then, if there exists an $n$ such that $R_{n,T}$ contains a permutation which has $\sigma$ as a pattern, there also exists an $m \le (l-1)(k+1)+1$ such that $R_{m,T}$ also contains a permutation which has $\sigma$ as a pattern. \end{theorem} \begin{proof} This theorem is an immediate corollary of Theorem \ref{multi}, and, while it can be proved separately, the proof is almost identical to that of Theorem \ref{multi}, so we omit it. \end{proof} With this result in hand, our laptop was able to prove Proposition 4 in 16 seconds using Maple. There is no particular reason to consider templates just one at a time. Analogously to how we originally defined templates, we define the set of length $n$ permutations corresponding to the set of templates ${\mathcal {T}}=\{T_1,...,T_r\}$. We will call this set of permutations $S_{n, \mathcal{T}}$, and define it recursively as follows. First, $S_{0,\mathcal T}$ contains only the empty permutation and $S_{1,\mathcal T}=\{1\}$ regardless of $\mathcal T$. Then, $S_{n,\mathcal T}$ is the set of permutations $\pi$ of length $n$ such that, for some $T=(P,B) \in \mathcal T$, we can divide $\pi$ into subwords $W_1,...,W_t$ such that if $p_i > p_j$, then every element of $W_i$ is greater than every element of $W_j$. Moreover, we require that each $W_i$ of length $l$ be an element of $S_{l,\mathcal T}$ (rather than of $R_{l,T}$), and, if $b_i=0$, then $W_i$ must have exactly one element. We will finish this section by proving a generalization of Theorem \ref{single} for sets of templates, and giving an example of its application. \begin{theorem} \label{multi} Let ${\cal T} = \{(P_1, B_1),...,(P_r,B_r)\}$ be a set of templates, suppose that for all $i$, $B_i$ has no more than $k$ 0's, and let $\sigma$ be a pattern of length $l>0$. Then, if there exists an $n$ such that $S_{n,\cal T}$ contains a permutation which has $\sigma$ as a pattern, there also exists an $m \le (l-1)(k+1)+1$ such that $S_{m, \cal T}$ also contains a permutation which has $\sigma$ as a pattern. \end{theorem} \begin{proof} Fix $k$ and $n$; we proceed by induction on $l$. If $l=1$, then $\sigma$ is the pattern 1 and is contained in the permutation 1, which is an element of $S_{1,\cal T}$. Assume that the theorem holds for patterns of length up to $l-1$. Now let $\pi' \in S_{n, \cal T}$ be a permutation which contains $\sigma$ as a pattern, and pick some occurrence of $\sigma$ in $\pi'$. We can choose $T = (P,B) \in \cal T$ and divide $\pi'$ into subwords $W'_1,...,W'_t$ (where $t=|P|$) such that the $W'_i$ fit the template $T$. We can similarly divide $\sigma$ into subwords $U_1, ..., U_t$ so that $U_i$ is the portion of the chosen occurrence of $\sigma$ which lies in $W'_i$. If there exists $i$ such that only $U_i$ is nonempty, then $W'_i$ contains $\sigma$ and is shorter than $\pi'$, so set $\pi'=W'_i$ and repeat the decomposition for the new $\pi'$. Repeat until either at least two $U_i$ are nonempty or $|\pi'| \le (l-1)(k+1)+1$. In the second case we are done, so assume that the first case holds. We will now find $m$ and construct a permutation $\pi \in S_{m,\cal T}$ which contains $\sigma$. Like $\pi'$, we need to be able to divide $\pi$ into $W_1,...,W_t$ to fit $T$, so we will construct the $W_i$ individually. For each $i$, let $u_i=|U_i|$. By the induction hypothesis, there exist $W_i$ such that $|W_i| =w_i \le (u_i-1)(k+1)+1$, $W_i \in S_{w_i,\cal T}$, and $U_i$ is a pattern in $W_i$. It may be that for some $i$, $W_i$ is empty even though $b_i=0$; if this is the case, we must add up to $k$ new $W_i$ to ensure that each $W_i$ has length 1 whenever $b_i=0$.
Lastly, we rescale values so that the blocks respect the order dictated by $P$: choose $i$ so that $p_i=1$ and $j$ so that $p_j=2$, and increase every element of $W_j$ by the same amount so that every element of $W_j$ is greater than every element of $W_i$; we then repeat this for each successive pair of value-adjacent blocks, i.e. for the blocks with $p$-values 2 and 3, then 3 and 4, and so on up to $t$. Now, concatenating all the $W_i$ gives a permutation $\pi$ of length $m$ in $S_{m, \cal T}$ which contains the pattern $\sigma$. It remains to show that $m \le (l-1)(k+1)+1$. Let $I=\{i : u_i>0\}$; using the construction of $\pi$ and the induction hypothesis, we find that $m \le \sum_{i\in I} ((u_i-1)(k+1)+1) + k = (k+1)\Big(\sum_{i\in I} u_i\Big) - k\cdot |I|+k=(k+1)l-k|I|+k \le (k+1)(l-1)+1$ because we found at the end of the first paragraph that $|I| \ge 2$. Therefore, the proof is complete by induction. \end{proof} Theorem \ref{multi} can give lower bounds on the sizes of many sets of avoidance classes. As an example, we offer the following proposition: \begin{prop} Let $Q_n$ be the set of all permutations of $n$ which avoid every element of $\{2341, 2413, 2431, 3241\}$ and let $q_n=|Q_n|$. Then, if the sequence $(s_n)_{n=0}^\infty$ is defined by $s_0=s_1=1$, and $s_n= \sum_{i=1}^{n-1}\sum_{j=i+1}^n 2 \cdot s_{i-1}s_{j-i-1}s_{n-j}$ for $n>1$, it holds that $q_n \ge s_n$ for all $n$. \end{prop} \begin{proof} Let $T_1=(14253, 10101), T_2=(15243,10101),$ and ${\cal T}=\{T_1,T_2\}$. Using Maple, one can generate $S_{n,\cal T}$ for $1 \le n \le 10$ and confirm that every permutation in each of these sets avoids 2341, 2413, 2431, and 3241 (we did this on a laptop in less than 7 minutes). Because these patterns all have length 4, both $B_1$ and $B_2$ have two 0's, and $(4-1)\cdot (2+1)+1=10$, Theorem \ref{multi} promises that, for all $n$, every permutation in $S_{n,\cal T}$ avoids 2341, 2413, 2431, and 3241. Now we show that $|S_{n, \cal T}| = s_n$ by induction. Certainly $|S_{0, \cal T}|=|S_{1, \cal T}|=1$. When picking a permutation in $S_{n, \cal T}$, we first choose whether this permutation will follow the template $T_1$ or $T_2$. This will not cause us to count any permutation twice because if a permutation has $n-1$ appear before $n$, then it can only follow $T_1$, and if it has $n$ appear before $n-1$ then it can only follow $T_2$. Now, for $T_1$, we must choose the location of $n-1$, call this $i$, and the location of $n$, call it $j$. For $T_2$, we will call the location of $n$ $i$ and the location of $n-1$ $j$. For either template, we have $1 \le i \le n-1$, $i+1 \le j \le n$. Once $i$ and $j$ are chosen, we can fill in the portion of the permutation before position $i$ in any of $s_{i-1}$ ways, the portion between positions $i$ and $j$ in $s_{j-i-1}$ ways, and the portion following position $j$ in $s_{n-j}$ ways. Therefore, $|S_{n,\cal T}|=\sum_{i=1}^{n-1}\sum_{j=i+1}^n 2 \cdot s_{i-1}s_{j-i-1}s_{n-j} = s_n$ for $n >1$. \end{proof}
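As a sanity check of the inequality $q_n \ge s_n$ (though not a proof of it), one can also compute $q_n$ exhaustively for small $n$ and compare it with the recurrence. A minimal Python sketch of such a check (ours, reusing the \texttt{contains} function from the earlier sketch):
\begin{verbatim}
from itertools import permutations

PATTERNS4 = [(2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1), (3, 2, 4, 1)]

def q(n):
    """|Av_n({2341, 2413, 2431, 3241})| by exhaustive search
    (contains() as in the earlier sketch)."""
    return sum(1 for pi in permutations(range(1, n + 1))
               if not any(contains(pi, s) for s in PATTERNS4))

def s(n, memo={0: 1, 1: 1}):  # the recurrence from the proposition
    if n not in memo:
        memo[n] = sum(2 * s(i - 1) * s(j - i - 1) * s(n - j)
                      for i in range(1, n) for j in range(i + 1, n + 1))
    return memo[n]

for n in range(1, 9):
    print(n, q(n), s(n), q(n) >= s(n))  # expect True in the last column
\end{verbatim}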
\vspace{.75pc} \section{Exhaustive Experimental Results for 4 Patterns of Length 4} \vspace{.75pc} In the study of permutation pattern avoidance there has been some interest both in enumerating specific classes avoiding relatively small sets of patterns and in determining the number of Wilf-equivalent classes on pattern sets of a particular form. Examples of such endeavors include Mikl\'os B\'ona's work enumerating the avoidance class of the pattern $\{1342\}$ \cite{CV1}, the well-known Erd\H{o}s-Szekeres theorem \cite{CV2} which proves the finiteness of classes avoiding the pattern set $\{12\ldots m,n\ldots 21\}$ for all $m,n\in\mathbb{N}$, and the fact that there are only three Wilf-equivalent avoidance classes for singleton sets of patterns of length 4, which can be derived from the work of B\'ona \cite{CV1} and Gessel \cite{CV3}, coupled with the so-called {\it West equivalence} \cite{We} proved by Julian West. Some of this work can be aided by experimental mathematics, particularly when searching for avoidance classes which might be enumerable by a specific archetype or when searching for the number of Wilf-equivalent classes. Using a small handful of Maple scripts, we computed the number of symmetry classes (collections of pattern sets which give rise to trivially Wilf-equivalent avoidance classes) and a lower bound for the number of Wilf-equivalent classes for sets of 4 patterns of length 4. Amongst these classes we also searched for those which appeared to be enumerable by polynomials, and found a satisfying number of them. This choice of 4 patterns of length 4 was arbitrary; there is no reason other than computational expense to limit the analysis to small cases of $m$ patterns of length $n$. There is also no reason beyond convenience to seek only those classes which appear to be polynomial in size. The reader interested in looking for other archetypes can add to the code provided with the project, HCRV.txt. Should the need arise, much of the process of computing these bounds can be parallelized, providing a significant speedup if there are cores to spare. In total, for 4 patterns of length 4, we found 1524 symmetry classes, at least 1100 Wilf-equivalent classes, and 60 classes that appeared to be enumerable by polynomials of degree between 4 and 7. Utilizing the Maple scripts in VATTER.txt it is possible, at least in principle, to come up with automated proofs for these apparently polynomial classes, though again computational resources are the bottleneck. In the table below, we list some pattern sets $\Sigma$ which seem to give rise to polynomially growing $|Av_n(\Sigma)|$, several terms of the sequences $|Av_n(\Sigma)|$, and the degrees of the conjectured polynomials (which may be reconstructed via interpolation). \newpage \begin{table}[ht] \caption{}\label{eqtable} \renewcommand\arraystretch{1.5} \noindent\[ \begin{array}{|c|c|c|} \hline \Sigma & |Av_n(\Sigma)| & \text{deg}(P(n)) \\ \hline \{1234,1243,1342,4231\}& 1,2,6,20,64,187,492,1170,2543,5116 & 6 \\ \hline \{1234,1243,1432,3412\}& 1, 2, 6, 20, 59, 148, 324, 638, 1157, 1966 & 5 \\ \hline \{1234,1243,2341,4231\}& 1, 2, 6, 20, 64, 184, 469, 1072, 2235, 4318 & 6 \\ \hline \{1234,1243,3241,3412\}& 1, 2, 6, 20, 58, 141, 297, 561, 975, 1588 & 4 \\ \hline \{1234,1324,2413,4231\}& 1, 2, 6, 20, 60, 159, 379, 827, 1675, 3184 & 6 \\ \hline \{1234,1342,1423,3421\}& 1, 2, 6, 20, 64, 182, 459, 1045, 2187, 4270 & 7 \\ \hline \end{array} \] \end{table}
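To give an idea of the symmetry-class computation, here is a small Python sketch (our own illustration, not the project's Maple code) that counts orbits of $4$-element sets of length-$4$ patterns under the eight symmetries generated by reverse, complement, and inverse. Assuming the symmetry classes are exactly these orbits, it should reproduce the count of 1524 reported above.
\begin{verbatim}
from itertools import combinations, permutations

def reverse(p):    return tuple(reversed(p))
def complement(p): return tuple(len(p) + 1 - v for v in p)
def inverse(p):
    q = [0] * len(p)
    for pos, v in enumerate(p):
        q[v - 1] = pos + 1
    return tuple(q)

def images(S):
    """The 8 images of a pattern set under <reverse, complement, inverse>."""
    out = []
    for use_inv in (False, True):
        for use_rev in (False, True):
            for use_comp in (False, True):
                img = set(S)
                if use_inv:  img = {inverse(p) for p in img}
                if use_rev:  img = {reverse(p) for p in img}
                if use_comp: img = {complement(p) for p in img}
                out.append(tuple(sorted(img)))
    return out

all4 = list(permutations(range(1, 5)))
classes = {min(images(S)) for S in combinations(all4, 4)}
print(len(classes))  # number of symmetry classes of 4 patterns of length 4
\end{verbatim}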
\section{Small Experiments for 12 Patterns of Length 4} \vspace{.75pc} These experiments are implemented in the Maple package SmallExp.txt available from the webpage of this article (see Section 6). The procedure $Ask(num,N,n)$ generated a random set $S$ of size $num$ consisting of permutations of length 4, then outputted the sequence $(f(n))_{n=1..N}$, where $f(n)$ is the number of permutations of length $n$ that avoid all elements of $S$ as subpermutations. The program $Receive(num,N,T,n)$ ran $Ask(num,N,n)$ $T$ times and compiled the results, taking particular note of whether the resulting sequence went to $0$ or matched a polynomial of some degree past a certain point. We ran the program for a total of 820 cases of $Aha(12,13,n)$. \begin{itemize} \item 45+31+63+52=191 (23.3\%) were ultimately zero. \item 66+38+78+85=267 (32.6\%) were ultimately constant. \item 61+33+81+83=258 (31.5\%) were ultimately a degree one polynomial in $n$. \item 16+13+20+17=66 (8.0\%) were ultimately a degree two polynomial in $n$. \item 3 were ultimately a cubic polynomial in $n$. \item 33 (4.0\%) did not evidently approach a polynomial within 13 steps. \end{itemize} The total runtime for all of these examples, run for sequences of up to length 13, was approximately 15 hours, for an average of 60-70 seconds per example. This time was not spread out evenly across the examples; prior experimentation (and the recursive nature of the program) indicates that super-polynomial sequences take much longer to compute than polynomial (especially constant) ones. Experimentation also indicated that runtime increases dramatically as $N$ increases, particularly for the exponential sequences; this makes longer sequences impractical to derive via this method. In any case, the vast majority of sequences we derived approached a polynomial of degree 2 or less, and the ones that did not approach a polynomial were inspected by hand and appeared to follow an exponential function, so for the case of $num=12$, longer sequences may not unveil substantially more information anyway. One thing that was noticed was that, of the non-polynomial sequences generated by $Ask$, a majority (21/32 observed directly) obeyed a simple recurrence relation $f(n)=f(n-1)+f(n-2)+l(n)$, where $l(n)$ is a linear or constant function, past a given threshold. (As an example, [1, 2, 6, 12, 18, 26, 39, 60, 94, 149, 238, 382, 615] obeys the relationship $f(n)=f(n-1)+f(n-2)-5$ for $n$ at least 6.) It would be interesting to study the exact frequency of this sort of recurrence.
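Such a Fibonacci-like tail is easy to detect mechanically: if $f(n)=f(n-1)+f(n-2)+l(n)$ eventually holds with $l(n)$ linear, then the defect $d(n)=f(n)-f(n-1)-f(n-2)$ is eventually linear, so its first differences are eventually constant. A small Python sketch of this check on the example sequence above (ours; the package's own detection is in Maple):
\begin{verbatim}
def defect(seq):
    """d(n) = f(n) - f(n-1) - f(n-2) and its first differences."""
    d = [seq[n] - seq[n - 1] - seq[n - 2] for n in range(2, len(seq))]
    return d, [b - a for a, b in zip(d, d[1:])]

f = [1, 2, 6, 12, 18, 26, 39, 60, 94, 149, 238, 382, 615]
d, dd = defect(f)
print(d)    # tail stabilizes at -5, so l(n) = -5 here
print(dd)   # eventually all zeros: l(n) is constant
\end{verbatim}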
\vspace{.75pc} \section{Using Zeilberger's Maple package VATTER.txt on Heterogeneous Pattern Sets} \vspace{.75pc} In \cite{V}, Vince Vatter describes a method to discover enumeration schemes for permutation classes using a computer. This was implemented and somewhat extended in \cite{Z}; the implementation is the package \verb+VATTER.txt+, available from the paper's website. The \verb+VATTER+ procedures initially relevant to our work are \verb+SchemeFast+ and \verb+SeqS+. The former looks for an enumeration scheme of the class of permutations avoiding a certain set of patterns; the latter computes, for any desired positive integer $K$, the first $K$ terms corresponding to a given enumeration scheme. The details of the scheme can be found in \cite{Z}; the important point is that it gives a \emph{polynomial time} algorithm to enumerate the permutation class. We wrote a new Maple package, VATTERPLUS.txt (available from the webpage), that we will now describe. Our procedure \verb+VatterR3S4(r,s,K,Gvul,GvulGap)+ applies \verb+SchemeFast+ to $K$ sets of random permutations, where each set contains $r$ permutations of length 3 and $s$ permutations of length 4. (\verb+Gvul+ and \verb+GvulGap+ are optional search parameters for \verb+SchemeFast+, set to 4 and 2 by default.) The output is a list of theorems of the form ``The enumeration scheme of permutations avoiding \underline{\hspace{1cm}} is \underline{\hspace{1cm}}.'' For example, you can try \verb+VatterR3S4(3,2,4);+ to see some theorems about permutations avoiding 3 patterns of length 3 and 2 patterns of length 4. For more theorems, see the data files \verb+v12.txt+, \verb+v21.txt+, \verb+v22.txt+, \verb+v23.txt+, \verb+v32.txt+, available from the webpage of this article (see Section 6). Simply adapting our \verb+VatterR3S4+ procedure to a \verb+VatterR4S5+ that works for patterns of length 4 and 5 did not yield much success. The reason for this will show itself momentarily. The next avenue was to use the procedure \verb+SipurF+ from the Maple package \verb+VATTER.txt+ (available from the webpage). This uses \verb+SchemeFast+ on ALL equivalence classes of patterns to avoid; e.g. \verb+{[1,2,3,4],[1,3,5,2,4]}+ and \\ \verb+{[4,3,2,1],[4,2,5,3,1]}+ are equivalent by the reversing bijection. This means most of the theorems found in the accompanying files can be trivially enlarged. Applying \verb+SipurF([3,4],4,2,10,20,n,N,x,3,2)+ enumerates 17/18 equivalence classes for avoiding 1 pattern of length 3 and 1 pattern of length 4 (that are independent). In total, schemes are found for 76 of the 78 pairs. However, applying \\ \verb+SipurF([4,5],4,2,10,20,n,N,x,3,2)+ only manages to find schemes for 11 of the 369 different equivalence classes, and only 1 of those had a scheme of depth 3. So attempting random pairs, as \verb+VatterR4S5+ does, usually won't work. Empirically, increasing the search depth to \verb+Gvul=5+ improves the odds of finding a scheme from $\frac{2}{400}=0.5\%$ to $\frac{4}{100}=4\%$, but each iteration takes roughly 10 times longer. It is also important to note that \cite{Z} recognizes that there are cases for which no scheme will EVER be found, no matter the depth. For more enumeration schemes, see the accompanying data files \verb+v45.txt+, \\ \verb+v445.txt+, \verb+v455.txt+, \verb+v4445.txt+, \verb+v4555.txt+, \verb+v4455.txt+. For a single theorem, it is probably best to attempt multiple iterations with \verb+Gvul=4+ and \verb+GvulGap=2+. If you want to improve the chances of finding new theorems, you will eventually have to increase \verb+Gvul+. It may be that increasing the number of length-4 patterns in the set will increase your chance of finding a scheme; this could be a result of there being fewer permutations in the class in general. \vspace{.75pc} \section{Maple Code and Extensive Output Files} \vspace{.75pc} See the front of this article {\tt http://www.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/pamp.html}. \vspace{.75pc} \bibliographystyle{amsplain}
{ "timestamp": "2017-03-08T02:07:57", "yymm": "1703", "arxiv_id": "1703.02415", "language": "en", "url": "https://arxiv.org/abs/1703.02415", "abstract": "This paper presents a collection of experimental results regarding permutation pattern avoidance, focusing on cases where there are \"many\" patterns to be avoided.", "subjects": "Combinatorics (math.CO)", "title": "Counting Permutations that Avoid Many Patterns", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759660443166, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405647130789 }
https://arxiv.org/abs/2002.08156
Lattice structure of the random stable set in many-to-many matching market
For a many-to-many matching market, we study the lattice structure of the set of random stable matchings. We define a partial order on the random stable set and present two intuitive binary operations to compute the least upper bound and the greatest lower bound for each side of the matching market. Then, we prove that with these binary operations the set of random stable matchings forms two dual lattices.
\section{Introduction}\label{intro} Matchings have been studied for several decades, beginning with Gale and Shapley's pioneering paper \citep{gale1962college}. They introduce the notion of stable matchings for a marriage market and provide an algorithm for finding them. Since then, a considerable amount of work has been carried out on both the theory and the applications of stable matchings. A matching is \textit{stable} if all agents have acceptable partners and there is no pair of agents, one on each side of the market, who would prefer to be matched to each other rather than to remain with their current partners. Unfortunately, the set of many-to-one stable matchings may be empty. Substitutability is the weakest condition that has so far been imposed on agents' preferences under which the existence of stable matchings is guaranteed. An agent has \textit{substitutable preferences} if it still wants to be matched with agents from the other side of the market even when other agents become unavailable \citep[see][for more detail]{kelso1982job,roth1984evolution}. One of the most important results in the matching literature is that the set of stable matchings has a dual lattice structure. This is important for at least two reasons. First, it shows that even if agents on the same side of the market compete for agents on the other side, the conflict is attenuated: on the set of stable matchings, agents on the same side of the market have coinciding interests. Second, many algorithms are based on this lattice structure; for example, algorithms that yield stable matchings in centralized markets. In this paper, we study the lattice structure of the random stable set for a general matching market: a many-to-many matching market with substitutable preferences satisfying the \textit{law of aggregated demand} (L.A.D.). Random stable matchings are very useful for at least two reasons. First, the randomization allows for a much richer space of possible outcomes and may be essential to achieve fairness and anonymity. Second, the framework of random stable matchings admits fractional matchings that capture time-sharing arrangements \citep[see][among others]{rothblum1992characterization,roth1993stable,teo1998geometry,sethuraman2006many,baiou2000stable,dougan2016efficiency,neme2019characterization,neme2019many}. \cite{roth1993stable} define binary operations to compute the \textit{least upper bound (l.u.b.)} and the \textit{greatest lower bound (g.l.b.)} of random stable matchings for the marriage market. To do so, they use first-order stochastic dominance as the partial order on random stable matchings. This partial order cannot be applied when agents' preferences are over subsets of agents on the other side of the market in a substitutable manner. For this reason, we present a new partial order, a natural extension of first-order stochastic dominance, for the random stable set of a matching market when agents' preferences are substitutable and satisfy the L.A.D. In general, a random stable matching can be represented by different lotteries. Despite this, we prove that there is a unique representation fulfilling a special property: the stable matchings involved in the lottery of this unique representation are pairwise comparable in the eyes of all firms; from now on, we refer to it as the \textit{decreasing representation}. In this way, our partial order is independent of the representation of the random stable matching.
The process that constructs this decreasing representation for each random stable matching is presented as Algorithm 1. To ease the definition of the binary operations and the proofs, we present the \textit{splitting procedure}. Given two random stable matchings, this procedure ``splits'' the decreasing representation of each random stable matching so that both lotteries have the same number of terms. Moreover, both lotteries have the same scalars, term by term. The splitting procedure is formalized by Algorithm 2, presented in Appendix \ref{apendice B}. Our main contribution in this paper is that, by defining two natural binary operations (pointing functions) that compute the \textit{l.u.b.} and \textit{g.l.b.} of random stable matchings, the set of these matchings has a dual lattice structure. In other words, as long as the set of (deterministic) stable matchings has a lattice structure where the binary operations are computed via pointing functions, the set of random stable matchings also has a lattice structure. Further, for the special case in which all the scalars of the lottery are rational numbers, we show that there is a direct way to compute the \textit{l.u.b.} and the \textit{g.l.b.} Another finding derived from our proofs is a version of the Rural Hospital Theorem for random stable matchings. The Rural Hospital Theorem (for deterministic stable matchings) for a many-to-many matching market where all agents have substitutable preferences satisfying the L.A.D. is presented in \cite{alkan2002class}. Throughout, we illustrate with numerical examples the successive results needed to prove that the random stable set has a lattice structure. \subsection*{Related literature} The lattice structure of the set of stable matchings is introduced by \cite{knuth1976marriages} for the marriage market. Given two stable matchings, he defines the \textit{l.u.b.} for men by matching each man with the better of his two partners, and the \textit{g.l.b.} for men by matching each man with the less preferred of the two; these are usually called the \textit{pointing functions} relative to a partial order. \cite{roth1985college} shows that these binary operations (pointing functions) used in \cite{knuth1976marriages} do not work in the more general many-to-many and many-to-one matching markets introduced by \cite{kelso1982job} and \cite{roth1984evolution} respectively, even under substitutable preferences. For a specific many-to-one matching market, the so-called college admission problem, \cite{roth1992two} present a natural extension of Knuth's result for $q$-responsive preferences. \cite{marti2001lattice} further extend the results proved by \cite{roth1992two}. They identify a weaker condition than $q$-responsiveness, called $q$-separability, and propose two natural binary operations that give a dual lattice structure to the set of stable matchings in a many-to-one matching market with substitutable and $q$-separable preferences. These binary operations are similar to Knuth's. \cite{risma2015binary} generalizes the result of \cite{marti2001lattice} by showing that their binary operations work well in many-to-one matching markets where the preferences of the agents satisfy substitutability and the \textit{law of aggregate demand} (a condition less restrictive than $q$-separability). Her paper is contextualized in many-to-one matching markets with contracts.
\cite{manasero2019binary} extends the result in \cite{risma2015binary} to the many-to-many matching market, where one side has substitutable preferences satisfying the law of aggregate demand, and the other side has $q$-responsive preferences. \cite{alkan2002class} considers a market with multiple partners on both sides. For this market, preferences are given by rather general \textit{path-independent choice functions} that do not necessarily respect any ordering on individuals and satisfy the law of aggregated demand.\footnote{\cite{alkan2002class} calls the law of aggregated demand ``cardinal monotonicity''.} He shows that the set of stable matchings in any two-sided market with path-independent choice functions and preferences satisfying the law of aggregated demand has a lattice structure under the common preferences of all agents on any side of the market. \cite{li2014new} presents an alternative proof of Alkan's result. The main distinction between \cite{li2014new} and \cite{alkan2002class} lies in the conditions on preferences: \cite{li2014new} assumes agents with complete preferences, whereas \cite{alkan2002class} assumes agents with incomplete revealed preferences. All of these papers share natural definitions of the binary operations via pointing functions. In another direction, there is an extensive literature that proves that the set of stable matchings has a lattice structure \citep[see][among others]{blair1988lattice,adachi2000characterization,fleiner2003fixed,echenique2004core,echenique2004theory,hatfield2005matching,ostrovsky2008stability,wu2018lattice}. All of these papers define the \textit{l.u.b.} and \textit{g.l.b.} indirectly, by means of fixed points. That is, these papers do not compute the binary operations via pointing functions. Regarding the literature on lattice structures of random stable sets, \cite{roth1993stable} define two binary operations for random stable matchings in marriage markets. For these very particular markets, they prove that the set of random stable matchings,\footnote{They prove that the ``stable fractional matching set'' coincides with the random stable matching set.} endowed with a partial order (first-order stochastic dominance), has a lattice structure. \cite{neme2019characterization} prove that the strongly stable fractional matching set in the marriage market, endowed with the same partial order (first-order stochastic dominance), has a lattice structure. The binary operations defined in \cite{roth1993stable}, and also used by \cite{neme2019characterization}, cannot be extended to more general markets, not even to the college admission problem with $q$-responsive preferences. The paper is organized as follows. In Section \ref{prelimirany}, we introduce the matching market and preliminary results. In Section \ref{seccion algoritmo}, we prove that there is a unique way to represent a random stable matching with a decreasing property (Algorithm 1 and Theorem \ref{teorema del orden}). We also present a version of the Rural Hospital Theorem for random stable matchings (Proposition \ref{hospital rural para random}). In Section \ref{order}, we present a partial order for random matchings when agents' preferences are substitutable and satisfy the L.A.D. (Proposition \ref{Es orden parcial}). We also describe the splitting procedure, which is formalized later in Appendix \ref{apendice B}.
In Section \ref{seccion main result}, we define the binary operations and prove that these natural binary operations compute the \textit{l.u.b.} and \textit{g.l.b.} for each side of the market (Proposition \ref{teorema de operaciones binarias}). Then the main result of the paper is presented: the random stable set has a dual lattice structure (Theorem \ref{teorema principal}). In Subsection \ref{rational}, we show how to compute in a direct way the \textit{l.u.b.} and \textit{g.l.b.} for rational random stable matchings, that is, random stable matchings where all scalars of their lotteries are rational numbers (Corollary \ref{corolario para rational}). Section \ref{conclusiones} contains concluding remarks. Finally, Appendix \ref{apendice A} contains the proofs concerning the decreasing representation, and Appendix \ref{apendice B} contains the proofs concerning the partial order, the formalization of the splitting procedure (Algorithm 2), and the proof of the main theorem. \section{Preliminaries}\label{prelimirany} We consider many-to-many matching markets, where there are two disjoint sets of agents, the set of \textit{firms} $F$ and the set of \textit{workers} $W$. Each firm has an antisymmetric, transitive and complete preference relation ($>_f$) over the set of all subsets of $W$. In the same way, each worker has an antisymmetric, transitive and complete preference relation ($>_w$) over the set of all subsets of $F$. We denote by $P$ the preference profile of all agents, firms and workers. A many-to-many matching market is denoted by $(F,W,P).$ Given a set of firms $S\subseteq F$, each worker $w\in W$ can determine which subset of $S$ it would most prefer to hire. We call this the $w$'s choice set from $S$ and denote it by $Ch(S,>_w)$. Formally, $$ Ch(S,>_w)=\max_{>_w}\{T:T\subseteq S\}. $$ Symmetrically, given a set of workers $S\subseteq W$, let $Ch(S,>_f)$ denote firm $f$'s most preferred subset of $S$ according to its preference relation $>_f$. Formally, $$ Ch(S,>_f)=\max_{>_f}\{T:T\subseteq S\}. $$ \begin{definition} A \textbf{matching} $\mu$ is a function from the set $F\cup W$ into $2^{F\cup W}$ such that for each $w\in W$ and for each $f\in F$: \begin{enumerate}[1.] \item $\mu(w)\subseteq F$; \item $\mu(f)\subseteq W$; \item $w\in \mu(f)\Leftrightarrow f\in \mu(w)$ \end{enumerate} \end{definition} We say that agent $a\in F\cup W$ is matched if $\mu(a) \neq \emptyset$; otherwise, he is unmatched. A matching $\mu$ is blocked by agent $a$ if $\mu(a)\neq Ch(\mu(a),>_a)$. We say that a matching is individually rational if it is not blocked by any individual agent. A matching $\mu$ is blocked by a worker-firm pair $(w,f)$ if $w \notin \mu( f ), w \in Ch(\mu( f )\cup \{w\},>_f),$ and $f \in Ch(\mu( w )\cup \{f\},>_w)$. A matching $\mu$ is \textbf{stable} if it is not blocked by any individual agent or any worker-firm pair. The set of stable matchings is denoted by $\boldsymbol{\mathcal{S(P)}}.$ Further, a \textbf{random stable matching} is a lottery over stable matchings, and we denote by $\boldsymbol{\mathcal{RS(P)}}$ the random stable set for the many-to-many matching market $(F,W,P)$. Given an agent $a$'s preference relation ($>_a$) and two stable matchings $\mu$ and $\mu'$, let $\mu(a) \geq_a \mu'(a)$ denote $\mu(a)=Ch(\mu(a) \cup \mu'(a),>_a)$. We say that $\mu(a) >_a \mu'(a)$ if $\mu(a) \geq_a \mu'(a)$ and $\mu(a) \neq \mu'(a)$.
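To make these definitions concrete, the following is a minimal Python sketch (ours, purely illustrative) of choice sets and the stability test, encoding each agent's preference relation as a list of all subsets ordered from best to worst, with the empty set ranked last:
\begin{verbatim}
def Ch(S, ranking):
    """Choice set: the highest-ranked subset of S.
    `ranking` lists subsets from best to worst and ends with the empty set."""
    S = frozenset(S)
    for T in ranking:
        if frozenset(T) <= S:
            return frozenset(T)
    return frozenset()

def is_stable(mu_F, prefs_F, prefs_W):
    """mu_F maps each firm to its set of workers; prefs map agents to rankings."""
    # recover each worker's assignment from the firms' side
    mu_W = {w: {f for f in mu_F if w in mu_F[f]} for w in prefs_W}
    # no blocking by an individual agent (individual rationality)
    if any(Ch(mu_F[f], prefs_F[f]) != frozenset(mu_F[f]) for f in prefs_F):
        return False
    if any(Ch(mu_W[w], prefs_W[w]) != frozenset(mu_W[w]) for w in prefs_W):
        return False
    # no blocking worker-firm pair
    for f in prefs_F:
        for w in prefs_W:
            if (w not in mu_F[f]
                    and w in Ch(mu_F[f] | {w}, prefs_F[f])
                    and f in Ch(mu_W[w] | {f}, prefs_W[w])):
                return False
    return True
\end{verbatim}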
Given a preference profile $P$ and two stable matchings $\mu$ and $\mu'$, let $\mu >_F \mu'$ denote the case in which all firms like $\mu$ at least as well as $\mu'$, with at least one firm preferring $\mu$ to $\mu'$ outright. Let $\mu \geq_F \mu'$ denote that either $\mu >_F \mu'$ or $\mu= \mu'.$ Similarly, we define $>_W$ and $\geq_W$. Notice that $\geq_F$ and $\geq_W$ are partial orders over the set of stable matchings. An agent $a$'s preference relation satisfies \textbf{substitutability} if, for any subset $S$ of the opposite set (for instance, if $a\in F$ then $S\subseteq W$) that contains agent $b$, $b\in Ch(S,>_a)$ implies $b\in Ch(S'\cup \{b\},>_a)$ for all $S' \subseteq S.$ We say that an agent $a$'s preference relation ($>_a$) satisfies the \textbf{law of aggregated demand (L.A.D.)} if for every subset $S$ of the opposite set and all $S'\subseteq S$, $|Ch(S',>_a)|\leq |Ch(S,>_a)|.$\footnote{$|S|$ denotes the number of agents in $S$.} For a matching market $(F,W,P)$ where the preference relation of each agent satisfies substitutability and the L.A.D., \cite{alkan2002class}\footnote{\cite{li2014new} presents an alternative proof of Alkan's result; it assumes agents with complete preferences, whereas \cite{alkan2002class} assumes agents with incomplete preferences.} proves that the set of stable matchings has a lattice structure. Given two stable matchings $\mu_1$ and $\mu_2$, the \textit{l.u.b.} for firms is denoted by $\boldsymbol{\mu_1 \vee_F \mu_2}$ and the \textit{g.l.b.} for firms is denoted by $\boldsymbol{\mu_1 \wedge_F \mu_2}$. Similarly, the \textit{l.u.b.} for workers is denoted by $\boldsymbol{\mu_1 \vee_W \mu_2}$ and the \textit{g.l.b.} for workers is denoted by $\boldsymbol{\mu_1 \wedge_W \mu_2}$. The binary operations are defined as follows \citep[see][among others]{alkan2002class,li2014new}: $$\mu_1 \vee_F \mu_2(f)=\mu_1 \wedge_W \mu_2(f):=Ch(\mu_1(f)\cup \mu_2(f),>_f),\text{ for each firm }f\in F,$$ $$\mu_1 \vee_F \mu_2(w)=\mu_1 \wedge_W \mu_2(w):=\{f:w\in Ch(\mu_1(f)\cup \mu_2(f),>_f)\},\text{ for each worker }w\in W.$$ Similarly, $$\mu_1 \vee_W \mu_2(w)=\mu_1 \wedge_F \mu_2(w):=Ch(\mu_1(w)\cup \mu_2(w),>_w),\text{ for each worker }w\in W, $$ $$\mu_1 \vee_W \mu_2(f)=\mu_1 \wedge_F \mu_2(f):=\{w:f\in Ch(\mu_1(w)\cup \mu_2(w),>_w)\},\text{ for each firm }f\in F.$$ \begin{remark}\label{operaciones en matching es estable} Let $T\subseteq \mathcal{S(P)}$. We denote by $$\bigvee_{\nu\in{T}}\displaystyle{_{\!\!\!\!{F}}}~~\nu(f)=Ch(\bigcup_{\nu\in{T}}\nu(f),>_f)$$ and $$\bigwedge_{\nu\in{T}}\displaystyle{_{\!\!\!{F}}}~~\nu(f)=\{w:f\in Ch(\bigcup_{\nu\in{T}}\nu(w),>_w)\}.$$ By substitutability and transitivity, \cite{li2014new} proves that $$\bigvee_{\nu\in{T}}\displaystyle{_{\!\!\!\!{F}}}~~\nu(f)\text{ and }\bigwedge_{\nu\in{T}}\displaystyle{_{\!\!\!{F}}}~~\nu(f)$$ are stable matchings and coincide with the \textit{l.u.b.} and \textit{g.l.b.} among the stable matchings in $T$, respectively. \end{remark}
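Continuing the sketch above, the pointing functions $\vee_F$ and $\wedge_F$ can be computed directly from the displayed formulas (again an illustration of ours; \texttt{Ch} is the function from the previous sketch):
\begin{verbatim}
def worker_side(mu_F, workers):
    """Recover each worker's assignment from the firms' assignments."""
    return {w: {f for f in mu_F if w in mu_F[f]} for w in workers}

def join_F(mu1, mu2, prefs_F):
    """(mu1 v_F mu2)(f) = Ch(mu1(f) U mu2(f), >_f)."""
    return {f: set(Ch(mu1[f] | mu2[f], prefs_F[f])) for f in prefs_F}

def meet_F(mu1, mu2, prefs_W, workers):
    """(mu1 ^_F mu2)(f) = {w : f in Ch(mu1(w) U mu2(w), >_w)}
    (equivalently, mu1 v_W mu2 read off on the firms' side)."""
    m1, m2 = worker_side(mu1, workers), worker_side(mu2, workers)
    chosen = {w: Ch(m1[w] | m2[w], prefs_W[w]) for w in workers}
    return {f: {w for w in workers if f in chosen[w]} for f in mu1}
\end{verbatim}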
\section{Random stable matchings: representations}\label{seccion algoritmo} In this section, we present two results that are of independent interest and that we use in the next section in order to prove that the set of random stable matchings has a lattice structure. Given a random stable matching that is represented as a lottery over stable matchings, we change its representation to a new lottery over a new set of stable matchings. To be more specific, we prove that this new set of stable matchings has a decreasing property, namely, there is $\{\mu_1,\ldots,\mu_{\tilde{k}} \}\subseteq \mathcal{S(P)}$ with $\mu_\ell \geq_F \mu_{\ell+1}$ for $\ell=1,\ldots,{\tilde{k}}-1$. We also present a version of the Rural Hospital Theorem for random stable matchings (Proposition (RHT)). To describe the representation of a random stable matching, we first need to define incidence vectors: given a stable matching $\mu$, the vector $x^{\mu}\in\left\{ 0,1\right\}^{\left\vert F\right\vert \times\left\vert W\right\vert }$ with $x_{i,j}^{\mu}=1$ if and only if $j\in{\mu\left( i\right)}$ and $x_{i,j}^{\mu}=0$ otherwise is the \textbf{incidence vector} of $\mu$. Hence, a random stable matching is represented as a lottery over the incidence vectors of stable matchings. That is, $$ x=\sum_{\nu\in{\mathcal{S(P)}}} \lambda_{\nu} x^{\nu} $$ where $0\leq \lambda_{\nu} \leq 1$ and $\sum_{\nu\in{\mathcal{S(P)}}}\lambda_{\nu}=1$. Notice that each entry of a random stable matching $x$ satisfies $x_{i,j} \in [0,1]$. Given a random stable matching $x$, we define the support of $x$ as follows: $$ supp(x)=\{(i,j):x_{i,j}>0\}. $$ Given a random stable matching $x$, i.e. $x=\sum_{\nu\in{\mathcal{S(P)}}}\lambda_{\nu} x^{\nu}$ with $0 \leq \lambda_{\nu} \leq 1$ and $\sum_{\nu\in{\mathcal{S(P)}}}\lambda_{\nu}=1$, we define $A$ to be the set of all stable matchings involved in the lottery, i.e. those with strictly positive weight. Formally, $$ A=\big\{ \nu\in{\mathcal{S(P)}}: \lambda_{\nu}>0 \big\}. $$ Now, in order to change the representation of the random stable matching $x$, proceed as follows: \begin{center} \begin{tabular}{l l} \hline \hline \multicolumn{2}{l}{\textbf{Algorithm 1:}}\vspace*{10 pt}\\ \textbf{Step $\boldsymbol{0}$} & Set $B_1:=A ~~\displaystyle{\bigcup} ~\bigg\{\displaystyle{\bigvee_{\nu\in{T}}}\displaystyle{_{\!\!{F}}}~~\nu:T\subseteq A\bigg\}\bigcup\bigg\{\bigwedge_{\nu\in{T}}\displaystyle{_{\!\!{F}}}~~\nu:T\subseteq A\bigg\} .$\\ & \hspace{20 pt}$x^1:=x$.\\ & \hspace{20 pt}$\mathcal{M}:=\emptyset$.\\ & \hspace{20 pt}$\Lambda:=\emptyset$.\\ \textbf{Step $\boldsymbol{k\geq1}$} & Set $\mu_{k}:=\displaystyle{\bigvee_{\nu\in{B_{k}}}}\displaystyle{_{\!\!\!{F}}}~~\nu.$\\ & \hspace{20 pt}$\mathcal{M}:=\mathcal{M}\cup \{\mu_k\}.$\\ & \hspace{20 pt}$\alpha_{k}:=\min\{x^k_{i,j} :x^{\mu_{k}}_{i,j}=1 \}.$\\ & \hspace{20 pt}$\Lambda:=\Lambda \cup \{\alpha_k\}.$\\ &\hspace{20 pt}$\mathcal{L}_k:=\{(i,j)\in F\times W: x^k_{i,j}=\alpha_k \text{ and }x_{i,j}^{\mu_k}=1\}.$\\ & \hspace{20 pt}$C_k:=\displaystyle\bigcup_{(i,j)\in{\mathcal{L}_k}}\{\nu \in B_k:x_{i,j}^{\nu}=1\}.$\\ & \hspace{20 pt}$B_{k+1}:=B_k \setminus C_k.$ \\ & \texttt{IF} $B_{k+1}=\emptyset$,\\ & \hspace{20 pt}\texttt{THEN}, the procedure stops. \\ & \texttt{ELSE} set $x^{k+1}=\frac{x^k-\alpha_kx^{\mu_k}}{1-\alpha_k},$ and continue to Step $k+1.$\medskip\\ \hline \hline \end{tabular} \end{center}
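A small Python sketch of Algorithm 1 (ours, for illustration) is given below. It runs on the data of Example \ref{ejempplo1} below, taking the lattice operations for that example from a lookup table (in that example's lattice, $\nu_1$ is the top, $\nu_4$ the bottom, and $\nu_2,\nu_3$ are incomparable); a general implementation would compute joins and meets through the choice functions, as in the earlier sketch.
\begin{verbatim}
from fractions import Fraction
from functools import reduce
from itertools import combinations

# The four stable matchings of Example 1, as sets of (firm, worker) pairs.
NU = {
    1: {("f1","w1"),("f1","w2"),("f2","w3"),("f2","w4"),
        ("f3","w1"),("f3","w3"),("f4","w2"),("f4","w4")},
    2: {("f1","w1"),("f1","w3"),("f2","w2"),("f2","w4"),
        ("f3","w3"),("f3","w4"),("f4","w1"),("f4","w2")},
    3: {("f1","w2"),("f1","w4"),("f2","w1"),("f2","w3"),
        ("f3","w1"),("f3","w2"),("f4","w3"),("f4","w4")},
    4: {("f1","w3"),("f1","w4"),("f2","w1"),("f2","w2"),
        ("f3","w2"),("f3","w4"),("f4","w1"),("f4","w3")},
}

def join(a, b):  # l.u.b. for firms, read off the diamond lattice of Figure 1
    if a == b:      return a
    if 1 in (a, b): return 1
    if 4 in (a, b): return a if b == 4 else b
    return 1        # join of the incomparable pair {2, 3}

def meet(a, b):  # g.l.b. for firms, dually
    if a == b:      return a
    if 4 in (a, b): return 4
    if 1 in (a, b): return a if b == 1 else b
    return 4

def algorithm1(x):
    """Decreasing representation of x = {matching index: weight}."""
    A = [m for m, w in x.items() if w > 0]
    B = set(A)
    for r in range(1, len(A) + 1):      # B_1: close A under joins and meets
        for T in combinations(A, r):
            B.add(reduce(join, T)); B.add(reduce(meet, T))
    pairs = set().union(*NU.values())
    xk = {p: sum(w for m, w in x.items() if p in NU[m]) for p in pairs}
    alphas = []
    while B:
        mu = reduce(join, B)                        # mu_k
        alpha = min(xk[p] for p in NU[mu])          # alpha_k
        alphas.append((mu, alpha))
        L = {p for p in NU[mu] if xk[p] == alpha}   # script L_k
        B = {m for m in B if not (NU[m] & L)}       # B_{k+1} = B_k minus C_k
        if B:
            xk = {p: (v - alpha * (p in NU[mu])) / (1 - alpha)
                  for p, v in xk.items()}
    weights, rest = [], Fraction(1)                 # telescope the alphas
    for mu, alpha in alphas:
        weights.append((mu, rest * alpha)); rest *= 1 - alpha
    return weights

print(algorithm1({2: Fraction(3, 4), 3: Fraction(1, 4)}))
# weights 1/4, 1/2, 1/4 on nu_1, nu_2, nu_4, as in the worked example
\end{verbatim}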
The following theorem states that there is a unique representation of a random stable matching with the decreasing property in the eyes of all firms. \begin{theorem}\label{teorema del orden} Let $x$ be a random stable matching and $\mathcal{M}$ be the output of Algorithm 1. Then, $x$ is represented as a lottery over stable matchings that belong to $\mathcal{M}$ where $\mu_{\ell}>_{F} \mu_{\ell+1}$ for each $\mu_\ell,\mu_{\ell+1} \in \mathcal{M}$. Moreover, the set $\mathcal{M}$ is unique. \end{theorem} \begin{proof} See proof in Appendix \ref{apendice A}. \end{proof} The following example illustrates Algorithm 1. \begin{example}\label{ejempplo1} Let $(F,W,P)$ be a many-to-many matching market where $F=\{f_1,f_2,f_3,f_4\}$, $W=\{w_1,w_2,w_3,w_4\}$ and the preference profile is given by $$ \begin{array}{c} >_{f_1}=\{w_1,w_2\},\{w_1,w_3\},\{w_2,w_4\},\{w_3,w_4\},\{w_1\},\{w_2\},\{w_3\},\{w_4\}. \\ >_{f_2}=\{w_3,w_4\},\{w_2,w_4\},\{w_1,w_3\},\{w_1,w_2\},\{w_3\},\{w_4\},\{w_1\},\{w_2\}. \\ >_{f_3}=\{w_1,w_3\},\{w_3,w_4\},\{w_1,w_2\},\{w_2,w_4\},\{w_1\},\{w_3\},\{w_2\},\{w_4\}. \\ >_{f_4}=\{w_2,w_4\},\{w_1,w_2\},\{w_3,w_4\},\{w_1,w_3\},\{w_2\},\{w_4\},\{w_1\},\{w_3\}. \\ >_{w_1}=\{f_2,f_4\},\{f_2,f_3\},\{f_1,f_4\},\{f_1,f_3\},\{f_2\},\{f_4\},\{f_3\},\{f_1\}. \\ >_{w_2}=\{f_2,f_3\},\{f_1,f_3\},\{f_2,f_4\},\{f_1,f_4\},\{f_2\},\{f_3\},\{f_1\},\{f_4\}. \\ >_{w_3}=\{f_1,f_4\},\{f_2,f_4\},\{f_1,f_3\},\{f_2,f_3\},\{f_1\},\{f_4\},\{f_2\},\{f_3\}. \\ >_{w_4}=\{f_1,f_3\},\{f_1,f_4\},\{f_2,f_3\},\{f_2,f_4\},\{f_1\},\{f_3\},\{f_4\},\{f_2\}. \\ \end{array} $$ It is easy to check that these preference relations are substitutable and satisfy the L.A.D. The set of stable matchings is represented in Table 1 and its lattice for the partial order $\geq_F$ is represented in Figure 1. \bigskip \begin{minipage}{0.7\linewidth} \begin{center} \begin{tabular}{c|cccc} & $\boldsymbol{f_1}$ & $\boldsymbol{f_2}$ & $\boldsymbol{f_3}$ & $\boldsymbol{f_4}$ \\ \hline $\boldsymbol{\nu_1}$ & $\{w_1,w_2\}$ & $\{w_3,w_4\}$ & $\{w_1,w_3\}$ & $\{w_2,w_4\}$ \\ $\boldsymbol{\nu_2}$ & $\{w_1,w_3\}$ & $\{w_2,w_4\}$ & $\{w_3,w_4\}$ & $\{w_1,w_2\}$ \\ $\boldsymbol{\nu_3}$ & $\{w_2,w_4\}$ & $\{w_1,w_3\}$ & $\{w_1,w_2\}$ & $\{w_3,w_4\}$ \\ $\boldsymbol{\nu_4}$ & $\{w_3,w_4\}$ & $\{w_1,w_2\}$ & $\{w_2,w_4\}$ & $\{w_1,w_3\}$ \\ \end{tabular} \medskip \hspace {20pt} Table 1 \end{center} \end{minipage} \begin{minipage}{0.3\linewidth} {\small \hspace{15pt}\begin{tikzpicture}[scale=0.35] \node (1) at (2,18) {\small{$\nu_1$}}; \node (2) at (0,15) {\small{$\nu_2$}}; \node (3) at (4,15) {\small{$\nu_3$}}; \node (4) at (2,12) {\small{$\nu_4$}}; \draw(1) to (2); \draw(1) to (3); \draw(2) to (4); \draw(3) to (4); \end{tikzpicture}} \hspace {20pt} Figure 1 \end{minipage} \bigskip Let $x^1=\frac{3}{4}x^{\nu_2}+\frac{1}{4}x^{\nu_3}$ be a random stable matching. Now, we change the representation of $x^1$ as in Theorem \ref{teorema del orden}. Notice that $$ x^1= \left(\begin{matrix} \frac{3}{4} &\frac{1}{4} & \frac{3}{4} & \frac{1}{4} \\ \frac{1}{4} &\frac{3}{4} & \frac{1}{4} & \frac{3}{4} \\ \frac{1}{4} &\frac{1}{4} & \frac{3}{4} & \frac{3}{4} \\ \frac{3}{4} & \frac{3}{4} &\frac{1}{4} & \frac{1}{4} \\ \end{matrix}\right). $$ Then, $A=\{\nu_2, \nu_3\}$, and $B_1=\{\nu_1,\nu_2,\nu_3,\nu_4\}.$ Set $\mathcal{M}:=\emptyset$ and $\Lambda:=\emptyset.$ \noindent \textbf{Step 1} Set $\mu_1 := \nu_1=\displaystyle \bigvee_{\nu\in B_1}\displaystyle{_{\!\!\!\!{F}}}~\nu, ~\mathcal{M}:=\mathcal{M}\cup \{\mu_1\}, ~\alpha_1=\frac{1}{4}, ~\Lambda:=\Lambda\cup \{\alpha_1\} $ and $C_1=\{\nu_1,\nu_3\}.$ Since $B_2=B_1 \setminus C_1=\{\nu_2,\nu_4\}\neq \emptyset, $ then set $$ x^2:=\frac{x^1-\frac{1}{4}x^{\mu_1}}{1-\frac{1}{4}}=\left(\begin{matrix} \frac{2}{3} & 0 &1 & \frac{1}{3} \\ \frac{1}{3} &1 & 0 & \frac{2}{3} \\ 0 & \frac{1}{3} &\frac{2}{3} & 1 \\ 1 & \frac{2}{3} &\frac{1}{3} & 0 \\ \end{matrix}\right), $$ and continue to Step 2.
\noindent \textbf{Step 2} Set $\mu_2 := \nu_2=\displaystyle \bigvee_{\nu\in B_2}\displaystyle{_{\!\!\!\!{F}}}~\nu, ~\mathcal{M}:=\mathcal{M}\cup \{\mu_2\}, ~\alpha_2=\frac{2}{3}, ~\Lambda:=\Lambda\cup \{\alpha_2\} $ and $C_2=\{\nu_2\}.$ Since $B_3=B_2 \setminus C_2=\{\nu_4\}\neq \emptyset, $ then set $$ x^3=\frac{x^2-\frac{2}{3}x^{\mu_2}}{1-\frac{2}{3}}=\left(\begin{matrix} 0 & 0 & 1 & 1 \\ 1 &1 & 0 & 0 \\ 0 & 1 &0 & 1 \\ 1 & 0 &1 & 0 \\ \end{matrix}\right), $$ and continue to Step 3. \noindent \textbf{Step 3} Set $\mu_3 := \nu_4=\displaystyle \bigvee_{\nu\in B_3}\displaystyle{_{\!\!\!\!{F}}}~\nu, ~\mathcal{M}:=\mathcal{M}\cup \{\mu_3\}, ~\alpha_3=1, ~\Lambda:=\Lambda\cup \{\alpha_3\} $ and $C_3=\{\nu_4\}.$ Since $B_4=B_3\setminus C_3= \emptyset, $ the procedure stops. The output of Algorithm 1 is $\mathcal{M}=\{\mu_1,\mu_2,\mu_3\}=\{\nu_1,\nu_2,\nu_4\},$ and $\Lambda=\{\frac{1}{4},\frac{2}{3},1\}.$ Therefore, \begin{center} $x^1=\frac{1}{4}x^{\mu_1}+(1-\frac{1}{4})(\frac{2}{3})x^{\mu_2}+(1-\frac{1}{4})(1-\frac{2}{3})(1)x^{\mu_3}$ \end{center} \begin{center} $ =\frac{1}{4}x^{\mu_1}+\frac{1}{2}x^{\mu_2}+\frac{1}{4}x^{\mu_3}. $ \end{center} Since $\mu_1=\nu_1$, $\mu_2=\nu_2$ and $\mu_3=\nu_4$, then $x^1$ can be written as: \begin{center} $x^1=\frac{1}{4}x^{\nu_1}+\frac{1}{2}x^{\nu_2}+\frac{1}{4}x^{\nu_4}. $ \end{center} As we can see in Figure 1, the stable matchings of the lottery satisfy $\nu_1 \geq_F \nu_2 \geq_F \nu_4$. \end{example} The following result is known as the \textit{Rural Hospital Theorem}. For many-to-many matching markets where the preference relation of each agent satisfies substitutability and the L.A.D., it is proved in \cite{alkan2002class}. \medskip \noindent \textbf{Proposition (RHT) (\cite{alkan2002class})} \textit{ Each agent is matched with the same number of partners in every stable matching. That is, $|\mu(a)|=|\mu'(a)|$ for each $\mu,\mu'\in \mathcal{S(P)}$ and for each $a\in F\cup W$.} \medskip Next, we present a version of Proposition (RHT) for random stable matchings. \begin{proposition}\label{hospital rural para random} Let $x$ and $x'$ be two random stable matchings, then $\sum_{i\in F}x_{i,j}=\sum_{i\in F}x'_{i,j}$ for each $j\in W$, and $\sum_{j\in W}x_{i,j}=\sum_{j\in W}x'_{i,j}$ for each $i\in F$. \end{proposition} \begin{proof} See proof in Appendix \ref{apendice A}. \end{proof} From now on, by Theorem \ref{teorema del orden}, we assume that each random stable matching is already represented as a lottery over stable matchings in a decreasing way. \section{Partial order for random stable matchings}\label{order} In this section, we define a partial order on the random stable set in a many-to-many matching market with substitutable preferences satisfying the L.A.D. This partial order is a generalization of the first-order stochastic dominance presented in \cite{roth1993stable} for the random stable set in the marriage market. Given two random stable matchings $x$ and $y$ for the marriage market $(M,W,P)$, \cite{roth1993stable} define the partial order as follows. They say that $x$ weakly dominates$^\star$ $y$ for man $m$ (here denoted by $x \succeq_m^\star y$) if $$ \sum_{j\geq_mw} x_{m,j} \geq \sum_{j\geq_mw} y_{m,j} $$ for each $w\in W$. Further, they say that $x \succeq_M^\star y$ if $x \succeq_m^\star y$ for each $m\in M$. The partial order $\succeq^\star_W$ is defined analogously.
Notice that the partial orders $\succeq_M^\star$ and $\succeq_W^\star$ cannot be used to compare random stable matchings when agents have preferences over subsets of agents on the other side of the market in a substitutable manner. For this reason, for the setting considered in this paper, we define a new partial order. Formally, \begin{definition}\label{defino orden estocastico} Let $x=\sum_{i=1}^{I}\alpha_{i}x^{\mu^{x}_{i}}$ and $y=\sum_{j=1}^{J}\beta_{j}x^{\mu^{y}_{j}}$ with $\mu^{x}_i \geq_F \mu^{x}_{i +1}$ for each $i=1,\ldots,I-1$ and $\mu^{y}_j \geq_F \mu^{y}_{j +1}$ for each $j=1,\ldots,J-1$. We say that $\boldsymbol{x}$ \textbf{weakly dominates} $\boldsymbol{y}$ for the firm $f$, ($x\succeq_{f} y$), if and only if for each $\mu^y_{j}(f)$ $$ \sum_{i:\mu_{i}^{x}(f)\geq_{f}\mu_{j}^{y}(f)}\alpha_{i} \geq \sum_{l:\mu_{l}^{y}(f)\geq_{f}\mu_{j}^{y}(f)}\beta_{l}. $$ \end{definition} Further, we say that $\boldsymbol{x}$ \textbf{strongly dominates} $\boldsymbol{y}$ for the firm $f$, ($x\succ_f y$), if the above inequalities hold with at least one strict inequality for some $\mu_j^y(f)$. That is, $x\succ_f y$ when $x\succeq_f y$ and $x\neq y$ for firm $f$. Further, if $x\succeq_{f} y$ for each $f\in F$ we write $x\succeq_{F} y$. We define $\succeq_{w}$, $\succ_{w}$ and $\succeq_{W}$ analogously. If we interpret $x_{f,w}$ as the probability that $f$ is matched with $w$, then $x\succeq_f y$ says that $x_{f,\cdot}$ stochastically dominates $y_{f,\cdot}$. Notice that, since both $x$ and $y$ are represented following Theorem \ref{teorema del orden}, we have $$\sum_{l:\mu_{l}^{y}(f)\geq_{f}\mu_{j}^{y}(f)}\beta_{l} =\sum_{l=1}^{j}\beta_{l}.$$ Now we prove that the domination relation $\succeq_{F}$ is a partial order. The proof for $\succeq_{W}$ is analogous. Formally, \begin{proposition}\label{Es orden parcial} The domination relation $\succeq_{F}$ is a partial order. \end{proposition} \begin{proof} See proof in Appendix \ref{apendice B}. \end{proof} \subsection{Splitting procedure} In this subsection we explain the splitting procedure for two random stable matchings, which is formalized as an algorithm in Appendix \ref{apendice B}. Once we apply the splitting procedure to two random stable matchings, we define a domination relation ($\succeq_F^S$) that is further used to define the binary operations in a simple way. Given two random stable matchings $x=\sum_{i=1}^{I}\alpha_{i}x^{\mu^{x}_{i}}$ and $y=\sum_{j=1}^{J}\beta_{j}x^{\mu^{y}_{j}}$ with $\mu^{x}_i \geq_F \mu^{x}_{i +1}$ for each $i=1,\ldots,I-1$ and $\mu^{y}_j \geq_F \mu^{y}_{j +1}$ for each $j=1,\ldots,J-1$, the splitting procedure goes as follows. Let $\gamma_1=\min\{\alpha_1, \beta_1\}$. W.l.o.g. assume that $\gamma_1=\alpha_1.$ Then, $$x=\gamma_1 x^{\mu^{x}_{1}}+\sum_{i=2}^{I}\alpha_{i}x^{\mu^{x}_{i}},$$ $$y=\gamma_1 x^{\mu^{y}_{1}}+(\beta_1-\gamma_1)x^{\mu^{y}_{1}}+\sum_{j=2}^{J}\beta_{j}x^{\mu^{y}_{j}}.$$ Notice that the first terms of each new representation have the same scalar. Now, take the second scalar of each representation and set $\gamma_2=\min\{\alpha_2, \beta_1-\gamma_1\}$.
If $\gamma_2=\alpha_2$, then $$x=\gamma_1 x^{\mu^{x}_{1}}+\gamma_2 x^{\mu^{x}_{2}}+\sum_{i=3}^{I}\alpha_{i}x^{\mu^{x}_{i}},$$ $$y=\gamma_1 x^{\mu^{y}_{1}}+\gamma_2 x^{\mu^{y}_{1}}+(\beta_1-\gamma_1-\gamma_2)x^{\mu^{y}_{1}}+\sum_{j=2}^{J}\beta_{j}x^{\mu^{y}_{j}}.$$ If $\gamma_2=\beta_1-\gamma_1$, then $$x=\gamma_1 x^{\mu^{x}_{1}}+\gamma_2 x^{\mu^{x}_{2}}+(\alpha_2-\gamma_2) x^{\mu^{x}_{2}}+\sum_{i=3}^{I}\alpha_{i}x^{\mu^{x}_{i}},$$ $$y=\gamma_1 x^{\mu^{y}_{1}}+\gamma_2 x^{\mu^{y}_{1}}+\sum_{j=2}^{J}\beta_{j}x^{\mu^{y}_{j}}.$$ Notice that the first two terms of each new representation have the same scalar. Now take the third scalar of each representation and set either $\gamma_3= \min\{\alpha_3, \beta_1-\gamma_1-\gamma_2\}$ or $\gamma_3=\min \{\alpha_2-\gamma_2, \beta_2\}$, and so on. We illustrate the splitting procedure with the following example. \noindent \textbf{Example 1 (Continued)} \textit{Let $x=\frac{1}{4}x^{\nu_1}+\frac{1}{2}x^{\nu_2}+\frac{1}{4}x^{\nu_4}$ and $y=\frac{1}{6}x^{\nu_1}+\frac{1}{2}x^{\nu_3}+\frac{1}{3}x^{\nu_4}$. Notice that both random stable matchings are represented following Theorem \ref{teorema del orden}. Let $\gamma_1=\min\{\frac{1}{4},\frac{1}{6}\}=\frac{1}{6},$ then \begin{center} $x=\frac{1}{6}x^{\nu_1}+(\frac{1}{4}-\frac{1}{6})x^{\nu_1}+\frac{1}{2}x^{\nu_2}+\frac{1}{4}x^{\nu_4},$ \end{center} \begin{center} $y=\frac{1}{6}x^{\nu_1}+\frac{1}{2}x^{\nu_3}+\frac{1}{3}x^{\nu_4}.$ \end{center} Notice that the first term of each new representation has the same scalar $\frac{1}{6}$. Let $\gamma_2=\min\{\frac{1}{4}-\frac{1}{6},\frac{1}{2}\}=\frac{1}{4}-\frac{1}{6}=\frac{1}{12},$ then \begin{center} $x=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{2}x^{\nu_2}+\frac{1}{4}x^{\nu_4},$ \end{center} \begin{center} $y=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+(\frac{1}{2}-\frac{1}{12})x^{\nu_3}+\frac{1}{3}x^{\nu_4}.$ \end{center} Notice that the second term of each new representation also has the same scalar $\frac{1}{12}$. Let $\gamma_3=\min\{\frac{1}{2},\frac{1}{2}-\frac{1}{12}\}=\frac{1}{2}-\frac{1}{12}=\frac{5}{12},$ then \begin{center} $x=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{5}{12}x^{\nu_2}+(\frac{1}{2}-\frac{5}{12})x^{\nu_2}+\frac{1}{4}x^{\nu_4},$ \end{center} \begin{center} $y=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{5}{12}x^{\nu_3}+\frac{1}{3}x^{\nu_4}.$ \end{center} Notice that the third term of each new representation also has the same scalar $\frac{5}{12}$. Let $\gamma_4=\min\{\frac{1}{2}-\frac{5}{12},\frac{1}{3}\}=\frac{1}{2}-\frac{5}{12}=\frac{1}{12},$ then \begin{center} $x=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{5}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2}+\frac{1}{4}x^{\nu_4},$ \end{center} \begin{center} $y=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{5}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_4}+(\frac{1}{3}-\frac{1}{12})x^{\nu_4}.$ \end{center} Notice that the fourth term of each new representation also has the same scalar $\frac{1}{12}$. Let $\gamma_5=\min\{\frac{1}{4},\frac{1}{3}-\frac{1}{12}\}=\min\{\frac{1}{4},\frac{1}{4}\}=\frac{1}{4},$ then \begin{center} $x=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{5}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2}+\frac{1}{4}x^{\nu_4},$ \end{center} \begin{center} $y=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{5}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_4}+\frac{1}{4}x^{\nu_4}.$ \end{center} Notice that the fifth term of each new representation also has the same scalar $\frac{1}{4}$. Now, once the splitting procedure is complete, both $x$ and $y$ have five terms in their representations. Moreover, both lotteries have the same scalars, term by term.}
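The splitting procedure is mechanical; the following minimal Python sketch (ours) reproduces the scalars $\frac{1}{6},\frac{1}{12},\frac{5}{12},\frac{1}{12},\frac{1}{4}$ of the example above:
\begin{verbatim}
from fractions import Fraction

def split(x, y):
    """Given two decreasing representations [(matching, weight), ...],
    refine both into lotteries with identical scalars, term by term."""
    xs, ys, out = list(x), list(y), []
    i = j = 0
    ai, bj = xs[0][1], ys[0][1]
    while i < len(xs) and j < len(ys):
        g = min(ai, bj)
        out.append((g, xs[i][0], ys[j][0]))
        ai, bj = ai - g, bj - g
        if ai == 0:
            i += 1
            if i < len(xs): ai = xs[i][1]
        if bj == 0:
            j += 1
            if j < len(ys): bj = ys[j][1]
    return out  # [(gamma_l, mu_x_l, mu_y_l), ...]

x = [("nu1", Fraction(1,4)), ("nu2", Fraction(1,2)), ("nu4", Fraction(1,4))]
y = [("nu1", Fraction(1,6)), ("nu3", Fraction(1,2)), ("nu4", Fraction(1,3))]
for g, mx, my in split(x, y):
    print(g, mx, my)   # gammas 1/6, 1/12, 5/12, 1/12, 1/4
\end{verbatim}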
\medskip Algorithm 2, presented in Appendix \ref{apendice B}, is the formalization of the splitting procedure for two random stable matchings. In Appendix \ref{apendice B}, using the same example, we illustrate the splitting procedure with Algorithm 2, detailing the procedure step by step. Further, the following proposition states that the splitting procedure only changes the representations of the two random stable matchings, not the lotteries themselves. \begin{proposition}\label{proposicion reescribir con algoritmo} Let $x$ and $y$ be two random stable matchings such that $$x=\sum_{\ell=1}^{I}\alpha^0_{\ell}x^{\mu^{x}_{\ell}} \text{ ~~~~~ and ~~~~~} y=\sum^J_{\ell=1}\beta^0_{\ell}x^{\mu^{y}_{\ell}}.$$ Then, there is $\Omega= \left\{\left(\gamma_\ell,~\tilde{\mu}^x_\ell,~\tilde{\mu}^y_\ell \right): \ell=1,\ldots,\tilde{k}\right\}$ defined by Algorithm 2, where $\tilde{k}$ is the last step of the algorithm, such that $$x=\sum_{\ell=1}^{\tilde{k} }\gamma_\ell x^{\tilde{\mu}_\ell^{x}} \text{ ~~~~~ and ~~~~~}y=\sum _{\ell=1}^{\tilde{k}}\gamma_\ell x^{\tilde{\mu}_\ell^{y}}.$$ \end{proposition} \begin{proof} See proof in Appendix \ref{apendice B}. \end{proof} Once two random stable matchings go through the splitting procedure, we can define the following domination relation. This domination relation, together with its equivalence to the partial order defined in Section \ref{order}, is used in the next section to prove the main result. \begin{definition} Let $x$ and $y$ be two random stable matchings such that $$x=\sum_{\ell=1}^{\tilde{k} }\gamma_\ell x^{\tilde{\mu}_\ell^{x}} \text{ ~~~~~ and ~~~~~}y=\sum _{\ell=1}^{\tilde{k}}\gamma_\ell x^{\tilde{\mu}_\ell^{y}},$$ where $0< \gamma_{\ell} \leq 1$ for each $\ell=1,\ldots,\tilde{k}$, $\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}=1$, and $\tilde{\mu}_\ell^{x} \geq_F \tilde{\mu}_{\ell+1}^{x}$ and $\tilde{\mu}_\ell^{y} \geq_F \tilde{\mu}_{\ell+1}^{y}$ for each $\ell=1,\ldots,\tilde{k}-1$. We say that $\boldsymbol{x}$ \textbf{splittely dominates} $\boldsymbol{y}$ for all firms ($\boldsymbol{x\succeq_{F}^{S} y}$) if $\tilde{\mu}^{x}_{\ell} \geq_{F} \tilde{\mu}^{y}_{\ell}$ for each $\ell=1,\ldots,\tilde{k}$. Analogously, we define $\boldsymbol{x\succeq^S_{W} y}$ for all workers. \end{definition} \begin{remark} Notice that the partial order $\succeq_F$ ($\succeq_W$) compares two random stable matchings regardless of whether or not they have been split. \end{remark} The following proposition proves that the two domination relations ($\succeq_F$ and $\succeq_F^S$) are equivalent. \begin{proposition}\label{equivalencia de ordenes} The partial order $\succeq_F$ is equivalent to the domination relation $\succeq_F^S$. \end{proposition} \begin{proof} See proof in Appendix \ref{apendice B}. \end{proof} \begin{corollary} The domination relation $\succeq_F^S$ is also a partial order. \end{corollary} \section{Main result}\label{seccion main result} Given two random stable matchings represented after the splitting procedure, we define binary operations for random stable matchings that compute the \textit{l.u.b.} and \textit{g.l.b.} for each side of the market. Further, we state the main result of this paper: the set of random stable matchings has a dual lattice structure. Recall that $\vee_W$, $\wedge_W$, $\vee_F$ and $\wedge_F$ are the binary operations relative to the partial orders $\geq_{W}$ and $\geq_{F}$ defined between two (deterministic) stable matchings. Now, we extend these binary operations to random stable matchings.
Formally, \begin{definition}\label{defino operaciones binarias} Let $x$ and $y$ be two random stable matchings such that $$x=\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}x^{\tilde{\mu}^{x}_{\ell}} \text{~~~~ and ~~~~}y=\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}x^{\tilde{\mu}^{y}_{\ell}}, $$ where $0< \gamma_{\ell} \leq 1$ for each $\ell=1,\ldots,\tilde{k}$, $\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}=1$, $\tilde{\mu}^{x}_{\ell},\tilde{\mu}^{y}_{\ell}\in{\mathcal{S(P)}}$, and $\tilde{\mu}^{x}_{\ell} \geq_{F} \tilde{\mu}^{x}_{\ell+1}$ and $\tilde{\mu}^{y}_{\ell} \geq_{F} \tilde{\mu}^{y}_{\ell+1}$ for each $\ell=1,\ldots,\tilde{k}-1$. Define $\boldsymbol{x\veebar_F y,~x \barwedge_F y,~x\veebar_W y \text{ and }x \barwedge_W y}$ as follows: $$\boldsymbol{x\veebar_F y}:=\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}x^{\tilde{\mu}^{x}_{\ell}\vee_F \tilde{\mu}^{y}_{\ell}} \text{~~,~~~~~} \boldsymbol{x \barwedge_F y}:=\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}x^{\tilde{\mu}^{x}_{\ell} \wedge_F \tilde{\mu}^{y}_{\ell}},$$ and $$\boldsymbol{x\veebar_W y}:=\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}x^{\tilde{\mu}^{x}_{\ell}\vee_W \tilde{\mu}^{y}_{\ell}} \text{~~,~~~~~} \boldsymbol{x \barwedge_W y}:=\sum_{\ell=1}^{\tilde{k}}\gamma_{\ell}x^{\tilde{\mu}^{x}_{\ell} \wedge_W \tilde{\mu}^{y}_{\ell}}.$$ \end{definition} \begin{remark} It is straightforward by Remark \ref{operaciones en matching es estable} that $x\veebar_F y,~x \barwedge_F y,~x\veebar_W y \text{ and }x \barwedge_W y$ are random stable matchings. \end{remark} Now, we are in a position to prove that these binary operations defined for random stable matchings are indeed the \textit{l.u.b.} and \textit{g.l.b.} for each side of the market. \begin{proposition}\label{teorema de operaciones binarias} Let $x$ and $y$ be two random stable matchings. Then, for $X\in \{F,W\}$ we have that $$x\veebar_X y =\textit{l.u.b.}_{\succeq_X}(x,y) \text{ ~~and~~ } x\barwedge_X y =\textit{g.l.b.}_{\succeq_X}(x,y).$$ Also, $$x\veebar_F y =x\barwedge_W y \text{ ~~and~~ } x\veebar_W y =x\barwedge_F y.$$ \end{proposition} \begin{proof} See proof in Appendix \ref{apendice B}. \end{proof} Now, we are in a position to state the main result as follows. \begin{theorem}\label{teorema principal} $(\mathcal{RS(P)},\succeq_F,\veebar_F,\barwedge_F)$ and $(\mathcal{RS(P)},\succeq_W,\veebar_W,\barwedge_W)$ are dual lattices. \end{theorem} The following example illustrates how to compute the binary operations for two random stable matchings.

\noindent \textbf{Example 1 (Continued)}\textit{ Given $x$ and $y$ represented as in Proposition \ref{proposicion reescribir con algoritmo}, we compute $x\veebar_F y$ and $x\barwedge_F y$ as follows (the other two cases are similar):} \begin{center} $ x=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{5}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2}+\frac{1}{4}x^{\nu_4}, $ \end{center} \begin{center} $y=\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{5}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_4}+\frac{1}{4}x^{\nu_4}. $ \end{center} \begin{center} $ x\veebar_F y=\frac{1}{6}x^{\nu_1 \vee_F \nu_1}+\frac{1}{12}x^{\nu_1 \vee_F \nu_3}+\frac{5}{12}x^{\nu_2 \vee_F \nu_3}+\frac{1}{12}x^{\nu_2 \vee_F \nu_4}+\frac{1}{4}x^{\nu_4 \vee_F \nu_4} $ \end{center} \begin{center} $ =\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{5}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_2}+\frac{1}{4}x^{\nu_4} $ \end{center} \begin{center} $ =\frac{2}{3}x^{\nu_1}+\frac{1}{12}x^{\nu_2}+\frac{1}{4}x^{\nu_4}. $ \end{center} \begin{center} $ x\barwedge_F y=\frac{1}{6}x^{\nu_1 \wedge_F \nu_1}+\frac{1}{12}x^{\nu_1 \wedge_F \nu_3}+\frac{5}{12}x^{\nu_2 \wedge_F \nu_3}+\frac{1}{12}x^{\nu_2 \wedge_F \nu_4}+\frac{1}{4}x^{\nu_4 \wedge_F \nu_4} $ \end{center} \begin{center} $ =\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{5}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}+\frac{1}{4}x^{\nu_4} $ \end{center} \begin{center} $ =\frac{1}{6}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{3}{4}x^{\nu_4}. $ \end{center} \subsection{Binary operations for rational random stable matchings}\label{rational} In this subsection, we compute the \textit{g.l.b.} and \textit{l.u.b.} of two random stable matchings in which each scalar of the lottery is a \textit{rational number}. These random stable matchings are called \textbf{rational random stable matchings}. In this case, the splitting procedure is direct and differs from the procedure described by Algorithm 2. Let $x$ and $y$ be two rational random stable matchings, represented as follows: \begin{equation}\label{alpha racional} x=\sum_{i=1}^{I}\alpha_{i}x^{\mu^{x}_{i}}, \end{equation} such that $0< \alpha_{i} \leq 1$ for each $i=1,\ldots,I$; $\sum_{i=1}^{I}\alpha_{i}=1$; $\mu^{x}_{i}\in{\mathcal{S(P)}}$; each $\alpha_{i}$ is a rational number; and $\mu^{x}_{i} >_{F} \mu^{x}_{i+1}$ for $i=1,\ldots,I-1$; and \begin{equation}\label{beta racional} y=\sum_{j=1}^{J}\beta_{j}x^{\mu^{y}_{j}}, \end{equation} such that $0< \beta_{j} \leq 1$ for each $j=1,\ldots,J$; $\sum_{j=1}^{J}\beta_{j}=1$; $\mu^{y}_{j}\in{\mathcal{S(P)}}$; each $\beta_{j}$ is a rational number; and $\mu^{y}_{j} >_{F} \mu^{y}_{j+1}$ for $j=1,\ldots,J-1$. Since the $\alpha_i$ and $\beta_j$ are positive rational numbers, for each $\alpha_i$ there are natural numbers $a_i,~b_i$ such that $\alpha_i=\frac{a_i}{b_i}$. Similarly, for each $\beta_j$ there are natural numbers $c_j,~d_j$ such that $\beta_j=\frac{c_j}{d_j}$. Denote by $e$ the \textit{least common multiple} (\textbf{\textit{lcm}}) of all the denominators $b_i,~d_j$ for $i=1,\ldots, I$ and $j=1,\ldots, J$. That is, $$ e=\textit{lcm}(b_1,\ldots,b_I,d_1,\ldots,d_J). $$ Then, we can write $\alpha_i=\frac{a_i}{b_i}=\frac{a_i \frac{e}{b_i}}{e}$ and $\beta_j=\frac{c_j}{d_j}=\frac{c_j \frac{e}{d_j}}{e}$ for each $i=1,\ldots, I$ and each $j=1,\ldots, J$. Hence, we can write all the scalars $\alpha$ and $\beta$ with the same denominator. Set $\gamma_k=\frac{1}{e}$ for $k=1,\ldots,e$ and define $$ \boldsymbol{\tilde{\mu}_k^{x}}:=\left\{ \begin{array}{ll} \mu^x_{1} & \text{~for~} k=1,\ldots,\frac{a_1}{b_1}e\\ \mu^x_{2} & \text{~for~} k=\frac{a_1}{b_1}e+1,\ldots,\left(\frac{a_2}{b_2}+\frac{a_1}{b_1}\right) e\\ \vdots & ~~~\vdots\\ \mu^x_{I} & \text{~for~} k=\left(\displaystyle\sum_{n=1}^{I-1}\frac{a_{n}}{b_{n}}\right) e+1,\ldots,\left(\displaystyle\sum_{n=1}^{I}\frac{a_{n}}{b_{n}}\right) e\\ \end{array} \right. $$ $$ \boldsymbol{\tilde{\mu}_k^{y}}:=\left\{ \begin{array}{ll} \mu^y_{1} & \text{~for~} k=1,\ldots,\frac{c_1}{d_1}e\\ \mu^y_{2} & \text{~for~} k=\frac{c_1}{d_1}e+1,\ldots,\left(\frac{c_2}{d_2}+\frac{c_1}{d_1}\right) e\\ \vdots & ~~~\vdots\\ \mu^y_{J} & \text{~for~} k=\left(\displaystyle\sum_{m=1}^{J-1}\frac{c_{m}}{d_{m}}\right) e+1,\ldots,\left(\displaystyle\sum_{m=1}^{J}\frac{c_{m}}{d_{m}}\right) e\\ \end{array} \right. $$ Then, we have that \begin{equation}\label{equacion racional x} x=\sum_{i=1}^{I}\alpha_{i}x^{\mu^{x}_{i}}=\sum_{i=1}^{I}\frac{a_i}{b_i}x^{\mu^{x}_{i}}=\sum_{i=1}^{I}\frac{a_i \frac{e}{b_i}}{e}x^{\mu^{x}_{i}}=\sum_{k=1}^{e}\frac{1}{e}x^{\tilde{\mu}^x_k}.
\end{equation} Analogously, we have that \begin{equation}\label{equacion racional y} y=\sum_{j=1}^{J}\beta_{j}x^{\mu^{y}_{j}}=\sum_{j=1}^{J}\frac{c_j}{d_j}x^{\mu^{y}_{j}}=\sum_{j=1}^{J}\frac{c_j \frac{e}{d_j}}{e}x^{\mu^{y}_{j}}=\sum_{k=1}^{e}\frac{1}{e}x^{\tilde{\mu}^y_k}. \end{equation} Given two rational random stable matchings $x$ and $y$, represented as in (\ref{equacion racional x}) and (\ref{equacion racional y}), to compute $x\veebar_F y$, $x \barwedge_F y$, $x\veebar_W y$ and $x \barwedge_W y$ we state the following corollary of Proposition \ref{teorema de operaciones binarias}. \begin{corollary}\label{corolario para rational} Let $x$ and $y$ be two rational random stable matchings (i.e., each $\alpha$ and each $\beta$ in (\ref{alpha racional}) and (\ref{beta racional}) is a rational number). Then, for $X\in \{F,W\}$ we have that $$x\veebar_X y=\sum_{k=1}^e\frac{1}{e}x^{\tilde{\mu}^{x}_{k}\vee_X \tilde{\mu}^{y}_{k}} \text{~~~~ and ~~~~} x \barwedge_X y=\sum_{k=1}^e\frac{1}{e}x^{\tilde{\mu}^{x}_k \wedge_X \tilde{\mu}^{y}_k}.$$ \end{corollary} \noindent \textbf{Example 1 (Continued)}\textit{ Let $x$ and $y$ be two random stable matchings represented as in Theorem \ref{teorema del orden},} \begin{center} $x=\frac{1}{4}x^{\nu_1}+\frac{1}{2}x^{\nu_2}+\frac{1}{4}x^{\nu_4}, $ \end{center} \begin{center} $ y=\frac{1}{6}x^{\nu_1}+\frac{1}{2}x^{\nu_3}+\frac{1}{3}x^{\nu_4}. $ \end{center} \textit{Let $e=\textit{lcm}(2,3,4,6)=12$. Then, the random stable matchings $x$ and $y$ can be represented as:} \begin{center} $ x=\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_2} $ \end{center} \begin{center} $+\frac{1}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}, $ \end{center} \begin{center} $ y=\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_3}+\frac{1}{12}x^{\nu_3} $ \end{center} \begin{center} $ +\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}. $ \end{center} \textit{Then, } \begin{center} $ x\veebar_F y=\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_1} $ \end{center} \begin{center} $ +\frac{1}{12}x^{\nu_1}+\frac{1}{12}x^{\nu_2}+\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4}+\frac{1}{12}x^{\nu_4} $ \end{center} \begin{center} $ =\frac{2}{3}x^{\nu_1}+\frac{1}{12}x^{\nu_2}+\frac{1}{4}x^{\nu_4}. $ \end{center} \textit{Analogously for $x\barwedge_F y,~x\veebar_W y$ and $x\barwedge_W y.$} \section{Concluding remarks}\label{conclusiones} In this paper, we prove an important result that involves two extensively studied topics in the matching literature: random stable matchings and lattice structures. The many-to-many matching markets with substitutable preferences satisfying the L.A.D. are the most general matching markets in which it is known that the binary operations between two stable matchings (\textit{l.u.b.} and \textit{g.l.b.}) are computed via pointing functions. For these markets, we prove that the set of random stable matchings endowed with a partial order has a dual lattice structure. Moreover, we present natural binary operations to compute the \textit{l.u.b.} and \textit{g.l.b.} of two random stable matchings for each side of the market.
The partial order defined in this paper is a generalization of first-order stochastic dominance to the case in which agents have substitutable preferences satisfying the L.A.D. For more general matching markets, for instance markets that only satisfy substitutability (but not the L.A.D.), the binary operations between (deterministic) stable matchings are computed as fixed points. Hence, the lattice structure of the set of random stable matchings for these markets remains an open problem, left for future research.
https://arxiv.org/abs/1707.03932
Group gradings on the superalgebras M(m,n), A(m,n) and P(n)
We classify gradings by arbitrary abelian groups on the classical simple Lie superalgebras $P(n)$, $n \geq 2$, and on the simple associative superalgebras $M(m,n)$, $m, n \geq 1$, over an algebraically closed field: fine gradings up to equivalence and $G$-gradings, for a fixed group $G$, up to isomorphism. As a corollary, we also classify up to isomorphism the $G$-gradings on the classical Lie superalgebra $A(m,n)$ that are induced from $G$-gradings on $M(m+1,n+1)$. In the case of Lie superalgebras, the characteristic is assumed to be $0$.
\section{Introduction} In the past two decades, gradings on Lie algebras by arbitrary abelian groups have been extensively studied. For finite-dimensional simple Lie algebras over an algebraically closed field $\FF$, the classification of fine gradings up to equivalence has recently been completed (assuming $\Char \FF = 0$) by efforts of many authors --- see the monograph \cite[Chapters 3--6]{livromicha} and the references therein, and also \cite{YuExc} and \cite{E14}. For a fixed abelian group $G$, the classification of $G$-gradings up to isomorphism is also known (assuming $\Char \FF \neq 2$), except for types $E_6$, $E_7$ and $E_8$ --- see \cite{livromicha} and \cite{EK_d4}. This paper is devoted to gradings on finite-dimensional simple Lie superalgebras. Over an algebraically closed field of characteristic $0$, such superalgebras were classified by V.~G.~Kac in \cite{kacZ,artigokac} (see also \cite{livrosuperalgebra}). In \cite{kacZ}, there is also a classification of $\ZZ$-gradings on these superalgebras. More recently, gradings by arbitrary abelian groups have been considered. Fine gradings on the exceptional simple Lie superalgebras, namely, $D(2,1;\alpha)$, $G(3)$ and $F(4)$, were classified in \cite{artigoelduque} and all gradings on the series $Q(n)$, $n\geq 2$, were classified in \cite{paper-Qn}. A description of gradings on matrix superalgebras, here denoted by $M(m,n)$ (see Section \ref{sec:Mmn}), was given in \cite{BS}, but the isomorphism problem was left open and fine gradings were not considered. The initial goal of this work was to classify abelian group gradings on the series $P(n)$, $n\geq 2$, and thereby complete the classification of gradings on the so-called ``strange Lie superalgebras''. Our approach led us to the study of gradings on the associative superalgebras $M(m,n)$ and the closely related Lie superalgebras $A(m,n)$. Throughout this work, the canonical $\ZZ_2$-grading of a superalgebra will be denoted by superscripts, reserving subscripts for the components of other gradings. Thus, a $G$-grading on a superalgebra $A = A\even \oplus A\odd$ is a vector space decomposition $\Gamma:\,A = \bigoplus_{g \in G} A_g$ such that $A_g A_h\subseteq A_{gh}$, for all $g,h\in G$, and each $A_g$ is compatible with the superalgebra structure, i.e., $A_g=A_g^\bz \oplus A_g^\bo$. Note that $G$-gradings on a superalgebra can be seen as $G\times \ZZ_2$-gradings on the underlying algebra. For the superalgebras under consideration, namely, $M(m,n)$, $A(m,n)$ and $P(n)$, the canonical $\ZZ_2$-grading can be refined to a canonical $\ZZ$-grading, whose components will be denoted by superscripts $-1, 0, 1$. Only gradings by abelian groups are discussed in this work, which is no loss of generality in the case of simple Lie superalgebras, because the support always generates an abelian group. All the (super)algebras and vector (super)spaces are assumed to be finite-dimensional over a fixed algebraically closed field $\FF$. When dealing with the Lie superalgebras $A(m,n)$ and $P(n)$, we will also assume $\Char \FF = 0$. The paper is structured as follows. Sections \ref{sec:generalities} and \ref{sec:gradings-on-matrix-algebras} have no original results. In the former, we introduce all basic definitions and a few general results for future reference, and the latter is a review of the classification of gradings on matrix algebras closely following \cite[Chapter 2]{livromicha}, with a slight change in notation. 
Section \ref{sec:Mmn} is devoted to the associative superalgebras $M(m,n)$, which have two kinds of gradings: the \emph{even gradings} are compatible with the canonical $\ZZ$-grading and the \emph{odd gradings} are not. (The latter can occur only if $m=n$.) The classification results for even gradings are Theorems \ref{thm:even-assc-iso} ($G$-gradings up to isomorphism) and \ref{thm:class-fine-even} (fine gradings up to equivalence). We present two descriptions of odd gradings: one as $G\times \ZZ_2$-gradings on the underlying matrix algebra (see Subsection \ref{ssec:grds-on-superalgebras}) and the other purely in terms of the group $G$ (see Subsection \ref{ssec:second-odd}). We classify odd gradings in Theorems \ref{thm:first-odd-iso} and \ref{thm:2nd-odd-iso} ($G$-gradings up to isomorphism) and in Theorem \ref{thm:class-fine-odd} (fine gradings up to equivalence). In Section \ref{sec:Amn}, we consider gradings on the Lie superalgebras $A(m,n)$, but only those that are induced from $M(m+1, n+1)$ (see Definition \ref{def:Type-I}). We classify them up to isomorphism in Theorem \ref{thm:even-Lie-iso} (even gradings) and in Theorem \ref{thm:first-odd-Lie-iso} and Corollary \ref{cor:2nd-odd-Lie-iso} (odd gradings). In Section \ref{sec:Pn}, we classify gradings on the Lie superalgebras $P(n)$: see Theorem \ref{thm:Pn-iso} for $G$-gradings up to isomorphism and Theorem \ref{thm:class-fine-Pn} for fine gradings up to equivalence. \section{Generalities on gradings}\label{sec:generalities} The purpose of this section is to fix notation and recall definitions concerning graded algebras and graded modules. \subsection{Gradings on vector spaces and (bi)modules}\label{subsec:graded-bimodules} Let $G$ be a group. By a \emph{$G$-grading} on a vector space $V$ we mean simply a vector space decomposition $\Gamma:\,V = \bigoplus_{g \in G} V_g$ where the summands are labeled by elements of $G$. If $\Gamma$ is fixed, $V$ is referred to as a {\em $G$-graded vector space}. A subspace $W \subseteq V$ is said to be \emph{graded} if $W = \bigoplus_{g \in G} (W \cap V_g)$. We will refer to $\ZZ_2$-graded vector spaces as \emph{superspaces} and their graded subspaces as \emph{subsuperspaces}. An element $v$ in a graded vector space $V = \bigoplus_{g \in G} V_g$ is said to be \emph{homogeneous} if $v\in V_g$ for some $g\in G$. If $0\ne v\in V_g$, we will say that $g$ is the \emph{degree} of $v$ and write $\deg v = g$. In reference to the canonical $\ZZ_2$-grading of a superspace, we will instead speak of the \emph{parity} of $v$ and write $|v| = g$. Every time we write $\deg v$ or $|v|$, it should be understood that $v$ is a nonzero homogeneous element. \begin{defi} Given two $G$-graded vector spaces, $V=\bigoplus_{g\in G} V_g$ and $W=\bigoplus_{g\in G} W_g$, we define their tensor product to be the vector space $V\otimes W$ together with the $G$-grading given by $(V \otimes W)_g = \bigoplus_{ab=g} V_{a} \otimes W_{b}$. \end{defi} The concept of grading on a vector space is connected to gradings on algebras by means of the following: \begin{defi} If $V=\bigoplus_{g\in G} V_{g}$ and $W=\bigoplus_{g\in G} W_{g}$ are two graded vector spaces and $T: V\rightarrow W$ is a linear map, we say that $T$ is \emph{homogeneous of degree $t$}, for some $t\in G$, if $T(V_g)\subseteq W_{tg}$ for all $g\in G$. \end{defi} If $S: U\rightarrow V$ and $T: V\rightarrow W$ are homogeneous linear maps of degrees $s$ and $t$, respectively, then the composition $T\circ S$ is homogeneous of degree $ts$. 
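As a small illustration of these notions (purely illustrative and not part of the formal development; the grading group is $\ZZ$ written additively, and a grading is encoded by the list of degrees of the chosen basis vectors, which is our own convention for this sketch), one can test whether a matrix is homogeneous and watch degrees add under composition:

\begin{verbatim}
import numpy as np

def homogeneous_degree(M, deg_dom, deg_cod):
    # If the matrix M (from V to W, columns indexed by a basis of V) is
    # homogeneous, return its degree t, so that M(V_g) lies in W_{t+g};
    # return None otherwise.  deg_dom[j] is the degree of the j-th basis
    # vector of V, and deg_cod[i] that of the i-th basis vector of W.
    degs = {deg_cod[i] - deg_dom[j]
            for i in range(M.shape[0]) for j in range(M.shape[1])
            if M[i, j] != 0}
    return degs.pop() if len(degs) == 1 else None

deg_V = [0, 0, 1]            # V = V_0 + V_1 with dim V_0 = 2, dim V_1 = 1
S = np.array([[0, 0, 0],
              [0, 0, 0],
              [1, 2, 0]])    # maps V_0 into V_1: homogeneous of degree 1
T = np.array([[0, 0, 3],
              [0, 0, 0],
              [0, 0, 0]])    # maps V_1 into V_0: homogeneous of degree -1
print(homogeneous_degree(S, deg_V, deg_V))       # 1
print(homogeneous_degree(T, deg_V, deg_V))       # -1
print(homogeneous_degree(T @ S, deg_V, deg_V))   # 0, i.e. (-1) + 1
\end{verbatim}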
We define the {\em space of graded linear transformations} from $V$ to $W$ to be: \[ \Hom^{\text{gr}} (V,W) = \bigoplus_{g\in G} \Hom (V,W)_{g}\] where $\Hom (V,W)_{g}$ denotes the set of all linear maps from $V$ to $W$ that are homogeneous of degree $g$. If we assume $V$ to be finite-dimensional then we have $\Hom(V,W)=\Hom^{\gr}(V,W)$ and, in particular, $\End (V) = \bigoplus_{g\in G} \End (V)_g$ is a graded algebra. We also note that $V$ becomes a graded module over $\End(V)$ in the following sense: \begin{defi} Let $A$ be a $G$-graded algebra (associative or Lie) and let $V$ be a (left) module over $A$ that is also a $G$-graded vector space. We say that $V$ is a \emph{graded $A$-module} if $A_g \cdot V_h \subseteq V_{gh}$, for all $g$,$h\in G$. The concept of $G$-\emph{graded bimodule} is defined similarly. \end{defi} If we have a $G$-grading on a Lie superalgebra $L=L\even \oplus L\odd$ then, in particular, we have a grading on the Lie algebra $L\even$ and a grading on the space $L\odd$ that makes it a graded $L\even$-module. If we have a $G$-grading on an associative superalgebra $C=C\even \oplus C\odd$, then $C\odd$ becomes a graded bimodule over $C\even$. If $ \Gamma$ is a $G$-grading on a vector space $V$ and $g\in G$, we denote by $\Gamma^{[g]} $ the grading given by relabeling the component $V_h$ as $V_{hg}$, for all $h \in G$. This is called the \emph{(right) shift of the grading $\Gamma$ by $g$}. We denote the graded space $(V, \, \Gamma^{[g]})$ by $V^{[g]}$. From now on, we assume that $G$ is abelian. If $V$ is a graded module over a graded algebra (or a graded bimodule over a pair of graded algebras), then $V^{[g]}$ is also a graded (bi)module. We will make use of the following partial converse (see e.g. \cite[Proposition 3.5]{paper-Qn}): \begin{lemma}\label{lemma:simplebimodule} Let $A$ and $B$ be $G$-graded algebras and let $V$ be a finite-dimensional (ungraded) simple $A$-module or $(A,B)$-bimodule. If $\Gamma$ and $\Gamma'$ are two $G$-gradings that make $V$ a graded (bi)module, then $\Gamma'$ is a shift of $\Gamma$.\qed \end{lemma} Certain shifts of grading may be applied to graded $\ZZ$- or $\ZZ_2$-superalgebras. In the case of a $\ZZ$-superalgebra $L=L^{-1}\oplus L^{0}\oplus L^{1}$, we have the following: \begin{lemma}\label{lemma:opposite-directions} Let $L=L^{-1}\oplus L^0\oplus L^1$ be a $\ZZ$-superalgebra such that $L^1\, L^{-1}\neq 0$. If we shift the grading on $L^1$ by $g\in G$ and the grading on $L^{-1}$ by $g' \in G$, then we have a grading on $L$ if and only if $g' = g^{-1}$. \qed \end{lemma} We will describe this situation as \emph{shift in opposite directions}. \subsection{Universal grading group, equivalence and isomorphism of gradings} There is a concept of grading not involving groups. A \emph{set grading} on a (super)algebra $A$ is a decomposition $\Gamma:\,A=\bigoplus_{s\in S}A_s$ as a direct sum of sub\-(su\-per)\-spa\-ces indexed by a set $S$ and having the property that, for any $s_1,s_2\in S$ with $A_{s_1}A_{s_2}\ne 0$, there exists $s_3\in S$ such that $A_{s_1}A_{s_2}\subseteq A_{s_3}$. The \emph{support} of $\Gamma$ (or of $A$) is defined to be the set $\supp(\Gamma) := \{s\in S \mid A_s \neq 0\}$. Similarly, $\supp_\bz(\Gamma) := \{s\in S \mid A_s^\bz \neq 0\}$ and $\supp_\bo(\Gamma) := \{s\in S \mid A_s^\bo \neq 0\}$. For a set grading $\Gamma:\,A=\bigoplus_{s\in S}A_s$, there may or may not exist a group $G$ containing $\supp(\Gamma)$ that makes $\Gamma$ a $G$-grading. If such a group exists, $\Gamma$ is said to be a {\em group grading}. 
(As already mentioned, we only consider abelian group gradings in this paper.) However, $G$ is usually not unique even if we require that it should be generated by $\supp(\Gamma)$. The {\em universal (abelian) grading group} of $\Gamma$ is generated by $\supp(\Gamma)$ and has the defining relations $s_1s_2=s_3$ for all $s_1,s_2,s_3\in S$ such that $0\neq A_{s_1}A_{s_2}\subseteq A_{s_3}$. This group is universal among all (abelian) groups that realize the grading $\Gamma$ (see e.g. \cite[Chapter 1]{livromicha} for details). Let $\Gamma:\,A=\bigoplus_{g\in G} A_g$ and $\Delta:\,B=\bigoplus_{h\in H} B_h$ be two group gradings on the (super)algebras $A$ and $B$, with supports $S$ and $T$, respectively. We say that $\Gamma$ and $\Delta$ are {\em equivalent} if there exists an isomorphism of (super)algebras $\vphi: A\to B$ and a bijection $\alpha: S\to T$ such that $\vphi(A_s)=B_{\alpha(s)}$ for all $s\in S$. If $G$ and $H$ are universal grading groups, then $\alpha$ extends to an isomorphism $G\to H$. In the case $G=H$, the $G$-gradings $\Gamma$ and $\Delta$ are {\em isomorphic} if $A$ and $B$ are isomorphic as $G$-graded (super)algebras, i.e., if there exists an isomorphism of (super)algebras $\vphi: A\to B$ such that $\vphi(A_g)=B_g$ for all $g\in G$. If $\Gamma:\,A=\bigoplus_{g\in G} A_g$ and $\Gamma':\,A=\bigoplus_{h\in H} A'_h$ are two gradings on the same (super)algebra $A$, with supports $S$ and $T$, respectively, then we will say that $\Gamma'$ is a {\em refinement} of $\Gamma$ (or $\Gamma$ is a {\em coarsening} of $\Gamma'$) if, for any $t\in T$, there exists (unique) $s\in S$ such that $A'_t\subseteq A_s$. If, moreover, $A'_t\ne A_s$ for at least one $t\in T$, then the refinement is said to be {\em proper}. A grading $\Gamma$ is said to be {\em fine} if it does not admit any proper refinements. Note that if $A$ is a superalgebra, then $A=\bigoplus_{(g,i)\in G\times\ZZ_2}A_g^i$ is a refinement of $\Gamma$. It follows that if $\Gamma$ is fine, then the sets $\supp_\bz(\Gamma)$ and $\supp_\bo(\Gamma)$ are disjoint. If, moreover, $G$ is the universal group of $\Gamma$, then the superalgebra structure on $A$ is given by the unique homomorphism $p: G \to \ZZ_2$ that sends $\supp_\bz(\Gamma)$ to $\bar 0$ and $\supp_\bo(\Gamma)$ to $\bar 1$. \begin{defi} Let $G$ and $H$ be groups, $\alpha:G\to H$ be a group homomorphism and $\Gamma:\,A=\bigoplus_{g\in G} A_g$ be a $G$-grading. The \emph{coarsening of $\Gamma$ induced by $\alpha$} is the $H$-grading ${}^\alpha \Gamma: A= \bigoplus_{h\in H} B_h$ where $ B_h = \bigoplus_{g\in \alpha\inv (h)} A_g$. (This coarsening is not necessarily proper.) \end{defi} The following result appears to be ``folklore''. We include a proof for completeness. \begin{lemma}\label{lemma:universal-grp} Let $\mathcal{F}=\{\Gamma_i\}_{i\in I}$ be a family of pairwise nonequivalent fine (abelian) group gradings on a (super)algebra $A$, where $\Gamma_i$ is a $G_i$-grading and $G_i$ is generated by $\supp(\Gamma_i)$. Suppose that $\mathcal{F}$ has the following property: for any grading $\Gamma$ on $A$ by an (abelian) group $H$, there exists $i\in I$ and a homomorphism $\alpha:G_i\to H$ such that $\Gamma$ is isomorphic to ${}^\alpha\Gamma_i$. Then % \begin{enumerate}[(i)] \item every fine (abelian) group grading on $A$ is equivalent to a unique $\Gamma_i$; \item for all $i$, $G_i$ is the universal (abelian) group of $\Gamma_i$. \end{enumerate} \end{lemma} \begin{proof} Let $\Gamma$ be a fine grading on $A$, realized over its universal group $H$.
Then there is $i\in I$ and $\alpha: G_i \to H$ such that ${}^\alpha \Gamma_i \iso \Gamma$. Writing $\Gamma_i: A = \bigoplus_{g\in G_i} A_g$ and $\Gamma: A = \bigoplus_{h\in H} B_h$, we then have $\vphi \in \Aut(A)$ such that \[ \vphi\,\big( \bigoplus_{g\in \alpha\inv (h)} A_g \big) = B_h \] for all $h\in H$. Since $\vphi$ is an automorphism, the image of $\Gamma_i$ under $\vphi$ is a grading on $A$ that refines $\Gamma$; as $\Gamma$ is fine, this refinement cannot be proper. Hence, for every $h\in H$ with $B_h \neq 0$, there is a unique $g\in G_i$ such that $\alpha(g) = h$ and $A_g\neq 0$, and then $\vphi(A_g) = B_h$. Equivalently, $\alpha$ restricts to a bijection $\supp(\Gamma_i) \to \supp(\Gamma)$ and $\vphi(A_g) = B_{\alpha(g)}$ for all $g \in S_i:= \supp (\Gamma_i)$. This proves assertion $(i)$. Let $G$ be the universal group of $\Gamma_i$. It follows that, for all $s_1, s_2, s_3 \in S_i$, % \begin{equation*} \label{eq:relations-unvrsl-grp} \begin{split} & s_1s_2 = s_3 \text{ is a defining relation of } G \\ \iff & 0 \neq A_{s_1} A_{s_2} \subseteq A_{s_3}\\ \iff & 0 \neq B_{\alpha(s_1)} B_{\alpha(s_2)} \subseteq B_{\alpha (s_3)}\\ \iff & \alpha(s_1)\alpha(s_2) = \alpha(s_3) \text{ is a defining relation of } H. \end{split} \end{equation*} % Therefore, the bijection $\alpha\restriction_{S_i}$ extends uniquely to an isomorphism $\beta: G\rightarrow H$. By the universal property of $G$, there is a unique homomorphism $\gamma: G\to G_i$ that restricts to the identity on $S_i$. Hence, the following diagram commutes: % \begin{center} \begin{tikzcd} G \arrow[to=Gi, "\gamma"] \arrow[to = H, "\beta"]&&\\ && |[alias=H]|H\\ |[alias=Gi]|G_i \arrow[to=H, "\alpha"]&& \end{tikzcd} \end{center} % Since $\beta$ is an isomorphism, $\gamma$ must be injective. But $\gamma$ is also surjective since $S_i$ generates $G_i$. Hence $G_i$ is isomorphic to $G$. Since $\Gamma$ was an arbitrary fine grading, for each given $j\in I$, we can take $\Gamma = \Gamma_j$ (hence, $i=j$ and $H=G$). This concludes the proof of $(ii)$. \end{proof} \begin{defi} Let $\Gamma$ be a grading on an algebra $A$. We define $\Aut(\Gamma)$ as the group of all self-equivalences of $\Gamma$, i.e., automorphisms of $A$ that permute the components of $\Gamma$. Let $\operatorname{Stab}(\Gamma)$ be the subgroup of $\Aut(\Gamma)$ consisting of the automorphisms that fix each component of $\Gamma$. Clearly, $\operatorname{Stab}(\Gamma)$ is a normal subgroup of $\Aut(\Gamma)$, so we can define the \emph{Weyl group} of $\Gamma$ by $\operatorname{W} (\Gamma) := \Aut(\Gamma)/\operatorname{Stab}(\Gamma)$. The group $\operatorname{W} (\Gamma)$ can be seen as a subgroup of the permutation group of the support and also as a subgroup of the automorphism group of the universal group of $\Gamma$. \end{defi} \subsection{Correspondence between $G$-gradings and $\widehat G$-actions}\label{ssec:G-hat-action} One of the most important tools for dealing with gradings by abelian groups on (super)algebras is to translate a $G$-grading into a $\widehat G$-action, where $\widehat G$ is the algebraic group of characters of $G$, \ie, group homomorphisms $G \rightarrow \FF^{\times}$. The group $\widehat{G}$ acts on any $G$-graded (super)algebra $A = \bigoplus_{g\in G} A_g$ by $\chi \cdot a = \chi(g) a$ for all $a\in A_g$ (extended to arbitrary $a\in A$ by linearity). The map given by the action of a character $\chi \in \widehat{G}$ is an automorphism of $A$. If $\FF$ is algebraically closed and $\Char \FF = 0$, then $A_g = \{ a\in A \mid \chi \cdot a = \chi (g) a \text{ for all } \chi\in \widehat G\}$, so the grading can be recovered from the action.
For example, if $A=A\even \oplus A\odd$ is a superalgebra, the action of the nontrivial character of $\ZZ_2$ yields the \emph{parity automorphism} $\upsilon$, which acts as the identity on $A\even$ and as the negative identity on $A\odd$. If $A$ is a $\ZZ$-graded algebra, we get a representation $\widehat \ZZ = \FF^\times \rightarrow \Aut (A)$ given by $\lambda \mapsto \upsilon_\lambda$, where $\upsilon_{\lambda}$ acts as $\lambda^i \id$ on $A^i$. A grading on a (super)algebra over an algebraically closed field of characteristic $0$ is said to be \emph{inner} if it corresponds to an action by inner automorphisms. For example, the inner gradings on $\Sl(n)$ (also known as Type I gradings) are precisely the restrictions of gradings on the associative algebra $M_n(\FF)$. \section{Gradings on matrix algebras} \label{sec:gradings-on-matrix-algebras} In this section we will recall the classification of gradings on matrix algebras. We will follow \cite[Chapter 2]{livromicha} but use slightly different notation, which we will extend to superalgebras in Section \ref{sec:Mmn}. The following is the graded version of a classical result (see e.g. \cite[Theorem 2.6]{livromicha}). We recall that a \emph{graded division algebra} is a graded unital associative algebra such that every nonzero homogeneous element is invertible. \begin{thm}\label{thm:End-over-D} Let $G$ be a group and let $R$ be a $G$-graded associative algebra that has no nontrivial graded ideals and satisfies the descending chain condition on graded left ideals. Then there is a $G$-graded division algebra $\D$ and a graded (right) $\D$-module $\mc{V}$ such that $R \simeq \End_{\D} (\mc{V})$ as graded algebras.\qed \end{thm} We apply this result to the algebra $R=M_n(\FF)$ equipped with a grading by an abelian group $G$. We will now introduce the parameters that determine $\mc D$ and $\mc V$, and give an explicit isomorphism $\End_{\D} (\mc{V})\simeq M_n(\FF)$ (see Definition \ref{def:explicit-grd-assoc}). Let $\D$ be a finite-dimensional $G$-graded division algebra. It is easy to see that $T= \supp \D$ is a finite subgroup of $G$. Also, since we are over an algebraically closed field, each homogeneous component $\D_t$, for $t\in T$, is one-dimensional. We can choose a generator $X_t$ for each $\D_t$. It follows that, for every $u,v\in T$, there is a unique nonzero scalar $\beta (u,v)$ such that $X_u X_v = \beta (u,v) X_v X_u$. Clearly, $\beta (u,v)$ does not depend on the choice of $X_u$ and $X_v$. The map $\beta: T\times T \rightarrow \FF^{\times}$ is a \emph{bicharacter}, \ie, both maps $\beta(t,\cdot)$ and $\beta(\cdot,t)$ are characters for every $t \in T$. It is also \emph{alternating} in the sense that $\beta (t,t) = 1$ for all $t\in T$. We define the \emph{radical} of $\beta$ as the set $\rad \beta = \{ t\in T \mid \beta(t, T) = 1 \}$. In the case we are interested in, where $\D$ is simple as an algebra, the bicharacter $\beta$ is \emph{nondegenerate}, \ie, $\rad \beta = \{e\} $. The isomorphism classes of $G$-graded division algebras that are finite-dimensional and simple as algebras are in one-to-one correspondence with the pairs $(T,\beta)$ where $T$ is a finite subgroup of $G$ and $\beta$ is an alternating nondegenerate bicharacter on $T$ (see e.g. \cite[Section 2.2]{livromicha} for a proof). Using that the bicharacter $\beta$ is nondegenerate, we can decompose the group $T$ as $A\times B$, where the restriction of $\beta$ to each of the subgroups $A$ and $B$ is trivial, and hence $A$ and $B$ are in duality by $\beta$.
We can choose the elements $X_t\in \D_t$ in a convenient way (see \cite[Remark 2.16]{livromicha} and \cite[Remark 18]{EK15}) such that $X_{ab}=X_aX_b$ for all $a\in A$ and $b\in B$. Using this choice, we can define an action of $\D$ on the vector space underlying the group algebra $\FF B$, by declaring $X_a\cdot e_{b'} = \beta(a, b') e_{b'}$ and $X_b\cdot e_{b'} = e_{bb'}$. This action allows us to identify $\D$ with $\End{(\FF B)}$. Using the basis $\{e_{b}\mid b\in B\}$ in $\FF B$, we can see it as a matrix algebra, where \[X_{ab}= \sum_{b'\in B} \beta(a, bb') E_{bb', b'}\] and $E_{b'', b'}$, with $b', b'' \in B$, is a matrix unit, namely, the matrix of the operator that sends $e_{b'}$ to $e_{b''}$ and sends all other basis elements to zero. \begin{defi} We will refer to these matrix models of $\mc D$ as its \emph{standard realizations}. \end{defi} \begin{remark}\label{rmk:2-grp-transp} The matrix transposition is always an involution of the algebra structure. As to the grading, we have % \[ X_{ab}\transp = \sum_{b'\in B} \beta(a, bb') E_{b',bb'} = \beta(a,b) \sum_{b''\in B} \beta(a, b^{-1}b'') E_{b^{-1}b'', b''} = \beta(a,b) X_{ab^{-1}}. \] % It follows that if $T$ is an elementary 2-group, then the transposition preserves the degree. In this case, we will use it to fix an identification between the graded algebras $\D$ and $\D\op$. \end{remark} Graded modules over a graded division algebra $\mc D$ behave similarly to vector spaces. The usual proof that every vector space has a basis, with obvious modifications, shows that every graded $\mc D$-module has a \emph{homogeneous basis}, \ie, a basis formed by homogeneous elements. Let $\mc V$ be such a module of finite rank $k$, fix a homogeneous basis $\mc B = \{v_1, \ldots, v_k\}$ and let $g_i := \operatorname{deg} v_i$. We then have $\mc{V}\iso \D^{[g_1]}\oplus\cdots\oplus\D^{[g_k]}$, so the graded $\mc D$-module $\mc V$ is determined by the $k$-tuple $\gamma = (g_1,\ldots, g_k)$. The tuple $\gamma$ is not unique. To capture the precise information that determines the isomorphism class of $\mc V$, we use the concept of \emph{multiset}, \ie, a set together with a map from it to the set of positive integers. If $\gamma = (g_1,\ldots, g_k)$ and $T=\supp \D$, we denote by $\Xi(\gamma)$ the multiset whose underlying set is $\{g_1 T,\ldots, g_k T\} \subseteq G/T$ and the multiplicity of $g_i T$, for $1\leq i\leq k$, is the number of entries of $\gamma$ that are congruent to $g_i$ modulo $T$. Using $\mc B$ to represent the linear maps by matrices in $M_k(\D) = M_k(\FF)\tensor \D$, we now construct an explicit matrix model for $\End_{\D}(\mc V)$. \begin{defi}\label{def:explicit-grd-assoc} Let $T \subseteq G$ be a finite subgroup, $\beta$ a nondegenerate alternating bicharacter on $T$, and $\gamma = (g_1, \ldots, g_k)$ a $k$-tuple of elements of $G$. Let $\D$ be a standard realization of a graded division algebra associated to $(T, \beta)$. Identify $M_k(\FF)\tensor \D \iso M_n(\FF)$ by means of the Kronecker product, where $n=k\sqrt{|T|}$. We will denote by $\Gamma(T, \beta, \gamma)$ the grading on $M_n(\FF)$ given by $\deg (E_{ij} \tensor d) := g_i (\deg d) g_j\inv$ for $i,j\in \{1, \ldots , k\}$ and homogeneous $d\in \D$, where $E_{ij}$ is the $(i,j)$-th matrix unit. \end{defi} If $\End(V)$, equipped with a grading, is isomorphic to $M_n(\FF)$ with $\Gamma(T, \beta, \gamma)$, we may abuse notation and also denote the grading on $\End(V)$ by $\Gamma(T,\beta,\gamma)$.
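To make the standard realizations concrete, here is a purely illustrative Python check (not part of the formal development) for the smallest nontrivial case $T = \ZZ_2\times\ZZ_2 = A\times B$ with $A=\langle a\rangle$, $B=\langle b\rangle$ and $\beta(a,b)=-1$, where the construction yields Pauli-type matrices; the encoding of $a^ib^j$ as a pair $(i,j)$ is our own convention for this sketch.

\begin{verbatim}
import numpy as np

# T = Z_2 x Z_2 = A x B with A = <a>, B = <b> and beta(a, b) = -1.
# The element a^i b^j is encoded as the pair (i, j); then
# beta((i,j), (k,l)) = (-1)^(i*l - j*k).
def beta(u, v):
    return (-1) ** ((u[0] * v[1] - u[1] * v[0]) % 2)

# Standard realization on F B = F^2 with basis (e_e, e_b):
# X_{a^i b^j} = sum over b' in B of beta(a^i, b^j b') E_{b^j b', b'}.
def X(t):
    i, j = t
    M = np.zeros((2, 2), dtype=int)
    for bp in (0, 1):                    # b' = b^bp runs over B
        M[(j + bp) % 2, bp] = (-1) ** (i * ((j + bp) % 2))
    return M

T = [(0, 0), (1, 0), (0, 1), (1, 1)]
for u in T:
    for v in T:                          # check X_u X_v = beta(u,v) X_v X_u
        assert np.array_equal(X(u) @ X(v), beta(u, v) * (X(v) @ X(u)))
assert np.array_equal(X((1, 1)), X((1, 0)) @ X((0, 1)))   # X_{ab} = X_a X_b
print(X((1, 0)))   # diag(1, -1)
print(X((0, 1)))   # [[0, 1], [1, 0]]
print(X((1, 1)))   # [[0, 1], [-1, 0]]
\end{verbatim}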
We restate \cite[Theorem 2.27]{livromicha} using our notation: \begin{thm}\label{thm:classification-matrix} Two gradings, $\Gamma(T,\beta,\gamma)$ and $\Gamma(T',\beta',\gamma')$, on the algebra $M_n(\FF)$ are isomorphic if, and only if, $T=T'$, $\beta=\beta'$ and there is an element $g\in G$ such that $g \Xi(\gamma)=\Xi(\gamma')$.\qed \end{thm} The proof of this theorem is based on the following result (see Theorem 2.10 and Proposition 2.18 from \cite{livromicha}), which will also be needed: \begin{prop}\label{prop:inner-automorphism} If $\phi: \End_\D (\mc V) \rightarrow \End_\D (\mc V')$ is an isomorphism of graded algebras, then there is a homogeneous invertible $\D$-linear map $\psi: \mc V\rightarrow \mc V'$ such that $\phi(r)=\psi \circ r \circ \psi\inv$, for all $r\in \End_\D (\mc V)$.\qed \end{prop} \section{Gradings on $M(m,n)$}\label{sec:Mmn} \subsection{The associative superalgebra $M(m,n)$}\label{M(m,n)} Let $U = U\even \oplus U\odd$ be a superspace. The algebra of endomorphisms of $U$ has an induced $\Zmod2$-grading, so it can be regarded as a superalgebra. It is convenient to write it in matrix form: \begin{equation}\label{eq:End_U} \End(U) = \left(\begin{matrix} \End(U\even) & \Hom(U\odd,U\even)\\ \Hom(U\even,U\odd) & \End(U\odd)\\ \end{matrix} \right). \end{equation} Choosing bases, we may assume that $U\even=\FF^m$ and $U\odd=\FF^n$, so the superalgebra $\End(U)$ can be seen as a matrix superalgebra, which is denoted by $M(m,n)$. We may also regard $U$ as a $\ZZ$-graded vector space, putting $U^0=U\even$ and $U^1=U\odd$. By doing so, we obtain an induced $\ZZ$-grading on $M(m,n) = \End (U)$ such that \[(\End\, U)\even =(\End\, U)^0 = \left(\begin{matrix} \End(U\even) & 0\\ 0 & \End(U\odd)\\ \end{matrix} \right) \] and $(\End\, U)\odd = (\End\, U)^{-1}\oplus (\End\, U)^1$ where \[(\End U)^{1}= \left(\begin{matrix} 0 & 0\\ \Hom(U\even,U\odd) & 0\\ \end{matrix} \right) \,\text{ and }\, (\End U)^{-1}= \left(\begin{matrix} 0 & \Hom(U\odd,U\even)\\ 0 & 0 \\ \end{matrix} \right). \] This grading will be called the \emph{canonical $\ZZ$-grading} on $M(m,n)$. \subsection{Automorphisms of $M(m,n)$} It is known that the automorphisms of the superalgebra $\End(U)$ are conjugations by invertible homogeneous operators. (This follows, for example, from Proposition \ref{prop:inner-automorphism}.) The invertible even operators are of the form $\left( \begin{matrix} a&0\\ 0&d\\ \end{matrix}\right)$ where $a\in \GL(m)$ and $d\in \GL(n)$. The corresponding inner automorphisms of $M(m,n)$ will be called \emph{even automorphisms}. They form a normal subgroup of $\Aut(M(m,n))$, which we denote by $\mc E$. The inner automorphisms given by odd operators will be called \emph{odd automorphisms}. Note that an invertible odd operator must be of the form $\left( \begin{matrix} 0&b\\ c&0\\ \end{matrix}\right)$ where both $b$ and $c$ are invertible, and this forces $m=n$. In this case, the set of odd automorphisms is a coset of $\mc E$, namely, $\pi \mc E$, where $\pi$ is the conjugation by the matrix $\left( \begin{matrix} 0_n & I_n\\ I_n & 0_n\\ \end{matrix}\right)$. This automorphism is called the \emph{parity transpose} and is usually denoted by superscript: \begin{equation*} \left( \begin{matrix} a&b\\ c&d\\ \end{matrix}\right)^\pi = \left( \begin{matrix} d&c\\ b&a\\ \end{matrix}\right). \end{equation*} Thus, $\Aut (M(m,n)) = \mc E$ if $m\neq n$, and $\Aut (M(n,n)) = \mc E \rtimes \langle \pi \rangle$.
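As a quick numerical sanity check of the formula for $\pi$ (a purely illustrative sketch; the size $n=2$ is an arbitrary choice), conjugation by the block-swap matrix is an algebra automorphism that exchanges the diagonal blocks and the off-diagonal blocks:

\begin{verbatim}
import numpy as np

n = 2
Z, I = np.zeros((n, n)), np.eye(n)
P = np.block([[Z, I], [I, Z]])          # invertible odd operator, P = P^{-1}

def pi(X):
    return P @ X @ P                    # the parity transpose

rng = np.random.default_rng(0)
X, Y = rng.random((2 * n, 2 * n)), rng.random((2 * n, 2 * n))
assert np.allclose(pi(X @ Y), pi(X) @ pi(Y))   # pi is an algebra automorphism
a, b = X[:n, :n], X[:n, n:]
c, d = X[n:, :n], X[n:, n:]
assert np.allclose(pi(X), np.block([[d, c], [b, a]]))  # matches the formula above
\end{verbatim}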
\begin{remark}\label{rmk:Aut-ZZ-superalgebra} It is worth noting that $\mc E$ is the automorphism group of the $\ZZ$-\-su\-per\-al\-gebra structure of $M(m,n)$, regardless of the values of $m$ and $n$. Indeed, the elements of this group are conjugations by homogeneous matrices with respect to the canonical $\ZZ$-grading, but all the matrices of degree $-1$ or $1$ are degenerate. \end{remark} \subsection{Gradings on matrix superalgebras}\label{ssec:grds-on-superalgebras} We are now going to generalize the results of Section \ref{sec:gradings-on-matrix-algebras} to the superalgebra $\M(m,n)$. It is clear that a $G$-graded associative superalgebra is equivalent to a $(G \times \ZZ_2)$-graded associative algebra, so one could think that there is no new problem. However, the description of gradings on matrix algebras presented in Section \ref{sec:gradings-on-matrix-algebras} does not allow us to readily see the gradings on the even and odd components of the superalgebra, so we are going to refine that description. We will denote the group $G\times \ZZ_2$ by $G^\#$ and the projection on the second factor by $p\colon G^\# \rightarrow \ZZ_2$. Also, we will abuse notation and identify $G$ with $G\times \{\barr 0\} \subseteq G^\#$. \begin{remark} If the canonical $\ZZ_2$-grading is a coarsening of the $G$-grading by means of a homomorphism $p\colon G\rightarrow \ZZ_2$ (referred to as the \emph{parity homomorphism}), then we have another isomorphic copy of $G$ in $G^\#$, namely, the image of the embedding $g\mapsto (g, p (g))$, which contains the support of the $G^\#$-grading. In this case, we do not need $G^\#$ and can work with the original $G$-grading. \end{remark} A $G$-graded superalgebra $\mathcal D$ is called a \emph{graded division superalgebra} if every nonzero homogeneous element in $\D\even \cup \D\odd$ is invertible --- in other words, $\D$ is a $G^\#$-graded division algebra. We separate the gradings on $\M(m,n)$ into two classes depending on the superalgebra structure on $\D$: if $\D\odd = 0$, we say that we have an \emph{even grading} and, if $\D\odd \ne 0$, we have an \emph{odd grading}. To see the difference between even and odd gradings, consider the $G^\#$-graded algebra $E=\End_\D (\mc U)$, where $\D$ is a $G^{\#}$-graded division algebra and $\mc U$ is a graded module over $\D$. Define \[ \mc U\even = \bigoplus_{g\in G^\#} \{u\in \mc U_g \mid p(g)=\barr 0\}\,\, \text{and} \,\,\mc U\odd = \bigoplus_{g\in G^\#} \{u\in \mc U_g \mid p(g)=\barr 1\}. \] Then $\mc U\even$ and $\mc U\odd$ are $\D\even$-modules, but they are $\D$-modules if and only if $\D\odd=0$. So, in the case of an even grading, $\mc U$ decomposes as a direct sum of $\D$-modules, and all the information related to the canonical $\ZZ_2$-grading on $\End_\D (\mc U)$ comes from the decomposition $\mc U=\mc U\even \oplus \mc U\odd$. \begin{defi}\label{def:even-grd-on-Mmn} Similarly to Definition \ref{def:explicit-grd-assoc}, we will parametrize the even gradings on $M(m,n)$ as $\Gamma(T,\beta, \gamma_0, \gamma_1)$, where the pair $(T,\beta)$ characterizes $\D$ and $\gamma_0$ and $\gamma_1$ are tuples of elements of $G$ corresponding to the degrees of homogeneous bases for $\mc U\even$ and $\mc U\odd$, respectively. Here $\gamma_0$ is a $k_0$-tuple and $\gamma_1$ is a $k_1$-tuple, with $k_0\sqrt{|T|}=m$ and $k_1\sqrt{|T|}=n$. \end{defi} On the other hand, in the case of an odd grading, the information about the canonical $\ZZ_2$-grading is encoded in $\D$.
To see this, take a homogeneous $\D$-basis of $\mc U$ and multiply all the odd elements by some nonzero homogeneous element in $\D\odd$. This way we get a homogeneous $\D$-basis of $\mc U$ such that the degrees are all in the subgroup $G$ of $G^\#$. If we denote the $\FF$-span of this new basis by $\widetilde U$, then $E\iso \End (\widetilde U)\tensor \D$, where the first factor has the trivial $\ZZ_2$-grading. \begin{defi}\label{def:odd-grd-on-Mmn-1} We parametrize the odd gradings by $\Gamma(T, \beta, \gamma)$ where $T\subseteq G^\#$ with $T \not\subseteq G$, the pair $(T,\beta)$ characterizes $\D$, and $\gamma$ is a tuple of elements of $G = G\times \{\bar 0\}$ corresponding to the degrees of a homogeneous basis of $\mc U$ with only even elements. \end{defi} Clearly, it is impossible for an even grading to be isomorphic to an odd grading. The classification of even gradings is the following: \begin{thm}\label{thm:even-assc-iso} Every even $G$-grading on the superalgebra $M(m,n)$ is isomorphic to some $\Gamma(T,\beta, \gamma_0, \gamma_1)$ as in Definition \ref{def:even-grd-on-Mmn}. Two even gradings, $\Gamma = \Gamma(T,\beta, \gamma_0, \gamma_1)$ and $\Gamma' = \Gamma(T',\beta', \gamma_0', \gamma_1')$, are isomorphic if, and only if, $T=T'$, $\beta=\beta'$, and there is $g\in G$ such that \begin{enumerate}[(i)] \item for $m\neq n$: $g \Xi(\gamma_0)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1)=\Xi(\gamma_1')$; \item for $m = n$: either $g \Xi(\gamma_0)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1)=\Xi(\gamma_1')$ or $g\Xi(\gamma_0)=\Xi(\gamma_1')$ and $g \Xi(\gamma_1)=\Xi(\gamma_0')$. \end{enumerate} \end{thm} \begin{proof} We have already proved the first assertion. For the second assertion, we consider $\Gamma$ and $\Gamma'$ as $G^\#$-gradings on the algebra $M(m+n)$ and use Theorem \ref{thm:classification-matrix} to conclude that they are isomorphic if, and only if, $T=T'$, $\beta=\beta'$ and there is $(g,s)\in G^\#$ such that $(g,s)\Xi(\gamma)=\Xi(\gamma')$, where $\gamma$ is the concatenation of $\gamma_0$ and $\gamma_1$, with entries regarded as elements of $G^\# = G\times \ZZ_2$ by appending $\barr{0}$ in the second coordinate of the entries of $\gamma_0$ and $\bar 1$ in the second coordinate of the entries of $\gamma_1$. If $m\neq n$, the condition $(g,s)\Xi(\gamma)=\Xi(\gamma')$ forces $s=\barr0$, since the size of $\gamma_0$ is different from the size of $\gamma_1$. If $m=n$, the condition $(g,s)\Xi(\gamma)=\Xi(\gamma')$ becomes $g \Xi(\gamma_0)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1)=\Xi(\gamma_1')$ if $s=\barr0$, and $g \Xi(\gamma_0)=\Xi(\gamma_1')$ and $g \Xi(\gamma_1)=\Xi(\gamma_0')$ if $s=\barr1$. \end{proof} We now turn to the classification of odd gradings. Recall that here we choose the tuple $\gamma$ to consist of elements of $G$. The corresponding multiset $\Xi(\gamma)$ is contained in $\frac{G^\#}{T} \iso \frac{G}{T \cap G}$. \begin{thm}\label{thm:first-odd-iso} Every odd $G$-grading on the superalgebra $M(m,n)$ is isomorphic to some $\Gamma(T,\beta, \gamma)$ as in Definition \ref{def:odd-grd-on-Mmn-1}. Two odd gradings, $\Gamma = \Gamma(T,\beta, \gamma)$ and $\Gamma' = \Gamma(T',\beta', \gamma')$, are isomorphic if, and only if, $T=T'$, $\beta=\beta'$, and there is $g\in G$ such that $g \Xi(\gamma)=\Xi(\gamma')$. \end{thm} \begin{proof} We have already proved the first assertion. For the second assertion, we again consider $\Gamma$ and $\Gamma'$ as $G^\#$-gradings and use Theorem \ref{thm:classification-matrix}: they are isomorphic if, and only if, $T=T'$, $\beta=\beta'$ and there is $(g,s)\in G^\#$ such that $(g,s)\Xi(\gamma)=\Xi(\gamma')$.
Since $T$ contains an element $t_1$ with $p(t_1) = \barr 1$, we may assume $s=\barr 0$. \end{proof} In Subsection \ref{subsec:odd-gradings}, we will show that odd gradings can exist only if $m=n$. It may be desirable to express the classification in terms of $G$ rather than $G^\#$ (as we did for even gradings). We will return to this in Subsection \ref{ssec:second-odd}. \subsection{Even gradings and Morita context}\label{subsec:even-gradings} First we observe that every grading on $M(m,n)$ compatible with the $\ZZ$-superalgebra structure is an even grading. This follows from the fact that $T=\supp \D$ is a finite group, and if a finite group is contained in $G\times \ZZ$, then it must be contained in $G\times \{0\}$. Hence, when we look at the corresponding $(G\times\ZZ_2)$-grading, we have that $T\subseteq G$, so no element of $\D$ has an odd degree. The converse is also true. Actually, we can prove a stronger assertion: if we write $\M(m,n)$ as in Equation \eqref{eq:End_U}, the subspaces given by each of the four blocks are graded. To capture this information, it is convenient to use the concepts of Morita context and Morita algebra. Recall that a \emph{Morita context} is a sextuple $\mathcal{C} = (R, S, M, N, \vphi, \psi )$ where $R$ and $S$ are unital associative algebras, $M$ is an $(R,S)$-bimodule, $N$ is an $(S,R)$-bimodule and $\vphi: M\tensor_{S} N\rightarrow R$ and $\psi: N\tensor_{R} M\rightarrow S$ are bilinear maps satisfying the necessary and sufficient conditions for \begin{equation*} C = \left(\begin{matrix}\label{eq:morita-algebra} R & M\\ N & S\\ \end{matrix} \right) \end{equation*} to be an associative algebra, \ie, \[\vphi(m_1\tensor n_1)\cdot m_2 = m_1\cdot \psi(n_1\tensor m_2) \text{ and }\psi(n_1\tensor m_1)\cdot n_2 = n_1\cdot \vphi(m_1\tensor n_2)\] \noindent for all $m_1,m_2\in M$ and $n_1,n_2\in N$. We can associate a Morita context to a superspace $U = U\even \oplus U\odd$ by taking $R = \End(U\even)$, $S = \End(U\odd)$, $M = \Hom(U\odd, U\even)$, $N = \Hom (U\even, U\odd)$, with $\vphi$ and $\psi$ given by composition of operators. Given an algebra $C$ as above and the idempotent $ \epsilon = \left(\begin{matrix} 1 & 0\\ 0 & 0\\ \end{matrix} \right) $, we can recover all the data of the Morita context (up to isomorphism): $R \iso \epsilon C \epsilon$, $S \iso (1 - \epsilon) C (1 - \epsilon)$, $M \iso \epsilon C (1-\epsilon)$, $N \iso (1-\epsilon) C \epsilon$ and $\vphi$ and $\psi$ are given by multiplication in $C$. In other words, the concept of Morita context is equivalent to the concept of \emph{Morita algebra}, which is a pair $(C,\epsilon)$ where $C$ is a unital associative algebra and $\epsilon\in C$ is an idempotent. For example, we may consider $\M(m,n)$ as a Morita algebra by fixing the idempotent $ \epsilon = \left(\begin{matrix} I_m & 0_{m\times n}\\ 0_{n\times m} & 0_n\\ \end{matrix} \right) $, \ie, $M(m,n)$ is the Morita algebra corresponding to the Morita context associated to the superspace $U = \FF^m \oplus \FF^n$. \begin{defi} A Morita context $(R, S, M, N, \vphi, \psi )$ is said to be $G$-\emph{graded} if the algebras $R$ and $S$ are graded, the bimodules $M$ and $N$ are graded, and the maps $\vphi$ and $\psi$ are homogeneous of degree $e$. A Morita algebra $(C,\epsilon)$ is said to be $G$-\emph{graded} if $C$ is $G$-graded and $\epsilon$ is a homogeneous element (necessarily of degree $e$). \end{defi} Clearly, a Morita context is graded if, and only if, the corresponding Morita algebra is graded.
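In coordinates, recovering the data of the Morita context from the pair $(C,\epsilon)$ is just the block (Peirce) decomposition with respect to $\epsilon$; the following illustrative sketch (with arbitrarily chosen $m=2$, $n=3$) cuts a generic element of $M(m,n)$ into its four blocks:

\begin{verbatim}
import numpy as np

m, n = 2, 3
C = np.arange((m + n) ** 2, dtype=float).reshape(m + n, m + n)  # generic element
eps = np.diag([1.0] * m + [0.0] * n)    # the idempotent epsilon
one = np.eye(m + n)

R = eps @ C @ eps                       # top-left m x m block:  End(even part)
S = (one - eps) @ C @ (one - eps)       # bottom-right n x n:    End(odd part)
M = eps @ C @ (one - eps)               # top-right m x n:       Hom(odd, even)
N = (one - eps) @ C @ eps               # bottom-left n x m:     Hom(even, odd)
assert np.allclose(R + M + N + S, C)    # the Peirce decomposition of C
\end{verbatim}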
\begin{remark}\label{remarkk} For every graded Morita algebra $(C,\epsilon)$, we can define a $\ZZ$-grading by taking $C^{-1} = \epsilon C (1-\epsilon )$, $C^0 = \epsilon C \epsilon \oplus (1-\epsilon)C(1-\epsilon)$ and $C^1=(1-\epsilon)C\epsilon$. In the case of $M(m,n)$, this is precisely the canonical $\ZZ$-grading. \end{remark} \begin{prop}\label{prop:3-equiv-even-morita-action} Let $\Gamma$ be a grading on the superalgebra $M(m,n)$. The following are equivalent: % \begin{enumerate}[(i)] \item $\Gamma$ is compatible with the canonical $\ZZ$-grading; \item $\Gamma$ is even; \item $M(m,n)$ equipped with $\Gamma$ is a graded Morita algebra. \vspace{1mm} \par\vbox{\parbox[t]{\linewidth}{Further, if we assume $\Char\FF=0$, the above statements are also equivalent to:}} \vspace{1mm} \item $\Gamma$ corresponds to a $\widehat G$-action by even automorphisms. \end{enumerate} \end{prop} \begin{proof} ~\\ \vspace{-2.5mm} \textit{(i) $\Rightarrow$ (ii):} See the beginning of this subsection. \vspace{2mm} \textit{(ii) $\Rightarrow$ (iii):} Regard $\Gamma$ as a $G^\#$-grading. By Theorem \ref{thm:End-over-D}, there is a graded division algebra $\D$ and a graded right $\D$-module $\mc U$ such that $\End_{\mc D} (\mc U) \simeq M(m,n)$. Take an isomorphism of graded algebras $\phi: \End_{\mc D} (\mc U) \rightarrow M(m,n)$. Since $\Gamma$ is even, $\mc U\even$ and $\mc U\odd$ are graded $\mc D$-submodules. Take $\epsilon' \in \End_{\mc D} (\mc U)$ to be the projection onto $\mc U\even$ associated to the decomposition $\mc U= \mc U\even \oplus \mc U\odd$; in particular, $\epsilon'$ is homogeneous of degree $e$. Clearly, $\epsilon'$ is a nontrivial central idempotent of $\End_{\mc D} (\mc U)\even$, hence $\phi(\epsilon')$ is a nontrivial central idempotent of $M(m,n)\even$, so either $\phi(\epsilon')=\epsilon$ or $\phi(\epsilon')=1-\epsilon$. Either way, $\epsilon$ is the image under $\phi$ of one of the homogeneous idempotents $\epsilon'$ or $1-\epsilon'$, hence $\epsilon$ is homogeneous. \vspace{2mm} \textit{(iii) $\Rightarrow$ (i):} Follows from Remark \ref{remarkk}. \vspace{2mm} \textit{(i) $\Leftrightarrow$ (iv):} This follows from the fact that the group of even automorphisms is precisely the group of automorphisms of the $\ZZ$-superalgebra structure on $M(m,n)$ (see Remark \ref{rmk:Aut-ZZ-superalgebra}). \end{proof} \begin{remark} It follows from Proposition \ref{prop:3-equiv-even-morita-action} that, if $\Char \FF = 0$, odd gradings exist only if $m=n$. In Subsection \ref{subsec:odd-gradings} we will give a characteristic-independent proof of this fact. \end{remark} We now know that the gradings on the $\ZZ$-superalgebra $M(m,n)$ are precisely the even gradings, but since the automorphism group is different from the one in the $\ZZ_2$-superalgebra case, the classification of gradings up to isomorphism is also different. The proof of the next result is similar to the proof of Theorem \ref{thm:even-assc-iso}. \begin{thm} Let $\Gamma = \Gamma(T,\beta,\gamma_0,\gamma_1)$ and $\Gamma' = \Gamma(T',\beta',\gamma_0',\gamma_1')$ be $G$-gradings on the $\ZZ$-superalgebra $M(m,n)$. Then $\Gamma$ and $\Gamma'$ are isomorphic if, and only if, $T=T'$, $\beta=\beta'$, and there is $g\in G$ such that $g\Xi (\gamma_i) = \Xi (\gamma_i')$ for $i=0,1$. \qed \end{thm} As we mentioned in Subsection \ref{subsec:graded-bimodules}, we can always shift the grading on a graded (bi)module and still have a graded (bi)module.
In a graded Morita context, as in the case of a graded superalgebra (see Lemma \ref{lemma:opposite-directions}), we have more structure to preserve: if we shift one of the bimodules by an element $g\in G$ and at least one of the bilinear maps is nonzero, then we are forced to shift the other bimodule by $g^{-1}$. As in the superalgebra case, we will refer to this situation as \emph{shift in opposite directions}. \begin{thm}\label{thm:graded-morita} Let $\mathcal{C}=(R, S, M, N, \vphi, \psi )$ be the Morita context associated with a superspace $U$ and fix gradings on $R$ and $S$ making them graded algebras. The bimodules $M$ and $N$ admit $G$-gradings so that $\mathcal{C}$ becomes a graded Morita context if, and only if, there exist a graded division algebra $\D$ and graded right $\D$-modules $\mc V$ and $\mc W$ such that $R\iso \End_{\D}(\mc V)$ and $S\iso \End_{\D} (\mc W)$ as graded algebras. Moreover, all such gradings on $M$ and $N$ have the form $M\iso \Hom_{\D}(\mc W, \mc V)^{[g]}$ and $N\iso \Hom_{\D}(\mc V, \mc W)^{[g^{-1}]}$ as graded bimodules, where $g\in G$ is arbitrary. \end{thm} \begin{proof} Suppose $M$ and $N$ admit $G$-gradings so that the Morita algebra $(C, \epsilon)$ associated to $\mc C$ becomes $G$-graded. By Theorem \ref{thm:End-over-D}, there exist a graded division algebra $\D$ and a graded $\D$-module $\mc U$ such that $C \iso \End_{\D} (\mc U)$. Denote the image of $\epsilon$ under this isomorphism by $\epsilon'$ and let $\mc V = \epsilon'(\mc U)$ and $\mc W = (1 -\epsilon')(\mc U)$. Since $\epsilon$ is homogeneous, so is $\epsilon'$, hence $\mc V$ and $\mc W$ are graded $\D$-modules. It follows that $R \iso \epsilon C \epsilon \iso \epsilon' \End_{\D} (\mc U) \epsilon' \iso \End_{\D} (\mc V)$ and, analogously, $S \iso \End_{\D} (\mc W)$. For the converse, write $C$ in matrix form by fixing a basis in $U$ and identify $\End_{\D} (\mc V)$ and $\End_{\D} (\mc W)$ with matrix algebras as in Definition \ref{def:explicit-grd-assoc}. Suppose there exist isomorphisms of graded algebras $\theta_1 \colon R\rightarrow \End_{\D} (\mc V) $ and $\theta_2 \colon S\rightarrow \End_{\D} (\mc W)$. Then there are $x\in \GL (m)$ and $y\in \GL (n)$ such that $\theta_1$ is the conjugation by $x$ and $\theta_2$ is the conjugation by $y$. It follows that the conjugation by $\begin{pmatrix} x & 0\\ 0 & y \end{pmatrix}$ \noindent is an isomorphism of algebras between $C\iso M(m,n)$ and \[\End_{\D} (\mc V \oplus \mc W) = \begin{pmatrix} \End_\D(\mc V) & \Hom_\D(\mc W, \mc V)\\ \Hom_\D(\mc V, \mc W) & \End_\D(\mc W) \end{pmatrix},\] \noindent hence we transport the gradings on $\Hom_\D(\mc W, \mc V)$ and $\Hom_\D(\mc V, \mc W)$ to $M$ and $N$, respectively. It remains to prove that the gradings on $M$ and $N$ are determined up to shift in opposite directions. Since in our case the Morita algebra $C$ is simple, $M$ and $N$ are simple bimodules. By Lemma \ref{lemma:simplebimodule}, the gradings on $M$ and $N$ are determined up to shifts, and the shifts have to be in opposite directions in order for $\vphi$ and $\psi$ to be degree-preserving. \end{proof} \subsection{Odd gradings}\label{subsec:odd-gradings} Let $\Gamma$ be an odd grading on $M(m,n)$. We saw in Subsection \ref{ssec:grds-on-superalgebras} that, as a $G^\#$-graded algebra, $M(m,n)$ is isomorphic to $E = \End(\tilde U)\tensor \D$, where the first factor has the trivial $\ZZ_2$-grading and $\D=\D\even\oplus \D\odd$, with $\D\odd\neq 0$, is a $G^\#$-graded division algebra that is simple as an algebra.
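Before proceeding, it may help to keep in mind the smallest instance of such a $\D$; the following example is included only for orientation and is not used in the sequel. Take $G = \ZZ_2 = \{e, t_0\}$, assume $\Char\FF\neq 2$ (as will be the standing assumption below), and let $\D = M_2(\FF)$ be graded by
\[
\deg \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} = (t_0, \barr 0)
\quad\text{and}\quad
\deg \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} = (e, \barr 1).
\]
Then $\supp \D = G^\# \iso \ZZ_2\times\ZZ_2$, the even part $\D\even$ consists of the diagonal matrices and the odd part $\D\odd$ of the antidiagonal ones, so $\D \iso M(1,1)$ as a superalgebra, and the resulting $G$-grading on $M(1,1)$ is odd. (In the terminology introduced below, $t_0$ is the parity element of this grading.)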
Let $T\subseteq G^\#$ be the support of $\D$ and $\beta: T\times T \rightarrow \FF^\times$ be the associated bicharacter. We write $T^+ = \{t\in T \mid p(t)=\barr 0\} = T \cap G$ and $T^- = \{t\in T \mid p(t)=\barr 1\}$, and denote the restriction of $\beta$ to $T^+\times T^+$ by $\beta^+$. Note that there are no odd gradings if $\Char \FF =2$. Indeed, in this case, there is no nondegenerate bicharacter on $T$ because the characteristic of the field divides $|T|=2|T^+|$. From now on, we suppose $\Char \FF \neq 2$. For a subgroup $A\subseteq T$, we denote by $A'$ its orthogonal complement in $T$ with respect to $\beta$, i.e., $A' = \{t\in T\mid \beta(t, A) =1\}$. This is the inverse image of the subgroup $A^\perp\subseteq \widehat T$ under the isomorphism $T\rightarrow \widehat T$ given by $t\mapsto \beta(t,\cdot)$. In particular, $|A'| = [T:A]$. Since $\Gamma$ is odd, we have $T^-\neq\emptyset$ and hence $[T:T^+]=2$, so $(T^+)'$ has exactly two elements. Its nontrivial element must lie in $T^+$: otherwise $\beta^+$ would be nondegenerate, $T$ would decompose as the orthogonal product of $T^+$ and $(T^+)'$, and $\beta$ would restrict to a nondegenerate alternating bicharacter on the order-$2$ group $(T^+)'$, which is impossible. Thus $(T^+)' = \langle t_0 \rangle$, where $t_0\in T^+$ is an element of order $2$. It follows that $\beta(t_0, t) = 1$ if $t\in T^+$ and $\beta(t_0, t) = -1$ if $t\in T^-$. For this reason, we call $t_0$ the \emph{parity element} of the odd grading $\Gamma$. Note that $\rad \beta^+ = T^+\cap (T^+)' = \langle t_0 \rangle$. Fix an element $0\neq d_0\in \D$ of degree $t_0$. By the definition of $\beta$, $d_0$ commutes with all elements of $\D\even$ and anticommutes with all elements of $\D\odd$. Since $d_0^2\in \D_e = \FF$, we may rescale $d_0$ so that $d_0^2=1$. Then $\epsilon := \frac{1}{2}(1+d_0)$ is a central idempotent of $\D\even$. Take a homogeneous element $0\neq d_1\in\D\odd$. Then $d_1\epsilon d_1\inv = \frac{1}{2}(1-d_0)=1-\epsilon$, which is another central idempotent of $\D\even$ and, being conjugate to $\epsilon$, must have the same rank as $\epsilon$. Hence, $\D\even = \epsilon\D\even\oplus (1-\epsilon)\D\even$ (direct sum of ideals) and, consequently, $E\even \iso \End(\tilde U)\tensor \D\even = \End(\tilde U)\tensor \epsilon\D\even \oplus \End(\tilde U)\tensor (1-\epsilon)\D\even$, where the two summands have the same dimension. Therefore, odd gradings exist only if $m=n$. Also note that we have \begin{equation}\label{eq:D1eps} \D\odd \epsilon = (1-\epsilon) \D\odd. \end{equation} We are now going to construct an even grading by coarsening a given odd grading. The reverse of this construction will be used in Subsection \ref{ssec:second-odd}. Let $H$ be a group and suppose we have an even grading $\Gamma'$ on $M(n,n)$ that is the coarsening of $\Gamma$ induced by a group homomorphism $\alpha: G\rightarrow H$. Since $\Gamma'$ is even, the idempotent $\id_{\tilde U}\tensor\epsilon$ must be homogeneous with respect to $\Gamma'$. This means that $\alpha(t_0)=e$, so $\alpha$ factors through $\barr G := G/\langle t_0 \rangle$. This motivates the following definition: \begin{defi} Let $\Gamma$ be an odd $G$-grading on $M(n,n)$ with parity element $t_0$. The \emph{finest even coarsening of $\Gamma$} is the $\barr G$-grading ${}^\theta \Gamma$, where $\barr G := G/\langle t_0 \rangle$ and $\theta: G \to \barr G$ is the natural homomorphism. \end{defi} \begin{thm} Let $\Gamma = \Gamma(T, \beta, \gamma)$ be an odd grading on $M(n,n)$ with parity element $t_0$.
Then its finest even coarsening is isomorphic to $\barr \Gamma = \Gamma(\barr T, \barr \beta, \barr \gamma, \barr u\barr \gamma)$, where $\barr T= \frac{T^+}{\langle t_0 \rangle}$, $\barr\beta$ is the nondegenerate bicharacter on $\barr T$ induced by $\beta^+$, $\barr\gamma$ is the tuple whose entries are the images of the entries of $\gamma$ under $\theta$, and $u \in G$ is any element such that $(u, \barr 1) \in T^-$. \end{thm} \begin{proof} Let us focus our attention on the $G$-graded division algebra $\D$. We now consider it as a $\barr G$-graded algebra, which has a decomposition $\D=\D\epsilon \oplus \D(1-\epsilon)$ as a graded left module over itself. \setcounter{claim}{0} \begin{claim} The $\D$-module $\D\epsilon$ is simple as a graded module. \end{claim} To see this, consider a nontrivial graded submodule $V\subseteq \D\epsilon$ and take a homogeneous element $0\neq v\in V$. Then we can write $v=d\epsilon$ where $d$ is a $\barr G$-homogeneous element of $\D$, so $d = d' + \lambda d' d_0$ where $d'$ is a $G$-homogeneous element and $\lambda\in \FF$. Hence, $v = d'\epsilon + \lambda d'd_0\epsilon = (1+\lambda)d'\epsilon$. Clearly, $(1+\lambda)d'\neq 0$, so it has an inverse in $\D$. We conclude that $\epsilon\in V$, hence $V=\D\epsilon$.\qedclaim Let $\barr \D := \epsilon \D \epsilon \iso \End_{\D}(\D\epsilon)$, where we are using the convention of writing endomorphisms of a left module on the right. By Claim 1 and the graded analog of Schur's Lemma (see \eg \cite[Lemma 2.4]{livromicha}), $\barr \D$ is a $\barr G$-graded division algebra. \begin{claim} The support of $\barr \D$ is $\barr T= \frac{T^+}{\langle t_0 \rangle}$ and the bicharacter $\barr \beta: \barr T\times \barr T\rightarrow \FF^\times$ is induced by $\beta^+: T^+\times T^+ \rightarrow \FF^\times$. \end{claim} We have $\barr \D = \epsilon \D\even \epsilon + \epsilon \D\odd \epsilon$ and $\epsilon \D\odd \epsilon = 0$ by Equation \eqref{eq:D1eps}, so $\supp \barr \D \subseteq \barr T$. On the other hand, for every $0\neq d\in \D\even$ with $G$-degree $t\in T^+$, we have that $\epsilon d\epsilon = d\epsilon = \frac{1}{2}(d+dd_0)\neq 0$, since the component of degree $t$ is different from zero. Hence $\supp \barr \D = \barr T$. Since $\epsilon$ is central in $\D\even$, we obtain $\barr\beta (\barr t,\barr s) = \beta (s, t) = \beta^+ (s, t)$ for all $t, s\in T^+$.\qedclaim We now consider $\D\epsilon$ as a graded right $\barr \D$-module. Then we have the decomposition $\D\epsilon = \epsilon \D\epsilon \oplus (1-\epsilon) \D\epsilon$. The set $\{\epsilon\}$ is clearly a basis of $\epsilon \D\epsilon$. To find a basis for $(1-\epsilon)\D\epsilon$, fix any $G$-homogeneous $0\neq d_1\in \D\odd$ with $\deg d_1 = t_1\in T^-$. Then we have $(1-\epsilon)\D\epsilon = (1-\epsilon)\D\even \epsilon + (1-\epsilon)\D\odd \epsilon = (1-\epsilon)\D\odd \epsilon = \D\odd \epsilon$ by Equation \eqref{eq:D1eps}. Since $d_1$ is invertible, $\{d_1\epsilon\}$ is a basis for $(1-\epsilon) \D\epsilon$. We conclude that $\{\epsilon, d_1\epsilon\}$ is a basis for $\D\epsilon$. Using the graded analog of the Density Theorem (see e.g. \cite[Theorem 2.5]{livromicha}), we have $\D\iso \End_{\barr \D}(\D\epsilon)\iso \End(\FF\epsilon\oplus \FF d_1\epsilon)\tensor \barr\D$. 
Hence, % \[ \begin{split} \End_\D(\mc U)&\iso\End (\tilde U) \tensor \D \iso \End (\tilde U) \tensor \End(\FF\epsilon \oplus \FF d_1\epsilon) \tensor \barr\D \\ &\iso \End(\tilde U\tensor \epsilon \oplus \tilde U\tensor d_1\epsilon) \tensor \barr\D \end{split} \] % as $\barr G$-graded algebras. Since $\deg \epsilon = \barr e$ and $\deg (d_1\epsilon) = \barr u$, where $t_1 = (u, \barr 1)$, the even and odd parts of the graded $\barr\D$-module $\tilde U\tensor \D\epsilon$ have homogeneous bases with degrees $\barr\gamma$ and $\barr u\,\barr\gamma$, respectively. The result follows. \end{proof} In the next subsection, we will show how to recover $\Gamma$ from $\barr\Gamma$ and some extra data. The following definition and result will be used there. \begin{defi} For every abelian group $A$ we put $A^{[2]} = \{a^2 \mid a\in A\}$ and $A_{[2]} = \{a\in A \mid a^2 = e \}$. \end{defi} Note that $T^{[2]}\subseteq T^+$, but $T^{[2]}$ can be larger than $(T^+)^{[2]}$ since it also includes the squares of elements of $T^-$. Also, the subgroup $\barr S = \{\barr t \in \barr T \mid t \in T^{[2]}\}$ of $\barr T$ can be larger than $\barr T^{[2]}$, but we will show that, surprisingly, it does not depend on $T^-$. \begin{lemma}\label{lemma:square-subgroup} Let $\theta: T^+\rightarrow \barr T=\frac{T^+}{\langle t_0 \rangle}$ be the natural homomorphism. Consider the subgroups $\barr S = \theta(T^{[2]})$ and $\barr R=\theta(T^+_{[2]})$ of $\barr T$. Then $\barr S$ is the orthogonal complement of $\barr R$ with respect to the nondegenerate bicharacter $\barr\beta$. \end{lemma} \begin{proof} We claim that $\barr S' = \barr R$. Indeed, % \[ \begin{split} \barr S' & = \{ \theta(t) \mid t\in T^+ \AND \barr\beta(\theta (t), \theta (s^2)) =1 \text{ for all }s\in T\}\\ & = \{\theta(t) \mid t\in T^+ \AND \beta (t, s^2) =1 \text{ for all }s\in T\}\\ & = \{ \theta(t) \mid t\in T^+ \AND \beta (t^2, s) =1 \text{ for all }s\in T\}\\ & = \{ \theta(t) \mid t\in T^+ \AND t^2=e \}\\ & = \barr R\,. \end{split} \] It follows that $\barr S = \barr R'$, as desired. \end{proof} \subsection{A description of odd gradings in terms of $G$}\label{ssec:second-odd} Our second description of an odd grading consists of its finest even coarsening and the data necessary to recover the odd grading from this coarsening. All parameters will be obtained in terms of $G$ rather than its extension $G^\#=G\times \ZZ_2$. Let $t_0\in G$ be an arbitrarily fixed element of order 2 and set $\barr G = \frac{G}{\langle t_0 \rangle}$. Let $\barr T \subseteq \barr G$ be a finite subgroup and let $\barr \beta: \barr T \times \barr T \rightarrow \FF^\times$ be a nondegenerate alternating bicharacter. We define $T^+\subseteq G$ to be the inverse image of $\barr T$ under the natural homomorphism $\theta: G\rightarrow \barr G$. Note that $\barr \beta$ gives rise to a bicharacter $\beta^+$ on $T^+$ whose radical is generated by the element $t_0$. We wish to define $T^-\subseteq G\times \{\barr 1\}$ so that $T=T^+\cup T^-$ is a subgroup of $G^\#$ and $\beta^+$ extends to a nondegenerate alternating bicharacter on $T$. From Lemma \ref{lemma:square-subgroup}, we have a necessary condition for the existence of such $T^-$, namely, for $\barr R=\frac{T^+_{[2]}}{\langle t_0 \rangle}$, we need $\barr R' \subseteq \barr G^{[2]}$ (indeed, by the lemma, $\barr R' = \barr S$, which is a subgroup of $\overline {G^{[2]}} = \barr G^{[2]}$). We will now prove that this condition is also sufficient. \begin{prop}\label{prop:square-subgroup-converse} If $\left( \frac{T^+_{[2]} }{\langle t_0 \rangle}\right)'\subseteq \barr G^{[2]}$, then there exists an element $t_1\in G\times \{\barr 1\} \subseteq G^\#$ such that $T= T^+ \cup t_1\, T^+$ is a subgroup of $G^\#$ and $\beta^+$ extends to a nondegenerate alternating bicharacter $\beta:T\times T\rightarrow \FF^\times$.
\end{prop} \begin{proof} Let $\chi\in \widehat {T^+}$ be such that $\chi(t_0) = -1$. Since $\chi^2(t_0)=1$, we can consider $\chi^2$ as a character of the group $\barr T = \frac{T^+}{\langle t_0 \rangle}$, hence there is $a\in T^+$ such that $\chi^2(\barr t) = \barr\beta(\barr a, \barr t)$ for all $\barr t\in \barr T$. Note that $\chi (a) = \pm 1$, since $\chi(a)^2 = \chi^2(\barr a) = \barr\beta(\barr a, \barr a) = 1$, and hence, changing $a$ to $a t_0$ if necessary, we may assume $\chi (a) = 1$. \bigskip \textit{(i) Existence of $t_1$}: \medskip As before, let $\barr R = \frac{T^+_{[2]}}{\langle t_0 \rangle}$. Then $\barr a \in \barr R'$. Indeed, if $b\in T^+_{[2]}$, then $\barr\beta(\barr a,\barr b) = \chi^2 (\barr b) = \chi (b^2) = \chi (e) =1$. By our assumption, we conclude that $\barr a\in \barr G^{[2]}$. We are going to prove that, actually, $a\in G^{[2]}$. Pick $u\in G$ such that $\barr u^2 = \barr a$. Then, either $a=u^2$ or $a=u^2t_0$. If $t_0 = c^2$ for some $c\in G$, then replacing $u$ by $uc$ if necessary, we can make $u^2 = a$. Otherwise, $t_0$ has no square root in $T^+$, which implies that $\barr R=\barr T_{[2]}$. Hence $\barr R' = (\barr T_{[2]})' = \barr T^{[2]} = \theta ((T^+)^{[2]})$. Thus, in this case, we can assume $u\in T^+$. Then $\chi(u^2) = \chi^2(u) = \barr \beta (\barr a, \barr u) = \barr \beta (\barr u^2, \barr u) =1$, hence $u^2 = a$. Finally, we set $t_1=(u,\barr 1) \in G^\#$. \bigskip \textit{(ii) Existence of $\beta$}: \medskip We wish to extend $\beta^+$ to $T=T^+ \cup t_1\, T^+$ by setting $\beta(t_1, t) = \chi (t)$ for all $t\in T^+$. It is clear that there is at most one alternating bicharacter on $T$ with this property that extends $\beta^+$. To show that it exists and is nondegenerate, we will first introduce an auxiliary group $\widetilde T$ and a bicharacter $\tilde\beta$. Let $\widetilde T$ be the direct product of $T^+$ and the infinite cyclic group generated by a new symbol $\tau$. We define $\tilde\beta:\widetilde T\times \widetilde T \rightarrow \FF^\times$ by $ \tilde\beta(s\tau^i,t\tau^j) = \beta^+(s,t)\, \chi (s)^{-j}\, \chi (t)^i$, where $s,t\in T^+$. It is clear that $\tilde\beta$ is an alternating bicharacter. \begin{claim*} $\langle a\tau^{-2} \rangle = \rad \tilde \beta\,$. \end{claim*} Let $t\in T^+$ and $\ell\in \ZZ$. Then \[ \tilde \beta (a\tau^{-2},t\tau^\ell) = \beta^+(a, t)\,\, \chi(t)^{-2} \, \chi(a)^{-\ell} = \barr\beta(\barr a, \barr t)\,\, \chi(t)^{-2} = \chi(t)^2 \, \chi(t)^{-2} = 1, \] hence, $\langle a\tau^{-2} \rangle \subseteq \rad \tilde \beta$. Conversely, if $s\tau^k \in \rad \tilde\beta$, then $1 = \tilde \beta (s\tau^k, t_0) = \beta^+(s,t_0)\, \chi(t_0)^k = (-1)^k$, hence $k$ is even. From the previous paragraph, we know that $a\tau^{-2} \in \rad \tilde\beta$, hence $a^\frac{k}{2} \tau^{-k} \in \rad \tilde\beta$ and $s a^\frac{k}{2} = (s \tau^k) (a^\frac{k}{2} \tau^{-k}) \in \rad \tilde\beta$. Since $s a^\frac{k}{2} \in T^+$, we get $s a^\frac{k}{2} \in \rad \beta^+ = \{ e, t_0 \}$. But, if $sa^\frac{k}{2} = t_0$, we have $1 = \tilde\beta (sa^\frac{k}{2}, \tau) = \tilde\beta (t_0, \tau) = \chi(t_0)\inv = -1$, a contradiction. It follows that $sa^\frac{k}{2} = e$ and, hence, $s\tau^k = a^{-\frac{k}{2}}\tau^k = (a\tau^{-2})^{-\frac{k}{2}}$, concluding the proof of the claim. \qedclaim We have a homomorphism $\vphi:\widetilde T\rightarrow T$ that is the identity on $T^+$ and sends $\tau$ to $t_1$. Clearly, $\ker \vphi = \langle a\tau^{-2} \rangle$.
By the above claim, $\tilde\beta$ induces a nondegenerate alternating bicharacter on $\frac{\widetilde T}{\langle a\tau^{-2} \rangle}$, which can be transferred via $\vphi$ to a nondegenerate alternating bicharacter on $T$ that extends $\beta^+$. \end{proof} Now fix $\chi\in \widehat {T^+}$ with $\chi(t_0)=-1$ and let $a$ be the unique element of $T^+$ such that $\chi(a)=1$ and $\chi^2(\barr t) = \barr\beta (\barr a, \barr t)$ for all $t\in T^+$. Suppose that the condition of Proposition \ref{prop:square-subgroup-converse} is satisfied. Then part (i) of the proof shows that there exists $u\in G$ such that $u^2=a$. Moreover, part (ii) shows that there exists an extension of $\beta^+$ to a nondegenerate alternating bicharacter $\beta$ on $T=T^+\cup t_1T^+$, where $t_1=(u,\barr 1)$, such that $\beta(t_1,t)=\chi(t)$ for all $t\in T^+$. Clearly, such an extension is unique. We will denote it by $\beta_u$ and its domain by $T_u$. \begin{prop}\label{prop:roots-of-a} For every subgroup $T\subseteq G^\#$ such that $T\not\subseteq G$ and $T\cap G=T^+$ and for every extension of $\beta^+$ to a nondegenerate alternating bicharacter $\beta$ on $T$, there exists $u\in G$ such that $u^2=a$, $T=T_u$ and $\beta=\beta_u$. Moreover, $\beta_u=\beta_{\tilde{u}}$ if, and only if, $u \equiv \tilde{u} \pmod{\langle t_0 \rangle}$. \end{prop} \begin{proof} We have $T=T^+ \cup T^-$ where $T^-\subseteq G\times \{\barr 1\}$ is a coset of $T^+$. We can extend $\chi$ to a character of $T$, which we still denote by $\chi$, and, since $\beta$ is nondegenerate, there is $t_1\in T$ such that $\beta(t_1, t) = \chi(t)$ for all $t\in T$. We have $t_1\in T^-$ since $\beta(t_1,t_0)=\chi(t_0)=-1$, so $t_1=(u,\barr 1)$ for some $u\in G$, and hence $T=T_u$. We claim that $t_1^2=a$. Indeed, $\chi(t_1^2) = \beta(t_1,t_1^2)=1$ and, for every $t\in T^+$, \[ \chi^2(\barr t) = \chi(t)^2 = \beta (t_1, t)^2 = \beta (t_1^2, t) = \barr\beta (\,\overline {(t_1^2)},\, \barr t)\,, \] so $t_1^2$ satisfies the definition of the element $a$. This completes the proof of the first assertion. Now suppose $\beta_u=\beta_{\tilde{u}}$, so in particular $t_1\,T^+=\tilde{t}_1\,T^+$ where $t_1 = (u, \barr 1)$ and $\tilde{t}_1 = (\tilde u, \barr 1)$. Then there is $r\in T^+$ such that $\tilde{t}_1 = t_1\,r$. Also, for every $t\in T^+$, \[ \chi(t) = \beta_{\tilde{u}}(\tilde{t}_1,t) = \beta_u (t_1\,r, t) = \beta_u(t_1, t)\,\beta_u(r,t) = \chi(t) \beta^+(r, t) \] and, hence, $\beta^+(r, t)=1$ for all $t\in T^+$. This means that $r = u\inv \tilde{u} \in \langle t_0 \rangle$. Conversely, if $\tilde u = u r$ for some $r\in \langle t_0 \rangle$, then $t_1\, T^+ = \tilde t_1\, T^+$. Also, for all $t\in T^+$, \[ \beta_u(t_1, t) = \chi(t) = \beta_{\tilde{u}}(\tilde{t}_1, t) = \beta_{\tilde{u}}(t_1r, t) = \beta_{\tilde{u}}(t_1, t)\, \beta^+(r, t) = \beta_{\tilde{u}}(t_1, t). \] It follows that $\beta_u=\beta_{\tilde{u}}$. \end{proof} Note that, keeping the character $\chi \in \widehat {T^+}$ with $\chi(t_0) = -1$ fixed, we have a surjective map from the square roots of $a$ to all possible pairs $(T,\beta)$. If we had started with a different character above, we would have obtained a different surjective map. Hence, for parametrization purposes, $\chi$ (and, hence, $a$) will be fixed. We are now in a position to give a classification of odd gradings in terms of $G$ only. We already have the following parameters: an element $t_0\in G$ of order $2$ and a pair $(\barr T, \barr\beta)$.
For each $t_0$ and $\barr T$, we fix a character $\chi\in \widehat {T^+}$ satisfying $\chi(t_0) = -1$. The next parameter is an element $u\in G$ such that $u^2 = a$, where $a$ is the unique element of $T^+$ such that $\chi(a)=1$ and $\chi^2(\barr t) = \barr\beta (\barr a, \barr t)$ for all $t\in T^+$. Finally, let $\gamma = (g_1, \ldots, g_k)$ be a $k$-tuple of elements of $G$. With these data, we construct the grading $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ as follows: \begin{defi}\label{def:odd-grd-on-Mmn-2} Let $\D$ be a standard realization of the $G^\#$-graded division algebra with parameters $(T_u,\beta_u)$. Take the graded $\D$-module $\mathcal U = \D^{[g_1]}\oplus \cdots \oplus \D^{[g_k]}$. Then $\End_\D (\mathcal U)$ is a $G^\#$-graded algebra, hence a superalgebra by means of $p:G^\# \rightarrow \ZZ_2$. As a superalgebra, it is isomorphic to $M(n,n)$ where $n=k\sqrt{|\barr T|}$. We define $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ as the corresponding $G$-grading on $M(n,n)$. \end{defi} Theorem \ref{thm:first-odd-iso} together with Proposition \ref{prop:roots-of-a} gives the following result: \begin{thm}\label{thm:2nd-odd-iso} Every odd $G$-grading on the superalgebra $M(n,n)$ is isomorphic to some $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ as in Definition \ref{def:odd-grd-on-Mmn-2}. Two odd gradings, $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ and $\Gamma (t_0', \barr T', \barr \beta', u', \gamma')$, are isomorphic if, and only if, $t_0=t_0'$, $\barr T = \barr T'$, $\barr\beta = \barr\beta'$, $u \equiv u' \pmod{\langle t_0 \rangle}$, and there is $g\in G$ such that $g\, \Xi(\gamma) = \Xi(\gamma')$.\qed \end{thm} \subsection{Fine gradings up to equivalence} We start by investigating the gradings on the superalgebra $M(m,n)$ that are fine among even gradings. By Proposition \ref{prop:3-equiv-even-morita-action}, this is the same as fine gradings on $M(m,n)$ as a $\ZZ$-superalgebra, and, by the discussion in Subsection \ref{subsec:odd-gradings}, the same as fine gradings if $m\neq n$ or $\Char \FF = 2$. We will use the following notation. Let $H$ be a finite abelian group whose order is not divisible by $\Char \FF$. Set $T_H = H\times \widehat H$ and define $\beta_H: T_H\times T_H \to \FF^\times$ by \[ \beta_H((h_1, \chi_1), (h_2, \chi_2)) = \chi_1(h_2)\, \chi_2 (h_1)\inv. \] Then $\beta_H$ is a nondegenerate alternating bicharacter on $T_H$. \begin{defi}\label{def:even-fine-grd-on-Mmn} Let $\ell \mid \operatorname{gcd}(m,n)$ be a natural number such that $\Char \FF\nmid\ell$ and put $k_0 := \frac{m}{\ell}$ and $k_1 := \frac{n}{\ell}$. Let $\Theta_\ell$ be a set of representatives of the isomorphism classes of abelian groups of order $\ell$. For every $H$ in $\Theta_\ell$, we define $\Gamma(H, k_0, k_1)$ to be the even $T_H\times \ZZ^{k_0 + k_1}$-grading $\Gamma(T_H, \beta_H, (e_1, \ldots, e_{k_0}), (e_{k_0 + 1}, \ldots, e_{k_0 + k_1}))$ on $M(m,n)$, where $\{e_1, \ldots, e_{k_0+k_1}\}$ is the standard basis of $\ZZ^{k_0 + k_1}$. If $m$ and $n$ are clear from the context, we will simply write $\Gamma(H)$. \end{defi} Let $G_H$ be the subgroup of $T_H\times \ZZ^{k_0 + k_1}$ generated by the support of $\Gamma(H,k_0,k_1)$, i.e., $G_H = T_H\times \ZZ^{k_0 + k_1}_0$, where $\ZZ^k_0 := \{ (x_1, \ldots, x_k) \in \ZZ^k \mid x_1 + \cdots + x_k = 0\}$. \begin{thm}\label{thm:class-fine-even} The fine gradings on $M(m,n)$ as a $\ZZ$-superalgebra are precisely the even fine gradings.
Every such grading is equivalent to a unique $\Gamma(H)$ as in Definition \ref{def:even-fine-grd-on-Mmn}. Moreover, every grading $\Gamma(H)$ is fine, and $G_H$ is its universal group. \end{thm} \begin{proof} By \cite[Proposition 2.35]{livromicha}, if we consider $\Gamma(H)$ as a grading on the algebra $M_{n+m}(\FF)$, it is a fine grading and $G_H$ is its universal group. It follows that the same is true of $\Gamma(H)$ as a grading on the superalgebra $M(m,n)$. Let $\Gamma = \Gamma(T, \beta, \gamma_0, \gamma_1)$ be any even $G$-grading on $M(m,n)$. We can write $T = A\times B$ where the restrictions of $\beta$ to the subgroups $A$ and $B$ are trivial and, hence, there is an isomorphism $\alpha: T_A \to T$ such that $\beta_A=\beta\circ(\alpha\times\alpha)$. We can extend $\alpha$ to a homomorphism $G_A \to G$ (also denoted by $\alpha$) by sending the elements $e_1, \ldots, e_{k_0}$ to the entries of $\gamma_0$ and the elements $e_{k_0+1}, \ldots, e_{k_0+k_1}$ to the entries of $\gamma_1$. It follows that ${}^\alpha \Gamma(A) \iso \Gamma$. Since all $\Gamma(H)$ are fine and pairwise nonequivalent (because their universal groups are pairwise nonisomorphic), we can apply Lemma \ref{lemma:universal-grp}, concluding that every fine grading on $M(m,n)$ as a $\ZZ$-superalgebra is equivalent to a unique $\Gamma(H)$. \end{proof} We now consider odd fine gradings on $M(n,n)$, so $\Char\FF\ne 2$. We first define some gradings on the algebra $M_{2n}(\FF)$ and then impose a superalgebra structure. \begin{defi}\label{def:param-fine-odd} Let $\ell\mid n$ be a natural number such that $\Char \FF\nmid\ell$ and put $k:= \frac{n}{\ell}$. Let $\Theta_{2\ell}$ be a set of representatives of the isomorphism classes of abelian groups of order $2\ell$. For every $H$ in $\Theta_{2\ell}$, we consider the $T_H\times \ZZ^k$-grading $\Gamma = \Gamma(T_H, \beta_H, (e_1, \ldots, e_k))$ on $M_{2n}(\FF)$, where $\{e_1, \ldots, e_k\}$ is the standard basis of $\ZZ^k$. Then we choose an element $t_0 \in T_H$ of order $2$ and define a group homomorphism $p: T_H\times\ZZ^k \to \ZZ_2$ by \[ p(t, x_1, \ldots, x_k) = \begin{cases*} \barr 0 & if $\beta_H(t_0, t) = 1$,\\ \barr 1 & if $\beta_H(t_0, t) = -1$. \end{cases*} \] This defines a superalgebra structure on $M_{2n}(\FF)$. By construction, $\Gamma$ is odd as a grading on this superalgebra $(M_{2n}(\FF),p)$, and this forces the superalgebra to be isomorphic to $M(n,n)$. We denote by $\Gamma(H, t_0, k)$ the grading $\Gamma$ considered as a grading on $M(n,n)$. If $n$ is clear from the context, we will simply write $\Gamma(H, t_0)$. \end{defi} Note that the parameter $t_0$ of $\Gamma(H, t_0, k)$ does not affect the grading on the algebra $M_{2n}(\FF)$, but, as we will see in Proposition \ref{prop:equiv-with-same-H}, different choices of $t_0$ can yield nonequivalent gradings on the superalgebra $M(n,n)$. \begin{prop}\label{prop:all-fine-odd} Each grading $\Gamma(H, t_0)$ on $M(n,n)$ is fine, and its universal group is $G_H = T_H\times \ZZ^k_0$. Every odd fine grading on $M(n,n)$ is equivalent to at least one $\Gamma(H, t_0)$. \end{prop} \begin{proof} As in the proof of Theorem \ref{thm:class-fine-even}, the first assertion follows from \cite[Proposition 2.35]{livromicha}. Let $\Gamma = \Gamma(T,\beta, \gamma)$ be an odd $G$-grading on $M(n,n)$ and let $t_0$ be its parity element. Then we can find subgroups $A$ and $B$ such that $T=A\times B$ and there exists an isomorphism $\alpha: T_A \to T$ such that $\beta_A=\beta\circ(\alpha\times\alpha)$.
We define $t_0' := \alpha\inv(t_0)$ and extend $\alpha$ to a homomorphism $G_A \to G$ (also denoted by $\alpha$) by sending the elements $e_1, \ldots, e_k$ to the entries of $\gamma$. Then ${}^\alpha \Gamma(A, t_0') \iso \Gamma$. Selecting a representative from each equivalence class of gradings of the form $\Gamma(H, t_0)$, we can apply Lemma \ref{lemma:universal-grp}, which proves the second assertion. \end{proof} It remains to determine which of the gradings $\Gamma(H, t_0)$ are equivalent to each other. \begin{prop}\label{prop:equiv-with-same-H} The gradings $\Gamma = \Gamma(H, t_0)$ and $\Gamma' = \Gamma(H, t_0')$ on $M(n,n)$ are equivalent if, and only if, there is $\alpha \in \Aut(T_H, \beta_H)$ such that $\alpha(t_0) = t_0'$. \end{prop} \begin{proof} We will denote by $p: G_H \to \ZZ_2$ the parity homomorphism associated to the grading $\Gamma$ and by $p': G_H\to \ZZ_2$ the one associated to $\Gamma'$. If $\Gamma$ is equivalent to $\Gamma'$, there is an isomorphism $\vphi: (M_{2n}(\FF),p) \to (M_{2n}(\FF),p')$ of superalgebras that is a self-equivalence of the grading on $M_{2n}(\FF)$. Hence, we have the corresponding group automorphism $\alpha: G_H \to G_H$ in the Weyl group of the grading, and the following diagram commutes: % \begin{equation}\label{diag:parity} \begin{tikzcd} G_H \arrow[to = 2G, "\alpha"] \arrow[to = Z2, "p"'] && |[alias = 2G]| G_H \arrow[to = Z2, "p'"]\\ & |[alias = Z2]|\ZZ_2 & \end{tikzcd} \end{equation} % The automorphism $\alpha$ must send the torsion subgroup of $G_H$ to itself, so we can consider the restriction $\alpha\restriction_{T_H}$; by \cite[Corollary 2.45]{livromicha}, this restriction is in $\Aut(T_H, \beta_H)$. By the definition of $p$ and $p'$, the commutativity of the diagram is then equivalent to $\alpha(t_0) = t_0'$. For the converse, we use the same \cite[Corollary 2.45]{livromicha} to extend $\alpha$ to an automorphism $G_H\to G_H$ in the Weyl group. Hence, there is an automorphism $\vphi$ of the algebra $M_{2n}(\FF)$ that permutes the components of the grading according to $\alpha$. The condition $\alpha(t_0) = t_0'$ is equivalent to Diagram \eqref{diag:parity} being commutative, which shows that $\vphi: (M_{2n}(\FF),p) \to (M_{2n}(\FF),p')$ is an isomorphism of superalgebras. \end{proof} Combining Propositions \ref{prop:all-fine-odd} and \ref{prop:equiv-with-same-H}, we obtain: \begin{thm}\label{thm:class-fine-odd} Every odd fine grading on $M(n,n)$ is equivalent to some $\Gamma(H,t_0)$ as in Definition \ref{def:param-fine-odd}. Every grading $\Gamma(H,t_0)$ is fine, and $G_H$ is its universal group. Two gradings, $\Gamma(H, t_0)$ and $\Gamma(H', t_0')$, are equivalent if, and only if, $H=H'$ and $t_0'$ lies in the orbit of $t_0$ under the natural action of $\Aut(T_H, \beta_H)$.\qed \end{thm} For a matrix description of the group $\Aut(T_H, \beta_H)$, we refer the reader to \cite[Remark 2.46]{livromicha}. \section{Gradings on $A(m,n)$}\label{sec:Amn} Throughout this section it will be assumed that $\Char \FF = 0$. \subsection{The Lie superalgebra $A(m,n)$}\label{ssec:def-A} Let $U = U\even \oplus U\odd$ be a finite-dimensional superspace. Recall that the \emph{general linear Lie superalgebra}, denoted by $\gl\, (U)$, is the superspace $\End(U)$ with product given by the \emph{supercommutator} \[ [a,b] = ab - (-1)^{\abs{a}\abs{b}}ba \] for $\ZZ_2$-homogeneous $a, b\in \End(U)$, extended bilinearly. If $U\even=\FF^m$ and $U\odd=\FF^n$, then $\gl (U)$ is also denoted by $\gl(m|n)$. The \emph{special linear Lie superalgebra}, denoted by $\Sl (U)$, is the derived algebra of $\gl(U)$.
As in the Lie algebra case, we describe it as an algebra of ``traceless'' operators. The analog of trace in the ``super'' setting is the so-called \emph{supertrace}:% \[ \str \left(\begin{matrix} a & b\\ c & d\\ \end{matrix}\right) = \tr a - \tr d, \] and we have $\Sl (U) = \{ T\in \gl (U) \mid \str T = 0\}$. Again, if $U\even=\FF^m$ and $U\odd=\FF^n$, then $\Sl (U)$ is also denoted by $\Sl(m|n)$. If one of the parameters $m$ or $n$ is zero, we get a Lie algebra, so we assume this is not the case. If $m\neq n$, then $\Sl(m|n)$ is a simple Lie superalgebra. If $m=n$, the identity map $I_{2n}\in \Sl(n|n)$ is a central element and hence $\Sl(n|n)$ is not simple, but the quotient $\mathfrak{psl}(n|n) := \Sl(n|n)/ \FF I_{2n}$ is simple if $n>1$. For $m, n\geq 0$ (not both zero), the simple Lie superalgebra $A(m,n)$ is $\Sl(m+1|n+1)$ if $m\neq n$, and $\mathfrak{psl}(n+1|n+1)$ if $m=n$. \begin{defi}\label{def:Type-I} If $\Gamma$ is a $G$-grading on $M(m,n)$, then, since $G$ is abelian, it is also a grading on $\gl(m|n)$ and, hence, restricts to its derived superalgebra $\Sl(m|n)$. If $m=n$, then the grading on $\Sl(m|n)$ induces a grading on $\mathfrak{psl}(n|n)$. If a grading on $\Sl(m|n)$ or $\mathfrak{psl}(n|n)$ is obtained in this way, we will call it a \emph{Type I} grading and, otherwise, a \emph{Type II} grading. \end{defi} \subsection{Automorphisms of $A(m,n)$}\label{ssec:auto-Amn} As in the Lie algebra case, the group of automorphisms of the Lie superalgebra $A(m,n)$ is bigger than the group of automorphisms induced from the associative superalgebra $\End(U)$. We define the \emph{supertranspose} of a matrix in $\End(U)$ by \begin{equation*} \left( \begin{matrix} a&b\\ c&d\\ \end{matrix}\right)^{s\top} = \left( \begin{matrix} a\transp & -c\transp\\ b\transp & d\transp\\ \end{matrix}\right). \end{equation*} The supertranspose map $\End(U) \to \End(U)$ is an example of a \emph{super-anti-automorphism}, \ie, it is $\FF$-linear and \[ (XY)^{s\top} = (-1)^{|X||Y|} Y^{s\top} X^{s\top}. \] Hence, the map $\tau:\Sl(m+1|n+1)\rightarrow \Sl(m+1|n+1)$ given by $\tau(X) = - X^{s\top}$ is an automorphism. By \cite[Theorem 1]{serganova}, the group of automorphisms of $A(m,n)$ is generated by $\tau$ and the automorphisms of $\End(U)$, which are restricted to traceless operators and, if necessary, taken modulo the center. In other words, if $m\neq n$, $\Aut(A(m,n))$ is generated by $\mc E \cup \{\tau\}$ and, if $m=n$, by $\mc E \cup \{\pi\,,\,\tau\}$. In both cases, $\mc E$ is a normal subgroup of $\Aut(A(m,n))$. Note that $\pi^2 = \id$, $\tau^2=\upsilon$ (the parity automorphism) and $\pi \tau = \upsilon \tau \pi$. Hence $\frac{\Aut(A(m,n))}{\mc E}$ is isomorphic to $\ZZ_{2}$ if $m\neq n$ and $\ZZ_2\times\ZZ_2$ if $m=n$. Note that a $G$-grading on $A(m,n)$ is of Type I if, and only if, it corresponds to a $\widehat{G}$-action on $A(m,n)$ by automorphisms that belong to the subgroup $\mc E$ if $m\ne n$ and to $\mc E\rtimes\langle\pi\rangle$ if $m=n$. If $\widehat{G}$ acts by automorphisms that belong to $\mc E$, then the Type I grading is said to be \emph{even} and, otherwise, \emph{odd}. \subsection{Superdual of a graded module}\label{ssec:superdual} We will need the following concepts. Let $\D$ be an associative superalgebra with a grading by an abelian group $G$, so we may consider $\D$ graded by the group $G^\# = G\times \ZZ_2$. Let $\U$ be a $G^\#$-graded \emph{right} $\D$-module. The parity $|x|$ of a homogeneous element $x \in \D$ or $x\in \U$ is determined by $\deg x \in G^\#$.
The \emph{superdual module} of $\U$ is $\U\Star = \Hom_\D (\U,\D)$, with its natural $G^\#$-grading and the $\D$-action defined on the \emph{left}: if $d \in \D$ and $f \in \U \Star$, then $(df)(u) = d\, f(u)$ for all $u\in \U$. We define the \emph{opposite superalgebra} of $\D$, denoted by $\D\sop$, to be the same graded superspace $\D$, but with a new product $a*b = (-1)^{|a||b|} ba$ for every pair of $\ZZ_2$-homogeneous elements $a,b \in \D$. The left $\D$-module $\U\Star$ can be considered as a right $\D\sop$-module by means of the action defined by $f\cdot d := (-1)^{|d||f|} df$, for every $\ZZ_2$-homogeneous $d\in \D$ and $f\in \U\Star$. \begin{lemma}\label{lemma:Dsop} If $\D$ is a graded division superalgebra associated to the pair $(T,\beta)$, then $\D\sop$ is associated to the pair $(T,\beta\inv)$.\qed \end{lemma} If $\U$ has a homogeneous $\D$-basis $\mc B = \{e_1, \ldots, e_k\}$, we can consider its \emph{superdual basis} $\mc B\Star = \{e_1\Star, \ldots, e_k\Star\}$ in $\U\Star$, where $e_i\Star : \U \rightarrow \D$ is defined by $e_i\Star (e_j) = (-1)^{|e_i||e_j|} \delta_{ij}$. \begin{remark}\label{rmk:gamma-inv} The superdual basis is a homogeneous basis of $\U\Star$, with $\deg e_i\Star = (\deg e_i)\inv$. So, if $\gamma = (g_1, \ldots, g_k)$ is the $k$-tuple of degrees of $\mc B$, then $\gamma\inv = (g_1\inv, \ldots, g_k\inv)$ is the $k$-tuple of degrees of $\mc B\Star$. \end{remark} For graded right $\D$-modules $\U$ and $\V$, we consider $\U\Star$ and $\V\Star$ as right $\D\sop$-modules as defined above. If $L:\U \rightarrow \V$ is a $\ZZ_2$-homogeneous $\D$-linear map, then the \emph{superadjoint} of $L$ is the $\D\sop$-linear map $L\Star: \V\Star \rightarrow \U\Star$ defined by $L\Star (f) = (-1)^{|L||f|} f \circ L$. We extend the definition of superadjoint to any map in $\Hom_\D (\U, \V)$ by linearity. \begin{remark} In the case $\D=\FF$, if we denote by $[L]$ the matrix of $L$ with respect to the homogeneous bases $\mc B$ of $\U$ and $\mc C$ of $\V$, then the supertranspose $[L]\sT$ is the matrix corresponding to $L\Star$ with respect to the superdual bases $\mc C\Star$ and $\mc B\Star$. \end{remark} We denote by $\vphi: \End_\D (\U) \rightarrow \End_{\D\sop} (\U\Star)$ the map $L \mapsto L\Star$. It is clearly a degree-preserving super-anti-isomorphism. It follows that, if we consider the Lie superalgebras $\End_\D (\U)^{(-)}$ and $\End_{\D\sop} (\U\Star)^{(-)}$, the map $-\vphi$ is an isomorphism. We summarize these considerations in the following result: \begin{lemma}\label{lemma:iso-inv} If $\Gamma = \Gamma(T,\beta,\gamma)$ and $\Gamma' = \Gamma(T,\beta\inv,\gamma\inv)$ are $G$-gradings (considered as $G^\#$-gradings) on the associative superalgebra $M(m,n)$, then, as gradings on the Lie superalgebra $M(m,n)^{(-)}$, $\Gamma$ and $\Gamma'$ are isomorphic via an automorphism of $M(m,n)^{(-)}$ that is the negative of a super-anti-automorphism of $M(m,n)$. \end{lemma} \begin{proof} Let $\D$ be a graded division superalgebra associated to $(T,\beta)$ and let $\U$ be the graded right $\D$-module associated to $\gamma$. The grading $\Gamma$ is obtained by an identification $\psi: M(m, n) \xrightarrow{\sim} \End_\D (\U)$. By Lemma \ref{lemma:Dsop} and Remark \ref{rmk:gamma-inv}, $\Gamma'$ is obtained by an identification $\psi': M(m, n) \xrightarrow{\sim} \End_{\D\sop} (\U\Star)$.
Hence we have the diagram: \begin{center} \begin{tikzcd} & \End_\D (\U) \arrow[to=3-2, "-\vphi"]\\ M(m, n) \arrow[ur, "\psi"] \arrow[dr, "\psi'"]\\ & \End_{\D\sop} (\U\Star) \end{tikzcd} \end{center} Thus, the composition $(\psi')\inv \, (-\vphi) \, \psi$ is an automorphism of the Lie superalgebra $M(m,n)^{(-)}$ sending $\Gamma$ to $\Gamma'$. \end{proof} \subsection{Type I gradings on $A(m,n)$} In this work, we only classify the gradings on $A(m,n)$ that are induced from the associative superalgebra $M(m+1, n+1)$. \begin{defi}\label{def:grd-on-Amn-I} If $\Gamma(T, \beta, \gamma_0,\gamma_1)$ is an even grading on $M(m+1,n+1)$ (see Definition \ref{def:even-grd-on-Mmn}), we denote by $\Gamma_A (T, \beta, \gamma_0,\gamma_1)$ the induced grading on $A(m,n)$. Analogously, if $\Gamma(T, \beta, \gamma)$, or alternatively $\Gamma(t_0, \barr T, \barr\beta, u, \gamma)$, is an odd grading on $M (n+1,n+1)$ (see Definitions \ref{def:odd-grd-on-Mmn-1} and \ref{def:odd-grd-on-Mmn-2}), we denote by $\Gamma_A (T, \beta, \gamma)$, respectively $\Gamma_A (t_0, \barr T, \barr\beta, u, \gamma)$, the induced grading on $A(n,n)$. (Recall that odd gradings can occur only if $m=n$.) \end{defi} \begin{thm}\label{thm:even-Lie-iso} If a $G$-grading of Type I on the Lie superalgebra $A(m,n)$ is even, then it is isomorphic to some $\Gamma_A(T,\beta, \gamma_0, \gamma_1)$ as in Definition \ref{def:grd-on-Amn-I}. Two such gradings, $\Gamma=\Gamma_A(T,\beta, \gamma_0, \gamma_1)$ and $\Gamma'=\Gamma_A (T',\beta', \gamma_0', \gamma_1')$, are isomorphic if, and only if, $T=T'$ and there are $\delta\in \{\pm 1\}$ and $g\in G$ such that $\beta^\delta=\beta'$ and \begin{enumerate}[(i)] \item for $m\neq n$: $g \Xi (\gamma_0^\delta) =\Xi(\gamma_0')$ and $g \Xi (\gamma_1^\delta) =\Xi(\gamma_1')$; \item for $m = n$: either $g \Xi(\gamma_0^\delta)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1^\delta)=\Xi(\gamma_1')$ or $g\Xi(\gamma_0^\delta)=\Xi(\gamma_1')$ and $g \Xi(\gamma_1^\delta)=\Xi(\gamma_0')$. \end{enumerate} \end{thm} \begin{proof} Let $M = M(m+1, n+1)$. Since any automorphism of $M$ induces an automorphism of $A(m,n)$, the first assertion follows from Theorem \ref{thm:even-assc-iso} and the definition of Type I grading. We know from Subsection \ref{ssec:auto-Amn} that every automorphism of $A(m, n)$ arises from an automorphism of $M$ or the negative of a super-anti-automorphism of $M$. Moreover, this automorphism or super-anti-automorphism is uniquely determined and, hence, any Type I grading on $A(m,n)$ is induced by a unique grading on $M$. It follows that $\Gamma$ and $\Gamma'$ are isomorphic if, and only if, there exists either $(a)$ an automorphism or $(b)$ a super-anti-automorphism of $M$ sending $\Gamma (T,\beta, \gamma_0, \gamma_1)$ to $\Gamma (T',\beta', \gamma_0', \gamma_1')$. From Theorem \ref{thm:even-assc-iso}, we know that case $(a)$ holds if, and only if, the above conditions are satisfied with $\delta = 1$. From Lemma \ref{lemma:iso-inv}, there is an automorphism of $A(m,n)$ coming from a super-anti-automorphism of $M$ that sends $\Gamma (T,\beta, \gamma_0, \gamma_1)$ to $\Gamma (T,\beta\inv, \gamma_0\inv, \gamma_1\inv)$. It follows that case $(b)$ holds if, and only if, the above conditions are satisfied with $\delta = -1$. \end{proof} \begin{thm}\label{thm:first-odd-Lie-iso} If a $G$-grading of Type I on the Lie superalgebra $A(n,n)$ is odd, then it is isomorphic to some $\Gamma_A(T,\beta,\gamma)$ as in Definition \ref{def:grd-on-Amn-I}.
Two such gradings, $\Gamma_A (T,\beta, \gamma)$ and $\Gamma_A (T',\beta', \gamma')$, are isomorphic if, and only if, $T=T'$, and there are $\delta \in \{\pm 1\}$ and $g\in G$ such that $\beta^\delta=\beta'$ and $g \Xi(\gamma^\delta)=\Xi(\gamma')$. \end{thm} \begin{proof} The same as for Theorem \ref{thm:even-Lie-iso}, but referring to Theorem \ref{thm:first-odd-iso} instead of Theorem \ref{thm:even-assc-iso}. \end{proof} The parameters $T$, $\beta$ and $\gamma$ in Theorem \ref{thm:first-odd-Lie-iso} are associated to the group $G^\#$, not $G$. Below we use parameters associated to $G$, as we did in Subsection \ref{ssec:second-odd}. \begin{cor}\label{cor:2nd-odd-Lie-iso} If a $G$-grading of Type I on the Lie superalgebra $A(n,n)$ is odd, then it is isomorphic to some $\Gamma_A (t_0, \barr T, \barr\beta, u, \gamma)$. Two such gradings, $\Gamma_A (t_0, \barr T, \barr \beta, u, \gamma)$ and $\Gamma_A (t_0', \barr T', \barr \beta', u', \gamma')$, are isomorphic if, and only if, $t_0=t_0'$, $\barr T = \barr T'$, and there are $\delta \in \{\pm 1\}$ and $g\in G$ such that $\barr\beta^\delta = \barr\beta'$, $u^\delta \equiv u' \pmod{\langle t_0 \rangle}$ and $g\, \Xi(\gamma^\delta) = \Xi(\gamma')$. \end{cor} \begin{proof} Follows from Theorems \ref{thm:first-odd-Lie-iso} and \ref{thm:2nd-odd-iso}. \end{proof} \section{Gradings on $P(n)$}\label{sec:Pn} Throughout this section it will be assumed that $\Char \FF = 0$. \subsection{The Lie superalgebra $P(n)$}\label{subseq:Pn} Let $U = U\even \oplus U\odd$ be a superspace and let $\langle\, , \rangle: U\times U\rightarrow \FF$ be a bilinear form that is homogeneous with respect to the $\ZZ_2$-grading, i.e., that has a well-defined parity when regarded as a linear map $U\tensor U \rightarrow \FF$. We say that $\langle\, , \rangle$ is \emph{supersymmetric} if $\langle x,y\rangle = (-1)^{\abs{x}\abs{y}} \langle y,x \rangle$ for all homogeneous elements $x,y\in U$. From now on, we suppose that $\langle\, , \rangle$ is supersymmetric, nondegenerate, and odd. The \emph{periplectic Lie superalgebra} $\mathfrak{p}(U)$ is defined as $\mathfrak{p}(U)\even \oplus \mathfrak{p}(U)\odd$ where \[ \mathfrak{p}(U)^{i} = \{L\in \gl(U)^i\mid \langle L(x),y\rangle = - (-1)^{i\abs{x}} \langle x,L(y)\rangle \text{ for all homogeneous } x,y\in U\} \] for all $i\in\Zmod2$. The superalgebra $\mathfrak{p}(U)$ is not simple, but its derived superalgebra $P(U) = [\mathfrak{p}(U),\mathfrak{p}(U)]$ is simple if $\dim U \geq 6$. Since $\langle\, , \rangle$ is nondegenerate and odd, it is clear that $U\odd$ is isomorphic to $(U\even)^*$ by $u \mapsto \langle u, \cdot\rangle $. Writing $U\even = V$, we can identify $U$ with $V\oplus V^*$. Since $\langle \, , \rangle$ is supersymmetric, with this identification we have \[ \langle v_1+v^*_1,v_2 + v_2^* \rangle = v_1^* (v_2) + v_2^*(v_1) \] for all $v_1, v_2\in V$ and $v_1^*, v_2^*\in V^*$. Hence, $P(U)$ is a subsuperspace of \[ \End(U) = \End(V \oplus V^*) = \begin{pmatrix} \End (V) & \Hom (V^*, V)\\ \Hom (V, V^*) & \End(V^*) \end{pmatrix} \] given by \[ P(U) = \left\{\left(\begin{matrix} a & b \\ c & -a^*\\ \end{matrix} \right)\Big| \,\tr a = 0,\, b=b^* \AND c=-c^*\right\}. \] In the case $V=\FF^{n+1}$, we write $\mathfrak{p}(n)$ for $\mathfrak{p}(U)$ and define $P(n) = [\mathfrak{p}(n),\mathfrak{p}(n)]$, where $n\geq 2$.
Using the standard basis of $V$, we can identify $P(n)$ with the following subsuperalgebra of $M(n+1,n+1)^{(-)}$: \begin{equation}\label{eq:Pn-abstract} P(n) = \left\{\left(\begin{matrix} a & b \\ c & -a\transp\\ \end{matrix} \right)\Big| \,\tr a = 0,\, b=b\transp \AND c=-c\transp\right\}. \end{equation} One can readily check that $P(U)$ is a graded subspace of $\End (U)$ equipped with its canonical $\ZZ$-grading, so we have $P(U) = P(U)\inv \oplus P(U)^0 \oplus P(U)^1$. Also, the map $\iota: \Sl(n+1) \rightarrow P(n)^0$ given by $ \iota(a) = \left(\begin{matrix} a & 0 \\ 0 & -a\transp\\ \end{matrix} \right) $ is an isomorphism of Lie algebras. If we identify $\Sl(n+1)$ and $P(n)^0$ via this map, then $P(n)^{-1} \iso \mathrm{S}^2 (U\even) \iso V_{2\pi_1}$ and $P(n)^1 \iso \Exterior^2 (U\odd) \iso V_{\pi_{n-1}}$ as modules over $P(n)^0$, where $\pi_i$ denotes the $i$-th fundamental weight of $\Sl(n+1)$. \subsection{Automorphisms of $P(n)$} The automorphisms of $P(n)$ were originally described by V. Serganova (see \cite[Theorem 1]{serganova}). We give a more explicit description of the automorphism group that we will use for our purposes. \begin{lemma}\label{lemma:Pn-generates-Mmn} Let $U$ be a finite-dimensional superspace equipped with a supersymmetric nondegenerate odd bilinear form. The subset $P(U)$ generates $\End (U)$ as an associative superalgebra. \end{lemma} \begin{proof} Denote by $R$ the associative superalgebra generated by $P(U)$. We claim that $U$ is a simple $R$-module. Indeed, since $P(U)\even\iso \Sl(n+1)$, we have that $U\even \iso V_{\pi_1}$ and $U\odd \iso V_{\pi_n}$ are simple non-isomorphic modules over the Lie algebra $P(U)\even$. Also, the action of $P(U)\odd$ moves elements from $U\even$ to $U\odd$ and vice versa, so $U$ does not have nonzero proper subspaces invariant under $P(U)$. By the Density Theorem, since we are over an algebraically closed field, we conclude that $R = \End (U)$. \end{proof} \begin{prop}\label{prop:Aut-Pn} The group of automorphisms of $P(n)$ is $\frac{\GL (n+1)}{\{-1,+1\}}$ where $a\in \GL(n+1)$ acts as the conjugation by $\left( \begin{matrix} a&0\\ 0&(a\transp)^{-1}\\ \end{matrix}\right)$. \end{prop} \begin{proof} Let $P=P(n)$ and $\vphi: P \rightarrow P$ be a Lie superalgebra automorphism. Since $\vphi$ preserves the canonical $\ZZ_2$-grading, taking its restrictions, we obtain a Lie algebra automorphism $\vphi\subeven : P\even \rightarrow P\even$ and an invertible linear map $\vphi\subodd: P\odd \rightarrow P\odd$. \setcounter{claim}{0} \begin{claim} The components $P^{-1}$ and $P^1$ of the canonical $\ZZ$-grading are invariant under $\vphi$. \end{claim} We denote by $(P\odd)^{\vphi\subeven}$ the $P\even$-module $P\odd$ twisted by $\vphi\subeven$, \ie, the space $P\odd$ with a new action given by $\ell \cdot x = \vphi\subeven (\ell)x$ for all $\ell \in P\even$ and $x \in P\odd$. Clearly, the map $\vphi\subodd: P\odd \rightarrow (P\odd)^{\vphi\subeven}$ is a $P\even$-module isomorphism. In particular, $(P\odd)^{\vphi\subeven} = \vphi\subodd (P^{-1}) \oplus \vphi\subodd (P^{1})$, where $\vphi\subodd (P^{-1})$ and $\vphi\subodd (P^{1})$ are simple and non-isomorphic. It follows that either $(P^{-1})^{\vphi\subeven} = \vphi\subodd (P^{-1})$ or $(P^{-1})^{\vphi\subeven} = \vphi\subodd (P^{1})$. By dimension count, we have $(P^{-1})^{\vphi\subeven} = \vphi\subodd (P^{-1})$ and, similarly, $(P^{1})^{\vphi\subeven} = \vphi\subodd (P^{1})$. \begin{claim} The automorphism $\vphi\subeven$ is inner.
\end{claim} If we identify $\Sl(n+1)$ with $P^0$ via the map $\iota$ defined in Subsection \ref{subseq:Pn}, we have $P^{-1}\iso V_{2\pi_1}$ as an $\Sl(n+1)$-module. By Claim 1, we know that $\vphi\subodd \restriction_{P\inv} : P\inv \rightarrow (P\inv)^{\vphi\subeven}$ is an isomorphism of modules, but if $\vphi\subeven$ were an outer automorphism, we would have $(V_{2\pi_1})^{\vphi\subeven} \simeq V_{2\pi_n}$, which would force $n=1$, a contradiction. \begin{claim} If $\vphi\subeven = \id$, then $\vphi = \upsilon_\lambda$ for some $\lambda\in \FF^\times$. \end{claim} Recall from Subsection \ref{ssec:G-hat-action} that $\upsilon_{\lambda}$ acts as $\lambda^i \id$ on $P^i$. Since $\vphi\subeven = \id$, $\vphi\subodd: P\odd \rightarrow P\odd$ is a $P^0$-module automorphism. By Claim 1 and Schur's Lemma, $\vphi\subodd\restriction_{P^{-1}}$ and $\vphi\subodd\restriction_{P^{1}}$ are scalar operators. Due to the superalgebra structure, these two scalars must be inverses of each other (indeed, $0\neq [P^{-1}, P^{1}]\subseteq P^0$ and $\vphi$ fixes $P^0$ pointwise), concluding the proof of the claim.\qedclaim By Claim 2, we know that there is an invertible $a\in \End(U\even)$ such that $\vphi\subeven$ is the conjugation by $A=\left( \begin{matrix} a&0\\ 0&(a\transp)^{-1}\\ \end{matrix}\right)$. By Claim 3, $\vphi$ must be this conjugation composed with $\upsilon_\lambda$ for some $\lambda\in \FF^\times$. But $\upsilon_\lambda$ is the conjugation by $\left( \begin{matrix} \mu\inv\id &0\\ 0&\mu \id\\ \end{matrix} \right)$ where $\mu^2=\lambda$, so we can adjust $a$ and assume that $\vphi$ is the conjugation by $A$. Since $P$ generates $M(n+1,n+1)$ as an associative superalgebra (Lemma \ref{lemma:Pn-generates-Mmn}), $A$ is determined up to scalar and, clearly, the only possible scalars are $-1$ and $1$. \end{proof} \begin{remark} The images of $\upsilon_\lambda$, $\lambda \in \FF^\times$, cover the group of outer automorphisms of $P(n)$ (see \cite[Theorem 1]{serganova}). \end{remark} \subsection{Restriction of gradings from $M(n+1,n+1)$ to $P(n)$} We start with a consequence of Proposition \ref{prop:Aut-Pn}. \begin{cor}\label{cor:automorphisms-Pn} Every automorphism of $P(n)$ is the restriction of a unique even automorphism of $M(n+1, n+1)$ and every grading on $P(n)$ is the restriction of a unique even grading on $M(n+1, n+1)$. \end{cor} \begin{proof} Consider the embedding $\Aut(P(n))\rightarrow \Aut(M(n+1,n+1))$ that follows from Proposition \ref{prop:Aut-Pn}. The image consists of even automorphisms, so Proposition \ref{prop:3-equiv-even-morita-action}(iv) implies that every $G$-grading on $P(n)$ extends to an even grading on $M(n+1, n+1)$. The uniqueness follows from Lemma \ref{lemma:Pn-generates-Mmn}. \end{proof} Of course, not every even grading on $M(n+1,n+1)$ restricts to $P(n)$. We are going to obtain necessary and sufficient conditions for such a restriction to be possible. Let $\D$ be a finite-dimensional graded division algebra. The concept of \emph{dual of a graded $\D$-module} is a special case of the concept of \emph{superdual} discussed in Subsection \ref{ssec:superdual}, which arises when the gradings on $\D$ and its graded modules are even. Furthermore, in our situation $T$ must be an elementary $2$-group (see Theorem \ref{thm:Pn-elem-2-grp}). Let us recall the definitions and specialize them to the case at hand. Let $\mc V$ be a \emph{right} graded $\D$-module. Then $\mc V^{\star}=\Hom_{\D} (\mc V, \D)$ is a \emph{left} $\D$-module with the action given by $(d\cdot f) (v) = d f(v)$ for all $d\in \D$, $f\in\mc V^\star$ and $v\in\mc V$.
If $\mc B = \{v_1, \ldots, v_k\}$ is a homogeneous basis for $\mc V$, the dual basis $\mc B\Star \subseteq \mc V \Star$ consists of the elements $v_i\Star$, $1\leq i \leq k$, defined by $v_i\Star (v_j) = \delta_{ij}$. Note that $\deg v_i\Star = (\deg v_i)\inv$. Given two right $\D$-modules, $\mc V$ and $\mc W$, and a $\D$-linear map $L:\mc V\rightarrow \mc W$, we have the adjoint $L^\star: \mc W^\star\rightarrow \mc V^\star$ defined by $L^\star(f) = f\circ L$, for every $f\in\mc W^\star$. We now assume that $\D$ is a standard realization associated to a pair $(T, \beta)$ such that $T$ is an elementary $2$-group. With this we can identify $\D$ with $\D\op$ via transposition (see Remark \ref{rmk:2-grp-transp}) and, hence, we can regard left $\D$-modules as right $\D$-modules. In particular, if $\mc V$ is a graded right $\D$-module, then $\mc V\Star$ is a graded right $\D$-module via $(f \cdot d)(v) = d\transp f(v)$ for all $f\in \mc V\Star$, $d\in\D$ and $v\in \mc V$. Consider the space $\Hom_\D(\mc V, \mc W)$. Fixing homogeneous $\D$-bases $\mc B = \{v_1, \ldots, v_k\}$ and $\mc C = \{w_1, \ldots, w_\ell\}$ for $\mc V$ and $\mc W$, respectively, we obtain an isomorphism between $\Hom_\D(\mc V, \mc W)$ and $\M_{\ell \times k} (\D)$. The latter is naturally isomorphic to $\M_{\ell \times k} (\FF) \tensor \D$, so we will identify them. \begin{lemma}\label{lemma:D-transp} Let $L: \mc V\rightarrow \mc W$ be a $\D$-linear map. We fix homogeneous $\D$-bases $\mc B$ and $\mc C$ on $\mc V$ and $\mc W$, respectively, and their dual bases in $\mc V\Star$ and $\mc W\Star$. If $A\tensor d \in \M_{\ell \times k} (\FF) \tensor \D$ represents $L$, then $A\transp \tensor d\transp$ represents $L\Star$.\qed \end{lemma} We can regard the elements of $\M_{\ell \times k} (\FF) \tensor \D$ as matrices over $\FF$ via Kronecker product (as in Definition \ref{def:explicit-grd-assoc}). Then we have $A\transp \tensor d\transp = (A\tensor d)\transp$. \begin{thm}\label{thm:Pn-elem-2-grp} Let $U$ be a finite-dimensional superspace and let $\Gamma = \Gamma (T, \beta, \gamma_0, \gamma_1)$ be an even $G$-grading on $\End(U)$. The superspace $U$ admits a supersymmetric nondegenerate odd bilinear form such that $P(U)$ is a $G$-graded subsuperalgebra of $\End(U)^{(-)}$ if, and only if, $T$ is an elementary $2$-group and there is $g_0\in G$ such that $\Xi(\gamma_1) = g_0 \, \Xi(\gamma_0\inv)$. Moreover, if there are two supersymmetric nondegenerate odd bilinear forms on $U$ such that the corresponding $P_1(U)$ and $P_2(U)$ are $G$-graded subsuperalgebras, then $P_1(U)$ and $P_2(U)$ are isomorphic up to shift in opposite directions. \end{thm} \begin{proof} Assume that, for some form, $P(U)$ is a $G$-graded subsuperalgebra. Let $V=U\even$ and consider the identification of $U\odd$ with $V^*$ presented in Subsection \ref{subseq:Pn}. This way $\Gamma = \Gamma (T, \beta, \gamma_0, \gamma_1)$ is an even grading on \[ \End(U) = \End(V \oplus V^*) = \begin{pmatrix} \End (V) & \Hom (V^*, V)\\ \Hom (V, V^*) & \End(V^*) \end{pmatrix}. \] In particular, $\End(V)$ and $\End(V^*)$ are graded subspaces of $\End(U)\even$, with gradings $\Gamma (T, \beta, \gamma_0)$ and $\Gamma (T, \beta, \gamma_1)$, respectively. If \[ x = \left(\begin{matrix} a & 0 \\ 0 & -a^*\\ \end{matrix} \right) \] is a homogeneous element in $P(U)\even$, then both $u(x) := a \in \Sl(V) \subseteq \End(V)$ and $v(x) := -a^* \in \Sl(V^*) \subseteq \End(V^*)$ are homogeneous elements of the same degree.
In other words, the maps $u: P(U)\even \rightarrow \Sl(V)$ and $v: P(U)\even \rightarrow \Sl(V^*)$ are homogeneous of degree $e$. Consider the algebra isomorphism $\vphi: \End(V)\op \rightarrow \End(V^*)$ associating to each operator its adjoint. Clearly, $\vphi(a) = - (v \circ u\inv) (a)$ for all $a\in \Sl(V)$. Since $\End(V) = \Sl(V)\, \oplus\, \FF\id_V$ and $\vphi(\id_V) = \id_{V^*}$, we see that $\vphi$ is homogeneous of degree $e$. From Lemma \ref{lemma:Dsop} and Remark \ref{rmk:gamma-inv}, we conclude that $\Gamma (T, \beta\inv, \gamma_0\inv) \iso \Gamma (T, \beta, \gamma_1)$, and hence, by Theorem \ref{thm:classification-matrix}, $\beta\inv = \beta$ and there is $g_0\in G$ such that $g_0\,\Xi(\gamma_0\inv) = \Xi(\gamma_1)$. Since $\beta$ is nondegenerate, $\beta\inv = \beta$ if, and only if, $T$ is an elementary $2$-group. Note that the $G$-graded algebra $P(U)\even$ is isomorphic (via the map $u$) to the $G$-graded subalgebra $\Sl(V)$ of $\End(V)^{(-)}$, where the grading on $\End(V)$ is $\Gamma(T,\beta, \gamma_0)$. Therefore, if we have two forms such that the corresponding $P_1(U)$ and $P_2(U)$ are $G$-graded subsuperalgebras, then their even parts are isomorphic as $G$-graded algebras. Using Lemmas \ref{lemma:simplebimodule} and \ref{lemma:opposite-directions}, we conclude the ``moreover'' part. Now assume, conversely, that $T$ is an elementary $2$-group and $\Xi(\gamma_1) = g_0 \, \Xi(\gamma_0\inv)$. We can adjust $\gamma_1$, if necessary, so that $\gamma_1 = g_0\, \gamma_0\inv$; this does not change the isomorphism class of $\Gamma$. Let $\D$ be a standard realization of a graded division algebra associated to $(T,\beta)$ and let $\mc V$ be a graded right $\D$-module with a homogeneous basis $\mc B$ whose degrees are given by $\gamma_0$. Define $\mc U = \mc U\even \oplus \mc U\odd$ with $\mc U\even = \mc V$ and $\mc U\odd = (\mc V\Star)^{[g_0]}$. The $G$-grading $\Gamma$ on $\End(U)$ is defined by means of an isomorphism: \[ \begin{split} \End(U) \iso \End_\D (\mc U) &= \begin{pmatrix} \End_\D (\mc V) & \Hom_\D ((\mc V\Star)^{[g_0]}, \mc V)\\ \Hom_\D (\mc V, (\mc V\Star)^{[g_0]}) & \End_\D ((\mc V\Star)^{[g_0]}) \end{pmatrix}\\ &= \begin{pmatrix} \End_\D (\mc V) & \Hom_\D (\mc V\Star, \mc V)^{[g_0\inv]}\\ \Hom_\D (\mc V, \mc V\Star)^{[g_0]} & \End_\D (\mc V\Star) \end{pmatrix}. \end{split} \] Using the homogeneous $\D$-bases $\mc B$ for $\mc V$ and $\mc B\Star$ for $\mc V\Star$ to represent $\D$-linear maps by matrices in $M_k(\D) = M_k(\FF) \tensor \D$ and using the Kronecker product to identify the latter with $M_{n+1}(\FF)$, we obtain an isomorphism $\End(U) \xrightarrow{\sim} M(n+1, n+1)$, and $M(n+1, n+1)$ contains $\mathfrak{p}(n)$ and $P(n) = [\mathfrak{p}(n), \mathfrak{p}(n)]$ as in Equation \eqref{eq:Pn-abstract}. The above isomorphism $\End(U)\xrightarrow{\sim} M(n+1,n+1)$ of superalgebras is given by an isomorphism of superspaces $U \xrightarrow{\sim} \FF^{n+1} \oplus \FF^{n+1}$. Hence, there exists a supersymmetric nondegenerate odd bilinear form on $U$ such that $P(U)$ corresponds to $P(n)$ under the above isomorphism. Finally, we have to show that $P(U)$ is a $G$-graded subsuperspace of $\End(U)$. Clearly, it is sufficient to prove the same for $\mathfrak{p}(U)$.
But $\mathfrak{p}(U)$ corresponds to \begin{equation*} \mathfrak{p}(n) = \left\{\left(\begin{matrix} a & b \\ c & -a\transp\\ \end{matrix} \right)\Big| \,a,b,c\in M_{n+1}(\FF),\, b=b\transp \AND c=-c\transp\right\} \subseteq M(n+1,n+1), \end{equation*} which, in view of Lemma \ref{lemma:D-transp}, corresponds to the subsuperspace \[\begin{split} \bigg \{\left(\begin{matrix} a & b \\ c & -a\Star\\ \end{matrix} \right)\Big| \,a\in \End_\D(\mc V),\, b=b\Star\in \Hom_\D(\mc V\Star, \mc V),\, c=-c\Star\in \Hom_\D(\mc V, \mc V\Star) \bigg\} \end{split} \] of $\End_\D(\mc U)$, which is clearly a $G$-graded subsuperspace. \end{proof} \subsection{$G$-gradings up to isomorphism} \begin{defi}\label{def:grd-Pn} Let $T\subseteq G$ be a finite elementary $2$-subgroup, $\beta$ be a nondegenerate alternating bicharacter on $T$, $\gamma$ be a $k$-tuple of elements of $G$, and $g_0\in G$. We will denote by $\Gamma_P (T, \beta, \gamma, g_0)$ the grading on the superalgebra $P(n)$ obtained by restricting the grading $\Gamma(T,\beta,\gamma,g_0\gamma\inv)$ on $M(n+1,n+1)$ as in the proof of Theorem \ref{thm:Pn-elem-2-grp}. Explicitly, write $\gamma = (g_1, \ldots, g_k)$ and take a standard realization of a graded division algebra $\D$ associated to $(T, \beta)$. Then $M_{n+1}(\FF)\iso M_k(\FF) \tensor \D$ by means of Kronecker product, and \[ M(n+1, n+1) \iso \begin{pmatrix} M_k (\FF)\tensor \D & M_k (\FF)\tensor \D\\ M_k (\FF)\tensor \D & M_k (\FF)\tensor \D \end{pmatrix}. \] Denote by $E_{ij}$ the $(i,j)$-th matrix unit in $M_k (\FF)$. The grading $\Gamma(T,\beta,\gamma,g_0\gamma\inv)$ is given by: \begin{center} \begin{tabular}{@{$\bullet$ }ll} $\deg (E_{ij}\tensor d) = g_i (\deg d) g_j\inv$ & in the upper left corner;\\ $\deg (E_{ij}\tensor d) = g_i (\deg d) g_j \, g_0\inv$ & in the upper right corner;\\ $\deg (E_{ij}\tensor d) = g_i\inv (\deg d) g_j\inv g_0$ & in the lower left corner;\\ $\deg (E_{ij}\tensor d) = g_i\inv (\deg d) g_j$ & in the lower right corner. \end{tabular} \end{center} \end{defi} Note that the restriction of $\Gamma_P(T, \beta, \gamma, g_0)$ to the even part is the inner grading on $\Sl(n+1)$ with parameters $(T, \beta, \gamma)$. \begin{thm}\label{thm:Pn-iso} Every $G$-grading on the Lie superalgebra $P(n)$ is isomorphic to some $\Gamma_P (T, \beta, \gamma, g_0)$ as in Definition \ref{def:grd-Pn}. Two gradings, $\Gamma = \Gamma_P (T,\beta,\gamma,g_0)$ and $\Gamma' = \Gamma_P (T',\beta',\gamma',g_0')$, are isomorphic if and only if $T=T'$, $\beta = \beta'$, and there is $g\in G$ such that $g^2 g_0 = g_0'$ and $g\,\Xi(\gamma)=\Xi(\gamma')$. \end{thm} \begin{proof} The first assertion follows from Corollary \ref{cor:automorphisms-Pn} and Theorem \ref{thm:Pn-elem-2-grp}. For the second assertion, recall that $\Gamma$ and $\Gamma'$ are, respectively, the restrictions of the gradings $\widetilde \Gamma = \Gamma (T, \beta, \gamma, g_0 \gamma\inv)$ and $\widetilde \Gamma' = \Gamma (T', \beta', \gamma', g_0' (\gamma')\inv)$ on $M(n+1, n+1)$. $(\Rightarrow)$: Suppose $\Gamma \iso \Gamma'$. Since every automorphism of $P(n)$ extends to an automorphism of $M(n+1, n+1)$ (Corollary \ref{cor:automorphisms-Pn}), we have $\widetilde \Gamma \iso \widetilde \Gamma'$, which implies $T=T'$ and $\beta = \beta'$ by Theorem \ref{thm:even-assc-iso}. Let $\D$ be a standard realization associated to $(T, \beta)$ and let $\mc V$ be a right $\D$-module with basis $\mc B = \{v_1, \ldots, v_k\}$, which is graded by assigning $\deg v_i = g_i$. The same module, but with $\deg v_i = g_i'$, will be denoted by $\mc W$. 
Then $E = \End_\D (\mc V \oplus (\mc V\Star)^{[g_0]})$ and $E' = \End_\D (\mc W \oplus (\mc W\Star)^{[g_0']})$ are graded superalgebras. Using the bases $\mc B$ and $\mc B\Star$ and the Kronecker product, we can identify them with $M(n+1, n+1)$. The first identification gives the grading $\widetilde \Gamma$ on $M(n+1, n+1)$ and the second gives $\widetilde \Gamma'$. Let $\Phi$ be an automorphism of $M(n+1, n+1)$ that sends $\widetilde\Gamma$ to $\widetilde\Gamma'$. By Proposition \ref{prop:Aut-Pn}, $\Phi$ is the conjugation by \[ A = \begin{pmatrix} a & 0\\ 0 & (a\transp)\inv \end{pmatrix} \] for some $a\in \GL(n+1)$. By Lemma \ref{lemma:D-transp}, $\Phi$ corresponds to the isomorphism $E\rightarrow E'$ that is the conjugation by \[ \phi = \begin{pmatrix} \alpha & 0\\ 0 & (\alpha\Star)\inv \end{pmatrix} \] where $\alpha: \mc V\rightarrow \mc W $ and $(\alpha\Star)\inv: (\mc V\Star)^{[g_0]} \rightarrow (\mc W\Star)^{[g_0']} $ are $\D$-linear maps. On the other hand, by Proposition \ref{prop:inner-automorphism}, this isomorphism $E\to E'$ is the conjugation by a homogeneous bijective $\D$-linear map \[ \psi= \left( \begin{matrix} \psi_{11}&\psi_{12}\\ \psi_{21}&\psi_{22}\\ \end{matrix}\right). \] It follows that there is a nonzero $\lambda\in \FF$ such that $\phi = \lambda\psi$, and, hence, $\phi$ is homogeneous. Let us denote its degree by $g$. Then both $\alpha$ and $(\alpha\Star)\inv$ must be homogeneous of degree $g$. Hence, $\alpha: \mc V^{[g]}\to\mc W $ is an isomorphism of graded $\D$-modules, so we conclude that $g \Xi(\gamma) = \Xi(\gamma')$. Considered as a map $\mc V\Star \rightarrow \mc W\Star$, $(\alpha\Star)\inv$ would have degree $g\inv$, so taking into account the shifts, it has degree $g\inv g_0\inv g_0'$, which must be equal to $g$, so $g_0' = g^2 g_0$. $(\Leftarrow)$: We may suppose $\D=\D'$. Since $g\Xi(\gamma) = \Xi(\gamma')$, we have an isomorphism of graded $\D$-modules $\alpha: \mc V^{[g]} \rightarrow \mc W$. As a map from $\mc V$ to $\mc W$, $\alpha$ is homogeneous of degree $g$, hence $(\alpha\Star)\inv: \mc V\Star \rightarrow \mc W\Star$ has degree $g\inv$. It follows that, as a map from $(\mc V\Star)^{[g_0]}$ to $(\mc W\Star)^{[g_0']}$, $(\alpha\Star)\inv$ has degree $g\inv g_0\inv g_0' = g$. The desired automorphism of $P(n)$ that sends $\Gamma$ to $\Gamma'$ is the conjugation by the matrix $\psi=\left( \begin{matrix} \alpha&0\\ 0&(\alpha\Star)\inv\\ \end{matrix}\right).$ \end{proof} \subsection{Fine gradings up to equivalence} For every integer $\ell\ge 0$, we set $T_{(\ell)}=\ZZ_2^{2\ell}$ and fix a nondegenerate alternating bicharacter $\beta_{(\ell)}$, say, \[ \beta_{(\ell)} (x,y)=(-1)^{\sum_{i=1}^{2\ell} x_i y_{2\ell-i+1}}. \] \begin{defi}\label{def:fine-grd-Pn} For every $\ell$ such that $2^\ell$ is a divisor of $n+1$, put $k:=\frac{n+1}{2^\ell}$ and $\tilde{G}_{(\ell)}=T_{(\ell)}\times \ZZ^k$. Let $\{e_1,\ldots,e_k\}$ be the standard basis of $\ZZ^k$ and let $\langle e_0\rangle$ be the infinite cyclic group generated by a new symbol $e_0$. We define $\Gamma_P(\ell,k)$ to be the $\tilde{G}_{(\ell)}\times\langle e_0 \rangle$-grading $\Gamma_P (T_{(\ell)},\beta_{(\ell)},(e_1, \ldots, e_k), e_0)$ on $P(n)$. If $n$ is clear from the context, we will simply write $\Gamma_P(\ell)$. \end{defi} The subgroup of $\tilde{G}_{(\ell)} \times \langle e_0 \rangle$ generated by the support of $\Gamma_P(\ell,k)$ is \[ G_{(\ell)} := (T_{(\ell)}\times \ZZ^k_0)\oplus \langle 2e_1 - e_0 \rangle \iso \ZZ_2^{2\ell}\times \ZZ^k. 
\] \begin{prop}\label{prop:P-fine} The gradings $\Gamma_P(\ell)$ on $P(n)$ are fine and pairwise nonequivalent. \end{prop} \begin{proof} We can write $\Gamma_P (\ell) = \Gamma^{-1} \oplus \Gamma^0 \oplus \Gamma^1$ where $\Gamma^i$ is the restriction of $\Gamma_P(\ell)$ to the $i$-th component of the canonical $\ZZ$-grading of $P(n)$. We identify $ P(n)^0 = P(n)\even$ with $ \Sl(n+1)$ via the map $\iota$ defined in Subsection \ref{subseq:Pn}. Then the grading $\Gamma^0$ on $P(n)^0$ is the restriction to $\Sl(n+1)$ of a fine grading on $M_{n+1}(\FF)$ with universal group $T_{(\ell)}\times\ZZ^k_0$ (\cite[Proposition 2.35]{livromicha}), so it has no proper refinements among the inner gradings on $\Sl(n+1)$. Also, $\Gamma_P(\ell)$ and $\Gamma_P(\ell')$ are nonequivalent if $\ell\ne\ell'$, because their restrictions to $P(n)^0$ are nonequivalent. Note that the supports of $\Gamma^{-1}$, $\Gamma^0$ and $\Gamma^1$ are pairwise disjoint since they project to, respectively, $-e_0$, $0$, and $e_0$ in the direct summand $\langle e_0 \rangle $ of $\tilde{G}_{(\ell)}\times\langle e_0\rangle$. Suppose that the grading $\Gamma_P (\ell)$ admits a refinement $\Delta = \Delta^{-1} \oplus \Delta^0 \oplus \Delta^1$. Then $\Delta^0$ is an inner grading that is a refinement of $\Gamma^0$, hence they are the same grading (up to relabeling). Using Lemma \ref{lemma:simplebimodule}, we conclude that $\Gamma_P(\ell)$ and $\Delta$ are the same grading, proving that $\Gamma_P(\ell)$ is fine. \end{proof} \begin{thm}\label{thm:class-fine-Pn} Every fine grading on $P(n)$ is equivalent to a unique $\Gamma_P(\ell)$ as in Definition \ref{def:fine-grd-Pn}. Moreover, every grading $\Gamma_P(\ell)$ is fine, and $G_{(\ell)}$ is its universal group. \end{thm} \begin{proof} Let $\Gamma=\Gamma_P(T,\beta,\gamma,g_0)$ be any $G$-grading on $P(n)$. Since $T$ is an elementary $2$-group of even rank, we have an isomorphism $\alpha:T_{(\ell)}\to T$, for some $\ell$, such that $\beta_{(\ell)}=\beta\circ(\alpha\times\alpha)$. We can extend $\alpha$ to a homomorphism $G_{(\ell)}\rightarrow G$ (also denoted by $\alpha$) by sending the elements $e_1,\ldots,e_k$ to the entries of $\gamma$, and $e_0$ to $g_0$. By construction, ${}^\alpha\Gamma_P {(\ell)}\iso\Gamma$. It remains to apply Proposition \ref{prop:P-fine} and Lemma \ref{lemma:universal-grp}. \end{proof} \section*{Acknowledgments} The first two authors were Ph.D. students at Memorial University of Newfoundland while working on this paper. Helen Samara Dos Santos would like to thank her co-supervisor, Yuri Bahturin, for help and guidance during her Ph.D. program. All authors are grateful to Yuri Bahturin and Alberto Elduque for useful discussions.
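As an informal computational postscript (our addition, not part of the paper's argument), the matrix identity $A\transp \tensor d\transp = (A\tensor d)\transp$ invoked after Lemma \ref{lemma:D-transp} is easy to confirm numerically. In the following Python sketch, random integer matrices stand in for the two factors of an element of $\M_{\ell\times k}(\FF)\tensor\D$ under the Kronecker-product identification.

\begin{verbatim}
# Sanity check (illustrative only): transposing factorwise agrees with
# transposing the Kronecker product, i.e. kron(A, d).T == kron(A.T, d.T).
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 2))   # plays the role of A in M_{l x k}(F)
d = rng.integers(-3, 4, size=(4, 4))   # stands in for an element of D

assert np.array_equal(np.kron(A, d).T, np.kron(A.T, d.T))
print("kron(A, d).T == kron(A.T, d.T)")
\end{verbatim}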
https://arxiv.org/abs/1909.05841
GVZ-groups, Flat Groups, and CM-Groups
We show that a group is a GVZ-group if and only if it is a flat group. We show that the nilpotence class of a GVZ-group is bounded by the number of distinct degrees of irreducible characters. We also show that certain CM-groups can be characterized as GVZ-groups whose irreducible character values lie in the prime field.
\section{Introduction} In this note, all groups are finite, and we will write $\irr G$ for the set of irreducible characters of a group $G$. It is well-known that the inequality $\chi(1) \le \norm {G:Z(\chi)}^{1/2}$ holds for all irreducible characters $\chi \in \irr{G}$, where $Z (\chi) = \{ g \in G \mid \norm{\chi(g)} = \chi(1) \}$ is the center of $\chi$. Furthermore, equality holds if and only if every element $g$ of $G$ satisfies one of the two conditions: $g \in Z(\chi)$ or $\chi (g) = 0$. (See Chapter 2 of \cite{MI76}.) It has been observed that there is a parallelism between the irreducible characters of a group and the conjugacy classes of a group. Furthermore, there seems to be a mysterious relationship between the degrees of the irreducible characters and the sizes of the conjugacy classes. Chillag has a nice exposition about these parallels in \cite{chillag}. With this in mind, it is not difficult to see that the inequality $\norm{\mathrm{cl}_G(g)}\le\norm{[g,G]}$ holds for every element $g\in G$, where $[g,G]$ is the (normal) subgroup of $G$ generated by the set $\gamma_G (g) = \{ [g,x] \mid x \in G\}$ of commutators involving $g$. Furthermore, equality holds if and only if $\gamma_G (g)$ is a subgroup of $G$. (See Corollary \ref{basics 1}.) In this paper, we consider groups where these extremes occur. Following \cite{AN12gvz}, we say $G$ is a {\it GVZ-group} if $\chi (1) = \norm {G:Z(\chi)}^{1/2}$ for every irreducible character $\chi \in \irr G$. A group $G$ is a {\it flat group}, following \cite{flatness}, if $\norm{\mathrm{cl}_G(g)}=\norm{[g,G]}$ holds for every element $g\in G$. Interestingly, these two extreme situations can only occur simultaneously. \begin{introthm}\label{flat = gvz} The group $G$ is flat if and only if it is a GVZ-group. \end{introthm} Let $G$ be a GVZ-group. Then $G$ is necessarily nilpotent (see \cite[Proposition 3.2]{OnoType} or \cite[Proposition 1.2]{AN12gvz}). Since nilpotent groups are $M$-groups, Taketa's theorem implies that the derived length of $G$ is bounded by the number of distinct degrees of the irreducible characters of $G$. That is, if ${\rm dl} (G)$ is the derived length of $G$ and $\cd G = \{ \chi (1) \mid \chi \in \irr G\}$ is the set of irreducible character degrees, then ${\rm dl} (G) \le |\cd G|$. In general, the nilpotence class of a nilpotent group cannot be bounded in terms of the number of character degrees. In particular, all dihedral, semi-dihedral and generalized quaternion $2$-groups have the character degree set $\{ 1, 2\}$ and there exist examples of each of these groups with arbitrarily large nilpotence classes. In fact, there has been some research on which sets of character degrees bound the nilpotence class of a $p$-group. (See \cite{class1}, \cite{class2}, and \cite{class3}.) We adapt the classical Taketa argument to show that the nilpotence class of a GVZ-group is bounded by the number of irreducible character degrees. If $G$ is a nilpotent group, then we write $c(G)$ for the nilpotence class of $G$. \begin{introthm}\label{taketa} If $G$ is a GVZ-group, then $c (G) \le |\cd G|$. \end{introthm} We mention that Theorem \ref{taketa} is not without content, as there exist GVZ-groups of arbitrarily high nilpotence class. Such examples are constructed in \cite{ML19gvz, AN16gvz} and will be discussed in Section~\ref{gvz section}. We also note that Professor Mann has pointed out that Golikova and Starostin had previously constructed GVZ-groups that are $2$-groups of arbitrarily high nilpotence class in \cite{GoSt}. 
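As a concrete illustration of Theorem \ref{flat = gvz} (our addition; the dihedral group of order $8$ is chosen only as a small example), the flatness condition $\mathrm{cl}_G(g) = g[g,G]$ can be verified by machine:

\begin{verbatim}
# Illustrative sketch: check cl_G(g) = g[g,G] for every g in the
# dihedral group of order 8, using SymPy's permutation groups.
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)            # dihedral group of order 8
elems = list(G.elements)

for g in elems:
    cl = {x**-1 * g * x for x in elems}               # conjugacy class of g
    comms = [g**-1 * x**-1 * g * x for x in elems]    # the set gamma_G(g)
    N = PermutationGroup(comms)                       # the subgroup [g,G]
    assert cl == {g * n for n in N.elements}          # cl_G(g) = g[g,G]
print("the dihedral group of order 8 is flat, hence a GVZ-group")
\end{verbatim}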
As a separate application, we illustrate a connection between GVZ-groups and another type of group called a ${\rm CM}_n$-group. Initially studied in \cite{BZ}, a group $G$ is called a ${\rm CM}_n$-group if every normal subgroup of $G$ appears as the kernel of at most $n$ irreducible characters of $G$. Of particular interest in this note are the ${\rm CM}_{p-1}$-groups where $p$ is a prime. Specifically, we show the following. \begin{introthm} \label{CMp-1-intro} Let $p$ be a prime and let $G$ be a $p$-group. Then $G$ is a ${\rm CM}_{p-1}$-group if and only if $G$ is a GVZ-group and every character in $\irr G$ has values in the $p^{\rm th}$ cyclotomic field. \end{introthm} A ${\rm CM}_1$-group as defined above (not to be confused with the definition given in \cite{saeidi}) is often called a ${\rm CM}$-group. As a consequence of Theorem \ref{CMp-1}, we see that a $2$-group is a ${\rm CM}$-group if and only if it is a GVZ-group and a rational group. We close this introduction by thanking Professor Mann and Professor Abdollahi for comments regarding the earlier paper ``GVZ-groups'' that has been subsumed by this current paper. \section{Conjugacy classes of GVZ-groups}\label{gvz section} As far as we can determine, GVZ-groups were initially studied in \cite{OnoType} under the guise of {\it groups of Ono type}. In that paper, it was proved that GVZ-groups are nilpotent. Motivated by a problem of Berkovich suggested in \cite{YBGPPOV1}, Nenciu introduced the definition of GVZ-groups in \cite{AN12gvz}. In \cite{AN12gvz, AN16gvz}, Nenciu studies GVZ-groups that have the additional condition where the centers of the irreducible characters form a chain of subgroups. Further properties of GVZ-groups can be found in \cite{SBMLnestedclass, ML19gvz}. A flat group is one where the equality $\mathrm{cl}_G(g)=g[g,G]$ holds for every $g\in G$. Finite flat groups were initially studied in \cite{flatness}, where a flat group is defined to be one in which every conjugacy class is a coset of some (necessarily normal) subgroup. In Theorem 4.2 of \cite{flatness}, it is proved that a (finite) flat nilpotent group is a GVZ-group. Using a different method, we prove that all flat groups are GVZ-groups. Since GVZ-groups are nilpotent, we may conclude that (finite) flat groups are in fact always nilpotent. To relate flat groups to GVZ-groups, we begin with some basic results about the subgroups $[g,G]$ and their connection to centers of characters. Our first lemma is an easy proof of the well-known fact that $[g,G]$ is normal in $G$. Recall from the Introduction that the set $\gamma_G(g)$ for $g\in G$ is defined to be the set of commutators $\{[g,x]\mid x\in G\}$. We note that $\gamma_G (g)$ need not be a normal subset of $G$. To see this, consider $\gamma_{S_4} (1,2)$ and $\gamma_{S_4} (1,3)$. This is somewhat unusual: when showing that a subgroup generated by a subset is normal, one usually shows that the subset itself is normal. \begin{lem} \label{com sub normal} Let $G$ be a group. If $g\in G$, then $[g,G]$ is normal in $G$. \end{lem} \begin{proof} Consider elements $x, y \in G$. It suffices to show that $[g,x]^y \in [g,G]$. Since $[g,xy] = [g,y] [g,x]^y$, we have $[g,x]^y = [g,y]^{-1} [g,xy]\in[g,G]$. \end{proof} We will also need the following result that yields a useful description of the set of irreducible characters containing $[g,G]$ in their kernel. \begin{lem} \label{cen cond} Let $G$ be a group. Fix an element $g \in G$ and a character $\chi \in \irr G$. Then $g \in Z(\chi)$ if and only if $[g,G] \le \ker (\chi)$. 
\end{lem} \begin{proof} This follows immediately from the definition that states: $Z(\chi)/\ker (\chi) = Z (G/\ker (\chi))$. \end{proof} We next present a lemma that is motivated by Lemma 1 of \cite{camina}, Proposition 3.1 of \cite{ChMc}, Lemma 2.1 of \cite{gencam}, and Lemma 2.1 of \cite{mlaiki}. We also refer the reader to Lemmas 2.1 and 2.2 of the first author's expository paper \cite{mycam}. We also note that (1) implies (4) is Lemma 3.1 of \cite{squaringclasses}. Also, using Theorem 2.2 of \cite{squaringclasses}, one can derive the implication that (4) implies (1) when $|G|$ is odd. \begin{lem} \label{basics} Let $M$ be a normal subgroup of $G$ and let $g \in G \setminus M$. Then the following are equivalent: \begin{enumerate}[label={\bf(\arabic*)}] \item $g$ is conjugate to every element in $gM$. \item For every element $z \in M$, there exists an element $x \in G$ so that $[g,x] = z$. \item $|C_G (g)| = |C_{G/M} (gM)|$. \item $\chi (g) = 0$ for every character $\chi \in \irr {G \mid M}$. \end{enumerate} \end{lem} \begin{proof} We first show (1) and (2) are equivalent. Notice that if $z \in M$, then $g$ and $gz$ are conjugate if and only if there exists $x \in G$ so that $g^x = gz$. However, we see that $g^x = gz$ if and only if $[g,x] = g^{-1}g^x = g^{-1}gz = z$. Hence, $g$ will be conjugate to every element in $gM$ if and only if for every element $z \in M$, there is an element $x \in G$ so that $[g,x] = z$. We next show that (1) and (3) are equivalent. Note that $$ {\rm cl}_G (g) \subseteq \bigcup_{x \in G} (gM)^x. $$ We know that $|G:C_G (g)| = |{\rm cl}_G(g)|$. On the other hand, $|\bigcup_{x \in G} (gM)^x|$ will equal the number of conjugates of $gM$ times the size of $M$. Thus, we have the equality $|\bigcup_{x \in G} (gM)^x| = |G/M:C_{G/M} (gM)| |M|$. It follows that ${\rm cl}_G(g) = \bigcup_{x \in G} (gM)^x$ if and only if $|C_G (g)| = |C_{G/M} (gM)|$. On the other hand, ${\rm cl}_G(g) = \bigcup_{x \in G} (gM)^x$ if and only if $g$ is conjugate to all elements in $gM$. This implies that (1) and (3) are equivalent. Now, we show (3) and (4) are equivalent. By the second orthogonality relation, we have $$ |C_G (g)| = \sum_{\chi \in \irr G} |\chi (g)|^2 = \sum_{\chi \in \irr {G/M}} |\chi (g)|^2 + \sum_{\chi \in \irr {G \mid M}} |\chi (g)|^2, $$ and $$ |C_{G/M} (gM)| = \sum_{\chi \in \irr {G/M}} |\chi (g)|^2. $$ This implies that we have $\sum_{\chi \in \irr {G \mid M}} |\chi (g)|^2 = 0$ if and only if the equality $|C_G (g)| = |C_{G/M} (gM)|$ holds. Since $|\chi (g)|^2$ is a nonnegative real number for all $\chi \in \irr {G \mid M}$, we conclude that $\sum_{\chi \in \irr {G \mid M}} |\chi (g)|^2 = 0$ if and only if $\chi (g) = 0$ for all $\chi \in \irr {G \mid M}$. This shows that (3) and (4) are equivalent. \end{proof} Applying Lemma~\ref{basics} to the normal subgroup $[g,G]$ for an element $g\in G$, we obtain the following corollary. We remark that this result appears to have been known in \cite{OnoType}, although no proof is given there. \begin{cor} \label{basics 1} Let $G$ be a group and fix an element $g \in G \setminus Z(G)$. Then the following are equivalent: \begin{enumerate}[label={\bf(\arabic*)}] \item $\gamma_G (g)$ is a subgroup of $G$. \item $\mathrm{cl}_G (g) = g[g,G]$. \item $\chi (g) = 0$ for all characters $\chi \in \irr {G}$ satisfying $g\notin Z(\chi)$. 
\end{enumerate} \end{cor} \begin{proof} Observe that $\mathrm{cl}_G(g)=g\gamma_G(g)\subseteq g[g,G]$, and thus $\mathrm{cl}_G(g)=g[g,G]$ if and only if $g$ is conjugate to every element of $g[g,G]$, which happens if and only if $\gamma_G(g)$ is a subgroup (i.e., coincides with $[g,G]$). By Lemma~\ref{cen cond}, the set of all irreducible characters $\chi$ satisfying $g\notin Z(\chi)$ is exactly $\irr{G\mid[g,G]}$. The result thus follows easily by Lemma~\ref{basics}. \end{proof} We may now easily deduce Theorem~\ref{flat = gvz}. \begin{proof}[Proof of Theorem~\ref{flat = gvz}] Apply the result of Corollary~\ref{basics 1} to every element $g\in G$. \end{proof} As Theorem~\ref{flat = gvz} illustrates, GVZ-groups can be characterized in terms of a condition on their conjugacy classes. We mention one more characterization of GVZ-groups in terms of conjugacy classes, although this particular characterization applies only to GVZ-groups of odd order. We thank the referee of an earlier paper entitled ``GVZ-groups'' that has been subsumed by this current paper for suggesting this next result, which follows immediately from a result of Guralnick and Navarro appearing in \cite{squaringclasses}. \begin{thm} Let $G$ be a group of odd order. Then $G$ is a GVZ-group if and only if $\mathrm{cl}_G(g)^2$ is a conjugacy class for every element $g\in G$. \end{thm} \begin{proof} Let $g\in G$. It is not difficult to see that $C_G(g)\le C_G(g^2)$. Since $G$ has odd order, $g \in \langle g^2 \rangle$, so $C_G(g) = C_G(g^2)$ and $\norm{\mathrm{cl}_G(g^2)}=\norm{\mathrm{cl}_G(g)}$. Thus the result follows immediately from \cite[Theorem 2.2]{squaringclasses}. \end{proof} \section{A Taketa analog} In this section, we give a proof of Theorem~\ref{taketa}. Before doing so, we discuss the derived length and nilpotence class of GVZ-groups. In Section 5 of \cite{AN16gvz}, Nenciu constructs GVZ $p$-groups of arbitrarily large nilpotence class with fixed exponent $p$. In Example 3 in Section 8 of \cite{ML19gvz}, Lewis gives a different construction, which illustrates the existence of GVZ-groups of arbitrarily high nilpotence class and exponent. One may verify that each of these examples has derived length two. It is unknown if there exist GVZ-groups of arbitrarily high derived length. On page 3218 of \cite{YB00}, Berkovich states (with no supporting explanation) that the derived length of a GVZ-group is probably bounded. We will also see in Section \ref{CMgroups} that it has been conjectured in the special case studied in that section that those groups are metabelian. We now show that the nilpotence class of a GVZ-group can be bounded in terms of the number $|\cd G|$ of distinct degrees of its irreducible characters. For convenience, we set some notation. We let $G_i$ denote the $i^{\rm th}$ member of the {\it lower central series}. That is, we set $G_1 = G$ and we inductively define $G_{i+1} = [G_i,G]$ for every integer $i \ge 1$. For a nilpotent group $G$, recall that $c(G)$ is the nilpotence class of $G$ --- the smallest integer for which $G_{c(G)+1}=1$. The reader should compare this proof with the proof of Taketa's theorem (see \cite[Theorem 5.12]{MI76}). \begin{proof}[Proof of Theorem~\ref{taketa}] Let $1 = d_1 < d_2 < \dotsb < d_n$ be the distinct degrees in $\cd G$. We work by induction on $\norm{G}$. If $G$ is abelian, then $c(G) = 1 = \norm{\cd G}$, and the result holds. Thus, we may assume that $G$ is nonabelian. Consider a character $\chi \in \irr G$. If $\ker(\chi) > 1$, then $\norm{G/\ker(\chi)} < \norm {G}$ and $\cd {G/\ker (\chi)} \subseteq \cd G$. 
By the inductive hypothesis, we have that $G_n \le \ker (\chi)$. Thus, if $G$ does not have a faithful irreducible character, then $G_n \le \bigcap_{\chi \in \irr G} \ker (\chi) = 1$. Therefore, we may assume that there exists a character $\chi \in \irr G$ with $\ker (\chi) = 1$. This implies that $Z (\chi) = Z (G)$. We have $d_i^2 \le \norm{G:Z(G)} = \chi(1)^2$ for every integer $i$ with $1 \le i \le n$, and so, $\chi (1) = d_n$. Notice that if $a \in \cd {G/Z(G)}$, then $a^2 < \norm {G:Z(G)} = d_n^2$. It follows that $\norm{\cd {G/Z(G)}} \le n-1$. By the inductive hypothesis, this implies $G_{n-1}\le Z(G)$. We conclude that $G_n=1$, as desired. \end{proof} \section{${\rm CM}_n$-groups and GVZ-groups}\label{CMgroups} As we stated in the Introduction, ${\rm CM}_n$-groups were defined in \cite{BZ}. So far as we can tell, ${\rm CM}$-groups were initially studied in \cite{zhmud}. The reader may also want to consult \cite{zhmud1} for further results regarding these groups. We begin with a lemma that gives a lower bound on the number of faithful characters of a group. \begin{lem} \label{Euler} Let $G$ be a $p$-group and suppose that $Z(G)$ is cyclic. Then the number of faithful characters in $\irr G$ is at least $\phi (|Z(G)|)$, where $\phi$ is the Euler $\phi$-function. \end{lem} \begin{proof} For each faithful character $\lambda \in \irr {Z(G)}$, the induced character $\lambda^G$ has at least one irreducible constituent $\chi$. Note that $\ker(\chi) \cap Z(G) = \ker \lambda = 1$, and so $\ker(\chi) = 1$ (a nontrivial normal subgroup of a $p$-group intersects the center nontrivially). Hence, $\chi$ is a faithful character. On the other hand, any character in $\irr G$ will have a character of $Z(G)$ as its unique irreducible constituent when restricted to $Z(G)$. This implies that the number of faithful characters of $G$ is at least the number of faithful irreducible characters of $Z(G)$. Since $Z(G)$ is cyclic, the number of faithful irreducible characters of $Z(G)$ equals $\phi (|Z(G)|)$. \end{proof} Recall that $G$ is a ${\rm CM}_n$-group if for every normal subgroup $N$, there are at most $n$ characters in $\irr G$ that have $N$ as their kernel. Note that for every group $G$, there is a minimal positive integer $n$ so that $G$ is a ${\rm CM}_n$-group. \begin{lem} \label{p-1} If $p$ is a prime, $G$ is a $p$-group, and $G$ is a ${\rm CM}_n$-group, then $n \ge p-1$. \end{lem} \begin{proof} If $N = \ker(\chi)$ for some $\chi \in \irr G$, then $Z(\chi)/N = Z(G/N)$ is cyclic. By Lemma \ref{Euler}, we know that $G/N$ has at least $\phi (|Z(\chi)/N|)$ faithful irreducible characters. Since $Z(\chi)/N$ is a $p$-group, we know $\phi (|Z(\chi)/N|) \ge p-1$. \end{proof} If $p$ is a prime, then we write $\mathbb{Q}_p$ for the field obtained by adjoining a $p$th root of unity to the rationals. We now characterize the $p$-groups that are ${\rm CM}_{p-1}$-groups in the following theorem which includes Theorem \ref{CMp-1-intro} from the Introduction. We note that the equivalence of (1) and (2) is essentially proved in Theorem 9.3.16 of \cite{BZ}, but our proof seems to be considerably shorter. \begin{thm} \label{CMp-1} Let $p$ be a prime and let $G$ be a $p$-group. Then the following are equivalent: \begin{enumerate}[label={\normalfont\bf(\arabic*)}] \item $G$ is a ${\rm CM}_{p-1}$-group. \item $G$ is a GVZ-group and $|Z(\chi)/\ker(\chi)| = p$ for all $1_G \ne \chi \in \irr G$. \item $G$ is a GVZ-group and every character in $\irr G$ has values in $\mathbb{Q}_p$. \end{enumerate} \begin{proof} Suppose that $G$ is a ${\rm CM}_{p-1}$-group. Consider a character $1_G \ne \chi \in \irr G$. 
By Lemma \ref{Euler}, we know that $G/\ker(\chi)$ has at least $\phi (|Z(\chi)/\ker(\chi)|)$ faithful irreducible characters. As we saw in Lemma \ref{p-1}, we have $\phi (|Z(\chi)/\ker(\chi)|) \ge p-1$. Since $G/\ker (\chi)$ has at most $p-1$ faithful irreducible characters, we deduce that $\phi (|Z(\chi)/\ker(\chi)|) \le p-1$, and thus, $\phi (|Z(\chi)/\ker(\chi)|) = p-1$. It is well-known that $\phi (|Z(\chi)/\ker(\chi)|) = p-1$ if and only if $|Z(\chi)/\ker(\chi)| = p$. We now have that $Z (\chi)/\ker(\chi)$ has $p-1$ nonprincipal irreducible characters. Since $G/\ker(\chi)$ has at most $p-1$ faithful irreducible characters, we conclude that $\gamma^G$ has a unique irreducible constituent for each nonprincipal character $\gamma \in \irr {Z(\chi)/\ker(\chi)}$. Observe that $\chi$ is a constituent of $\gamma^G$ for such a character $\gamma$, and it is not difficult to see that this implies that $\chi$ vanishes on $G \setminus Z(\chi)$. Since $\chi$ was arbitrary, this implies that $G$ is a GVZ-group and $|Z(\chi)/\ker(\chi)| = p$ for all $1_G \ne \chi \in \irr G$. Now, suppose that $G$ is a GVZ-group and $|Z(\chi)/\ker(\chi)| = p$ for all $1_G \ne \chi \in \irr G$. Consider a character $\chi \in \irr G$. We see that all the nonzero values of $\chi$ are taken on elements of $Z (\chi)$ and since $|Z(\chi)/\ker(\chi)| = p$, it follows that all the values of $\chi$ lie in $\mathbb{Q}_p$. Finally, suppose that $G$ is a GVZ-group and every character in $\irr G$ has values in $\mathbb{Q}_p$. Let $N$ be a normal subgroup of $G$. If no irreducible character of $G$ has $N$ as its kernel, then the result is true with respect to $N$. Thus, we may assume that there exists $\chi \in \irr G$ so that $\ker(\chi) = N$. Observe that $Z(\chi)/N = Z(G/N)$ is cyclic. Since $\chi$ has values in $\mathbb{Q}_p$, it follows that $Z(\chi)/N$ must have exponent $p$, and so, $Z(\chi)/N$ has order $p$. Now, every irreducible character in $\irr G$ that has $N$ as its kernel will have a unique character in $\irr {Z(\chi)/N}$ as an irreducible constituent when restricted to $Z(\chi)$. Since $G$ is a GVZ-group, we see that each irreducible character of $Z(\chi)/N$ has a unique irreducible constituent upon induction to $G$. Because $Z(\chi)/N$ has $p-1$ nonprincipal irreducible characters, this shows that there exist at most $p-1$ irreducible characters of $G$ that have $N$ as their kernel and this proves the result. \end{proof} It is easy to find examples to see that if $n > 1$, then the direct product of two ${\rm CM}_n$-groups need not be a ${\rm CM}_n$-group. However, when $n = p-1$ and the groups are $p$-groups, the story is different. \begin{cor} Let $p$ be a prime and suppose that $H$ and $K$ are ${\rm CM}_{p-1}$-groups that are $p$-groups. Then $H \times K$ is a ${\rm CM}_{p-1}$-group. \end{cor} \begin{proof} Observe that, since $H$ and $K$ are GVZ-groups, $H \times K$ is a GVZ-group. Also, since all characters in $\irr H$ and $\irr K$ have values in $\mathbb{Q}_p$, it follows that all characters in $\irr {H \times K}$ have values in $\mathbb{Q}_p$. Applying Theorem \ref{CMp-1}, we see that $H \times K$ is a $\mathrm{CM}_{p-1}$-group. \end{proof} We say $G$ is a ${\rm CM}$-group if $G$ is a ${\rm CM}_1$-group. (We note that \cite{saeidi} uses a somewhat different definition for ${\rm CM}_1$-groups.) Following the usual convention in the literature, we say $G$ is a {\it rational} group if all of the irreducible characters of $G$ are rational. 
It is not difficult to see that this is equivalent to the condition that every element $g \in G$ is conjugate to $g^r$ for every integer $r$ satisfying $(r, \norm{G}) = 1$. We note that on page 251 of \cite{BZ} it is mentioned that all ${\rm CM}$-groups are rational. Also, in Lemma 1.2 of \cite{saeidi}, it is proved that ${\rm CM}$-groups that are $2$-groups are rational. Several of the other results in \cite{saeidi} also suggest that such a group will be a GVZ-group, which turns out to be the case. \begin{cor}\label{CM} Let $G$ be a $2$-group. Then $G$ is a ${\rm CM}$-group if and only if $G$ is a GVZ-group and a rational group. \end{cor} Appealing to Theorem~\ref{flat = gvz}, Corollary~\ref{CM} can be considered a group-theoretic characterization of those $2$-groups that are ${\rm CM}$-groups. In a private communication, Professor Mann has indicated that Zhmud has conjectured that ${\rm CM}$-groups are metabelian. This is consistent with Problem 1 of \cite{YBGPPOV1}, which is credited to Zhmud. Furthermore, in \cite{zhmud}, it is shown that ${\rm CM}$-groups are exactly the groups where any two elements with the same normal closure are conjugate. Problem 12.15 of the Kourovka Notebook \cite{Kourovka} asks if such groups are necessarily metabelian. In light of Corollary~\ref{CM}, we see that showing that these groups are metabelian would provide some evidence that GVZ-groups have bounded derived length (or perhaps are metabelian). We would like to thank Professor Abdollahi for pointing out the connection between these groups and the problem in the Kourovka Notebook. We close by remarking that in \cite{saeidi} it is proved that these groups are metabelian under the additional strong hypothesis that the group has at most five distinct irreducible character degrees. We note that there is a misstatement in the review of \cite{saeidi} in Math Reviews, where it is stated that the author had proved that all ${\rm CM}$-groups are metabelian (see \cite{rev}).
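As a computational postscript (our addition), the power-map criterion for rationality mentioned above, combined with the flatness test given after the Introduction, makes Corollary~\ref{CM} checkable by machine on small examples. The following sketch confirms that the dihedral group of order $8$ is rational; being flat as well, it is therefore a ${\rm CM}$-group by Corollary~\ref{CM}.

\begin{verbatim}
# Illustrative sketch: g is conjugate to g^r for all r coprime to |G|.
from math import gcd
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)            # dihedral group of order 8
elems = list(G.elements)
n = G.order()

def conjugate(a, b):            # are a and b conjugate in G?
    return any(x**-1 * a * x == b for x in elems)

assert all(conjugate(g, g**r) for g in elems
           for r in range(1, n) if gcd(r, n) == 1)
print("the dihedral group of order 8 is rational")
\end{verbatim}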
https://arxiv.org/abs/1407.1240
An elementary proof of linear programming optimality conditions without using Farkas' lemma
Although it is easy to prove the sufficient conditions for optimality of a linear program, the necessary conditions pose a pedagogical challenge. A widespread practice in deriving the necessary conditions is to invoke Farkas' lemma, but proofs of Farkas' lemma typically involve "nonlinear" topics such as separating hyperplanes between disjoint convex sets, or else more advanced LP-related material such as duality and anti-cycling strategies in the simplex method. An alternative approach taken previously by several authors is to avoid Farkas' lemma through a direct proof of the necessary conditions. In that spirit, this paper presents what we believe to be an "elementary" proof of the necessary conditions that does not rely on Farkas' lemma and is independent of the simplex method, relying only on linear algebra and a perturbation technique published in 1952 by Charnes. No claim is made that the results are new, but we hope that the proofs may be useful for those who teach linear programming.
\section{Introduction} In many contexts, particularly in business and economics, linear programming (LP) is taught as a self-contained subject, including proofs of necessary and sufficient conditions for optimality. The proof of sufficient conditions is straightforward, but, as explained below, the necessary conditions are seen by many as pedagogically challenging. Broadly speaking, these conditions are proved in two different ways. The first invokes Farkas' lemma,\footnote{The {\sl Chicago Manual of Style Online}, {\tt www.chicagomanualofstyle.org}, prefers repeating the ``s'' after the apostrophe in indicating possession by a word ending in ``s'', but states that it is also correct to omit the post-apostrophe ``s''. We have chosen the second option.} which can be stated in a surprisingly large number of forms (for example, \cite[pages 89--93]{Schrijver}) and which itself must be proved. According to the history sketched in \cite{Broyden}, the initial statement of Farkas' lemma was published in 1894, with its best-known exposition appearing in 1902. Despite the passage of more than a century since its correctness was established, different proofs of the lemma have continued to be devised, based on an array of motivations categorized in \cite{Broyden} as geometric, algebraic, and/or algorithmic. A widely used means of proving Farkas' lemma relies on separating hyperplane theorems (for example, \cite[pages 205--207]{Fletcher}, \cite[pages 297--301]{GMW}, \cite[pages 170--172]{BT}), but there can be some discomfort in bringing these more advanced topics into an LP course whose audience is familiar only with basic linear algebra. Even so, this approach is especially convenient when optimality conditions for a range of increasingly complicated constrained optimization problems are to be presented (for example, \cite[pages 326--329]{NocWright}). Those who wish to keep Farkas' lemma but prefer to avoid separating hyperplanes can choose instead from a non-trivial number of ``elementary'' proofs of the lemma. Some of these, such as \cite{Dax, Svanberg}, involve properties of linear least-squares problems. An algebraic proof related to orthogonal matrices is given in \cite{Broyden} (see also \cite{RoosTerlaky}), and \cite{Bartl07} features a linear-algebraic approach to proving Farkas' lemma and other theorems of the alternative. A second strategy is to prove the necessary conditions for LP optimality without explicitly calling on Farkas' lemma. This can be done in a variety of ways, for example using LP-related results such as duality (see \cite[page 165]{BT}, \cite[pages 112--113]{FMW}) or finite termination of the simplex method with a guaranteed anticycling strategy \cite[page 86]{Schrijver}. A recent proof of LP optimality conditions \cite{Forsgren} is independent of the simplex method, relying instead on linear algebra and a perturbation technique introduced by \cite{Charnes} in the context of resolving degeneracy. This paper, which falls into the second group, presents proofs of optimality conditions for linear programs expressed in a generic form that includes both equalities and inequalities; see (\ref{eqn-togetherform}). The problem form may seem inconsequential, especially since all known LP problem forms can be mechanically transformed into one another. But form affects substance to a perhaps surprising extent, and can have a major effect on how students and practitioners think about linear programs and algorithms for solving them. 
Linear programs are (probably) most frequently expressed in textbooks using one of several variations on ``standard form'', of which the following is typical: \begin{equation} \label{eqn-stdform} \minimize{x\inI\!\!R^n}\;\; c^T\! x \quad\hbox{subject to}\quad Ax = b \quad\hbox{and}\quad x\ge 0, \end{equation} where $A$ is $m\times n$ with $m\le n$, $b\inI\!\!R^m$, $c\inI\!\!R^n$, and $A$ is assumed to have rank $m$. Two key features of this version of standard form are that the ``general'' constraints involving $A$ are all equalities, and that the only inequalities are simple lower bounds on the variables. Standard form is very closely tied to the simplex method, which is described in many papers and books (for example, the 1963 classic \cite{Dantzig}, \cite{Chvatal} and \cite{Vanderbei}) and which was, for almost 40 years, essentially the only method for solving linear programs. However, since the 1984 ``interior-point revolution'' in optimization (for example, \cite{MHWint,NemTodd}), a thorough treatment of linear programming requires presentation of interior-point methods. These are easier to motivate with {\sl all-inequality form}, which resembles a generic form for constrained optimization: \begin{equation} \label{eqn-allineqform} \minimize{x\inI\!\!R^n}\;\; c^T\! x \quad\hbox{subject to}\;\; Ax \ge b, \quad\hbox{where $A$ is $m\times n$}. \end{equation} A linear program in standard form (\ref{eqn-stdform}) may be transformed (by reformulating the constraints and/or adding variables) into an equivalent linear program in all-inequality form (\ref{eqn-allineqform}), and vice versa. This paper considers a generic mixed form in which equality and inequality constraints are denoted separately: \begin{equation} \label{eqn-togetherform} \minimize{x\inI\!\!R^n}\;\; c^T\! x \quad\hbox{subject to}\;\; A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr} \;\;\hbox{and}\;\; A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}, \end{equation} where $A_{\scriptstyle\Escr}$ is $m_{\scriptstyle\Escr}\times n$ with $\mathop{\hbox{\rm rank}}(A_{\scriptstyle\Escr}) = m_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr}$ is $m_{\scriptstyle\Iscr}\times n$. This form corresponds to all-inequality form when $m_{\scriptstyle\Escr} = 0$, i.e., when $A_{\scriptstyle\Escr}$ is empty, and to standard form when $A_{\scriptstyle\Iscr} = I_n$ (the $n$-dimensional identity) and $b_{\scriptstyle\Iscr}=0$. This means that results on standard form as well as all-inequality form are immediately available from our results. We define the combined matrix $A$ and vector $b$, each with $m = m_{\scriptstyle\Escr} + m_{\scriptstyle\Iscr}$ rows, as \begin{equation} \label{eqn-abfulldef} A = \mtx{c}{A_{\scriptstyle\Escr}\\ A_{\scriptstyle\Iscr}} \quad\hbox{and}\quad b = \mtx{c}{b_{\scriptstyle\Escr}\\ b_{\scriptstyle\Iscr}}, \end{equation} where the index sets ${{\cal E}}$ and ${{\cal I}}$ are ${{\cal E}} = \{1,2,\dots,m_{\scriptstyle\Escr}\}$ and ${{\cal I}} = \{m_{\scriptstyle\Escr} + 1, \dots, m_{\scriptstyle\Escr} + m_{\scriptstyle\Iscr}\}$. Sections 2--4 contain a summary of background results that would be part of any course on linear programming; they are included to make the paper self-contained. The results in Sections 5--8 are not new in substance, but may be unfamiliar in form. In any case we hope that they might provide a useful option for proving LP optimality. For completeness, Section~\ref{sec-farkas} states and proves Farkas' lemma using the results in this paper; Section~\ref{sec-summary} summarizes the logical flow of results.
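As a computational aside (our addition; the data are made up for illustration), the mixed form (\ref{eqn-togetherform}) maps directly onto modern LP software. For instance, SciPy's {\tt linprog} expects inequalities written as $A_{ub}\, x \le b_{ub}$, so the constraints $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$ are passed with their signs reversed:

\begin{verbatim}
# Illustrative sketch: pose the mixed equality/inequality form for
# scipy.optimize.linprog, which uses the convention A_ub x <= b_ub.
import numpy as np
from scipy.optimize import linprog

# minimize x1 + 2*x2  subject to  x1 + x2 = 1  and  x >= 0
c  = np.array([1.0, 2.0])
AE = np.array([[1.0, 1.0]]); bE = np.array([1.0])  # A_E x  = b_E
AI = np.eye(2);              bI = np.zeros(2)      # A_I x >= b_I

res = linprog(c, A_ub=-AI, b_ub=-bI, A_eq=AE, b_eq=bE,
              bounds=[(None, None)] * 2, method="highs")
print(res.x)   # the optimal vertex (1, 0), with objective value 1
\end{verbatim}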
\section{Notation, definitions, and background results} It is assumed that $c\ne 0$ and $A \ne 0$. The $i$th row of $A$ (\ref{eqn-abfulldef}) is denoted by $a_i^T$ and the $i$th component of $b$ by $b_i$. The problem constraints are said to be {\sl consistent\/} or {\sl feasible\/} if there exists at least one $\skew{2.8}\widehat x$ such that $A_{\scriptstyle\Escr} \skew{2.8}\widehat x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr}\skew{2.8}\widehat x \ge b_{\scriptstyle\Iscr}$, and an $\skew{2.8}\widehat x$ that satisfies the constraints is called a {\sl feasible point}. An immediate result, noted explicitly for completeness, is that linearity of the constraints means that every point on the line joining two distinct feasible points $\skew{2.8}\widehat x$ and $\skew{2.8}\bar x$ is also feasible. Optimality subject to constraints is inherently a relative condition, involving comparison of objective function values at a possible optimal point $\skew{2.8}\widehat x$ with those at other feasible points. In such a comparison, {\sl active\/} constraints play a crucial role. The $i$th constraint is said to be {\sl active\/} at a feasible point $\skew{2.8}\widehat x$ if $a_i^T \skew{2.8}\widehat x = b_i$. At a feasible point, the equality constraints $A_{\scriptstyle\Escr}\skew{2.8}\widehat x = b_{\scriptstyle\Escr}$ must be active, but an inequality constraint may be active or inactive (strictly satisfied). The set of indices of inequality constraints active at a feasible point $\skew{2.8}\widehat x$ is denoted by $\overset{\;\,=}{{\rule{0pt}{1.2ex}}\smash{\Iscr}}(\skew{2.8}\widehat x)$, and $\overset{\;\,>}{\vphantom{a}\smash{\Iscr}}(\skew{2.8}\widehat x)$ denotes the set of indices of the inactive inequality constraints at $\skew{2.8}\widehat x$. We use ${\mathcal A}(\skew{2.8}\widehat x)$ to denote the set of indices of active constraints, which means that ${\mathcal A}(\skew{2.8}\widehat x) = {\cal E} \cup \overset{\;\,=}{{\rule{0pt}{1.2ex}}\smash{\Iscr}}(\skew{2.8}\widehat x)$. Let $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(\skew{2.8}\widehat x)$ denote the matrix of rows of $A_{\scriptstyle\Iscr}$ corresponding to active inequality constraints, and similarly for $\overset{\,=}{\vphantom{a}\smash{b}}_{\scriptstyle\Iscr}(\skew{2.8}\widehat x)$, so that, by definition, $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(\skew{2.8}\widehat x) \skew{2.8}\widehat x = \overset{\,=}{\vphantom{a}\smash{b}}_{\scriptstyle\Iscr}(\skew{2.8}\widehat x)$. The {\sl active-constraint matrix\/} $\overset{\;\,=}{\vphantom{a}\smash{A}}(\skew{2.8}\widehat x)$ then consists of $A_{\scriptstyle\Escr}$ and $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(\skew{2.8}\widehat x)$: \begin{equation} \label{eqn-actmatdef} \overset{\;\,=}{\vphantom{a}\smash{A}}(\skew{2.8}\widehat x) = \mtx{c}{A_{\scriptstyle\Escr}\\ \overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(\skew{2.8}\widehat x)}. \end{equation} The following definition, in which positivity of $\alpha^i$ is crucial, allows us to characterize feasible directions at a feasible point. \begin{definition}[Feasible direction.] 
\label{def-feasdir} The $n$-vector $p$ is a {\sl feasible direction\/} for the constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$ at the feasible point $\skew{2.8}\widehat x$ if $p\ne 0$ and there exists $\alpha^i > 0$ such that $\skew{2.8}\widehat x + \alpha p$ is feasible for $0 < \alpha\le \alpha^i$, where $\alpha^i$ may be infinite. \end{definition} By linearity, the value of the $i$th constraint when moving from a feasible point $\skew{2.8}\widehat x$ to $\skew{2.8}\widehat x + \alpha p$, where $p\ne 0$ and $\alpha > 0$, is given by \begin{equation} \label{eqn-xhatpert} a_i^T (\skew{2.8}\widehat x + \alpha p) = a_i^T \skew{2.8}\widehat x + \alpha a_i^T p. \end{equation} Relation (\ref{eqn-xhatpert}) shows that, to maintain feasibility with respect to an equality constraint $i$, $p$ must satisfy $a_i^T p = 0$. When $i$ is an {\sl inactive\/} inequality constraint, (\ref{eqn-xhatpert}) implies that, even if $a_i^T p < 0$, constraint $i$ will remain satisfied at $\skew{2.8}\widehat x + \alpha p$ if $\alpha> 0$ is sufficiently small. But if $i$ is an active inequality constraint, so that $a_i^T \skew{2.8}\widehat x = b_i$, it follows from (\ref{eqn-xhatpert}) that $\skew{2.8}\widehat x + \alpha p$ will be feasible with respect to constraint $i$ for $\alpha > 0$ only if $a_i^T p \ge 0$. Although inactive inequality constraints do not have an immediate local effect on feasibility, they may limit the size of the step that can be taken along a feasible direction $p$. \begin{definition}[The maximum feasible step.] \label{def-maxfeasible} Given a feasible point $\skew{2.8}\widehat x$ and a feasible direction $p$, let ${\mathcal D}(\skew{2.8}\widehat x, p)$ (for ``decreasing'') denote the set of indices of inequality constraints that are inactive at $\skew{2.8}\widehat x$ for which $a_i^T p < 0$: $$ {\mathcal D}(\skew{2.8}\widehat x,p) \buildrel\triangle\over= \{ i \mid i\in\overset{\;\,>}{\vphantom{a}\smash{\Iscr}}(\skew{2.8}\widehat x)\;\;\hbox{and}\;\; a_i^T p < 0\}. $$ If ${\mathcal D}(\skew{2.8}\widehat x,p) \ne \emptyset$, the positive scalar $\sigma^i$ (the step to constraint $i$ along $p$) is defined as \begin{equation} \label{eqn-steptoi} \sigma^i \buildrel\triangle\over= \frac{b_i-a_i^T \skew{2.8}\widehat x}{a_i^T p} \quad\hbox{for $i\in\overset{\;\,>}{\vphantom{a}\smash{\Iscr}}(\skew{2.8}\widehat x)$ and $a_i^T p < 0$.} \end{equation} The {\sl maximum feasible step}, denoted by $\widehat\sigma$, is the smallest such step, $\widehat\sigma \buildrel\triangle\over=\min_{i\in{\mathcal D}(\skew{2.8}\widehat x,p)} \sigma^i$. Any inequality constraint $i$ for which $\sigma^i = \widehat\sigma$, of which there may be more than one, becomes active at $\skew{2.8}\widehat x + \widehat\sigma p$. If ${\mathcal D}(\skew{2.8}\widehat x,p) = \emptyset$, $\widehat\sigma$ is taken as $+\infty$. \end{definition} In addition to feasibility, optimality conditions need to ensure that the objective function is as small as possible. \begin{definition}[Descent direction.] The vector $p$ is a {\sl descent direction} for the objective function $c^T x$ if $c^T p < 0$. \end{definition} \begin{definition}[Feasible descent direction.]\label{def-feasdesc} The direction $p$ is a feasible descent direction at a feasible point $\skew{2.8}\widehat x$ if $A_{\scriptstyle\Escr} p = 0$, $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(\skew{2.8}\widehat x) p \ge 0$, and $c^T p < 0$. 
No feasible descent direction exists at $\skew{2.8}\widehat x$ if there are no feasible directions or if $c^T p \ge 0$ for all feasible directions $p$. \end{definition} Obvious optimality conditions can now be stated in terms of existence or non-existence of a feasible descent direction. \begin{lemma}[Necessary and sufficient optimality conditions---Version I.] \label{lem-necsuff-one} $ $\newline When minimizing $c^T x$ subject to $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$, the feasible point $x\superstar$ is optimal if and only if no feasible descent direction exists at $x\superstar$. \end{lemma} \begin{proof} The ``only if'' result follows because existence of a feasible descent direction $p$ at a feasible point $x\superstar$ implies that there is a positive $\alpha$ such that $x\superstar + \alpha p$ is feasible and $c^T (x\superstar + \alpha p) = c^Tx\superstar + \alpha c^T p < c^T x\superstar$. Hence, $x\superstar$ cannot be optimal. To show the ``if'' result, assume that $x\superstar$ is feasible but not optimal. Then there is a feasible point $\skew3\widetilde x$ such that $c^T \skew3\widetilde x < c^T x\superstar$. Since $\skew3\widetilde x$ is feasible, we must have $A_{\scriptstyle\Escr} \skew3\widetilde x = b_{\scriptstyle\Escr}$ and $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(x\superstar) \skew3\widetilde x \ge \overset{\,=}{\vphantom{a}\smash{b}}_{\!\scriptstyle\Iscr}(x\superstar)$. In addition, it holds that $A_{\scriptstyle\Escr} x\superstar = b_{\scriptstyle\Escr}$ and $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(x\superstar) x\superstar = \overset{\,=}{\vphantom{a}\smash{b}}_{\!\scriptstyle\Iscr}(x\superstar)$. Hence, $c^T (\skew3\widetilde x-x\superstar)<0$, $A_{\scriptstyle\Escr} (\skew3\widetilde x -x\superstar) = 0$ and $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\!\scriptstyle\Iscr}(x\superstar) (\skew3\widetilde x -x\superstar) \ge 0$, so that $\skew3\widetilde x-x\superstar$ is a feasible descent direction by Definition~\ref{def-feasdesc}. \end{proof} Although Lemma~\ref{lem-necsuff-one} gives necessary and sufficient conditions for LP optimality, its usefulness is limited because it offers no way to verify these conditions. This is the point in teaching LP where Farkas' lemma usually enters the picture, but we now take a different route to the necessary and sufficient conditions for LP optimality. \section{Multipliers and optimality} An important feature of constrained optimization problems is the implicit existence of quantities that do not appear in the problem statement yet play a crucial role in optimality conditions. These quantities consist of $m$ (Lagrange) {\sl multipliers}, or {\sl dual variables}, one for each constraint, that connect the objective and the constraints. The next result shows that existence of a multiplier with certain properties produces a lower bound on the objective value in the feasible region. \begin{proposition}[Lower bound on LP objective.] \label{prop-lowerbound} Assume that $x\inI\!\!R^n$ satisfies $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$. Further assume that there exists a multiplier $\lambda\inI\!\!R^m$ such that $A^T\lambda = c$ and $\lambda_{\scriptstyle\Iscr}\ge 0$, where $\lambda_{\scriptstyle\Iscr}$ denotes the $m_{\scriptstyle\Iscr}$-vector of components of $\lambda$ corresponding to inequality constraints. 
(No sign restrictions apply to the multipliers for equality constraints.) Then $c^T x - \lambda^T b = \lambda^T (Ax-b) \ge 0$. \end{proposition} \begin{proof} Let $x$ be feasible. The assumed existence of $\lambda$ means that we can substitute $A^T\lambda$ for $c$ and use the facts that $\lambda_{\scriptstyle\Iscr} \ge 0$, $A_{\scriptstyle\Escr} x - b_{\scriptstyle\Escr} = 0$, and $A_{\scriptstyle\Iscr} x-b_{\scriptstyle\Iscr}\ge 0$. We then have $$ c^T x - \lambda^T b = \lambda^T (A x - b) = \lambda_{\scriptstyle{\Escr}}^T(A_{\scriptstyle\Escr} x - b_{\scriptstyle\Escr}) + \lambda_{\scriptstyle{\Iscr}}^T(A_{\scriptstyle\Iscr} x - b_{\scriptstyle\Iscr}) \ge 0. $$ It follows that $c^T x$ is bounded below by $\lambda^T b$ for every feasible $x$. \end{proof} For a general linear program, a qualifying $\lambda$ may not exist. Furthermore, there can be more than one vector $\lambda$ satisfying the given conditions, each producing a different value of $\lambda^T b$. However, something special happens if $\lambda_{\scriptstyle{\Iscr}}^T(A_{\scriptstyle\Iscr} x-b_{\scriptstyle\Iscr}) = 0$, allowing us to state {\sl sufficient\/} conditions for LP optimality. \begin{proposition}[Sufficient conditions for LP optimality.] \label{prop-suffone} Consider the linear program of minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$. The feasible point $x\superstar$ is optimal if a multiplier $\lambda\superstar\inI\!\!R^m$ exists with the following three properties: (i) $A^T\lambda\superstar = c$, (ii) $\lambda\subiscr^{{\raise 0.5pt\hbox{$\nthinsp *$}}}\ge 0$, and (iii) $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(Ax\superstar-b) = 0$. The optimal objective value is $c^T x\superstar = \lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} b$. \end{proposition} \begin{proof} Since $\lambda\superstar$ satisfies (i) and (ii), Proposition~\ref{prop-lowerbound} gives the lower bound $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} b$ on the optimal value of the linear program. However, (iii) and Proposition~\ref{prop-lowerbound} show that the lower bound is attained for $x\superstar$, so that $x\superstar$ is optimal. \end{proof} The crucial relationship $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(Ax\superstar-b) = 0$, which means that, for every $i=1,\dots, m$, at least one of $\{\lambda_i^{{\raise 0.5pt\hbox{$\nthinsp *$}}}, a_i^Tx\superstar - b_i\}$ must be zero, is called {\sl complementarity}. The following result shows that the complementarity condition does not directly tie the multiplier $\lambda\superstar$ to a particular $x\superstar$. Rather, if complementarity holds for one $x\superstar$, it must hold for $\lambda\superstar$ together with any optimal solution $\skew{2.8}\widehat x$. \begin{proposition}[Properties of an optimal LP solution.] \label{prop-specialprops} Consider minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$. Assume that a multiplier $\lambda\superstar$ exists such that $A^T\lambda\superstar = c$, $\lambda\subiscr^{{\raise 0.5pt\hbox{$\nthinsp *$}}} \ge 0$, and assume that the optimal objective value is $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} b$. Then a feasible point $\skew{2.8}\widehat x$ is optimal if and only if $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(A\skew{2.8}\widehat x -b) = 0$. 
\end{proposition} \begin{proof} For any feasible $x$, Proposition~\ref{prop-lowerbound} gives $c^T x - \lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} b = \lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} (A x - b) \ge 0$. Hence, under the assumption that the optimal value is $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} b$, a point $\skew{2.8}\widehat x$ is optimal if and only if $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(A\skew{2.8}\widehat x - b) = 0$. \end{proof} \section{Vertices and their properties} Certain feasible points, known as {\sl vertices}, are extremely important in linear programming. \begin{definition}[Vertex.] \label{def-vertex} Given the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$, the point $\skew{2.8}\widehat x$ is a vertex if $A_{\scriptstyle\Escr} \skew{2.8}\widehat x = b_{\scriptstyle\Escr}$, $A_{\scriptstyle\Iscr} \skew{2.8}\widehat x \ge b_{\scriptstyle\Iscr}$, and the active-constraint matrix $\overset{\;\,=}{\vphantom{a}\smash{A}}(\skew{2.8}\widehat x)$ of (\ref{eqn-actmatdef}) has rank $n$. \end{definition} An immediate consequence is that there cannot be a vertex when the rank of the full constraint matrix $A$ (\ref{eqn-abfulldef}) is less than $n$. A nice feature of standard-form linear programs (\ref{eqn-stdform}) is that, when the constraints are consistent, a vertex must exist because the inequality constraints consist of the $n$-dimensional identity. This is not true in general for all-inequality form (\ref{eqn-allineqform}), even when the objective function is bounded below in the feasible region; consider, for example, minimizing $x_1 + x_2$ subject to $x_1 + x_2 \ge 1$. A vertex $\skew{2.8}\widehat x$ is the unique solution of the linear system formed by any $n$ linearly independent rows of $\overset{\;\,=}{\vphantom{a}\smash{A}}(\skew{2.8}\widehat x)$ (which has rank $n$ by definition) and the corresponding components of $b$. A fundamental result is that the definitions of vertex and extreme point are equivalent, where an extreme point is a feasible point that does not lie on the line segment joining two distinct feasible points. (See, for example, Section 2.2 of \cite{BT} for a detailed treatment of related topics.) A simple combinatorial argument shows that the number of vertices is bounded above by $\tbinom{m}{n}$. Given the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$, the set of vertices ${\mathcal V}(A,b)$ can be found by enumerating every combination of $n$ constraints that includes all of the equality constraints, solving for the point at which the chosen constraints hold with equality, and testing that point for feasibility. (Of course, this procedure is not practical for large $m$ and $n$.) There are two kinds of vertices. At a {\sl nondegenerate vertex\/} $\skew{2.8}\widehat x$, exactly $n$ constraints are active and the active-constraint matrix $\overset{\;\,=}{\vphantom{a}\smash{A}}(\skew{2.8}\widehat x)$ is nonsingular. At a {\sl degenerate vertex\/} $\skew{2.8}\widehat x$, there are $n$ linearly independent active constraints, but more than $n$ constraints are active. The next two small results are stated formally for later reference. \begin{result}\label{res-inactindep} Let $F$ be a $q\times n$ nonzero matrix with $\mathop{\hbox{\rm rank}}(F) < n$ whose $i$th row is $f_i^T$. Assume that $p$ is a nonzero $n$-vector such that $Fp=0$. If $g$ is a vector such that $g^T p \ne 0$, then $g^T$ is linearly independent of the rows of $F$, i.e. 
$$ \mathop{\hbox{\rm rank}}\shortmtx{c}{F\\ g^T} = \mathop{\hbox{\rm rank}}(F) + 1. $$ \end{result} \begin{proof} If $g^T$ were a linear combination of the rows of $F$, then $g^T = y^T F$ for some vector $y$. Since $Fp=0$, substituting $y^T F$ for $g^T$ would give $g^T p = y^T F p = 0$, contradicting our assumption that $g^T p \ne 0$. \end{proof} Note that the implication in Result~\ref{res-inactindep} does not go the other way: if $Fp = 0$, $p\ne 0$, and $g^T p = 0$, then $g^T$ can nonetheless be linearly independent of the rows of $F$. This can be seen by example: $$ F = \mtx{rrrr}{1 & 1 & 0 & 0\\ 0 & 1 & 0 & -1}, \quad g^T = \mtx{cccc}{0 & 0 & 1 & 0}, \quad\hbox{and}\quad p^T = \mtx{cccc}{-1 & 1 & 0 & 1}. $$ \begin{result}\label{res-mustbeonea} Let $D$ be an $m\times n$ matrix with $\mathop{\hbox{\rm rank}}(D) = n$ whose $i$th row is $d_i^T$. Let $\widetilde D$ denote a subset of rows of $D$ such that $\mathop{\hbox{\rm rank}}(\widetilde D) = r < n$, and assume that $p$ is a nonzero vector such that $\widetilde D p = 0$. Then there is at least one row $d_j^T$ of $D$ that is not included in $\widetilde D$ such that (i) $d_j^T p \ne 0$ and (ii) $d_j^T$ is linearly independent of the rows of $\widetilde D$. \end{result} \begin{proof} Let the $r\times n$ matrix $F$ consist of $r$ linearly independent rows of $\widetilde D$, so that every row in $\widetilde D$ that is not in $F$ is a non-trivial linear combination of the rows of $F$. Hence the assumption that $\widetilde D p = 0$ implies that $Fp=0$. Because $\mathop{\hbox{\rm rank}}(\widetilde D) = r$ and $\mathop{\hbox{\rm rank}}(D) = n$, we can assemble a matrix $G$ consisting of $n-r$ rows of $D$ that are not in $\widetilde D$ and that are linearly independent of the rows of $\widetilde D$, such that the $n\times n$ matrix $$ M = \shortmtx{c}{F\\ G} \;\;\hbox{is nonsingular}. $$ Since $p\ne 0$, nonsingularity of $M$ means that $Mp\ne 0$ and, since $Fp=0$, this will be true only if $Gp\ne 0$. Given how $G$ is defined, there must be a row $d_j^T$ in $D$ but not in $\widetilde D$ such that $d_j^T p \ne 0$, and linear independence of $d_j^T$ from the rows of $\widetilde D$ follows directly from Result~\ref{res-inactindep}. \end{proof} Our next step is to determine when there is an {\sl optimal\/} vertex for the LP (\ref{eqn-togetherform}). We know that a vertex can exist only if the constraints are consistent and $\mathop{\hbox{\rm rank}}(A) = n$. By means of a theoretical procedure carried out in its proof, the next lemma guarantees the existence of an {\sl optimal\/} vertex under the added assumption that $c^T x$ is bounded below in the feasible region. \begin{lemma}[Existence of an optimal vertex.]\label{lem-optvert} Consider minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$, where the rank of $A$ (\ref{eqn-abfulldef}) is $n$. Let ${\mathcal V}$ denote the set of all vertices for the given constraints. Then either \begin{enumerate} \item[(i)] $c^T x$ is bounded below in the feasible region and there is a vertex $v\superstar\in{\mathcal V}$ where the smallest value of $c^T x$ in the feasible region is achieved; or \item[(ii)] $c^T x$ is unbounded below in the feasible region and there exists an $n$-vector $p$ such that $A_{\scriptstyle\Escr} p = 0$, $A_{\scriptstyle\Iscr} p\ge 0$, and $c^T p < 0$.
\end{enumerate} \end{lemma} \begin{proof} Starting with any feasible point $x_0$, we define an iterative sequence $\{x_k\}$ that produces a vertex $v_j\in{\mathcal V}$ such that $c^T v_j \le c^T x_0$, unless we find an $n$-vector $p$ such that $A_{\scriptstyle\Escr} p = 0$, $A_{\scriptstyle\Iscr} p\ge 0$, and $c^T p < 0$. At $x_k$, $\overset{\;\,=}{\vphantom{a}\smash{A}}_k$ denotes the active-constraint matrix $\overset{\;\,=}{\vphantom{a}\smash{A}}(x_k)$ defined by (\ref{eqn-actmatdef}); note that the rows of $A_{\scriptstyle\Escr}$ are always present in $\overset{\;\,=}{\vphantom{a}\smash{A}}_k$. \begin{description} \item[Step 0.] Set $k=0$. \item[Step 1.] If $x_k$ is a vertex (i.e., $\mathop{\hbox{\rm rank}}(\overset{\;\,=}{\vphantom{a}\smash{A}}_k) = n$) then $x_k = v_j\in{\mathcal V}$ for some $j$. Stop; a vertex has been found such that $c^T v_j \le c^T x_0$. Otherwise, go to Step 2. \item[Step 2.] Since $\mathop{\hbox{\rm rank}}(\overset{\;\,=}{\vphantom{a}\smash{A}}_k) < n$, there exists a nonzero $p$ satisfying $\overset{\;\,=}{\vphantom{a}\smash{A}}_k p = 0$, so that any movement along $p$ does not alter the values of constraints active at $x_k$. Now we consider the inequality constraints that are inactive at $x_k$. It follows from Result~\ref{res-mustbeonea} with $\overset{\;\,=}{\vphantom{a}\smash{A}}_k$ playing the role of $\widetilde D$ that there must be at least one inactive inequality constraint index $j$ such that $a_j^T p \ne 0$; let ${\mathcal J}_k = \{j\mid a_j^T x_k > b_j \;\;\hbox{and}\;\; a_j^T p \ne 0\}$. \item[Step 3.] If $c^T p = 0$, we select $j\in{\mathcal J}_k$ and set $p_k = \pm p$, choosing the sign so that $a_j^T p_k < 0$ (since either choice will satisfy $\overset{\;\,=}{\vphantom{a}\smash{A}}_k p_k = 0$). Otherwise, if $c^T p \ne 0$, choose $p_k = \pm p$ so that, for some $j\in{\mathcal J}_k$, $a_j^T p_k < 0$ and $c^T p_k < 0$; if this is not possible, we take $p_k = \pm p$ with $c^T p_k < 0$, in which case $a_j^T p_k \ge 0$ for all $j\in{\mathcal J}_k$. Since every inequality constraint not indexed by ${\mathcal J}_k$ satisfies $a_j^T p_k = 0$, we then have $A_{\scriptstyle\Escr} p_k = 0$, $A_{\scriptstyle\Iscr} p_k \ge 0$, and $c^T p_k < 0$, so we exit and conclude that (ii) holds with $p = p_k$. Otherwise, applying Definition~\ref{def-maxfeasible}, let $\alpha_k>0$ be the maximum feasible step along $p_k$. Then all the constraints inactive at $x_k$ remain feasible at $x_{k+1} = x_k + \alpha_k p_k$, and at least one additional linearly independent inequality constraint becomes active there. Hence $\mathop{\hbox{\rm rank}}(\overset{\;\,=}{\vphantom{a}\smash{A}}_{k+1}) > \mathop{\hbox{\rm rank}}(\overset{\;\,=}{\vphantom{a}\smash{A}}_k)$ and $c^T x_{k+1} \le c^T x_k$. \item[Step 4.] Increase $k$ to $k+1$ and return to Step 1. \end{description} For each initial $x_0$ there will be no more than $n$ executions of Step 1, since $\mathop{\hbox{\rm rank}}(A) = n$ and each pass through Step 3 increases the rank of $\overset{\;\,=}{\vphantom{a}\smash{A}}_k$ by at least one. This procedure confirms that, for every feasible point $x_0$, there is either a vertex $v_j\in{\mathcal V}$ such that $c^T v_j \le c^T x_0$ or an $n$-vector $p$ such that $A_{\scriptstyle\Escr} p = 0$, $A_{\scriptstyle\Iscr} p\ge 0$, and $c^T p < 0$. Let $v\superstar$ denote a vertex such that $c^T v\superstar \le c^T v_j$ for all $v_j$ in the finite set ${\mathcal V}$. If there is a feasible $x_0$ such that $c^T x_0 < c^Tv\superstar$, the procedure must give a $p$ such that $A_{\scriptstyle\Escr} p = 0$, $A_{\scriptstyle\Iscr} p\ge 0$, and $c^T p < 0$ and (ii) holds. Otherwise, $c^T x_0\ge c^T v\superstar$ for all feasible $x_0$ and (i) holds.
\end{proof} \section{Optimality at a nondegenerate vertex} It is straightforward to derive necessary and sufficient conditions for optimality of a {\sl nondegenerate\/} vertex. \begin{proposition} [Optimality of a nondegenerate vertex.] \label{prop-nondegenopt} Consider the linear program of minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$. Assume that $x\superstar$ is a nondegenerate vertex where the active set is ${\mathcal A}(x\superstar)$ and that the $n$-vector $\overset{=}{\vphantom{a}\smash{\lambda}}$ is the solution of $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)^T \overset{=}{\vphantom{a}\smash{\lambda}} = c$. Then $x\superstar$ is optimal if and only if $\overset{=}{\vphantom{a}\smash{\lambda}}_{\scriptstyle\Iscr}\ge 0$, where $\overset{=}{\vphantom{a}\smash{\lambda}}_{\scriptstyle\Iscr}$ denotes the components of $\overset{=}{\vphantom{a}\smash{\lambda}}$ corresponding to active inequality constraints. \end{proposition} \begin{proof} Because $x\superstar$ is nondegenerate, $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$ is nonsingular, which means that $\overset{=}{\vphantom{a}\smash{\lambda}}$ is unique. The ``if'' direction follows because, as we show next, we can define an $m$-vector $\lambda\superstar$ that satisfies the sufficient conditions of Proposition~\ref{prop-suffone}. Assume that the rows of $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$ are ordered with indices $\{w_1, \dots, w_n\}$, so that the $j$th component of $\overset{=}{\vphantom{a}\smash{\lambda}}$ is the multiplier for original constraint $w_j$. The full multiplier $\lambda\superstar$ is then defined as \begin{equation} \label{eqn-lamstardef} \lambda\superstar_{w_j} = \overset{=}{\vphantom{a}\smash{\lambda}}_j,\quad j = 1,\dots, n; \quad\hbox{and}\quad \lambda\superstar_j = 0 \quad\hbox{if $j\notin{\mathcal A}(x\superstar)$,} \end{equation} so that the multipliers corresponding to inactive inequality constraints are zero. Hence $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(Ax\superstar -b) = 0$ and, because $\overset{=}{\vphantom{a}\smash{\lambda}}_{\scriptstyle\Iscr}\ge 0$, we also have $\lambda\subiscr^{{\raise 0.5pt\hbox{$\nthinsp *$}}}\ge 0$; the sufficient conditions of Proposition~\ref{prop-suffone} are therefore satisfied, and $x\superstar$ is optimal. For the ``only if'' direction, suppose that $[\overset{=}{\vphantom{a}\smash{\lambda}}_{\scriptstyle\Iscr}]_i$ is strictly negative for some active inequality constraint. Because $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$ is nonsingular, there is a unique direction $p$ satisfying $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar) p = e_i$, where $e_i$ is the coordinate vector corresponding to the row of $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$ containing that constraint. Movement along $p$ retains all the other active constraints while increasing the value of constraint $i$, so that $p$ is a feasible direction. It then follows from the relation $c = \overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)^T \overset{=}{\vphantom{a}\smash{\lambda}}$ that $c^T p = {\vphantom{\rule{1pt}{5.5pt}}\smash{\overset{\;\,=}{\vphantom{a}\smash{\lambda}}}}^T \overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar) p = [\overset{=}{\vphantom{a}\smash{\lambda}}_{\scriptstyle\Iscr}]_i < 0$ and $p$ is a feasible descent direction, which means that $x\superstar$ cannot be optimal. \end{proof} \section{Optimality at a degenerate vertex} Difficulties arise in proving necessary optimality conditions for a degenerate optimal vertex because Proposition~\ref{prop-nondegenopt} depends on nonsingularity of the active-constraint matrix at the vertex $x\superstar$.
To address these difficulties, we use properties of a {\sl working set}, which is closely related to, but not the same as, the active set; see \cite[page 339]{GMW} for a more restricted definition. \begin{definition}[Working set.] \label{def-working} Given the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$, let $\skew{2.8}\bar x$ be a feasible point, not necessarily a vertex. Consider a set of $n_{\scriptscriptstyle\Wscr}$ distinct indices, ${\mathcal W} = \{w_1,\dots, w_{n_{\scriptscriptstyle\Wscr}}\}$, where $m_{\scriptstyle\Escr} \le n_{\scriptscriptstyle\Wscr}\le n$ and $w_i = i$ for $i = 1$, \dots, $m_{\scriptstyle\Escr}$, i.e., the first $m_{\scriptstyle\Escr}$ indices in ${\mathcal W}$ are the indices of the equality constraints. Let $W$ be the associated $n_{\scriptscriptstyle\Wscr}\times n$ working matrix whose $i$th row is $a_{w_i}^T$, so that the first $m_{\scriptstyle\Escr}$ rows of $W$ are $A_{\scriptstyle\Escr}$. Let $b_{\scriptscriptstyle\Wscr}$ denote the vector consisting of components of $b$ corresponding to the indices in ${\mathcal W}$. Then ${\mathcal W}$ is a {\sl working set\/} at $\skew{2.8}\bar x$ if the following two properties apply: \begin{enumerate} \item[(1)] Every inequality constraint whose index is in ${\mathcal W}$ is active at $\skew{2.8}\bar x$, i.e., $W\skew{2.8}\bar x = b_{\scriptscriptstyle\Wscr}$; \item[(2)] The rows of $W$ are linearly independent. \end{enumerate} \end{definition} At a vertex, by definition there are $n$ linearly independent active constraints, so that it is always possible to define a nonsingular working-set matrix $W$ and an associated unique vector $\lambda_{\scriptscriptstyle\Wscr}$ that satisfies $W^T \lambda_{\scriptscriptstyle\Wscr} = c$, where component $j$ of $\lambda_{\scriptscriptstyle\Wscr}$ corresponds to original constraint $w_j$. Thus components $1$, \dots, $m_{\scriptstyle\Escr}$ of $\lambda_{\scriptscriptstyle\Wscr}$ correspond to the $m_{\scriptstyle\Escr}$ equality constraints, and the remaining components are associated with ``working'' (active) inequality constraints. If $x\superstar$ is an optimal nondegenerate vertex, the active set ${\mathcal A}(x\superstar)$ is a working set with $n_{\scriptscriptstyle\Wscr} = n$, $\lambda_{\scriptscriptstyle\Wscr}$ is the same as the unique solution $\overset{=}{\vphantom{a}\smash{\lambda}}$ of $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)^T \overset{=}{\vphantom{a}\smash{\lambda}} = c$, and $[\lambda_{\scriptscriptstyle\Wscr}]_i \ge 0$ for $w_i\in\overset{\;\,=}{{\rule{0pt}{1.2ex}}\smash{\Iscr}}(x\superstar)$. But if $\skew{2.8}\widehat x$ is an optimal vertex that is degenerate, there can be more than one working set. This complicates optimality conditions because, even if $\widehat{\mathcal W}$ is a working set at a degenerate optimal vertex $\skew{2.8}\widehat x$, it may not be true that $[\lambda_{\scriptscriptstyle{\Wscrhat}}]_i \ge 0$ for inequality constraints in the working set. Consider, for example, the following all-inequality two-variable linear program of minimizing $c^T x$ subject to three inequality constraints $Ax\ge b$, with \begin{equation} \label{eqn-pertdegeninfig} c = \thinmtx{r}{1\,\\ -{\textstyle\frac12}},\quad A = \mtx{cr}{1 & 1\\ 1 & \frac{5}{2}\\ 1 & -2},\quad\hbox{and}\quad b =\thinmtx{r}{3\\ 6\\ -3}. \end{equation} (Note that, for this example, there are no equality constraints.)
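To see the difficulty concretely, note that all three constraints hold with equality at the point $(1,2)^T$. As a practical aside (our illustration, not part of the formal development), the candidate multiplier for each pair of these constraints can be computed by solving $W^T\lambda_{\scriptscriptstyle\Wscr} = c$ directly; the following sketch assumes Python with NumPy.

\begin{verbatim}
import numpy as np
from itertools import combinations

# Data of the two-variable example: minimize c^T x subject to A x >= b.
A = np.array([[1.0,  1.0],
              [1.0,  2.5],
              [1.0, -2.0]])
b = np.array([3.0, 6.0, -3.0])
c = np.array([1.0, -0.5])

# All three constraints hold with equality at (1,2)^T; for each pair of
# constraints, solve W^T lam = c for the candidate working-set multiplier.
for pair in combinations(range(3), 2):
    W = A[list(pair)]
    lam = np.linalg.solve(W.T, c)
    print(pair, lam)
# (0, 1) [ 2. -1.]    (0, 2) [0.5 0.5]    (1, 2) [0.3333... 0.6666...]
\end{verbatim}

The printed values agree with the multipliers displayed next; observe that exactly one of the three candidates has a negative component.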
The optimal solution is a degenerate vertex $x\superstar = (1,2)^T$. There are three working sets, ${\mathcal W}_1 = \{1, 2\}$, ${\mathcal W}_2 = \{1, 3\}$, and ${\mathcal W}_3 = \{2, 3\}$, and the associated multipliers are $$ \lambda_{\scriptscriptstyle{\Wscr_1}} = \mtx{r}{2\\ -1}, \quad \lambda_{\scriptscriptstyle{\Wscr_2}} = \mtx{c}{{\textstyle\frac12}\\[3pt] {\textstyle\frac12}}, \quad\hbox{and}\quad \lambda_{\scriptscriptstyle{\Wscr_3}} = \mtx{c}{{\textstyle\frac13}\\[3pt] {\textstyle\frac23}}. $$ Although ${\mathcal W}_1$ identifies two linearly independent rows of $A$, optimality of $x\superstar$ cannot be confirmed by checking the signs of the components of $\lambda_{\scriptscriptstyle{\Wscr_1}}$. We therefore define an {\sl optimal working set\/} at an optimal point as one that will confirm optimality, as happens with working sets ${\mathcal W}_2$ and ${\mathcal W}_3$ in the example. \begin{definition}[Optimal working set.] \label{def-optworking} Given the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$ and the objective function $c^T x$, assume that ${\mathcal W}$ is a working set at the optimal point $\skew{2.8}\bar x$ (not necessarily a vertex), with $W$ the associated working matrix. Then ${\mathcal W}$ is an \emph{optimal working set} if (a) the linear system $W^T \lambda_{\scriptscriptstyle\Wscr} = c$ is compatible (which means that $\lambda_{\scriptscriptstyle\Wscr}$ exists and is unique), and (b) $[\lambda_{\scriptscriptstyle\Wscr}]_i\ge 0$ if $w_i$ is the index of an active inequality constraint. \end{definition} Note that the uniqueness mentioned in property (a) follows because the columns of $W^T$ are linearly independent (Definition~\ref{def-working}). The next proposition shows that, if the constraints are consistent, $\mathop{\hbox{\rm rank}}(A) = n$, and $c^T x$ is bounded below in the feasible region, then an optimal vertex and an associated optimal working set always exist, even if the vertex is degenerate. An optimal working set is obtained from the solution of a {\sl perturbed linear program\/} where, as in \cite{Forsgren}, the perturbations follow the approach introduced in \cite{Charnes}; see, for example, \cite{Dantzig} and \cite[pages 34--35]{Chvatal}. The crucial property of the perturbations is that their presence guarantees existence of an optimal {\sl nondegenerate\/} vertex for the perturbed problem. \begin{proposition}[Existence of an optimal vertex, multiplier, and working set.] \label{prop-findoptworking} $ $\newline Consider minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$, where $c^T x$ is bounded below in the feasible region and $\mathop{\hbox{\rm rank}}(A) = n$. Then there are an optimal vertex $x\superstar$ and an associated optimal working set ${\mathcal W} = \{w_1, \dots, w_n\}$ of $n$ indices, such that the corresponding working-set matrix $W$ is nonsingular, from which an $m$-dimensional multiplier $\lambda\superstar$ can be constructed such that (i) $A^T \lambda\superstar = c$, (ii) $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(Ax\superstar -b) = 0$, and (iii) $\lambda\subiscr^{{\raise 0.5pt\hbox{$\nthinsp *$}}}\ge 0$.
\end{proposition} \begin{proof} The proof has two parts: analyzing a perturbed linear program, and then using the resulting nondegenerate optimal vertex for the perturbed problem to define a solution and optimal working set for the original problem. \smallskip \noindent {\bf Part 1: Solving a perturbed linear program.} Consider the perturbed linear program: \begin{equation} \label{eqn-pertlp} \minimize{x\inI\!\!R^n}\quad c^T\! x \quad\hbox{subject to}\quad A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr} \;\;\hbox{and}\quad A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr} - e, \end{equation} where $e = (\epsilon, \epsilon^2, \dots, \epsilon^{m_{{\cal I}}})^T$ and $\epsilon > 0$ is arbitrary and ``sufficiently small''. (Note that the equality constraints are not perturbed.) Because the constraints of the original LP are consistent, so are the constraints of the perturbed problem. The objective $c^T x$ and constraint matrix $A$ are the same in both the original and perturbed problems. We know from Lemma~\ref{lem-optvert} that there must be an optimal vertex, denoted by $x_{\epsilon}$, for the perturbed problem. Let $\overset{\;\,=}{\vphantom{a}\smash{A}}_\epsilon$ denote the active-constraint matrix at $x_{\epsilon}$ with respect to the perturbed constraints of (\ref{eqn-pertlp}). Without loss of generality, since $x_{\epsilon}$ may be degenerate, we write the active constraints at $x_{\epsilon}$ as \begin{equation} \label{eqn-active-epsilon} \overset{\;\,=}{\vphantom{a}\smash{A}}_{\epsilon} x_{\epsilon} = \overset{\,=}{\vphantom{a}\smash{b}}_{\epsilon} - \overset{\;=}{\rule{0pt}{0.7ex}\smash{e}}_{\!\epsilon}, \quad\hbox{with}\quad \overset{\;\,=}{\vphantom{a}\smash{A}}_{\epsilon} = \mtx{c}{W_{\epsilon}\\ Y_\epsilon}, \;\; \overset{\,=}{\vphantom{a}\smash{b}}_{\epsilon} = \mtx{c}{b_{\scriptscriptstyle{\Wscr_{\epsilon}}}\\ b_{\scriptscriptstyle{{\cal Y}_{\epsilon}}}}, \;\;\hbox{and}\;\; \overset{\;=}{\rule{0pt}{0.7ex}\smash{e}}_{\!\epsilon} = \mtx{c}{e_{\scriptscriptstyle{\Wscr_{\epsilon}}}\\ e_{\scriptscriptstyle{{\cal Y}_{\epsilon}}}}, \end{equation} where $W_{\epsilon}$ is $n\times n$ and nonsingular. Let ${\mathcal W}_{\epsilon}$ denote the set of $n$ indices $\{w_1, \dots, w_n\}$ of the original constraints corresponding to the rows of $W_{\epsilon}$, where the matrix $A_{\scriptstyle\Escr}$ corresponding to the equality constraints occupies the first $m_{\scriptstyle\Escr}$ rows of $W_{\epsilon}$. The remaining $n-m_{\scriptstyle\Escr}$ rows of $W_{\epsilon}$ and the rows of $Y_{\epsilon}$ contain normals of active inequality constraints whose indices are not known in advance. The first $m_{\scriptstyle\Escr}$ components of $e_{\scriptscriptstyle{\Wscr_{\epsilon}}}$ are zero (since the equalities are not perturbed), followed by $n-m_{\scriptstyle\Escr}$ distinct powers of $\epsilon$, whose exponents $v_1$, \dots, $v_{n-m_{\scriptstyle\Escr}}$ are the indices (within the inequality constraints) of the active inequality constraints in ${\mathcal W}_{\epsilon}$: \begin{equation} \label{eqn-epspowers} e_{\scriptscriptstyle{\Wscr_{\epsilon}}} = (0,\dots, 0, \epsilon^{v_1}, \epsilon^{v_2},\dots, \epsilon^{v_{n-m_{\scriptscriptstyle{{\cal E}}}}})^T, \quad\hbox{with}\quad 1 \le v_i \le m_{\scriptstyle\Iscr},\;\; i=1,\dots, n-m_{\scriptstyle\Escr}. \end{equation} We next show by contradiction that $x_{\epsilon}$ must be nondegenerate for all sufficiently small $\epsilon$, i.e., that $Y_{\epsilon}$ must be empty. Let $y^T$ denote the normal of an inequality constraint in $Y_{\epsilon}$, and assume that it corresponds to the $j$th inequality constraint of the original problem.
Because $W_{\epsilon}$ is nonsingular, there is a unique vector $q$ such that $y^T = q^T {W}_{\epsilon}$. Consequently, since $W_{\epsilon} x_{\epsilon} = b_{\scriptscriptstyle{\Wscr_{\epsilon}}} - e_{\scriptscriptstyle{\Wscr_{\epsilon}}}$ (see (\ref{eqn-active-epsilon})), we have $$ y^T x_{\epsilon} = q^T {W}_{\epsilon} x_{\epsilon} = q^T(b_{\scriptscriptstyle{\Wscr_{\epsilon}}} - e_{\scriptscriptstyle{\Wscr_{\epsilon}}}). $$ By assumption, $x_{\epsilon}$ is an optimal vertex for the perturbed problem, so that $y^T x_{\epsilon} \ge [b_{\scriptstyle\Iscr}]_j - \epsilon^j$ (since constraint $j$ is an inequality). But our further assumption (\ref{eqn-active-epsilon}) that the constraints in $Y_{\epsilon}$ are active at $x_{\epsilon}$ for all sufficiently small $\epsilon$ implies that this relation is an {\sl equality,\/} i.e., $y^T x_{\epsilon} = [b_{\scriptstyle\Iscr}]_j - \epsilon^j$. Substituting $q^T (b_{\scriptscriptstyle{\Wscr_{\epsilon}}} - e_{\scriptscriptstyle{\Wscr_{\epsilon}}})$ for $y^T x_{\epsilon}$ and rearranging, we obtain $$ q^T b_{\scriptscriptstyle{\Wscr_{\epsilon}}} - [b_{\scriptstyle\Iscr}]_j - q^T e_{\scriptscriptstyle{\Wscr_{\epsilon}}} + \epsilon^j = 0. $$ The left-hand side of this relation is a polynomial in $\epsilon$, in which $q^T b_{\scriptscriptstyle{\Wscr_{\epsilon}}}$ and $[b_{\scriptstyle\Iscr}]_j$ are independent of $\epsilon$ and the inner product $q^T e_{\scriptscriptstyle{\Wscr_{\epsilon}}}$ is a linear combination of the distinct powers of $\epsilon$ from (\ref{eqn-epspowers}), none of which is equal to $j$; in addition, there is the separate term $\epsilon^j$ with coefficient one. The polynomial is therefore not identically zero, and it can equal zero only when $\epsilon$ is exactly equal to one of its finitely many roots. Hence equality cannot hold when $\epsilon$ is allowed to be any arbitrarily small positive value, and we obtain a contradiction. The same argument applies to every constraint in ${\cal Y}_{\epsilon}$, so that in fact $y^T x_{\epsilon} > [b_{\scriptstyle\Iscr}]_j - \epsilon^j$ for each such constraint. It follows that $Y_{\epsilon}$ is empty and that only the $n$ constraints in ${\mathcal W}_{\epsilon}$ are active, confirming that $x_{\epsilon}$ is a nondegenerate optimal vertex with active-constraint matrix $\overset{\;\,=}{\vphantom{a}\smash{A}}_{\epsilon} = W_{\epsilon}$. Letting $\overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon}$ denote the necessarily unique solution of $W_{\epsilon}^T \overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon} = c$, it follows from the ``only if'' direction of Proposition~\ref{prop-nondegenopt} that the components of $\overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon}$ corresponding to active inequality constraints are nonnegative: \begin{equation} \label{eqn-pertlamprops} W_{\epsilon}^T \overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon} = c \quad\hbox{and}\quad [\overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon}]_i \ge 0 \;\;\hbox{when}\;\; w_i \in\overset{\;\,=}{{\rule{0pt}{1.2ex}}\smash{\Iscr}}_{\epsilon}. \end{equation} \smallskip \noindent {\bf Part 2: Defining an optimal solution for the original problem.} We now show that the working set ${\mathcal W}_{\epsilon}$ for the perturbed problem is an {\sl optimal\/} working set for the original problem; see Definition~\ref{def-optworking}. Taking ${\mathcal W} = {\mathcal W}_{\epsilon} = \{w_1,\dots, w_n\}$ and $W = W_{\epsilon}$, we define $x\superstar$ as the (unique) solution of $W x\superstar = b_{\scriptscriptstyle\Wscr}$, so that the $n$ linearly independent constraints represented in $W$ are active at $x\superstar$.
Let $y^T$ denote the normal of any constraint not in $W$, and assume that it corresponds to the $j$th inequality constraint in the original problem. It remains to show that $y^T x\superstar \ge [b_{\scriptstyle\Iscr}]_j$, i.e., that $x\superstar$ is feasible with respect to the corresponding original inequality constraint. The proof of Part 1 shows that there is a unique $q$ such that $y^T = q^T W$ and that $q^T b_{\scriptscriptstyle\Wscr} - [b_{\scriptstyle\Iscr}]_j - q^T e_{\scriptscriptstyle\Wscr} + \epsilon^j > 0$ for all sufficiently small $\epsilon$. Since $y^T x\superstar = q^T b_{\scriptscriptstyle\Wscr}$, we have \begin{equation} \label{eqn-xstarstrict} y^Tx\superstar - [b_{\scriptstyle\Iscr}]_j - q^T e_{\scriptscriptstyle\Wscr} + \epsilon^j > 0, \end{equation} which has two consequences. \begin{enumerate} \item[(i)] A result from \cite[Lemma 1, Chapter 10]{Dantzig} says that a polynomial in $\epsilon > 0$ will be positive for all sufficiently small $\epsilon$ if and only if the coefficient of the smallest power of $\epsilon$ present (including the constant term) is positive. The cited result implies that, if the constant term $y^Tx\superstar - [b_{\scriptstyle\Iscr}]_j$ of the polynomial in (\ref{eqn-xstarstrict}) is nonzero, it must be positive, in which case original constraint $j$ is inactive at $x\superstar$. \item[(ii)] If $y^Tx\superstar - [b_{\scriptstyle\Iscr}]_j = 0$, then by definition original constraint $j$ is active at $x\superstar$. (This case applies when $x\superstar$ is degenerate.) \end{enumerate} In either case, $y^T x\superstar \ge [b_{\scriptstyle\Iscr}]_j$ and $x\superstar$ is feasible with respect to all of the original inequality constraints $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$. The remaining ingredient needed to verify that $x\superstar$ and ${\mathcal W}$ are optimal involves multipliers. Since the nonsingular working matrix $W$ has been taken as $W_{\epsilon}$ and $W^T \overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon} = c$, we can define an $m$-vector $\lambda\superstar$, where $\lambda\superstar_{\scriptscriptstyle\Wscr}$ denotes the vector of components of $\lambda\superstar$ associated with constraints in the working set: \begin{equation} \label{eqn-full-lamdef} \lambda\superstar_{\scriptscriptstyle\Wscr} = \overset{=}{\vphantom{a}\smash{\lambda}}_{\epsilon} \quad\hbox{and}\quad \lambda_i^{{\raise 0.5pt\hbox{$\nthinsp *$}}} = 0 \;\;\hbox{if}\;\; i\notin {\mathcal W}, \end{equation} noting that $\lambda_i^{{\raise 0.5pt\hbox{$\nthinsp *$}}} \ge 0$ if the associated constraint is an inequality in ${\mathcal W}$; see (\ref{eqn-pertlamprops}). Thus we have obtained an optimal vertex $x\superstar$ and an optimal working set. Using the optimal working set, a multiplier $\lambda\superstar$ can be defined satisfying the sufficient optimality conditions of Proposition~\ref{prop-suffone}. \end{proof} The just-completed proof shows that the perturbed LP is guaranteed to have a nondegenerate optimal vertex, but this vertex will in general depend on the value of $\epsilon$ and the ordering of the powers of $\epsilon$ in the perturbed constraints. This non-uniqueness of $x_{\epsilon}$ and ${\mathcal W}_{\epsilon}$ is illustrated in Figure~\ref{fig-degenpic} for the all-inequality linear program (\ref{eqn-pertdegeninfig}). The contours of the linear objective are labeled as ``$\phi$''. The optimal degenerate vertex $x\superstar = (1,2)^T$ for the original LP is shown on the left, where the constraints include a thin shading on the infeasible side.
In the remaining two figures, the constraints have been perturbed and the thickness of the shading reflects the size of the perturbation. The value of $\epsilon$ is deliberately taken as ${\textstyle\frac12}$ so that the effects can easily be seen. In the middle figure, constraints $1$, $2$, and $3$ are perturbed respectively by $\epsilon$, $\epsilon^2$, and $\epsilon^3$, producing a single nondegenerate vertex where constraints $2$ and $3$ are active. In the rightmost figure, the constraint perturbations are $\epsilon^2$, $\epsilon$, and $\epsilon^3$, creating two distinct nondegenerate vertices, with constraints $1$ and $3$ active at the optimal vertex (which differs from the optimal vertex in the middle figure). Our earlier analysis of (\ref{eqn-pertdegeninfig}) showed that the optimal working sets are indeed ${\mathcal W}_3 = \{2,3\}$ and ${\mathcal W}_2 = \{1,3\}$, shown respectively in the middle and rightmost figures. \begin{figure}[htb] \begin{center} \centerline{\epsfbox{degen2.1}} \parbox[t]{.9\textwidth}{ \caption{\label{fig-degenpic} \small Effects of perturbing the constraints at a degenerate optimal vertex. }} \end{center} \end{figure} \section{Necessary and sufficient optimality conditions, version 2} The result of Proposition~\ref{prop-findoptworking} allows us to state necessary and sufficient conditions for optimality of a linear program with the form (\ref{eqn-togetherform}) in which the constraints are consistent, $\mathop{\hbox{\rm rank}}(A) = n$ (where $A$ is defined by (\ref{eqn-abfulldef})), and the objective function is bounded below in the feasible region. Note that they apply at any optimal point, whether or not it is a vertex. \begin{proposition}[Necessary and sufficient optimality conditions---Version II.] \label{prop-atlast} Consider the linear program of minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$, where $c^T x$ is bounded below in the feasible region and $\mathop{\hbox{\rm rank}}(A) = n$. The point $\skew3\widetilde x$, which need not be a vertex, is optimal if and only if $\skew3\widetilde x$ is feasible and there exists an $m$-vector ${\widetilde\lambda}$ such that $A^T {\widetilde\lambda} = c$, ${\widetilde\lambda}^T(A\skew3\widetilde x - b) = 0$, and ${\widetilde\lambda}_{\scriptstyle\Iscr} \ge 0$. The optimal objective value is ${\widetilde\lambda}^T b$. \end{proposition} \begin{proof} The ``if'' part was proved in Proposition~\ref{prop-suffone}. To confirm the ``only if'' part, we begin by observing that Proposition~\ref{prop-findoptworking} shows that an optimal vertex $x\superstar$ must exist for the given LP, with an associated optimal working set ${\mathcal W}$ that allows us to define an optimal $m$-component multiplier $\lambda\superstar$ such that $A^T\lambda\superstar = c$, $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(Ax\superstar - b) = 0$, and $\lambda\superstar_{\scriptstyle\Iscr} \ge 0$; see (\ref{eqn-full-lamdef}). An important point is that $\lambda\superstar_{\scriptstyle\Iscr}$ contains multipliers for all the inequality constraints in the problem. Proposition~\ref{prop-suffone} shows that the optimal value is $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T} b$. Now suppose that the feasible point $\skew3\widetilde x$ is optimal, where $\skew3\widetilde x$ may or may not be a vertex.
Proposition~\ref{prop-specialprops} states that $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(A\skew3\widetilde x - b) = 0$ must hold. Hence we can take ${\widetilde\lambda} = \lambda\superstar$ as a multiplier for $\skew3\widetilde x$. Again, we stress that the same multiplier $\lambda\superstar$ serves as ${\widetilde\lambda}$ for every optimal point $\skew3\widetilde x$. \end{proof} \section{Identifying an optimal working set at an optimal vertex} The results proved thus far show that, when the constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$ are consistent, $\mathop{\hbox{\rm rank}}(A) = n$, and $c^T x$ is bounded below in the feasible region, an optimal vertex $\skew{2.8}\widehat x$ and a corresponding optimal working set ${\widehat\Wscr}$ of $n$ indices such that ${\widehat\Working}$ is nonsingular must exist (where the working set leads to a multiplier ${\widehat\lambda}$ that ensures optimality). We know from Proposition~\ref{prop-atlast} that ${\widehat\lambda}$ is also an optimal multiplier for any optimal point $x\superstar\ne\skew{2.8}\widehat x$. But the active constraints at $\skew{2.8}\widehat x$ and $x\superstar$ may be different, which means that the working set ${\widehat\Wscr}$ may not be a valid working set for $x\superstar$ because ${\widehat\Working}x\superstar\ne b_{\scriptscriptstyle{\Wscrhat}}$; see the example following the proof of Proposition~\ref{prop-optworking}. The next result shows that, given a specific optimal vertex $x\superstar$, a corresponding optimal working set ${\mathcal W}$ of $n$ indices exists, with a nonsingular working-set matrix $W$, satisfying Definition~\ref{def-optworking}. The result relies on Proposition~\ref{prop-specialprops}, which shows that the multiplier associated with an optimal vertex satisfies the sufficient optimality conditions for any other optimal point. \begin{proposition} [Existence of an optimal working set.] \label{prop-optworking} For the linear program of minimizing $c^T x$ subject to the consistent constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x\ge b_{\scriptstyle\Iscr}$, where $\mathop{\hbox{\rm rank}}(A) = n$ and $c^T x$ is bounded below in the feasible region, suppose that an optimal vertex $x\superstar$ is given. Then there is an associated optimal working set ${\mathcal W}$ of $n$ indices, such that the corresponding working-set matrix $W$ is nonsingular. \end{proposition} \begin{proof} Proposition~\ref{prop-findoptworking} guarantees existence of an optimal vertex $\skew{2.8}\widehat x$, an optimal working set ${\widehat\Wscr}$ containing $n$ indices such that the corresponding working-set matrix ${\widehat\Working}$ is nonsingular, and an $m$-dimensional optimal vector ${\widehat\lambda}$. If $x\superstar = \skew{2.8}\widehat x$, we can take ${\mathcal W} = {\widehat\Wscr}$ and nothing more is needed. If $x\superstar\ne \skew{2.8}\widehat x$, we show next how to use ${\widehat\Wscr}$ to construct an optimal working set ${\mathcal W}$ for $x\superstar$. The working set ${\widehat\Wscr}$ contains precisely $n$ indices, which include those of the equality constraints plus a selection of active inequality constraints.
Defining ${\widehat\Wscr}_{\scriptscriptstyle +}$ as the set of indices of the equality constraints plus the indices $i$ of inequality constraints with positive multipliers ${\widehat\lambda}_i$, let ${\widehat\Working}_{\scriptscriptstyle +}$ denote the associated submatrix of ${\widehat\Working}$, i.e., the matrix whose rows correspond to indices in ${\widehat\Wscr}_{\scriptscriptstyle +}$. Nonsingularity of ${\widehat\Working}$ implies that ${\widehat\Working}_{\scriptscriptstyle +}$ has full row rank. Since $\skew{2.8}\widehat x$ and $x\superstar$ are both optimal, we know from Proposition~\ref{prop-specialprops} that complementarity is satisfied for all constraints at both $\skew{2.8}\widehat x$ and $x\superstar$, which means that, if an inequality constraint in ${\widehat\Wscr}$ has a positive multiplier, then that constraint must be active at both $\skew{2.8}\widehat x$ and $x\superstar$. In addition, all equality constraints are satisfied at both $\skew{2.8}\widehat x$ and $x\superstar$. We therefore conclude that ${\widehat\Working}_{\scriptscriptstyle +}$ is a submatrix of $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$. Defining ${\widehat\Wscr}_0$ as the set of indices $i$ of inequality constraints that are active at $x\superstar$ for which ${\widehat\lambda}_i=0$ and letting ${\widehat\Working}_0$ denote the corresponding matrix, it follows that, up to the ordering of its rows, $$ \overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar) = \mtx{c}{{\widehat\Working}_{\scriptscriptstyle +} \\ {\widehat\Working}_0}. $$ Since $x\superstar$ is a vertex, $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$ has full column rank. As ${\widehat\Working}_{\scriptscriptstyle +}$ has full row rank, we may therefore take the working-set matrix $W$ to be a nonsingular $n\times n$ submatrix of $\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)$ that contains ${\widehat\Working}_{\scriptscriptstyle +}$, and let ${\mathcal W}$ denote the associated indices. \end{proof} For example, consider a three-variable all-inequality LP with six constraints $Ax\ge b$, where \begin{equation} \label{eqn-workexamp} A = \mtx{rrr}{0 & 0 & 1\\ 1 & 2 & 1\\ 1 & -1 & 2\\ 1 & 1 & 1\\ -1 & 0 & 1\\ 0 & 1 & -1},\quad b = \mtx{r}{1\\ 5\\ 3\\ 4\\ -2\\[3pt] -{\textstyle\frac12}},\quad\hbox{and}\quad c = \mtx{c}{1\\ 2\\ 3}. \end{equation} Two degenerate vertices, $x\superstar = (2,1,1)^T$ and $\skew{2.8}\widehat x = (3,{\textstyle\frac12},1)^T$, are optimal, and the optimal objective is $c^Tx\superstar = 7$. Suppose that $\skew{2.8}\widehat x$ is the optimal vertex produced by Proposition~\ref{prop-findoptworking}. The active set at $\skew{2.8}\widehat x$ is ${\mathcal A}(\skew{2.8}\widehat x) = \{1,2,5,6\}$, and ${\widehat\Wscr} = \{1,2,5\}$ is an optimal working set, with $$ {\widehat\Working}\skew{2.8}\widehat x = \mtx{rcc}{0 & 0 & 1\\ 1 & 2 & 1\\ -1 & 0 & 1} \mtx{c}{3\\ {\textstyle\frac12}\\ 1} = b_{\scriptscriptstyle{\Wscrhat}} = \mtx{r}{1\\ 5\\ -2} \quad\hbox{and}\quad {\widehat\lambda}_{\scriptscriptstyle{\Wscrhat}} = \mtx{c}{2\\ 1\\ 0}. $$ The associated $6$-component optimal multiplier is ${\widehat\lambda} = (2,1,0,0,0,0)^T$. Using the notation in the proof of Proposition~\ref{prop-optworking}, ${\widehat\Wscr}_{\scriptscriptstyle +} = \{1,2\}$, which contains $n_{\scriptscriptstyle +} = 2$ indices. Now consider finding an optimal working set at $x\superstar$. As shown in the proof, constraints $1$ and $2$ must be active at $x\superstar$ (and indeed they are), but constraints $5$ and $6$ are not.
The active set at $x\superstar$ is ${\mathcal A}(x\superstar) = \{1,2,3,4\}$ and ${\widehat\Working}x\superstar \ne b_{\scriptscriptstyle{\Wscrhat}}$, so that ${\widehat\Wscr}$ is not an optimal working set for $x\superstar$, even though ${\widehat\lambda}$ is an optimal multiplier. Constraints $1$ and $2$ must be part of the working set at $x\superstar$. Since $\mathop{\hbox{\rm rank}}(\overset{\;\,=}{\vphantom{a}\smash{A}}(x\superstar)) = 3$, we need to add to ${\widehat\Wscr}_{\scriptscriptstyle +}$ one further constraint that is active at $x\superstar$. For this example, the extra constraint can be taken as constraint $3$ or $4$. In either case, the optimal multiplier is the same, $\lambda\superstar={\widehat\lambda}$, and the linear system $W x\superstar = b_{\scriptscriptstyle\Wscr}$ is satisfied. Note that if we seek a working set at a non-vertex optimal point, such as $\skew3\widetilde x = (\frac{5}{2}, \frac{3}{4}, 1)^T$ in example (\ref{eqn-workexamp}), then ${\widetilde\Wscr} = {\widehat\Wscr}_{\scriptscriptstyle +}$ is an optimal working set at $\skew3\widetilde x$. In fact, ${\widehat\Wscr}_{\scriptscriptstyle +}$ is an optimal working set at any optimal point. \section{A proof of Farkas' lemma} \label{sec-farkas} For completeness, we state and prove a common form of Farkas' lemma using the results in this paper. Note that in Farkas' lemma, no requirement is imposed about the rank of the matrix involved. \begin{lemma}[Farkas' lemma.] \label{lem-farkas} Given an $m\times n$ matrix $A$ and an $n$-vector $c$, precisely one of the following two conditions must be true: \begin{enumerate} \item[(1)] There exists $y\ge 0$ such that $A^T\! y = c$; \item[(2)] There exists $p$ such that $A p\ge 0$ and $c^T p < 0$. \end{enumerate} \end{lemma} \begin{proof} If $y$ satisfies (1) and $p$ satisfies (2), then $c^T p = y^T A p$. Because $A p\ge 0$ and $y\ge 0$, it follows that $y^T\! A p \ge 0$, which contradicts the relation $c^T\! p<0$ in (2). Hence (1) and (2) cannot both be true. To show that one of (1) or (2) must be true, we consider the all-inequality linear program \begin{equation} \label{eqn-farkaslp} \minimize{p\inI\!\!R^n}\;\; c^T\! p \quad\hbox{subject to}\quad \widetilde A p \ge b, \quad\hbox{with}\quad \widetilde A = \mtx{l}{\phantom- A\\ \phantom- I_n\\ -I_n} \;\;\hbox{and}\;\; b = \mtx{r}{0\\ -e\\ -e}, \end{equation} where $e$ denotes $(1,1,\dots, 1)^T$. The first $m$ constraints are $Ap\ge 0$ and the last $2n$ constraints are equivalent to requiring that $-1 \le p_i \le 1$ for $i=1$, \dots, $n$. This LP has the following properties: (i) $\widetilde A$ has rank $n$ because of the presence of the two identity matrices, (ii) the constraints $\widetilde A p\ge b$ are consistent because $p = 0$ is feasible, and (iii) the feasible region is bounded, so the objective function is bounded below. Let $p\superstar$ denote an optimal solution of (\ref{eqn-farkaslp}). Proposition~\ref{prop-atlast} implies that there exists a nonnegative optimal multiplier $\lambda$, which we may partition as $\lambda=(\lambda_1,\lambda_2,\lambda_3)^T$, where $\lambda_1$ is an $m$-vector and $\lambda_2$ and $\lambda_3$ are $n$-vectors. Because $\lambda$ is an optimal multiplier, we know that $\widetilde A^T \lambda = c$ and $c^Tp\superstar = \lambda^T b$. Writing out these relations in partitioned form gives \begin{equation}\label{eqn-farkasopt} \widetilde A^T \lambda = A^T \lambda_1 + \lambda_2 - \lambda_3 = c \;\;\hbox{with}\;\; \lambda_1,\lambda_2,\lambda_3\ge 0, \quad\hbox{and}\quad c^T p\superstar = \lambda^T b = -e^T\!
(\lambda_2 + \lambda_3). \end{equation} Since $c^T p = 0$ at the feasible point $p=0$, the optimal objective value $c^Tp\superstar$ must be either zero or negative. If $c^T p\superstar = 0$, the second relation in (\ref{eqn-farkasopt}) implies that $-e^T\! (\lambda_2 + \lambda_3)=0$. Since $\lambda_2$ and $\lambda_3$ are both nonnegative, it follows that $\lambda_2=\lambda_3=0$. Consequently, the first relation in (\ref{eqn-farkasopt}) shows that $A^T \lambda_1=c$ with $\lambda_1\ge 0$, which means that case (1) of the lemma holds for $y=\lambda_1$. If $c^T p\superstar < 0$, then, since $Ap\superstar \ge 0$, $p\superstar$ satisfies relation (2) of the lemma. Consequently, exactly one of (1) and (2) has a solution. \end{proof} \section{Summary} \label{sec-summary} Assume that the constraints $A_{\scriptstyle\Escr} x = b_{\scriptstyle\Escr}$ and $A_{\scriptstyle\Iscr} x \ge b_{\scriptstyle\Iscr}$ are consistent, $\mathop{\hbox{\rm rank}}(A) = n$, and $c^T x$ is bounded below in the feasible region. We have shown that the feasible point $x\superstar$ is an optimal solution if and only if there exists an optimal multiplier $\lambda\superstar$ such that (i) $A^T\lambda\superstar = c$, (ii) $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(Ax\superstar -b) = 0$, and (iii) $\lambda\superstar_{\scriptstyle\Iscr} \ge 0$. These conditions were derived through elementary proofs of the following sequence of results: \begin{enumerate} \item[(a)] If $\lambda\superstar$ exists satisfying (i), (ii), and (iii), $x\superstar$ is optimal. (Proposition~\ref{prop-suffone}.) \item[(b)] Let $x\superstar$ be an optimal point with an associated multiplier $\lambda\superstar$ satisfying (i), (ii), and (iii). For any other optimal point $\skew3\widetilde x\ne x\superstar$, condition (ii) is satisfied with $\lambda\superstar$ and $\skew3\widetilde x$, i.e., $\lambda^{{\raise 0.5pt\hbox{$\nthinsp *$}}T}(A\skew3\widetilde x - b) = 0$, and $c^T \skew3\widetilde x = c^Tx\superstar$. (Proposition~\ref{prop-specialprops}.) \item[(c)] There always exists an optimal vertex $\skew{2.8}\widehat x$ and an associated multiplier ${\widehat\lambda}$ satisfying (i), (ii), and (iii). (Propositions~\ref{prop-nondegenopt} and \ref{prop-findoptworking}.) \item[(d)] There must be an optimal multiplier corresponding to any optimal solution. (Proposition~\ref{prop-atlast}.) \item[(e)] There must be a nonsingular optimal working set corresponding to any optimal vertex. (Proposition~\ref{prop-optworking}.) \end{enumerate}
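As a final practical aside (our illustration, not part of the formal development), conditions (i)--(iii) are easy to test numerically at a candidate point once a multiplier is available. The following sketch assumes Python with NumPy, and the function name \verb|check_lp_optimality| is ours; it verifies the conditions for the all-inequality example (\ref{eqn-workexamp}) at the optimal vertex $x\superstar = (2,1,1)^T$ and at the non-vertex optimal point $\skew3\widetilde x = (\frac{5}{2},\frac{3}{4},1)^T$, using the multiplier ${\widehat\lambda} = (2,1,0,0,0,0)^T$ from the text.

\begin{verbatim}
import numpy as np

# Data of the all-inequality example: minimize c^T x subject to A x >= b.
A = np.array([[ 0,  0,  1],
              [ 1,  2,  1],
              [ 1, -1,  2],
              [ 1,  1,  1],
              [-1,  0,  1],
              [ 0,  1, -1]], dtype=float)
b = np.array([1.0, 5.0, 3.0, 4.0, -2.0, -0.5])
c = np.array([1.0, 2.0, 3.0])

def check_lp_optimality(x, lam, tol=1e-10):
    """Test feasibility of x and conditions (i)-(iii) of the summary."""
    feasible      = np.all(A @ x - b >= -tol)             # A x >= b
    stationary    = np.allclose(A.T @ lam, c, atol=tol)   # (i)   A^T lam = c
    complementary = abs(lam @ (A @ x - b)) <= tol         # (ii)  lam^T (A x - b) = 0
    signs         = np.all(lam >= -tol)                   # (iii) lam_I >= 0
    return feasible and stationary and complementary and signs

lam_hat = np.array([2.0, 1.0, 0.0, 0.0, 0.0, 0.0])
print(check_lp_optimality(np.array([2.0, 1.0, 1.0]), lam_hat))    # True
print(check_lp_optimality(np.array([2.5, 0.75, 1.0]), lam_hat))   # True
print(lam_hat @ b)    # the optimal value lam^T b = 7 = c^T x*
\end{verbatim}

The second call illustrates item (b): the single multiplier ${\widehat\lambda}$ certifies optimality of every optimal point, whether or not it is a vertex.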
https://arxiv.org/abs/2101.11378
Finite difference method for inhomogeneous fractional Dirichlet problem
We make the split of the integral fractional Laplacian as $(-\Delta)^s u=(-\Delta)(-\Delta)^{s-1}u$, where $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$. Based on this splitting, we respectively discretize the one- and two-dimensional integral fractional Laplacian with the inhomogeneous Dirichlet boundary condition and give the corresponding truncation errors with the help of the interpolation estimate. Moreover, suitable corrections are proposed to guarantee convergence when solving the inhomogeneous fractional Dirichlet problem, and an $\mathcal{O}(h^{1+\alpha-2s})$ convergence rate is obtained when the solution $u\in C^{1,\alpha}(\bar{\Omega}^{\delta}_{n})$, where $n$ is the dimension of the space, $\alpha\in(\max(0,2s-1),1]$, $\delta$ is a fixed positive constant, and $h$ denotes the mesh size. Finally, numerical experiments confirm the theoretical results.
\section{Introduction} The fractional Laplacian is of wide interest to both pure and applied mathematicians, and also has extensive applications in the physics and engineering communities \cite{Lischke2020, Deng.2018BPftFaTFO}. Based on the splitting of the integral fractional Laplacian, we provide the finite difference approximations for the one- and two-dimensional cases of the operator. Then the approximations are used to numerically solve the inhomogeneous fractional Dirichlet problem, i.e., \begin{equation}\label{eqtosol} \left\{ \begin{aligned} &(-\Delta)^{s}u(\mathbf{x})=f(\mathbf{x})\quad {\rm in}~ \Omega_{n},\\ &u(\mathbf{x})=g(\mathbf{x})\qquad {\rm in}~ \Omega_{n}^{c},\\ \end{aligned} \right . \end{equation} where $\Omega_{n}\subset\mathbb{R}^{n}$ $(n=1,2)$ is a bounded domain and $\Omega^{c}_{n}=\mathbb{R}^{n}\backslash\Omega_{n}$ denotes the complement of $\Omega_{n}$; $g(\mathbf{x})=0$ in $\Omega_{n}$, $g(\mathbf{x})\in L^{\infty}(\mathbb{R}^{n})$, and ${\bf supp}~g(\mathbf{x})$ is bounded; $(-\Delta)^{s}u(\mathbf{x})$ is the integral fractional Laplacian, which can be defined by \cite{Deng.2018BPftFaTFO,acosta2017-1} \begin{equation}\label{eqdeflap} (-\Delta)^{s}u(\mathbf{x})=c_{n,s} {\rm P.V.}\int_{\mathbb{R}^{n}}\frac{u(\mathbf{x})-u(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|^{n+2s}}d\mathbf{y} \end{equation} with $c_{n,s}=\frac{2^{2s}s\Gamma(n/2+s)}{\pi^{n/2}\Gamma(1-s)}$, and $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$. Moreover, \cite{Deng.2018BPftFaTFO,acosta2017-1} show that \eqref{eqdeflap} is equivalent to the following definition given via the pseudodifferential operator over the entire space $\mathbb{R}^{n}$, i.e., \begin{equation}\label{fourierlaplace} (-\Delta)^{s}u(\mathbf{x})=\mathcal{F}^{-1}(|\boldsymbol{\xi}|^{2s}\mathcal{F}(u)), \quad s>0, \end{equation} where $\mathcal{F}$ and $\mathcal{F}^{-1}$ stand for the Fourier transform and the inverse Fourier transform. The L\'{e}vy process is one of the most commonly used models for describing anomalous diffusion phenomena \cite{Applebaum.2009Lpasc,Sato.1999Lpaidd}, especially the $\alpha$-stable L\'{e}vy process. The fractional Laplacian arises as the infinitesimal generator of the $\alpha$-stable L\'{e}vy process \cite{Deng.2018BPftFaTFO,Gao.2014METaEPfDSDbLN}. Because of its singularity and non-locality, the numerical approximation of the fractional Laplacian is still a challenging topic. In the past few decades, the finite difference method has been widely used to approximate fractional derivatives \cite{Gao.2014METaEPfDSDbLN,Alikhanov.2015Andsfttfde,Chen.2014FOASftSFDE,duo_novel_2018,duo_2019,huang2014-1,jin_two_2016-1,jin_correction_2017,Li.2018AoLGFfTFNPP,lin_finite_2007,Nie.2019NAotTdFKEfRaDP,tian_class_2015,zhang2019}. Among them, \cite{jin_two_2016-1,jin_correction_2017,Li.2018AoLGFfTFNPP,lin_finite_2007} discretize the time fractional Caputo derivative by the $L_{1}$ method and the convolution quadrature method; \cite{Chen.2014FOASftSFDE,tian_class_2015} provide the weighted and shifted Gr\"{u}nwald difference method to discretize the fractional Riesz derivative; as for the fractional Laplacian, \cite{Gao.2014METaEPfDSDbLN,duo_novel_2018,duo_2019,huang2014-1} propose finite difference schemes for solving the $d$-dimensional ($d=1,2,3$) fractional Laplace equation with a homogeneous Dirichlet boundary condition; moreover, the finite difference schemes provided in \cite{Nie.2019NAotTdFKEfRaDP,zhang2019} for the tempered fractional Laplacian with $\lambda=0$ still apply to the fractional Laplacian.
Different from the previous finite difference schemes for the fractional Laplacian, we split the operator into the product of $(-\Delta)$ and $(-\Delta)^{s-1}$ according to its Fourier transform form, where $-\Delta$ denotes the classical Laplace operator, and $(-\Delta)^{s-1}$ (the exponent $s-1<0$) is a non-local operator without hyper-singularity (for the detailed definition, see \eqref{eqwithoutsi}). Then we use the Lagrange interpolation to discretize $(-\Delta)^{s-1}$ and finite differences for $-\Delta$ in the one- and two-dimensional cases, respectively. Moreover, some corrections are made to ensure the convergence when using our discretization to solve Eq. \eqref{eqtosol}. Compared with the discretizations in \cite{duo_novel_2018,duo_2019}, our scheme can deal with the inhomogeneous fractional Dirichlet problem more easily and accurately. Different from the discretizations proposed in \cite{Nie.2019NAotTdFKEfRaDP,zhang2019}, the current discretization produces a Toeplitz matrix in the one-dimensional case and a block-Toeplitz-Toeplitz-block matrix in the two-dimensional case, so the fast Fourier transform can be used directly to speed up the evaluation \cite{Chen.2005Mptaa}. Besides, we use some examples to verify the effectiveness of the designed scheme, including truncation errors, convergence, and the simulation of the mean exit time of the L\'{e}vy motion with generator $\mathcal{A}=\nabla P(x)\cdot\nabla+ (-\Delta)^{s}$; the detailed results are given in Section 5. The rest of the paper is organized as follows. In Section 2, we discretize the one- and two-dimensional fractional Laplacian by using the Lagrange interpolation and the finite difference method. In Section 3, we provide the truncation errors for the one- and two-dimensional cases, respectively. In Section 4, we make some corrections to ensure the convergence in solving the inhomogeneous fractional Dirichlet problem. Section 5 provides some numerical experiments to validate the effectiveness of the designed scheme. We conclude the paper with some discussions in the last section. Throughout the paper, $C$ is a positive constant and may be different at each occurrence. \section{Numerical discretization of the one- and two-dimensional integral fractional Laplacian} In this section, we first introduce a new presentation of the integral fractional Laplacian according to its Fourier transform form, and then the detailed discretizations of the one- and two-dimensional integral fractional Laplacian based on the Lagrange interpolation and the finite difference method are provided. From \eqref{fourierlaplace}, one can split the fractional Laplacian in the frequency domain into \begin{equation}\label{fourieq} \mathcal{F}((-\Delta)^{s}u)(\boldsymbol{\xi})=|\boldsymbol{\xi}|^{2}|\boldsymbol{\xi}|^{2s-2}\mathcal{F}(u). \end{equation} So for $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$, we get a new presentation of the fractional Laplacian after transforming \eqref{fourieq} back to physical space, i.e., \begin{equation}\label{eqreprefl} (-\Delta)^{s}u=(-\Delta)(-\Delta)^{s-1}u, \end{equation} where $(-\Delta)$ denotes the classical Laplace operator and $(-\Delta)^{s-1}$ is defined as \cite{Vazquez2012} \begin{equation}\label{eqwithoutsi} (-\Delta)^{s'}u=c_{n,s'}\int_{\mathbb{R}^{n}}|\mathbf{x}-\mathbf{y}|^{-2s'-n}u(\mathbf{y})d\mathbf{y},\qquad s'<0 \end{equation} with $c_{n,s'}=-\frac{2^{2s'}s'\Gamma(n/2+s')}{\pi^{n/2}\Gamma(1-s')}$ for $s'<0$.
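Although the scheme developed below works on a bounded domain with interpolation weights, the idea behind the splitting \eqref{eqreprefl} can already be checked at the level of Fourier symbols. The following sketch is only our illustration under whole-space assumptions and is not the scheme of this section: assuming Python with NumPy, it replaces $(-\Delta)$ by the standard three-point difference quotient, whose symbol is $\frac{4}{h^{2}}\sin^{2}(\xi h/2)$, multiplies by the symbol $|\xi|^{2s-2}$ of $(-\Delta)^{s-1}$, and compares the result of applying both symbols to a rapidly decaying test function.

\begin{verbatim}
import numpy as np

s, L, N = 0.75, 16.0, 2**12       # order s in (1/2,1), half-width, grid size
h = 2 * L / N
x = -L + h * np.arange(N)
u = np.exp(-x**2)                 # test function, negligible near the boundary

xi = 2 * np.pi * np.fft.fftfreq(N, d=h)             # discrete Fourier variable
exact = np.abs(xi)**(2 * s)                         # symbol of (-Delta)^s
lap_h = (4 / h**2) * np.sin(xi * h / 2)**2          # symbol of 3-point (-Delta)_h
riesz = np.where(xi != 0.0, np.abs(xi), 1.0)**(2*s - 2)  # symbol of (-Delta)^{s-1}
split = lap_h * riesz             # already 0 at xi = 0 since lap_h(0) = 0

u_hat = np.fft.fft(u)
v_exact = np.real(np.fft.ifft(exact * u_hat))       # (-Delta)^s u
v_split = np.real(np.fft.ifft(split * u_hat))       # (-Delta)_h (-Delta)^{s-1} u
print(np.max(np.abs(v_exact - v_split)))            # decreases as h is refined
\end{verbatim}

Since $\frac{4}{h^{2}}\sin^{2}(\xi h/2)=\xi^{2}+\mathcal{O}(h^{2}\xi^{4})$, the two symbols agree to $\mathcal{O}(h^{2}|\xi|^{2s+2})$, which is the mechanism exploited by the discretizations below.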
Below, we provide the detailed discretization for the one- and two-dimensional fractional Laplacian based on the splitting \eqref{eqreprefl}, respectively. \subsection{One-dimensional discretization} Here we focus on the discretization of $(-\Delta)^{s}u$ with the inhomogeneous Dirichlet boundary condition in the one-dimensional case. Suppose the bounded domain $\Omega_{1}=[-L,L]$ and $u=g(x)$ in $\Omega_{1}^{c}$; set $h=2L/N$ with $N\in\mathbb{N}$ and $x_{i}=-L+ih$, $i\in\mathbb{Z}$. Introduce $I_{i}=[x_{i-1},x_{i+1}]\cap\Omega_{1}$, $i=0,1,2,\ldots,N$. Denote $\phi_{i}(x)$ as the Lagrange basis polynomial on $I_{i}$, i.e., \begin{equation}\label{eqdefphii} \phi_{i}(x)=\bar{\phi}_{1}(x-x_{i})\chi_{I_{i}}(x), \end{equation} where $\chi_{I_{i}}(x)$ is the characteristic function on $I_{i}$ and $\bar{\phi}_{1}(y)$ is defined by \begin{equation*} \bar{\phi}_{1}(y)=\left\{\begin{aligned} 1-\frac{|y|}{h},&\quad y\in (-h,h),\\ 0,&\quad y\notin (-h,h). \end{aligned}\right. \end{equation*} Thus $u(x)$ can be approximated by \begin{equation*} u(x)\approx\mathbb{I}_{1}u(x)=\sum_{i=0}^{N}u_{i}\phi_{i}(x)+g(x), \end{equation*} where $u_{i}=u(x_{i})$ and $\mathbb{I}_{1}$ denotes the interpolation operator. So we can approximate $(-\Delta)^{s-1}u$ by using \begin{equation*} \begin{aligned} (-\Delta)_{h}^{s-1}u(x_{i})=& c_{1,s-1}\int_{\Omega_{1}}|x_{i}-y|^{1-2s}\sum_{j=0}^{N}u_{j}\phi_{j}(y)dy\\ &+c_{1,s-1}\int_{\mathbb{R}}|x_{i}-y|^{1-2s}g(y)dy =\sum_{j=1}^{N-1}\bar{\omega}_{j-i}u_{j}+R_{i}, \end{aligned} \end{equation*} where, for $0< i,j< N$, \begin{equation}\label{eqdefomgk} \begin{aligned} \bar{\omega}_{j-i}=&c_{1,s-1}\int_{I_{j}}|x_{i}-y|^{1-2s}\phi_{j}(y)dy =c_{1,s-1}\int_{-h}^{h}|(j-i)h-y|^{1-2s}\bar{\phi}_{1}(y)dy \end{aligned} \end{equation} and \begin{equation*} R_{i}=c_{1,s-1}\int_{\mathbb{R}}|x_{i}-y|^{1-2s}(u_{0}\phi_{0}(y)+u_{N}\phi_{N}(y)+g(y))dy. \end{equation*} As for $(-\Delta)$, we can approximate it by \begin{equation*} (-\Delta)u_i\approx(-\Delta)_{h}u_i=-\frac{u_{i-1}-2u_{i}+u_{i+1}}{h^{2}}. \end{equation*} According to \eqref{eqreprefl}, we obtain the approximation of the fractional Laplacian $(-\Delta)^{s}u$ with $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$, i.e., \begin{equation}\label{eqdefFLH1D} \begin{aligned} (-\Delta)^{s}u_i\approx(-\Delta)^{s}_{h}u_i=&-\frac{(-\Delta)^{s-1}_{h}u_{i-1}-2(-\Delta)^{s-1}_{h}u_{i}+(-\Delta)^{s-1}_{h}u_{i+1}}{h^{2}}\\ =&\sum_{j=1}^{N-1}w_{j-i}u_{j}+(-\Delta)_{h}R_{i}, \end{aligned} \end{equation} where \begin{equation}\label{eqdefw} w_{i}=(-\Delta)_{h}\bar{\omega}_{i}. \end{equation} \subsection{Two-dimensional discretization} Here we discretize $(-\Delta)^{s}u$ with the inhomogeneous Dirichlet boundary condition in the two-dimensional case. Suppose the bounded domain $\Omega_{2}=[-L,L]\times[-L,L]$, $u=g(x,y)$ in $\Omega_{2}^{c}$, the mesh size $h=2L/N$, $N\in\mathbb{N}$, and $(x_{i},y_{j})=(-L+ih,-L+jh)$, $i,~j\in \mathbb{Z}$. Denote $\phi_{i,j}$ as the Lagrange basis polynomial on $I_{i,j}=[x_{i-1},x_{i+1}]\times[y_{j-1},y_{j+1}]\cap\Omega_{2}$, $i,~j=0,1,2,\ldots,N$, i.e., \begin{equation}\label{eqdefphii2D} \phi_{i,j}(x,y)=\bar{\phi}_{2}(x-x_{i},y-y_{j})\chi_{I_{i,j}}(x,y), \end{equation} where $\chi_{I_{i,j}}(x,y)$ is the characteristic function on $I_{i,j}$ and $\bar{\phi}_{2}(x,y)$ is defined by \begin{equation*} \bar{\phi}_{2}(x,y)=\left\{\begin{aligned} \left (1-\frac{|x|}{h}\right )\left (1-\frac{|y|}{h}\right ),&\quad (x,y)\in (-h,h)\times(-h,h),\\ 0,&\quad (x,y)\notin (-h,h)\times(-h,h). \end{aligned}\right.
\end{equation*} Introducing $\mathbb{I}_{2}$ as the interpolation operator in two space dimensions, one has \begin{equation*} u\approx\mathbb{I}_{2}u=\sum_{i=0}^{N}\sum_{j=0}^{N}u_{i,j}\phi_{i,j}+g(x,y), \end{equation*} where $u_{i,j}=u(x_{i},y_{j})$. Similarly, $(-\Delta)^{s-1}u(x,y)$ can be approximated by \begin{equation*} \begin{aligned} (-\Delta)^{s-1}_{h}u(x_{i},y_{j}) =\sum_{p=1}^{N-1}\sum_{q=1}^{N-1}\bar{\omega}_{p-i,q-j}u_{p,q}+R_{i,j}, \end{aligned} \end{equation*} where $|(x_i,y_j)-(\xi,\eta)|= \sqrt{(x_i-\xi)^{2}+(y_j-\eta)^{2}}$, and for $0<i,j,p,q<N$, \begin{equation}\label{eqdefomgk2D} \begin{aligned} \bar{\omega}_{p-i,q-j}=&c_{2,s-1}\int\int_{I_{p,q}}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}\phi_{p,q}(\xi,\eta)d\xi d\eta\\ =&c_{2,s-1}\int_{-h}^{h}\int_{-h}^{h}|((p-i)h,(q-j)h)+(\xi,\eta)|^{-2s}\bar{\phi}_{2}(\xi,\eta)d\xi d\eta, \end{aligned} \end{equation} and \begin{equation*} \begin{aligned} R_{i,j}=&c_{2,s-1}\int\int_{\mathbb{R}^{2}}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}g(\xi,\eta)d\xi d\eta\\ &+c_{2,s-1}\sum_{pq(p-N)(q-N)=0,0\leq p,q\leq N}\int\int_{\mathbb{R}^{2}}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}u_{p,q}\phi_{p,q}d\xi d\eta. \end{aligned} \end{equation*} Next, using the following formula to approximate $(-\Delta)$, i.e., \begin{equation*} (-\Delta)u_{i,j}\approx(-\Delta)_{h,1}u_{i,j}=-\frac{u_{i-1,j}+u_{i+1,j}+u_{i,j+1}+u_{i,j-1}-4u_{i,j}}{h^{2}}, \end{equation*} one can get the approximation of $(-\Delta)^{s}u$, i.e., \begin{equation}\label{eqdefFLH2D} \begin{aligned} (-\Delta)^{s}_{h,1}u_{i,j} =(-\Delta)_{h,1}(-\Delta)^{s-1}_{h}u_{i,j} =\sum_{p=1}^{N-1}\sum_{q=1}^{N-1}w^{(1)}_{p-i,q-j}u_{p,q}+(-\Delta)_{h,1}R_{i,j}, \end{aligned} \end{equation} where \begin{equation}\label{eqdefw2D} w^{(1)}_{i,j}=(-\Delta)_{h,1}\bar{\omega}_{i,j}. \end{equation} An alternative approximation for $(-\Delta)u$ can be obtained by using the following formula, i.e., \begin{equation*} (-\Delta)u_{i,j}\approx(-\Delta)_{h,2}u_{i,j}=-\frac{u_{i-1,j-1}+u_{i+1,j-1}+u_{i-1,j+1}+u_{i+1,j+1}-4u_{i,j}}{2h^{2}}. \end{equation*} Also, $(-\Delta)^{s}u$ can be discretized as \begin{equation}\label{eqdefFLH2D_2} \begin{aligned} (-\Delta)^{s}_{h,2}u_{i,j} =(-\Delta)_{h,2}(-\Delta)^{s-1}_{h}u_{i,j} =\sum_{p=1}^{N-1}\sum_{q=1}^{N-1}w^{(2)}_{p-i,q-j}u_{p,q}+(-\Delta)_{h,2}R_{i,j}, \end{aligned} \end{equation} where \begin{equation}\label{eqdefw2D_2} w^{(2)}_{i,j}=(-\Delta)_{h,2}\bar{\omega}_{i,j}. \end{equation} Thus $(-\Delta)^{s}$ with $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$ can be approximated by the convex combination of \eqref{eqdefFLH2D} and \eqref{eqdefFLH2D_2}, i.e., \begin{equation}\label{eqdefFLH2Dall} (-\Delta)^{s}u\approx(-\Delta)^{s}_{h}u=\theta(-\Delta)^{s}_{h,1}u+(1-\theta)(-\Delta)^{s}_{h,2}u,\quad \theta\in[0,1], \end{equation} which means \begin{equation*} \begin{aligned} (-\Delta)^{s}_{h}u_{i,j}=&\sum_{p=1}^{N-1}\sum_{q=1}^{N-1}w_{p-i,q-j}u_{p,q}+\left (\theta(-\Delta)_{h,1}+(1-\theta)(-\Delta)_{h,2}\right )R_{i,j}, \end{aligned} \end{equation*} where \begin{equation}\label{eqdefw2Dtheta} w_{i,j}=\theta w^{(1)}_{i,j}+(1-\theta)w^{(2)}_{i,j}. \end{equation} Here $w^{(1)}_{i,j}$ and $w^{(2)}_{i,j}$ are defined in \eqref{eqdefw2D} and \eqref{eqdefw2D_2}, respectively. \section{Truncation errors} In this section, we provide the estimate of $\|(-\Delta)^{s}u-(-\Delta)^{s}_{h}u\|_{\infty}$ in the one- and two-dimensional cases, respectively.
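Before turning to the estimates, we remark, as a practical aside, that the quantities entering the one-dimensional scheme are straightforward to assemble. The following sketch is only our illustration (assuming Python with NumPy and SciPy; the function names are ours): it evaluates $\bar{\omega}_{k}$ of \eqref{eqdefomgk} by adaptive quadrature and then forms $w_{k}=(-\Delta)_{h}\bar{\omega}_{k}$ as in \eqref{eqdefw}.

\begin{verbatim}
import numpy as np
from math import gamma, pi, sqrt
from scipy.integrate import quad

def c1(sp):
    """c_{1,s'} = -2^{2s'} s' Gamma(1/2+s') / (pi^{1/2} Gamma(1-s')), s' < 0."""
    return -(2.0**(2*sp)) * sp * gamma(0.5 + sp) / (sqrt(pi) * gamma(1.0 - sp))

def omega_bar(k, h, s):
    """bar{omega}_k: c_{1,s-1} * int_{-h}^{h} |k h - y|^{1-2s} (1 - |y|/h) dy.
    The singularity at y = k h (interior only for k = 0) is integrable,
    since 1 - 2s > -1; for |k| = 1 it sits at an endpoint, where the hat
    function vanishes linearly."""
    f = lambda y: abs(k*h - y)**(1.0 - 2.0*s) * (1.0 - abs(y)/h)
    pts = [0.0] if k == 0 else None        # breakpoint at the singularity
    val, _ = quad(f, -h, h, points=pts, limit=200)
    return c1(s - 1.0) * val

def weights(M, h, s):
    """w_k = -(bar{omega}_{k-1} - 2 bar{omega}_k + bar{omega}_{k+1}) / h^2."""
    ob = np.array([omega_bar(k, h, s) for k in range(-M - 1, M + 2)])
    return -(ob[:-2] - 2.0*ob[1:-1] + ob[2:]) / h**2   # w_{-M}, ..., w_{M}

w = weights(M=8, h=0.05, s=0.75)
\end{verbatim}

Since $w_{j-i}$ depends only on $j-i$, the interior part of the resulting matrix is Toeplitz, which is what makes the FFT-based speedup mentioned in Section 1 possible.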
\section{Truncation errors}
In this section, we estimate $\|(-\Delta)^{s}u-(-\Delta)^{s}_{h}u\|_{\infty}$ in the one- and two-dimensional cases. In the following, we denote by $\|\cdot\|_{\infty}$ and $\|\cdot\|_{2}$ the discrete $l^{\infty}$ and $l^{2}$ norms, and by $\|\cdot\|_{L^{\infty}}$ the continuous ${L^{\infty}}$ norm.
\begin{theorem}\label{onetwodimendis}
Let $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$. Suppose $(-\Delta)^{s}$ and $(-\Delta)^{s}_{h}$ are defined in \eqref{eqdeflap} and in \eqref{eqdefFLH1D} or \eqref{eqdefFLH2Dall}, respectively. If $u\in C^{1,\alpha}(\bar{\Omega}^{\delta}_{n})$ with some fixed constant $\delta> 4h>0$ and $\alpha\in(\max(0,2s-1),1]$, then we have
\begin{equation*}
\|((-\Delta)^{s}-(-\Delta)^{s}_{h})u\|_{\infty}\leq Ch^{1+\alpha-2s},\quad \|((-\Delta)^{s}-(-\Delta)^{s}_{h})u\|_{2}\leq Ch^{1+\alpha-2s},
\end{equation*}
where $\Omega^{\delta}_{n}=(-L-\delta,L+\delta)^{n}$, $n=1,2$.
\end{theorem}
Here, we only provide the proof in the two-dimensional case in detail; the proof in the one-dimensional case can be obtained similarly.
\begin{proof}[Proof of Theorem \ref{onetwodimendis} in two dimensions]
For fixed $i,j$, according to \eqref{eqdefFLH2Dall}, we have
\begin{equation}\label{eqtrunall}
\begin{aligned}
|((-\Delta)^{s}-(-\Delta)^{s}_{h})u_{i,j}|\leq &\theta|((-\Delta)^{s}-(-\Delta)^{s}_{h,1})u_{i,j}|\\
&+(1-\theta)|((-\Delta)^{s}-(-\Delta)^{s}_{h,2})u_{i,j}|,~~\theta\in[0,1].
\end{aligned}
\end{equation}
Using the definitions of $(-\Delta)^{s}$ and $(-\Delta)^{s}_{h,1}$ results in
\begin{equation*}
\begin{aligned}
&|((-\Delta)^{s}-(-\Delta)_{h,1}^{s})u_{i,j}|\\
\leq &|((-\Delta)(-\Delta)^{s-1}-(-\Delta)_{h,1}(-\Delta)^{s-1})u_{i,j}|\\
&+|((-\Delta)_{h,1}(-\Delta)^{s-1}-(-\Delta)_{h,1}(-\Delta)_{h}^{s-1})u_{i,j}|\\
\leq &\uppercase\expandafter{\romannumeral1}+\uppercase\expandafter{\romannumeral2}.
\end{aligned}
\end{equation*}
Let $\Phi(x_{i}-\xi,y_{j}-\eta)\in C^{2}_{0}(\Omega_{2}^{\delta-h})$ satisfy $\Phi(x_{i}-\xi,y_{j}-\eta)=((x_{i}-\xi)^{2}+(y_{j}-\eta)^{2})^{-s}$ if $(\xi,\eta)\in\Omega^{\delta/2+h}_{2}\backslash(x_{i}-h,x_{i}+h)\times(y_{j}-h,y_{j}+h)$, together with
\begin{equation*}
\begin{aligned}
&\left \|\Phi(x,y)\right \|_{L^{\infty}(\mathbb{R}^{2})}\leq Ch^{-2s}; \quad \left \|\frac{\partial\Phi(x,y)}{\partial x}\right \|_{L^{\infty}(\mathbb{R}^{2})},\left \|\frac{\partial \Phi(x,y)}{\partial y}\right \|_{L^{\infty}(\mathbb{R}^{2})}\leq Ch^{-1-2s};\\
&\left \|\frac{\partial^{2}\Phi(x,y)}{\partial x^{2}}\right \|_{L^{\infty}(\mathbb{R}^{2})},\left \|\frac{\partial^{2} \Phi(x,y)}{\partial y^{2}}\right \|_{L^{\infty}(\mathbb{R}^{2})}\leq Ch^{-2-2s};\\
&\left \|\frac{\partial^{4}\Phi(x,y)}{\partial x^{4}}\right \|_{L^{\infty}(\mathbb{R}^{2}\backslash\Omega_{2})},\left \|\frac{\partial^{4} \Phi(x,y)}{\partial y^{4}}\right \|_{L^{\infty}(\mathbb{R}^{2}\backslash\Omega_{2})}\leq C.
\end{aligned}
\end{equation*}
Introduce the notations
\begin{equation*}
\frac{\partial^{2} \mu^{x}}{\partial x^{2}}=\frac{\partial^{2} \mu^{y}}{\partial y^{2}}=u.
\end{equation*}
Now we divide $\uppercase\expandafter{\romannumeral1}$ into two parts, i.e.,
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}\leq& C\Bigg |((-\Delta)_{x}-(-\Delta)_{x,h,1})\int\int_{\mathbb{R}^{2}}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}u(\xi,\eta)d\xi d\eta\Bigg |\\
&+C\Bigg |((-\Delta)_{y}-(-\Delta)_{y,h,1})\int\int_{\mathbb{R}^{2}}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}u(\xi,\eta)d\xi d\eta\Bigg |\\
\leq& \uppercase\expandafter{\romannumeral1}^{x}+\uppercase\expandafter{\romannumeral1}^{y},
\end{aligned}
\end{equation*}
where $(-\Delta)_{x}=-\frac{\partial^{2}}{\partial x^{2}}$, $(-\Delta)_{y}=-\frac{\partial^{2}}{\partial y^{2}}$ and
\begin{equation*}
(-\Delta)_{x,h,1}v_{i,j}=-\frac{v_{i-1,j}-2v_{i,j}+v_{i+1,j}}{h^{2}},\quad (-\Delta)_{y,h,1}v_{i,j}=-\frac{v_{i,j-1}-2v_{i,j}+v_{i,j+1}}{h^{2}}.
\end{equation*}
Introduce $\Psi(x_{i}-\xi,y_{j}-\eta)=|(x_{i},y_{j})-(\xi,\eta)|^{-2s}-\Phi(x_{i}-\xi,y_{j}-\eta)$. For $\uppercase\expandafter{\romannumeral1}^{x}$, we find
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}^{x}\leq& C\Bigg |((-\Delta)_{x}-(-\Delta)_{x,h,1})\int\int_{\mathbb{R}^{2}}\Phi(x_{i}-\xi,y_{j}-\eta)u(\xi,\eta)d\xi d\eta\Bigg |\\
&+ C\Bigg |((-\Delta)_{x}-(-\Delta)_{x,h,1})\int\int_{\mathbb{R}^{2}}\Psi(x_{i}-\xi,y_{j}-\eta)u(\xi,\eta)d\xi d\eta\Bigg |\\
\leq& \uppercase\expandafter{\romannumeral1}^{x}_{1}+\uppercase\expandafter{\romannumeral1}^{x}_{2}.
\end{aligned}
\end{equation*}
Introduce $\mathbb{D}^{\delta}_{i,j}=\{(\xi,\eta)\,|\,(x_{i}-\xi,y_{j}-\eta)\in \Omega_{2}^{\delta}\}$ and $\mathbb{D}^{\delta,0}_{i,j}=\mathbb{D}^{\delta}_{i,j}\backslash(-h,h)\times(-h,h)$. Simple calculations imply
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}^{x}_{1}\leq& C\Bigg |((-\Delta)_{x}-(-\Delta)_{x,h,1})\int\int_{\Omega_{2}^{\delta}}\frac{\partial^{2}\Phi(x_{i}-\xi,y_{j}-\eta)}{\partial \xi^{2}}\mu^{x}(\xi,\eta)d\xi d\eta\Bigg |\\
\leq& C\Bigg |((-\Delta)_{x}-(-\Delta)_{x,h,1})\int\int_{\mathbb{D}^{\delta}_{i,j}}\frac{\partial^{2}\Phi(\xi,\eta)}{\partial \xi^{2}}\mu^{x}(x_{i}-\xi,y_{j}-\eta)d\xi d\eta\Bigg |\\
\leq& C\Bigg |\int_{-h}^{h}\int_{-h}^{h}\frac{\partial^{2}\Phi(\xi,\eta)}{\partial \xi^{2}}((-\Delta)_{x}-(-\Delta)_{x,h,1})\mu^{x}(x_{i}-\xi,y_{j}-\eta)d\xi d\eta\Bigg |\\
& +C\Bigg |\int\int_{\mathbb{D}^{\delta,0}_{i,j}}\frac{\partial^{2}\Phi(\xi,\eta)}{\partial \xi^{2}}((-\Delta)_{x}-(-\Delta)_{x,h,1})\mu^{x}(x_{i}-\xi,y_{j}-\eta)d\xi d\eta\Bigg |\\
\leq& \uppercase\expandafter{\romannumeral1}^{x}_{1,1}+\uppercase\expandafter{\romannumeral1}^{x}_{1,2}.
\end{aligned}
\end{equation*}
By Taylor expansion, we have $|((-\Delta)_{x}-(-\Delta)_{x,h,1})v(x_{i})|\leq Ch^{1+\alpha}\|v\|_{C^{3,\alpha}([x_{i-1},x_{i+1}])}$ for $v\in C^{3,\alpha}([x_{i-1},x_{i+1}])$. Thus there holds
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}^{x}_{1,1}\leq& Ch^{1+\alpha}\int_{-h}^{h}\int_{-h}^{h}\Bigg |\frac{\partial^{2}\Phi(\xi,\eta)}{\partial \xi^{2}}\Bigg |d\xi d\eta\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}\\
\leq& Ch^{1+\alpha-2s}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}.
\end{aligned}
\end{equation*}
Using the fact
\begin{equation*}
\left|\frac{\partial^{2}|(x,y)-(\xi,\eta)|^{-2s}}{\partial \xi^{2}}\right|\leq C |(x,y)-(\xi,\eta)|^{-2s-2}\quad {\rm for}~~(x,y)\neq(\xi,\eta),
\end{equation*}
we obtain
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}^{x}_{1,2}\leq& Ch^{1+\alpha}\int\int_{\mathbb{D}^{\delta,0}_{i,j}}\Bigg |\frac{\partial^{2}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}}{\partial \xi^{2}}\Bigg |d\xi d\eta\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}\\
\leq& Ch^{1+\alpha-2s}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}.
\end{aligned}
\end{equation*}
Decomposing $\uppercase\expandafter{\romannumeral1}^{x}_{2}$ into three parts leads to
\begin{align*}
&\uppercase\expandafter{\romannumeral1}^{x}_{2}\leq C\Bigg |(-\Delta)_{x}\int_{x_{i}-h}^{x_{i}+h}\int_{y_{j}-h}^{y_{j}+h}\Psi(x_{i}-\xi,y_{j}-\eta)u(\xi,\eta)d\xi d\eta\Bigg |\\
&~~+C\Bigg |(-\Delta)_{x,h,1}\int_{x_{i}-h}^{x_{i}+h}\int_{y_{j}-h}^{y_{j}+h}\Psi(x_{i}-\xi,y_{j}-\eta)u(\xi,\eta)d\xi d\eta\Bigg |\\
&~~+C\Bigg |((-\Delta)_{x}-(-\Delta)_{x,h,1})\int\int_{\Omega_{2}^{c}}\Psi(x_{i}-\xi,y_{j}-\eta)u(\xi,\eta)d\xi d\eta\Bigg |\\
&~~\leq \uppercase\expandafter{\romannumeral1}^{x}_{2,1}+\uppercase\expandafter{\romannumeral1}^{x}_{2,2}+\uppercase\expandafter{\romannumeral1}^{x}_{2,3}.
\end{align*}
For $\uppercase\expandafter{\romannumeral1}^{x}_{2,1}$, we get, for some function $C_{0}(\eta)$ independent of $\xi$,
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}^{x}_{2,1}\leq &C\Bigg |\frac{\partial}{\partial x}\int_{x_{i}-h}^{x_{i}+h}\int_{y_{j}-h}^{y_{j}+h}\Psi(x_{i}-\xi,y_{j}-\eta)\frac{\partial u(\xi,\eta)}{\partial \xi}d\xi d\eta\Bigg |\\
\leq &C\Bigg |\frac{\partial}{\partial x}\int_{x_{i}-h}^{x_{i}+h}\int_{y_{j}-h}^{y_{j}+h}\Psi(x_{i}-\xi,y_{j}-\eta)\left (\frac{\partial u(\xi,\eta)}{\partial \xi}-C_{0}(\eta)\right )d\xi d\eta\Bigg |\\
\leq &C\Bigg |\int_{x_{i}-h}^{x_{i}+h}\int_{y_{j}-h}^{y_{j}+h}\frac{\partial\Psi(x_{i}-\xi,y_{j}-\eta)}{\partial x}\left (\frac{\partial u(\xi,\eta)}{\partial \xi}-C_{0}(\eta)\right )d\xi d\eta\Bigg |.
\end{aligned}
\end{equation*}
Choosing $C_{0}(\eta)=\frac{\partial u(x,\eta)}{\partial x}\Big|_{x=x_{i}}$ results in
\begin{equation*}
\uppercase\expandafter{\romannumeral1}^{x}_{2,1}\leq Ch^{1+\alpha-2s}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}.
\end{equation*}
By using $|\Phi(\xi,\eta)|\leq Ch^{-2s}$ and the Taylor expansion, there holds
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral1}^{x}_{2,2}\leq& C\Bigg |\int_{-h}^{h}\int_{-h}^{h}\Psi(\xi,\eta)(-\Delta)_{x,h,1}u(x_{i}-\xi,y_{j}-\eta)d\xi d\eta\Bigg |\\
\leq &Ch^{1+\alpha-2s}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}.
\end{aligned}
\end{equation*}
Simple calculations imply
\begin{equation*}
\uppercase\expandafter{\romannumeral1}^{x}_{2,3}\leq Ch^{2}\delta^{-2-2s}\|u\|_{L^{\infty}(\mathbb{R}^{2})}.
\end{equation*}
Combining the above estimates, one has
\begin{equation*}
\uppercase\expandafter{\romannumeral1}^{x}\leq Ch^{1+\alpha-2s}.
\end{equation*}
Similarly, there is
\begin{equation*}
\uppercase\expandafter{\romannumeral1}^{y}\leq Ch^{1+\alpha-2s}.
\end{equation*}
As for $\uppercase\expandafter{\romannumeral2}$, the fact $\|u(\xi,\eta)-\mathbb{I}_{2}u(\xi,\eta)\|_{L^{\infty}(\Omega_{2})}\leq Ch^{1+\alpha}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}$ implies
\begin{equation*}
\begin{aligned}
\uppercase\expandafter{\romannumeral2}\leq &C\left |(-\Delta)_{h,1}\int_{-L}^{L}\int_{-L}^{L}|(x_{i},y_{j})-(\xi,\eta)|^{-2s}(u(\xi,\eta)-\mathbb{I}_{2}u(\xi,\eta))d\xi d\eta\right|\\
\leq &Ch^{1+\alpha}\int_{-L}^{L}\int_{-L}^{L}|((-\Delta)_{h,1}|(x_{i},y_{j})-(\xi,\eta)|^{-2s})|d\xi d\eta\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}.
\end{aligned}
\end{equation*}
Introduce $\mathbb{D}_{i,j}=\Omega_{2}^{\delta}\backslash(x_{i}-2h,x_{i}+2h)\times(y_{j}-2h,y_{j}+2h)$. Simple calculations give
\begin{equation*}
\begin{aligned}
&\int_{-L}^{L}\int_{-L}^{L}|((-\Delta)_{h,1}|(x_{i},y_{j})-(\xi,\eta)|^{-2s})|d\xi d\eta\\
\leq&C \int_{y_{j}-2h}^{y_{j}+2h}\int_{x_{i}-2h}^{x_{i}+2h}\left |((-\Delta)_{h,1}|(x_{i},y_{j})-(\xi,\eta)|^{-2s})\right |d\xi d\eta\\
&+C\int\int_{\mathbb{D}_{i,j}}\left |((-\Delta)_{h,1}|(x_{i},y_{j})-(\xi,\eta)|^{-2s})\right |d\xi d\eta\\
\leq& Ch^{-2s},
\end{aligned}
\end{equation*}
which leads to
\begin{equation*}
\uppercase\expandafter{\romannumeral2}\leq Ch^{1+\alpha-2s}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})}.
\end{equation*}
Thus, combining the estimates of $\uppercase\expandafter{\romannumeral1}$ and $\uppercase\expandafter{\romannumeral2}$, we have
\begin{equation*}
\|((-\Delta)^{s}-(-\Delta)^{s}_{h,1})u\|_{\infty}\leq Ch^{1+\alpha-2s},\quad \|((-\Delta)^{s}-(-\Delta)^{s}_{h,1})u\|_{2}\leq Ch^{1+\alpha-2s}.
\end{equation*}
As for $\|((-\Delta)^{s}-(-\Delta)^{s}_{h,2})u\|_{\infty}$, similar arguments give the estimates
\begin{equation*}
\|((-\Delta)^{s}-(-\Delta)^{s}_{h,2})u\|_{\infty}\leq Ch^{1+\alpha-2s},\quad \|((-\Delta)^{s}-(-\Delta)^{s}_{h,2})u\|_{2}\leq Ch^{1+\alpha-2s}.
\end{equation*}
Collecting the above estimates, the desired results are reached.
\end{proof}
\section{Convergence in solving the inhomogeneous fractional Dirichlet problem}
In this section, we first give sufficient conditions for convergence when the proposed discretizations are used to solve Eq. \eqref{eqtosol}. Then we modify the discretizations provided in Sec. 3 so that these conditions are fulfilled. Finally, we present the convergence analyses for solving Eq. \eqref{eqtosol}. We begin with a lemma which is useful for the convergence analyses.
\begin{lemma}[\cite{Axelsson1994}]\label{lemmaGersgorin}
Let the matrix $\mathbf{A}$ be
\begin{equation*}
\mathbf{A}=\left[ \begin{matrix} a_{1,1}&a_{1,2}&\cdots& a_{1,N}\\ a_{2,1}&a_{2,2}&\cdots& a_{2,N}\\ \vdots&\vdots&\ddots&\vdots\\ a_{N,1}&a_{N,2}&\cdots&a_{N,N} \end{matrix}\right].
\end{equation*}
Introduce the discs
\begin{equation}
\begin{aligned}
&C_i=\{z\in \mathbb{C}:|z-a_{i,i}|\leq\sum_{j\neq i}|a_{i,j}|\},~1\leq i\leq N,\\
&C'_i=\{z\in \mathbb{C}:|z-a_{i,i}|\leq\sum_{j\neq i}|a_{j,i}|\},~1\leq i\leq N.
\end{aligned}
\end{equation}
Then the spectrum $\lambda(\mathbf{A})$ of $\mathbf{A}$ is enclosed in the union of the discs $C_{i}$, and also in the union of the discs $C'_{i}$.
\end{lemma}
Below we give two theorems stating sufficient conditions for convergence in solving Eq. \eqref{eqtosol} in one and two dimensions, respectively.
\begin{theorem}\label{thmcorstd}
Let $\mathbf{F}$, $\mathbf{G}$ be two given vectors and
\begin{equation*}
\mathbf{B}_{1}=\left[ \begin{matrix} b_{0}& b_{1}&\cdots &b_{N-2}\\ b_{1}& b_{0}&\cdots &b_{N-3}\\ \vdots&\vdots&\ddots&\vdots\\ b_{N-2}& b_{N-3}&\cdots &b_{0}\\ \end{matrix}\right ].
\end{equation*}
Let $\mathbf{U}_{h}$ be the solution of the linear system
\begin{equation}\label{eqmatrixform}
\mathbf{B}_{1}\mathbf{U}_{h}+\mathbf{G}=\mathbf{F}.
\end{equation}
Assume $\mathbf{U}$, $\mathbf{B}_{1}$, and $\mathbf{G}$ satisfy the conditions:
\begin{enumerate}[(1)]
\item $\|\mathbf{F}-(\mathbf{B}_{1}\mathbf{U}+\mathbf{G})\|_{\infty}\leq Ch^{k}$, $\|\mathbf{F}-(\mathbf{B}_{1}\mathbf{U}+\mathbf{G})\|_{2}\leq Ch^{k}$;
\item $b_{0}>0$, $b_{i}<0$ for $i\neq 0$;
\item there exists some constant $C_{0}>0$ such that $\inf\limits_{i=1,2,\ldots,N-1}\sum\limits_{j=1}^{N-1}b_{|i-j|}>C_{0}$.
\end{enumerate}
Then we obtain
\begin{equation*}
\|\mathbf{U}-\mathbf{U}_{h}\|_{\infty}<Ch^{k},\quad\|\mathbf{U}-\mathbf{U}_{h}\|_{2}<Ch^{k}.
\end{equation*}
\end{theorem}
\begin{proof}
By Lemma \ref{lemmaGersgorin} and the properties of $b_{i}$, we have
\begin{equation*}
\lambda_{min}(\mathbf{B}_{1})>C_{0}.
\end{equation*}
Let $\mathbf{e}^{\mathbf{U}}=\mathbf{U}_{h}-\mathbf{U}=\{e^{\mathbf{U}}_{i}\}_{i=1}^{N-1}$ and $\bar{\mathbf{F}}=\mathbf{F}-(\mathbf{B}_{1}\mathbf{U}+\mathbf{G})=\{\bar{f}_{i}\}_{i=1}^{N-1}$. Then
\begin{equation*}
CC_{0}\|\mathbf{e}^{\mathbf{U}}\|^{2}_{2}\leq (\mathbf{B}_{1}\mathbf{e}^{\mathbf{U}},\mathbf{e}^{\mathbf{U}})= (\mathbf{\bar{F}},\mathbf{e}^{\mathbf{U}})\leq \|\mathbf{\bar{F}}\|_{2}\|\mathbf{e}^{\mathbf{U}}\|_{2},
\end{equation*}
which leads to $\|\mathbf{e}^{\mathbf{U}}\|_{2}\leq CC_{0}^{-1}\|\mathbf{\bar{F}}\|_{2}$. Assuming $\|\mathbf{e}^{\mathbf{U}}\|_{\infty}=|e^{\mathbf{U}}_{p}|$, we have
\begin{equation*}
\begin{aligned}
&e^{\mathbf{U}}_{p}(\bar{f}_{p}-CC_{0}e^{\mathbf{U}}_{p}) =e^{\mathbf{U}}_{p}\left (\sum_{i=1}^{N-1}b_{|p-i|}e^{\mathbf{U}}_{i}-CC_{0}e^{\mathbf{U}}_{p}\right )\\
&\quad=e^{\mathbf{U}}_{p}\left (\sum_{i=1,i\neq p}^{N-1}b_{|p-i|}e^{\mathbf{U}}_{i}+(b_{0}-CC_{0})e^{\mathbf{U}}_{p}\right ) \geq e^{\mathbf{U}}_{p}\left (\sum_{i=1,i\neq p }^{N-1}b_{|p-i|}(e^{\mathbf{U}}_{i}-e^{\mathbf{U}}_{p})\right )\geq 0,
\end{aligned}
\end{equation*}
which yields
\begin{equation*}
\|\mathbf{e}^{\mathbf{U}}\|_{\infty}\leq CC_{0}^{-1}|\bar{f}_{p}|\leq CC_{0}^{-1}\|\mathbf{\bar{F}}\|_{\infty}.
\end{equation*}
Combining these bounds with the first condition, we get the desired results.
\end{proof}
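To illustrate how Theorem \ref{thmcorstd} is used in computations, the following Python sketch checks conditions (2)--(3) and solves the symmetric Toeplitz system \eqref{eqmatrixform}; it is a minimal illustration (the names are ours), not the authors' code.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_toeplitz

def solve_1d(b, F, G):
    # b = (b_0, ..., b_{N-2}): first column of the symmetric Toeplitz B_1
    b = np.asarray(b, dtype=float)
    assert b[0] > 0 and np.all(b[1:] < 0), "condition (2) of thmcorstd fails"
    n = len(b)
    row_sums = [sum(b[abs(i - j)] for j in range(n)) for i in range(n)]
    assert min(row_sums) > 0, "condition (3) of thmcorstd fails"
    # U_h solves B_1 U_h = F - G, cf. (eqmatrixform)
    return solve_toeplitz((b, b), np.asarray(F) - np.asarray(G))
\end{verbatim}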
Similarly, for the two-dimensional case, we have the following result.
\begin{theorem}\label{thmcorstd2}
Suppose $\mathbf{U}$, $\mathbf{U}_{h}$, $\mathbf{G}$, and $\mathbf{F}$ satisfy
\begin{equation}\label{eqmatrixform2}
\mathbf{B}_{2}\mathbf{U}_{h}+\mathbf{G}=\mathbf{F},
\end{equation}
and
\begin{equation*}
\|\mathbf{F}-(\mathbf{B}_{2}\mathbf{U}+\mathbf{G})\|_{\infty}\leq Ch^{k},\quad \|\mathbf{F}-(\mathbf{B}_{2}\mathbf{U}+\mathbf{G})\|_{2}\leq Ch^{k}.
\end{equation*}
Here
\begin{equation*}
\mathbf{B}_{2}=\left[ \begin{matrix} \mathbf{T}_{0}& \mathbf{T}_{1}&\cdots &\mathbf{T}_{N-1}\\ \mathbf{T}_{-1}& \mathbf{T}_{0}&\cdots &\mathbf{T}_{N-2}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{T}_{-N+1}& \mathbf{T}_{-N+2}&\cdots &\mathbf{T}_{0} \end{matrix} \right ],~ \mathbf{T}_{k}=\left[ \begin{matrix} t_{k,0}& t_{k,1}&\cdots &t_{k,N-2}\\ t_{k,1}& t_{k,0}&\cdots &t_{k,N-3}\\ \vdots&\vdots&\ddots&\vdots\\ t_{k,N-2}& t_{k,N-3}&\cdots &t_{k,0} \end{matrix} \right ].
\end{equation*}
Assume the following conditions are satisfied:
\begin{enumerate}[(1)]
\item $t_{k,i}>0$ for $k=i=0$; otherwise, $t_{k,i}<0$;
\item $\inf_{p,q=1,\ldots,N-1}\sum_{i,j=1}^{N-1}t_{|p-i|,|q-j|}>C_{0}>0$.
\end{enumerate}
Then one has
\begin{equation*}
\|\mathbf{U}-\mathbf{U}_{h}\|_{\infty}<Ch^{k},\quad\|\mathbf{U}-\mathbf{U}_{h}\|_{2}<Ch^{k}.
\end{equation*}
\end{theorem}
\subsection{Corrections for the one- and two-dimensional discretizations}
In view of the above two theorems, we need to modify some properties of the weights produced by the discretizations in Sec. 3 in the one- and two-dimensional cases.
\subsubsection{One-dimensional case}
Here we provide a theorem stating the properties of the weights $w_{i}$ defined in \eqref{eqdefw}.
\begin{theorem}\label{thmfraweightprop1d}
Let $w_{i}$ be defined in \eqref{eqdefw}. Then $w_{i}$ satisfies
\begin{equation*}
\begin{aligned}
& w_{i}<0,~~|i|\geq 2; \quad w_{i}=w_{-i},~~i \geq 0;\\
&\sum_{i=-N+1}^{N-1}w_{i}\geq CL^{-2s},
\end{aligned}
\end{equation*}
where $2L$ is the length of $\Omega_1$.
\end{theorem}
\begin{proof}
The definition of $c_{1,s-1}$ and simple calculations give, for $\zeta>h$,
\begin{equation*}
\begin{aligned}
c_{1,s-1}<0,~~ (\zeta-h)^{1-2s}-2\zeta^{1-2s}+(\zeta+h)^{1-2s}<0 ~~{\rm for }~ s<\frac{1}{2};\\
c_{1,s-1}>0,~~ (\zeta-h)^{1-2s}-2\zeta^{1-2s}+(\zeta+h)^{1-2s}>0 ~~{\rm for }~ s>\frac{1}{2},
\end{aligned}
\end{equation*}
which leads to $w_{i}<0$ for $|i|\geq 2$. As for $w_{1}$, simple calculations give
\begin{equation}\label{weight1 11}
w_{1}=-c_{1,s-1}h^{-2s}\frac{7 - 2^{5 - 2s} + 3^{3 -2s}}{(2s-3)(2s-2)}.
\end{equation}
Summing $w_{i}$ from $-N+1$ to $N-1$ gives
\begin{equation*}
\sum_{i=-N+1}^{N-1}w_{i} = \frac{1}{h^{2}}\sum_{i=-N+1}^{N-1}(2\bar{\omega}_{i}-\bar{\omega}_{i+1}-\bar{\omega}_{i-1}) =2\frac{\bar{\omega}_{N-1}-\bar{\omega}_{N}}{h^{2}}.
\end{equation*}
According to the definition of $\bar{\omega}_{N}$, we have
\begin{equation*}
\begin{aligned}
\frac{\bar{\omega}_{N-1}-\bar{\omega}_{N}}{h^{2}} = &\frac{c_{1,s-1}\int_{-h}^{h}(((N-1)h+\zeta)^{1-2s}-(Nh+\zeta)^{1-2s})\bar{\phi}_{1}(\zeta)d\zeta}{h^{2}}\\
\geq& C\frac{\int_{-h}^{h}L^{-2s}\bar{\phi}_{1}(\zeta)d\zeta}{h} \geq CL^{-2s},
\end{aligned}
\end{equation*}
which leads to the desired results.
\end{proof}
From Theorem \ref{thmcorstd} and the fact that $w_{1}>0$ for some $s\in(0,\frac{1}{2})$ (see \eqref{weight1 11}), we see that the numerical scheme constructed from \eqref{eqdefFLH1D} may not be effective.
To make the $w_{i}$ satisfy the conditions of Theorem \ref{thmcorstd} and obtain an effective numerical scheme, we modify $\bar{\omega}_{0}$ as
\begin{equation}\label{eqdefw0M}
\bar{\omega}^{M}_{0}=\left\{ \begin{aligned} &0\qquad {\rm if}~w_{1}\geq 0,\\ &\bar{\omega}_{0}\qquad {\rm if}~w_{1}<0. \end{aligned} \right.
\end{equation}
Then we obtain the modified scheme
\begin{equation}\label{modifiedscheme1}
(-\Delta)^{s}u(x_{i})\approx(-\Delta)_{h,M}^{s}u(x_{i})=\sum_{j=1}^{N-1}w^{M}_{j-i}u_{j}+(-\Delta)_{h}R_{i},
\end{equation}
where
\begin{equation*}
\begin{aligned}
w^{M}_{0}=&-\frac{\bar{\omega}_{-1}-2\bar{\omega}^{M}_{0}+\bar{\omega}_{1}}{h^{2}},\quad w^{M}_{i}=-\frac{\bar{\omega}^{M}_{0}-2\bar{\omega}_{1}+\bar{\omega}_{2}}{h^{2}},\quad |i|=1,\\
w^{M}_{i}=&\,w_{i},\quad |i|\geq 2.
\end{aligned}
\end{equation*}
By the definitions of $w^{M}_{1}$ and $\bar{\omega}_{i}$, it is easy to check that $w^{M}_{1}<0$.

Next, we present the truncation error of the modified discretization \eqref{modifiedscheme1}.
\begin{theorem}\label{thmfra1dmd}
Let $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$, and let $(-\Delta)^{s}$ and $(-\Delta)^{s}_{h,M}$ be defined in \eqref{eqdeflap} and \eqref{modifiedscheme1}, respectively. If $u\in C^{1,\alpha}(\bar{\Omega}^{\delta}_{1})$ with some fixed constant $\delta>4h>0$ and $\alpha\in(\max(0,2s-1),1]$, then we have
\begin{equation*}
\|((-\Delta)^{s}-(-\Delta)^{s}_{h,M})u\|_{\infty}\leq Ch^{1+\alpha-2s},\quad \|((-\Delta)^{s}-(-\Delta)^{s}_{h,M})u\|_{2}\leq Ch^{1+\alpha-2s},
\end{equation*}
where $\Omega^{\delta}_{1}=(-L-\delta,L+\delta)$.
\end{theorem}
\begin{proof}
For fixed $i$, by the triangle inequality and Theorem \ref{onetwodimendis}, we obtain
\begin{equation*}
\begin{aligned}
|((-\Delta)^{s}-(-\Delta)^{s}_{h,M})u_{i}|\leq& |((-\Delta)^{s}-(-\Delta)^{s}_{h})u_{i}|+|((-\Delta)^{s}_{h}-(-\Delta)^{s}_{h,M})u_{i}|\\
\leq& Ch^{1+\alpha-2s}+\vartheta.
\end{aligned}
\end{equation*}
As for $\vartheta$, if $\bar{\omega}^{M}_{0}=\bar{\omega}_{0}$, there is $\vartheta=0$. Otherwise, we have
\begin{equation*}
\vartheta\leq \left |(-\Delta)_{h}u_{i}\int_{-h}^{h}|y|^{1-2s}\bar{\phi}_{1}(y)dy\right | \leq Ch^{1+\alpha-2s}\|u\|_{C^{1,\alpha}(\bar{\Omega}^{\delta}_{1})},
\end{equation*}
which leads to the desired results.
\end{proof}
Thus, by Theorem \ref{thmcorstd}, we can get the following convergence results in the one-dimensional case.
\begin{theorem}\label{thm1dcon}
Assume $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$. Let $u$ and $\mathbf{U}_{h}$ be the solutions of Eqs. \eqref{eqtosol} and \eqref{eqmatrixform} with $b_{i}=w^{M}_{i}$ and
\begin{equation*}
\mathbf{U}=\{u(x_{i})\}_{i=1}^{N-1},\quad\mathbf{G}=\left \{(-\Delta)_{h}R_{i}\right \}_{i=1}^{N-1},\quad \mathbf{F}=\{f(x_{i})\}_{i=1}^{N-1}.
\end{equation*}
If $u\in C^{1,\alpha}(\bar{\Omega}^{\delta}_{1})$ with some fixed constant $\delta>4h>0$ and $\alpha\in(\max(0,2s-1),1]$, then we have
\begin{equation*}
\|\mathbf{U}-\mathbf{U}_{h}\|_{\infty}<Ch^{1+\alpha-2s},\quad\|\mathbf{U}-\mathbf{U}_{h}\|_{2}<Ch^{1+\alpha-2s}.
\end{equation*}
\end{theorem}
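The correction \eqref{eqdefw0M} is easy to implement once $\bar{\omega}_{k}$ is available; the following fragment, reusing the hypothetical helpers \texttt{omega\_bar} and \texttt{w} from the quadrature sketch in Section 3, returns the modified weights $w^{M}_{k}$ of \eqref{modifiedscheme1}.
\begin{verbatim}
def modified_weights(h, s, N):
    # w^M of the corrected scheme (modifiedscheme1); indices 0..N-2
    # suffice since w^M_{-k} = w^M_k by symmetry.
    ob = [omega_bar(k, h, s) for k in range(N + 1)]
    ob0 = 0.0 if w(1, h, s) >= 0 else ob[0]    # the modification (eqdefw0M)
    wM = [0.0] * (N - 1)
    wM[0] = -(ob[1] - 2 * ob0 + ob[1]) / h**2  # bar{omega}_{-1} = bar{omega}_1
    if N > 2:
        wM[1] = -(ob0 - 2 * ob[1] + ob[2]) / h**2
    for k in range(2, N - 1):
        wM[k] = -(ob[k - 1] - 2 * ob[k] + ob[k + 1]) / h**2
    return wM
\end{verbatim}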
\subsubsection{Two-dimensional case}
According to Theorem \ref{thmcorstd2}, to obtain an effective numerical scheme we need the modified weights to satisfy the requirements
\begin{equation*}
w_{0,0}^{M}>0 ~~{\rm and}~~w_{i,j}^{M}<0,
\end{equation*}
where $(i,j)\in\{(\pm 1,0),(0,\pm 1),(\pm 1,\pm 1)\}$. To be specific, we modify the $\bar{\omega}_{i,j}$ as
\begin{equation*}
\bar{\omega}^{M}_{0,0}=\bar{\omega}_{0,0}+c_{0,0},\quad c_{0,0}\geq0;\qquad \bar{\omega}^{M}_{i,j}=\bar{\omega}_{i,j}\quad {\rm for}~(i,j)\neq(0,0),
\end{equation*}
and take $w_{i,j}^{M}$ as
\begin{equation}\label{eqdeffra2dwmd}
w_{i,j}^{M}=(\theta(-\Delta)_{h,1}+(1-\theta)(-\Delta)_{h,2})\bar{\omega}_{i,j}^{M}.
\end{equation}
Thus the two-dimensional discretization scheme can be modified as
\begin{equation}\label{eqdefFLH2Dmd}
(-\Delta)^{s}_{h,M}u_{i,j}=\sum_{p=1}^{N-1}\sum_{q=1}^{N-1}w^{M}_{p-i,q-j}u_{p,q}+(\theta(-\Delta)_{h,1}+(1-\theta)(-\Delta)_{h,2})R_{i,j}.
\end{equation}
Similar to the proofs of Theorems \ref{thmfraweightprop1d} and \ref{thmfra1dmd}, the following results hold.
\begin{theorem}\label{thmfraweightprop2d}
Let $w_{i,j}^{M}$ be defined in \eqref{eqdeffra2dwmd}. Then
\begin{equation*}
\sum_{i=-N+1}^{N-1}\sum_{j=-N+1}^{N-1}w_{i,j}^{M}\geq CL^{-2s}.
\end{equation*}
\end{theorem}
\begin{theorem}
Let $\Omega_{2}^{\delta}=(-L-\delta,L+\delta)\times(-L-\delta,L+\delta)$ and $s\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$. Suppose $(-\Delta)^{s}$ and $(-\Delta)^{s}_{h,M}$ are defined in \eqref{eqdeflap} and \eqref{eqdefFLH2Dmd}, respectively. If $u\in C^{1,\alpha}(\bar{\Omega}_{2}^{\delta})$ with some fixed constant $\delta>4h>0$ and $\alpha\in(\max(0,2s-1),1]$, then we have
\begin{equation*}
\|((-\Delta)^{s}-(-\Delta)^{s}_{h,M})u\|_{\infty}\leq Ch^{1+\alpha-2s},\quad \|((-\Delta)^{s}-(-\Delta)^{s}_{h,M})u\|_{2}\leq Ch^{1+\alpha-2s}.
\end{equation*}
\end{theorem}
Thus the corresponding convergence results can be obtained by Theorem \ref{thmcorstd2}.
\begin{theorem}\label{thmcon2}
Let $u$ and $\mathbf{U}_{h}$ be the solutions of \eqref{eqtosol} and \eqref{eqmatrixform2} with $t_{i,j}=w_{i,j}^{M}$ and
\begin{equation*}
\begin{aligned}
&\mathbf{U}=\{u(x_{i},y_{j})\}_{i,j=1}^{N-1},\quad \mathbf{F}=\{f(x_{i},y_{j})\}_{i,j=1}^{N-1},\\
&\mathbf{G}=\left \{(\theta(-\Delta)_{h,1}+(1-\theta)(-\Delta)_{h,2})R_{i,j}\right \}_{i,j=1}^{N-1}.
\end{aligned}
\end{equation*}
If $u\in C^{1,\alpha}(\bar{\Omega}^{\delta}_{2})$ with some fixed constant $\delta>4h>0$ and $\alpha\in(\max(0,2s-1),1]$, then, after choosing suitable $\theta$ and $c_{0,0}$, we have
\begin{equation*}
\|\mathbf{U}-\mathbf{U}_{h}\|_{\infty}<Ch^{1+\alpha-2s},\quad\|\mathbf{U}-\mathbf{U}_{h}\|_{2}<Ch^{1+\alpha-2s},
\end{equation*}
where $s\in(\frac{1}{250},\frac{1}{2})\cup (\frac{1}{2},1)$.
\end{theorem}
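As in one dimension, the two-dimensional correction only shifts $\bar{\omega}_{0,0}$ by $c_{0,0}$; a sketch (again reusing the hypothetical helper \texttt{omega\_bar\_2d} from Section 3, and caching entries via the symmetry $\bar{\omega}_{m,n}=\bar{\omega}_{|m|,|n|}$) is as follows.
\begin{verbatim}
def modified_weights_2d(h, s, N, theta, c00):
    # w^M_{m,n} of (eqdeffra2dwmd): bar{omega}^M differs from bar{omega}
    # only at (0,0), where it is shifted by c00 >= 0.
    cache = {}
    def ob(a, b):
        key = (abs(a), abs(b))
        if key not in cache:
            cache[key] = omega_bar_2d(key[0], key[1], h, s)
        return cache[key] + (c00 if (a, b) == (0, 0) else 0.0)
    wM = {}
    for m in range(-N + 1, N):
        for n in range(-N + 1, N):
            w1 = -(ob(m-1, n) + ob(m+1, n) + ob(m, n-1) + ob(m, n+1)
                   - 4 * ob(m, n)) / h**2
            w2 = -(ob(m-1, n-1) + ob(m+1, n-1) + ob(m-1, n+1)
                   + ob(m+1, n+1) - 4 * ob(m, n)) / (2 * h**2)
            wM[m, n] = theta * w1 + (1 - theta) * w2
    return wM
\end{verbatim}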
\begin{remark}
By numerical experiments, we determine the range of $\theta$, for different $s\in(\frac{1}{250},\frac{1}{2})\cup (\frac{1}{2},1)$ and $c_{0,0}$, for which the above estimates hold; it is shown as the shaded area in Figure \ref{figctheta}. For smaller $s$, however, we did not find a suitable $\theta$ making $w^{M}_{i,j}$ satisfy the conditions of Theorem \ref{thmcorstd2}.
\begin{figure}
\centering
\subfigure[$c_{0,0}=1$]{ \includegraphics[width=0.45\linewidth,angle=0]{c1}\label{fig:c1}}
\subfigure[$c_{0,0}=3$]{ \includegraphics[width=0.45\linewidth,angle=0]{c3}\label{fig:c3}}
\subfigure[$c_{0,0}=7$]{ \includegraphics[width=0.45\linewidth,angle=0]{c7}\label{fig:c7}}
\subfigure[$c_{0,0}=16$]{ \includegraphics[width=0.45\linewidth,angle=0]{c16}\label{fig:c16}}
\caption{Range of $\theta$ for different $s$ and $c_{0,0}$.}
\label{figctheta}
\end{figure}
\end{remark}
\begin{remark}
It is easy to check that the coefficient $c_{n,s}$ in \eqref{eqdeflap} tends to $\infty$ as $s\to\frac{1}{2}$ in the one-dimensional case, but not in the two-dimensional case.
\end{remark}
\section{Numerical experiments}
In this section, we first verify the convergence of the numerical method in discretizing $(-\Delta)^s$ and solving Eq. \eqref{eqtosol}. Then we simulate the mean exit time of the L\'{e}vy motion with generator $\mathcal{A}=\nabla P(\mathbf{x})\cdot\nabla+ (-\Delta)^{s}$. From \cite{dyda2012}, we have
\begin{equation}\label{eqexactsol1d}
u=\left\{ \begin{aligned} (1-x^{2})^{P+s},&\quad x\in(-1,1),\\ 0,&\quad {\rm otherwise}, \end{aligned}\right.
\end{equation}
with $P\in \mathbb{R}$ and
\begin{equation*}
(-\Delta)^{s}u=\frac{2^{2s}\Gamma(\frac{1}{2}+s)\Gamma(P+1+s)}{\sqrt{\pi}\Gamma(P+1)}~_{2}F_{1}\left (\frac{1}{2}+s,-P;\frac{1}{2};x^{2}\right ),\quad x\in(-1,1),
\end{equation*}
with $_{2}F_{1}$ being the Gauss hypergeometric function. Using this result, we test the truncation errors and the convergence rates (the right-hand side and boundary terms of Eq. \eqref{eqtosol} are taken as the corresponding expressions).
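The exact expression above is straightforward to evaluate with standard special-function libraries. For example, in Python (a sketch; the function name is ours, and $2^{2s}=4^{s}$):
\begin{verbatim}
import numpy as np
from scipy.special import gamma, hyp2f1

def frac_lap_exact(x, s, P):
    # (-Delta)^s of (1-x^2)_+^{P+s} on (-1,1), via the formula quoted above
    c = 4.0**s * gamma(0.5 + s) * gamma(P + 1 + s) \
        / (np.sqrt(np.pi) * gamma(P + 1))
    return c * hyp2f1(0.5 + s, -P, 0.5, x**2)
\end{verbatim}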
\begin{example}
In this example, we consider the truncation error in the one-dimensional case. Here we choose $\Omega_{1}=(-1,1)$, $g(x)=0$, and $P=2-s$ in \eqref{eqexactsol1d}. All the results presented in Table \ref{tab:1dtr} agree with Theorem \ref{onetwodimendis}.
\begin{table}[htbp]
\centering
\caption{$l^{\infty}(\Omega)$ truncation errors and convergence rates with $P=2-s$}
\begin{tabular}{ccccc}
\hline
$s\backslash 2/h$ & 128&256 & 512 & 1024 \\
\hline
0.2 & 2.313E-03&8.321E-04 & 2.930E-04 & 1.016E-04 \\
& Rates&1.4749 & 1.5061 & 1.5274 \\
0.4 & 4.168E-03&1.723E-03 & 7.215E-04 & 3.057E-04 \\
&Rates&1.2742 & 1.2559 & 1.2388 \\
0.6 & 1.776E-02&9.859E-03 & 5.541E-03 & 3.142E-03 \\
& Rates&0.8495 & 0.8314 & 0.8185 \\
0.8 & 6.368E-02&4.724E-02 & 3.536E-02 & 2.662E-02 \\
& Rates&0.4309 & 0.4179 & 0.4098 \\
\hline
\end{tabular}
\label{tab:1dtr}
\end{table}
\end{example}
\begin{example}
In this example, we use the numerical scheme \eqref{eqmatrixform} to solve \eqref{eqtosol} with $g(x)=0$ and $\Omega_{1}=(-1,1)$. Here, we choose $P=1$ in \eqref{eqexactsol1d}, which leads to $u\in C^{1,s}(\Omega_{1})$. The results presented in Table \ref{tab:1dP1D} show that the numerical scheme \eqref{eqmatrixform} has an $\mathcal{O}(h^{1+s})$ convergence rate, which is higher than the rate $\mathcal{O}(h^{1-s})$ predicted in Theorem \ref{thm1dcon}.
\begin{table}[htbp]
\centering
\caption{$l^{\infty}(\Omega)$ errors and convergence rates with $P=1$}
\begin{tabular}{ccccc}
\hline
$s\backslash 2/h$& 128&256 & 512 & 1024 \\
\hline
0.1 & 3.663E-04&1.911E-04 & 9.470E-05 & 4.570E-05 \\
& Rates&0.9389 & 1.0128 & 1.0512 \\
0.2 & 1.005E-03&4.965E-04 & 2.329E-04 & 1.061E-04 \\
& Rates&1.0175 & 1.0922 & 1.1337 \\
0.3 & 4.274E-04&1.859E-04 & 7.800E-05 & 3.219E-05 \\
& Rates&1.2011 & 1.2529 & 1.2770 \\
0.6 & 2.415E-04&7.109E-05 & 2.433E-05 & 8.171E-06 \\
& Rates&1.7644 & 1.5470 & 1.5741 \\
\hline
\end{tabular}
\label{tab:1dP1D}
\end{table}
\end{example}
\begin{example}
We choose $P=0$ in \eqref{eqexactsol1d}. We first take $\Omega_{1}=(-0.5,0.5)$ and
\begin{equation*}
g(x)=\left\{ \begin{aligned} (1-x^{2})^{P+s},&\quad x\in(-1,1)\backslash \Omega_{1},\\ 0,&\quad {\rm otherwise}, \end{aligned}\right.
\end{equation*}
to verify the convergence when \eqref{eqmatrixform} is used to solve the inhomogeneous Dirichlet problem. According to Eq. \eqref{eqdefw0M}, we have $\bar{\omega}_{0}^{M}=0$ when $s=0.2$, and $\bar{\omega}_{0}^{M}=\bar{\omega}_{0}$ when $s=0.3,0.6,0.7$. From the results presented in Table \ref{tab:1dP0nD}, we find that when $\bar{\omega}_{0}^{M}=0$ the convergence rates are $\mathcal{O}(h^{2-2s})$, the same as the ones predicted by Theorem \ref{thm1dcon}, and when $\bar{\omega}_{0}^{M}\neq 0$ the convergence rates are $\mathcal{O}(h^{2})$, higher than the predicted ones.
\begin{table}[htbp]
\centering
\caption{$l^{\infty}(\Omega)$ errors and convergence rates with $P=0$}
\begin{tabular}{ccccc}
\hline
$s\backslash 2/h$& 128&256 & 512 & 1024 \\
\hline
0.2 & 5.20E-05&1.74E-05 & 5.82E-06 & 1.95E-06 \\
& Rates&1.5763 & 1.5834 & 1.5799 \\
0.3 & 6.432E-06&1.622E-06 & 4.064E-07 & 1.007E-07 \\
& Rates&1.9870 & 1.9973 & 2.0123 \\
0.6 & 6.647E-06&1.749E-06 & 4.547E-07 & 1.167E-07 \\
& Rates&1.9265 & 1.9434 & 1.9617 \\
0.7 & 6.474E-06&1.718E-06 & 4.505E-07 & 1.166E-07 \\
& Rates&1.9143 & 1.9307 & 1.9502 \\
\hline
\end{tabular}
\label{tab:1dP0nD}
\end{table}
Afterwards, we show in Table \ref{tab:1dP0D} the numerical results of using \eqref{eqmatrixform} to solve \eqref{eqtosol} with $\Omega_{1}=(-1,1)$ and $g(x)=0$. Since $P=0$, the exact solution has low regularity. The results presented in Table \ref{tab:1dP0D} show that the numerical scheme \eqref{eqmatrixform} is still effective.
\begin{table}[htbp]
\centering
\caption{$l^{\infty}(\Omega)$ errors and convergence rates with $P=0$}
\begin{tabular}{ccccc}
\hline
$s\backslash 2/h$& 256&512 & 1024 & 2048 \\
\hline
0.2 & 7.681E-02&6.680E-02 & 5.812E-02 & 5.058E-02 \\
& Rates&0.2016 & 0.2008 & 0.2004 \\
0.4 & 8.422E-03&6.387E-03 & 4.842E-03 & 3.670E-03 \\
& Rates&0.3990 & 0.3995 & 0.3997 \\
0.6 & 2.459E-03&1.621E-03 & 1.069E-03 & 7.053E-04 \\
& Rates&0.6010 & 0.6005 & 0.6002 \\
0.8 & 4.966E-04&2.859E-04 & 1.644E-04 & 9.450E-05 \\
& Rates&0.7964 & 0.7982 & 0.7991 \\
\hline
\end{tabular}
\label{tab:1dP0D}
\end{table}
\end{example}
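The convergence rates reported in the tables are the usual observed orders on a sequence of halved meshes; a short sketch:
\begin{verbatim}
import numpy as np

def observed_rates(errors):
    # errors on meshes h, h/2, h/4, ...; rate = log2(e_h / e_{h/2})
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Example: the s = 0.2 row of Table tab:1dtr gives
# observed_rates([2.313e-3, 8.321e-4, 2.930e-4, 1.016e-4])
# ~ [1.475, 1.506, 1.527], reproducing the reported "Rates" entries.
\end{verbatim}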
\begin{example}
Here we present some examples in two dimensions. We choose
\begin{equation*}
u=\left\{ \begin{aligned} ((1-x^{2})(1-y^{2}))^{2}, &\quad(x,y)\in \Omega_{2},\\ 0, &\quad(x,y) \in \Omega_{2}^{c}, \end{aligned}\right.
\end{equation*}
where $\Omega_{2}=(-1,1)\times(-1,1)$ and $g(x,y)=0$. Table \ref{tab:trun2d} shows the truncation errors when using \eqref{eqdefFLH2Dall} with $\theta=0$ and $1$ to approximate $(-\Delta)^{s}u$. Since $(-\Delta)^{s}u$ is unknown, the truncation errors are calculated by
\begin{equation*}
e_{h}=\|(-\Delta)^{s}_{h}u-(-\Delta)^{s}_{h/2}u\|_{\infty}.
\end{equation*}
All the results validate Theorem \ref{onetwodimendis}.
\begin{table}[htbp]
\centering
\caption{$l^{\infty}(\Omega)$ truncation errors and convergence rates in two dimensions}
\begin{tabular}{ccccc}
\hline
$(s,\theta)\backslash 2/h$& 64 & 128&512 & 1024 \\
\hline
(0.3,0) & 1.238E-03 & 4.994E-04&1.968E-04 & 7.645E-05 \\
& Rates & 1.3099&1.3437 & 1.3641 \\
(0.3,1) & 1.324E-03 & 5.208E-04&2.021E-04 & 7.778E-05 \\
& Rates & 1.3461&1.3656 & 1.3777 \\
(0.8,0) & 1.358E-01 & 9.929E-02&7.399E-02 & 5.562E-02 \\
& Rates & 0.4516&0.4244 & 0.4116 \\
(0.8,1) & 1.332E-01 & 9.868E-02&7.384E-02 & 5.559E-02 \\
& Rates & 0.4330&0.4183 & 0.4097 \\
\hline
\end{tabular}
\label{tab:trun2d}
\end{table}
In Table \ref{tab:con2d}, we show the convergence of the numerical scheme \eqref{eqmatrixform2}. Since $(-\Delta)^{s}u$ is unknown, we use $(-\Delta)^{s}_{h}u$ with $h=\frac{1}{2048}$ and $\theta=1$ to represent it approximately. For $s=0.2,0.3$, we take $c_{0,0}=1$ and $\theta=0.5$; the convergence rates presented in Table \ref{tab:con2d} are the same as the ones predicted by Theorem \ref{thmcon2}. For $s=0.4,0.8$, we choose $c_{0,0}=0$ and $\theta=1$; the convergence rates are higher than the predicted ones.
\begin{table}[htbp]
\centering
\caption{$l^{\infty}(\Omega)$ errors and convergence rates in two dimensions}
\begin{tabular}{ccccc}
\hline
$s\backslash 2/h$& 64 & 128&256 & 512 \\
\hline
0.2 & 6.837E-03 & 2.371E-03&8.040E-04 & 2.654E-04 \\
& Rates & 1.5281&1.5600 & 1.5993 \\
0.3 & 7.525E-03 & 3.030E-03&1.179E-03 & 4.419E-04 \\
& Rates & 1.3125&1.3618 & 1.4157 \\
0.4 & 1.122E-03 & 2.826E-04&7.286E-05 & 1.834E-05 \\
& Rates & 1.9886&1.9557 & 1.9901 \\
0.8 & 1.222E-03 & 3.049E-04&7.550E-05 & 1.837E-05 \\
& Rates & 2.0030&2.0138 & 2.0393 \\
\hline
\end{tabular}
\label{tab:con2d}
\end{table}
\end{example}
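For completeness, one way to assemble the (dense) block matrix $\mathbf{B}_{2}$ of Theorem \ref{thmcorstd2} from the weights, in a lexicographic ordering of the grid points, is sketched below; in practice one would instead exploit the block-Toeplitz structure (e.g.\ FFT-based matrix--vector products), but the dense form makes the indexing explicit. The ordering convention is our assumption.
\begin{verbatim}
import numpy as np

def assemble_B2(wM, N):
    # Row (p,q), column (i,j) of B_2 carries wM[p-i, q-j]; wM is the
    # dictionary of modified weights, cf. (eqdeffra2dwmd).
    n = N - 1
    B = np.empty((n * n, n * n))
    for p in range(1, N):
        for q in range(1, N):
            for i in range(1, N):
                for j in range(1, N):
                    B[(p - 1) * n + (q - 1),
                      (i - 1) * n + (j - 1)] = wM[p - i, q - j]
    return B

# Uh = np.linalg.solve(assemble_B2(wM, N), F - G) then realizes (eqmatrixform2).
\end{verbatim}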
\begin{example}
Finally, we use the discretization \eqref{eqdefFLH2Dmd} to simulate the mean exit time $u(\mathbf{x})$ of an orbit starting at $\mathbf{x}$ from a two-dimensional bounded domain $\Omega_{2}$. According to the Dynkin formula \cite{Applebaum.2009Lpasc,Sato.1999Lpaidd} of Markov processes, $u(\mathbf{x})$ satisfies \cite{Deng.2017Metaepftapwttplwt,Naeh.1990ADAttEP}
\begin{equation*}
\left\{ \begin{aligned} \mathcal{A}u(\mathbf{x})&=1,\quad {\rm in}~\Omega_{2},\\ u(\mathbf{x})&=0,\quad {\rm in}~\Omega_{2}^{c}, \end{aligned}\right.
\end{equation*}
where
\begin{equation*}
\mathcal{A}=\nabla P(\mathbf{x})\cdot\nabla +(-\Delta)^{s},
\end{equation*}
$\nabla$ denotes the gradient operator, and $P(\mathbf{x})$ is a given potential. Here, we take $h=1/64$, $c_{0,0}=100$, $\theta=\frac{1}{2}$, $\Omega_{2}=(-1,1)^{2}$, and $P(\mathbf{x})=\kappa(x_{1}^{2}+x_{2}^{2})$ with $\mathbf{x}=(x_{1},x_{2})$. In Figure \ref{figmeanexitt}, we show the mean exit time for $s=0.2,~0.4,~0.6,~0.8$ and $\kappa=0.5$. Comparing Figure \ref{fig:t12} with Figures \ref{fig:t14}, \ref{fig:t16}, and \ref{fig:t18}, we find that the mean exit time becomes longer and the boundary layer phenomena become weaker as $s$ increases. In Figure \ref{figmeanexitt2}, we show the mean exit time with $s=0.6$ and different $\kappa$. We find that the boundary layer phenomena become stronger and the mean exit time becomes longer as $\kappa$ increases.
\begin{figure}
\centering
\subfigure[$s=0.2$]{ \includegraphics[width=0.45\linewidth,angle=0]{s02-05-0}\label{fig:t12}}
\subfigure[$s=0.4$]{ \includegraphics[width=0.45\linewidth,angle=0]{s04-05-0}\label{fig:t14}}
\subfigure[$s=0.6$]{ \includegraphics[width=0.45\linewidth,angle=0]{s06-05-0}\label{fig:t16}}
\subfigure[$s=0.8$]{ \includegraphics[width=0.45\linewidth,angle=0]{s08-05-0}\label{fig:t18}}
\caption{Mean exit time with $\kappa=0.5$.}
\label{figmeanexitt}
\end{figure}
\begin{figure}
\centering
\subfigure[$\kappa=0.25$]{ \includegraphics[width=0.45\linewidth,angle=0]{s06-25-0}\label{fig:t25}}
\subfigure[$\kappa=1$]{ \includegraphics[width=0.45\linewidth,angle=0]{s06-100-0}\label{fig:t100}}
\subfigure[$\kappa=4$]{ \includegraphics[width=0.45\linewidth,angle=0]{s06-400-0}\label{fig:t400}}
\subfigure[$\kappa=8$]{ \includegraphics[width=0.45\linewidth,angle=0]{s06-800-0}\label{fig:t800}}
\caption{Mean exit time with $P(\mathbf{x})=\kappa(x_{1}^{2}+x_{2}^{2})$ and $s=0.6$.}
\label{figmeanexitt2}
\end{figure}
\end{example}
\section{Conclusions}
A fundamentally new idea for discretizing the fractional Laplacian is introduced and used to solve the inhomogeneous fractional Dirichlet problem. The effectiveness of the designed scheme is ensured by complete theoretical analyses and verified by numerical experiments. Specific applications to simulating the mean exit time of L\'evy processes under a harmonic potential are provided, and the effects of the strength of the potential and of the L\'evy exponent are uncovered.
\section*{Acknowledgements}
This work was supported by the National Natural Science Foundation of China under Grant No. 12071195, and the AI and Big Data Funds under Grant No. 2019620005000775.
\bibliographystyle{elsarticle-num}
{ "timestamp": "2021-01-28T02:21:19", "yymm": "2101", "arxiv_id": "2101.11378", "language": "en", "url": "https://arxiv.org/abs/2101.11378", "abstract": "We make the split of the integral fractional Laplacian as $(-\\Delta)^s u=(-\\Delta)(-\\Delta)^{s-1}u$, where $s\\in(0,\\frac{1}{2})\\cup(\\frac{1}{2},1)$. Based on this splitting, we respectively discretize the one- and two-dimensional integral fractional Laplacian with the inhomogeneous Dirichlet boundary condition and give the corresponding truncation errors with the help of the interpolation estimate. Moreover, the suitable corrections are proposed to guarantee the convergence in solving the inhomogeneous fractional Dirichlet problem and an $\\mathcal{O}(h^{1+\\alpha-2s})$ convergence rate is obtained when the solution $u\\in C^{1,\\alpha}(\\bar{\\Omega}^{\\delta}_{n})$, where $n$ is the dimension of the space, $\\alpha\\in(\\max(0,2s-1),1]$, $\\delta$ is a fixed positive constant, and $h$ denotes mesh size. Finally, the performed numerical experiments confirm the theoretical results.", "subjects": "Numerical Analysis (math.NA)", "title": "Finite difference method for inhomogeneous fractional Dirichlet problem", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759632491111, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405626956583 }
https://arxiv.org/abs/1603.06979
Wavelets and spectral triples for fractal representations of Cuntz algebras
In this article we provide an identification between the wavelet decompositions of certain fractal representations of $C^*$-algebras of directed graphs of M. Marcolli and A. Paolucci, and the eigenspaces of Laplacians associated to spectral triples constructed from Cantor fractal sets that are the infinite path spaces of Bratteli diagrams associated to the representations, with a particular emphasis on wavelets for representations of $\mathcal{O}_D$. In particular, in this setting we use results of J. Pearson and J. Bellissard, and A. Julien and J. Savinien, to construct first the spectral triple and then the Laplace--Beltrami operator on the associated Cantor set. We then prove that in certain cases, the orthogonal wavelet decomposition and the decomposition via orthogonal eigenspaces match up precisely. We give several explicit examples, including an example related to a Sierpinski fractal, and compute in detail all the eigenvalues and corresponding eigenspaces of the Laplace--Beltrami operators for the equal weight case for representations of Cuntz algebras, and in the uneven weight case for certain representations of $\mathcal{O}_2$, and show how the eigenspaces and wavelet subspaces at different levels are related.
\section{Introduction}
In the 2011 paper~\cite{MP}, M.~Marcolli and A.~Paolucci, motivated by work of A.~Jonsson \cite{jonsson} and R.~Strichartz~\cite{Strichartz}, studied representations of Cuntz--Krieger $C^{\ast}$-algebras on Hilbert spaces associated to certain fractals, and constructed what they termed ``wavelets'' in these Hilbert spaces. These wavelets were so called because they provided an orthogonal decomposition of the Hilbert space, and the partial isometries associated to the $C^*$-algebra in question gave ``scaling and translation'' operators taking one orthogonal subspace to another. The results of Marcolli and Paolucci were generalized first to certain fractal representations of $C^{\ast}$-algebras associated to directed graphs and then to representations of higher-rank graph $C^{\ast}$-algebras $C^*(\Lambda)$ by some of the authors of this article in~\cite{FGKP} and~\cite{FGKP-survey}. The $k$-graph $C^{\ast}$-algebras $C^*(\Lambda)$ of Robertson and Steger~\cite{RS} are particular examples of these higher-rank graph algebras, and it was shown in~\cite{FGKP} that for these Robertson--Steger $C^{\ast}$-algebras there is a faithful representation of $C^*(\Lambda)$ on $L^2(X, \mu)$, where $X$ is a fractal space with Hausdorff measure $\mu$. Moreover, this Hilbert space also admits a wavelet decomposition -- that is, an orthogonal decomposition such that the representation of $C^*(\Lambda)$ is generated by ``scaling and translation'' operators that move between the orthogonal subspaces. As in Marcolli and Paolucci's original construction, the wavelets in \cite{FGKP} and \cite{FGKP-survey} had a characteristic structure, in that they were chosen to be orthogonal to a specific type of function in the path space that could be easily recognized. Earlier, the theory of spectral triples and Fredholm modules of A. Connes had generated great interest \cite{connes}, and such objects had been constructed for dense subalgebras of several different classes of $C^{\ast}$-algebras, including the construction of spectral triples by E.~Christensen and C.~Ivan on the $C^{\ast}$-algebras of Cantor sets~\cite{Chr-Iva}, which in turn motivated the work of J.~Pearson and J.~Bellissard, who constructed spectral triples and related Laplacians on ultrametric Cantor sets \cite{pearson-bellissard-ultrametric}. Expanding on the work of Pearson and Bellissard, A.~Julien and J.~Savinien studied similarly constructed Laplacians on fractal sets constructed from substitution tilings \cite{julien-savinien-transversal}. In both the papers of Pearson and Bellissard and of Julien and Savinien, after the Laplacian operators were described, spanning sets of functions for the eigenspaces of the Laplacian were explicitly described in terms of differences of characteristic functions. It became apparent to the authors of the current paper that certain components of the wavelet system as described in \cite{FGKP} and the explicit eigenfunctions given by Julien and Savinien in \cite{julien-savinien-transversal} seemed related, and one of the aims of this paper is to analyze this similarity in the case of $C^{\ast}$-algebras of directed graphs as represented on their infinite path spaces. Indeed, we will show that under appropriate hypotheses, each orthogonal subspace described in the wavelet decomposition of~\cite{FGKP} can be expressed as a union of certain of the eigenspaces of the Laplace--Beltrami operator from~\cite{julien-savinien-transversal}.
We suspect that the hypotheses required for this result can be substantially weakened from their statement in Theorem~\ref{MPwaveletsthm} below, and plan to explore this question in future work~\cite{FGKJP-spectral-triples}.

More broadly, the goal of this paper is to elucidate the connections between graph $C^*$-algebras, wavelets on fractals, and spectral triples. We focus here on the case of one particular directed graph, namely the graph $\Lambda_D$ which has $D$ vertices and, for each pair $(v,w)$ of vertices, a unique edge $e$ with source $w$ and range $v$. Again, many of the results presented here will hold in greater generality; see the forthcoming paper \cite{FGKJP-spectral-triples} for details. In this paper we introduce the graph $C^*$-algebra (also known as a Cuntz algebra) associated to $\Lambda_D$; discuss the associated representations on fractal spaces as in \cite{MP, FGKP}; and present the associated spectral triples and Laplace--Beltrami operators associated to (fractal) ultrametric Cantor sets as adapted from recent work by Julien--Savinien, Pearson and Bellissard, Christensen et al., see e.g.~\cite{julien-savinien-transversal, pearson-bellissard-ultrametric, Chr-Ivan-Schr}. In particular we show in Theorem~\ref{MPwaveletsthm} that when one constructs the Laplace--Beltrami operator of \cite{julien-savinien-transversal} associated to the infinite path space of $\Lambda_D$ (which is an ultrametric Cantor set), the wavelets in \cite{MP} are exactly the eigenfunctions of the Laplacian. We then compute in detail all the eigenfunctions and eigenvalues of the Laplace--Beltrami operator associated to a representation of the Cuntz algebra $\mathcal{O}_D$ on a Sierpinski type fractal set (see Section 2.6 of \cite{MP} and Section \ref{sec:sierp-rep} below for the definition of this representation). For several different choices of a measure on the infinite path space of $\Lambda_D$, we also compute all the eigenfunctions and eigenvalues of the associated Laplace--Beltrami operator; in the case when $D=2$ and this measure arises from assigning the two vertices of $\Lambda_D$ the weights $r$ and $1-r$ for some $r \in [0,1]$, we compare these results to wavelets associated to certain representations of ${\mathcal O}_2$ analyzed in Section 3 of \cite{FGKP-survey}. In further work \cite{FGKJP-spectral-triples} we will generalize these constructions to more general directed graphs and to higher-rank graphs, and also explain how to generalize certain other spectral triples associated to directed graphs, such as those described in \cite{CPR}, \cite{Chr-Iva}, \cite{goffme}, \cite{Whittaker}, and \cite{julien-putnam}, to higher-rank graphs.

The structure of the paper is as follows. In Section 2, we review the definition of directed graphs, with an emphasis on finite graphs and the construction of both the infinite path space and Bratteli diagrams associated to finite directed graphs, the first as described in \cite{MP} among other places, and the second as described in~\cite{IR}. When the incidence matrices for our graphs are $\{0,1\}$ matrices, the infinite path space can be defined in terms of both edges and vertices, and we describe this correspondence, together with the identification of the infinite path space $\Lambda^{\infty}$ with the associated infinite path space of the Bratteli diagram $\partial \mathcal{B}_\Lambda$ for a finite directed graph $\Lambda.$ In so doing, we note that these spaces are Cantor sets. We also review the semibranching function systems of K.
Kawamura \cite{kawamura} and Marcolli and Paolucci \cite{MP} in this section, with an emphasis on those systems giving rise to representations of the Cuntz algebras ${\mathcal O}_D.$ In Section 3, we review representations of ${\mathcal O}_D$ on the $L^2$-spaces of Sierpinski fractals first constructed by Marcolli and Paolucci in \cite{MP}, and show that these representations are equivalent to the standard positive monic representations of ${\mathcal O}_D$ defined by D. Dutkay and P. Jorgensen in \cite{dutkay-jorgensen-monic}. In Section 4, we review the construction of spectral triples associated to weighted Bratteli diagrams, described by Pearson and Bellissard in \cite{pearson-bellissard-ultrametric} and Julien and Savinien in \cite{julien-savinien-transversal}, and provide explicit details of their construction for a variety of weights on the Bratteli diagram $\partial \mathcal{B}_D$ associated to the graph $\Lambda_D$. We describe in Theorem \ref{thm-Dixmier-trace-cuntz-algebra-O_D} the conditions under which the measure on $\partial \mathcal{B}_{D}$ agrees with the measure introduced by Marcolli and Paolucci, which we describe in Section 2. We also introduce the Laplace--Beltrami operator of Pearson and Bellissard \cite{pearson-bellissard-ultrametric} in this setting and review the specific formulas for its eigenvalues and associated eigenspaces. In Section 5 we review the construction of Marcolli and Paolucci's wavelets associated to representations of Cuntz--Krieger $C^*$-algebras on the $L^2$-spaces of certain fractal spaces, using the notation for these subspaces provided in the earlier papers~\cite{FGKP, FGKP-survey}, with an emphasis on representations of the Cuntz $C^*$-algebra $\mathcal{O}_D$, and prove our main theorem (Theorem \ref{MPwaveletsthm}), which is that in all cases that we consider, the wavelet subspaces for Marcolli and Paolucci's representations can be identified with the eigenspaces of the Laplace--Beltrami operator associated to the related Bratteli diagram. In Section 6, we examine certain representations of ${\mathcal O}_D$ where the weights involved are unevenly distributed among the vertices of $\Lambda_D$. Specializing to the study of uneven weights associated to representations of ${\mathcal O}_2,$ we compute explicitly the associated eigenvalues and eigenspaces of the Laplace--Beltrami operator in this case, and provide the correspondence between these eigenspaces and certain wavelet spaces for monic representations of ${\mathcal O}_2$ first computed in~\cite{FGKP-survey}.

This work was partially supported by a grant from the Simons Foundation (\#316981 to Judith Packer).
\section{Cantor sets associated to directed graphs}\label{sec:Cantor_dir}
We begin with a word about conventions. Throughout this paper, $\mathbb{N}$ consists of the positive integers, $\mathbb{N} = \{1, 2, 3, \ldots \}$; we use $\mathbb{N}_0$ to denote $\{0, 1, 2, 3, \ldots \}$. The symbol $\mathbb{Z}_N$ indicates the set $\{0, \ldots, N-1\}$. The Bratteli diagrams we discuss below do not have a root vertex; indeed, we think of the edges in a Bratteli diagram as pointing towards the zeroth level of the diagram. See Remark \ref{rmk-root} for more details.
\subsection{Directed graphs and Bratteli diagrams}
\begin{defn} \label{def:directed-graph}
A \emph{directed graph} $\Lambda$ consists of a set of vertices $\Lambda^0$ and a set of edges $\Lambda^1$ and range and source maps $r,s:\Lambda^1\to \Lambda^0$.
We say that $\Lambda$ is \emph{finite} if
\[ \Lambda^n= \{ e_1 e_2 \ldots e_n: e_i \in \Lambda^1, r(e_i) = s(e_{i-1}) \ \forall \; i\} \]
is finite for all $n\in\mathbb{N}$. If $\gamma = e_1 \cdots e_n$, we define $r(\gamma)= r(e_1)$ and $s(\gamma) = s(e_n)$, and we write $|\gamma| = n$. By convention, a path of length $0$ consists of a single vertex (no edge): if $|\gamma| = 0$ then $\gamma = (v)$ for some vertex $v$. We say that $\Lambda$ has \emph{no sources} if $v\Lambda^n =\{\gamma\in\Lambda^n: r(\gamma)=v\}\ne \emptyset$ for all $v\in\Lambda^0$ and all $n \in \mathbb{N}$. We say that $\Lambda$ is \emph{strongly connected} if
\[ v\Lambda w = \bigcup_{n\in \mathbb{N}} \{\gamma \in v \Lambda^n: s(\gamma) = w \}\ne \emptyset\]
for all $v,w\in\Lambda^0$. In a slight abuse of notation, if $\Lambda^n$ denotes the set of finite paths of length $n$, we denote by $\Lambda = \cup_{n\in \mathbb{N}_0} \Lambda^n$ the set of all finite paths, and by $\Lambda^\infty$ the set of infinite paths of a finite directed graph $\Lambda$:
\[ \Lambda^\infty = \biggl\{ (e_i)_{i\in\mathbb{N}} \in \prod_{i=1}^\infty \Lambda^1: s(e_i) = r(e_{i+1}) \ \forall \ i \in \mathbb{N} \biggr\}. \]
For $\gamma \in \Lambda$, we write $[\gamma] \subseteq \Lambda^\infty$ for the set of infinite paths with initial segment~$\gamma$:
\begin{equation}\label{eq:cylin}
[e_1 \ldots e_n] = \bigl\{ (f_i)_i \in \Lambda^\infty: f_i = e_i \ \forall \ 1 \leq i \leq n \bigr\}.
\end{equation}
If $\gamma = (v)$ is a path of length 0, then $[\gamma] = [v] = \{ (f_i)_i \in \Lambda^\infty : r(f_1) = v \}$. Given a finite directed graph $\Lambda$, the \emph{vertex matrix} $A$ of $\Lambda$ is a $\Lambda^0 \times \Lambda^0$ matrix with entry $A(v,w) = | v\Lambda^1 w|$ counting the number of edges with range $v$ and source $w$ in $\Lambda$.
\end{defn}
\begin{rmk} \label{rmk:top-on-inf-path}
As shown in \cite{kprr} Corollary 2.2, if $\Lambda$ is finite and source-free, the cylinder sets $\{ [\gamma]: \gamma \in \Lambda\}$ form a compact open basis for a locally compact, totally disconnected, Hausdorff topology on $\Lambda^\infty$.\footnote{Note that if $\Lambda$ is finite, it is also row-finite, according to the definition given in Section 2 of \cite{kprr}.} If $\Lambda$ is finite, $\Lambda^\infty$ is also compact.
\end{rmk}
According to \cite{aHLRS3} Proposition 8.1, a strongly connected finite directed graph $\Lambda$ has a distinguished Borel measure $M$ on the infinite path space $\Lambda^\infty$ which is given in terms of the spectral radius $\rho(A)$ of the vertex matrix $A$:
\begin{equation}\label{eq:measure}
M([\gamma])=\rho(A)^{{-\vert\gamma\vert}}P_{s(\gamma)},
\end{equation}
where $(P_v)_{v\in\Lambda^0}$ is the unimodular Perron--Frobenius eigenvector of the vertex matrix $A$. (See Section~2 of \cite{FGKP} for details.)
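Concretely, the measure \eqref{eq:measure} is easy to compute from the vertex matrix. The Python sketch below normalizes the Perron--Frobenius eigenvector so that its entries sum to $1$ (one common reading of ``unimodular''; \cite{aHLRS3} fixes the precise normalization, so this is an assumption, as is the function name).
\begin{verbatim}
import numpy as np

def cylinder_measure(A, vertices):
    # M([gamma]) = rho(A)^(-|gamma|) * P_{s(gamma)} for a finite path given
    # as its vertex string v_0 v_1 ... v_n (so |gamma| = n edges).
    A = np.asarray(A, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    rho = vals[k].real
    P = np.abs(vecs[:, k].real)
    P = P / P.sum()                       # assumed normalization
    return rho ** (-(len(vertices) - 1)) * P[vertices[-1]]

# For the graph in Example 2.5 below (A the 2x2 all-ones matrix):
# rho = 2 and P = (1/2, 1/2), so M([v w]) = 2**(-1) * (1/2) = 1/4.
\end{verbatim}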
\begin{defn} \label{def:bratteli-diagram}
Let $\Lambda$ be a finite directed graph with no sources. The \emph{Bratteli diagram} associated to $\Lambda$ is an infinite directed graph $\mathcal{B}_\Lambda$, with the set of vertices $V=\sqcup_{n\ge 0} V_n$ and the set of edges $E=\sqcup_{n\ge 1}E_n$ such that
\begin{itemize}
\item[(a)] For each $n\in \mathbb{N}_0$, $V_n\cong\Lambda^0$ and $E_{n+1}\cong\Lambda^1$.
\item[(b)] There are a range map and a source map $r,s:E\to V$ such that $r(E_n)\subseteq V_{n-1}$ and $s(E_n) \subseteq V_n$ for all $n\in \mathbb{N}$.
\end{itemize}
A \emph{path} $\gamma$ of length $n\in\mathbb{N}$ in $\mathcal{B}_\Lambda$ is an element
\[ e_1e_2\dots e_n =(e_1,e_2,\dots, e_n)\in \prod_{i=1}^{n}E_{i} \]
which satisfies $|e_i|=1$ $\forall i$, and $s(e_i)=r(e_{i+1})$ for all $1\le i\le n-1$. We denote by $F\mathcal{B}_\Lambda$ the set of all finite paths in the Bratteli diagram $\mathcal{B}_\Lambda$, and by $F^n\mathcal{B}_\Lambda$ the set of all finite paths in the Bratteli diagram $\mathcal{B}_\Lambda$ of length $n$. We denote by $\partial \mathcal{B}_\Lambda$ the set of infinite paths in the Bratteli diagram $\mathcal{B}_\Lambda$:
\[ \partial \mathcal{B}_\Lambda = \{ e_1e_2\dots =(e_1,e_2,\dots)\in \prod_{n=1}^{\infty}E_n : |e_i|=1,\ s(e_i) = r(e_{i+1}) \ \forall \ i \in \mathbb{N}\}. \]
Given a (finite or infinite) path $\gamma = e_1 e_2 \ldots$ in $\mathcal{B}_\Lambda$ and $m \in \mathbb{N}$, we write
\[\gamma[0,m]= e_1 e_2 \cdots e_m.\]
If $m = 0$ we write $\gamma[0,0] = r(\gamma).$
\end{defn}
\begin{rmk}
Any finite path $\gamma$ of length $n$ in a directed graph (or a Bratteli diagram) is given by a string of $n$ edges $e_1e_2\dots e_n$, which can be written uniquely as a string of vertices $v_0v_1\dots v_n$ such that $r(e_i)=v_{i-1}$ and $s(e_i)=v_i$ for $1\le i\le n$. Conversely, if the vertex matrix $A$ has all entries either $0$ or $1$ (as will be the case in all of our examples), a given string of vertices $v_0 v_1 \ldots v_n$ with $v_i \in V_n$ for all $n \in \mathbb{N}_0$ corresponds to at most one string of edges, and hence at most one finite path $\gamma$. Thus even though our formal definition of a path is given as a string of edges, we sometimes use the notation of a string of vertices for a path.
\end{rmk}
\begin{rmk} \label{rmk-root}
Note that our description of a Bratteli diagram is different from the one in \cite{julien-savinien-transversal} and \cite{bezuglyi-jorgensen}. First, the edges in $E_n$ in \cite{julien-savinien-transversal} and in \cite{bezuglyi-jorgensen} have source in $V_n$ and range in $V_{n+1}$; in other words, they point in the opposite direction from our edges. More substantially, though, in \cite{julien-savinien-transversal} and \cite{bezuglyi-jorgensen} every finite (or infinite) path in a Bratteli diagram starts from a vertex called a root vertex, $\circ$, and any finite path that ends in $V_n$ is given by $\epsilon_{r(e_1)} e_1 e_2\dots e_n$, where for each vertex $v \in V_0$, there is a unique edge $\epsilon_v$ connecting $\circ$ and $v$. This implies that a finite path that ends in $V_n$ consists of $n+1$ edges in their Bratteli diagram. However, our description of a Bratteli diagram in Definition \ref{def:bratteli-diagram} does not include a root vertex, and a finite path that ends in $V_n$ consists of $n$ edges. Thus, when we discuss Theorem~4.3 of \cite{julien-savinien-transversal} in Sections \ref{subsec-Laplace-Beltrami Operator-O-D} and \ref{sec:spect-triples-O-2} below, we will need to introduce a single path, the ``empty path'' of length $-1$, which we will denote by $\gamma[0,-1]$ for any and all paths $\gamma \in F\mathcal{B}_\Lambda$. The cylinder set of this path is $[\circ] = \partial \mathcal{B}_\Lambda$ when we translate Theorem~4.3 of \cite{julien-savinien-transversal} to our setting.
\end{rmk}
\begin{rmk} \label{rmk:ident-graph-bratteli}
As is suggested by the notation, a finite directed graph and its associated Bratteli diagram encode the same information in their sets of finite and infinite paths.
We wish to emphasize this correspondence in this paper, to illuminate the way tools from a variety of disciplines combine to give us information about wavelets on fractals. \end{rmk} \begin{rmk} If $\Lambda$ is a strongly connected finite directed graph, then $\Lambda$ has no sources by Lemma 2.1 of \cite{aHLRS3}. Hence every vertex of the associated Bratteli diagram $\mathcal{B}_\Lambda$ also receives an edge. \end{rmk} \begin{example}\label{ex1} Consider a directed graph $\Lambda$ with two vertices $v,w$ and four edges $f_1,f_2, f_3$ and $f_4$ given as follows: \[ \begin{tikzpicture}[scale=1.5] \node[inner sep=0.5pt, circle] (v) at (0,0) {$v$}; \node[inner sep=0.5pt, circle] (w) at (1.5,0) {$w$}; \draw[-latex, thick] (w) edge [out=50, in=-50, loop, min distance=30, looseness=2.5] (w); \draw[-latex, thick] (v) edge [out=130, in=230, loop, min distance=30, looseness=2.5] (v); \draw[-latex, thick] (w) edge [out=240, in=-60] (v) ; \draw[-latex, thick] (v) edge [out=60, in=120] (w); \node at (-0.7, 0) {\color{black} $f_1$}; \node at (0.7, 0.7) {\color{black} $f_2$}; \node at (0.7, -0.3) {\color{black} $f_3$}; \node at (2.2, 0) {\color{black} $f_4$}; \end{tikzpicture} \] Note that $\Lambda$ is finite and strongly connected, and (consequently) has no sources. The vertex matrix $A$ is given by \[ A=\begin{pmatrix} 1 & 1\\ 1& 1\end{pmatrix}, \] and the associated Bratteli diagram $\mathcal{B}_\Lambda$ is \[ \begin{tikzpicture}[scale=1.5] \node[inner sep=0.5pt, circle] (v0) at (0,1) {$v^0$}; \node[inner sep=0.5pt, circle] (w0) at (0,0) {$w^0$}; \node[inner sep=0.5pt, circle] (v1) at (1,1) {$v^1$}; \node[inner sep=0.5pt, circle] (w1) at (1,0) {$w^1$}; \node[inner sep=0.5pt, circle] (v2) at (2,1) {$v^2$}; \node[inner sep=0.5pt, circle] (w2) at (2,0) {$w^2$}; \node[inner sep=0.5pt, circle] (v3) at (3,1) {$v^3$}; \node[inner sep=0.5pt, circle] (w3) at (3,0) {$w^3$}; \draw[-latex, thick] (v1) edge (v0); \draw[-latex, thick] (v1) edge (w0); \draw[-latex, thick] (w1) edge (v0); \draw[-latex, thick] (w1) edge (w0); \draw[-latex, thick] (v2) edge (v1); \draw[-latex, thick] (v2) edge (w1); \draw[-latex, thick] (w2) edge (v1); \draw[-latex, thick] (w2) edge (w1); \draw[-latex, thick] (v3) edge (v2); \draw[-latex, thick] (v3) edge (w2); \draw[-latex, thick] (w3) edge (v2); \draw[-latex, thick] (w3) edge (w2); \node at (3.2, 1) {$.$}; \node at (3.3,1) {$.$}; \node at (3.4, 1) {$.$}; \node at (3.5,1) {$.$}; \node at (3.2, 0) {$.$}; \node at (3.3,0) {$.$}; \node at (3.4, 0) {$.$}; \node at (3.5,0) {$.$}; \end{tikzpicture} \] \end{example} \begin{prop} Let $\Lambda$ be a finite directed graph. If every vertex $v$ in the directed graph $\Lambda$ receives two distinct infinite paths, then $\Lambda^\infty$ (equivalently, $\partial \mathcal{B}_\Lambda$) has no isolated points and hence it is a Cantor set. \end{prop} \begin{proof} Recall that a Cantor set is a totally disconnected, compact, perfect topological space. Moreover, $\Lambda^\infty$ is always compact Hausdorff and totally disconnected by Corollary 2.2 of \cite{kprr}, so it will suffice to show that $\Lambda^\infty$ has no isolated points. Suppose $\Lambda^\infty$ has an isolated point $ (e_i)_{i\in\mathbb{N}}$. Since the cylinder sets form a basis for the topology on $\Lambda^\infty$, this implies that there exists $n \in \mathbb{N}$ such that $[e_1 \cdots e_n]$ only contains $(e_i)_{i\in\mathbb{N}}$. In other words, for each $m \geq n$, there is only one infinite path with range $ s(e_m)$, contradicting the hypothesis of the proposition. 
\end{proof} \begin{cor} \label{cor:suff-cond-cantor} If $\Lambda$ is a finite directed graph with $\{0,1\}$ vertex matrix $A$ and every row sum of $A$ is at least 2, then $\Lambda^\infty$ (equivalently, $\partial \mathcal{B}_\Lambda$) is a Cantor set. \end{cor} \begin{proof} Note that the sum of the $v$th row of $A$ represents the number of edges in $\Lambda$ with range $v$. If every vertex receives at least two edges, then any cylinder set $[\gamma]$ will contain infinitely many elements, so $\Lambda^\infty$ has no isolated points. \end{proof} Corollary \ref{cor:suff-cond-cantor} tells us that the infinite path space of Example~\ref{ex1} is a Cantor set. \subsection{Cuntz algebras and representations on fractal spaces} \begin{defn}[{\cite[Definition~2.1]{dutkay-jorgensen-monic}}] Fix an integer $D>1.$ The {\em Cuntz algebra ${\mathcal O}_D$} is the universal $C^{\ast}$-algebra generated by isometries $\{T_i\}_{i=0}^{D-1}$ satisfying {the Cuntz relations} \begin{equation} \label{CK1} T_j^{\ast}T_i\;=\;\delta_{ij}\text{I}, \end{equation} and \begin{equation} \label{CK3} \sum_{i=0}^{D-1}T_iT_i^{\ast}\;=\;\text{I}. \end{equation} \end{defn} {The above definition of $\mathcal{O}_D$ is equivalent to the definition of $\mathcal{O}_{A_D}$ in the beginning of section 2 of \cite{MP} associated to the matrix $A_D$ that is a $D\times D$ matrix with 1 in every entry: \begin{equation}\label{matrixAD} A_{D} = \begin{pmatrix} 1 & 1 & 1 & ...&1\\ 1 & 1 & 1 & ...&1\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & 1 & 1 & ...&1 \\ 1 & 1 & 1 & ...&1\end{pmatrix}. \end{equation} As had been done previously by K. Kawamura \cite{kawamura}, Marcolli and Paolucci constructed representations of $\mathcal{O}_D$ (and more generally, the Cuntz--Krieger algebras ${\mathcal O}_A$ associated to a matrix $A$) by employing the method of ``semibranching function systems.'' We note for completeness that the semibranching function systems of Kawamura~\cite{kawamura} were for the most part defined on finite Euclidean spaces, e.g.\@ the unit interval $[0,1],$ whereas the semibranching function systems used by Marcolli and Paolucci~\cite{MP} were mainly defined on Cantor sets. \begin{defn}[{cf.~\cite{kawamura}, \cite[Definition 2.1]{MP}, \cite[Definition 2.16]{bezuglyi-jorgensen} }] \label{semibranchingdef} Let $(X,\mu)$ be a measure space, {fix an integer $D>1$ and let} $\{\sigma_i:X\to X\}_{i\in\mathbb Z_D}$ be a collection of $\mu$-measurable maps. The family of maps $\{\sigma_i\}_{i\in\mathbb Z_D}$ is called a \emph{semibranching function system} on $(X,\mu)$ with coding map $\sigma:X\to X$ if the following conditions hold: \begin{enumerate} \item For $i\in \mathbb{Z}_D$, set $R_{[i]}=\sigma_i(X)$. Then we have \[ \mu(X\backslash \cup_{i\in\mathbb Z_D}R_{[i]})=0\;\;\;\text{and}\;\;\; \mu(R_{[i]}\cap R_{[j]})=0\;\;\text{for}\;\;i\not=j. 
\] \item For $i\in \mathbb{Z}_D$, we have $\mu\circ \sigma_i \ll \mu$ and {the Radon--Nikodym derivative} satisfies \begin{equation} \label{semibranching} \frac{d(\mu\circ \sigma_i)}{d\mu}\;>\;0,\;\mu\text{-a.e.} \end{equation} \item For all $i\in \mathbb{Z}_D$, we have $$\sigma\circ \sigma_i(x)\;=\;x, \ \text{$\mu$-a.e}.$$ \end{enumerate} \end{defn} Kawamura and then Marcolli and Paolucci observed the following relationship between semibranching function systems and representations of ${\mathcal O}_D:$ \begin{prop}[{cf.~\cite[Proposition 2.4]{MP}, \cite[Theorem~2.22]{dutkay-jorgensen-monic}}] \label{MPrepprop} Let $(X,\mu)$ be a measure space, and let $\{\sigma_i:X\to X\}_{i\in\mathbb Z_D}$ be a semibranching function system on $(X,\mu)$ with coding map $\sigma:X\to X$. For each $i\in \mathbb Z_D$ define $S_i:L^2(X,\mu)\to L^2(X,\mu)$ by $$S_i(\xi)(x)\;=\;\chi_{R_{[i]}}(x)\Big(\frac{d\mu\circ \sigma_i}{d\mu}(\sigma(x))\Big)^{-\frac{1}{2}}\xi(\sigma(x))\;\;\text{for $\xi\in L^2(X,\mu)$ and $x\in X$}.$$ Then the family $\{S_i\}_{i\in \mathbb Z_D}$ satisfies the Cuntz relations Equations (\ref{CK1}) and (\ref{CK3}), and therefore generates a representation of the Cuntz algebra ${\mathcal O}_D.$ \end{prop} \begin{example} \label{ex:inf-path-repn} Let $\Lambda_D$ be the directed graph associated to the vertex matrix $A_D$. We can define a semibranching function system $\{(\sigma_i)_{i\in \mathbb{Z}_D}, \sigma\}$ on the Cantor set $(\Lambda_D^\infty, M)$ by thinking of elements of $\Lambda_D^\infty$ as sequences of vertices $(v_i)_{i \in \mathbb{N}_0}$ with $v_j \in \mathbb{Z}_D \ \forall \, j$. With this convention, we set \[ \sigma_i (v_0 v_1 v_2 \ldots ) = (i v_0 v_1 v_2 \ldots) \text{ and } \sigma(v_0 v_1 \ldots) = (v_1 v_2 \ldots).\] Then the Radon--Nikodym derivative $\frac{d(M \circ \sigma_i)}{dM}$ is given by \[ \frac{d(M \circ \sigma_i)}{dM} = \frac{1}{D}\] since the cylinder set $R_{[i]}$ has measure $\frac{1}{D}$ for all $i$, and the associated operators $S_i$ are given by \[ S_i (\xi)( v_0 v_1 v_2 \ldots) = \left\{ \begin{array}{cl} \sqrt{D} \xi( v_1 v_2 \ldots) & \text{ if } v_0 = i \\ 0 & \text{ else.} \end{array} \right. \] This representation of $\mathcal{O}_D$ is faithful by Theorem 3.6 of \cite{FGKP}, since every cycle in $\Lambda_D$ has an entrance. \end{example} \begin{example}[{cf.~\cite[Proposition 2.6]{MP}}] \label{ex:mp-fractal} Take {an integer} $D>1$, and let $K_D = \prod_{j=1}^{\infty}[\mathbb Z_D]_j$, {which is called the \emph{Cantor group} on $D$ letters in Definition~2.3 of~\cite{dutkay-jorgensen-monic}. As described in Section~2 of \cite{FGKP-survey}, $K_D$ has a Cantor set topology which is generated by cylinder sets \[ [n]=\{(i_j)_{j=1}^\infty\in K_D : i_1=n\}. \] According to Section~3 of \cite{dutkay-jorgensen-monic}, there is a measure $\nu_D$ on $K_D$ given by \[ \nu_D([n_1n_2\dots n_m])=\prod_{j=1}^m \frac{1}{D}=\frac{1}{D^m}. \] Note that $\nu_D$ is a Borel measure on $K_D$ with respect to the cylinder-set Cantor topology. } For each $j \in \mathbb{Z}_D$, define $\sigma_j$ {on $K_D$} by \[ \sigma_j\left( (i_1i_2\cdots i_k\cdots )\right )\;=\; (ji_1i_2\cdots i_k\cdots ). 
\] Then \[ R_{[j]} = {\sigma_j(K_D)}=\{(j i_1 i_2\cdots i_k\cdots ) \; : (i_1 i_2 \cdots i_k \cdots ) \in K_D\}=[j], \] and, denoting by $\sigma$ the one-sided shift on $K_D, \ \sigma\left( (i_1i_2\cdots i_k\cdots )\right )= (i_2i_3\cdots i_{k+1}\cdots ),$ we have that $\sigma\circ \sigma_j(x)=x$ for all $x\in K_D$ and $j\in \mathbb Z_D.$ Marcolli and Paolucci show in Section 2.1 of \cite{MP} that this data gives a semibranching function system. Moreover, since the measure of each set $R_{[i]}$ is $\frac{1}{D}$, the Radon--Nikodym derivative $\frac{d(\nu_D\circ \sigma_i)}{d\nu_D}$ satisfies \[ \frac{d(\nu_D\circ \sigma_i)}{d\nu_D}\;=\;\frac{1}{D}. \] {Thus, Proposition~\ref{MPrepprop} implies that there is a family of operators $\{S_i\}_{i\in \mathbb{Z}_D} \subseteq B(L^2(K_D, \nu_D))$ that generates a representation of the Cuntz algebra $\mathcal{O}_D$.} Moreover, this representation is faithful by Theorem 3.6 of \cite{FGKP}. To see this, let $\Lambda_D$ denote the directed graph with vertex matrix $A_D$, and note that labeling the vertices of $\Lambda_D$ by $\{0, 1, \ldots, D-1\}$ allows us to identify an infinite path $(e_i)_{i\in \mathbb{N}} \in \partial \mathcal{B}_D$ with the sequence $(r(e_i))_{i\in \mathbb{N}} \in K_D$. Moreover, in this case the Perron--Frobenius eigenvector associated to $A_D$ is \[P = \biggl( \frac{1}{D}, \frac{1}{D}, \ldots, \frac{1}{D} \biggr) ,\] and consequently \[ M([e_1 \ldots e_n]) = \frac{1}{D^{n+1}} = \nu_D\bigl( [r(e_1) r(e_2) \cdots r(e_n) s(e_n)] \bigr).\] Since the cylinder sets generate the topology on both $K_D$ and on $\partial \mathcal{B}_D$, this identification is measure-preserving. Thus, the representation $\{S_i\}_{i \in \mathbb{Z}_D}$ of $\mathcal{O}_D$ on $L^2(K_D, \nu_D)$ is equivalent to the infinite path representation of Example \ref{ex:inf-path-repn}. We can apply Theorem 3.6 of \cite{FGKP} to this latter representation to conclude that it is faithful, since every cycle in the graph $\Lambda_D$ associated to $A_D$ has an entrance. In other words, \[C^*\bigl( \{S_i\}_{i\in \mathbb{Z}_D} \bigr) \cong \mathcal{O}_D.\] \end{example} \section{The action of ${\mathcal O}_D$ on $L^2({\mathbb S}_A,H)$}\label{subs:Sierp-fractal} As mentioned in the Introduction, we wish to show that when we represent $\mathcal{O}_D$ on a 2-dimensional Sierpinski fractal $\mathbb{S}_A$, this representation of $\mathcal{O}_D$ also gives rise to wavelets. We will then compare these wavelets with the eigenfunctions of the Laplace--Beltrami operator $\Delta_s$ of \cite{julien-savinien-transversal} that is associated to {$A_D$,} the $D \times D$ matrix of all 1's (that is, the matrix associated to the Cuntz algebra $\mathcal{O}_D$). To compare these functions, we will establish in this section a measure-preserving isomorphism between $\mathbb{S}_A$ and the infinite path space of the directed graph (equivalently, Bratteli diagram) associated to $\mathcal{O}_D$. (See Theorem~\ref{thm:measure_preserving} below.) \subsection{The Sierpinski fractal representation for $\mathcal{O}_D$} \label{sec:sierp-rep} Let $N$ and $D$ be positive integers with $N\geq 2,$ and let $A$ be an $N\times N$ $\{0,1\}$-matrix with exactly $D$ entries equal to $1$. 
Suppose that the nonzero entries of $A$ are in positions $\{(a_j,b_j)\}_{j=0}^{D-1},$ where $a_j, b_j\in\{0,1,\cdots,N-1\}$ and in a lexicographic ordering we have $(a_0,b_0)<(a_1,b_1)<\cdots <(a_{D-1},b_{D-1}).$ Here we say $(a,b)<(a',b')$ if either $a<a'$ or if $a=a'$ and $b<b'.$ In Section~2.6 of \cite{MP}, Marcolli and Paolucci defined the Sierpinski fractal associated to $A,\;\mathbb{S}_A\;\subset [0,1]^2,$ as follows: \[ {\mathbb S}_A = \; \biggl\{ (x,y)= \biggl( \sum_{i=1}^\infty \frac{x_i}{N^i}, \sum_{i=1}^\infty \frac{y_i}{N^i} \biggr) : \; x_i, y_i \in \mathbb Z_N, \; A_{x_i,y_i}=1, \; \forall i\in \mathbb N \biggr\}. \] For each $j \in \mathbb{Z}_D$, we define \begin{equation}\label{eq:MPsemi_fts_Sierp} \tau_{j}(x,y) = \biggl( \frac{x}{N}+\frac{a_j}{N}, \frac{y}{N}+\frac{b_j}{N} \biggr) \quad \text{and}\quad \tau(x,y) = \biggl( N \Bigl( x-\frac{x_1}{N} \Bigr), N \Bigl( y-\frac{y_1}{N} \Bigr) \biggr). \end{equation} Lemma 2.23 of \cite{MP} tells us that the operators $\{\tau_j\}_{j \in \mathbb{Z}_D}$ form a semibranching function system with coding map $\tau$, and hence determine a representation of the Cuntz algebra $\mathcal{O}_D$ (associated to the matrix $A_D$ given in \eqref{matrixAD}) on the Hilbert space $L^2(\mathbb{S}_A, H)$. Here $H$ is the Hausdorff measure on the fractal $\mathbb{S}_A$. According to the work of Hutchinson \cite{hutch}, we have \[ {\mathbb S}_A = \bigcup_{j=0}^{D-1} \tau_j({\mathbb S}_A). \] Moreover, the work of \cite{hutch} shows that the Hausdorff measure $H$ on $\mathbb{S}_A$ is the unique Borel probability measure on ${\mathbb S}_A$ satisfying the self-similarity equation \begin{equation}\label{eq:hausd-meas-scaling} H = \sum_{j=0}^{D-1} \frac{1}{D}(\tau_j)_*(H). \end{equation} In other words, \[ H(\tau_j({\mathbb S}_A)) = \frac{1}{D}H({\mathbb S}_A) = \frac{1}{D}. \] It follows that, since \[ \tau_j ({\mathbb S}_A) = \biggl\{ \biggl( \sum_{i=1}^\infty \frac{x_i}{N^i}, \sum_{i=1}^\infty \frac{{y_i}}{N^i} \biggr): (x_1,y_1)=(a_j,b_j) \biggr\}, \] we have \[ H\biggl( \biggl\{ \Bigl( \sum_{i=1}^\infty \frac{x_i}{N^i},\sum_{i=1}^\infty \frac{y_i}{N^i} \Bigr) \in {\mathbb S}_A: (x_1,y_1)=(a_j,b_j) \biggr\} \biggr) = \frac{1}{D}. \] By repeatedly applying the measure-similitude equation \eqref{eq:hausd-meas-scaling} we obtain \begin{multline} \label{eq:sierp-meas-formula} H\biggl( \biggl\{ \Bigl( \sum_{i=1}^\infty \frac{x_i}{N^i},\sum_{i=1}^\infty \frac{y_i}{N^i} \Bigr) \in {\mathbb S}_A: \forall 1 \leq i \leq M, \ (x_i,y_i)=(a_{j_i},b_{j_i}) \biggr\} \biggr)\\ = H(\tau_{j_1}\circ\tau_{j_2}\circ \cdots \circ \tau_{j_M}({\mathbb S}_A))=\Big(\frac{1}{D}\Big)^M. \end{multline} \subsection{The measure-preserving isomorphism} In this section, we discuss in more detail the relationship between the representation of $\mathcal{O}_D$ on $L^2(\mathbb{S}_A, H)$ and the infinite path representation of $\mathcal{O}_D$ on $L^2(\partial \mathcal{B}_D, M)$ described in Example~\ref{ex:inf-path-repn}. First, we note that the Hausdorff dimension of the Sierpinski fractal $\mathbb{S}_A$ introduced above is $$\frac{\ln D}{\ln N},$$ as established in Hutchinson's paper \cite{hutch}.\footnote{This formula is not in line with~\cite[Equation~(2.64)]{MP}, which gives $\ln D /( 2 \ln N)$ for the Hausdorff dimension. 
However, said equation appears to be a typo: the dimension should be~$2$ when $D = N^2$ (i.e.\@ when $\mathbb{S}_A$ is the unit square).} In particular, in the classical case of the Sierpinski triangle corresponding to the $2\times 2$ matrix $A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},$ the Hausdorff dimension of $\mathbb{S}_A$ is $\frac{\ln 3}{\ln 2}.$ The main goal of this section is to prove the following: \begin{thm}\label{thm:measure_preserving} Let $A$ be an $N\times N$ $\{0,1\}$-matrix with exactly $D$ entries equal to $1$, located in the positions $$(a_0,b_0)<(a_1,b_1)<\cdots <(a_{D-1},b_{D-1}),$$ where $a_j,b_j\in \mathbb Z_N.$ Consider the Sierpinski fractal \[ {\mathbb S}_A\;=\; \biggl\{ \Bigl( \sum_{i\in \mathbb{N}}\frac{x_i}{N^i}, \sum_{i\in \mathbb{N}}\frac{y_i}{N^i} \Bigr) :\;A_{x_i,y_i}=1,\;\forall i\in\mathbb N\biggr\}. \] Then there is a measure-theoretic isomorphism \[ \Upsilon = \Phi \circ \Theta: (\partial \mathcal{B}_D, M)\to ({\mathbb S}_A,H), \] where $(\partial \mathcal{B}_D,M)$ is the infinite path space of the Bratteli diagram associated to the $D \times D$ matrix with all ones, and $M$ is the measure given by Equation \eqref{eq:measure}: \[ M([\gamma]) = D^{-|\gamma| -1}. \] Moreover, if $\{S_i\}_{i \in \mathbb{Z}_D}$ denotes the infinite path representation of $\mathcal{O}_D$ on $L^2(\partial \mathcal{B}_D, M)$, and $\{T_i\}_{i \in \mathbb{Z}_D}$ denotes the representation of $\mathcal{O}_D$ on $L^2(\mathbb{S}_A, H)$ associated to the semibranching function system \eqref{eq:MPsemi_fts_Sierp}, then for all $i \in\mathbb{Z}_D$ and all $\xi \in L^2(\mathbb{S}_A, H)$, \[ T_i(\xi)\circ \Upsilon = S_i(\xi \circ \Upsilon); \] that is, $\Upsilon$ intertwines the two representations. \end{thm} \begin{proof} Let $S_A$ denote the $D$-element symbol space of pairs from $\mathbb Z_N$ with $1$'s in the corresponding entry of $A:$ $$S_A=\{(a_0,b_0), (a_1,b_1), (a_2,b_2),\cdots, (a_{D-1},b_{D-1})\}\subset \mathbb Z_N\times \mathbb Z_N,$$ and let $X_A$ be the infinite product space $X_A=\;\prod_{i=1}^{\infty}S_A.$ Giving $S_A$ the discrete topology and $X_A$ the product topology, we see that $X_A$ is a Cantor set, by the arguments of Section 2 of \cite{FGKP-survey}. For every $i\in \mathbb N,$ let $\mu_{i,A}$ be the normalized counting measure on $S_A$; that is, for $S\subset S_A,$ $$\mu_{i,A}(S)\;=\; \frac{\#(S)}{D},$$ and let $\mu_A$ denote the infinite product measure $\mu_A=\prod_{i=1}^{\infty}(\mu_{i,A}).$ Note that if we let \begin{equation}\label{eq:Serpin_cylin} \bigl[(a_{j_1},b_{j_1})(a_{j_2},b_{j_2})\cdots (a_{j_M},b_{j_M})\bigr] \end{equation} denote the cylinder set \[\begin{split} &[(a_{j_1},b_{j_1})(a_{j_2},b_{j_2})\cdots (a_{j_M},b_{j_M})]\\ &\;\;=\{\left((x_i,y_i)\right)_{i=1}^{\infty}\in X_A: (x_i,y_i)=(a_{j_i},b_{j_i}) \ \forall \ 1\leq i\leq M\}, \end{split}\] then $$\mu_A([(a_{j_1},b_{j_1})(a_{j_2},b_{j_2})\cdots (a_{j_M},b_{j_M})])=\frac{1}{D^M}.$$ Define now a map $\Phi: X_A\to {\mathbb S}_A$ by \[ \Phi\bigl( \left((x_i,y_i)\right)_{i=1}^{\infty} \bigr) = \Bigl( \sum_{i=1}^{\infty}\frac{x_i}{N^i}, \sum_{i=1}^{\infty}\frac{y_i}{N^i} \Bigr). \] The map $\Phi$ is continuous from the product topology on $X_A$ to the topology on ${\mathbb S}_A$ inherited from the Euclidean topology on $[0,1]\times [0,1].$ The map $\Phi$ is {\bf not} one-to-one, but if we let $E\subset X_A$ denote the set of points on which $\Phi$ is not injective, $\mu_A(E)=0$. 
Indeed, let us examine the set of points of $X_A$ where $\Phi$ may fail to be one-to-one: non-injectivity can only come from pairs of sequences of the form $(x_i, y_i)_i$, $(x'_i, y'_i)_i$ where $x_i$ is eventually $N-1$ and $x'_i$ is eventually $0$, and similarly with the roles of $x$ and $y$ exchanged. Notice also that if $A$ has no ones either on the first or on the last row, there will be no such pairs for which $x_i$ is eventually $N-1$ and $x'_i$ is eventually $0$. Therefore, since $A$ has $D$ entries equal to $1$ in total, if two such sequences $(x_i, y_i)_i$ and $(x'_i, y'_i)_i$ are to have the same image under $\Phi$, then both the first and the last row of $A$ must contain a $1$, and hence each contains at most $D-1$ ones. Therefore, the measure of the set of pairs $(x_i, y_i)_i$ for which $x_i$ is eventually $N-1$ is at most $[(D-1)/D]^n$ for all $n$, so it has measure zero. We reason similarly for the set of pairs $(x_i, y_i)_i$ for which $x_i$ is eventually $0$, for which $y_i$ is eventually $0$, and for which $y_i$ is eventually $N-1$. In conclusion, the set of points in $X_A$ at which $\Phi$ can fail to be one-to-one has measure zero. We also note that since $\Phi$ is continuous, it is a Borel measurable map, and that for any Borel subset $B$ of $\mathbb{S}_A$, $$\mu_A\circ [\Phi]_*(B)\;=H(B).$$ This is the case because a length-$M$ cylinder set in ${\mathbb S}_A$ (that is, any cylinder set $\bigl[ (x_1, y_1), \ldots, (x_M, y_M) \bigr]$ consisting of all points in ${\mathbb S}_A$ whose first $M$ pairs of $N$-adic digits are fixed) has $H$-measure $\frac{1}{D^M},$ whereas when one pulls such sets back via $\Phi,$ we obtain cylinder sets of the form \[ \bigl[(a_{j_1},b_{j_1})(a_{j_2},b_{j_2})\cdots (a_{j_M},b_{j_M}) \bigr] \subseteq X_A \] which also have measure ${D^{-M}}.$ Since these sets generate the Borel $\sigma$-algebras for ${\mathbb S}_A$ and $X_A$ respectively, we get the desired equality of the measures. Now let $\mathcal{B}_D$ be the Bratteli diagram with $D$ vertices at each level, associated to {the matrix $A_D$ given in \eqref{matrixAD} (and, hence, to the directed graph $\Lambda_D$ with $D$ vertices and all possible edges)}. We equip the infinite path space $\partial \mathcal{B}_D$ with the measure of Equation~\eqref{eq:measure}, which in this case is $M([\gamma]) = D^{-|\gamma|-1}$. Label the vertices of $\Lambda^0$ by $\mathbb{Z}_D = \{0, 1, \ldots, D-1\}$, and define $\Theta : \partial \mathcal{B}_D \rightarrow X_A$ by \[ \Theta((e_i)_{i \geq 1}) = \bigl( (a_{r(e_1)}, b_{r(e_1)}), (a_{r(e_2)}, b_{r(e_2)}), (a_{r(e_3)}, b_{r(e_3)}), \ldots \bigr); \] in other words, $\Theta$ takes an infinite path (written in terms of edges) $(e_i)_{i\in\mathbb{N}}$ to the sequence of vertices $(r(e_i))_{i\in\mathbb{N}}$ it passes through, and then maps this sequence of vertices to the corresponding element of $X_A$. The map $\Theta$ is bijective, since each pair of vertices has exactly one edge between them. In addition, both $\Theta$ and $\Theta^{-1}$ are continuous, since both the topology on $\partial \mathcal{B}_D$ and the topology on $X_A$ are generated by cylinder sets. In other words, $\Theta$ is a homeomorphism, and $M=\mu_A \circ [\Theta]_*$. We thus have shown that $\Upsilon= \Phi \circ \Theta$ is a Borel measure-theoretic isomorphism between the measure spaces $(\partial \mathcal{B}_D, M)$ and $({\mathbb S}_A,H)$. 
A routine computation, using the fact that \[ \Upsilon ((e_i)_{i\in \mathbb{N}}) = \left( \sum_{i\in \mathbb{N}} \frac{a_{r(e_i)}}{N^i}, \sum_{i\in \mathbb{N}} \frac{b_{r(e_i)}}{N^i} \right) ,\] shows that for any $i \in \mathbb{Z}_D$ and $\xi \in L^2(\mathbb{S}_A, H)$ we have $T_i(\xi)\circ \Upsilon = S_i(\xi\circ\Upsilon)$, which finishes the proof. \end{proof} We now recall the definition of Dutkay and Jorgensen \cite{dutkay-jorgensen-monic} of a {\it monic} representation of ${\mathcal O}_D:$ \begin{defn}[{cf.~\cite[Definition 2.6]{dutkay-jorgensen-monic}}] \label{def-equiv-measures-O-D} Let $D\in\mathbb N,$ and let $K_D$ be the infinite product Cantor group defined earlier. Let $\sigma_i:K_D\to K_D,\;0\leq i\leq D-1$ be as in Example \ref{ex:mp-fractal}. A {\it nonnegative monic system} is a pair $(\mu, (f_i)_{i\in\mathbb Z_D})$ where $\mu$ is a Borel probability measure on $K_D$ and $(f_i)_{i\in\mathbb Z_D}$ are nonnegative Borel measurable functions in $L^2(K_D,\mu)$ such that $\mu\circ \sigma_i^{-1} \ll \mu,$ and such that for all $i\in\mathbb Z_D$ $$\frac{d(\mu\circ \sigma_i^{-1})}{d\mu}=(f_i)^2$$ with the property that $f_i(x)\not=0,$ $\mu$-a.e. on $\sigma_i(K_D),$ for all $i\in \mathbb{Z}_D.$ \end{defn} By Equation (2.9) of \cite{dutkay-jorgensen-monic}, there is a natural representation of ${\mathcal O}_D$ on $L^2(K_D, \mu)$ associated to a monic system $(\mu, (f_i)_{i\in\mathbb Z_D})$ given by $$\tilde{S}_if\;=\;f_i(f\circ \sigma),\;(i\in\mathbb Z_D,\;f\in L^2(K_D, \mu)).$$ If $\mu=\nu_D$ and we set $f_i={\sqrt{D}}\chi_{\sigma_i(K_D)},$ the corresponding monic system is called the {\it standard positive monic system} for ${\mathcal O}_D.$ \begin{cor} \label{cor-equiv-measures-O-D} The representation of ${\mathcal O}_D$ on $L^2({\mathbb S}_A,H)$ described in Section~\ref{sec:sierp-rep} above is equivalent to the monic representation of ${\mathcal O}_D$ corresponding to the standard positive monic system on $L^2(K_D,\nu_D)$. \end{cor} \begin{proof} Theorem \ref{thm:measure_preserving}, combined with the measure-theoretic identification of $(K_D, \nu_D)$ and $(\partial \mathcal{B}_D, M)$ established in Example \ref{ex:mp-fractal}, implies that we have a measure-theoretic isomorphism between $(K_D, \nu_D)$ and $(\mathbb{S}_A, H)$. Thus, to show that the corresponding representations of $ {\mathcal O}_D$ are unitarily equivalent, it only remains to check that the operators $\tilde{S}_i : f \mapsto f_i\,(f \circ \sigma)$ associated to the standard positive monic system, and the operators $\{T_i\}_{i \in \mathbb{Z}_D}$, match up correctly. To that end, observe that \begin{align*} \tilde{S}_i (\xi)(v_0 v_1 \ldots ) & = f_i(v_0 v_1 \ldots ) \xi( v_1 v_2 \ldots) = \begin{cases} \sqrt{D} \xi(v_1 v_2 \ldots) & \text{ if } v_0 = i \\ 0 & \text{ else} \end{cases} \\ & = S_i(\xi)(v_0 v_1 \ldots). \end{align*} Since Theorem \ref{thm:measure_preserving} established that the operators $S_i$ and $T_i$ are unitarily equivalent, the Corollary follows. \end{proof} \section{Spectral triples and Laplacians for Cuntz algebras} \label{sec:spect-triples} Let $A_D$ be the $D\times D$ matrix with 1 in every entry and consider the Bratteli diagram $\mathcal{B}_D$ associated to $A_D$. If $D\ge 2$, then every row sum of $A_D$ is at least 2 by construction, and hence the associated infinite path space of the Bratteli diagram, $\partial \mathcal{B}_D$, is a Cantor set by Corollary~\ref{cor:suff-cond-cantor}. In this section, {by using the methods in} \cite{julien-savinien-transversal}, we will construct a spectral triple on $\partial \mathcal{B}_D$. 
This spectral triple gives rise to a Laplace--Beltrami operator $\Delta_s$ on $L^2(\partial \mathcal{B}_D, \mu_D)$, where $\mu_D$ is the measure induced from the Dixmier trace of the spectral triple as in Theorem \ref{thm-Dixmier-trace-cuntz-algebra-O_D} below. We also compute explicitly the orthogonal decomposition of $L^2(\partial \mathcal{B}_D, \mu_D)$ in terms of the eigenfunctions of the Laplace--Beltrami operator $\Delta_s$ (cf.~\cite[Theorem~4.3]{julien-savinien-transversal}). \subsection{The Cuntz algebra $\mathcal{O}_D$ and its Sierpinski spectral triple} \label{subsec:cuntz-spectral-triple} \begin{defn} \label{def-weight-paths} Let $\Lambda$ be a finite directed graph; let $F(\mathcal{B}_\Lambda)_\circ$ be the set of all finite paths on the associated Bratteli diagram, including the empty path, whose length we set to $-1$ by convention. A \emph{weight} on $\mathcal{B}_\Lambda$ (equivalently, on $\Lambda$) is a function $w: F(\mathcal{B}_\Lambda)_\circ \to (0,\infty)$ satisfying \begin{itemize} \item[(a)] $w(\circ) = 1$. \item[(b)] \[ \lim_{n\to \infty} \sup \{ w(\eta): \eta \in \Lambda^n = F^n\mathcal{B}_\Lambda\} = 0, \] where we denote by $\Lambda^n = F^n\mathcal{B}_\Lambda$ the set of finite paths of length $n$ on $\Lambda$ (equivalently, on $\mathcal{B}_\Lambda$). \item[(c)] For any finite paths $\eta, \nu$ with $s(\eta) = r(\nu)$, we have $w(\eta \nu) < w(\eta) $. \end{itemize} A Bratteli diagram $\mathcal{B}_\Lambda$ with a weight $w$ is called a \emph{weighted Bratteli diagram}. \end{defn} \begin{rmk} \label{rmk-weights-on-vertices-and-edges} Observe that a weight that satisfies Definition 2.9 of \cite{julien-savinien-transversal} on the vertices of a Bratteli diagram $\mathcal{B}_\Lambda$ induces a weight on the finite paths of the Bratteli diagram as in Definition \ref{def-weight-paths} above. In fact, in \cite{julien-savinien-transversal} and \cite{pearson-bellissard-ultrametric} the authors define a weight on $F\mathcal{B}_\Lambda$ by defining the weight first on vertices, and then extending it to finite paths via the formula $w(\eta) = w(s(\eta))$, for $\eta\in F\mathcal{B}_\Lambda$. \end{rmk} We will show below that a weight on $\mathcal{B}_\Lambda$ induces in turn a measure on the infinite path space $\partial \mathcal{B}_\Lambda \cong \Lambda^\infty$; see Theorem \ref{thm-Dixmier-trace-cuntz-algebra-O_D} below for details. \begin{defn} An \emph{ultrametric} $d$ on a topological space $X$ is a metric satisfying the strong triangle inequality: \[ d(x,y)\le \max\{d(x,z), d(y,z)\} \quad\text{for all $x,y,z\in X$.} \] \end{defn} \begin{prop}[{\cite[Proposition~2.10]{julien-savinien-transversal}}] \label{prop-on-weights} Let $\mathcal{B}_\Lambda$ be a weighted Bratteli diagram with weight $w$. We define a function $d_w$ on $\partial \mathcal{B}_\Lambda\times \partial \mathcal{B}_\Lambda$ by \[ d_w(x,y)=\begin{cases} w(x\wedge y) & \text{if $x\ne y$} \\ 0 & \text{otherwise}\end{cases}, \] where $x\wedge y$ is the longest common initial segment of $x$ and $y$. (If $r(x) \not= r(y)$ then we say $x \wedge y $ is the empty path $ \circ$, and $w(\circ) = 1$.) Then $d_w$ is an ultrametric on $\partial \mathcal{B}_\Lambda$. \end{prop} Note that the ultrametric $d_w$ induces the same topology on $\partial \mathcal{B}_\Lambda$ as the cylinder sets in \eqref{eq:cylin}; thus, $(\partial \mathcal{B}_\Lambda, d_w)$ is called an \emph{ultrametric Cantor set}. 
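To make the metric $d_w$ concrete, the following short Python sketch (our illustration, not part of the construction in \cite{julien-savinien-transversal}; the self-similar weight $w(\gamma) = \lambda^{-|\gamma|}/D$ it uses anticipates Definition \ref{def-choice-of-weight-Cuntz-algebra} below) computes $d_w$ on truncations of infinite paths, written as strings of vertices, in the complete graph on $D$ vertices.
\begin{verbatim}
# Sketch (illustration only): the ultrametric d_w on infinite paths of the
# complete graph on D vertices, with the self-similar weight
# w(gamma) = lambda^(-|gamma|) / D and w(empty path) = 1.
# Paths are written as strings of vertices, so a common prefix of n >= 1
# vertices is a finite path of length n - 1.

def common_prefix_length(x, y):
    """Number of initial vertices on which x and y agree (i.e. x ^ y)."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

def d_w(x, y, D, lam):
    """d_w(x, y) = w(x ^ y) for x != y, and 0 for x = y."""
    if x == y:
        return 0.0
    n = common_prefix_length(x, y)
    if n == 0:                    # different ranges: x ^ y is the empty path
        return 1.0
    return lam ** (-(n - 1)) / D  # weight of a path of length n - 1

# Two (truncated) paths with D = 3, lambda = 3, agreeing on 4 vertices:
print(d_w((0, 1, 2, 2, 0, 1), (0, 1, 2, 2, 1, 0), D=3, lam=3.0))  # 3**(-3)/3
\end{verbatim}
One sees the strong triangle inequality directly in such examples: the common prefix of $x$ and $z$ is at least as long as the shorter of the common prefixes of $x,y$ and of $y,z$.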
\begin{defn} \label{def-choice-of-weight-Cuntz-algebra} Let $A_D$ be a $D\times D$ matrix with 1 in every entry and let $\mathcal{B}_D$ be the associated Bratteli diagram. Fix $\lambda >1$, and set \[ d = \ln D/\ln \lambda. \] We define a weight $w_D^{\lambda}$ on the Bratteli diagram $\mathcal{B}_D$ by setting \begin{itemize} \item[(a)] $ w_D^{\lambda}(\circ) = 1. $ \item[(b)] For any level $0$ vertex $v \in V_0$ of $\mathcal{B}_D$, $ w_D^{\lambda}(v) = \frac{1}{D}. $ \item[(c)] For any finite path $\gamma \in F^n\mathcal{B}_D$ of length $n$, \[ w_D^{\lambda}(\gamma) =\lambda^{-n} \frac1D. \] \end{itemize} \end{defn} According to \cite{julien-savinien-transversal}, after choosing a weight on $\mathcal{B}_D$, we can build a spectral triple associated to it as in the following Theorem. Note that this result is a special case of Section 3 of \cite{julien-savinien-transversal}. \begin{thm}\label{prop:spectral} Fix an integer $D > 1$ and $\lambda >1 $. Let $(\mathcal{B}_D, w_D^{\lambda})$ be the weighted Bratteli diagram with the choice of weight $w_D^{\lambda}$ as in Definition \ref{def-choice-of-weight-Cuntz-algebra}. Let $(\partial \mathcal{B}_D, d_w^\lambda)$ be the associated ultrametric Cantor set. Then there is an even spectral triple $(C_{\text{Lip}}(\partial \mathcal{B}_D), \mathcal{H}, \pi_\tau, \slashed{D}, \Gamma)$, where \begin{itemize} \item $C_{\text{Lip}}(\partial \mathcal{B}_D)$ is the pre-$C^*$-algebra of Lipschitz continuous functions on $(\partial \mathcal{B}_D, d_w^\lambda)$, \item for each choice function $\tau: F(\mathcal{B}_D)_\circ \to \partial \mathcal{B}_D\times \partial \mathcal{B}_D$,\footnote{A choice function $\tau:F(\mathcal{B}_D)_\circ\to \partial \mathcal{B}_D\times \partial \mathcal{B}_D$ is a function that satisfies \[ \tau(\gamma)=: (\tau_{+}(\gamma), \tau_{-}(\gamma))\quad\text{where}\quad d_w(\tau_{+}(\gamma), \tau_{-}(\gamma))=w_D^\lambda(\gamma). \] } a faithful representation $\pi_\tau$ of $C_{\text{Lip}}(\partial \mathcal{B}_D)$ is given by bounded operators on the Hilbert space $\mathcal{H}=\ell^2(F(\mathcal{B}_D)_\circ)\otimes \mathbb{C}^2$ as \[ \pi_\tau(f)=\bigoplus_{\gamma\in F(\mathcal{B}_D)_\circ}\begin{pmatrix} f(\tau_{+}(\gamma)) & 0 \\ 0 & f(\tau_{-}(\gamma))\end{pmatrix}; \] \item the Dirac operator $\slashed{D}$ on $\mathcal{H}$ is given by \[ \slashed{D}=\bigoplus_{\gamma\in F(\mathcal{B}_D)_\circ} \frac{1}{w_D^\lambda(\gamma)}\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}; \] \item the grading operator is given by $\Gamma=1_{\ell^2(F(\mathcal{B}_D)_\circ)}\otimes \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}$. \end{itemize} \end{thm} \begin{defn}[{cf.~\cite[Theorem~3.8]{julien-savinien-transversal}}] The $\zeta$-function associated to the spectral triple of Theorem~\ref{prop:spectral} is given by \begin{equation} \label{eq-def-zeta} \zeta_D^\lambda(s) : ={\frac{1}{2}\operatorname{Tr}(|{\slashed{D}}|^{-s})}= \sum_{\gamma\in F(\mathcal{B}_D)_\circ}\big(w_D^\lambda(\gamma)\big)^s. \end{equation} \end{defn} \begin{prop}[{cf.~\cite[Theorem~3.8]{julien-savinien-transversal}}] \label{ab:conv-even-weight} The $\zeta$-function in Equation \eqref{eq-def-zeta} has abscissa of convergence equal to $d = \ln D / \ln \lambda $. 
\end{prop} \begin{proof} By a straightforward calculation (denoting by $F^q(\mathcal{B}_D)_\circ$ the set of paths of length $q$, and recalling that the empty path has weight $1$), we get \[ \sum_{\gamma \in F(\mathcal{B}_D)_\circ} \Big(w_D^\lambda(\gamma)\Big)^s = 1 + D^{-s} \sum_{q \geq 0} \operatorname{Card} (F^q(\mathcal{B}_D)_\circ) \lambda^{-qs} = 1 + D^{-s} \sum_{q \geq 0} D^{q+1} \lambda^{-qs}, \] where $\operatorname{Card}(S)$ denotes the cardinality of the set $S$. This sum converges precisely when $D/\lambda^s$ is smaller than~$1$, that is, whenever \[ s > \frac{\ln D}{\ln \lambda}. \] \end{proof} It is known that the abscissa of convergence coincides with the upper Minkowski dimension of $\partial \mathcal{B}_D \cong \Lambda^\infty_D$ associated to the ultrametric $d_w^\lambda$~\cite[Theorem~2]{pearson-bellissard-ultrametric}. In the self-similar cases (when the weight is given as in Definition~\ref{def-choice-of-weight-Cuntz-algebra}), the upper Minkowski dimension turns out to coincide with the Hausdorff dimension~\cite[Theorem~2.12]{julien-savinien-embedding}. In particular, when the scaling factor $\lambda$ is just $N$, the Hausdorff dimensions of $(\Lambda^\infty_D, d_{w_D^N})$ and $\mathbb{S}_A$ coincide, where we equip $\mathbb{S}_A$ with the metric induced by the Euclidean metric on $[0,1]^2$. The Dixmier trace $\mu_D^\lambda(f)$ of a function $f\in C_{\text{Lip}}(\partial \mathcal{B}_D)$ is given by the expression below; see Theorem 3.9 of \cite{julien-savinien-transversal} for details. \begin{equation} \label{eq:zeta-mu} \mu_D^\lambda(f)=\lim_{s\downarrow d}\frac{\operatorname{Tr}(|{\slashed{D}}|^{-s}\pi_\tau(f))}{\operatorname{Tr}(|{\slashed{D}}|^{-s})}=\lim_{s\downarrow d}\frac{\operatorname{Tr}(|{\slashed{D}}|^{-s}\pi_\tau(f))}{ 2 \zeta_D^\lambda(s) } . \end{equation} In particular, the limit given in \eqref{eq:zeta-mu} induces a measure $\mu_D^\lambda$ on $\partial \mathcal{B}_D$, characterized as follows. If $f=\chi_{[\gamma]}$ is the characteristic function of a cylinder set $[\gamma]$, and if $F_\gamma\mathcal{B}_D = \{ \eta \in F\mathcal{B}_D: \eta = \gamma \eta'\}$ denotes the set of all finite paths with initial segment $\gamma$, we have \begin{equation}\label{eq:m_ind_Dix} \mu_D^\lambda([\gamma]) = \mu_D^\lambda(\chi_{[\gamma]}) =\lim_{s\downarrow d} \frac{\sum_{\eta\in F_\gamma(\mathcal{B}_D)_\circ}\big(w_D^\lambda(\eta)\big)^s}{\sum_{\eta\in F(\mathcal{B}_D)_\circ}\big(w_D^\lambda(\eta)\big)^s}. \end{equation} It actually turns out, as we prove in Theorem \ref{thm-Dixmier-trace-cuntz-algebra-O_D} below, that the measure $\mu_D^\lambda$ on $\partial \mathcal{B}_D$ is independent of $\lambda$; so, with notation as above, we will also write \[ \mu_D([\gamma])=\mu_D(\chi_{[\gamma]}) = \mu_D^\lambda([\gamma])=\mu_D^\lambda(\chi_{[\gamma]}). \] Moreover, by combining Theorem \ref{thm:measure_preserving} with Theorem \ref{thm-Dixmier-trace-cuntz-algebra-O_D} below, we see that $\mu_D$ agrees with the Hausdorff measure of $\mathbb{S}_A$. \begin{thm} \label{thm-Dixmier-trace-cuntz-algebra-O_D} For any choice of scaling factor $\lambda >1$, the measure $\mu_D^\lambda$ on $\partial \mathcal{B}_D$ {induced by the Dixmier trace} agrees with the measure $M$ associated to the infinite path representation of $\mathcal{O}_D$. Namely, for any finite path $\gamma \in F\mathcal{B}_D$, we have \begin{equation} \mu_D([\gamma])=\frac{1}{D^{|\gamma|+1 }} = M([\gamma]). 
\end{equation} \end{thm} \begin{proof} Note that, although the proof of this Theorem is very long for the more general case of Cuntz--Krieger algebras (cf.~\cite[Theorem 3.9]{julien-savinien-transversal}), it considerably simplifies for the case of Cuntz algebras covered here. First note that for the choice of the empty path $\gamma= \circ $ (whose cylinder set corresponds to the whole space), we have \begin{equation}\label{limit-eq-cuntz-case-circ} f(s) = \frac{\sum_{\eta \in F(\mathcal{B}_D)_\circ} (w_D^\lambda(\eta))^s} {\sum_{\eta \in F(\mathcal{B}_D)_\circ} (w_D^\lambda(\eta))^s } =1=\mu_D^\lambda(\Lambda^{\infty}_D) =M(\Lambda^{\infty}_D ) . \end{equation} Now we will compute $\mu_D^\lambda$, for a finite path $\gamma\not= \circ $ of length $n$ in $F^n\mathcal{B}_D$. Define, according to Equation \eqref{eq:m_ind_Dix}, \begin{equation}\label{limit-eq-cuntz-case} f(s) = \frac{\sum_{\eta \in F_\gamma \mathcal{B}_D} (w_D^\lambda(\eta))^s} {1+ \sum_{\eta \in F\mathcal{B}_D} (w_D^\lambda(\eta))^s }. \end{equation} Note that in the above expression we isolated the term corresponding to the empty path, for which $\Big( w^\lambda_D(\circ)\Big)^s = 1^s =1$. Moreover, since $\gamma$ is not the empty path, then $\eta = \circ$ does not occur in the sum in the numerator. If $\eta\in F_\gamma\mathcal{B}_D$, then $w_D^\lambda(\eta)^s$ only depends on the length of $\eta$, say $|\eta| = n+q$ for some $q\in \mathbb{N}_0$, and hence $w_D^\lambda(\eta)=D^{-1} \lambda^{-(n+q)}$. For $q\in \mathbb{N}_0$, let \[\begin{split} F^q\mathcal{B}_D & = \{ \eta \in F\mathcal{B}_D : \vert \eta \vert = q \}, \\ F_\gamma^q\mathcal{B}_D & = \{ \eta \in F_\gamma\mathcal{B}_D : \vert \eta \vert = n+q \}. \end{split}\] Then we can write \[ f(s) = \frac{D^{-s} \sum_{q\in \mathbb{N}_0} \operatorname{Card}(F_\gamma^q\mathcal{B}_D) \big( \lambda^{-(n+q)} \big)^s}{ 1 + D^{-s}\sum_{q\in \mathbb{N}_0} \operatorname{Card}(F_\gamma^q\mathcal{B}_D) \big( \lambda^{-q} \big)^s}. \] Since the vertex matrix $A_D$ of the Bratteli diagram $\mathcal{B}_D$ has 1 in every entry, every edge in $\mathcal{B}_D$ has $D$ possible edges that could follow it. Also note that $\eta\in F^q\mathcal{B}_D$ has its range in $V_0$ and its source in $V_q$, and hence we get \[ \operatorname{Card}(F^q\mathcal{B}_D)=D^{q+1}. \] But any finite path $\eta\in F^q_\gamma\mathcal{B}_D$ can be written as $\eta=\gamma\eta'$. Since $\gamma$ is fixed, the number of paths $\eta \in F^q_\gamma\mathcal{B}_D$ is the same as the number of possible paths $\eta'$. Since $r(\eta')=s(\gamma)$ is also fixed, we get \[ \operatorname{Card}(F^q_\gamma\mathcal{B}_D)= D^q. \] By multiplying both numerator and denominator of $f(s)$ by $D^s$, we obtain \begin{align*} f(s)& =\frac{D^{-s} \sum_{q\in \mathbb{N}_0} D^{q} \big({\lambda^{-(n+q)}}\big)^s}{ 1+ D^{-s}\sum_{q\in \mathbb{N}_0} D^{q+1} \big({\lambda^{-q}}\big)^s} = \frac{1}{\lambda^{ns}}\frac{\sum_{q\in \mathbb{N}_0}D^q \lambda^{-qs}}{D^s+\sum_{q\in \mathbb{N}_0} D^{q+1}\lambda^{-qs}} \\ & =\frac{1}{\lambda^{ns}}\frac{\sum_{q\in \mathbb{N}_0}\Big(\frac{D}{\lambda^s}\Big)^q}{\left(D^s+D\sum_{q\in \mathbb{N}_0}\Big(\frac{D}{\lambda^s}\Big)^q\right)}. \end{align*} Since $s>\frac{\ln D}{\ln \lambda}$, we have $\frac{D}{\lambda^s}<1$, thus $\sum_{q\in \mathbb{N}_0}\Big(\frac{D}{\lambda^s}\Big)^q$ converges and is equal to $\frac{1}{1-\frac{D}{\lambda^s}}$. 
Thus (again multiplying numerator and denominator of $f(s)$ by $1-\frac{D}{\lambda^s}$), \[ f(s)=\frac{1}{\lambda^{ns}}\frac{\frac{1}{1-\frac{D}{\lambda^s}}}{(D^s+D\frac{1}{1-\frac{D}{\lambda^s}})} =\frac{1}{\lambda^{ns}}\frac{1}{\Big((1-\frac{D}{\lambda^s})D^s+D\Big)}. \] Now take the limit $s\downarrow d$ and recall that $\lambda^d=D$. Then $(1-\frac{D}{\lambda^s})\to 0$, and hence \[ \lim_{s\downarrow d}f(s)=\frac{1}{\lambda^{nd}}\frac{1}{D}=\frac{1}{D^n}\frac{1}{D}=\frac{1}{D^{n+1}}, \] which is the desired result by Equation \eqref{eq:m_ind_Dix}. \end{proof} \subsection{The Laplace--Beltrami operator} \label{subsec-Laplace-Beltrami Operator-O-D} In Section 4 of \cite{julien-savinien-transversal}, the authors use the spectral triple associated to a weighted Bratteli diagram to construct a non-positive definite self-adjoint operator with discrete spectrum (which they fully describe) on the $L^2$-space of the infinite path space of the given Bratteli diagram. Moreover, they show in Theorem 4.3 of \cite{julien-savinien-transversal} that the eigenfunctions of this operator give an orthogonal decomposition of the $L^2$-space of the boundary. Therefore, by applying the results of Section 4 of \cite{julien-savinien-transversal} to the spectral triples of Section \ref{subsec:cuntz-spectral-triple} above, we obtain, after we choose a weight $w_D^{\lambda}$ on $\mathcal{B}_D$ as in Definition \ref{def-choice-of-weight-Cuntz-algebra}, a non-positive definite self-adjoint operator $\Delta_s$ on $L^2(\partial \mathcal{B}_D, \mu_D)$ for any $s \in \mathbb{R}$, where $\mu_D$ is the measure on $\partial \mathcal{B}_D$ given in \eqref{eq:m_ind_Dix}. (Recall that $\mu_D$ does not depend on $\lambda$.) Namely, for any $s\in \mathbb{R}$, the Laplace--Beltrami operator $\Delta_s$ on $L^2(\partial \mathcal{B}_D, \mu_D)$ is given by \begin{equation}\label{eq:Delta} \langle f, \Delta_s(g)\rangle=Q_s(f,g)=\frac{1}{2}\int_E \operatorname{Tr}\bigl(\vert \slashed{D}\,\vert^{-s}[\slashed{D}, \pi_\tau(f)]^*\,[{\slashed{D}},\pi_\tau(g)]\bigr)\, d\mu_D(\tau), \end{equation} where $\operatorname{Dom} Q_s= \Span \{ \chi_{[\gamma]} : \gamma\in F\mathcal{B}_D \}$, $Q_s$ is a closable Dirichlet form, and $\mu_D(\tau)$ is the measure induced by the Dixmier trace on the set $E$ of choice functions. Moreover, the eigenfunctions of $\Delta_s$ form an orthogonal decomposition of $L^2(\partial \mathcal{B}_D, \mu_D)$. In the remainder of this section we give the details of this decomposition and formulas for the eigenvalues. In Section \ref{wavelets-and-eigenfunctions-O-D} below, we describe the relationship between this orthogonal decomposition and the wavelet decomposition of $L^2(\Lambda^\infty, M)$ computed in \cite{FGKP}. \begin{thm}\cite[Theorem~4.3]{julien-savinien-transversal}\label{thm:eigen} Let $(\mathcal{B}_D, w_D^{D})$ be the weighted Bratteli diagram as in Theorem \ref{prop:spectral}. (Note that we have made here the choice $\lambda=D$ for simplicity.) Let $\Delta_s$ be the Laplace--Beltrami operator on $L^2(\partial \mathcal{B}_D, \mu_D)$ given by \eqref{eq:Delta}. 
Then the eigenvalues of $\Delta_s$ are $0$, associated to the constant function $1$, together with the eigenvalues $\{\lambda_\eta\}_{\eta \in (F\mathcal{B}_D)_\circ}$ with corresponding eigenspaces $\{E_\eta\}_{\eta \in (F\mathcal{B}_D)_\circ}$ given by \[ \lambda_\circ = -\bigl( G_s (\circ) \bigr)^{-1} = -\frac{2D}{D-1}; \] \[ \lambda_\eta = -2 -2D^{3-s} \frac{1-D^{(3-s)|\eta|}}{1-D^{3-s}} - \frac{2D^{3 |\eta| +4}}{(D-1)D^{s(|\eta| +1)}}, \quad \eta \in F(\mathcal{B}_D), \] with \[ E_\circ = \Span \Bigl\{ D^{-1} \bigl( \chi_{[v]} - \chi_{[v']} \bigr) \ : \ v \ne v' \in V_0 \Bigr\}, \] \begin{multline*} E_\eta= \Span \Big\{\, D^{|\eta|+2} \left( {\chi_{[\eta e]}}-{\chi_{[\eta e']}} \right)\; : \\ \eta \in F(\mathcal{B}_D),\ e \ne e',\; |e|=|e'|=1,\; r(e)=r(e') = s(\eta)\, \Big\}. \end{multline*} \end{thm} \begin{proof} This follows from evaluating the formulas given in Theorem 4.3 of \cite{julien-savinien-transversal}, using Theorem \ref{thm-Dixmier-trace-cuntz-algebra-O_D} above to calculate the measures of the cylinder sets, and recalling that the diameter $\text{diam}[\gamma]$ of a cylinder set is given by the weight of $\gamma$. To be precise, since there are $D$ edges with a given range $v$, the size of the set \[ \bigl\{ (e, e') \in \Lambda^1 \times \Lambda^1: r(e) = r(e') = v, \ e \not= e' \bigr\} \] is $D(D-1)$ for any vertex $v$. Therefore, for any path $\eta \in \Lambda$, the constant $G_s(\eta)$ from Theorem 4.3 of \cite{julien-savinien-transversal} is given by \[ G_s(\eta) = \frac{D(D-1) D^{-2(|\eta| + 2)}}{2w_D(\eta)^{s-2}} = \frac{(D-1) D^{s(|\eta| +1)}}{2D^{4|\eta| + 5}}. \] Observe that, in the notation of \cite{julien-savinien-transversal}, a path of ``length 0'' corresponds to the empty path $\circ$, whose cylinder set is the entire infinite path space, and a path of ``length 1'' corresponds to a vertex. In general, the length of a path in \cite{julien-savinien-transversal} corresponds to the number of vertices that this path traverses; hence a path of length $n$ for them is a path of length $n-1$ for us. In order to compute the eigenvalues $\lambda_\eta$ described in Theorem 4.3 of \cite{julien-savinien-transversal}, then, we also need to calculate $G_s(\circ) = G_s(\Lambda^\infty)$. 
Since the infinite path space has diameter 1 by Proposition 2.10 of \cite{julien-savinien-transversal}, we obtain \[ G_s(\circ) = G_s(\Lambda^\infty) = \frac{D(D-1)}{2 D^2} = \frac{D-1}{2D}.\] Now, if we regard the empty path $\circ$ as a path of ``length $-1$,'' we can rewrite the formula (4.3) from \cite{julien-savinien-transversal} for the eigenvalue $\lambda_\eta$ associated to a path $\eta$ as \begin{align*} \lambda_\eta & = \sum_{k=-1}^{|\eta| -1} \frac{\frac{1}{D^{k+2}} - \frac{1}{D^{k+1}}}{G_s(\eta[0,k])} - \frac{1}{D^{|\eta|+1} G_s(\eta)} \\ & = \frac{1-D}{D\frac{D-1}{2D}} + \sum_{k=0}^{|\eta|-1} \frac{1-D}{D^{k+2}} \frac{2 D^{4k + 5}}{(D-1)D^{s(k+1)}} - \frac{2D^{3|\eta| + 4}}{(D-1)D^{s(|\eta| +1)}} \\ & = - 2 - 2D^{3-s} \frac{1-D^{(3-s)|\eta|}}{1-D^{3-s}} - \frac{2D^{3|\eta| + 4}}{(D-1)D^{s(|\eta| +1)}}, \end{align*} using the notation of Definition \ref{def:bratteli-diagram}, and the fact that \[\sum_{k=0}^{|\eta|-1} \frac{2 D^{3k + 3}}{D^{s(k+1)}} =2D^{3-s} \sum_{k=0}^{|\eta|-1} \frac{D^{3k}}{D^{sk}} = 2D^{3-s} \frac{1-D^{(3-s)|\eta|}}{1-D^{3-s}}.\] \end{proof} \section{Wavelets and eigenfunctions for $\mathcal{O}_D$} \label{wavelets-and-eigenfunctions-O-D} In this section, we connect the eigenspaces $E_\gamma$ of Theorem \ref{thm:eigen} with the orthogonal decomposition of $L^2(\partial \mathcal{B}_D, M)$ associated to the wavelets constructed in Section 3 of \cite{MP} (see also Section 4 of \cite{FGKP}). We begin by describing the wavelet decomposition of $L^2(\partial \mathcal{B}_D, M)$, which is a special case of the wavelets of \cite{MP} and \cite{FGKP}. To be precise, the wavelets we discuss here are those associated to the $D \times D$ matrix $A_D$ consisting of all 1's, but the wavelets described in \cite{MP} are defined for any matrix $A$ with entries from $\{0,1\}$. Let $\Lambda_D$ denote the directed graph with vertex matrix $A_D$. In what follows, we will assume that we have labeled the $D$ vertices of $\Lambda_D^0$ by $\mathbb{Z}_D = \{ 0, 1, \ldots, D-1\} ,$ and we will write infinite paths in $\Lambda^\infty_D = \partial \mathcal{B}_D$ as strings of vertices $(i_1 i_2 i_3 \ldots )$ where $i_j \in \mathbb{Z}_D$ for all $j$. Denote by ${\mathcal V}_0$ the (finite-dimensional) subspace of $L^2(\partial \mathcal{B}_D, M)$ given by \[ {\mathcal V}_0 \;=\; \Span \{\chi_{\sigma_i(\partial \mathcal{B}_D)}:\;i\in\mathbb Z_D\}. \] Define an inner product on $\mathbb C^{D}$ by \begin{equation}\label{eq:inner-prod} \bigl\langle (x_j), (y_j) \bigr\rangle_{PF}\;=\;\frac{1}{D}\sum_{j=0}^{D-1}\overline{x_j}y_j. \end{equation} We now define a set of $D$ linearly independent vectors $\{c^{j}:\;0\leq j \leq D-1\}\subset \mathbb C^{D}$, where $c^{j} = (c^{j}_0, \ldots, c^{j}_{D -1})$, by setting \[ c_{\ell}^{0}= 1\;\; \forall \ \ell \in \mathbb{Z}_D, \] and taking $\{c^{j}: 1\leq j\leq D-1\}$ to be an orthonormal basis for the subspace $\{(1,1,\cdots,1)\}^{\perp}$, with ${\perp} $ taken with respect to the inner product $\langle \cdot,\cdot \rangle_{PF}$. We now note that we can write each set $R_{[k]} = \sigma_k (\partial \mathcal{B}_D)$ as a disjoint union: $$R_{[k]}=\bigsqcup_{j=0}^{D-1}R_{[kj]},$$ where \[ R_{[kj]}\;=\; \bigl\{ (i_1i_2\cdots i_n\cdots) \in \partial \mathcal{B}_D:\;\;i_1=k\;\text{and}\;i_2=j \bigr\}. 
\] Thus in terms of characteristic functions, $$\chi_{R_{[k]}}\;=\;\sum_{j=0}^{D-1}\chi_{R_{[kj]}}\;\;\text{for $k\in\mathbb Z_D$}.$$ Now, define functions $\{f^{j,k}\}_{j, k=0}^{D-1}$ on $\partial \mathcal{B}_D$ by $$f^{j, k}(x)\;=\;\sqrt{D}\sum_{\ell=0}^{D-1}c_{\ell}^{j}\chi_{R_{[k{\ell}]}}(x).$$ Moreover, since $c_\ell^0 = 1$ for all $\ell$, we have $$f^{0,k}=\sqrt{D}\sum_{\ell=0}^{D-1}c_{\ell}^{0}\chi_{R_{[k{\ell}]}} = \sqrt{D}\sum_{\ell=0}^{D-1} \chi_{R_{[k\ell]}} = \sqrt{D} \chi_{R_{[k]}}.$$ It follows that \[ \Span \bigl\{ f^{0,k} \bigr\}_{k=0}^{D-1} \;=\; \Span \bigl\{ \chi_{R_{[k]}} \bigr\}_{k=0}^{D-1}\;=\;{\mathcal V}_0. \] Now, we can use the functions $f^{j,k}$ to construct a wavelet basis of $L^2(\partial \mathcal{B}_D, M)$. First, a definition: for any word $w = w_1 w_2 \cdots w_n \in (\mathbb{Z}_D)^n$, write $S_w = S_{w_1} S_{w_2} \cdots S_{w_n}$, where $S_{w_i}$ is the operator on $L^2(\partial \mathcal{B}_D, M)$ defined in Proposition~\ref{MPrepprop}. \begin{thm}[{\cite[Theorem 3.2]{MP}; \cite[Theorem 4.2]{FGKP}}] \label{MPwaveletsthm} Fix an integer $D>1.$ Let $\{S_k\}_{k\in \mathbb{Z}_D}$ be the operators on $L^2(\partial \mathcal{B}_D, M)$ described in Proposition~\ref{MPrepprop}. Let $\{f^{j,k}:\;j,k\in\mathbb Z_D\}$ be the functions on $\partial \mathcal{B}_D$ defined in the above paragraphs. Define $${\mathcal W}_0\;=\; \Span \{f^{j,k}: j, k\in \mathbb Z_D, j \not= 0\};$$ $${\mathcal W}_n= \Span \{S_w(f^{j,k}):\;j,k\in \mathbb Z_D, \ j \not= 0, \text{ and}\;\;w \in (\mathbb{Z}_D)^n\}.$$ Then the subspaces ${\mathcal V}_0$ and $\{{\mathcal W}_n\}_{n=0}^{\infty}$ are pairwise orthogonal in $L^2(\partial \mathcal{B}_D, M)$ and $$L^2(\partial \mathcal{B}_D, M)=\; \Span \left({\mathcal V}_0\oplus \Big[\bigoplus_{n=0}^{\infty} \mathcal{W}_n\Big]\right).$$ \end{thm} To calculate the functions $S_w (f^{j,k})$, we first observe that \[ S_i \chi_{R_{[k]}} = \sqrt{D} \chi_{R_{[ik]}};\] consequently, if $w = w_1 w_2 \cdots w_n$, \begin{equation} \label{eq:wavelet-fcns} S_w(f^{j,k}) = D^{(n+1)/2} \sum_{\ell = 0}^{D-1} c^j_\ell \chi_{[w_1 w_2 \cdots w_n k \ell]}. \end{equation} If we instead write the finite path $w $ as $ \gamma$, and observe that the edges in $E_{n+1}$ with range $k$ are in bijection with the pairs $(k\ell)_{\ell \in \mathbb{Z}_D}$, we see that for any path $\gamma \in F\mathcal{B}_D$ with $|\gamma | = n-1$, \begin{equation} \label{eq:wavelet-fcns-2} S_\gamma (f^{j, k}) = D^{(n+1)/2} \sum_{e \in E_{n+1}} c^j_e \chi_{[\gamma k e]}. \end{equation} A few more calculations lead us to the following result. \begin{thm} \label{equalweightswaveletOD} Let $\Lambda_D$ be the directed graph whose $D \times D$ adjacency matrix consists of all 1's. For each $\gamma \in \Lambda$, let $E_\gamma$ be the eigenspace of the Laplace--Beltrami operator $\Delta_s$ described in Theorem \ref{thm:eigen}. Then for all $n \geq 0$ we can write \[ \mathcal W_{n} = \bigoplus_{\gamma \in \Lambda^{n}} E_\gamma. \] In particular, \[ L^2(\Lambda^\infty, M) = {\mathcal V_{0}\oplus \Biggl[ \, \bigoplus_{n \geq 0} \bigoplus_{\gamma \in \Lambda^{n}} E_\gamma \Biggr].} \] (Note that $\mathcal{V}_0$ itself is spanned by the constant function $1$ together with the eigenspace $E_\circ$ of Theorem \ref{thm:eigen}.) Moreover, for all $i \in \mathbb{Z}_D$ and all $\gamma \in F\mathcal{B}_D$, the isometry $S_i$ given by \[ {S}_i f((v_1 v_2\ldots )) = \begin{cases} D^{1/2} f ((v_2 v_3 \ldots))& \text{if } v_1 = i, \\ 0 & \text{else.} \end{cases} \] maps $E_\gamma$ to $E_{i \gamma}$ unitarily. 
\end{thm} \begin{proof} Let $\gamma \in F\mathcal{B}_D$ be a path of length $n$. Recall that the subspaces $E_\gamma$ are spanned by functions of the form $\chi_{[\gamma e]} - \chi_{[\gamma e']}$, where $e \not= e'$ are edges in $E_{n+1}$. In other words, if we write a spanning function $\xi_{e, e'} = \chi_{[\gamma e]} - \chi_{[\gamma e']}$ of $E_\gamma$ as a linear combination of characteristic functions of cylinder sets, we have \[ \xi_{e, e'} = \sum_{f\in E_{n+1}} d_f \chi_{[\gamma f]}\] where $d_e = 1, d_{e'} = -1, d_f = 0 \ \forall \ f \not= e, e'$. That is, the vector \[(d_f)_{r(f) = s(\gamma), f\in E_{n+1}}\] lies in the subspace $(1, 1, \ldots, 1)^\perp$ of $\mathbb{C}^D$ which is orthogonal to $(1,1, \ldots, 1)$ in the inner product \eqref{eq:inner-prod}. It follows that $E_\gamma \subseteq \mathcal{W}_n$ whenever $|\gamma| = n$. Now, Theorem 4.3 of \cite{julien-savinien-transversal} tells us that each space $E_\gamma$ has dimension $D-1$. Moreover, there are $D^{n+1}$ paths $\gamma$ of length $n$, and $E_\gamma \perp E_\eta$ for all $\gamma, \eta$ with $|\gamma| = |\eta|$. Therefore, \[ \dim \left( \bigoplus_{|\gamma| = n} E_\gamma\right) = {D^{n+1}(D-1)}.\] Similarly, $\dim \mathcal{W}_n = \text{Card} (F^{n-1}\mathcal{B}_D) \text{Card} ( \{ f^{j,k}\}_{j\not= 0}) = {D^n \cdot D(D-1)}$. This equality of dimensions thus implies that \[ \mathcal{W}_n = \bigoplus_{|\gamma| = n} E_\gamma \ \forall \ n \in \mathbb{N}_0.\] For the last assertion, we simply observe that $S_i$ is an isometry taking the spanning functions $\chi_{[\gamma e]} - \chi_{[\gamma e']}$ of $E_\gamma$ to $\sqrt{D}\,\bigl(\chi_{[i\gamma e]} - \chi_{[i\gamma e']}\bigr)$, so that $S_i$ maps $E_\gamma$ onto $E_{i\gamma}$. \end{proof} \subsection{Wavelets on $\mathbb{S}_A$} Let $A$ be an $N \times N$ $\{0, 1\}$-matrix with precisely $D$ nonzero entries. In this section we will describe wavelets on $\mathbb{S}_{A}$ associated to the Cuntz algebra $\mathcal{O}_D$ using the measure-preserving isomorphism between $(\mathbb{S}_A, H)$ and $(\partial \mathcal{B}_D, M)$ described in Theorem~\ref{thm:measure_preserving}. Since all edges in $\Lambda_D$ can be preceded (or followed) by any other edge, this infinite path space corresponds simply to $[0,1]$ by thinking of points in $[0,1]$ as infinite sequences in $\{0, \ldots, D-1\}^{\mathbb{N}}$ and using the $D$-adic expansion. The natural correspondence between $\mathbb{S}_A$ and points from $[0,1]$ in their $D$-adic expansions is given by labeling the nonzero entries in $A$ by the elements of $\{0, 1, \ldots, D-1\}$, and then identifying a cylinder set $[(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)]$ in $\mathbb{S}_A$ with the cylinder $[d_1\dots d_n]$, where $d_i \in \{0, \ldots, D-1\}$ is the integer corresponding to $A_{x_i,y_i}$. Thus, we obtain wavelets on $\mathbb{S}_A$ by using this identification to transfer the wavelets associated to the infinite path representation of $\mathcal{O}_D$ into functions on $\mathbb{S}_A$. These wavelets will agree with the eigenfunctions spanning the eigenspaces $E_\gamma$ of the Laplace--Beltrami operator associated to the Bratteli diagram for $\mathcal{O}_D$, by Theorem \ref{equalweightswaveletOD} above. To be more precise, Theorem \ref{equalweightswaveletOD} implies that we can interpret the eigenfunctions of Theorem \ref{thm:eigen} as a wavelet decomposition of $L^2(\partial \mathcal{B}_D, M)$, with \[ E_\gamma= \Span \biggl\{ \frac{1}{M[\gamma e]} \chi_{[\gamma e]} - \frac{1}{M[\gamma e']} \chi_{[\gamma e']} \biggr\}. \] Here $\gamma$ is a finite path in the graph $\Lambda_D$ associated to $\mathcal{O}_D$; writing $\gamma$ as a string of vertices, we have equivalently $\gamma =d_0 d_1 d_2 \cdots d_n$ for $d_i \in \{ 0, \ldots, D-1\}$. 
Thus, if $d_i \in \mathbb{Z}_D$ corresponds to the pair $(x_i, y_i) \in S_A$, and $e, e' \in \{0, \ldots, D-1\}$ correspond to the pairs $(z, w), (z',w')$ in the symbol set $S_A$, the wavelet on $L^2(\mathbb{S}_A, H)$ associated to $\ \frac{1}{M[\gamma e]} \chi_{[\gamma e]} - \frac{1}{M[\gamma e']} \chi_{[\gamma e']}$ is \begin{multline*} \frac{1}{H([(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n), (z, w)])} \chi_{[(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n), (z,w)]}\\ - \frac{1}{H([(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n), (z', w')])} \chi_{[(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n), (z',w')]} \\ = D^{n+2}\left( \chi_{[(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n), (z, w)]} - \chi_{[(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n), (z', w')]} \right), \end{multline*} since each of these cylinder sets fixes $n+2$ pairs of digits and hence has $H$-measure $D^{-(n+2)}$. This correspondence allows us to transfer the spaces $E_\gamma$ from $L^2(\partial \mathcal{B}_D, M)$ to $L^2(\mathbb{S}_A, H)$, giving us an orthogonal decomposition of the latter. Moreover, the ``scaling and translation'' operators $S_i$ of Theorem \ref{equalweightswaveletOD} from the infinite path representation of $\mathcal{O}_D$ transfer (via the same correspondence between pairs $(x, y)$ with $A({x,y}) \not= 0$ and elements of $\{0, \ldots, D-1\}$) to the operators $T_i$ on $L^2(\mathbb{S}_A, H)$ introduced in Theorem \ref{thm:measure_preserving}. In other words, these operators $T_i$ allow us to move between the orthogonal subspaces of $L^2(\mathbb{S}_A, H)$, enabling us to view this as a wavelet decomposition. \section{Spectral triples and Laplacians for the Cuntz algebra $\mathcal{O}_D$: the uneven weight case} \label{sec:spect-triples-O-2} \subsection{The spectral triple} We are going to work in the general framework of Section~\ref{sec:spect-triples}, except that the weight (which we call $w_D^r$) differs from the Perron--Frobenius weights $w_D^\lambda$ defined previously in Definition \ref{def-choice-of-weight-Cuntz-algebra}. For this section, we require that our weight is defined on finite paths as in Definition \ref{def-weight-paths}, rather than on vertices as in Definition \ref{def-choice-of-weight-Cuntz-algebra}. In particular, the weight $w_D^r$ will not be self-similar: $w_D^r(\gamma)$ will depend not only on the length and the source of $\gamma$, but also on the precise sequence of edges making up $\gamma$. \begin{defn} \label{def-choice-of-weight-Cuntz-O-D-algebra} Fix a vector $r = (r_1, \ldots, r_D)$ of positive numbers satisfying $\sum_i r_i =1$. (We also note that this condition is not essential, although it makes a nice normalization.) The weight $w_D^r$ on the graph $\Lambda_D$ with $D$ vertices $v_1, \ldots, v_D$ (equivalently, the Bratteli diagram $\mathcal{B}_D$) associated to the matrix $A_D$ is defined as follows. \begin{enumerate} \item Whenever $\gamma$ is the trivial (empty) path $\circ$, we set $w_D^r(\circ) =1$. \item {Associate to each vertex $v_i$ the weight $r_i$:} \[ w^r_D (v_i) = r_i\quad \forall\, v_i \in \Lambda^0. \] \item Given a path $\gamma = (e_1 \ldots e_n)$ with $|e_j|=1$, $s(e_i) = v_{j_i}$, and $r(e_1) = v_{j_0}$, we set the weight of $\gamma$ to be \[ w^r_D (\gamma) = \prod_{i=0}^n r_{j_i} . \] \item The diameter $\diam [\eta]$ of a cylinder set $[\eta]$ is defined to be equal to its weight, \[ \diam [\eta] = w^r_D (\eta) . \] \end{enumerate} \end{defn} Note in particular that $[\circ]= \Lambda_D^{\infty}$ and so $\diam[\circ]=1$, which is consistent with the choice of our normalization. 
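The following Python sketch (our illustration, under the assumptions of Definition \ref{def-choice-of-weight-Cuntz-O-D-algebra}) computes $w_D^r$ on paths written as strings of vertices, and verifies numerically the identity $\sum_{|\gamma| = n} w_D^r(\gamma) = \bigl(\sum_i r_i\bigr)^{n+1} = 1$, which (with $r_i$ replaced by $r_i^s$) underlies the computation of the $\zeta$-function in Proposition \ref{prop-abscissa-of-conv-O-2} below.
\begin{verbatim}
# Sketch (illustration only): the uneven weight w_D^r on the complete graph
# with D vertices. A path of length n is a string of n + 1 vertices
# j_0 j_1 ... j_n, and w_D^r(path) = r[j_0] * r[j_1] * ... * r[j_n].

from itertools import product

def weight(path, r):
    """w_D^r of a path given as a tuple of vertex indices; w(empty) = 1."""
    w = 1.0
    for j in path:
        w *= r[j]
    return w

r = [0.5, 0.3, 0.2]      # positive vertex weights with r_1 + ... + r_D = 1
D, n = len(r), 2

# Total weight of all paths of length n (vertex strings with n + 1 symbols):
total = sum(weight(p, r) for p in product(range(D), repeat=n + 1))
print(total)   # = (r[0] + r[1] + r[2]) ** (n + 1) = 1.0 up to rounding
\end{verbatim}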
The set of finite paths on a graph has a natural tree structure. In fact, if we denote by $(e_1 \ldots e_n)$ a string of composable edges (thus requiring $s(e_{i-1}) = r(e_i)$ for all $i$), then the ``parent'' of $(e_1 \ldots e_n)$ is $(e_1 \ldots e_{n-1})$; the root is the path $\circ$ of length $-1$, which corresponds to $\Lambda^\infty_D$. In addition, the weight $w_D^r(\gamma)$ decreases to $0$ as the length $|\gamma|$ of $\gamma$ increases to infinity. Therefore, the Pearson--Bellissard construction from~\cite{pearson-bellissard-ultrametric} applies, and there is a spectral triple associated to the set of infinite paths as in Theorem~\ref{prop:spectral} (see also~\cite{julien-savinien-transversal,FGKJP-spectral-triples}). To be more precise, we have:

\begin{prop}\label{prop:spectral-uneven}
Let $\mathcal{B}_D$ be the Bratteli diagram associated to the matrix $A_D$. Let $(\mathcal{B}_D, w_D^r)$ be the weighted Bratteli diagram given in Definition~\ref{def-choice-of-weight-Cuntz-O-D-algebra}. Let $(\partial \mathcal{B}_D, d_{w^r})$ be the associated ultrametric Cantor set. Then there is an even spectral triple $(C_{\text{Lip}}(\partial \mathcal{B}_D), \mathcal{H'}, \pi_\tau', \slashed{D}', \Gamma')$.
\end{prop}

The $\zeta$-function associated to the spectral triple of Proposition~\ref{prop:spectral-uneven} is given by
\[ \zeta^r_D (s) =\frac{1}{2}\operatorname{Tr}(|{\slashed{D}'}|^{-s})=\sum_{\lambda \in F(\mathcal{B}_D)_\circ} \big(w_D^r(\lambda)\big)^s. \]
We now want to compute the abscissa of convergence $s_r$ of the above $\zeta$-function.

\begin{prop}\label{prop-abscissa-of-conv-O-2}
The abscissa of convergence $s_r$ of the $\zeta$-function $\zeta^r_D (s)$ associated to the spectral triple in Proposition \ref{prop:spectral-uneven} is~$1$.
\end{prop}

\begin{proof}
The formula for the $\zeta$-function can be written as follows:
\begin{equation}\label{eq:zeta-r}
\zeta^r_D(s) = \frac{1}{2}\operatorname{Tr}(|{\slashed{D}'}|^{-s}) = \sum_{n =-1}^\infty \sum_{\lambda \in \Lambda^n} \Big( w_D^r(\lambda)\Big)^s,
\end{equation}
with the convention that a path of length $-1$ is the empty path $ \circ $ with associated cylinder set $\Lambda^\infty$. In order to enumerate the paths in $F(\mathcal{B}_D)_\circ$ according to their weights, we use the following argument. Consider the following formal polynomial in $D$ variables $X_1, \ldots, X_D$ with integer coefficients:
\[ P (X_1, \ldots, X_D) = \Bigl( \sum_{i=1}^D X_i \Bigr)^{n+1}. \]
After expanding, each monomial is of the form $c \prod_i X_i^{\alpha_i}$, where $c$ is a constant. The constant $c$ counts how many partitions of $\{0, \ldots, n\}$ into $D$ (possibly empty) subsets there are, of cardinality respectively $\alpha_1, \ldots, \alpha_D$.
The set of such partitions, for all possible choices of $\alpha_1,\ldots, \alpha_D$, is in bijection with $F^n\mathcal{B}_D$: given $\gamma = (e_1 \ldots e_n)$ (with $|e_i|=1$ and $s(e_{i-1}) = r(e_i)$ for all $i$), let $U_i$ be the set of indices $j \in \{0, \ldots, n\}$ such that the $j$-th vertex of $\gamma$ is $v_i$ (where the $0$-th vertex is $r(e_1)$ and the $j$-th vertex is $s(e_j)$ for $j \geq 1$). One sees that $\{U_i\}_{i=1}^{D}$ defines a partition of $\{0, \ldots, n\}$, and the map from $F^n\mathcal{B}_D$, the set of finite paths of $\mathcal{B}_D$ of length $n$, to the set of such partitions is a bijection. Moreover, under this bijection,
$$w_D^r (\gamma) = \prod_i r_i^{\alpha_i}, \qquad \alpha_i = \operatorname{Card}(U_i).$$
Now, we see that the sum in Equation~\eqref{eq:zeta-r} can be rewritten as
\[ \zeta^r_D (s) = \sum_{n=-1}^\infty P(r_1^s, \ldots, r_D^s) = \sum_{n = -1}^\infty \Bigl( \sum_{i=1}^D r_i^s \Bigr)^{n+1}. \]
This is a geometric series, which converges if and only if $\sum_i r_i^s < 1$; in that case,
\[ \zeta^r_D(s) = \frac{1}{1 - \sum_{i=1}^D r_i^s}. \]
The function $s \mapsto \sum_i r_i^s$ is a decreasing function on $\mathbb{R}_+$ (since all the $r_i$ are less than $1$), and $\sum_i r_i = 1$, so that $\sum_i r_i^s < 1$ exactly when $s > 1$. Therefore, the abscissa of convergence is exactly $s_r = 1$.
\end{proof}

\begin{rmk}
Note that this guarantees that the upper Minkowski dimension of $(\partial \mathcal{B}_D, d_{w^r})$ is $1$, see~\cite[Theorem~2]{pearson-bellissard-ultrametric}.
\end{rmk}

\begin{thm}
\label{thm-Dixmier-trace-uneven-weights}
The measure $\mu_D^r$ on $\partial \mathcal{B}_D$ induced by the Dixmier trace is given by $\mu_D^r ([\gamma]) = w_D^r (\gamma)$.
\end{thm}

\begin{proof}
Note first that for the case $\gamma = \circ$, the result follows immediately from the definitions of $\mu_D^r$ and $w_D^r$. Given a cylinder set $[\gamma]\not= \Lambda^{\infty}$, we have
\[ \mu^r_D ([\gamma]) = \lim_{s \rightarrow s_r^+} \frac{ \sum_{\eta \, : \, r(\eta) = s(\gamma)} \Big( w_D^r (\gamma \eta)\Big)^s} {\zeta^r_D (s)}. \]
One remark is in order: if $\gamma$ is a path of length $n$ and $0 < m < n$, then
\[\begin{split}
w_D^r(\gamma) & = w_D^r\bigl(r(e_1)\bigr) \prod_{i=1}^n w_D^r\bigl( s(e_i) \bigr) \\
& = \biggl( w_D^r\bigl(r(e_1)\bigr) \prod_{i=1}^m w_D^r\bigl( s(e_i) \bigr) \biggr) \biggl( w_D^r\bigl(s(e_{m+1})\bigr) \prod_{i=m+2}^n w_D^r\bigl( s(e_i) \bigr) \biggr) \\
& = \biggl( w_D^r\bigl(r(e_1)\bigr) \prod_{i=1}^m w_D^r\bigl( s(e_i) \bigr) \biggr) \biggl( w_D^r\bigl(r(e_{m+2})\bigr) \prod_{i=m+2}^n w_D^r\bigl( s(e_i) \bigr) \biggr) \\
& = w_D^r (e_1e_2 \cdots e_m)\, w_D^r(e_{m+2} \cdots e_n).
\end{split}\]
In particular, $w_D^r (\gamma \eta)$ is \textbf{not} $w_D^r(\gamma) w_D^r (\eta)$ in general. Indeed, any path of the form $\gamma \eta$ with $s(\gamma) = r(\eta)$ can be written uniquely as $\gamma e\eta'$, where $e$ is the unique edge with $r(e) = s(\gamma)$ and $s(e) = r(\eta')$. By the computation above, $w_D^r (\gamma e \eta') = w_D^r (\gamma) w_D^r (\eta')$. Moreover, since $\Lambda_D$ has precisely one edge connecting any pair of vertices, every finite path $\eta'$ in $\Lambda_D$ gives rise to exactly one $e$ such that $s(e) = r(\eta')$ and $r(e) = s(\gamma)$. Therefore,
\[ \sum_{\eta \, : \, r(\eta) = s(\gamma)} \bigl( w_D^r (\gamma \eta) \bigr)^s = \sum_{\eta' \in F\mathcal{B}_D} \bigl( w_D^r (\gamma) \bigr)^s \Big( w_D^r (\eta') \Big)^s = \bigl( w_D^r (\gamma) \bigr)^s \alpha(s), \]
where $ \alpha(s) = \sum_{\eta' \in F\mathcal{B}_D} \Big( w_D^r (\eta') \Big)^s$. Moreover, since $ \lim_{s \rightarrow 1^+} \alpha(s) = + \infty$, we have
\[ \mu_D^r ([\gamma]) = \lim_{s \rightarrow 1^+} \Big( w_D^r (\gamma)\Big)^s\ \frac{\alpha(s)}{1+\alpha(s)} = w_D^r(\gamma). \]
\end{proof}
In particular, we do \emph{not} have $\mu_D^r = \mu_D = M$.
This should not be completely surprising, however. The Perron--Frobenius measure $M = \mu_D$ is the unique measure on $\Lambda^\infty$ satisfying the following two assumptions: it is a probability measure, and $\mu_D[\gamma]$ depends only on $|\gamma|$ and $s(\gamma)$. The second assumption is not satisfied by $\mu_D^r$. Note also that the choice of weight $w_D^r$ does not define a self-similar ultrametric Cantor set in the sense of \cite[Definition~2.6]{julien-savinien-embedding}, since again, the diameter of $[\gamma]$ does not depend only on $|\gamma|$ and $s(\gamma)$, but also on the specific sequence of edges.

\subsection{The Laplace--Beltrami Operator}
\label{subsec-Laplace-Beltrami Operator-O-2}
As in Section \ref{subsec-Laplace-Beltrami Operator-O-D}, the Dixmier trace associated to the spectral triple of Proposition \ref{prop:spectral-uneven} induces the probability measure $\mu_D^r(\tau)$ on the set of choice functions; thus, by the classical theory of Dirichlet forms, we can define a Laplace--Beltrami operator $\Delta_s^r$ on $L^2(\partial \mathcal{B}_D, \mu_D^r)$ as in Proposition~4.1 of \cite{julien-savinien-transversal} by
\begin{equation}\label{eq:Delta-O-2}
\langle f, \Delta_s^r(g)\rangle=Q_s(f,g)=\frac{1}{2}\int_E \operatorname{Tr}\bigl(\vert \slashed{D}\,\vert^{-s}[\slashed{D}, \pi_\tau(f)]^*\,[{\slashed{D}},\pi_\tau(g)]\bigr)\, d\mu_D^r(\tau),
\end{equation}
where $Q_s$, with domain $\operatorname{Dom} Q_s=\Span \{ \chi_{[\gamma]} : \gamma\in F\mathcal{B}_D\}$, is a closable Dirichlet form. As before, $\Delta_s^r$ is self-adjoint and has pure point spectrum, and we can describe the spectrum of $\Delta_s^r$ explicitly. For our case we can additionally compute the eigenvalues and the eigenfunctions of $\Delta_s^r$ as follows.

\begin{thm}\cite[Theorem~4.3]{julien-savinien-transversal}\label{thm:eigen-O-2}
Let $\Delta_s^r$ be the Laplace--Beltrami operator on $L^2(\partial \mathcal{B}_D, \mu_D^r)$ given by \eqref{eq:Delta-O-2}. Then the eigenvalues $\{ \lambda_\eta^r \}$ and corresponding eigenspaces $\{E_\eta^r\}$ of $\Delta_s^r$ are given by, for $\eta\in F\mathcal{B}_D$,
\[ \lambda_\eta^r=\sum_{k=-1}^{|\eta|-1}\frac{1}{G_s(\eta[0,k])}\Big(\mu_D^r[\eta[0,k+1]] - \mu_D^r[\eta[0,k]]\Big)-\frac{\mu_D^r[\eta]}{G_s(\eta)}, \]
\[ E_\eta^r = \Span \bigg\{\, \frac{\chi_{[\eta e]}}{\mu_D^r[\eta e]}-\frac{\chi_{[\eta e']}}{\mu_D^r[\eta e']}\; : \; e\ne e',\; |e|=|e'|=1,\; r(e)=r(e')\, \bigg\}, \]
where $\eta[0,-1]= \circ$ and $[\circ] = \partial \mathcal{B}_D$, $G_s(\eta[0,-1])=\frac{1}{2}\sum_{v\ne w\in\Lambda^0}\mu_D^r[v]\, \mu_D^r[w]$, and for $\xi\in F\mathcal{B}_D$,
\[ G_s(\xi)=\frac{1}{2} w_D^r(\xi)^{2-s}\sum_{e\ne e'\in r^{-1}(s(\xi))}\mu_D^r[\xi e]\, \mu_D^r[\xi e']. \]
In addition, $0$ is an eigenvalue, with the constant function $1$ as eigenfunction, and $\lambda_\circ = (G_s(\circ))^{-1}$ is an eigenvalue with eigenspace
\[ E_\circ^r = \Span \biggl\{ \frac{\chi_{[v]}}{\mu_D^r[v]} - \frac{\chi_{[v']}}{\mu_D^r[v']} \; : \; v \ne v', \ v, v' \in \Lambda^0 \biggr\}. \]
\end{thm}

\begin{proof}
Although Theorem 4.3 of \cite{julien-savinien-transversal} is stated only for the case when the weight function $w(\gamma)$ depends only on the length and the source of the path $\gamma$, as in Definition \ref{def-choice-of-weight-Cuntz-algebra}, a careful examination of its proof shows that the same argument works verbatim for the weight $w_D^r$.
\end{proof}

\subsection{Eigenvalues and eigenfunctions for the $\mathcal{O}_2$ case}
\label{eigenvlaues-wavelets-and-eigenfunctions-O-2}
We now explicitly compute the eigenvalues of the Laplace--Beltrami operator $\Delta_s^r$ in the case $D=2$. Theorem \ref{thm:eigen-O-2} specializes, in the case of $\mathcal{O}_2$, to give the formulas in Proposition \ref{eigenvalueO2unequal} below, which allow us to compute in principle the eigenvalue associated to any finite path. However, it seems difficult to obtain a single explicit formula covering all cases, as the calculations in full generality require challenging bookkeeping.

\begin{lemma}\label{lem-G-s-O-2-term}
With notation as above, for a finite path $\xi(p,q) \in F\mathcal{B}_2$ having $p$ vertices equal to $v_1$ and $q$ vertices equal to $v_2$, we have
\[ G_s(\xi(p,q)) = r^{4p+1-ps}(1-r)^{4q+1-qs}. \]
More generally, if $\xi$ is any path, one can write
\[ G_s (\xi) = \big( \mu_2^r[\xi] \big)^{4-s} r(1-r). \]
\end{lemma}

\begin{proof}
We start with the second point. If $\xi$ is coded by its vertices, $\xi = (v_0, \ldots, v_{|\xi|})$, and $e \neq e'$ are edges such that $r(e) = r(e') = s(\xi)$, then $w_2^r(\xi e)\, w_2^r(\xi e') = (w_2^r (\xi) )^2 r(1-r)$. Since $\mu_2^r[\xi] = w_2^r(\xi)$, we have
\[ G_s (\xi) = \frac 1 2 \bigl( \mu_2^r [\xi] \bigr)^{2-s} \cdot 2 \bigl( \mu_2^r[\xi] \bigr)^2 r(1-r) \]
and the result follows. (Note that the factor $2$ appears because the two edges $e \neq e'$ with sources $v_1$ and $v_2$ contribute the two ordered pairs $(e,e')$ and $(e',e)$ to the sum defining $G_s(\xi)$.) For the first point, we compute
\[ G_s (\xi(p,q)) = \frac{1}{2} \big[ r^p (1-r)^q \big]^{2-s} \cdot 2 \bigl( r^p (1-r)^q\, r \bigr) \bigl( r^p(1-r)^q (1-r) \bigr) = r^{4p+1-ps}(1-r)^{4q+1-qs}. \]
\end{proof}

\begin{prop}\label{eigenvalueO2unequal}
Let $\Delta_s^r$ be the Laplace--Beltrami operator on $L^2(\partial \mathcal{B}_2, \mu^r_2)$ given by \eqref{eq:Delta-O-2} for the choice of weight induced by
\[ w_2^r( v_1) = r,\ w_2^r( v_2) = (1-r), \]
where $r \in [0,1]$ is fixed. (Note that the notation used above is slightly different from the notation we used in Theorem~\ref{thm:eigen-O-2}.) Let $\eta\in F\mathcal{B}_2$ of length $n$ be determined by the string of vertices $(v_0, \ldots, v_n)$; also we write $(v_0, \ldots, v_k)$ for $\eta[0, k]$, for any $k \leq n$. Then we have
\[ \lambda_\eta^r = \frac{w^r_2(v_0)-1}{r(1-r)} + \sum_{k=0}^{n-1} \frac{(\mu_2^r[v_0, \ldots, v_k])^{s-3}}{r(1-r)} (w^r_2(v_{k+1}) - 1) - \frac{(\mu_2^r[\eta])^{s-3}}{r(1-r)}. \]
\end{prop}

\begin{proof}
We will use the fact that if one codes $\eta$ by its vertices $\eta=(v_0, \ldots, v_{|\eta|})$, then $\mu_2^r[v_0, \ldots, v_k] = \mu_2^r[v_0, \ldots, v_i]\,\mu_2^r[v_{i+1}, \ldots, v_k]$, as was established in the proof of Theorem~\ref{thm-Dixmier-trace-uneven-weights}. Consequently, we can factor the term $(\mu_2^r [\eta[0,k+1]] - \mu_2^r [\eta[0, k]])$ as follows:
\[\begin{split}
\mu_2^r [\eta[0,k+1]] - \mu_2^r [\eta[0, k]] & = \mu_2^r[v_0, \ldots, v_{k+1}] - \mu_2^r[v_0, \ldots, v_{k}] \\
& = \mu_2^r[v_0, \ldots, v_{k}]\, \bigl(\mu_2^r[v_{k+1}] - 1\bigr).
\end{split}\]
We therefore compute
\[ \lambda_\eta^r = \sum_{k=-1}^{|\eta|-1}\frac{1}{G_s(\eta[0,k])} \Big( \mu^r_2[\eta[0,k+1]] - \mu^r_2[\eta[0,k]] \Big) - \frac{\mu^r_2[\eta]}{G_s(\eta)}, \]
that is (using point~2 of Lemma~\ref{lem-G-s-O-2-term})
\begin{align*}
\lambda_\eta^r &= \frac{ \mu_2^r[v_0] - 1}{G_s(\circ)} \\
& + \sum_{k=0}^{n-1} \frac{1}{r(1-r) (\mu_2^r[v_0, \ldots, v_k])^{4-s}}\, \mu_2^r[v_0, \ldots, v_k]\, (w_2^r(v_{k+1}) - 1) \\
& - \frac{\mu_2^r[\eta]}{r(1-r) (\mu_2^r [\eta])^{4-s} }.
\end{align*}
The result follows by algebraic simplification. Note in particular that ${G_s(\circ)} = r (1-r)$.
\end{proof}

One can also construct representations and wavelet spaces of $\mathcal{O}_2$ associated to the weighted Bratteli diagram $(\partial \mathcal{B}_2, w_2^r)$; see Theorem 3.8 of \cite{FGKP-survey}. This is the analogue, for the uneven weight case, of Theorem \ref{equalweightswaveletOD} above. We now compute the eigenspaces corresponding to the eigenvalues of Proposition \ref{eigenvalueO2unequal} above, and show that they coincide with the wavelet spaces described in Theorem 3.8 of \cite{FGKP-survey}. In other words, we will show that if $\tilde{\mathcal{W}}_k$ are the orthogonal subspaces of $L^2(\partial \mathcal{B}_2, \mu_2^r)$ described in Theorem 3.8 of \cite{FGKP-survey}, then
\[\tilde{\mathcal{W}}_k = \bigoplus_{\eta: |\eta| = k} E^r_\eta.\]
What is done below is parallel to Theorem \ref{equalweightswaveletOD}, but we allow unequal weights in what follows. By evaluating the formulas given in Theorem \ref{thm:eigen-O-2} we obtain:

\begin{prop}
\label{eigenspaceO2unequal}
Let $\Delta_s^r$ be the Laplace--Beltrami operator on $L^2(\partial \mathcal{B}_2, \mu^r_2)$ given by \eqref{eq:Delta-O-2} for the choice of weight induced by the choice on the vertices $v_1$ and $v_2$ of the associated graph as
\[ w_2^r( v_1) = r_1, \ w_2^r( v_2) = r_2 = 1-r_1, \]
where $r =r_1 \in [0,1]$ is fixed. If we let $\circ$ denote the empty path, then the eigenspace $E_\circ^r$ with eigenvalue $\lambda_\circ^r$ is one-dimensional, and
\[ E_\circ^r = \Span \Bigl\{ \, \frac{\chi_{[v_1]}}{\mu^r_2([v_1])}-\frac{\chi_{[v_2]}}{\mu^r_2([v_2])}\, \Bigr\} = \Span \left\{ \frac{\chi_{[v_1]}}{r} - \frac{\chi_{[v_2]}}{1-r} \right\}. \]
Given a finite non-empty path $\eta = v_{j_0} v_{j_1} \ldots v_{j_n}\in F\mathcal{B}_2$ with $n+1$ vertices, where $j_i \in \{1,2\}$ for all $i$, the eigenspace $E_{\eta}^r$ with corresponding eigenvalue $\lambda_{\eta}^r$ described in Proposition \ref{eigenvalueO2unequal} is given by
\begin{align*}
E_{\eta}^r & = \Span \Bigl\{ \, \frac{\chi_{[{\eta} e]}}{\mu^r_2[\eta e]}-\frac{\chi_{[\eta e']}}{\mu^r_2[\eta e']}\; : \; e\ne e',\; |e|=|e'|=1,\; r(e)=r(e') = s(\eta)\, \Bigr\}\\
&= \Span \left\{ \frac{1}{(\prod_{i=0}^n r_{j_i}) r} \chi_{[v_{j_0} v_{j_1} \ldots v_{j_n} v_1]} - \frac{1}{(\prod_{i=0}^n r_{j_i}) (1-r)} \chi_{[v_{j_0} v_{j_1} \ldots v_{j_n} v_2]} \right\}.
\end{align*}
\end{prop}

We now show how the scaling functions generating ${\mathcal V}_0$ in Theorem 3.8 of \cite{FGKP-survey} fit into the eigenspace picture described above.
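Before doing so, we record a quick numerical sanity check of Lemma~\ref{lem-G-s-O-2-term}; the following minimal Python sketch (ours, with arbitrary parameter values) compares the closed form for $G_s(\xi(p,q))$ with the expression obtained directly from the definition of $G_s$.
\begin{verbatim}
# Sanity check (illustrative): closed form of G_s(xi(p,q)) vs. the
# definition. mu is the measure of the cylinder [xi(p,q)]; the factor 2
# accounts for the two ordered pairs (e, e') and (e', e) in the sum.

r, s, p, q = 0.3, 1.5, 2, 3

mu = r**p * (1 - r)**q
G_def = 0.5 * mu**(2 - s) * 2 * (mu * r) * (mu * (1 - r))
G_closed = r**(4*p + 1 - p*s) * (1 - r)**(4*q + 1 - q*s)

assert abs(G_def - G_closed) < 1e-15
\end{verbatim}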
\begin{lemma}
Let $r \in [0,1]$ be given, and let $\mu^r_2$ be the Markov probability measure on the infinite path space $\Lambda_2^{\infty}$ corresponding to the weight assigning $r$ to the vertex $v_1$ and $1-r$ to the vertex $v_2$. Let ${\mathcal V}_{-1}$ denote the space of constant functions on $\Lambda_2^{\infty}$. Then the scaling space $\tilde{\mathcal V}_0$ described in Theorem 3.8 of \cite{FGKP-survey} as the span of $\{\chi_{[v_1]},\;\chi_{[v_2]}\}$, the characteristic functions of cylinder sets corresponding to the vertices, can be written as
$$\tilde{\mathcal V}_0\;=\;{\mathcal V}_{-1}\oplus E_{\circ}^r,$$
where $E_{\circ}^r$ is the eigenspace corresponding to the empty path.
\end{lemma}

\begin{proof}
We note that $\tilde{\mathcal V}_0$, being generated by the orthogonal functions $\chi_{[v_1]}$ and $\chi_{[v_2]}$, has dimension $2$. On the other hand, the space ${\mathcal V}_{-1}$ of constant functions on $\Lambda_2^{\infty}$ has dimension $1$, and
\[ E_\circ^r = \Span \Bigl\{ \, \frac{\chi_{[v_1]}}{\mu^r_2([v_1])}-\frac{\chi_{[v_2]}}{\mu^r_2([v_2])}\, \Bigr\} \]
also has dimension $1$ and is orthogonal to ${\mathcal V}_{-1}$. It follows by a dimension count that
$$\tilde{\mathcal V}_0\;=\;{\mathcal V}_{-1}\oplus E_{\circ}^r,$$
as desired.
\end{proof}

\begin{prop}
\label{zerowaveletlemma}
Let $\mu^r_2$ be the Markov probability measure on the infinite path space $\Lambda_2^{\infty}$ corresponding to the weight assigning $r$ to the vertex $v_1$ and $1-r$ to the vertex $v_2$. Then for the corresponding representation of ${\mathcal O}_2$ on $L^2(\Lambda_2^{\infty},\mu^r_2)$ defined in Theorem 3.8 of \cite{FGKP-survey}, we have
\[ \tilde{\mathcal W}_0= \Span_{\eta: |\eta|=0}\{E_\eta^r\}, \]
where $E_\eta^r$ are the eigenspaces of the Laplace--Beltrami operator defined in Proposition \ref{eigenspaceO2unequal}.
\end{prop}

\begin{proof}
As in Theorem 3.8 of \cite{FGKP-survey} and Section \ref{wavelets-and-eigenfunctions-O-D} above, we have an inner product on $\mathbb C^2$ defined by
$$\langle (x_j),(y_j)\rangle =\sum_{j=1}^2\overline{x_j}\cdot y_j\cdot r_j,$$
and a fixed vector $c^0 = c^{0,k}=(1,1)$. For $k=1,2$, we find an orthonormal basis for $\{c^{0,k}\}^{\perp}$ denoted by $\{c^{1,k}\}$, where $ c^{1,k}=(c_{\ell}^{1,k})_{\ell\in\{1,2\}}$. But here, a straightforward calculation shows that we can take
\[ c^{1,k}=\sqrt{r(1-r)} \Bigl( \frac{1}{r}, -\frac{1}{1-r} \Bigr),\;k=1,2. \]
Therefore the wavelet $\psi_{1,k}$ of Theorem 3.8 of \cite{FGKP-survey} is given by
\[\begin{split}
\psi_{1,k} & = \frac{\sqrt{r(1-r)}}{\sqrt{r_k}} \biggl[ \frac{\chi_{[v_kv_1]}}{r}-\frac{\chi_{[v_kv_2]}}{1-r} \biggr] \\
& = \sqrt{r(1-r)r_k} \biggl[ \frac{\chi_{[v_kv_1]}}{\mu^r_2([v_kv_1])}-\frac{\chi_{[v_kv_2]}}{\mu^r_2([v_kv_2])} \biggr].
\end{split}\]
Recall that
\[E_{v_k}^r = \Span \left \{ \frac{1}{ \mu_2^r([v_k v_1]) } \chi_{[v_k v_1]} - \frac{1}{\mu_2^r([v_k v_2])} \chi_{[v_k v_2]} \right\}\]
is a one-dimensional subspace of $L^2(\partial \mathcal{B}_2, \mu_2^r)$. Moreover, each vector $\psi_{1,k}$ is evidently a scalar multiple of the single spanning vector of $E_\eta^r$ for $\eta = v_k$ a path of length $0$. Taking the span of the two vectors from $E_{v_1}^r$ and $E_{v_2}^r$ gives exactly the span of the $\psi_{1,k}$ for $k=1,2$; since $\tilde{\mathcal W}_0$ is defined to be the span of the vectors $\psi_{1,k}$, the result follows.
\end{proof}

We now relate the higher-order wavelet subspaces to the corresponding eigenspaces of the Laplacian:

\begin{lemma}
Let $\mu^r_2$ be the Markov probability measure on the infinite path space $\Lambda_2^{\infty}$ corresponding to the weight assigning $r$ to the vertex $v_1$ and $1-r$ to the vertex $v_2$. Then for the corresponding representation of ${\mathcal O}_2$ on $L^2(\Lambda_2^{\infty},\mu^r_2)$ defined in Theorem 3.8 of \cite{FGKP-survey}, we have
\[ \tilde{{\mathcal W}}_k=\Span_{\eta: |\eta|=k}\{E_\eta^r\}, \]
where $E_\eta^r$ are the eigenspaces of the Laplacian defined in Proposition \ref{eigenspaceO2unequal}.
\end{lemma}

\begin{proof}
We prove the result by induction. We have proved the result for $k=0$ directly. We now suppose that for $k=n$ we have shown
\[ \tilde{\mathcal W}_n = \Span_{\eta: |\eta|=n} \bigl\{ E_\eta^r \bigr\}, \]
where, as defined in Theorem 3.8 of \cite{FGKP-survey},
\[ \tilde{\mathcal W}_n\;=\; \Span \bigl\{ S_w(\psi_{1,k}):\;k=1,2,\;w \text{ is a word of length } n \bigr\}, \]
for $\psi_{1,1}$ and $\psi_{1,2}$ the wavelets of Proposition \ref{zerowaveletlemma}, and for $w=w_1w_2\cdots w_n$ a word of length $n$, where $w_i\in\mathbb Z_2$, $S_w=S_{w_1}S_{w_2}\cdots S_{w_n}$, where (writing an infinite path $x$ as a sequence of vertices)
\[ S_0 f(x) = \begin{cases} r^{-1/2} f(u_2 u_3 \ldots ) & \text{if } x = (v_1 u_2 u_3 \ldots), \\ 0 & \text{else,} \end{cases} \]
and
\[ S_1 f(x) = \begin{cases} (1-r)^{-1/2} f(u_2 u_3 \ldots ) & \text{if } x = (v_2 u_2 u_3 \ldots), \\ 0 & \text{else.} \end{cases} \]
From this and the induction hypothesis, it follows that
\[\begin{split}
\tilde{\mathcal W}_{n+1} & = \Span \bigl\{ S_0(\tilde{\mathcal W}_n), S_1(\tilde{\mathcal W}_n) \bigr\} \\
& = \Span_{\eta: |\eta|=n} \bigl\{ S_0(E_\eta^r), S_1(E_\eta^r) \bigr\},
\end{split}\]
where a typical element of $E_\eta^r$ looks like
$$\frac{\chi_{[\eta e]}}{\mu^r_2([\eta e])}-\frac{\chi_{[\eta e']}}{\mu^r_2([\eta e'])}\;.$$
Now if $\eta = u_0 u_1 \cdots u_n$ is a path of length $n$ with vertices $u_0, u_1, \ldots, u_n$, we compute directly that
\[ S_0\chi_{[\eta]} \;=\; \frac{1}{\sqrt{r}}\chi_{[v_1\eta]}, \]
and
\[ S_1\chi_{[\eta]} \;=\; \frac{1}{\sqrt{1-r}}\chi_{[v_2\eta]}. \]
Therefore we can write, for $\eta$ of length $n$ and $e$ and $e'$ of length $1$ with $e\not=e'$,
\[\begin{split}
S_0 \biggl( \frac{\chi_{[\eta e]}}{\mu^r_2([\eta e])}-\frac{\chi_{[\eta e']}}{\mu^r_2([\eta e'])} \biggr) & = \frac{1}{\sqrt{r}} \biggl[ \frac{\chi_{[v_1\eta e]}}{\mu^r_2([\eta e])}-\frac{\chi_{[v_1\eta e']}}{\mu^r_2([\eta e'])} \biggr] \\
& = \sqrt{r} \biggl[ \frac{\chi_{[v_1\eta e]}}{\mu^r_2([v_1\eta e])}-\frac{\chi_{[v_1\eta e'] }}{\mu^r_2([v_1\eta e'])} \biggr],
\end{split}\]
which is a constant multiple of
$$\frac{\chi_{[v_1\eta e]}}{\mu^r_2([v_1\eta e])}-\frac{\chi_{[v_1\eta e']}}{\mu^r_2([v_1\eta e'])},$$
a spanning function for the one-dimensional subspace $E_{v_1\eta}^r$. Similarly, $S_1\bigl(\frac{\chi_{[\eta e]}}{\mu^r_2([\eta e])}-\frac{\chi_{[\eta e']}}{\mu^r_2([\eta e'])}\bigr)$ is a constant multiple of
$$\frac{\chi_{[v_2\eta e]}}{\mu^r_2([v_2\eta e])}-\frac{\chi_{[v_2\eta e']}}{\mu^r_2([v_2\eta e'])},$$
which spans $E_{v_2 \eta}^r$.
Since all paths of length $n+1$ are of the form $v_i \eta$ for some path $\eta$ of length $n$ and some vertex $v_i$, with $i = 1,2$, it then follows that
$$\Span_{\eta: |\eta|=n}\{S_0(E_\eta^r), S_1(E_\eta^r)\}=\;\Span_{\eta': |\eta'|=n+1}(E_{\eta'}^r).$$
But this shows that
$$\tilde{\mathcal W}_{n+1}\;=\Span_{\eta': |\eta'|=n+1}(E_{\eta'}^r),$$
and the induction step of the proof is complete.
\end{proof}

The above results have established the following:

\begin{thm}
Let $\mu^r_2$ be the Markov probability measure on the infinite path space $\Lambda_2^{\infty}$ corresponding to the weight assigning $r$ to the vertex $v_1$ and $1-r$ to the vertex $v_2$. Then for the corresponding representation of ${\mathcal O}_2$ on $L^2(\Lambda_2^{\infty},\mu^r_2)$ defined in Theorem 3.8 of \cite{FGKP-survey}, we have that the $k^{th}$-order wavelets defined there are all constant multiples of functions of the form
\[ \frac{\chi_{[\eta e]}}{\mu^r_2([\eta e])}-\frac{\chi_{[\eta e']}}{\mu^r_2([\eta e'])},\;|\eta|=k,\;|e|=|e'|=1,\; r(e)=r(e')=s(\eta). \]
\end{thm}

As in the case of Proposition \ref{zerowaveletlemma}, the constant coefficient needed to transform the wavelet function $S_\eta \psi_{1,k}$ into the spanning function of $E_{\eta v_k}$ can be computed to be
$$\sqrt{r(1-r)}[\sqrt{r}]^j[\sqrt{1-r}]^{k+1-j},$$
where $j$ is the number of $v_1$'s appearing as vertices in the path $\eta v_k$.
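As a final check, the normalization of the vector $c^{1,k}$ used above is easy to verify numerically; the following minimal Python sketch (ours, with an arbitrary value of $r$ and real entries) confirms that $c^{1,k}$ is a unit vector orthogonal to $(1,1)$ in the weighted inner product $\langle x, y\rangle = \sum_j x_j y_j r_j$.
\begin{verbatim}
# Sanity check (illustrative): c = sqrt(r(1-r)) * (1/r, -1/(1-r)) is a
# unit vector orthogonal to (1, 1) in the weighted inner product.

import math

r = 0.3
weights = [r, 1 - r]

def ip(x, y):
    return sum(xj * yj * wj for xj, yj, wj in zip(x, y, weights))

c = [math.sqrt(r * (1 - r)) / r, -math.sqrt(r * (1 - r)) / (1 - r)]

assert abs(ip([1, 1], c)) < 1e-12   # orthogonality to c^0 = (1, 1)
assert abs(ip(c, c) - 1) < 1e-12    # unit norm
\end{verbatim}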
{ "timestamp": "2016-03-24T01:01:08", "yymm": "1603", "arxiv_id": "1603.06979", "language": "en", "url": "https://arxiv.org/abs/1603.06979", "abstract": "In this article we provide an identification between the wavelet decompositions of certain fractal representations of $C^*-$algebras of directed graphs of M. Marcolli and A. Paolucci, and the eigenspaces of Laplacians associated to spectral triples constructed from Cantor fractal sets that are the infinite path spaces of Bratteli diagrams associated to the representations, with a particular emphasis on wavelets for representations of $\\mathcal{O}_D$. In particular, in this setting we use results of J. Pearson and J. Bellissard, and A. Julien and J. Savinien, to construct first the spectral triple and then the Laplace Beltrami operator on the associated Cantor set. We then prove that in certain cases, the orthogonal wavelet decomposition and the decomposition via orthogonal eigenspaces match up precisely. We give several explicit examples, including an example related to a Sierpinski fractal, and compute in detail all the eigenvalues and corresponding eigenspaces of the Laplace Beltrami operators for the equal weight case for representations of Cuntz algebras, and in the uneven weight case for certain representations of $\\mathcal{O}_2$, and show how the eigenspaces and wavelet subspaces at different levels are related.", "subjects": "Operator Algebras (math.OA)", "title": "Wavelets and spectral triples for fractal representations of Cuntz algebras", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759621310288, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405618886899 }
https://arxiv.org/abs/1701.07570
Dynamic Regret of Strongly Adaptive Methods
To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently. In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms that have small dynamic regret for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity has been utilized to upper bound the dynamic regret. Moreover, none of these adaptive algorithms needs any prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.
\section{Introduction}
Online convex optimization is a powerful paradigm for sequential decision making \citep{Online:suvery}. It can be viewed as a game between a learner and an adversary: In the $t$-th round, the learner selects a decision $\mathbf{w}_t \in \Omega$, simultaneously the adversary chooses a function $f_t(\cdot): \Omega \mapsto \mathbb{R}$, and then the learner suffers an instantaneous loss $f_t(\mathbf{w}_t)$. This study focuses on the full-information setting \citep{bianchi-2006-prediction}, where the function $f_t(\cdot)$ is revealed to the learner at the end of each round. The goal of the learner is to minimize the cumulative loss over $T$ periods. The standard performance measure is \emph{regret}, which is the difference between the loss incurred by the learner and that of the best fixed decision in hindsight, i.e.,
\[ \Reg(T)=\sum_{t=1}^T f_t(\mathbf{w}_t) - \min_{\mathbf{w} \in \Omega} \sum_{t=1}^T f_t(\mathbf{w}). \]
The above regret is typically referred to as \emph{static} regret in the sense that the comparator is time-invariant. The rationale behind this evaluation metric is that some decision in $\Omega$ is reasonably good over all $T$ rounds. However, when the underlying distribution of loss functions changes, the static regret may be too optimistic and fails to capture the hardness of the problem.

To address this limitation, new forms of performance measure, including \emph{adaptive} regret \citep{Adaptive:Hazan,Hazan:2009:ELA} and \emph{dynamic} regret \citep{zinkevich-2003-online}, were proposed and have received significant interest recently. Given a parameter $\tau$, which is the length of the interval, the strong version of adaptive regret is defined as
\begin{equation} \label{eqn:strong:adaptive}
\begin{split}
\SAReg(T,\tau) = \max_{[s, s+\tau -1] \subseteq [T]} \left(\sum_{t=s}^{s+\tau -1} f_t(\mathbf{w}_t) - \min_{\mathbf{w} \in \Omega} \sum_{t=s}^{s+\tau -1} f_t(\mathbf{w}) \right).
\end{split}
\end{equation}
From the definition, we observe that minimizing the adaptive regret forces the learner to have a small static regret over every interval of length $\tau$. Since the best decision for different intervals could be different, the learner is essentially competing with a changing comparator.

A parallel line of research introduces the concept of dynamic regret, where the cumulative loss of the learner is compared against a comparator sequence $\mathbf{u}_1, \ldots, \mathbf{u}_T \in \Omega$, i.e.,
\begin{equation} \label{eqn:dynamic:1}
\DReg(\mathbf{u}_1,\ldots,\mathbf{u}_T) = \sum_{t=1}^T f_t(\mathbf{w}_t) - \sum_{t=1}^T f_t(\mathbf{u}_t).
\end{equation}
It is well known that in the worst case, a sublinear dynamic regret is impossible unless we impose some regularity on the comparator sequence or the function sequence \citep{Oinline:Dynamic:Comp}. A representative example is the functional variation defined below:
\begin{equation} \label{eqn:func:var}
V_T = \sum_{t=2}^T \max_{\mathbf{w} \in \Omega} |f_t(\mathbf{w}) - f_{t-1}(\mathbf{w})|.
\end{equation}
\citet{Non-Stationary} have proved that as long as $V_T$ is sublinear in $T$, there exists an algorithm that achieves a sublinear dynamic regret. Furthermore, under noisy gradient feedback, they develop a general restarting procedure that enjoys $O(T^{2/3}V_T^{1/3})$ and $O(\log T \sqrt{T V_T})$ rates for convex functions and strongly convex functions, respectively. This result is very strong in the sense that these rates are (almost) minimax optimal.
However, the restarting procedure can only be applied when an upper bound of $V_T$ is known beforehand, thus limiting its application in practice.

While both the adaptive and dynamic regrets aim at coping with changing environments, little is known about their relationship. This paper takes a step toward understanding their connection. Specifically, we show that the strongly adaptive regret in (\ref{eqn:strong:adaptive}), together with the functional variation, can be used to upper bound the dynamic regret in (\ref{eqn:dynamic:1}). Thus, an algorithm with a small strongly adaptive regret is automatically equipped with a tight dynamic regret. As a result, we obtain a series of algorithms for minimizing the dynamic regret that do not need any prior knowledge of the functional variation. The main contributions of this work are summarized below.
\begin{compactitem}
\item We provide a general theorem that upper bounds the dynamic regret in terms of the strongly adaptive regret and the functional variation.
\item For convex functions, we show that the strongly adaptive algorithm of \citet{Improved:Strongly:Adaptive} has a dynamic regret of $O(T^{2/3} V_T^{1/3} \log^{1/3} T)$, which matches the minimax rate, up to a polylogarithmic factor.
\item For exponentially concave functions, we propose a strongly adaptive algorithm that allows us to control the tradeoff between the adaptive regret and the computational cost explicitly. Furthermore, we demonstrate that its dynamic regret is $O(\sqrt{T V_T \log T})$; this is the \emph{first} time that such a dynamic regret bound has been established for exponentially concave functions.
\item Since strongly convex functions with bounded gradients are also exponentially concave, our previous result immediately implies a dynamic regret of $O(\sqrt{T V_T \log T})$, which is also minimax optimal up to a polylogarithmic factor. It also indicates that our bound for exponentially concave functions is almost optimal.
\end{compactitem}

\section{Related Work}
In this section, we give a brief introduction to previous work on static, adaptive, and dynamic regrets in the context of online convex optimization.

\subsection{Static Regret}
The majority of studies in online learning are focused on static regret \cite{Shalev:Primal:Dual,Sparse:Online}. The classical online gradient descent achieves $O(\sqrt{T})$ and $O(\log T)$ regret bounds for convex and strongly convex functions, respectively \citep{zinkevich-2003-online,ML:Hazan:2007,ICML_Pegasos}. Both the $O(\sqrt{T})$ and $O(\log T)$ rates are known to be minimax optimal~\citep{Minimax:Regret}. When functions are exponentially concave, a different algorithm, named online Newton step, is developed and enjoys an $O(\log T)$ regret bound \citep{ML:Hazan:2007}.

\subsection{Adaptive Regret}
The concept of adaptive regret is introduced by \citet{Adaptive:Hazan}, and later strengthened by \citet{Adaptive:ICML:15}. To distinguish between them, we refer to the definition of \citet{Adaptive:Hazan} as weakly adaptive regret and the one of \citet{Adaptive:ICML:15} as strongly adaptive regret. The weak version is given by
\[ \WAReg(T)= \max_{[s, q] \subseteq [T]} \sum_{t=s}^{q} f_t(\mathbf{w}_t) - \min_{\mathbf{w} \in \Omega} \sum_{t=s}^{q} f_t(\mathbf{w}). \]
To minimize the adaptive regret, \citet{Adaptive:Hazan} have developed two meta-algorithms: an efficient algorithm with $O(\log T)$ computational complexity per iteration and an inefficient one with $O(T)$ computational complexity per iteration.
These meta-algorithms use an existing online method (possibly designed to have small static regret) as a subroutine.\footnote{For brevity, we ignore the cost of the subroutine in the statements of computational complexities. The $O(\cdot)$ computational complexity should be interpreted as $O(\cdot) \times s$ space complexity and $O(\cdot) \times t$ time complexity, where $s$ and $t$ are the space and time complexities of the subroutine per iteration, respectively.} For convex functions, the efficient and inefficient meta-algorithms have $O(\sqrt{T \log^3 T})$ and $O(\sqrt{T \log T})$ regret bounds, respectively. For exponentially concave functions, those rates are improved to $O(\log^2 T)$ and $O(\log T)$, respectively. We can see that the price paid for the adaptivity is very small: The rates of weakly adaptive regret differ from those of static regret only by logarithmic factors.

A major limitation of weakly adaptive regret is that it does not respect short intervals well. Taking convex functions as an example, the $O(\sqrt{T \log^3 T})$ and $O(\sqrt{T \log T})$ bounds are meaningless for intervals of length $O(\sqrt{T})$. To overcome this limitation, \citet{Adaptive:ICML:15} proposed a refined adaptive regret that takes the length of the interval as a parameter $\tau$, as indicated in (\ref{eqn:strong:adaptive}). If the strongly adaptive regret is small for all $\tau <T$, we can guarantee that the learner has a small regret over intervals of any length. In particular, \citet{Adaptive:ICML:15} introduced the following definition.
\begin{definition} \label{def:strongly:adaptive}
Let $R(\tau)$ be the minimax static regret bound of the learning problem over $\tau$ periods. An algorithm is \emph{strongly adaptive}, if
\[ \SAReg(T,\tau)=O(\poly(\log T) \cdot R(\tau)). \]
\end{definition}
It is easy to verify that the meta-algorithms of \citet{Adaptive:Hazan} are strongly adaptive for exponentially concave functions,\footnote{That is because (i) $\SAReg(T,\tau) \leq \WAReg(T)$, and (ii) there is a $\poly(\log T)$ factor in the definition of strong adaptivity.} but not for convex functions. Thus, \citet{Adaptive:ICML:15} developed a new meta-algorithm that satisfies $\SAReg(T,\tau)=O( \sqrt{\tau} \log T )$ for convex functions, and thus is strongly adaptive. The algorithm is also efficient, and its computational complexity per iteration is $O(\log T)$. Later, the strongly adaptive regret of convex functions was improved to $O( \sqrt{\tau \log T} )$ by \citet{Improved:Strongly:Adaptive}.

\subsection{Dynamic Regret}
\label{sec:dynamic}
In a seminal work, \citet{zinkevich-2003-online} proposed to use the \emph{path-length} defined as
\[ \mathcal{P}(\mathbf{u}_1, \ldots, \mathbf{u}_T)=\sum_{t=2}^T \|\mathbf{u}_t - \mathbf{u}_{t-1}\|_2 \]
to upper bound the dynamic regret. Specifically, \citet{zinkevich-2003-online} proved that for any sequence of convex functions, the dynamic regret of online gradient descent can be upper bounded by $O(\sqrt{T} \mathcal{P}(\mathbf{u}_1, \ldots, \mathbf{u}_T))$. Another regularity of the comparator sequence, which is similar to the path-length, is defined as
\[ \mathcal{P}'(\mathbf{u}_1, \ldots, \mathbf{u}_T)=\sum_{t=2}^T \|\mathbf{u}_t - \Phi_{t} (\mathbf{u}_{t-1})\|_2, \]
where $\Phi_t (\cdot)$ is a dynamic model that predicts a reference point for the $t$-th round. \citet{Dynamic:ICML:13} developed a novel algorithm named dynamic mirror descent and proved that its dynamic regret is on the order of $\sqrt{T}\, \mathcal{P}'(\mathbf{u}_1, \ldots, \mathbf{u}_T)$.
The advantage of $\mathcal{P}'(\mathbf{u}_1, \ldots, \mathbf{u}_T)$ is that when the comparator sequence follows the dynamical model closely, it can be much smaller than the path-length $\mathcal{P}(\mathbf{u}_1, \ldots, \mathbf{u}_T)$.

Let $\mathbf{w}_t^* \in \argmin_{\mathbf{w} \in \Omega} f_t(\mathbf{w})$ be a minimizer of $f_t(\cdot)$ over $\Omega$. For any sequence of $\mathbf{u}_1, \ldots, \mathbf{u}_T \in \Omega$, we have
\[
\begin{split}
& \DReg(\mathbf{u}_1,\ldots,\mathbf{u}_T) =\sum_{t=1}^T f_t(\mathbf{w}_t) - \sum_{t=1}^T f_t(\mathbf{u}_t) \\
\leq & \DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) = \sum_{t=1}^T f_t(\mathbf{w}_t) - \sum_{t=1}^T \min_{\mathbf{w} \in \Omega} f_t(\mathbf{w}).
\end{split}
\]
Thus, $\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*)$ can be treated as the worst case of the dynamic regret, and many works have been devoted to minimizing $\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*)$ \citep{Oinline:Dynamic:Comp,Dynamic:Strongly,Dynamic:2016,Dynamic:Regret:Squared}. When prior knowledge of $\mathcal{P}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*)$ is available, $\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*)$ can be upper bounded by $O(\sqrt{T \mathcal{P}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*)})$ \citep{Dynamic:2016}. If all the functions are strongly convex and smooth, the upper bound can be improved to $O(\mathcal{P}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*))$ \citep{Dynamic:Strongly}. The $O(\mathcal{P}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*))$ rate is also achievable when all the functions are convex and smooth, and all the minimizers $\mathbf{w}_t^*$ lie in the interior of $\Omega$ \citep{Dynamic:2016}. In a recent study, \citet{Dynamic:Regret:Squared} introduced a new regularity---the \emph{squared} path-length
\[ \mathcal{S}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*)=\sum_{t=2}^T \|\mathbf{w}_t^* - \mathbf{w}_{t-1}^*\|_2^2, \]
which could be much smaller than the path-length $\mathcal{P}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*)$ when the difference between successive minimizers is small. \citet{Dynamic:Regret:Squared} developed a novel algorithm named online multiple gradient descent, and proved that $\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*)$ is on the order of $\min(\mathcal{P}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*),\mathcal{S}(\mathbf{w}_1^*, \ldots, \mathbf{w}_T^*))$ for (semi-)strongly convex and smooth functions.

Although closely related, adaptive regret and dynamic regret have been studied independently, and there is little discussion of their relationship. In the literature, dynamic regret is also referred to as tracking regret or shifting regret \citep{LITTLESTONE1994212,Herbster1998,Herbster:2001:TBL}. In the setting of ``prediction with expert advice'', \citet{Adamskiy2012} have shown that the tracking regret can be derived from the adaptive regret. In the setting of ``online linear optimization in the simplex'', \citet{Fixed:Share:NIPS12} introduced a generalized notion of shifting regret which unifies adaptive regret and shifting regret. Different from previous work, this paper considers the setting of online convex optimization, and illustrates that the dynamic regret can be upper bounded by the adaptive regret and the functional variation.

\section{From Adaptive to Dynamic}
In this section, we first introduce a general theorem that bounds the dynamic regret by the adaptive regret, and then derive specific regret bounds for convex functions, exponentially concave functions, and strongly convex functions.
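Before developing the conversion, we illustrate the two regularities that drive the analysis: the functional variation $V_T$ of (\ref{eqn:func:var}) and the path-length $\mathcal{P}$. The following toy Python sketch (ours; the quadratic losses, the domain $[0,1]$, and the grid approximation of the maximum are purely illustrative) computes both quantities for a short sequence:
\begin{verbatim}
# Toy illustration (ours): functional variation V_T and path-length P
# for f_t(w) = (w - theta_t)^2 on Omega = [0, 1]; the sup over Omega
# is approximated on a finite grid.

grid = [i / 100 for i in range(101)]
theta = [0.0, 0.1, 0.1, 0.5]            # minimizers w_t^* = theta_t

def f(t, w):
    return (w - theta[t]) ** 2

T = len(theta)
V_T = sum(max(abs(f(t, w) - f(t - 1, w)) for w in grid)
          for t in range(1, T))
P = sum(abs(theta[t] - theta[t - 1]) for t in range(1, T))

print(V_T, P)   # here V_T = 0.75 and P = 0.5
\end{verbatim}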
\subsection{Adaptive-to-Dynamic Conversion}
Let $\mathcal{I}_1=[s_1, q_1], \mathcal{I}_2 = [s_2, q_2], \ldots, \mathcal{I}_k=[s_k, q_k]$ be a partition of $[1,T]$. That is, they are successive intervals such that
\begin{equation} \label{eqn:intervel}
\begin{split}
s_1=1, \ q_i +1 = s_{i+1}, \ i \in [k-1], \textrm{ and } \ q_k=T.
\end{split}
\end{equation}
Define the local functional variation of the $i$-th interval as
\[ V_T(i) = \sum_{t=s_i+1}^{q_i} \max_{\mathbf{w} \in \Omega} |f_t(\mathbf{w}) - f_{t-1}(\mathbf{w})|, \]
and it is obvious that $\sum_{i=1}^k V_T(i) \leq V_T$.\footnote{Note that in certain cases, the sum of local functional variations $\sum_{i=1}^k V_T(i)$ can be much smaller than the total functional variation $V_T$. For example, when the sequence of functions only changes $k$ times, we can construct the intervals based on the changing rounds such that $\sum_{i=1}^k V_T(i)=0$.} Then, we have the following theorem for bounding the dynamic regret in terms of the strongly adaptive regret and the functional variation.
\begin{thm} \label{thm:1}
Let $\mathbf{w}_t^* \in \argmin_{\mathbf{w} \in \Omega} f_t(\mathbf{w})$. We have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq \min_{\mathcal{I}_1,\ldots,\mathcal{I}_k} \sum_{i=1}^k \big( \SAReg(T,|\mathcal{I}_i|) + 2 |\mathcal{I}_i| \cdot V_T(i) \big),
\end{split}
\]
where the minimization is taken over any sequence of intervals that satisfies (\ref{eqn:intervel}).
\end{thm}
The above theorem is analogous to Proposition 2 of \citet{Non-Stationary}, which provides an upper bound for a special choice of the interval sequence. The main difference is that there is a minimization operation in our bound, which allows us to get rid of the issue of parameter selection. For a specific class of problems, we can plug in the corresponding upper bound on the strongly adaptive regret, and then choose any sequence of intervals to obtain a concrete upper bound. In particular, the choice of the intervals may depend on the (possibly unknown) functional variation.

Before proceeding to specific bounds, we introduce the following common assumption.
\begin{ass}\label{ass:1}
Both the gradient and the domain are bounded.
\begin{compactitem}
\item The gradients of all the online functions are bounded by $G$, i.e., $\max_{\mathbf{w} \in \Omega}\|\nabla f_t(\mathbf{w})\| \leq G$ for all $f_t$.
\item The diameter of the domain $\Omega$ is bounded by $B$, i.e., $\max_{\mathbf{w}, \mathbf{w}' \in \Omega}\|\mathbf{w} -\mathbf{w}'\| \leq B$.
\end{compactitem}
\end{ass}

\subsection{Convex Functions}
For convex functions, we choose the meta-algorithm of \citet{Improved:Strongly:Adaptive} and take online gradient descent as its subroutine. The following theorem regarding the adaptive regret can be obtained from that paper.
\begin{thm} \label{thm:2}
Under Assumption~\ref{ass:1}, the meta-algorithm of \citet{Improved:Strongly:Adaptive} is strongly adaptive with
\[
\begin{split}
\SAReg(T,\tau) \leq \left(\frac{12 BG}{\sqrt{2}-1} +8 \sqrt{7 \log T + 5} \right) \sqrt{\tau} = O(\sqrt{\tau \log T} ).
\end{split}
\]
\end{thm}
From Theorems~\ref{thm:1} and \ref{thm:2}, we derive the following bound for the dynamic regret.
\begin{cor} \label{cor:convex}
Under Assumption~\ref{ass:1}, the meta-algorithm of \citet{Improved:Strongly:Adaptive} satisfies
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq & \max\left\{\begin{split} & (c +9 \sqrt{7 \log T + 5}) \sqrt{T} \\ & \frac{(c +8 \sqrt{5} ) T^{2/3} V_T^{1/3} }{\log^{1/6} T} + 24 T^{2/3} V_T^{1/3} \log^{1/3} T \end{split} \right.\\
= & O \left( \max\left\{ \sqrt{T \log T} , T^{2/3} V_T^{1/3} \log^{1/3} T \right\} \right),
\end{split}
\]
where $c=12 BG/(\sqrt{2}-1)$.
\end{cor}
According to Theorem 2 of \citet{Non-Stationary}, we know that the minimax dynamic regret of convex functions is $O(T^{2/3} V_T^{1/3})$. Thus, our upper bound is minimax optimal up to a polylogarithmic factor. The key advantage of the meta-algorithm of \citet{Improved:Strongly:Adaptive} over the restarted online gradient descent of \citet{Non-Stationary} is that the former does not need any prior knowledge of the functional variation $V_T$. Notice that the meta-algorithm of \citet{Adaptive:ICML:15} can also be used here, and its dynamic regret is on the order of $\max\left\{ \sqrt{T} \log T, T^{2/3} V_T^{1/3} \log^{2/3} T \right\}$.

\subsection{Exponentially Concave Functions}
We first provide the definition of exponentially concave (abbr.~exp-concave) functions \citep{bianchi-2006-prediction}.
\begin{definition}
A function $f(\cdot): \Omega \mapsto \mathbb{R}$ is $\alpha$-exp-concave if $\exp(-\alpha f(\cdot))$ is concave over the domain $\Omega$.
\end{definition}
Exponential concavity is stronger than convexity but weaker than strong convexity. It can be used to model many popular losses used in machine learning, such as the square loss in regression, the logistic loss in classification, and the negative logarithmic loss in portfolio management~\citep{Sto:Exp:Con}. For exp-concave functions, \citet{Adaptive:Hazan} have developed two meta-algorithms that take the online Newton step as their subroutine, and proved the following properties.
\begin{compactitem}
\item The inefficient one has $O(T)$ computational complexity per iteration, and its weakly adaptive regret is $O(\log T)$.
\item The efficient one has $O(\log T)$ computational complexity per iteration, and its weakly adaptive regret is $O(\log^2 T)$.
\end{compactitem}
As can be seen, there is a tradeoff between the computational complexity and the weakly adaptive regret: lighter computation incurs a looser bound, and a tighter bound requires heavier computation. In Section~\ref{sec:adaptive:alg}, we develop a unified approach, i.e., Algorithm~\ref{alg:1}, that allows us to trade effectiveness for efficiency explicitly. Lemma~\ref{lem:ending} indicates that the proposed algorithm has
\[ \left(\lfloor \log_K T \rfloor+1 \right) (K-1)=O\left( \frac{K \log T}{\log K} \right) \]
computational complexity per iteration, where $K$ is a tunable parameter. On the other hand, Theorem~\ref{thm:adaptive} implies that for $\alpha$-exp-concave functions that satisfy Assumption~\ref{ass:1}, the strongly adaptive regret of Algorithm~\ref{alg:1} is
\[ \left(\frac{(5d+1) \bar{m} + 2}{\alpha} + 5d \bar{m} GB \right) \log T = O\left( \frac{\log^2 T}{\log K}\right), \]
where $d$ is the dimensionality and $\bar{m}=\lfloor \log_K T \rfloor + 1$. We list several choices of $K$ and the resulting theoretical guarantees in Table~\ref{sample-table}, and have the following observations.
\begin{compactitem}
\item When $K=2$, we recover the guarantee of the efficient algorithm of \citet{Adaptive:Hazan}, and when $K=T$, we obtain the inefficient one.
\item By setting $K=T^{1/\gamma}$, where $\gamma>1$ is a small constant such as $10$, the strongly adaptive regret can be viewed as $O(\log T)$, and at the same time, the computational complexity is also very low for a large range of $T$.
\end{compactitem}

\begin{table}[t]
\caption{Efficiency and Effectiveness Tradeoff}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lll}
\hline
$K$ &Complexity & Adaptive Regret\\
\hline
$2$ & $O(\log T)$ & $O(\log^2 T)$ \\
$T^{1/\gamma}$ & $O(\gamma T^{1/\gamma} )$ & $O(\gamma \log T)$ \\
$T$ & $O(T)$ & $O(\log T)$ \\
\hline
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

According to Definition~\ref{def:strongly:adaptive}, Algorithm~\ref{alg:1} in this paper, as well as the two meta-algorithms of \citet{Adaptive:Hazan}, is strongly adaptive. Based on Theorem~\ref{thm:1}, we derive the dynamic regret of the proposed algorithm.
\begin{cor} \label{cor:exp}
Let $K=T^{1/\gamma}$, where $\gamma>1$ is a small constant. Suppose Assumption~\ref{ass:1} holds, $\Omega \subset \mathbb{R}^d$, and all the functions are $\alpha$-exp-concave. Algorithm~\ref{alg:1}, with online Newton step as its subroutine, is strongly adaptive with
\[
\begin{split}
\SAReg(T,\tau) \leq \left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB \right) \log T =O\left(\gamma \log T\right)=O\left( \log T\right)
\end{split}
\]
and its dynamic regret satisfies
\[
\begin{split}
&\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \\
\leq & \left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB +2\right) \max\left\{ \log T, \sqrt{T V_T \log T} \right\} \\
= & O \left( \max\left\{ \log T, \sqrt{T V_T \log T} \right\} \right).
\end{split}
\]
\end{cor}
To the best of our knowledge, this is the first time that such a dynamic regret bound has been established for exp-concave functions. Furthermore, the discussion in Section~\ref{sec:strongly} implies that our upper bound is minimax optimal, up to a polylogarithmic factor.

\subsection{Strongly Convex Functions}
\label{sec:strongly}
In the following, we study strongly convex functions, defined below.
\begin{definition}
A function $f(\cdot): \Omega \mapsto \mathbb{R}$ is $\lambda$-strongly convex if
\[ f(\mathbf{y}) \geq f(\mathbf{x}) + \left \langle \nabla f(\mathbf{x}), \mathbf{y} -\mathbf{x} \right \rangle + \frac{\lambda}{2} \|\mathbf{y} -\mathbf{x} \|_2^2, \ \forall \mathbf{x}, \mathbf{y} \in \Omega. \]
\end{definition}
It is easy to verify that strongly convex functions with bounded gradients are also exp-concave \citep{ML:Hazan:2007}.
\begin{lemma} \label{lem:strongly}
Suppose $f(\cdot): \Omega \mapsto \mathbb{R}$ is $\lambda$-strongly convex and $\|\nabla f(\mathbf{w})\| \leq G$ for all $\mathbf{w} \in \Omega$. Then, $f(\cdot)$ is $\frac{\lambda}{G^2}$-exp-concave.
\end{lemma}
Thus, Corollary~\ref{cor:exp} can be directly applied to strongly convex functions, and yields a dynamic regret of $O(\sqrt{T V_T \log T})$. According to Theorem 4 of \citet{Non-Stationary}, the minimax dynamic regret of strongly convex functions is $O(\sqrt{T V_T})$, which implies that our upper bound is almost minimax optimal. A limitation of Corollary~\ref{cor:exp} is that the constant in the upper bound depends on the dimensionality $d$. In the following, we show that when the functions are strongly convex and online gradient descent is used as the subroutine of Algorithm~\ref{alg:1}, both the adaptive and dynamic regrets are independent of $d$.
\begin{cor} \label{cor:strong:convex}
Let $K=T^{1/\gamma}$, where $\gamma>1$ is a small constant. Suppose Assumption~\ref{ass:1} holds, and all the functions are $\lambda$-strongly convex. Algorithm~\ref{alg:1}, with online gradient descent as its subroutine, is strongly adaptive with
\[
\begin{split}
\SAReg(T,\tau) \leq \frac{G^2}{2\lambda} \big(\gamma+1+ (3 \gamma+7) \log T \big ) = O\left(\gamma \log T\right)=O\left( \log T\right)
\end{split}
\]
and its dynamic regret satisfies
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq &\max\left\{\begin{split} & \frac{\gamma G^2 }{\lambda} + \left(\frac{5 \gamma G^2}{\lambda } +2 \right) \log T \\ & \frac{\gamma G^2 }{\lambda} \sqrt{\frac{T V_T}{\log T} }+ \left(\frac{5 \gamma G^2}{\lambda } +2 \right)\sqrt{T V_T \log T} \end{split} \right.\\
= & O \left( \max\left\{ \log T, \sqrt{T V_T \log T} \right\} \right).
\end{split}
\]
\end{cor}

\section{A Unified Adaptive Algorithm}
\label{sec:adaptive:alg}
In this section, we introduce a unified approach for minimizing the adaptive regret of exp-concave functions, as well as strongly convex functions.

Let $E$ be an online learning algorithm that is designed to minimize the static regret of exp-concave functions or strongly convex functions, e.g., online Newton step \citep{ML:Hazan:2007} or online gradient descent \citep{zinkevich-2003-online}. Similar to the approach of following the leading history (FLH) \citep{Adaptive:Hazan}, at any time $t$, we instantiate an expert by applying the online learning algorithm $E$ to the sequence of loss functions $f_t,f_{t+1},\ldots$, and utilize the strategy of learning from expert advice to combine the solutions of different experts \citep{Herbster1998}. Our method is named improved following the leading history (IFLH), and is summarized in Algorithm~\ref{alg:1}.

Let $E^t$ be the expert that starts to work at time $t$. To control the computational complexity, we associate an ending time $e^t$ with each $E^t$. The expert $E^t$ is alive during the period $[t, e^t-1]$. In each round $t$, we maintain a working set of experts $\mathcal{S}_t$, which contains all the alive experts, and assign a probability $p_t^j$ to each $E^j \in \mathcal{S}_t$. In Steps~6 and 7, we remove all the experts whose ending times are no larger than $t$. Since the number of alive experts has changed, we need to update the probabilities assigned to them, which is performed in Steps~12 to 14. In Steps~15 and 16, we add a new expert $E^t$ to $\mathcal{S}_t$, calculate its ending time according to Definition~\ref{def:ending} introduced below, and set $p_t^t = \frac{1}{t}$. It is easy to verify that $\sum_{E^j \in \mathcal{S}_t} p_t^j=1$. Let $\mathbf{w}_t^j$ be the output of $E^j$ at the $t$-th round, where $t \geq j$. In Step~17, we submit the weighted average of the $\mathbf{w}_t^j$ with coefficients $p_t^j$ as the output $\mathbf{w}_t$, and suffer the loss $f_t(\mathbf{w}_t)$. From Steps~18 to 25, we use the exponential weighting scheme to update the weight of each expert $E^j$ based on its loss $f_t(\mathbf{w}_{t}^j)$. In Step~21, we pass the loss function to all the alive experts so that they can update their predictions for the next round.

\begin{algorithm}[t]
\caption{Improved Following the Leading History (IFLH)}
\label{alg:1}
\begin{algorithmic}[1]
\STATE {\bf Input:} An integer $K$
\STATE Initialize $\mathcal{S}_0 = \emptyset$.
\FOR{$t = 1, \ldots, T$}
\STATE Set $Z_t = 0$
\LCOMMENT{Remove some existing experts}
\FOR{$E^j \in \mathcal{S}_{t-1}$} \label{step:1}
\IF{$e^j \leq t$}\label{step:2}
\STATE Update $\mathcal{S}_{t-1} \leftarrow \mathcal{S}_{t-1} \setminus \{E^j\}$ \label{step:3}
\ELSE \label{step:4}
\STATE Set $Z_t = Z_t + \widehat{p}_t^j$ \label{step:5}
\ENDIF \label{step:6}
\ENDFOR
\LCOMMENT{Normalize the probability}
\FOR{$E^j \in \mathcal{S}_{t-1}$} \label{step:7}
\STATE Set $p_t^j = \frac{\widehat{p}_t^j}{Z_t}\left(1 - \frac{1}{t}\right)$ \label{step:8}
\ENDFOR \label{step:9}
\LCOMMENT{Add a new expert $E^t$}
\STATE Set $\mathcal{S}_t = \mathcal{S}_{t-1} \cup \{E^t\}$ \label{step:10}
\STATE Compute the ending time $e^t=\mathcal{E}_K(t)$ according to Definition~\ref{def:ending} and set $p_t^t = \frac{1}{t}$ \label{step:11}
\LCOMMENT{Compute the final predicted model}
\STATE Submit the solution
\[ \mathbf{w}_t = \sum_{E^j \in \mathcal{S}_t} p_t^j \mathbf{w}_t^j \]
and suffer loss $f_t(\mathbf{w}_t)$ \label{step:12}
\LCOMMENT{Update weights and expert}
\STATE Set $Z_{t+1} = 0$ \label{step:13}
\FOR{$E^j \in \mathcal{S}_t$}
\STATE Compute $p_{t+1}^j = p_t^j \exp(-\alpha f_t(\mathbf{w}_{t}^j))$ and $Z_{t+1} = Z_{t+1} + p_{t+1}^j$
\STATE Pass the function $f_t(\cdot)$ to $E^j$ \label{step:14}
\ENDFOR
\FOR{$E^j \in \mathcal{S}_t$}
\STATE Set $\widehat{p}_{t+1}^j = \frac{p_{t+1}^j}{Z_{t+1}}$
\ENDFOR \label{step:15}
\ENDFOR
\end{algorithmic}
\end{algorithm}

The difference between our IFLH and the original FLH is how to decide the ending time $e^t$ of expert $E^t$. In this paper, we propose the following base-$K$ ending time.
\begin{definition}[Base-$K$ Ending Time] \label{def:ending}
Let $K \geq 2$ be an integer, and write the representation of $t$ in the base-$K$ number system as
\[ t= \sum_{\tau \geq 0} \alpha_\tau K^\tau, \]
where $0 \leq \alpha_\tau <K$ for all $\tau \geq 0$. Let $k$ be the smallest integer such that $\alpha_k > 0$, i.e.,
\[ k = \min\{\tau:\alpha_\tau > 0\}. \]
Then, the base-$K$ ending time of $t$ is defined as
\[ \mathcal{E}_K(t)=\sum_{\tau \geq k+1} \alpha_\tau K^\tau + K^{k+1}. \]
In other words, the ending time is the number represented by the new sequence obtained by setting the first nonzero element in the sequence $\alpha_0,\alpha_1,\ldots$ to $0$ and adding $1$ to the element after it.
\end{definition}
Let us take the decimal system as an example (i.e., $K=10$). Then,
\[
\begin{split}
&\mathcal{E}_{10}(1)=\mathcal{E}_{10}(2)=\cdots =\mathcal{E}_{10}(9)=10,\\
&\mathcal{E}_{10}(11)=\mathcal{E}_{10}(12)=\cdots=\mathcal{E}_{10}(19)=20,\\
&\mathcal{E}_{10}(10)=\mathcal{E}_{10}(20)=\cdots=\mathcal{E}_{10}(90)=100.\\
\end{split}
\]
We note that a similar strategy for deciding the ending time was proposed by \citet{Track_Large_Expert}, and a discussion of the difference is given in the supplementary. When the base-$K$ ending time is used in Algorithm~\ref{alg:1}, we have the following properties.
\begin{lemma} \label{lem:ending}
Suppose we use the base-$K$ ending time in Algorithm~\ref{alg:1}.
\begin{compactenum}
\item For any $t \geq 1$, we have
\[ |\mathcal{S}_t| \leq \left(\lfloor \log_K t \rfloor+1 \right) (K-1)=O\left( \frac{K \log t}{\log K} \right). \]
\item For any interval $I = [r, s] \subseteq [T]$, we can always find $m$ segments
\[ I_j = [t_j, e^{t_j}-1], \ j \in [m], \]
with $m \leq \lfloor \log_K s \rfloor + 1$, such that
\[ t_1=r, \ e^{t_j}=t_{j+1}, \ j \in [m-1], \textrm{ and } e^{t_m} > s. \]
\end{compactenum}
\end{lemma}
The first part of Lemma~\ref{lem:ending} implies that the size of $\mathcal{S}_t$ is $O(K \log t/\log K)$.
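To make Definition~\ref{def:ending} concrete, the following short Python sketch (ours; the names are illustrative) computes $\mathcal{E}_K(t)$ by zeroing out the lowest nonzero base-$K$ digit of $t$ and incrementing the digit above it, and reproduces the decimal examples given above.
\begin{verbatim}
# Illustrative implementation (ours) of the base-K ending time.

def ending_time(t, K):
    k = 0
    while t % K**(k + 1) == 0:    # alpha_0 = ... = alpha_k = 0 so far
        k += 1                    # k ends at the lowest nonzero digit
    return (t - t % K**(k + 1)) + K**(k + 1)

# Decimal examples from the text (K = 10):
assert ending_time(7, 10) == 10
assert ending_time(15, 10) == 20
assert ending_time(90, 10) == 100
\end{verbatim}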
An example of $\mathcal{S}_t$ in the decimal system is given below.
\[
\mathcal{S}_{486}=\left\{ \begin{split} &481, \ 482, \ \ldots, \ 486, \\ &410, \ 420, \ \ldots, \ 480,\\ &100, \ 200, \ \ldots, \ 400 \end{split} \right\} .
\]
The second part of Lemma~\ref{lem:ending} implies that for any interval $I=[r, s]$, we can find $O(\log s/\log K)$ experts such that their survival periods cover $I$. Again, we present an example in the decimal system: the interval $[111, 832]$ can be covered by
\[
[111, 119], \ [120, 199], \textrm{ and } [200, 999]
\]
which are the survival periods of experts $E^{111}$, $E^{120}$, and $E^{200}$, respectively. Recall that $\mathcal{E}_{10}(111)=120$, $\mathcal{E}_{10}(120)=200$, and $\mathcal{E}_{10}(200)=1000$. Based on Lemma~\ref{lem:ending}, we have the following theorem regarding the adaptive regret of exp-concave functions.
\begin{thm} \label{thm:adaptive} Suppose Assumption~\ref{ass:1} holds, $\Omega \subset \mathbb{R}^d$, and all the functions are $\alpha$-exp-concave. If online Newton step is used as the subroutine in Algorithm~\ref{alg:1}, we have, for any interval $[r,s] \subseteq [T]$,
\[
\begin{split}
\sum_{t=r}^s f_t(\mathbf{w}_t) - \min\limits_{\mathbf{w} \in \Omega} \sum_{t=r}^s f_t(\mathbf{w}) \leq \left(\frac{(5d+1) m + 2}{\alpha} + 5d mGB \right) \log T
\end{split}
\]
where $m \leq \lfloor \log_K s \rfloor + 1$. Thus,
\[
\begin{split}
\SAReg(T,\tau) \leq \left(\frac{(5d+1) \bar{m} + 2}{\alpha} + 5d \bar{m} GB \right) \log T = O\left( \frac{\log^2 T}{\log K}\right)
\end{split}
\]
where $\bar{m}=\lfloor \log_K T \rfloor + 1$.
\end{thm}
From Lemma~\ref{lem:ending} and Theorem~\ref{thm:adaptive}, we observe that the adaptive regret is a decreasing function of $K$, while the computational cost is an increasing function of $K$. Thus, we can control the tradeoff by tuning the value of $K$. For strongly convex functions, we have a similar guarantee but without any dependence on the dimensionality $d$, as indicated below.
\begin{thm} \label{thm:strong:convex} Suppose Assumption~\ref{ass:1} holds, and all the functions are $\lambda$-strongly convex. If online gradient descent is used as the subroutine in Algorithm~\ref{alg:1}, we have, for any interval $[r,s] \subseteq [T]$,
\[
\begin{split}
\sum_{t=r}^s f_t(\mathbf{w}_t) - \min\limits_{\mathbf{w} \in \Omega} \sum_{t=r}^s f_t(\mathbf{w}) \leq \frac{G^2}{2\lambda} \big(m+ (3 m +4) \log T\big )
\end{split}
\]
where $m \leq \lfloor \log_K s \rfloor + 1$. Thus,
\[
\begin{split}
\SAReg(T,\tau) \leq \frac{G^2}{2\lambda} \big(\bar{m}+ (3\bar{m} +4) \log T \big)= O\left( \frac{\log^2 T}{\log K}\right)
\end{split}
\]
where $\bar{m}=\lfloor \log_K T \rfloor + 1$.
\end{thm}
\section{Analysis}
We present the proofs of the main theorems here; the omitted proofs are provided in the supplementary.
\subsection{Proof of Theorem~\ref{thm:1}}
First, we upper bound the dynamic regret in the following way
\begin{equation} \label{eqn:thm1:1}
\begin{split}
&\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \\
= & \sum_{i=1}^k \left(\sum_{t=s_i}^{q_i} f_t(\mathbf{w}_t) - \sum_{t=s_i}^{q_i} \min_{\mathbf{w} \in \Omega} f_t(\mathbf{w}) \right)\\
= & \sum_{i=1}^k \left( \underbrace{\sum_{t=s_i}^{q_i} f_t(\mathbf{w}_t) - \min_{\mathbf{w} \in \Omega} \sum_{t=s_i}^{q_i} f_t(\mathbf{w})}_{:=a_i} +\underbrace{\min_{\mathbf{w} \in \Omega} \sum_{t=s_i}^{q_i} f_t(\mathbf{w})- \sum_{t=s_i}^{q_i} \min_{\mathbf{w} \in \Omega} f_t(\mathbf{w})}_{:=b_i} \right).
\end{split}
\end{equation}
From the definition of strongly adaptive regret, we can upper bound $a_i$ by
\[
\sum_{t=s_i}^{q_i} f_t(\mathbf{w}_t) - \min_{\mathbf{w} \in \Omega} \sum_{t=s_i}^{q_i} f_t(\mathbf{w}) \leq \SAReg(T,|\mathcal{I}_i|).
\]
To upper bound $b_i$, we follow the analysis of Proposition 2 of \citet{Non-Stationary}.
\begin{equation} \label{eqn:thm1:bt:1}
\begin{split}
&\min_{\mathbf{w} \in \Omega} \sum_{t=s_i}^{q_i} f_t(\mathbf{w})- \sum_{t=s_i}^{q_i} \min_{\mathbf{w} \in \Omega} f_t(\mathbf{w}) = \min_{\mathbf{w} \in \Omega} \sum_{t=s_i}^{q_i} f_t(\mathbf{w})- \sum_{t=s_i}^{q_i} f_t(\mathbf{w}_t^*) \\
\leq & \sum_{t=s_i}^{q_i} f_t(\mathbf{w}_{s_i}^*)- \sum_{t=s_i}^{q_i} f_t(\mathbf{w}_t^*) \leq |\mathcal{I}_i| \cdot \max_{t \in [s_i,q_i]} \left( f_t(\mathbf{w}_{s_i}^*)- f_t(\mathbf{w}_t^*) \right).
\end{split}
\end{equation}
Furthermore, for any $t \in [s_i,q_i]$, we have
\begin{equation} \label{eqn:thm1:bt:2}
\begin{split}
&f_t(\mathbf{w}_{s_i}^*)- f_t(\mathbf{w}_t^*) = f_t(\mathbf{w}_{s_i}^*)- f_{s_i}(\mathbf{w}_{s_i}^*) + f_{s_i}(\mathbf{w}_{s_i}^*) - f_t(\mathbf{w}_t^*) \\
\leq & f_t(\mathbf{w}_{s_i}^*)- f_{s_i}(\mathbf{w}_{s_i}^*) + f_{s_i}(\mathbf{w}_t^*) - f_t(\mathbf{w}_t^*) \leq 2 V_T(i).
\end{split}
\end{equation}
Combining (\ref{eqn:thm1:bt:1}) with (\ref{eqn:thm1:bt:2}), we have
\[
\min_{\mathbf{w} \in \Omega} \sum_{t=s_i}^{q_i} f_t(\mathbf{w})- \sum_{t=s_i}^{q_i} \min_{\mathbf{w} \in \Omega} f_t(\mathbf{w}) \leq 2 |\mathcal{I}_i| \cdot V_T(i).
\]
Substituting the upper bounds of $a_i$ and $b_i$ into (\ref{eqn:thm1:1}), we arrive at
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq \sum_{i=1}^k \left( \SAReg(T,|\mathcal{I}_i|)+ 2 |\mathcal{I}_i| \cdot V_T(i) \right).
\end{split}
\]
Since the above inequality holds for any partition of $[1,T]$, we can minimize over all partitions to obtain the tightest bound.
\subsection{Proof of Corollary~\ref{cor:convex}}
To simplify the upper bound in Theorem~\ref{thm:1}, we restrict ourselves to intervals of the same length $\tau$ (for simplicity, assume that $\tau$ divides $T$), in which case $k=T/\tau$. Then, we have
\[
\begin{split}
& \DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \\
\leq & \min_{1 \leq \tau \leq T} \sum_{i=1}^k \big( \SAReg(T,\tau) + 2 \tau V_T(i) \big) = \min_{1 \leq \tau \leq T} \left( \frac{ \SAReg(T,\tau) T}{\tau} + 2 \tau \sum_{i=1}^k V_T(i) \right)\\
\leq & \min_{1 \leq \tau \leq T} \left( \frac{ \SAReg(T,\tau) T}{\tau} + 2 \tau V_T \right).
\end{split}
\]
Combining with Theorem~\ref{thm:2}, we have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq \min_{1 \leq \tau \leq T} \left( \frac{(c +8 \sqrt{7 \log T + 5}) T }{\sqrt{\tau}} + 2 \tau V_T \right).
\end{split}
\]
where $c=12 BG/(\sqrt{2}-1)$. In the following, we consider two cases. If $V_T \geq \sqrt{\log T/T}$, we choose
\[
\tau = \left( \frac{T \sqrt{\log T}}{V_T} \right)^{2/3} \leq T
\]
and have
\[
\begin{split}
&\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \\
\leq & \frac{(c +8 \sqrt{7 \log T + 5} ) T^{2/3} V_T^{1/3} }{\log^{1/6} T} + 2 T^{2/3} V_T^{1/3} \log^{1/3} T\\
\leq & \frac{(c +8 \sqrt{5} ) T^{2/3} V_T^{1/3} }{\log^{1/6} T} + (2+ 8\sqrt{7}) T^{2/3} V_T^{1/3} \log^{1/3} T.
\end{split}
\]
Otherwise, we choose $\tau=T$, and have
\[
\begin{split}
&\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \\
\leq &(c +8 \sqrt{7 \log T + 5}) \sqrt{T} + 2 T V_T \\
\leq & (c +8 \sqrt{7 \log T + 5}) \sqrt{T} + 2 T \sqrt{\frac{\log T}{T}} \\
\leq & (c +9 \sqrt{7 \log T + 5}) \sqrt{T}.
\end{split}
\]
\section{Proof of Theorem~\ref{thm:adaptive}}
From the second part of Lemma~\ref{lem:ending}, we know that there exist $m$ segments
\[
I_j = [t_j, e^{t_j}-1], \ j \in [m]
\]
with $m \leq \lfloor \log_K s \rfloor + 1$, such that
\[
t_1=r, \ e^{t_j}=t_{j+1}, \ j \in [m-1], \textrm{ and } e^{t_m} > s.
\]
Furthermore, the expert $E^{t_j}$ is alive during the period $[t_j, e^{t_j}-1]$. Using Claim 3.1 of \citet{Hazan:2009:ELA}, we have
\[
\begin{split}
\sum_{t = t_j}^{e^{t_j}-1} f_t(\mathbf{w}_t) - f_t(\mathbf{w}^{t_j}_t) \leq \frac{1}{\alpha}\left(\log t_j + 2\sum_{t = t_j + 1}^{e^{t_j}-1}\frac{1}{t}\right), \ \forall j \in [m-1]
\end{split}
\]
where $\mathbf{w}^{t_j}_{t_j}, \ldots, \mathbf{w}^{t_j}_{e^{t_j}-1}$ is the sequence of solutions generated by the expert $E^{t_j}$. Similarly, for the last segment, we have
\[
\begin{split}
\sum_{t = t_{m}}^{s} f_t(\mathbf{w}_t) - f_t(\mathbf{w}^{t_{m}}_t) \leq \frac{1}{\alpha}\left(\log t_{m} + 2\sum_{t = t_{m} + 1}^{s}\frac{1}{t}\right).
\end{split}
\]
Summing these inequalities, we obtain
\begin{equation} \label{eqn:bound-1}
\begin{split}
& \sum_{j=1}^{m-1} \left(\sum_{t = t_j}^{e^{t_j}-1} f_t(\mathbf{w}_t) - f_t(\mathbf{w}^{t_j}_t) \right) + \sum_{t = t_{m}}^{s} f_t(\mathbf{w}_t) - f_t(\mathbf{w}^{t_{m}}_t) \\
\leq & \frac{1}{\alpha}\sum_{j=1}^m \log t_j + \frac{2}{\alpha} \sum_{t=r+1}^s \frac{1}{t} \leq \frac{m + 2}{\alpha} \log T .
\end{split}
\end{equation}
According to the property of online Newton step \citep[Theorem 2]{ML:Hazan:2007}, we have, for any $\mathbf{w} \in \Omega$,
\begin{equation} \label{eqn:bound-2}
\sum_{t = t_j}^{e^{t_j}-1} f_t(\mathbf{w}^{t_j}_t) - f_t(\mathbf{w}) \leq 5d \left(\frac{1}{\alpha} +GB \right)\log T, \ \forall j \in [m-1]
\end{equation}
and
\begin{equation} \label{eqn:bound-3}
\sum_{t = t_{m}}^{s} f_t(\mathbf{w}^{t_m}_t) - f_t(\mathbf{w}) \leq 5d \left(\frac{1}{\alpha} +GB \right)\log T.
\end{equation}
Combining (\ref{eqn:bound-1}), (\ref{eqn:bound-2}), and (\ref{eqn:bound-3}), we have
\[
\begin{split}
\sum_{t=r}^s f_t(\mathbf{w}_t) - \sum_{t=r}^s f_t(\mathbf{w}) \leq \left(\frac{(5d+1) m + 2}{\alpha} + 5d mGB \right) \log T
\end{split}
\]
for any $\mathbf{w} \in \Omega$.
\section{Proof of Corollary~\ref{cor:exp}}
The first part of Corollary~\ref{cor:exp} is a direct consequence of Theorem~\ref{thm:adaptive} by setting $K=T^{1/\gamma}$. Now, we prove the second part. Following an analysis similar to that of Corollary~\ref{cor:convex}, we have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq \min_{1 \leq \tau \leq T} \left\{ \left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB \right) \frac{ T \log T }{\tau} + 2 \tau V_T \right\}.
\end{split}
\]
Then, we consider two cases. If $V_T \geq \log T/T$, we choose
\[
\tau = \sqrt{ \frac{T \log T }{V_T}} \leq T
\]
and have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq \left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB +2\right)\sqrt{T V_T \log T} .
\end{split}
\]
Otherwise, we choose $\tau=T$, and have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq & \left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB \right) \log T + 2 T V_T \\
\leq &\left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB \right) \log T + 2 T \frac{\log T}{T}\\
= &\left(\frac{(5d+1) (\gamma+1) + 2}{\alpha} + 5d (\gamma+1) GB +2\right) \log T .
\end{split}
\]
\section{Proof of Theorem~\ref{thm:strong:convex}}
Lemma~\ref{lem:strongly} implies that all the $\lambda$-strongly convex functions are also $\frac{\lambda}{G^2}$-exp-concave. As a result, we can reuse the proof of Theorem~\ref{thm:adaptive}. Specifically, (\ref{eqn:bound-1}) with $\alpha=\frac{\lambda}{G^2}$ becomes
\begin{equation} \label{eqn:strong:convex:1}
\begin{split}
\sum_{j=1}^{m-1} \left(\sum_{t = t_j}^{e^{t_j}-1} f_t(\mathbf{w}_t) - f_t(\mathbf{w}^{t_j}_t) \right) + \sum_{t = t_{m}}^{s} f_t(\mathbf{w}_t) - f_t(\mathbf{w}^{t_{m}}_t) \leq \frac{(m + 2)G^2}{\lambda} \log T .
\end{split}
\end{equation}
According to the property of online gradient descent \citep[Theorem 1]{ML:Hazan:2007}, we have, for any $\mathbf{w} \in \Omega$,
\begin{equation} \label{eqn:strong:convex:2}
\sum_{t = t_j}^{e^{t_j}-1} f_t(\mathbf{w}^{t_j}_t) - f_t(\mathbf{w}) \leq \frac{G^2}{2\lambda} (1+\log T), \ \forall j \in [m-1]
\end{equation}
and
\begin{equation} \label{eqn:strong:convex:3}
\sum_{t = t_{m}}^{s} f_t(\mathbf{w}^{t_m}_t) - f_t(\mathbf{w}) \leq \frac{G^2}{2\lambda} (1+\log T).
\end{equation}
Combining (\ref{eqn:strong:convex:1}), (\ref{eqn:strong:convex:2}), and (\ref{eqn:strong:convex:3}), we have
\[
\begin{split}
\sum_{t=r}^s f_t(\mathbf{w}_t) - \sum_{t=r}^s f_t(\mathbf{w}) \leq \frac{G^2}{2\lambda} \big(m+ (3 m +4) \log T \big)
\end{split}
\]
for any $\mathbf{w} \in \Omega$.
\section{Proof of Corollary~\ref{cor:strong:convex}}
The first part of Corollary~\ref{cor:strong:convex} is a direct consequence of Theorem~\ref{thm:strong:convex} by setting $K=T^{1/\gamma}$. The proof of the second part is similar to that of Corollary~\ref{cor:exp}. First, we have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq & \min_{1 \leq \tau \leq T} \left\{ \frac{G^2}{2\lambda} \big(\gamma+1+ (3 \gamma+7) \log T \big) \frac{T}{\tau} + 2 \tau V_T \right\}\\
\leq & \min_{1 \leq \tau \leq T} \left\{ \frac{( \gamma+ 5 \gamma \log T )G^2 T}{\lambda \tau} + 2 \tau V_T \right\}
\end{split}
\]
where the last inequality is due to the condition $\gamma >1$. Then, we consider two cases. If $V_T \geq \log T/T$, we choose
\[
\tau = \sqrt{ \frac{T \log T }{V_T}} \leq T
\]
and have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq & \frac{\gamma G^2 }{\lambda} \sqrt{\frac{T V_T}{\log T} }+ \frac{5 \gamma G^2}{\lambda } \sqrt{T V_T \log T} + 2 \sqrt{T V_T \log T} \\
=&\frac{\gamma G^2 }{\lambda} \sqrt{\frac{T V_T}{\log T} }+ \left(\frac{5 \gamma G^2}{\lambda } +2 \right)\sqrt{T V_T \log T} .
\end{split}
\]
Otherwise, we choose $\tau=T$, and have
\[
\begin{split}
\DReg(\mathbf{w}_1^*,\ldots,\mathbf{w}_T^*) \leq & \frac{( \gamma+ 5 \gamma \log T )G^2}{\lambda } + 2 T V_T \\
\leq & \frac{( \gamma+ 5 \gamma \log T )G^2}{\lambda } +2 T \frac{\log T}{T} \\
=& \frac{\gamma G^2 }{\lambda} + \left(\frac{5 \gamma G^2}{\lambda } +2 \right) \log T.
\end{split}
\]
\section{Conclusions and Future Work}
In this paper, we demonstrate that the dynamic regret can be upper bounded by the adaptive regret and the functional variation, which implies that strongly adaptive algorithms are automatically equipped with tight dynamic regret bounds. As a result, we are able to derive dynamic regret bounds for convex functions, exponentially concave functions, and strongly convex functions. All of these upper bounds are almost minimax optimal, and this is the first time that such a dynamic regret bound has been established for exponentially concave functions.
The adaptive-to-dynamic conversion leads to a series of dynamic regret bounds in terms of the functional variation. As we mentioned in Section~\ref{sec:dynamic}, dynamic regret can also be upper bounded by other regularities such as the path-length. It is interesting to investigate whether those kinds of upper bounds can also be established for strongly adaptive algorithms. Since we derive dynamic regret from adaptive regret, we conjecture that adaptive regret is more fundamental, and will try to give a rigorous proof in the future.
{ "timestamp": "2017-02-23T02:06:56", "yymm": "1701", "arxiv_id": "1701.07570", "language": "en", "url": "https://arxiv.org/abs/1701.07570", "abstract": "To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently. In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms that have small dynamic regrets for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity is utilized to upper bound the dynamic regret. Moreover, all of those adaptive algorithms do not need any prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.", "subjects": "Machine Learning (cs.LG)", "title": "Dynamic Regret of Strongly Adaptive Methods", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759621310289, "lm_q2_score": 0.7217432062975978, "lm_q1q2_score": 0.7079405618886899 }
https://arxiv.org/abs/2011.03925
The algebra of binary trees is affine complete
A function on an algebra is congruence preserving if, for any congruence, it maps pairs of congruent elements onto pairs of congruent elements. We show that on the algebra of binary trees whose leaves are labeled by letters of an alphabet containing at least three letters, a function is congruence preserving if and only if it is polynomial.
\section{Introduction}
A function on an algebra is congruence preserving if, for any congruence, it maps pairs of congruent elements onto pairs of congruent elements. A polynomial function on an algebra is a function defined by a term of the algebra using variables, constants and the operations of the algebra. Obviously, every polynomial function is congruence preserving. Algebras where all congruence preserving functions are polynomial functions are called {\em affine complete} in the terminology introduced by \cite{werner1971}. They are extensively studied in the book by \cite{KaarliPixley}. In the commutative case, many algebras have been shown to be affine complete: Boolean algebras \citep{gratzer1962}, $p$-rings with unit \citep{iskander1972}. For distributive lattices, \cite{plos} described congruence preserving functions, and \cite{gratzer1964} determined which distributive lattices are affine complete. Affine completeness is an intrinsic property of an algebra, and it fails to hold even for very simple algebras: e.g., in $\+A = \langle \Z, + \rangle$, the function $f \colon \Z \to \Z$ defined by
\[
f(x) = \texttt{if $x\geq0$ then $\dfrac{\Gamma(1/2)}{2\times4^x\times x!} \int_1^\infty e^{-t/2}(t^2-1)^x dt$ else $-f(-x)$.}
\]
has been proved to be congruence preserving \citep{cgg15}, but it is {\em not a polynomial function} because its power series is infinite. Hence $\+A = \langle \Z, + \rangle$ is not affine complete. In the non commutative case, very little is known about affine complete algebras. In \cite{acgg20} we proved that the free monoid $\Sigma^*$ is an associative non commutative affine complete algebra if $\Sigma$ has at least three letters, together with a partial result concerning a non commutative and non associative algebra: every {\em unary congruence preserving} function $f \colon T(\Sigma) \to T(\Sigma)$ is a polynomial function, where $T(\Sigma)$ is the algebra of {\em full} binary trees with leaves labelled by letters of an alphabet $\Sigma$ having at least three letters. We here generalize this result by proving that a congruence preserving function $f \colon \+T(\Sigma)^n \to \+T(\Sigma)$ of any arity $n$ is a polynomial function, where $\+T(\Sigma)$ is the algebra of arbitrary (possibly non full) binary trees with labelled leaves. This generalization is twofold: (1) non full binary trees are allowed in $\+T(\Sigma)$, and (2) congruence preserving functions of arbitrary arity are allowed. This exhibits an example of a non commutative and non associative affine complete algebra. Non commutative and non associative algebras are of constant use in Computer Science, and congruences are also very often used, whence the potential usefulness of our result. We first define binary trees and their congruences; we then establish conditions which enable us to prove that every congruence preserving function is a polynomial function, and finally prove the affine completeness of $\+T(\Sigma)$.
\section{The algebra of binary trees}
\subsection{Trees, congruences}
For an algebra $\+A$ with domain $ A$, a {\em congruence} $\sim$ on $\+ A$ is an equivalence relation on $A$ which is compatible with the operations of $\+A$. We state the characterization of congruences by kernels of homomorphisms.
\begin{lemma}\label{l:ker} Let $\+A = \langle A\,,\, \star \rangle$ be an algebra with a binary operation $\star$.
An equivalence $\sim$ on $A$ is a congruence iff there exist an algebra $\+B = \langle B\,,\, * \rangle$ with a binary operation $*$ and a homomorphism $\theta \colon A \to B$ such that $\sim$ coincides with the kernel congruence $\ker(\theta)$ of $\theta$, defined by $x \sim_\theta y$ iff $\theta(x) = \theta(y)$.
\end{lemma}
Let $\Sigma$ be an alphabet disjoint from $\{0,1\}$. We shall represent the algebra of binary trees over $\Sigma$, i.e., trees with leaves labeled by letters of $\Sigma$, as a set $\+T(\Sigma)$ of sets of words on the alphabet $\Sigma \cup \{0,1\}$, together with the binary product operation $\star$.
\begin{definition}\label{l:BTLL} The algebra $\+ {B} = \langle \+T(\Sigma), \star \rangle$ of binary trees over $\Sigma$ is defined as follows.
\begin{itemize}
\item A binary tree over $\Sigma$ is a finite set of words $t\subseteq \set{0,1}^*\Sigma$ such that for any $ua, vb \in t$ with $ua \neq vb$, $u$ is not a prefix of $v$ and $v$ is not a prefix of $u$. The carrier set $\+T(\Sigma)$ is the set of all binary trees. The empty set $\emptyset$ is a binary tree denoted by ${\bf 0}$.
\item The binary product operation $\star$ is defined by: for $t,\; t' \in \+T(\Sigma)$, $t \star t' = 0.t \cup 1.t'$. In particular, ${\bf 0} \star {\bf 0} = {\bf 0}$.
\end{itemize}
\end{definition}
\begin{figure}
\centering
% tree drawings omitted
\caption{ \protect\small \emph{ From left to right, $t = \{00b,1a\}$, $\tau = \{0c,1d\}$, $t_1 = {\GS{a}{\tau}}(t) = \{00b,10c,11d\}$, $t_2 = \{00b,01c,11d\}$, $t_3 = \{00a,10b,11c\}$. Trees $t_1, \ t_2, \ t_3$ have the same size 6; trees $t_1$ and $t_3$ are similar (have the same skeleton). }}
\label{fig:tree}
\end{figure}
When the alphabet $\Sigma$ is clear, we will denote by $\+T$ the set of all binary trees. Trees are generated by $\set{{\bf 0}} \cup \Sigma$ and the operation $\star$. An essential property of this algebra $\+B$ is that its elements are uniquely decomposable.
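As a quick illustration of this word encoding (a sketch of ours, not taken from the literature), the trees of Figure~\ref{fig:tree} can be manipulated directly as finite sets of words; the helper \texttt{decompose} anticipates the unique decomposition stated in the lemma below.
\begin{verbatim}
def star(t1, t2):
    # The product t1 * t2 = 0.t1 U 1.t2; the empty frozenset is the tree 0.
    return frozenset('0' + w for w in t1) | frozenset('1' + w for w in t2)

def size(t):
    # |t| = number of nodes = number of distinct positions in {0,1}^*
    # obtained as prefixes of the leaf positions (labels stripped).
    positions = set()
    for w in t:
        u = w[:-1]                      # drop the leaf label
        positions.update(u[:i] for i in range(len(u) + 1))
    return len(positions)

def decompose(t):
    # For t not in {0} U Sigma, the unique pair (t1, t2) with t = t1 * t2.
    return (frozenset(w[1:] for w in t if w[0] == '0'),
            frozenset(w[1:] for w in t if w[0] == '1'))

t, tau = frozenset({'00b', '1a'}), frozenset({'0c', '1d'})
assert star(*decompose(t)) == t and size(t) == 4
assert size(star(t, tau)) == size(t) + size(tau) + 1
\end{verbatim}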
\begin{lemma}[Unicity of decomposition]\label{uniquedecompo} If $t$ is a tree not in $\set{{\bf 0}} \cup \Sigma$ then there exists a unique ordered pair $\pair{t_1, t_2} \neq \pair{{\bf 0}, {\bf 0}}$ in $\+T^2$ such that $t = t_1 \star t_2$.
\end{lemma}
This property allows us to associate with each $t\in \+T$ its {\em size} $|t|$ (number of nodes):
\begin{itemize}
\item $|{\bf 0}| = 0$, and for all $a \in \Sigma$, $|a| = 1$;
\item if $t \notin \set{{\bf 0}} \cup \Sigma$ then $t = t_1 \star t_2$ and $|t| = |t_1| + |t_2| + 1$.
\end{itemize}
\medskip
If $|t| > 1$ then there exist $t_1, t_2 $ with $|t_i| < |t|$ such that $t = t_1\star t_2$. Trees $t \star\ t'$, ${\bf 0} \star t'$, $t \star {\bf 0}$ are trees whose root has two sons, a single right son, a single left son, respectively. See Figure~\ref{fig:tree}.
\subsection{Homomorphisms, graftings}
\begin{lemma}\label{l:ref1} Let $\+B = \langle B\,,\, * \rangle$ be an algebra with a binary operation $*$. Every mapping $h \colon \Sigma \to B$ can be uniquely extended to a homomorphism $h \colon \+T \to B$.
\end{lemma}
\begin{remark}
1) Because of the universal property of Lemma \ref{l:ref1}, homomorphisms are (uniquely) defined by giving their values on $\Sigma$.
2) For every endomorphism $h$, $h({\bf 0}) = {\bf 0}$. Indeed, as ${\bf 0} = {\bf 0} \star {\bf 0}$, $h({\bf 0}) = h({\bf 0}) \star h({\bf 0})$; if $h({\bf 0}) = t$ with $|t| \geq 1$ then $t = t \star t$ implies $|t| = 2|t| + 1$, a contradiction.
\end{remark}
\begin{definition} For a given $a \in \Sigma$, let $\nu_a$ be the endomorphism sending every letter of $\Sigma$ to $a$. If for some $a \in \Sigma$, $\nu_a(t) = \nu_a(t')$, trees $t$ and $t'$ are said to be {\em similar}, which is denoted by $t \sim_s t'$.
\end{definition}
Note that the congruence $\sim_s$ does not depend on the choice of the letter $a \in \Sigma$ since $\nu_b(t) = \nu_b(\nu_a(t))$. From an intuitive viewpoint, $t \sim_s t'$ means that $t$ and $t'$ have the same skeleton, i.e., they are identical except for the leaf labels. See Figure~\ref{fig:tree}. Other congruences fundamental for our proof are the kernels of the grafting endomorphisms, defined below.
\begin{definition}[Grafting]\label{grafting} Let $a \in \Sigma$ and $\tau \in \+T$. Then the grafting ${\GS{a}{\tau}} \colon \+T \to \+T$ is the endomorphism defined by its restriction on $\Sigma$
\[
{\GS{a}{\tau}}(b) = \begin{cases} \tau & \text{ if }b = a,\\ b &\text{ if }b \neq a. \end{cases}
\]
\end{definition}
In other words, for any $a \in \Sigma$ and any $\tau \in \+T$, $\GS{a}{\tau}$ is the endomorphism sending the letter $a$ to $\tau$ and each other letter to itself. An endomorphism $h$ of $\langle \+T(\Sigma), \star \rangle$ is {\em idempotent} if for every $t \in \+T$, $h(h(t)) = h(t)$. By Lemma \ref{l:ref1}, $h$ is idempotent iff for every $a \in \Sigma$, $h(h(a)) = h(a)$. For instance, if $a$ does not occur in $\tau$ then $\GS{a}{\tau}$ is idempotent.
\begin{proposition}\label{p:G2} Let $\tau \in \+T$, let $t,\ t' \in \+T$, and let $a_1 \neq a_2$ be two letters in $\Sigma$. If $ {\GS{a_i}{\tau}}(t) = {\GS{a_i}{\tau}}(t')$ for $i = 1, 2$, then $t = t'$.
\end{proposition}
\begin{proof} By induction on $\min(|t|, |t'|)$.
{\em Basis Case 0:} If $\min(|t|, |t'|) = 0$ then one of $t, t'$ is ${\bf 0}$, say $t = {\bf 0}$. If $t' \neq {\bf 0}$ then $t'$ contains at least one occurrence of some letter $b$. As $\GS{a_i}{\tau}(t') = \GS{a_i}{\tau}(t) = \GS{a_i}{\tau}({\bf 0}) = {\bf 0}$, we have $\GS{a_i}{\tau}(t') = {\bf 0}$, which implies (because $t' \neq {\bf 0}$ by assumption) that $\tau = {\bf 0}$.
Then $\GS{a_i}{\tau}(t') = {\bf 0}$ implies that all leaves of $t'$ are equal to both $a_1$ and $a_2$, a contradiction. Hence $t' = {\bf 0}$ and $t = t'$.
{\em Basis Case 1:} If $\min(|t|, |t'|) = 1$ then $t$ or $t'$ is a letter, say $t = b$, and there is one $i$, say $i = 1$, such that $a_1 \neq b$, thus $b = {\GS{a_1}{\tau}}(t) = {\GS{a_1}{\tau}}(t')$.
\begin{itemize}
\item If $t'$ is a letter $c\neq b$, then ${\GS{a_1}{\tau}}(c) = b$. If $c = a_1$ then $b = {\GS{a_1}{\tau}}(c) = \tau$. Since $\GS{a_2}{\tau}(c) = c$ and $\GS{a_2}{\tau}(c) = \GS{a_2}{\tau}(b) \in \set {\tau,b} = \set {b}$, we have that $c = b$, a contradiction. If $c \neq a_1$ then ${\GS{a_1}{\tau}}(c) = c \neq b$, contradicting ${\GS{a_1}{\tau}}(c) = b$. Hence $t' = t = b$.
\item If $|t'| > 1$ then $t' = t'_1 \star t'_2$, and ${\GS{a_1}{\tau}(t')} = {\GS{a_1}{\tau}(t'_1)} \star {\GS{a_1}{\tau}(t'_2)}$ which can only be of size $0$ or $\geq 2$, contradicting ${\GS{a_1}{\tau}}(t') = b$; this case is thus excluded.
\end{itemize}
\smallskip
{\em Induction:} If $\min(|t|, |t'|) > 1$ then $t = t_1 \star t_2$ and $t' = t'_1 \star t'_2$ with $\min(|t_i|, |t'_i|) < \min(|t|, |t'|)$, for $i = 1, 2$. By Lemma \ref{uniquedecompo}, ${\GS{a_j}{\tau}}(t_1) \star {\GS{a_j}{\tau}}(t_2) = {\GS{a_j}{\tau}}(t'_1) \star {\GS{a_j}{\tau}}(t'_2)$ implies ${\GS{a_j}{\tau}}(t_i) = {\GS{a_j}{\tau}}(t'_i)$, for $j = 1, 2$. By the induction hypothesis $t_i = t'_i$, hence $t = t'$.
\end{proof}
\begin{proposition}\label{p:tt'sim} Let $|\Sigma| \geq 3$, fix $a \in \Sigma$, and let $t,\ t' \in \+T$ be such that $t\sim_s t'$.
(1) If, for some $\tau \in \+T$ of size $|\tau| \neq 1$, ${\GS{a}{\tau}}(t) = {\GS{a}{\tau}}(t')$, then $t = t'$.
(2) If, for all $b \neq a$, $b \in \Sigma$, ${\GS{a}{b}}(t) = {\GS{a}{b}}(t')$, then $t = t'$.
\end{proposition}
\begin{proof} Both (1) and (2) are proved by induction on $|t| = |t'|$, and in both cases, the result obviously holds if $t = t' = {\bf 0}$.
\medskip
\noindent {\em Basis:} Suppose $|t| = |t'| =1$.
(1) We assume that $t = b \neq c = t'$.
\noindent (i) If $a \not\in \{b,c\}$ then ${\GS{a}{\tau}}(t) = b \neq c = {\GS{a}{\tau}}(t')$, a contradiction.
\noindent (ii) Otherwise, $a \in \{b,c\}$, e.g., $a = b = t$, then ${\GS{a}{\tau}}(t) = {\GS{a}{\tau}}(a) = \tau$ and ${\GS{a}{\tau}}(t') = {\GS{a}{\tau}}(c) = c$, hence $\tau = c$, which contradicts $|\tau| \neq 1$.
(2) We assume that $t = b \neq c = t'$.
\noindent (i) The case $a \not\in \{b, c\}$ yields a contradiction as in case (1).
\noindent (ii) Otherwise, e.g., $a = b$, there exists $d \not\in \{a, c\}$, and we get $ {\GS{a}{d}}(t) = {\GS{a}{d}}(a) = d$ and ${\GS{a}{d}}(t') = {\GS{a}{d}}(c) = c$, contradicting ${\GS{a}{d}}(t) = {\GS{a}{d}}(t')$.
\medskip
\noindent {\em Induction:} As in Proposition \ref{p:G2} in both cases: since $t$ and $t'$ are similar, $t = t_1 \star t_2 $ and $t' = t'_1 \star t'_2$ with $t_i$ similar to $t'_i$ and $|t_i| = |t'_i| < |t| = |t'|$.
\end{proof}
\subsection{Congruence preserving functions on trees} \label{sec:congr-pres-funct}
\begin{definition}\label{def1:cp} A function $f \colon \+T^n \to \+T$ is {\em congruence preserving} (abbreviated into CP) if for all congruences $\sim$ on~$\+T$, for all $t_1, \ldots, t_n,\ t'_1, \ldots, t'_n$ in $\+T$, $t_i \sim t'_i$ for all $i = 1, \ldots, n$, implies $f(t_1, \ldots, t_n) \sim f(t'_1, \ldots, t'_n)$.
\end{definition}
\begin{remark}\label{r:hfh=fh}
(1) It follows from Lemma \ref{l:ker} that CP functions are characterized by the fact that for all homomorphisms $h$ from $\pair{{\mathcal T}, \star}$ to any algebra $\pair{A, \star_A}$, $h(t_i) = h(t'_i)$ for all $i =1, \ldots, n$, implies $h(f(t_1, \ldots, t_n)) = h( f(t'_1, \ldots, t'_n))$.
(2) If $f$ is CP and the endomorphism $h$ is idempotent, then $h(f(t_1, \ldots, t_n)) = h(f(h(t_1), \ldots, h(t_n)))$. Indeed, let $\sim_h$ be the congruence associated with $h$; for $i = 1, \ldots, n$ we have $h(t_i) = h(h(t_i))$, hence $t_i \sim_h h(t_i)$, whence the result.
\end{remark}
\noindent We will show that congruence preserving functions on the algebra $\langle \+T(\Sigma), \star \rangle$ are polynomial functions. Let us first formally define polynomials on trees.
\begin{definition} Let $x_1, \ldots, x_n \not \in \Sigma$ be called {\em variables}. A {\em polynomial} $P(x_1, \ldots, x_n)$ is a tree on the alphabet $\Sigma \cup \{x_1, \ldots, x_n\}$. With every polynomial $P(x_1, \ldots, x_n)$ we associate a {\em polynomial function} $\tilde P \colon \+T^n \to \+T$ defined by: for any $\vec u = \langle t_1, \ldots, t_i, \ldots, t_n \rangle \in \+T^n$,
$\tilde P(\vec u) = \left\{ \begin{array}{ll} P & \mbox{if $P = {\bf 0}$ or $P \in \Sigma$}\\ t_i& \mbox{if $P = x_i$}\\ \widetilde {P_1}(\vec u) \star \widetilde {P_2}(\vec u)& \mbox{if $P = P_1 \star P_2$} \end{array} \right.$
\end{definition}
Obviously, every polynomial function is CP. Our goal is to prove the converse, namely:
\begin{theorem}\label{E} Let $|\Sigma| \geq 3$. If $g \colon \+T^n \to \+T$ is CP then there exists a polynomial $P_g$ such that $g = \widetilde {P_g}$.
\end{theorem}
\section{Equality of CP functions} \label{sec:equal-cp-funct}
\begin{notation} For any $f \colon {\mathcal T}^n \to {\mathcal T}$, we denote by $f\restrict {\Sigma^n}$ its restriction to $\Sigma^n$.
\end{notation}
In this section we prove that if $f$ and $g$ are two CP functions, then $f\restrict{\Sigma^n} = g\restrict{\Sigma^n}$ implies $f = g$, {\em provided that $\Sigma$ contains at least three letters.}
\begin{lemma}\label{similar} Suppose $\Sigma$ has at least three letters. If $f$ and $g$ are unary CP functions on $ \+T$ such that for all $a \in \Sigma$, $f(a) = g(a)$, then for all $t \in \+T$, $f(t)$ and $g(t)$ are similar.
\end{lemma}
\begin{proof} We have to show that $\nu_a(f(t)) = \nu_a(g(t))$ for some $a \in \Sigma$ and for all $t$. As $\nu_a$ is idempotent and $f$ is CP, by Remark \ref{r:hfh=fh}~(2), $\nu_a(f(t)) = \nu_a(f(\nu_a(t)))$, and similarly for $g$. Hence it suffices to prove $f(\nu_a(t)) = g(\nu_a(t))$. Let $b_1, \ b_2 \in \Sigma$ be such that $a,\ b_1,\ b_2$ are pairwise distinct. As $\GS{b_i}{\nu_a(t)}$ is idempotent, by Remark \ref{r:hfh=fh}~(2), we have $\GS{b_i}{\nu_a(t)}(f(b_i)) = \GS{b_i}{\nu_a(t)}(f(\nu_a(t)))$. The same holds for $g$, i.e., $\GS{b_i}{\nu_a(t)}(g(b_i)) = \GS{b_i}{\nu_a(t)}(g(\nu_a(t)))$. From $f(b_i) = g(b_i)$, we deduce that $\GS{b_i}{\nu_a(t)}(f(\nu_a(t))) = \GS{b_i}{\nu_a(t)}(g(\nu_a(t)))$. This equality holds for $i = 1, 2$, thus Proposition \ref{p:G2} implies that $f(\nu_a(t)) = g(\nu_a(t))$.
\end{proof}
\noindent The following proposition shows that a unary CP function $f$ is completely determined by its values on $\Sigma$.
\begin{proposition} \label{unary} Suppose $\Sigma$ has at least three letters. If $f$ and $g$ are unary CP functions on $\+T$ such that for all $a \in \Sigma$, $f(a) = g(a)$, then for all $t \in \+T$, $f(t) = g(t)$.
\end{proposition}
\begin{proof} Let $a$ be a letter that occurs in $t$. For any other letter $b$, the endomorphisms $\GS{a}{b}$ and $\GS{a}{t_b}$ are idempotent, where $t_b = \GS{a}{b}(t)$. Thus by Remark \ref{r:hfh=fh}~(2), $\GS{a}{t_b}(f(a)) = \GS{a}{t_b}(f(t_b))$, and $\GS{a}{t_b}(g(a)) = \GS{a}{t_b}(g(t_b))$. As $f(a) = g(a)$ we have $\GS{a}{t_b}(f(t_b)) = \GS{a}{t_b}(g(t_b))$. By Lemma \ref{similar}, $f(t_b)$ and $g(t_b)$ are similar, and by Proposition \ref{p:tt'sim} (1), $f(t_b) = g(t_b)$ (observe that $|t_b| = |t|$; if $|t| = 1$ then $t \in \Sigma$ and the conclusion is immediate, so we may assume $|t_b| \neq 1$). On the other hand, as $f$ and $g$ are CP and $t \sim_{\GS{a}{b}} t_b$, we get $\GS{a}{b}(f(t)) = \GS{a}{b}(f(t_b))$ and $\GS{a}{b}(g(t)) = \GS{a}{b}(g(t_b))$, hence $\GS{a}{b}(f(t)) = \GS{a}{b}(g(t)) $. As this is true for all $b \neq a$, we have by Proposition \ref{p:tt'sim} (2), $f(t) = g(t)$.
\end{proof}
Proposition \ref{unary} can now be generalized.
\begin{notation} For any function $f \colon \+T^{n+1} \to \+T$, any $t \in \+T$, and $\vec u = \langle t_1, \ldots, t_n \rangle$, we define
\noindent (1) an $n$-ary function $f_{\cdots,t}$ obtained by ``freezing'' the $(n{+}1)$-th argument to the value $t$, and defined by: for all $\vec u \in \+T^{n}$, $f_{\cdots,t}(\vec u) = f(\vec u,t)$,
\noindent (2) a unary function $f_{\vec u,\cdot}$ obtained by ``freezing'' the $n$ first arguments to the value $\vec u =\langle t_1, \ldots, t_n \rangle$, and defined by: for all $t\in\+T$, $f_{\vec u,\cdot}(t) = f(\vec u,t)$.
\end{notation}
\begin{proposition}\label{andre} Let $f$ and $g$ be $n$-ary CP functions on $\+T$ such that for all $a_1, \ldots, a_n \in \Sigma$, $f(a_1, \ldots, a_n) = g(a_1, \ldots, a_n)$; then for all $t_1, \ldots, t_n \in \+T$, $f(t_1, \ldots, t_n) = g(t_1, \ldots, t_n)$.
\end{proposition}
\begin{proof} By induction on $n$. For $n = 1$ the result was proved in Proposition \ref{unary}. Assume the result holds for $n$. By the hypothesis, for all $a_1, \ldots, a_n , a \in \Sigma$, we have $f(a_1, \ldots, a_n, a) = g(a_1, \ldots, a_n, a)$, i.e., $f_{\cdots,a}(a_1, \ldots, a_n) = g_{\cdots,a}(a_1, \ldots, a_n)$. By the induction hypothesis applied to $f_{\cdots,a}$, for all $\vec u \in \+T^n$, $f_{\cdots, a}(\vec u) = g_{\cdots, a}(\vec u)$, or equivalently $f_{\vec u, \cdot}(a) = g_{\vec u, \cdot}(a)$. Applying Proposition \ref{unary} to $f_{\vec u, \cdot}$ and $g_{\vec u, \cdot}$ then yields $f_{\vec u, \cdot}(t) = g_{\vec u, \cdot}(t)$ for all $t$, hence $f(\vec u, t) = g(\vec u, t)$.
\end{proof}
\section{The algebra of binary trees is affine complete}
To prove that any CP function is a polynomial function, as a consequence of Proposition~\ref{andre} and of the fact that a polynomial function is CP, it is enough to show that the restriction $f\restrict{\Sigma^n}$ of $f \colon {\mathcal T}^n \to {\mathcal T}$ to $\Sigma^n$ is equal to the restriction $\tilde{P}\restrict{\Sigma^n}$ of an $n$-ary polynomial function. For such restricted functions we introduce a weakened version WCP of the CP condition, namely:
\begin{definition}\label{d:WCP} A function $g \colon \+T^n \to \+T$ is said to be WCP iff for any idempotent mapping $h \colon \Sigma \to \Sigma$, $\forall \vec u, \vec v \in \Sigma^n$, $h(\vec u) = h(\vec v) \implies h(g(\vec u)) = h(g(\vec v))$, where for $\vec u = \langle u_1, \ldots, u_n \rangle$, $h(\vec u)$ denotes $\langle h(u_1), \ldots, h(u_n) \rangle$.
\end{definition}
Every CP function is clearly WCP.
\begin{lemma} If $g$ is WCP then for all $\vec u, \vec v \in \Sigma^n$, $g(\vec u)$ and $g(\vec v)$ are similar.
\end{lemma}
\begin{proof} As $\nu_a(\vec u) = \nu_a(\vec v) = \langle a,\ldots, a \rangle$ for all $\vec u, \vec v \in \Sigma^n$ and $g$ is WCP, $\nu_a(g(\vec u)) = \nu_a(g(\vec v))$.
\end{proof}
We often use a different form of the condition WCP, which deals only with alphabetic graftings.
\begin{proposition} A function $g$ is WCP if and only if
(GCP) (G for graftings) for all $a_1, a_2,\ldots, a_n \in \Sigma$, $i \in \{1, \ldots, n\}$ and $b_i \in \Sigma$, $\GS{a_i}{b_i}(g(a_1, \ldots, a_n)) = \GS{a_i}{b_i}(g(a_1, \ldots, a_{i-1}, b_i, a_{i+1}, \ldots, a_n))$.
\end{proposition}
\begin{proof} Since $\GS{a_i}{b_i}( a_1, \ldots, a_n) = \GS{a_i}{b_i}( a_1, \ldots, a_{i-1}, b_i, a_{i+1}, \ldots, a_n )$, clearly WCP implies GCP. The proof of the converse is by induction on $n$. It is obviously true for $n = 0$. Otherwise, let $h \colon \Sigma \to \Sigma$ be an idempotent mapping, let $\vec u, \vec v \in \Sigma^n$ be such that $h(\vec u ) = h(\vec v)$, and let $a, b \in \Sigma$ be such that $h(a) = h(b)$. By (GCP), we have $\GS{a}{b}(g(\vec u, a)) = \GS{a}{b}(g(\vec u, b))$, hence $h(\GS{a}{b}(g(\vec u, a))) = h(\GS{a}{b}(g(\vec u, b)))$.
\noindent But $h( \GS{a}{b}(c)) = \left\{ \begin{array}{ll} h(c) &\mbox{if $c \neq a$}\\ h(b) = h(a)& \mbox{if $c = a$} \end{array}\right.$ hence $h \circ \GS{a}{b} = h$. Therefore $h(g(\vec u, a)) = h(g(\vec u, b))$, and by the induction hypothesis applied to $g_{\ldots, b}$, $h(g(\vec u, a)) = h(g(\vec u, b)) = h(g(\vec v, b))$.
\end{proof}
Let us first study unary WCP functions whose restriction to $\Sigma$ takes its values in $\Sigma$.
\begin{proposition}\label{p2Andre} Assume $|\Sigma| \geq 3$. Let $f \colon \+T \to \+T$ be WCP and such that $f(\Sigma) \subseteq \Sigma$. Then $f\restrict{\Sigma}$ is either (1) a constant function, or (2) the identity.
\end{proposition}
\begin{proof} If $f$ is not the identity then there exist $a,\ b$ with $a \neq b$ and $f(a) = b$. As $\GS{a}{b}(f(b)) = \GS{a}{b}(f(a)) = \GS{a}{b}(b) = b$, we get $f(b) \in \{a, b\}$. For $c \not\in \{a, b\}$, $\GS{a}{c}(f(c)) = \GS{a}{c}(f(a)) = b$ implies $f(c) = b$. It remains to prove that $f(b) = b$. From $\GS{b}{c}(f(b)) = \GS{b}{c}(f(c)) = c$, we deduce that $f(b) \in \{c, b\}$, hence $f(b) \in \{a, b\} \cap \{c, b\} = \set{b}$, which concludes the proof.
\end{proof}
We will now generalize Proposition \ref{p2Andre} to Proposition \ref{pr3Andre} (replacing a unary $f$ by an $n$-ary $g$).
\begin{proposition}\label{pr3Andre} Assume $|\Sigma| \geq 3$. If $g \colon \+T^n \to \+T$ is WCP and such that $g(\Sigma^n) \subseteq \Sigma$, then $g\restrict{\Sigma^n}$ is either (1) a constant function, or (2) a projection $\pi^n_i$.
\end{proposition}
\begin{proof} The proof is by induction on $n$. By Proposition \ref{p2Andre} it is true for $n = 1$. If $g$ is of arity $n+1$ then, by the induction hypothesis, for any $a \in \Sigma$, the function $g_{\cdots, a}$ of arity $n$ is either a constant or a projection $\pi_i^n$. We first show that these functions are all equal to a given $\pi_i^n$, or all equal to the same constant, or every $g_{\cdots, a}$ is the constant function $a$. Let us assume that $g_{\cdots,a} = \pi_i^n$. Let $\vec u = \pair{a, \ldots, a, c, a,\ldots, a}$ and $\vec v = \pair{a, \ldots, a, d, a, \ldots, a}$, where $c$ and $d$ stand at the $i$-th position and $a ,c, d$ are pairwise distinct, so that for any $b$, $\GS{a}{b}(g(\vec u, a)) = c$ and $\GS{a}{b}(g(\vec v, a)) = d$.
It follows from the GCP condition that $\GS{a}{b}(g(\vec u, a)) = \GS{a}{b}(g(\vec u, b)) = c$ and $\GS{a}{b}(g(\vec v, a)) = \GS{a}{b}(g(\vec v, b)) = d$, which is impossible if $g_{\cdots, b}$ is either a constant or a projection $\pi_j^n$ with $j \neq i$. Hence all the $g_{\cdots, a}$ are equal to $\pi_i^n$, implying $g = \pi_i^{n+1}$.
\smallskip
Assume now that all the $g_{\cdots, a}$ are constant. For every $\vec u, \vec v, a$, we have $g(\vec u, a) = g(\vec v, a)$. We choose an arbitrary $\vec u \in \Sigma^n$, which will remain fixed. By Proposition \ref{p2Andre}, $g_{\vec u, \cdot}$ is either (1) the identity, or (2) a constant $c$. In case (1), for all $ \vec v, a$, $g(\vec u, a) = g(\vec v, a) = a$ and $g = \pi_{n+1}^{n+1}$. In case (2), for all $\vec v, a, b$, $g(\vec u, a) = g(\vec v, b) = c$ and $g$ is a constant.
\end{proof}
As CP functions are WCP, for $g$ a CP function such that for some $a_1, \ldots, a_n \in \Sigma $, $g(a_1, \ldots, a_n) \in \Sigma$, we have shown that there exists a polynomial $P_g$, which is either a constant $a \in \Sigma$ or an $x_i$, such that $g = \widetilde {P_g}$. We now extend this to the case when $g(a_1, \ldots, a_n) \not \in \Sigma$.
\begin{proposition}\label{prop:reduc} Assume that $|\Sigma| \geq 3$. Let $g \colon \+T^n \to \+T$ be WCP. Then there exists a polynomial $P_g$ such that $g\restrict{\Sigma^n} = \widetilde{P_g}\restrict{\Sigma^n}$.
\end{proposition}
\begin{proof} Let $\sigma(g)$ be the common size of all the $g(\vec u), \ \vec u \in \Sigma^n$. The proof is by induction on $\sigma(g)$.
{\em Basis:} If $\sigma(g) = 0 $ then $g\restrict{\Sigma^n} = \widetilde{P_g}\restrict{\Sigma^n} = {\bf 0}$ with $P_g = {\bf 0}$. If $\sigma(g) = 1$ then $g(a_1, \ldots, a_n) \in \Sigma$ and the result is proved in Proposition \ref{pr3Andre}.
{\em Induction:} If $\sigma(g) > 1$ there exist two functions $g_i \colon {\mathcal T}^n \to {\mathcal T}$, $i = 1, 2$, such that for all $\vec u \in \Sigma^n $, $g(\vec u) = g_1(\vec u) \star g_2(\vec u)$, with $\sigma(g_i) < \sigma(g)$. It remains to show that both $g_1$ and $g_2$ are WCP. Let $\vec u, \vec v \in \Sigma^n$ be such that $h(\vec u) = h(\vec v)$ for some mapping $h \colon \Sigma \to \Sigma$. Extend $h$ to an endomorphism $\+T \to \+T$ by Lemma \ref{l:ref1}; then $h(g(\vec u)) = h(g_1(\vec u) \star g_2(\vec u)) = h(g_1(\vec u)) \star h(g_2(\vec u))$. Similarly, $h(g(\vec v)) = h(g_1(\vec v)) \star h(g_2(\vec v))$. As $g$ is WCP and $h(\vec u) = h(\vec v)$, we have $h(g(\vec u)) = h(g(\vec v))$. Then by Lemma \ref{uniquedecompo} (unique decomposition) we get $h(g_i(\vec u)) = h(g_i(\vec v))$ for $i =1, 2$. This is true for any $h$, thus $g_1$ and $g_2$ are WCP. By the induction hypothesis there exists $P_i$ such that $\widetilde{P_i}\restrict{\Sigma^n} = {g_i}\restrict{\Sigma^n}$, hence ${g}\restrict{\Sigma^n} = \widetilde{P_1}\restrict{\Sigma^n}\star \widetilde{P_2}\restrict{\Sigma^n} = \widetilde{P_1 \star P_2}\restrict{\Sigma^n}$.
\end{proof}
\begin{theorem} If $f \colon {\mathcal T}^n\to {\mathcal T}$ is CP then there exists a polynomial $P$ such that $f = \tilde{P}$.
\end{theorem}
\begin{proof} Since $f$ is CP, $f$ is also WCP. By the previous proposition, there exists $P$ such that $f\restrict{\Sigma^n} = \tilde{P}\restrict{\Sigma^n}$, and by Proposition~\ref{andre}, $f = \tilde{P}$.
\end{proof}
\section{Conclusion}
We proved that, when $\Sigma$ has at least three letters, the algebra of arbitrary binary trees with leaves labeled by letters of $\Sigma$ is an affine complete algebra (non commutative and non associative).
\acknowledgments We thank the referees for their comments, which helped to improve the paper. \nocite{*} \bibliographystyle{abbrvnat}
{ "timestamp": "2021-05-18T02:03:00", "yymm": "2011", "arxiv_id": "2011.03925", "language": "en", "url": "https://arxiv.org/abs/2011.03925", "abstract": "A function on an algebra is congruence preserving if, for any congruence, it maps pairs of congruent elements onto pairs of congruent elements. We show that on the algebra of binary trees whose leaves are labeled by letters of an alphabet containing at least three letters, a function is congruence preserving if and only if it is polynomial.", "subjects": "Formal Languages and Automata Theory (cs.FL)", "title": "The algebra of binary trees is affine complete", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759621310288, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405618886899 }
https://arxiv.org/abs/1510.07525
An adaptive finite element method in reconstruction of coefficients in Maxwell's equations from limited observations
We propose an adaptive finite element method for the solution of a coefficient inverse problem of simultaneous reconstruction of the dielectric permittivity and magnetic permeability functions in the Maxwell's system using limited boundary observations of the electric field in 3D. We derive a posteriori error estimates in the Tikhonov functional to be minimized and in the regularized solution of this functional, as well as formulate corresponding adaptive algorithm. Our numerical experiments justify the efficiency of our a posteriori estimates and show significant improvement of the reconstructions obtained on locally adaptively refined meshes.
\section{Introduction} \label{sec:intro}
This work is a continuation of the recent paper \cite{BCN} and is focused on the numerical reconstruction of the dielectric permittivity $\varepsilon(x)$ and the magnetic permeability $\mu(x)$ functions in the Maxwell's system on locally refined meshes using an adaptive finite element method. The reconstruction is performed via minimization of the corresponding Tikhonov functional from backscattered single measurement data of the electric field $E(x,t)$. That means that we use backscattered boundary measurements of the wave field $E(x,t)$ which are generated by a single direction of a plane wave. In the minimization procedure we use the domain decomposition finite element/finite difference methods of \cite{BMaxwell} for the numerical reconstructions of both functions. Compared with \cite{BCN}, we present the following new points here: we adopt results of \cite{BOOK, BKK,KBB} to show that the minimizer of the Tikhonov functional is closer to the exact solution than the initial guess of this solution. We present the relaxation property of mesh refinements for the case of our inverse problem, and we derive a posteriori error estimates for the error in the minimization functional and in the reconstructed functions $\varepsilon(x)$ and $\mu(x)$. Further, we formulate two adaptive algorithms and apply them in the reconstruction of small inclusions. Moreover, in the numerical simulations of this work we impose inhomogeneous initial conditions in the Maxwell's system. Non-zero initial conditions are connected with the uniqueness and stability results for the reconstruction of both unknown functions $\varepsilon(x)$ and $\mu(x)$; see details in \cite{BCN, BCS}. From our numerical simulations we can conclude that an adaptive finite element method can significantly improve reconstructions obtained on a coarse non-refined mesh, accurately recovering the shapes, locations and values of the functions $\varepsilon(x)$ and $\mu(x)$. An outline of this paper is as follows: in Section \ref{sec:model} we present our mathematical model and in Section \ref{sec:stat} we formulate the forward and inverse problems. In Section \ref{sec:tikhonov} we present the Tikhonov functional to be minimized and in Section \ref{sec:spaces} we describe the different versions of the finite element method used in computations. In Section \ref{sec:relax} we formulate the relaxation property of mesh refinements and in Section \ref{sec:general} we investigate the general framework of a posteriori error estimates in coefficient inverse problems (CIPs). In Sections \ref{sec:adaptrelax}, \ref{sec:errorfunc} we present theorems for a posteriori errors in the regularized solution of the Tikhonov functional and in the Tikhonov functional, respectively. In Sections \ref{sec:ref}, \ref{sec:alg} we describe mesh refinement recommendations and formulate the adaptive algorithms used in computations. Finally, in Section \ref{sec:num} we present our reconstruction results.
\section{The mathematical model} \label{sec:model}
Let a bounded domain $\Omega \subset \mathbb{R}^d, d=2,3,$ have Lipschitz boundary $\partial \Omega$ and let us set $\Omega_T := \Omega \times (0,T)$, $\partial \Omega_T := \partial \Omega \times (0,T)$, where $T >0$.
We consider Maxwell's equations in an inhomogeneous isotropic medium in a bounded domain $\Omega \subset \mathbb{R}^3$
\begin{equation} \label{eq:maxwell}
\left \{
\begin{array}{llllll}
\partial_{t} D - \nabla \times H(x,t) = 0 &&\mbox{ in } \Omega_T\\
\partial_{t} B + \nabla \times E(x,t) = 0 && \mbox{ in } \Omega_T,\\
D(x,t)= \varepsilon E(x,t), \quad B(x,t)= \mu H(x,t),&&\\
E(x,0) = E_0(x), \quad H(x,0) = H_0(x), &&\\
\nabla \cdot D(x,t) = 0,\quad \nabla \cdot B(x,t) =0 && \mbox{ in } \Omega_T,\\
n \times D(x,t) =0,\quad n \cdot B(x,t) =0 && \textrm{on }\,\partial \Omega_T,
\end{array}
\right .
\end{equation}
where $x=(x_1,x_2,x_3)$. Here, $E(x,t)$ is the electric field and $H(x,t)$ is the magnetic field, $\varepsilon(x) > 0$ and $\mu(x) >0$ are the dielectric permittivity and the magnetic permeability functions, respectively. $E_0(x)$ and $H_0(x)$ are given initial conditions. Next, $n = n(x)$ is the unit outward normal vector to $\partial \Omega$. The electric field $E(x,t)$ is related to the electric induction $D(x,t)$ via
\begin{equation*}
D(x,t) = \varepsilon E(x,t) = \varepsilon_{\rm vac} \varepsilon_r E(x,t),
\end{equation*}
where $\varepsilon_{\rm vac} \approx 8.854 \times 10^{-12}$ is the vacuum permittivity, measured in farads per meter, and thus $\varepsilon_r$ is the dimensionless relative permittivity. The magnetic field $H(x,t)$ is related to the magnetic induction $B(x,t)$ via
\begin{equation*}
B(x,t) = \mu H(x,t) = \mu_{\rm vac} \mu_r H(x,t),
\end{equation*}
where $\mu_{\rm vac} \approx 1.257 \times 10^{-6}$ is the vacuum permeability, measured in henries per meter, from which it follows that $\mu_r$ is the dimensionless relative permeability. By eliminating $B$ and $D$ from (\ref{eq:maxwell}) we obtain the model problem for the electric field $E$ with the perfectly conducting boundary conditions, which is as follows:
\begin{eqnarray}
\varepsilon \frac{\partial^2 E}{\partial t^2} + \nabla \times ( \mu^{-1} \nabla \times E) &=& 0 ~ \mbox{in}~~ \Omega_T, \label{model1_1} \\
\nabla \cdot (\varepsilon E) &=& 0 ~ \mbox{in}~~ \Omega_T, \label{model1_2} \\
E(x,0) = f_0(x), ~~~E_t(x,0) &=& f_1(x)~ \mbox{in}~~ \Omega, \label{model1_3} \\
E \times n &=& 0 ~ \mbox{on}~~ \partial \Omega_T. \label{model1_4}
\end{eqnarray}
Here we assume that
\begin{equation*}
f_{0}\in H^{1}(\Omega), f_{1}\in L^{2}(\Omega).
\end{equation*}
By this notation we shall mean that every component of the vector functions $f_0$ and $f_1$ belongs to these spaces. Note that equations similar to (\ref{model1_1})-(\ref{model1_4}) can also be derived for the magnetic field $H$. As in our recent work \cite{BCN}, for the discretization of the Maxwell's equations we use a stabilized domain decomposition method of \cite{BMaxwell2}. In our numerical simulations we assume that the relative permittivity $\varepsilon_r$ and the relative permeability $\mu_r$ do not vary much, which is the case in real applications; see the recent experimental work \cite{ BTKB} for similar observations. We do not impose smoothness assumptions on the coefficients $\varepsilon(x), \mu(x)$ and we treat discontinuities in a similar way as in \cite{CWZ14}. Thus, a discontinuous finite element method should be applied for the finite element discretization of these functions, see details in Section \ref{sec:spaces}.
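Before moving on, let us record explicitly the elimination which led to (\ref{model1_1}). Since $\varepsilon$ and $\mu$ do not depend on time, differentiating the first equation of (\ref{eq:maxwell}) in time and substituting $\partial_t H = -\mu^{-1} \nabla \times E$ from the second one yields
\begin{equation*}
\varepsilon \frac{\partial^2 E}{\partial t^2} = \nabla \times \frac{\partial H}{\partial t} = - \nabla \times \left( \mu^{-1} \nabla \times E \right),
\end{equation*}
which is (\ref{model1_1}). The divergence constraint (\ref{model1_2}) then propagates from the initial data, since $\partial_t \nabla \cdot (\varepsilon E) = \nabla \cdot (\nabla \times H) = 0$.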
\section{Statements of forward and inverse problems} \label{sec:stat}
We divide $\Omega$ into two subregions, $\Omega_{\rm FEM}$ and $\Omega_{\rm OUT}$ such that $\overline{\Omega} = \overline{\Omega}_{\rm FEM} \cup \overline{ \Omega}_{\rm OUT}$, $\Omega_{\rm FEM} \cap \Omega_{\rm OUT} = \emptyset$ and $\partial \Omega_{\rm FEM} \subset \partial \Omega_{\rm OUT}$. For an illustration of the domain decomposition, see Figure \ref{fig:fig1}. The boundary $\partial \Omega$ is such that $\partial \Omega =\partial _{1} \Omega \cup \partial _{2} \Omega \cup \partial _{3} \Omega$ where $\partial _{1} \Omega$ and $\partial _{2} \Omega$ are, respectively, the front and back sides of the domain $\Omega$, and $\partial _{3} \Omega$ is the union of the left, right, top and bottom faces of this domain. For the numerical solution of (\ref{model1_1})-(\ref{model1_4}) in $\Omega_{\rm OUT}$ we can use either the finite difference or the finite element method on a structured mesh with constant coefficients $\varepsilon = 1$ and $\mu =1$. In $\Omega_{\rm FEM}$, we use finite elements on a sequence of unstructured meshes $K_h = \{K\}$, with elements $K$ consisting of triangles in $\mathbb{R}^2$ and tetrahedra in $\mathbb{R}^3$ satisfying the maximal angle condition \cite{Brenner}.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
{\includegraphics[width=7.0cm, clip = true, trim = 6.0cm 6.0cm 6.0cm 6.0cm]{common_2layer.eps}} & {\includegraphics[width=7.0cm, clip = true, trim = 6.0cm 6.0cm 6.0cm 6.0cm]{FEMdomain_2layer.eps}} \\
a) Test1: $\Omega = \Omega_{FEM} \cup \Omega_{OUT}$ & b) Test 1: $\Omega_{FEM}$ \\
{\includegraphics[width=7.0cm, clip = true, trim = 6.0cm 6.0cm 6.0cm 6.0cm]{common_1layer.eps}} & {\includegraphics[width=7.0cm, clip = true, trim = 6.0cm 6.0cm 6.0cm 6.0cm]{FEMdomain_1layer.eps}} \\
c) Test 2: $\Omega = \Omega_{FEM} \cup \Omega_{OUT}$ & d) Test 2: $\Omega_{FEM}$ \\
\end{tabular}
\end{center}
\caption{ \protect\small \emph{ Domain decomposition in numerical tests of Section \ref{sec:num}. a), c) The decomposed domain $\Omega= \Omega_{FEM} \cup \Omega_{OUT}$. b), d) The finite element domain $\Omega_{FEM}$. }}
\label{fig:fig1}
\end{figure}
Let $S_T := \partial_1 \Omega \times (0,T)$ where $\partial_1 \Omega$ is the backscattering side of the domain $\Omega$ with the time domain observations, and define by $S_{1,1} := \partial_1 \Omega \times (0,t_1]$, $S_{1,2} := \partial_1 \Omega \times (t_1,T)$, $S_2 := \partial_2 \Omega \times (0, T)$, $S_3 := \partial_3 \Omega \times (0, T)$. To simplify notation, we will further omit the subscript $r$ in $\varepsilon_r$ and $\mu_r$. To stabilize the finite element solution computed with standard piecewise continuous functions, we add a Coulomb-type gauge condition \cite{Ass,div_cor} with parameter $0 \leq s \leq 1$ to (\ref{model1_1})-(\ref{model1_4}), and the model problem which we use in computations then reads
\begin{equation}\label{E_gauge}
\begin{split}
\varepsilon \frac{\partial^2 E}{\partial t^2} + \nabla \times ( \mu^{-1} \nabla \times E) - s\nabla ( \nabla \cdot(\varepsilon E)) &= 0~ \mbox{in}~~ \Omega_T, \\
E(x,0) = f_0(x), ~~~E_t(x,0) &= f_1(x)~ \mbox{in}~~ \Omega, \\
\partial _{n}E& = (0,f\left( t\right),0) ~\mbox{on}~ S_{1,1}, \\
\partial _{n} E& =-\partial _{t} E ~\mbox{on}~ S_{1,2}, \\
\partial _{n} E& =-\partial _{t} E ~\mbox{on}~ S_2, \\
\partial _{n} E& =0 ~\mbox{on}~ S_3, \\
\mu(x)=\varepsilon \left( x\right) &=1\text{ in }\Omega _{\rm OUT}.
\end{split}
\end{equation}
In the recent works \cite{BMaxwell, BCN, BTKB} it was demonstrated numerically that the solution of the problem (\ref{E_gauge}) approximates well the solution of the original Maxwell's system when $1 \leq \mu(x) \leq 2$, $1 \leq \varepsilon(x) \leq 15$ and $s=1$. We assume that the coefficients $\varepsilon \left(x\right), \mu(x) $ of equation (\ref{E_gauge}) are such that
\begin{equation} \label{2.3}
\begin{split}
\varepsilon \left(x\right) &\in \left[ 1,d_1\right],~~ d_1 = const.>1,~ \varepsilon(x) =1 \text{ for }x\in \Omega _{\rm OUT}, \\
\mu(x) &\in \left[ 1,d_2\right],~~ d_2 = const.>1,~ \mu(x) =1 \text{ for }x\in \Omega _{\rm OUT}, \\
\varepsilon \left(x\right), \mu(x) &\in C^{2}\left( \mathbb{R}^{3}\right) .
\end{split}
\end{equation}
In our numerical tests the values of the constants $d_1, d_2$ in (\ref{2.3}) are chosen from the experimental set-up, similarly to \cite{BTKB, SSMS}, and we assume that they are known a priori. This is in agreement with the availability of a priori information for an ill-posed problem \cite{BKS, Engl, tikhonov}. Throughout this work we use the following notation: for any vector function $u$ with values in $\mathbb{R}^3$, when we write $u \in H^k(\Omega)$, $k=1,2$, we mean that every component of the vector function $u$ belongs to this space. We consider the following
\textbf{Inverse Problem (IP) } \emph{Assume that the functions }$\varepsilon\left(x\right)$ and $\mu(x)$ \emph{\ satisfy conditions (\ref{2.3}) for the known }$d_1, d_2 >1$\emph{\ and they are unknown in the domain }$\Omega \backslash \Omega_{\rm OUT}$\emph{. Determine the functions }$ \varepsilon\left(x\right), \mu(x) $\emph{\ for }$x\in \Omega \backslash \Omega_{\rm OUT},$ \emph{\ assuming that the following function }$\tilde E\left(x,t\right) $\emph{\ is known}
\begin{equation}
E \left(x,t\right) = \tilde E \left(x,t\right) ~\forall \left( x,t\right) \in S_T. \label{2.5}
\end{equation}
The function $\tilde E\left(x,t\right)$ in (\ref{2.5}) represents the time-dependent measurements of the electric wave field $E(x,t)$ at the backscattering boundary $\partial_1 \Omega$. In real-life experiments, measurements are performed on a number of detectors, see details in our recent experimental work \cite{BTKB}.
\section{Tikhonov functional} \label{sec:tikhonov}
We reformulate our inverse problem as an optimization problem, where we seek two functions, the permittivity $\varepsilon(x)$ and the permeability $\mu(x)$, which result in a solution of equations (\ref{E_gauge}) that best fits the time and space domain observations $\tilde E$ measured at a finite number of observation points on $\partial_1 \Omega$. Our goal is to minimize the Tikhonov functional
\begin{equation}
\begin{split}
J( \varepsilon, \mu) := J(E, \varepsilon, \mu) &= \frac{1}{2} \int_{S_T}(E - \tilde{E})^2 z_{\delta }(t) d \sigma dt \\
&+\frac{1}{2} \gamma_1 \int_{\Omega}( \varepsilon - \varepsilon_0)^2 dx + \frac{1}{2} \gamma_2 \int_{\Omega}( \mu - \mu_0)^2 dx, \label{functional}
\end{split}
\end{equation}
where $\tilde{E}$ is the observed electric field, $E$ satisfies the equations (\ref{E_gauge}) and thus depends on $\varepsilon$ and $\mu$, $\varepsilon _{0}$ is the initial guess for $\varepsilon $ and $\mu_{0}$ is the initial guess for $\mu$, and $\gamma_i, i=1,2$ are the regularization parameters.
Here, $z_{\delta }(t)$ is a cut-off function, which is introduced to ensure that the compatibility conditions at $\overline{\Omega}_{T}\cap \left\{ t=T\right\} $ for the adjoint problem (\ref{adjoint}) are satisfied, and $\delta >0$ is a small number. The function $z_{\delta }$ can be chosen as in \cite{BCN}.

Next, we introduce the following spaces of real-valued vector functions
\begin{equation}\label{spaces}
\begin{split}
H_E^1 &:= \{ w \in H^1(\Omega_T): w( \cdot , 0) = 0 \}, \\
H_{\lambda}^1 &:= \{ w \in H^1(\Omega_T): w( \cdot , T) = 0\},\\
U^{1} &=H_{E}^{1}(\Omega_T)\times H_{\lambda }^{1}(\Omega_T)\times C\left(\overline{\Omega}\right)\times C\left(\overline{\Omega}\right),\\
U^{0} &=L_{2}\left(\Omega_{T}\right) \times L_{2}\left(\Omega_{T}\right) \times L_{2}\left(\Omega \right)\times L_{2}\left(\Omega \right).
\end{split}
\end{equation}
We also define the $L_2$ inner products and norms over $\Omega_T$ and $\Omega$ as
\begin{equation*}
\begin{split}
((u,v))_{\Omega_T} &= \int_{\Omega} \int_0^T u v dx dt, \\
||u||^2 &= ((u,u))_{\Omega_T}, \\
(u,v)_{\Omega} &= \int_{\Omega} u v dx, \\
|u|^2 &= (u,u)_{\Omega}.
\end{split}
\end{equation*}

To solve the minimization problem we take into account (\ref{2.3}) and introduce the Lagrangian
\begin{equation}\label{lagrangian}
\begin{split}
L(u) &= J(E, \varepsilon, \mu) - \left(\left( \varepsilon \partial_t \lambda, \partial_t E \right)\right)_{\Omega_T} -(\varepsilon \lambda(x,0), f_1(x))_{\Omega} + \left( \left( \mu^{-1}\nabla \times E, \nabla \times \lambda \right) \right)_{\Omega_T} \\
&+ s \left(\left( \nabla \cdot (\varepsilon E), \nabla \cdot \lambda \right) \right)_{\Omega_T} - ((\lambda, p(t) ))_{S_{1,1}} + (( \lambda, \partial_t E ))_{S_{1,2}} + (( \lambda, \partial_t E ))_{S_2}, \\
\end{split}
\end{equation}
where $u=(E,\lambda, \varepsilon, \mu) \in U^1$, $p(t)= (0,f(t), 0)$, and $\partial_t$ denotes the derivative in time. We now search for a stationary point of the Lagrangian with respect to $u$ satisfying for all $\bar{u}= ( \bar{E}, \bar{\lambda}, \bar{\varepsilon}, \bar{\mu}) \in U^1$
\begin{equation}
L'(u; \bar{u}) = 0 , \label{scalar_lagr2}
\end{equation}
where $ L^\prime (u;\cdot )$ is the Jacobian of $L$ at $u$. The equation above can be written as
\begin{equation}
L'(u; \bar{u}) = \frac{\partial L}{\partial \lambda}(u)(\bar{\lambda}) + \frac{\partial L}{\partial E}(u)(\bar{E}) + \frac{\partial L}{\partial \varepsilon}(u)(\bar{\varepsilon}) + \frac{\partial L}{\partial \mu}(u)(\bar{\mu}) = 0. \label{scalar_lagr}
\end{equation}

To find the Fr\'{e}chet derivative (\ref{scalar_lagr}) of the Lagrangian (\ref{lagrangian}) we consider $L(u + \bar{u}) - L(u)$ for all $\bar{u} \in U^1$ and single out the part of the obtained expression that is linear with respect to $ \bar{u}$. In this derivation we assume that the functions $u=(E,\lambda, \varepsilon, \mu) \in U^1$ in the Lagrangian (\ref{lagrangian}) can vary independently of each other. With this approach we obtain the same result as if we assumed that the functions $E$ and $\lambda$ depend on the coefficients $\varepsilon, \mu$; see also Chapter 4 of \cite{BOOK}, where similar observations are made.
Taking into account that $E(x,t)$ is the solution of the forward problem (\ref{E_gauge}), using the assumptions $\lambda (x ,T) = \frac{\partial \lambda}{\partial t} (x,T) =0$ and $\mu=\varepsilon=1$ on $\partial \Omega$, and using conditions (\ref{2.3}), we obtain from (\ref{scalar_lagr}) that for all $\bar{u}$,
\begin{equation}\label{forward}
\begin{split}
0 = \frac{\partial L}{\partial \lambda}(u)(\bar{\lambda}) = &- (( \varepsilon \partial_t \bar{\lambda}, \partial_t E ))_{\Omega_T} -(\varepsilon f_1(x), \bar{\lambda}(x,0))_{\Omega} + (( \mu^{-1} \nabla \times E, \nabla \times \bar{\lambda}))_{\Omega_T} \\
&+ s ((\nabla \cdot(\varepsilon E), \nabla \cdot \bar{\lambda}))_{\Omega_T} - ((\bar{\lambda}, p(t)))_{S_{1,1}} + (( \bar{\lambda}, \partial_t E ))_{S_{1,2}} \\
& + (( \bar{\lambda}, \partial_t E ))_{S_2}~~\forall \bar{\lambda} \in H_{\lambda}^1(\Omega_T),
\end{split}
\end{equation}
\begin{equation} \label{control}
\begin{split}
0 = \frac{\partial L}{\partial E}(u)(\bar{E}) &= ((E - \tilde{E}, \bar{E} z_{\delta} ))_{S_T} - ((\varepsilon \partial_t \lambda, \partial_t \bar{E}))_{\Omega_T} + ((\mu^{-1} \nabla \times \lambda, \nabla \times \bar{E} ))_{\Omega_T} \\
&+ s ((\nabla \cdot \lambda, \nabla \cdot (\varepsilon \bar{E})))_{\Omega_T} - (( \partial_t \lambda, \bar{E} ))_{S_{1,2} \cup S_2} -(\varepsilon \bar{E}(x,0), \partial_t \lambda(x,0) )_{\Omega}~~\forall \bar{E} \in H_{E}^1(\Omega_T).
\end{split}
\end{equation}
Further, we obtain two equations expressing that the gradients with respect to $\varepsilon$ and $\mu$ vanish:
\begin{equation} \label{grad1}
\begin{split}
0 = \frac{\partial L}{\partial \varepsilon}(u)(\bar{\varepsilon}) = &- (( \partial_t \lambda, \partial_t E~ \bar{\varepsilon} ))_{\Omega_T} - (\lambda(x,0), f_1(x)~\bar{\varepsilon})_{\Omega} \\
&+ s((\nabla \cdot (\bar{\varepsilon} E), \nabla \cdot \lambda ))_{\Omega_T} +\gamma_1 (\varepsilon - \varepsilon_0, \bar{\varepsilon})_{\Omega}~~\forall \bar{\varepsilon} \in C\left(\overline{\Omega}\right),
\end{split}
\end{equation}
\begin{equation} \label{grad2}
0 = \frac{\partial L}{\partial \mu}(u)(\bar{\mu}) = -(( \mu^{-2}~\nabla \times E, \nabla \times \lambda ~ \bar{\mu}))_{\Omega_T} +\gamma_2 (\mu - \mu_0, \bar{\mu})_{\Omega} ~\forall \bar{\mu} \in C\left(\overline{\Omega}\right).
\end{equation}
We observe that equation (\ref{forward}) is the weak formulation of the state equation (\ref{E_gauge}) and equation (\ref{control}) is the weak formulation of the following adjoint problem
\begin{equation}
\begin{split} \label{adjoint}
\varepsilon \frac{\partial^2 \lambda}{\partial t^2} + \nabla \times (\mu^{-1} \nabla \times \lambda) - s \varepsilon \nabla ( \nabla \cdot \lambda) &= - (E - \tilde{E})|_{S_T} z_{\delta} ~ \mbox{ in } \Omega_T, \\
\lambda(\cdot, T)& = \frac{\partial \lambda}{\partial t}(\cdot, T) = 0, \\
\partial _{n} \lambda& = \partial _{t} \lambda ~\mbox{on}~ S_{1,2}, \\
\partial _{n} \lambda& = \partial _{t} \lambda ~\mbox{on}~ S_2, \\
\partial _{n} \lambda& =0 ~\mbox{on}~ S_3,
\end{split}
\end{equation}
which is solved backward in time. We now denote by $E(\varepsilon, \mu), \lambda(\varepsilon, \mu)$ the exact solutions of the forward and adjoint problems for given $\varepsilon, \mu$, respectively.
Then, defining
\begin{equation*}
u(\varepsilon, \mu) = (E(\varepsilon, \mu), \lambda(\varepsilon, \mu), \varepsilon, \mu) \in U^1,
\end{equation*}
using the fact that for the exact solutions $E(\varepsilon, \mu), \lambda(\varepsilon, \mu)$, because of (\ref{lagrangian}), we have
\begin{equation}
J( E(\varepsilon, \mu), \varepsilon, \mu) = L(u(\varepsilon, \mu)),
\end{equation}
and assuming that the solutions $E(\varepsilon, \mu), \lambda(\varepsilon, \mu) $ are sufficiently stable (see Chapter 5 of the book \cite{lad} for details), we can write that the Fr\'{e}chet derivative of the Tikhonov functional is the function $J'(\varepsilon, \mu, E(\varepsilon, \mu))$ defined as
\begin{equation}\label{derfunc}
\begin{split}
J'(\varepsilon, \mu) := J'(\varepsilon, \mu, E(\varepsilon, \mu)) &= \frac{\partial J}{\partial \varepsilon}(\varepsilon, \mu, E(\varepsilon, \mu) ) + \frac{\partial J}{\partial \mu}(\varepsilon, \mu, E(\varepsilon, \mu)) \\
&= \frac{\partial L}{\partial \varepsilon}(u(\varepsilon, \mu)) + \frac{\partial L}{\partial \mu}(u(\varepsilon, \mu)).
\end{split}
\end{equation}
Inserting (\ref{grad1}) and (\ref{grad2}) into (\ref{derfunc}), we get
\begin{equation} \label{derfunc2}
\begin{split}
J'(\varepsilon, \mu)(x) &:= J'(\varepsilon, \mu, E(\varepsilon, \mu))(x) = - \int_0^T \partial_t \lambda~ \partial_t E~ (x,t)~ dt - \lambda(x,0) f_1(x) \\
&+ s \int_0^T (\nabla \cdot E ) (\nabla \cdot \lambda ) ~ (x,t)~ dt +\gamma_1 (\varepsilon - \varepsilon_0)(x) \\
&- \int_0^T (\mu^{-2}~\nabla \times E) ( \nabla \times \lambda) ~ (x,t)~ dt + \gamma_2 (\mu - \mu_0)(x).
\end{split}
\end{equation}

\section{Finite element method}

\label{sec:spaces}

\subsection{Finite element spaces}

For computations we discretize $\Omega_{\rm FEM} \times (0,T)$ in space and time. For the discretization in space we denote by $K_h = \{K\}$ a partition of the domain $\Omega_{\rm FEM}$ into tetrahedra $K$ in $ \mathbb{R}^{3}$ or triangles in $ \mathbb{R}^{2}$. We discretize the time interval $(0,T)$ into subintervals $J=(t_{k-1},t_k]$ of uniform length $\tau = t_k - t_{k-1}$ and denote the time partition by $J_{\tau} = \{J\}$. The elements $K$ of our finite element mesh $K_h$ are such that
\begin{equation*}
\overline{\Omega}_{\rm FEM} = \cup_{K \in K_h} K=K_1 \cup K_2...\cup K_l,
\end{equation*}
where $l$ is the total number of elements $K$ in $K_h$. Similarly to \cite{EEJ} we introduce the mesh function $h=h(x)$, which is a piecewise-constant function such that
\begin{equation}\label{meshfunction}
h |_K = h_K ~~~ \forall K \in K_h,
\end{equation}
where $h_K$ is the diameter of $K$, which we define as the longest side of $K$. Let $r^{\prime }$ be the radius of the maximal circle/sphere contained in the element $K$. For every element $K \in K_h$ we make the following shape regularity assumption:
\begin{equation}
a_{1}\leq h_K \leq r^{\prime }a_{2};\quad a_{1},a_{2}=const.>0. \label{2.1}
\end{equation}

To formulate the finite element method for (\ref{scalar_lagr}), we define the finite element spaces. First we introduce the finite element trial space $W_h^E$ for every component of the electric field $E$, defined by
\begin{equation}
W_h^E := \{ w \in H_E^1: w|_{K \times J} \in P_1(K) \times P_1(J), \forall K \in K_h, \forall J \in J_{\tau} \}, \nonumber
\end{equation}
where $P_1(K)$ and $P_1(J)$ denote the sets of linear functions on $K$ and $J$, respectively.
We also introduce the finite element test space $W_h^{\lambda}$ defined by
\begin{equation}
W_h^{\lambda} := \{ w \in H_{\lambda}^1: w|_{K \times J} \in P_1(K) \times P_1(J), \forall K \in K_h, \forall J \in J_{\tau} \}. \nonumber
\end{equation}
To approximate the functions $\mu(x)$ and $\varepsilon(x)$ we will use the space of piecewise constant functions $V_{h} \subset L_2(\Omega)$,
\begin{equation*}
V_{h}:=\{u\in L_{2}(\Omega ):u|_{K}\in P_{0}(K),\forall K\in K_h\},
\end{equation*}
where $P_{0}(K)$ is the space of constant functions on $K$. In some numerical experiments we will also use the space of piecewise linear functions $W_{h} \subset H^1(\Omega)$,
\begin{equation}
W_h = \big\{ v(x) \in H^1(\Omega):~ v|_{K} \in P_1(K)~\forall K \in K_h \big\}.
\end{equation}

In the general case we allow the functions $\varepsilon(x), \mu(x)$ to be discontinuous, see \cite{KN}. Let $F_h$ be the set of all faces of elements in $K_h$ such that $F_h := F_h^I \cup F_h^B$, where $F_h^I$ is the set of all interior faces and $F_h^B$ is the set of all boundary faces of elements in $K_h$. Let $f \in F_h^I$ be an internal face, i.e. the non-empty intersection of the boundaries of two neighboring elements $K^{+}$ and $K^{-}$. We denote the jump of a function $v_{h}$ computed from the two neighboring elements $K^{+}$ and $K^{-}$ sharing the common side $f$ by
\begin{equation}
[ v_{h}]= v_{h}^{+}- v_{h}^{-}, \label{2.4}
\end{equation}
and the jump of the normal component of $v_h$ across the side $f$ by
\begin{equation}
[[ v_{h} ]] = v_{h}^{+} \cdot n^+ + v_{h}^{-} \cdot n^-, \label{jump_normal}
\end{equation}
where $n^+, n^-$ are the unit outward normals on $f^+, f^-$, respectively.

Let $P_h$ be the $L_2(\Omega)$ orthogonal projection onto the finite element space. We denote by $f_h^I$ the standard nodal interpolant \cite{EEJ} of $f$ into the space of continuous piecewise-linear functions on the mesh $K_h$. Then, by the best approximation property of the orthogonal projection,
\begin{equation}
\left\| f- P_hf\right\| _{L_{2}\left( \Omega \right) }\leq \left\| f-f_{h}^I\right\| _{L_{2}\left( \Omega \right) }. \label{2.5b}
\end{equation}
It follows from \cite{SZ} that
\begin{equation}
\left\| f-P_hf\right\| _{L_{2}\left( \Omega \right) }\leq C_I h \left\|~ f\right\| _{H^1\left( \Omega \right) }~~\forall f\in H^1(\Omega), \label{2.6}
\end{equation}
where $ C_I = C_I\left( \Omega \right) $ is a positive constant depending only on the domain $\Omega$.

\subsection{A finite element method for the optimization problem}

\label{sec:fem}

To formulate the finite element method for (\ref{scalar_lagr}) we define the space $U_h = W_h^E \times W_h^{\lambda} \times V_h \times V_h$. The finite element method reads: Find $u_h \in U_h$ such that
\begin{equation} \label{varlagr}
L'(u_h)(\bar{u})=0 ~\forall \bar{u} \in U_h .
\end{equation}
To be more precise, equation (\ref{varlagr}) expresses that the finite element method for the forward problem (\ref{E_gauge}) in $\Omega_{FEM}$ for continuous $(\varepsilon, \mu)$ reads: find $E_h =({E_1}_h,{E_2}_h, {E_3}_h ) \in W_h^E$ such that for the known $(\varepsilon_h, \mu_h) \in V_h \times V_h$
\begin{equation}\label{varforward}
\begin{split}
&- ((\varepsilon_h \partial_t \bar{\lambda}, \partial_t E_h))_{\Omega_T} - (\varepsilon_h f_1,\bar{\lambda}(x,0) )_{\Omega} + (( \mu_h^{-1} \nabla \times E_h, \nabla \times \bar{\lambda}))_{\Omega_T} + s ((\nabla \cdot(\varepsilon_h E_h), \nabla \cdot \bar{\lambda}))_{\Omega_T} \\
&- ((\bar{\lambda}, p(t)))_{S_{1,1}} + (( \bar{\lambda}, \partial_t E_h ))_{S_{1,2}} + (( \bar{\lambda}, \partial_t E_h ))_{S_2}=0~~\forall \bar{\lambda} \in W_h^{\lambda},
\end{split}
\end{equation}
and the finite element method for the adjoint problem (\ref{adjoint}) in $\Omega_{FEM}$ for continuous $(\varepsilon, \mu)$ reads: find $\lambda_h = ({\lambda_h}_1,{\lambda_h}_2, {\lambda_h}_3) \in W_h^\lambda$ such that for the computed approximation $E_h =({E_1}_h,{E_2}_h, {E_3}_h ) \in W_h^E$ of (\ref{varforward}) and for the known $(\varepsilon_h, \mu_h) \in V_h \times V_h$
\begin{equation} \label{varadjoint}
\begin{split}
& (((E_h - \tilde{E})|_{S_T} z_{\delta}, \bar{E} ))_{S_T} - ((\varepsilon_h \partial_t \lambda_h, \partial_t \bar{E}))_{\Omega_T} + ((\mu_h^{-1} \nabla \times \lambda_h, \nabla \times \bar{E} ))_{\Omega_T} \\
&+ s ((\nabla \cdot \lambda_h, \nabla \cdot (\varepsilon_h \bar{E})))_{\Omega_T} - (( \partial_t \lambda_h, \bar{E} ))_{S_{1,2} \cup S_2} -( \varepsilon_h \bar{E}(x,0), \partial_t \lambda_h(x,0) )_{\Omega} = 0~~\forall \bar{E} \in W_h^{E}.\\
\end{split}
\end{equation}

A similar finite element method for the forward and adjoint problems can be written for discontinuous functions $\varepsilon, \mu$; it includes additional jump terms for the computation of the coefficients. In our work, similarly to \cite{CWZ14}, we compute the discontinuities of the coefficients $\varepsilon$ and $\mu$ via the jumps from the two neighboring elements; see (\ref{2.4}) and (\ref{jump_normal}) for the definitions of the jumps.

Since $U_{h} \subset U^{1}$ as a set, $U_{h}$ is a discrete analogue of the space $U^{1}$. It is well known that in finite dimensional spaces all norms are equivalent, and in our computations we compute approximations of the smooth functions $\varepsilon(x), \mu(x)$ in the space $V_h$.
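To illustrate the last point, the following minimal sketch (under the assumption that the mesh is stored as vertex and cell arrays; not our actual implementation) shows how a smooth coefficient can be approximated in the space $V_h$: on each element the $L_2$ projection is the mean value, which the sketch approximates by the vertex average.
\begin{verbatim}
# Minimal sketch (assumed mesh format): approximation of a smooth
# coefficient in the piecewise constant space V_h. On each simplex the
# L2 projection is the mean value; here it is approximated by the
# average of the coefficient over the vertices of the element.
import numpy as np

def project_to_Vh(coef, vertices, cells):
    """coef: callable x -> float; vertices: (n_v, dim) array;
    cells: (n_cells, dim+1) integer array of vertex indices."""
    values = np.apply_along_axis(coef, 1, vertices)  # coef at vertices
    return values[cells].mean(axis=1)                # cellwise average

# usage sketch: eps_h = project_to_Vh(lambda x: 1.0 + x[0]**2, V, C)
\end{verbatim}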
\subsection{Fully discrete scheme}

\label{sec:discrete}

To write fully discrete schemes for (\ref{varforward}) and (\ref{varadjoint}) we expand $E_h$ and $\lambda_h$ in terms of the standard continuous piecewise linear functions $\{\varphi_i(x)\}_{i=1}^M$ in space and $\{\psi_k(t)\}_{k=1}^N$ in time as
\begin{equation*}
\begin{split}
E_h (x,t) &=\sum_{k=1}^N \sum_{i=1}^M E_{h_{i,k}} \varphi_i(x)\psi_k(t), \\
\lambda_h(x,t) &=\sum_{k=1}^N \sum_{i=1}^M \lambda_{h_{i,k}} \varphi_i(x)\psi_k(t),
\end{split}
\end{equation*}
where $E_{h_{i,k}}$ and $\lambda_{h_{i,k}}$ denote the unknown coefficients at the point $x_i \in K_h$ and time level $t_k \in J_{\tau}$, and substitute these expansions into (\ref{varforward}) and (\ref{varadjoint}) to obtain the following system of linear equations:
\begin{equation} \label{femod1}
\begin{split}
M (\mathbf{E}^{k+1} - 2 \mathbf{E}^k + \mathbf{E}^{k-1}) &= - \tau^2 K \mathbf{E}^k - s \tau^2 C \mathbf{E}^k + \tau^2 F^k + \tau^2 P^k - \frac{1}{2}\tau (MD)\cdot(\mathbf{E}^{k+1} - \mathbf{E}^{k-1}), \\
M (\boldsymbol{\lambda}^{k+1} - 2 \boldsymbol{\lambda}^k + \boldsymbol{ \lambda}^{k-1}) &= -\tau^2 S^k - \tau^2 K \boldsymbol{\lambda}^k - s \tau^2 C \boldsymbol{\lambda}^k + \frac{1}{2} \tau (MD)\cdot(\boldsymbol{\lambda}^{k+1} - \boldsymbol{\lambda}^{k-1}) + \tau^2 (D\lambda)^k.\\
\end{split}
\end{equation}
Here, $M$ is the block mass matrix in space, $MD$ is the block mass matrix in space corresponding to the elements at the boundaries $\partial_1 \Omega, \partial_2 \Omega$, $K$ is the block stiffness matrix corresponding to the rotation term, $C$ is the stiffness matrix corresponding to the divergence term, $ F^k, P^k, (D\lambda)^k, S^k$ are load vectors at time level $t_k$, $\mathbf{E}^k$ and $ \boldsymbol{\lambda}^k$ denote the nodal values of $E_h$ and $\lambda_h$, respectively, at time level $t_k$, and $\tau$ is the time step. We refer to \cite{BMaxwell} for details of the derivation of these matrices.

Let us define the mapping $F_K$ for the reference element $\hat{K}$ such that $F_K(\hat{K})=K$ and let $\hat{\varphi}$ be the piecewise linear local basis function on the reference element $\hat{K}$ such that $\varphi \circ F_K = \hat{\varphi}$. Then the explicit formulas for the entries of the system (\ref{femod1}) at each element $K$ are:
\begin{equation*}
\begin{split}
M_{i,j}^{K} & = (\varepsilon_h ~\varphi_i \circ F_K, \varphi_j \circ F_K)_K, \\
K_{i,j}^{K} & = ( \mu_h^{-1} \nabla \times \varphi_i \circ F_K, \nabla \times \varphi_j \circ F_K)_K,\\
C_{i,j}^{K} & = ( \nabla\cdot (\varepsilon_h \varphi_i) \circ F_K, \nabla \cdot \varphi_j \circ F_K)_K,\\
S_{j}^{K}&= ((E_h -\tilde{E})_{S_T} z_{\delta}, \varphi_j \circ F_K )_{K}, \\
F_{j}^{K}&= (\varepsilon_h f_1, \varphi_j \circ F_K )_{K}, \\
P_{j}^{K}&= ( p, \varphi_j \circ F_K )_{\partial_1 \Omega_K}, \\
MD_{i,j}^{K}&= (~\varphi_i \circ F_K, \varphi_j \circ F_K )_{\partial_1 \Omega_K \cup \partial_2 \Omega_K}, \\
D\lambda_{j}^{K}&= ( \varepsilon_h \partial_t \lambda_h(x,0), \varphi_j \circ F_K )_K, \\
\end{split}
\end{equation*}
where $(\cdot,\cdot)_K$ denotes the $L_2(K)$ scalar product, and $\partial_1 \Omega_K, \partial_2 \Omega_K$ are the parts of the boundaries $\partial K$ of the elements $K$ which belong to $\partial_1 \Omega, \partial_2 \Omega$, respectively.

To obtain an explicit scheme, we approximate $M$ with the lumped mass matrix $M^{L}$ (for further details, see \cite{Cohen}).
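The following minimal sketch (hypothetical matrix interfaces, assuming the matrices of (\ref{femod1}) are already assembled) illustrates the row-sum mass lumping and one step of the resulting explicit update, which is derived in the formulas of the next paragraphs; the boundary term with $MD$ is omitted here for brevity.
\begin{verbatim}
# Minimal sketch (assumed pre-assembled matrices): row-sum mass
# lumping and one explicit step of the first equation in (femod1);
# the boundary term with MD is omitted for brevity.
import numpy as np

def lumped_diagonal(M):
    """Diagonal of the lumped mass matrix M^L (row sums of M);
    M may be a dense array or a scipy sparse matrix."""
    return np.asarray(M.sum(axis=1)).ravel()

def leapfrog_step(E_prev, E_curr, K, C, F, P, ML_diag, tau, s):
    """E^{k+1} = 2E^k - E^{k-1}
                 + tau^2 (M^L)^{-1} (-K E^k - s C E^k + F^k + P^k)."""
    rhs = tau**2 * (-(K @ E_curr) - s * (C @ E_curr) + F + P)
    return 2.0 * E_curr - E_prev + rhs / ML_diag
\end{verbatim}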
Next, we multiply (\ref{femod1}) by $(M^{L})^{-1}$ and obtain the following explicit method:
\begin{equation} \label{fem_maxwell_full}
\begin{split}
(I + \frac{1}{2} \tau (M^{L})^{-1} MD) \mathbf{E}^{k+1} = &2\mathbf{E}^k - \tau^2 (M^{L})^{-1} K\mathbf{E}^k +\tau^2 (M^{L})^{-1} F^k + \tau^2 (M^{L})^{-1} P^k\\
&+ \frac{1}{2} \tau (M^{L})^{-1} (MD) \mathbf{E}^{k-1} - s \tau^2 (M^{L})^{-1} C \mathbf{E}^k -\mathbf{E}^{k-1}, \\
(I + \frac{1}{2} \tau (M^L)^{-1} MD) \boldsymbol{\lambda}^{k-1} = &-\tau^2 (M^{L})^{-1} S^k + 2\boldsymbol{\lambda}^k - \tau^2 (M^{L})^{-1} K \boldsymbol{\lambda}^k - s \tau^2 (M^{L})^{-1} C \boldsymbol{\lambda}^k \\
&+ \tau^2 (M^{L})^{-1} (D\lambda)^k -\boldsymbol{\lambda}^{k+1} + \frac{1}{2} \tau (M^L)^{-1} (MD) \boldsymbol{\lambda}^{k+1}.
\end{split}
\end{equation}
In the case of the domain decomposition FEM/FDM method, when the schemes above are used only in $\Omega_{FEM}$, we have
\begin{equation} \label{fem_maxwell}
\begin{split}
\mathbf{E}^{k+1} = &2\mathbf{E}^k - \tau^2 (M^{L})^{-1} K\mathbf{E}^k +\tau^2 (M^{L})^{-1} F^k + \tau^2 (M^{L})^{-1} P^k - s \tau^2 (M^{L})^{-1} C \mathbf{E}^k -\mathbf{E}^{k-1}, \\
\boldsymbol{\lambda}^{k-1} = &-\tau^2 (M^{L})^{-1} S^k + 2\boldsymbol{\lambda}^k - \tau^2 (M^{L})^{-1} K \boldsymbol{\lambda}^k - s \tau^2 (M^{L})^{-1} C \boldsymbol{\lambda}^k + \tau^2 (M^{L})^{-1} (D\lambda)^k -\boldsymbol{\lambda}^{k+1}.
\end{split}
\end{equation}

\section{Relaxation property of mesh refinements}

\label{sec:relax}

In this section we reformulate the results of \cite{BKK} for the case of our \textbf{IP}. For simplicity, we shall sometimes write $||\cdot||$ for the $L_2$ norm.

We use the theory of ill-posed problems \cite{tikhonov, TGSK} and introduce the noise level $\delta $ in the function $\tilde{E}(x,t)$ in the Tikhonov functional (\ref{functional}). This means that
\begin{equation}
\tilde{E}(x,t)= \tilde{E}^{\ast }(x,t) + \tilde{E}_{\delta }(x,t);\text{ } \tilde{E}^{\ast }, \tilde{E}_{\delta }\in L_{2}\left( S_{T}\right) =H_{2}, \label{4.247}
\end{equation}
where $\tilde{E}^{\ast }(x,t)$ is the exact data corresponding to the exact function $z^*=(\varepsilon^*, \mu^*)$, and the function $\tilde{E}_{\delta }(x,t)$ represents the error in these data. We assume the noise level bound
\begin{equation}
\left\Vert \tilde{E}_{\delta }\right\Vert _{L_{2}\left( S_{T}\right) }\leq \delta . \label{4.248}
\end{equation}
The question of the stability and uniqueness of our \textbf{IP}, which is needed in the local strong convexity theorem formulated below, is addressed in \cite{BCN, BCS}.

Let $H_1$ be a finite dimensional linear space. Let $Y$ be the set of admissible functions $(\varepsilon, \mu)$ which we defined in (\ref{2.3}), and let $Y_1 := Y \cap H_1$ with $G := \bar{Y}_1$. We now introduce the operator $F: G \to H_2$ corresponding to the Tikhonov functional (\ref{functional}) such that
\begin{equation}\label{opF}
F(z)(x,t) := F(\varepsilon, \mu)(x,t) = (E(x,t,\varepsilon, \mu) - \tilde{E})^2 z_{\delta }(t)~~ \forall (x,t) \in S_T,
\end{equation}
where $E(x,t,\varepsilon, \mu) := E(x,t)$ is the weak solution of the forward problem (\ref{E_gauge}) and thus depends on $\varepsilon$ and $\mu$. Here, $z=(\varepsilon, \mu)$ and $z_{\delta }(t)$ is a cut-off function chosen as in \cite{BCN}. We now assume that the operator $F$ defined in (\ref{opF}) is one-to-one.
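In synthetic tests, noisy data satisfying (\ref{4.247})-(\ref{4.248}) can be generated from the exact data. A minimal sketch of one such convention (our own, hypothetical choice of random noise; not prescribed by the cited works):
\begin{verbatim}
# Minimal sketch (synthetic-data convention assumed): perturb exact
# data E_star by random noise E_delta scaled so that ||E_delta|| =
# delta in the discrete L2(S_T) norm, consistent with (4.248).
import numpy as np

def add_noise(E_star, delta, seed=0):
    rng = np.random.default_rng(seed)
    E_delta = rng.standard_normal(E_star.shape)
    E_delta *= delta / np.linalg.norm(E_delta)  # enforce noise level
    return E_star + E_delta
\end{verbatim}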
Let us denote by
\begin{equation}\label{neigh}
V_{d}\left( z\right) =\left\{z^{\prime }\in H_{1}:\left\| z^{\prime } - z \right\| < d \right\}
\end{equation}
the neighborhood of diameter $d$ of a point $z=(\varepsilon, \mu) \in H_{1}$. We also assume that the operator $F$ has a bounded and Lipschitz continuous Fr\'{e}chet derivative, which means that for some constants $N_{1},N_{2} > 0$
\begin{equation}
\left\| F^{\prime }(z) \right\| \leq N_{1},\left\| F^{\prime }(z_1) - F^{\prime }(z_2) \right\| \leq N_{2}\left\| z_1 - z_2 \right\|~~\forall z_1, z_2 \in V_{1}\left( z^{\ast }\right) . \label{2.7}
\end{equation}
Let the constant $D= D\left( N_{1},N_{2}\right) =const.>0$ be such that
\begin{equation}
\left\| J^{\prime }\left( z_1\right) -J^{\prime }\left(z_2\right) \right\| \leq D\left\| z_1 - z_2\right\|~~\forall z_1, z_2 \in V_{1}(z^*), \label{2.10}
\end{equation}
where $(\varepsilon^*, \mu^*)$ is the exact solution of the equation $F(\varepsilon^*, \mu^*)=0$.

Similarly to \cite{BKK}, we assume that
\begin{equation}\label{2.11a}
\begin{split}
\left\| \varepsilon_0 - \varepsilon^{\ast }\right\| &\leq\delta ^{\nu _{1}}, ~\nu_1 =const.\in \left(0,1\right), \\
\left\| \mu_0 - \mu^{\ast }\right\| &\leq \delta ^{\nu_2}, ~\nu_2 =const.\in \left(0,1\right), \\
\gamma_1 &= \delta ^{\zeta_1}, \zeta_1= const.\in ( 0,\min (\nu_1,2(1- \nu_1))), \\
\gamma_2 &= \delta ^{\zeta_2}, \zeta_2= const.\in ( 0,\min (\nu_2,2(1- \nu_2))),
\end{split}
\end{equation}
which in compact form can be written as
\begin{eqnarray}
\left\| z_0 - z^{\ast }\right\| &\leq &\delta ^{(\nu _{1}, \nu_2)}, ~z_0=(\varepsilon_0, \mu_0), ~(\nu_1, \nu_2) =const.\in \left( 0,1\right) , \label{2.11} \\
(\gamma_1, \gamma_2) &=&\delta ^{(\zeta_1, \zeta_2)}, (\zeta_1, \zeta_2) =const.\in \left( 0,\min \left((\nu_1, \nu_2), 2\left( 1- (\nu_1, \nu_2)\right) \right) \right), \label{2.12}
\end{eqnarray}
where $(\gamma_1, \gamma_2)$ are the regularization parameters in (\ref{functional}). Condition (\ref{2.11}) means that we assume that all initial guesses $z_0=(\varepsilon_0, \mu_0)$ are located in a sufficiently small neighborhood $V_{\delta ^{\nu _{1}}}(z^*) $ of the exact solution $z^*=(\varepsilon^*, \mu^*)$. Conditions (\ref{2.12}) imply that $(z^{\ast }, z_0)$ belong to an appropriate neighborhood of the regularized solution of the functional (\ref{functional}); see the proofs of Lemmata 2.1 and 3.2 of \cite{BKK}.

Below we reformulate Theorem 1.9.1.2 of \cite{BOOK} for the Tikhonov functional (\ref{functional}). Different proofs of it can be found in \cite{BOOK} and in \cite{BKK} and apply directly to our \textbf{IP}. We note here that if the functions $(\varepsilon, \mu) \in H_{1}$ satisfy conditions (\ref{2.3}), then $(\varepsilon, \mu) \in \rm{Int}\left( G\right) .$

\textbf{Theorem 1}

\emph{Let }$\Omega \subset \mathbb{R}^{3}$ \emph{\ be a convex bounded domain with the boundary }$\partial \Omega \in C^{3}.$ \emph{ Suppose that conditions (\ref{4.247}) and (\ref{4.248}) hold. Let the function }$E(x,t) \in H^{2}(\Omega_T)$ \emph{\ in the Tikhonov functional (\ref{functional}) be the solution of the forward problem (\ref{E_gauge}) for the functions }$(\varepsilon, \mu) \in G$. \emph{ Assume that there exists an exact solution }$(\varepsilon^*, \mu^*) \in G$\emph{\ of the equation }$F(\varepsilon^*, \mu^*) =0$ \emph{\ for the case of the exact data }$ \tilde{E}^{\ast }$\emph{\ in (\ref{4.247}).
Let the regularization parameters } $(\gamma_1, \gamma_2)$ \emph{ in (\ref{functional}) be such that }
\begin{equation*}
(\gamma_1, \gamma_2) = (\gamma_1, \gamma_2) \left( \delta \right) =\delta ^{2(\nu_1, \nu_2) },~~(\nu_1, \nu_2) =const.\in \left( 0,\frac{1}{4}\right)~~\quad \forall \delta \in \left( 0,1\right).
\end{equation*}
\emph{Let } $z_0=(\varepsilon_0, \mu_0)$ \emph{ satisfy (\ref{2.11}). Then the Tikhonov functional (\ref{functional}) is strongly convex in the neighborhood }$V_{(\gamma_1, \gamma_2) \left( \delta \right) }\left(\varepsilon^*, \mu^* \right) $ \emph{\ with the strong convexity constants }$(\alpha_1, \alpha_2)=(\gamma_1, \gamma_2) /2.$ \emph{The strong convexity property can also be written as}
\begin{equation}
\left\Vert z_{1} - z _{2}\right\Vert ^{2}\leq \frac{2}{\delta ^{2(\nu_1, \nu_2) }}\left( J'(z_1) - J'(z_2), z_{1} - z_{2}\right)~~\forall z_1 =(\varepsilon_1, \mu_1), z_2 =(\varepsilon_2, \mu_2) \in H_{1}. \label{4.249}
\end{equation}
\emph{ Alternatively, using the expression for the Fr\'{e}chet derivative given in (\ref{derfunc}), we can write (\ref{4.249}) as}
\begin{equation}\label{convex}
\begin{split}
\left\Vert \varepsilon_{1} - \varepsilon_{2} \right \Vert ^{2} &\leq \frac{2}{\delta ^{2\nu_1 }}\left( J_{\varepsilon}'(\varepsilon_1, \mu_1) - J_{\varepsilon}'(\varepsilon_2, \mu_2), \varepsilon_{1} - \varepsilon_{2}\right)~~\forall (\varepsilon_1, \mu_1), (\varepsilon_2, \mu_2) \in H_{1}, \\
\left\Vert \mu_{1} - \mu_{2} \right \Vert ^{2} &\leq \frac{2}{\delta ^{2\nu_2 }}\left( J_{\mu}'(\varepsilon_1, \mu_1) - J_{\mu}'(\varepsilon_2, \mu_2), \mu_{1} - \mu_{2}\right)~~\forall (\varepsilon_1, \mu_1), (\varepsilon_2, \mu_2) \in H_{1},
\end{split}
\end{equation}
\emph{where }$\left(\cdot , \cdot \right) $\emph{\ is the} $ L_2(\Omega)$ \emph{ inner product.}

\emph{Next, there exists a unique regularized solution } $(\varepsilon_{\gamma_1}, \mu_{\gamma_2})$ \emph{ of the functional (\ref{functional}), and } $ (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \in V_{\delta ^{3(\nu_1, \nu_2) }/3}(\varepsilon^*, \mu^*).$\emph{\ The gradient method of minimization of the functional (\ref{functional}) which starts at }$(\varepsilon_0, \mu_0)$\emph{\ converges to the regularized solution of this functional. Furthermore,}
\begin{equation}\label{accur}
\begin{split}
\left\Vert \varepsilon_{\gamma_1} - \varepsilon^* \right\Vert &\leq \Theta_1 \left\Vert \varepsilon_0 - \varepsilon^* \right\Vert, ~~\Theta_1 \in (0,1), \\
\left\Vert \mu_{\gamma_2} - \mu^* \right\Vert &\leq \Theta_2 \left\Vert \mu_0 - \mu^* \right\Vert, ~~\Theta_2 \in (0,1).
\end{split}
\end{equation}

Property (\ref{accur}) means that the regularized solution of the Tikhonov functional (\ref{functional}) provides better accuracy than the initial guess $(\varepsilon_0, \mu_0)$ if the latter satisfies condition (\ref{2.11}).

The next theorem presents the estimate of the norm $\left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right\Vert $ via the norm of the Fr\'{e}chet derivative of the Tikhonov functional (\ref{functional}).

\textbf{Theorem 2}

\emph{Assume that the conditions of Theorem 1 hold.
Then for any functions }$(\varepsilon, \mu) \in V_{(\gamma_1, \gamma_2)(\delta)}(\varepsilon^*, \mu^*)$ \emph{the following error estimate holds}
\begin{equation}
\left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1(\delta)}, \mu_{\gamma_2(\delta)}) \right\Vert \leq \frac{2}{\delta ^{2(\nu_1, \nu_2) }} \left\Vert P_h J^{\prime }(\varepsilon, \mu) \right\Vert \leq \frac{2}{\delta ^{2(\nu_1, \nu_2) }} \left\Vert J^{\prime }(\varepsilon, \mu ) \right\Vert, \label{4.250}
\end{equation}
\emph{which can be written explicitly as}
\begin{equation} \label{error_theorem2}
\begin{split}
\left\Vert \varepsilon - \varepsilon_{\gamma_1(\delta)} \right\Vert &\leq \frac{2}{\delta ^{2 \nu_1}} \left\Vert P_h J_{\varepsilon}^{\prime }(\varepsilon, \mu) \right\Vert \leq \frac{2}{\delta ^{2 \nu_1 }} \left\Vert J_{\varepsilon}^{\prime }(\varepsilon, \mu )\right\Vert =\frac{2}{\delta ^{2 \nu_1 }} \left\Vert L_{\varepsilon}^{\prime }(u(\varepsilon, \mu) )\right\Vert, \\
\left\Vert \mu - \mu_{\gamma_2(\delta)} \right\Vert &\leq \frac{2}{\delta ^{2 \nu_2 }} \left\Vert P_h J_{\mu}^{\prime }(\varepsilon, \mu) \right\Vert \leq \frac{2}{\delta ^{2 \nu_2 }} \left\Vert J_{\mu}^{\prime }(\varepsilon, \mu )\right\Vert = \frac{2}{\delta ^{2 \nu_2 }} \left\Vert L_{\mu}^{\prime }(u(\varepsilon, \mu))\right\Vert,
\end{split}
\end{equation}
\emph{where } $(\varepsilon_{\gamma_1(\delta)}, \mu_{\gamma_2(\delta)})$ \emph{ are the minimizers of the Tikhonov functional (\ref{functional}) computed with the regularization parameters } $(\gamma_1(\delta), \gamma_2(\delta))$ and $ P_h: L_{2}\left( \Omega \right) \rightarrow H_{1}$\emph{\ is the operator of orthogonal projection of the space }$L_{2}\left( \Omega \right) $ \emph{\ onto its subspace }$H_{1}$\emph{.}

\textbf{Proof.} Since $ (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) :=(\varepsilon_{\gamma_1(\delta)}, \mu_{\gamma_2(\delta)})$ is the minimizer of the functional (\ref{functional}) on the set $G$ and $(\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \in {\rm Int}\left( G\right),$ we have $P_hJ^{\prime }(\varepsilon_{\gamma_1}, \mu_{\gamma_2}) = 0$, or
\begin{equation}\label{4.2511}
\begin{split}
P_hJ^{\prime }_{\varepsilon}(\varepsilon_{\gamma_1}, \mu_{\gamma_2}) &= 0, \\
P_hJ^{\prime }_{\mu}(\varepsilon_{\gamma_1}, \mu_{\gamma_2}) &=0.
\end{split}
\end{equation}
Similarly to Theorem 4.11.2 of \cite{BOOK}, since $ (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \in H_{1},$ we have
\begin{equation*}
\begin{split}
(J^{\prime }(\varepsilon, \mu) &- J^{\prime } (\varepsilon_{\gamma_1}, \mu_{\gamma_2}), (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2})) = (P_h J^{\prime }(\varepsilon, \mu) - P_h J^{\prime }(\varepsilon_{\gamma_1}, \mu_{\gamma_2}), (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2})).
\end{split}
\end{equation*}
Hence, using (\ref{4.249}) and (\ref{4.2511}), we can write
\begin{equation*}
\begin{split}
\left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right \Vert ^{2} &\leq \frac{2}{\delta ^{2(\nu_1, \nu_2) }} \left( J^{\prime }(\varepsilon, \mu) - J^{\prime }(\varepsilon_{\gamma_1}, \mu_{\gamma_2}), (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2})\right) \\
&=\frac{2}{\delta ^{2(\nu_1, \nu_2) }}\left( P_h J^{\prime }(\varepsilon, \mu) - P_h J^{\prime }(\varepsilon_{\gamma_1}, \mu_{\gamma_2}),(\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right) \\
&=\frac{2}{\delta ^{2(\nu_1, \nu_2) }}( P_h J^{\prime }(\varepsilon, \mu), (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2})) \\
&\leq \frac{2}{\delta ^{2(\nu_1, \nu_2) }}\left\Vert P_h J ^{\prime }(\varepsilon, \mu) \right\Vert \cdot \left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right\Vert .
\end{split}
\end{equation*}
Thus, from the expression above we get
\begin{equation*}
\left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right\Vert ^{2} \leq \frac{2}{ \delta ^{2(\nu_1, \nu_2) }} \left \Vert P_h J^{\prime }(\varepsilon, \mu ) \right\Vert \cdot \left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right\Vert.
\end{equation*}
We now divide the expression above by $\left\Vert (\varepsilon, \mu) - (\varepsilon_{\gamma_1}, \mu_{\gamma_2}) \right\Vert $. Using the fact that
\begin{equation*}
\left\Vert P_h J^{\prime}(\varepsilon, \mu ) \right\Vert \leq \left\Vert J^{\prime }(\varepsilon, \mu ) \right\Vert,
\end{equation*}
we obtain (\ref{4.250}), and using the definition (\ref{derfunc}) of the derivative of the Tikhonov functional we get (\ref{error_theorem2}), where the explicit entries of $L_{\varepsilon}^{\prime }(u(\varepsilon, \mu)), L_{\mu}^{\prime }(u(\varepsilon, \mu))$ are given by (\ref{grad1}), (\ref{grad2}), respectively. $\square $

Below we reformulate Lemmas 2.1 and 3.2 of \cite{BKK} for the case of the Tikhonov functional (\ref{functional}).

\textbf{Theorem 3}

\emph{Let the assumptions of Theorems 1 and 2 hold. Let }$\left\Vert (\varepsilon^*, \mu^*) \right\Vert \leq C,$\emph{\ with a given constant }$C$. \emph{\ We denote by }$M_{n}\subset H_{1}$\emph{\ the subspace obtained after }$n$\emph{\ refinements of the mesh} $K_h$. \emph{Let } $h_n$ \emph{be the mesh function on $M_n$ as defined in Section \ref{sec:spaces}.} \emph{Then there exists a unique minimizer }$(\varepsilon_n, \mu_n) \in G\cap M_{n}$\emph{\ of the Tikhonov functional (\ref{functional}) such that the following inequalities hold}
\begin{equation}
\begin{split}
\left\Vert \varepsilon_n - \varepsilon_{\gamma_1(\delta)}\right\Vert &\leq \frac{2}{\delta ^{2 \nu_1}} \left\Vert J^{\prime }_{\varepsilon}(\varepsilon, \mu ) \right\Vert, \\
\left\Vert \mu_n - \mu_{\gamma_2(\delta)} \right\Vert &\leq \frac{2}{ \delta ^{2 \nu_2}}\left\Vert J_{\mu}^{\prime }(\varepsilon, \mu ) \right\Vert. \label{4.253}
\end{split}
\end{equation}

We now present the relaxation property of mesh refinements for the Tikhonov functional (\ref{functional}), which follows from Theorem 4.1 of \cite{BKK}.

\textbf{Theorem 4}

\emph{Let the assumptions of Theorems 2 and 3 hold. Let }$(\varepsilon_n, \mu_n) \in V_{\delta ^{3(\nu_1, \nu_2) }}\left(\varepsilon^*, \mu^*\right) \cap M_{n}$\emph{\ be the minimizer of the Tikhonov functional (\ref{functional}) on the set }$G\cap M_{n}.$ \emph{ The existence of this minimizer is guaranteed by Theorem 3.
Assume that the regularized solution }$ (\varepsilon, \mu) \neq (\varepsilon_n, \mu_n),$ \emph{\ which means that }$ (\varepsilon, \mu) \notin M_{n}.$ \emph{Then the following relaxation properties hold}
\begin{equation*}
\begin{split}
\left\Vert \varepsilon_{n+1} - \varepsilon \right\Vert &\leq \eta_{1,n} \left\Vert \varepsilon_{n} - \varepsilon \right\Vert, \\
\left\Vert \mu_{n+1} - \mu \right\Vert &\leq \eta_{2,n} \left\Vert \mu_{n} - \mu \right\Vert
\end{split}
\end{equation*}
\emph{for some} $\eta_{1,n}, \eta_{2,n} \in (0,1)$.

\section{General framework of a posteriori error estimates}

\label{sec:general}

In this section we briefly present a posteriori error estimates for three kinds of errors:

\begin{itemize}
\item for the error $|L(u) - L(u_h)|$ in the Lagrangian (\ref{lagrangian});
\item for the error $|J(\varepsilon, \mu) - J(\varepsilon_h, \mu_h)|$ in the Tikhonov functional (\ref{functional});
\item for the errors $|\varepsilon - \varepsilon_h|$ and $|\mu - \mu_h|$ in the regularized solutions $\varepsilon, \mu$ of this functional.
\end{itemize}

Here, $u_h, \varepsilon_h, \mu_h$ are finite element approximations of the functions $u, \varepsilon, \mu$, respectively. An a posteriori error estimate in the Lagrangian was already derived in \cite{BMaxwell2} for the case when only the function $\varepsilon(x)$ in system (\ref{E_gauge}) is unknown. In \cite{Bondestam1, Bondestam2}, a posteriori error estimates were derived in the Lagrangian corresponding to the modified system (\ref{E_gauge}) with $\mu=1$. An a posteriori error estimate in the Lagrangian (\ref{lagrangian}) can be derived straightforwardly from the a posteriori error estimate presented in \cite{BMaxwell2}, and thus we do not present all details of this derivation here. However, to make clear how a posteriori errors in the Lagrangian and in the Tikhonov functional can be obtained, we present the general framework for them.

First we note that
\begin{equation} \label{femerrors}
\begin{split}
J(\varepsilon, \mu) - J(\varepsilon_h, \mu_h) &= J_{\varepsilon}^{\prime}(\varepsilon_h, \mu_h)(\varepsilon - \varepsilon_h) + J_{\mu}^{\prime}(\varepsilon_h, \mu_h)(\mu - \mu_h) + R( \varepsilon, \varepsilon_h) + R(\mu, \mu_h), \\
L(u) - L(u_h) &= L^{\prime }(u_h)(u- u_h) + R(u,u_h), \\
\end{split}
\end{equation}
where $R( \varepsilon, \varepsilon_h), R(\mu, \mu_h), R(u,u_h)$ are remainders of the second order. We assume that $(\varepsilon_h, \mu_h)$ is located in a small neighborhood of the regularized solution $(\varepsilon, \mu)$. Thus, since the terms $ R(u,u_h), R( \varepsilon, \varepsilon_h), R(\mu, \mu_h)$ are of the second order, they are small and we can neglect them in (\ref{femerrors}).
We now use the splitting
\begin{equation}\label{splitting}
\begin{split}
u - u_h &= (u - u_h^I) + (u_h^I - u_h), \\
\varepsilon - \varepsilon_h &= (\varepsilon - \varepsilon_h^I) + (\varepsilon_h^I -\varepsilon_h), \\
\mu - \mu_h &= (\mu - \mu_h^I) + (\mu_h^I - \mu_h),
\end{split}
\end{equation}
together with the Galerkin orthogonality principle
\begin{equation}
\begin{split}
L^{\prime }(u_h)(\bar{u}) &= 0~~ \forall \bar{u} \in U_h,\\
J^{\prime}(z_h)(b) &= 0~~ \forall b \in V_h,
\end{split}
\end{equation}
insert (\ref{splitting}) into (\ref{femerrors}), and get the following error representations:
\begin{equation}\label{errorfunc}
\begin{split}
L(u) - L(u_h) &\approx L^{\prime}(u_h)( u - u_h^I), \\
J(\varepsilon, \mu) - J(\varepsilon_h, \mu_h) & \approx J_{\varepsilon}^{\prime}(\varepsilon_h, \mu_h)(\varepsilon - \varepsilon_h^I) + J_{\mu}^{\prime}(\varepsilon_h, \mu_h)(\mu - \mu_h^I).
\end{split}
\end{equation}
In (\ref{splitting}) and (\ref{errorfunc}), the functions $u_h^I \in U_h$ and $\varepsilon_h^I, \mu_h^I \in V_h$ denote the interpolants of $u, \varepsilon, \mu$, respectively.

Using (\ref{errorfunc}) we conclude that the a posteriori error estimate in the Lagrangian involves the derivative of the Lagrangian $L^{\prime}(u_h)$, which we define as a residual, multiplied by the weights $u - u_h^I$. Similarly, the a posteriori error estimate in the Tikhonov functional involves the derivatives $J_{\varepsilon}^{\prime}(\varepsilon_h, \mu_h)$ and $J_{\mu}^{\prime}(\varepsilon_h, \mu_h)$ of the Tikhonov functional, which represent residuals, multiplied by the weights $\varepsilon - \varepsilon_h^I$ and $\mu - \mu_h^I$, correspondingly.

To derive the errors $|\varepsilon - \varepsilon_h|$ and $|\mu - \mu_h|$ in the regularized solutions $\varepsilon, \mu$ of the functional (\ref{functional}) we will use the convexity property of the Tikhonov functional together with the interpolation property (\ref{2.6}). We now make both error estimates more explicit.

\section{A posteriori error estimate in the regularized solution}

\label{sec:adaptrelax}

In this section we formulate a theorem for the a posteriori error estimates $|\varepsilon - \varepsilon_h|$ and $|\mu - \mu_h|$ in the regularized solution $\varepsilon, \mu$ of the functional (\ref{functional}). To simplify notation in the proof, we denote the scalar product $(\cdot, \cdot)_{L_2}$ by $(\cdot, \cdot)$ and the norm $\left \Vert\cdot \right \Vert_{L_2}$ by $\left \Vert\cdot\right \Vert $. When the norm should be specified, we write it explicitly.

\textbf{Theorem 5}

\emph{ Let the assumptions of Theorems 1 and 2 hold. Let $z_h =(\varepsilon_h, \mu_h) \in W_h$ be the finite element approximation of the regularized solution $z=(\varepsilon, \mu)$ on the finite element mesh $K_h$.
Then there exists a constant $D$ defined in (\ref{2.10}) such that the following a posteriori error estimates hold}
\begin{equation}\label{theorem1}
\begin{split}
\left \Vert \varepsilon - \varepsilon_h \right \Vert &\leq \frac{D}{\alpha_1} C_I \left (h || \varepsilon_h || + \left \Vert [\varepsilon_h] \right \Vert \right ) =\frac{2 D}{ \delta^{2 \nu_1}} C_I \left ( h || \varepsilon_h || + \left \Vert [\varepsilon_h] \right \Vert \right ) ~ \forall \varepsilon_h \in V_h, \\
\left \Vert \mu - \mu_h \right \Vert &\leq \frac{D}{\alpha_2} C_I \left ( h \left \Vert \mu_h \right \Vert + \left \Vert [\mu_h] \right \Vert \right ) = \frac{2 D}{\delta^{2 \nu_2}} C_I \left (h \left \Vert \mu_h \right \Vert + \left \Vert [\mu_h] \right \Vert \right ) ~ \forall \mu_h \in V_h.
\end{split}
\end{equation}

\textbf{Proof.}

Let $z_h=(\varepsilon_h, \mu_h)$ be the minimizer of the Tikhonov functional (\ref{functional}). The existence and uniqueness of this minimizer is guaranteed by Theorem 2. By Theorem 1, the functional (\ref{functional}) is strongly convex on the space $L_2$ with the strong convexity constants $(\alpha_1, \alpha_2) = (\gamma_1/2, \gamma_2/2)$. This fact implies, see (\ref{4.249}), that
\begin{equation} \label{4.222}
(\alpha_1, \alpha_2) \left\Vert z - z_h \right \Vert_{L_{2}(\Omega)} ^{2} \leq \left(J^{\prime} (z) - J^{\prime}(z_h), z - z_h \right),
\end{equation}
where $J^{\prime}\left(z_h\right), J^{\prime}(z)$ are the Fr\'{e}chet derivatives of the functional (\ref{functional}). Using (\ref{4.222}) with the splitting
\begin{equation*}
z - z_h =\left( z - z_h^I \right) + \left(z_h^I - z_h \right) ,
\end{equation*}
where $z_h^I$ is the standard interpolant of $z$, and combining it with the Galerkin orthogonality principle
\begin{equation}
\left(J^{\prime }(z_h) - J^{\prime}(z), z_h^I - z_h \right) =0 \label{4.223}
\end{equation}
for $(z_h, z_h^I) \in W_h$, we obtain
\begin{equation} \label{4.224}
(\alpha_1, \alpha_2) \left \Vert z -z_h \right \Vert_{L_2}^{2} \leq (J^{\prime}(z) - J^{\prime}(z_h), z - z_h^I).
\end{equation}
The right-hand side of (\ref{4.224}) can be estimated using (\ref{2.10}) as
\begin{equation*}
\left(J^{\prime}\left( z \right) - J^{\prime}(z_h), z - z_h^I \right) \leq D || z - z_h || \cdot || z - z_h^I ||.
\end{equation*}
Substituting the above estimate into (\ref{4.224}), we obtain
\begin{equation}
|| z - z_h || \leq \frac{D}{(\alpha_1, \alpha_2) } || z - z_h^I ||. \label{theorem1_1}
\end{equation}
Using the interpolation property (\ref{2.6})
\begin{equation*}
|| z-z_h^I||_{L^2(\Omega)} \leq C_I h || z||_{H^1(\Omega)}
\end{equation*}
we get an a posteriori error estimate for the regularized solution $z$ with the interpolation constant $C_I$:
\begin{equation}\label{theorem1_1_1}
|| z - z_h || \leq \frac{D}{(\alpha_1, \alpha_2) } || z -z_h^I || \leq \frac{D}{(\alpha_1, \alpha_2) } C_I h ||z||_{H^1(\Omega)}.
\end{equation}
We can estimate $ h ||z||_{H^1(\Omega)}$ as
\begin{equation}\label{theorem1_2}
\begin{split}
h ||z||_{H^1(\Omega)} &\leq \sum_K h_K || z||_{H^1(K)} \leq \sum_K h_K \left( || z ||_{L_2(K)} + || \nabla z ||_{L_2(K)} \right) \\
&\leq \sum_K \left(h_K || z_h||_{L_2(K)} + \left \Vert \frac{|[z_h]|}{h_K} h_K \right \Vert_{L_2(K)} \right) \\
&\leq h || z_h ||_{L_2(\Omega)} + \sum_K \left \Vert [z_h] \right \Vert_{L_2(K)}.
\end{split}
\end{equation}
In (\ref{theorem1_2}) we denote by $[z_h]$ the jump of the function $z_h$ over the boundary of the element $K$, and $h_K$ is the diameter of the element $K$.
In (\ref{theorem1_2}) we also used the fact that \cite{JS}
\begin{equation} \label{jumps}
|\nabla z | \leq \frac{|[z_h]|}{h_K}.
\end{equation}
Substituting the above estimates into the right-hand side of (\ref{theorem1_1_1}), we get
\begin{equation*}
|| z - z_h || \leq \frac{D}{(\alpha_1, \alpha_2) } C_I h || z_h || + \frac{D}{(\alpha_1, \alpha_2) } C_I \left \Vert [z_h] \right \Vert ~ \forall z_h \in W_h.
\end{equation*}
Now, taking into account $z_h=(\varepsilon_h, \mu_h)$, we get the estimates (\ref{theorem1}) for $|\varepsilon - \varepsilon_h|$ and $|\mu - \mu_h|$, respectively. $\square $

\section{A posteriori error estimates for the Tikhonov functional}

\label{sec:errorfunc}

In the following theorem we derive an a posteriori error estimate for the error in the Tikhonov functional (\ref{functional}) obtained on the finite element mesh $K_h$.

\textbf{Theorem 6}

\emph{\ Suppose that there exists a minimizer $(\varepsilon, \mu) \in H^1(\Omega)$ of the Tikhonov functional (\ref{functional}) on the mesh $K_h$. Suppose also that there exists a finite element approximation $z_h =(\varepsilon_h, \mu_h) \in W_h$ of the minimizer $z=(\varepsilon, \mu)$ of $J(\varepsilon, \mu)$ on the mesh $K_h$ with the mesh function $h$. Then the following approximate a posteriori error estimate for the error $ e=| J(\varepsilon, \mu) - J(\varepsilon_h, \mu_h) |$ in the Tikhonov functional (\ref{functional}) holds}
\begin{equation}\label{theorem2}
\begin{split}
e= | J(\varepsilon, \mu) - J(\varepsilon_h, \mu_h) | &\leq C_I \Big( \left\| J_{\varepsilon}^{\prime}(\varepsilon_h, \mu_h)\right\| \left( h || \varepsilon_h || + \left \Vert [\varepsilon_h] \right \Vert \right) \\
&+ \left\| J_{\mu}^{\prime}(\varepsilon_h, \mu_h)\right\| \left( h || \mu_h || + \left \Vert [\mu_h] \right \Vert \right) \Big) \\
&= C_I \Big(\left\| L_{\varepsilon}^{\prime}(u(\varepsilon_h, \mu_h))\right\| \left( h ||\varepsilon_h || + \left \Vert [\varepsilon_h] \right \Vert \right) \\
&+ \left\| L_{\mu}^{\prime}(u(\varepsilon_h, \mu_h))\right\| \left( h || \mu_h || + \left \Vert [\mu_h] \right \Vert \right) \Big).
\end{split}
\end{equation}

\textbf{Proof.} By the definition of the Fr\'{e}chet derivative of the Tikhonov functional (\ref{functional}) with $z=(\varepsilon, \mu), z_h=(\varepsilon_h, \mu_h)$, we can write on the mesh $K_h$
\begin{equation}\label{theorem2_1}
J(z) - J(z_h) = J'(z_h)(z - z_h) + R(z, z_h),
\end{equation}
where the remainder $R(z, z_h) =O((z - z_h)^2),~~ (z - z_h) \to 0 ~~\forall z, z_h \in W_h$, and $ J^{\prime }(z_h)$ is the Fr\'{e}chet derivative of the functional (\ref{functional}). We can neglect the term $R(z, z_h)$ in (\ref{theorem2_1}) since it is small. This is because we assume that $z_h$ is the minimizer of the Tikhonov functional on the mesh $K_h$ and this minimizer is located in a small neighborhood of the regularized solution $z$. For similar results in the case of a general nonlinear operator equation we refer to \cite{BKS, BKK}. We again use the splitting
\begin{equation}
z - z_h = z - z_h^I + z_h^I - z_h
\end{equation}
and the Galerkin orthogonality \cite{EEJ}
\begin{equation}
J'(z_h)( z_h^I - z_h) = 0 \text{ } \forall z_h^I, z_h \in W_h
\end{equation}
to get
\begin{equation}\label{dertikh}
J(z) - J(z_h) \leq J'(z_h)(z - z_h^I),
\end{equation}
where $z_h^I$ is a standard interpolant of $z$ on the mesh $K_h$ \cite{EEJ}.
Using (\ref{dertikh}) we can also write
\begin{equation}\label{theorem2_2}
|J(z) - J(z_h) | \leq || J'(z_h)|| \cdot ||z - z_h^I||,
\end{equation}
where the term $||z - z_h^I||$ can be estimated via the interpolation estimate
\begin{equation*}
||z - z_h^I||_{L_2(\Omega)} \leq C_I || h~z||_{H^1(\Omega)}.
\end{equation*}
Substituting the above estimate into (\ref{theorem2_2}), we get
\begin{equation}\label{theorem2_3}
| J(z) - J(z_h) | \leq C_I \left\| J^{\prime }(z_h)\right\| h ||z||_{H^1(\Omega)}.
\end{equation}
Using (\ref{jumps}) we can estimate $ h || z||_{H^1(\Omega)}$ similarly to (\ref{theorem1_2}) to get
\begin{equation}\label{theorem2_4}
| J(z) - J(z_h) | \leq C_I \left\| J^{\prime}(z_h)\right\| \left( h || z_h || + \left \Vert [z_h] \right \Vert \right)~ \forall z_h \in W_h.
\end{equation}
Now, taking into account $z_h=(\varepsilon_h, \mu_h)$ and using (\ref{derfunc}), we get the estimate (\ref{theorem2}) for $|J(\varepsilon, \mu) - J(\varepsilon_h, \mu_h)|$. $\square$

\section{Mesh refinement recommendations}

\label{sec:ref}

In this section we show how to use Theorems 5 and 6 to obtain local mesh refinement recommendations. These recommendations allow us to improve the accuracy of the reconstruction of the regularized solution $(\varepsilon, \mu)$ of our problem \textbf{IP}.

Using the estimate (\ref{theorem1}) we observe that the main contributions of the norms of the reconstructed functions $(\varepsilon_h, \mu_h)$ are given by neighborhoods of those points in the finite element mesh $K_h$ where the computed values of $|h \varepsilon_h|$ and $|h \mu_h|$ achieve their maximal values. We also note that the terms with jumps in the estimate (\ref{theorem1}) disappear in the case of conforming finite element meshes with $(\varepsilon_h, \mu_h) \in V_h$. Our idea of the local finite element mesh refinement is that the neighborhoods of all points in the mesh $K_h$ where the functions $|h \varepsilon_h| $ and $|h \mu_h|$ achieve their maximal values should be refined.

Similarly, the estimate (\ref{theorem2}) of Theorem 6 tells us where to refine the finite element mesh $K_h$ locally in order to improve the accuracy in the Tikhonov functional (\ref{functional}). Using the estimate (\ref{theorem2}) we observe that the main contributions of the norms in the right-hand side of (\ref{theorem2}) are given by neighborhoods of those points in the finite element mesh $K_h$ where the computed values of $|h \varepsilon_h|$, $|h \mu_h|$, as well as the computed values of $|J_{\varepsilon}'(\varepsilon_h, \mu_h)|, |J_{\mu}'(\varepsilon_h, \mu_h)|$, achieve their maximal values. Recalling (\ref{derfunc}) and (\ref{grad1}), (\ref{grad2}) we have
\begin{equation} \label{dertikhonov1}
\begin{split}
J_{\varepsilon}'(\varepsilon_h, \mu_h)(x) = &- \int_0^T (\partial_t \lambda~ \partial_t E)~ (x,t)~dt + s \int_0^T (\nabla \cdot E)~ (\nabla \cdot \lambda) (x,t)~dt \\
&-\lambda(x,0)f_1(x) + \gamma_1 (\varepsilon_h - \varepsilon_0)(x), ~ x \in \Omega,
\end{split}
\end{equation}
\begin{equation} \label{dertikhonov1a}
J_{\mu}'(\varepsilon_h, \mu_h)(x) = - \int_0^T (\mu_h^{-2}~\nabla \times E ~\nabla \times \lambda) (x,t)~dt +\gamma_2 (\mu_h - \mu_0)(x),~ x \in \Omega.
\end{equation}
Thus, the second criterion for where to refine the finite element mesh $K_h$ is the following: the neighborhoods of all points in $K_h$ where $|J_{\varepsilon}'(\varepsilon_h, \mu_h)|+ |J_{\mu}'(\varepsilon_h, \mu_h)|$ achieves its maximum, or where both functions $|h \varepsilon_h| +|h \mu_h| $ and $|J_{\varepsilon}'(\varepsilon_h, \mu_h)|+ |J_{\mu}'(\varepsilon_h, \mu_h)|$ achieve their maxima, should be refined. We include the term $|h \varepsilon_h| +|h \mu_h| $ in the first mesh refinement recommendation, and the term $|J_{\varepsilon}'(\varepsilon_h, \mu_h)|+ |J_{\mu}'(\varepsilon_h, \mu_h)|$ in the second mesh refinement recommendation. In our computations of Section \ref{sec:num} we use the first mesh refinement recommendation and check the performance of this mesh refinement criterion.

\textbf{The First Mesh Refinement Recommendation for IP.} \emph{Applying Theorem 5 we conclude that we should refine the mesh in neighborhoods of those points in }$\Omega_{FEM}$\emph{\ where the function }$|h \varepsilon_h| +|h \mu_h| $\emph{\ attains its maximal values. More precisely, we refine the mesh in such subdomains of }$\Omega_{FEM} $\emph{\ where}
\begin{equation*}
|h \varepsilon_h| +|h \mu_h| \geq \widetilde{\beta} \max \limits_{{\Omega_{FEM}}} (|h \varepsilon_h| +|h \mu_h|),
\end{equation*}
\emph{where $ \widetilde{\beta} \in (0,1)$ is a number which should be chosen computationally and $h$ is the mesh function (\ref{meshfunction}) of the finite element mesh $K_h$}. \\

\textbf{The Second Mesh Refinement Recommendation for IP.} \emph{Using Theorem 6 we conclude that we should refine the mesh in neighborhoods of those points in }$ \Omega_{FEM} $\emph{\ where the function } $|J_{\varepsilon}'(\varepsilon_h, \mu_h)|+ |J_{\mu}'(\varepsilon_h, \mu_h)|$ \emph{\ attains its maximal values. More precisely, let }$\beta \in (0,1) $\emph{\ be the tolerance number which should be chosen in computational experiments. Refine the mesh $K_h$ in such subdomains of }$ \Omega_{FEM} $\emph{\ where}
\begin{equation*}
|J_{\varepsilon}'(\varepsilon_h, \mu_h) + J_{\mu}'(\varepsilon_h, \mu_h)| \geq \beta \max_{{\Omega_{FEM}}}(|J_{\varepsilon}'(\varepsilon_h, \mu_h) + J_{\mu}'(\varepsilon_h, \mu_h)|).
\end{equation*}

\textbf{Remarks}

\begin{itemize}
\item 1. We note that in (\ref{dertikhonov1}), (\ref{dertikhonov1a}) we have the exact values of $E(x,t), \lambda(x,t)$ obtained with the computed functions $(\varepsilon_h, \mu_h)$. However, in our algorithms of Section \ref{sec:alg} and in the computations of Section \ref{sec:num} we approximate the exact values of $E(x,t), \lambda(x,t)$ by the computed ones $E_h(x,t), \lambda_h(x,t)$.
\item 2. In both mesh refinement recommendations we used the fact that the functions $\varepsilon, \mu$ are unknown only in $\Omega_{FEM}$.
\end{itemize}

\section{Algorithms for the solution of IP}

\label{sec:alg}

In this section we present three different algorithms which can be used for the solution of our \textbf{IP}: the usual conjugate gradient algorithm and two different adaptive finite element algorithms. The conjugate gradient algorithm is applied on every finite element mesh $K_h$ which we use in the computations.

We note that in our adaptive algorithms we refine not only the space mesh $K_h$ but also the time mesh $J_{\tau}$, according to the CFL condition of \cite{CFL67}. However, the time mesh $J_{\tau}$ is refined globally and not locally. Checking how the adaptive finite element method performs when both space and time meshes are refined locally can be considered a new research task.
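Before presenting the algorithms, we illustrate the first mesh refinement recommendation with a minimal sketch; the elementwise arrays on $K_h$ are assumed interfaces and do not correspond to our actual implementation.
\begin{verbatim}
# Minimal sketch (assumed elementwise arrays on K_h): mark elements
# for refinement following the First Mesh Refinement Recommendation,
# i.e. where |h*eps_h| + |h*mu_h| exceeds the fraction beta_tilde of
# its maximum over Omega_FEM.
import numpy as np

def mark_elements(h, eps_h, mu_h, beta_tilde=0.7):
    indicator = np.abs(h * eps_h) + np.abs(h * mu_h)
    return indicator >= beta_tilde * indicator.max()  # boolean mask
\end{verbatim}
Elements where the mask is true are refined; the default $\widetilde{\beta} = 0.7$ reflects the nearly optimal choice reported in Section \ref{sec:num}.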
Taking into account the remark of Section \ref{sec:ref}, we define
\begin{equation} \label{dertikhonov2}
\begin{split}
g_{\varepsilon}^n(x) = &- \int_0^T (\partial_t \lambda_h~ \partial_t E_h)~ (x,t,\varepsilon_h^n, \mu_h^n)~dt + s \int_0^T (\nabla \cdot E_h)~ ( \nabla \cdot \lambda_h) (x,t,\varepsilon_h^n, \mu_h^n )~dt \\
&-\lambda_h(x,0)f_1(x) +\gamma_1 (\varepsilon_h^n - \varepsilon_0)(x), ~ x \in \Omega,
\end{split}
\end{equation}
\begin{equation} \label{dertikhonov3}
g_{\mu}^n(x) = - \int_0^T ((\mu_h^n)^{-2}~\nabla \times E_h ~\nabla \times \lambda_h) (x,t, \varepsilon_h^n, \mu_h^n)~dt +\gamma_2 (\mu_h^n - \mu_0)(x),~ x \in \Omega,
\end{equation}
where the functions $\lambda_h, E_h$ are the approximate finite element solutions of the state and adjoint problems computed with $\varepsilon :=\varepsilon_h^n$ and $\mu := \mu_h^n$, respectively, and $n$ is the iteration number in the conjugate gradient algorithm.

\subsection{Conjugate Gradient Algorithm}

\label{sec:cg}

\begin{itemize}
\item[Step 0.] Discretize the computational space-time domain $\Omega \times [0,T]$ using partitions $K_{h}$ and $J_{\tau}$, respectively, see Section \ref{sec:spaces}. Start with the initial approximations $\varepsilon_{h}^{0}= \varepsilon_0$ and $\mu_{h}^{0}= \mu_0$ and compute the sequences $\varepsilon_{h}^{n}, \mu_{h}^{n}$ as follows:
\item[Step 1.] Compute the solutions $E_{h}\left(x,t,\varepsilon_{h}^{n}, \mu_h^n\right) $ and $\lambda _{h}\left(x,t,\varepsilon_{h}^{n}, \mu_h^n\right) $ of the state (\ref{E_gauge}) and adjoint (\ref{adjoint}) problems, respectively, using the explicit schemes (\ref{fem_maxwell}).
\item[Step 2.] Update the coefficients $\varepsilon_h:=\varepsilon_{h}^{n+1}$ and $\mu_h:=\mu_{h}^{n+1}$ on $K_{h}$ and $J_{\tau}$ via the conjugate gradient method
\begin{equation*}
\begin{split}
\varepsilon_h^{n+1} &= \varepsilon_h^{n} + \alpha_{\varepsilon} d_{\varepsilon}^n(x),\\
\mu_h^{n+1} &= \mu_h^{n} + \alpha_{\mu} d_{\mu}^n(x),
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
d_{\varepsilon}^n(x)&= -g_{\varepsilon}^n(x) + \beta_{\varepsilon}^n d_{\varepsilon}^{n-1}(x),\\
d_{\mu}^n(x)&= -g_{\mu}^n(x) + \beta_{\mu}^n d_{\mu}^{n-1}(x),
\end{split}
\end{equation*}
with
\begin{equation*}
\begin{split}
\beta_{\varepsilon}^n &= \frac{|| g_{\varepsilon}^n(x)||^2}{|| g_{\varepsilon}^{n-1}(x)||^2},\\
\beta_{\mu}^n &= \frac{|| g_{\mu}^n(x)||^2}{|| g_{\mu}^{n-1}(x)||^2}.
\end{split}
\end{equation*}
Here, $d_{\varepsilon}^0(x)= -g_{\varepsilon}^0(x), d_{\mu}^0(x)= -g_{\mu}^0(x)$, and $\alpha_{\varepsilon}, \alpha_{\mu} $ are the step sizes in the gradient update, which can be computed as in \cite{Peron}.
\item[Step 3.] Stop computing $\varepsilon_{h}^{n}$ at the iteration $M:=n$ and set $\varepsilon_h^M :=\varepsilon_h^n$ if either $||g_{\varepsilon}^{n}||_{L_{2}( \Omega)}\leq \theta$ or the norms $||\varepsilon_{h}^{n}||_{L_{2}(\Omega)}$ are stabilized. Here, $\theta$ is the tolerance in the updates of the gradient method.
\item[Step 4.] Stop computing $\mu_{h}^{n}$ at the iteration $N:=n$ and set $\mu_h^N := \mu_h^n$ if either $||g_{\mu}^{n}||_{L_{2}( \Omega)}\leq \theta$ or the norms $||\mu_{h}^{n}||_{L_{2}(\Omega)}$ are stabilized. Otherwise set $n:=n+1$ and go to Step 1.
\end{itemize}

\subsection{Adaptive algorithms}

\label{sec:adaptalg}

In this section we present two adaptive algorithms for the solution of our \textbf{IP}.
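Both adaptive algorithms below call the conjugate gradient method of Section \ref{sec:cg} on every mesh. The following minimal sketch shows one coefficient update of Step 2, assuming the gradients (\ref{dertikhonov2}), (\ref{dertikhonov3}) have been assembled elsewhere; the interfaces are hypothetical and not our actual code.
\begin{verbatim}
# Minimal sketch (assumed gradient interface): one update of Step 2
# of the Conjugate Gradient Algorithm, with the Fletcher-Reeves
# formula beta^n = ||g^n||^2 / ||g^{n-1}||^2 as given above. At the
# first iteration d = -g.
import numpy as np

def cg_update(eps_h, mu_h, g_eps, g_mu, g_eps_old, g_mu_old,
              d_eps_old, d_mu_old, alpha_eps, alpha_mu):
    beta_eps = np.dot(g_eps, g_eps) / np.dot(g_eps_old, g_eps_old)
    beta_mu = np.dot(g_mu, g_mu) / np.dot(g_mu_old, g_mu_old)
    d_eps = -g_eps + beta_eps * d_eps_old      # new descent direction
    d_mu = -g_mu + beta_mu * d_mu_old
    return (eps_h + alpha_eps * d_eps, mu_h + alpha_mu * d_mu,
            d_eps, d_mu)
\end{verbatim}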
\subsection{Adaptive algorithms}
\label{sec:adaptalg}

In this section we present two adaptive algorithms for the solution of our \textbf{IP}. In Adaptive Algorithm 1 we apply the first mesh refinement recommendation of Section \ref{sec:ref}, while in Adaptive Algorithm 2 we use the second mesh refinement recommendation of Section \ref{sec:ref}. We denote the minimizer of the Tikhonov functional (\ref{functional}) and its approximate finite element solution on the $k$ times adaptively refined mesh $K_{h_k}$ by $(\varepsilon, \mu)$ and $(\varepsilon_k, \mu_k)$, respectively. In both of our mesh refinement recommendations of Section \ref{sec:ref} we need to compute the functions $\varepsilon_k, \mu_k$ on the mesh $K_{h_k}$. To do that we apply the conjugate gradient algorithm of Section \ref{sec:cg} and define $\varepsilon_k := \varepsilon_h^M, \mu_k := \mu_h^N$ to be the values obtained at Steps 3 and 4 of that algorithm. A skeleton of the resulting two-level iteration is sketched after the remarks below.

\vspace{0.5cm}
\textbf{Adaptive Algorithm 1}
\vspace{0.5cm}

\begin{itemize}
\item[Step 0.] Choose an initial space-time mesh $K_{h_0} \times J_{\tau_0}$ in $\Omega_{FEM} \times [0,T]$. Compute the sequences $\varepsilon_k, \mu_k,\ k >0$, via the following steps:
\item[Step 1.] Obtain numerical solutions $\varepsilon_k, \mu_k$ on $K_{h_k}$ using the conjugate gradient method of Section \ref{sec:cg}.
\item[Step 2.] Refine those elements of the mesh $K_{h_k}$ where
\begin{equation} \label{alg2_2}
|h \varepsilon_k| +|h \mu_k| \geq \widetilde{\beta}_k \max_{{\Omega_{FEM}}} (|h \varepsilon_k| +|h \mu_k|)
\end{equation}
is satisfied. Here, the tolerance numbers $ \widetilde{\beta}_k \in \left( 0,1\right) $ are chosen by the user.
\item[Step 3.] Define the new refined mesh as $K_{h_{k+1}}$ and construct a new time partition $J_{\tau_{k+1}}$ such that the CFL condition of \cite{CFL67} for the explicit schemes (\ref{fem_maxwell}) is satisfied. Interpolate $\varepsilon_k, \mu_k$ onto the new mesh $K_{h_{k+1}}$ and perform Steps 1--3 on the space-time mesh $K_{h_{k+1}} \times J_{\tau_{k+1}}$. Stop the mesh refinements when $||\varepsilon_k - \varepsilon_{k-1}|| < tol_1$ and $||\mu_k - \mu_{k-1}|| < tol_2$, or when $|| g_{\varepsilon}^k(x)|| < tol_3$ and $|| g_{\mu}^k(x)|| < tol_4$, where $tol_i, i=1,...,4$, are tolerances chosen by the user.
\end{itemize}

\vspace{0.5cm}
\textbf{Adaptive Algorithm 2}
\vspace{0.5cm}

\begin{itemize}
\item[Step 0.] Choose an initial space-time mesh $K_{h_{0}}\times J_{\tau_0}$ in $\Omega_{FEM}$. Compute the sequences $\varepsilon_k, \mu_k,\ k >0$, on the refined meshes $K_{h_k}$ via the following steps:
\item[Step 1.] Obtain numerical solutions $\varepsilon_k, \mu_k$ on $K_{h_k} \times J_{\tau_k}$ using the conjugate gradient method of Section \ref{sec:cg}.
\item[Step 2.] Refine the mesh $K_{h_k}$ at all points where
\begin{equation}
| g_{\varepsilon}^k(x)| + |g_{\mu}^k(x) | \geq \beta_k \max_{\Omega_{FEM}}\left( | g_{\varepsilon}^k(x)| + |g_{\mu}^k(x)|\right), \label{62}
\end{equation}
where the a posteriori error indicators $g_{\varepsilon}^k, g_{\mu}^k$ are defined in (\ref{dertikhonov2}), (\ref{dertikhonov3}). The tolerance numbers $\beta_k \in \left( 0,1\right) $ are chosen in the numerical examples.
\item[Step 3.] Define the new refined mesh as $K_{h_{k+1}}$ and construct a new time partition $J_{\tau_{k+1}}$ such that the CFL condition of \cite{CFL67} for the explicit schemes (\ref{fem_maxwell}) is satisfied. Interpolate $\varepsilon_k, \mu_k$ onto the new mesh $K_{h_{k+1}}$ and perform Steps 1--3 on the space-time mesh $K_{h_{k+1}} \times J_{\tau_{k+1}}$.
Stop the mesh refinements when $||\varepsilon_k - \varepsilon_{k-1}|| < tol_1$ and $||\mu_k - \mu_{k-1}|| < tol_2$, or when $|| g_{\varepsilon}^k(x)|| < tol_3$ and $|| g_{\mu}^k(x)|| < tol_4$, where $tol_i, i=1,...,4$, are tolerances chosen by the user.
\end{itemize}

\vspace{0.5cm}
\textbf{Remarks}

\begin{itemize}
\item 1. First, we comment on how to choose the tolerance numbers $\beta_k, \widetilde{\beta}_k$ in (\ref{62}), (\ref{alg2_2}). Their values depend on the concrete values of $ \max \limits_{\Omega_{FEM}}( | g_{\varepsilon}^k(x)| + |g_{\mu}^k(x)|)$ and $\max \limits_{\Omega_{FEM}} (|h \varepsilon_k| +|h \mu_k|)$, respectively. If we take values of $\beta_k, \widetilde{\beta}_k$ very close to $1$, then the mesh is refined only in a very narrow region of $\Omega_{FEM}$; if instead we choose $\beta_k, \widetilde{\beta}_k \approx 0$, then almost all elements of the finite element mesh are refined, and we obtain global rather than local mesh refinement. Our numerical tests of Section \ref{sec:num} show that the choice $\beta_k, \widetilde{\beta}_k =0.7$ is nearly optimal, since with these values of the parameters the finite element mesh $K_h$ is refined exactly in the regions where the functions $(\varepsilon_h, \mu_h)$ are reconstructed.
\item 2. To compute the $L_2$ norms $||\varepsilon_k - \varepsilon_{k-1}||$, $||\mu_k - \mu_{k-1}||$ in Step 3 of the adaptive algorithms, the solutions $\varepsilon_{k-1}, \mu_{k-1}$ are interpolated from the mesh $K_{h_{k-1}}$ to the mesh $K_{h_k}$.
\end{itemize}
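The outer adaptive loop common to both algorithms can be summarized in the following sketch; the callbacks \texttt{reconstruct} (the conjugate gradient method of Section \ref{sec:cg} on a fixed mesh), \texttt{refine} (the marking criterion (\ref{alg2_2}) or (\ref{62}) together with a CFL-compatible time step), and \texttt{interpolate} are placeholders, not actual solver routines:

\begin{verbatim}
import numpy as np

def adaptive_reconstruction(mesh0, reconstruct, refine, interpolate,
                            tol=1e-3, max_refinements=5):
    """Skeleton of Adaptive Algorithms 1 and 2: alternate coefficient
    reconstruction on a fixed mesh with local mesh refinement."""
    mesh = mesh0
    eps_prev = mu_prev = None
    for k in range(max_refinements + 1):
        eps_k, mu_k = reconstruct(mesh)        # Step 1: CG on K_{h_k}
        if eps_prev is not None:
            # Step 3 stopping test; eps_prev, mu_prev were interpolated
            # onto the current mesh when it was created (cf. Remark 2)
            if (np.linalg.norm(eps_k - eps_prev) < tol and
                    np.linalg.norm(mu_k - mu_prev) < tol):
                break
        mesh = refine(mesh, eps_k, mu_k)       # Step 2: marking + refinement
        eps_prev = interpolate(eps_k, mesh)
        mu_prev  = interpolate(mu_k, mesh)
    return eps_k, mu_k
\end{verbatim}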
\section{Numerical studies of the adaptivity technique}

\label{sec:num}

In this section we present numerical tests for the solution of our \textbf{IP} using Adaptive Algorithm 1 of Section \ref{sec:adaptalg}. The goal of our simulations is to demonstrate the performance of the adaptivity technique in improving the reconstructions obtained on a coarse, non-refined mesh.

In our tests we reconstruct the two symmetric structures of Figure \ref{fig:fig1}, which represent a model of a waveguide with small magnetic metallic inclusions with relative permittivity $\varepsilon_r =12$ and relative magnetic permeability $\mu_r = 2.0$. We note that we choose $\varepsilon_r =12$ in the metallic targets similarly to our recent work \cite{BCN} and the experimental works \cite{ BTKB,BTKF,NBKF}, where metallic targets were treated as dielectrics with large dielectric constants, called \emph{effective} dielectric constants. As in \cite{BCN, BTKB, BTKF, NBKF}, we choose their values in the interval
\begin{equation}
\varepsilon_r \in \left(10,30\right). \label{2.51}
\end{equation}
In our tests we choose $\mu_r = 2.0$ because the relative magnetic permeability belongs to the interval $\mu_r \in [1,3]$; see \cite{SSMS} and \cite{BCN} for a similar choice.

As in \cite{BCN} we initialize only one component $E_2$ of the electric field $E=(E_1,E_2,E_3)$ on $S_T$ as a plane wave $f(t)$ such that (see the boundary condition in (\ref{E_gauge}))
\begin{equation}\label{f}
f(t) =\left\{
\begin{array}{ll}
\sin \left(\omega t \right) ,\qquad &\text{ if }t\in \left(0,\frac{2\pi }{\omega } \right) , \\
0,&\text{ if } t>\frac{2\pi }{\omega }.
\end{array}
\right.
\end{equation}
Compared with \cite{BCN}, where only zero initial conditions in (\ref{E_gauge}) were used in the computations, in Test 2 of our study we use a non-zero initial condition for the second component $E_2$, given by
\begin{equation}\label{initcond}
\begin{split}
f_0 (x) = E_2(x,0) &= e^{-(x_1^2 + x_2^2 + x_3^2)} \cdot \cos t|_{t=0} = e^{-(x_1^2 + x_2^2 + x_3^2)} , \\
f_1 (x) =\frac{ \partial E_2}{\partial t} (x,0) &= -e^{-(x_1^2 + x_2^2 + x_3^2)} \cdot \sin t|_{t=0} \equiv 0.
\end{split}
\end{equation}
We perform two tests with different inclusions to be reconstructed:
\begin{itemize}
\item Test 1. Reconstruction of the two layers of scatterers of Figure \ref{fig:fig2}-a) with additive noise $\sigma=7 \%$ and $\sigma=17\%$ in the backscattered data, on the frequency interval $\omega \in [45, 60]$, with zero initial conditions in (\ref{E_gauge}).
\item Test 2. Reconstruction of the one layer of scatterers of Figure \ref{fig:fig2}-b) with additive noise $\sigma=7 \%$ and $\sigma=17\%$ in the backscattered data, on the frequency interval $\omega \in [45, 60]$, with the non-zero initial condition (\ref{initcond}) in (\ref{E_gauge}).
\end{itemize}

\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 2.40cm, clip = true, trim = 9.0cm 3.0cm 4.0cm 2.0cm, angle = -90.0]{2layer_f45n7_exact.ps} &
\includegraphics[width = 2.7cm, clip = true, trim = 8.0cm 3.0cm 4.0cm 2.0cm, angle = -90.0]{1layer_f50n17_exact.ps} \\
a) Test 1 & b) Test 2 \\
\end{tabular}
\end{center}
\caption{The exact values of the functions $\varepsilon(x)$ and $\mu(x)$ are: $\varepsilon(x)=12.0, \mu(x)=2$ inside the small scatterers, and $\varepsilon(x)=\mu(x)=1.0$ everywhere else in $\Omega_{\rm FEM}$.}
\label{fig:fig2}
\end{figure}

\subsection{Computational domains}

For the simulation of the forward and adjoint problems we use the domain decomposition method of \cite{BMaxwell}. This method is convenient for our computations since it is efficiently implemented in the software package WavES \cite{waves} using PETSc \cite{petsc}. To apply the method of \cite{BMaxwell} we divide the computational domain $\Omega$ into two subregions as described in Section \ref{sec:stat}, and we define $\Omega_{FDM} := \Omega_{\rm OUT}$ such that $\Omega = \Omega_{\rm FEM} \cup \Omega_{\rm FDM}$, see Figure \ref{fig:fig1}. In $\Omega_{\rm FEM}$ we use finite elements, and in $\Omega_{\rm FDM}$ we use the finite difference method. We set $\varepsilon(x) = \mu(x)=1$ in $\Omega_{FDM}$ and assume that these functions are unknown only in $\Omega_{FEM}$.

We choose the dimensionless domain $\Omega_{FEM}$ such that
\begin{equation*}
\Omega_{ FEM} = \left\{ x' = (x_1,x_2,x_3) \in ( -3.2,3.2) \times (-0.6,0.6) \times (-0.6,0.6) \right\},
\end{equation*}
and the dimensionless domain $\Omega$ is set to be
\begin{equation*}
\Omega = \left\{ x' = (x_1,x_2,x_3) \in ( -3.4,3.4) \times (-0.8,0.8) \times (-0.8,0.8) \right\}.
\end{equation*}
Here, the dimensionless spatial variable is $x^{\prime}= x/\left(1\,m\right)$. In the domain decomposition between $\Omega_{ FEM}$ and $\Omega_{ FDM}$ we choose the mesh size $h=0.1$. We also use this mesh size for the coarse mesh $K_{h_0}$ in both adaptive algorithms of Section \ref{sec:adaptalg}. As in \cite{BMaxwell, BMaxwell2, BCN}, in all our tests we set $s=1$ in (\ref{model2_1}) in $\Omega_{FEM}$.
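For reference, the excitation (\ref{f}), the initial condition (\ref{initcond}), and the discretization parameters above can be collected in a short script; the CFL-type bound checked at the end assumes unit wave speed in 3D, and the precise constant depends on the explicit scheme actually used:

\begin{verbatim}
import numpy as np

omega = 45.0                   # test frequencies lie in [45, 60]
h, tau, T = 0.1, 0.006, 3.0    # mesh size, time step, final time

def f(t):
    """Plane wave source: one period of sin(omega*t), then zero."""
    return np.where((t > 0) & (t < 2 * np.pi / omega),
                    np.sin(omega * t), 0.0)

def f0(x1, x2, x3):
    """Non-zero Gaussian initial condition for E_2 used in Test 2."""
    return np.exp(-(x1**2 + x2**2 + x3**2))

# A standard CFL-type check for the explicit time stepping (assumed form):
assert tau <= h / np.sqrt(3.0)
\end{verbatim}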
Because of the domain decomposition, the Maxwell system (\ref{E_gauge}) is transformed in $\Omega_{FDM}$ into the wave equation
\begin{equation}\label{waveeq}
\begin{split}
\frac{\partial^2 E}{\partial t^2} - \triangle E &=0,~ \mbox{in}~~ \Omega_{FDM} \times [0,T], \\
E_2(x,0) &= f_0(x), E_1(x,0) = E_3(x,0) = 0~ \mbox{ for}~~ x \in \Omega, \\
E_t(x,0) &= 0~ \mbox{for}~~ x \in \Omega, \\
E(x,t)& = (0, f\left(t\right),0) ,~ \mbox{on} ~\partial \Omega_{1}\times (0,t_{1}], \\
\partial _{n}E(x,t)& =-\partial _{t} E(x,t),~\mbox{on} ~\partial \Omega_{1}\times (t_{1},T), \\
\partial _{n}E(x,t)& =-\partial _{t} E(x,t),~ \mbox{on} ~\partial \Omega_{2}\times (0,T), \\
\partial _{n} E(x,t)& =0,~ \mbox{on}~\partial \Omega_{3}\times (0,T).
\end{split}
\end{equation}
In $\Omega_{ FEM}$ we solve
\begin{equation}\label{maxweq}
\begin{split}
\varepsilon \frac{\partial^2 E}{\partial t^2} + \nabla \times ( \mu^{-1} \nabla \times E) - s\nabla ( \nabla \cdot(\varepsilon E)) &= 0,~ \mbox{in}~~ \Omega_{{\rm FEM}}, \\
E(x,0) = 0, ~~~E_t(x,0) &= 0~ \mbox{in}~~ \Omega_{\rm FEM}, \\
E(x,t)|_{\partial \Omega_{\rm FEM}} &= E(x,t)|_{\partial \Omega_{{ FDM}_I}}.
\end{split}
\end{equation}
In (\ref{maxweq}), $\partial \Omega_{{\rm FDM}_I}$ denotes the internal boundary of the domain $\Omega_{FDM}$, and $\partial \Omega_{FEM}$ denotes the boundary of the domain $\Omega_{FEM}$. The adjoint problem (\ref{adjoint}) is transformed in a similar way into two problems, in $\Omega_{FDM}$ and in $\Omega_{FEM}$, which are the same as in \cite{BCN}. We solve the forward and adjoint problems on the time interval $[0,T]=[0,3]$ in both adaptive algorithms, and we choose the time step $\tau=0.006$, which satisfies the CFL condition \cite{CFL67}.

To be able to test the adaptive algorithms, we first generate backscattered data at $S_T$ by solving the forward problem (\ref{E_gauge}) with the plane wave $f(t)$ given by (\ref{f}) on the time interval $t\in[0,3]$ with $\tau=0.006$, with the known values $\varepsilon_ r =12.0, \mu_r =2$ inside the scatterers of Figure \ref{fig:fig2} and $\varepsilon_r = \mu_r =1.0$ everywhere else in $\Omega$. Figure \ref{fig:Isosurfaces} presents isosurfaces of the exact simulated solution at different times. In particular, in Figure \ref{fig:Isosurfaces}-c) we observe the behaviour of the non-zero initial condition (\ref{initcond}). The data were generated on a mesh specially constructed for the solution of the forward problem: this mesh was refined several times in the places where the inclusions of Figure \ref{fig:fig2} are located. This mesh is completely different from the meshes used in the computations of Tests 1 and 2; thus, the variational crime in our computations is avoided.

Figure \ref{fig:backscatdata}-a) illustrates the typical behavior of the noisy backscattered data in Test 1, obtained with $\omega =50$ in (\ref{f}). Figure \ref{fig:backscatdata}-b) shows the result of the computation of the forward problem in Test 2 with $\omega =60$ in (\ref{f}). Figures \ref{fig:backscatdata}-c), d) show the difference in the backscattered data for all components of the electric field at the final computational time $t=3$. One possible model for the additive noise perturbing these data is sketched below.
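Since the exact noise model is not spelled out here, the following sketch shows one common convention for imposing additive noise of relative level $\sigma$ (e.g., $\sigma=0.07$ or $\sigma=0.17$) on the simulated backscattered data; it should be read as an assumption rather than the exact procedure used in the solver:

\begin{verbatim}
import numpy as np

def add_relative_noise(E_obs, sigma, seed=0):
    """Perturb backscattered data with uniform additive noise whose
    amplitude is sigma times the maximal data amplitude (an assumed,
    commonly used noise model)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, size=E_obs.shape)
    return E_obs + sigma * np.abs(E_obs).max() * noise

# Example: data_7  = add_relative_noise(E_obs, 0.07)
#          data_17 = add_relative_noise(E_obs, 0.17)
\end{verbatim}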
\subsection{Reconstructions}

\begin{table}[h]
\label{tab:1}
{\footnotesize Table 1. \emph{Results of reconstruction on the coarse meshes of Tables 5, 6 for $\sigma =7\%$, together with the computational errors between $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ and the exact $\varepsilon^{*}$ in percent. Here, $\overline{N}$ is the final iteration number in the conjugate gradient method for the computation of $\varepsilon_r$, and $\overline{M}$ is the final iteration number for the computation of $\mu_r$.}}
\par \vspace{2mm}
\centerline{
\begin{tabular}{|c|}
\hline
$\sigma= 7\%$ \\ \hline
\begin{tabular}{c|c|c|c|c|c|c}
Test 1 & $ \max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $15$ & $25$ & $10$ &$2.58$ &$29$ &$10$ \\
$\omega=50$ & $15$ & $25$ &$10 $ &$2.38$ &$19$ &$10$ \\
$\omega=60$ & $15$ &$25$ &$10$ &$2.46$ &$23$ &$10$ \\
\end{tabular} \\ \hline
\begin{tabular} {c|c|c|c|c|c|c}
Test 2 & $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $13.32$ &$11$& $10$ & $3.07$ &$53.5$ & $10$ \\
$\omega=50$ & $15 $&$25$ &$10$ &$2.62$ &$31$ &$10$ \\
$\omega=60$ & $9.3$ &$22.4$ &$10$ &$2.88$ & $44 $&$10$ \\
\end{tabular} \\ \hline
\end{tabular}
}
\end{table}

\begin{table}[h]
\label{tab:2}
{\footnotesize Table 2. \emph{Results of reconstruction on the coarse meshes of Tables 5, 6 for $\sigma =17\%$, together with the computational errors between $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ and the exact $\varepsilon^{*}$ in percent. Here, $\overline{N}$ is the final iteration number in the conjugate gradient method for the computation of $\varepsilon_r$, and $\overline{M}$ is the final iteration number for the computation of $\mu_r$.}}
\par \vspace{2mm}
\centerline{
\begin{tabular}{|c|}
\hline
$\sigma= 17\%$ \\ \hline
\begin{tabular}{c|c|c|c|c|c|c}
Test 1 & $ \max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $15$ & $25$ & $10$ &$2.35$ &$17.5$ &$10$ \\
$\omega=50$ & $15$ & $25$ &$10 $ &$2.89$ &$44.5$ &$10$ \\
$\omega=60$ & $15$ &$25$ &$8$ &$3.09$ &$53.6$ &$8$ \\
\end{tabular} \\ \hline
\begin{tabular} {c|c|c|c|c|c|c}
Test 2 & $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $15$ &$25$& $10$ & $2.39$ &$19.5$ & $10$ \\
$\omega=50$ & $15 $&$25$ &$10$ &$2.24$ &$12$ &$10$ \\
$\omega=60$ & $8.46$ &$29.5$ &$10$ &$2.50$ & $25 $&$10$ \\
\end{tabular} \\ \hline
\end{tabular}
}
\end{table}
\begin{table}[h]
\label{tab:3}
{\footnotesize Table 3. \emph{Results of reconstruction on the 5 times adaptively refined meshes of Tables 5, 6 for $\sigma =7\%$, together with the computational errors between $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ and the exact $\varepsilon^{*}$ in percent. Here, $\overline{N}$ is the final iteration number in the conjugate gradient method for the computation of $\varepsilon_r$, and $\overline{M}$ is the final iteration number for the computation of $\mu_r$.}}
\par \vspace{2mm}
\centerline{
\begin{tabular}{|c|}
\hline
$\sigma= 7\%$ \\ \hline
\begin{tabular}{c|c|c|c|c|c|c}
Test 1 & $ \max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $14.96$ & $24.6$ & $3$ &$1.82$ &$9$ &$3$ \\
$\omega=50$ & $14.96$ & $24.6$ &$3 $ &$1.73$ &$13.5$ &$3$ \\
$\omega=60$ & $14.95$ &$24.5$ &$3$ &$1.76$ &$12$ &$3$ \\
\end{tabular} \\ \hline
\begin{tabular} {c|c|c|c|c|c|c}
Test 2 & $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $12.97$ &$8$& $3$ & $1.99$ &$0.5$ & $3$ \\
$\omega=50$ & $14.57$ &$21.4$ &$3$ &$1.79$ &$10.5$ &$3$ \\
$\omega=60$ & $9.3$ &$22.5$ &$3$ &$1.91$ & $4.5$& $3$ \\
\end{tabular} \\ \hline
\end{tabular}
}
\end{table}

\begin{table}[h]
\label{tab:4}
{\footnotesize Table 4. \emph{Results of reconstruction on the 5 times adaptively refined meshes of Tables 5, 6 for $\sigma =17\%$, together with the computational errors between $\max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ and the exact $\varepsilon^{*}$ in percent. Here, $\overline{N}$ is the final iteration number in the conjugate gradient method for the computation of $\varepsilon_r$, and $\overline{M}$ is the final iteration number for the computation of $\mu_r$.}}
\par \vspace{2mm}
\centerline{
\begin{tabular}{|c|}
\hline
$\sigma= 17\%$ \\ \hline
\begin{tabular}{c|c|c|c|c|c|c}
Test 1 & $ \max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $14.96$ &$24.6$& $3$ & $1.65$ &$17.5$ & $3$ \\
$\omega=50$ & $14.96$ &$24.6$ &$3$ &$1.97$ &$1.5$ &$3$ \\
$\omega=60$ & $14.95$ &$24.5$ &$3$ &$2.04$ &$20$ &$3$ \\
\end{tabular} \\ \hline
\begin{tabular}{c|c|c|c|c|c|c}
Test 2 & $ \max\limits_{\Omega_{FEM} } \varepsilon_{\overline{N}}$ & error, $\%$ & $\overline{N}$ & $ \max\limits_{\Omega_{FEM} } \mu_{\overline{M}}$ & error, $\%$ & $\overline{M}$ \\ \hline
$\omega=45$ & $14.69$ & $22.4$ & $3$& $1.71$ &$14.5$ & $3$ \\
$\omega=50$ & $14.47$ & $20.5$ &$3$ &$1.63$ &$18.5$ & $3$ \\
$\omega=60$ & $8.44$ &$29.7$ &$3$ &$1.74$ &$13$ &$3$ \\
\end{tabular} \\ \hline
\end{tabular}
}
\end{table}
\begin{table}[tbp]
\label{tab:5}
{\footnotesize Table 5. \emph{Test 1. Computed values of $\varepsilon_\mathrm{r}^{\mathrm{comp}} := \max \limits_{\Omega_{FEM}} \varepsilon_r \ \text{and} \ \mu_\mathrm{r}^{\mathrm{comp}} := \max \limits_{\Omega_{FEM}}\mu_r$ on the adaptively refined meshes. Computations are done with the noise $\sigma = 7\%.$}}
\begin{center}
{\footnotesize
\begin{tabular}{|c|l|r|r|r|r|r|r|}
\hline
$\omega$ & & coarse mesh & 1 ref. mesh & 2 ref. mesh & 3 ref. mesh & 4 ref. mesh & 5 ref. mesh \\ \hline
45 & \# nodes & 10958 & 11028 & 11241 & 11939 & 14123 & 18750 \\ \hline
& \# elements & 55296 & 55554 & 56624 & 60396 & 73010 & 96934 \\ \hline
& $\varepsilon_\mathrm{r}^{\mathrm{comp}}$ & 15 & 15 & 15 & 15 & 15 & 14.96\\ \hline
& $\mu_\mathrm{r}^{\mathrm{comp}}$ & 2.58 & 2.58 & 2.58 & 2.58 & 2.58 & 1.82 \\ \hline
50 & \# nodes & 10958 & 11031 & 11212 & 11887 & 13761 & 17892 \\ \hline
& \# elements & 55296 & 55572 & 56462 & 60146 & 71010 & 92056 \\ \hline
& $\varepsilon_\mathrm{r}^{\mathrm{comp}}$ & 15 & 15 & 15 & 15 & 15 & 14.96 \\ \hline
& $\mu_\mathrm{r}^{\mathrm{comp}}$ & 2.38 & 2.38 & 2.38 & 2.38 & 2.38 & 1.73\\ \hline
60 & \# nodes & 10958 & 11050 & 11255 & 11963 & 13904 & 18079 \\ \hline
& \# elements & 55296 & 56666 & 60564 & 71892 & 61794 & 92926 \\ \hline
& $\varepsilon_\mathrm{r}^{\mathrm{comp}}$ & 15 & 15 & 15 & 15 & 15 & 14.96 \\ \hline
& $\mu_\mathrm{r}^{\mathrm{comp}}$ & 2.46 & 2.46 & 2.46 & 2.46 & 2.46 & 1.76 \\ \hline
\end{tabular}
}
\end{center}
\end{table}

\begin{table}[tbp]
\label{tab:6}
{\footnotesize Table 6. \emph{Test 2. Computed values of $\varepsilon_\mathrm{r}^{\mathrm{comp}} := \max \limits_{\Omega_{FEM}} \varepsilon_r \ \text{and} \ \mu_\mathrm{r}^{\mathrm{comp}} := \max \limits_{\Omega_{FEM}}\mu_r$ on the adaptively refined meshes. Computations are done with the noise $\sigma = 17\%.$}}
\begin{center}
{\footnotesize
\begin{tabular}{|c|l|r|r|r|r|r|r|}
\hline
$\omega$ & & coarse mesh & 1 ref. mesh & 2 ref. mesh & 3 ref. mesh & 4 ref. mesh & 5 ref. mesh \\ \hline
45 & \# nodes & 10958 & 11007 & 11129 & 11598 & 12468 & 14614 \\ \hline
& \# elements & 55428 & 55428 & 56024 & 58628 & 63708 & 74558 \\ \hline
& $\varepsilon_\mathrm{r}^{\mathrm{comp}}$ & 15 & 15 & 15 & 15 & 15 & 14.96\\ \hline
& $\mu_\mathrm{r}^{\mathrm{comp}}$ & 2.39 & 2.39 & 2.39 & 2.39 & 2.39 & 1.71 \\ \hline
50 & \# nodes & 10958 & 11002 & 11106 & 11527 & 12433 & 14494 \\ \hline
& \# elements & 55296 & 55398 & 55908 & 58240 & 63540 & 73900 \\ \hline
& $\varepsilon_\mathrm{r}^{\mathrm{comp}}$ & 15 & 15 & 15 & 15 & 15 & 14.47 \\ \hline
& $\mu_\mathrm{r}^{\mathrm{comp}}$ & 2.24 & 2.24 & 2.24 & 2.24 & 2.24 & 1.63\\ \hline
60 & \# nodes & 10958 & 11002 & 11104 & 11560 & 12459 & 14888 \\ \hline
& \# elements & 55296 & 55398 & 55904 & 58402 & 63628 & 76068 \\ \hline
& $\varepsilon_\mathrm{r}^{\mathrm{comp}}$ & 8.46 & 8.46 & 8.46 & 8.46 & 8.46 & 8.44 \\ \hline
& $\mu_\mathrm{r}^{\mathrm{comp}}$ & 2.50 & 2.50 & 2.50 & 2.50 & 2.50 & 1.74 \\ \hline
\end{tabular}
}
\end{center}
\end{table}

We start running the adaptive algorithms with the initial guess $\varepsilon_r =1.0, \mu_r = 1.0$ at all points in $\Omega$. In our recent work \cite{BCN} it was shown that such a choice of the initial guess gives a good reconstruction of both functions $\varepsilon_r$ and $\mu_r$; see also \cite{BKS, BMaxwell} for a similar choice of the initial guess for other coefficient inverse problems (CIPs). Taking into account (\ref{2.51}), we choose the following sets of admissible parameters for $\varepsilon_r$ and $\mu_r$:
\begin{equation}\label{admpar}
\begin{split}
M_{\varepsilon} &= \{\varepsilon\in C(\overline{\Omega })~|~1\leq \varepsilon(x)\leq 15\},\\
M_{\mu} &= \{\mu\in C(\overline{\Omega })~|~1\leq \mu(x)\leq 3\}.
\end{split}
\end{equation}
In our simulations we choose two constant regularization parameters, $\gamma_1 =0.01$ and $\gamma_2=0.7$, in the Tikhonov functional (\ref{functional}). These parameters satisfy conditions (\ref{2.11}) and were chosen based on our computational experience: these choices of the regularization parameters were optimal, since they gave the smallest relative errors $e_{\varepsilon} = \frac{||\varepsilon - \varepsilon_h ||}{||\varepsilon_h ||}$ and $ e_{\mu} = \frac{||\mu - \mu_h||}{||\mu_h ||}$ in the reconstructions; see \cite{BCN} for details.
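For completeness, here is a minimal sketch of the relative errors $e_\varepsilon, e_\mu$ above, together with one standard way of enforcing the box constraints (\ref{admpar}) during the coefficient updates; the projection step is our assumption, since the precise mechanism by which the constraints are imposed is not prescribed here:

\begin{verbatim}
import numpy as np

def project_admissible(eps, mu):
    """Clip updated coefficients to the admissible sets:
    1 <= eps <= 15 and 1 <= mu <= 3 (an assumed projection step)."""
    return np.clip(eps, 1.0, 15.0), np.clip(mu, 1.0, 3.0)

def relative_errors(eps, eps_h, mu, mu_h):
    """Relative L2 errors e_eps, e_mu of the reconstructions."""
    e_eps = np.linalg.norm(eps - eps_h) / np.linalg.norm(eps_h)
    e_mu  = np.linalg.norm(mu - mu_h) / np.linalg.norm(mu_h)
    return e_eps, e_mu
\end{verbatim}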
An iteratively regularized adaptive finite element method for our \textbf{IP} with zero initial conditions $f_0=f_1=0$ in (\ref{E_gauge}) was recently presented in \cite{samar}. We are currently performing numerical experiments with the iteratively regularized adaptive finite element method for the case of the non-zero initial condition (\ref{initcond}) in (\ref{E_gauge}); this work will be described in a forthcoming paper. In the above-mentioned works, the iterative regularization is performed via the algorithms of \cite{BKS}. We also refer to \cite{Engl, IJT11} for different techniques for the choice of regularization parameters.

To obtain the reconstructions of Figures \ref{fig:Recos_omega45noise7_zero}~-~\ref{fig:Recos_omega50noise17_nonzero}, we use the image post-processing procedure described in \cite{BCN}. Tables 1--6 present the computed reconstructions of $\varepsilon_r$ and $\mu_r$ on different adaptively refined meshes after applying Adaptive Algorithm 1. Similar results are obtained for Adaptive Algorithm 2 and thus are not presented here.

\subsubsection{Test 1}

In this example we performed simulations with two additive noise levels in the data, $\sigma= 7 \%$ and $\sigma= 17\%$; see Tables 1--6 for the results. From these tables we observe that the best reconstruction results for both noise levels are obtained for $\omega=45$ in (\ref{f}). Below we describe the reconstructions obtained with $\omega = 45$ in (\ref{f}) and $\sigma=7\%$.

The reconstructions of $\varepsilon_r$ and $\mu_r$ on the initial coarse mesh are presented in Figure \ref{fig:Recos_omega45noise7_zero_coarse}. Table 1 shows that good values of the contrast are achieved for both functions already on the coarse mesh. However, Figures \ref{fig:yz_reconstruction_zero}-a), b) show that the locations of all inclusions in the $x_3$ direction should be improved.

The reconstructions of $\varepsilon_r$ and $\mu_r$ on the final adaptively refined mesh are presented in Figure \ref{fig:Recos_omega45noise7_zero}. We observe a significant improvement of the reconstructions of $\varepsilon_r$ and $\mu_r$ in the $x_3$ direction on the final adaptively refined mesh, compared with the reconstructions obtained on the coarse mesh; see Figure \ref{fig:yz_reconstruction_zero}. Figures \ref{fig:Recos_omega45noise7_zero_mesh}-a), c), e) show different projections of the final adaptively refined mesh which was used to compute the images of Figures \ref{fig:Recos_omega45noise7_zero}, \ref{fig:yz_reconstruction_zero}-c), d).

\subsubsection{Test 2}

In this test we again used two additive noise levels in the data, $\sigma= 7 \%$ and $\sigma= 17\%$, as well as the non-zero initial condition (\ref{initcond}) in (\ref{E_gauge}). The results of the computations are presented in Tables 1--6. From these tables we see that the best reconstruction results in this test for both noise levels are obtained for $\omega=50$ in (\ref{f}). We now describe the reconstructions obtained for $\omega = 50$ in (\ref{f}) and $\sigma= 17\%$.

The reconstructions of $\varepsilon_r$ and $\mu_r$ on the coarse mesh are shown in Figure \ref{fig:Recos_omega50noise17_nonzero_coarse}.
The reconstructions of $\varepsilon_r$ and $\mu_r$ on the final adaptively refined mesh are given in Figure \ref{fig:Recos_omega50noise17_nonzero}. We again observe a significant improvement of the reconstructions of $\varepsilon_r$ and $\mu_r$ in the $x_3$ direction on the final adaptively refined mesh, in comparison to the reconstructions obtained on the coarse mesh; see Figure \ref{fig:yz_reconstruction}. Figures \ref{fig:Recos_omega45noise7_zero_mesh}-b), d), f) show different projections of the final adaptively refined mesh which was used to compute the images of Figures \ref{fig:Recos_omega50noise17_nonzero}, \ref{fig:yz_reconstruction}-c), d).

\section{Conclusion}

This work is a continuation of our previous study \cite{BCN} and is focused on the solution of a coefficient inverse problem for the simultaneous reconstruction of the functions $\varepsilon$ and $\mu$ from time-dependent backscattered data in Maxwell's equations. To this end, we have used the optimization approach of \cite{BCN} applied on adaptively refined meshes. We derived a posteriori error estimates for the reconstructed coefficients $\varepsilon$ and $\mu$ and for the Tikhonov functional to be minimized. We then formulated two adaptive algorithms which allow the reconstruction of $\varepsilon$ and $\mu$ on locally adaptively refined meshes using these estimates.

Numerically we tested our algorithms with two different noise levels, $\sigma= 7 \%$ and $\sigma= 17\%$, on the frequency band $\omega \in [45, 60]$. The main conclusion of our previous study \cite{BCN} was that we could recover the large contrast of the dielectric function $\varepsilon_r$, which allows us to reconstruct metallic targets, and that the contrast of $\mu_r$ was within the limits of (\ref{admpar}). However, the size of $\mu_r$ in the $x_1, x_2$ directions and the locations of all inclusions in the $x_3$ direction needed to be improved. From Figures \ref{fig:Recos_omega45noise7_zero_coarse}, \ref{fig:Recos_omega50noise17_nonzero_coarse} and Tables 1--6 of this note, we conclude that on the coarse mesh we obtain results similar to those of \cite{BCN}. However, with mesh refinements the quality of the reconstruction is, as expected, improved considerably; see Figures \ref{fig:yz_reconstruction_zero}, \ref{fig:Recos_omega50noise17_nonzero}, \ref{fig:yz_reconstruction}. From these figures and Tables 1--6 we observe that all inclusions now have the correct locations in the $x_3$ direction, and that their contrasts and sizes in the $x_1, x_2$ directions are also improved and reconstructed with good accuracy.

We conclude that we have supported the tests of our previous works \cite{BJ, BMaxwell2, BTKB, BKK, KBB} and have shown that the adaptive finite element method is a powerful tool for the accurate reconstruction of heterogeneous targets, their locations, and their shapes. Our adaptive algorithms can also be applied in the case when edge elements are used for the numerical simulation of the solutions of the forward and adjoint problems; see \cite{CWZ99, CWZ14, FJZ10} for the finite element analysis in this case. This, as well as the development of an iteratively regularized adaptive finite element method, can be considered a challenge for future research.
\begin{figure}[tbp] \begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.33,clip = true, trim = 6cm 6cm 6cm 3cm, angle = 0]{2layer_isosurf_zero_fem-10.eps}} & {\includegraphics[scale=0.33, clip = true, trim = 6cm 6cm 6cm 3cm, angle = 0]{2layer_isosurf_zero_fem-12.eps}} \\ a) FEM solution at $t= 1.5$ & b) FEM solution at $t= 1.8$ \\ {\includegraphics[scale=0.28, clip = true, trim = 6cm 6cm 6cm 3cm, angle = -3]{nonzero_t0.eps}} & {\includegraphics[scale=0.28, clip = true, trim = 6cm 6cm 6cm 3cm, angle = -3]{1layer_isosurf_nonzero_common-06.eps}} \\ c) FEM/FDM solution at $t=0$ & d) FEM/FDM solution at $t=1.8$ \end{tabular} \end{center} \caption{Isosurfaces of the simulated FEM/FDM solution of the model problem at different times: a), b) Test 1; c), d) Test 2.} \label{fig:Isosurfaces} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cccc} {\includegraphics[angle=0,width=6.0cm, clip = true, trim = 0cm 0cm 0cm 0cm, angle = 00]{backE2f50n7_zero.eps}} &{\includegraphics[width=6.0cm, clip = true, trim = 0cm 0cm 0cm 0cm, angle = 00]{f60n17backE2_nonzero.eps}} \\ \hspace{-1cm} (a) Test 1. $\omega = 50$, $\sigma = 7\%$, $t = 3$ & (b) Test 2. $\omega = 60$, $\sigma =17\%$, $t = 3$ \\ {\includegraphics[width=6.0cm, clip = true, trim = 0cm 0cm 0cm 0cm, angle = 00]{f50n7E1E2E3.eps}} &{\includegraphics[width=6.0cm, clip = true, trim = 0cm 0cm 0cm 0cm, angle = 00]{f60n17E1E2E3.eps}} \\ \hspace{-1cm} (c) Test 1. $\omega = 50$, $\sigma =7\%$, $t = 3$ & (d) Test 2. $\omega = 60$, $\sigma =17\%$, $t = 3$ \\ \end{tabular} \end{center} \caption{ a), b) Backscattered data of the one component, $E_2(x,t)$, of the electric field $E(\mathbf{x},t)$. c), d) Computed components $E_2$ (below) and $E_1 \ \text{and}\ E_3$ (on top) of the backscattered electric field $E(\mathbf{x},t)$.} \label{fig:backscatdata} \end{figure} \begin{figure} \begin{center} \begin{tabular}{c c c} {\includegraphics[width = 3.5cm, clip = true, trim = 5.0cm 0.0cm 1.0cm 0.0cm, angle = -90.0]{zerof45n7_new_eps_010_15.ps}} & {\includegraphics[width = 3.5cm, clip = true, trim = 5.0cm 0.0cm 1.0cm 0.0cm, angle = -90.0]{zerof45n7_new_mu_010_2-58.ps}} & \\ (a) $\max\limits_{\Omega_{FEM} }\varepsilon_r \approx 15$ & (b) $\max\limits_{\Omega_{FEM} }\mu_r \approx 2.5$ \end{tabular} \caption{Test 1. Computed images of reconstructed functions $\varepsilon_r(\mathbf{x}) \ \text{and} \ \mu_r(\mathbf{x})$ on a coarse mesh for $\omega = 45$, $\sigma =7\%$.} \label{fig:Recos_omega45noise7_zero_coarse} \end{center} \end{figure} \begin{figure} \begin{center} \begin{minipage}{1\textwidth} \begin{tabular}{c c c c} {\includegraphics[width = 3.5cm, clip = true, trim = 5.0cm 0.0cm 1.0cm 0.0cm, angle = -90.0]{zerof45n7_new_eps_53_14-96.ps}} & {\includegraphics[width = 3.5cm, clip = true, trim = 5.0cm 0.0cm 1.0cm 0.0cm, angle = -90.0]{zerof45n7_new_mu_53_1-82.ps}} \\ (a) $\max\limits_{\Omega_{FEM} } \varepsilon_r \approx 14.9$ & (b) $\max\limits_{\Omega_{FEM} } \mu_r \approx 1.8$ \end{tabular} \caption{Test 1. Computed images of reconstructed functions $\varepsilon_r(\mathbf{x}) \ \text{and} \ \mu_r(\mathbf{x})$ on a 5 times adaptively refined mesh presented in Figure \ref{fig:Recos_omega45noise7_zero_mesh}. 
Computations are done for $\omega = 45$, $\sigma = 7\%$.}
\label{fig:Recos_omega45noise7_zero}
\end{minipage}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\begin{tabular}{c c c}
\includegraphics[width = 0.3\textwidth, clip = true, trim = 5.50cm 3.0cm 2.0cm 1.50cm, angle = -90]{zerof45n7_new_eps_010_yz.ps} &
\includegraphics[width =0.3\textwidth, clip = true, trim = 5.50cm 6.0cm 2.0cm 1.50cm, angle = -90]{zerof45n7_new_mu_010_yz.ps} &\\
\hspace{-1.5cm} (a) $\max\limits_{\Omega_{FEM} } \varepsilon_r \approx 15$ & \hspace{-1.5cm}(b) $\max\limits_{\Omega_{FEM} } \mu_r \approx 2.5$ \\
\includegraphics[width = 0.3\textwidth, clip = true, trim = 5.50cm 3.0cm 2.0cm 1.50cm, angle = -90]{zerof45n7_new_eps_53_yz.ps} &
\includegraphics[width =0.3\textwidth, clip = true, trim = 5.50cm 6.0cm 2.0cm 1.50cm, angle = -90]{zerof45n7_new_mu_53_yz.ps} &\\
\hspace{-1.5cm} (c) $\max\limits_{\Omega_{FEM} } \varepsilon_r \approx 14.9$ & \hspace{-1.5cm}(d) $\max\limits_{\Omega_{FEM} } \mu_r \approx 1.8$ \\
\end{tabular}
\caption{Test 1. Computed images of the reconstructed functions $\varepsilon_r(\mathbf{x})$ and $\mu_r(\mathbf{x})$ in the $x_2 x_3$ view: a), b) on a coarse mesh; c), d) on a 5 times adaptively refined mesh. Computations are done for $\omega = 45$, $\sigma =7\%$.}
\label{fig:yz_reconstruction_zero}
\end{center}
\end{figure}

\begin{figure}
\centering
\begin{tabular}{c c c c }
\includegraphics[width = 0.4\textwidth, clip = true, trim = 0cm 6cm 0cm 2cm, angle = 00]{f45n7_ref_5_xy_zoomed.eps} &
\includegraphics[width = 0.4\textwidth, clip = true, trim = 1cm 6.5cm 1.0cm 3.0cm, angle = 0]{f50n17_ref_5_xy_zoomed.eps} \\
(a) $x_1 x_2$-view & (b) $x_1 x_2$-view \\
\includegraphics[width = 0.4\textwidth, clip = true, trim = 0cm 6cm 0cm 2cm, angle = 0]{f45n7_ref_5_xz_zoomed.eps} &
\includegraphics[width = 0.4\textwidth, clip = true, trim = 1cm 6.5cm 1.0cm 3.0cm, angle = 0]{f50n17_ref_5_xz_zoomed.eps} \\
(c) $x_1 x_3$-view & (d) $x_1 x_3$-view \\
\includegraphics[width = 0.4\textwidth, clip = true, trim = 1cm 6.5cm 1.0cm 3.0cm, angle = 0]{f45n7_ref_5_yz_zoomed.eps} &
\includegraphics[width = 0.4\textwidth, clip = true, trim = 1cm 6.5cm 1.0cm 3.0cm, angle = 0]{f50n17_ref_5_yz_zoomed.eps} \\
(e) $x_2 x_3$-view & (f) $x_2 x_3$-view \\
\end{tabular}
\caption{Different projections of the 5 times adaptively refined meshes used to compute the images of Figure \ref{fig:Recos_omega45noise7_zero} (left) and Figure \ref{fig:Recos_omega50noise17_nonzero} (right), respectively.}
\label{fig:Recos_omega45noise7_zero_mesh}
\end{figure}

\begin{figure}
\begin{center}
\begin{tabular}{c c c}
{\includegraphics[width =2.1cm, clip = true, trim = 9.0cm 4.0cm 6.0cm 4.0cm, angle = -90.0]{Nonzerof50n17_new_eps_010_15.ps}} &
{\includegraphics[width = 2.1cm, clip = true, trim = 9.0cm 4.0cm 6.0cm 4.0cm, angle = -90.0]{Nonzerof50n17_new_mu_010_2-2408.ps}} & \\
(a) $\max\limits_{\Omega_{FEM} }\varepsilon_r \approx 15$ & (b) $\max\limits_{\Omega_{FEM} }\mu_r \approx 2.2$
\end{tabular}
\caption{Test 2.
Computed images of the reconstructed functions $\varepsilon_r(\mathbf{x})$ and $\mu_r(\mathbf{x})$ on a coarse mesh for $\omega = 50$, $\sigma =17\%$.}
\label{fig:Recos_omega50noise17_nonzero_coarse}
\end{center}
\end{figure}

\begin{figure}
\begin{tabular}{c c c c }
{\includegraphics[width =2.1cm, clip = true, trim = 9.0cm 4.0cm 6.0cm 4.0cm, angle = -90.0]{Nonzerof50n17_new_eps_53_14-4776.ps}} &
{\includegraphics[width =2.1cm, clip = true, trim = 9.0cm 4.0cm 6.0cm 4.0cm, angle = -90.0]{Nonzerof50n17_new_mu_53_1-6305.ps}} & \\
(a) $\max\limits_{\Omega_{FEM} }\varepsilon_r \approx 14.4$ & (b) $\max\limits_{\Omega_{FEM} } \mu_r \approx 1.6$
\end{tabular}
\caption{Test 2. Computed images of the reconstructed functions $\varepsilon_r(\mathbf{x})$ and $\mu_r(\mathbf{x})$ on a 5 times adaptively refined mesh. Computations are done with $\omega = 50$, $\sigma = 17\%$.}
\label{fig:Recos_omega50noise17_nonzero}
\end{figure}

\begin{figure}
\begin{center}
\begin{tabular}{c c c}
\includegraphics[width = 0.3\textwidth, clip = true, trim = 5.50cm 3.0cm 2.0cm 1.50cm, angle = -90]{nonzerof50n17_new_eps_010_yz.ps}&
\includegraphics[width = 0.3\textwidth, clip = true, trim = 5.50cm 6.0cm 2.0cm 1.50cm, angle = -90]{nonzerof50n17_new_mu_010_yz.ps} & \\
\hspace{-1.5cm} (a) $\max\limits_{\Omega_{FEM} } \varepsilon_r \approx 15$ & \hspace{-1.5cm}(b) $\max\limits_{\Omega_{FEM} } \mu_r \approx 2.2$ \\
\includegraphics[width = 0.3\textwidth, clip = true, trim = 5.50cm 3.0cm 2.0cm 1.50cm, angle = -90]{nonzerof50n17_new_eps_yz.ps}&
\includegraphics[width = 0.3\textwidth, clip = true, trim = 5.50cm 6.0cm 2.0cm 1.50cm, angle = -90]{nonzerof50n17_new_mu_yz.ps} & \\
\hspace{-1.5cm} (c) $\max\limits_{\Omega_{FEM} } \varepsilon_r \approx 14.4$ & \hspace{-1.5cm}(d) $\max\limits_{\Omega_{FEM} } \mu_r \approx 1.6$ \\
\end{tabular}
\caption{Test 2. Computed images of the reconstructed functions $\varepsilon_r(\mathbf{x})$ and $\mu_r(\mathbf{x})$ in the $x_2 x_3$ view: a), b) on a coarse mesh; c), d) on a 5 times adaptively refined mesh. Computations are done for $\omega = 50$, $\sigma =17\%$.}
\label{fig:yz_reconstruction}
\end{center}
\end{figure}

\newpage

\vspace{1cm}

\section*{Acknowledgments}
This research is supported by the Swedish Research Council (VR). The computations were performed on resources at Chalmers Centre for Computational Science and Engineering (C3SE) provided by the Swedish National Infrastructure for Computing (SNIC).
https://arxiv.org/abs/2011.05953
$(f,\Gamma)$-Divergences: Interpolating between $f$-Divergences and Integral Probability Metrics
We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance. We prove under which assumptions these divergences, hereafter referred to as $(f,\Gamma)$-divergences, provide a notion of `distance' between probability measures and show that they can be expressed as a two-stage mass-redistribution/mass-transport process. The $(f,\Gamma)$-divergences inherit features from IPMs, such as the ability to compare distributions which are not absolutely continuous, as well as from $f$-divergences, namely the strict concavity of their variational representations and the ability to control heavy-tailed distributions for particular choices of $f$. When combined, these features establish a divergence with improved properties for estimation, statistical learning, and uncertainty quantification applications. Using statistical learning as an example, we demonstrate their advantage in training generative adversarial networks (GANs) for heavy-tailed, not-absolutely continuous sample distributions. We also show improved performance and stability over gradient-penalized Wasserstein GAN in image generation.
\section{Introduction} Divergences and metrics provide a notion of `distance' between multivariate probability distributions, thus allowing for comparison of models with one another and with data. Divergences are used in many theoretical and practical problems in mathematics, engineering, and the natural sciences, ranging from statistical physics, large deviations theory, uncertainty quantification and statistics to information theory, communication theory, and machine learning. In this work, we introduce and study what we term the $(f,\Gamma)$-divergences, denoted by $D_f^\Gamma$ and defined by the variational expression \begin{align} D_f^\Gamma(Q\|P) \equiv& \sup_{ g\in\Gamma}\left\{E_Q[ g]-\Lambda_f^P[g]\right\}\,,\label{eq:Df_Gamma_intro}\\ \Lambda_f^P[g]\equiv&\inf_{\nu\in\mathbb{R}}\left\{\nu+E_P[f^*( g-\nu)]\right\}\,,\label{eq:Lambda_f_def_intro} \end{align} where $Q$ and $P$ are probability measures, $f$ is a convex function with $f(1)=0$, $f^*$ denotes the Legendre Transform (LT) of $f$, and $\Gamma\subset \mathcal M_b(\Omega)$ is an appropriate function space\footnote{$\mathcal M_b(\Omega)$ denotes the set of all measurable and bounded real-valued functions on $\Omega$.}. The resemblance to the variational representation of the $f$-divergence is evident (see \req{eq:Df_variational_bounded_intro} below), however, the additional optimization over shifts $\nu$ in \eqref{eq:Lambda_f_def_intro}, which is motivated by the Gibbs variational principle \cite{BenTal2007}, will enable the derivation of many theoretical properties of the $(f,\Gamma)$-divergence. In the special case of the Kullback-Leibler (KL) divergence, $\Lambda_f^P[g]$ is exactly the cumulant generating function that arises in the Donsker-Varadhan variational formula \cite{Dupuis_Ellis}. We will show that the $(f,\Gamma)$-divergences are related to, interpolate between, and inherit key properties from both the $f$-divergences and the integral probability metrics (IPMs). To motivate the definition in \eqref{eq:Df_Gamma_intro}, we first recall the definition and basic properties of $f$-divergences and IPMs. The family of $f$-divergences includes among others the KL divergence \cite{kullback1951}, the total variation distance, the $\chi^2$-divergence, the Hellinger distance, and the Jensen-Shannon divergence \cite{Ali1966,csiszar1967}. The $f$-divergence between two probability measures $Q$ and $P$ induced by a convex function $f$ satisfying $f(1)=0$ is defined by \begin{equation}\label{eq:Df_def_intro} D_f(Q\|P)\equiv E_P[f(dQ/dP)]\, . \end{equation} This definition assumes absolute continuity between $Q$ and $P$, $Q\ll P$, which in particular means that the support of $Q$ is included in the support of $P$. The estimation of an $f$-divergence directly from \eqref{eq:Df_def_intro} is challenging since it requires knowledge of the likelihood ratio (i.e., Radon-Nikodym derivative) $dQ/dP$, such as when working within a parametric family, or of a reasonable approximation to $dQ/dP$, usually through histogram binning, kernel density estimation \cite{Wang2005, KandasamyEtAl:2015vm}, or through $k$-nearest neighbor approximation \cite{Wang2006}. However, parametric methods greatly restrict the collection of allowed models, resulting in reduced expressivity, whereas non-parametric likelihood-ratio methods do not scale efficiently with the dimension of the data \cite{KrishnamurthyEtAl:2014rd}. 
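To illustrate the baseline plug-in approach just described, here is a minimal histogram-binning estimator of $D_f(Q\|P)$ from one-dimensional samples; the function is a generic sketch with illustrative names, and its poor scaling with the dimension of the data is precisely the limitation discussed above:

\begin{verbatim}
import numpy as np

def f_div_plugin(x_Q, x_P, f, bins=20):
    """Histogram plug-in estimate of D_f(Q||P) = E_P[f(dQ/dP)] from
    1-D samples x_Q ~ Q and x_P ~ P, approximating dQ/dP by bin ratios."""
    lo = min(x_Q.min(), x_P.min())
    hi = max(x_Q.max(), x_P.max())
    q, _ = np.histogram(x_Q, bins=bins, range=(lo, hi))
    p, _ = np.histogram(x_P, bins=bins, range=(lo, hi))
    q = q / q.sum()
    p = p / p.sum()
    mask = p > 0
    if np.any(q[~mask] > 0):
        return np.inf       # empirical loss of absolute continuity
    return np.sum(p[mask] * f(q[mask] / p[mask]))

# KL example: f(x) = x log x, with the convention 0 log 0 = 0.
f_KL = lambda r: r * np.log(np.maximum(r, 1e-300))
\end{verbatim}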
To address such challenges, statistical estimators which are based on variational representations of divergences have recently been introduced \cite{Nguyen_Full_2010,MINE_paper}. Variational representation formulas for divergences, often referred to as dual formulations, convert divergence estimation into, in principle, an infinite-dimensional optimization problem over a function space. A typical example of a variational representation is the LT representation of the $f$-divergence between $Q$ and $P$, given by \cite{Broniatowski,Nguyen_Full_2010} \begin{equation} D_f(Q\|P) = \sup_{ g\in \mathcal{M}_b(\Omega)} \big\{E_Q[ g]-E_P[f^*( g)]\big\}\,. \label{eq:Df_variational_bounded_intro} \end{equation} Such representations offer a useful mathematical tool to measure statistical similarity between data collections as well as to build, train, and compare complex probabilistic models. The main practical advantage of variational formulas is that an explicit form of the probability distributions or their likelihood ratio, $dQ/dP$, is not necessary. Only samples from both distributions are required since the difference of expected values in \eqref{eq:Df_variational_bounded_intro} can be approximated by statistical averages. In practice, the infinite-dimensional function space has to be approximated or even restricted. One of the first attempts was the restriction of the function space to a reproducing kernel Hilbert space (RKHS) and the corresponding kernel-based approximation in \cite{Nguyen_Full_2010}. More recently, the optimization \eqref{eq:Df_variational_bounded_intro} has been approximated using flexible regression models and particularly by neural networks \cite{MINE_paper} and these techniques are widely used in the training of generative adversarial networks (GANs) \cite{GAN,WGAN,f_GAN,wgan:gp}. Variational representations of divergences have also been used to quantify the model uncertainty in a probabilistic model (arising, e.g., from insufficient data and partial expert knowledge). For instance, applying the $f$-divergence formula \eqref{eq:Df_variational_bounded_intro} to $cg-\nu$, solving for $E_Q[g]$, and optimizing over $c>0$, $\nu\in\mathbb{R}$ leads to the uncertainty quantification (UQ) bound \cite{chowdhary_dupuis_2013,DKPP} \begin{align}\label{eq:UQ_Df} E_Q[g] \le \inf_{ c>0}\left\{\frac{1}{c}\Lambda_f^P[cg]+\frac{1}{c}D_f(Q\|P)\right\}\, . \end{align} Similarly, one can obtain a corresponding lower bound for any quantity of interest $g \in \mathcal{M}_b(\Omega)$. The UQ inequality \eqref{eq:UQ_Df} bounds the uncertainty in the expectation of $g$ under an alternative model $Q$ in terms of expectations under the baseline model $P$ and the discrepancy between $Q$ and $P$ (quantified via $D_f(Q\|P)$). Further discussion of the general connection between variational characterizations of divergences and UQ can be found in \cite{Glasserman2014,AtarChowdharyDupuis,Lam2016,Breuer2016,GKRW,Dupuis2019-AAP,Dupuis:Mao:2019,birrell2020optimizing}. Integral probability metrics are defined directly in terms of a variational formula \cite{Muller1997,sriperumbudur2009integral}, generalizing the Kantorovich–Rubinstein variational formula for the Wasserstein metric \cite{villani2008optimal}. More specifically, they are defined by maximizing the differences of respective expected values over a function space $\Gamma$, \begin{equation} W^{\Gamma}(Q, P) = \sup_{ g\in \Gamma} \big\{E_Q[ g]-E_P[ g]\big\}\,, \label{eq:IPM_intro} \end{equation} and we refer to this object as the $\Gamma$-IPM. 
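As a concrete complement to the discussion above, the objective \eqref{eq:Df_Gamma_intro}--\eqref{eq:Lambda_f_def_intro} can be estimated from samples using only statistical averages; the following is a minimal sketch for a fixed test function $g$ (in practice $g$ is parametrized, e.g., by a neural network as in \cite{MINE_paper}, and the quantity below is maximized over its parameters; names are illustrative):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def Lambda_f(g_P, f_star):
    """Sample estimate of Lambda_f^P[g] = inf_nu { nu + E_P[f*(g - nu)] },
    where g_P holds evaluations of g on samples from P; the minimization
    over the shift nu is a one-dimensional convex problem."""
    objective = lambda nu: nu + np.mean(f_star(g_P - nu))
    return minimize_scalar(objective).fun

def fGamma_objective(g_Q, g_P, f_star):
    """The quantity maximized over g in the (f,Gamma)-divergence:
    E_Q[g] - Lambda_f^P[g]."""
    return np.mean(g_Q) - Lambda_f(g_P, f_star)

# KL example: f(x) = x log x has Legendre transform f*(y) = exp(y - 1),
# for which Lambda_f^P[g] reduces to log E_P[exp(g)].
f_star_KL = lambda y: np.exp(y - 1.0)
\end{verbatim}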
Despite the name, IPMs are not necessarily metrics in the mathematical sense unless further assumptions on $\Gamma$ are made. This will not be an issue for us going forward, as we are not focused on the metric property; we will be concerned with the divergence property, as defined in Section \ref{sec:notation} below. Examples of IPMs include: the total variation metric, which is derived when the function space $\Gamma$ is the unit ball in the space of bounded measurable functions; the Wasserstein metric, where $\Gamma$ is the space of Lipschitz continuous functions with Lipschitz constant less than or equal to one; the Dudley metric, where the function space $\Gamma$ is the unit ball in the space of bounded and Lipschitz continuous functions; and the maximum mean discrepancy (MMD), where $\Gamma$ is the unit ball in an RKHS (a closed-form sample estimator for the MMD is sketched after the list below); see also \cite{Muller1997,sriperumbudur2009integral,sriperumbudur2012}. The definition of an IPM through the variational formula \eqref{eq:IPM_intro} leads to straightforward and unbiased statistical estimation algorithms \cite{sriperumbudur2012}. Furthermore, the Wasserstein metric applied to generative adversarial networks (GANs) is known to substantially improve the stability of the training process \cite{WGAN,wgan:gp}, while MMD offers one of the most reliable two-sample tests for high-dimensional statistical distributions \cite{gretton2012}.

In summary, there are two fundamental mathematical ingredients involved in variational formulas for $f$-divergences and IPMs, with both families having their own strengths and weaknesses.
\begin{enumerate}[a)]
\item \emph{The Objective Functional}: The objective functional in a variational representation is the quantity being maximized, namely $E_Q[ g]-E_P[f^*( g)]$ for the $f$-divergences and $E_Q[ g]-E_P[ g]$ for the IPMs. The former depends on $f$, and for appropriate $f$'s it is strictly concave in $ g$, while the latter is the same for all IPMs and is linear in $ g$. Stronger convexity/concavity properties can result in improved statistical learning, estimation, and convergence performance. The ability to vary the objective functional by choosing $f$ also allows one to tailor the divergence to the data source, e.g., for heavy-tailed data. Finally, note that alternative objective functionals can yield the same divergence \cite{BenTal2007,Ruderman,MINE_paper,birrell2020optimizing}, and their careful choice can have a substantial impact on their statistical estimation \cite{MINE_paper,Ruderman, birrell2020optimizing}.
\item \emph{The Function Space}: This is the space over which the objective functional is optimized. In \eqref{eq:Df_variational_bounded_intro}, it is the same function space for all $f$-divergences, namely $\mathcal{M}_b(\Omega)$, while the choice of function space $\Gamma$ is what defines an IPM in \eqref{eq:IPM_intro}. The choice of $\Gamma$ has a profound impact on the properties of a divergence, e.g., the ability to meaningfully compare not-absolutely continuous distributions.
\end{enumerate}
As we will show, the properties of the $(f,\Gamma)$-divergences can be tailored to the requirements of a particular problem through the choice of the objective functional (via $f$) and the function space $\Gamma$.
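Among the IPM examples above, the MMD is the one whose supremum over the RKHS unit ball can be evaluated in closed form from samples \cite{gretton2012}; a minimal sketch with a Gaussian kernel (the biased V-statistic form) is:

\begin{verbatim}
import numpy as np

def mmd2(X, Y, bw=1.0):
    """Squared MMD between samples X ~ Q and Y ~ P (rows = samples),
    Gaussian kernel: E k(x,x') + E k(y,y') - 2 E k(x,y)."""
    def gram(A, B):
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * bw**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
\end{verbatim}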
The need for such a flexible family of divergences that combines the strengths of both $f$-divergences and IPMs is motivated by problems in machine learning and UQ, where properties of the data source or baseline model dictate the requirements on $f$ and $\Gamma$; e.g., the $f$-divergence UQ bound \eqref{eq:UQ_Df} is unable to treat structurally different alternative models $Q$, which can easily be mutually singular with $P$, as $D_f(Q\|P)=\infty$ under a loss of absolute continuity; similar issues appear in GANs \cite{WGAN}. Related approaches include the recent studies \cite{arXiv:1809.04542,NIPS2018_7771,miyato2018spectral,Bridging_fGan_WGan,NEURIPS2019_eae27d77,Dupuis:Mao:2019,2021arXiv210608929G}. In \cite{miyato2018spectral} the authors studied the use of spectral normalization to impose a Lipschitz constraint on the discriminator of a GAN; this is an example of \eqref{eq:Df_Gamma_intro} with a particular choice of function space. In \cite{Bridging_fGan_WGan}, the authors proposed a class of objective functionals with an additional optimization layer, aiming to bridge the gap between the variational formulas for $f$-divergences and Wasserstein metrics, and applied it to adversarial training of generative models. However, that paper does not provide a rigorous connection to the Wasserstein metric, since the function space appearing in their main Theorem 1 cannot include a Lipschitz constraint. This is in contrast to their practical implementation in Algorithm 1, which does employ a Lipschitz constraint. Our approach bridges this gap between theory and practice, as we are able to explicitly handle Lipschitz function spaces. Finally, our approach does not require the introduction of a third neural network, no matter what the choice of $f$-divergence may be. On the other hand, the authors in \cite{Dupuis:Mao:2019} developed a variational formula for general function spaces in the case of the KL divergence, providing a systematic and rigorous interpolation between the KL divergence and IPMs.

Definition \eqref{eq:Df_Gamma_intro} can also be viewed as a regularization of the classical $f$-divergences, and related objects have also been introduced and studied in \cite{arXiv:1809.04542,NEURIPS2019_eae27d77,NIPS2018_7771,2021arXiv210608929G}. While there is some overlap with several prior works, the aim of this paper is to provide a systematic and rigorous development of the $(f,\Gamma)$-divergences, focusing on a number of new properties that are potentially beneficial in learning and UQ applications. Specifically:
\begin{enumerate}
\item We derive conditions under which $D_f^\Gamma$ has the divergence property and thus provides a well-defined notion of `distance' (Part 4 of Theorem \ref{thm:general_ub} and Part 4 of Theorem \ref{thm:f_div_inf_convolution}). One key novelty is the introduction of the object \eqref{eq:Lambda_f_def_intro}, which is critical in the proof of this property.
\item We show that $D_f^\Gamma$ interpolates between the $f$-divergence and the $\Gamma$-IPM in the sense of infimal convolutions, including the existence of an optimizer (Parts 1 and 2 of Theorem \ref{thm:f_div_inf_convolution}). Again, \eqref{eq:Lambda_f_def_intro} plays a critical role here.
\item Using the infimal convolution formula, we derive a mass-redistribution/mass-transport interpretation of the $(f,\Gamma)$-divergences (Section \ref{sec:mass_transport}).
\item We show that the family of $(f,\Gamma)$-divergences includes the $f$-divergences and $\Gamma$-IPMs in suitable asymptotic limits (Theorem \ref{thm:limit}).
\item The relaxation of the hard constraint $ g\in\Gamma$ in \eqref{eq:Df_Gamma_intro} to a soft-constraint penalty term is presented in Theorem \ref{thm:soft_constraint}. This is a generalization of the gradient penalty method for Wasserstein metrics \cite{wgan:gp} to a much larger class of objective functionals and penalties, and a key tool in designing numerically efficient implementations while still preserving the divergence property.
\item Relaxation of the condition $\Gamma\subset \mathcal{M}_b(\Omega)$ in \eqref{eq:Df_Gamma_intro}, i.e., allowing $\Gamma$ to contain appropriate unbounded functions, is addressed in Theorem \ref{thm:unbounded_Lip}. This is a necessary point when employing neural network estimation with unbounded activation functions.
\item We show that the $(f,\Gamma)$-divergences inherit several properties from both $f$-divergences and IPMs. The primary advantage inherited from IPMs is the ability to compare distributions which are not absolutely continuous. The primary advantages inherited from the $f$-divergences are the strict concavity of the objective functional with respect to the test function $g$ and the ability to compare heavy-tailed distributions (Section \ref{sec:examples}).
\end{enumerate}
When combined, these advantages establish a divergence with better convergence and estimation properties. We numerically demonstrate these merits in the training of GANs. In Section \ref{sec:submanifold_ex}, we show that the proposed divergence is capable of adversarial learning of lower-dimensional sub-manifold distributions with heavy tails. In this example, both $f$-GAN \cite{f_GAN} and Wasserstein GAN with gradient penalty (WGAN-GP) \cite{wgan:gp} fail to converge or perform very poorly. Furthermore, in Section \ref{ex:C10} we present improvements over WGAN-GP and WGAN with spectral normalization (WGAN-SN) \cite{miyato2018spectral}, as measured by the inception score \cite{salimans2016improved} and the FID score \cite{10.5555/3295222.3295408} (two standard performance measures), on real datasets, in particular CIFAR-10 \cite{krizhevsky2009learning} image generation. Interestingly, the training stability is significantly enhanced when using the proposed $(f,\Gamma)$-divergence, as compared to WGAN: increasing the learning rate (i.e., the stochastic gradient descent step size) eventually results in the collapse of WGAN but has comparatively little impact on our newly proposed method. We conjecture that this is due to the strict concavity of the objective functional of the $(f,\Gamma)$-divergence. We refer to these newly proposed GANs based on $(f,\Gamma)$-divergences as $(f,\Gamma)$-GANs.

The organization of the paper is as follows. The key properties of the $(f,\Gamma)$-divergences are presented in Section \ref{sec:gen_f_div}. The mass-redistribution/mass-transport interpretation of the $(f,\Gamma)$-divergences is discussed in Section \ref{sec:mass_transport}. Section \ref{sec:soft_constraint} develops a general theory of soft-constraint penalization. Section \ref{sec:unbounded_extension} provides conditions under which the function space $\Gamma$ can be expanded to contain unbounded functions. The application of the $(f,\Gamma)$-divergences in adversarial generative modelling is presented in Section \ref{sec:examples}.
We conclude the paper and discuss plans for future work in Section \ref{sec:concl}. Finally, detailed proofs can be found in the appendices. \section{Construction and Properties of the $(f,\Gamma)$-Divergences} \label{sec:gen_f_div} In this section, we will derive the divergence property for the $(f,\Gamma)$-divergences and show that they interpolate between $f$-divergences and IPMs as it is described in our main result (Theorem \ref{thm:f_div_inf_convolution}). First we introduce our notation and recall some important properties of the $f$-divergences. \subsection{Notation}\label{sec:notation} For the remainder of the paper $(\Omega,\mathcal{M})$ will denote a measurable space, $\mathcal{M}(\Omega)$ will be the set of all measurable real-valued functions on $\Omega$, $\mathcal{M}_b(\Omega)$ will denote the subspace of bounded measurable functions, $\mathcal{P}(\Omega)$ will denote the space of probability measures on $(\Omega,\mathcal{M})$, and $M(\Omega)$ will be the set of finite signed measures on $(\Omega,\mathcal{M})$. A subset $\Psi\subset \mathcal{M}_b(\Omega)$ will be called {\bf $\mathcal{P}(\Omega)$-determining} if for all $Q,P\in\mathcal{P}(\Omega)$, $\int \psi dQ=\int \psi dP$ for all $\psi\in \Psi$ implies $Q=P$. The integral (expectation) of $ g$ with respect to $P\in\mathcal{P}(\Omega)$ will also be written as $E_P[ g]$. We say that a map $D:\mathcal{P}(\Omega)\times\mathcal{P}(\Omega)\to[0,\infty]$ has the {\bf divergence property} if $D(Q,P)=0$ if and only if $Q=P$; such maps provide a notion of `distance' between probability measures. \begin{remark} We emphasize that despite the standard (but potentially confusing) terminology, not all $f$-divergences have the divergence property; see Section \ref{sec:f_div_background} below for further information. Going forward, we will continue to distinguish between what we call a divergence and the divergence property. \end{remark} $(S,d)$ will denote a complete separable metric space (i.e., a Polish space), $C(S)$ will denote the space of continuous real-valued functions on $S$, and $C_b(S)$ will be the subspace of bounded continuous functions. $\text{Lip}(S)$ will denote the space of Lipschitz functions on $S$, $\text{Lip}_b(S)$ the subspace of bounded Lipschitz functions, and for $L> 0$ we let $\text{Lip}_b^L(S)$ denote the subspace consisting of bounded $L$-Lipschitz functions (i.e., functions having Lipschitz constant $L$). $\mathcal{P}(S)$ will denote the space of Borel probability measures on $S$ equipped with the Prohorov metric, thus making $\mathcal{P}(S)$ a Polish space. Recall that the Prohorov metric topology on $\mathcal{P}(S)$ is the same as the weak topology induced by the set of functions $\pi_ g:P\mapsto E_P[ g]$, $ g\in C_b(S)$. For $\mu\in M(S)$ (finite signed Borel measures on $S$) we define $\tau_\mu:C_b(S)\to\mathbb{R}$ by $\tau_\mu(g)=\int gd\mu$ and we let $\mathcal{T}=\{\tau_\mu:\mu\in M(S)\}$. $\mathcal{T}$ is a separating vector space of linear functionals on $C_b(S)$. We equip $C_b(S)$ with the weak topology from $\mathcal{T}$ (i.e., the weakest topology on $C_b(S)$ for which every $\tau\in \mathcal{T}$ is continuous), which makes $C_b(S)$ a locally convex topological vector space with dual space $C_b(S)^*=\mathcal{T}$ \cite[Theorem 3.10]{rudin2006functional}. We will let $\overline{\mathbb{R}}\equiv\mathbb{R}\cup\{-\infty,\infty\}$ denote the extended reals. Given a function $h:\mathbb{R}\to\overline{\mathbb{R}}$, its Legendre transform is defined by $h^*(y)\equiv \sup_{x\in\mathbb{R}}\{yx-h(x)\}$. 
Recall that if $h:\mathbb{R}\to(-\infty,\infty]$ is convex and lower semicontinuous (LSC) then $(h^*)^*=h$ \cite[Theorem 2.3.5]{bot2009duality}. Also recall that if $h$ is convex and finite on $(a,b)$ then the left and right derivatives, which we denote by $h^\prime_-(x)$ and $h^\prime_+(x)$ respectively, exist for all $x\in(a,b)$ \cite[Chapter 1]{roberts1974convex}. We will denote the closure of a set $A$ by $\overline{A}$ and its interior by $A^o$. { Finally, we include in Table \ref{table:notation} a list of important notations, some of which are defined elsewhere in the manuscript, with corresponding references.} \begin{table} \centering { \caption{List of main symbols used throughout the manuscript.} \begin{tabular}{||c c c ||} \hline Notation & Description & Reference \\ \hline \hline $(\Omega,\mathcal{M})$ & Measurable space & Section \ref{sec:notation}\\ \hline $(S,d)$ & Metric space & Section \ref{sec:notation}\\ \hline $M(\Omega)$ \& $M(S)$ & Spaces of finite signed measures & Section \ref{sec:notation} \\ \hline $\mathcal P(\Omega)$ \& $\mathcal P(S)$ & Spaces of probability measures & Section \ref{sec:notation} \\ \hline $\mathcal M(\Omega)$ \& $\mathcal M_b(\Omega)$ & Spaces of measurable real-valued functions & Section \ref{sec:notation} \\ \hline $C(S)$ \& $C_b(S)$ & Spaces of continuous real-valued functions & Section \ref{sec:notation} \\ \hline $\text{Lip}(S)$ \& $\text{Lip}_b(S)$ & Spaces of Lipschitz continuous functions & Section \ref{sec:notation} \\ \hline $P$, $Q$ & Probability distributions/measures & Section \ref{sec:notation} \\ \hline $f$ & Convex function on $\mathbb{R}$ & Definition \ref{def:F_1}\\ \hline $\mathcal F_1(a,b)$ & Set of convex functions & Definition \ref{def:F_1}\\ \hline $D_f$ & $f$-Divergence & \req{eq:Df_def} \\ \hline $\Lambda_f^P$ & Generalization of the cumulant generating function & \req{eq:Lambda_f_def}\\ \hline $\Gamma$ & Test function space & Definition \ref{def:f_Gamma_Div} \\ \hline $D_f^\Gamma$ & $(f,\Gamma)$-Divergence & \req{eq:gen_f_def} \\ \hline $W^\Gamma$ & $\Gamma$-Integral probability metric & \req{eq:gen_wasserstein} \\ \hline $W^\rho$ & Gradient-penalty Wasserstein divergence & \req{W_rho} \\ \hline $D^L_\alpha$ & Lipschitz $\alpha$-divergence & \req{eq:f_alpha_nu} - \eqref{eq:Df_alpha_L2} \\ \hline \end{tabular} \label{table:notation} } \end{table} \subsection{Background on $f$-Divergences}\label{sec:f_div_background} The $f$-divergences are constructed using functions of the following form: \begin{definition}\label{def:F_1} For $a,b$ with $-\infty\leq a<1<b\leq\infty$ we define $\mathcal{F}_1(a,b)$ to be the set of convex functions $f:(a,b)\to\mathbb{R}$ with $f(1)=0$. For $f\in\mathcal{F}_1(a,b)$, if $b$ is finite we extend the definition of $f$ by $f(b)\equiv\lim_{x\nearrow b}f(x)$. Similarly, if $a$ is finite we define $f(a)\equiv\lim_{x\searrow a}f(x)$ (convexity implies these limits exist in $(-\infty,\infty]$). Finally, extend $f$ to $x\not\in [a,b]$ by $f(x)=\infty$. The resulting function $f:\mathbb{R}\to(-\infty,\infty]$ is convex and LSC. \end{definition} The $f$-divergences are then defined as follows: \begin{definition}\label{def:f_div} For $f\in\mathcal{F}_1(a,b)$ and $Q,P\in\mathcal{P}(\Omega)$ the corresponding $f$-divergence is defined by \begin{align}\label{eq:Df_def} D_f(Q\|P)\equiv \begin{cases} E_P[f(dQ/dP)], & Q\ll P\\ \infty, &Q\not\ll P\,. \end{cases} \end{align} \end{definition} A number of important properties of $f$-divergences are collected in Appendix \ref{app:f_div}. 
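To make Definition \ref{def:f_div} concrete, the following minimal Python sketch (included here purely for illustration; NumPy and SciPy are assumed) evaluates \eqref{eq:Df_def} for distributions on a finite set, including the convention that $D_f(Q\|P)=\infty$ whenever $Q\not\ll P$:
\begin{verbatim}
import numpy as np
from scipy.special import xlogy  # xlogy(x, x) = x*log(x), with xlogy(0, 0) = 0

def f_divergence_discrete(q, p, f):
    # D_f(Q||P) = E_P[f(dQ/dP)] for Q, P on a common finite set
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    if np.any((p == 0) & (q > 0)):   # Q is not absolutely continuous w.r.t. P
        return np.inf
    mask = p > 0
    return np.sum(p[mask] * f(q[mask] / p[mask]))

f_KL = lambda x: xlogy(x, x)  # the KL choice f(x) = x log(x)
print(f_divergence_discrete([0.25, 0.75], [0.5, 0.5], f_KL))            # finite
print(f_divergence_discrete([0.5, 0.25, 0.25], [0.5, 0.5, 0.0], f_KL))  # inf
\end{verbatim}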
An $f$-divergence defines a notion of `distance' between probability measures, as is made precise by the following divergence property: $D_f(Q\|P)\geq 0$ for all $f\in\mathcal{F}_1(a,b)$, and if $f$ is furthermore strictly convex at $1$ (i.e., $f$ is not affine on any neighborhood of $1$) then $D_f(Q\|P)=0$ if and only if $Q=P$. However, the $f$-divergences are generally not probability metrics. Our primary examples will be the KL divergence and the family of $\alpha$-divergences, which are constructed from the following functions:
\begin{align}\label{eq:f_alpha_def}
f_{KL}(x)\equiv x\log(x)\in\mathcal{F}_1(0,\infty)\,,\,\,\,\,\, f_\alpha(x)=\frac{x^\alpha-1}{\alpha(\alpha-1)}\in\mathcal{F}_1(0,\infty)\,,\text{ where $\alpha>0$, $\alpha\neq 1$}\,.
\end{align}
See \cite[Table 1]{f_GAN} for further examples. Key to our work is a pair of variational formulas that relate the $f$-divergence to the functional
\begin{align}\label{eq:Lambda_f_def}
\Lambda_f^P[g]\equiv\inf_{\nu\in\mathbb{R}}\{\nu+E_P[f^*( g-\nu)]\}\,,\,\,\,\,\,g\in\mathcal{M}_b(\Omega)\,.
\end{align}
As we will see, $\Lambda_f^P$ takes the place of the cumulant generating function when one generalizes from the KL divergence to $f$-divergences. The first of the following formulas expresses $D_f$ as an infinite-dimensional convex conjugate of $\Lambda_f^P$ and the second is the dual variational formula:
\begin{enumerate}
\item Let $f\in\mathcal{F}_1(a,b)$ and $Q,P\in\mathcal{P}(\Omega)$. Then,
\begin{align}
D_f(Q\|P)=&\sup_{ g\in \mathcal{M}_b(\Omega)}\{ E_Q[ g]-E_P[f^*( g)]\}\label{eq:Df_var_formula}\\
=&\sup_{ g\in \mathcal{M}_b(\Omega)}\{ E_Q[ g]-\Lambda_f^P[g]\}\, ,\label{eq:Df_var_formula_nu}
\end{align}
{ where the second equality follows from \eqref{eq:Lambda_f_def} and \eqref{eq:Df_var_formula} due to the invariance of $\mathcal{M}_b(\Omega)$ under the shift map $ g\mapsto g-\nu$ for $\nu\in\mathbb{R}$; see also Proposition \ref{prop:Df_var_formula}.}
\item Let $f\in\mathcal{F}_1(a,b)$ with $a\geq 0$, $P\in\mathcal{P}(\Omega)$, and $ g\in \mathcal{M}_b(\Omega)$. Then we can rewrite $\Lambda_f^P[g]$ as
\begin{align}\label{eq:Df_Gibbs_var_formula}
\Lambda_f^P[g]=\sup_{Q\in\mathcal{P}(\Omega): D_f(Q\|P)<\infty}\{E_Q[ g]-D_f(Q\|P)\}\,.
\end{align}
\end{enumerate}
\begin{remark}
$f$-divergences can alternatively be defined in terms of the densities of $Q$ and $P$ with respect to some common dominating measure \cite{LieseVajda}. This definition agrees with \req{eq:Df_def} when $Q\ll P$, but in some cases the definition in \cite{LieseVajda} leads to a finite value even when $Q\not\ll P$. In this paper, we use the definition \eqref{eq:Df_def} because it satisfies the variational formula \eqref{eq:Df_var_formula} even when $Q\not\ll P$ (see the proof of Proposition \ref{prop:Df_var_formula}), as well as the dual formula \eqref{eq:Df_Gibbs_var_formula}.
\end{remark}
When $f=f_{KL}$ it is straightforward to show that $\Lambda_f^P$ becomes the cumulant generating function,
\begin{align}\label{eq:Lambda_KL}
\Lambda_{f_{KL}}^P[g]=\log E_P[e^ g]\,,
\end{align}
and \req{eq:Df_var_formula_nu} becomes the Donsker-Varadhan variational formula \cite[Appendix C.2]{Dupuis_Ellis}. Subsequently, \req{eq:Df_Gibbs_var_formula} becomes the Gibbs variational formula \cite[Proposition 1.4.2]{Dupuis_Ellis}. For this reason, we will call \eqref{eq:Df_Gibbs_var_formula} the Gibbs variational formula for $f$-divergences.
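For the reader's convenience, here is the short computation behind \eqref{eq:Lambda_KL}. The Legendre transform of $f_{KL}(x)=x\log(x)$ is $f_{KL}^*(y)=e^{y-1}$ (the supremum defining $f_{KL}^*$ is attained at $x=e^{y-1}$), and hence
\begin{align*}
\Lambda_{f_{KL}}^P[g]=\inf_{\nu\in\mathbb{R}}\left\{\nu+e^{-\nu-1}E_P[e^{g}]\right\}=\log E_P[e^{g}]\,,
\end{align*}
where the infimum is attained at $\nu_*=\log E_P[e^{g}]-1$, at which point the two terms contribute $\log E_P[e^{g}]-1$ and $1$, respectively.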
Versions of \req{eq:Df_var_formula} were proven in \cite{Broniatowski,Nguyen_Full_2010}; we provide an elementary proof in Proposition \ref{prop:Df_var_formula} of Appendix \ref{app:f_div} for completeness. \req{eq:Df_var_formula_nu} is implicitly found in \cite[Theorem 1]{Ruderman}; see \cite{birrell2020optimizing} for further discussion of this relationship. More specifically, \cite{Ruderman,birrell2020optimizing} show that when $a\geq 0$ the representation in \eqref{eq:Df_var_formula} arises from convex duality over the space of finite positive measures, while \eqref{eq:Df_var_formula_nu} arises from convex duality over the space of probability measures. On a metric space $S$, the optimizations in \eqref{eq:Df_var_formula} - \eqref{eq:Df_var_formula_nu} can be restricted to $C_b(S)$ via the application of Lusin's Theorem (see Corollary \ref{cor:Df_LSC}). The dual formula \eqref{eq:Df_Gibbs_var_formula} was proven in \cite{BenTal2007} and is also implicitly contained in \cite[Eq. (5)]{Ruderman} (we will require a generalization that also covers the case $a<0$; see Proposition \ref{prop:Gibbs_M1}). Under appropriate assumptions \cite[Theorem 4.4]{Broniatowski}, the optimizer of \eqref{eq:Df_var_formula} is given by
\begin{align}\label{eq:Df_optimizer}
g_*=f^\prime(dQ/dP)\,.
\end{align}
The definition in \eqref{eq:Df_def} does not depend on the value of $f(x)$ for $x<0$, and it is invariant under the transformation $f\mapsto f_c$ where $f_c(x)=f(x)+c(x-1)$, $c\in\mathbb{R}$. However, due to the presence of $f^*$, the objective functionals in the variational formulas \eqref{eq:Df_var_formula} and \eqref{eq:Df_var_formula_nu} can depend on these choices; in particular, they both depend on the definition of $f(x)$ for $x<0$. The identity $f_c^*(y)=f^*(y-c)+c$ implies that the objective functional in \eqref{eq:Df_var_formula} depends on the choice of $c$ but the objective functional in \req{eq:Df_var_formula_nu} does not. Substituting $f_c$ into \req{eq:Df_var_formula} and then taking the supremum over $c\in\mathbb{R}$ is another way to derive \req{eq:Df_var_formula_nu}, thus providing additional motivation for the introduction of $\Lambda_f^P$.
\subsection{{ Definition and General Properties of the $(f,\Gamma)$-Divergences}}\label{sec:gen_f_div_thm}
Motivated by \req{eq:Df_var_formula_nu} - \eqref{eq:Df_Gibbs_var_formula}, by working with subsets of test functions $\Gamma\subset \mathcal{M}_b(\Omega)$ we can construct a new family of so-called $(f,\Gamma)$-divergences whose convex conjugates at $g\in\Gamma$ equal $\Lambda_f^P[g]$ and that have variational characterizations akin to \req{eq:Df_var_formula_nu}. This is an extension of the ideas in \cite{Dupuis:Mao:2019}, which studied generalizations of the KL divergence. \emph{The identification of $\Lambda_f^P$ as the proper replacement for the cumulant generating function is the key new insight required to extend from the KL case to general $f$}. Specifically, we make the following definition:
\begin{definition}\label{def:f_Gamma_Div}
Let $f\in\mathcal{F}_1(a,b)$ and $\Gamma\subset \mathcal{M}_b(\Omega)$ be nonempty. For $Q,P\in\mathcal{P}(\Omega)$ we define the $(f,\Gamma)$-divergence by
\begin{align}\label{eq:gen_f_def}
D_f^\Gamma(Q\|P)\equiv\sup_{ g\in\Gamma}\left\{E_Q[ g]-\Lambda_f^P[g]\right\}\,,
\end{align}
where $\Lambda_f^P$ was defined in \req{eq:Lambda_f_def}, and we define the $\Gamma$-IPM by
\begin{align}\label{eq:gen_wasserstein}
W^\Gamma(Q,P)\equiv\sup_{ g\in\Gamma}\left\{E_Q[ g ]-E_P[ g]\right\}\,.
\end{align}
\end{definition}
When we want to emphasize the distinction between $D_f(Q\|P)$ and $D_f^\Gamma(Q\|P)$ we will refer to the former as a classical $f$-divergence. When $f$ corresponds to the KL divergence (see \req{eq:f_alpha_def}) we write $R(Q\|P)$ and $R^\Gamma(Q\|P)$ in place of $D_f(Q\|P)$ and $D_f^\Gamma(Q\|P)$, respectively. The definition \eqref{eq:gen_f_def} is an infinite-dimensional convex conjugate, akin to \req{eq:Df_var_formula_nu}. From \eqref{eq:Df_var_formula_nu}, we see that $D_f=D_f^\Gamma$ when $\Gamma=\mathcal{M}_b(\Omega)$ or, on a metric space $S$ (and for appropriate $f$'s), when $\Gamma=C_b(S)$ (see Corollary \ref{cor:Df_LSC} and Remark \ref{remark:Df_nu_shift}). The $W^\Gamma$'s are generalizations of the classical Wasserstein metric on a metric space, which is obtained by setting $\Gamma=\text{Lip}_b^1(S)$. Neither $W^\Gamma$ nor $D_f^\Gamma$ necessarily has the divergence property; however, { our main results present conditions which do imply the divergence property. As we will see, the use of $\Lambda_f^P$ in \eqref{eq:gen_f_def} is crucial in our proof of the divergence property (see Theorem \ref{thm:general_ub}), as well as in our derivation of the infimal convolution formula (see Theorem \ref{thm:f_div_inf_convolution}). }
One can alternatively write the $(f,\Gamma)$-divergence as
\begin{align}\label{eq:Df_Gamma_def2}
D_f^\Gamma(Q\|P)=\sup_{ g\in\Gamma,\nu\in\mathbb{R}}\left\{E_Q[ g-\nu]-E_P[f^*( g-\nu)]\right\}\,.
\end{align}
This formulation is useful when computing a numerical approximation to $D_f^\Gamma(Q\|P)$. { It shows that $\Lambda_f^P$ in \eqref{eq:gen_f_def} does not need to be computed separately; one can formulate the computation as a single optimization problem, incorporating one additional $1$-dimensional parameter.} In addition, if $\Gamma$ is closed under the shift transformations $ g\mapsto g-\nu$, $\nu\in\mathbb{R}$, then one can write
\begin{align}\label{eq:Df_Gamma_no_shift}
D_f^\Gamma(Q\|P)=\sup_{ g\in\Gamma}\left\{E_Q[ g]-E_P[f^*( g)]\right\}\,,
\end{align}
thus arriving at the objects defined in \cite{arXiv:1809.04542,NEURIPS2019_eae27d77,NIPS2018_7771}. In the KL case, one can simplify \req{eq:gen_f_def} by using \eqref{eq:Lambda_KL},
\begin{align}\label{eq:R_Gamma}
R^\Gamma(Q\|P)=\sup_{ g\in\Gamma}\left\{E_Q[ g]-\log E_P[e^{ g}]\right\}\,,
\end{align}
which results in the special case studied in \cite{Dupuis:Mao:2019}.
{ Several of our results will require us to work on a metric space (see Section \ref{sec:polish}), but first we present several properties that hold more generally. In the following theorem we derive a dual variational formula to \eqref{eq:gen_f_def}, which shows that if $ g\in\Gamma$ then \req{eq:Df_Gibbs_var_formula} holds with $D_f$ replaced by $D_f^\Gamma$. This lends further credence to the definition \eqref{eq:gen_f_def} and its use of $\Lambda_f^P$.}
\begin{theorem} \label{thm:gen_fdiv_dual_var}
Let $f\in\mathcal{F}_1(a,b)$ where $a\geq 0$, $P\in\mathcal{P}(\Omega)$, and $\Gamma\subset \mathcal{M}_b(\Omega)$ be nonempty. For $g\in\Gamma$ we have
\begin{align}\label{eq:Gibbs_VF_Phi_P}
(D_f^\Gamma)^*( g;P)\equiv \sup_{Q\in \mathcal{P}(\Omega)}\{E_Q[ g] -D_f^\Gamma(Q\|P)\}=\Lambda_f^P[g]\,.
\end{align}
\end{theorem}
\begin{remark}
We refer to Theorem \ref{thm:gen_fdiv_dual_var_app} in Appendix \ref{app:proofs} for the proof. While most cases of interest like \req{eq:f_alpha_def} have $a\geq 0$, we also cover the case $a< 0$ in Theorem \ref{thm:gen_fdiv_dual_var_app}.
\end{remark}
Theorem \ref{thm:gen_fdiv_dual_var} establishes $D_f^\Gamma$ as a natural generalization of $D_f$ when $\Gamma$ is used as the test-function space, generalizing the dual formula \eqref{eq:Df_Gibbs_var_formula} for $f$-divergences obtained in \cite{BenTal2007,Ruderman}.
{ Next we show that $D_f^\Gamma$ is bounded above by both $D_f$ and $W^\Gamma$. This fact allows the $(f,\Gamma)$-divergences to inherit many useful properties from both $f$-divergences and IPMs; see the examples in Section \ref{sec:examples}. We also give conditions under which $D^\Gamma_f$ has the divergence property and thus provides a notion of `distance' between probability measures. These, along with Theorem \ref{thm:f_div_inf_convolution} below, constitute the main theoretical results of this paper. The proof of Theorem \ref{thm:general_ub} can be found in Theorem \ref{thm:general_ub_app} of Appendix \ref{app:proofs}.
\begin{theorem}\label{thm:general_ub}
Let $f\in\mathcal{F}_1(a,b)$, $\Gamma\subset\mathcal{M}_b(\Omega)$ be nonempty, and $Q,P\in\mathcal{P}(\Omega)$.
\begin{enumerate}
\item
\begin{align}\label{eq:inf_conv_ineq}
D_f^\Gamma(Q\|P)\leq \inf_{\eta\in\mathcal{P}(\Omega)}\{D_f(\eta\|P)+W^\Gamma(Q,\eta)\}\,.
\end{align}
In particular, $D_f^\Gamma(Q\|P)\leq \min\{D_f(Q\|P),W^\Gamma(Q,P)\}$.
\item The map $(Q,P)\in\mathcal{P}(\Omega)\times\mathcal{P}(\Omega)\mapsto D_f^\Gamma(Q\|P)$ is convex.
\item If there exists $c_0\in \Gamma\cap\mathbb{R}$ then $D_f^\Gamma(Q\|P)\geq 0$.
\item Suppose $f$ and $\Gamma$ satisfy the following:
\begin{enumerate}
\item There exists a nonempty set $\Psi\subset\Gamma$ with the following properties:
\begin{enumerate}
\item $\Psi$ is $\mathcal{P}(\Omega)$-determining.
\item For all $\psi\in\Psi$ there exist $c_0\in\mathbb{R}$, $\epsilon_0>0$ such that $c_0+\epsilon \psi\in\Gamma$ for all $|\epsilon|<\epsilon_0$.
\end{enumerate}
\item $f$ is strictly convex on a neighborhood of $1$.
\item $f^*$ is finite and $C^1$ on a neighborhood of $\nu_0\equiv f_+^\prime(1)$.
\end{enumerate}
Then:
\begin{enumerate}[label=(\roman*)]
\item $D_f^{\Gamma}$ has the divergence property.
\item $W^{\Gamma}$ has the divergence property.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
Under stronger assumptions one can show that \req{eq:inf_conv_ineq} is in fact an equality; see Theorem \ref{thm:f_div_inf_convolution} below.
\end{remark}
\begin{remark}
Assumptions 4(b) and 4(c) hold, for instance, if $f$ is strictly convex on $(a,b)$ and $\nu_0\in\{f^*<\infty\}^o$ (see Theorem 26.3 in \cite{rockafellar1970convex}).
\end{remark}
\req{eq:inf_conv_ineq} implies the following upper bound on $D^\Gamma_f$:
\begin{corollary}[Upper Bounds]\label{cor:upper_bound}
Let $\mathcal{U}\subset \mathcal{P}(\Omega)$. Then
\begin{align}
D_f^\Gamma(Q\|P)\leq \inf_{\eta\in\mathcal{U}}\{D_f(\eta\|P)+W^\Gamma(Q,\eta)\}\,.
\end{align}
For instance, $\mathcal{U}$ could be a pushforward family, i.e., the distributions of $h_\theta(X)$, $\theta\in \Theta$, where the $h_\theta$ are $\Omega$-valued measurable maps and $X$ is a random quantity. Such families are used in GANs; see Section \ref{sec:examples}.
\end{corollary}
{\bf Examples of $\mathcal{P}(\Omega)$-determining sets:}
\begin{enumerate}
\item Exponentials, $e^{c\cdot x}$, $c\in\mathbb{R}^n$, i.e., the moment generating function; see Section 30 in \cite{billingsley2012probability}.
\item The set of $1$-Lipschitz functions, $g$, on a metric space with $\|g\|_\infty\leq 1$.
This follows from the Portmanteau Theorem (see, e.g., Theorem 2.1 in \cite{billingsley2013convergence}).
\item The unit ball of a reproducing kernel Hilbert space (RKHS), under appropriate assumptions (see \cite{JMLR:v12:sriperumbudur11a}).
\item The set of ReLU neural networks. This follows from the universal approximation theorem \cite{Cybenko1989} and also applies to other activation functions, e.g., sigmoid.
\item The set of ReLU neural networks with spectral normalization \cite{miyato2018spectral}.
\end{enumerate}
Several of these classes of functions have been utilized in existing methods; see Table \ref{tab:related_work} below. Our examples in Section \ref{sec:examples} will utilize Lipschitz functions and ReLU neural networks, including spectral normalization in Section \ref{sec:SN}.
\begin{remark}
Note that it is a well-known result that polynomials do not constitute a $\mathcal{P}(\Omega)$-determining set; there exist distinct measures that agree on all moments.
\end{remark}
\begin{remark}
Depending on the domain, several of the above examples of $\mathcal{P}(\Omega)$-determining sets consist of unbounded functions. To fit them into our framework it generally suffices to work with truncated versions of these functions; we refer to Section \ref{sec:unbounded_extension} for a detailed discussion.
\end{remark}
}
\subsection{$(f,\Gamma)$-Divergences on Polish Spaces}\label{sec:polish}
When working on a Polish space, $S$, and under further assumptions on $f$ and $\Gamma$, we are able to show that $D_f^\Gamma$ interpolates between the classical $f$-divergence, $D_f$, and the $\Gamma$-IPM, $W^\Gamma$. At various points, we will require $f$ and $\Gamma$ to have the following properties:
\begin{definition}\label{def:admissible}
We will call $f\in\mathcal{F}_1(a,b)$ {\bf admissible} if $\{f^*<\infty\}=\mathbb{R}$ and $\lim_{y\to-\infty}f^*(y)<\infty$ (note that this limit always exists by convexity). If $f$ is also strictly convex at $1$ then we will call $f$ {\bf strictly admissible}.
We will call $\Gamma\subset C_b(S)$ {\bf admissible} if $0\in\Gamma$, $\Gamma$ is convex, and $\Gamma$ is closed in the weak topology generated by the maps $\tau_\mu$, $\mu\in M(S)$ (see Section \ref{sec:notation}). $\Gamma$ will be called {\bf strictly admissible} if it also satisfies the following property: There exists a $\mathcal{P}(S)$-determining set $\Psi\subset C_b(S)$ such that for all $\psi\in\Psi$ there exist $c\in\mathbb{R}$, $\epsilon>0$ such that $c\pm\epsilon \psi\in\Gamma$.
\end{definition}
Our main result, Theorem \ref{thm:f_div_inf_convolution}, will require admissibility of both $f$ and $\Gamma$. The functions $f_{KL}$ and $f_\alpha$, $\alpha>1$, defined in \req{eq:f_alpha_def}, are strictly admissible, but $f_\alpha$, $\alpha\in(0,1)$, is not admissible { (however, Theorem \ref{thm:general_ub} above does apply to $f_\alpha$ for $\alpha\in(0,1)$)}. The admissibility requirements that $\Gamma$ be convex and closed will let us express $D_f^\Gamma$ as the infinite-dimensional convex conjugate of a convex and LSC functional. This will allow us to analyze $D_f^\Gamma$ using tools from convex analysis. Strict admissibility will be key in proving the divergence property for both $W^\Gamma$ and $D_f^\Gamma$.
{\bf Examples of strictly admissible $\Gamma$:}
\begin{enumerate}
\item $\Gamma=C_b(S)$, which leads to the classical $f$-divergences.
\item $\Gamma=\text{Lip}_b^1(S)$, i.e., all bounded $1$-Lipschitz functions, which leads to generalizations of the Wasserstein metric.
\item $\Gamma=\{g\in C_b(S):|g|\leq 1\}$, which leads to generalizations of the total variation metric. \item $\Gamma=\{g\in \text{Lip}^1_b(S):|g|\leq 1\}$, which leads to generalizations of the Dudley metric. \item { $\Gamma=\{g\in X:\|g\|_X\leq 1\}$, the unit ball in a RKHS $X\subset C_b(S)$ (under appropriate assumptions given in Lemma \ref{lemma:RKHS}). This yields a generalization of MMD and is also related to the recent KL - MMD interpolation method in \cite{2021arXiv210608929G}; the latter employs a soft constraint rather than working on the RKHS unit ball and is based on the representation \eqref{eq:Df_var_formula} instead of \eqref{eq:Df_var_formula_nu}.} \end{enumerate} Note that the first two examples are shift invariant (hence \req{eq:Df_Gamma_no_shift} is applicable) while the latter three are not. { We are now ready to present the second key theorem in this paper, where we derive the infimal convolution representation of $D_f^\Gamma$ and provide alternative (to Theorem \ref{thm:general_ub}) conditions that ensure $D_f^\Gamma$ possesses the divergence property. The proof can be found in Appendix \ref{app:proofs}, Theorem \ref{thm:f_div_inf_convolution_app}.} \begin{theorem} \label{thm:f_div_inf_convolution} Suppose $f$ and $\Gamma$ are admissible. For $Q,P\in\mathcal{P}(S)$ let $D^\Gamma_f(Q\|P)$ be defined by \eqref{eq:gen_f_def} and let $W^\Gamma(Q,P)$ be defined as in \eqref{eq:gen_wasserstein}. These have the following properties: \begin{enumerate} \item Infimal Convolution Formula: \begin{align}\label{eq:inf_conv} D_f^\Gamma(Q\|P)=\inf_{\eta\in \mathcal{P}(S)}\{D_f(\eta\|P)+W^\Gamma(Q,\eta)\}\,. \end{align} In particular, $0\leq D_f^\Gamma(Q\|P)\leq \min\{D_f(Q\|P),W^\Gamma(Q,P)\}$. \item If $D_f^\Gamma(Q\|P)<\infty$ then there exists $\eta_*\in\mathcal{P}(S)$ such that \begin{align}\label{eq:inf_conv_existence} D_f^\Gamma(Q\|P)=D_f(\eta_*\|P)+W^\Gamma(Q,\eta_*)\,. \end{align} If $f$ is strictly convex then there is a unique such $\eta_*$. \item Divergence Property for $W^\Gamma$: If $\Gamma$ is strictly admissible then $W^\Gamma$ has the divergence property. \item Divergence Property for $D^\Gamma_f$: If $f$ and $\Gamma$ are both strictly admissible then $D_f^\Gamma$ has the divergence property. \end{enumerate} \end{theorem} \begin{remark} If $a\geq 0$ in Definition \ref{def:F_1} then $f^*$ is nondecreasing and so the condition $\lim_{y\to-\infty}f^*(y)<\infty$ is satisfied; see Lemma \ref{lemma:f_star_inc}. In many cases, the divergence property for $D_f^\Gamma$ still holds even if one or both of the conditions $\lim_{y\to-\infty}f^*(y)<\infty$, $\{f^*<\infty\}=\mathbb{R}$ are violated and also under relaxed conditions on $\Gamma$; this was shown in Theorem \ref{thm:general_ub}. \end{remark} The infimal convolution formula \eqref{eq:inf_conv} - \eqref{eq:inf_conv_existence} gives one precise sense in which the $(f,\Gamma)$-divergence variationally interpolates between the $\Gamma$-IPM, $W^\Gamma$, and the classical $f$-divergence, $D_f$. It is a generalization of the results in \cite{NIPS2018_7771,Dupuis:Mao:2019}, the former assuming compactly supported measures and the latter covering the KL case. { We end this subsection by referring the reader to Table~\ref{tab:related_work}, which lists related works and connections to our general framework.} \begin{table} \centering { \caption{{ Summarizing how our main Theorems extend or relate to certain existing methods. 
Our theory either applies directly to the cited methods or motivates the construction of closely related interpolation and/or regularization methods that are based on \eqref{eq:Df_var_formula_nu}.}} \begin{tabular}{ |p{2.7cm}||p{3cm}|p{4.5cm}|p{4.5cm}| } \hline \multicolumn{4}{|c|}{Extension of \& connections to related work} \\ \hline Related Paper & Function Space $\Gamma$ & Objective Functional & Relevant Theorems\\ \hline Goodfellow et al., \cite{GAN}& Neural networks & JS divergence using \eqref{eq:Df_var_formula} &Theorem~\ref{thm:general_ub}\\ Nowozin et al., \cite{f_GAN}& Neural networks & f-divergence using \eqref{eq:Df_var_formula} & Theorem~\ref{thm:general_ub}\\ Belghazi et al., \cite{MINE_paper}& Neural networks & KL-div. using \eqref{eq:Df_var_formula} \& \eqref{eq:Df_var_formula_nu}& Theorem~\ref{thm:general_ub}\\ Miyato et al., \cite{miyato2018spectral}& Neural networks \& spectral normalization & IPM \eqref{eq:gen_wasserstein} or f-divergence \eqref{eq:Df_var_formula} & Theorem~\ref{thm:general_ub}\\ Arjovsky et al., \cite{WGAN} & $\text{Lip}_b^1(S)$ & IPM \eqref{eq:gen_wasserstein} & Theorem~\ref{thm:f_div_inf_convolution}\\ Gulrajani et al., \cite{wgan:gp} & $\text{Lip}_b(S)$ & IPM \eqref{eq:gen_wasserstein} \& gradient penalty & Theorem~\ref{thm:f_div_inf_convolution} \& Theorem~\ref{thm:soft_constraint}\\ Song et al., \cite{Bridging_fGan_WGan} (Algorithm 1)& $\text{Lip}_b^1(S)$ & KL divergence using \eqref{eq:Df_var_formula} & Theorem~\ref{thm:f_div_inf_convolution} \\ Nguyen et al., \cite{Nguyen_Full_2010} & RKHS & KL, f-divergence using \eqref{eq:Df_var_formula} & Theorem~\ref{thm:f_div_inf_convolution}\\ Gretton et al., \cite{gretton2012} & Unit ball in RKHS & IPM \eqref{eq:gen_wasserstein} & Theorem~\ref{thm:f_div_inf_convolution}\\ Glaser et al., \cite{2021arXiv210608929G} & RKHS & KL-div. using \eqref{eq:Df_var_formula} \& RKHS norm penalty & Theorem~\ref{thm:f_div_inf_convolution} \& Theorem~\ref{thm:soft_constraint}\\ Dupuis et al., \cite{Dupuis:Mao:2019} & convex \& closed $\Gamma$ & KL-divergence & Theorem~\ref{thm:f_div_inf_convolution}\\ \hline \end{tabular} \label{tab:related_work} } \end{table} \subsection{Additional Properties} The following theorem details the behavior of $D_f^\Gamma$ in a pair of limiting regimes and further illustrates the manner in which $D_f^\Gamma$ interpolates between $D_f$ and $W^\Gamma$. These results again require (strict) admissibility (see Definition \ref{def:admissible}). \begin{theorem}\label{thm:limit} Let $Q,P\in\mathcal{P}(S)$ and $\Gamma$, $f$ both be admissible. Then for all $c>0$ the set $\Gamma_c\equiv\{c g: g\in\Gamma\}$ is admissible and we have the following two limiting formulas. \begin{enumerate} \item If $\Gamma$ is strictly admissible then the sets $\Gamma_L$ are strictly admissible for all $L>0$ and \begin{align} \lim_{L\to\infty} D^{\Gamma_L}_f(Q\|P)=D_f(Q\|P)\,. \end{align} \item If $f$ is strictly admissible then \begin{align} \lim_{\delta\searrow 0}\frac{1}{\delta} D_f^{\Gamma_\delta}(Q\|P)=W^\Gamma(Q,P)\,. \end{align} \end{enumerate} \end{theorem} The proof of Theorem \ref{thm:limit} is very similar to that of the corresponding results in the KL case \cite[Proposition 5.1 and 5.2]{Dupuis:Mao:2019}. For completeness, we include its proof in Appendix \ref{app:proofs} (Theorem \ref{thm:limit_app}). 
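As a concrete illustration of Theorem \ref{thm:limit} (and of the single-optimization formulation \eqref{eq:Df_Gamma_def2}), the following minimal NumPy sketch (included purely for illustration and not part of our theoretical development) estimates $D_{f_2}^{\Gamma_L}(Q\|P)$ for measures supported on a finite grid in $\mathbb{R}$, where $\Gamma_L$ consists of the $L$-Lipschitz functions on the grid. It performs gradient ascent over the values of $g$ on the grid and the scalar $\nu$ in \eqref{eq:Df_Gamma_def2}, followed by a clipping pass that maintains feasibility (a simple heuristic, not an exact projection). Here $f_2(x)=(x^2-1)/2$, whose conjugate (after extending $f_2\equiv\infty$ on $x<0$) is $f_2^*(y)=\max\{y,0\}^2/2+1/2$.
\begin{verbatim}
import numpy as np

def f2_star(y):   # conjugate of f_2(x) = (x^2-1)/2, extended by +inf for x < 0
    return 0.5 * np.maximum(y, 0.0) ** 2 + 0.5

def f2_star_prime(y):
    return np.maximum(y, 0.0)

def D_f2_Gamma_L(x, q, p, L, steps=50000, lr=5e-3):
    # estimate D_{f_2}^{Gamma_L}(Q||P) for Q, P supported on the sorted grid x
    g, nu, d = np.zeros_like(x), 0.0, np.diff(x)
    for _ in range(steps):
        g = g + lr * (q - p * f2_star_prime(g - nu))              # ascent in g
        nu = nu + lr * (np.sum(p * f2_star_prime(g - nu)) - 1.0)  # ascent in nu
        for i in range(len(d)):   # feasibility pass: |g[i+1]-g[i]| <= L*d[i]
            g[i + 1] = np.clip(g[i + 1], g[i] - L * d[i], g[i] + L * d[i])
    return np.sum(q * g) - (nu + np.sum(p * f2_star(g - nu)))

x, q, p = np.array([0.0, 1.0]), np.array([0.25, 0.75]), np.array([0.5, 0.5])
for L in [0.01, 0.1, 1.0, 10.0]:
    print(L, D_f2_Gamma_L(x, q, p, L))
# As L grows the estimates approach D_{f_2}(Q||P) = 1/8 (Part 1 of the theorem),
# while for small L the ratio D/L approaches the Wasserstein distance 1/4 (Part 2).
\end{verbatim}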
{ Theorem \ref{thm:general_ub} implies the following convergence and continuity properties (see Theorem \ref{thm:conv_app} in Appendix \ref{app:proofs} for the proof):
\begin{theorem}\label{thm:conv}
Let $f\in\mathcal{F}_1(a,b)$ and $\Gamma\subset\mathcal{M}_b(\Omega)$. Then:
\begin{enumerate}
\item If there exists $c_0\in \Gamma\cap\mathbb{R}$ then $W^\Gamma(Q_n,P)\to 0 \implies D_f^\Gamma(Q_n\|P) \to 0$ and $D_f(Q_n\|P)\to 0 \implies D_f^\Gamma(Q_n\|P) \to 0$, and similarly if one permutes the order of $Q_n$ and $P$.
\item Suppose $f$ and $\Gamma$ satisfy the following:
\begin{enumerate}
\item There exists a nonempty set $\Psi\subset\Gamma$ with the following properties:
\begin{enumerate}
\item $\Psi$ is $\mathcal{P}(\Omega)$-determining.
\item For all $\psi\in\Psi$ there exist $c_0\in\mathbb{R}$, $\epsilon_0>0$ such that $c_0+\epsilon \psi\in\Gamma$ for all $|\epsilon|<\epsilon_0$.
\end{enumerate}
\item $f$ is strictly convex on a neighborhood of $1$.
\item $f^*$ is finite and $C^1$ on a neighborhood of $\nu_0\equiv f_+^\prime(1)$.
\end{enumerate}
Let $P,Q_n\in\mathcal{P}(\Omega)$, $n\in\mathbb{Z}_+$. If $D_f^\Gamma(Q_n\|P)\to 0$ or $D_f^\Gamma(P\|Q_n)\to 0$ then $E_{Q_n}[\psi]\to E_P[\psi]$ for all $\psi\in\Psi$.
\item On a metric space $S$, if $f$ is admissible then the map $(Q,P)\in\mathcal{P}(S)\times\mathcal{P}(S)\mapsto D_f^\Gamma(Q\|P)$ is lower semicontinuous.
\end{enumerate}
\end{theorem}
\begin{corollary}\label{cor:weak_conv}
Under the assumptions of Part 2 of Theorem \ref{thm:conv} we have the following: If $\Gamma=\text{Lip}_b^1(S)$ where $S$ is a compact metric space then one can take $\Psi=\Gamma$ and thereby conclude that $D_f^\Gamma(Q_n\|P)\to 0$ iff $D_f^\Gamma(P\|Q_n)\to 0$ iff $Q_n\to P$ in distribution iff $W^\Gamma(Q_n,P)\to 0$.
\end{corollary}
\begin{remark}
Corollary \ref{cor:weak_conv} follows from the equivalence between weak convergence and convergence in the Wasserstein metric on compact spaces; see Theorem 2 in \cite{pmlr-v70-arjovsky17a} for further relations between convergence in the Wasserstein metric and other notions of convergence.
\end{remark}
}
{ Finally, we derive a data processing inequality for $(f,\Gamma)$-divergences (see Theorem \ref{thm:data_proc_app} in Appendix \ref{app:proofs} for the proof). This result applies to general measurable spaces. We will need the following notation: Let $(N,\mathcal{N})$ be another measurable space and $K$ be a probability kernel from $\Omega$ to $N$. Given $P\in\mathcal{P}(\Omega)$ we denote the composition of $P$ with $K$ by $P\otimes K$ (a probability measure on $\Omega\times N$) and we denote the marginal distribution on $N$ by $K[P]$. Given $g\in \mathcal{M}_b(\Omega\times N)$ we let $K[g]$ denote the bounded measurable function on $\Omega$ given by $x\mapsto\int g(x,y)K_x(dy)$.
\begin{theorem}[Data Processing Inequality]\label{thm:data_proc}
Let $f\in\mathcal{F}_1(a,b)$, $Q,P\in\mathcal{P}(\Omega)$, and $K$ be a probability kernel from $(\Omega,\mathcal{M})$ to $(N,\mathcal{N})$.
\begin{enumerate}
\item Let $\Gamma\subset \mathcal{M}_b(N)$ be nonempty. Then
\begin{align}\label{eq:data_proc1}
D_f^\Gamma\left(K[Q]\|K[P]\right)\leq D_f^{K[\Gamma]}(Q\|P)\,.
\end{align}
\item Let $\Gamma\subset \mathcal{M}_b(\Omega\times N)$ be nonempty. Then
\begin{align}\label{eq:data_proc2}
D_f^\Gamma\left(Q\otimes K\|P\otimes K\right)\leq D_f^{K[\Gamma]}(Q\|P)\,.
\end{align} \end{enumerate} \end{theorem} \begin{remark} In \req{eq:data_proc1} we use the obvious embedding of $\mathcal{M}_b(N)\subset\mathcal{M}_b(\Omega\times N)$ to define $K[\Gamma]\equiv\{K[g]:g\in\Gamma\}$. \end{remark} } \section{Mass-Redistribution/Mass-Transport Interpretation of the $(f,\Gamma)$-Divergences}\label{sec:mass_transport} The bound \begin{align}\label{eq:Df_Gamma_W_bound} D_f^\Gamma(Q\|P)\leq W^\Gamma(Q,P)\,, \end{align} which follows from { Part 1 of either Theorem \ref{thm:general_ub} or Theorem \ref{thm:f_div_inf_convolution},} makes it clear that $D_f^\Gamma(Q\|P)$ can be finite and informative even if $Q\not\ll P$. For instance, if $\Gamma=\text{Lip}_b^1(S)$ then $W^\Gamma$ is the classical Wasserstein metric, and this can be finite even for mutually singular $Q$ and $P$. It is well-known that the Wasserstein metric can be understood in terms of mass transport \cite{villani2008optimal}. Generalizing this idea, the variational formula \eqref{eq:inf_conv_existence} allows us to interpret the $(f,\Gamma)$-divergences in terms of a two-stage mass-redistribution/mass-transport process: \begin{enumerate} \item First the `mass' distribution, $P$, is redistributed to form an intermediate measure, $\eta_*$. This has cost $D_f(\eta_*\|P)$, which depends on the relative amount of mass moved from or added to each point, but is insensitive to the distance that the mass is moved. However, the support of $\eta_*$ cannot be enlarged or shifted outside the support of $P$ during its construction, otherwise the cost would be infinite. \item Next, the mass is transported from $\eta_*$ to $Q$ with a cost $W^\Gamma(Q,\eta_*)$ that depends on the distance the mass must be moved. In this step, the support of $\eta_*$ could be drastically different from the support of $Q$, if necessary. \end{enumerate} The optimizing $\eta_*$ achieves the optimal balance between the cost of redistributing mass in step 1 and the cost of transporting mass in step 2. \begin{remark} When $\Gamma\neq \text{Lip}_b^1(S)$, $D_f^\Gamma$ is still characterized by the above two-stage procedure, with the only difference being that the interpretation of $W^\Gamma$ may differ. \end{remark} In this section we derive a characterization of the solution to the infimal convolution problem \eqref{eq:inf_conv} (in the case where $f\in\mathcal{F}_1(a,b)$ with $a\geq 0$) and will use this to provide further insight into the mass-redistribution/mass-transport interpretation. A key step will be to first obtain existence and uniqueness results regarding the dual optimization problem \eqref{eq:Df_Gibbs_var_formula} for the classical $f$-divergences. The proof is found in Appendix \ref{app:proofs}, Theorem \ref{thm:Gibbs_optimizer_app}. \begin{theorem}\label{thm:Gibbs_optimizer} Let $P\in\mathcal{P}(\Omega)$, $ g\in\mathcal{M}_b(\Omega)$, and $f\in\mathcal{F}_1(a,b)$ be admissible with $a\geq 0$. If $f$ is strictly convex on $(a,b)$ then there exists $\nu_*\in\mathbb{R}$ such that \begin{align} dQ_*\equiv (f^*)^\prime( g-\nu_*) dP \end{align} is a probability measure and \begin{align} &\sup_{Q\in\mathcal{P}(\Omega)}\{E_Q[ g]-D_f(Q\|P)\}= E_{Q_*}[ g]-D_f(Q_*\|P)=\nu_*+E_P[f^*( g-\nu_*)]=\Lambda_f^P[g]\,. \end{align} Moreover, $Q_*$ is the unique solution to the optimization problem \begin{align}\label{eq:Gibb_Q_unique} \sup_{Q\in\mathcal{P}(\Omega)}\{E_Q[ g]-D_f(Q\|P)\}\,. 
\end{align}
\end{theorem}
Theorem \ref{thm:Gibbs_optimizer} (specifically, the generalization found in Theorem \ref{thm:Gibbs_optimizer_app}) allows us to derive in Theorem \ref{thm:inf_conv_sol} a characterization of the solution, $\eta_*$, to the infimal convolution problem \eqref{eq:inf_conv}. First we present a formal calculation; a precise statement of the result can be found in Theorem \ref{thm:inf_conv_sol} and a rigorous proof is given in Theorem \ref{thm:inf_conv_sol_app} of Appendix \ref{app:proofs}. This result generalizes Theorem 4.12 from \cite{Dupuis:Mao:2019}, which considered the KL case. First assume $(g_*, \nu_*)$ is a maximizer of \eqref{eq:gen_f_def} and that $\eta_*$ solves \eqref{eq:inf_conv}. Then
\begin{align}\label{eq:Df_Phi_formula_informal}
D_f^\Gamma(Q\|P)&=E_Q[ g_*]-(\nu_*+E_P[f^*( g_*-\nu_*)])\\
&=E_Q[ g_*]-E_{\eta_*}[ g_*]+E_{\eta_*}[ g_*]-(\nu_*+E_P[f^*( g_*-\nu_*)])\notag\\
&\le W^\Gamma(Q,\eta_*)+D_f(\eta_*\|P)= D_f^\Gamma(Q\|P)\, .\notag
\end{align}
Therefore, as the inequalities become equalities, we have
\[
W^\Gamma(Q,\eta_*)=E_Q[ g_*]-E_{\eta_*}[g_*]
\]
and
\begin{align}\label{eq:Df_equality_formal}
D_f(\eta_*\|P)=E_{\eta_*}[ g_*]-(\nu_*+E_P[f^*( g_*-\nu_*)])\,.
\end{align}
Note that the optimality of $\nu_*$ in \eqref{eq:Lambda_f_def} also implies $E_P[(f^*)^\prime_+( g_*-\nu_*)]=1$ (a first-order condition in $\nu$). Combining \eqref{eq:Df_equality_formal} with Theorem \ref{thm:Gibbs_optimizer}, one formally obtains
\begin{align}
d\eta_* & = (f^*)^\prime( g_*-\nu_*)dP\, , \\
g_*& =f^\prime( d\eta_*/dP)+\nu_*\, \,\,\,\,P\text{-a.s.}
\end{align}
In particular, in the KL case (see \cite[Remark 4.11]{Dupuis:Mao:2019}), one has
\begin{align}
g_*=\log(d\eta_*/dP)+c_0\, \,\,\,\,P\text{-a.s.}
\end{align}
for some $c_0\in\mathbb{R}$, and if $Q\ll P$ this leads to
\begin{align}\label{eq:KL:Gamma:classical}
R^\Gamma(Q\|P)=E_Q[\log(d\eta_*/dP)]\,,
\end{align}
which has an obvious similarity to the formula for the classical KL divergence.
\begin{theorem}\label{thm:inf_conv_sol}
Let $\Gamma\subset C_b(S)$ be admissible and $f\in\mathcal{F}_1(a,b)$ be admissible, where $a\geq 0$ and $f^*$ is $C^1$. Fix $Q,P\in\mathcal{P}(S)$ and suppose we have $ g_*\in\Gamma$ and $\nu_*\in\mathbb{R}$ that satisfy the following:
\begin{enumerate}
\item $f((f^*)^\prime( g_*-\nu_*))\in L^1(P)$,
\item $E_P[(f^*)^\prime( g_*-\nu_*)]=1$,
\item $W^\Gamma(Q,\eta_*)=E_Q[ g_*]-E_{\eta_*}[ g_*]$, where $d\eta_*\equiv (f^*)^\prime( g_*-\nu_*)dP$.
\end{enumerate}
Then $\eta_*\in\mathcal{P}(S)$ solves the infimal convolution problem \eqref{eq:inf_conv} and
\begin{align}\label{eq:Df_Phi_formula}
D_f^\Gamma(Q\|P)=E_Q[ g_*]-(\nu_*+E_P[f^*( g_*-\nu_*)])\,.
\end{align}
If $f$ is strictly convex then $\eta_*$ is the unique solution to the infimal convolution problem.
\end{theorem}
\begin{remark}
In the context of MMD, $g_*$ is called the witness function \cite{gretton2012}. In the KL case, the existence of $g_*$ can be proven under appropriate compactness assumptions \cite[Theorem 4.8]{Dupuis:Mao:2019}.
\end{remark}
\begin{remark}
\req{eq:inf_conv_existence} from Theorem \ref{thm:f_div_inf_convolution} makes it clear that $D_f^\Gamma(Q\|P)< D_f(Q\|P)$ in `most' cases. An exception to this occurs when \req{eq:Df_var_formula} has an optimizer $g_*$ with $ g_*\in\Gamma$. In such cases we have $D_f^\Gamma(Q\|P)=D_f(Q\|P)$, the supremum \eqref{eq:gen_f_def} will also be achieved at $ g_*$ since \req{eq:Df_Phi_formula} holds with $\nu_*=0$, and the solution to the infimal convolution problem is $\eta_*=Q$.
\end{remark} { In general, the task of computing the intermediate measure $\eta_*$ in \eqref{eq:inf_conv_existence} is difficult, though a naive approach could proceed as follows: \begin{enumerate} \item Approximate $\eta\in\mathcal{P}(S)$ by a neural network family $h_\theta(X)$, where $X$ is some random noise source (as in the generator of a GAN; see Section \ref{sec:examples}); in this step we are using Corollary \ref{cor:upper_bound} to construct an upper bound. \item Approximate $D_f(\eta\|P)$ and $W^\Gamma(Q,\eta)$ via their variational formulas \eqref{eq:Df_var_formula} or \eqref{eq:Df_var_formula_nu} and \eqref{eq:gen_wasserstein} respectively, with the function spaces being approximated via neural network families (as in the discriminator of a GAN; again, see Section \ref{sec:examples}). \item Solve the resulting min-max problem \eqref{eq:inf_conv} via a stochastic-gradient-descent method to approximate $\eta_*$ (and also $g_*$). \end{enumerate} We did not explore the effectiveness of this naive method here, as it is tangential to the goals of this paper; we leave the computation of $\eta_*$ for a future work. Nevertheless, the following subsection presents a simple example that provides useful intuition.} \subsection{Example: Dirac Masses} Here we consider a simple example involving Dirac masses where the $(f,\Gamma)$-divergence can be explicitly computed using Theorem \ref{thm:inf_conv_sol}. This example further illustrates the two-stage mass-redistribution/mass-transport interpretation of the infimal convolution formula \eqref{eq:inf_conv_existence} and demonstrates how the location and distribution of probability mass impacts the result; see Figure \ref{fig:mass_transport}. Further explicit examples in the KL case can be found in \cite{2020arXiv201108441M}. Let $0=x_1<x_2<x_3$ and define the uniform distributions \begin{align}\label{eq:dirac} P=\frac{1}{2}\delta_{x_1}+\frac{1}{2}\delta_{x_2}\,,\,\,\,\,\,Q=\frac{1}{3}\delta_{x_1}+\frac{1}{3}\delta_{x_2}+\frac{1}{3}\delta_{x_3}\,. \end{align} Note that $Q\not\ll P$ and so $D_f(Q\|P)=\infty$; we will see that the $(f,\Gamma)$-divergences can be finite. Specifically, we will compute the $(f_\alpha,\text{Lip}_b^1(\mathbb{R}))$-divergence for $\alpha>1$ via Theorem \ref{thm:inf_conv_sol}. To do this we must find $ g_*\in\text{Lip}_b^1(\mathbb{R})$ and $\nu_*\in\mathbb{R}$ such that \begin{align} &\frac{1}{2}(f_\alpha^*)^\prime( g_*(x_1)-\nu_*)+\frac{1}{2}(f_\alpha^*)^\prime( g_*(x_2)-\nu_*)=1\,,\label{eq:ex1}\\ & g_*\in \argmax_{ g\in\text{Lip}_b^1(\mathbb{R})}\left\{\frac{1}{3}( g(x_1)+ g(x_2)+ g(x_3))-\frac{1}{2}\left( g(x_1)(f^*_\alpha)^\prime( g_*(x_1)-\nu_*)+ g(x_2)(f^*_\alpha)^\prime( g_*(x_2)-\nu_*)\right)\right\}\,,\label{eq:ex2} \end{align} where \begin{align} (f_\alpha^*)^\prime(y)=(\alpha-1)^{1/(\alpha-1)}y^{1/(\alpha-1)}1_{y> 0} \end{align} \begin{figure} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[scale=.50]{Figures/mass_transport_example.eps} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[scale=.50]{Figures/mass_transport_Gamma_div.eps} \end{minipage} \caption{ Solution of the infimal convolution problem \eqref{eq:inf_conv_existence} for $D_{f_\alpha}^\Gamma(Q\|P)$, where $\Gamma=\text{Lip}_b^1(\mathbb{R})$ and $Q$ and $P$ are given by \req{eq:dirac}. The left panel shows the mass $\eta_*(x_2)$ as a function of $x_2$. 
For each value of $\alpha$ there is a transition point where all of the mass required by $Q$ at $x_3$ is first redistributed to $x_2$ when forming $\eta_*$, resulting in $\eta_*(x_2)=2/3$. Note that the amount of mass moved to $x_2$ in the redistribution step does not depend on the distance of $x_3$ from $x_2$, only on the distance of $x_2$ from $x_1=0$. The right panel shows $D_{f_2}^\Gamma(Q\|P)$ as a function of $x_2$ and for several different values of the ratio $x_3/x_2$. }\label{fig:mass_transport}
\end{figure}
(see \req{eq:f_alpha_star}); \req{eq:ex1} is a simplification of Assumption 2 from Theorem \ref{thm:inf_conv_sol} and \eqref{eq:ex2} corresponds to Assumption 3. The solution to the infimal convolution problem then has the form
\begin{align}\label{eq:gamma_star_ex}
d\eta_*= \frac{1}{2}(f_\alpha^*)^\prime( g_*(x_1)-\nu_*)\delta_{x_1}+ \frac{1}{2}(f_\alpha^*)^\prime( g_*(x_2)-\nu_*)\delta_{x_2}\,.
\end{align}
We will now outline how one solves for $\nu_*$ and $g_*$. Without loss of generality we can assume $ g_*(x_1)=0$ (the objective functional for $W^\Gamma$ is invariant under constant shifts, and shifting $g_*$ in $\eta_*$ can be compensated for by redefining $\nu_*$). The only dependence on $ g(x_3)$ in \req{eq:ex2} is through the $ g(x_3)/3$ term; maximizing $g(x_3)$ subject to the $1$-Lipschitz constraint, the optimal solution has $ g(x_3)=x_3-x_2+ g(x_2)$. Therefore we need to solve
\begin{align}\label{eq:dirac_example_eqs}
&\frac{1}{2}(f_\alpha^*)^\prime(-\nu_*)+\frac{1}{2}(f_\alpha^*)^\prime( g_*(x_2)-\nu_*)=1\,,\\
& g_*(x_2)\in \argmax_{ g(x_2)\in[-x_2,x_2]}\left\{\frac{1}{3}(x_3-x_2)+\left(\frac{2}{3}-\frac{1}{2}(f^*_\alpha)^\prime( g_*(x_2)-\nu_*)\right) g(x_2)\right\}\notag
\end{align}
for $\nu_*$ and $g_*(x_2)$. The solution to this is obtained as follows:
\begin{enumerate}
\item Let $\nu_*( g_2)$ be the unique solution to $\frac{1}{2}(f_\alpha^*)^\prime(-\nu_*)+\frac{1}{2}(f_\alpha^*)^\prime( g_2-\nu_*)=1$; the two terms on the left-hand side will be used to obtain the redistributed weights in $\eta_*$.
\item Take $ g_{*,2}$ such that $\frac{1}{2}(f_\alpha^*)^\prime( g_{*,2}-\nu_*( g_{*,2}))=2/3$; this is inspired by the second line in \req{eq:dirac_example_eqs}.
\item If $0<x_2< g_{*,2}$ then the solution to \req{eq:dirac_example_eqs} is obtained at $\nu_*=\nu_*(x_2)$ and
\begin{align}
g_*(x)=\begin{cases} 0, & x< 0\\ x, &x\in[0,x_3)\\ x_3,& x\geq x_3\,. \end{cases}
\end{align}
In this case, the optimal solution has $1/3<\eta_*(x_2)<2/3$, i.e., some amount of mass is redistributed from $x_1=0$ to $x_2$ when forming $\eta_*$ and then mass is transported from both $x_1$ and $x_2$ to $x_3$ to form $Q$.
\item If $x_2\geq g_{*,2}$ then the solution to \req{eq:dirac_example_eqs} is obtained at $\nu_*=\nu_*( g_{*,2})$ and
\begin{align}
g_*(x)=\begin{cases} 0, & x< 0\\ \frac{ g_{*,2}}{x_2} x, &x\in[0,x_2)\\ x-x_2+ g_{*,2},& x\in[x_2,x_3)\\ x_3-x_2+ g_{*,2},& x\geq x_3\,. \end{cases}
\end{align}
In this case, $x_2$ is sufficiently far away from $x_1=0$ that the optimal solution, $\eta_*$, is obtained by first redistributing mass from $x_1=0$ to $x_2$ so that $\eta_*(x_1)=1/3$, $\eta_*(x_2)=2/3$. In the second step, mass is transported solely from $x_2$ to $x_3$ in order to form $Q$.
\end{enumerate}
This completes the construction of $\eta_*$ from \req{eq:gamma_star_ex}. The value of the $(f_\alpha,\text{Lip}_b^1(\mathbb{R}))$-divergence can then be computed via \req{eq:Df_Phi_formula}.
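Steps 1 and 2 above reduce to one-dimensional root-finding problems. A minimal Python sketch (included purely for illustration, assuming SciPy, and with bracketing intervals chosen for moderate parameter values) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

alpha = 2.0

def fstar_prime(y):  # (f_alpha^*)'(y) = (alpha-1)^{1/(alpha-1)} y^{1/(alpha-1)}, y > 0
    e = 1.0 / (alpha - 1.0)
    return (alpha - 1.0) ** e * np.maximum(y, 0.0) ** e

def nu_star(g2):     # step 1: solve (1/2) phi(-nu) + (1/2) phi(g2 - nu) = 1 for nu
    F = lambda nu: 0.5 * fstar_prime(-nu) + 0.5 * fstar_prime(g2 - nu) - 1.0
    return brentq(F, -50.0, abs(g2) + 50.0)

def g_star_2():      # step 2: find g2 with (1/2) phi(g2 - nu_star(g2)) = 2/3
    G = lambda g2: 0.5 * fstar_prime(g2 - nu_star(g2)) - 2.0 / 3.0
    return brentq(G, 0.0, 20.0)

g2 = g_star_2()
print(g2, nu_star(g2))  # for alpha = 2 one finds g_{*,2} = 2/3 and nu_* = -2/3
\end{verbatim}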
The computation of $g_{*,2}$ and $\nu_*(g_{*,2})$ from steps 1 and 2 must be done numerically (e.g., via the root-finding sketch above), and so we illustrate the solution graphically in Figure \ref{fig:mass_transport} by plotting $\eta_*(x_2)$ as a function of $x_2$ for a number of $\alpha$'s. This shows how the mass must be redistributed when forming $\eta_*$ from $P$. The above calculations reveal an interesting transition; when $x_2$ is not close\footnote{Here closeness is defined not only relative to the distance between the two points but also depends on $f$.} to $x_1$ then the mass is transferred solely from $x_2$ after it has been redistributed from $x_1$. However, when $x_1$ and $x_2$ are close enough, redistributing all the necessary mass from $x_1$ to $x_2$ is not optimal and it is cheaper to transport probability mass from both $x_1$ and $x_2$ to $x_3$. The transition between these cases corresponds to the point where $x_2$ crosses above $g_{*,2}$ (which depends on $\alpha$) and hence $\eta_*(x_2)$ saturates at the value $2/3$.
\section{Soft Constraints and the Divergence Property}\label{sec:soft_constraint}
For computational purposes, it is often advantageous to replace the hard (i.e., strict) constraint $ g\in\Gamma$ with a soft constraint in the form of a penalty term, $V$, subtracted from the objective functional; by a penalty term, we mean that $V$ `activates' (i.e., is nonzero) when the constraint $ g\in\Gamma$ is violated. In this way we can construct a new divergence $D_f^{V}$ with $D_f^\Gamma\leq D_f^{V}\leq D_f$ (we let the context distinguish between cases where the superscript denotes a constraint space and cases where it denotes a penalty term); see Theorem \ref{thm:soft_constraint} for the main result of this section. Of particular interest is the case $\Gamma=\text{Lip}^1_b(\mathbb{R}^n)$ (we equip $\mathbb{R}^n$ with the Euclidean metric), where the $1$-Lipschitz constraint can be relaxed to a one-sided gradient penalty term, thus defining objects such as
\begin{align}\label{eq:Lip_soft_constraint}
D_f^{\rho}(Q\|P)=\sup_{ g\in\text{Lip}_b(\mathbb{R}^n)}\left\{E_Q[ g]-\Lambda_f^P[g]-\lambda\int \max\{0,\|\nabla g\|^2-1\}d\rho_{Q,P}\right\}\,,
\end{align}
where $\lambda>0$ is the strength of the penalty term and $\rho_{Q,P}$ is a positive measure (often depending on $Q$ and $P$). Here we are relying on Rademacher's theorem (see Theorem 5.8.6 in \cite{evans2010partial}): $L$-Lipschitz functions on $\mathbb{R}^n$ are differentiable Lebesgue-a.e.\ and the norm of the gradient is bounded by $L$. The penalty term in \req{eq:Lip_soft_constraint} will therefore be activated only when $ g$ is not $1$-Lipschitz. Divergences with soft Lipschitz constraints were first applied to Wasserstein GAN \cite{wgan:gp} with great success, but the theoretical properties of such objects have not been explored; specifically, it has not been shown that they satisfy the divergence property. Here we show in great generality that the relaxation of a hard constraint to a soft constraint preserves the divergence property, and therefore objects such as \eqref{eq:Lip_soft_constraint} still provide a well-defined notion of `distance' between probability measures. The basic requirement is that the penalty term, which we denote by $V$, vanishes on the constraint space $\Gamma$.
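For example, the one-sided gradient penalty appearing in \eqref{eq:Lip_soft_constraint} can be estimated from samples of $\rho_{Q,P}$ as in the following minimal sketch (purely illustrative, assuming PyTorch; the function $g$ stands in for any differentiable scalar-valued discriminator):
\begin{verbatim}
import torch

def one_sided_gradient_penalty(g, x, lam=10.0):
    # estimate lam * E_rho[ max(0, ||grad g||^2 - 1) ] from samples x of rho_{Q,P}
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(g(x).sum(), x, create_graph=True)[0]
    return lam * torch.clamp(grad.pow(2).sum(dim=1) - 1.0, min=0.0).mean()

x = torch.randn(128, 2)
g = lambda z: 2.0 * z.norm(dim=1)        # 2-Lipschitz, so the penalty is active
print(one_sided_gradient_penalty(g, x))  # equals lam * (2^2 - 1) = 30
\end{verbatim}
The general preservation result is the following lemma.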
\begin{lemma}\label{lemma:soft_constraint} Let $(\Omega,\mathcal{M})$ be a measurable space, $\Gamma\subset\widetilde{\Gamma}\subset\mathcal{M}(\Omega)$, $H:\widetilde{\Gamma}\times\mathcal{P}(\Omega)\times\mathcal{P}(\Omega)\to\overline{\mathbb{R}}$, and $V:\widetilde{\Gamma}\times\mathcal{P}(\Omega)\times\mathcal{P}(\Omega)\to[0,\infty]$ with $V|_{\Gamma\times\mathcal{P}(\Omega)\times\mathcal{P}(\Omega)}=0$. Define \begin{align} &D^\Gamma(Q\|P)=\sup_{ g\in \Gamma} H[ g;Q,P]\,,\,\,\,\,\, D^{\widetilde{\Gamma}}(Q\|P)=\sup_{ g\in {\widetilde{\Gamma}}} H[ g;Q,P]\,,\\ &D^{V}(Q\|P)=\sup_{ g\in{\widetilde{\Gamma}}}\{H[ g;Q,P]-V[ g;Q,P]\}\,,\notag \end{align} where $\infty-\infty\equiv -\infty$. If $D^\Gamma$ and $D^{\widetilde{\Gamma}}$ both have the divergence property then so does $D^{V}$. \end{lemma} \begin{remark} The convention $\infty-\infty\equiv-\infty$ is simply a convenient rigorous shorthand for restricting the supremum to those $ g$'s for which this generally undefined operation does not occur. \end{remark} \begin{remark}\label{remark:generalized_constraint} More generally, if the supremum $\sup_{ g\in \Gamma} H[ g;Q,P]$ is achieved at $ g_*\in\Gamma$ (depending on $Q,P$) then the requirement $V|_{\Gamma\times\mathcal{P}(\Omega)\times\mathcal{P}(\Omega)}=0$ can be relaxed to $V[ g_*;Q,P]=0$ for all $Q,P$. \end{remark} \begin{proof} Using $\Gamma\subset{\widetilde{\Gamma}}$, $V\geq 0$, and $V|_\Gamma=0$ we have $D^\Gamma\leq D^{V}\leq D^{\widetilde{\Gamma}}$. $D^\Gamma$ satisfies the divergence property, hence is non-negative. Therefore $D^{V}\geq 0$. $D^{\widetilde{\Gamma}}$ has the divergence property, hence if $Q=P$ then $0=D^{\widetilde{\Gamma}}(Q\|P)\geq D^{V}(Q\|P)\geq 0$. Therefore $D^{V}(Q\|P)=0$. Finally, if $D^{V}(Q\|P)=0$ then $D^\Gamma(Q\|P)=0$ and hence the divergence property for $D^\Gamma$ implies $Q=P$. \end{proof} Using Theorem \ref{thm:f_div_inf_convolution} and Corollary \ref{cor:D_f_var_unbounded}, we can apply Lemma \ref{lemma:soft_constraint} to the $(f,\Gamma)$-divergences and thereby conclude the following: \begin{theorem} \label{thm:soft_constraint} Let $f$ and $\Gamma\subset C_b(S)$ be strictly admissible. Let $\Gamma\subset{\widetilde{\Gamma}}\subset\mathcal{M}(S)$ and $V:{\widetilde{\Gamma}}\times\mathcal{P}(S)\times\mathcal{P}(S)\to[0,\infty]$ with $V|_{\Gamma\times\mathcal{P}(S)\times\mathcal{P}(S)}=0$. For $Q,P\in\mathcal{P}(S)$ define \begin{align} D^{V}_f(Q\|P)\equiv\sup_{ g\in{\widetilde{\Gamma}}}\left\{\left( E_Q[ g]-\Lambda_f^P[g]\right)-V[ g;Q,P]\right\}\,, \end{align} where $\infty-\infty\equiv-\infty$, $-\infty+\infty\equiv-\infty$. Then $D^{V}_f$ has the divergence property and $D_f^\Gamma\leq D_f^{V}\leq D_f$. \end{theorem} \begin{proof} Combine Lemma \ref{lemma:soft_constraint} with Part 4 of Theorem \ref{thm:f_div_inf_convolution} and Theorem \ref{thm:Df_var_unbounded} below; the latter shows that the variational formula for $D_f$ also holds when using the test-function space $\mathcal{M}(\Omega)$. \end{proof} \subsection{Soft-Lipschitz Constraints on $\mathbb{R}^n$: One-Sided Versus Two-Sided Penalties} The gradient penalty term in \req{eq:Lip_soft_constraint} is one-sided, meaning that it penalizes $\|\nabla g\|>1$ but not $\|\nabla g\|\leq 1$. This is consistent with the hard constraint that the Lipschitz constant be less than or equal to $1$. 
The first use of soft Lipschitz penalties, in \cite{wgan:gp}, considered the Wasserstein metric and employed a two-sided gradient penalty,
\begin{align}\label{W_rho}
W^\rho(Q,P)=\sup_{ g\in\text{Lip}_b(\mathbb{R}^n)}\left\{E_Q[ g]-E_P[ g]-\lambda\int (\|\nabla g\|-1)^2d\rho_{Q,P}\right\}\,,
\end{align}
which penalizes $\|\nabla g\|\neq 1$. An intuitively reasonable requirement to impose on any soft constraint is that it vanish on the exact optimizer (if one exists) of the original strictly-constrained optimization problem. The justification for a two-sided gradient penalty in the Wasserstein case rests on Proposition 1 in \cite{wgan:gp}, which shows that the exact optimizer of the Kantorovich-Rubinstein variational formula for the classical Wasserstein metric has gradient with norm $1$ a.e. As the two-sided gradient penalty vanishes on such functions, the object \eqref{W_rho} will still possess the divergence property (see Remark \ref{remark:generalized_constraint}). However, two-sided gradient penalties are not appropriate constraint-relaxations of the $(f,\Gamma)$-divergences, as the gradient of the exact optimizer generally does not have norm $1$ a.e. We demonstrate this via the following simple counterexample: Let $\Gamma=\text{Lip}^1_b(\mathbb{R}^n)$, $P\in\mathcal{P}(\mathbb{R}^n)$, and define $Q$ by $dQ/dP=Z^{-1}e^{-\min\{\|x\|,1\}/2}$. For the classical KL divergence, the optimizer \eqref{eq:Df_optimizer} of the variational formula is given by
\begin{align}
g_*=\log(dQ/dP)+1=-\min\{\|x\|,1\}/2+1+\log(Z^{-1})\,,
\end{align}
which is bounded and $1/2$-Lipschitz, and so $ g_*\in\Gamma$. Therefore it is straightforward to see that $g_*$ is also the optimizer for $R^{\Gamma}(Q\|P)$, and it satisfies $\|\nabla g_*\|\leq 1/2$ a.e. This proves that the two-sided penalty does not vanish on $ g_*$. Similar counterexamples can be constructed using \req{eq:Df_optimizer} for other choices of $f$.
\section{Extension of the $(f,\Gamma)$-Divergence Variational Formula to Unbounded Functions}
\label{sec:unbounded_extension}
The assumption that all of the test functions $ g\in\Gamma$ are bounded can be very restrictive in practice. In this section we provide general conditions under which the test-function space can be expanded to include (possibly) unbounded functions without changing the value of $D_f^\Gamma$. This fact will be used in the numerical examples in Section \ref{sec:examples} below. The main result of this section is Theorem \ref{thm:unbounded_Lip}. The key step in the extension to unbounded $ g$'s is the following lower bound.
\begin{lemma}\label{lemma:phi_unbounded}
Let $f$, $\Gamma$ be admissible and, in addition, suppose $f^*$ is bounded below. Fix $Q,P\in\mathcal{P}(S)$. If $ g\in L^1(Q)$ and there exist $ g_n\in\Gamma$, a measurable set $A$, and $C\in\mathbb{R}$ with $ g_n\to g$ pointwise, $| g_n|\leq| g|$ for all $n$, and $ g_n\leq g 1_A+C1_{A^c}$ for all $n$, then
\begin{align}
D_f^\Gamma(Q\|P)\geq E_Q[ g]-\Lambda_f^P[g]\,.
\end{align}
\end{lemma}
\begin{remark}
The additional assumption that $f^*$ is bounded below is satisfied in many cases of interest, e.g., the KL divergence and the $\alpha$-divergences for $\alpha>1$.
\end{remark}
\begin{proof}
We need to show that
\begin{align}
D_f^\Gamma(Q\|P)\geq E_Q[ g] -(\nu+E_P[f^*( g-\nu)])
\end{align}
for all $\nu\in\mathbb{R}$. Note that we have assumed $f^*$ is bounded below by some $D\in\mathbb{R}$, hence $E_P[f^*( g-\nu)]$ exists in $(-\infty,\infty]$.
If $E_P[f^*( g-\nu)]=\infty$ then the claim is trivial, so for the remainder of this proof we suppose $f^*( g-\nu)\in L^1(P)$. The assumptions on $g$ allow us to use the dominated convergence theorem to conclude $E_Q[ g_n]\to E_Q[ g]$. Continuity of $f^*$ implies $f^*( g_n-\nu)\to f^*( g-\nu)$. The admissibility assumption implies $\lim_{y\to-\infty}f^*(y)<\infty$. Using this together with Lemma \ref{lemma:f_star_nondec} we see that $f^*$ is nondecreasing, hence \begin{align} D\leq f^*( g_n-\nu)\leq f^*( g-\nu)1_A+f^*(C-\nu)1_{A^c}\in L^1(P)\,. \end{align} Therefore the dominated convergence theorem implies $E_P[f^*( g_n-\nu)]\to E_P[f^*( g-\nu)]$. We have $ g_n\in \Gamma$, hence \req{eq:gen_f_def} implies \begin{align} D_f^\Gamma(Q\|P)\geq& \lim_{n\to\infty}\left(E_Q[ g_n]-(\nu+E_P[f^*( g_n-\nu)])\right)\\ =&E_Q[ g]-(\nu+E_P[f^*( g-\nu)])\,.\notag \end{align} This completes the proof. \end{proof} Using Lemma \ref{lemma:phi_unbounded}, one can augment $\Gamma$ by including any functions that satisfy the stated assumptions; this will not change the value of the supremum in \eqref{eq:gen_f_def}. Rather than formulating a general result of this type, we consider one of the most useful special cases, the set of Lipschitz functions. Other cases can be treated similarly. \begin{lemma}\label{lemma:Lip_Phi} Let $c:S\times S\to[0,\infty]$, $L\in(0,\infty)$, and define \begin{align}\label{eq:Lip_b_def} \text{Lip}_b^L(S,c)=\{ g\in C_b(S):| g(x)- g(y)|\leq L c(x,y)\text{ for all }x,y\in S\}\,. \end{align} If $c=d$ (the metric on $S$) then we use our earlier notation, $\text{Lip}_b^L(S)$, in place of $\text{Lip}_b^L(S,d)$. The set $\text{Lip}_b^L(S,c)$ is admissible and if $d\leq K c$ for some $K\in(0,\infty)$ then $\text{Lip}_b^L(S,c)$ is strictly admissible. \end{lemma} \begin{proof} Convexity is trivial. Weak convergence in $C_b(S)$ implies pointwise convergence (take $\mu=\delta_x$, $x\in S$), hence $\text{Lip}_b^L(S,c)$ is closed. Finally, if $d\leq K c$ then strict admissibility follows from the fact that $\text{Lip}_b^{L/K}(S)$ is $\mathcal{P}(S)$-determining and $\text{Lip}_b^{L/K}(S)\subset \text{Lip}_b^L(S,c)$. \end{proof} \begin{remark} For $L>0$ we have $\text{Lip}^L_b(S,c)=\{L g: g\in\text{Lip}^1_b(S,c)\}$ and so (under appropriate assumptions) Theorem \ref{thm:limit} implies the following limiting formulas: \begin{align} &\lim_{L\to\infty}D_f^{\text{Lip}^L_b(S,c)}(Q\|P)=D_f(Q\|P)\,,\\ &\lim_{L\searrow 0}L^{-1}D_f^{\text{Lip}^L_b(S,c)}(Q\|P)=W^{\text{Lip}^1_b(S,c)}(Q,P)\,.\notag \end{align} \end{remark} Using Lemma \ref{lemma:phi_unbounded} we can show that the boundedness constraint can be dropped in the formula for $D_f^{\Gamma}$ when $\Gamma=\text{Lip}_b^L(S,c)$; we exploit this fact in the numerical examples in Section \ref{sec:examples} below. \begin{theorem}\label{thm:unbounded_Lip} Let $c:S\times S\to[0,\infty]$, $L\in(0,\infty)$, and define \begin{align} \text{Lip}^L(S,c)=\{ g\in C(S):| g(x)- g(y)|\leq L c(x,y)\text{ for all }x,y\in S\}\,. \end{align} Let $f$ be admissible such that $f^*$ is bounded below. Then for $Q,P\in\mathcal{P}(S)$ we have \begin{align} \label{eq:unbounded_var_formula} D_f^{\text{Lip}_b^L(S,c)}(Q\|P)=&\sup_{ g\in\text{Lip}^L(S,c)\cap L^1(Q)}\left\{ E_Q[ g]-\Lambda_f^P[g]\right\}\,. \end{align} \end{theorem} \begin{proof} Lemma \ref{lemma:Lip_Phi} shows that $\Gamma\equiv \text{Lip}_b^L(S,c)$ satisfies the conditions of Theorem \ref{thm:f_div_inf_convolution}. Fix $ g\in \text{Lip}^L(S,c)\cap L^1(Q)$. 
For $n\in\mathbb{Z}^+$ define $ g_n=n1_{ g>n}+g1_{-n\leq g\leq n}-n1_{ g<-n}$. It is easy to see that $ g_n\in \text{Lip}_b^L(S,c)$, $ g_n\to g$, $| g_n|\leq| g|$, $ g_n\leq g1_{ g\geq 0}$. Therefore Lemma \ref{lemma:phi_unbounded} implies \begin{align}\label{eq:unbounded_var_formula_lb} D_f^\Gamma(Q\|P)\geq E_Q[ g ]-\inf_{\nu\in\mathbb{R}}\{\nu+E_P[f^*( g-\nu)]\}\,. \end{align} One inequality in \req{eq:unbounded_var_formula} follows from taking the supremum over all $ g\in \text{Lip}^L(S,c)\cap L^1(Q)$ in \req{eq:unbounded_var_formula_lb} and the reverse follows from the fact that $\text{Lip}_b^L(S,c)\subset\text{Lip}^L(S,c)\cap L^1(Q)$. \end{proof} \section{$(f,\Gamma)$-GANs}\label{sec:examples} Generative adversarial networks constitute a class of methods for `learning' a probability distribution, $Q$, via a two-player game between a discriminator and a generator (both neural networks) \cite{GAN,f_GAN,WGAN,wgan:gp,CumulantGAN:Pantazisetal}. Mathematically, most GANs can be formulated as divergence minimization problems for a divergence, $D$, that has a variational characterization $D(Q\|P)=\sup_{ g\in\Gamma} H[ g;Q,P]$. The goal is then to solve the following optimization problem: \begin{align}\label{eq:gen_GAN} \inf_{\theta\in\Theta} D(Q\|P_\theta)=\inf_{\theta\in\Theta} \sup_{ g\in\Gamma} H[ g;Q,P_\theta]&\,. \end{align} Here, $g$ is called the discriminator and $P_\theta$ is the distribution of $h_\theta(X)$, where $X$ is a random noise source and $h_\theta$, $\theta\in\Theta$, is a neural network family (the generator). The minimax problem \eqref{eq:gen_GAN} can be interpreted as a two-player zero-sum game. GANs based on the Wasserstein metric have been very successful \cite{WGAN,wgan:gp} and GANs based on the classical $f$-divergences have also been explored \cite{f_GAN}. Here we show that GANs based on the $D_f^\Gamma$ divergences, which generalize and interpolate between the above two extremes, inherit desirable properties from both IPM-GANs (e.g., Wasserstein GAN) and $f$-GANs. Specifically, we focus on the following: \begin{enumerate} \item $(f,\Gamma)$-GANs can perform well when applied to heavy-tailed distributions. This property is inherited from the classical $f$-divergences. \item $(f,\Gamma)$-GANs can perform well even when there is a lack of absolute continuity. This property is inherited from the $\Gamma$-IPMs. \end{enumerate} We focus on the cases where $f=f_\alpha$, $\alpha>1$ (see \req{eq:f_alpha_def}), and $\Gamma=\text{Lip}_b^L(\mathbb{R}^n)$ (see Lemma \ref{lemma:Lip_Phi}). We call the corresponding $(f,\Gamma)$-divergences the Lipschitz $\alpha$-divergences and will denote them by $D_{\alpha}^L$. As $\Gamma$ is closed under shifts, we can express these divergences in one of two ways (see \req{eq:Df_Gamma_no_shift}): \begin{align} D^L_{\alpha}(Q\|P)=&\sup_{ g\in \text{Lip}_b^L(\mathbb{R}^n)}\{E_Q[ g]-\Lambda_{f_\alpha}^P[g]\}\label{eq:f_alpha_nu}\\ =&\sup_{ g\in \text{Lip}_b^L(\mathbb{R}^n)}\{E_Q[ g]-E_P[f_\alpha^*( g)]\}\,.\label{eq:Df_alpha_L2} \end{align} The formula for $f_\alpha^*$ can be found in \req{eq:f_alpha_star} below. \begin{remark} Formally taking the $\alpha\to\infty$ limit of \eqref{eq:Df_alpha_L2} we arrive at what we call the Lipschitz $\infty$-divergence: \begin{align} D^L_{\infty}(Q\|P)=&\sup_{ g\in \text{Lip}_b^L(\mathbb{R}^n)}\{E_Q[ g]-E_P[\max\{ g,0\}]\}\,. 
\end{align} It is straightforward to show that $D^L_{\infty}(Q\|P)=LW(Q,P)$, where $W$ is the classical Wasserstein metric \begin{align} W(Q,P)=\sup_{ g\in \text{Lip}_b^1(\mathbb{R}^n)}\{E_Q[ g]-E_P[ g]\}\,, \end{align} though they are expressed in terms of different objective functionals (hence their performance can differ in practice). \end{remark} In numerical computations it can be inconvenient to restrict one's attention to bounded discriminators only. Fortunately, as shown in Theorem \ref{thm:unbounded_Lip} above, the equality \eqref{eq:gen_f_def} remains true when $\Gamma$ is expanded to include many unbounded $ g$'s. This fact justifies our use of unbounded discriminators (i.e., unbounded activation functions) in the following computations. As our baseline method we take the two-sided gradient-penalized Wasserstein GAN (WGAN-GP) from \cite{wgan:gp}: {\bf WGAN-GP:} \begin{align}\label{eq:Wgan} \inf_\theta\sup_{ g\in\text{Lip}(\mathbb{R}^n)}\left\{E_{Q}[ g]-E_{P_\theta}[ g]-\lambda\int(\|\nabla g\|/L-1)^2d\rho_\theta\right\}\, , \end{align} where $\lambda>0$ is the strength of the penalty regularization. Here, and below, we have relaxed the Lipschitz constraint to a gradient penalty (two-sided for WGAN-GP and one-sided otherwise; see Section \ref{sec:soft_constraint} for further discussion). We approximate the supremum over $ g$ by the supremum over a neural network family (the discriminator network). Again, the family of measures $P_\theta$ are the distributions of $X_\theta=h_\theta(X)$ where $h_\theta$ is the generator neural network, parameterized by $\theta\in\Theta$, and we let $X$ be a Gaussian noise source. Finally, we let $\rho_\theta$ be the distribution of $TX_\theta+(1-T)Y$, where $X_\theta$, $Y\sim Q$, and $T\sim \text{Unif}([0,1])$ are all independent (this choice of $\rho_\theta$ was used in \cite{wgan:gp}). We compare WGAN-GP to the Lipschitz $\alpha$-GANs and Lipschitz KL-GAN, defined based on \eqref{eq:Df_alpha_L2} and \eqref{eq:f_alpha_nu} respectively: {\bf Lipschitz $\alpha$-GAN:} \begin{align}\label{eq:Lip_alpha_GAN} \inf_{\theta\in\Theta}\sup_{ g\in\text{Lip}(\mathbb{R}^n)}\left\{E_{Q}[ g]- E_{P_\theta}[f_\alpha^*( g)]-\lambda\int\max\{0,\|\nabla g\|^2/L^2-1\}d\rho_\theta\right\}\,. \end{align} When we want to make the values of $\alpha$ and/or $L$ explicit we will refer to these as the $D_\alpha^L$-GANs. By swapping $Q$ and $P_\theta$ one obtains another family of GANs, which we call the reverse Lipschitz $\alpha$-GANs (when clarity is needed, \eqref{eq:Lip_alpha_GAN} will be called a forward GAN). We note that forward and reverse GANs can have very different properties \cite{2017arXiv170100160G}. In the case of the KL divergence one can evaluate the optimization over $\nu$ in \eqref{eq:f_alpha_nu} (see \req{eq:R_Gamma}), leading to the following: {\bf Lipschitz KL-GAN:} \begin{align}\label{eq:Lip_KL_GAN} \inf_{\theta\in\Theta}\sup_{ g\in\text{Lip}(\mathbb{R}^n)}\left\{E_{Q}[ g]-\log E_{P_\theta}[e^{ g}]-\lambda\int\max\{0,\|\nabla g\|^2/L^2-1\}d\rho_\theta\right\}\,. \end{align} { \begin{remark} For numerical purposes the GAN \eqref{eq:Lip_KL_GAN}, obtained using the representation \eqref{eq:Df_var_formula_nu}, performs significantly better than the GAN obtained from \eqref{eq:Df_var_formula}. This is due to the numerical issues inherent in computing $E_P[f^*_{KL}(g)]=E_P[\exp(g-1)]$, as compared to computing the cumulant generating function $\log E_P[\exp(g)]$; see also \cite{MINE_paper}. 
We also refer to \cite{birrell2020optimizing} for a more general perspective on finding tighter variational representations of divergences. \end{remark} } { \subsection{Statistical Estimation of $(f,\Gamma)$-Divergences} In numerical computations, we approximate the $(f,\Gamma)$-divergence by replacing expectations under $Q$ and $P$ in \eqref{eq:gen_f_def} or \eqref{eq:Df_Gamma_def2} with their $m$-sample empirical means using i.i.d. samples from $Q$ and $P$ respectively, i.e., we employ the estimator $D_f^\Gamma(Q_m\|P_m)$, whose mean satisfies \begin{align}\label{eq:f_Gamma_est} E[D_f^\Gamma(Q_m\|P_m)]=E\left[\sup_{g\in\Gamma,\nu\in\mathbb{R}}\{E_{Q_m}[g-\nu]-E_{P_m}[f^*(g-\nu)]\}\right]\,. \end{align} Note that, at fixed $g$ and $\nu$, the objective functional on the right-hand side of \eqref{eq:f_Gamma_est} is an unbiased estimator of the $(f,\Gamma)$-divergence objective functional. Including the optimization over $g$ and $\nu$ we obtain a biased estimator whose expectation is an upper bound on $D_f^\Gamma$, as shown in the lemma below. \begin{lemma}\label{lemma:bias_bound} Let $f\in\mathcal{F}_1(a,b)$, $\Gamma\subset\mathcal{M}_b(\Omega)$ be nonempty, $Q,P\in\mathcal{P}(\Omega)$, and $Q_m$, $P_m$ be empirical distributions constructed from $m$ i.i.d. samples from $Q$ and $P$ respectively. Then \begin{align} E[D_f^\Gamma(Q_m\|P_m)]\geq D_f^\Gamma(Q\|P)\,. \end{align} \end{lemma} \begin{proof} Using \eqref{eq:Df_Gamma_def2} we can compute \begin{align} E[D_f^\Gamma(Q_m\|P_m)]=&E\left[\sup_{g\in\Gamma,\nu\in\mathbb{R}}\{E_{Q_m}[g-\nu]-E_{P_m}[f^*(g-\nu)]\}\right]\\ \geq &\sup_{g\in\Gamma,\nu\in\mathbb{R}}E\left[E_{Q_m}[g-\nu]-E_{P_m}[f^*(g-\nu)]\right]\notag\\ =&\sup_{g\in\Gamma,\nu\in\mathbb{R}}\{E_{Q}[g-\nu]-E_{P}[f^*(g-\nu)]\}=D_f^\Gamma(Q\|P)\,.\notag \end{align} \end{proof} \begin{remark} As noted above, in the KL case one can evaluate the optimization over $\nu$ in \eqref{eq:f_Gamma_est}. This results in a biased objective functional due to the presence of the logarithm outside the expectation in \eqref{eq:R_Gamma}. This same issue was addressed earlier in \cite{MINE_paper}, e.g., by using sufficiently large minibatch sizes or an exponential moving average. This concern is not present in the objective functional for \eqref{eq:f_Gamma_est} (or \eqref{eq:Lip_alpha_GAN}). \end{remark} } In the following GAN examples we work with Lipschitz functions and approximate the optimization over $\text{Lip}(\mathbb{R}^n)$ by the optimization over some neural network family $g_\phi$, $\phi\in\Phi$, and estimate the expectations using the $m$-sample empirical measures $Q_m$, $P_{m,\theta}$, $\rho_{m,\theta}$, e.g., we approximate the Lipschitz $\alpha$-GAN \eqref{eq:Lip_alpha_GAN} by \begin{align}\label{eq:Lip_alpha_GAN_emp} \inf_{\theta\in\Theta}\sup_{ \phi\in\Phi}\left\{E_{Q_m}[ g_\phi]- E_{P_{m,\theta}}[f_\alpha^*( g_\phi)]-\lambda\int\max\{0,\|\nabla g_\phi\|^2/L^2-1\}d\rho_{m,\theta}\right\}\,. \end{align} Various neural network architectures are known to be universal approximators \cite{HORNIK1989359,Cybenko1989,pinkus_1999,10.5555/3295222.3295371,pmlr-v125-kidger20a}. Approximating the supremum over $ g\in \text{Lip}(\mathbb{R}^n)$ by the supremum over a finite-dimensional neural network family essentially results in a lower bound on the original, intended divergence. In the case of KL and R{\'e}nyi divergences, such an approximation scheme is known to lead to consistent estimators as the sample size and network complexity grow (see \cite{MINE_paper} and \cite{2020arXiv200703814B} respectively). 
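To make the plug-in estimator \eqref{eq:f_Gamma_est} concrete, the following is a minimal sketch (hypothetical code, with a fixed hand-chosen test function in place of a trained discriminator network) for the KL case, $f^*(y)=e^{y-1}$:
\begin{verbatim}
# A minimal sketch (not the authors' code) of the m-sample estimator in the
# KL case, f*(y) = exp(y - 1), with a fixed test function g and a scalar
# minimization over the shift nu.
import numpy as np
from scipy.optimize import minimize_scalar

def objective(g, xq, xp, nu):
    # E_{Q_m}[g - nu] - E_{P_m}[f*(g - nu)]
    return np.mean(g(xq) - nu) - np.mean(np.exp(g(xp) - nu - 1.0))

def estimate(g, xq, xp):
    # the objective is concave in nu, so a 1-d optimizer suffices
    res = minimize_scalar(lambda nu: -objective(g, xq, xp, nu))
    return -res.fun

rng = np.random.default_rng(1)
xq = rng.normal(1.0, 1.0, size=5000)   # i.i.d. samples from Q = N(1,1)
xp = rng.normal(0.0, 1.0, size=5000)   # i.i.d. samples from P = N(0,1)
g = lambda x: np.clip(x, -5.0, 5.0)    # a fixed bounded 1-Lipschitz test function
print(estimate(g, xq, xp))             # estimates the objective at this fixed g,
                                       # a lower bound on D_f^Gamma(Q||P) in the
                                       # large-sample limit
\end{verbatim}
Optimizing over a network family $g_\phi$, $\phi\in\Phi$, would replace the fixed $g$ here; at fixed $g$ the inner optimization over $\nu$ can be done with any one-dimensional method, since the objective is concave in $\nu$.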
Investigating the analogous consistency result for the $(f,\Gamma)$-divergence estimator is one avenue for future work. \subsection{$(f,\Gamma)$-GANs for Non-Absolutely-Continuous Heavy-Tailed Distributions}\label{sec:submanifold_ex} We mentioned above that the $f$-divergences are better suited to heavy-tailed distributions, as compared to the Wasserstein metric. Before demonstrating this in the context of GANs we provide a simple explicit example. Let $dQ=x^{-2}1_{x\geq 1}dx$ and $dP=(1+\delta)x^{-(2+\delta)}1_{x\geq 1}dx$ for $\delta>0$, i.e., the tail of $P$ decays faster than that of $Q$. For $\alpha>1$ we can use \req{eq:Df_def} to compute \begin{align} D_{f_\alpha}(Q\|P)=\frac{1}{\alpha(\alpha-1)(1+\delta)^{\alpha-1}}\int_1^\infty x^{\delta \alpha-(2+\delta)}dx -\frac{1}{\alpha(\alpha-1)}\,, \end{align} and so $D_{f_\alpha}(Q\|P)<\infty$ for all $\delta\in(0,1/(\alpha-1))$. On the other hand, we can use the formula for the Wasserstein metric on $\mathcal{P}(\mathbb{R})$ from \cite{doi:10.1137/1118101} to compute \begin{align}\label{eq:W_infty_example} W(Q,P)=&\int_{-\infty}^\infty|F_Q(t)-F_P(t)|dt=\int_1^\infty (t^{-1}-t^{-(1+\delta)})dt=\infty \end{align} for all $\delta>0$ ($F_P$ and $F_Q$ denote the cumulative distribution functions). This calculation suggests that Lipschitz $\alpha$-GANs may succeed for heavy-tailed distributions, even when WGAN-GP fails to converge. On the other hand the Wasserstein metric can be finite and informative even when $Q$ and $P$ are non-absolutely continuous, unlike the classical $f$-divergences \eqref{eq:Df_def}. The $(f,\Gamma)$-divergences inherit both of these strengths from the Wasserstein and $f$-divergences (see Part 1 of Theorem \ref{thm:f_div_inf_convolution}), thus allowing for the training of GANs with heavy-tailed data and in the absence of absolute continuity. We demonstrate this via the following example, where both the WGAN-GP and classical $f$-GAN (i.e., without gradient penalty) fail to converge but the $(f,\Gamma)$-GANs succeed. Here the data source, $Q$, is a mixture of four 2-dimensional t-distributions with $0.5$ degrees of freedom, embedded in a plane in 12-dimensional space. Note that this is a heavy-tailed distribution, as the mean does not exist, which suggests that WGAN will have difficulty learning it. The generator uses a 10-dimensional noise source and so the generator and data source are generally not absolutely continuous with respect to one another (the former has support equal to the full 12-dimensional space while the latter is supported on a 2-dimensional plane). This suggests one cannot use the classical $f$-GAN \cite{f_GAN}, i.e., without gradient penalty (we confirmed that it performs very poorly on this problem). The $(f,\Gamma)$-GANs allow us to address both of the above difficulties; heavy tails can be accommodated by an appropriate choice of $f$ and the lack of absolute continuity is addressed by using a $1$-Lipschitz constraint (as in the Wasserstein metric). 
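Before turning to the GAN results, here is a quick numerical check (a hypothetical script) of the explicit heavy-tailed example above, assuming the standard choice $f_\alpha(t)=(t^\alpha-1)/(\alpha(\alpha-1))$, which is consistent with the displayed value of $D_{f_\alpha}(Q\|P)$:
\begin{verbatim}
# Numerical check (hypothetical script) that D_{f_alpha}(Q||P) is finite for
# delta < 1/(alpha-1) while the Wasserstein integral diverges.  Assumed form:
# f_alpha(t) = (t^alpha - 1)/(alpha*(alpha - 1)).
import numpy as np
from scipy.integrate import quad

alpha, delta = 2.0, 0.4                 # delta < 1/(alpha - 1) = 1
# closed form: int_1^inf x^{delta*alpha - (2+delta)} dx = 1/(1 + delta - delta*alpha)
closed = (1 / (alpha * (alpha - 1) * (1 + delta) ** (alpha - 1))
          / (1 + delta - delta * alpha) - 1 / (alpha * (alpha - 1)))

# direct evaluation of int f_alpha(dQ/dP) dP, with dQ/dP = x^delta/(1 + delta)
f_alpha = lambda u: (u ** alpha - 1) / (alpha * (alpha - 1))
integrand = lambda x: f_alpha(x ** delta / (1 + delta)) * (1 + delta) * x ** -(2 + delta)
print(closed, quad(integrand, 1.0, np.inf)[0])   # agree, and both are finite

# the Wasserstein integrand t^{-1} - t^{-(1+delta)} is not integrable:
for T in (1e2, 1e4, 1e6):
    print(np.log(T) - (1 - T ** -delta) / delta)  # partial integrals grow like log T
\end{verbatim}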
\begin{figure} \centering \begin{subfigure}[b]{1\textwidth} \includegraphics[width=1\linewidth]{Figures/student_submanifold_samples_5000.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{1\textwidth} \includegraphics[width=1\linewidth]{Figures/student_submanifold_comp_samples_5000.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{.99\textwidth} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[scale=.49]{Figures/student_submanifold_R_25_5000.eps}\end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[scale=.49]{Figures/student_submanifold_R_50_5000.eps} \end{minipage} \caption{} \end{subfigure} \caption{We present generator samples and their statistical behavior from Wasserstein and reverse Lipschitz $\alpha$-GAN methods. The dataset used in training consists of 5000 samples from a mixture of four 2-dimensional t-distributions with $0.5$ degrees of freedom that are embedded in a plane in 12-dimensional space. Panel (a) shows the projection onto the 2-dimensional support plane (each column shows the result after a given number of training epochs; the solid and dashed blue ovals mark the 25\% and 50\% probability regions, respectively, of the data source, while the heat-map shows the generator samples). Panel (b) shows the generator distribution, projected onto components orthogonal to the support plane. Values concentrated around zero indicate convergence to the sub-manifold. In Panel (c) we show the fraction of generator samples, projected onto the 2D support plane of the measure, that are within the $25\%$ and $50\%$ probability regions. In this example we used gradient-penalty parameter values $\lambda=10$, $L=1$; { Wasserstein GAN was run with both 1-sided and 2-sided gradient penalties (GP-1 and GP-2 respectively).} In all cases the generator and discriminator have three fully connected hidden layers of 64, 32, and 16 nodes respectively, with ReLU activation functions. The generator uses a 10-dimensional Gaussian noise source. Each SGD iteration was performed with a minibatch size of 1000 { and 5 discriminator iterations were performed for every generator iteration}. Computations were done in TensorFlow and we used the RMSProp SGD algorithm with a learning rate of $2\times 10^{-4}$. }\label{fig:gen_alpha_GAN} \end{figure} In Figure \ref{fig:gen_alpha_GAN} below we show generator samples for Wasserstein GAN, as in \req{eq:Wgan} and \cite{wgan:gp}, and for various reverse Lipschitz $\alpha$-GANs \eqref{eq:Lip_alpha_GAN}. Specifically, panel (a) shows the projection onto the 2-dimensional support plane of $Q$ (the heat-map shows samples from the generator and the data source, $Q$, is illustrated by the blue ovals) and panel (b) shows the generator distribution, projected onto components orthogonal to the support plane. Panel (a) does not show WGAN-GP samples, as WGAN-GP failed to converge in this example; this is demonstrated in panel (b), wherein we see that the Lipschitz $\alpha$-GAN samples concentrate near the support plane (at $0$) while the WGAN-GP samples spread out away from the support plane. The classical $f$-GAN without gradient penalty \cite{f_GAN}, which we do not show here, similarly failed to converge; this is unsurprising due to the lack of absolute continuity. In short, WGAN-GP fails to converge while the Lipschitz $\alpha$-GANs perform well; some $\alpha$'s perform significantly better than others, making $\alpha$ an important hyperparameter to tune in this case. 
{ Results from a second set of runs, using a larger sample set, are shown in Figure \ref{fig:gen_alpha_GAN_2} in Appendix \ref{app:extra_figs}; the conclusions are similar.} Forward Lipschitz $\alpha$-GANs and forward Lipschitz KL-GANs all experienced blow-up and so they are not shown here. This behavior is reasonable when one considers the fact that $Q$ is heavy tailed, while $P_\theta$ is not (it is generated by pushing forward Gaussian noise by Lipschitz functions), and so $D_f(Q\|P_\theta)=\infty$, while $D_f(P_\theta\|Q)<\infty$ (see \req{eq:Df_def}). As we have already demonstrated the inability of the Wasserstein metric to compare heavy-tailed distributions (see \req{eq:W_infty_example}), it is reasonable to conjecture that the finiteness of $D_{f_\alpha}$ is key in determining the success of the $D_\alpha^L$-GAN. { Interestingly, the Lipschitz constraint also appears to be key to the convergence of the method, something one would not anticipate solely based on finiteness of the corresponding divergences. We illustrate this with Figure \ref{fig:2Dstudent_GAN} in Appendix \ref{app:extra_figs}, where we apply the same method to the mixture of four 2-dimensional t-distributions, but without the high-dimensional embedding. In this case the classical $f$-divergence is finite; however, we find that the classical $f$-GAN fails to converge (WGAN also fails), but the $(f,\Gamma)$-GANs succeed. The theoretical understanding of this behavior is an interesting question, but we will not pursue it further here.} \subsection{Strict Convexity and Enhanced Stability of $(f,\Gamma)$-GANs}\label{ex:C10} Even in the absence of heavy tails, we find that the Lipschitz $\alpha$-GANs can outperform WGAN-GP, as measured both by accuracy on quantities of interest and by improved stability. The improved stability can be motivated by a simple (formal) calculation of the Hessian of the objective functional in \req{eq:Df_Gamma_no_shift}, \begin{align}\label{eq:loss:concavity:general} H_f[g;Q,P]\equiv E_Q[ g]-E_P[f^*( g)] \end{align} (see Appendix \ref{app:Taylor} for an analysis of the objective functional in the non shift-invariant case \eqref{eq:gen_f_def}). Let $g_0\in \Gamma$ and perturb in some direction $\psi$, i.e., take a line segment $ g_\epsilon= g_0+\epsilon\psi\in\Gamma$. Then \begin{align}\label{eq:H_f_inv_hessian} \frac{d^2}{d\epsilon^2}|_{\epsilon=0} H_f[g_\epsilon;Q,P]= -E_P[(f^*)^{\prime\prime}( g_0)\psi^2]\,. \end{align} Convexity of $f^*$ implies $(f^*)^{\prime\prime}\geq 0$. If we have $(f^*)^{\prime\prime}>0$ then \eqref{eq:H_f_inv_hessian} implies the objective functional is strictly concave at $g_0$ in all directions $\psi$ that are nonzero on a set of positive $P$-probability. (For instance, for the KL divergence $f^*_{KL}(y)=e^{y-1}$, so $(f^*)^{\prime\prime}>0$ everywhere.) { This strict concavity implies that the maximization problem \eqref{eq:gen_f_def} is a strictly convex optimization problem and suggests that numerical computation of $D_f^\Gamma(Q\|P)$ via \eqref{eq:gen_f_def} may generally be more stable than computation of the $\Gamma$-IPM \eqref{eq:gen_wasserstein}, as the latter uses a linear objective functional. Indeed, in \cite{daskalakis2018training} the authors demonstrated that gradient descent/ascent dynamics (used for training of GANs) oscillate without converging to the optimum for the Wasserstein-GAN loss function \eqref{eq:gen_wasserstein} in the special case where $\Gamma$ consists of a parametric family of linear functions. 
In this case, more sophisticated algorithms such as training with optimism \cite{daskalakis2018training} or two-step extra-gradient approaches \cite{abs-1901-08511} were required to guarantee convergence. Here our $(f, \Gamma)$ interpolation replaces the optimization of a linear objective functional in the case of the $\Gamma$-IPM \eqref{eq:gen_wasserstein} with the strictly concave problem \eqref{eq:gen_f_def}. In the case of a linear discriminator space $\Gamma$, we obtain a complete theoretical justification based on the concavity calculation \eqref{eq:H_f_inv_hessian}. In particular, we consider \eqref{eq:loss:concavity:general} where \begin{align}\label{eq:loss:concavity:linear} H_f[g_\phi;Q,P] = E_Q[g_\phi]-E_P[f^*(g_\phi)]\, , \end{align} and where we assume that $\Gamma=\{g=g_\phi(x): \phi=(\phi_1, \phi_2,...,\phi_k)\in D\}$ is a parametric linear family ($D$ is a closed, convex subset of $\mathbb{R}^k$), i.e., for any constants $a_0, a_1$ and any parameter values $\phi_0$, $\phi_1$ we have \begin{equation}\label{eq:linear:Gamma} g_{a_0\phi_0+a_1\phi_1}(x)=a_0g_{\phi_0}(x)+a_1g_{\phi_1}(x)\, . \end{equation} Using \eqref{eq:linear:Gamma} and considering \eqref{eq:H_f_inv_hessian} for any $g=g_{\phi_0}$ and $\psi(x)=g_{\phi_1}$, $g_\epsilon=g_{\phi_0}+\epsilon g_{\phi_1}$, we readily have \begin{align}\label{eq:H_f_inv_hessian_linear} \phi_1^\intercal\nabla_\phi^2H_f[g_{\phi_0};Q,P]\phi_1 = \frac{d^2}{d\epsilon^2}|_{\epsilon=0} H_f[g_\epsilon;Q,P] = -E_P[(f^*)^{\prime\prime}( g_{\phi_0}) g_{\phi_1}^2]\, , \end{align} provided all expected values are finite. As in \eqref{eq:H_f_inv_hessian}, this analysis implies the strict concavity of \eqref{eq:loss:concavity:linear} with respect to the linear parametrization $\phi$. Thus our analysis covers linear spaces $\Gamma$ such as linear combinations of splines or reproducing kernel Hilbert spaces (RKHS). However, when $\Gamma$ is a family of neural networks then the $g_\phi$'s are not linear in $\phi$ and the above analysis does not apply. We will not pursue the theoretical analysis of this important case here but instead we will carry out an empirical study that explores the improved stability that (local) strict concavity would imply. } \begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[scale=.50]{Figures/C10_Lip_alpha_GAN_0002.eps} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[scale=.50]{Figures/C10_Lip_alpha_GAN_001.eps} \end{minipage} \caption{ { Comparison between Lipschitz $\alpha$-GAN and WGAN-GP (both 1 and 2-sided) on the CIFAR-10 dataset. Here we plot the inception score as a function of the number of training epochs (moving average over the last 5 data points, with results averaged over 5 runs). We also show the averaged final FID score in the legend, computed using 50000 samples from both $Q$ and $P$. The neural network architecture is as in Appendix F of \cite{wgan:gp}; in particular, it employs residual blocks. The left panel used an initial learning rate of $0.0002$ (the same as in \cite{wgan:gp}) while in the right panel we used an initial learning rate of $0.001$. Here, and in other similar tests, we find the Lipschitz $\alpha$-GANs to be significantly more stable and require less tuning of the learning rate; in particular, none of the WGAN-GP2 runs shown in the right panel were able to complete successfully. 
} }\label{fig:C10_GAN} \end{figure} In Figure \ref{fig:C10_GAN} we demonstrate both improved performance and improved stability of the Lipschitz $\alpha$-GANs, as compared to WGAN-GP, on the CIFAR-10 dataset \cite{krizhevsky2009learning}, which consists of $32\times 32$ RGB images from 10 classes. We use the same ResNet neural network architecture as in \cite[Appendix F]{wgan:gp} and focus on evaluating the benefits of simply modifying the objective functional. We employ the adaptive learning rate Adam Optimizer method \cite{kingma2014adam} { using the hyperparameter values shown in Algorithm 1 of \cite{wgan:gp} (note that in \cite{wgan:gp}, $\alpha$ denotes the learning rate parameter and should not be confused with our use of $\alpha$ for the $\alpha$-divergences)}. We show the inception score as a function of the number of training epochs; the inception score \cite{salimans2016improved} is a commonly used performance measure for evaluating the diversity of images produced by a GAN. It uses a pre-trained classifier to estimate the number of distinct classes produced by the generator and so, when applied to CIFAR-10, values closer to 10 are considered better. { In the legends we also show the final FID score achieved by each method. FID score is a performance measure that computes a distance between feature vectors of a classification model when applied to the original data, as compared to the generated samples \cite{10.5555/3295222.3295408}; a lower FID score is better.} In the left panel of Figure \ref{fig:C10_GAN} we show the results using an initial learning rate of $0.0002$; { we find a small improvement in inception score and substantial improvement in FID score when using the Lipschitz $\alpha$-GANs, as compared to WGAN-GP (either $1$ or $2$-sided).} { In this example we find the performance to be relatively insensitive to the value of $\alpha$. } In addition to the performance improvement, we find the Lipschitz $\alpha$-GANs to be far less sensitive to the choice of learning rate. In the right panel of Figure \ref{fig:C10_GAN} we show results using an initial learning rate of $0.001$; here we observe significant degradation of the performance of WGAN-GP, but only a slight impact on the Lipschitz $\alpha$-GANs. We conjecture that this increased stability is due to the strict concavity of the $(f,\Gamma)$-divergence objective functionals. { Regarding increased stability, these numerical findings, together with the analysis for a general (non-parametrized) function space $\Gamma$ in \eqref{eq:H_f_inv_hessian} and for the linear parametric case \eqref{eq:H_f_inv_hessian_linear}, provide only preliminary support for the conjecture; a dedicated analysis for general parametrized $\Gamma$'s, including nonconvex parametric families such as neural networks, is clearly needed, but we do not pursue it here. \subsubsection{Enhanced Stability and Spectral Normalization}\label{sec:SN} In \cite{miyato2018spectral} the authors showed that spectral normalization, which directly controls the Lipschitz constant of each layer of a neural network by setting the largest singular value of its weight matrix to $1$, provides enhanced stability as compared to WGAN-GP and at a lower computational cost (see Figures 1 and 2 in \cite{miyato2018spectral}). 
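In outline (a minimal sketch in hypothetical code, not the implementation of \cite{miyato2018spectral}), spectral normalization rescales each weight matrix by an estimate of its largest singular value, obtained by power iteration:
\begin{verbatim}
# A minimal sketch (hypothetical code) of spectral normalization for a single
# dense layer: divide the weight matrix by a power-iteration estimate of its
# largest singular value, so the normalized layer has Lipschitz constant ~1.
import numpy as np

def spectral_normalize(W, n_iter=5):
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):           # power iteration for the top singular pair
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                 # estimate of the largest singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(64, 32))
print(np.linalg.svd(spectral_normalize(W), compute_uv=False)[0])  # approx. 1
\end{verbatim}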
Their method, which uses the Jensen-Shannon divergence, is equivalent to \req{eq:Df_Gamma_no_shift} (i.e., they do not include an optimization over shifts as in \eqref{eq:Df_Gamma_def2}) with a change of variables $g=\log(D)$ and using a function space $\Gamma$ that consists of a neural network family with spectral normalization. In this example we use a spectral normalization function space in our method \eqref{eq:Df_Gamma_def2}; this falls under the purview of Theorem \ref{thm:general_ub} (see Table \ref{tab:related_work}). We provide empirical evidence that the improved stability they observed is at least partially due to the strict concavity of the objective functional. Specifically, we find that WGAN with spectral normalization fails to inherit this improved stability and even fails to outperform WGAN-GP. Our results demonstrate that combining spectral normalization with other (strictly concave) objective functionals can enhance stability, similar both to what was observed in \cite{miyato2018spectral} and also to what we found in Figure \ref{fig:C10_GAN}. Here we again study the case $f=f_\alpha$, denoting these methods by $D_\alpha^{SN}$; results are shown in Figure \ref{fig:C10_sn_GAN}. } \begin{figure} \centering \includegraphics[scale=.50]{Figures/C10_Lip_alpha_res_SN_GAN_0001.eps} \caption{ { A comparison between Lipschitz $\alpha$-GAN and WGAN, both using spectral normalization (SN) to enforce Lipschitz constraints. We used an initial learning rate of $0.0001$ and otherwise employed the same ResNet architecture and hyperparameters as in Figure \ref{fig:C10_GAN}. In particular, we did not attempt to further optimize the architecture when changing from a gradient penalty to SN. None of the methods performed as well as their gradient penalty counterparts from Figure \ref{fig:C10_GAN}, but note the especially poor performance of WGAN-SN. This suggests an additional robustness of our methods to the use of sub-optimal architectures and hyperparameters. }}\label{fig:C10_sn_GAN} \end{figure} \section{Conclusion}\label{sec:concl} We have provided a systematic and rigorous exploration of the properties of the $(f,\Gamma)$-divergences, as defined in \req{eq:Df_Gamma_intro}. This work was motivated by the need for a flexible collection of novel divergences that combine key properties from $f$-divergences and Wasserstein metrics, such as the ability to work with heavy tails and with not-absolutely continuous distributions. { A large class of proposed GANs falls under the presented mathematical framework (see Table \ref{tab:related_work}), unifying to a considerable extent the loss formulation of GANs.} We have illustrated the utility of the $(f,\Gamma)$-divergences in the training of GANs, showing both an increased domain of applicability and improved convergence stability. { The theoretical results allow for a wide range of choices of $f$ and $\Gamma$. We have shown that there are families of distributions for which the $(f,\Gamma)$-divergences are better suited than either the $f$-divergences or the $\Gamma$-IPMs. A more systematic exploration of the selection of proper $f$ and $\Gamma$ would add practical value from a practitioner's perspective, but this requires further, more elaborate experimentation, along with new theoretical insights. 
In the future we intend to further study the stability and the related statistical estimation theory, and to explore these new divergences in additional challenging settings} such as high-dimensional time-series generation, extreme-event prediction, mutual information estimation, and uncertainty quantification for heavy-tailed distributions and in the absence of absolute continuity.
{ "timestamp": "2021-09-16T02:24:42", "yymm": "2011", "arxiv_id": "2011.05953", "language": "en", "url": "https://arxiv.org/abs/2011.05953", "abstract": "We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance. We prove under which assumptions these divergences, hereafter referred to as $(f,\\Gamma)$-divergences, provide a notion of `distance' between probability measures and show that they can be expressed as a two-stage mass-redistribution/mass-transport process. The $(f,\\Gamma)$-divergences inherit features from IPMs, such as the ability to compare distributions which are not absolutely continuous, as well as from $f$-divergences, namely the strict concavity of their variational representations and the ability to control heavy-tailed distributions for particular choices of $f$. When combined, these features establish a divergence with improved properties for estimation, statistical learning, and uncertainty quantification applications. Using statistical learning as an example, we demonstrate their advantage in training generative adversarial networks (GANs) for heavy-tailed, not-absolutely continuous sample distributions. We also show improved performance and stability over gradient-penalized Wasserstein GAN in image generation.", "subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)", "title": "$(f,Γ)$-Divergences: Interpolating between $f$-Divergences and Integral Probability Metrics", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759593358227, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7079405598712689 }
https://arxiv.org/abs/1403.6988
The adjacent sides of hyperbolic Lambert quadrilaterals
We prove sharp bounds for the product and the sum of the hyperbolic lengths of a pair of hyperbolic adjacent sides of hyperbolic Lambert quadrilaterals in the unit disk. We also show the Hölder convexity of the inverse hyperbolic sine function, which is involved in hyperbolic geometry.
\section{Introduction} Given a pair of points in the closure of the unit disk ${\mathbb B}^2\,,$ there exists a unique hyperbolic geodesic line joining these two points. Hyperbolic lines are simply sets of the form $C \cap {\mathbb B^2}$ where $C$ is a circle perpendicular to the unit circle, or a Euclidean diameter of $\mathbb{B}^2\,.$ For a quadruple of points $\{a,b,c,d\}$ in the closure of the unit disk we can draw these hyperbolic lines through each of the four pairs of points $\{a,b\},$ $\{b,c\},$ $\{c,d\},$ and $\{d,a\}\,.$ If these hyperbolic lines bound a domain $D \subset {\mathbb B}^2$ such that the points $\{a,b,c,d\}$ are in the positive order on the boundary of the domain $D\,,$ then we say that the quadruple of points $\{a,b,c,d\}$ determines a hyperbolic quadrilateral $Q(a,b,c,d)\,$ and that the points $a,b,c,d$ are its vertices. A hyperbolic quadrilateral with angles equal to $\pi/2, \pi/2, \pi/2, \phi\,(0\leq \phi < \pi/2)\,,$ is called a hyperbolic {\em Lambert} quadrilateral \cite[p.~156]{be}, see Figure 1. Observe that one of the vertices of a Lambert quadrilateral may be on the unit circle, in which case the angle at that vertex is $\phi=0\,.$ In \cite[Theorems 1.1 and 1.2]{vw}, the authors gave the sharp bounds of the product and the sum of two hyperbolic distances between the opposite sides of hyperbolic Lambert quadrilaterals in the unit disk. By \cite[Proposition 3.3]{vw}, we know that the above two hyperbolic distances are also the lengths of a pair of adjacent sides of hyperbolic Lambert quadrilaterals (see $d_1$, $d_2$ in Figure 1). Therefore, it is natural to raise the problem: how does the remaining pair of adjacent sides behave (see $d_3$, $d_4$ in Figure 1)? This is the motivation of this paper, and we will find the sharp bounds of the product and the sum of this other pair of hyperbolic adjacent sides of hyperbolic Lambert quadrilaterals in the unit disk. The main results of this paper are formulated as follows. \begin{theorem}\label{Lamdd} Let $Q (v_a\,,v_b\,,v_c\,,v_d)$ be a hyperbolic Lambert quadrilateral in $\mathbb{B}^2$ and let the quadruple of interior angles $(\frac{\pi}{2}\,,\frac{\pi}{2}\,,\phi\,,\frac{\pi}{2})$, $\phi\in[0, \pi/2)\,,$ correspond to the quadruple $(v_a\,,v_b\,,v_c\,,v_d)$ of vertices. Let $d_3=\rho(v_c\,,v_b)\,,$ $d_4=\rho(v_c\,,v_d)$ (see Figure \ref{Lamb}), and let $s={\rm th} \rho(v_a,v_c)\in(0,1)$, where $\rho$ is the hyperbolic metric in the unit disk. Then \begin{eqnarray*} d_3d_4\leq \left(\log \sqrt{\frac{1+s\sqrt{2-s^2}}{1-s^2}}\right)^2. \end{eqnarray*} Here equality holds if and only if $v_c$ is on the bisector of the interior angle at $v_a$. \end{theorem} \begin{theorem}\label{Lamdad} Let $Q(v_a\,,v_b\,,v_c\,,v_d)$\,, $d_3$, $d_4$ and $s$ be as in Theorem \ref{Lamdd}. Then \begin{eqnarray*} \log \sqrt{\frac{1+s}{1-s}}<d_3+d_4\le 2\log \sqrt{\frac{1+s\sqrt{2-s^2}}{1-s^2}}. \end{eqnarray*} Equality holds on the right-hand side if and only if $v_c$ is on the bisector of the interior angle at $v_a$. 
\end{theorem} \begin{figure}[h] \centering \includegraphics[width=7.5cm]{fig1.pdf} \caption{\label{Lamb}A hyperbolic Lambert quadrilateral in $\mathbb{B}^2$} \end{figure} We denote the other two sides of $Q(v_a\,,v_b\,,v_c\,,v_d)$ by $$d_1=\rho(v_a\,,v_b)\,\,\,\,{\rm and}\,\,\,\,d_2=\rho(v_a\,,v_d).$$ In a Lambert quadrilateral, the angle $\phi$ is related to the lengths $d_1$, $d_2$ of the sides ``opposite'' to it as follows \cite[Theorem 7.17.1]{be}: $${\rm sh}\, d_1{\rm sh}\, d_2=\cos\phi.$$ See also the recent paper of A.~F.~Beardon and D.~Minda \cite[Lemma 5]{bm}. In \cite[Corollary 1.3]{vw}, M. Vuorinen and G.-D. Wang provided a connection between $d_1$, $d_2$ and $s={\rm th} \rho(v_a,v_c)$ as follows: \begin{eqnarray}\label{d1d2th} {\rm th}^2\,d_1+{\rm th}^2\,d_2=s^2. \end{eqnarray} Proposition \ref{d3d4for} (in Section 2) yields the following corollary, which provides a connection between $d_3$, $d_4$ and $s={\rm th} \rho(v_a,v_c)$. \begin{corollary} Let $s$, $d_3$ and $d_4$ be as in Theorem \ref{Lamdd}. Then \begin{eqnarray}\label{d3d4sh} {\rm sh}^2\,d_3+{\rm sh}^2\,d_4=\frac{s^2}{1-s^2}. \end{eqnarray} \end{corollary} By \eqref{d1d2th} and \eqref{d3d4sh}, we get the following equality $$\frac{1}{{\rm th}^2\,d_1+{\rm th}^2\,d_2}-\frac{1}{{\rm sh}^2\,d_3+{\rm sh}^2\,d_4}=1,$$ which shows the relation between the four sides of the hyperbolic Lambert quadrilateral $Q(v_a\,,v_b\,,v_c\,,v_d)$. This paper is organized as follows. In Section 2, the notation and facts on the hyperbolic metric are stated, and some lemmas on the inverse hyperbolic trigonometric functions are proved. Section 3 is devoted to the H\"older convexity of the inverse hyperbolic sine function. The main results are proved in Section 4. \section{Preliminaries} It is assumed that the reader is familiar with basic definitions of geometric function theory, see e.g. \cite{be,v}. We recall here some basic information on hyperbolic geometry \cite{be}. The chordal distance is defined by \begin{equation}\label{q} \left\{\begin{array}{ll} q(x,y)=\frac{|x-y|}{\sqrt{1+|x|^2}\sqrt{1+|y|^2}},&\,\,\, x\,,y\neq\infty\\ q(x,\infty)=\frac{1}{\sqrt{1+|x|^2}},&\,\,\, x\neq\infty, \end{array}\right. \end{equation} for $x,y\in\overline{\mathbb{R}^2}$. For an ordered quadruple $a,b,c,d$ of distinct points in $\overline{\mathbb{R}^2}$ we define the absolute ratio by $$|a,b,c,d|=\frac{q(a,c)q(b,d)}{q(a,b)q(c,d)}.$$ It follows from (\ref{q}) that for distinct points $a,b,c,d\in \mathbb{R}^2$ \begin{equation}\label{crossratio} |a,b,c,d|=\frac{|a-c||b-d|}{|a-b||c-d|}. \end{equation} The most important property of the absolute ratio is M\"obius invariance, see \cite[Theorem 3.2.7]{be}, i.e., if $f$ is a M\"obius transformation, then $$|f(a),f(b),f(c),f(d)|=|a,b,c,d|,$$ for all distinct $a,b,c,d$ in $\overline{\mathbb{R}^2}$. For a domain $G\varsubsetneq \mathbb{R}^2$ and a continuous weight function $w: G\rightarrow(0,\infty)\,,$ we define the weighted length of a rectifiable curve $\gamma\subset G$ to be $$l_w(\gamma)=\int_{\gamma}w(z)|dz|$$ and the weighted distance between two points $x,y \in G $ by $$d_w(x,y)=\inf_{\gamma}l_w(\gamma),$$ where the infimum is taken over all rectifiable curves in $G$ joining $x$ and $y$ ($x=(x_1,x_2),\,y=(y_1,y_2)$). It is easy to see that $d_w$ defines a metric on $G$ and $(G,d_w)$ is a metric space. 
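As a quick numerical sanity check of this construction (a hypothetical script, not part of the paper), one can integrate the hyperbolic weight $w_{\mathbb{B}^2}(x)=2/(1-|x|^2)$, recalled in the next paragraph, along the segment from $0$ to $te_1$; since this segment is a hyperbolic geodesic, the resulting weighted length is exactly $d_w(0,te_1)$ and matches \eqref{arth} below.
\begin{verbatim}
# Numerical sanity check (hypothetical script): the weighted length of the
# straight segment from 0 to t*e_1 under the hyperbolic weight
# w(x) = 2/(1 - |x|^2) equals 2*artanh(t).
import numpy as np
from scipy.integrate import quad

w = lambda u: 2.0 / (1.0 - u ** 2)       # hyperbolic weight along the diameter
for t in (0.1, 0.5, 0.9):
    length, _ = quad(w, 0.0, t)          # l_w of the segment [0, t*e_1]
    print(length, 2.0 * np.arctanh(t))   # the two values agree
\end{verbatim}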
We say that a curve $\gamma: [0,1]\rightarrow G$ is a geodesic joining $\gamma(0)$ and $\gamma(1)$ if for all $t\in (0,1)$, we have $$d_w(\gamma(0),\gamma(1))=d_w(\gamma(0),\gamma(t))+d_w(\gamma(t),\gamma(1)).$$ The hyperbolic distance in $\mathbb{H}^2$ and $\mathbb{B}^2$ is defined in terms of the weight functions $w_{\mathbb{H}^2}(x)=1/{x_2}$ and $w_{\mathbb{B}^2}(x)=2/{(1-|x|^2)}\,,$ respectively. We also have the corresponding explicit formulas \begin{equation}\label{cosh} \cosh\rho_{\mathbb{H}^2}(x,y)=1+\frac{|x-y|^2}{2x_2y_2} \end{equation} for all $x,y\in \mathbb{H}^2$ \cite[p.35]{be}, and \begin{equation}\label{sh} \rho_{\mathbb{B}^2}(x,y)=2{\rm arsh}\,\frac{|x-y|}{\sqrt{(1-|x|^2)(1-|y|^2)}} \end{equation} for all $x,y\in \mathbb{B}^2$ \cite[p.40]{be}. In particular, for $t\in(0,1)$, \begin{equation}\label{arth} \rho_{\mathbb{B}^2}(0,t e_1)=\log\frac{1+t}{1-t}=2\,{\rm arth}\, t. \end{equation} There is a third equivalent way to express the hyperbolic distances. Let $G\in\{\mathbb{H}^2,\mathbb{B}^2\}$, $x,y\in{G}$ and let $L$ be an arc of a circle perpendicular to $\partial G$ with $x,y\in L$ and let $\{x_*,y_*\}=L\cap\partial G$, the points being labelled so that $x_*, x, y, y_*$ occur in this order on $L$. Then by \cite[(7.26)]{be} \begin{equation}\label{rho} \rho_G(x,y)=\sup\{\log|a,x,y,b|:a,b\in\partial G\}=\log|x_*,x,y,y_*|. \end{equation} The hyperbolic distance remains invariant under M\"obius transformations of $G$ onto $G'$ for $G,\,G'\in\{\mathbb{H}^2,\mathbb{B}^2\}$. Hyperbolic geodesics are arcs of circles which are orthogonal to the boundary of the domain. More precisely, for $a,b\in \mathbb{B}^2$ (or $\mathbb{H}^2)$, the hyperbolic geodesic segment joining $a$ to $b$ is an arc of a circle orthogonal to $S^1$ (or $\partial \mathbb{H}^2)$. In a limiting case the points $a$ and $b$ are located on a Euclidean line through $0$ (or located on a normal of $\partial \mathbb{H}^2$), see \cite{be}. Therefore, the points $x_*$ and $y_*$ are the end points of the hyperbolic geodesic. For any two distinct points the hyperbolic geodesic segment is unique (see Figures \ref{h2} and \ref{b2}). For basic facts about hyperbolic geometry we refer the interested reader to \cite{a}, \cite{be} and \cite{kl}. \medskip \begin{figure}[h] \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=8cm]{fig3.pdf} \caption{\label{h2} Hyperbolic geodesic segments in $\mathbb{H}^2$} \end{minipage} \hfill \hspace{1cm} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=6cm]{fig4.pdf} \caption{\label{b2} Hyperbolic geodesic segments in $\mathbb{B}^2$} \end{minipage} \end{figure} \medskip By \cite[Exercise 1.1.27]{k} and \cite[Lemma 2.2]{kv}, for $x\,,y \,\in \mathbb{R}^2\setminus\{0\}$ such that $0,x,y$ are noncollinear, the circle $S^1(a,r_a)$ containing $x, y$ is orthogonal to the unit circle, where \begin{equation}\label{orar} a =i\frac{y(1+|x|^2)-x(1+|y|^2)}{2(x_2y_1-x_1y_2)}\quad{\rm and}\quad r_a=\frac {|x-y|\big|x|y|^2-y\big|}{2|y||x_1y_2-x_2y_1|}\,. \end{equation} \medskip The following {\em monotone form of l'H\^opital's rule} is useful in deriving monotonicity properties and obtaining inequalities. See the extensive bibliography of \cite{avz}. \begin{lemma} \label{lhr}{\rm \cite[Theorem 1.25]{avv1}}. For $-\infty<a<b<\infty$, let $f,\,g: [a,b]\rightarrow \mathbb{R}$ be continuous on $[a,b]$, and be differentiable on $(a,b)$, and let $g'(x)\neq 0$ on $(a,b)$. 
If $f'(x)/g'(x)$ is increasing (decreasing) on $(a,b)$, then so are \begin{eqnarray*} \frac{f(x)-f(a)}{g(x)-g(a)}\,\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,\,\,\frac{f(x)-f(b)}{g(x)-g(b)}. \end{eqnarray*} If $f'(x)/g'(x)$ is strictly monotone, then the monotonicity in the conclusion is also strict. \end{lemma} From now on we let $r'=\sqrt{1-r^2}$ for $0<r<1$. \begin{lemma}\label{lecr} Let $s\in(0,1)$, $m={s}/\sqrt{1-s^2}$ and $r\in(0,1)$. (1) The function $f_1(r)\equiv{\rm arsh}(m\,r)/ {\rm arth}(s\,r)$ is strictly decreasing with range $(1,1/\sqrt{1-s^2})$. (2) The function $f_2(r)\equiv{\rm arsh}(m\,r) {\rm arsh}(m\,r')$ is strictly increasing on $(0,\frac{\sqrt{2}}{2}]$ and strictly decreasing on $[\frac{\sqrt{2}}{2}, 1)$ with maximum value $({\rm arsh}(\frac{\sqrt{2}}{2} m))^2$. (3) The function $f_3(r)\equiv{\rm arsh}(m\,r)+{\rm arsh}(m\,r')$ is strictly increasing on $(0,\frac{\sqrt{2}}{2}]$ and strictly decreasing on $[\frac{\sqrt{2}}{2}, 1)$ with range $({\rm arsh}\,m,\,2{\rm arsh}\,(\frac{\sqrt{2}}{2} m)]$. \end{lemma} \begin{proof} (1) Let $f_{11}(r)={\rm arsh}(m\,r)$ and $f_{12}(r)={\rm arth}(s\,r)$. Then $f_{11}(0^+)=f_{12}(0^+)=0$. By differentiation, $$ \frac{f'_{11}(r)}{f'_{12}(r)}=\frac{1-s^2r^2}{\sqrt{1-s^2r'^2}} $$ which is strictly decreasing. Hence by Lemma \ref{lhr}, $f_1$ is strictly decreasing with $f_1(0^+)=1/\sqrt{1-s^2}$ and $f_1(1^-)=1$ because $${\rm arsh}\,m=\log\sqrt{\frac{1+s}{1-s}}={\rm arth}\,s.$$ (2) By differentiation, $$f'_2(r)=\frac{m}{r'\sqrt{1+m^2r'^2}\sqrt{1+m^2r^2}}(\phi_2(r')-\phi_2(r)),$$ where $\phi_2(r)=r\sqrt{1+m^2r^2}{\rm arsh}(m\,r)$. It is clear that $\phi_2$ is strictly increasing. Therefore, we get the result. (3) By differentiation, $$f'_3(r)=\frac{m}{r'\sqrt{1+m^2r'^2}\sqrt{1+m^2r^2}}(\phi_3(r')-\phi_3(r)),$$ where $\phi_3(r)=r\sqrt{1+m^2r^2}$. It is clear that $\phi_3$ is strictly increasing and hence the result follows immediately. \end{proof} \begin{proposition}\label{d3d4for} Let $Q (v_a\,,v_b\,,v_c\,,v_d)$ be a hyperbolic Lambert quadrilateral in $\mathbb{B}^2$ and let the quadruple of interior angles $(\frac{\pi}{2}\,,\frac{\pi}{2}\,,\phi\,,\frac{\pi}{2})$, $\phi\in[0, \pi/2)\,,$ correspond to the quadruple $(v_a\,,v_b\,,v_c\,,v_d)$ of vertices. Let $d_1=\rho(v_a\,,v_b)\,,$ $d_2=\rho(v_a\,,v_d)\,,$ $d_3=\rho(v_c\,,v_b)\,,$ $d_4=\rho(v_c\,,v_d)$, and let $s={\rm th} \rho(v_a,v_c)\in(0,1)$ and $m=\frac{s}{\sqrt{1-s^2}}$. Then $$d_1={\rm arth}(s\,r),\,\,\,\,\,\,\,\,d_2={\rm arth}(s\,r'),\,\,\,\,\,\,\,\,d_3={\rm arsh}(m\,r'),\,\,\,\,\,\,\,\,d_4={\rm arsh}(m\,r),$$ where $r\in(0,1)$ is a constant. \end{proposition} \begin{proof} Since the hyperbolic distance is M\"obius invariant, we may assume that $v_a=0$, $v_b$ is on the real axis $X$, $v_d$ is on the imaginary axis $Y$ and $v_c=t e^{i\theta}$, $0<t<1$ and $0<\theta<\frac{\pi}{2}$ (see Figure \ref{Lamb}). Let $r=\cos\theta$. Then by (\ref{orar}) the circle $S^1(b,r_b)$ through $v_c$ and $\overline{v_c}$ is orthogonal to $\partial\mathbb{B}^2$, where $$ b=\frac{1+t^2}{2t\cos\theta}\,\,\,\,\,{\rm and}\,\,\,\,r_b=\frac{\sqrt{(1+t^2)^2-4t^2\cos^2\theta}}{2t\cos\theta}. $$ The formulas for $d_1$ and $d_2$ are from the proof of Theorem 1.1 in \cite{vw}, or can be derived directly from \eqref{arth}. By \eqref{sh}, we get $$ d_3=\rho(v_c,v_b)=2{\rm arsh}\,f_s(r), $$ where $$f_s(r)=\sqrt{\frac{g^2_s(r)+g^2_s(1)-2 r g_s(r)g_s(1)}{(1-g^2_s(r))(1-g^2_s(1))}}$$ and $$g_s(r)=\frac{1-\sqrt{1-s^2r^2}}{s\,r}.$$ Similarly, we get $$ d_4=\rho(v_c,v_d)=2{\rm arsh}\,f_s(r'). 
$$ By calculation, we get the equality $$f_s(r)\sqrt{1+f^2_s(r)}=\frac{m r'}{2},$$ which, since ${\rm sh}(2\,{\rm arsh}\,x)=2x\sqrt{1+x^2}$, implies ${\rm sh}\,d_3=m r'$ and ${\rm sh}\,d_4=m r$. Therefore, $$d_3={\rm arsh}(m\,r')\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,d_4={\rm arsh}(m\,r).$$ \end{proof} By Proposition \ref{d3d4for} and Lemma \ref{lecr}(1), we have the following theorem. \begin{theorem}\label{Lamd2d3} Let $Q(v_a\,,v_b\,,v_c\,,v_d)$\,, $d_1$, $d_2$, $d_3$, $d_4$ and $s$ be as in Proposition \ref{d3d4for}. Then $$d_2<d_3<\frac{1}{\sqrt{1-s^2}}d_2\,\,\,\,{\rm and}\,\,\,\,d_1<d_4<\frac{1}{\sqrt{1-s^2}}d_1.$$ \end{theorem} \begin{remark} M. Vuorinen and G.-D. Wang gave bounds for the product and sum of $d_1$ and $d_2$ \cite[Theorem 1.1 and Theorem 1.2]{vw}; combining these with Theorem \ref{Lamd2d3}, one can also obtain bounds for the product and sum of $d_3$ and $d_4$, but the resulting bounds are weaker than those of Theorem \ref{Lamdd} and Theorem \ref{Lamdad} in this paper. \end{remark} \section{The H\"older convexity for the inverse hyperbolic sine function} The inverse hyperbolic sine and tangent functions play important roles in the study of the hyperbolic metric. In \cite[Theorem 2.21]{vw}, the authors showed the H\"older convexity of the inverse hyperbolic tangent function. We will study the analogous property of the inverse hyperbolic sine function in this section. For $r,s\in(0,+\infty)$, the \emph{H\"older mean of order $p$} is defined by $$H_p(r,s)=\left(\frac{r^p+s^p}{2}\right)^{1/p} \quad\mbox{for}\quad{p\neq 0}, \quad H_0(r,s)=\sqrt{r\,s}.$$ For $p=1$, we get the arithmetic mean $A=H_1$; for $p=0$, the geometric mean $G=H_0$; and for $p=-1$, the harmonic mean $H=H_{-1}$. It is well-known that $H_p(r,s)$ is continuous and increasing with respect to $p$. Many interesting properties of H\"older means are given in \cite{bu} and \cite{hlp}. A function $f:I\to J$ is called {\it $H_{p,q}$-convex (concave)} if it satisfies $$f(H_p(r,s))\leq(\geq)H_q(f(r),f(s))$$ for all $r,s\in I$, and {\it strictly $H_{p,q}$-convex (concave)} if the inequality is strict except for $r=s$. For $H_{p,q}$-convexity of some special functions the reader is referred to \cite{avv2, avz, ba1, ba, cwzq, zwc}. \begin{lemma}\label{t1l1} Let $r\in(0,+\infty)$. (1) The function $f_1(r)\equiv\frac{{\rm arsh}\,r}{r}$ is strictly decreasing with range $(0,1)$. (2) The function $f_2(r)\equiv\frac{r(1+r^2)-\sqrt{1+r^2}\,{\rm arsh}\,r}{r^3}$ is strictly increasing with range $(2/3,1)$. \end{lemma} \begin{proof} (1) Let $f_{11}(r)={\rm arsh}\,r$ and $f_{12}(r)=r$. It is easy to see that $f_{11}(0^+)=f_{12}(0^+)=0$, and $$ \frac{f'_{11}(r)}{f'_{12}(r)}=\frac{1}{\sqrt{1+r^2}} $$ which is strictly decreasing. Hence by Lemma \ref{lhr}, $f_1$ is strictly decreasing with $f_1(0^+)=1$ and $f_1(+\infty)=\lim\limits_{r\rightarrow +\infty}f_1(r)=\lim\limits_{r\rightarrow +\infty}\frac{f'_{11}(r)}{f'_{12}(r)}=0$. \medskip (2) Let $f_{21}(r)=r(1+r^2)-\sqrt{1+r^2}\,{\rm arsh}\,r$ and $f_{22}(r)=r^3$, then $f_{21}(0^+)=f_{22}(0^+)=0$. By differentiation, we have $$ \frac{f'_{21}(r)}{f'_{22}(r)}=1-\frac 13 \frac{1}{\sqrt{1+r^2}}\frac{{\rm arsh}\,r}{r}, $$ which is strictly increasing by (1). Hence by Lemma \ref{lhr}, $f_2$ is strictly increasing with $f_2(0^+)=2/3$ and $f_2(+\infty)=\lim\limits_{r\rightarrow +\infty}f_2(r)=1$. \end{proof} \begin{lemma}\label{t1l2} For $p \in \mathbb{R}$ and $r\in(0,+\infty)$ define $$h_p(r)\equiv 1+p \sqrt{1+r^2}\frac{{\rm arsh}\,r}{r}-\frac{1}{\sqrt{1+r^2}}\frac{{\rm arsh}\,r}{r}.$$ (1) If $p\le-2$, then $h_p$ is strictly decreasing with range $(-\infty,p)$. 
(2) If $p>0$, then $h_p$ is strictly increasing with range $(p,+\infty)$. (3) If $p=0$, then $h_p$ is strictly increasing with range $(0,1)$. (4) If $-2<p<0$, then the range of $h_p$ is $(-\infty, C(p)]$, where $C(p)\equiv\sup\limits_{0<r<+\infty}{h_p(r)}\in (p\,,1)$. Moreover, $\lim\limits_{p\to-2}C(p)=-2$ and $\lim\limits_{p\to 0}C(p)=1$. \end{lemma} \begin{proof} By l'H\^opital's Rule, we get $$ \lim\limits_{r\to+\infty}\frac{\sqrt{1+r^2}{\rm arsh}\,r}{r}=\lim\limits_{r\to+\infty}\left(1+\frac{r\,{\rm arsh}\,r}{\sqrt{1+r^2}}\right)=+\infty. $$ Together with Lemma \ref{t1l1} (1), we have $h_p(0^+)=p$ and \begin{equation} h_p(+\infty)=\lim\limits_{r\rightarrow +\infty}h_p(r)= \left\{\begin{array}{ll} -\infty &\,\,\,p<0,\\ 1 &\,\,\,p=0,\\ +\infty &\,\,\,p>0. \end{array}\right. \end{equation} Next by differentiation, we have $$ h'_p(r)=\frac 1r\left(1-\frac{1}{\sqrt{1+r^2}}\frac{{\rm arsh}\, r}{r}\right)[p-f(r)], $$ where $f(r)=2-\frac{1}{1+r^2}-\frac{2}{f_2(r)}$ and $f_2(r)$ is as in Lemma \ref{t1l1} (2). Therefore, $f$ is strictly increasing from $(0,+\infty)$ onto $(-2,0)$. Hence we get (1)-(3). (4) If $-2<p<0$, since the range of $f$ is $(-2,0)$, we see that there exists exactly one point $r_0\in(0,+\infty)$ such that $p=f(r_0)$. Then $h_p$ is increasing on $(0, r_0)$ and decreasing on $(r_0,+\infty)$. Since $$ h_p(r)=1-\left(-p \sqrt{1+r^2}\frac{{\rm arsh}\,r}{r}+\frac{1}{\sqrt{1+r^2}}\frac{{\rm arsh}\,r}{r}\right)<1, $$ by the continuity of $h_p$, there is a continuous function $$C(p)\equiv\sup\limits_{0<r<+\infty}{h_p(r)}$$ with $p<C(p)<1$. Moreover, $\lim\limits_{p\to-2}C(p)=-2$ and $\lim\limits_{p\to 0}C(p)=1$. \end{proof} \begin{lemma}\label{t1l3} Let $p,q$ be real numbers and $r\in(0,+\infty)$. Let $$g_{p,q}(r)\equiv\frac{{\rm arsh}^{q-1}\,r}{r^{p-1}\sqrt{1+r^2}}.$$ (1) If $p\le-2$, then $g_{p,q}$ is strictly increasing for each $q\ge p$, and $g_{p,q}$ is not monotone for any $q<p$. (2) If $p>0$, then $g_{p,q}$ is strictly decreasing for each $q\le p$, and $g_{p,q}$ is not monotone for any $q>p$. (3) If $p=0$, then $g_{p,q}$ is strictly increasing for each $q\ge 1$, $g_{p,q}$ is strictly decreasing for each $q\le 0$, and $g_{p,q}$ is not monotone for any $0<q<1$. (4) If $-2<p<0$, then $g_{p,q}$ is strictly increasing for each $q\ge C(p)$, and $g_{p,q}$ is not monotone for any $q<C(p)$. Here $C(p)$ is the same as in Lemma \ref{t1l2}. \end{lemma} \begin{proof} By logarithmic differentiation in $r$, $$\frac{g'_{p,q}(r)}{g_{p,q}(r)}=\frac{1}{\sqrt{1+r^2}{\rm arsh}\, r}[q-h_p(r)],$$ where $h_p(r)$ is the same as in Lemma \ref{t1l2}. Hence the results follow from Lemma \ref{t1l2}. \end{proof} \medskip The following theorem studies the $H_{p,q}$-convexity of ${\rm arsh}$. \begin{theorem}\label{ath1} The inverse hyperbolic sine function ${\rm arsh}$ is strictly $H_{p,q}$-convex on $(0,\infty)$ if and only if $(p,q)\in{D_1}\cup{D_2}$, while ${\rm arsh}$ is strictly $H_{p,q}$-concave on $(0,\infty)$ if and only if $(p,q)\in{D_3}$, where $$D_1=\{(p,q)|-\infty<p<-2,\, p\le q<+\infty\},$$ $$D_2=\{(p,q)|-2\le p\le 0,\, C(p) \le q<+\infty\},$$ $$D_3=\{(p,q)|0\le p< +\infty,\,-\infty<q\le p\},$$ and $C(p)$ is a continuous function with $C(-2)=-2$, $C(0)=1$ and $p<C(p)<1$ for $-2<p<0$. \end{theorem} \begin{proof} The proof is divided into the following four cases. {\bf Case 1.} $p\neq0$ and $q\neq0$. We may suppose that $0<x\leq y<+\infty$. 
Define $$F(x,y)={\rm arsh}^{q}\left(H_p(x,y)\right)-\frac{{\rm arsh}^{q}x+{\rm arsh}^{q}y}{2}.$$ Let $t=H_p(x,y)$; then $\frac{\partial t}{\partial x}=\frac 12(\frac xt)^{p-1}$. If $x<y$, we see that $t>x$. By differentiation, we have $$\frac{\partial F}{\partial x}=\frac{q}{2}x^{p-1}\left(\frac{{\rm arsh}^{q-1}t}{t^{p-1}\sqrt{1+t^2}}-\frac{{\rm arsh}^{q-1}x}{x^{p-1}\sqrt{1+x^2}}\right).$$ \medskip {\it Case 1.1.} $p\le-2$, $q\ge{p}$ and $q\neq0$. By Lemma \ref{t1l3}(1), $\frac{\partial{F}}{\partial{x}}<0$ if $0>q\ge{p}$, and $\frac{\partial{F}}{\partial{x}}>0$ if $q>0 (\ge p)$. Then $F(x,y)$ is strictly decreasing and $F(x,y)\ge F(y,y)=0$ if $0>q\ge{p}$, and $F(x,y)$ is strictly increasing and $F(x,y)\leq F(y,y)=0$ if $q>0$. Hence we have $${\rm arsh}(H_p(x,y))\le H_q({\rm arsh}\, x,{\rm arsh}\, y)$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-convex on $(0,+\infty)$ for $(p,q)\in\{(p,q)|p\le-2,\, p\le q<0\}\cup\{(p,q)|p\le-2,\, q>0\}$. \medskip {\it Case 1.2.} $p\le-2$, $q<p$. By Lemma \ref{t1l3}(1), with an argument similar to Case 1.1, it is easy to see that ${\rm arsh}$ is neither $H_{p,q}$-concave nor $H_{p,q}$-convex on the whole interval $(0,+\infty)$. \medskip {\it Case 1.3.} $p>0$, $q\le{p}$ and $q\neq0$. By Lemma \ref{t1l3}(2), $\frac{\partial{F}}{\partial{x}}>0$ if $q<0(\le{p})$, and $\frac{\partial{F}}{\partial{x}}<0$ if $0<q\le p$. Then $F(x,y)$ is strictly increasing and $F(x,y)\le F(y,y)=0$ if $q<0$, and $F(x,y)$ is strictly decreasing and $F(x,y)\ge F(y,y)=0$ if $0<q\le p$. Hence we have $${\rm arsh}(H_p(x,y))\ge H_q({\rm arsh}\, x,{\rm arsh}\, y)$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-concave on $(0,+\infty)$ for $(p,q)\in\{(p,q)|p>0,\, q<0\}\cup\{(p,q)|p>0,\, 0<q\le p\}$. \medskip {\it Case 1.4.} $p>0$, $q>p$. By Lemma \ref{t1l3}(2), with an argument similar to Case 1.3, it is easy to see that ${\rm arsh}$ is neither $H_{p,q}$-concave nor $H_{p,q}$-convex on the whole interval $(0,+\infty)$. \medskip {\it Case 1.5.} $-2<p<0$, $q\geq{C(p)}$ and $q\neq0$. By Lemma \ref{t1l3}(4), $\frac{\partial{F}}{\partial{x}}<0$ if $0>q\geq{C(p)}$, and $\frac{\partial{F}}{\partial{x}}>0$ if $q\geq{C(p)}$ and $q>0$. Then $F(x,y)$ is strictly decreasing and $F(x,y)\ge F(y,y)=0$ if $0>q\geq{C(p)}$, and $F(x,y)$ is strictly increasing and $F(x,y)\leq F(y,y)=0$ if $q\geq{C(p)}$ and $q>0$. Hence we have $${\rm arsh}(H_p(x,y))\le H_q({\rm arsh}\, x,{\rm arsh}\, y)$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-convex on $(0,+\infty)$ for $(p,q)\in\{(p,q)|-2<p<0,\, 0>q\ge C(p)\}\cup\{(p,q)|-2<p<0,\, q\geq{C(p)}, q>0\}$. \medskip {\it Case 1.6.} $-2<p<0$, $q<{C(p)}$ and $q\neq0$. By Lemma \ref{t1l3}(4), with an argument similar to Case 1.5, it is easy to see that ${\rm arsh}$ is neither $H_{p,q}$-concave nor $H_{p,q}$-convex on the whole interval $(0,+\infty)$. \bigskip {\bf Case 2.} $p\neq0$ and $q=0$. For $0<x\leq y<+\infty$, let $$F(x,y)=\frac{{\rm arsh}^2(H_p(x,y))}{{\rm arsh}{x}\,{\rm arsh}{y}},$$ and $t=H_p(x,y)$. If $x<y$, we see that $t>x$. By logarithmic differentiation, we obtain $$\frac1{F}\frac{\partial F}{\partial x}=x^{p-1}\left(\frac{({\rm arsh}{t})^{-1}}{t^{p-1}\sqrt{1+t^2}}-\frac{({\rm arsh}{x})^{-1}}{x^{p-1}\sqrt{1+x^2}}\right).$$ \medskip {\it Case 2.1.} $p\le -2$ and $q=0(>p)$. By Lemma \ref{t1l3}(1), we have $\frac{\partial F}{\partial x}>0$ and $F(x,y)\le F(y,y)=1$. 
Hence we have $${\rm arsh}(H_p(x,y))\le\sqrt{{\rm arsh}{x}\,{\rm arsh}{y}}$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-convex on $(0,+\infty)$ for $(p,q)\in\{(p,q)|p\le -2,\, q=0\}$. \medskip {\it Case 2.2.} $p>0$ and $q=0(<p)$. By Lemma \ref{t1l3}(2), we have $\frac{\partial F}{\partial x}<0$ and $F(x,y)\ge F(y,y)=1$. Hence we have $${\rm arsh}(H_p(x,y))\ge\sqrt{{\rm arsh}{x}\,{\rm arsh}{y}}$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-concave on $(0,+\infty)$ for $(p,q)\in\{(p,q)|p>0,\, q=0\}$. \medskip {\it Case 2.3.} $-2<p<0$ and $q=0\ge C(p)$. By Lemma \ref{t1l3}(4), we have $\frac{\partial F}{\partial x}>0$ and $F(x,y)\le F(y,y)=1$. Hence we have $${\rm arsh}(H_p(x,y))\le\sqrt{{\rm arsh}{x}\,{\rm arsh}{y}}$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-convex on $(0,+\infty)$ for $(p,q)\in\{(p,q)|-2<p<0,\, q=0\ge C(p)\}$. \medskip {\it Case 2.4.} $-2<p<0$ and $q=0<C(p)$. By Lemma \ref{t1l3}(4), with an argument similar to Case 2.3, it is easy to see that ${\rm arsh}$ is neither $H_{p,q}$-concave nor $H_{p,q}$-convex on the whole interval $(0,+\infty)$. \bigskip {\bf Case 3.} $p=0$ and $q\neq0$. For $0<x\leq y<+\infty$, let $$F(x,y)={\rm arsh}^q(\sqrt{xy})-\frac{{\rm arsh}^q\,x+{\rm arsh}^q\,y}{2},$$ and $t=\sqrt{xy}$. If $x<y$, we have $t>x$. By differentiation, we obtain $$\frac{\partial F}{\partial x}=\frac{q}{2x}\left(\frac{{\rm arsh}^{q-1}t}{t^{-1}\sqrt{1+t^2}}-\frac{{\rm arsh}^{q-1}x}{x^{-1}\sqrt{1+x^2}}\right).$$ \medskip {\it Case 3.1.} $p=0$ and $q\ge 1$. By Lemma \ref{t1l3}(3), we have $\frac{\partial F}{\partial x}>0$ and $F(x,y)\le F(y,y)=0$. Hence we have $${\rm arsh}(\sqrt{xy})\leq H_q({\rm arsh}{x},\,{\rm arsh}{y})$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-convex on $(0,+\infty)$ for $(p,q)\in\{(p,q)|p=0,\, q\ge 1\}$. \medskip {\it Case 3.2.} $p=0$ and $q<0$. By Lemma \ref{t1l3}(3), we have $\frac{\partial F}{\partial x}>0$ and $F(x,y)\le F(y,y)=0$. Hence we have $${\rm arsh}(\sqrt{xy})\ge H_q({\rm arsh}{x},\,{\rm arsh}{y})$$ with equality if and only if $x=y$. In conclusion, ${\rm arsh}$ is strictly $H_{p,q}$-concave on $(0,+\infty)$ for $(p,q)\in\{(p,q)|p=0,\, q<0\}$. \medskip {\it Case 3.3.} $p=0$ and $0<q<1$. By Lemma \ref{t1l3}(3), with an argument similar to Case 3.1 or Case 3.2, it is easy to see that ${\rm arsh}$ is neither $H_{p,q}$-concave nor $H_{p,q}$-convex on the whole interval $(0,+\infty)$. {\bf Case 4.} $p=q=0$. By Case 2.2, for all $x\,,y\in(0,+\infty)$, we have $${\rm arsh}(H_p(x,y))\ge \sqrt{{\rm arsh}{x}\,{\rm arsh}{y}},\quad\mbox{for}\quad{p>0}.$$ Letting $p\to 0^+$ and using the continuity of $H_p$ in $p$ and of ${\rm arsh}$ in $x$, we have $${\rm arsh}(H_0(x,y))\ge H_0({\rm arsh}{x},{\rm arsh}{y}).$$ Strictness for $x\neq y$ follows from Lemma \ref{t1l3}(3): $g_{0,0}$ is strictly decreasing, and $g_{0,0}(e^u)$ is precisely the derivative of $u\mapsto \log {\rm arsh}(e^u)$, so this function is strictly concave and the above inequality is strict off the diagonal. In conclusion, ${\rm arsh}$ is strictly $H_{0,0}$-concave on $(0,+\infty)$. This completes the proof of Theorem \ref{ath1}. \end{proof} \medskip Setting $p=q=1$ in Theorem \ref{ath1}, we immediately obtain the concavity of ${\rm arsh}$. \begin{corollary} The inverse hyperbolic sine function ${\rm arsh}$ is strictly concave on $(0,+\infty)$. \end{corollary} \comment{ \begin{lemma}\label{le1} Let $r\in(0,1)$. (1) The function $h_1(r)\equiv \frac{r'}{{\rm arth}\,r'}$ is strictly increasing and concave with range $(0,1)$.
(2) The function $h(r)\equiv \frac{r}{{\rm arth}\,r}+\frac{r'}{{\rm arth}\,r'}$ is strictly increasing on $(0,\frac{\sqrt 2}{2}]$ and strictly decreasing on $[\frac{\sqrt 2}{2},1)$ and concave on $(0,1)$ with range $(1, \frac{\sqrt2}{\log(\sqrt2+1)}]$. \end{lemma} \begin{proof} (1) The monotonicity and the limiting values of $h_1$ can be easily obtained by Lemma \ref{lecr}(1). Now we prove the concavity of $h_1$. By differentiation, $$h'_1(r)=\frac{r'-r^2{\rm arth}\,r'}{rr'({\rm arth}\,r')^2}$$ and \begin{eqnarray}\label{le13} h''_1(r)r^2r'^3({\rm arth}\,r')^3=2r'^2-r'{\rm arth}\,r'-r^2({\rm arth}\,r')^2\equiv\psi_1(r'). \end{eqnarray} Then $\psi_1(r)=2r^2-r{\rm arth}\,r-r'^2({\rm arth}\,r)^2$ and by differentiation \begin{eqnarray*} \psi'_1(r)=r(4-\psi_2(r)), \end{eqnarray*} where $\psi_2(r)=\frac{1}{r'^2}+3\frac{{\rm arth}\,r}{r}-2({\rm arth}\,r)^2$. Since \begin{eqnarray*} {\rm arth}\,r=\frac12\log\frac{1+r}{1-r}=\sum_{n=0}^{\infty}\frac{r^{2n+1}}{2n+1}, \end{eqnarray*} we have \begin{eqnarray*} r^2r'^4\psi'_2(r) &=&(r^4+2r^2-3){\rm arth}\,r-r^3+3r\nonumber\\ &=&\sum_{n=0}^{\infty}\frac{r^{2n+5}}{2n+1}+2\sum_{n=1}^{\infty}\frac{r^{2n+3}}{2n+1}-3\sum_{n=2}^{\infty}\frac{r^{2n+1}}{2n+1}\\ &=&\sum_{n=2}^{\infty}\frac{16(n-1)}{(2n-3)(2n-1)(2n+1)}r^{2n+1}\\ &>&0. \end{eqnarray*} Hence $\psi_2$ is increasing with $\psi_2(0^+)=4$ and $\psi_1$ is decreasing with $\psi_1(0^+)=0$. Therefore by (\ref{le13}) $h''_1$ is negative and $h'_1$ is decreasing. Then $h_1$ is concave on $(0,1)$. \medskip (2) By differentiation, $$h'(r)=f'_1(r)+h'_1(r)=f'_1(r)-\frac{r}{r'}f_1'(r'),$$ where $f_1(r)=\frac{r}{{\rm arth}\,r}=h_1(r')$. It is easy to see that $h'(\frac{\sqrt2}{2})=0$. Therefore, $h$ is strictly increasing on $(0,\frac{\sqrt 2}{2}]$ and strictly decreasing on $[\frac{\sqrt 2}{2},1)$. By (1) and Lemma \ref{lecr}(1), $h'$ is strictly decreasing and $h(0^+)=h(1^-)=1$. Hence $h$ is concave and $$1<h(r)\leq h(\frac{\sqrt2}{2})=\frac{\sqrt2}{\log(\sqrt2+1)}.$$ This completes the proof. \end{proof} \begin{lemma}\label{le2} Let $C=1-\frac{\log(\sqrt{2}+1)}{\sqrt2}\approx 0.376775 $ and $r\in(0,1)$. Let $$g(r)=\frac{r}{r'}(\frac{{\rm arth}\,r}{{\rm arth}\,r'})^{p-1}.$$ (1) $g$ is strictly decreasing if $p\leq 0$ and strictly increasing if $p\geq C$. (2) If $p\in(0,C)$, then there exists exactly one point $r_0\in(0,\frac{\sqrt2}{2})$ such that $g$ is increasing on $(0,r_0)$, $(r'_0,1)$ and decreasing on $(r_0,r'_0)$. (3) If $p\in(0,C)$, then $g(0^+)=0$ and $g(1^-)=\infty$. \end{lemma} \begin{proof} (1) By logarithmic differentiation, \begin{eqnarray*} \frac{rr'^2{\rm arth}\,r{\rm arth}\,r'}{r{\rm arth}\,r'+r'{\rm arth}\,r}\cdot\frac{g'(r)}{g(r)}&=&p-1+\frac{{\rm arth}\,r{\rm arth}\,r'}{r{\rm arth}\,r'+r'{\rm arth}\,r}\\ &=&p-[1-\frac{1}{h(r)}], \end{eqnarray*} where $h(r)$ is as in Lemma \ref{le1}(2). Since $$0<1-\frac{1}{h(r)}\leq C,$$ we see that $g$ is strictly increasing if $p\geq C$ and decreasing if $p\leq 0$. \medskip (2) If $p\in(0,C)$, then there exists exactly one point $r_0\in(0,\frac{\sqrt2}{2})$ such that $g'(r_0)=g'(r'_0)=0$ because $1-\frac{1}{h(r)}$ is increasing on $(0,\frac{\sqrt2}{2})$ and decreasing on $(\frac{\sqrt2}{2},1)$ by Lemma \ref{le1}(2). Therefore, $g'>0$ if $r\in(0,r_0)\cup(r'_0,1)$ and $g'<0$ if $r\in(r_0,r'_0)$. Hence $g$ is increasing on $(0,r_0)$, $(r'_0,1)$ and decreasing on $(r_0,r'_0)$. 
\medskip (3) Since $0<p<C<1$ and \begin{eqnarray*} \lim\limits_{r\rightarrow 0^+}\frac{({\rm arth}\,r')^{1-p}}{r^{-p}} =\lim\limits_{r\rightarrow 0^+}\frac{1-p}{p}\cdot\frac{r^p}{r'({\rm arth}\,r')^p} =0, \end{eqnarray*} we have \begin{eqnarray*} \lim\limits_{r\rightarrow 0^+}g(r)=\lim\limits_{r\rightarrow 0^+}(\frac{r}{{\rm arth}\,r})^{1-p}\cdot(\frac{1}{r'})\cdot\frac{({\rm arth}\,r')^{1-p}}{r^{-p}}=0 \end{eqnarray*} and \begin{eqnarray*} \lim\limits_{r\rightarrow 1^-}g(r)=\lim\limits_{r\rightarrow 1^-}(\frac{{\rm arth}\,r'}{r'})^{1-p}\cdot r\cdot\frac{r'^{-p}}{({\rm arth}\,r)^{1-p}}=\infty. \end{eqnarray*} This completes the proof. \end{proof} \begin{theorem}\label{th1} Let $C$ be as in Lemma \ref{le2}. Then for all $r\in(0,1)$, \begin{eqnarray}\label{th11} H_p({\rm arth}\,r,{\rm arth}\,r')\leq {\rm arth}\,(\frac{\sqrt2}{2}) \end{eqnarray} holds if and only if $p\leq 0$ and \begin{eqnarray}\label{th12} H_p({\rm arth}\,r,{\rm arth}\,r')\geq {\rm arth}\,(\frac{\sqrt2}{2}) \end{eqnarray} holds if and only if $p\geq C$. Equalities hold if and only if $r=r'=\frac{\sqrt2}{2}$ and all inequalities are sharp in both cases. \end{theorem} \begin{proof} We obtain the inequality (\ref{th11}) immediately if $p=0$ by Lemma \ref{lecr}(2). Therefore, it suffices to discuss the case $p\neq 0$. Let $$f(r)=\frac 1p\log\frac{({\rm arth}\,r)^p+({\rm arth}\,r')^p}{2}\,,\,\,p\neq 0.$$ By differentiation, \begin{eqnarray*} f'(r)=\frac{({\rm arth}\,r')^{p-1}}{rr'[({\rm arth}\,r)^p+({\rm arth}\,r')^p]}[g(r)-1], \end{eqnarray*} where $g(r)$ is as in Lemma \ref{le2} and it is easy to see that $g(\frac{\sqrt2}{2})=1$. {\bf Case 1.} $p<0$. $f$ is strictly increasing on $(0,\frac{\sqrt 2}{2})$ and strictly decreasing on $(\frac{\sqrt 2}{2},1)$ by Lemma \ref{le2}(1). Therefore, $f(r)\leq f(\frac{\sqrt2}{2})$ for all $p<0$. {\bf Case 2.} $p\geq C$. By Lemma \ref{le2}(1), $f$ is strictly decreasing on $(0,\frac{\sqrt 2}{2})$ and strictly increasing on $(\frac{\sqrt 2}{2},1)$ and hence $f(r)\geq f(\frac{\sqrt2}{2})$. {\bf Case 3.} $p\in(0,C)$. By Lemma \ref{le2}(2), there exists exactly one point $r_1\in(0,r_0)$ such that $g(r_1)=g(r'_1)=1$, where $r_0\in(0,\frac{\sqrt2}{2})$ is as in Lemma \ref{le2}(2). Then $f$ is strictly decreasing on $(0,r_1)$, $(\frac{\sqrt2}{2},r'_1)$ and strictly increasing on $(r_1,\frac{\sqrt2}{2})$, $(r'_1,1)$. Thus, $$f(r_1)=f(r'_1)<f(\frac{\sqrt2}{2}).$$ Since $f(0^+)=f(1^-)=\infty$, there exists $r_2\in(0,r_1)\cup(r'_1,1)$ such that $$f(\frac{\sqrt2}{2})<f(r_2).$$ Therefore, inequalities (\ref{th11}) or (\ref{th12}) cannot hold for all $r\in(0,1)$. This completes the proof of Theorem \ref{th1}. \end{proof} } \section{Proof of Main Results} \begin{proof}[Proof of Theorem \ref{Lamdd}] By the proof of Proposition \ref{d3d4for}, we have $$d_3d_4={\rm arsh}(m\,r){\rm arsh}(m\,r')$$ where $m$, $r$ are the same as in the proof of Proposition \ref{d3d4for}. By Lemma \ref{lecr}(2), we have $$d_3d_4\le \left({\rm arsh} \left(\frac{\sqrt 2}{2}m\right)\right)^2.$$ This completes the proof of Theorem \ref{Lamdd}. \end{proof} \begin{proof}[Proof of Theorem \ref{Lamdad}] By the proof of Proposition \ref{d3d4for}, we have $$d_3+d_4={\rm arsh}(m\,r)+{\rm arsh}(m\,r')$$ where $m$, $r$ are the same as in the proof of Proposition \ref{d3d4for}. By Lemma \ref{lecr}(3), the desired conclusion follows. \end{proof}
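\medskip \noindent As an illustrative numerical sanity check of Theorem \ref{ath1} (an aside, not a substitute for the proofs above), one may sample random points and test the defining inequality of $H_{p,q}$-convexity directly. A minimal sketch in Python, with names of our choosing:

\begin{verbatim}
import math, random

def arsh(x):
    return math.asinh(x)

def H(p, a, b):
    # Power mean of order p of a, b > 0; H_0 is the geometric mean.
    if p == 0:
        return math.sqrt(a * b)
    return ((a ** p + b ** p) / 2) ** (1 / p)

def probe(p, q, trials=20000):
    # +1: arsh(H_p(x,y)) <= H_q(arsh x, arsh y) on all samples (convexity)
    # -1: the reverse inequality on all samples (concavity)
    #  0: both directions occur (neither convex nor concave)
    le = ge = False
    for _ in range(trials):
        x = random.uniform(0.01, 100.0)
        y = random.uniform(0.01, 100.0)
        lhs, rhs = arsh(H(p, x, y)), H(q, arsh(x), arsh(y))
        le = le or lhs < rhs
        ge = ge or lhs > rhs
    if le and ge:
        return 0
    return 1 if le else -1

# (-3,-3) lies in D_1, (1,1) in D_3, and (0, 0.5) in none of the regions:
print(probe(-3, -3), probe(1, 1), probe(0, 0.5))  # expect 1, -1, 0
\end{verbatim}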
\medskip \subsection*{Acknowledgments} I wish to express my sincere gratitude to my supervisor, Professor Matti Vuorinen, whose suggestions and ideas were invaluable during my work. This research was supported by the Finnish National Doctoral Programme in Mathematics and its Applications, Turku University Foundation, and Academy of Finland project no. 268009.
{ "timestamp": "2014-03-28T01:08:02", "yymm": "1403", "arxiv_id": "1403.6988", "language": "en", "url": "https://arxiv.org/abs/1403.6988", "abstract": "We prove sharp bounds for the product and the sum of the hyperbolic lengths of a pair of hyperbolic adjacent sides of hyperbolic Lambert quadrilaterals in the unit disk. We also show the Hölder convexity of the inverse hyperbolic sine function involved in the hyperbolic geometry.", "subjects": "Metric Geometry (math.MG)", "title": "The adjacent sides of hyperbolic Lambert quadrilaterals", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759666033576, "lm_q2_score": 0.721743200312399, "lm_q1q2_score": 0.7079405592458251 }
https://arxiv.org/abs/2009.11689
Stable decompositions of coalition formation games
It is known that a coalition formation game may not have a stable coalition structure. In this study we propose a new solution concept for these games, which we call "stable decomposition", and show that each game has at least one. This solution consists of a collection of coalitions organized in sets that "protect" each other in a stable way. When sets of this collection are singletons, the stable decomposition can be identified with a stable coalition structure. As an application, we study convergence to stability in coalition formation games.
\section{Introduction} The literature on matching has recently emerged as one of the most successful, widely applied, and policy-relevant branches of economic theory: Understanding and designing mechanisms for school choice or kidney exchange have been significantly enhanced by the insights provided by a wide variety of matching models \citep[see][and references therein]{roth2018marketplaces}. From a theoretical perspective, all these problems can be formalized as coalition formation games in which each agent has preferences over the set of coalitions in which he/she may participate. Depending on the structure of the coalitions that the agents are allowed to form, these problems include hedonic games, one-sided problems such as the roommate problem, and two-sided problems running from the classical marriage problem to many-to-one matching problems with peer effects and externalities. One of the main goals of analyzing coalition formation games is to predict which coalitions will form, and the most widely studied solution to date is that of stability. A coalition structure (i.e. a set of coalitions that partitions the set of agents) is stable if it ``protects'' its coalitions in the following way: Whenever an agent has an incentive to deviate to a different coalition outside the structure, another agent has an incentive not to allow that deviation, and thus prevents its formation. However, coalition formation games, in general, may have no stable coalition structure. The question then is whether a more general solution concept that retains the appeal of stability can be provided for situations in which there are no stable coalition structures. This paper tackles that question and provides a positive answer by introducing a novel solution concept for the entire class of coalition formation games, called \emph{stable decomposition}. Our solution is based on the idea of protection, but applied in a more general framework.\footnote{The concept of stable decomposition is inspired by the notion of ``stable partition'' in roommate problems due to \cite{tan1991necessary}.} That is to say, instead of having a partition of the set of agents that protects its coalitions, as in a stable coalition structure, we study how a collection formed by special sets of coalitions protects its members in a way that resembles stability. One of the most important properties of our solution concept is that each coalition formation game has at least one stable decomposition, independently of whether a stable coalition structure exists or not. The special sets in a stable decomposition, called ``parties'', are of three different types: (i) a set consisting of singletons; (ii) sets consisting of a unique non-single coalition; and (iii) sets consisting of several non-single coalitions, called ``ring components''. Ring components exhibit a robust cyclical behavior with respect to the (unanimous) preference of the agents, by which one coalition is preferred to another if the first is preferred to the second by each agent at the intersection of the two coalitions. The way in which a stable decomposition protects its parties depends on the party at hand. When the party is a ring component or a set consisting of a non-single coalition, protection of the party by the stable decomposition implies that if some agents of the party want to form a coalition that is not in the party, then another party of the decomposition prevents such a coalition from being formed.
When the party is the set consisting of singletons, protection by the stable decomposition means that no party of the decomposition prevents the formation of non-single coalitions consisting of agents in the set of singletons. We find that a stable decomposition with no ring components can be identified with a stable coalition structure. Also, any stable coalition structure can be interpreted as a special type of stable decomposition, so our new solution concept identifies stable coalition structures when they exist. An important feature of our solution concept is its close link to the known solution concept of the ``absorbing set''.\footnote{As far as we know, \cite{schwartz1970possibility} was the first to introduce this notion for collective decision making problems.} To formalize this solution and describe its relation to the stable decomposition solution, we first need to determine a dynamic process by means of a domination relation between coalition structures. In this paper, we adopt the standard (myopic) dynamic process, which starts from a non-stable coalition structure and forms a new one containing a coalition of better-off agents, in which abandoned agents become single and all other coalitions remain unchanged. Given this dynamic process, an absorbing set is a minimal collection of coalition structures which, once entered through this dynamic process, is never left. A known result in the literature is that every coalition formation game has at least one absorbing set \citep[see][]{shenoy1979coalition}. In particular, a stable coalition structure is not dominated by any other coalition structure, so it coincides with a trivial absorbing set (i.e. a singleton). By contrast, a non-trivial absorbing set is formed by several coalition structures that exhibit a cyclical behavior. The main result of the paper presents a one-to-one correspondence between stable decompositions and absorbing sets: Each stable decomposition generates an absorbing set and, conversely, each absorbing set generates a stable decomposition (Theorem \ref{decomposition}). As a result, each coalition formation game has a stable decomposition (Corollary \ref{corolario existencia}). The cyclical behavior of coalitions in agents' preferences has been studied through the concept of a ``ring of coalitions''. However, the notion of a ring is not sufficiently robust, for two reasons: (i) a ring with disjoint coalitions does not always generate such behavior; and (ii) rings can overlap. To incorporate such subtleties, in our definition of stable decomposition we introduce the notion of a ring component, which can be seen as the union of overlapping rings that have a joint cyclical behavior. An auxiliary result of our paper, which is of interest in itself, is the relation between the cyclical behavior of coalitions (rings) and the cyclical behavior of coalition structures, which we call ``cycles''. We prove that there is a ring of coalitions if and only if there is a cycle of coalition structures (Theorem \ref{Th ring cycle}). We further present an algorithm that constructs a ring of coalitions, given a cycle of coalition structures. Although we establish a one-to-one correspondence between the stable decompositions and the absorbing sets of the game (Corollary \ref{bijection}), we claim that ours is the more compelling solution concept.
A stable decomposition is a simpler object because (unlike absorbing sets) its definition depends only on agents' preferences and not on coalition structures, and thus does not require a dynamic process between coalition structures to be established. Furthermore, the notion of stable decomposition has more explanatory power than that of an absorbing set: it identifies the sets of non-single coalitions (the ring components) responsible for the cyclical behavior between coalition structures. Finally, we present some applications of stable decompositions in different models. In roommate problems, the notion of stable partition, which is useful for identifying the existence of stable matchings, is not sufficient to induce an absorbing set. \cite{inarra2013absorbing} identify the stable partitions that do this job, and call them maximal stable partitions. We show that our proposed solution coincides with a maximal stable partition (Proposition \ref{propisition3}). In marriage problems, we show that no stable decomposition has a ring component, providing an alternative proof that the set of stable matchings is non-empty (Proposition \ref{propisiton4}). One significant application that we discuss is the problem of convergence to stability. From a market design point of view, this formalizes particular dynamics of coalition formation that may emerge in the absence of a central planner. In cases where decentralized decision making in itself may not suffice to bring about a stable outcome, a centralized coordinating process must be imposed to that end. Decentralized processes can be formalized through the aforementioned dynamic process among coalition structures. A coalition formation game exhibits convergence to stability if, starting from any coalition structure, the dynamic process always leads towards a stable coalition structure. We find that a stable coalition formation game exhibits convergence to stability if and only if none of its stable decompositions has a ring component (Proposition \ref{proposition5}). This means that marriage and stable roommate problems exhibit convergence to stability (Corollaries \ref{corol conv en marriage} and \ref{corol conv en roommate}). \bigskip \noindent \textbf{Related literature} \noindent Our paper is related to several papers in the cooperative game theory and matching literature. The concept of an absorbing set has appeared in earlier publications in different contexts and under different names. As stated above, \cite{schwartz1970possibility} was the first to introduce this notion. \cite{shenoy1979coalition,shenoy1980dynamic} proposed it under the name of ``elementary dynamic solution'' for $n$-person cooperative games. \cite{jackson2002evolution} use the name ``closed cycles'' and \cite{olaizola2014asymmetric} that of absorbing sets, and apply this solution to the network problem.\footnote{The union of absorbing sets gives the ``admissible set'' \citep{kalai1977admissible}, a solution defined for abstract systems and applied to various bargaining situations. Recently, \cite{demuynck2019myopic} define the ``myopic stable set'' in a very general class of social environments and study its relation to other solution concepts.} Absorbing sets are analyzed by \cite{inarra2013absorbing} for the roommate problem, showing that a roommate problem has either trivial absorbing sets (stable matchings) or non-trivial absorbing sets. However, this dichotomy does not apply to most coalition formation games, where trivial and non-trivial absorbing sets frequently coexist.
It is clear that the definition of an absorbing set depends on what dynamic process is chosen. For instance, following \cite{knuth1976marriages} for the marriage problem, \cite{tamura1993transformation} considers problems with equal numbers of men and women, all of them mutually acceptable, in which all agents are always matched. Unlike the standard blocking dynamics, this dynamic process assumes that when a blocking pair is satisfied, the two abandoned partners also match with each other. \cite{knuth1976marriages} poses the question of whether there is convergence to stability in this model and \cite{tamura1993transformation} gives a counter-example in which some matchings cannot converge to any stable matching. The example shows the coexistence of five absorbing sets of cardinality one and one of cardinality sixteen. Numerous papers have studied whether there are decentralized matching markets that converge to stability.\footnote{There is also an unpublished manuscript by \cite{papai2003random} that addresses this problem.} \cite{roth1990random} introduce a process for studying convergence to stability for the marriage problem and \cite{chung2000existence} generalizes that process to the roommate problem with weak preferences. Later, \cite{klaus2005stable} extend it to many-to-one matching with couples and \cite{kojima2008random} to many-to-many matching problems. \cite{eriksson2008instability} show that a stable matching can be attained by means of a decentralized market, even in cases of incomplete information. Following a different approach, \cite{diamantoudi2004random} analyze convergence to stability in the stable roommate problem with strict preferences. The rest of the paper is organized as follows. Section \ref{section preliminares} presents the preliminaries, the dynamics of the domination relation and the notion of absorbing set. Section \ref{section stable decomposition} introduces the notion of the ring component and sets out the definition of stable decomposition. This notion enables us to establish our characterization of the solution in terms of absorbing sets. We also analyze the relation between rings of coalitions and cycles of coalition structures. Section \ref{section aplications} contains applications of our results to the marriage and roommate problems and to convergence to stability. Some concluding remarks are presented in Section \ref{section concluding viejas}. Finally, Appendix \ref{apendice} contains some lemmata and proofs omitted in the main text. \section{Preliminaries}\label{section preliminares} In this section we present the notation, the domination relation and the absorbing sets induced by it. Let $N=\{1,\ldots,n\}$ be a finite set of \emph{agents}. A non-empty subset $C$ of $N$ is called a \emph{coalition}. Each agent $i\in N$ has a strict, transitive \emph{preference relation} over the set of coalitions to which he/she may belong, denoted by $\succ_{i}.$ Given coalitions $C$ and $C'$, when agent $i\in C\cap C^{\prime}$ prefers coalition $C$ to $C^{\prime}$ we write $C\succ_{i}C'$. We say that $C$ \emph{is (unanimously) preferred to} $C'$, and write $C\succ C'$, if $C\succ_i C'$ for each $i\in C' \cap C$. A preference profile $\succ_{N}=(\succ_{i})_{i\in N}$ defines a \emph{coalition formation game}, which is denoted by $(N,\succ_N)$.
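\medskip \noindent To fix ideas, these primitives are easy to represent computationally. The following minimal sketch (in Python; the encoding, the toy profile, and all names are ours, purely illustrative and not part of the model) stores each agent's strict ranking, best first, over the coalitions containing him/her, and implements the unanimous preference $C\succ C'$:

\begin{verbatim}
# A coalition is a frozenset of agents. pref[i] lists the coalitions
# containing agent i in strictly decreasing order of preference;
# coalitions an agent ranks below his/her singleton may be omitted.
pref = {
    1: [frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({1})],
    2: [frozenset({2, 3}), frozenset({1, 2, 3}),
        frozenset({1, 2}), frozenset({2})],
    3: [frozenset({3, 4}), frozenset({1, 2, 3}),
        frozenset({2, 3}), frozenset({3})],
    4: [frozenset({3, 4}), frozenset({4})],
}

def prefers(i, C, D):
    # True if agent i strictly prefers coalition C to coalition D.
    return pref[i].index(C) < pref[i].index(D)

def unanimously_preferred(C, D):
    # C > D: every agent in C & D strictly prefers C to D
    # (vacuously true when C and D are disjoint).
    return all(prefers(i, C, D) for i in C & D)
\end{verbatim}

\noindent For instance, \texttt{unanimously\_preferred} applied to $\{3,4\}$ and $\{2,3\}$ returns \texttt{True} in this toy profile, since agent $3$, the only common member, ranks $\{3,4\}$ higher.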
Let $\mathcal{S}=\{\{i\}: i \in N\}$ be the set of singletons and $\mathcal{K}=\{C\in 2^N \setminus \mathcal{S}:C\succ_i\{i\}$ for each $i\in C\}$ be the set of \textit{permissible} coalitions of game $(N,\succ_N)$.\footnote{Throughout the paper $2^N$ denotes the collection of non-empty subsets of $N$.} Let $\Pi$ denote the set of partitions of $N$ formed by permissible coalitions or singletons, which we call \emph{coalition structures.} A generic element of $\Pi $ is denoted by $\pi.$ For each $\pi \in \Pi,$ $\pi(i)$ denotes the coalition in $\pi$ that contains agent $i.$ Let $\mathcal{B} \subseteq \mathcal{K}\cup \mathcal{S}$ be a collection of coalitions. Let $\boldsymbol{|\mathcal{B}|_\mathcal{K}}$ denote the number of non-single coalitions included in $\mathcal{B},$ i.e., $|\mathcal{B}|_{\mathcal{K}}=|\{C : C \in \mathcal{B} \cap \mathcal{K}\}|.$ A set $\mathcal{M}(\mathcal{B}) \subseteq \mathcal{B} \cap \mathcal{K}$ is \textit{maximal for} $\mathcal{B}$ if: \begin{enumerate}[(i)] \item $C \cap C'=\emptyset$ for each pair of distinct sets $C, C' \in \mathcal{M}(\mathcal{B})$. \item For each $C \in (\mathcal{B}\cap \mathcal{K}) \setminus \mathcal{M}(\mathcal{B})$ there is $C' \in \mathcal{M}(\mathcal{B})$ such that $C \cap C' \neq \emptyset.$ \end{enumerate} Notice that, for any coalition structure $\pi \in \Pi,$ there is a unique maximal set $\mathcal{M}(\pi)$ consisting of all its non-single coalitions.\footnote{Here we only consider coalition structures different from that which fulfills $\pi(i)=\{i\}$ for each $i\in N$.} Given a collection of non-single coalitions $\mathcal{B} \subset \mathcal{K}$ and a coalition $C \in \mathcal{K}\setminus \mathcal{B},$ we say that $C$ \textit{breaks} $\mathcal{B}$ if there is a maximal set $\mathcal{M(B)}$ such that: \begin{enumerate}[(i)] \item there is a coalition $C'\in \mathcal{M(B)}$ such that $C \cap C' \neq \emptyset,$ and \item $C \succ C''$ for each $C'' \in \mathcal{M(B)}$ with $C'' \cap C\neq \emptyset$. \end{enumerate} Furthermore, given a coalition structure $\pi$ and a coalition $C\in \mathcal{K}\setminus \pi$, we say that $C$ \emph{blocks} $\pi$ if $C\succ\pi(i)$ for each $i\in C$. This is equivalent to saying that either (i) $C$ breaks the collection of non-single coalitions of $\pi$; or (ii) agents in $C$ are single in $\pi$, i.e. $\pi(i)=\{i\}$ for each $i \in C.$\footnote{From now on, when $C$ breaks a collection of non-single coalitions of $\pi$, we simply say that $C$ \textit{breaks} $\pi$.} The main solution concept for a coalition formation game is that of stability, namely a coalition structure that is immune to deviations by coalitions. Formally, a coalition structure $\pi \in\Pi$ is \emph{stable} if the existence of $C\in \mathcal{K}$ and $i\in C$ such that $C\succ_i\pi(i)$ implies the existence of $j\in C$ such that $\pi(j)\succ_j C$. Hereafter, a stable coalition structure is denoted by $\pi^N.$ \subsection{The domination dynamics} As mentioned above, a stable coalition structure is immune to any coalitional deviation. But if a coalition structure is not stable, the fact that it is blocked by some coalition does not by itself specify how it transforms into a new coalition structure: there is no single way to define how the new structure emerges. If one or more agents leave a coalition, what happens to the remaining agents? Do they become singletons or do they remain together?
\cite{hart1983endogenous} argue that if a coalition is understood as an agreement of all its members and then some agents leave, the agreement breaks down and the remaining agents become singletons. In our analysis this interpretation fits well, because our modeling only considers permissible coalitions and singletons, and the coalition of abandoned agents might not be permissible once a new coalition is formed. \begin{definition}\label{domination} Let $(N, \succ_N)$ be a coalition formation game. The \textbf{domination relation} $\boldsymbol{\gg}$ over $\Pi$ is defined as follows: $\pi' \gg \pi$ if and only if there is $C \in \mathcal{K}$ such that \begin{enumerate}[(i)] \item $C \in \pi ^{\prime }$ and $C\succ \pi(i)$ for each $i\in C$, \item for each $C'\in \pi$ such that $C' \cap C \neq \emptyset$, $\pi^{\prime}(j)=\{j\}$ for each $j\in C' \setminus C,$ \item for each $C' \in \pi$ such that $C' \cap C=\emptyset $, $C' \in \pi ^{\prime }$. \end{enumerate} To stress the role of coalition $C$, we say that $\pi'$ dominates $\pi$ via $C$ and write \textbf{$\boldsymbol{\pi' \gg \pi}$ via $\boldsymbol{C}$}. \end{definition} Condition (i) says that each agent $i$ of coalition $C$ improves in $\pi'$ with respect to his/her position in $\pi$. Condition (ii) says that coalitions from which one or more agents depart break into singletons in $\pi ^{\prime }$. Condition (iii) says that the coalitions that do not suffer any departure in $\pi$ remain unchanged in $\pi ^{\prime }$. Notice that the domination relation $\gg$ implies that agents behave myopically, in the sense that they take the decision about blocking a coalition structure by considering just the resulting coalition, i.e. they are unable to foresee their positions even one step ahead.
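\medskip \noindent Continuing the illustrative sketch above (again an informal aid under our encoding, not part of the model), blocking, the transformation behind Definition \ref{domination}, and stability read as follows:

\begin{verbatim}
def coalition_of(i, pi):
    # The coalition of agent i in the structure pi (a list of frozensets).
    return next(S for S in pi if i in S)

def blocks(C, pi):
    # C blocks pi: C is not in pi and every member of C strictly
    # prefers C to his/her current coalition in pi.
    return C not in pi and all(
        prefers(i, C, coalition_of(i, pi)) for i in C)

def deviate(pi, C):
    # The structure that dominates pi via C (Definition 1): C forms,
    # agents abandoned by C's members become singletons, and the
    # coalitions untouched by C remain as they are.
    new = [C]
    for S in pi:
        if S & C:
            new.extend(frozenset({j}) for j in S - C)
        else:
            new.append(S)
    return new

def is_stable(pi, permissible):
    # No permissible coalition blocks pi.
    return not any(blocks(C, pi) for C in permissible)
\end{verbatim}

\noindent Iterating \texttt{deviate} along blocking coalitions is precisely the myopic dynamic process whose absorbing sets we study next.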
\begin{remark} The domination relation $\gg$ is irreflexive, antisymmetric and not necessarily transitive. \end{remark} Given $\gg$, let $\gg^{T}$ be the \textit{transitive closure} of $\gg$. That is, $\pi^{\prime }\gg^{T}\pi$ if and only if there is a finite sequence of coalition structures $\pi=\pi^{0},\pi^{1},\ldots,\pi^{J}=\pi^{\prime}$ such that, for all $j \in\{1,...,J\}$, $\pi^{j}\gg\pi^{j-1}$. \subsection{Absorbing sets} An absorbing set is a minimal set of coalition structures that, once entered through the domination relation, is never left. Formally, \begin{definition}\label{absorbing} Let $(N, \succ_N)$ be a coalition formation game. A non-empty set of coalition structures $\mathcal{A} \subseteq \Pi$ is an \textbf{absorbing set} whenever for each $\pi \in \mathcal{A}$ and each $\pi' \in \Pi \setminus \{\pi\},$ $$\pi' \gg^T \pi\text{ if and only if }\pi' \in \mathcal{A}.$$ \noindent If $|\mathcal{A}| \geq 3$, $\mathcal{A}$ is said to be a \textbf{non-trivial absorbing set}. Otherwise, the absorbing set is \textbf{trivial}. \end{definition} \noindent Notice that coalition structures in $\mathcal{A}$ are symmetrically connected by the relation $\gg ^{T},$ and that no coalition structure in $\mathcal{A}$ is dominated by a coalition structure not in that set. Next, we introduce a remark containing five facts about absorbing sets. \begin{remark}\label{remarkabsorbing}\ Facts on absorbing sets. \begin{enumerate}[(i)] \item Each coalition formation game has an absorbing set. \item An absorbing set $\mathcal{A}$ contains no stable coalition structure if and only if $|\mathcal{A}| \geq3.$ \item $\pi^{N}$ is a stable coalition structure if and only if $\{ \pi^{N}\}$ is an absorbing set. \item For each non-stable coalition structure $\pi \in \Pi$, there are an absorbing set $\mathcal{A}$ and a coalition structure $\pi'\in \mathcal{A}$ such that $\pi' \gg^T \pi$. \item For each absorbing set $\mathcal{A}$, either $|\mathcal{A}|=1$ or $|\mathcal{A}|\geq 3.$ \end{enumerate} \end{remark} \noindent Remark \ref{remarkabsorbing} (i) follows from Definition \ref{absorbing} and the finiteness of $\Pi,$ and (ii) is implied by the antisymmetry of $\gg$. Remark \ref{remarkabsorbing} (iii) recalls that each stable coalition structure is in itself an absorbing set. Remark \ref{remarkabsorbing} (iv) says that from any non-stable coalition structure there is a finite sequence of such structures that reaches a coalition structure of an absorbing set (this property is called outer stability in \cite{kalai1977admissible}). Remark \ref{remarkabsorbing} (v) is straightforwardly implied by (i) and (ii). \section{Stable decomposition}\label{section stable decomposition} In this section we present a new solution concept for the entire class of coalition formation games. We find that our proposed solution always exists and can be characterized in terms of absorbing sets. Furthermore, when the game has stable coalition structures, they can be identified with solutions of our type. First, we present the key ingredient for defining such a solution, which generalizes the well-known concept of a ring of coalitions. \subsection{Ring components}\label{subsection ring component and ciclos} A ring of coalitions is an ordered set of non-single coalitions that behaves cyclically, i.e., for each pair of consecutive coalitions of the ordered set the successor coalition is preferred to its predecessor. Formally, \begin{definition}\label{definicion de ring} An ordered set of non-single coalitions $(R_1, \ldots,R_J)\subseteq \mathcal{K}$, with $J\geq 3$, is a \textbf{ring} if $R_{j+1}\succ R_{j}$ for $j=1,\ldots,J$, subscripts modulo $J$. \end{definition} For the sake of convenience, we sometimes identify a ring with the non-ordered set of its coalitions, $\mathcal{R}=\{R_1, \ldots,R_J\},$ and refer to coalitions in $\mathcal{R}$ as \textit{ring coalitions}. Notice that the definition of a ring requires that all agents at the intersection of two consecutive ring coalitions be better off.\footnote{There are several ways of defining rings of coalitions. \cite{pycia2012stability} and \cite{inal2015core} define cyclicity among coalitions by requiring that only one agent at the intersection of two consecutive coalitions strictly prefer the first of them to the second. In both these definitions, unlike ours, other members of two consecutive coalitions can oppose the transition from one coalition to the next.} A ring of coalitions is not a robust enough notion because, in a coalition formation game: (i) there may be multiple rings and some of them may overlap; and (ii) some rings may have a maximal set of coalitions that cannot be broken by any other ring coalition. To address these issues, we present the notion of a ``ring component'' below.
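\medskip \noindent Before doing so, we note in passing that the ring condition itself is directly checkable in the running illustrative sketch (names ours):

\begin{verbatim}
def is_ring(R):
    # An ordered list (R_1, ..., R_J), J >= 3, of non-single coalitions
    # is a ring if R_{j+1} > R_j cyclically (subscripts modulo J).
    J = len(R)
    if J < 3 or any(len(C) < 2 for C in R):
        return False
    return all(unanimously_preferred(R[(j + 1) % J], R[j])
               for j in range(J))
\end{verbatim}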
\begin{definition}\label{defino ring component} A \textbf{ring component} $\mathcal{RC}$ is a subset of $\mathcal{K}$ with $|\mathcal{RC}|_{\mathcal{K}}\geq 3$ satisfying: \begin{enumerate}[(i)] \item $R \succ^TR'$ for each pair $R, R' \in \mathcal{RC}$ with $R\neq R'$,\footnote{Here $\succ^T$ denotes the transitive closure of relation $\succ$.} \item for each maximal set $\mathcal{M}(\mathcal{RC})$ there is $R \in \mathcal{RC} \setminus \mathcal{M}(\mathcal{RC})$ such that $R$ breaks $\mathcal{M}(\mathcal{RC}).$ \end{enumerate} \end{definition} Thus, a ring component is a collection of rings such that: (i) each coalition in the collection is transitively preferred to any other coalition in the collection; and (ii) each maximal set of coalitions of the ring component is broken by a coalition of the collection. \begin{example}\label{ejemplo ring} Consider the game given by the following table: {\small \begin{center} \ra{1.1} \begin{tabular}{@{}ccccccccccccc@{}}\toprule $\boldsymbol{1}$ && $\boldsymbol{2}$ && $\boldsymbol{3}$ && $\boldsymbol{4}$ && $\boldsymbol{5}$ && $\boldsymbol{6}$ && $\boldsymbol{7}$ \\ \cmidrule{1-1} \cmidrule{3-3} \cmidrule{5-5} \cmidrule{7-7}\cmidrule{9-9} \cmidrule{11-11}\cmidrule{13-13} $12$ && $23$ && $34$ && $467$ && $15$ && $67$&& $467$ \\ $123$ && $123$ && $123$ && $45$ && $45$ && $467$&& $67$ \\ $15$ && $12$ && $23$ && $34$ && $5$ && $6$&& $7$ \\ $1$ && $2$ && $3$ && $4$ && && && \\ \bottomrule \end{tabular} \end{center} } \noindent In this game there are two rings: $\{15,12,23,34,45\}$ and $\{15,123,34,45\}$. Ring $\{15,12,23,34,45\}$ meets both conditions of Definition \ref{defino ring component} and is thus a ring component. However, ring $\{15,123,34,45\}$ does not meet condition (ii) of Definition \ref{defino ring component} because, for instance, the maximal set $\{123,45\}$ cannot be broken. Moreover, their union satisfies condition (i) but not condition (ii) because, again, $\{123,45\}$ cannot be broken. \hfill $\Diamond$ \end{example} Two types of ring components can be distinguished according to how their maximal sets behave. We say that a ring component $\mathcal{RC}$ is \textit{simple} if for each maximal set $ \mathcal{M}(\mathcal{RC})$ and each coalition $R\in \mathcal{RC}$ such that $R$ breaks $ \mathcal{M}(\mathcal{RC})$ there is only one $R'\in \mathcal{M}(\mathcal{RC})$ such that $R\cap R'\neq \emptyset.$ Otherwise, $\mathcal{RC}$ is \emph{not simple}. The following two examples illustrate the two types of ring components. \medskip \noindent \textbf{Example 1 (Continued)} \textit{In Example \ref{ejemplo ring}, the unique ring component is $\mathcal{RC}=\{12,23,34,45,15\}$ and its maximal sets are $\{12,34\},\{12,45\},\{23,45\},\{23,15\}$, and $\{34,15\}.$ It is easy to see that each coalition in $\mathcal{RC}$ that breaks a maximal set has a non-empty intersection with only one coalition of the maximal set, so $\mathcal{RC}$ is simple. \hfill $\Diamond$}
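\medskip \noindent The notions of maximal set, breaking, and simplicity used in these examples can likewise be made concrete in the running sketch (brute force, adequate only for small instances; the helper names are ours):

\begin{verbatim}
from itertools import combinations

def maximal_sets(B):
    # Maximal sets of B: pairwise-disjoint subfamilies M such that every
    # coalition of B outside M meets some member of M.
    B, out = list(B), []
    for k in range(1, len(B) + 1):
        for M in combinations(B, k):
            disjoint = all(not (C & D) for C, D in combinations(M, 2))
            maximal = all(any(C & D for D in M)
                          for C in B if C not in M)
            if disjoint and maximal:
                out.append(set(M))
    return out

def breaks_via(C, M):
    # C breaks the maximal set M: it meets some member of M and is
    # unanimously preferred to every member it meets.
    return any(C & D for D in M) and all(
        unanimously_preferred(C, D) for D in M if C & D)

def is_simple(RC):
    # Simple: any coalition of RC that breaks a maximal set meets
    # exactly one of its members.
    return all(sum(1 for D in M if R & D) == 1
               for M in maximal_sets(RC)
               for R in RC if breaks_via(R, M))
\end{verbatim}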
\begin{example}\label{ejemplo2} Consider the game given by the following table: {\small \begin{center} \ra{1.1} \begin{tabular}{@{}ccccccccccccccc@{}}\toprule $\boldsymbol{1}$ && $\boldsymbol{2}$ && $\boldsymbol{3}$ && $\boldsymbol{4}$ && $\boldsymbol{5}$ && $\boldsymbol{6}$ && $\boldsymbol{7}$ && $\boldsymbol{8}$\\ \cmidrule{1-1} \cmidrule{3-3} \cmidrule{5-5} \cmidrule{7-7}\cmidrule{9-9} \cmidrule{11-11}\cmidrule{13-13}\cmidrule{15-15} $12$ && $23$ && $356$ && $145$ && $356$ && $678$&& $78$ && $678$\\ $145$ && $12$ && $23$ && $46$ && $145$ && $46$&& $678$&& $78$ \\ $1$ && $2$ && $3$ && $4$ && $5$ && $356$&& $7$ && $8$\\ && && && && &&$6$ && && \\ \bottomrule \end{tabular} \end{center} } \noindent The unique ring component is $\mathcal{RC}=\{145,12,23,356,46\}$ and its maximal sets are $\{145,23\},$ $\{12,356\},$ $\{12,46\},$ and $\{23,46\}$. Notice that the maximal set $\{145,23\}$ is broken by coalition $356$, which has non-empty intersection with both coalitions $145$ and $23$. Then, $\mathcal{RC}$ is not simple. \hfill $\Diamond$ \end{example} \subsection{A new general solution concept} Inspired by the idea of the ``stable partition'' introduced by \cite{tan1991necessary} for roommate problems, we next present our solution concept for general coalition formation games, which we call ``stable decomposition''. This concept applies the idea of ``protection'' not to coalitions but to special sets of coalitions that we call ``parties''. A stable decomposition, then, is a collection of parties that involves every agent of the game and that protects its parties. This means that if an external coalition breaks a party, then another party of the decomposition ``prevents the formation'' of such an external coalition. Below we give precise definitions of all the notions involved in our solution concept. Formally, a set of coalitions $\mathcal{B} \subset \mathcal{K}\cup \mathcal{S}$ is a \textit{party} if one of the following conditions holds: \begin{enumerate}[(i)] \item $\mathcal{B}\subset \mathcal{S},$ \item $\mathcal{B}\subset \mathcal{K}$ and $|\mathcal{B}|_{\mathcal{K}}=1,$ \item $\mathcal{B}$ is a ring component. \end{enumerate} Given a party $\mathcal{B}$, denote by $N(\mathcal{B})$ the set of agents that belong to (at least) one coalition in $\mathcal{B},$ that is, $N(\mathcal{B}) \equiv \bigcup_{C \in \mathcal{B}}C.$ Given a ring component $\mathcal{RC}$, its \textit{compact collection} $\mathcal{C}(\mathcal{RC})$ is defined as follows. If $\mathcal{RC}$ is simple, then $\mathcal{C}(\mathcal{RC})$ is equal to the collection of maximal sets of $\mathcal{RC}.$ If $\mathcal{RC}$ is not simple, then $\mathcal{C}(\mathcal{RC})=\{\{R\}: R \in \mathcal{RC}\}.$ Now, given a party $\mathcal{B} \subset \mathcal{K}$ and a coalition $C \in \mathcal{K} \setminus \mathcal{B}$ such that $N(\mathcal{B}) \cap C \neq \emptyset,$ we say that \textit{$\mathcal{B}$ prevents coalition $C$ from being formed} if:\footnote{From now on, if there is no confusion we sometimes write $\mathcal{B}$ with $|\mathcal{B}|_{\mathcal{K}}>1$ instead of $\mathcal{RC}$ to denote a ring component.} \begin{enumerate}[(i)] \item $\mathcal{B}=\{C'\}$ and there is an agent $i \in C'\cap C$ such that $C' \succ_i C$, or \item $\mathcal{B}$ is a ring component and for each $\mathcal{E} \in \mathcal{C}(\mathcal{B})$ there are a coalition $C'\in \mathcal{E}$ with $C'\cap C \neq \emptyset$ and an agent $i \in C'\cap C$ such that $C' \succ_i C$. \end{enumerate} Condition (i) states that when a party is formed by only one coalition, there is an agent who prefers to stay in the coalition of the party rather than move to the external coalition. Condition (ii) says that when the party is a ring component, for each set of the compact collection of the ring component there is an agent belonging to a coalition in that set who prefers to stay in that coalition rather than move to the external coalition. Notice that when a ring component is simple, its maximal sets are the objects that prevent the formation of external coalitions; when it is not simple, its individual coalitions play this role.
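\medskip \noindent In the running sketch, the compact collection and the prevention relation read as follows (illustrative only):

\begin{verbatim}
def compact_collection(RC):
    # C(RC): the maximal sets of RC if RC is simple,
    # the singletons {R} otherwise.
    return maximal_sets(RC) if is_simple(RC) else [{R} for R in RC]

def prevents(party, C):
    # party prevents the external coalition C from being formed.
    if len(party) == 1:                  # party = one non-single coalition
        (D,) = party
        return any(prefers(i, D, C) for i in D & C)
    return all(                          # party = a ring component
        any(prefers(i, D, C) for D in E for i in D & C)
        for E in compact_collection(party))
\end{verbatim}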
A collection of parties $\{\mathcal{B}_1, \ldots,\mathcal{B}_L\}$ \textit{partitions} $N$ if $\{N(\mathcal{B}_1), \ldots, N(\mathcal{B}_L)\}$ forms a partition of $N.$ Furthermore, given a party $\mathcal{B}$ and a collection of parties $\mathcalligra{D} \ $ that partitions $N$, \textit{$\mathcal{B}$ is said to be protected by $\mathcalligra{D} \ $} if for each coalition $C$ that breaks $\mathcal{B}$ there is $\mathcal{B}' \in \mathcalligra{D} \ $ such that $\mathcal{B}'$ prevents $C$ from being formed.\footnote{Notice that party $\mathcal{B}$ need not be included in $\mathcalligra{D} \ $.} Our solution concept for coalition formation games can now be defined. \begin{definition}\label{stable decomposition} A \textbf{stable decomposition} is a collection of parties $\mathcalligra{D} \ $ that partitions $N$ and satisfies the following: \begin{enumerate}[(i)] \item Each $\mathcal{B} \in \mathcalligra{D} \ $ such that $\mathcal{B} \subset \mathcal{K}$ is protected by $\mathcalligra{D} \ .$ \item There is at most one $\mathcal{B}^\star \in \mathcalligra{D} \ \ $ such that $\mathcal{B}^\star \subset \mathcal{S}.$ Moreover, for each party $\mathcal{B}' \subset \mathcal{K}$ with $N(\mathcal{B}') \subseteq N(\mathcal{B}^\star)$, $\mathcal{B}'$ is not protected by $\mathcalligra{D} \ .$ \end{enumerate} \end{definition} Condition (i) is the stability condition within parties in $\mathcalligra{D} \ $. Condition (ii) does not allow the formation of parties formed solely by the agents in party $\mathcal{B}^\star$. The reason for this exclusion is that those parties cannot be protected by $\mathcalligra{D} \ \ $. \begin{remark}\label{remark stable decomposition con stable partition} When $\mathcalligra{D} \ $ is such that each $\mathcal{B} \in \mathcalligra{D} \ $ satisfies $|\mathcal{B}|_{\mathcal{K}}\leq 1$, a stable decomposition can be identified with a stable coalition structure. Moreover, if $\pi^N=\{C_1, \ldots, C_L\}$ is a stable coalition structure then $\mathcalligra{D} \ $ is set as follows. Assume w.l.o.g. that the first $k$ coalitions of $\pi^N$, with $k\leq L$, belong to $\mathcal{K}$ and that the remaining coalitions belong to $\mathcal{S}$. Thus, $$\mathcalligra{D} \ =\left\{\{C_1\}, \ldots, \{C_k\}, \{C_{k+1},\ldots, C_L\}\right\}$$ is a stable decomposition. \end{remark} The following definition singles out a special type of coalition structure induced by a stable decomposition. \begin{definition} Let $\mathcalligra{D} \ $ be a stable decomposition.
A $\mathcalligra{D}$ \ \textbf{-- coalition structure} is a coalition structure $\pi_{\mathcalligra{D}}$ \ such that: \begin{enumerate}[(i)] \item for each $\mathcal{B} \in \mathcalligra{D} \ $ \ with $\ |\mathcal{B}|_{\mathcal{K}}\leq1,$ $\pi_{\mathcalligra{D}}\ \cap\mathcal{B}=\mathcal{B},$ \item for each $\mathcal{B} \in \mathcalligra{D} \ $ \ with $\ |\mathcal{B}|_{\mathcal{K}}\geq 3,$ $\pi_{\mathcalligra{D}}\ \cap\mathcal{B}=\mathcal{E}$ for some $\mathcal{E}$ in $\mathcal{C}(\mathcal{B})$. Furthermore, $\pi_{\mathcalligra{D}}\ (i)=\{i\}$ for each $i\in N(\mathcal{B}) \setminus N(\mathcal{E}).$ \end{enumerate} \end{definition} Condition (i) says that each party of $\mathcalligra{D}$ ~ that is not a ring component must be included in the $\mathcalligra{D}$ \ -- coalition structure. Condition (ii) says that for each party that is a ring component, the $\mathcalligra{D}$ \ -- coalition structure includes a set of its compact collection and the rest of the agents of the ring component are single in this coalition structure. Given a stable decomposition $\mathcalligra{D}$ \ \ and a $\mathcalligra{D}$ -- coalition structure $\pi_{\mathcalligra{D}} \ ,$ the \textit{set generated by} $\pi_{\mathcalligra{D}} \ ,$ denoted by $\mathcal{A}_{\!\mathcalligra{D}}$\ , is the set formed by $\pi_{\mathcalligra{D}} \ $ together with all the coalition structures that transitively dominate it. Formally, $$\mathcal{A}_{\!\mathcalligra{D}} \equiv \{\pi_{\mathcalligra{D}} \ \}\cup \{\pi \in \Pi : \pi \gg^T \pi_{\mathcalligra{D}} \ \}.$$ The following result states that each set generated by a $\mathcalligra{D}$ -- coalition structure is actually an absorbing set. \begin{proposition}\label{absorbing and stable decomp} If $\mathcalligra{D}$ \ \ is a stable decomposition, then $\mathcal{A}_{\!\mathcalligra{D}}$ \ \ is an absorbing set. Furthermore, if there is a ring component in $\mathcalligra{D}$ , $\mathcal{A}_{\!\mathcalligra{D}}$ \ \ is a non-trivial absorbing set. \end{proposition} \begin{proof} See proof in Appendix \ref{apendice}. \end{proof} \noindent The absorbing set $\mathcal{A}_{\!\mathcalligra{D}} \ $ depends only on the stable decomposition $\mathcalligra{D}\ $, and not on the specific $\mathcalligra{D}$ -- coalition structure selected to construct it (see Lemmata \ref{pisubde} and \ref{pisubde bis} in Appendix \ref{apendice}). We are now in a position to present the main result of the paper, which is that absorbing sets and sets generated by $\mathcalligra{D}$ -- coalition structures are equivalent. \begin{theorem}\label{decomposition} $\mathcal{A}$ is an absorbing set if and only if $\mathcal{A}=\mathcal{A}_{\mathcalligra{D}}$ \ \ for a stable decomposition $\mathcalligra{D} \ .$ \end{theorem} To prove Theorem \ref{decomposition}, the relation between cyclical behavior in agents' preferences and cyclical behavior of coalition structures must first be understood. That relation is studied in the next subsection. The proof of Theorem \ref{decomposition} is thus relegated to Appendix \ref{apendice}. Each coalition formation game has an absorbing set and, by Theorem \ref{decomposition}, each absorbing set is the set generated by a $\mathcalligra{D}$ -- coalition structure for a stable decomposition $\mathcalligra{D} \ , $ \ so the following corollaries hold: \begin{corollary}\label{corolario existencia} Each coalition formation game has a stable decomposition. 
\end{corollary} \begin{corollary}\label{bijection} For each coalition formation game there is a bijection between absorbing sets and stable decompositions. \end{corollary} \noindent These corollaries, together with Remark \ref{remark stable decomposition con stable partition}, mean that our solution concept is always non-empty and that it generalizes the concept of the stable coalition structure. The following examples illustrate these results. Notice that when a stable decomposition includes a ring component $\mathcal{RC}$ that is simple, each coalition structure of the generated absorbing set includes a maximal set of $\mathcal{RC}.$ By contrast, when a stable decomposition includes a ring component $\mathcal{RC}$ that is not simple, there are coalition structures in the generated absorbing set that contain only one coalition of the ring component (and the remaining agents involved in the ring component are single in such coalition structures). \medskip \noindent \textbf{Example 1 (Continued)} \textit{In Example \ref{ejemplo ring} there are three stable decompositions: $\! \! \mathcalligra{D}\ \! = \{\{123\},\{45\},\{67\}\},$ $\! \! \mathcalligra{D}\ \, ' \!= \{\{15\},\{23\},\{467\}\},$ and $\! \! \mathcalligra{D} \ \, '' \! = \{\{12,23,34,45,15\},\{67\}\}.$ Stable decompositions $\! \! \mathcalligra{D}\ $ and $\! \! \mathcalligra{D} \ \, ' $ induce the stable coalition structures $\{123, 45, 67\}$ and $\{15, 23, 467\},$ respectively. Stable decomposition $\! \! \mathcalligra{D} \ ''$ includes as a party the unique ring component of the game, $\mathcal{RC}=\{12,23,34,45,15\}.$ Notice that although coalition $467$ breaks party $\mathcal{RC}$ in $\! \! \mathcalligra{D} \ '',$ party $\{67\}$ in $\! \! \mathcalligra{D} \ ''$ prevents $467$ from being formed. Furthermore, the unique non-trivial absorbing set is $\mathcal{A}=\{\pi _{1},\pi _{2},\pi _{3},\pi _{4},\pi _{5}\}$ where $\pi _{1}=\{12,34,5,67\}$, $\pi _{2}=\{12,3,45,67\}$, $\pi _{3}=\{1,23,45,67\}$, $\pi _{4}=\{15,23,4,67\}$ and $\pi _{5}=\{15,2,34,67\}$. Notice that, since $\mathcal{RC}$ is simple, each coalition structure of $\mathcal{A}$ includes a maximal set. \hfill $\Diamond$} \medskip \noindent \textbf{Example 2 (Continued)} \textit{In Example \ref{ejemplo2} there are two stable decompositions: $\! \! \mathcalligra{D}\ \! = \{\{145\},\{23\},\{678\}\},$ and $\! \! \mathcalligra{D}\ ' = \{\{145,12,23,356,46\},\{78\}\}.$ Stable decomposition $\! \! \mathcalligra{D}\ $ induces the stable coalition structure $\{145, 23, 678\}$ and stable decomposition $\! \! \mathcalligra{D} \ '$ includes as a party the unique ring component of the game, $\mathcal{RC}=\{145,12,23,356,46\}.$ Notice that although coalition $678$ breaks party $\mathcal{RC}$ in $\! \! \mathcalligra{D} \ ',$ party $\{78\}$ in $\! \! \mathcalligra{D} \ '$ prevents $678$ from being formed. Furthermore, $\{145, 23, 678\}$ is the unique stable coalition structure, and the non-trivial absorbing set is formed by coalition structures $\{12,3,46,5,78\}$, $\{1,23,46,5,78\}$, $\{145,23,6,78\}$, $\{1,2,356,4,78\}$, $\{12,356,4,78\}$, $\{1,2,3,46,5,78\}$, $\{145,2,3,6,78\}$, $\{12,3,4,5,6,78\}$, and $\{1,23,4,5,6,78\}$. Notice that since $\mathcal{RC}$ is not simple, there are coalition structures in the absorbing set that only contain one coalition of the ring component. For instance, coalition structure $\{1,2,356,4,78\}$ only contains coalition $356$.
\hfill $\Diamond$} \begin{example}\label{ejemplo 3} Consider the game given by the following table: \begin{center} \small{ \ra{1.1} \begin{tabular}{@{}cccccccccccc@{}}\toprule $\boldsymbol{1}$ & & $\boldsymbol{2}$ && $\boldsymbol{3}$ && $\boldsymbol{4}$ & & $\boldsymbol{5}$ && $\boldsymbol{6}$ \\ \cmidrule{1-1} \cmidrule{3-3} \cmidrule{5-5} \cmidrule{7-7} \cmidrule{9-9} \cmidrule{11-11} $12$ && $23$ && $34$ && $45$ && $56$ && $46$ \\ $13$ && $12$ && $13$ && $46$ && $45$ && $56$ \\ $1$ && $2$ && $23$ && $34$ && $5$ && $6$ \\ && && $3$ && $4$ && && \\ \bottomrule \end{tabular} } \end{center} This game has no stable coalition structure and two ring components: $\{12,23,13\}$ and $\{45,46,56\}.$ Consider the collection of ring components $\{\{12,23,13\},\{45,46,56\}\}$. Ring component $\{12,23,13\}$ is not protected by the collection, since coalition $34$ breaks $\{12,23,13\}$ and ring component $\{45,46,56\}$ does not prevent $34$ from being formed. Therefore, this collection is not a stable decomposition. However, the collection $\! \! \mathcalligra{D}\ \! = \{\{1,2,3\},\{45,46,56\}\}$ is the unique stable decomposition of the game because: (i) each non-single coalition formed by agents of the set of singles $\{1,2,3\}$, namely $12,23,$ and $13$, is not protected by $\! \! \mathcalligra{D}\ $; and (ii) no coalition breaks ring component $\{45,46,56\}$, so it is protected by $\! \! \mathcalligra{D}\ $. This stable decomposition induces the following $\! \! \mathcalligra{D}\ $-- coalition structures: $\{1,2,3,45,6\},$ $\{1,2,3,46,5\}$ and $\{1,2,3,56,4\},$ which generate the unique absorbing set of this game $\mathcal{A}=\{\pi _{1},\pi _{2},\ldots,\pi _{14}\}$ where:\medskip \noindent \begin{tabular}{l l l l} $\pi_{1}=\{1,2,3,45,6\}$ & $\pi_{2}=\{1,2,3,4,56\}$ & $\pi_{3}=\{1,2,3,5,46\}$ & $\pi_{4}=\{12,3,5,46\}$ \\ $\pi_{5}=\{12,3,45,6\}$ & $\pi_{6}=\{12,3,4,56\}$ & $\pi_{7}=\{1,23,5,46\}$ & $\pi_{8}=\{1,23,45,6\}$ \\ $\pi_{9}=\{1,23,4,56\}$ & $\pi_{10}=\{2,13,4,56\}$ & $\pi_{11}=\{2,13,5,46\}$ & $\pi_{12}=\{2,13,45,6\}$ \\ $\pi_{13}=\{1,2,34,56\}$ & $\pi_{14}=\{12,34,56\}.$ & & \\ \end{tabular} \medskip \noindent Notice that each coalition structure of $\mathcal{A}$ contains a coalition of the ring component in $\! \! \mathcalligra{D}\ $. \hfill $\Diamond$ \end{example} \subsection{Relation between rings and cycles}\label{section ring en preferences} In this subsection we analyze how cyclical behavior that may arise in agents' preferences induces cyclical behavior of coalition structures. A cycle of coalition structures is an ordered set of coalition structures that exhibits cyclical behavior. That is, for each pair of consecutive coalition structures of the ordered set, the successor coalition structure dominates its predecessor. Formally, \begin{definition} An ordered set of coalition structures $(\pi_1,\ldots,\pi_{J})\subset \Pi,$ with $J \geq 3,$ is a \textbf{cycle} if $\pi_{j+1} \gg \pi_{j}$ for $j=1,\ldots,J$, subscripts modulo $J$. \end{definition} Next, we present an algorithm that constructs a ring of coalitions from a cycle of coalition structures.
Let $\! \mathcalligra{C}\; =(\pi_1,\ldots,\pi_{J}) $ be a cycle of coalition structures, let $C_j$ denote the coalition that is formed in $\pi_j,$ i.e., $\pi_j \gg \pi_{j-1}$ via $C_j$, and consider the ordered set $\mathcal{C}=(C_1, \ldots,C_{J}).$ To construct a ring, proceed as follows: \bigskip \begin{center} \begin{tabular}{l l} \hline \hline \multicolumn{2}{l}{\textbf{Algorithm:}}\vspace*{10 pt}\\ \textbf{Step 1} & Set $\overline{R}_1$ as any coalition in $\mathcal{C}$. \\ \textbf{Step $\boldsymbol{t}$} & Set $\overline{R}_t \equiv C_{j+r^\ast}$, where $C_j=\overline{R}_{t-1}$ and $r^\ast=\min\{r\geq 1 : C_j \cap C_{j+r} \neq \emptyset\}$, subscripts mod $J.$\\ & \texttt{IF} $\overline{R}_t=\overline{R}_s$ for $s<t,$ \\ & \hspace{20 pt}\texttt{THEN} set $(\overline{R}_{s+1}, \ldots, \overline{R}_t),$ and \texttt{STOP}. \\ & \texttt{ELSE} continue to Step $t+1.$ \\ \hline \hline \end{tabular} \end{center} \bigskip \noindent Notice that in each step of the algorithm a different coalition of $\mathcal{C}$ is selected except in the last step, where one of the previously selected coalitions is singled out. Therefore, the algorithm stops in at most $J+1$ steps (recall that $J=|\mathcal{C}|$). The following lemma shows that the ordered set $(\overline{R}_{s+1}, \ldots, \overline{R}_t),$ where $s$ is identified in the above algorithm, is actually a ring. To simplify notation, we rename the elements of the ordered set and write $(R_1, \ldots, R_\ell)=(\overline{R}_{s+1}, \ldots, \overline{R}_t).$ \begin{lemma}\label{construccion de Ring components} Let $\! \mathcalligra{C} \; \ $ be a cycle of coalition structures. Then, cycle $\! \mathcalligra{C} \; \ $ induces a ring. \end{lemma} \begin{proof} Let $\! \mathcalligra{C} \; \ $ be a cycle of coalition structures. Applying the above algorithm results in the ordered set $(R_1, \ldots, R_\ell)$. We claim that this ordered set is a ring, i.e. $R_{j+1}\succ R_j$ for each pair of consecutive elements and $\ell\geq 3$. Take any coalition $R_j.$ Coalition $R_{j+1}$ (modulo $\ell$) is the closest coalition that has a non-empty intersection with $R_j$ (following the modular order of the coalition structures in cycle $\! \mathcalligra{C} \ \ \,$), so all the coalition structures between the one in which $R_j$ is formed and the one in which $R_{j+1}$ is formed contain coalition $R_j$. Let $\pi$ and $\pi'$ be the two consecutive coalition structures in $\! \mathcalligra{C} \ $ such that $\pi' \gg \pi$ via $R_{j+1}$. Since $R_{j+1}$ is the newly formed coalition, $R_{j+1}$ belongs to $\pi'$. Furthermore, since $R_j$ belongs to $\pi$ and $R_{j+1}\cap R_j\neq \emptyset$, Definition \ref{domination} yields $R_{j+1}\succ R_j$. Moreover, $\ell \geq 3,$ by the following two facts: (i) there are at least two coalitions in the ordered set, since every coalition formed along a cycle is later broken by another formed coalition that intersects it; (ii) if there were only two coalitions, say $R_1$ and $R_2,$ then there would be an agent $i \in R_1\cap R_2$ such that $R_1 \succ_i R_2 \succ_i R_1,$ which by transitivity implies $R_1 \succ_i R_1,$ a contradiction. \end{proof}
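\medskip \noindent In the running sketch, the extraction procedure above reads as follows (illustrative; it only needs the cyclic list of formed coalitions):

\begin{verbatim}
def ring_from_cycle(C):
    # C: the list (C_1, ..., C_J) of coalitions formed along a cycle,
    # in cyclic order. Returns the extracted ring (up to a cyclic shift).
    J = len(C)
    chain = [C[0]]                       # Step 1: start anywhere
    while True:
        j = C.index(chain[-1])
        r = next(k for k in range(1, J + 1) if C[(j + k) % J] & chain[-1])
        nxt = C[(j + r) % J]             # closest later overlapping coalition
        if nxt in chain:                 # a repetition closes the ring
            return chain[chain.index(nxt):]
        chain.append(nxt)
\end{verbatim}

\noindent Started at $45$ on the cycle of the worked example below, it returns the ring $(45,15,12,23,34)$.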
\end{theorem}

\begin{proof}
$(\Longleftarrow)$ This follows from Lemma \ref{construccion de Ring components}.

\noindent $(\Longrightarrow)$ Let $(R_1,\ldots,R_J)$ be a ring in coalition formation game $(N,\succ_N)$. This ring induces a cycle of coalition structures $\! \mathcalligra{C} \; =(\pi_1,\ldots,\pi_J)$ where $\pi_j$ is defined as follows:
$$\pi_j (i)=\left\{
\begin{tabular}{ll}
$R_j$ & $\text{for }i \in R_j $ \\
$\{i\}$ & otherwise.%
\end{tabular}%
\right. $$
Note that $\pi_j$ is obtained from $\pi_{j-1}$ by forming coalition $R_j$ for each $j=1,\ldots,J$ (subscripts modulo $J$).
\end{proof}

Next, we illustrate the above result with an example.

\noindent \textbf{Example 1 (Continued)} \textit{In Example \ref{ejemplo ring} there are two rings: $(15,12,23,34,45)$ and $(15,123,34,45)$. The collection $\mathcalligra{C}=(\pi _{1},\pi _{2},\pi _{3},\pi _{4},\pi _{5})$, where $\pi _{1}=\{12,34,5,67\}$, $\pi _{2}=\{12,3,45,67\}$, $\pi _{3}=\{1,23,45,67\}$, $\pi _{4}=\{15,23,4,67\}$ and $\pi _{5}=\{15,2,34,67\}$, is a cycle of coalition structures. Starting from $\pi _{1}$, the set of blocking coalitions between coalition structures is $\mathcal{C}=(45,23,15,34,12)$. Assume that Step 1 of the previous algorithm selects coalition $45$. The following steps select coalitions $15$, $12,$ $23$ and $34$, respectively. The algorithm ends when coalition $45$ is reached again and ring $(45,15,12,23,34)$ is obtained. \hfill $\Diamond$}
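For concreteness, the following Python sketch (our illustration; coalitions are encoded as Python sets, and the helper name \texttt{ring\_from\_cycle} is introduced here for the example only) implements the algorithm above and reproduces the computation of the continued example:
\begin{verbatim}
def ring_from_cycle(C):
    # Construct a ring from the ordered blocking coalitions
    # C = (C_1, ..., C_J) of a cycle; coalitions are Python sets.
    J = len(C)
    R = [C[0]]                                  # Step 1: any coalition
    while True:                                 # Step t
        j = C.index(R[-1])
        r = next(r for r in range(1, J + 1)     # closest coalition meeting
                 if C[(j + r) % J] & R[-1])     # R_{t-1}, indices mod J
        nxt = C[(j + r) % J]
        if nxt in R:                            # repetition found: STOP
            return R[R.index(nxt):]             # (R_{s+1}, ..., R_t)
        R.append(nxt)

# Example 1 (continued): blocking coalitions C = (45, 23, 15, 34, 12)
C = [{4, 5}, {2, 3}, {1, 5}, {3, 4}, {1, 2}]
print(ring_from_cycle(C))   # [{4,5}, {1,5}, {1,2}, {2,3}, {3,4}]
\end{verbatim}
As in the proof of Lemma \ref{construccion de Ring components}, only the segment between the two occurrences of the repeated coalition is returned.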
\section{Some applications}\label{section aplications}

In Subsection \ref{subseccion 5.1} we analyze the special characteristics of a stable decomposition in roommate and marriage problems. In Subsection \ref{subseccion 5.2} we discuss convergence to stability.

\subsection{Stable decompositions in the roommate and marriage problems}\label{subseccion 5.1}

In the roommate problem, introduced by \cite{gale1962college}, each agent has preferences over all coalitions of cardinality two to which he/she belongs. As is known, a roommate problem may not admit stable matchings. \cite{tan1991necessary} proves that a roommate problem has no stable matchings if and only if there is a stable partition with an odd ring.\footnote{In this subsection, as usual in roommate and marriage problems, we talk about ``matchings'' rather than coalition structures.}

Consider the relation between stable decompositions and stable partitions. Parties in \cite{tan1991necessary} are similar to ours: a set of singletons, sets consisting of one non-single coalition, and rings. However, the notion of protection in \cite{tan1991necessary} is weaker than ours. In our terminology, a collection of parties $\mathcalligra{P}\ \ $ is a stable partition if, whenever coalition $C$ breaks party $\mathcal{B}\in\!\! \mathcalligra{P}\ \ $, there is a party $\mathcal{B}'\in\!\! \mathcalligra{P}\ \ $ such that $C\setminus N(\mathcal{B})\subset N(\mathcal{B}')$ and $R \succ C$ for each $R\in \mathcal{B}'$ with $R\cap C \neq\emptyset$. This can be illustrated with the following example:

\begin{example}[Example 2 in \citealp{inarra2013absorbing}]\label{ejemploMolis}
Consider the game given by this table:
{\small
\[
\ra{1.1}
\begin{tabular}{@{}ccccccccccccccccccccc@{}}\toprule
$\mathbf{1}$ && $\mathbf{2}$ && $\mathbf{3}$ && $\mathbf{4}$ && $\mathbf{5}$ && $\mathbf{6}$ && $\mathbf{7}$ && $\mathbf{8}$ && $\mathbf{9}$ && \textbf{a} \\
\cmidrule{1-1} \cmidrule{3-3} \cmidrule{5-5} \cmidrule{7-7} \cmidrule{9-9} \cmidrule{11-11} \cmidrule{13-13} \cmidrule{15-15} \cmidrule{17-17} \cmidrule{19-19}
$12$ && $23$ && $13$ && $47$ && $58$ && $69$ && $57$ && $68$ && $49$ && $a$ \\
$13$ && $12$ && $23$ && $48$ && $59$ && $67$ && $67$ && $48$ && $59$ && \\
$14$ && $24$ && $34$ && $49$ && $57$ && $68$ && $17$ && $58$ && $69$ && \\
$15$ && $25$ && $35$ && $45$ && $45$ && $46$ && $47$ && $78$ && $79$ && \\
$16$ && $26$ && $36$ && $46$ && $56$ && $6$ && $79$ && $89$ && $89$ && \\
$17$ && $27$ && $37$ && $14$ && $5$ && && $78$ && $8$ && $9$ && \\
$18$ && $28$ && $38$ && $24$ && && && $7$ && && && \\
$19$ && $29$ && $39$ && $34$ && && && && && && \\
$1$ && $2$ && $3$ && $4$ && && && && && && \\
\bottomrule
\end{tabular}%
\]}

\noindent This example has three stable partitions: $\{\{12,23,13\},\{48\},\{59\},\{67\},\{a\}\}$, $\{\{12,23,13\},\{49\},\{57\},\{68\},\{a\}\},$ and $\{\{12,23,13\},\{47\},\{58\},\{69\},\{a\}\}$. The first two are stable decompositions but the third is not. This is because party $\{47\}$ is not protected: coalition $17$ breaks party $\{47\}$ and there is no party that prevents the formation of coalition $17$. In fact, when agent $1$ is abandoned by agent $2$, coalition $17$ can be formed. By contrast, $\{\{12,23,13\},\{47\},\{58\},\{69\},\{a\}\}$ is a stable partition because no coalition breaks parties $\{12,23,13\}$ and $\{a\},$ and it can be seen that the protection criterion of Tan is met for the rest of the parties. An analysis of all coalitions that break party $\{47\}$ reveals the following: coalition $17$ breaks party $\{47\}$ but party $\{12,23,13\}$ satisfies the requirement that $12 \succ 13 \succ 17$. Coalition $67$ breaks party $\{47\}$ but party $\{69\}$ satisfies the requirement that $69 \succ 67$. Lastly, coalition $57$ breaks party $\{47\}$ but party $\{58\}$ satisfies the requirement that $58 \succ 57.$ The analysis for parties $\{58\}$ and $\{69\}$ is similar and we omit it. Thus, for the roommate problem, the notion of stable partition is weaker than the notion of stable decomposition. \hfill $\Diamond$
\end{example}

Next, following \cite{inarra2013absorbing}, we use the term \textit{maximal stable partition} to refer to those stable partitions with the maximal set of satisfied agents, i.e., agents with no incentive to change partners. The following result can then be established:

\begin{proposition}\label{propisition3}
For any roommate problem there is a bijection between maximal stable partitions and stable decompositions.
\end{proposition}

\begin{proof}
Theorem 1 in \cite{inarra2013absorbing} proves that there is a bijection between maximal stable partitions and absorbing sets. Our Corollary \ref{bijection} states that there is a bijection between absorbing sets and stable decompositions. Therefore, the result follows straightforwardly.
\end{proof}
\bigskip

In the marriage problem, agents can be divided into two types: men and women. An agent of one type can only be matched to an agent of the other, or can remain single.
Thus, the marriage problem is a special case of the roommate problem with specific restrictions on preferences. \cite{gale1962college} show that stable matchings may fail to exist in the roommate problem, but always exist in the marriage problem. \cite{chung2000existence} raises the question of why the marriage problem always admits stable matchings while the roommate problem, a generalization of the marriage problem, may not. The results of \cite{tan1991necessary} and \cite{chung2000existence}, who analyze the roommate problem under weak preferences, show that a marriage problem, unlike the roommate problem, has no odd rings, which explains why the marriage problem always has stable matchings. In our terms, the specific form that a stable decomposition takes in the marriage problem can be made explicit.

\begin{proposition}\label{propisiton4}
For any marriage problem no stable decomposition has a ring component.
\end{proposition}

\begin{proof}
Consider a marriage problem. By Proposition \ref{propisition3}, there is a bijection between stable decompositions and (maximal) stable partitions of the problem. By Propositions 3.1 and 3.2 in \cite{tan1991necessary}, there is a bijection between (maximal) stable partitions and stable matchings of the problem. Also, by Remark \ref{remark stable decomposition con stable partition}, there is a bijection between stable matchings and stable decompositions with no ring component. This implies that no stable decomposition has a ring component.
\end{proof}

The following example illustrates the result.

\begin{example}
Consider the game given by this table:
{\small
\[
\ra{1.1}
\begin{tabular}{@{}ccccccccccccc@{}}\toprule
$\boldsymbol{m_1}$ && $\boldsymbol{m_2}$ && $\boldsymbol{m_3}$ && $\boldsymbol{w_1}$ && $\boldsymbol{w_2}$ && $\boldsymbol{w_3}$ \\
\cmidrule{1-1} \cmidrule{3-3} \cmidrule{5-5} \cmidrule{7-7} \cmidrule{9-9} \cmidrule{11-11}
$m_1w_1$ && $m_2w_1$ && $m_3w_3$ && $m_3w_1$ && $m_2w_2$ && $m_1w_3$ \\
$m_1w_3$ && $m_2w_2$ && $m_3w_2$ && $m_1w_1$ && $m_3w_2$ && $m_3w_3$ \\
&& && $m_3w_1$ && $m_2w_1$ && && \\
\bottomrule
\end{tabular}%
\]}

This example has two even rings: $(m_1w_1,m_3w_1,m_3w_3,m_1w_3)$ and $(m_3w_1,m_3w_2,m_2w_2,m_2w_1)$. The union of these two rings satisfies Condition (i) but not Condition (ii) of Definition \ref{defino ring component}. To see this, take the three maximal sets that can be formed with this collection: $\{m_1w_3, m_3w_1, m_2w_2\}$, $\{m_1w_3, m_2w_1, m_3w_2\}$ and $\{m_1w_1, m_2w_2, m_3w_3\}$. It is easy to verify that no coalition of these rings breaks any of these maximal sets. Hence, this collection of coalitions is not a ring component. \hfill $\Diamond$
\end{example}

\subsection{Convergence to stability}\label{subseccion 5.2}

We say that a coalition formation game $(N,\succ _{N})$ exhibits \textit{convergence to stability} if for each non-stable coalition structure $\pi \in \Pi $ there is a stable coalition structure $\pi ^{N}\in \Pi $ such that $\pi ^{N}\gg ^{T}\pi $. As claimed in the Introduction, the stable decomposition solution provides a tool for analyzing convergence to stability. If no stable decomposition of a coalition formation game has a ring component, then the game exhibits convergence to stability. However, a coalition formation game in which there are stable decompositions both with and without ring components does not exhibit convergence to stability.
\begin{proposition}\label{proposition5}
A stable coalition formation game exhibits convergence to stability if and only if no stable decomposition has a ring component.
\end{proposition}

\begin{proof}
Let $(N,\succ _{N})$ be a stable coalition formation game.

\noindent $(\Longrightarrow )$ We prove the contrapositive. Assume that $(N,\succ _{N})$ has a stable decomposition with a ring component. By Proposition \ref{absorbing and stable decomp}, the stable decomposition induces a non-trivial absorbing set $\mathcal{A}$. Let $\pi ^{N}$ be a stable coalition structure, so $\pi^N \in \Pi \setminus \mathcal{A}$. Thus, by Definition \ref{absorbing} there is no $\pi \in \mathcal{A}$ such that $\pi^N \gg ^{T}\pi $. Therefore, $(N,\succ _{N})$ does not exhibit convergence to stability.

\noindent$(\Longleftarrow )$ Assume that no stable decomposition of $(N,\succ _{N})$ has a ring component. By Remark \ref{remark stable decomposition con stable partition} this means that each stable decomposition can be identified with a stable coalition structure, so the game has only trivial absorbing sets. By Remark \ref{remarkabsorbing} (iii) and (iv), for each non-stable coalition structure $\pi \in \Pi $ there is a stable coalition structure $\pi ^{N}\in \Pi $ such that $\pi ^{N}\gg ^{T}\pi $.
\end{proof}

\cite{roth1990random} prove that the marriage problem exhibits convergence to stability. Here, using our results, we give an alternative argument as to how this follows. Note that by Proposition \ref{propisiton4}, no stable decomposition of the marriage problem has a ring component. By Proposition \ref{proposition5}, the following result emerges straightforwardly.

\begin{corollary}\label{corol conv en marriage}
The marriage problem exhibits convergence to stability.
\end{corollary}

\cite{diamantoudi2004random} prove that when the roommate problem is stable (i.e., has a stable matching) it exhibits convergence to stability. We note that, by Proposition 2 in \cite{inarra2013absorbing}, the only absorbing sets of a stable roommate problem are stable matchings. Thus, by Remark \ref{remarkabsorbing} (iv), each non-stable matching is transitively dominated by a stable one. Therefore, we can state the following.

\begin{corollary}\label{corol conv en roommate}
If a roommate problem is stable, it exhibits convergence to stability.
\end{corollary}

\section{Concluding remarks}\label{section concluding viejas}

To conclude, we first emphasize the results obtained and then propose some further research. We introduce a new solution concept, called stable decomposition, for the entire class of coalition formation games. As noted, the set of stable decompositions of a game is always non-empty and encompasses stable coalition structures when they exist. When a stable decomposition is not related to a stable coalition structure, it incorporates a new ingredient --the ring component-- which is the source of the cyclical behavior of the coalition structures. Our solution is characterized in terms of absorbing sets, i.e., there is a bijection between absorbing sets and stable decompositions. However, although a stable decomposition conveys the same information regarding the cyclical behavior of some coalition structures as an absorbing set, it is a much simpler object that can be derived exclusively from the preferences of the agents. As applications, we restate some important results about stability on roommate and marriage problems and analyze convergence to stability.
Our approach opens up a number of interesting research directions, including the following: The paper relies on a dynamic process among coalition structures which is consistent with the standard blocking definition in that all members of the blocking coalition become strictly better off, and assumes that abandoned agents remain single in the newly formed coalition structure. However, another possibility is for the abandoned agents to get together as in, for instance, \cite{tamura1993transformation}. How our solution concept adapts to this new dynamics is an open question. Furthermore, \cite{pycia2012stability} and \cite{gallo2018rationing}, in different contexts, study what sharing rules induce stable coalition formation games, i.e., they assume that coalitions produce an output to be divided among their members according to a pre-specified sharing rule. In such environments, the sharing rule naturally induces a game in which each agent ranks the coalitions to which he/she belongs according to the payoffs that he/she could get. Here, the question to be answered is what rules can generate coalition formation games that exhibit convergence to stability.
https://arxiv.org/abs/1910.07241
Weighted Monte Carlo with least squares and randomized extended Kaczmarz for option pricing
We propose a methodology for computing single and multi-asset European option prices, and more generally expectations of scalar functions of (multivariate) random variables. This new approach combines the ability of Monte Carlo simulation to handle high-dimensional problems with the efficiency of function approximation. Specifically, we first generalize the recently developed method for multivariate integration in [arXiv:1806.05492] to integration with respect to probability measures. The method is based on the principle "approximate and integrate" in three steps i) sample the integrand at points in the integration domain, ii) approximate the integrand by solving a least-squares problem, iii) integrate the approximate function. In high-dimensional applications we face memory limitations due to large storage requirements in step ii). Combining weighted sampling and the randomized extended Kaczmarz algorithm we obtain a new efficient approach to solve large-scale least-squares problems. Our convergence and cost analysis along with numerical experiments show the effectiveness of the method in both low and high dimensions, and under the assumption of a limited number of available simulations.
\section{Introduction}

Recently, a new algorithm combining function approximation and integration with Monte Carlo simulation has been developed in \cite{nakatsukasa2018approximate}. This algorithm, called MCLS (Monte Carlo with Least Squares),\footnote{Note that the similarity between the names MCLS and Least-Squares Monte Carlo for American option pricing developed in \cite{Longstaff01valuingamerican} is owed only to the fact that both methods use Monte Carlo and least-squares.} draws on Monte Carlo's ability to deal with the curse of dimensionality and reduces the variance through function approximation. In this paper we extend MCLS to efficiently price European options in low and high dimensions. That is, we approximate integrals of the form
\[
I_\mu=\int_{E} f({\mathbf x}) d\mu({\mathbf x}),
\]
where $(E, \mathcal{A},\mu)$ is a probability space.

MCLS is based on the principle ``approximate and integrate'' and mainly consists of three steps: i) generate $N$ sample points $\{{\mathbf x}_i\}_{i=1}^N \in E$ from $\mu$; ii) approximate the integrand $f$ with a linear combination of a priori chosen basis functions $f({\mathbf x}) \approx p({\mathbf x}):= \sum_{j=0}^n c_j \phi_j({\mathbf x})$, where the coefficients $\mathbf{c}=(c_0,\ldots,c_n)^\top$ are computed by solving the least-squares problem $\min_{\mathbf{c}\in\mathbb{R}^{n+1}}\|{\mathbf V}\mathbf{c}-{\mathbf f}\|_2$ for the Vandermonde\footnote{Usually the notion ``Vandermonde matrix'' is used for the special case when $\phi_i(x)=x^i$, and the matrix we are dealing with therefore is a generalized Vandermonde matrix. In statistics, such a matrix is usually referred to as ``design matrix''. For the rest of the paper we call it Vandermonde matrix, as in \cite{nakatsukasa2018approximate}.} matrix ${\mathbf V}=(\phi_j({\mathbf x}_i))_{i=1,\ldots,N;j=0,\ldots,n}$ and $\mathbf{f}=(f(x_1),\ldots,f(x_N))^\top$; iii) integrate $p$ to approximate the integral $\sum_{j=0}^n c_j \int_{E} \phi_j({\mathbf x})\mu(d{\mathbf x}) \approx \int_{E} f({\mathbf x})\mu(d{\mathbf x})$. The key to exploiting the advantage of function approximation for integration is to choose basis functions $\{\phi_j\}_{j=0}^n$ that can be easily, possibly exactly, integrated with respect to $\mu$, and for which $p$ approximates $f$ well. It can be shown that for $n=0$ the MCLS estimator coincides with the standard MC estimator. For a given fixed $N$, MCLS can outperform MC for any reasonably chosen basis $\{\phi_j\}_{j=0}^n$ with $n>0$.

Our contribution is fourfold. First, we provide a more detailed convergence, error and cost analysis for the MCLS estimator of $I_\mu$. This extends \cite{nakatsukasa2018approximate}, which only considered $E=[0,1]^d$ and $\mu(d{\mathbf x})=d{\mathbf x}$. In particular, our cost analysis reveals that MCLS asymptotically becomes more accurate than MC at the same cost, see Proposition \ref{prop:asymptotic}. This is of practical use whenever high accuracy is required and a large number of simulations is available, as for instance in option pricing. For a typical task in portfolio risk management, instead, only a limited budget of simulations is available. This is because evaluating ${\mathbf f}$ is extremely costly on the IT infrastructure of standard financial institutions, such as insurance companies. Our error analysis suggests that MCLS provides a better approximation than MC in such limited-budget situations; compare Proposition \ref{thm_CM} and the subsequent discussion.
Second, we note that a computational bottleneck in MCLS is the storage requirement arising when solving the least-squares problem. Indeed, when the number of simulations $N$ and the number of basis functions $n+1$ are too large, it is not feasible to explicitly store the $N\times (n+1)$ Vandermonde matrix ${\mathbf V}$. This severely limits the scalability of MCLS. In order to overcome this limitation we enhance MCLS by the randomized extended Kaczmarz (REK) algorithm \cite{zouzias2013randomized} for solving the least-squares problem. The benefit of REK is that no explicit storage of ${\mathbf V}$ is needed. However, REK is only efficient for well-conditioned least-squares problems. Here we profit from the reduction of the condition number of ${\mathbf V}$ thanks to a weighted sampling scheme due to \cite{cohen2017optimal}. Moreover, under this weighted sampling scheme the rows of ${\mathbf V}$ have equal Euclidean norm, which further speeds up the REK.

Third, we apply MCLS to efficiently price European options in low and high dimensions. Here, the method turns out to be especially favorable for the class of polynomial models \cite{filipovic2016polynomial, filipovic2017polynomial}, which covers widely used asset models such as the Black-Scholes, Heston and Cox-Ingersoll-Ross models. In this framework conditional moments, and thus expectations of polynomials in the underlying price process, are given in closed form. This naturally suggests the choice of polynomials as basis functions $\{\phi_j\}_{j=0}^n$, for which step iii) of MCLS becomes very efficient.

Fourth, we approximate the high-dimensional integral $\int_{[0,1]^d} \sin \Big (\sum_{j=1}^d x_j \Big ) d{\mathbf x}$ for $d=10$ and $d=30$, which shows that the limitations of the approach in \cite{nakatsukasa2018approximate} due to high dimensionality have indeed been considerably alleviated thanks to REK.

The rest of the paper is organized as follows. In Section \ref{sec-MCLS} we review the main ingredients of our methodology and we explain how to combine them. In particular, in Section \ref{MCLS-review} we review MCLS as in \cite{nakatsukasa2018approximate} and we extend it to arbitrary probability measures. Then, in Section \ref{sec-optimally} we present the weighted sampling strategy proposed in \cite{cohen2017optimal}. In Section \ref{sec-kaczmarz} we review the randomized extended Kaczmarz algorithm and combine it with MCLS. We provide a convergence and a cost analysis in Section \ref{sec-cost analysis}. In Section \ref{sec-MCLS option pricing} we apply MCLS to option pricing. Here, we present numerical results for different polynomial models and payoff profiles in both low and high dimensions. In Section \ref{sec-high dimensional application} we analyze the performance of MCLS for a standard high-dimensional integration test problem. We conclude in Section \ref{sec-conclusion}.

\section{Core methods}\label{sec-MCLS}

For the reader's convenience we present the three main ingredients of our methodology. First, we extend MCLS to arbitrary probability measures. Then, we present its combination with weighted sampling strategies as in \cite{nakatsukasa2018approximate}. Finally, we recap the randomized extended Kaczmarz algorithm and propose our combined strategy.
\subsection{MCLS}\label{MCLS-review}

In this section we introduce a methodology to compute the definite integral
\[
I_\mu:=\int_{E} f({\mathbf x}) d\mu({\mathbf x}),
\]
for a probability space $(E, \mathcal{A},\mu)$ and for a function $f : E \to {\mathbb R}$, which we assume to be square-integrable, i.e.\ in
\begin{equation*}
L^2_\mu = \{f:E \to{\mathbb R} \enskip | \enskip \| f \|^2_\mu = \int_E f({\mathbf x})^2 d\mu({\mathbf x}) < \infty, f \text{ measurable} \},
\end{equation*}
which is a Hilbert space with the inner product $\langle f, g \rangle_\mu= \int_E f({\mathbf x}) g({\mathbf x}) d\mu({\mathbf x})$. The method is an extension of the method proposed in \cite{nakatsukasa2018approximate} for integrals with respect to the Lebesgue measure.

To start, we choose basis functions $\{\phi_j\}_{j=1}^n$ together with $\phi_0 \equiv 1$, which will be used to approximate the integrand $f$. The idea is to choose basis functions $\phi_j$ that can be easily integrated. For instance, polynomials can be a good choice. Then, the steps of MCLS are as follows. First, as in standard Monte Carlo methods, one generates $N$ sample points $\{{\mathbf x}_i\}_{i=1}^N \in E$, according to $\mu$. Second, the integrand $f$ and the basis functions are evaluated at all simulated points $\{{\mathbf x}_i\}_{i=1}^N$, leading to the following least-squares problem:
\begin{equation}\label{LS}
\min_{\mathbf{c}\in\mathbb{R}^{n+1}} \left\| \underbrace{ \begin{bmatrix} 1&\phi_1({\mathbf x}_1)&\phi_2({\mathbf x}_1)&\ldots &\phi_n({\mathbf x}_1)\\ 1&\phi_1({\mathbf x}_2)&\phi_2({\mathbf x}_2)&\ldots&\phi_n({\mathbf x}_2)\\ \vdots&\vdots&&&\vdots\\ 1&\phi_1({\mathbf x}_N)&\phi_2({\mathbf x}_N)&\ldots&\phi_n({\mathbf x}_N)\\ \end{bmatrix}}_{=: {\mathbf V}} \begin{bmatrix} c_0\\c_1\\\vdots \\c_n \end{bmatrix} - \underbrace{\begin{bmatrix} f({\mathbf x}_1)\\ f({\mathbf x}_2)\\ \vdots\\ f({\mathbf x}_N) \end{bmatrix}}_{=:{\mathbf f}} \right\|_2,
\end{equation}
which we denote as $\min_{\mathbf{c}\in\mathbb{R}^{n+1}}\|{\mathbf V}\mathbf{c}-{\mathbf f}\|_2$. Note that \eqref{LS} can be seen as a discrete version of the projection problem $\min_\mathbf{c} \| f-\sum_{j=0}^n c_j\phi_j \|_\mu$. Third, one solves \eqref{LS}, whose solution (provided ${\mathbf V}$ has full column rank) is explicitly given by
\[
\mathbf{\hat{c} }= ({\mathbf V}^T{\mathbf V})^{-1}{\mathbf V}^T{\mathbf f}.
\]
At this point, the linear combination $p({\mathbf x}) := \sum_{j=0}^n\hat{c}_j \phi_j({\mathbf x})$ is an approximation of $f$. Finally, the last step consists of computing the integral of the approximant $p$, and $I_\mu$ is approximated by
\begin{equation*}
I_\mu \approx \hat I_{\mu,N}=\int_{E} p({\mathbf x})d\mu({\mathbf x})=\hat{c}_0+\sum_{j=1}^n\hat{c}_j\int_{E} \phi_j({\mathbf x})d\mu({\mathbf x}).
\end{equation*}
We summarize the procedure in Algorithm \ref{Algo1}.

We remark that there is an interesting connection between MCLS and the standard Monte Carlo method: if one takes $n=0$, i.e., one approximates $f$ with a constant function, the resulting approximation is the solution of the least-squares problem
\begin{equation*}
\min_{c_0\in\mathbb{R}} \left\| \begin{bmatrix} 1\\ 1\\ \vdots\\ 1\\ \end{bmatrix} c_0 - \begin{bmatrix} f({\mathbf x}_1)\\ f({\mathbf x}_2)\\ \vdots\\ f({\mathbf x}_N) \end{bmatrix} \right\|_2,
\end{equation*}
which is exactly given by $\hat{c}_0:= \frac{1}{N} \sum_{i=1}^N f({\mathbf x}_i)$, the standard Monte Carlo estimator.
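Before turning to the error analysis, here is a minimal Python sketch of Algorithm \ref{Algo1} (our illustration; the sampler, the monomial basis and the toy integrand are assumptions chosen for the example, not part of the method's specification):
\begin{verbatim}
import numpy as np

def mcls(f, basis, basis_integrals, sample, N):
    # step 1: draw N sample points from mu
    X = sample(N)
    # step 2: evaluate basis functions -> Vandermonde matrix V
    V = np.column_stack([phi(X) for phi in basis])
    # step 3: solve the least-squares problem min ||V c - f||_2
    c, *_ = np.linalg.lstsq(V, f(X), rcond=None)
    # step 4: integrate the approximant p = sum_j c_j phi_j exactly
    return c @ basis_integrals

# Toy example (illustrative assumptions): E = [0,1], mu = Lebesgue measure,
# f(x) = exp(x), monomial basis 1, x, ..., x^5 with integrals 1/(k+1).
rng = np.random.default_rng(0)
basis = [lambda X, k=k: X[:, 0] ** k for k in range(6)]
basis_integrals = np.array([1.0 / (k + 1) for k in range(6)])
I_hat = mcls(lambda X: np.exp(X[:, 0]), basis, basis_integrals,
             lambda N: rng.random((N, 1)), N=2000)
print(I_hat)   # close to the exact value e - 1 = 1.71828...
\end{verbatim}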
We recall that in the standard MC method, asymptotically for large $N$ the error scales like\footnote{See e.g. \cite{caflisch1998monte} for the first term, and recall that the expectation of a random variable $X$ is the constant with minimal distance to $X$ in the $L^2$-norm; rescaling yields the identity \eqref{error standardmc}.}
\begin{equation}\label{error standardmc}
\frac{\big(\int_E (f({\mathbf x})-I_{\mu})^2 d\mu({\mathbf x})\big )^{\frac{1}{2}}}{\sqrt{N}} =\frac{\min_{c \in {\mathbb R}} \|f-c\|_\mu}{\sqrt{N}}=:\frac{\sigma(f)}{\sqrt{N}}.
\end{equation}
The quantity $(\sigma(f))^2$ is usually referred to as \textsl{the variance of $f$}. This relation between MC and MCLS leads to an asymptotic error analysis, which we detail in Section \ref{sec-convergence}. This connection can also be exploited in order to increase the speed of convergence by combining MCLS with quasi-Monte Carlo. In \cite{nakatsukasa2018approximate}, other ways to speed up the procedure are also proposed, for example an adaptive choice of the basis functions (MCLSA).

It is observed in \cite{nakatsukasa2018approximate} that the method performs well for dimensions $d$ up to $d=6$. For higher dimensions, solving the least-squares problem \eqref{LS} becomes computationally expensive; this is mainly due to two effects:
\begin{itemize}
\item[(i)] The size of the matrix ${\mathbf V}$, being $N \times (n+1)$, rapidly becomes very large, posing memory limitations.
\item[(ii)] The condition number of the Vandermonde matrix ${\mathbf V}$ typically becomes large.
\end{itemize}
In the following we address these issues by combining MCLS with weighted sampling strategies and with the randomized extended Kaczmarz algorithm for solving the least-squares problem.

\begin{algorithm}[t]
\caption{Generalized MCLS}
\begin{algorithmic}[1]\label{Algo1}
\REQUIRE{Function $f$, basis functions $\{\phi_j\}_{j=1}^n$, $\phi_0\equiv 1$, integer $N (>n)$, probability measure $\mu$ on domain $E$.}
\ENSURE{Approximate integral $\hat I_{\mu,N}\approx \int_{E} f({\mathbf x})d\mu({\mathbf x})$}
\STATE Generate sample points $\{{\mathbf x}_i\}_{i=1}^N \in E$, according to $\mu$.
\STATE Evaluate $f({\mathbf x}_i)$ and $\phi_j({\mathbf x}_i)$, for $i=1,\ldots,N$ and $j=1,\ldots,n$.
\STATE Solve the least-squares problem \eqref{LS} for $\mathbf{c}=[c_0,c_1,\ldots,c_n]^T$.
\STATE Compute $\hat I_{\mu,N}=\hat{c}_0+\sum_{j=1}^n\hat{c}_j\int_{E} \phi_j({\mathbf x})d\mu({\mathbf x})$.
\end{algorithmic}
\end{algorithm}

\subsection{Well conditioned least-squares problem via weighted sampling}\label{sec-optimally}

It is crucial that the coefficient matrix ${\mathbf V}$ in~\eqref{LS} be well conditioned, from both a computational and (more importantly) a function approximation perspective. Computationally, an ill-conditioned ${\mathbf V}$ means the least-squares problem is harder to solve using e.g. the conjugate gradient method, and the randomized Kaczmarz method described in Section~\ref{sec-kaczmarz}.
From an approximation viewpoint, ${\mathbf V}$ having a large condition number\footnote{We denote by $\kappa_2({\mathbf V})$ the 2-norm condition number of the matrix ${\mathbf V}$.} $\kappa_2({\mathbf V})$ implies that the function approximation error (in the continuous setting) can be large: $\| f-\sum_{j=0}^n c_j \phi_j\|^2_{\mu}$ is bounded roughly by $\kappa_2({\mathbf V})\| f-\sum_{j=0}^n c_{j}^* \phi_j\|^2_\mu$ (see~\cite[\S~5.4]{nakatsukasa2018approximate}), where ${\mathbf c}^*:=\text{argmin}_{\mathbf{c}\in\mathbb{R}^{n+1}} \| f-\sum_{j=0}^n c_j\phi_j \|_\mu$. Hence in practice we devise the MCLS setting (choice of $\{\phi_j\}$ and sampling strategy) so that ${\mathbf V}$ is well conditioned with high probability.

A first step towards a well-conditioned Vandermonde matrix ${\mathbf V}$ is to choose the basis to be orthonormal with respect to the inner product $\langle \cdot,\cdot \rangle_\mu$, for instance by applying a Gram-Schmidt orthonormalization procedure. Next, we observe that the strong law of large numbers yields
\begin{equation*}
\frac{1}{N}({\mathbf V}^T{\mathbf V})_{i+1,j+1} = \frac{1}{N}\sum_{l=1}^N\phi_i({\mathbf x}_l)\phi_j({\mathbf x}_l) \stackrel{p}{\rightarrow} \int_{E}\phi_i({\mathbf x})\phi_j({\mathbf x})d\mu({\mathbf x}) =\delta_{ij}
\end{equation*}
as $N\rightarrow \infty$. Therefore, for a large number of samples $N$ we expect $\frac{1}{N}{\mathbf V}^T {\mathbf V}$ to be close to the identity matrix $\mathbf{Id}_{n+1} \in {\mathbb R}^{(n+1)\times(n+1)}$. This implies that $\kappa_2({\mathbf V})$ is close to $1$. In practice, however, the condition number is often large. This is because the number $N$ of sample points required to obtain a well-conditioned ${\mathbf V}$ might be very large. For example, if we consider the one-dimensional interval $E=[-1,1]$ with the uniform probability measure and an orthonormal basis of Legendre polynomials, one can show that $N=\mathcal{O}(n^2 \log(n))$ sample points are needed to obtain a well-conditioned ${\mathbf V}$. This example and others are discussed in \cite{chkifa2015discrete, cohen2017optimal}.

To overcome this problem, Cohen and Migliorati~\cite{cohen2017optimal} introduce a \emph{weighted} sampling for least-squares fitting. Its use for MCLS was suggested in~\cite{nakatsukasa2018approximate}, which we summarize here. Define the nonnegative function $w$ via
\begin{equation} \label{eq:optsample}
\frac{1}{w({\mathbf x})}=\frac{\sum_{j=0}^n\phi_{j}({\mathbf x})^2}{n+1},
\end{equation}
so that $\frac{1}{w}$ is a probability density with respect to $\mu$, since $\frac{1}{w}\geq 0$ on $E$ and $\int_E \frac{1}{w({\mathbf x})}d\mu({\mathbf x})=1$. We then take samples $\{\tilde{\mathbf x}_i\}_{i=1}^N$ according to $\frac{d\mu}{w}$. Intuitively this means that we sample more often in areas where $\sum_{j=0}^n\phi_{j}({\mathbf x})^2$ takes large values. The least-squares problem \eqref{LS} with the samples $\sim \frac{d\mu}{w}$ becomes
\begin{equation} \label{eq:MCLSweight}
\min_{{\mathbf c}}\|\sqrt{{\mathbf W}}({\mathbf V}{\mathbf c}-{\mathbf f})\|_2,
\end{equation}
where $\sqrt{{\mathbf W}} = \mbox{diag}(\sqrt{w(\tilde{\mathbf x}_1)},\sqrt{w(\tilde{\mathbf x}_2)},\ldots, \sqrt{w(\tilde{\mathbf x}_N)})$, and ${\mathbf V},{\mathbf f}$ are as before in~\eqref{LS} with ${\mathbf x}\leftarrow \tilde{\mathbf x}$.
This is again a least-squares problem $\min_{\mathbf c}\|\widetilde{\mathbf V}{\mathbf c}-\tilde{\mathbf f}\|_2$, with coefficient matrix $\widetilde{\mathbf V}:=\sqrt{{\mathbf W}}{\mathbf V}$ and right-hand side $\tilde{\mathbf f} := \sqrt{{\mathbf W}}{\mathbf f}$, whose solution is ${\mathbf c}=(\widetilde{\mathbf V}^T\widetilde{\mathbf V})^{-1}\widetilde{\mathbf V}^T\sqrt{{\mathbf W}}{\mathbf f}$. With high probability, the matrix $\widetilde{\mathbf V}$ is well conditioned, provided that $N\gtrsim n\log n$; see Theorem 2.1 in \cite{cohen2017optimal}.

\begin{remark}\label{rem-rows}
Note that the left-multiplication by $\sqrt{{\mathbf W}}$ forces all the rows of $\widetilde{\mathbf V}$ to have the same norm (here $\sqrt{n+1}$); a property that proves useful in Section \ref{sec-kaczmarz}.
\end{remark}

A simple strategy to sample from $\frac{d\mu}{w}$ is as follows: for each of the $N$ samples, choose a basis function $\phi_j$ from $\{\phi_j\}_{j=0}^n$ uniformly at random, and sample from a probability distribution proportional to $\phi_j^2$. We refer to \cite{2017arXiv170700026H} for more details.
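The following Python sketch (our illustration, for normalized Legendre polynomials on $E=[-1,1]$ with the uniform measure, as in the example above) implements this mixture strategy via rejection sampling and verifies the equal-row-norm property of Remark \ref{rem-rows}:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

def phi(x, j):
    # normalized Legendre polynomial: int phi_j^2 dmu = 1 for dmu = dx/2
    c = np.zeros(j + 1); c[j] = np.sqrt(2 * j + 1)
    return legendre.legval(x, c)

def sample_weighted(n, N, rng):
    # mixture strategy: pick phi_j uniformly, then rejection-sample from
    # the density phi_j^2 (envelope phi_j^2 <= 2j+1, since |P_j| <= 1)
    out = np.empty(N)
    for i in range(N):
        j = rng.integers(n + 1)
        while True:
            x = rng.uniform(-1.0, 1.0)
            if rng.uniform() * (2 * j + 1) <= phi(x, j) ** 2:
                out[i] = x; break
    return out

rng = np.random.default_rng(0)
n, N = 10, 500
x = sample_weighted(n, N, rng)
V = np.column_stack([phi(x, j) for j in range(n + 1)])
w = (n + 1) / np.sum(V**2, axis=1)        # weights w(x) from eq. (6)
Vt = np.sqrt(w)[:, None] * V              # weighted Vandermonde matrix
print(np.allclose(np.sum(Vt**2, axis=1), n + 1))  # equal row norms: True
print(np.linalg.cond(Vt))                 # well conditioned w.h.p.
\end{verbatim}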
\subsection{Randomized extended Kaczmarz to solve the least-squares problem}\label{sec-kaczmarz}

A standard least-squares solver that uses the QR factorization~\cite[Ch.~5]{golubbook4th} costs $O(Nn^2)$ operations, which quickly becomes prohibitive (relative to standard MC) when $n\gg 1$. As an alternative, the conjugate gradient method (CG) applied to the normal equation $({\mathbf V}^T{\mathbf V}){\mathbf c} = {\mathbf V}^T{\mathbf f}$ is suggested in~\cite{nakatsukasa2018approximate}. For $\kappa_2({\mathbf V})=O(1)$ this reduces the computational cost to $O(Nn)$. However, CG still requires the storage of the whole matrix ${\mathbf V}$, which is $O(Nn)$. Indeed, in practice, building and storing the matrix ${\mathbf V}$ becomes a major bottleneck in MCLS.

To overcome this issue, here we suggest a further alternative, the randomized extended Kaczmarz (REK) algorithm developed by Zouzias and Freris~\cite{zouzias2013randomized}. REK is a particular stochastic gradient method to solve least-squares problems. It builds upon Strohmer and Vershynin's pioneering work \cite{strohmer2009randomized} and Needell's extension to inconsistent systems~\cite{needell2010randomized}, and converges to the minimum-norm solution by simultaneously performing projection and solution refinement at each iteration. The convergence is geometric in expectation, and as already observed in~\cite{strohmer2009randomized}, Kaczmarz methods can sometimes even outperform the conjugate gradient method in speed for well-conditioned systems. A block version of REK was introduced in~\cite{needell2015randomized}, which sometimes additionally improves the performance. Here we focus on REK and consider its application to MCLS.

A pseudocode of REK is given in Algorithm~\ref{Algo2}. MATLAB notation is used, in which ${\mathbf V}(:,j)$ denotes the $j$th column of ${\mathbf V}$ and ${\mathbf V}(i,:)$ the $i$th row. The ${\mathbf z}^{(k)}$ iterates are the projection steps, which converge to ${\mathbf f}^\perp$, the part of ${\mathbf f}$ that lies in the orthogonal complement of ${\mathbf V}$'s column space. REK works by simultaneously projecting out the ${\mathbf f}^\perp$ component while refining the least-squares solution.

\begin{algorithm}[t]
\caption{REK: Randomized extended Kaczmarz method}
\begin{algorithmic}[1]\label{Algo2}
\REQUIRE{${\mathbf V}\in\mathbb{R}^{N\times (n+1)}$ and ${\mathbf f}\in\mathbb{R}^{N}$.}
\ENSURE{Approximate solution ${\mathbf c} $ for $\min_{\mathbf c} \|{\mathbf V}{\mathbf c} -{\mathbf f}\|_2$}
\STATE{Initialize ${\mathbf c} ^{(0)}=0$ and ${\mathbf z}^{(0)}={\mathbf f}$}
\FOR{$k=1,2,\ldots,M$}
\STATE{Pick $i=i_k\in\{1,\ldots,N\}$ with probability $\|{\mathbf V}(i,:)\|_2^2/\|{\mathbf V}\|_F^2$}
\STATE{Pick $j=j_k\in\{1,\ldots,n+1\}$ with probability $\|{\mathbf V}(:,j)\|_2^2/\|{\mathbf V}\|_F^2$}
\STATE{Set ${\mathbf z}^{(k+1)}={\mathbf z}^{(k)}-\frac{{\mathbf V}(:,j_k)^T{\mathbf z}^{(k)}}{\|{\mathbf V}(:,j_k)\|_2^2}{\mathbf V}(:,j_k)$}
\STATE{Set ${\mathbf c} ^{(k+1)}={\mathbf c} ^{(k)}+\frac{f_{i_k}-z_{i_k}^{(k)}-{\mathbf V}(i_k,:)^T{\mathbf c} ^{(k)}}{\|{\mathbf V}(i_k,:)\|_2^2}{\mathbf V}(i_k,:)$}
\ENDFOR
\STATE{${\mathbf c} ={\mathbf c} ^{(M)}$}
\end{algorithmic}
\end{algorithm}

Let us comment on REK (Algorithm~\ref{Algo2}) and its implementation, particularly in the MCLS context:
\begin{itemize}
\item Employing the weighted sampling strategy of Section~\ref{sec-optimally} significantly simplifies Algorithm \ref{Algo2}. Following Remark \ref{rem-rows}, the norms of the rows of $\widetilde{{\mathbf V}}$ are constant and equal to $\sqrt{n+1}$. This also implies that $\|\widetilde{{\mathbf V}}\|_F^2 = N(n+1)$. The index $i_k$ in line 3 is therefore sampled uniformly at random. This has a practical significance in MCLS as the probability distribution $(\|{\mathbf V}(i,:)\|_2^2/\|{\mathbf V}\|_F^2)_{i=1,\ldots,N}$ does not have to be computed before starting the REK iterates. This results in a (potentially enormous) computational reduction; an additional benefit of using the weighted sampling strategy, besides improving conditioning.
\item The number of iterations $M$ is usually not chosen a priori but by checking convergence of ${\mathbf c} ^{(k)}$ infrequently. The suggestion in~\cite{zouzias2013randomized} is to check every $8\min(N,n)$ iterations for the conditions
\begin{equation*}
\frac{\|{\mathbf V} {\mathbf c}^{(k)}-({\mathbf f}-{\mathbf z}^{(k)}) \|_2}{\|{\mathbf V}\|_F\|{\mathbf c}^{(k)}\|_2} \leq \varepsilon,\qquad \mbox{and}\qquad \frac{\|{\mathbf V}^T {\mathbf z}^{(k)} \|_2}{\|{\mathbf V}\|_F\|{\mathbf c}^{(k)}\|_2} \leq \varepsilon
\end{equation*}
for a prescribed tolerance $\varepsilon>0$.
\item A significant advantage of REK is that it renders unnecessary the storage of the whole matrix ${\mathbf V}$: only the $i_k$th row and the $j_k$th column are needed, taking $O(N)$ memory cost. In practice, one can even sample in an online fashion: early samples can be discarded once the REK update is completed.
\end{itemize}

The convergence of REK is known to be geometric in the expected mean squared sense~\cite[Thm~4.1]{zouzias2013randomized}: after $M$ iterations, we have
\begin{equation} \label{eq:REKconvergence}
\mathbb{\widetilde{E}} \|{\mathbf c} ^{(M)}-\mathbf{\hat{c}}\|_2^2\leq \left( 1-\frac{(\sigma_{\min}({\mathbf V}))^2}{\|{\mathbf V}\|_F^2}\right)^{\lfloor \frac{M}{2}\rfloor}(1+2\kappa_2^2({\mathbf V}))\|\mathbf{\hat{c}}\|_2^2,
\end{equation}
where $\mathbf{\hat{c}}$ is the solution of $\min_{\mathbf c} \|{\mathbf V}{\mathbf c} -{\mathbf f}\|_2$ and the expectation $\mathbb{\widetilde{E}}$ is taken over the random choices of the algorithm. When ${\mathbf V}$ is close to having orthonormal columns (as would hold with weighted sampling and/or $N\rightarrow\infty$ with orthonormal basis functions $\phi$), the convergence in~\eqref{eq:REKconvergence} becomes $O((1-\frac{1}{n})^{\frac{M}{2}})$.
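For illustration, a direct Python transcription of Algorithm \ref{Algo2} reads as follows (our sketch, not the reference implementation of \cite{zouzias2013randomized}; the stopping rule is simplified to a fixed number of iterations):
\begin{verbatim}
import numpy as np

def rek(V, f, M, seed=0):
    # Randomized extended Kaczmarz, a sketch of Algorithm 2 with a fixed
    # iteration count M instead of the residual-based stopping test.
    rng = np.random.default_rng(seed)
    N, m = V.shape                       # m = n + 1 columns
    row2 = np.sum(V**2, axis=1)          # squared row norms
    col2 = np.sum(V**2, axis=0)          # squared column norms
    frob2 = row2.sum()                   # ||V||_F^2
    c, z = np.zeros(m), f.astype(float).copy()
    for _ in range(M):
        i = rng.choice(N, p=row2 / frob2)   # uniform under weighted sampling
        j = rng.choice(m, p=col2 / frob2)
        z -= (V[:, j] @ z) / col2[j] * V[:, j]          # project out f-perp
        c += (f[i] - z[i] - V[i] @ c) / row2[i] * V[i]  # Kaczmarz refinement
    return c

# sanity check against a dense least-squares solve
rng = np.random.default_rng(1)
V = rng.standard_normal((500, 10)); f = rng.standard_normal(500)
print(np.linalg.norm(rek(V, f, M=50_000)
                     - np.linalg.lstsq(V, f, rcond=None)[0]))  # small
\end{verbatim}
Note that, as discussed above, no step of the loop requires more than one row and one column of ${\mathbf V}$ at a time, so in the MCLS setting these can be regenerated on the fly instead of stored.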
Our experiments suggest that the conjugate gradient method applied to the normal equation is faster than Kaczmarz, so we recommend CG whenever it is feasible. However, as mentioned above, an advantage of (extended) Kaczmarz is that there is no need to store the whole matrix to execute the iterations. For these reasons, we suggest choosing the solver for the LS problem \eqref{LS} according to the scheme shown in Figure~\ref{scheme-algo}. Preliminary numerical experiments indicate that the threshold $10$ for $\kappa_2({\mathbf V})$ is a good choice.

\begin{figure}
\begin{center}
\resizebox{8cm}{4cm}{\begin{tikzpicture}
[
grow = down,
sibling distance = 12em,
level distance = 6.5em,
edge from parent/.style = {draw, -latex},
sloped
]
\node [root] {Can I store $V$?}
child { node [env] {REK without storing $V$}
edge from parent node [above] {\hspace{-0.2 cm}No} }
child { node [env] {}
child { node [env] {CG to normal equation}
edge from parent node [above] {\hspace{-0.1cm}$\kappa_2(V) \leq 10$} }
child { node [env] {QR based solver}
edge from parent node [above, align=center] {\hspace{-0.3cm}$\kappa_2(V)>10$} }
edge from parent node [above] {Yes} };
\end{tikzpicture}}
\end{center}
\caption{Choice of algorithm to solve the least-squares problem.}\label{scheme-algo}
\end{figure}

\section{Convergence and cost analysis}\label{sec-cost analysis}

In this section we first present convergence results, on the basis of which we then derive a cost analysis.

\subsection{Convergence}\label{sec-convergence}

First, we obtain a convergence result, and consequently asymptotic confidence intervals, by applying the central limit theorem (CLT). The following statement and proof is a straightforward generalization of \cite[Theorem 5.1]{nakatsukasa2018approximate} to an arbitrary integrating probability measure $\mu$.

\begin{proposition}\label{thm_convW}
Fix $n$ and the $L^2_{\mu}$-basis functions $\{\phi_j\}_{j=0}^n$, and let either $w=1$ or $w$ be as in \eqref{eq:optsample}. Then, with the weighted sampling $\frac{d\mu}{w}$ and the corresponding MCLS estimator $\hat I_{\mu,N}$, we have, as $N \to \infty$,
\begin{equation*}
\sqrt{N} (\hat I_{\mu,N} - I_\mu) \xrightarrow[]{d} \mathcal{N} (0, \min_\mathbf{c} \| \sqrt{w} (f-\sum_{j=0}^n c_j \phi_j)\|^2_\mu),
\end{equation*}
where $\xrightarrow[]{d}$ denotes convergence in distribution.
\begin{proof}
The proof is provided in the Appendix.
\end{proof}
\end{proposition}

The above proposition shows that the MCLS estimator yields an approximate integral $\hat I_{\mu,N}$ that asymptotically (for $N \to \infty$ and $\{\phi_j\}_{j=1}^n$ fixed) satisfies\footnote{We use the notation ``$\approx$'' with the statement ``for $N \to \infty$'' to mean that the relation holds for sufficiently large $N$. E.g.
\eqref{eq:error_asympt} means $\mathbb{E}[|\hat I_{\mu,N}-I_\mu|] = \frac{ \min_{\mathbf{c} \in {\mathbb R}^{n+1}} \| \sqrt{w} (f-\sum_{j=0}^n c_j \phi_j)\|_\mu}{\sqrt{N}}+o(\frac{1}{\sqrt{N}})$ for $N \to \infty$.}
\begin{equation}\label{eq:error_asympt}
\mathbb{E}[|\hat I_{\mu,N}-I_\mu|] \approx \frac{ \min_{\mathbf{c} \in {\mathbb R}^{n+1}} \| \sqrt{w} (f-\sum_{j=0}^n c_j \phi_j)\|_\mu}{\sqrt{N}},
\end{equation}
highlighting the fact that the asymptotic error is still $\mathcal{O}(1/\sqrt{N})$ (as in standard MC), but with the variance $(\sigma(f))^2$ reduced from $\min_{c \in {\mathbb R}} \| f-c \|_\mu^2$ (standard MC, see \eqref{error standardmc}) to $\min_{\mathbf{c}\in {\mathbb R}^{n+1}} \| \sqrt{w} (f-\sum_{j=0}^n c_j \phi_j) \|_\mu^2$ (MCLS). In other words, the variance is reduced thanks to the approximation of the function $f$, and the constant in front of the $\mathcal{O}(1/\sqrt{N})$ convergence in MCLS is equal to the function approximation error in the $L^2_\mu$ norm.

After solving the least-squares problem \eqref{eq:MCLSweight}, the variance $\min_{\textbf{c}\in {\mathbb R}^{n+1}} \| \sqrt{w} (f-\sum_{j=0}^n c_j \phi_j)\|_\mu^2$ can be estimated via\footnote{This approximation is commonly used in linear regression, see e.g. \cite{hastie2009elements}.}
\begin{equation}\label{sigmaLS}
\widetilde\sigma_{LS}^2 := \frac{1}{N-n-1}\sum_{i=1}^N(w(\tilde {\mathbf x}_i))^2(f(\tilde{\mathbf x}_i)-p(\tilde{\mathbf x}_i))^2 =\frac{1}{N-n-1}\|{\mathbf W}({{\mathbf V}} \hat{\mathbf{c}}-{{\mathbf f}})\|^2_2,
\end{equation}
where the samples $\tilde{\mathbf x}_i$, $i=1,\cdots,N$, are taken according to $\frac{d\mu}{w}$. This leads to approximate confidence intervals; for example, the $95\%$ confidence interval is approximately given by
\begin{equation}\label{Cinterval}
\Big [\hat I_{\mu,N} - 1.96\frac{\widetilde\sigma_{LS}}{\sqrt{N}}, \hat I_{\mu,N} + 1.96\frac{\widetilde\sigma_{LS}}{\sqrt{N}} \Big].
\end{equation}

As explained in \cite{nakatsukasa2018approximate}, the MCLS estimator is not unbiased, in the sense that ${\mathbb E}(\hat I_{\mu,N}) \neq I_{\mu}$. However, one can show, along the same lines as in the proof of~\cite[Proposition 3.1]{nakatsukasa2018approximate}, that for the MCLS estimator $\hat I_{\mu,N}$ with $n$ and $\{\phi_j\}_{j=0}^n$ fixed,
\begin{equation*}
|I_\mu-{\mathbb E}(\hat I_{\mu,N})|=O\bigg(\frac{1}{N}\bigg).
\end{equation*}
This shows that the bias is of a smaller order than the error.

In the case of weighted sampling, we moreover have a finite-sample error bound, which follows directly from \cite[Theorem 2.1 (iv)]{cohen2017optimal}. Note that as this is a non-asymptotic result, it is especially useful in practice.

\begin{proposition}\label{thm_CM}
Assume that we adopt the weighted sampling $\frac{d\mu}{w}$. For any $r > 0$, if $n$ and $N$ are such that $n \leq \kappa \frac{N}{\log(N)}-1$ for $\kappa= \frac{1-\log(2)}{2 + 2r}$, then
\begin{equation}\label{CM-bound}
\mathbb{E}[\|f-\tilde{p}\|_\mu^2] \leq \Big (1+ \frac{4\kappa}{\log(N)} \Big)\min_{\mathbf c}\|f-\sum_{j=0}^nc_j\phi_j\|_\mu^2 +2 \|f\|_\mu^2 N^{-r},
\end{equation}
where $\tilde{p}$ is defined as
\begin{equation*}
\tilde{p}:= \begin{cases} p,& \text{if } \|\frac{1}{N}{\mathbf V}^T{\mathbf V}-\mathbf{I}\|_2\leq \frac{1}{2}\\ 0, & \text{otherwise}, \end{cases}
\end{equation*}
with $p = \sum_{j=0}^n \hat{c}_j \phi_j$, where $\hat{\mathbf{c}}$ is the solution of \eqref{eq:MCLSweight}. Note that the simulation is done with respect to $\frac{d\mu}{w}$.
\end{proposition}

We note the slight difference between $p$ and $\tilde{p}$; the latter is introduced to deal with the tail case in which ${\mathbf V}$ becomes ill-conditioned (which happens with low probability). This modification serves a theoretical purpose; in practice it is not necessary and we do not employ it in our experiments.

Proposition~\ref{thm_CM} allows us to derive a non-asymptotic bound for the expected error committed when estimating the vector $\mathbf{c}^\ast$ by solving the LS problem \eqref{eq:MCLSweight}. To see this, we first decompose the function $f$ into a sum of orthogonal terms
\begin{equation}\label{fdecomp}
f=\sum_{j=0}^n c^\ast_j \phi_j + g=: f_1+g,
\end{equation}
for some coefficients $c^\ast_j$, $j=0,\cdots,n$, and where $g$ satisfies $\int_E g({\mathbf x}) \phi_j({\mathbf x}) d\mu({\mathbf x})=0$ for all $j=0,\cdots, n$. Note that $\|g\|_\mu=\min_{\mathbf c}\|f-\sum_{j=0}^nc_j\phi_j\|_\mu$. Then,
\begin{align*}
\mathbb{E}[\|f- \sum_{j=0}^n \hat{c}_j \phi_j\|_\mu^2]&= \mathbb{E}[\|f -f_1- \sum_{j=0}^n \hat{c}_j \phi_j+f_1\|_\mu^2] \\
&= \|f -f_1\|^2_\mu+ \mathbb{E}[\|\mathbf{c}^\ast - \hat{\mathbf{c}}\|^2_2]=\|g\|_\mu^2 + \mathbb{E}[\|\mathbf{c}^\ast - \hat{\mathbf{c}}\|^2_2],
\end{align*}
where the second equality uses the orthogonality of $g$ to $\mathrm{span}\{\phi_j\}_{j=0}^n$ together with the orthonormality of the basis. This, together with the bound \eqref{CM-bound}, yields
\begin{equation}\label{eq:boundvec}
\mathbb{E}[\|\mathbf{c}^\ast - \hat{\mathbf{c}}\|^2_2]\leq \frac{4\kappa}{\log(N)}\min_{\mathbf{c}} \| f-\sum_{j=0}^n c_j \phi_j\|_\mu^2 +2 \|f\|_\mu^2 N^{-r}.
\end{equation}
When we are primarily interested in integration, we aim at an upper bound for the expected error of the first component of $\mathbf{c}$. The bound \eqref{eq:boundvec} clearly holds for the first component, and this gives us a bound for $\mathbb{E}(|\hat I_{\mu,N}-I_\mu|^2)$ (note that $\hat I_{\mu,N}-I_\mu = \hat{c}_0-c^\ast_0$, since the basis is orthonormal with $\phi_0\equiv 1$). Intuitively, we expect that the errors in the elements of $\mathbf{c}$ are not concentrated in any single component. This suggests the heuristic bound
\begin{equation}\label{heuristic-bound}
\mathbb{E}[|\hat I_{\mu,N}-I_\mu|^2] \lessapprox \frac{1}{n}\left( \frac{4\kappa}{\log(N)}\min_{\mathbf{c}} \| f-\sum_{j=0}^n c_j \phi_j\|_\mu^2 +2 \|f\|_\mu^2 N^{-r}\right).
\end{equation}
This argument has already been proposed in \cite{nakatsukasa2018approximate}. A rigorous argument still remains an open problem. Observing that the first term on the right-hand side is the dominant one (for $N \to \infty$) and assuming $n \approx \frac{N}{\log(N)}$, we can see that the heuristic bound \eqref{heuristic-bound} matches the asymptotic result derived in Proposition \ref{thm_convW}.

\subsection{Cost Analysis}

The purpose of this section is to reveal the relationship between error and cost (in flops). The cost of MCLS is analyzed in \cite{nakatsukasa2018approximate}, and in Table \ref{tab:compare} we report a cost and error comparison between MC and MCLS as given in Table 3.1 in \cite{nakatsukasa2018approximate}. Here, we highlight some cases for which MCLS outperforms MC in terms of accuracy or cost.

\begin{remark}\label{remark: cost MCLS}
Note that the cost of MCLS in Table \ref{tab:compare} is reported to be $C_fN+\mathcal{O}(Nn)$. As already mentioned at the beginning of Section \ref{sec-kaczmarz}, this reflects the cost of MCLS when applying the CG algorithm to solve the least-squares problem (whenever $\kappa_2({\mathbf V})=\mathcal{O}(1)$).
In the case that we combine MCLS with the REK algorithm and $\kappa_2({\mathbf V})=\mathcal{O}(1)$, which happens with high probability whenever the weighted sampling strategy is used (see \cite[Theorem 2.1]{cohen2017optimal}), the cost is also $C_fN+\mathcal{O}(Nn)$, as shown in \cite[Lemma 9]{zouzias2013randomized} and in the subsequent discussion. The following cost analysis therefore covers both options, CG and REK.
\end{remark}

\begin{table}[t]
\centering
\begin{tabular}{lcc}
& Cost & Convergence \\ \hline
\rule{0pt}{1.5\normalbaselineskip}
MC & $C_fN$ & $\displaystyle\frac{1}{\sqrt{N}}\min_{c}\|f-c\|_\mu$\\
MCLS & $C_fN+O(Nn)$ & $\displaystyle\frac{1}{\sqrt{N}}\min_{\mathbf{c}}\| \sqrt{w}(f-\sum_{j=0}^nc_j\phi_j)\|_\mu$\\
\end{tabular}
\caption{Comparison between MC and MCLS. $N$ is the number of sample points and $C_f$ denotes the cost of evaluating $f$ at a single point. \label{tab:compare}}
\end{table}

First, consider the situation of a limited budget of sample points $N$ that cannot be increased further, where the goal is to approximate the integral $I_\mu$ in the best possible way. This is a typical task in financial institutions. For instance, in portfolio risk management, simulation can be extremely expensive because a large number of risk factors and positions contribute to the company's portfolio. In this case, even if MCLS is more expensive than MC (second column of Table \ref{tab:compare}), MCLS is preferable to MC as it yields a more accurate approximation (third column of Table \ref{tab:compare}).

Second, we show under mild conditions that MCLS also asymptotically becomes more accurate than MC at the same cost. This can be of practical relevance whenever the integral $I_\mu$ needs to be computed at a very high accuracy and one is able to spend a high computational cost. Let us fix some notation:
\begin{align*}
&e_n:= \min_{\mathbf{c}} \|\sqrt{w} (f-\sum_{j=0}^n c_j \phi_j)\|_\mu \enskip \text{for } n \geq 0,\\
&\text{Cost}_{MC}(N):=C_f N,\\
&\text{Cost}_{MCLS}(N',n):=C_f N' + C_M N'n \enskip \text{for some } C_M>0,\\
& \text{error}_{MC}(N):= \frac{e_0}{\sqrt{N}},\\
& \text{error}_{MCLS}(N',n):= \frac{e_n}{\sqrt{N'}},
\end{align*}
where the last two definitions reflect the asymptotic error behaviour for large $N$ and $N'$ (for a fixed $n$), depicted in Table \ref{tab:compare}. We are now in a position to present the result.

\begin{proposition}\label{prop:asymptotic}
Assume that $e_n =o \big(\frac{1}{\sqrt{n}}\big)$. Then there exists $\tilde{n} \in {\mathbb N}$ such that for any fixed $n>\tilde{n}$, $\text{error}_{MCLS}<\text{error}_{MC}$ as $\text{Cost}_{MCLS}=\text{Cost}_{MC} \to \infty$.
\begin{proof}
We first determine the value of $N=N(N',n)$ such that $\text{Cost}_{MCLS}=\text{Cost}_{MC}$:
\begin{align*}
\text{Cost}_{MCLS}=\text{Cost}_{MC} \iff N=N' \big ( 1+\frac{C_M}{C_f} n\big ).
\end{align*}
Consider now the error ratio under the constraint $\text{Cost}_{MCLS}=\text{Cost}_{MC}$, given by
\begin{equation*}
ER:= \frac{\text{error}_{MC}}{\text{error}_{MCLS}}=\frac{e_0}{e_n \sqrt{1+\frac{C_M}{C_f}n}},
\end{equation*}
yielding
\begin{equation*}
ER>1 \iff e_n \sqrt{1+\frac{C_M}{C_f}n} < e_0.
\end{equation*}
The assumption $e_n = o\big(\frac{1}{\sqrt{n}}\big)$ implies that there exists some $\tilde{n}$ such that $ER>1$ for all $n>\tilde{n}$. Now, fixing an arbitrary $n>\tilde{n}$ and letting $N'$, and consequently $N$, tend to infinity yields the result.
\end{proof}
\end{proposition}

\begin{remark}
Note that the quantity $ER$ in the proof of Proposition \ref{prop:asymptotic} only reflects the error ratio asymptotically for $N,N'\to\infty$. Therefore we restrict the statement of the result to the asymptotic case where $\text{Cost}_{MCLS}=\text{Cost}_{MC} \to \infty$.
\end{remark}

To show the practical implication of this asymptotic analysis, in Figures~\ref{fig:costd=1} and~\ref{fig:costd=5} we examine the convergence of MC and MCLS. We consider the problem of integrating smooth and non-smooth functions for several dimensions $d$, on the unit cube $[0,1]^d$ and with respect to the Lebesgue measure. Even though the result of Proposition \ref{prop:asymptotic} holds for a fixed value of $n$, in practice the convergence rate can be improved by varying $n$ together with $N$, as illustrated in \cite{nakatsukasa2018approximate}, where such an adaptive strategy is denoted by MCLSA. For this reason, we show numerical results in which the cost (represented on the $x$-axis) increases, for different choices of $n$ ($n$ fixed and $n$ varying).\footnote{These figures differ from those in~\cite{nakatsukasa2018approximate} in that the $x$-axis is the cost rather than the number of sample points $N$.} As expected, the numerical results reflect the analysis presented above. For all dimensions and chosen functions, we asymptotically achieve an efficiency gain by an appropriate choice of $n$ and $N$.

Note that the erratic convergence with fixed $n$ is a consequence of ill-conditioning; an effect described also in~\cite{nakatsukasa2018approximate}. Namely, when the number of sample points $N$ is not large enough, ${\mathbf V}$ tends to be ill-conditioned and the least-squares problem $\min_{\mathbf c} \|{\mathbf V}{\mathbf c} -{\mathbf f}\|_2$ requires many CG iterations, resulting in a higher cost than with a larger $N$. Therefore, the mapping $N \mapsto \text{Cost}(N)$ is not necessarily monotonically increasing in $N$. We observe that some of the curves in Figures~\ref{fig:costd=1} and~\ref{fig:costd=5}, for instance for $n$ fixed, are not graphs of functions of $\text{Cost}(N)$, as they are not always single-valued. This confirms that the mapping $N \mapsto \text{Cost}(N)$ is not always monotone.

\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=1.0\textwidth]{norm1vs2abs1d1Novern10k300donorm1fixd50qmc0lsqmc0rep32adapt1sqklim150}
\end{minipage}
\begin{minipage}[t]{0.5\hsize}
\includegraphics[width=1.0\textwidth]{norm1vs2sin30xd1d1Novern10k300donorm1fixd50qmc0lsqmc0rep32adapt1sqklim150}
\end{minipage}
\caption{Cost vs. convergence plots for MC and MCLS with varying $n$: $n=50$, $n=\sqrt{N}$ and $n=N/\log N$, for $d=1$. Cost is computed as $2N(n+1)k$, the flop count of the CG iteration, where $k$ is the number of CG steps required. Left: non-smooth function $f(x)=|x-\frac{1}{2}|$. Right: analytic function $f(x) = \sin(30x)$.}
\label{fig:costd=1}
\end{figure}

\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=1.0\textwidth]{norm1vs2genzsumexpabsd5d5Novern10k11donorm1fixd5qmc0lsqmc0rep32adapt1sqklim6}
\end{minipage}
\begin{minipage}[t]{0.5\hsize}
\includegraphics[width=1.06\textwidth]{norm1vs2sinsum5d5Novern10k11donorm1fixd5qmc0lsqmc0rep32adapt1sqklim6}
\end{minipage}
\caption{Same as in~\Cref{fig:costd=1}, but with $d=5$. The fixed value $n=251$ comes from $n={d+k \choose k}-1$ for degree $k=5$. Left: non-smooth function $f({\mathbf x})=\sum_{i=1}^d\exp(-|x_i-\frac{1}{2}|)$. Right: analytic function $f({\mathbf x}) = \sin(\sum_{i=1}^d x_i)$.}
\label{fig:costd=5}
\end{figure}

\section{Application: European option pricing}\label{sec-MCLS option pricing}

The option pricing problem is one of the main tasks in financial mathematics and can be summarized as follows. First, we fix a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{Q})$, where $\mathbb{Q}$ denotes a risk-neutral pricing measure. In this framework a stochastic process $(X_t)_{0 \leq t \leq T}$, defined on a time horizon $[0,T]$ for $T>0$ and taking values in a state space $E \subseteq {\mathbb R}^d$, is used to model the price of the financial assets. Then, the price at time $t=0$ of a European option with payoff function $f:E \to {\mathbb R}$ and maturing at time $T$ is given by
\begin{equation}\label{price}
e^{-rT}\mathbb{E}[f(X_T)]=e^{-rT}\int_E f({\mathbf x}) d\mu({\mathbf x}),
\end{equation}
where $r$ is the risk-free interest rate, $\mu$ denotes the distribution of $X_T$, whose support is assumed to be $E$, and $f \in L^1(\mu)$.

\subsection{MCLS for European option pricing}

In this section we explain how to apply MCLS to compute European option prices. When applying MCLS for computing \eqref{price} we observe two potential issues. First, the distribution $\mu$ often is not known explicitly. Therefore, we cannot directly perform the sampling part, namely the first step of MCLS, as described in Algorithm \ref{Algo1}. Second, it is crucial that the basis functions $\{\phi_j \}_{j=0}^n$ are easily integrable with respect to $\mu$. Therefore, we need an appropriate selection of the basis functions.

Concerning the sampling part, if $\mu$ is explicitly known, as for example in the Black and Scholes framework (see Section \ref{BS example} and Section \ref{BS example2} for two examples), we can just generate sample points according to $\mu$. If $\mu$ is not explicitly known, typically the process $(X_t)_{0 \leq t \leq T}$ can still be expressed as the solution of a stochastic differential equation (SDE). In this case, we propose to simulate $N$ paths of $X_t$ by discretizing its governing SDE and to collect the realizations of $X_T$. More details follow below, and an example can be found in Section \ref{JH example}.

To obtain an appropriate choice of the basis functions $\{\phi_j\}_{j=0}^n$ we need ${\mathbb E}[\phi_j(X_T)]$ to be easy to evaluate. To do so we exploit the structure of the underlying asset model. If $X_t$ belongs to the wide class of affine processes, which is true for a large set of popular models including the Black and Scholes, Heston and L\'evy models, then the characteristic function of $X_t$ can be easily computed, as explained e.g. in \cite{duffie2003affine}. Therefore, the natural choice is to take exponentials as basis functions. If $X_t$ is a polynomial diffusion \cite{filipovic2016polynomial} (as in our numerical examples in Section \ref{JH example}) or a polynomial jump-diffusion \cite{filipovic2017polynomial}, then its conditional moments are given in closed form. Therefore, polynomials are an excellent choice of basis functions.

To summarize, the main steps of MCLS for option pricing are as follows (if $\mu$ is not known explicitly):
\begin{enumerate}
\item Simulate $N$ paths of the process $X_t$, from $t=0$ to $t=T$ (time to maturity), by discretization of the governing SDE.
\item Let ${\mathbf x}_i$ for $i=1, \ldots, N$ be the realizations of $X_T$ for each simulated path.
Right: Analytic function $f(x) = \sin(\sum_{i=1}^d x_i)$. } \label{fig:costd=5} \end{figure}
\section{Application: European option pricing}\label{sec-MCLS option pricing} The option pricing problem is one of the main tasks in financial mathematics and can be summarized as follows. First, we fix a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{Q})$, where $\mathbb{Q}$ denotes a risk neutral pricing measure. In this framework a stochastic process $(X_t)_{0 \leq t \leq T}$ defined on a time horizon $[0,T]$ for $T>0$ and taking values in a state space $E \subseteq {\mathbb R}^d$ is used to model the price of the financial assets. Then, the price at time $t=0$ of a European option with payoff function $f:E \to {\mathbb R}$ and maturing at time $T$ is given by
\begin{equation}\label{price} e^{-rT}\mathbb{E}[f(X_T)]=e^{-rT}\int_E f({\mathbf x}) d\mu({\mathbf x}), \end{equation}
where $r$ is a risk-free interest rate and $\mu$ denotes the distribution of $X_T$, whose support is assumed to be $E$, and where $f \in L^1(\mu)$.
\subsection{MCLS for European option pricing} In this section we explain how to apply MCLS to compute European option prices. When applying MCLS to compute \eqref{price} we observe two potential issues. First, the distribution $\mu$ is often not known explicitly. Therefore, we cannot directly perform the sampling part, namely the first step of MCLS, as described in Algorithm \ref{Algo1}. Second, it is crucial that the basis functions $\{\phi_j \}_{j=0}^n$ are easily integrable with respect to $\mu$. Therefore we need to find an appropriate selection of the basis functions. Concerning the sampling part, if $\mu$ is explicitly known, as for example in the Black-Scholes framework (see Section \ref{BS example} and Section \ref{BS example2} for two examples), we can just generate sample points according to $\mu$. If $\mu$ is not explicitly known, typically the process $(X_t)_{0 \leq t \leq T}$ can still be expressed as the solution of a stochastic differential equation (SDE). In this case, we propose to simulate $N$ paths of $X_t$ by discretizing its governing SDE and to collect the realizations of $X_T$. More details follow below, and an example can be found in Section \ref{JH example}. To obtain an appropriate choice of the basis functions $\{\phi_j\}_{j=0}^n$ we need ${\mathbb E}[\phi_j(X_T)]$ to be easy to evaluate. To do so we exploit the structure of the underlying asset model. If $X_t$ belongs to the wide class of affine processes, which is true for a large set of popular models including Black-Scholes, Heston and L\'evy models, then the characteristic function of $X_t$ can be easily computed, as explained e.g. in \cite{duffie2003affine}. Therefore, exponentials are the natural choice of basis functions. If $X_t$ is a polynomial diffusion \cite{filipovic2016polynomial} (as in our numerical examples in Section \ref{JH example}) or a polynomial jump-diffusion \cite{filipovic2017polynomial}, then its conditional moments are given in closed form. Therefore, polynomials are an excellent choice of basis functions. To summarize, the main steps of MCLS for option pricing are as follows (if $\mu$ is not known explicitly):
\begin{enumerate} \item Simulate $N$ paths of the process $X_t$, from $t=0$ to $t=T$ (time to maturity), by discretization of the governing SDE. \item Let ${\mathbf x}_i$ for $i=1, \ldots, N$ be the realizations of $X_T$ for each simulated path.
Then, we evaluate $f({\mathbf x}_i)$ and $\phi_j({\mathbf x}_i)$, for $i=1,\ldots,N$ and $j=1,\ldots,n$. \item Solve the least-squares problem \eqref{LS} to obtain the approximation of $f$. The solver can be chosen according to the scheme represented in Figure~\ref{scheme-algo}. \item Finally, the option price is approximated by (we omit the discounting factor)
\begin{equation*} {\mathbb E}[f(X_T)] = \int_{E} f({\mathbf x}) d \mu({\mathbf x}) \approx \hat I_{\mu,N}:=\sum_{j=0}^n c_j \int_{E} \phi_j({\mathbf x}) d \mu({\mathbf x}) = \sum_{j=0}^n c_j {\mathbb E}[\phi_j(X_T)]. \end{equation*}
Note that we selected the basis functions in such a way that the quantities ${\mathbb E}[\phi_j(X_T)]$ can be easily evaluated. In particular, no Monte Carlo simulation is required. \end{enumerate}
Algorithm \ref{mainAlgo} summarizes this procedure.
\begin{algorithm}[t] \caption{Generalized MCLS for European option pricing} \begin{algorithmic}[1]\label{mainAlgo} \REQUIRE{Payoff function $f$, basis functions $\{\phi_j\}_{j=0}^n$, $\phi_0\equiv 1$, integer $N (>n)$, governing SDE of $X_T$.} \ENSURE{Approximate option price $\hat I_{\mu,N}\approx \int_E f({\mathbf x})d\mu({\mathbf x})$} \STATE Simulate $N$ paths of the process $X_t$ from $t=0$ to $t=T$, and collect the realizations of $X_T$ in ${\mathbf x}_i$, $i=1,\ldots,N$. \STATE Evaluate $f({\mathbf x}_i)$ and $\phi_j({\mathbf x}_i)$, for $i=1,\ldots,N$ and $j=1,\ldots,n$. \STATE Solve the least-squares problem \eqref{LS} for $\mathbf{c}=[c_0,c_1,\ldots,c_n]^T$. \STATE Compute $\hat I_{\mu,N}=\sum_{j=0}^n c_j\int_{E} \phi_j({\mathbf x})d\mu({\mathbf x})$. \end{algorithmic} \end{algorithm}
In the case that $\mu$ is explicitly known, the error resulting from MCLS is analysed in Proposition \ref{thm_convW} and Proposition \ref{thm_CM}. In case we discretize the governing SDE of $X_t$, we introduce a second source of error, which we address in the following. Assume that $X_t$ is the solution of an SDE of the form
\begin{align} \label{SDE} \begin{split} dX_t&=b(X_t)dt+\Sigma(X_t)dW_t,\\ X_0&=x_0, \end{split} \end{align}
where $W_t$ denotes a $d$-dimensional Brownian motion, $b : \mathbb{R}^d \to \mathbb{R}^{d}$, $\Sigma : \mathbb{R}^d \to \mathbb{R}^{d \times d}$, and $x_0\in {\mathbb R}^d$. An approximation of the solution $X_t$ of \eqref{SDE} can be computed via a uniform Euler-Maruyama scheme, defined in the following.
\begin{definition}\label{euler-maruyama} Consider an equidistant partition of $[0,T]$ in $N_s$ intervals, i.e.
\begin{equation*} \Delta t= T/N_s, \quad t_i=i \Delta t \quad \text{for } i=0,\cdots, N_s, \end{equation*}
together with
\begin{equation*} \Delta \widetilde{W}_i = W_{t_{i+1}}-W_{t_i} \quad \text{for } i=0,\cdots, N_s-1. \end{equation*}
Then, the Euler-Maruyama discretization scheme of \eqref{SDE} is given by
\begin{align}\label{dSDE} \begin{split} \bar{X}_{i+1}&=\bar{X}_i+b(\bar{X}_i)\Delta t+\Sigma(\bar{X}_i) \Delta \widetilde{W}_i, \quad \text{for } i=0,\cdots, N_s-1,\\ \bar{X}_0&=x_0, \end{split} \end{align}
and the Euler-Maruyama approximation of $X_T$ is given by $\bar{X}_{N_s}$. \end{definition}
Assume that we sample $N$ independent copies of $\bar{X}_{N_s}$ (first step of Algorithm \ref{mainAlgo}) and we apply MCLS to approximate \eqref{price}. Then the error naturally splits into two components as
\begin{align*} | {\mathbb E}[f(X_T)]-\bar{I}_{\mu,N}| \leq |{\mathbb E}[f(X_T)]-{\mathbb E}[f(\bar{X}_{N_s})]|+|{\mathbb E}[f(\bar{X}_{N_s})]-\bar{I}_{\mu,N}|. \end{align*}
The second summand can then be approximated as in \eqref{eq:error_asympt}.
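For concreteness, the following minimal Python/NumPy sketch implements the scheme \eqref{dSDE}, vectorized over the $N$ independent paths required in Step 1 of Algorithm \ref{mainAlgo}. It is illustrative only (our actual implementation is in {\sc Matlab}), and the routine name and interface are ours.
\begin{verbatim}
import numpy as np

def euler_maruyama(b, Sigma, x0, T, Ns, N, seed=0):
    # Simulate N independent Euler-Maruyama approximations \bar{X}_{N_s} of X_T.
    # b     : drift,     maps an (N, d) array of states to an (N, d) array
    # Sigma : diffusion, maps an (N, d) array of states to an (N, d, d) array
    rng = np.random.default_rng(seed)
    d = len(x0)
    dt = T / Ns
    X = np.tile(np.asarray(x0, dtype=float), (N, 1))    # \bar{X}_0 = x_0
    for _ in range(Ns):
        dW = np.sqrt(dt) * rng.standard_normal((N, d))  # Brownian increments
        X = X + b(X) * dt + np.einsum('pij,pj->pi', Sigma(X), dW)
    return X                                            # samples of \bar{X}_{N_s}
\end{verbatim}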
We collect the result in the following proposition. Note that for simplicity we assume a vanishing interest rate.
\begin{proposition}\label{prop:disc error} Let $\bar{I}_{\mu,N}$ be the MCLS estimator obtained by sampling according to the Euler-Maruyama scheme as in Definition \ref{euler-maruyama}. Then, the MCLS error asymptotically (for $n$ fixed and $N \to \infty$) satisfies
\begin{align}\label{eq:asymptotic_error_Euler} | {\mathbb E}[f(X_T)]-\bar{I}_{\mu,N}| \lessapprox |{\mathbb E}[f(X_T)]-{\mathbb E}[f(\bar{X}_{N_s})]|+\frac{ \min_{\textbf{c}} \| f-\sum_{j=0}^n c_j \phi_j\|_{\bar{\mu}}}{\sqrt{N}}, \end{align}
where $\bar{\mu}$ is the distribution of $\bar{X}_{N_s}$. \end{proposition}
The first term on the right-hand side of \eqref{eq:asymptotic_error_Euler} is usually referred to as the \textsl{time-discretization error}, while the second summand denotes the so-called \textsl{statistical error}. The time-discretization error and, more generally, the Euler-Maruyama scheme together with its properties, are well studied in the literature, see e.g.~\cite{kloeden1992numerical}. Depending on the regularity properties\footnote{For example, if $b$ and $\Sigma$ are four times continuously differentiable and $f$ is continuous and bounded, then the scheme converges weakly with order $1$. See \cite{kloeden1992numerical} for details.} of $f, b$ and $\Sigma$, one can conclude, for example, that the time-discretization error is bounded from above by $C |\Delta t|$, for a constant $C>0$. In this case, we say that the Euler-Maruyama scheme converges \textsl{weakly} with order~$1$. Finally, note that the statistical error can be further approximated as in \eqref{sigmaLS} using
\begin{equation*} \min_c \| f-\sum_{j=0}^n c_j \phi_j\|^2_{\bar{\mu}}\approx \frac{1}{N-n-1}\sum_{i=1}^N(f({\mathbf x}_i)-p({\mathbf x}_i))^2=\frac{1}{N-n-1}\|{\mathbf V} \mathbf{c}-{\mathbf f}\|^2_2, \end{equation*}
where the ${\mathbf x}_i$'s are sampled according to $\bar{\mu}$.
\subsection{Numerical examples for option pricing in polynomial models} Next, we apply MCLS to numerically compute European option prices \eqref{price} for several types of payoff functions $f$ and in different models. In particular, the considered models belong to the class of polynomial diffusion models, introduced in \cite{filipovic2016polynomial}. All algorithms have been implemented in {\sc Matlab} version 2017a and run on a standard laptop (Intel Core i7, 2 cores, 256kB/4MB L2/L3 cache). In all of our numerical experiments the solver for the numerical solution of the least-squares problem \eqref{LS} is chosen according to the scheme in Figure \ref{scheme-algo}. Our choice of examples leads us to test all three options in the scheme. For the univariate pricing examples in the Heston and Jacobi models in Section \ref{JH example}, the CG algorithm is appropriate. In Section \ref{BS example}, where a basket option of medium dimensionality is priced in the multivariate Black-Scholes model, a QR-based method is employed, because the condition number of ${\mathbf V}$ was usually large. In both of these cases we sample directly from the distribution of the underlying random variable $X_T$; in the univariate case we solve an SDE. Finally, we consider pricing a rainbow option in a high-dimensional multivariate Black-Scholes model in Section \ref{BS example2}, where the randomized extended Kaczmarz algorithm combined with the weighted sampling strategy yields a good performance.
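For the reader's convenience, we also sketch the randomized extended Kaczmarz iteration of \cite{zouzias2013randomized} for $\min_{\mathbf c}\|{\mathbf V}{\mathbf c}-{\mathbf f}\|_2$ in Python. This is a simplified illustration in which the matrix is kept in memory; in the large-scale setting of Section \ref{BS example2} its rows and columns are instead regenerated on the fly, and our actual implementation is in {\sc Matlab}.
\begin{verbatim}
import numpy as np

def rek(A, f, iters, seed=0):
    # Randomized extended Kaczmarz for the least-squares problem min ||A c - f||_2.
    rng = np.random.default_rng(seed)
    N, n = A.shape
    row_sq = np.einsum('ij,ij->i', A, A)          # squared row norms
    col_sq = np.einsum('ij,ij->j', A, A)          # squared column norms
    fro_sq = row_sq.sum()                         # ||A||_F^2
    z, c = f.astype(float).copy(), np.zeros(n)
    for _ in range(iters):
        j = rng.choice(n, p=col_sq / fro_sq)      # column step: remove the
        z -= (A[:, j] @ z / col_sq[j]) * A[:, j]  # component of z in range(A)
        i = rng.choice(N, p=row_sq / fro_sq)      # row step: Kaczmarz update
        c += ((f[i] - z[i] - A[i, :] @ c) / row_sq[i]) * A[i, :]
    return c                                      # approximate LS solution
\end{verbatim}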
\subsubsection{Call option in stochastic volatility models}\label{JH example} We consider the Heston model as in \cite{heston1993closed}. The log asset price $X_t$ (meaning that the asset price $S_t$ is of the form $S_t=e^{X_t}$) and the squared volatility process $V_t$ are defined via the SDE
\begin{align*} &dV_t=\kappa(\theta -V_t)dt+\sigma \sqrt{V_t}dW^1_{t},\\ &dX_t=(r-V_t/2)dt + \rho \sqrt{V_t} dW^1_{t}+\sqrt{V_t}\sqrt{1-\rho^2}dW^2_{t}, \end{align*}
where $W^1_{t}$ and $W^2_{t}$ are independent standard Brownian motions and the model parameters satisfy the conditions $\kappa \geq 0$, $\theta \geq 0$, $\sigma >0$, $r \geq 0$, $\rho \in [-1,1]$. The state space is $E = {\mathbb R}_+ \times \mathbb{R}$. The log-asset process in the Heston model is a polynomial diffusion and its moments can be computed according to the moment formula introduced in \cite[Theorem 3.1]{filipovic2016polynomial}. In this case the formula is given by
\begin{equation}\label{moment-formula} {\mathbb E}[p(X_T,V_T)|\mathcal{F}_t]=H_n(X_t,V_t)e^{G_n(T-t)}\vec{p}, \end{equation}
where $p$ is an arbitrary multivariate polynomial belonging to the space $\text{Pol}_n(\mathbb{R}^2)$ of bivariate polynomials of total degree at most $n$, $H_n$ is a vector whose entries form a basis of $\text{Pol}_n(\mathbb{R}^2)$, and $\vec{p}$ is the coordinate vector of $p$ with respect to $H_n$. Finally, $G_n$ is the matrix representation of the action of the generator of $(V_t,X_t)$ restricted to the space $\text{Pol}_n(\mathbb{R}^2)$. Note that the matrix $G_n$ can be constructed as explained in \cite{kressner2017incremental}, with respect to the monomial basis. In the following we apply MCLS in the Heston model in order to price single-asset European call options with payoff function given by
\begin{equation*} f(x)=(e^x-e^k)^+, \end{equation*}
for a log-strike value $k$. We compare MC and MCLS to the Fourier pricing method introduced in \cite{heston1993closed}. In this experiment we use an ONB (with respect to the corresponding $L^2_\mu$ space, where $\mu$ is the distribution of $X_T$) of polynomials as basis functions $\phi_j$. Conveniently, the ONB can be obtained by applying the Gram-Schmidt orthogonalization process to the monomial basis. Note that, even if the distribution $\mu$ is not known explicitly, we can still apply the Gram-Schmidt orthogonalization procedure, since the corresponding scalar product and the induced norm can be computed via the moment formula \eqref{moment-formula}. Since the distribution of $X_T$ is not known a priori, we apply the Euler-Maruyama scheme as defined in \eqref{dSDE} and obtain
\begin{align}\label{EM Heston} \begin{split} V_{0}&=v_0,\\ X_{0}&=x_0,\\ V_{t_i}&= V_{t_{i-1}}+\kappa(\theta -V_{t_{i-1}})\Delta t+\sigma \sqrt{V_{t_{i-1}}} \sqrt{\Delta t} Z^1_i, \\ X_{t_i}&= X_{t_{i-1}}+(r -V_{t_{i-1}}/2)\Delta t+\rho \sqrt{V_{t_{i-1}}} \sqrt{\Delta t} Z^1_i + \sqrt{V_{t_{i-1}}}\sqrt{1-\rho^2} \sqrt{\Delta t} Z^2_i, \end{split} \end{align}
for all $i=1,\cdots, N_s$, where $Z^1_i$ and $Z^2_i$ are independent standard normally distributed random variables. For the following numerical experiments we consider the set of model parameters
\begin{align*} \sigma=0.15,\enskip v_0=0.04,\enskip x_0=0, \enskip \kappa = 0.5, \enskip \theta=0.01, \enskip \rho = -0.5, \enskip r = 0.01. \end{align*}
As long as the arguments of the square roots in \eqref{EM Heston} are nonnegative, the Euler-Maruyama scheme is well-defined. In our numerical experiments this was the case.
To guarantee well-definedness, the scheme can be modified by taking the absolute value or the positive part of the arguments of the square roots. Such a modification is discussed, e.g., in \cite{kloeden2013converence}. The same remark holds for the forthcoming numerical examples. First, we apply MCLS to an in-the-money example, with payoff parameters
\[ k=-0.1, \quad T=1/12, \]
and we use $N_s=100$ time steps for the discretization of the SDE. We use an ONB consisting of polynomials of maximal degree $0$ (standard MC), $1$, $3$ and $5$, and we obtain the results shown in Figure \ref{ITM_Heston}. In particular, we plot the absolute error of the prices and the width of the obtained $95\%$ confidence interval, computed as in \eqref{sigmaLS} and \eqref{Cinterval}, against the number of sample points $N$.
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{CallHestonITM_Price3} \includegraphics[width=0.49\textwidth]{CallHestonITM_CI3} \caption{MCLS for ITM call option in Heston model for different polynomial degrees. Left: Absolute price error. Right: Width of $95\%$ confidence interval. \label{ITM_Heston}} \end{figure}
Second, we again apply MCLS, this time to an at-the-money call option with parameters
\[ k=0, \quad T=1/12 \]
and to an out-of-the-money call option with parameters
\[ k=0.1, \quad T=1/12. \]
The results are shown in Figure \ref{ATM_Heston} and in Figure \ref{OTM_Heston}, respectively.
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{CallHestonATM_Price3} \includegraphics[width=0.49\textwidth]{CallHestonATM_Ci3} \caption{MCLS for ATM call option in Heston model for different polynomial degrees. Left: Absolute price error. Right: Width of $95\%$ confidence interval. \label{ATM_Heston}} \end{figure}
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{CallHestonOTM_Price3} \includegraphics[width=0.49\textwidth]{CallHestonOTM_Ci3} \caption{MCLS for OTM call option in Heston model for different polynomial degrees. Left: Absolute price error. Right: Width of $95\%$ confidence interval. \label{OTM_Heston}} \end{figure}
In this setting, for all different choices of payoff parameters, we show in Table \ref{imp_vols_Heston} the implied volatility\footnote{For a given call option price $C$ the implied volatility is defined as the volatility parameter that renders the corresponding Black-Scholes price equal to $C$.} absolute errors for the MC and MCLS prices computed with a basis of polynomials of maximal degree $5$. The implied volatility error is measured against the implied volatility of the reference method.
\begin{table}[t!]
\centering
\begin{tabular}{ccccccc}
\multicolumn{7}{c}{\textbf{Implied vol absolute errors}} \\[0.5ex]
\hline\\[-2ex]
 & \multicolumn{2}{c}{$k=-0.1$} & \multicolumn{2}{c}{$k=0$} & \multicolumn{2}{c}{$k=0.1$} \\
$N$ & MC & MCLS & MC & MCLS & MC & MCLS \\[0.5ex]
\hline\\[-2ex]
100 & -- & 0.21 & 2.16 & 0.29 & 0.50 & 0.37\\
215 & 4.35 & 0.09 & 0.47 & 0.15 & 1.56 & 0.49\\
464 & 9.16 & 0.31 & 1.00 & 0.00 & 1.03 & 0.26\\
1000 & 9.13 & 0.28 & 1.58 & 0.17 & 1.21 & 0.02\\
2154 & 2.44 & 0.22 & 0.82 & 0.16 & 0.59 & 0.19\\
4642 & 1.15 & 0.09 & 0.18 & 0.02 & 0.18 & 0.24\\
10000 & 0.34 & 0.04 & 0.25 & 0.03 & 0.28 & 0.01\\
\end{tabular}
\captionof{table}{Implied volatility errors (in $\%$) for MC and MCLS with a basis of polynomials of maximal degree $5$ in the Heston model, for different sizes $N$ of the sample set. \label{imp_vols_Heston}} \end{table}
Before commenting on the numerical results, we apply MCLS to a second stochastic volatility model, the Jacobi model as in \cite{ackerer2016jacobi}. Here, the log asset price $X_t$ and the squared volatility process $V_t$ are defined through the SDE
\begin{align*} &dV_t=\kappa(\theta -V_t)dt+\sigma \sqrt{Q(V_t)}dW^1_{t},\\ &dX_t=(r-V_t/2)dt + \rho \sqrt{Q(V_t)} dW^1_{t}+\sqrt{V_t-\rho^2Q(V_t)}dW^2_{t}, \end{align*}
where
\begin{equation*} Q(v)=\frac{(v-v_{min})(v_{max}-v)}{(\sqrt{v_{max}}-\sqrt{v_{min}})^2}, \end{equation*}
for some $0 \leq v_{\min} < v_{\max}$. Here, $W^1_{t}$ and $W^2_{t}$ are independent standard Brownian motions and the model parameters satisfy the conditions $\kappa \geq 0$, $\theta \in [v_{\min},v_{\max}]$, $\sigma >0$, $r \geq 0$, $\rho \in [-1,1]$. The state space is in this case $E = [v_{\min}, v_{\max}] \times \mathbb{R}$. The matrix $G_n$ in \eqref{moment-formula} can be constructed as explained in the original paper \cite{ackerer2016jacobi} (with respect to a Hermite polynomial basis) or as in \cite{kressner2017incremental} (with respect to the monomial basis). For the numerical experiments we consider the set of model parameters
\begin{align*} &\sigma=0.15,\enskip v_0=0.04,\enskip x_0=0, \enskip \kappa = 0.5, \enskip \theta=0.04, \\ &v_{\min}=10^{-4}, \enskip v_{\max}=0.08, \enskip \rho = -0.5, \enskip r = 0.01. \end{align*}
We again consider single-asset European call options with payoff parameters
\[ k=\{-0.1,0,0.1\}, \quad T=1/12. \]
As reference pricing method we choose the polynomial expansion technique introduced in \cite{ackerer2016jacobi}, where we truncate the polynomial expansion of the price after $50$ terms. We simulate the whole path of $X_t$ from $0$ to $T$ in order to get the sample points $x_i$, $i=1, \cdots, N$. The discretization scheme of the SDE is given by
\begin{align*} V_{0}&=v_0,\\ X_{0}&=x_0,\\ V_{t_i}&= V_{t_{i-1}}+\kappa(\theta -V_{t_{i-1}})\Delta t+\sigma \sqrt{Q(V_{t_{i-1}})} \sqrt{\Delta t} Z^1_i,\\ X_{t_i}&= X_{t_{i-1}}+(r -V_{t_{i-1}}/2)\Delta t+\rho \sqrt{Q(V_{t_{i-1}})} \sqrt{\Delta t} Z^1_i + \sqrt{V_{t_{i-1}}-\rho^2Q(V_{t_{i-1}})} \sqrt{\Delta t} Z^2_i \end{align*}
for all $i=1,\cdots, N_s$, where $Z^1_i$ and $Z^2_i$ are independent standard normally distributed random variables and the rest of the parameters are as specified in the example for the Heston model.
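Both stochastic volatility examples rely on an ONB of polynomials constructed from the moments of $X_T$. As an illustration, the following Python sketch computes the coefficients of such an ONB from the moments $m_k={\mathbb E}[X_T^k]$ delivered by the moment formula \eqref{moment-formula}; the Cholesky-based formulation is mathematically equivalent to Gram-Schmidt applied to the monomial basis (the routine is hypothetical and its interface is ours).
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def onb_coeffs(m, n):
    # m[k] = E[X_T^k] for k = 0, ..., 2n, from the moment formula.
    # Row k of the returned matrix holds the monomial coefficients of phi_k.
    M = np.array([[m[i + j] for j in range(n + 1)]
                  for i in range(n + 1)])  # Gram matrix <x^i, x^j>_mu
    L = cholesky(M, lower=True)            # Hankel Gram matrix M = L L^T
    return solve_triangular(L, np.eye(n + 1), lower=True)  # A = L^{-1}: A M A^T = I
\end{verbatim}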
We again use an ONB consisting of polynomials of maximal degree $0$ (standard MC), $1$, $3$ and $5$, and we obtain the results shown in Figures \ref{ITM_Jacobi}, \ref{ATM_Jacobi} and \ref{OTM_Jacobi}, for ITM, ATM and OTM call options, respectively. Lastly, we show in Table \ref{imp_vols_Jacobi} the implied volatility absolute errors for the MC and MCLS prices computed with a basis of polynomials of maximal degree $5$.
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{CallJacobiITM_Price3} \includegraphics[width=0.49\textwidth]{CallJacobiITM_CI3} \caption{MCLS for ITM call option in Jacobi model for different polynomial degrees. Left: Absolute price error. Right: Width of $95\%$ confidence interval. \label{ITM_Jacobi}} \end{figure}
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{CallJacobiATM_Price3} \includegraphics[width=0.49\textwidth]{CallJacobiATM_Ci3} \caption{MCLS for ATM call option in Jacobi model for different polynomial degrees. Left: Absolute price error. Right: Width of $95\%$ confidence interval. \label{ATM_Jacobi}} \end{figure}
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{CallJacobiOTM_Price3} \includegraphics[width=0.49\textwidth]{CallJacobiOTM_Ci3} \caption{MCLS for OTM call option in Jacobi model for different polynomial degrees. Left: Absolute price error. Right: Width of $95\%$ confidence interval. \label{OTM_Jacobi}} \end{figure}
\begin{table}[t]
\centering
\begin{tabular}{ccccccc}
\multicolumn{7}{c}{\textbf{Implied vol absolute errors}} \\[0.5ex]
\hline\\[-2ex]
 & \multicolumn{2}{c}{$k=-0.1$} & \multicolumn{2}{c}{$k=0$} & \multicolumn{2}{c}{$k=0.1$} \\
$N$ & MC & MCLS & MC & MCLS & MC & MCLS \\[0.5ex]
\hline\\[-2ex]
100 & 8.75 & 0.67 & 2.75 & 0.40 & 0.72 & 0.20\\
215 & 8.67 & 0.46 & 1.92 & 0.33 & 0.96 & 0.14\\
464 & -- & 0.30 & 1.23 & 0.07 & 1.77 & 0.10\\
1000 & 3.27 & 0.39 & 0.32 & 0.13 & 1.16 & 0.24\\
2154 & 2.55 & 0.11 & 0.03 & 0.13 & 0.07 & 0.14\\
4642 & 3.26 & 0.03 & 0.68 & 0.01 & 0.15 & 0.08\\
10000 & 0.47 & 0.05 & 0.35 & 0.02 & 0.32 & 0.02\\
\end{tabular}
\captionof{table}{Implied volatility errors (in $\%$) for MC and MCLS with a basis of polynomials of maximal degree $5$ in the Jacobi model, for different sizes $N$ of the sample set. \label{imp_vols_Jacobi}} \end{table}
We can observe that MCLS strongly outperforms standard MC in terms of price errors, confidence interval width and implied volatility errors, for every type of moneyness, in both chosen stochastic volatility models. The last remark concerns the condition number of the Vandermonde matrix ${\mathbf V}$. Thanks to the choice of the ONB, in both models its condition number is at most of order $10$; therefore, the CG algorithm has been selected. As another consequence of the low condition number, we did not employ weighted sampling.
\subsubsection{Basket options in Black-Scholes models - medium size problems}\label{BS example} In this section we address multi-dimensional option pricing problems of medium size, meaning with number of assets $d\leq 10$ and $N\leq 10^5$. The asset prices $S^1_t,\ldots,S^d_t$ follow a $d$-dimensional Black-Scholes model, i.e.
\begin{equation}\label{sde-BS} dS_t^i=rS_t^i\,dt+\sigma_{i}S_t^i dW_{t}^i,\quad i=1,\cdots, d, \end{equation}
for some volatility values $\sigma_i$, $i=1,\ldots,d$, a risk-free interest rate $r$ and $d$ correlated Brownian motions $W_{t}^i$ with correlation parameters $\rho_{ij} \in [-1,1]$ for $i\neq j$. The state space is $E = \mathbb{R}^d_+ $ and the explicit solution of \eqref{sde-BS} is given by
\begin{equation*} S_t^i = S_0^i \exp \big ( (r-\frac{\sigma_i^2}{2})t+\sigma_i W_t^i \big ). \end{equation*}
The process $(S_t^1,\dots, S_t^d)$ is a polynomial diffusion and the moment formula is given by
\begin{equation*} {\mathbb E}[p(S_T^1,\dots, S_T^d)|\mathcal{F}_t]=H_n(S_t^1,\dots, S_t^d)e^{G_n(T-t)}\vec{p}, \end{equation*}
where the involved quantities are defined along the lines following \eqref{moment-formula}. The matrix $G_n$ can be computed with respect to the monomial basis as in the following lemma, and turns out to be diagonal, making Step 4 of Algorithm \ref{mainAlgo} even more efficient.
\begin{lemma} Let $\mathcal{H}_n$ be the monomial basis of $\text{Pol}_n(\mathbb{R}_+^d)$. Let
\begin{align*} \pi:\mathcal E\rightarrow \Bigg \{1,\ldots, \binom{n+d}{n}\Bigg\} \end{align*}
be an enumeration of the set of tuples $\mathcal E=\{\mathbf{k}\in \mathbb{N}_0^d:|\mathbf{k}|\le n\}$. Then, the matrix representation $G_n$ of the infinitesimal generator of the process $(S^1_t,\cdots, S^d_t)$ with respect to $\mathcal{H}_n$ and restricted to $\text{Pol}_n(\mathbb{R}_+^d)$ is diagonal with diagonal entries
\begin{align*} G_{\pi(\mathbf{k}),\pi(\mathbf{k})}= \frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d \sigma_i \sigma_j \rho_{ij} (k_ik_j \mathbf{1}_{i \neq j}+k_i (k_i-1)\mathbf{1}_{i=j})+r\sum_{i=1}^d k_i. \end{align*}
\begin{proof} The infinitesimal generator $\mathcal{G}$ of $(S^1_t,\cdots,S^d_t)$ is given by
\begin{equation*} \mathcal{G}f = \frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d \sigma_i \sigma_j \rho_{ij} s_i s_j \partial_{s_i s_j}f+ r\sum_{i=1}^d s_i \partial_{s_i} f, \end{equation*}
which implies that for any monomial of the form $s_1^{k_1}\cdots s_d^{k_d}$ one has
\begin{equation*} \mathcal{G} s_1^{k_1}\cdots s_d^{k_d} =s_1^{k_1}\cdots s_d^{k_d} \Big (\frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d \sigma_i \sigma_j \rho_{ij} (k_ik_j \mathbf{1}_{i \neq j}+k_i (k_i-1)\mathbf{1}_{i=j})+r\sum_{i=1}^d k_i \Big ). \end{equation*}
It follows that $G_n$ is diagonal as stated above. \end{proof} \end{lemma}
For the following numerical experiments we consider basket options with payoff function
\begin{equation}\label{payoff basket} f(s_1,\cdots, s_d) = \Big(\sum_{i=1}^d w_i s_i - K\Big)^+ \end{equation}
for different moneyness levels, with payoff parameters
\begin{equation*} K=\{0.9,1,1.1\}, \quad T=1,\quad w_i=\frac{1}{d} \enskip \forall i. \end{equation*}
Model parameters are chosen to be
\begin{equation*} S_0^i = 1 \enskip \forall i, \quad \sigma_i = \text{rand}(0,0.5) \enskip \forall i, \quad \{\rho_{ij}\}_{i,j=1}^d=R_d, \quad r=0.01, \end{equation*}
where $R_d$ denotes a random correlation matrix of size $d \times d$, and we choose $d=5$ and $d=10$. We compare MCLS to a reference price computed via a standard Monte Carlo algorithm with $10^6$ simulations. We again plot the absolute price errors and the width of the $95\%$ confidence intervals (computed as in \eqref{sigmaLS} and \eqref{Cinterval}) for different chosen polynomial degrees (maximal total degree $1$ and $3$). To be more precise, we used the monomial basis as the functions $\{\phi_{j}\}_{j=0}^n$.
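Since $G_n$ is diagonal, the quantities ${\mathbb E}[\phi_j(S_T^1,\dots,S_T^d)]$ required in Step 4 of Algorithm \ref{mainAlgo} reduce to closed-form monomial moments. The following Python sketch computes them directly from the diagonal entries given in the lemma above; the helper is hypothetical and serves only to illustrate the formula.
\begin{verbatim}
import numpy as np
from itertools import product

def bs_monomial_moments(S0, sigma, rho, r, T, deg):
    # Moments E[prod_i (S_T^i)^{k_i}] in the multivariate Black-Scholes
    # model, for all multi-indices k with |k| <= deg.
    d = len(S0)
    moments = {}
    for k in product(range(deg + 1), repeat=d):
        if sum(k) > deg:
            continue
        g = r * sum(k)                   # diagonal entry G_{pi(k),pi(k)}
        for i in range(d):
            for j in range(d):
                g += 0.5 * sigma[i] * sigma[j] * rho[i][j] * (
                    k[i] * k[j] if i != j else k[i] * (k[i] - 1))
        # G_n diagonal  =>  E[S_T^k] = S_0^k * exp(G_{pi(k),pi(k)} * T)
        moments[k] = float(np.prod(np.asarray(S0, dtype=float)
                                   ** np.asarray(k))) * np.exp(g * T)
    return moments
\end{verbatim}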
Note that the price process $(S_t^1, \cdots, S_t^d)$ is a geometric Brownian motion, whose distribution at the final time $T$ is known explicitly, so that there is no need to simulate the whole path, but only the process at time $T$. The results are shown in Figures \ref{ITM_BS}, \ref{ATM_BS} and \ref{OTM_BS}. In the legend, the number shown again indicates the maximal total degree of the basis monomials. For instance, if $d=2$ and the maximal total degree is $\operatorname{deg}=3$, this means that the basis functions $\phi_j$ are chosen to be $\{1,s_1,s_2,s_1^2, s_1s_2, s_2^2, s_1^3, s_1^2 s_2, s_1 s_2^2, s_2^3\}$.
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{ITM_5d_Price4} \includegraphics[width=0.49\textwidth]{ITM_5d_Ci4} \includegraphics[width=0.49\textwidth]{ITM_10d_Price4} \includegraphics[width=0.49\textwidth]{ITM_10d_Ci4} \caption{MCLS for ITM basket option in Black Scholes model for different dimensions and polynomial degrees. Left: absolute price errors with respect to a reference price computed with $10^6$ simulations. Right: Width of $95\%$ confidence interval. \label{ITM_BS}} \end{figure}
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{ATM_5d_Price4} \includegraphics[width=0.49\textwidth]{ATM_5d_Ci4} \includegraphics[width=0.49\textwidth]{ATM_10d_Price4} \includegraphics[width=0.49\textwidth]{ATM_10d_Ci4} \caption{MCLS for ATM basket option in Black Scholes model for different dimensions and polynomial degrees. Left: absolute price errors with respect to a reference price computed with $10^6$ simulations. Right: Width of $95\%$ confidence interval. \label{ATM_BS}} \end{figure}
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{OTM_5d_Price4} \includegraphics[width=0.49\textwidth]{OTM_5d_Ci4} \includegraphics[width=0.49\textwidth]{OTM_10d_Price4} \includegraphics[width=0.49\textwidth]{OTM_10d_Ci4} \caption{MCLS for OTM basket option in Black Scholes model for different dimensions and polynomial degrees. Left: absolute price errors with respect to a reference price computed with $10^6$ simulations. Right: Width of $95\%$ confidence interval. \label{OTM_BS}} \end{figure}
We observe that also in these multidimensional examples MCLS strongly outperforms standard MC in terms of absolute price errors and width of the confidence intervals. Due to the use of the multivariate monomials as basis functions, the condition number of ${\mathbf V}$ is relatively high, reaching values up to order $10^5$. However, the QR-based algorithm, chosen according to the selection scheme in Figure \ref{scheme-algo} for the numerical solution of the least-squares problem \eqref{LS}, still yields accurate results. The Vandermonde matrix ${\mathbf V}$ is here still storable, being of size at most $10^5 \times 286$. In the next section we treat problems of higher dimensionality, leading to a Vandermonde matrix of larger size. There, its storage is no longer feasible and neither the CG nor the QR-based solver can be used.
\subsubsection{Basket options in Black-Scholes models - large size problems}\label{BS example2} In the multivariate Black-Scholes model we now consider rainbow options with payoff function
\begin{equation*} f(s_1,\cdots, s_d) = (K-\min(s_1,\cdots, s_d))^+, \end{equation*}
so that we apply MCLS in order to compute the quantity
\begin{align*} {\rm e}^{-rT}\mathbb{E}[ (K-\min(S_T^1,\cdots,S_T^d))^+] = {\rm e}^{-rT}\int_{{\mathbb R}^d_+} (K-\min(s_1,\cdots,s_d))^+ d\mu(s_1, \cdots, s_d), \end{align*}
where $\mu$ is the distribution of $(S_T^1, \cdots, S_T^d)$.
In contrast to the payoff \eqref{payoff basket}, which presents one type of irregularity, deriving from taking the positive part $(\cdot)^+$, this payoff function presents two types of irregularities: one again due to $(\cdot)^+$, and a second one deriving from the $\min(\cdot)$ function. This example is therefore more challenging. As in \cite{nakatsukasa2018approximate}, we rewrite the option price with respect to the Lebesgue measure
\begin{equation*} {\rm e}^{-rT}\int_{[0,1]^d} \bigg(K-\min_{i=1,\dots,d}\Big(S_0^i \exp \big ( (r-\frac{\sigma_i^2}{2})T+\sigma_i \sqrt{T}\, \big(\mathbf{L} \Phi^{-1}({\mathbf x})\big)_i \big ) \Big)\bigg)^+ d{\mathbf x}, \end{equation*}
where $\mathbf{L}$ is the Cholesky factor of the correlation matrix and $\Phi^{-1}$ denotes the inverse of the standard normal cumulative distribution function, applied componentwise. The model and payoff parameters are chosen to be
\begin{equation*} S_0^i=1,\quad K=1, \quad \sigma_i=0.2\enskip \forall i,\quad \{\rho_{ij}\}_{i,j=1}^d= I_d, \quad T=1,\quad r=0.01, \end{equation*}
so that we consider an option on uncorrelated assets. We apply MCLS for $d=\{5, 10, 20\}$ using different total degrees for the approximating polynomial space and compare it with a reference price computed using the standard MC algorithm with $10^7$ simulations. Also, we consider different numbers of simulations that go up to $10^6$. We choose a basis of tensorized Legendre polynomials, which form an ONB with respect to the Lebesgue measure on the unit cube $[0,1]^d$, and we perform the sampling step of MCLS (step 1) according to the optimal distribution as introduced in \cite{cohen2017optimal} and reviewed in Section \ref{sec-optimally}. The solver for the least-squares problem is chosen according to the scheme shown in Figure~\ref{scheme-algo}, where we assume that the Vandermonde matrix ${\mathbf V}$ can be stored whenever the number of entries is less than $10^8$. This implies, for example, that for the case $d=5$ with polynomial degree $5$ and $10^6$ simulations ${\mathbf V}$ cannot be stored. Indeed, for $d=5$, $\text{deg}=5$ and $N=10^6$ the matrix ${\mathbf V}$ has $2.52 \cdot 10^8$ entries. For all of these cases, we therefore solve the least-squares problem by applying the randomized extended Kaczmarz algorithm. In Figure~\ref{ATM_Kaczmarz} we plot the obtained absolute price errors and the width of the $95\%$ confidence intervals for all considered problems. We notice that MCLS again outperforms MC in terms of confidence interval width and price errors, as observed for medium dimensions. The choice of the weighted sampling strategy combined with the ONB allowed us to obtain a well-conditioned matrix ${\mathbf V}$, according to the theory presented in the previous sections. These examples and the obtained numerical results therefore show that our extension of MCLS is effective and allows us to efficiently price single and multi-asset European options. In the next section we test our extended MCLS in a slightly different setting, where the integrand is smooth.
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{d5_Price} \includegraphics[width=0.49\textwidth]{d5_CI} \includegraphics[width=0.49\textwidth]{d10_Price2} \includegraphics[width=0.49\textwidth]{d10_CI} \includegraphics[width=0.49\textwidth]{d20_Price2} \includegraphics[width=0.49\textwidth]{d20_CI2} \caption{MCLS for rainbow options in Black-Scholes model for different dimensions and polynomial degrees.
Left: absolute price errors with respect to a reference price computed with $10^7$ simulations. Right: width of the $95\%$ confidence interval. \label{ATM_Kaczmarz}} \end{figure}
\section{Application to high-dimensional integration}\label{sec-high dimensional application} In this section we apply the extended MCLS algorithm to compute the definite integral
\begin{equation}\label{eq:sin integral} \int_{[0,1]^d} \sin \Big (\sum_{j=1}^d x_j \Big ) d{\mathbf x}. \end{equation}
This is a classical integration problem~\cite{genz1984testing}, which was also considered in~\cite{nakatsukasa2018approximate}, where MCLS is applied to compute \eqref{eq:sin integral} for dimension at most $d=6$ and with at most $N=10^5$ simulations. Our goal is to show that, thanks to the use of REK, we can increase the dimension $d$ and the number of simulations $N$. Here we apply MCLS for $d=10$ and $d=30$ using a basis of tensorized Legendre polynomials of total degree $5$ and $4$, respectively. We compare it to the reference result, which for $d\equiv 2 \pmod 4$ is explicitly given by
\begin{equation*} \int_{[0,1]^d} \sin \Big (\sum_{j=1}^d x_j \Big ) d{\mathbf x} = \sum_{j=0}^d (-1)^{j+1} \binom{d}{j} \sin(j). \end{equation*}
Also, we consider different sample sizes that go up to $10^7$. We perform the sampling step of MCLS (step 1) again according to the optimal distribution. The choice of the solver for the least-squares problem is again made according to the scheme in Figure~\ref{scheme-algo}, and we assume that the Vandermonde matrix ${\mathbf V}$ can be stored whenever the number of entries is less than $10^8$. The results are shown in Figure \ref{SinFunction}. Again, we have plotted the obtained absolute error computed with respect to the reference result (left) and the width of the $95\%$ confidence interval (right). First, we note that MCLS performs much better than standard MC, as in the previous examples. Furthermore, the results are considerably better than the ones obtained in the previous section, see Figure \ref{ATM_Kaczmarz}. This is due to the fact that the integrand is now smooth (in fact, entire), while in the multi-asset option example it was only continuous. Indeed, the function approximation error $\min_{\mathbf{c} \in {\mathbb R}^{n+1}} \|\sqrt{w} (f-\sum_{j=0}^n c_j \phi_j)\|_\mu$ is expected to be much smaller in this case, since polynomials provide a more suitable approximation space for smooth functions than for irregular ones. According to Proposition \ref{thm_convW}, this results in a stronger variance reduction and hence a better approximation of the integral.
\begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{SinFunctionErr_d10} \includegraphics[width=0.49\textwidth]{SinFunctionCI_d10} \includegraphics[width=0.49\textwidth]{SinFunctionErr_d30} \includegraphics[width=0.49\textwidth]{SinFunctionCI_d30} \caption{MCLS for integrating the function $\sin(\sum_{j=1}^d x_j)$ for $d=\{10,30\}$ and different polynomial degrees. Left: absolute errors with respect to a reference result given in closed form. Right: width of the $95\%$ confidence interval. \label{SinFunction}} \end{figure}
\section{Conclusion and future work}\label{sec-conclusion} We have presented a numerical technique to price single and multi-asset European options and, more generally, to compute expectations of functions of multivariate random variables.
The methodology consists of extending MCLS to arbitrary probability measures and combining it with weighted sampling strategies and the randomized extended Kaczmarz algorithm. The core concepts and algorithms have been presented in Section \ref{sec-MCLS}. In Section \ref{sec-cost analysis} we have proposed a new cost analysis. Here, we have shown that MCLS asymptotically outperforms the standard Monte Carlo method as the cost goes to infinity, provided that the integrand satisfies certain regularity conditions. In Section \ref{sec-MCLS option pricing} we have applied the new method to price European options. First, we have adapted the generalization to compute multi-asset option prices, where we have proposed to modify the sampling step of MCLS by discretizing the governing SDE of the underlying price process, whenever needed. The modification of the first step introduces a new source of error, which has been analyzed in Proposition \ref{prop:disc error}. In Sections \ref{JH example}, \ref{BS example} and \ref{BS example2} we have applied the algorithm to price European options in the Heston model, in the Jacobi model and in the multidimensional Black-Scholes model, where we have exploited the fact that they belong to the class of polynomial diffusions, so that the moments can be computed in closed form. For these examples, MCLS usually provides considerably higher accuracy than standard MC for the same sample size. This holds both when accuracy is measured in terms of implied volatility, see Table \ref{imp_vols_Heston} and Table \ref{imp_vols_Jacobi}, and in terms of option price errors and confidence interval widths, see for instance Figures \ref{ITM_Heston}-\ref{OTM_Jacobi} and Figure \ref{ATM_Kaczmarz}. As expected, enlarging the number of basis functions $n$ for a given sample size $N$ leads to more accurate results. Moreover, in Section \ref{BS example2} employing REK allowed us to solve high-dimensional problems with high accuracy; for instance, our experiments for options on $20$ assets show that enlarging the number of basis functions yields higher accuracy in terms of confidence intervals. Finally, in Section \ref{sec-high dimensional application} we considered the approximation of a multidimensional integral of a smooth function. Storage requirements limit the feasibility of the basic MCLS in high dimensions. Indeed, in \cite{nakatsukasa2018approximate} only cases with maximal dimension $d=6$ and $N=10^5$ could be treated. Thanks to the application of REK we were able to treat dimension $d=30$ and $N$ up to $10^7$. This illustrates the effectiveness of our extended approach. To extend the approach further to even higher dimensions, other computational bottlenecks that arise need to be addressed. Solving the storage issue in the least-squares problem with REK leaves us with a high number of function calls: we do not need to store the full Vandermonde matrix, but its rows and columns are required many times during the iteration. This leads to a high computational cost. One can reduce this cost by 1) reducing the number of function calls and by 2) making the function calls more efficient. To achieve 1), one can for instance store the rows and columns of the Vandermonde matrix which are called with highest probability.
To achieve 2), one can exploit further structure of the functions, for instance using a low-rank approximation~\cite{grasedyck2013literature} or functional analogues of tensor decompositions~\cite{gorodetsky2019continuous}.
\section*{Appendix} Here, we present the proof of Proposition \ref{thm_convW}.
\begin{proof} Note that the approximate function $\sum_{j=0}^n\hat{c}_j\phi_j$, and thus $\hat{I}_{\mu,N}$, only depends on the span of the basis functions $ \{\phi_j\}_{j=0}^n$ and not on the specific choice of the basis. Therefore, without loss of generality we can assume that the chosen basis functions $ \{\phi_j\}_{j=0}^n$ form an orthonormal basis (ONB) in $L^2_\mu$, i.e. $\int_E \phi_i({\mathbf x}) \phi_j({\mathbf x}) d\mu({\mathbf x})=\delta_{ij}$. We decompose the function $f$ into a sum of orthogonal terms
\begin{equation}\label{fdecomp} f=\sum_{j=0}^n c^\ast_j \phi_j + g=: f_1+g, \end{equation}
where $g$ satisfies $\int_E g({\mathbf x}) \phi_j({\mathbf x}) d\mu({\mathbf x})=0$ for all $j=0,\cdots, n$. Note that $\|g\|_\mu=\min_{{\mathbf c} \in {\mathbb R}^{n+1}}\|f-\sum_{j=0}^nc_j\phi_j\|_\mu$. Assume now that we sample according to $\frac{d\mu}{w}$ and obtain the points $\{\tilde{{\mathbf x}}_i\}_{i=1}^N$. Then, the vector of sample values in the weighted least-squares problem can be decomposed as
\[ \tilde{{\mathbf f}}= [\tilde{f}_1(\tilde{{\mathbf x}}_1)+\tilde{g}(\tilde{{\mathbf x}}_1),\ldots,\tilde{f}_1(\tilde{{\mathbf x}}_N)+\tilde{g}(\tilde{{\mathbf x}}_N)]^T =\widetilde{{\mathbf V}} \mathbf{c}^\ast+\tilde{{\mathbf g}}, \]
where $\widetilde{{\mathbf V}}$ and $\tilde{{\mathbf f}}$ are defined as in \eqref{eq:MCLSweight}, $\tilde{f}_1:=\sqrt{w}f_1$ and $\tilde{g}:=\sqrt{w}g$, and hence
\[\tilde{{\mathbf g}}=[\sqrt{w(\tilde{{\mathbf x}}_1)}g(\tilde{{\mathbf x}}_1),\dots, \sqrt{w(\tilde{{\mathbf x}}_N)}g(\tilde{{\mathbf x}}_N)]^T.\]
Let $\hat{\mathbf{c}}$ again be the least-squares solution of~\eqref{eq:MCLSweight}; then
\[\hat{\mathbf{c}}=\text{argmin}_{\mathbf{c} \in {\mathbb R}^{n+1}} \| \widetilde{{\mathbf V}} \mathbf{c} - (\widetilde{{\mathbf V}} \mathbf{c}^\ast+\tilde{{\mathbf g}})\|_2 = (\widetilde{{\mathbf V}}^T\widetilde{{\mathbf V}})^{-1}\widetilde{{\mathbf V}}^T(\widetilde{{\mathbf V}} \mathbf{c}^\ast+\tilde{{\mathbf g}})= \mathbf{c}^\ast+ (\widetilde{{\mathbf V}}^T\widetilde{{\mathbf V}})^{-1}\widetilde{{\mathbf V}}^T\tilde{{\mathbf g}}, \]
where the second summand is exactly $\mathbf{c}_g:=\mbox{argmin}_{\mathbf{c}\in {\mathbb R}^{n+1}} \|\widetilde{{\mathbf V}}\mathbf{c} - \tilde{{\mathbf g}}\|_2 $. It thus follows that the integration error is $\hat I_{\mu,N}-I_\mu = c_{g,0}=[1,0,\ldots,0](\widetilde{{\mathbf V}}^T\widetilde{{\mathbf V}})^{-1}\widetilde{{\mathbf V}}^T\tilde{{\mathbf g}}$. Now by the strong law of large numbers we have
\begin{align*} \frac{1}{N}(\widetilde{{\mathbf V}}^T\widetilde{{\mathbf V}})_{i+1,j+1} = &\frac{1}{N}\sum_{l=1}^N w(\tilde{{\mathbf x}}_l)\phi_i(\tilde{{\mathbf x}}_l)\phi_j(\tilde{{\mathbf x}}_l) \\ &\rightarrow \int_{E}w(\tilde{{\mathbf x}})\phi_i(\tilde{{\mathbf x}})\phi_j(\tilde{{\mathbf x}})\frac{d\mu(\tilde{{\mathbf x}})}{w(\tilde{{\mathbf x}})}=\int_{E}\phi_i(\tilde{{\mathbf x}})\phi_j(\tilde{{\mathbf x}})d\mu(\tilde{{\mathbf x}}) =\delta_{ij} \end{align*}
almost surely (and hence in probability) as $N\rightarrow \infty$, by the orthonormality of $\{\phi_j\}_{j=0}^n$.
Therefore we have $\frac{1}{N}\widetilde{{\mathbf V}}^T\widetilde{{\mathbf V}}\stackrel{p}{\rightarrow} \mathbf{Id}_{n+1}$ as $N\rightarrow \infty$, where $\mathbf{Id}_{n+1}$ denotes the identity matrix in ${\mathbb R}^{(n+1)\times(n+1)}$. Moreover, $\sqrt{N}\left(\frac{1}{N}\sum_{i=1}^N w(\tilde{{\mathbf x}}_i)g(\tilde{{\mathbf x}}_i)\right) \stackrel{d}{\rightarrow} Z\sim \mathcal{N}(0,\|\sqrt{w}g\|_\mu^2)$ for $N\rightarrow \infty$ by the central limit theorem, where we used the fact $\int_E g({\mathbf x}) d\mu({\mathbf x})=0$ for the mean and $\int_{E} (w(\tilde{{\mathbf x}})g(\tilde{{\mathbf x}}))^2 \frac{d\mu(\tilde{{\mathbf x}})}{w(\tilde{{\mathbf x}})}=\|\sqrt{w}g\|_\mu^2$ for the variance. Thanks to Slutsky's theorem (see e.g. \cite[Chapter 5]{gut2013probability}) we finally obtain
\begin{equation*} \sqrt{N}(\hat I_{\mu,N}-I_\mu)= [1,0,\ldots,0] \Big(\frac{1}{N}\widetilde{{\mathbf V}}^T\widetilde{{\mathbf V}}\Big)^{-1}\frac{1}{\sqrt{N}}\widetilde{{\mathbf V}}^T\tilde{{\mathbf g}} \xrightarrow{d} \mathcal{N}(0,\|\sqrt{w}g\|_\mu^2). \end{equation*}
\end{proof} \bibliographystyle{plain}
{ "timestamp": "2019-10-17T02:10:24", "yymm": "1910", "arxiv_id": "1910.07241", "language": "en", "url": "https://arxiv.org/abs/1910.07241", "abstract": "We propose a methodology for computing single and multi-asset European option prices, and more generally expectations of scalar functions of (multivariate) random variables. This new approach combines the ability of Monte Carlo simulation to handle high-dimensional problems with the efficiency of function approximation. Specifically, we first generalize the recently developed method for multivariate integration in [arXiv:1806.05492] to integration with respect to probability measures. The method is based on the principle \"approximate and integrate\" in three steps i) sample the integrand at points in the integration domain, ii) approximate the integrand by solving a least-squares problem, iii) integrate the approximate function. In high-dimensional applications we face memory limitations due to large storage requirements in step ii). Combining weighted sampling and the randomized extended Kaczmarz algorithm we obtain a new efficient approach to solve large-scale least-squares problems. Our convergence and cost analysis along with numerical experiments show the effectiveness of the method in both low and high dimensions, and under the assumption of a limited number of available simulations.", "subjects": "Computational Finance (q-fin.CP); Computational Engineering, Finance, and Science (cs.CE)", "title": "Weighted Monte Carlo with least squares and randomized extended Kaczmarz for option pricing", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759643671934, "lm_q2_score": 0.721743200312399, "lm_q1q2_score": 0.7079405576318888 }
https://arxiv.org/abs/math/0607118
Line partitions of internal points to a conic in PG(2,q)
All sets of lines providing a partition of the set of internal points to a conic C in PG(2,q), q odd, are determined. There exist only three such linesets up to projectivities, namely the set of all nontangent lines to C through an external point to C, the set of all nontangent lines to C through a point in C, and, for square q, the set of all nontangent lines to C belonging to a Baer subplane with at least 5 common points with C. This classification theorem is the analogue of a classical result by Segre and Korchmaros characterizing the pencil of lines through an internal point to C as the unique set of lines, up to projectivities, which provides a partition of the set of all noninternal points to C. However, the proof is not analogous, since it does not rely on the famous Lemma of Tangents of Segre. The main tools in the present paper are certain partitions in conics of the set of all internal points to C, together with some recent combinatorial characterizations of blocking sets of non-secant lines, and of blocking sets of external lines.
\section{Introduction} In 1977 Segre and Korchm\'aros gave the following combinatorial characterization of external lines to an irreducible conic in $PG(2,q)$, see \cite{KS}, \cite{H} Theorem 13.40, and \cite{KK}.
\begin{theorem} \label{segrekorchmaros} If every secant and tangent of an irreducible conic meets a pointset $\mathcal L$ in exactly one point, then $\mathcal L$ is linear, that is, it consists of all points of an external line to the conic. \end{theorem}
For even $q$, this was proven independently by Bruen and Thas \cite{BT}. It is natural to ask for a similar characterization of a minimal pointset $\mathcal L$ meeting every {\em external line} to an irreducible conic $\mathrm C$ in exactly one point. In this case, we have two linear examples: a chord minus the common points with $\mathrm C$, and a tangent minus the tangency point (and, for $q$ even, minus the nucleus of $\mathrm C$ as well). For $q$ even, it is shown in \cite{MG} that there is exactly one more possibility for $\mathcal L$, namely, for any even square $q$, the set consisting of the points of a Baer subplane $\pi$ sharing $\sqrt{q}+1$ points with $\mathrm C$, minus $\pi\cap \mathrm C$ and the nucleus of $\mathrm C$. The aim of the present paper is to prove an analogous result for $q$ odd. Henceforth, $q$ is always assumed to be odd, that is, $q=p^h$ with $p>2$ prime. Then the orthogonal polarity associated to $\mathrm C$ turns $\mathcal L$ into a {\em line partition} of the set of all internal points to $\mathrm C$. In terms of a line partition, Theorem \ref{segrekorchmaros} states that if $\mathcal L$ is a line partition of the set of all noninternal points to $\mathrm C$, then $\mathcal L$ is a pencil of lines through an internal point to $\mathrm C$. Our main result is the following theorem.
\begin{theorem}\label{main} Let $\mathcal L$ be a line partition of the set of internal points to a conic $\mathrm C$ in $PG(2,q)$, $q$ odd. Then either
\begin{itemize} \item $\#\mathcal L=q-1$, and $\mathcal L$ consists of the $q-1$ lines through an external point of $\mathrm C$ which are not tangent to $\mathrm C$, or \item $\#\mathcal L=q$, and $\mathcal L$ consists of the $q$ lines through a point of $\mathrm C$ distinct from the tangent to $\mathrm C$, or \item $\#\mathcal L=q$ for a square $q$, and $\mathcal L$ consists of all nontangent lines belonging to a Baer subplane $PG(2,\sqrt q)$ with $\sqrt{q}+1$ common points with $\mathrm C$. \end{itemize} \end{theorem}
\section{Internal points to a conic} In this section a certain partition in conics of the internal points to a conic $\mathrm C$ in $PG(2,q)$, $q$ odd, is investigated. Assume without loss of generality that $\mathrm C$ has affine equation $Y=X^2$, and denote by $Y_\infty$ the infinite point of $\mathrm C$. Consider the pencil of conics $\mathcal F$ consisting of the conics $\mathrm C_s:Y=X^2-s$, with $s$ ranging over ${\mathbb F}_q$. First, an elementary property of $\mathcal F$ which will be useful in the sequel is pointed out.
\begin{lemma}\label{oss2} Any line of $PG(2,q)$ not passing through $Y_\infty$ is tangent to exactly one conic of $\mathcal F$. \end{lemma}
{\em Proof.} It is enough to note that the line of equation $Y=\alpha X+ \beta$ is tangent to $\mathrm C_s$ if and only if $s=-\frac{\alpha^2+4\beta}{4}$.\vspace{.3cm}\hfill$\blacksquare$
Recall that in the finite field ${\mathbb F}_q$ half the non-zero elements are quadratic residues or squares, and half are quadratic non-residues or non-squares.
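For instance, in ${\mathbb F}_7$ one has $1^2=6^2=1$, $3^2=4^2=2$ and $2^2=5^2=4$, so that the quadratic residues are $1,2,4$ and the quadratic non-residues are $3,5,6$.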
The quadratic character of ${\mathbb F}_q$ is the function $\chi$ given by
$$ \chi(x)=\left\{\begin{array}{rl}0& \text{if }x=0\,,\\ 1& \text{if }x \text{ is a quadratic residue,}\\ -1& \text{if }x \text{ is a quadratic non-residue.} \end{array}\right. $$
\begin{lemma}\label{l1} Let $\mathrm C_{s}$ and $\mathrm C_{s'}$ be two distinct conics in $\mathcal F$. Then the affine points of $\mathrm C_{s'}$ are all either external or internal to $\mathrm C_{s}$, according to whether $\chi(s'-s)=1$ or $\chi(s'-s)=-1$. \end{lemma}
{\em Proof.} Let $P=(a,a^2-s')$ be an affine point of $\mathrm C_{s'}$. The polar line $l_P$ of $P$ with respect to $\mathrm C_{s}$ has equation $Y=2aX-a^2+s'-2s$. Substituting into the equation of $\mathrm C_s$ gives $(X-a)^2=s'-s$, whence $l_P$ does not meet $\mathrm C_s$ if and only if $s'-s$ is a non-square in ${\mathbb F}_q$. \vspace{.3cm}\hfill$\blacksquare$
As a matter of terminology, we will say that a conic $\mathrm C_s$ is {\em internal} ({\em external}) to $\mathrm C_{s'}$ if all the affine points of $\mathrm C_s$ are internal (external) to $\mathrm C_{s'}$. Let ${\mathcal I}=\{\mathrm C_s\mid \chi(s)=-1\}$. Clearly, the set of internal points to $\mathrm C$ consists of the affine points of the conics in ${\mathcal I}$. Throughout the rest of this section we assume that $q\equiv 3 \pmod 4$. Note that this is equivalent to $\chi(-1)=-1$, see \cite{H}. Then Lemma \ref{l1} yields that $\mathrm C_s$ is internal to $\mathrm C_{s'}$ if and only if $\mathrm C_{s'}$ is external to $\mathrm C_{s}$.
\begin{lemma}\label{nextern} Let $\mathrm C_{s}$ be a conic in ${\mathcal I}$. If $q\equiv 3 \pmod 4$, then there are exactly $\frac{q-3}{4}$ conics in ${\mathcal I}$ that are internal to $\mathrm C_s$. \end{lemma}
{\em Proof.} The hypothesis $q\equiv 3 \pmod 4$ yields that for any $s\in {\mathbb F}_q$, $s\neq 0$, there are exactly $\frac{q-3}{4}$ ordered pairs $(u,v)\in {\mathbb F}_q \times {\mathbb F}_q$ with $s=u-v$ and $\chi(u)=\chi(v)=1$ (see e.g. \cite[Lemma 1.7]{WALL}). Via the correspondence $s'=-v$, the number of such pairs equals the number of $s'\in {\mathbb F}_q$ satisfying $\chi(s')=\chi(s'-s)=-1$. Then the assertion follows from Lemma \ref{l1}. \vspace{.3cm}\hfill$\blacksquare$
Denote by ${\mathcal I}_s$ the set of conics of ${\mathcal I}$ which are internal to $\mathrm C_s$. The following lemma will be crucial in the proof of Theorem \ref{main}.
\begin{lemma}\label{matrice} Let $q\equiv 3 \pmod 4$. Then any integer function $\varphi$ on ${\mathcal I}$ such that
\begin{equation}\label{chiave0} \sum_{\mathrm C_{s'}\in{\mathcal I}_s}\varphi(\mathrm C_{s'})= \sum_{\mathrm C_{s'}\in{\mathcal I}\setminus{\mathcal I}_s,\mathrm C_{s'}\neq\mathrm C_s} \varphi(\mathrm C_{s'}),\quad \text{ for any }\mathrm C_s\in {\mathcal I} \end{equation}
is constant. \end{lemma}
{\em Proof.} Let $\{s_1, s_2,\ldots,s_{\frac{q-1}{2}}\}$ be the set of non-squares in ${\mathbb F}_q$, and let $A=(a_{ij})$ be the $\frac{q-1}{2}\times \frac{q-1}{2}$ matrix given by
$$ a_{ij}=\chi(s_i-s_j)\,. $$
Then by Lemma \ref{l1}, condition (\ref{chiave0}) is equivalent to
$$ \sum_{\chi(s_i-s_j)=-1}\varphi(\mathrm C_{s_i})= \sum_{\chi(s_i-s_j)=1} \varphi(\mathrm C_{s_i}),\quad \text{ for any }j=1,\ldots,\frac{q-1}{2}\,, $$
that is, the vector $(\varphi(\mathrm C_{s_1}),\ldots,\varphi(\mathrm C_{s_i}),\ldots,\varphi(\mathrm C_{s_{\frac{q-1}{2}}}))$ belongs to the null space of $A$. Clearly, if $\varphi$ is constant then such a condition is fulfilled, by Lemma \ref{nextern}.
Then to prove the assertion, it is enough to show that the real rank of $A$ is at least $\frac{q-1}{2}-1$. As usual, denote by $A_{1,1}$ the matrix obtained from $A$ by deleting the first row and the first column. Note that as the entries of $A_{1,1}$ are integers, $Det(A_{1,1})\pmod 2$ coincides with $Det({\tilde A_{1,1}})$, where ${\tilde A_{1,1}}$ is the matrix over the finite field with $2$ elements obtained from $A_{1,1}$ by substituting each entry $m_{ij}$ with $m_{ij}\pmod 2$. By definition of $A$, the entries of ${\tilde A_{1,1}}$ are equal to $1$, except those on the diagonal, which are equal to zero. As $\frac{q-1}{2}-1$ is even, it is straightforward to check that ${\tilde A_{1,1}}^2$ is the identity matrix, whence $Det({\tilde A_{1,1}})=1$ and $Det(A_{1,1})$, being odd, is different from $0$. \vspace{.3cm}\hfill$\blacksquare$
\section{Proof of Theorem \ref{main}} Throughout, $\mathrm C$ is an irreducible conic in $PG(2,q)$, $q$ odd, and $\mathcal L$ is a line partition of the set of internal points to $\mathrm C$. First, the possible sizes of $\mathcal L$ are determined.
\begin{lemma}\label{easy} The size of $\mathcal L$ is either $q-1$ or $q$. In the latter case, $\mathcal L$ consists of $q$ secant lines to $\mathrm C$. \end{lemma}
{\em Proof.} The number of internal points to a conic is $q(q-1)/2$, see \cite{H}. Also, a secant line of $\mathrm C$ contains $(q-1)/2$ internal points of $\mathrm C$, whereas the number of internal points on an external line is $(q+1)/2$. No internal point belongs to a tangent to $\mathrm C$. Let $\mathcal L$ consist of $h$ secants together with $k$ external lines to $\mathrm C$. As $\mathcal L$ is a line partition of the internal points to $\mathrm C$,
$$ \frac{q(q-1)}{2}=h\frac{q-1}{2}+k\frac{q+1}{2}\,, $$
that is
$$ q=h+k+\frac{2k}{q-1}\,. $$
As $\frac{2k}{q-1}$ is an integer, either $k=0$ and $h=q$, or $k=(q-1)/2=h$, in which case $\#\mathcal L=h+k=q-1$. \vspace{.3cm}\hfill$\blacksquare$
The classification problem for $\#\mathcal L=q-1$ is solved via the characterization of blocking sets of minimal size of the external lines to a conic, as given in \cite{AK1}. The dual of Theorem 1.1 in \cite{AK1} reads as follows.
\begin{proposition}\label{agkor1} Let ${\mathcal R}$ be a lineset of size $q-1$ such that any internal point to $\mathrm C$ belongs to some line of ${\mathcal R}$. If either $q=3$ or $q>9$, then ${\mathcal R}$ consists of the $q-1$ lines through an external point of $\mathrm C$ which are not tangent to $\mathrm C$. For $q=5,7$ there exists just one more example, up to projectivities, for which some of the lines in ${\mathcal R}$ are external to $\mathrm C$. \end{proposition}
From now on, assume that $\#\mathcal L=q$. Note that Lemma \ref{easy} yields that every line of $\mathcal L$ is a secant line of $\mathrm C$. We first deal with the case $q\equiv 3 \pmod 4$.
\begin{lemma}\label{lem1} Let $\#\mathcal L=q$. If $q\equiv 3 \pmod 4$, then the number of lines of $\mathcal L$ through any point $P$ of $\mathrm C$ is $1$, $\frac{q+1}{2}$ or $q$. \end{lemma}
{\em Proof.} We keep the notation of Section 2. Assume without loss of generality that $\mathrm C$ has equation $X^2-Y=0$, and that $P=Y_\infty$. Let $\mathcal L_P$ be the set of lines of $\mathcal L$ passing through $P$, and set $m=\#\mathcal L_P$. Also, for any $l\in \mathcal L\setminus \mathcal L_P$, denote by $\mathrm C^{(l)}$ the conic of $\mathcal F$ which is tangent to $l$ according to Lemma \ref{oss2}.
As any secant $l$ of $\mathrm C$ not passing through $P$ contains an odd number of internal points to $\mathrm C$, namely $\frac{q-1}{2}$, and these points split into point pairs on the conics of ${\mathcal I}$ together with possibly the tangency point of $l$ and $\mathrm C^{(l)}$, the conic $\mathrm C^{(l)}$ belongs to ${\mathcal I}$. We claim that for any $\mathrm C_s\in {\mathcal I}$ and for any $l\in \mathcal L\setminus \mathcal L_P$, $l$ not tangent to $\mathrm C_s$,
\begin{equation}\label{frase}
\mathrm C_s \text{ is external to } \mathrm C^{(l)} \text{ if and only if } l \text{ is a secant of } \mathrm C_s\,.
\end{equation}
Clearly, if $l$ is a secant of $\mathrm C_s$, then both the points of $\mathrm C_s\cap l$ are external to $\mathrm C^{(l)}$. Therefore $\mathrm C_s$ is external to $\mathrm C^{(l)}$. To prove the only if part of (\ref{frase}), note that for any $l\in \mathcal L\setminus \mathcal L_P$ the set of $\frac{q-1}{2}$ points of $l$ which are internal to $\mathrm C$ consists of one point lying on $\mathrm C^{(l)}$ together with $\frac{q-3}{4}$ point pairs, each of which is contained in a conic of ${\mathcal I}$. Taking into account Lemma \ref{nextern}, this means that $l$ is a secant of all the conics of ${\mathcal I}$ that are external to $\mathrm C^{(l)}$.

Now, for any $\mathrm C_s\in {\mathcal I}$ let $\varphi(\mathrm C_s)$ be the number of lines of $\mathcal L$ which are tangent to $\mathrm C_s$. Then,
\begin{equation}\label{chiave}
\sum_{\mathrm C_{s'}\in{\mathcal I}_s}\varphi(\mathrm C_{s'})= \sum_{\mathrm C_{s'}\in{\mathcal I}\setminus{\mathcal I}_s,\mathrm C_{s'}\neq\mathrm C_s} \varphi(\mathrm C_{s'}),\quad \text{ for any }\mathrm C_s\in {\mathcal I}\,.
\end{equation}
In fact, (\ref{frase}) yields that $\sum_{\mathrm C_{s'}\in{\mathcal I}_s}\varphi(\mathrm C_{s'})$ equals the number of lines in $\mathcal L\setminus \mathcal L_P$ which are secants to $\mathrm C_s$, and this number is $\frac{q-m-\varphi(\mathrm C_s)}{2}$: the $q$ affine points of $\mathrm C_s$ are internal to $\mathrm C$, and each of the $m$ lines of $\mathcal L_P$ meets $\mathrm C_s$ in exactly one of them, whence $q$ equals $m+\varphi(\mathrm C_s)$ plus twice the number of such secants. As every line of $\mathcal L\setminus\mathcal L_P$ is tangent to exactly one conic of ${\mathcal I}$, the total number of lines in $\mathcal L$ which are tangent to a conic of ${\mathcal I}$ distinct from $\mathrm C_s$ is $q-m-\varphi(\mathrm C_s)$, and Equation (\ref{chiave}) follows.

Then by Lemma \ref{matrice}, $\varphi(\mathrm C_s)$ is an integer which is independent of $\mathrm C_s$. Denote this integer by $t$. By Lemma \ref{oss2},
\begin{equation}\label{somma}
\sum_{\mathrm C_s\in {\mathcal I}}\varphi(\mathrm C_s)=t\frac{q-1}{2}=q-m\,,
\end{equation}
which implies that either (a) $t=2$ and $m=1$, (b) $t=0$ and $m=q$, or (c) $t=1$ and $m=\frac{q+1}{2}$.
\vspace{.3cm}\hfill$\blacksquare$

\begin{lemma}\label{lem2} Let $\#\mathcal L=q$. If $q\equiv 3 \pmod 4$, then no point of $\mathrm C$ belongs to exactly $\frac{q+1}{2}$ lines of $\mathcal L$.
\end{lemma}
{\em Proof.} We keep the notation of the proof of Lemma \ref{lem1}. Also, for $Q\in \mathrm C$ let $m_Q$ be the number of lines of $\mathcal L$ passing through $Q$. Assume that $m_P=\frac{q+1}{2}$, with $P=Y_\infty$. As every line of $\mathcal L$ is a secant of $\mathrm C$, $\sum_{Q\in \mathrm C}m_Q=2q$; hence Lemma \ref{lem1} yields that there exists another point ${\bar P}\in \mathrm C$ belonging to exactly $\frac{q+1}{2}$ lines of $\mathcal L$, and that $m_Q=1$ for any point $Q\in\mathrm C$, $Q\notin \{P,{\bar P}\}$. As the projective group of $\mathrm C$ is sharply $3$-transitive on the points of $\mathrm C$ (see e.g. \cite{H}), we may assume that ${\bar P}$ coincides with $(0,0)$.

Note that, as $m_P+m_{\bar P}=q+1>\#\mathcal L$, the line $X=0$ joining $P$ and ${\bar P}$ belongs to $\mathcal L$; since the tangent $Y=0$ contains no internal points, exactly $\frac{q-1}{2}$ lines of equation $Y=uX$ with $u\neq 0$ belong to $\mathcal L$. Let ${\mathcal A}$ be the subset of ${\mathbb F}_q\setminus \{0\}$ consisting of these $\frac{q-1}{2}$ elements $u$. The lines in $\mathcal L_P$, being secants of $\mathrm C$ through $Y_\infty$, are vertical lines of equation $X=v$. For $u\in{\mathcal A}$, the line $X=u$ cannot belong to $\mathcal L$, as it would meet the line $Y=uX\in\mathcal L$ at the point $(u,u^2)$ of $\mathrm C$, against $m_Q=1$ for $Q\notin\{P,{\bar P}\}$. As $\#\mathcal L_P=\frac{q+1}{2}=\#({\mathbb F}_q\setminus{\mathcal A})$, the lines in $\mathcal L_P$ are precisely those of equation $X=v$, with $v$ ranging over ${\mathbb F}_q \setminus {\mathcal A}$.
Actually, ${\mathbb F}_q\setminus {\mathcal A}$ coincides with $\{-u\mid u \in{\mathcal A}\}\cup \{0\}$. In fact, $u\in {\mathcal A}$ yields $-u\notin {\mathcal A}$: otherwise the two lines of equation $Y= u X$ and $Y=-uX$ would both be lines of $\mathcal L$ tangent to the same conic $\mathrm C_{-{u^2}/{4}}$, which is impossible by the proof of Lemma \ref{lem1}, as $m_P=\frac{q+1}{2}$ yields that each conic in ${\mathcal I}$ has exactly one tangent line in $\mathcal L\setminus \mathcal L_P$. As $\#(\{-u\mid u\in{\mathcal A}\}\cup\{0\})=\frac{q+1}{2}=\#({\mathbb F}_q\setminus{\mathcal A})$, the two sets coincide.

Then, for any $u_1,u_2\in {\mathcal A}$, $u_1\neq u_2$, the lines $Y=u_1X$ and $X=-u_2$, as well as the lines $Y=u_2X$ and $X=-u_1$, belong to $\mathcal L$ and hence meet in an external point to $\mathrm C$: two distinct lines of $\mathcal L$ cannot share an internal point, and the common point $(-u_2,-u_1u_2)$ lies on $\mathrm C$ only if $u_1=-u_2$, which is excluded as $-u_2\notin{\mathcal A}$. That is,
$$
\chi(u_1^2+u_1u_2)=\chi(u_2^2+u_2u_1)=1 \,.
$$
Equivalently, for any $u_1,u_2\in {\mathcal A}$, $u_1\neq u_2$,
$$
\chi(u_1)\chi(u_1+u_2)=\chi(u_2)\chi(u_1+u_2)=1 \,,
$$
whence all the elements in ${\mathcal A}$ and all the sums of two distinct elements in ${\mathcal A}$ have the same quadratic character. But this is actually impossible: $q\equiv 3 \pmod 4$ yields that for any $u_1\in {\mathbb F}_q\setminus \{0\}$ and $\epsilon \in \{-1,1\}$, the number of $u_2\in {\mathbb F}_q$ such that $\chi(u_2)=\chi(u_1+u_2)=\epsilon$ is $\frac{q-3}{4}$ (see e.g. \cite[Lemma 1.7]{WALL}), whereas all the $\frac{q-3}{2}$ elements of ${\mathcal A}\setminus\{u_1\}$ would have this property.
\vspace{.3cm}\hfill$\blacksquare$

\begin{proposition}\label{p2} Let $\#\mathcal L=q$. If $q\equiv 3 \pmod 4$, then $\mathcal L$ consists of the $q$ lines through a point of $\mathrm C$ distinct from the tangent to $\mathrm C$.
\end{proposition}
{\em Proof.} By Lemmas \ref{lem1} and \ref{lem2}, the number $m_P$ of lines of $\mathcal L$ through a given point $P\in \mathrm C$ is either $1$ or $q$. If $m_P=1$ held for every $P\in \mathrm C$, then, as every line of $\mathcal L$ is a secant of $\mathrm C$, counting incidences would give $\#\mathcal L=\frac{q+1}{2}<q$, a contradiction. Then there exists a point $P_0$ with $m_{P_0}=q$, which proves the assertion.
\vspace{.3cm}\hfill$\blacksquare$

Assume now that $q\equiv 1 \pmod 4$. We first prove that any line partition of size $q$ of the internal points of $\mathrm C$ actually covers all the points of $\mathrm C$ as well.

\begin{lemma}\label{key} Let $\#\mathcal L=q$. If $q\equiv 1 \pmod 4$, then any point of $\mathrm C$ belongs to some line of $\mathcal L$.
\end{lemma}
{\em Proof.} We keep the notation of Section 2. Assume that a point $P\in \mathrm C$ does not belong to any line of $\mathcal L$. Without loss of generality, let $P=Y_\infty$. Then the $q$ affine points of any conic $\mathrm C_s\in {\mathcal I}$ are partitioned into the sets $l\cap \mathrm C_s$, with $l$ ranging over $\mathcal L$. As $q$ is odd, there exists a line $l_s\in \mathcal L$ which is tangent to $\mathrm C_s$. On the other hand, any line of $\mathcal L$ contains an even number of internal points to $\mathrm C$, as $(q-1)/2$ is even; since these points are distributed among the conics of ${\mathcal I}$ in tangency points and point pairs, every line of $\mathcal L$ is tangent to an even number of conics of ${\mathcal I}$. Hence $l_s$ is tangent to more than one conic of $\mathcal F$, a contradiction to Lemma \ref{oss2}.\vspace{.3cm}\hfill$\blacksquare$

To complete our investigation for $q\equiv 1\pmod 4$, the combinatorial characterization of blocking sets of the non-secant lines to $\mathrm C$, as given in \cite{AK2}, is needed. The dual of the theorem in \cite{AK2} reads as follows.

\begin{lemma}\label{agkor2} Let ${\mathcal R}$ be a set of $q$ lines such that any non-external point to $\mathrm C$ belongs to some line of ${\mathcal R}$. Then one of the following occurs.
\begin{itemize}
\item[{\rm (a)}] ${\mathcal R}$ consists of $q$ lines through a point of $\mathrm C$ distinct from the tangent to $\mathrm C$;
\item[{\rm (b)}] ${\mathcal R}$ consists of the lines of a subgeometry $PG(2,\sqrt q)$ which are not tangent to $\mathrm C$;
\item[{\rm (c)}] ${\mathcal R}$ consists of the $q-1$ lines through an external point $P$ to $\mathrm C$ which are not tangent to $\mathrm C$, together with the polar line of $P$ with respect to $\mathrm C$.
\end{itemize}
\end{lemma}

\begin{proposition}\label{p3} Let $\#\mathcal L=q$. If $q\equiv 1 \pmod 4$, then $\mathcal L$ consists either of the $q$ lines through a point of $\mathrm C$ distinct from the tangent to $\mathrm C$, or of the lines of a subgeometry $PG(2,\sqrt q)$ which are not tangent to $\mathrm C$.
\end{proposition}
{\em Proof.} Lemma \ref{key} yields that $\mathcal L$ satisfies the hypothesis of Lemma \ref{agkor2}. Case (c) of Lemma \ref{agkor2} cannot occur, since in that case some of the lines of ${\mathcal R}$ are external to $\mathrm C$, against Lemma \ref{easy}. Hence the assertion is proved.
\vspace{.3cm}\hfill$\blacksquare$

Theorem \ref{main} now follows from Propositions \ref{agkor1}, \ref{p2}, and \ref{p3}.
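We remark that the first family of partitions in Theorem \ref{main} can be exhibited explicitly. With the normalization $\mathrm C\colon X^2-Y=0$ of Section 2 and $P=Y_\infty$, the tangent to $\mathrm C$ at $P$ is the line at infinity, and the $q$ remaining lines through $P$ are those of equation $X=v$, $v\in{\mathbb F}_q$. These lines partition the affine plane, and the line $X=v$ contains precisely the $\frac{q-1}{2}$ internal points $(v,y)$ with $\chi(v^2-y)=-1$; hence the $\frac{q(q-1)}{2}$ internal points to $\mathrm C$ are partitioned, as required.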
{ "timestamp": "2006-07-05T12:55:59", "yymm": "0607", "arxiv_id": "math/0607118", "language": "en", "url": "https://arxiv.org/abs/math/0607118", "abstract": "All sets of lines providing a partition of the set of internal points to a conic C in PG(2,q), q odd, are determined. There exist only three such linesets up to projectivities, namely the set of all nontangent lines to C through an external point to C, the set of all nontangent lines to C through a point in C, and, for square q, the set of all nontangent lines to C belonging to a Baer subplane with at least 5 common points with C. This classification theorem is the analogous of a classical result by Segre and Korchmaros characterizing the pencil of lines through an internal point to C as the unique set of lines, up to projectivities, which provides a partition of the set of all noninternal points to C. However, the proof is not analogous, since it does not rely on the famous Lemma of Tangents of Segre. The main tools in the present paper are certain partitions in conics of the set of all internal points to C, together with some recent combinatorial characterizations of blocking sets of non-secant lines, and of blocking sets of external lines.", "subjects": "Combinatorics (math.CO)", "title": "Line partitions of internal points to a conic in PG(2,q)", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759632491111, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7079405568249204 }